From DeepMind's AlphaGo beating Go champions at their own game, to the recent Magenta and Springboard announcements, not to mention driverless cars, it's clear that AI and machine learning are central to Google's strategy across its vast portfolio. In a recent interview with The Hollywood Reporter, Alphabet chairman Eric Schmidt played down the fears that surround advancements in AI: "To be clear, we're not talking about consciousness, we're not talking about souls, we're not talking about independent creativity." However, being acutely aware of the concerns around intelligent technology, the company's AI research division Google Brain recently published a whitepaper on AI safety. Powerful infrastructure underpins all of these projects, as well as the company's flagship Search, Translate and YouTube products: Google Cloud Platform, which provides developers with the tools to build everything from simple websites to complex, intelligent applications. As part of our AI in Business Festival, we spoke to Miles Ward, Global Head of Solutions at Google Cloud Platform, to find out more about the machine learning tools they offer to developers.
Large Language Models have taken the world by storm recently. The capabilities these models have shown, combined with the sense that they can do almost anything, have gotten the AI community very excited (and some AGI doomers terrified, lol). As LLMs become more powerful, we will naturally see them serve as foundations for all kinds of applications. The impact they will have can't be overstated. However, it's crucial to ensure that these models are safe and don't come with seriously problematic biases or failure cases.
Instagram is testing new methods for age verification, including having the user upload a video selfie and then letting an AI judge their age. Here's how it will work. When an Instagram user attempts to edit their birth date on the service to be 18 or over, Instagram will require them to verify their age. There will be several ways to do this, including uploading an ID or asking three mutual friends to verify your age. The most interesting way to do this, however, is to upload a video selfie.
Instagram is testing new age verification methods, including asking followers to vouch for your age and even using AI that can estimate your age from a video selfie. It's part of a push to ensure users meet the minimum age of 13 and "to make sure that teens and adults are in the right experience for their age group," it announced. For the "social vouching" system, Instagram asks three mutual followers of the user to confirm their age. Those followers must be at least 18 and have three days to respond to the request. Users can still verify their age with pictures of ID cards as well.
Instagram has started testing new age-verification tools, including technology that claims to be able to estimate a user's age from a video selfie. The 'Age Estimation' technology from digital identity company Yoti analyses the user's facial features using artificial intelligence (AI) in order to predict their age. Instagram is also testing a new age-verification method that involves asking three separate users to confirm how old someone is. The photo-sharing app, owned by tech conglomerate Meta, has begun testing the tools in the US as of today, with the aim of providing more age-appropriate experiences. A third age-verification method, uploading a valid form of ID such as a driver's license or ID card, is already available.
If you've spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy. These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you've asked for. More than 200,000 people are now using Dall-E Mini every day, its creator says, a number that is only growing. A Twitter account called "Weird Dall-E Generations," created in February, has more than 890,000 followers at the time of publication.
Richard Socher: "We'll never be as bad as Google. We'll never sell your data." Are you happy with Google search? Regardless of how you answer this question, chances are you still use it. With the notable exceptions of China and Russia, where Baidu and Yandex lead, respectively, Google's market share in search is over 90% worldwide.
Lesbians on dating and hookup apps aren't looking for men, but that's what platforms like Bumble and Tinder are serving them. On today's show, Madison and Rachelle speak to some queer women who've had this problem and discuss what sorts of issues it creates. Then they turn to the women-focused apps that have tried to fill that space, and why it's so difficult to find safe queer dates online. This podcast is produced by Daniel Schroeder, Madison Malone Kircher, and Rachelle Hampton.
LaMDA is a software program that runs on Google TPU chips. Like the classic brain in a jar, some would argue the code and the circuits don't form a sentient entity because none of it engages in life. Google engineer Blake Lemoine caused controversy last week by releasing a document he had circulated to colleagues, in which he urged Google to consider that one of its deep learning AI programs, LaMDA, might be "sentient." Google replied by officially denying the likelihood of sentience in the program, and Lemoine was put on paid administrative leave, according to an interview with Lemoine by Nitasha Tiku of The Washington Post. There has been a flood of responses to Lemoine's claim from AI scholars. University of Washington linguistics professor Emily Bender, a frequent critic of AI hype, told Tiku that Lemoine is projecting anthropocentric views onto the technology. "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Bender said. In an interview with MSNBC's Zeeshan Aleem, AI scholar Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, observed that the concept of sentience has not been rigorously explored. Mitchell concludes that the program is not sentient "by any reasonable meaning of that term, and the reason is because I understand pretty well how the system works."
Today, the internet is a mostly 2D platform that we consume through a screen. It is a command-line prompt for the reality we live in. Instagram posts, TikToks, text messages, emails and voice memos are all digital artifacts, things people create and receive in the physical world. But this will change when the metaverse becomes so immersive and photo-realistic that physical reality extends into virtual spaces.