raghavan
The Morning After: Why Google's Gemini image generation feature overcorrected for diversity
After complaints that Google's image generator built into its Gemini AI was (ugh) woke, Google explained why it may have overcorrected for diversity. Prabhakar Raghavan, the company's senior vice president for knowledge and information, said Google's efforts to ensure a wide range of people generated in images "failed to account for cases that should clearly not show a range." Users criticized Google for depicting specific white figures or historically white groups of people as racially diverse individuals. In Engadget's tests, asking Gemini to create illustrations of the Founding Fathers resulted in images of white men with a single person of color or woman among them. When we asked the chatbot to generate images of popes through the ages, we got photos depicting Black women and Native Americans as the leader of the Catholic Church.
- Leisure & Entertainment (0.32)
- Law (0.32)
Google explains why Gemini's image generation feature overcorrected for diversity
After promising to fix Gemini's image generation feature and then pausing it altogether, Google has published a blog post offering an explanation for why its technology overcorrected for diversity. Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, explained that Google's efforts to ensure that the chatbot would generate images showing a wide range of people "failed to account for cases that should clearly not show a range." Further, its AI model grew to become "way more cautious" over time and refused to answer prompts that weren't inherently offensive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," Raghavan wrote. Google made sure that Gemini's image generation couldn't create violent or sexually explicit images of real persons and that the photos it whips up would feature people of various ethnicities and with different characteristics.
Google paid $26 billion in 2021 for default search engine status
Vice president Prabhakar Raghavan testified Friday that Google paid $26.3 billion in 2021 for the purpose of maintaining default search engine status and acquiring traffic, Bloomberg reports. It's likely the lion's share of that sum went to Apple, which it has showered with exorbitant sums for many years in order to remain the default search option on iPhone, iPad and Mac. Raghavan, who was testifying as part of the DOJ's ongoing antitrust suit against the company, said Google's search advertising made $146.4 billion in revenue in 2021, which puts the $26 billion it paid for default status in perspective. The executive clarified that default status was the most costly part of what it pays to acquire traffic. Raghavan didn't mention how much of the $26.3 billion went to Apple. But CNBC reports that an estimate from private wealth management firm Bernstein ballparked that Google could pay Apple up to $19 billion this year for the default privilege.
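As a back-of-the-envelope check (a hypothetical sketch, not part of the testimony), the traffic-acquisition outlay can be set against the revenue figure cited above:

```python
# Figures cited in the testimony (billions of USD, 2021)
traffic_acquisition_cost = 26.3   # paid for default search status / traffic
search_ad_revenue = 146.4         # Google search advertising revenue

# Share of search ad revenue spent on acquiring traffic
share = traffic_acquisition_cost / search_ad_revenue * 100
print(f"{share:.1f}% of search ad revenue")  # roughly 18%
```

In other words, Google spent a bit under a fifth of its 2021 search advertising revenue to keep that traffic flowing in the first place.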
Can Tech Stop Animal Poachers in Their Tracks?
This story was originally published by Slate's Future Tense partnership and is reproduced here as part of the Climate Desk collaboration. In August 2021, forest range officer Remya Raghavan caught three people carrying wild boar meat in the Wayanad forest of Kerala, a state in southern India. Possessing wild animal meat is a crime under the country's 1972 Wildlife Protection Act, so Raghavan entered all the details of the crime (location, witnesses, names of the accused, items seized, and section of the forest) in a mobile application. Just like that, the case was officially registered in the app-based system, which signaled that it needed to be taken to court. The app Raghavan used is called HAWK, or Hostile Activity Watch Kernel, and it appears to be the first such digital intelligence gathering system for wildlife crime in India.
- Africa > South Africa (0.05)
- Africa > Kenya (0.05)
- North America > United States > Utah (0.05)
- (6 more...)
In the Shadowy, Hard-to-Track Poaching Industry, Governments Hope a New Tool Can Solve an Old Problem
In August 2021, forest range officer Remya Raghavan caught three people carrying wild boar meat in the Wayanad forest of Kerala, a state in southern India. Possessing wild animal meat is a crime under the country's 1972 Wildlife Protection Act, so Raghavan entered all the details of the crime (location, witnesses, names of the accused, items seized, and section of the forest) in a mobile application. Just like that, the case was officially registered in the app-based system, which signaled that it needed to be taken to court. The app Raghavan used is called HAWK, or Hostile Activity Watch Kernel, and it appears to be the first such digital intelligence gathering system for wildlife crime in India. It helps officers like Raghavan centralize and share information on forest and wildlife crimes in real time.
- Africa > South Africa (0.05)
- Africa > Kenya (0.05)
- North America > United States > Utah (0.05)
- (7 more...)
Do AI chatbots like ChatGPT pose a major cybersecurity risk?
ChatGPT is the spiciest thing in artificial intelligence (AI) right now. Powered by OpenAI's GPT-3 large language model (LLM), it's a computer program that can understand users and converse with them in a way that feels extremely close to talking with a human. The sophisticated generative AI chatbot attracted one million users in just four days, and has left other industry giants scrambling to announce versions of their own. The power of ChatGPT has amplified speculation about what is possible with AI – with people already finding cool ways to use the system, including forming weight loss plans, writing code, creating whole stand-up routines, and templating emails. Of course, there are two sides to that speculation, and there has been a lot of talk about how it could impact the job roles of humans.
- Information Technology > Security & Privacy (0.75)
- Government > Military > Cyberwarfare (0.42)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.59)
The Morning After: The verdict on PlayStation VR2
PlayStation's next-gen VR headset is here. It's high-spec and, boy, is it high-priced. Engadget's Devindra Hardawar says it's a massive step forward from the original PSVR, thanks to its high-resolution screens and innovative features like headset haptics. Back in 2016, when the original launched, VR was making another push into the mainstream, which kicked off with the Oculus Rift and HTC Vive. The tech has evolved at an incredible pace, so seven years later, this sequel headset feels more comfortable and comes with far more advanced controllers.
- Media (1.00)
- Leisure & Entertainment > Games > Computer Games (0.72)
- Information Technology > Hardware (0.72)
Google relies on human employees to improve Bard chatbot's responses
In a video ad Google posted on Twitter, its yet-to-be-launched AI chatbot Bard confidently spouted misinformation about the James Webb Space Telescope. "JWST took the very first pictures of a planet outside of our own solar system," the chatbot replied, which is patently false. Now, the tech giant is looking to improve Bard's accuracy, and according to CNBC, it's asking employees for help. Google's VP for search, Prabhakar Raghavan, reportedly sent an email to staff members, asking them to rewrite Bard responses on topics they know well. The chatbot "learns best by example," Raghavan said, and training it with factual answers will help improve its accuracy.
Google Chatbot Blunders As AI Battle With Microsoft Heats Up
Google on Wednesday announced a slew of features powered by Artificial Intelligence (AI), but a mistake in an ad caused its share price to tank. The search engine giant is rushing into the space after the bot ChatGPT caught the imagination of web users around the world with its ability to generate essays, speeches and even exam papers in seconds. Microsoft has announced a multibillion-dollar partnership with ChatGPT maker OpenAI and unveiled new products on Tuesday, while Google tried to steal a march a day earlier by announcing its "Bard" alternative. The bots are quickly being integrated into search engines, and Google is battling to preserve its two-decade dominance of the web search industry. But astronomers on Twitter quickly noticed that Google's Bard had given an erroneous answer in a Twitter ad touting its new technology.
Google shares tank 8% as AI chatbot Bard flubs answer in ad
Shares of Google's parent company lost more than $100bn in market value on Wednesday after its Bard chatbot advertisement showed inaccurate information and analysts said its AI search event lacked details on how it will answer Microsoft's ChatGPT challenge. Reuters was the first to point out the error in Google's advertisement, which debuted Monday, about which satellite first took pictures of a planet outside the Earth's solar system. Shares of the company's parent Alphabet fell 8 percent, or $8.59 a share, to $99.05, and were among the most actively traded on US exchanges. The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a "launchpad for curiosity" that would help simplify complex topics, but it delivered an inaccurate answer that was spotted just hours before the launch event for Bard in Paris. "This is a hiccup here and they're severely punishing the stock for it, which is justified because obviously everybody is pretty excited to see what Google's going to counter with Microsoft coming out with a pretty decent product," said Dennis Dick, founder and market structure analyst at Triple D Trading.