Google explains why Gemini's image generation feature overcorrected for diversity
After promising to fix Gemini's image generation feature and then pausing it altogether, Google has published a blog post explaining why its technology overcorrected for diversity. Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, wrote that Google's efforts to ensure the chatbot would generate images showing a wide range of people "failed to account for cases that should clearly not show a range." In addition, its AI model became "way more cautious" over time and began refusing prompts that weren't inherently offensive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," Raghavan wrote.

Google had tuned Gemini's image generation so that it couldn't create violent or sexually explicit images of real people, and so that the photos it produced would feature people of various ethnicities and with different characteristics.
February 24, 2024, 12:15:32 GMT