AI is worse at identifying household items from lower-income countries
Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, perform worse when asked to identify items from lower-income countries. These are the findings of a new study conducted by Facebook's AI lab, which shows that AI bias can reproduce inequalities not only within countries but also between them.

In the study (which we spotted via Jack Clark's Import AI newsletter), researchers tested five popular off-the-shelf object recognition algorithms -- Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson -- to see how well each program identified household items collected from a global dataset. The dataset spanned 117 categories (everything from shoes to soap to sofas) and a wide range of household incomes and geographic locations, from a family in Burundi making $27 a month to a family in Ukraine with a monthly income of $10,090.

The researchers found that the object recognition algorithms made around 10 percent more errors when asked to identify items from a household with a $50 monthly income compared to items from a household making more than $3,500 a month.
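The error-rate comparison at the heart of the study can be illustrated with a small sketch. The items, income brackets, and model predictions below are invented placeholders, not data from the Facebook AI study; the point is simply how an accuracy gap between income groups would be measured.

```python
# Hypothetical sketch of comparing recognition error rates across
# income brackets. All data here is invented for illustration.

def error_rate(samples):
    """Fraction of samples where the predicted label misses the true label."""
    misses = sum(1 for true, pred in samples if true != pred)
    return misses / len(samples)

# (true_label, predicted_label) pairs, grouped by household income bracket.
low_income = [("soap", "food"), ("stove", "stove"), ("sofa", "bench"),
              ("shoes", "shoes"), ("toothbrush", "pen")]
high_income = [("soap", "soap"), ("stove", "stove"), ("sofa", "sofa"),
               ("shoes", "shoes"), ("toothbrush", "toothbrush")]

gap = error_rate(low_income) - error_rate(high_income)
print(f"low-income error rate:  {error_rate(low_income):.0%}")
print(f"high-income error rate: {error_rate(high_income):.0%}")
print(f"accuracy gap:           {gap:.0%}")
```

In the actual study, the predictions would come from commercial APIs such as Amazon Rekognition or Google Cloud Vision rather than the hand-written pairs used here.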
Jun-12-2019, 12:51:27 GMT