Big Tech builds AI with bad data. So scientists sought better data.
Yacine Jernite's fears about bias in artificial intelligence were vividly affirmed in 2017, when a Facebook translation error led Israeli police to arrest a Palestinian construction worker. The man had posted a picture of himself leaning against a bulldozer with the caption, in Arabic, "good morning." Facebook mistakenly translated it, in Hebrew, as "attack them." The error was quickly discovered and the man released, according to a report in Haaretz, but the incident cemented personal concerns about AI for Jernite, who joined Facebook's AI division soon after. As the child of Moroccan parents in post-9/11 America, Jernite said he has "spent hours upon hours in immigration secondary interviews -- in a way that I could not at the time trace to the technology that was being applied."
Open-source language AI challenges big tech's models
Researchers have warned against possible harms from AI that processes and generates text. (Credit: Getty)

An international team of around 1,000 largely academic volunteers has tried to break big tech's stranglehold on natural-language processing and reduce its harms. Trained with US$7-million-worth of publicly funded computing time, the BLOOM language model will rival in scale those made by firms Google and OpenAI, but will be open-source. BLOOM will also be the first model of its scale to be multilingual. The collaboration, called BigScience, launched an early version of the model on 17 June, and hopes that it will ultimately help to reduce harmful outputs of artificial intelligence (AI) language systems. Models that recognize and generate language are increasingly used by big tech firms in applications from chatbots to translators, and can sound so eerily human that a Google engineer this month claimed that the firm's AI model was sentient (Google strongly denies that the AI possesses sentience).