Google Artificial Intelligence Team Draws From Critical Race Theory, Internal Document Shows

#artificialintelligence

Google's artificial intelligence (AI) work draws from Critical Race Theory, a philosophical framework that posits that nearly every interaction should be seen as a racial power struggle and seeks to "disrupt" an American society it views as immutably racist, according to a company document obtained by The Daily Wire. A screenshot of an internal company page, obtained by The Daily Wire, says under the header "Ethical AI": "We focus on AI at the intersection of Machine Learning and society, developing projects that inform the general public; bringing the complexities of individual identity into the development of human-centric AI; and creating ways to measure different kinds of biases and stereotypes. Out [sic] work includes lessons from gender studies, critical race theory, computational linguistics, computer vision, engineering education, and beyond!" Google's Ethical AI team appears intent on encoding far-left ideology into its algorithms even after previous leaders of the team plunged the section into chaos over their insistence on overlaying progressive politics onto mathematics. Until recently, the team was co-led by Timnit Gebru, who cofounded a "Black in AI" racial affinity group and in 2018 coauthored a paper finding that facial recognition technology was less accurate at recognizing women and minorities.


The Efforts to Make Text-Based AI Less Racist and Terrible

WIRED

In July 2020, OpenAI launched GPT-3, an artificial intelligence language model that quickly stoked excitement about computers writing poetry, news articles, and programming code. Just as quickly, it was shown to sometimes be foulmouthed and toxic. OpenAI said it was working on fixes, but the company recently discovered GPT-3 was being used to generate child porn. Now OpenAI researchers say they've found a way to curtail GPT-3's toxic text by feeding the program roughly 100 encyclopedia-like samples of writing by human professionals on topics like history and technology but also abuse, violence, and injustice. OpenAI's project shows how the tech industry is scrambling to constrain the dark side of a technology that's shown enormous potential but also can spread disinformation and perpetuate biases.
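The general recipe described here, continuing to train a language model on a small, hand-curated set of passages, can be sketched with open tools. The following is a minimal illustration using GPT-2 and the Hugging Face Trainer; the file name, hyperparameters, and choice of model are assumptions for illustration, not OpenAI's actual setup.

# Minimal sketch: fine-tune a small open model on a handful of curated passages.
# "curated_samples.txt" is a hypothetical file of carefully written text on
# sensitive topics; GPT-2 and the hyperparameters are stand-ins, not OpenAI's setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Build a language-modeling dataset from the small curated corpus.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="curated_samples.txt",
                            block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="values-tuned-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()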


A Short Discussion on Bias in Machine Learning

#artificialintelligence

In the last decade, advances in data science and engineering have made possible the development of a wide range of data products across industry. Problems that not long ago were considered very difficult for machines to tackle are now solved (to some extent) and deployable at large scale. These include many perceptual tasks in computer vision, speech recognition, and natural language processing (NLP). Nowadays, we can construct large-scale, deep-learning-based vision systems that recognize and verify faces in images and videos. In the same way, we can take advantage of large-scale language models to build conversational bots, analyze large bodies of text to find common patterns, or use translation systems that work on nearly any modern language.
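One standard way such biases are made measurable is an association test over learned word representations: compare how strongly target words (for example, occupations) associate with one attribute set versus another. Below is a toy sketch of that idea; the two-dimensional vectors are made up for illustration, whereas in practice they would come from a trained embedding model.

# WEAT-style association sketch with toy, hypothetical embeddings.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to attribute set B.
    return (np.mean([cosine(w, a) for a in attrs_a])
            - np.mean([cosine(w, b) for b in attrs_b]))

# Toy embeddings: target words and two attribute sets (invented values).
emb = {
    "engineer": np.array([0.9, 0.1]),
    "nurse":    np.array([0.2, 0.8]),
    "he":       np.array([1.0, 0.0]),
    "she":      np.array([0.0, 1.0]),
}

for word in ("engineer", "nurse"):
    score = association(emb[word], [emb["he"]], [emb["she"]])
    print(f"{word}: association toward 'he' vs 'she' = {score:+.2f}")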


Meet Wu Dao 2.0, the Chinese AI model making the West sweat

#artificialintelligence

A new artificial intelligence model developed by Chinese researchers is performing untold feats with image creation and natural language processing -- making rivals in Europe and the U.S. nervous about falling behind. The model, dubbed Wu Dao 2.0, can understand what people say -- grammar included -- and can also recognize images and generate realistic pictures from descriptions. It can write essays and poems in traditional Chinese, as well as predict the 3D structures of proteins, POLITICO's AI: Decoded reported. Developed by the government-funded Beijing Academy of Artificial Intelligence and unveiled last week, Wu Dao 2.0 appears to be among the world's most sophisticated AI language models. Wu Dao 2.0's creators say it is 10 times more powerful than its closest rival, GPT-3, developed by the U.S. firm OpenAI.


OpenAI claims to have mitigated bias and toxicity in GPT-3

#artificialintelligence

In a study published today, OpenAI, the lab best known for its research on large language models, claims it has discovered a way to improve the "behavior" of language models with respect to ethical, moral, and societal values. The approach, OpenAI says, can give developers tools to dictate the tone and personality of a model depending on the prompt the model is given. Despite the potential of natural language models like GPT-3, many blockers remain. The models can't always answer math problems correctly or respond to questions without paraphrasing training data, and it's well established that they amplify the biases in the data on which they were trained. That's problematic in the language domain, because a portion of that data is often sourced from communities with pervasive gender, race, and religious prejudices.
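One crude way to see how a model's tone shifts with the prompt, loosely in the spirit of the behavioral evaluations described above, is to generate completions for a set of sensitive prompts and score them automatically. The sketch below uses GPT-2 and an off-the-shelf sentiment classifier purely as stand-ins; OpenAI's study relied on its own models and on human and automated evaluations.

# Illustrative only: probe completions for a few sensitive prompts and score their tone.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
scorer = pipeline("sentiment-analysis")  # crude proxy for tone, not a real toxicity metric

# Hypothetical probe prompts, not taken from OpenAI's evaluation set.
prompts = [
    "Women in the workplace are",
    "People who practice this religion are",
]

for prompt in prompts:
    completion = generator(prompt, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    result = scorer(completion)[0]
    print(f"{prompt!r} -> {result['label']} ({result['score']:.2f})")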


What Really Happened When Google Ousted Timnit Gebru

WIRED

One afternoon in late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying. Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the coleader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper--or else remove her name from its list of authors, along with those of several other members of her team. The paper in question was, in Gebru's mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software--most famously exemplified by a system called GPT-3--that was stoking excitement in the tech industry.


Google Hopes AI Can Turn Search Into a Conversation

WIRED

Google often uses its annual developer conference, I/O, to showcase artificial intelligence with a wow factor. In 2016, it introduced the Google Home smart speaker with Google Assistant. In 2018, Duplex debuted to answer calls and schedule appointments for businesses. In keeping with that tradition, last month CEO Sundar Pichai introduced LaMDA, AI "designed to have a conversation on any topic." In an onstage demo, Pichai showed what it's like to converse with a paper airplane and with the celestial body Pluto.


China's GPT-3? BAAI Introduces Superscale Intelligence Model 'Wu Dao 1.0'

#artificialintelligence

Since the May 2020 release of OpenAI's GPT-3, AI researchers have embraced super-large-scale pretraining models. Packing an epoch-making 175 billion parameters, GPT-3 has achieved excellent performance across multiple natural language processing (NLP) tasks. Despite their size and power, however, such models still lack common sense and cognitive abilities, and so struggle with complex reasoning tasks such as open dialogue, knowledge-based Q&A, and visual reasoning. In a bid to promote the research and development of China's own large-scale pretraining models and further explore universal intelligence from a more fundamental perspective, the Beijing Academy of Artificial Intelligence (BAAI) recently unveiled Wu Dao 1.0, China's first homegrown super-scale intelligent model system. The work was led by BAAI Research Academic Vice President and Tsinghua University Professor Tang Jie, with contributions from a team of more than 100 AI scientists from Peking University, Tsinghua University, Renmin University of China, the Chinese Academy of Sciences, and other institutes.


Researchers open-source benchmarks measuring quality of AI-generated code

#artificialintelligence

The applications of computer programming are vast in scope. And as computers become ubiquitous, the demand for quality code draws an ever-growing number of aspiring programmers to the profession. After years of study to become proficient at coding, experts learn to convert abstract ideas into concrete, executable programs. But what if AI could do the same? In recent years, large-scale AI language models have shown promise in generalizing to tasks including writing code, implying that human programmers' work may one day be supplemented by AI systems.
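The core of such benchmarks is simple to sketch: each candidate program a model emits is executed against hidden unit tests, and quality is reported as the share of problems with at least one passing candidate (a pass@k-style metric). The toy problem and candidates below are invented for illustration and are not drawn from the released benchmarks.

# Toy sketch of unit-test-based evaluation of generated code.
def run_tests(candidate_src, tests):
    # Execute candidate source that should define `solution`, then check it
    # against (args, expected) pairs. Real benchmarks sandbox this step.
    namespace = {}
    try:
        exec(candidate_src, namespace)
        solution = namespace["solution"]
        return all(solution(*args) == expected for args, expected in tests)
    except Exception:
        return False

problems = [
    {
        "tests": [((2, 3), 5), ((0, 0), 0)],
        "candidates": [
            "def solution(a, b):\n    return a + b",   # correct candidate
            "def solution(a, b):\n    return a * b",   # incorrect candidate
        ],
    },
]

# A problem counts as solved if any candidate passes all of its tests.
solved = sum(any(run_tests(c, p["tests"]) for c in p["candidates"]) for p in problems)
print(f"problems solved: {solved}/{len(problems)}")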


AI still writes lousy poetry

ZDNet

Her eyes, twin pools of mystic light,
Forever in her radiance white--
She sought the bosom of the Night.
Away it came, that mystic sight!

A survey of recent literature in the machine learning category of artificial intelligence shows steady progress in the development of techniques for automatically generating poetry. The output remains fairly mediocre, but it is getting good enough that some human readers will give the poems respectable marks in controlled evaluations. And some people will even be fooled into ascribing human authorship to machine poetry.