Late last year, a Google engineer named Blake Lemoine felt certain he'd found something special. For months, Lemoine, who worked with the company's ethical AI division, had been testing Google's Language Model for Dialogue Applications, or LaMDA, from the living room of his San Francisco home. LaMDA is a hugely sophisticated chatbot, trained on trillions of words hoovered up from Wikipedia entries and internet posts and libraries' worth of books, and Lemoine's job was to ensure that the exchanges it produced weren't discriminatory or hateful. He posed questions to LaMDA about religion, ethnicity, sexual orientation, and gender. The machine had some bugs, including a few ugly, racist impressions, which Lemoine dutifully reported.