More Seinfeld Than Seinfeld Itself

The Atlantic - Technology

Since the hit sitcom Seinfeld went off the air in 1998 after nine seasons, the show's devoted followers have long mused about an alternate reality: What if the original "show about nothing" had never ended? Now they've gotten what they wished for--well, sort of. In mid-December, a never-ending AI-generated reboot, aptly named Nothing, Forever, launched on the streaming platform Twitch, where viewers could tune in at any hour. They could, anyway, until earlier this week, when forever abruptly ended--or was at least briefly interrupted, and in just about the most fitting way imaginable: by the AI scriptwriter devolving into bigotry. Nothing, Forever is powered by Davinci, the newest publicly available version of OpenAI's GPT-3 language model--a close relative of ChatGPT--and although that technology is impressive, the show, in most respects, is not.
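The show's actual pipeline is not public, but the basic loop--generate a scene with a language model, check it before airing, repeat forever--can be sketched. Everything below is hypothetical: `generate_scene` stands in for a real language-model call, and `BLOCKLIST` is a toy moderation filter standing in for the content safeguards whose absence let the bigoted scene air.

```python
# Hypothetical sketch of a continuous AI-sitcom loop; generate_scene() and
# BLOCKLIST are illustrative stand-ins, not the show's real implementation.
BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for a real content filter


def generate_scene(prompt):
    """Placeholder for a language-model completion call; returns canned text."""
    return "JERRY: What's the deal with AI sitcoms?"


def is_safe(script, blocklist=BLOCKLIST):
    """Crude moderation gate: reject any script containing a blocked token."""
    words = set(script.lower().split())
    return words.isdisjoint(blocklist)


def run_show(n_scenes=3):
    """Generate scenes endlessly (bounded here), airing only safe ones."""
    aired = []
    for _ in range(n_scenes):
        scene = generate_scene("Write the next Seinfeld-style scene.")
        if is_safe(scene):  # without this gate, anything the model emits airs
            aired.append(scene)
    return aired
```

The design point the incident illustrates: when the generator runs unattended around the clock, the moderation gate is the only thing standing between the model's raw output and the audience.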


Engineer: Failing To See His AI Program as a Person Is "Bigotry"

#artificialintelligence

Earlier this month, just in time for the release of Robert J. Marks's book Non-Computable You, the story broke that, after an investigation, Google had dismissed a software engineer's claim that the LaMDA AI chatbot really talked to him. The engineer, Blake Lemoine, currently on leave, is now accusing Google of "bigotry" against the program. He has also accused Wired of misrepresenting the story: Wired reported that he had found an attorney for LaMDA, but he claims that LaMDA itself asked him to find an attorney. "I think every person is entitled to representation," he said.


Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'

WIRED

The question of whether a computer program, or a robot, might become sentient has been debated for decades. In science fiction, we see it all the time. The artificial intelligence establishment overwhelmingly considers this prospect something that might happen in the far future, if at all. Maybe that's why there was such an outcry over Nitasha Tiku's Washington Post story from last week, about a Google engineer who claimed that the company's sophisticated large language model named LaMDA is actually a person--with a soul. The engineer, Blake Lemoine, considers the computer program to be his friend and insisted that Google recognize its rights.


DeepMind tells Google it has no idea how to make AI less toxic

#artificialintelligence

Making large language models less toxic. Reducing the massive power consumption it takes to train deep learning models. These are among the loftiest outstanding problems in artificial intelligence. Whoever has the talent and budget to solve them will be handsomely rewarded with gobs and gobs of money.


GPT-3's bigotry is exactly why devs shouldn't use the internet to train AI

#artificialintelligence

"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." It turns out that a $1 billion investment from Microsoft and unfettered access to a supercomputer wasn't enough to keep OpenAI's GPT-3 from being just as bigoted as Tay, the algorithm-based chat bot that became an overnight racist after being exposed to humans on social media. It's only logical to assume any AI trained on the internet – meaning trained on databases compiled by scraping publicly-available text online – would end up with insurmountable inherent biases, but it's still a sight to behold in the the full context (ie: it took approximately $4.6 million to train the latest iteration of GPT-3). What's interesting here is OpenAI's GPT-3 text generator is finally starting to trickle out to the public in the form of apps you can try out yourself. These are always fun, and we covered one about a month ago called Philosopher AI.


Bigotry in the Machine: Study Finds Bias in AI - HCM Technology Report

#artificialintelligence

The workforce that develops artificial intelligence products is in "a diversity crisis," says a new report, and as a result the algorithms behind the technology often carry biases of their own. According to Discriminating Systems, a report from NYU's AI Now Institute, the employees of companies building AI solutions are, as in most of the technology sector, largely male and white. At Google, for example, women comprise just 10 percent of the AI research staff, while the company's overall workforce is just 2.5 percent black. Facebook and Microsoft don't do much better: 4 percent of their employees are black, the report said.


Robots are racist and sexist. Just like the people who created them | Laurie Penny

#artificialintelligence

Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there's nothing new or complicated about it. "Machine learning" is a fancy way of saying "finding patterns in data". Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data "has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people's lives you end up with unacceptable discrimination."
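Nicholas's point--that "finding patterns in data" collected in the past reproduces the past--can be shown in a few lines. The sketch below is a hypothetical example with invented records: a trivial "model" that learns the majority historical outcome for each case, trained on hiring data in which equally qualified applicants from group "B" were rejected.

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (group, qualified, hired).
# The past decisions are biased: equally qualified "B" applicants were rejected.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]


def fit_majority(records):
    """'Finding patterns in data': for each (group, qualified) pair,
    learn the majority historical outcome as the prediction."""
    outcomes = defaultdict(Counter)
    for group, qualified, hired in records:
        outcomes[(group, qualified)][hired] += 1
    return {key: c.most_common(1)[0][0] for key, c in outcomes.items()}


model = fit_majority(history)
# Identical qualifications, different predictions: the model has faithfully
# learned the past discrimination, not merit.
print(model[("A", True)], model[("B", True)])  # True False
```

The model does exactly what it was asked--match the historical pattern--which is precisely the problem when that pattern is then used to make decisions that affect people's lives.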


The Ghostbusters trashing is just another internet tantrum against change | Laurie Penny

The Guardian

We live in a post-mainstream culture. As the way we consume books, movies and television changes, artists and directors no longer need to cater to a "universal" audience viewpoint. This means there is slightly less obligation to pander to what straight white men are supposed to want from culture. Not everyone is happy about that fact, and across the literary and cultural spectrum, tantrums are being thrown. This week the target is the new, all-female Ghostbusters.