Collaborating Authors: Julia Angwin


Grammarly pulls AI author-impersonation tool after backlash

BBC News

Writing tool Grammarly has disabled an AI feature that mimicked the personas of prominent writers, including author Stephen King and scientist Carl Sagan, following a backlash from those it impersonated. The Expert Review function, which offered writing feedback inspired by the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm that runs Grammarly. The feature met resistance, including a multi-million-dollar lawsuit, from writers who found their names and reputations used as AI personas without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging that the tool had misrepresented the voices of experts. Investigative journalist Julia Angwin, a New York Times contributing opinion writer, is the lead plaintiff in a class-action lawsuit filed against Superhuman and Grammarly in the Southern District of New York.


It's Time to Believe the AI Hype

WIRED

Tech pundits are fond of using the term "inflection point" to describe those rare moments when new technology wipes the board clean, opening up new threats and opportunities. But one might argue that in the past few years what used to be called out as an inflection point might now just be called "Monday." Certainly that applied this week. OpenAI, denying rumors that it would unveil either an AI-powered search product or its next-generation model GPT-5, instead announced something different, but nonetheless eye-popping, on Monday: a new flagship model called GPT-4o, to be made available for free, which accepts input and produces output in various modes--text, speech, vision--for disturbingly natural interaction with humans. What struck many observers about the demo was how playful and even provocative the emotionally expressive chatbot was, while also being imbued with encyclopedic knowledge drawn from data sets encompassing much of the world's information.


Decoding the Hype About AI – The Markup

#artificialintelligence

Hello World is a weekly newsletter--delivered every Saturday morning--that goes deep into our original reporting and the questions we put to big thinkers in the field. If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh. The clever AI chat program has captured the public's imagination with its ability to instantaneously generate poems and essays, mimic different writing styles, and pass some law and business school exams. Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content).


Building Ethical Artificial Intelligence – The Markup

#artificialintelligence

As computers get more powerful, we are increasingly using them to make predictions. The software that makes these predictions is often called artificial intelligence. It's interesting that we call it "intelligence," because other tasks we assign to computers--computing huge numbers, running complex simulations--are also things we label as "intelligence" when humans do them, yet we don't usually describe that software as intelligent. My kids, for instance, are graded on their intelligence at school based on their ability to do complex mathematical calculations. It is when we let computers project into the future and make their own decisions about what step to take next--what chess move to make, what driving route to suggest--that we seem to want to call it artificial intelligence.


Confronting the Biases Embedded in Artificial Intelligence – The Markup

#artificialintelligence

Hardly a day goes by without another revelation of race, gender, and other biases being embedded in artificial intelligence systems. Just this month, for example, the makers of Silicon Valley's much-touted AI image generation system DALL-E disclosed that it exhibits biases, including gender stereotypes, and tends "to overrepresent people who are White-passing and Western concepts generally." For instance, it produces images of women for the prompt "a flight attendant" and images of men for the prompt "a builder." In the disclosure, OpenAI, the entity that trained DALL-E, says it is only releasing the program to a limited group of users while it works on mitigating bias and other risks. Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely as White patients to be described in derogatory terms (such as "resistant" or "noncompliant") in their records. And those are the types of records that often make up the raw material for future AI programs, like one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.