
existential


Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination

Fleisig, Eve, Smith, Genevieve, Bossi, Madeline, Rustagi, Ishita, Yin, Xavier, Klein, Dan

arXiv.org Artificial Intelligence

We present a large-scale study of linguistic bias exhibited by ChatGPT covering ten dialects of English (Standard American English, Standard British English, and eight widely spoken non-"standard" varieties from around the world). We prompted GPT-3.5 Turbo and GPT-4 with text by native speakers of each variety and analyzed the responses via detailed linguistic feature annotation and native speaker evaluation. We find that the models default to "standard" varieties of English; based on evaluation by native speakers, we also find that model responses to non-"standard" varieties consistently exhibit a range of issues: lack of comprehension (10% worse compared to "standard" varieties), stereotyping (16% worse), demeaning content (22% worse), and condescending responses (12% worse). We also find that if these models are asked to imitate the writing style of prompts in non-"standard" varieties, they produce text that exhibits lower comprehension of the input and is especially prone to stereotyping. GPT-4 improves on GPT-3.5 in terms of comprehension, warmth, and friendliness, but it also results in a marked increase in stereotyping (+17%). The results suggest that GPT-3.5 Turbo and GPT-4 exhibit linguistic discrimination in ways that can exacerbate harms for speakers of non-"standard" varieties.


World Powers Say They Want to Contain AI. They're Also Racing to Advance It

WIRED

Yesterday, 28 countries including the US, members of the EU, and China signed a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause "serious, even catastrophic, harm." The declaration, announced at the AI Safety Summit organized by the British government and held at the historic World War II code-breaking site, Bletchley Park, also calls for international collaboration to define and explore the risks from the development of more powerful AI models, including large language models such as those powering chatbots like ChatGPT. "This is a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI--helping ensure the long-term future of our children and grandchildren," the UK prime minister, Rishi Sunak, said in a statement. The venue for the Summit paid homage to Alan Turing, the British mathematician who did foundational work on both computing and AI, and who helped the Allies break Nazi codes during the Second World War by developing early computing devices. The AI hype-train has a knack for turning even close allies into competitors, though.


Tech expert says 'existential' fears from AI are overblown, but sees 'very disturbing' workplace threats

FOX News

A U.K.-based tech expert said he is not losing sleep at night over the recent growth of artificial intelligence but argued he does have concerns over AI potentially becoming a hellish boss that oversees an employee's every move. Michael Wooldridge is a professor of computer science at the University of Oxford who has been a leading expert on AI for at least 30 years. He spoke with The Guardian this month regarding upcoming lectures he will lead this winter to demystify artificial intelligence, while noting what concerns he does have with the tech. He told the outlet that he does not share the same worries as some AI experts who warn the powerful systems could one day lead to the downfall of humanity.


Senators leave classified AI briefing confident but wary of 'existential' threat posed by China

FOX News

Senators left a classified briefing on artificial intelligence Tuesday with a deeper understanding of how AI is already being used to bolster U.S. national security and the looming threat China poses as it deploys its own AI capabilities. "I think, from a military perspective, it's very existential because China's playing for keeps," Sen. Eric Schmitt, R-Mo., told Fox News Digital after the closed-door session. "So, it's moving quickly, but I think the best we can do right now is get a firm understanding." Tuesday afternoon's briefing was the first-ever classified meeting between senators and key Pentagon officials about AI. Discussion included how the U.S. is using AI to maintain its national security edge and how adversaries like China are using this emerging tool. Senate Majority Leader Chuck Schumer, D-N.Y., told reporters that what he learned was "eye-opening." It comes after he told senators in a letter over the weekend that Congress is moving full steam ahead on his AI regulatory framework, which Schumer said Tuesday could take months to develop. "This briefing shows just depth, complexity, but necessity of getting something real done."


No 10 acknowledges 'existential' risk of AI for first time

The Guardian

The "existential" risk of artificial intelligence has been acknowledged by No 10 for the first time, after the prime minister met the heads of the world's leading AI research groups to discuss safety and regulation. Rishi Sunak and Chloe Smith, the secretary of state for science, innovation and technology, met the chief executives of Google DeepMind, OpenAI and Anthropic AI on Wednesday evening and discussed how best to moderate the development of the technology to limit the risks of catastrophe. "They discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation," the participants said in a joint statement. "The lab leaders agreed to work with the UK government to ensure our approach responds to the speed of innovations in this technology both in the UK and around the globe. "The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats … The PM set out how the approach to AI regulation will need to keep pace with the fast-moving advances in this technology." It is the first time the prime minister has acknowledged the potential "existential" threat of developing a "superintelligent" AI without appropriate safeguards, a risk that contrasts with the UK government's generally positive approach to AI development.


The Companies Profiting From A.I. Are Profiting From A.I. Panic

Slate

Over the past few weeks, there's been some very public hand-wringing about artificial intelligence--a lot of it coming from people who have made A.I. their life's work. Geoffrey Hinton, dubbed the "godfather of A.I.," recently left his job at Google to embark upon a sort of media tour warning about the dangers of the technology. There was a public letter from Elon Musk and others calling for a pause in A.I. development and an essay in Time from theorist Eliezer Yudkowsky saying generative A.I. can harm humanity--or even end it. On Friday's episode of What Next: TBD, I spoke with Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, to sort through the real threat of A.I. and what the doomerism discourse is missing. Our conversation has been edited and condensed for clarity. What do you make of the concerns raised by Geoffrey Hinton and others when it comes to A.I. safety?


The Synergy Between the Brain and Artificial Intelligence

#artificialintelligence

According to Merriam-Webster, artificial intelligence is "a branch of computer science dealing with the simulation of intelligent behavior in computers." Alternatively, the Encyclopedia Britannica deems AI to be "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." But these are very broad interpretations of a complex subject, grossly insufficient to describe it. Alan Turing, one of the pioneers of the modern electronic computer, argued that for a machine to be considered capable of intelligent thought, it must convincingly imitate a human. To this end he devised an elegant test whereby a human interrogator blindly questions both a machine and a person to determine whether their comparative responses betray the imitator.


Tom Cruise's Existential Need for Speed

The New Yorker

On July 3rd, Tom Cruise will be sixty years old. The fact that he does not look it, at all, even in IMAX closeups so tight you can study the grain of his tooth enamel, adds a note of cognitive dissonance to "Top Gun: Maverick," the long-aborning sequel in which he's called back to mentor a squad of younger stick-jockeys who address him as Pops and Old-Timer until he wins their respect in the air. Even for a physical performer like Cruise, sixty is no longer an expiration date. Mick Jagger blew by that milestone in 2003, as did Sylvester Stallone in 2006, and, thanks presumably to healthy habits and/or medical technology dreamt of only by science fiction, they're both still out there, doing a version of the kind of thing they've always done. But the level of performance expected of a Rolling Stone or an Expendable is one thing, and the work that Tom Cruise appears to demand of himself is something else entirely.


Complexity of Arithmetic in Warded Datalog+-

Berent, Lucas, Nissl, Markus, Sallinger, Emanuel

arXiv.org Artificial Intelligence

Warded Datalog+- extends the logic-based language Datalog with existential quantifiers in rule heads. Existential rules are needed for advanced reasoning tasks, e.g., ontological reasoning. The theoretical efficiency guarantees of Warded Datalog+- do not cover extensions crucial for data analytics, such as arithmetic. Moreover, despite the significance of arithmetic for common data analytic scenarios, no decidable fragment of any Datalog+- language extended with arithmetic has been identified. We close this gap by defining a new language that extends Warded Datalog+- with arithmetic and prove its P-completeness. Furthermore, we present an efficient reasoning algorithm for our newly defined language and prove descriptive complexity results for a recently introduced Datalog fragment with integer arithmetic, thereby closing an open question. We lay the theoretical foundation for highly expressive Datalog+- languages that combine the power of advanced recursive rules and arithmetic while guaranteeing efficient reasoning algorithms for applications in modern AI systems, such as Knowledge Graphs.
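As a rough illustration of why arithmetic matters in this setting, here is a minimal bottom-up (naive) Datalog evaluation sketch in Python with an arithmetic guard. The rules, the relation names `edge` and `dist`, and the bound `max_dist` are invented for this example and are not taken from the paper; the guard shows how unbounded arithmetic would make the fixpoint infinite unless the language restricts it.

```python
def naive_eval(edges, max_dist):
    """Derive dist(x, y, d): y is reachable from x in d steps, d <= max_dist.

    Informal Datalog-with-arithmetic rules:
        dist(X, Y, 1)     :- edge(X, Y).
        dist(X, Z, D + 1) :- dist(X, Y, D), edge(Y, Z), D + 1 <= max_dist.
    """
    facts = {(x, y, 1) for (x, y) in edges}
    while True:
        new = {
            (x, z, d + 1)
            for (x, y, d) in facts
            for (y2, z) in edges
            if y2 == y and d + 1 <= max_dist  # arithmetic guard keeps the fixpoint finite
        }
        if new <= facts:
            return facts  # no new facts derivable: least fixpoint reached
        facts |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(naive_eval(edges, 3)))
```

Without the `d + 1 <= max_dist` comparison, the second rule would keep inventing larger distance values forever on any cyclic graph, which is exactly the kind of behavior a decidable arithmetic fragment has to rule out.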


Exploring the Landscape of Relational Syllogistic Logics

Kruckman, Alex, Moss, Lawrence S.

arXiv.org Artificial Intelligence

This paper explores relational syllogistic logics, a family of logical systems related to reasoning about relations in extensions of the classical syllogistic. These are all decidable logical systems. We prove completeness theorems and complexity results for a natural subfamily of relational syllogistic logics, parametrized by constructors for terms and for sentences.
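To make the notion of a relational syllogistic inference concrete, here is a toy countermodel search in Python over bounded domains. The sentence forms ("All X r all Y", "All X are Y"), the relation name `r`, and the two sample inferences are illustrative assumptions rather than the paper's systems, and failing to find a countermodel among small models is only a sanity check, not a completeness argument.

```python
from itertools import chain, combinations, product

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def all_r_all(X, Y, r):  # "All X r all Y": every member of X relates to every member of Y
    return all((x, y) in r for x in X for y in Y)

def all_are(X, Y):       # classical syllogistic "All X are Y"
    return set(X) <= set(Y)

def has_countermodel(premises, conclusion, max_size=2):
    """Search all models with at most max_size elements for one that
    satisfies every premise but falsifies the conclusion."""
    for n in range(1, max_size + 1):
        dom = range(n)
        pairs = list(product(dom, dom))
        for A, B, C in product(subsets(dom), repeat=3):
            for r_pairs in subsets(pairs):
                m = {"A": A, "B": B, "C": C, "r": set(r_pairs)}
                if all(p(m) for p in premises) and not conclusion(m):
                    return True
    return False

# Valid pattern:   All A r all B, All C are B  |=  All A r all C
valid_has_cm = has_countermodel(
    [lambda m: all_r_all(m["A"], m["B"], m["r"]),
     lambda m: all_are(m["C"], m["B"])],
    lambda m: all_r_all(m["A"], m["C"], m["r"]))

# Invalid pattern: All A r all B  does not entail  All B r all A
invalid_has_cm = has_countermodel(
    [lambda m: all_r_all(m["A"], m["B"], m["r"])],
    lambda m: all_r_all(m["B"], m["A"], m["r"]))

print(valid_has_cm, invalid_has_cm)  # False True
```

The invalid pattern is refuted by a two-element model with A = {0}, B = {1}, and r = {(0, 1)}; the valid pattern survives the bounded search because any c in C is also in B, so every a in A already relates to it.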