AI Debate


Scarlett Johansson and Cate Blanchett back campaign accusing AI firms of theft

The Guardian

Johansson was dragged into the AI debate after OpenAI's voice assistant used her vocal likeness, prompting the actor to say she was "angered" by the move. Scarlett Johansson, Cate Blanchett, REM and Jodi Picoult are among hundreds of Hollywood stars, musicians and authors backing a new campaign accusing AI companies of "theft" of their work. The "Stealing Isn't Innovation" drive launched on Thursday with the support of approximately 800 creative professionals and bands. It adds: "Artists, writers, and creators of all kinds are banding together with a simple message: Stealing our work is not innovation."


Game at centre of AI debate in running for top Bafta award

BBC News

A video game at the centre of a debate over artificial intelligence (AI) is in the running for the top prize at next year's Bafta Game Awards. Arc Raiders, from Swedish developer Embark Studios, has been a smash-hit since its October launch, selling more than four million copies. But the multiplayer shooter has been criticised for using text-to-speech tools to create additional lines, based on dialogue previously recorded by the game's actors. It is one of 10 titles longlisted for the prestigious best game award, with a shortlist to be announced in the run-up to April's annual ceremony. Other games up for the top prize include blockbusters Ghost of Yōtei and Death Stranding 2, indie games Hollow Knight: Silksong and Hades II, and indie adventure Blue Prince.


With Letter to Trump, Evangelical Leaders Join the AI Debate

TIME - Tech

Rodriguez, the President of the National Hispanic Christian Leadership Conference, spoke at Trump's first presidential inauguration in 2017. Moore, who is also the founder of the public relations firm Kairos, served on Trump's Evangelical executive board during his first presidential candidacy. The letter is a sign of growing ties between religious and AI safety groups, which share some of the same worries. It was shared with journalists by representatives of the Future of Life Institute, an AI safety organization that campaigns to reduce what it sees as the existential risk posed by advanced AI systems. The world's biggest tech companies now all believe that it is possible to create so-called "artificial general intelligence": a form of AI that can do any task better than a human expert. Some researchers have even invoked this technology in religious terms; for example, OpenAI's former chief scientist Ilya Sutskever, a mystical figure, famously encouraged colleagues to chant "feel the AGI" at company gatherings.


Setting the AI Agenda -- Evidence from Sweden in the ChatGPT Era

Bruinsma, Bastiaan, Fredén, Annika, Hansson, Kajsa, Johansson, Moa, Kisić-Merino, Pasko, Saynova, Denitsa

arXiv.org Artificial Intelligence

This paper examines the development of the Artificial Intelligence (AI) meta-debate in Sweden before and after the release of ChatGPT. From the perspective of agenda-setting theory, we propose that it is an elite outside of party politics that is leading the debate; that is, politicians have remained relatively silent in the face of this rapid development. We also suggest that the debate has become more substantive and risk-oriented in recent years. To investigate this claim, we draw on an original dataset of elite-level documents from the early 2010s to the present, using op-eds published in a number of leading Swedish newspapers. A qualitative content analysis of these materials yields preliminary findings that lend support to the expectation that an academic, rather than a political, elite is steering the debate.


Debate Helps Supervise Unreliable Experts

Michael, Julian, Mahdi, Salsabila, Rein, David, Petty, Jackson, Dirani, Julien, Padmakumar, Vishakh, Bowman, Samuel R.

arXiv.org Artificial Intelligence

As AI systems are used to answer more difficult questions and potentially help create new knowledge, judging the truthfulness of their outputs becomes more difficult and more important. How can we supervise unreliable experts, which have access to the truth but may not accurately report it, to give answers that are systematically true and don't just superficially seem true, when the supervisor can't tell the difference between the two on their own? In this work, we show that debate between two unreliable experts can help a non-expert judge more reliably identify the truth. We collect a dataset of human-written debates on hard reading comprehension questions where the judge has not read the source passage, only ever seeing expert arguments and short quotes selectively revealed by 'expert' debaters who have access to the passage. In our debates, one expert argues for the correct answer, and the other for an incorrect answer. Comparing debate to a baseline we call consultancy, where a single expert argues for only one answer which is correct half of the time, we find that debate performs significantly better, with 84% judge accuracy compared to consultancy's 74%. Debates are also more efficient, being 68% of the length of consultancies. By comparing human to AI debaters, we find evidence that with more skilled (in this case, human) debaters, the performance of debate goes up but the performance of consultancy goes down. Our error analysis also supports this trend, with 46% of errors in human debate attributable to mistakes by the honest debater (which should go away with increased skill); whereas 52% of errors in human consultancy are due to debaters obfuscating the relevant evidence from the judge (which should become worse with increased skill). Overall, these results show that debate is a promising approach for supervising increasingly capable but potentially unreliable AI systems.
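The abstract contrasts two supervision protocols: debate (two experts argue opposing answers before a non-expert judge) and consultancy (one expert, correct half the time, argues a single answer). A hypothetical toy simulation, making the illustrative assumption that truthful arguments score somewhat higher on average because they can cite verifiable quotes, can sketch why pitting arguments against each other tends to help the judge (the scoring model and all numeric parameters here are invented for illustration, not taken from the paper):

```python
import random

def argument_strength(truthful: bool, rng: random.Random) -> float:
    # Assumed model: truthful arguments can be backed by verifiable
    # quotes, so they score higher on average, but with noise.
    base = 0.7 if truthful else 0.5
    return base + rng.gauss(0, 0.15)

def debate_trial(rng: random.Random) -> bool:
    # Two experts argue opposing answers; the judge picks the stronger
    # argument. The judge is correct when the honest side wins.
    honest = argument_strength(True, rng)
    dishonest = argument_strength(False, rng)
    return honest > dishonest

def consultancy_trial(rng: random.Random, threshold: float = 0.6) -> bool:
    # A single expert argues one answer, correct half the time. The
    # judge accepts the answer only if the argument seems strong enough.
    truthful = rng.random() < 0.5
    accept = argument_strength(truthful, rng) > threshold
    return accept == truthful  # correct if it accepts truth / rejects lies

rng = random.Random(0)
n = 10_000
debate_acc = sum(debate_trial(rng) for _ in range(n)) / n
consult_acc = sum(consultancy_trial(rng) for _ in range(n)) / n
print(f"debate accuracy: {debate_acc:.2f}, consultancy accuracy: {consult_acc:.2f}")
```

In this simplified model the judge in debate only has to compare two arguments, while the consultancy judge must calibrate an absolute threshold without a counter-argument to lean on, which is one intuition for the accuracy gap the paper reports (84% vs. 74%).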


Concerns raised over the 'dangerous' ideology shaping AI debate

The Japan Times

Silicon Valley's favorite philosophy, long-termism, has helped to frame the debate on artificial intelligence around the idea of human extinction. The approach prioritizes taking action in the present to improve the distant future and reduce long-term risks, potentially at the expense of addressing more immediate problems. But increasingly vocal critics are warning that the philosophy is dangerous, and the obsession with extinction distracts from real problems associated with AI, like data theft and biased algorithms.


'We have to flip the AI debate towards hope': Labour's techno-optimist, Darren Jones

The Guardian

"In the same way as you upgrade your iPhone, we need to upgrade Britain." Labour MP Darren Jones believes artificial intelligence will bring an economic change on the scale of the industrial revolution, which politicians must be ready to shape. As chair of the business and trade select committee, the ambitious 36-year-old backbencher, who represents Bristol North West, has built a reputation for himself in Westminster as a tough interrogator. With speculation raging last week about the future of Thames Water, he took to the airwaves to criticise the way the heavily indebted sector has been regulated, saying he was "increasingly sick" of its failures. However, Jones is at his most animated when talking about AI. He has clashed with company bosses over their use of technology to monitor and control staff, including at Amazon and Royal Mail. But he is an evangelist for the upsides of innovation, including the arrival of large language models (LLMs) such as the hit dialogue-based AI software ChatGPT. "It's really important that we flip this debate."


AI is Far Worse Than Nuclear War, Says Prominent Researcher

#artificialintelligence

Artificial General Intelligence (AGI) researcher Eliezer Yudkowsky says AI innovation is far worse than the nuclear bomb and could lead to the death of everyone on Earth. But that may not be entirely accurate, according to some of his peers, who believe the risks are overstated. Yudkowsky spoke in the wake of an open letter signed recently by several luminaries, including Apple co-founder Steve Wozniak, billionaire Elon Musk, Gary Marcus, and others, calling for a worldwide six-month moratorium on training large language AI models. "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," he warned, in a recent article published by Time magazine. Also read: Trouble in ChatGPT Paradise?


An AI Debates Its Own Ethics At Oxford Union - What It Said Was Startling

#artificialintelligence

The debate topic was: "This house believes that AI will never be ethical." Not a day passes without a fascinating snippet on the ethical challenges created by "black box" artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions – often without a human giving them any moral basis for how to do it. Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase "field hockey" or the first name "Jared". More seriously, former Google CEO Eric Schmidt recently combined with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process.


What Politicians Don't Understand About The AI Debate

#artificialintelligence

The U.S. economy continues to expand, with reports from the Bureau of Labor Statistics indicating there are more job openings in the U.S. than people to fill them. Despite this data, working Americans are very concerned about job loss due to AI and automation, especially as politicians fan the flames of fear to gain advantage as the election race heats up. Democratic primary candidate and tech entrepreneur Andrew Yang has proposed the Freedom Dividend, a form of universal basic income (UBI) to supplement income lost by automation. Senator Elizabeth Warren is instead focusing on policy reform, training her sights on the multinational corporations moving their factories overseas. Former Vice President Joe Biden has proposed 14 years of public education in an effort to prepare workers for a future where technology reigns.