Meta's Chatbot Ingested My Books, So I Asked It What It Thought of Them. What I Learned Was Deeply Worrying.
When I learned that Meta's programmers downloaded 183,000 books for a database to teach the company's generative A.I. machines how to write, I was curious whether any of my own books had been fed into the crusher. Alex Reisner of the Atlantic has provided a handy search tool--type in an author's name, and out come all of his or her books that LLaMA used. I typed "Fred Kaplan" and found that three of my six books (1959, Dark Territory, and The Insurgents) had been assimilated into the digital Borg. My first reaction, like that of many other authors, was outrage at the violation. However, my second reaction--also, I assume, like that of many other authors--was outrage that the program didn't include my other three books (The Bomb, Daydream Believers, and The Wizards of Armageddon). Were there really 182,997 books that were better than those three?
The Darwinian Argument for Worrying About AI
A broad coalition of AI experts recently released a brief public statement warning of "the risk of extinction from AI." There are many different ways in which AIs might become serious dangers to humanity, and the exact nature of the risks is still debated, but imagine a CEO who acquires an AI assistant. The CEO begins by giving it simple, low-level assignments, like drafting emails and suggesting purchases. As the AI improves over time, it progressively becomes much better at these things than the human employees. So the AI gets "promoted."
Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries
Kelley, Patrick Gage; Yang, Yongwei; Heldreth, Courtney; Moessner, Christopher; Sedley, Aaron; Kramm, Andreas; Newman, David T.; Woodruff, Allison
As the influence and use of artificial intelligence (AI) have grown and its transformative potential has become more apparent, many questions have been raised regarding the economic, political, social, and ethical implications of its use. Public opinion plays an important role in these discussions, influencing product adoption, commercial development, research funding, and regulation. In this paper we present results of an in-depth survey of public opinion of artificial intelligence conducted with 10,005 respondents spanning eight countries and six continents. We report widespread perception that AI will have significant impact on society, accompanied by strong support for the responsible development and use of AI, and also characterize the public's sentiment towards AI with four key themes.

As the influence and use of artificial intelligence (AI) have grown and its transformative potential has become more apparent [32, 54], many questions have been raised regarding the economic, political, social, and ethical implications of its use [27]. The development and application of AI increasingly features in media, academic, industrial, regulatory, and public discussions [18, 23, 28], with active debate on wide-ranging issues such as the impact of automation on the future of work [8, 50, 52], the interaction of AI with human rights issues such as privacy and discrimination [1, 4, 10, 16], the ethics of autonomous weapons [53, 59], and the development and availability of dual-use technologies such as synthetic media that may be used for either benevolent or nefarious purposes [48].
- Asia > South Korea (0.15)
- Asia > India (0.06)
- Africa > Nigeria (0.06)
- (7 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.47)
- Personal > Interview (0.46)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Banking & Finance (0.68)
- Media > News (0.68)
Worrying About Artificial Intelligence Starting a Nuclear War: Eye on A.I.
An organization that won the Nobel Prize in 2017 for its work to eliminate nuclear weapons is sounding the alarm about the possibility of artificial intelligence leading to unintended wars. Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons, is worried that hackers could breach A.I. technologies that are used in nuclear programs or that they could use A.I. to dupe countries into launching attacks. For example, deepfakes, or realistic-looking computer-altered videos, may be used to "create a perceived threat that might not be there," she warns, prompting governments to overreact. Fihn told Fortune that she wants to convene a meeting in the fall with nuclear weapons experts and some of the leading companies in A.I. and cybersecurity. Participants in the off-the-record event, she said, would produce a document that her group would use to inform governments and others about the danger.
- Africa (0.07)
- North America > United States > California (0.06)
- Europe > France (0.06)
- (4 more...)
- Information Technology (1.00)
- Government > Military (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.73)
- Health & Medicine > Therapeutic Area > Immunology (0.49)
Why We Should Stop Worrying About A.I. (And Start Worrying About Data)
The one-two punch of data and artificial intelligence is in the midst of transforming the world as we know it. But how do we make sure that the new world that emerges in their wake is one we'll want to live in? A big part of the equation is ensuring that consumers' data is handled properly. Speaking on a panel at Fortune's Most Powerful Women Summit in Laguna Niguel, Calif. on Monday, Clara Shih, CEO and co-founder of Hearsay Systems, offered a straightforward, four-point system for doing just that: Let people know what information will be used and how. Be clear about when people can opt in or out of having their personal data collected.
Boy addicted to gaming who just couldn't stop left with a damaged bowel
Children are developing long-term health problems because of the time they spend gaming or glued to devices, a leading doctor warned yesterday. Jo Begent, a paediatric consultant, said a boy of ten had come into her surgery with a deformation so severe that at first she believed it was a tumour. Further examination revealed the child had developed a dilated bowel because he had stopped himself from going to the toilet so he could carry on gaming. The boy was playing World of Warcraft, Call of Duty and FIFA for eight hours at a time, Dr Begent said. Dr Begent, who practises at University College London Hospital, said: 'I was doing my general paediatric clinic one day and a boy walked in, a ten-year-old limping and looking really poorly.
- Leisure & Entertainment > Games > Computer Games (0.59)
- Health & Medicine > Health Care Providers & Services (0.58)
- Health & Medicine > Consumer Health (0.40)