"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or the first sentence of the target output may be necessary.
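The technique above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and exact wording are my own, not from the original): the prompt combines a framing instruction with the first words of the desired output, so the continuation stays in the target mode rather than pivoting elsewhere.

```python
# Sketch of the prompt-constraining technique: seed the prompt with
# both a framing instruction and the opening words of the target
# output, so a completion model continues in the intended mode.

def build_constrained_prompt(passage: str) -> str:
    """Build a summarization prompt that seeds the target output."""
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        # Seeding the answer's opening words constrains the model
        # toward a plain-language summary rather than, say, a story.
        "I rephrased it for him, in plain language a second grader "
        'can understand:\n\n"'
    )

prompt = build_constrained_prompt(
    "The mitochondrion is the powerhouse of the cell."
)
print(prompt)
```

The resulting string would then be sent to a completion model; the trailing open quote invites the model to write the summary itself.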
Artificial Intelligence (AI) is already reconfiguring the world in conspicuous ways. Data drives our global digital ecosystem, and AI technologies reveal patterns in data. Smartphones, smart homes, and smart cities influence how we live and interact, and AI systems are increasingly involved in recruitment decisions, medical diagnoses and judicial verdicts. Whether this scenario is utopian or dystopian depends on your perspective. The potential risks of AI are enumerated repeatedly.
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI. This article is part of the special track on AI and Society.
Google and Facebook have an almost perfect log of your comings and goings, and they can combine that information with artificial intelligence to predict things about you. The systems are big and sophisticated, but their capabilities are still a far cry from the friendly, charismatic Samantha in the movie Her or the devilish, horrifying HAL in 2001: A Space Odyssey. After all, no one wants a HAL taking us out. But algorithms are slowly getting smarter. Computer scientists are programming systems that can teach themselves to play Atari games and poker.
Coman, Alexandra (National Research Council/Naval Research Laboratory) | Johnson, Benjamin (National Research Council/Naval Research Laboratory) | Briggs, Gordon (National Research Council/Naval Research Laboratory) | Aha, David W. (Naval Research Laboratory)
Human attitudes of objection, protest, and rebellion have undeniable potential to bring about social benefits, from social justice to healthy balance in relationships. At times, they can even be argued to be ethically obligatory. Conversely, AI rebellion is largely seen as a dangerous, destructive prospect. With the increase of interest in collaborative human/AI environments in which synthetic agents play social roles or, at least, exhibit behavior with social and ethical implications, we believe that AI rebellion could have benefits similar to those of its counterpart in humans. We introduce a framework meant to help categorize and design Rebel Agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. We also present AI rebellion scenarios in two considerably different contexts (military unmanned vehicles and computational social creativity) that exemplify components of the framework.
Two students have built an AI that could be the basis of future killer robots. In a controversial move, the pair trained an AI bot to kill human players within the classic video game Doom. Critics have expressed concern over the AI technology and the risk it could pose to humans in the future. Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University in Pittsburgh, trained an AI bot - nicknamed Arnold - using 'deep reinforcement learning' techniques. While Google's AI software had previously been shown to tackle vintage 2D Atari games such as Space Invaders, the students wanted to expand the technology to tackle three-dimensional first-person shooter games like Doom.
Tom Murphy graduated from Carnegie Mellon University with a PhD in computer science. Then he built software that learned to play Nintendo games. In some cases, the system works well. Playing Super Mario, for instance, it learns to exploit a bug in the game, stomping on enemy Goombas even when floating below them. It can rack up points by attacking the game with a reckless abandon you and I would never try.