What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it tightly enough by demonstrating a correct output, and one needs to go further: writing the first few words or sentences of the target output may be necessary.
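The technique described above — pinning the model to the intended mode of completion by writing the opening words of the target output yourself — amounts to plain prompt construction. A minimal sketch follows; the passage text and seed words are illustrative placeholders, not taken from any real session, and `build_constrained_prompt` is a hypothetical helper, not part of any API:

```python
def build_constrained_prompt(passage: str, framing: str, output_seed: str) -> str:
    """Assemble a completion prompt that ends with the first words of the
    desired output, so a completion model continues in the intended mode
    rather than pivoting into some other kind of text."""
    return f'{framing}\n\n"{passage}"\n\n{output_seed}'

# Illustrative values only (placeholders, not from the original article):
passage = "The mitochondria is the powerhouse of the cell."
framing = "My second grader asked me what this passage means:"
seed = 'I rephrased it for him, in plain words a second grader can understand: "'

prompt = build_constrained_prompt(passage, framing, seed)
# The prompt ends mid-quotation, so the model is strongly constrained to
# produce the simplified rephrasing as its next tokens.
```

The design point is that the seed does the constraining: ending the prompt mid-quotation leaves the model only one natural continuation, which is exactly the failure-mode fix the passage recommends.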
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Much of today's technology reporting is focused on the potential threats posed by new developments. Dangers are seen in everything from robots to flying drones and two-wheeled "hoverboards". Physicist Stephen Hawking has even warned that full artificial intelligence "could spell the end of the human race". Such concerns are not new, according to Carl Benedikt Frey, co-director of the Oxford Martin programme on technology and employment at Oxford University. "Fears about technology, and certainly fears that technology will destroy our jobs, have been with us for as long as jobs have existed," he says.
Robotics Professor Ken Goldberg is the first to acknowledge the public anxiety about what automation and AI might mean for future jobs. The Singularity--the hypothesis that AI will become increasingly powerful, decimating professions and remaking civilization--is becoming a mainstream concept. One need only read news headlines about robot-driven factories, watch movies like "Ex Machina," or hear public figures like Elon Musk declare that AI poses our greatest existential threat. Yet at a recent Blum Center Faculty Salon on the Digital Transformation of Development, Goldberg, UC Berkeley's Department Chair of Industrial Engineering and Operations Research and the William S. Floyd Jr. Distinguished Chair in Engineering, argued that such fears are exaggerated. He is among the computer scientists and roboticists who believe there is inadequate evidence to support the mass-unemployment theories, such as the often-cited Oxford University study estimating that 47 percent of U.S. jobs are at risk of computerization.