What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough with text imitating a correct output, and one needs to go further: writing the first few words or the first sentence of the target output may be necessary.
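This constraint technique is just prompt construction: frame the task, supply the passage, then seed the opening words of the desired output so the model continues in the intended mode instead of pivoting. A minimal sketch (the passage, framing, and seed text below are hypothetical stand-ins; no model API is invoked):

```python
def build_constrained_prompt(task_framing: str, passage: str, output_seed: str) -> str:
    """Combine a task framing, the input passage, and the first few
    words of the desired output, so a completion model is pushed to
    continue the seeded output rather than drift into another mode."""
    return f'{task_framing}\n\n"{passage}"\n\n{output_seed}'

# Hypothetical example using the summarization framing from the text,
# seeded with the start of the explanation we want the model to finish.
prompt = build_constrained_prompt(
    task_framing="My second grader asked me what this passage means:",
    passage="The mitochondria is the powerhouse of the cell.",
    output_seed="I rephrased it for him, in plain language a second grader can understand:",
)
# The model would be asked to continue from the end of `prompt`.
```

The seed at the end is the key move: the completion has to pick up mid-output, which rules out most alternative continuations.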
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
By the way, you can listen to a neuron fire here (what you're actually hearing is the electro-chemical firing of a neuron, converted to audio).

Some electrodes want to take the relationship to the next level and will go for a technique called the patch clamp, whereby it'll get rid of its electrode tip, leaving just a tiny little tube called a glass pipette, and it'll actually directly assault a neuron by sucking a "patch" of its membrane into the tube, allowing for even finer measurements. A patch clamp also has the benefit that, unlike all the other methods we've discussed, because it's physically touching the neuron, it can not only record but stimulate the neuron, injecting current or holding voltage at a set level to do specific tests (other methods can stimulate neurons, but only entire groups together).

Finally, electrodes can fully defile the neuron and actually penetrate through the membrane, which is called sharp electrode recording. If the tip is sharp enough, this won't destroy the cell--the membrane will actually seal around the electrode, making it very easy to stimulate the neuron or record the voltage difference between the inside and outside of the neuron. But this is a short-term technique--a punctured neuron won't survive long.