
GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
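As a rough illustration of the constraint technique described above, the following is a minimal Python sketch (not from the essay) that builds a summarization prompt in the style quoted here and then pre-writes the opening of the target output. The helper name build_summarization_prompt, the exact framing sentences, and the complete() call are all hypothetical placeholders, not part of the original text or any particular API.

    # Minimal sketch: constrain a text-completion model by imitating a correct
    # output and pre-writing the first words of the desired answer.
    # `complete` is a hypothetical stand-in for whatever completion API is used.

    def build_summarization_prompt(passage: str) -> str:
        """Frame the task so the most plausible continuation is a plain-language summary."""
        return (
            'My second grader asked me what this passage means:\n'
            f'"""{passage}"""\n'
            'I rephrased it for him, in plain language a second grader can understand:\n'
            '"""'  # opening the quoted answer nudges the model to begin the summary itself
        )

    prompt = build_summarization_prompt("The quick brown fox jumps over the lazy dog.")
    # completion = complete(prompt)  # hypothetical call to a text-completion model

The point of the final unclosed quotation mark is exactly the advice above: if the model keeps drifting into other modes of completion, start the target output for it so the only natural continuation is the answer you want.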


Understanding artificial intelligence ethics and safety

arXiv.org Artificial Intelligence

A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, 'Using AI in the Public Sector,' identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.



Leading AI country will be 'ruler of the world,' says Putin

@machinelearnbot

Russian President Vladimir Putin warned Friday (Sept. 1) that the country leading in artificial intelligence will be "the ruler of the world." AI development "raises colossal opportunities and threats that are difficult to predict now," Putin said in a lecture to students, warning that "it would be strongly undesirable if someone wins a monopolist position." Future wars will be fought by autonomous drones, Putin suggested, and "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender." AI experts worldwide are also concerned: on August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk and Google DeepMind's Mustafa Suleyman, signed an open letter asking the United Nations to "urgently address the challenge of lethal autonomous weapons (often called 'killer robots') and ban their use internationally."