
GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentences of the target output may be necessary.
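The tactic described above can be sketched in a few lines of Python. The passage and the continuation line below are illustrative stand-ins, not text from the article: the point is only that the prompt string already contains the first words of the target output, so the model's completion is constrained to continue them.

```python
# Sketch of the prompting tactic: constrain the model by writing
# the opening of the desired output yourself.
passage = "It was the best of times, it was the worst of times."

# Unconstrained prompt -- the model may pivot into other modes of completion:
weak_prompt = f'My second grader asked me what this passage means:\n"{passage}"\n'

# Constrained prompt -- the first words of the target output are already
# written, so a completion must continue the plain-language rephrasing:
strong_prompt = weak_prompt + (
    'I rephrased it for him, in plain language a second grader '
    'can understand:\n"'
)

print(strong_prompt)
```

The constrained prompt ends mid-quotation, which is the whole trick: the model is handed an open quote and an explicit framing, and the path of least resistance is to finish the rephrasing rather than wander into another genre.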

Abolish the #TechToPrisonPipeline


The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

How Big Tech Manipulates Academia to Avoid Regulation


The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking underage girls as young as 14, Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company. Many spectators are puzzled by Ito's influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of "ethical AI" that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama described him as an "expert" on AI and ethics. Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.

The 2018 Survey: AI and the Future of Humans


"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."

A 20-Year Community Roadmap for Artificial Intelligence Research in the US

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.

Speakers - AI for Good Global Summit


CEO at Things Foundation; Technology Manager at Let's Do It Foundation
Head of Cortex Ventures, Africa's first AI-only focused VC fund

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI

This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. Pertinent literatures are vast. Thus, this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). The Report expresses the explainability issues and challenges in modern AI, and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.

Embrace big data and robots -- they're the future of work


President Donald Trump's July 19 executive order establishing the President's National Council for the American Worker is directed at preparing Americans for the workplace of the future. Although short on specifics, the order sends a powerful message about the need for revitalizing educational opportunities if Americans are to thrive in the era of big data, robots and artificial intelligence. The president's intent is to lay the groundwork for tackling a national "skills crisis." His order accepts that Americans need additional skills to fill the current 6.7 million job vacancies. In fact, the executive order gives official imprimatur to what many in industry and academia have feared for some time: "The economy is changing at a rapid pace because of the technology, automation, and artificial intelligence," and existing programs have "prepared Americans for the economy of the past."

Artificial Intelligence Law is Here, Part One


In the early to mid-'90s, while my friends were getting into Indie Rock, I was hacking away at robots and getting them to learn to map a room. As a computer science graduate student, I programmed LISP algorithms for parsing nursing records in order to predict intervention codes. I was no less a nerd (or, to put it a better way, a technology enthusiast) in law school, when I wrote about how natural language processing can improve legal research tools. I didn't put much thought, either as a computer scientist or a law student, into whether artificial intelligence (AI) should be regulated. Frankly, we were in such early days of the technology that AI regulations seemed like science fiction à la Isaac Asimov's three laws of robotics.