
How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.


Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation. Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build. "They took two years to design, essentially, an AI automatic résumé scanner," Crawford said. "And they found that it was so biased against any female applicant that if you even had the word 'woman' on your résumé, it went to the bottom of the pile." That's a classic example of what Crawford calls "dirty data." Even though people think of algorithms as being ...

A 20-Year Community Roadmap for Artificial Intelligence Research in the US

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.

Author Jerry Kaplan talks Artificial Intelligence with Gigaom


Jerry Kaplan is widely known as an Artificial Intelligence expert, technical innovator, serial entrepreneur, and bestselling author. He is currently a Fellow at The Center for Legal Informatics at Stanford University and a visiting lecturer in the computer science department, where he teaches the social and economic impact of Artificial Intelligence. Kaplan founded several technology companies over his 35-year career, two of which became public companies. As an inventor and entrepreneur, he was a key contributor to the creation of numerous familiar technologies including tablet computers, smartphones, online auctions, and social computer games. Kaplan is the author of three books: the best-selling classic Startup: A Silicon Valley Adventure; Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (2015); and Artificial Intelligence: What Everyone Needs to Know (2016). In 1998, Kaplan received the Ernst & Young Emerging Entrepreneur of the Year Award, Northern California. He has been profiled in The New York Times, The Wall Street Journal and Forbes, among others. He received a BA degree from the University of Chicago and a PhD in Computer and Information Science from the University of Pennsylvania. Jerry will be speaking at the Gigaom AI Now in San Francisco, February 15-16.

Notes on a New Philosophy of Empirical Science

This book presents a methodology and philosophy of empirical science based on large-scale lossless data compression. In this view, a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
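The scoring rule at the heart of this philosophy can be sketched in a few lines of Python. This is an illustrative stand-in, not the book's implementation: it uses the off-the-shelf zlib compressor in place of a learned one, and the function and variable names are invented for the example. A theory's score is its total description length: the size of the compressor/model itself plus the size of the benchmark data once compressed, with smaller scores being better (the Occam principle).

```python
# Minimal sketch of the two-part compression score, with zlib standing in
# for a learned compressor. All names here are illustrative.
import os
import zlib

def total_description_length(model_size: int, data: bytes) -> int:
    """Two-part code length: size of the model itself plus the size of
    the data after that model compresses it. Smaller is better."""
    return model_size + len(zlib.compress(data, level=9))

# Data with empirical regularity compresses far below its raw size...
regular = b"the cat sat on the mat. " * 2_000   # 48,000 bytes of repetition
# ...while random (regularity-free) data barely compresses at all.
random_like = os.urandom(48_000)

print(total_description_length(0, regular))      # a few hundred bytes
print(total_description_length(0, random_like))  # close to 48,000 bytes
```

This also shows why large benchmarks justify complex models: against a 100 GB target database, a 10 MB model adds a negligible constant to the score, so it is worth including whenever it buys more than 10 MB of compression.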