This book presents a methodology and philosophy of empirical science based on large-scale lossless data compression. In this view, a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can be achieved only by discovering and characterizing empirical regularities in the data. The philosophy thus provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned from the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
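The scoring rule described above can be sketched as a two-part description length: the size of the compressed benchmark plus the size of the compressor itself. The sketch below is illustrative only, using Python's standard zlib as a stand-in compressor; the `compressor_size` figures are hypothetical placeholders, not measurements from the book.

```python
import zlib

def mdl_score(data: bytes, compress, compressor_size: int) -> int:
    """Two-part score: length of compressed data plus length of the compressor.
    A 'theory' wins only if its regularity-exploiting compression more than
    pays for the cost of shipping the compressor itself."""
    return len(compress(data)) + compressor_size

# Toy benchmark: highly regular data, 40,000 bytes with an obvious pattern.
data = b"abab" * 10_000

# "No theory": store the raw bytes; the compressor is trivial (cost ~0).
baseline = mdl_score(data, lambda d: d, compressor_size=0)

# "Theory": zlib exploits the repetition; charge a nominal 1,000 bytes
# for the compressor's own description (an assumed, illustrative figure).
theory = mdl_score(data, lambda d: zlib.compress(d, 9), compressor_size=1_000)

print(baseline, theory)  # the theory's total score is far below the baseline
```

On random (regularity-free) data the same comparison reverses: zlib cannot shrink it, so the compressor's own length makes the "theory" score worse than storing the raw bytes, which is exactly the Occam penalty the abstract describes.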
[Photo: An engineer adjusts the robot "The Incredible Bionic Man" at the Smithsonian National Air and Space Museum in Washington, October 17, 2013.] The robots are coming for our jobs, and Wall Street is getting a little nervous. A recent survey of 1,000 financial professionals conducted by LinkedIn, the networking site, found that 25% of Wall Streeters are worried their jobs could be jeopardized by automation. Retail bankers are the most fearful, with a third of respondents saying they view automation as a threat, according to a report highlighting the survey's results. It stands to reason that some folks are nervous, as Wall Street firms look to automate their infrastructure to cut jobs.
Byron Reese: This is "Voices in AI" brought to you by Gigaom. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.
Over the past few years, artificial intelligence has rapidly matured into a viable field of technology. Machines that learn from experience, adjust to new inputs, and perform tasks once uniquely the domain of humans have entered our daily lives in ways seen and unseen. Given the current breakneck pace of change and innovation, the question for governments and policymakers is how to harness the benefits of artificial intelligence without being trampled by the robot takeover of our nightmares. The answer is simple: make the machines work for us. Recently, the IMF's Managing Director Christine Lagarde convened some of the most distinguished voices in the field of artificial intelligence, including Malcolm Frank of Cognizant; Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future; Martin Fleming, Chief Analytics Officer of IBM; and Andrew McAfee and Simon Johnson, both professors at MIT, the latter a former Chief Economist of the IMF.