GPT-3 Creative Fiction

#artificialintelligence

What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
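
A rough sketch of that trick in Python: the helper below builds a summarization prompt and deliberately writes the opening of the desired answer, so the model is pushed to continue in that mode rather than pivot into some other kind of completion. The framing sentence after the passage and the `complete` stub are illustrative assumptions, not the article's exact prompt or any particular API.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to a GPT-3-style text-completion endpoint."""
    raise NotImplementedError("wire this up to your completion API of choice")


def summarization_prompt(passage: str) -> str:
    # Frame the task, then start writing the target output ourselves so the
    # model has little room to drift into another mode of completion.
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        # The next line is an assumed continuation, added for illustration;
        # the key point is that the prompt ends mid-answer, not mid-question.
        "I rephrased it for him, in plain language a second grader can understand:\n\n"
        '"'
    )


if __name__ == "__main__":
    passage = "Photosynthesis converts light energy into chemical energy ..."
    prompt = summarization_prompt(passage)
    # summary = complete(prompt)  # send to the model once `complete` is wired up
    print(prompt)
```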


Are robots really going to take your job?

#artificialintelligence

Carl Benedikt Frey is Co-Director of the Oxford Martin Programme on Technology and Employment at the Oxford Martin School, and Economics Associate of Nuffield College, both University of Oxford. He is also a Senior Fellow of the Programme on Employment, Equity and Growth at the Institute for New Economic Thinking in Oxford, and of the Department of Economic History at Lund University. His research focuses on the transition of industrial nations to digital economies, and the subsequent challenges for economic growth, labour markets and urban development. To secure impact for his research outside academia, Frey is widely engaged in policy, advisory and media activities. In partnership with Citigroup, he works to help global leaders navigate the rapidly changing world economy.


The 84 biggest flops, fails, and dead dreams of the decade in tech

#artificialintelligence

The world never changes quite the way you expect. But at The Verge, we've had a front-row seat while technology has permeated every aspect of our lives over the past decade. Some of the resulting moments -- and gadgets -- arguably defined the decade and the world we live in now. But others we ate up with popcorn in hand, marveling at just how incredibly hard they flopped. This is the decade we learned that crowdfunded gadgets can be utter disasters, even if they don't outright steal your hard-earned cash. It's the decade of wearables, tablets, drones and burning batteries, and of ridiculous valuations for companies that were really good at hiding how little they actually had to offer. Here are 84 things that died hard, often hilariously, to bring us where we are today. Everyone was confused by Google's Nexus Q when it debuted in 2012, including The Verge -- which is probably why the bowling ball of a media streamer crashed and burned before it even came to market.


Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 Gb, it is easy to justify a 10 Mb model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
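
As a back-of-the-envelope illustration of that last point, here is a small Python sketch of the scoring rule the abstract implies: a theory's score is the length of the compressor program plus the length of the benchmark compressed with it, so a 10 Mb model is easily justified against a 100 Gb database if it buys even a modest gain in compression. The specific byte counts for the two candidate models below are invented for illustration.

```python
# Score a candidate theory/compressor by total description length:
# (size of the compressor program) + (size of the benchmark after compression).
def total_codelength(model_bytes: int, compressed_data_bytes: int) -> int:
    return model_bytes + compressed_data_bytes


GB = 10**9
MB = 10**6
KB = 10**3

# Hypothetical candidates evaluated against a 100 GB benchmark (made-up numbers):
simple = total_codelength(model_bytes=50 * KB,              # tiny program...
                          compressed_data_bytes=60 * GB)    # ...but weak compression
complex_ = total_codelength(model_bytes=10 * MB,            # 200x larger program...
                            compressed_data_bytes=40 * GB)  # ...much better compression

# The complex model wins: its extra ~10 MB is dwarfed by the ~20 GB it saves,
# which is the abstract's point about justifying complex models with big data.
assert complex_ < simple
print(f"simple: {simple / GB:.3f} GB   complex: {complex_ / GB:.3f} GB")
```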


What's happening to the pound?

BBC News

The pound plummeted in Asian trading early on Friday. And no-one really seems to know why. Did a trader make a mistake? Did the computers go haywire? And where will the pound go next?