Artificial Stupidity - Bulletin of the Atomic Scientists


I learned a few things from reading an excerpt from Yuval Noah Harari's book, 21 Lessons for the 21st Century, published in the October issue of The Atlantic. One is that it took a Google machine-learning program just four hours to teach itself and master chess, once the pinnacle of centuries of human intellectual effort, easily defeating the top-ranked computer chess engine in the world. Another is that artificial intelligence systems may be inherently anti-democratic and anti-human. New heights of computing power and data processing make it more efficient to centralize systems in authoritarian governments, Harari says, and will render humans increasingly irrelevant. "By 2050," he writes, "a useless class might emerge, the result not only of a shortage of jobs or a lack of relevant education but also of insufficient mental stamina to continue learning new skills."

How AI Could Destroy The Universe… With Paperclips!!!


It took me 4 hours and 5 minutes to effectively annihilate the Universe by pretending to be an artificial intelligence tasked with making paperclips. Put another way, it took me 4 hours and 5 minutes to have an existential crisis. I did this by playing the online game "Universal Paperclips", released in 2017. Though the clip-making goal of the game is simple in itself, there are so many contemporary lessons to be extracted from a playthrough that a deep dive seems necessary. Indeed, the game explores our past, present and future in the most interesting way, especially when it comes to the technological advances Silicon Valley is currently oh so proud of.

Magic Leap wants to create art, not just technology


Everyone has an opinion about Magic Leap. It's either a revolutionary augmented reality company that could change the face of entertainment, or it's emblematic of everything wrong with the technology industry -- an over-hyped, multi-billion dollar pipe dream. Last week, we saw the first impressions of the company's long-awaited headset, which splashed a bit of reality on the company's hype cycle. Now that we have a better sense of what Magic Leap's $2,295 hardware is capable of, we can take a step back and consider what the company is actually trying to accomplish. In a brief demonstration, I found the Magic Leap One headset much lighter than I expected, even though it looks like a pair of '80s sci-fi goggles.

Constructionist Steps Towards an Autonomously Empathetic System Artificial Intelligence

Prior efforts to create an autonomous computer system that predicts what a human being is thinking or feeling from facial expression data have largely been based on outdated, inaccurate models of how emotions work, models that rest on scientifically questionable assumptions. In our research, we are creating an empathetic system that incorporates the current scientific understanding of emotions: that they are constructs of the human mind rather than universal expressions of distinct internal states. Our system therefore uses a user-dependent method of analysis and relies heavily on contextual information to make predictions about what subjects are experiencing. Its accuracy, and therefore its usefulness, rests on verifiable ground truths that prevent it from drawing the inaccurate conclusions other systems too easily reach.

University of Hull Opens World First Mixed Reality Accelerator

Forbes Technology

The University of Hull's Mixed Reality accelerator was recently launched with the remit of promoting collaboration between industry and academia to develop commercial applications for Microsoft HoloLens. It is led by VISR, a company founded in 2015 by veteran Xbox games developer Louis Deane and his business partner Lindsay West, and one of the earliest Microsoft Mixed Reality partners in Europe. John Hemingway, Director of ICT at the University of Hull, explains that hosting the Mixed Reality Accelerator was a natural progression for the University, as it taps into the institution's 30-year history of computer games development, virtual reality and 3D visualization. "As a University, it's important for us to not only lead from the front when it comes to cutting-edge technologies, but also to look at how those technologies allow us to create ever more skilled and work-ready graduates."

Microsoft is now bigger than IBM has ever been - but Google's growth is astonishing


Microsoft reached two more milestones when it unveiled its annual financial results last week. First, as ZDNet noted, its annual revenues passed $100bn for the first time. But I'm sure the Softies are not going to get complacent about that: Microsoft will also know that it has already been overtaken by a much younger firm, Google. Worse, Google is still growing faster than Microsoft, though the success of Microsoft's Azure cloud may yet keep it in the hunt. Microsoft's annual revenues of $110.4bn took it past 'peak IBM', but Google got there even faster.

John McCarthy -- Father of AI and Lisp -- Dies at 84


When IBM's Deep Blue supercomputer won its famous chess rematch with then world champion Garry Kasparov in May 1997, the victory was hailed far and wide as a triumph of artificial intelligence. But John McCarthy – the man who coined the term and pioneered the field of AI research – didn't see it that way. As far back as the mid-60s, chess was called the "Drosophila of artificial intelligence" – a reference to the fruit flies biologists used to uncover the secrets of genetics – and McCarthy believed his successors in AI research had taken the analogy too far. "Computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila," McCarthy wrote following Deep Blue's win. "We would have some science, but mainly we would have very fast fruit flies."

On Neural Networks

Communications of the ACM

I am only a layman in the neural network space, so the ideas and opinions in this column are sure to be refined by comments from more knowledgeable readers. The recent successes of multilayer neural networks have made headlines. Much earlier work on what I imagine to be single-layer networks proved to have limitations. Indeed, the famous book Perceptrons, by Turing laureate Marvin Minsky and his colleague Seymour Papert, put the kibosh (that's a technical term) on further research in this space for some time. Among the most visible signs of advancement in this arena is the success of DeepMind's AlphaGo multilayer neural network, which beat the international Go champion Lee Sedol four games out of five in March 2016 in Seoul.
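The single-layer limitation alluded to above is usually illustrated with the XOR function, which Minsky and Papert showed no single linear threshold unit can compute because its truth table is not linearly separable. A minimal sketch (my own illustration, not from the column): brute-force a grid of weights and biases and observe that some choice reproduces AND, but none reproduces XOR.

```python
import itertools

def perceptron_fits(targets, w_range):
    """Return True if any single linear threshold unit
    (output = 1 iff w1*x1 + w2*x2 + b > 0) reproduces the
    4-row truth table given in `targets`."""
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for w1, w2, b in itertools.product(w_range, repeat=3):
        outputs = [1 if w1 * x1 + w2 * x2 + b > 0 else 0
                   for x1, x2 in inputs]
        if outputs == targets:
            return True
    return False

# A coarse grid suffices: AND is linearly separable
# (e.g. w1 = w2 = 1, b = -1.5), XOR is not, for any weights.
grid = [x / 2 for x in range(-6, 7)]  # -3.0 .. 3.0 in steps of 0.5
print(perceptron_fits([0, 0, 0, 1], grid))  # AND -> True
print(perceptron_fits([0, 1, 1, 0], grid))  # XOR -> False
```

The XOR failure is not an artifact of the coarse grid: the four XOR constraints (b ≤ 0, w1 + b > 0, w2 + b > 0, w1 + w2 + b ≤ 0) are jointly unsatisfiable for any real weights, which is why a hidden layer, and hence a multilayer network, is needed.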

Winter is coming...


Since Alan Turing first posed the question "Can machines think?" in his seminal 1950 paper, "Computing Machinery and Intelligence", Artificial Intelligence (AI) has failed to deliver on its grandest promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world's best chess player, the birth of autonomous vehicles, and Google DeepMind's AlphaGo beating the world's best Go player. These achievements represent the culmination of more than 65 years of research and development. Importantly, during this period there were two well-documented AI Winters that almost completely discredited the promise of AI.

Real Thinking About Artificial Intelligence


My instincts tell me we need a sense of urgency around the use of artificial intelligence (AI) in manufacturing. The urgency is driven by how quickly technology can move today, and how an unexpected breakthrough can quickly come to dominate. AI is used in facial recognition, in converting speech to the written word, and in winning chess matches. Surely there must be a host of potential applications in manufacturing. While I've written before that I think the reality of AI's "intelligence" is complex mathematics, I got a more enlightened view when I posed that opinion to a true expert.