Games


John McCarthy -- Father of AI and Lisp -- Dies at 84

#artificialintelligence

When IBM's Deep Blue supercomputer won its famous chess rematch with then world champion Garry Kasparov in May 1997, the victory was hailed far and wide as a triumph of artificial intelligence. But John McCarthy – the man who coined the term and pioneered the field of AI research – didn't see it that way. As far back as the mid-60s, chess was called the "Drosophila of artificial intelligence" – a reference to the fruit flies biologists used to uncover the secrets of genetics – and McCarthy believed his successors in AI research had taken the analogy too far. "Computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila," McCarthy wrote following Deep Blue's win. "We would have some science, but mainly we would have very fast fruit flies."


On Neural Networks

Communications of the ACM

I am only a layman in the neural network space, so the ideas and opinions in this column are sure to be refined by comments from more knowledgeable readers. The recent successes of multilayer neural networks have made headlines. Much earlier work on what I imagine to be single-layer networks proved to have limitations. Indeed, the famous book Perceptrons, by Turing laureate Marvin Minsky and his colleague Seymour Papert, put the kibosh (that's a technical term) on further research in this space for some time. Among the most visible signs of advancement in this arena is the success of the DeepMind AlphaGo multilayer neural network that beat the international grand Go champion, Lee Sedol, four games out of five in March 2016 in Seoul.
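
The limitation Minsky and Papert highlighted is easiest to see on XOR, a function no single-layer perceptron can represent because its two classes are not linearly separable, while a network with even one hidden layer fits it easily. The NumPy sketch below is purely illustrative and not from the column; the architecture, learning rate, and iteration count are arbitrary choices.

```python
# Illustrative sketch (not from the column): the single-layer limitation on XOR.
# A single-layer perceptron cannot separate XOR (not linearly separable);
# a small two-layer network trained with plain gradient descent can.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers: 2 inputs -> 4 hidden units -> 1 output (sizes chosen arbitrarily).
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error loss, chain rule through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs approach [0, 1, 1, 0]; a single-layer model (no hidden layer) cannot
# fit all four targets, since no linear boundary separates XOR's classes.
print(np.round(out.ravel(), 2))
```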


Winter is coming...

#artificialintelligence

Since Alan Turing first posed the question "can machines think?" in his seminal 1950 paper, "Computing Machinery and Intelligence", Artificial Intelligence (AI) has failed to deliver on its promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world's best chess player, the birth of autonomous vehicles, and Google DeepMind's AlphaGo beating one of the world's best Go players. The current achievements represent the culmination of more than 65 years of research and development. Importantly, during this period there were two well-documented AI Winters that almost completely discredited the promise of AI.


Real Thinking About Artificial Intelligence

#artificialintelligence

My instincts tell me we need a sense of urgency around the use of artificial intelligence (AI) in manufacturing. The urgency is driven by how quickly technology can move today, and how an unexpected breakthrough can quickly come to dominate. AI is used in facial recognition, in converting speech to text, and in winning chess matches. Surely, there must be a host of potential applications in manufacturing. While I've written before that I think the reality of AI's "intelligence" is complex mathematics, I got a more enlightened vision when I posed that view to a true expert.


AI Technology Revolution Is Just Getting Started

#artificialintelligence

That should be very good for the companies that are the arms merchants in AI technology, particularly chip companies like Micron Technology (ticker: MU) and Xilinx (XLNX). A new form of computing is emerging, and it demands new chips. The change is every bit as profound as the rise of micro-computing in the 1970s that made Intel a king of microprocessors. It makes Micron and Xilinx more important, but it will probably also lead to future chip stars that aren't public now or may not even have been founded yet. Barron's first explored the new AI in an October 2015 cover story, "Watch Out Intel, Here Comes Facebook."


Frontier AI: How far are we from artificial "general" intelligence, really?

#artificialintelligence

Some call it "strong" AI, others "real" AI, "true" AI or artificial "general" intelligence (AGI)… whatever the term (and important nuances), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human -- possibly even at a superhuman intelligence level, with unpredictable, uncontrollable consequences. This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has been flaring anew with particular intensity, with an increasingly vocal stream of media and conversations warning us that AGI (of the nefarious kind) is coming, and much sooner than we'd think. The latest example: the new documentary Do You Trust This Computer?, which streamed last weekend for free courtesy of Elon Musk and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence, a "new life form" on planet Earth that is about to "wrap its tentacles" around us. There is also an accelerating flow of stories pointing to ever scarier aspects of AI, with reports of alternate reality creation (fake celebrity face generators and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (latest one: robots cooperating to open a door), and reports about Google's AI getting "highly aggressive". However, as an investor who spends a lot of time in the "trenches" of AI, I have been experiencing a fair amount of cognitive dissonance on this topic.


Nvidia looks to AI for the future of medical imaging technologies

#artificialintelligence

The name Nvidia usually creates a synapse to the video game industry -- or, more recently, the self-driving car business. But now the computer hardware company is looking to get a foothold in the healthcare industry. Last month at the GPU Technology Conference, the company revealed plans for a new AI platform called Clara, which will use AI to create a virtual medical imaging platform. "What we are building is a computing platform for medical imaging -- it is a virtual medical imaging supercomputer. What we are doing is taking all the more recent, last five-ten years, modern computing [technologies] … like cloud, virtualization, and GPU (graphics processing unit) and we are bringing it all together so that medical industry people can take advantage of it," Kimberly Powell, VP of healthcare at Nvidia, told MobiHealthNews.


How poker and other games help artificial intelligence evolve

#artificialintelligence

Michael Bowling has always loved games. When he was growing up in Ohio, his parents were avid card players, dealing out hands of everything from euchre to gin rummy. Meanwhile, he and his friends would tear up board games lying around the family home and combine the pieces to make their own games, with new challenges and new markers for victory. Bowling has come far from his days of playing with colourful cards and plastic dice. He has three degrees in computing science and is now a professor at the University of Alberta. But, in his heart, Bowling still loves playing games.


Future Factory: How Technology Is Transforming Manufacturing

#artificialintelligence

From advanced robotics in R&D labs to computer vision in warehouses, technology is making an impact on every step of the manufacturing process. Lights-out manufacturing refers to factories that operate autonomously and require no human presence. These robot-run settings often don't even require lighting, and can consist of several machines functioning in the dark. While this may sound futuristic, these types of factories have been a reality for more than 15 years. Famously, the Japanese robotics maker FANUC has been operating a "lights-out" factory since 2001, where robots are building other robots completely unsupervised for nearly a month at a time. "Not only is it lights-out," said FANUC VP Gary Zywiol, "we turn off the air conditioning and heat too."

To imagine a world where robots do all the physical work, one simply needs to look at the most ambitious and technology-laden factories of today. For example, the Dongguan City, China-based phone part maker Changying Precision Technology Company has created an unmanned factory. Everything in the factory -- from machining equipment to unmanned transport trucks to warehouse equipment -- is operated by computer-controlled robots. The technical staff monitors the activity of these machines through a central control system. Where it once required about 650 workers to keep the factory running, robot arms have cut Changying's human workforce to less than a tenth of that, down to just 60 workers. A general manager for the company said that it aims to reduce that number to 20 in the future.

As industrial technology grows increasingly pervasive, this wave of automation and digitization is being labelled "Industry 4.0," as in the fourth industrial revolution. So, what does the future of factories hold? Manufacturers predict that overall efficiency will grow annually over the next five years at seven times the rate of growth seen since 1990.