AI Rundown


AI is a big buzzword in the tech industry. But how did it all start? The history of Artificial Intelligence can be traced back centuries, but the biggest advancements began in the 1950s. Alan Turing was a British polymath who argued that machines could use available information and reason to solve problems just as humans do. His 1950 paper, "Computing Machinery and Intelligence", discussed how to build intelligent machines and how to test their intelligence.

A brief history of AI: how to prevent another winter (a critical review)

Artificial Intelligence

The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade, spawning a remarkably wide array of applications that have already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet AI's path has never been smooth: the field has essentially fallen apart twice in its lifetime ('winters' of AI), both times after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.

Is Ireland braced for the approaching AI storm?


The only way to win at technological change like artificial intelligence is to stay ahead of it. And Ireland is doing just that, writes John Kennedy. Thanks to artificial intelligence (AI), we are in the midst of the biggest technological upheaval humanity has ever seen. It is both seminal and frightening. If you really want a metaphor for the storm of change that is coming, check out the recent Bloomberg documentary Inside China's High-Tech Dystopia, which shows thousands of workers in Shenzhen assembling the latest smartphones.

Winter is coming...


Since Alan Turing first posed the question "can machines think?" in his seminal 1950 paper, "Computing Machinery and Intelligence", Artificial Intelligence (AI) has failed to deliver on its central promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world's best chess player, the birth of autonomous vehicles, and Google DeepMind's AlphaGo beating the world's best Go player. The current achievements represent the culmination of research and development that occurred over more than 65 years. Importantly, during this period there were two well-documented AI Winters that almost completely derailed the promise of AI.

The Real Risks of Artificial Intelligence

Communications of the ACM

The vast increase in speed, memory capacity, and communications ability allows today's computers to do things that were unthinkable when I started programming six decades ago. Then, computers were primarily used for numerical calculations; today, they process text, images, and sound recordings. Then, it was an accomplishment to write a program that played chess badly but correctly. Today's computers have the power to compete with the best human players. The incredible capacity of today's computing systems allows some purveyors to describe them as having "artificial intelligence" (AI). They claim that AI is used in washing machines, the "personal assistants" in our mobile devices, self-driving cars, and the giant computers that beat human champions at complex games. Remarkably, those who use the term "artificial intelligence" have not defined that term. I first heard the term more than 50 years ago and have yet to hear a scientific definition. Even now, some AI experts say that defining AI is a difficult (and important) question--one that they are working on. "Artificial intelligence" remains a buzzword, a word that many think they understand but nobody can define. Application of AI methods can lead to devices and systems that are untrustworthy and sometimes dangerous.

Rise of Intelligent Machines as Artificial Intelligence Goes Mainstream - Experfy Insights


Artificial Intelligence has been around since the 1950s. Alan Turing envisioned a machine that could think, and devised a test for it, aptly named the Turing Test, in an article titled "Computing Machinery and Intelligence". He proposed that a computational machine could answer a series of questions from a panel of judges, with responses that were rational, thoughtful, and indistinguishable from those of a human.
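The structure of the test described above, Turing's "imitation game", can be sketched in a few lines of code. This is a minimal illustration, not anything from Turing's paper: the contestants, the canned answers, and function names like `judge_round` are all invented here, and the judge simply guesses at random rather than analyzing the answers.

```python
import random

def machine_reply(question: str) -> str:
    """A stand-in 'machine' contestant with canned answers (illustrative only)."""
    canned = {
        "What is 2 + 2?": "4",
        "Do you enjoy poetry?": "I find sonnets pleasant, though I read slowly.",
    }
    return canned.get(question, "Could you rephrase that?")

def human_reply(question: str) -> str:
    """A stand-in 'human' contestant."""
    answers = {
        "What is 2 + 2?": "4",
        "Do you enjoy poetry?": "Yes, especially Yeats.",
    }
    return answers.get(question, "Not sure what you mean.")

def judge_round(questions, rng) -> bool:
    """One round of the imitation game: the judge interrogates two hidden
    contestants, A and B, then must guess which one is the machine.
    Returns True if the machine fooled the judge (the guess was wrong)."""
    machine_is_a = rng.random() < 0.5  # hidden assignment of contestants
    for q in questions:
        a_answer = machine_reply(q) if machine_is_a else human_reply(q)
        b_answer = human_reply(q) if machine_is_a else machine_reply(q)
        # A real judge would compare a_answer and b_answer here.
    guess_a_is_machine = rng.random() < 0.5  # this toy judge guesses blindly
    return guess_a_is_machine != machine_is_a

rng = random.Random(0)
questions = ["What is 2 + 2?", "Do you enjoy poetry?"]
fooled = sum(judge_round(questions, rng) for _ in range(1000))
print(f"machine fooled the judge in {fooled}/1000 rounds")
```

With a judge who guesses blindly, the machine "passes" about half the time, which is exactly why Turing's criterion hinges on an attentive human judge: a machine is deemed intelligent only when its answers survive genuine scrutiny.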

Artificial Intelligence Could Be on Brink of Passing Turing Test

AITopics Original Links

One hundred years after Alan Turing was born, his eponymous test remains an elusive benchmark for artificial intelligence. Now, for the first time in decades, it's possible to imagine a machine making the grade. Turing was one of the 20th century's great mathematicians, a conceptual architect of modern computing whose codebreaking played a decisive part in World War II. His test, described in a seminal dawn-of-the-computer-age paper, was deceptively simple: If a machine could pass for human in conversation, the machine could be considered intelligent. Artificial intelligences are now ubiquitous, from GPS navigation systems and Google algorithms to automated customer service and Apple's Siri, to say nothing of Deep Blue and Watson -- but no machine has met Turing's standard.