Keep Your Thinking Machines, I'll Take Human-Computer Interaction Any Day

#artificialintelligence

It's hard to discuss the role of Artificial Intelligence (AI) in the workplace until you decide what AI is. Some academics tell us -- using lots of words -- that AI means computers that think, learn, and ultimately act like humans, while others hold that maximizing the interaction between computers and their humans -- as in Human-Computer Interaction, or HCI -- is the closest thing to AI we are likely to see. Until you decide which side of that dichotomy you fall on, it's difficult to understand how, or whether, AI contributes to business, and if it does, how to improve its contributions. Our fascination with the idea of machines that think like humans goes back millennia, but only recently has it appeared to be within reach. And while AI research has produced some amazing technological capabilities, it has also run into a quagmire in its attempts to 1) agree on just what human intelligence is, and 2) determine the extent to which technology might be capable of replicating it.


Some considerations on how the human brain must be arranged in order to make its replication in a thinking machine possible

arXiv.org Artificial Intelligence

For most of my life, I have earned my living as a computer vision professional, busy with image processing tasks and problems. In the computer vision community there is a widespread belief that artificial vision systems faithfully replicate human vision abilities, or at least very closely mimic them. It was a great surprise to me when one day I realized that computer and human vision have next to nothing in common. The former is occupied with extensive data processing, carrying out massive pixel-based calculations, while the latter is busy with meaningful information processing, concerned with smart object-based manipulations. The gap between the two is insurmountable. To resolve this confusion, I had to go back and re-evaluate the vision phenomenon itself, defining more carefully what visual information is and how to treat it properly. In this work I have not been, as is usually the case, biologically inspired. On the contrary, I have drawn my inspiration from a purely mathematical theory, Kolmogorov's complexity theory. The results of this work have already been published elsewhere. The objective of this paper is therefore to apply the insights gained in the course of that enterprise to the more general case of information processing in the human brain and the challenging issue of human intelligence.
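To make the abstract's contrast concrete, here is a minimal, purely illustrative sketch in Python. It is not the author's model or any particular vision library's API; the toy image, object names, and threshold are invented for illustration. It only shows the difference between computing over raw pixel values and manipulating a handful of meaningful objects.

```python
# Illustrative sketch only: contrasting the two processing styles described
# in the abstract. The image, objects, and threshold are hypothetical.

# "Computer vision" style: exhaustive pixel-based calculation over raw data.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [10, 10, 200, 200],
]

total = sum(sum(row) for row in image)
pixel_count = sum(len(row) for row in image)
mean_brightness = total / pixel_count  # a statement about pixels, not about things
bright_pixels = sum(1 for row in image for p in row if p > 128)
print(f"mean brightness: {mean_brightness:.1f}, bright pixels: {bright_pixels}")

# "Human vision" style, as characterized above: manipulation of a small set
# of meaningful objects and their relations rather than millions of pixels.
scene = [
    {"object": "cup", "on": "table", "graspable": True},
    {"object": "table", "on": "floor", "graspable": False},
]
graspable = [item["object"] for item in scene if item["graspable"]]
print(f"things that could be picked up: {graspable}")
```

The point of the sketch is only that the two representations answer different kinds of questions: the pixel array supports numeric aggregates, while the object list supports the kind of "smart object-based manipulations" the abstract attributes to human vision.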


The Thinking Machine

AITopics Original Links

"When you are born, you know nothing." This is the kind of statement you expect to hear from a philosophy professor, not a Silicon Valley executive with a new company to pitch and money to make. A tall, rangy man who is almost implausibly cheerful, Hawkins created the Palm and Treo handhelds and cofounded Palm Computing and Handspring. His is the consummate high tech success story, the brilliant, driven engineer who beat the critics to make it big. Now he's about to unveil his entrepreneurial third act: a company called Numenta. But what Hawkins, 49, really wants to talk about -- in fact, what he has really wanted to talk about for the past 30 years -- isn't gadgets or source codes or market niches.


Making Law for Thinking Machines? Start with the Guns - Netopia

#artificialintelligence

The Bank of England's warning that the pace of artificial intelligence development now threatens 15 million UK jobs has prompted calls for political intervention.


Rise of the Thinking Machines: AI Beats Humans At Their Own Game

#artificialintelligence

If you happen to have a free 30 hours or so, I would highly recommend watching Google's AlphaGo program take on one of the best players in the world at the ancient Chinese board game Go. If you don't have that much time, you could instead just watch the six-hour third match, in which the program wrapped up the best-of-five series. It is, quite literally, history being made. Some news outlets have covered this feat, but I don't think many people understand how monumental it actually is. Back in 1997, when Garry Kasparov was beaten by IBM's Deep Blue at chess, people were more excited about the future of computing.