If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
I recently attended the 45th ACM/IEEE International Symposium on Computer Architecture (ISCA) in Los Angeles, and was struck by the atmosphere of dramatic change in the field. First, a bit of perspective: For the last 15 years, computer architecture, and as a consequence computing as a whole, has been disrupted from below by the end of Dennard scaling,1 producing a dramatic slowing in the growth of clock rate and single-thread performance, a shift to multicore, and the rise of throughput engines such as GPUs. In addition, we have seen the rise of heavily customized architectures, particularly in mobile and embedded devices. These changes, precipitated by device- and circuit-level effects, have rippled through the software stack, compelling large-scale software rewrites, new compiler techniques and programming approaches, and the pervasive adoption of parallelism as a fundamental basis of performance. In the past three years, we have seen the effective end of Moore's Law scaling, with per-transistor prices flat or increasing at recent technology nodes,2 and a slowing rate of advance to new process nodes at Intel (and across the industry).3
Remember the days when record-keeping trouble, such as an enormous and clearly erroneous bill for property taxes, was attributed to "computer error?" The same habit can be seen in exaggerations like this, from a tech news digest: "Google's Artificial Intelligence (AI) has learned how to navigate like a human being." The quote cites an article in Fast Company, which states that "AI has spontaneously learned how to navigate to different places."4 See the Nature article by the Google researchers2 for an accurate, cautious description and assessment. But this is not the root of the problem.
Disdain for regulation is pervasive throughout the tech industry. In the case of automated decision making, this attitude is mistaken. Early engagement with governments and regulators could smooth the path of adoption for systems built on machine learning, minimize the consequences of inevitable failures, increase public trust in these systems, and possibly avert the imposition of debilitating rules. Exponential growth in the sophistication and applications of machine learning is automating, wholly or partially, many tasks previously performed only by humans. This technology of automated decision making (ADM) promises many benefits, including reducing tedious labor as well as improving the appropriateness and acceptability of decisions and actions.
The first networked electronic mail message was sent by Ray Tomlinson of Bolt Beranek and Newman in 1971. This year, according to market research firm Radicati Group, 3.8 billion email users worldwide will send 281 billion messages every day. You may feel like a substantial number of them end up in your inbox. And yet, some observers say email is dying. It's so "last century," they say, compared to social media messaging, texting, and powerful new collaboration tools.
In the 1970s, when Microsoft and Apple were founded, programming was an art only a limited group of dedicated enthusiasts actually knew how to perform properly. CPUs were rather slow, personal computers had a very limited amount of memory, and monitors were low-resolution. To create something decent, a programmer had to fight against actual hardware limitations. To win this war, programmers had to be both trained and talented in computer science, a science that was at that time mostly about algorithms and data structures. The first three volumes of the famous book The Art of Computer Programming by Donald Knuth, a Stanford University professor and a Turing Award recipient, were published between 1968 and 1973.
Ryan Calo's "Law and Technology" Viewpoint "Is the Law Ready for Driverless Cars?" (May 2018) explored the implications, as Calo said, of " ... genuinely unforeseeable categories of harm" in potential liability cases where death or injury is caused by a driverless car. He argued that common law would take care of most other legal issues involving artificial intelligence in driverless cars, apart from such "foreseeability." Calo also said the courts have worked out problems like AI before and seemed confident that AI foreseeability will eventually be accommodated. One can agree with this overall judgment but question the time horizon. AI may be quite different from anything the courts have seen or judged before for many reasons, as the technology is indeed designed to someday make its own decisions.
I am only a layman in the neural network space, so the ideas and opinions in this column are sure to be refined by comments from more knowledgeable readers. The recent successes of multilayer neural networks have made headlines. Much earlier work on what I imagine to be single-layer networks proved to have limitations. Indeed, the famous book, Perceptrons,a by Turing laureate Marvin Minsky and his colleague Seymour Papert put the kibosh (that's a technical term) on further research in this space for some time. Among the most visible signs of advancement in this arena is the success of the DeepMind AlphaGo multilayer neural network that beat the international Go champion Lee Sedol four games out of five in March 2016 in Seoul.b
As we do every year, ACM convenes a gala event to celebrate and honor colleagues in our computing universe who have achieved pinnacle success in the field. Our most prestigious recognition is the ACM A.M. Turing Award, and the 2017 award goes to John Hennessy and David Patterson. Their primary insight was to find a method to systematically and quantitatively evaluate machine instructions for their utility, and to eliminate the least used of them, replacing them with sequences of simpler instructions with faster execution times and lower power requirements. In the end, their designs resulted in the Reduced Instruction Set Computer, or RISC. Today, most chips make use of this form of instruction set. A complete summary of their accomplishments can be found within this issue and at the ACM Awards website.a
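The quantitative reasoning behind that insight can be sketched in a few lines. The instruction frequencies and cycle counts below are made up for illustration (they are not measurements from any real processor): average cycles per instruction (CPI) is the frequency-weighted sum of each instruction's cost, so a rarely used but expensive instruction can be worth removing even though it does more work per instruction.

```python
# Hypothetical instruction mix: name -> (fraction of executed
# instructions, cycles per execution). Values are illustrative only.
mix = {
    "load":    (0.25, 2),
    "store":   (0.10, 2),
    "alu":     (0.45, 1),
    "branch":  (0.15, 2),
    "complex": (0.05, 10),  # rarely used, multi-cycle instruction
}

def average_cpi(mix):
    """Frequency-weighted average cycles per instruction."""
    return sum(freq * cycles for freq, cycles in mix.values())

print(f"average CPI with complex op:    {average_cpi(mix):.2f}")

# Drop the complex instruction and absorb its work into extra simple
# ALU operations (frequencies still sum to 1).
simplified = {
    "load":   (0.25, 2),
    "store":  (0.10, 2),
    "alu":    (0.50, 1),
    "branch": (0.15, 2),
}
print(f"average CPI without complex op: {average_cpi(simplified):.2f}")
```

With these made-up numbers, removing the complex instruction lowers the average CPI from 1.95 to 1.50, a simple instance of the kind of trade-off RISC designs evaluate systematically.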
Mobile and embedded devices increasingly rely on deep neural networks to understand the world--a feat that would have overwhelmed their system resources only a few years ago. Further integration of machine learning and embedded/mobile systems will require additional breakthroughs in efficient learning algorithms that can function under fluctuating resource constraints, giving rise to a field that straddles computer architecture, software systems, and artificial intelligence. N. D. Lane and P. Warden, "The Deep (Learning) Transformation of Mobile and Embedded Computing," in Computer, vol.
When Angelica Lim bakes macaroons, she has her own kitchen helper, Naoki. Her assistant is only good at the repetitive tasks, like sifting flour, but he makes the job more fun. Naoki is very cute, just under two feet tall. He's white, mostly, with blue highlights, and has speakers where his ears should be. The little round circle of a mouth that gives him a surprised expression is actually a camera, and his eyes are infrared receivers and transmitters. "I just love robots," said Lim in 2013, at the time a Ph.D. student in the Department of Intelligent Science and Technology at Kyoto University in Japan.