If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The first networked electronic mail message was sent by Ray Tomlinson of Bolt Beranek and Newman in 1971. This year, according to market research firm Radicati Group, 3.8 billion email users worldwide will send 281 billion messages every day. You may feel like a substantial number of them end up in your inbox. And yet, some observers say email is dying. It's so "last century," they say, compared to social media messaging, texting, and powerful new collaboration tools.
In the 1970s, when Microsoft and Apple were founded, programming was an art that only a small group of dedicated enthusiasts knew how to perform properly. CPUs were rather slow, personal computers had very limited memory, and monitors were low-resolution. To create something decent, a programmer had to fight against severe hardware limitations. To win this war, programmers had to be both trained and talented in computer science, a science that at the time was mostly about algorithms and data structures. The first three volumes of the famous book The Art of Computer Programming by Donald Knuth, a Stanford University professor and Turing Award recipient, were published between 1968 and 1973.
Ryan Calo's "Law and Technology" Viewpoint "Is the Law Ready for Driverless Cars?" (May 2018) explored the implications, as Calo said, of "... genuinely unforeseeable categories of harm" in potential liability cases where death or injury is caused by a driverless car. He argued that, apart from such "foreseeability," common law would take care of most other legal issues involving artificial intelligence in driverless cars. Calo also said the courts have worked out problems like AI before, and seemed confident that AI foreseeability will eventually be accommodated. One can agree with this overall judgment but question the time horizon. AI may be quite different from anything the courts have seen or judged before, for many reasons, not least because the technology is designed to someday make its own decisions.
I am only a layman in the neural network space, so the ideas and opinions in this column are sure to be refined by comments from more knowledgeable readers. The recent successes of multilayer neural networks have made headlines. Much earlier work on what I imagine to be single-layer networks proved to have limitations. Indeed, the famous book Perceptrons,a by Turing laureate Marvin Minsky and his colleague Seymour Papert, put the kibosh (that's a technical term) on further research in this space for some time. Among the most visible signs of advancement in this arena is the success of DeepMind's AlphaGo multilayer neural network, which beat international Go champion Lee Sedol four games out of five in March 2016 in Seoul.b
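The single-layer limitation Minsky and Papert identified can be made concrete with the classic XOR example: a single perceptron computes a linear threshold and so cannot represent XOR, but two layers suffice. The sketch below uses hand-chosen weights purely for illustration.

```python
# A single-layer perceptron computes step(w . x + b), a linear separator,
# and therefore cannot represent XOR (the Minsky-Papert limitation).
# Two layers suffice: XOR(a, b) = AND(OR(a, b), NAND(a, b)).
# All weights and biases below are hand-chosen for illustration.

def step(z):
    return 1 if z > 0 else 0

def perceptron(weights, bias, inputs):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor(a, b):
    hidden_or = perceptron([1, 1], -0.5, [a, b])     # OR gate
    hidden_nand = perceptron([-1, -1], 1.5, [a, b])  # NAND gate
    return perceptron([1, 1], -1.5, [hidden_or, hidden_nand])  # AND gate

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```

Modern multilayer networks generalize this idea: hidden layers build intermediate features that make problems separable which no single layer could solve.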
As we do every year, ACM convenes a gala event to celebrate and honor colleagues in our computing universe who have achieved pinnacle success in the field. Our most prestigious recognition is the ACM A.M. Turing Award, and the 2017 award goes to John Hennessy and David Patterson. Their primary insight was to find a method to systematically and quantitatively evaluate machine instructions for their utility, eliminate the least used of them, and replace them with sequences of simpler instructions with faster execution times and lower power requirements. In the end, their designs resulted in the Reduced Instruction Set Computer, or RISC. Today, most chips make use of this form of instruction set. A complete summary of their accomplishments can be found within this issue and at the ACM Awards website.a
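The quantitative step of that insight can be sketched in a few lines: profile how often each instruction actually executes in real workloads, then flag the rarely used ones as candidates for replacement by simpler sequences. The trace, the opcode names, and the 10% threshold below are all invented for illustration; Hennessy and Patterson measured real programs.

```python
from collections import Counter

# Hypothetical instruction-execution trace; in practice one would
# profile real workloads rather than a toy list like this.
trace = ["LOAD", "ADD", "STORE", "ADD", "LOAD", "MULADD3", "ADD",
         "LOAD", "STORE", "ADD", "LOAD", "ADD"]

counts = Counter(trace)
total = len(trace)

# Flag instructions accounting for under 10% of executions as
# candidates to drop and re-express as sequences of simpler ones.
threshold = 0.10
rare = [op for op, n in counts.items() if n / total < threshold]
print(rare)  # ['MULADD3']
```

The RISC argument is that silicon and design effort spent on a rarely executed complex instruction (here the invented `MULADD3`) is better spent making the common simple instructions fast.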
Mobile and embedded devices increasingly rely on deep neural networks to understand the world, a feat that would have overwhelmed their system resources only a few years ago. Further integration of machine learning with embedded and mobile systems will require additional breakthroughs in efficient learning algorithms that can function under fluctuating resource constraints, giving rise to a field that straddles computer architecture, software systems, and artificial intelligence. N. D. Lane and P. Warden, "The Deep (Learning) Transformation of Mobile and Embedded Computing," in Computer, vol.
When Angelica Lim bakes macaroons, she has her own kitchen helper, Naoki. Her assistant is only good at repetitive tasks, like sifting flour, but he makes the job more fun. Naoki is very cute, just under two feet tall. He's white, mostly, with blue highlights, and has speakers where his ears should be. The little round circle of a mouth that gives him a surprised expression is actually a camera, and his eyes are infrared receivers and transmitters. "I just love robots," said Lim in 2013, at the time a Ph.D. student in the Department of Intelligent Science and Technology at Kyoto University in Japan.
The ACM constitution provides that our Association hold a general election in even-numbered years for the positions of President, Vice President, Secretary/Treasurer, and Members-at-Large. Biographical information and statements of the candidates appear on the following pages (candidates' names appear in random order). In addition to the election of ACM's officers (President, Vice President, and Secretary/Treasurer), two Members-at-Large will be elected to serve on ACM Council. Please refer to the instructions posted at https://www.esc-vote.com/acm2018. To access the secure voting site, you will need to enter your email address (the email address associated with your ACM member record) and the unique PIN provided by Election Services Co. Should you wish to vote by paper ballot, please contact Election Services Co. to request a paper copy of the ballot and follow the postal-mail ballot procedures: [email protected] or 1-866-720-4357. Please return your ballot in the enclosed envelope, which must be signed by you on the outside in the space provided. The signed ballot envelope may be inserted into a separate envelope for mailing if you prefer. All ballots must be received no later than 16:00 UTC on 24 May 2018. Validation by the Tellers Committee will take place at 14:00 UTC on 29 May 2018. Jack Davidson's research interests include compilers, computer architecture, system software, embedded systems, computer security, and computer science education. He is co-author of two introductory textbooks: C Program Design: An Introduction to Object-Oriented Programming and Java 5.0 Program Design: An Introduction to Programming and Object-Oriented Design. Professionally, he has helped organize many conferences across several fields.
In the early days of digital computing, it was not uncommon to find a radio receiver tuned to a particular frequency (I don't recall which one, sigh) so that the RF emitted by the computer could be picked up and played through the radio. You could tell when a program went into a loop and sometimes you could tell roughly where a computation had reached by the sounds coming from the radio monitor. Fast-forward to the 21st century and we are seeking a different kind of sound: the sound of programming. Bootstrap Worlda has developed online courses in programming, among other subjects, but what makes Bootstrap World so memorable for me is that the team has focused heavily on accessibility. The programming environment is extremely friendly to screen readers so that a blind programmer can navigate easily through complex programs using keyboard navigation coupled with oral descriptions/renderings of the program text and structure.b
A look under the hood of any major search, commerce, or social-networking site today will reveal a profusion of "deep-learning" algorithms. Over the past decade, these powerful artificial intelligence (AI) tools have been increasingly and successfully applied to image analysis, speech recognition, translation, and many other tasks. Indeed, the computational and power requirements of these algorithms now constitute a major and still-growing fraction of datacenter demand. Designers often offload much of the highly parallel calculations to commercial hardware, especially graphics-processing units (GPUs) originally developed for rapid image rendering. These chips are especially well-suited to the computationally intensive "training" phase, which tunes system parameters using many validated examples.
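The "training" phase described above can be sketched at its smallest scale: repeatedly nudge a model parameter so predictions match validated examples, following the gradient of the error. The one-parameter model, data, and learning rate below are invented for illustration; real deep-learning training does the same thing over millions of parameters, which is why it maps so well onto the parallelism of GPUs.

```python
# Minimal sketch of the "training" phase: tune a parameter w so that
# predictions w * x match validated (input, label) examples, using
# gradient descent on the mean squared error.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy labeled data (y = 2x)

w = 0.0            # model parameter, initially untrained
lr = 0.05          # learning rate (step size), chosen for illustration
for _ in range(200):  # training iterations
    # d/dw of mean((w*x - y)^2), averaged over the examples
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

Each iteration is dominated by the same multiply-accumulate arithmetic across all examples and parameters, the highly parallel workload that the text notes is routinely offloaded to GPUs.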