As we do every year, ACM convenes a gala event to celebrate and honor colleagues in our computing universe who have achieved pinnacle success in the field. Our most prestigious recognition is the ACM A.M. Turing Award, and the 2017 award goes to John Hennessy and David Patterson. Their primary insight was to find a method to systematically and quantitatively evaluate machine instructions for their utility, to eliminate the least used of them, and to replace them with sequences of simpler instructions offering faster execution times and lower power consumption. In the end, their designs resulted in the Reduced Instruction Set Computer, or RISC. Today, most chips make use of this form of instruction set. A complete summary of their accomplishments can be found within this issue and at the ACM Awards website.a
Since research in synthetic biology began nearly two decades ago, the field has expanded beyond its original mandate of using engineering principles to study and manipulate cells. Today, scientists are building biological computers and DNA-based robots that can carry out logical operations and complete tasks. These minuscule machines look nothing like laptops or Roombas. Yet algorithms still guide the robots through tasks, and the biological computers funnel inputs through logic gates. While a standard circuit works with electrical currents, the inputs in the biological version are biochemical signals triggered by the presence of a protein or pathogen.
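The logic-gate analogy can be made concrete in ordinary code. Below is a minimal sketch of a hypothetical biochemical AND gate: the gate "fires" (say, releasing a drug payload) only when both input signals are present. The threshold value and signal names are illustrative assumptions, not drawn from any real system.

```python
# Toy model of a biochemical AND gate: the output event occurs only
# when both biochemical inputs are detected above a threshold.
DETECTION_THRESHOLD = 0.5  # hypothetical normalized concentration

def biochemical_and_gate(protein_level: float, pathogen_level: float) -> bool:
    """Return True (gate fires) only if both signals exceed the threshold."""
    protein_present = protein_level >= DETECTION_THRESHOLD
    pathogen_present = pathogen_level >= DETECTION_THRESHOLD
    return protein_present and pathogen_present

print(biochemical_and_gate(0.9, 0.1))  # False: pathogen signal too weak
print(biochemical_and_gate(0.9, 0.8))  # True: both signals detected
```

Real biochemical gates, of course, implement this logic through molecular binding and reaction kinetics rather than explicit comparisons, but the input-output behavior is the same.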
One of the formidable challenges healthcare providers face is putting medical data to maximum use. Somewhere between the quest to unlock the mysteries of medicine and design better treatments, therapies, and procedures, lies the real world of applying data and protecting patient privacy. "Today, there are many barriers to putting data to work in the most effective way possible," observes Drew Harris, director of health policy and population health at Thomas Jefferson University's College of Population Health in Philadelphia, PA. "The goals of protecting patients and finding answers are frequently at odds." It is a critical issue and one that will define the future of medicine. Medical advances are increasingly dependent on the analysis of enormous datasets--as well as data that extends beyond any one agency or enterprise.
In computer science, you are taught to comment your code. When you learn a new language, you learn the syntax for a comment in that language. Although the compiler or interpreter ignores all comments in a program, comments are valuable. However, a more recent viewpoint holds that commenting code is bad, and that you should avoid all comments in your programs. In the 2013 article "No Comment: Why Commenting Code Is Still a Bad Idea," Peter Vogel continued this discussion.
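The disagreement often comes down to what a comment says: a comment that merely restates the code adds noise, while one that records intent the code cannot express adds value. A small sketch (the policy described in the comment is hypothetical, chosen only to illustrate the distinction):

```python
def normalize(scores):
    """Scale a list of scores so they sum to 1."""
    total = sum(scores)
    # Guard against an empty or all-zero list: returning [] here is a
    # deliberate policy choice, not an oversight -- this is the kind of
    # intent a comment should capture. A comment like "divide each score
    # by total" would merely restate the code below.
    if total == 0:
        return []
    return [s / total for s in scores]

print(normalize([1, 1, 2]))  # [0.25, 0.25, 0.5]
print(normalize([]))         # []
```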
The ACM constitution provides that our Association hold a general election in the even-numbered years for the positions of President, Vice President, Secretary/Treasurer, and Members-at-Large. Biographical information and statements of the candidates appear on the following pages (candidates' names appear in random order). In addition to the election of ACM's officers--President, Vice President, Secretary/Treasurer--two Members-at-Large will be elected to serve on ACM Council. Please refer to the instructions posted at https://www.esc-vote.com/acm2018. To access the secure voting site, you will need to enter your email address (the email address associated with your ACM member record) and your unique PIN provided by Election Services Co. Should you wish to vote by paper ballot, please contact Election Services Co. to request a paper copy of the ballot and follow the postal mail ballot procedures: [email protected] or 1-866-720-4357. Please return your ballot in the enclosed envelope, which must be signed by you on the outside in the space provided. The signed ballot envelope may be inserted into a separate envelope for mailing if you prefer this method. All ballots must be received no later than 16:00 UTC on 24 May 2018. Validation by the Tellers Committee will take place at 14:00 UTC on 29 May 2018.

Jack Davidson's research interests include compilers, computer architecture, system software, embedded systems, computer security, and computer science education. He is co-author of two introductory textbooks: C++ Program Design: An Introduction to Programming and Object-Oriented Design and Java 5.0 Program Design: An Introduction to Programming and Object-Oriented Design. Professionally, he has helped organize many conferences across several fields.
In 1950, Alan Turing wrote a paper entitled "Computing Machinery and Intelligence."a He proposed a test in which a human attempts to distinguish between a human and a computer by exchanging text messages with each of them. If the human is unable to distinguish between the two, the computer is said to have passed the "Turing Test." In fact, there were variations, including one in which a human interrogator interacting with a man and a woman was to try to tell which was the man and which was the woman. Turing called this the "Imitation Game."
Please also do not insist on constantly changing the features just to sell a new version. We old(er) humans are simply not all that enamored of the latest and greatest tech (recall that, in many cases, we created it), nor are we impressed by the ability to add emojis to our digital correspondence. We have learned that talking is more satisfying than texting, and visits from grandchildren are better than Facebook. Do not pity us--though, if you like, you may envy us.
The field of artificial intelligence (AI) is rife with misnomers, and machine learning (ML) is a big one. ML is a vibrant and successful subfield, but the bulk of it is simply "function approximation based on a sample." For example, the learning portion of AlphaGo--which defeated the human world champion in the game of Go--is in essence a method for approximating a non-linear function from board position to move choice, based on tens of millions of board positions labeled by the appropriate move in that position.a As pointed out in my Wired article,4 function approximation is only a small component of a capability that would rival human learning, and might be rightfully called machine learning. Tom Mitchell and his collaborators have been investigating how to broaden the ML field for over 20 years under headings such as multitask learning,2 life-long learning,7 and more.
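The phrase "function approximation based on a sample" can be illustrated at a far smaller scale than AlphaGo: given labeled (input, output) pairs drawn from an unknown nonlinear function, fit an approximation and use it to predict unseen inputs. This sketch uses a polynomial fit to a sine curve as a stand-in for board positions labeled with expert moves:

```python
import numpy as np

# The "unknown" target function; in practice a learner sees only samples of it.
def target(x):
    return np.sin(x)

# A labeled sample: inputs paired with correct outputs, analogous to
# board positions labeled with the appropriate move.
rng = np.random.default_rng(0)
xs = rng.uniform(0, np.pi, 200)
ys = target(xs)

# Approximate the function with a degree-5 polynomial fit to the sample.
coeffs = np.polyfit(xs, ys, deg=5)
approx = np.poly1d(coeffs)

# The approximation generalizes to an input it never saw.
x_new = 1.0
print(abs(approx(x_new) - target(x_new)) < 0.01)  # True: the fit is close
```

The point of the passage stands: this kind of curve fitting, however sophisticated the function class, is only one narrow slice of what human learning involves.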
Whereas people learn many different types of knowledge from diverse experiences over many years, and become better learners over time, most current machine learning systems are much more narrow, learning just a single function or data model based on statistical analysis of a single data set. In this paper we define more precisely this never-ending learning paradigm for machine learning, and we present one case study: the Never-Ending Language Learner (NELL), which achieves a number of the desired properties of a never-ending learner. NELL has been learning to read the Web 24 hours a day since January 2010, and so far has acquired a knowledge base with 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea,biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time. NELL has also learned to reason over its knowledge base to infer new beliefs it has not yet read from those it has, and NELL is inventing new relational predicates to extend the ontology it uses to represent beliefs. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL. Machine learning is a highly successful branch of artificial intelligence (AI), and is now widely used for tasks from spam filtering, to speech recognition, to credit card fraud detection, to face recognition. Despite these successes, the ways in which computers learn today remain surprisingly narrow when compared to human learning. This paper explores an alternative paradigm for machine learning that more closely models the diversity, competence, and cumulative nature of human learning. We call this alternative paradigm never-ending learning.
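NELL's style of confidence-weighted beliefs and inferred facts can be illustrated in a drastically simplified, hypothetical form: a tiny triple store plus one inference rule. The relations, facts, discount factor, and rule below are invented for illustration and are not NELL's actual mechanisms.

```python
# A toy knowledge base of confidence-weighted beliefs, loosely in the
# spirit of NELL's servedWith(tea, biscuits) triples.
beliefs = {
    ("servedWith", "tea", "biscuits"): 0.92,
    ("servedWith", "coffee", "cake"): 0.87,
    ("isA", "tea", "beverage"): 0.99,
    ("isA", "coffee", "beverage"): 0.98,
}

def infer_symmetric(kb, relation, discount=0.9):
    """One simple rule: treat `relation` as symmetric, adding the reversed
    belief at discounted confidence when it is not already present."""
    inferred = {}
    for (rel, a, b), conf in kb.items():
        if rel == relation and (rel, b, a) not in kb:
            inferred[(rel, b, a)] = conf * discount
    return inferred

new_beliefs = infer_symmetric(beliefs, "servedWith")
print(round(new_beliefs[("servedWith", "biscuits", "tea")], 3))  # 0.828
```

A system like NELL couples many such inference and extraction functions so that beliefs, rules, and the ontology itself all improve together over time.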
Communication with computing machinery has become increasingly 'chatty' these days: Alexa, Cortana, Siri, and many more dialogue systems have hit the consumer market on a broader basis than ever, but do any of them truly notice our emotions and react to them like a human conversational partner would? In fact, the discipline of automatically recognizing human emotion and affective states from speech, usually referred to as Speech Emotion Recognition or SER for short, has by now surpassed the "age of majority," celebrating the 22nd anniversary after the seminal work of Dellaert et al. in 199610--arguably the first research paper on the topic. However, the idea has existed even longer, as the first patent dates back to the late 1970s.41 Earlier still, a series of studies rooted in psychology rather than in computer science investigated the role of acoustics in human emotion (see, for example, references8,16,21,34). Blanton,4 for example, wrote that "the effect of emotions upon the voice is recognized by all people. Even the most primitive can recognize the tones of love and fear and anger; and this knowledge is shared by the animals. The dog, the horse, and many other animals can understand the meaning of the human voice. The language of the tones is the oldest and most universal of all our means of communication." It appears the time has come for computing machinery to understand it as well.28 This holds true for the entire field of affective computing--Picard's field-coining book by the same name appeared around the same time29 as SER, describing the broader idea of endowing machines with emotional intelligence: the ability to recognize human emotion and to synthesize emotion and emotional behavior.
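The acoustic basis of SER can be hinted at with two classic low-level features: short-time energy (roughly, loudness) and zero-crossing rate (a coarse proxy for pitch and noisiness). The sketch below computes both on synthetic sine waves standing in for real speech; the frequencies and amplitudes are invented, and real SER systems use far richer feature sets and learned classifiers.

```python
import numpy as np

def short_time_energy(signal):
    """Mean squared amplitude: a rough loudness measure."""
    return float(np.mean(signal ** 2))

def zero_crossing_rate(signal):
    """Fraction of adjacent samples whose signs differ."""
    signs = np.sign(signal)
    return float(np.mean(signs[:-1] != signs[1:]))

sr = 16000  # samples per second
t = np.linspace(0, 1, sr, endpoint=False)
calm = 0.2 * np.sin(2 * np.pi * 120 * t)     # quiet, low-pitched
aroused = 0.8 * np.sin(2 * np.pi * 300 * t)  # loud, higher-pitched

# High-arousal emotions (e.g., anger, excitement) typically raise both.
print(short_time_energy(aroused) > short_time_energy(calm))    # True
print(zero_crossing_rate(aroused) > zero_crossing_rate(calm))  # True
```

Features like these fed the early SER classifiers; modern systems typically learn representations directly from the audio instead.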