Communications of the ACM


Internet of Things Search Engine


Advancements under the moniker of the Internet of Things (IoT) allow things to network and become the primary producers of data on the Internet [14]. The IoT makes the state and interactions of the real world available to Web applications and information systems with minimal latency and complexity [25]. By enabling massive telemetry and individual addressing of "things," the IoT offers three prominent benefits: spatial and temporal traceability of individual real-world objects, whose pedigrees can be consulted for theft prevention, counterfeit-product detection, and food safety; ambient data collection and analytics, which support optimized crop planning, telemedicine, and assisted living; and real-time reactive systems such as smart buildings, automated logistics, and self-driving, networked cars [11]. Realizing these benefits requires the ability to discover and resolve queries for content in the IoT. Offering these abilities is the responsibility of a class of software systems called the Internet of Things search engine (IoTSE).


Unifying Logical and Statistical AI with Markov Logic


For many years, the two dominant paradigms in artificial intelligence (AI) have been logical AI and statistical AI. Logical AI uses first-order logic and related representations to capture complex relationships and knowledge about the world. However, logic-based approaches are often too brittle to handle the uncertainty and noise present in many applications. Statistical AI uses probabilistic representations such as probabilistic graphical models to capture uncertainty. However, graphical models only represent distributions over propositional universes and must be customized to handle relational domains.
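The unification that Markov logic provides can be made concrete with a toy example: first-order rules carry real-valued weights, and a world's probability is proportional to the exponentiated sum of the weights of its satisfied groundings. The domain, rules, and weights below are invented purely for illustration (real systems use their own syntax and far more scalable inference); exact inference by enumeration is feasible only at this tiny scale.

```python
import itertools
import math

# Toy Markov logic network over a two-person domain (illustrative only).
people = ["Anna", "Bob"]

# Ground atoms: Smokes(p), Cancer(p), Friends(x, y).
atoms = ([("Smokes", p) for p in people]
         + [("Cancer", p) for p in people]
         + [("Friends", x, y) for x in people for y in people])

def world_weight(world):
    """exp(sum of weights of satisfied ground formulas): the MLN potential."""
    total = 0.0
    for x in people:
        # Rule 1 (weight 1.5): Smokes(x) => Cancer(x)
        if (not world[("Smokes", x)]) or world[("Cancer", x)]:
            total += 1.5
        # Rule 2 (weight 1.1): Friends(x, y) => (Smokes(x) <=> Smokes(y))
        for y in people:
            if (not world[("Friends", x, y)]) or (
                    world[("Smokes", x)] == world[("Smokes", y)]):
                total += 1.1
    return math.exp(total)

def worlds():
    # Every truth assignment to the ground atoms is a possible world.
    for bits in itertools.product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

# Exact inference by brute-force enumeration over all 2^8 worlds.
Z = sum(world_weight(w) for w in worlds())
p_cancer_anna = sum(world_weight(w) for w in worlds()
                    if w[("Cancer", "Anna")]) / Z
```

Because the first rule rewards worlds in which smoking implies cancer, the marginal probability of Cancer(Anna) ends up above one-half, illustrating how soft weighted rules bias, rather than dictate, the distribution.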


Extract, Shoehorn, and Load


Moving data from system to system is an important and growing part of the computing landscape. This is traditionally known as ETL (extract, transform, and load). While many systems are extremely good at this process, the source for the extraction and the destination for the load frequently have different representations for their data. It is common for the transformation to squeeze, truncate, or pad the data to make it fit the target. This is really like using a shoehorn to force a foot into a shoe that is too small.
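The shoehorning hazard is easy to demonstrate. The sketch below assumes a hypothetical target schema with fixed-width text columns; values that do not fit are silently truncated or padded, so the load is lossy.

```python
# Hypothetical fixed-width target schema (column name -> width in characters).
TARGET_WIDTHS = {"name": 10, "city": 8}

def shoehorn(row):
    """Force a free-form source row into the fixed-width target columns."""
    out = {}
    for col, width in TARGET_WIDTHS.items():
        value = str(row.get(col, ""))
        # Truncate when too long, pad when too short: either way the
        # original value may no longer round-trip.
        out[col] = value[:width].ljust(width)
    return out

loaded = shoehorn({"name": "Bartholomew Cubbins", "city": "Rio"})
# The 19-character name has been cut down to fit the 10-character column.
```

Nothing in the loaded row records that truncation happened, which is exactly why such transformations can corrupt data without raising any error.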


The Edge of Computational Photography


Since their introduction more than a decade ago, smartphones have been equipped with cameras, allowing users to capture images and video without carrying a separate device. Thanks to computational photographic technologies, which use algorithms to adjust photographic parameters and optimize them for specific situations, users with little or no photographic training can often achieve excellent results. The boundaries of what constitutes computational photography are not clearly defined, though there is some agreement that the term refers to using hardware such as lenses and image sensors to capture image data, then applying software algorithms that automatically adjust the image parameters to yield an image. Examples of computational photography technology can be found in most recent smartphones and some standalone cameras, including high-dynamic-range (HDR) imaging, autofocus (AF), image stabilization, shot bracketing, and the ability to deploy various filters, among many other features. These features allow amateur photographers to produce pictures that can, at times, rival photographs taken by professionals using significantly more expensive equipment.


Good Algorithms Make Good Neighbors


A host of different tasks--such as identifying the song in a database most similar to your favorite song, or the drug most likely to interact with a given molecule--have the same basic problem at their core: finding the point in a dataset that is closest to a given point. This "nearest neighbor" problem shows up all over the place in machine learning, pattern recognition, and data analysis, as well as many other fields. Yet the nearest neighbor problem is not really a single problem. Instead, it has as many different manifestations as there are different notions of what it means for data points to be similar. In recent decades, computer scientists have devised efficient nearest neighbor algorithms for a handful of different definitions of similarity: the ordinary Euclidean distance between points, and a few other distance measures.
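The point that "nearest" depends on the chosen notion of similarity can be seen with a brute-force search. The sample points below are invented for illustration: the same query has a different nearest neighbor under Euclidean and Manhattan distance.

```python
import math

def nearest(points, query, dist):
    """Brute-force nearest neighbor: O(n) per query, works for any metric."""
    return min(points, key=lambda p: dist(p, query))

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

data = [(1.0, 1.0), (1.5, 0.0)]
query = (0.0, 0.0)
# Euclidean: (1, 1) is closer (distance ~1.41 vs. 1.5).
# Manhattan: (1.5, 0) is closer (distance 1.5 vs. 2.0).
nn_euclid = nearest(data, query, euclidean)
nn_manhattan = nearest(data, query, manhattan)
```

The linear scan works for any distance function; the efficient algorithms the article alludes to trade that generality for speed by exploiting the structure of a particular metric.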



Research for Practice


Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable; to obtain understanding through empirically rigorous experiments; or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. What sorts of papers best serve their readers? Ideally, papers should accomplish the following: provide intuition to aid the reader's understanding but clearly distinguish it from stronger conclusions supported by evidence; describe empirical investigations that consider and rule out alternative hypotheses; make clear the relationship between theoretical analysis and intuitive or empirical claims; and use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts.


ACM Awards Honor CS Contributions


In this issue of Communications, as evidenced by the cover and lead article, we celebrate the latest recipients of the ACM A.M. Turing Award. Yoshua Bengio, Yann LeCun, and Geoffrey Hinton carried out pioneering work in deep learning that has touched all our lives. As Turing Laureates, they now join the eminent group of technology visionaries recognized with the world's highest distinction in computing. The Turing Award is one of a suite of professional honors ACM bestows annually to recognize technical achievements that have made significant contributions to our field. This month, I will have the pleasure of joining the awardees, ACM Fellows, and other luminaries in San Francisco for the ACM Awards Banquet.


Lifelong Learning in Artificial Neural Networks


Researchers at Columbia University are learning how to build and train self-aware neural networks, systems that can adapt and improve by using internal simulations and knowledge of their own structures. The University of California, Irvine, is studying the dual memory architecture of the hippocampus and cortex to replay relevant memories in the background, allowing systems to become more adaptable and predictive while retaining previous learning. Tufts University is examining an intercellular regeneration mechanism observed in lower animals such as salamanders to create flexible robots capable of adapting to changes in their environment by altering their structures and functions on the fly. SRI International is developing methods to use environmental signals and their relevant context to represent goals in a fluid way rather than as discrete tasks, enabling AI agents to adapt their behavior on the go.


Neural Net Worth


When Geoffrey Hinton started doing graduate work on artificial intelligence at the University of Edinburgh in 1972, the idea that it could be achieved using neural networks that mimicked the human brain was in disrepute. Computer scientists Marvin Minsky and Seymour Papert had published a 1969 book, Perceptrons, about an early attempt at building a neural net, and it left people in the field with the impression that such devices were nonsense. "It didn't actually say that, but that's how the community interpreted the book," says Hinton who, along with Yoshua Bengio and Yann LeCun, will receive the 2018 ACM A.M. Turing Award for their work that led deep neural networks to become an important component of today's computing. "People thought I was just completely crazy to be working on neural nets." Even in the 1980s, when Bengio and LeCun entered graduate school, neural nets were not seen as promising.