

Building Certified Concurrent OS Kernels

Communications of the ACM

In International Conference on Computer Aided Verification (2015), Springer, pp. 449–465.


Protein Design by Provable Algorithms

Communications of the ACM

Proteins are a class of large molecules that are involved in the vast majority of biological functions, from cell replication to photosynthesis to cognition. The chemical structure of proteins is very systematic5--they consist of a chain of atoms known as the backbone, built from three-atom (nitrogen-carbon-carbon) repeats known as residues, each of which features a sidechain of atoms emanating from the first carbon. In general, there are 20 different options for sidechains, and a residue with a particular type of sidechain is known as an amino acid (so there are also 20 different amino acid types). For billions of years, the process of evolution has optimized the sequence of amino acids that make up naturally occurring proteins to suit the needs of the organisms that make them. So we ask: Can we use computation to design non-naturally occurring proteins that suit our biomedical and industrial needs? This question is a combinatorial optimization problem, because the output of a protein design computation is a sequence of amino acids. Due to the vast diversity of naturally occurring proteins, it is possible--and very useful--to begin a protein design computation with a naturally occurring protein and then to modify it to achieve the desired function. In this article, we focus on protein design algorithms that perform this optimization using detailed modeling of the 3D structure of the protein.5,8 Thus, they begin with a starting structure: a 3D structure of a (typically naturally occurring) protein we wish to modify. To illustrate this concept, imagine we wish to make a simple modification to a protein to increase its stability, so it can still function at higher temperatures.
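
To make the combinatorial nature of this optimization concrete, here is a minimal, hypothetical Python sketch: it exhaustively tries every amino acid at a handful of mutable positions in a starting sequence and keeps the lowest-energy choice. The starting sequence, the mutable positions, and the stability_energy scoring function are placeholders for illustration only; a real structure-based design program would evaluate sidechain conformations and physical energies on the 3D starting structure rather than score the sequence directly.

    # Illustrative sketch only: protein design as combinatorial search over
    # amino acid choices at a few mutable positions of a starting sequence.
    from itertools import product

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acid types

    def stability_energy(sequence):
        # Placeholder scoring function (lower = more stable). A real design
        # program would compute a physical energy from the 3D structure.
        return sum(ord(aa) % 7 for aa in sequence)

    def design(starting_sequence, mutable_positions):
        # Try every combination of amino acids at the mutable positions and
        # keep the sequence with the lowest (best) energy.
        best_seq, best_e = starting_sequence, stability_energy(starting_sequence)
        for choices in product(AMINO_ACIDS, repeat=len(mutable_positions)):
            seq = list(starting_sequence)
            for pos, aa in zip(mutable_positions, choices):
                seq[pos] = aa
            seq = "".join(seq)
            e = stability_energy(seq)
            if e < best_e:
                best_seq, best_e = seq, e
        return best_seq, best_e

    print(design("MKTAYIAKQR", mutable_positions=[2, 5]))

Even this toy version makes the scaling problem visible: with n mutable positions the search space contains 20^n sequences, which is why practical protein design algorithms rely on pruning and provable bounds rather than brute force.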


Sampling-Based Robot Motion Planning

Communications of the ACM

In recent years, robots have come to play an active role in everyday life: medical robots assist in complex surgeries; search-and-rescue robots are employed in mining accidents; and low-cost commercial robots clean houses. There is a growing need for sophisticated algorithmic tools enabling stronger capabilities for these robots. One fundamental problem that robotics researchers grapple with is motion planning--which deals with planning a collision-free path for a moving system in an environment cluttered with obstacles.13,29 To a layman, it may seem that the wide use of robots in modern life implies the motion-planning problem has already been solved. This is far from true.
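
As a concrete illustration of the sampling-based approach this article surveys, the following minimal Python sketch grows a rapidly-exploring random tree (RRT) in a 2D world with circular obstacles. The world bounds, step size, obstacle list, and goal tolerance are arbitrary illustrative values, and edge collision checks are omitted for brevity; this is a teaching sketch, not a production planner.

    # Minimal sketch of a sampling-based motion planner (a bare-bones RRT).
    import math, random

    OBSTACLES = [((5.0, 5.0), 1.5), ((2.0, 7.0), 1.0)]  # (center, radius) pairs

    def collision_free(p):
        # A point is free if it lies outside every circular obstacle.
        return all(math.dist(p, c) > r for c, r in OBSTACLES)

    def rrt(start, goal, bounds=(0.0, 10.0), step=0.5, iters=5000, goal_tol=0.5):
        tree = {start: None}  # maps each node to its parent in the tree
        for _ in range(iters):
            sample = (random.uniform(*bounds), random.uniform(*bounds))
            nearest = min(tree, key=lambda n: math.dist(n, sample))
            d = math.dist(nearest, sample)
            if d == 0:
                continue
            new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
                   nearest[1] + step * (sample[1] - nearest[1]) / d)
            if not collision_free(new):
                continue  # discard steps that land inside an obstacle
            tree[new] = nearest
            if math.dist(new, goal) < goal_tol:
                path, node = [], new  # walk parent pointers back to the start
                while node is not None:
                    path.append(node)
                    node = tree[node]
                return path[::-1]
        return None  # no path found within the iteration budget

    print(rrt((1.0, 1.0), (9.0, 9.0)))

The key idea carried by even this tiny sketch is that the planner never builds an explicit map of the free space; it only needs to answer collision queries for sampled configurations, which is what lets sampling-based methods scale to high-dimensional systems.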


Unlocking Data to Improve Public Policy

Communications of the ACM

There is a growing consensus among policymakers that bringing high-quality evidence to bear on public policy decisions is essential to supporting the effective and efficient government their constituencies want and need. At the U.S. federal level, this view is reflected in a recent Congressional report by the Commission on Evidence-Based Policymaking, which recommends creating a data infrastructure that enables "a future in which rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy."4 This article describes a new approach to data infrastructure for fact-based policy, developed through a partnership between our interdisciplinary organization Research Improving People's Livesa and the State of Rhode Island.13 Together, we constructed RI 360, an anonymized database that integrates administrative records from siloed databases across nearly every Rhode Island state agency. The comprehensive scope of RI 360 has enabled new insights across a wide range of policy areas, and supports ongoing research into improving policies to alleviate poverty and increase economic opportunity for all Rhode Island residents (see the sidebar "Policy Areas in which RI 360 Has Contributed Insights").
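
The construction details of RI 360 are not spelled out here, but the core idea of linking siloed administrative records without exposing raw identifiers can be sketched in a few lines of Python. In this hypothetical example, two agency tables are de-identified independently by replacing a shared identifier with a salted hash, and records are then joined on that pseudonym; the table contents, field names, and salt are invented for illustration and do not describe the actual RI 360 pipeline.

    # Hypothetical sketch: join two siloed tables on a salted hash of a shared
    # identifier so analysts never see the raw ID. Not the actual RI 360 design.
    import hashlib

    SALT = "replace-with-a-secret-salt"  # held by a trusted data steward

    def pseudonym(raw_id):
        return hashlib.sha256((SALT + raw_id).encode()).hexdigest()

    labor_records = [{"id": "123-45-6789", "quarterly_wages": 8200}]
    snap_records = [{"id": "123-45-6789", "snap_benefit": 450}]

    # De-identify each table independently, then merge on the pseudonym.
    by_person = {}
    for rec in labor_records:
        by_person.setdefault(pseudonym(rec["id"]), {})["quarterly_wages"] = rec["quarterly_wages"]
    for rec in snap_records:
        by_person.setdefault(pseudonym(rec["id"]), {})["snap_benefit"] = rec["snap_benefit"]

    print(by_person)  # one linked, de-identified record per person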


Multi-Device Digital Assistance

Communications of the ACM

The use of multiple digital devices to support people's daily activities has long been discussed.11 Multi-device experiences (MDXs), which span multiple devices simultaneously, are viable for many individuals. Each device has unique strengths in aspects such as display, compute, portability, sensing, communications, and input. Despite the potential to utilize the portfolio of devices at their disposal, people typically use just one device per task, meaning they may need to make compromises in the tasks they attempt or may underperform at the task at hand. It also means the support that digital assistants such as Amazon Alexa, Google Assistant, or Microsoft Cortana can offer is limited to what is possible on the current device.


How Might We Increase System Trustworthiness?

Communications of the ACM

The ACM Risks Forum (risks.org) is now in its 35th year, the Communications Inside Risks series is in its 30th year, and the book they spawned--Computer-Related Risks7--went to press 25 years ago. Unfortunately, the types of problems discussed in these sources are still recurring in one form or another today, in many different application areas, with new ones continually cropping up. This seems to be an appropriate time to revisit some of the relevant underlying history, and to reflect on how we might reduce the risks for everyone involved, in part by significantly increasing the trustworthiness of our systems and networks, and also by having a better understanding of the causes of the problems. In this context, 'trustworthy' means having some reasonably well-thought-out assurance that something is worthy of being trusted to satisfy certain well-specified system requirements (such as human safety, security, reliability, robustness and resilience, ease of use and ease of system administration, and predictable behavior in the face of adversities--such as high-probability real-time performance).


Technical Perspective: The Scalability of CertiKOS

Communications of the ACM

For moderate-size sequential programs, formal verification works--we can build a formal, machine-checkable proof that a program is correct with respect to a formal specification in logic. Machine-checked formal verifications of functional correctness have already been demonstrated for operating-system microkernels, optimizing compilers, cryptographic primitives and protocols, and so on. But suppose we want to verify a high-performance hypervisor kernel programmed in C that runs on a real (x86) machine and is capable of booting up Linux in each of its (hypervisor) guest partitions? Real machines these days are multicore--the hypervisor should provide multicore partitions that can host multicore guests, all protected from each other, but interacting via shared memory synchronized by locks. Furthermore, the operating system itself should be multicore, with fine-grain synchronization--we do not want one global lock guarding all the system calls by all the cores and threads.
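
The last point, about fine-grain synchronization, is easy to illustrate outside of kernel code. The hypothetical Python sketch below contrasts one global lock guarding every "system call" with a separate lock per kernel resource; the resource names and counters are invented for illustration and bear no relation to CertiKOS's actual data structures, but they show why per-resource locks let unrelated calls on different cores proceed without contending.

    # Hypothetical contrast of coarse- vs. fine-grained locking (not CertiKOS code).
    import threading

    global_lock = threading.Lock()                    # coarse: everything contends here
    resource_locks = {"pagetable": threading.Lock(),  # fine: one lock per resource
                      "scheduler": threading.Lock()}
    counters = {"pagetable": 0, "scheduler": 0}

    def syscall_coarse(resource):
        with global_lock:               # every call serializes on the same lock
            counters[resource] += 1

    def syscall_fine(resource):
        with resource_locks[resource]:  # calls on different resources do not contend
            counters[resource] += 1

    threads = [threading.Thread(target=syscall_fine, args=(r,))
               for r in ("pagetable", "scheduler") for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counters)  # {'pagetable': 100, 'scheduler': 100}

Verifying the fine-grained version is exactly what makes the proof hard: correctness now depends on the interleavings of many cores and threads, not on a single serialized critical section.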


AI Is Not an Excuse!

Communications of the ACM

I keep hearing excuses for not working on difficult problems: "Eventually AI will solve this so there's no point working on it now." First, we should be cautious about putting too much expectation on artificial intelligence, which, by most metrics, is really today's machine learning (ML) and neural networks. There is no doubt these systems have produced truly remarkable results. The Go story of AlphaGo and AlphaZero from DeepMind is by now a classic example of the surprisingly powerful results these systems produce. A chess-playing version of AlphaZero learned quickly and demonstrated choices of moves unlike those of traditional chess players.


Computational Sustainability

Communications of the ACM

These are exciting times for the computational sciences, with the digital revolution permeating a variety of areas and radically transforming business, science, and our daily lives. The Internet and the World Wide Web, GPS, satellite communications, remote sensing, and smartphones are dramatically accelerating the pace of discovery, engendering globally connected networks of people and devices. The rise of practically relevant artificial intelligence (AI) is also playing an increasing part in this revolution, fostering e-commerce, social networks, personalized medicine, IBM Watson and AlphaGo, self-driving cars, and other groundbreaking transformations. Unfortunately, humanity is also facing tremendous challenges. Nearly a billion people still live below the international poverty line, and human activities and climate change are threatening our planet and the livelihoods of current and future generations. Moreover, the impact of computing and information technology has been uneven, mainly benefiting profitable sectors, with fewer societal and environmental benefits, further exacerbating inequalities and the destruction of our planet. Our vision is that computer scientists can and should play a key role in helping address societal and environmental challenges in pursuit of a sustainable future, while also advancing computer science as a discipline. For over a decade, we have been deeply engaged in computational research to address societal and environmental challenges, while nurturing the new field of Computational Sustainability.


Bitwise

Communications of the ACM

In 1960, physicist Eugene Wigner pondered "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," wondering why it was that mathematics provided the "miracle" of accurately modeling the physical world. Wigner remarked, "it is not at all natural that 'laws of nature' exist, much less that man is able to discover them." Fifty years later, artificial intelligence researchers Alon Halevy, Peter Norvig, and Fernando Pereira paid homage to Wigner in their 2009 paper "The Unreasonable Effectiveness of Data," an essay describing Google's ability to achieve higher-quality search results and ad relevancy not primarily through algorithmic innovation but by amassing and analyzing orders of magnitude more data than anyone had previously. The article both summarized Google's successes to that date and presaged the jumps in "deep learning" in this decade. With sufficient data and computing power, computer-constructed models obtained through machine learning raise the possibility of performing as well as, if not better than, human-crafted models of human behavior.