One of the intriguing aspects of the popular 1960s television show "Mission: Impossible" was the opening sequence of every episode, which featured a secret agent listening to a recorded message about an upcoming mission. At the end of the recording each week, the tape would sizzle, crackle, and disintegrate into a heap of smoke and debris, ensuring no one else could access the top-secret information it contained. Until recently, self-destructing electronic systems remained within the realm of science fiction, but advances in chemistry, engineering, and materials science are finally allowing researchers to construct circuits that break down on their own timetable. This includes systems that rely on conventional complementary metal-oxide-semiconductor (CMOS) technology. "The goal is to develop functional circuits that can operate for a period of time and then vaporize," says Amit Lal, Robert M. Scharf 1977 Professor of Engineering in the Electrical and Computer Engineering Department at Cornell University in Ithaca, NY, and director of the university's SonicMEMs lab.
Autonomous drones represent a new breed of mobile computing system. Compared to smartphones and connected cars, which only opportunistically sense or communicate, drones allow motion control to become part of the application logic. The efficiency of their movements is largely dictated by the low-level control that enables their autonomous operation based on high-level inputs. Existing implementations of such low-level control operate in a time-triggered fashion. In contrast, we conceive a notion of reactive control that allows drones to execute the low-level control logic only upon recognizing the need to, based on the environment's influence on the drone's operation.
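The contrast between the two approaches can be sketched in a few lines. This is a minimal illustration of my own, not code from the work described: the `PID` class, function names, and the error threshold are all invented for the example. A time-triggered loop runs the control law on every tick unconditionally, while a reactive loop recomputes only when the environment has perturbed the vehicle enough to matter.

```python
class PID:
    """Toy single-axis PID controller (illustrative only)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def time_triggered_step(pid, setpoint, measurement, dt):
    # Time-triggered style: the control law runs every tick, no matter what.
    return pid.update(setpoint - measurement, dt)


def reactive_step(pid, setpoint, measurement, dt, threshold=0.05):
    # Reactive style: recompute only when the tracking error shows the
    # environment has actually disturbed the drone; otherwise signal that
    # the previous actuator output can be reused.
    error = setpoint - measurement
    if abs(error) < threshold:
        return None  # no recomputation needed this tick
    return pid.update(error, dt)
```

The reactive variant saves control-loop executions (and hence energy and CPU time) in calm conditions, at the cost of deciding, each tick, whether recomputation is warranted.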
In the late 1990s, at about the same time as an upsurge of interest among theorists in real-time control in which feedback loops were closed through rate-limited communication channels, the Bluetooth communication standard was introduced to enable "local area networks of things." Various research groups, including my own, became interested in implementing feedback control using Bluetooth channels in order to evaluate the design principles that we and others had developed for communication-limited real-time systems. With device networks taking on ever-increasing importance, our Bluetooth work was part of an emergent area within control theory aimed at systems using existing infrastructure, rather than systems of sensors, actuators, and data links that were co-optimized to work together to meet performance objectives. The main challenge of using infrastructure designed for purposes other than real-time applications was that none of the infrastructure-optimized computation and communication protocols were well suited to closing the feedback loops of control systems. The work of Mottola and Whitehouse is somewhat along these lines--with the infrastructure in this case being the control logic and feedback control algorithms found on popular UAV autopilot platforms such as Ardupilot, Pixhawk, the Qualcomm Snapdragon, and the now-discontinued OpenPilot.
Yet the combination of these factors created a milestone in AI history, as it had a profound impact on real-world applications and the successful deployment of various AI techniques that have been in the works for a very long time, particularly neural networks. I shared these remarks in various contexts during the course of preparing this article. The audiences ranged from AI and computer science researchers to law and public-policy researchers with an interest in AI. What I found striking is the great interest in this discussion and the comfort, if not general agreement, with the remarks I made. I did get a few "I beg to differ" responses, though, all centering on recent advancements relating to optimizing functions, which are key to the successful training of neural networks (such as results on stochastic gradient descent, dropout, and new activation functions). The objections stemmed from my not having named them as breakthroughs (in AI). My answer: They all fall under the enabler I outlined earlier: "increasingly sophisticated statistical and optimization techniques for fitting functions." Follow-up question: Does it matter that they are statistical and optimization techniques, as opposed to classical AI techniques?
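To make "optimization techniques for fitting functions" concrete, here is a toy example of my own (not from the text above): stochastic gradient descent fitting a one-parameter model y = w·x to data generated by y = 2x. The function names and hyperparameters are invented for illustration; real neural-network training applies the same idea to millions of parameters.

```python
import random

def sgd_fit(data, lr=0.1, epochs=50):
    """Fit y = w*x by stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        random.shuffle(data)          # "stochastic": visit samples in random order
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad              # step against the gradient
    return w

# Data from the target function y = 2x; SGD recovers w close to 2.0.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_fit(data)
```

The point of the illustration: nothing here is specific to intelligence; it is curve fitting by iterative optimization, which is exactly the enabler named above.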
Ask poverty attorney Joanna Green Brown for an example of a client who fell through the cracks and lost social services benefits they may have been eligible for because of a program driven by artificial intelligence (AI), and you will get an earful. There was the "highly educated and capable" client who had had heart failure and was on a heart and lung transplant wait list. The questions he was presented with in a Social Security benefits application "didn't encapsulate his issue," and his child subsequently did not receive benefits. "It's almost impossible for an AI system to anticipate issues related to the nuance of timing," Green Brown says. Then there's the client who had to apply for a Medicaid recertification, but misread a question and received a denial a month later.
Can the diverse artificial intelligence (AI) community come together to build an infrastructure to advance the United Nations' Sustainable Development Goals (SDGs, https://sustainabledevelopment.un.org/sdgs) around the world? Can global projects be developed that begin to address pressing issues surrounding some of our greatest humanitarian challenges to help all? Those were the goals of the second annual AI for Good Global Summit, the leading United Nations platform for dialogue on artificial intelligence, held in Geneva, Switzerland, over three days in May. The conference was organized by the International Telecommunication Union (ITU), the United Nations' specialized agency for information and communication technology (ICT), in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM), and 32 sister UN agencies. The 500 attendees were a diverse set of stakeholders with wide-ranging expertise, from the individual UN agencies (including everything from UNESCO and UNICEF to the World Health Organization, the World Bank, and UNHCR) to AI researchers, public- and private-sector decision-makers, and potential financial partners and sponsor organizations.
Awarding ACM's 2017 A.M. Turing Award to John Hennessy and David Patterson was richly deserved and long overdue, as described by Neil Savage in his news story "Rewarded for RISC" (June 2018). RISC was a big step forward. In his acceptance speech, Patterson also graciously acknowledged the contemporary and independent invention of the RISC concepts by John Cocke, another Turing laureate, at IBM, as described by Radin.1 Unfortunately, Cocke, who was the principal inventor but rarely published, was not included as an author, and it would have been good if Savage had mentioned his contribution. It is noteworthy that RISC architectures depend on and emerged from optimizing compilers. So far as I can tell, all the RISC inventors had strong backgrounds in both architecture and compilers.
MIT CSAIL's origami robot is packaged in an ingestible ice pill. In 2013, University of Sheffield roboticist Dana Damian was doing postdoctoral research at Harvard Medical School affiliate Boston Children's Hospital when she learned of a procedure called the Foker technique. The surgery, performed on children with a rare congenital defect of the esophagus, calls for doctors to attach sutures to part of an infant's esophagus, then tie them off on the baby's back. Over time, the sutures lengthen the esophagus by pulling on it, stimulating tissue growth. Although the technique can be effective, the risk of infection and complication is high, and the baby must remain under sedation for weeks.