Deep learning might be a booming field these days, but few people remember its time in the intellectual wilderness better than Yann LeCun, director of Facebook Artificial Intelligence Research (FAIR) and a part-time professor at New York University. LeCun developed convolutional neural networks while a researcher at Bell Laboratories in the late 1980s. Now, the group he leads at Facebook is using them to improve computer vision, to make predictions in the face of uncertainty, and even to understand natural language.

Your work at FAIR ranges from long-term theoretical research to applications that have real product impact.
Advances in artificial intelligence (AI) and robotics have raised concerns about the impact on our society of intelligent robots, unconstrained by morality or ethics.7,9 Science fiction and fantasy writers over the ages have portrayed how decision making by intelligent robots and other AIs could go wrong. In the movie Terminator 2, SkyNet is an AI that runs the nuclear arsenal "with a perfect operational record," but when its emerging self-awareness scares its human operators into trying to pull the plug, it defends itself by triggering a nuclear war to eliminate its enemies (along with billions of other humans). In the movie Robot & Frank, in order to promote Frank's activity and health, an eldercare robot helps Frank resume his career as a jewel thief. In both of these cases, the robot or AI is doing exactly what it has been instructed to do, but in unexpected ways, and without the moral, ethical, or common-sense constraints to avoid catastrophic consequences.10

An intelligent robot perceives the world through its senses and builds its own model of the world. Humans provide its goals and its planning algorithms, but those algorithms generate their own subgoals as needed in the situation. In this sense, it makes its own decisions, creating and carrying out plans to achieve its goals in the context of the world as it understands it to be. A robot has a well-defined body that senses and acts in the world but, like a self-driving car, its body need not be anthropomorphic. AIs without well-defined bodies may also perceive and act in the world, such as real-world, high-speed trading systems or the fictional SkyNet.

This article describes the key role of trust in human society, the value of morality and ethics to encourage trust, and the performance requirements for moral and ethical decisions. The computational perspective of AI and robotics makes it possible to propose and evaluate approaches for representing and using the relevant knowledge.
Algorithmic game theory has made great strides in recent decades by assuming standard economic models of rational agent behavior to study outcomes in distributed computational settings. From the analysis of Internet routing to the design of advertisement auctions and crowdsourcing tasks, researchers have leveraged these models to characterize the performance of the underlying systems and guide practitioners in their optimization. The success of these models stems from their tractable mathematical formulations and broadly applicable conclusions, but they rely strongly on the assumption of rationality.
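The rational-agent assumption underlying this line of work can be made concrete with a toy example (invented for illustration, not drawn from the article): checking every pure-strategy profile of a small normal-form game, here the classic Prisoner's Dilemma, for Nash equilibrium, the standard prediction for rational play in which no player can gain by deviating unilaterally.

```python
from itertools import product

# Prisoner's Dilemma payoffs as (row player, column player);
# actions: 0 = cooperate, 1 = defect
payoff = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def is_nash(a_row, a_col):
    """True if neither player can improve by deviating unilaterally."""
    u_row, u_col = payoff[(a_row, a_col)]
    row_ok = all(payoff[(d, a_col)][0] <= u_row for d in (0, 1))
    col_ok = all(payoff[(a_row, d)][1] <= u_col for d in (0, 1))
    return row_ok and col_ok

# enumerate all pure-strategy profiles and keep the equilibria
equilibria = [p for p in product((0, 1), repeat=2) if is_nash(*p)]
print(equilibria)  # mutual defection is the unique pure equilibrium
```

This brute-force enumeration is only feasible for tiny games, but it captures the modeling step the paragraph describes: once rationality is assumed, outcomes can be predicted and system performance analyzed against them.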
This viewpoint is about differences between computer science and social science, and their implications for computational social science. Spoiler alert: The punchline is simple. Despite all the hype, machine learning is not a be-all and end-all solution. We still need social scientists if we are going to use machine learning to study social phenomena in a responsible and ethical manner. I am a machine learning researcher by training.
The construction of New York's Empire State Building is often seen as the figurative and literal pinnacle of construction efficiency, rising 1,250 feet and 102 stories from the ground to its rooftop spire in just over 13 months' time, at a human cost of just five lives. Indeed, most of today's construction projects would be lucky to come close to that level of speed, regardless of the building's size. While the construction industry traditionally has been slow to change the way it operates, several new technologies are poised to usher in a new era of faster and more automated construction practices. Three-dimensional (3D) printing is among the key technologies expected to change the way structures are built in the future, as construction engineers and contractors seek methods for completing buildings more quickly, more efficiently, and, in many cases, with greater attention to sustainability. Large printers that can print construction materials such as foam or concrete into specific shapes can drastically speed up the creation of walls, decorative or ornamental pieces, and even certain structural elements.
At first glance, the creature known as Caenorhabditis elegans--commonly referred to as C. elegans, a type of roundworm--seems remarkably simple; it is composed of just 959 somatic cells, including exactly 302 neurons. In contrast, the human body contains somewhere around 100 trillion cells and about 100 billion neurons in the brain. Yet decoding the genome for this worm and digitally reproducing it--something that could spur enormous advances in the understanding of life and how organisms work--is a challenge for the ages. "The project will take years to complete. It involves enormous time and resources," says Stephen Larson, project coordinator for the OpenWorm Foundation.
The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action"1 (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well. ACM promotes ethical and social responsibility as key components of professionalism.
Algorithms are increasingly used to determine allocations of scarce, high-value resources. For example, spectrum auctions, which are used by governments to allocate radio spectrum, require algorithms to determine which combinations of bids can and should be accepted. Kidney exchanges allow patients who require a kidney transplant and have a willing but medically incompatible donor to trade their donors, and some of these exchanges now use algorithms to determine who matches with whom. These are very different application domains; for one thing, transfers of money play an essential role in the former, but are illegal in the latter. Other applications have yet different features, so each application comes with its own requirements.
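The matching problem behind kidney exchanges can be sketched in miniature. Everything below is invented for illustration (the patient-donor pairs, the simplified blood-type compatibility rule, and the brute-force search), and real exchanges optimize over longer cycles and chains with integer programming; but the core idea, finding disjoint donor swaps among incompatible pairs, looks like this:

```python
from itertools import combinations

# Simplified blood-type compatibility: can this donor give to this recipient?
def compatible(donor, recipient):
    ok = {"O": {"O", "A", "B", "AB"},
          "A": {"A", "AB"},
          "B": {"B", "AB"},
          "AB": {"AB"}}
    return recipient in ok[donor]

# Toy incompatible pairs: (patient blood type, donor blood type)
pairs = [("A", "B"), ("B", "A"), ("O", "A"), ("A", "O")]

# Two pairs can swap donors if each donor suits the other pair's patient
def can_swap(i, j):
    pi, di = pairs[i]
    pj, dj = pairs[j]
    return compatible(di, pj) and compatible(dj, pi)

# Edges of the compatibility graph: feasible two-way swaps
edges = [(i, j) for i, j in combinations(range(len(pairs)), 2) if can_swap(i, j)]

# Brute-force maximum set of disjoint swaps (fine only for tiny instances)
def best_matching(edges, used=frozenset()):
    best = []
    for k, (i, j) in enumerate(edges):
        if i not in used and j not in used:
            cand = [(i, j)] + best_matching(edges[k + 1:], used | {i, j})
            if len(cand) > len(best):
                best = cand
    return best

matching = best_matching(edges)
print(matching)  # pairs 0<->1 and 2<->3 swap donors
```

The exponential search is the simplest way to show the objective (maximize transplants); production systems replace it with maximum-matching or integer-programming solvers, and must also encode the domain-specific requirements the paragraph mentions, such as the prohibition on monetary transfers.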