It's important to think about ethical decision-making in autonomous cars. But the trolley problem, with its action movie–like scenario, can overshadow questions that are more mundane but also more pertinent to most people. It's not as much fun to have a philosophical conversation about how real people in real situations deal with the risks of living with these new technologies. Most of us won't have to make a life-or-death decision like the one in the trolley problem, but we may well have to deal with technologies that decide who gets the raw end of a car-on-car, or car-on-human, collision.
Professor Daniel Beliavsky is a professional pianist and Charlotte Bennett is his student. Since Beliavsky is more experienced, he's more comfortable with the keyboard: he looks at the sheet music more than at his hands, whereas Bennett spends more time looking down at the keys. Impressively, Beliavsky is able to look ahead to where his hands will be in a few seconds.
It was by settling the vast landscape of the West, Kaplan argues, that the U.S. learned how to be a global power. Drawing heavily on the work of the early 20th-century historian Bernard DeVoto, Kaplan claims that the settlement of the American West, including the wars against and eventual extermination of native peoples, was a spiritual experience for the young American nation. "The conquest of the Great Plains and the Rockies," he cites DeVoto as intuiting, was "a necessary prelude in order to defeat the Nazis and the Japanese." Kaplan's take on Manifest Destiny is more complex than DeVoto's: He defends the impulse to expand while lamenting the moral cover that the cowboy-homesteader mystique gave to the abysmal treatment of native peoples and the jingoism of the Mexican-American War.
Relatedly, the relative value of human intelligence gained through questioning of any type has declined over the past 15 years, as the U.S. intelligence community has undergone a renaissance in collection and analysis driven by massive improvements in surveillance technology, computing, big data analysis, and artificial intelligence. Other intelligence tools have also made big leaps forward, such as forensic intelligence gleaned from cellphones, pocket litter, and improvised explosive devices recovered on the battlefield. Human intelligence still plays an important role, but not to the degree it did right after 9/11.
The Martens Clause appears to be the key to resolving much of the dispute over autonomous weapons systems because it provides the necessary grounding for moral questions in international law, and it gives an opening for us to actually grasp what might be considered the "dictates of public conscience." In other words, we can set aside the question of whether technology can act in a particular way at a particular time, and instead ask whether the technology should do so. As the International Committee of the Red Cross explains, "there is a related question of whether the principles of humanity and the dictates of public conscience (the Martens Clause) allow life and death decisions to be taken by a machine with little or no human control." But how would we begin to say whether autonomous weapons systems uphold or violate the "principles of humanity and the dictates of public conscience"?
Alas, a quick Google search told me that the territory had already been covered. I shouldn't have been surprised. The plot of Plot, in which the fascist sympathizer Charles Lindbergh ascends to the presidency on the tailwinds of celebrity and America First populism, speaks with vivid and distressing clarity to the present moment. Take some crucial moment in history and undo it, or do it differently.
Arguing for a ban related to human cloning research in the late 1990s, Leon Kass, who would later chair George W. Bush's President's Council on Bioethics, controversially invoked "the wisdom of repugnance": the idea that disgust can be "the emotional expression of deep wisdom," cluing us in that a scientific or technological practice is nasty, sinister, a threat to our humanity. Shelley's Frankenstein takes the contrary ethical stance, showing us the hazards of repugnance: how the creature turns violent and vengeful after being rejected by people horrified at his appearance.
He worked in isolation, hiding his progress from his teacher and his fellow scientists. And when Frankenstein died, his Creature continued to roam the earth, enraged and embittered, poised to inflict more damage. If Frankenstein had been a member of a research group, his fellow scientists could have stepped in to help control the Creature and to support Frankenstein through the challenges that came to light the moment the Creature attained autonomy. As it was, Frankenstein failed to manage his invention and succumbed to the perils of the isolated researcher.