We propose an alternative to the Turing test that removes the inherent asymmetry between humans and machines in Turing's original imitation game. In this new test, both humans and machines judge each other. We argue that this makes the test more robust against simple deceptions. We also propose a small number of refinements to further improve the test; these refinements could also be applied to Turing's original imitation game.
The Inside Risks Viewpoint "The Risks of Self-Auditing Systems" by Rebecca T. Mercuri and Peter G. Neumann (June 2016) was excellent, and we applaud its call for auditing systems by independent entities to ensure correctness and trustworthiness. However, with respect to voting, it said, "Some research has been devoted to end-to-end cryptographic verification that would allow voters to demonstrate their choices were correctly recorded and accurately counted. However, this concept (as with Internet voting) enables possibilities of vote buying and selling." While Internet voting (like any remote-voting method) is indeed vulnerable to vote buying and selling, end-to-end verifiable voting is not. Poll-site-based end-to-end verifiable voting systems use cryptographic methods to ensure that voters can verify their own votes are correctly recorded and tallied while (perhaps paradoxically) being unable to demonstrate to anyone else how they voted.
In Silicon Valley, Nikolas Janin rises for his 40-minute commute to work just like everyone else. The shop manager and fleet technician at Google gets dressed and heads out to his Lexus RX 450h for the trip on California's clotted freeways. That's when his chauffeur, the car itself, takes over. One of Google's self-driving vehicles, Mr. Janin's ride is equipped with sophisticated artificial intelligence technology that allows him to sit as a passenger in the driver's seat.

Elsewhere, a robot called Ava is being prepared for its first real job, expected later this year: serving as a telemedicine robot that allows a specialist thousands of miles away to visit patients' hospital rooms via a video screen mounted as its "head." When the physician is ready to visit another patient, he taps the new location on a computer map, and Ava finds its own way to the next room, even using the elevator.

And in Pullman, Wash., researchers at Washington State University are fitting "smart" homes with sensors that automatically adjust the lighting in rooms and that monitor and interpret the movements and actions of their occupants, down to how many hours they sleep and how many minutes they exercise.
Many of the fears around AI stem from the possible job losses caused by automation in industries such as manufacturing. However, automation is also at the heart of one of the most exciting and tangible AI products: driverless vehicles. An automated system can run without the help of a human, but that alone does not make it artificially intelligent. An AI-powered automated system would not only make decisions without a human but would also learn from those decisions and alter its actions as a result.
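The distinction above can be made concrete with a minimal sketch. This is a hypothetical illustration (the thermostat scenario, class names, and feedback rule are invented for this example, not drawn from the text): a plain automated system applies a fixed rule, while an "AI-powered" one adjusts its own rule based on the outcomes of its past decisions.

```python
# Plain automation: a fixed rule, applied without human help, that never changes.
def automated_thermostat(temp_c: float) -> str:
    return "heat_on" if temp_c < 20.0 else "heat_off"


# A learning variant: it still decides without a human, but it also adapts
# its decision rule from feedback about earlier decisions.
class LearningThermostat:
    def __init__(self, threshold: float = 20.0, step: float = 0.5):
        self.threshold = threshold  # the rule it will adjust over time
        self.step = step

    def decide(self, temp_c: float) -> str:
        return "heat_on" if temp_c < self.threshold else "heat_off"

    def feedback(self, too_cold: bool) -> None:
        # Learn from the outcome: if the occupant was still cold, raise the
        # threshold so the heat comes on sooner next time; otherwise lower it.
        self.threshold += self.step if too_cold else -self.step


t = LearningThermostat()
print(t.decide(19.0))        # below the initial 20.0 threshold
t.feedback(too_cold=True)    # occupant complained; the rule itself shifts
print(t.threshold)           # the system has altered its future behavior
```

The fixed function and the class make the same decision at first; only the latter changes its behavior as a result of its own decisions, which is the distinction the paragraph draws between automation and AI.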
Science fiction has already explored the theme of robot rights, as in the film Bicentennial Man. Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities. Yet we also tend to think of robots as property, lacking the kind of rights that we reserve for people. But what if a robot achieves true self-awareness? If a machine can think, decide, and act of its own volition, and if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?