In his newly published survey of the literature, expert Thomas B. Sheridan concludes that the time is ripe for human factors researchers to contribute scientific insights that can tackle the many challenges of human-robot interaction. Massachusetts Institute of Technology Professor Emeritus Sheridan, who for decades has studied humans and automation, looked at self-driving cars and highly automated transit systems; routine tasks such as the delivery of packages in Amazon warehouses; devices that handle tasks in hazardous or inaccessible environments, such as the Fukushima nuclear plant; and robots that engage in social interaction, such as talking Barbie dolls. In each case, he noted significant human factors challenges, particularly concerning safety. No human driver, he claims, will stay alert enough to take over control of a Google car quickly should the automation fail. Nor does self-driving car technology account for the value of social interaction between drivers, such as eye contact and hand signals.
"Mr. Robot" begins a second season on USA on Wednesday night with a two-part opener broadcast back to back. A conspiracy thriller set in the present day – it's still 2015 on the series' clock – it's science fiction in the sense that it involves technology, but not in the quasi-supernatural manner of flying saucers, Godzillas, time travel or synthetic humans, mutant superheroes and such. Still, it shares with much sci-fi a sense of the ordinary world pushed a click toward the uncanny. Into every generation a confused and disaffected hero is born. On "Mr. Robot" it's Elliot Alderson (Rami Malek), a cyberwhiz who was recruited – Season 1 spoilers ahead – to the underground hacking collective fsociety by a person (Christian Slater) who turned out to be his father, who turned out to be dead, a figment of his imagination projected wholly into his world, though invisible to everyone else – a "Sixth Sense" move, dramatically.
An AI judge has accurately predicted most verdicts of the European Court of Human Rights, and might soon be making important decisions about cases. Scientists built an artificially intelligent computer that was able to look at legal evidence, as well as considering ethical questions, to predict how a case should be decided. Its predictions matched the real outcomes with 79 per cent accuracy, according to its creators. The algorithm looked at data sets made up of 584 cases relating to torture and degrading treatment, fair trials and privacy. The computer was able to look through that information and make its own decision – which lined up with those made by Europe's most senior judges in roughly four out of five cases.
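The researchers' actual system is not described here beyond "learn the outcome from case text," so as a purely illustrative sketch, here is a hypothetical toy text classifier in that spirit: bag-of-words vectors and a nearest-centroid decision between "violation" and "no violation." The training snippets are invented, not real case texts, and nothing below should be read as the study's method.

```python
# Toy nearest-centroid text classifier (illustrative only; invented data).
from collections import Counter
import math

def tokenize(text):
    """Lowercase whitespace tokenization."""
    return text.lower().split()

def centroid(docs):
    """Average bag-of-words frequency vector for a list of documents."""
    counts = Counter()
    for d in docs:
        counts.update(tokenize(d))
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text, centroids):
    """Assign the label whose centroid is most similar to the text."""
    vec = centroid([text])
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

# Hypothetical training snippets, one list per outcome label.
train = {
    "violation": [
        "applicant detained degrading conditions overcrowded cell",
        "prolonged detention without trial degrading treatment",
    ],
    "no violation": [
        "fair hearing held evidence examined proper procedure",
        "trial conducted promptly with legal representation",
    ],
}
centroids = {label: centroid(docs) for label, docs in train.items()}
print(predict("applicant held in overcrowded cell without trial", centroids))
# → violation
```

A real system of this kind would use far richer features (n-grams, topic distributions) and a trained linear model rather than raw centroids, but the pipeline shape – vectorize case text, compare against learned outcome patterns – is the same.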
The Russian government says that its agents weren't involved in hacking 500 million Yahoo accounts, after the US charged two spies over a "state-sponsored" cyber attack. The Kremlin said its FSB domestic intelligence service was not involved in any unlawful activity, and appeared to suggest that Russian intelligence agents have never hacked anyone at all. This week it emerged that the US Department of Justice would charge two Russian spies with hacking into Yahoo in one of the biggest cyber attacks in history. It said that FSB agents had paid hackers to steal people's email accounts and try to gather information about journalists and politicians.
Ambarish Mitra is the founder and CEO of Blippar, an augmented reality and image recognition platform. Leading venture capitalists, scientists and CEOs all have the same prediction for artificial intelligence: machines will take jobs away from both blue- and white-collar workers, "eat the world" and, ultimately, overthrow humanity. These histrionics have driven a widely accepted negative narrative about this technology's potential impact on the future of humanity. Bringing artificial intelligence into the mainstream should be met with hope and empathy, not fear. The concerns of these experts -- wealthy men with unparalleled access to utilities, healthcare, public safety, education and job opportunities -- are the concerns of privilege.