Humans make the same call as self-driving cars 75% of the time when presented with the same data

Daily Mail - Science & tech

Humans see things in a very similar way to computers, according to a study that quizzed people on images and asked them to 'think like a machine'. Participants were shown blurry images and asked to choose, from two options, which label an AI might have assigned. The researchers found that 75 per cent of the time humans and machines picked the same answer, suggesting that both can be fooled in the same way. The findings demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences and even school buses.
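The agreement figure is easy to make concrete. Below is a minimal sketch, with made-up responses rather than the study's data or code, of how a 75 per cent human-machine agreement rate could be computed from forced-choice answers:

```python
# Hypothetical forced-choice data: each trial records the human's A/B guess
# and the label the machine actually assigned to the same image.
human_picks   = ["A", "B", "A", "A", "B", "A", "B", "A"]
machine_picks = ["A", "B", "B", "A", "B", "A", "A", "A"]

# Agreement = fraction of trials where the human and the machine chose alike.
matches = sum(h == m for h, m in zip(human_picks, machine_picks))
agreement = matches / len(human_picks)

print(f"Human-machine agreement: {agreement:.0%}")  # 75% for this toy data
```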


Humans actually think like computers, a Johns Hopkins study says

ZDNet

Do you realize what you've become? Yes, you think you're a beautifully human, sentient being. You think you're a unique example of the species, fascinating in every aspect. Or have all those gadgets you've been using turned you into something of a predictable machine? I only ask because of a new study from Johns Hopkins University, in which researchers tested whether humans actually see things in a very similar way to computers.


Chaz Firestone - Arts & Sciences Magazine

#artificialintelligence

Try zeroing in on an image of an orange. While the human brain may correctly identify it, a machine might mistake it for a missile or a jaguar, says Chaz Firestone, assistant professor in the Department of Psychological and Brain Sciences. Those mistakes might seem comical at face value, but they could prove deadly if, for example, a self-driving car fails to recognize a person in its path, or as we begin relying more on automated radiology to screen for anomalies like tumors or tissue damage. "Most of the time, research in our field [of artificial intelligence] is about getting computers to think like people," says Firestone.
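The article does not say which attack produced those errors, but a standard way to make a classifier misidentify an image is the fast gradient sign method (FGSM). The PyTorch sketch below illustrates that general technique only, assuming a differentiable model and pixel values scaled to [0, 1]; it is not the procedure used in the Johns Hopkins study.

```python
# A sketch of the fast gradient sign method (FGSM), one standard technique
# for producing small perturbations that fool image classifiers.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` nudged so the model's loss increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # assumes inputs in [0, 1]
```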



Machine learning advances human-computer interaction - NewsCenter

#artificialintelligence

A natural language model developed in the Robotics and Artificial Intelligence Laboratory allows a user to speak a simple command, which the robot translates into an action. If the robot is told to pick up a particular object, it can distinguish that object from others nearby, even ones identical in appearance. Inside the University of Rochester's Robotics and Artificial Intelligence Laboratory, a robotic torso looms over a row of plastic gears and blocks, awaiting instructions. Next to it, Jacob Arkin '13, a doctoral candidate in electrical and computer engineering, gives the robot a command: "Pick up the middle gear in the row of five gears on the right," he says to the Baxter Research Robot. The robot, sporting a University of Rochester winter cap, pauses before turning and extending its right limb in the direction of the object.
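The article does not include the lab's code, but the core grounding step, resolving a spatial phrase like "the middle gear in the row of five gears on the right" to one object among visually identical ones, can be sketched with invented names and a toy scene:

```python
# A minimal sketch, with hypothetical names, of grounding a spatial command.
# The actual Rochester system learns this mapping with a natural language
# model; this toy version only shows why positional context is enough to
# pick out one object among identical-looking ones.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str   # e.g. "gear" or "block"
    x: float    # horizontal position on the table, left to right

def middle_of_right_row(objects: list[DetectedObject],
                        kind: str, row_size: int) -> DetectedObject:
    """Pick the middle object of the rightmost row of `row_size` objects of `kind`."""
    candidates = sorted((o for o in objects if o.kind == kind), key=lambda o: o.x)
    row = candidates[-row_size:]   # the rightmost `row_size` matching objects
    return row[row_size // 2]      # the middle element of that row

scene = [DetectedObject("gear", x) for x in (0.1, 0.2, 0.3, 0.4, 0.5)]
target = middle_of_right_row(scene, "gear", 5)
print(target)  # -> the gear at x=0.3
```

Even though the five gears are indistinguishable by appearance, their positions alone determine a unique referent, which is what lets the robot act on the spoken command.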