Tech industry insiders regularly herald AI as the solution to all of our problems, including those posed by health care. London-based DeepMind, owned by Google's parent company, Alphabet, focuses heavily on the specifics of using artificial intelligence in health care, and on Monday it released a study showing the progress it has made in using AI to diagnose eye conditions. Published in the science journal Nature, the study reports that DeepMind, in partnership with Moorfields Eye Hospital in London, has trained its algorithms to detect over 50 sight-threatening conditions with the same accuracy as expert clinicians. In a project that began two years ago, DeepMind trained its machine learning algorithms on thousands of historic and fully anonymized eye scans to identify diseases that could lead to sight loss. According to the study, the system can now do so with 94 percent accuracy, and the hope is that it could eventually be used to transform how eye exams are conducted around the world.
"She was like, 'You're not your normal, cheery, bubbly self,' " Mr. Witkowski said. " 'You're not using exclamation points.' " She told him she felt his emails came off as more demanding than usual. "I didn't really know how to react," he said. Exclamation points are stressing people out. Years of rampant use have both diluted the punctuation mark's meaning and inflated its significance.
Of the 43 people who heard Nao beg to stay online, 13 chose to listen and did not turn him off, according to the study. Some merciful participants said they felt sorry for Nao and his fear of the void. Others reported that they did not want to act against Nao's will. And while the majority of people turned Nao off despite his protests, those people hesitated to do so, waiting on average more than twice as long as people in tests where Nao did not make his plea. The study builds on existing research showing that humans are inclined to treat electronic media as living beings.
Scientists discovered that bees have visual processing systems just like humans, which could help us understand how facial recognition evolved.
When Pearse Keane started using optical coherence tomography (OCT) scanners to peer into the back of a person's eye in Los Angeles a decade ago, the machines were relatively crude. "The devices were lower resolution, they had much slower image acquisition speeds," says Keane, a consultant ophthalmic surgeon at Moorfields Eye Hospital and researcher at University College London. From 2007, Keane spent two years studying scans from OCT machines, learning to diagnose eye conditions in patients and pick out the minute details that mark sight-threatening diseases. "It was very time consuming, laborious work," Keane says. OCT scans use light to quickly create high-resolution, 3D images of the back of the eye.
As we enter the next revolutionary age, the age of artificial intelligence (AI), it's no surprise fear often guides the mainstream narrative. Fear of massive job loss and millions unemployed as AI and robots are implemented on a global scale. But as the CEO of Mondo, a niche tech and digital marketing staffing agency, I envision this future as one of major job creation and opportunity. This future, however, is only possible if we work together to guide AI and robotics innovation responsibly across all industries. To see proof of why AI won't take all our jobs, you only need to look at history.
If a little humanoid robot begged you not to shut it off, would you show compassion? In an experiment designed to investigate how people treat robots when they act like humans, many participants struggled to power down a pleading robot, either refusing to shut it off or taking more than twice the amount of time to pull the plug. The experiment was conducted by researchers in Germany whose findings were published in the scientific journal PLOS One, the Verge reported this month. Eighty-nine volunteers were asked to help improve a robot's interactions by completing two tasks with it: creating a weekly schedule and answering such questions as "Do you rather like pizza or pasta?" The tasks with the robot, named Nao, were actually part of a ploy, however.
A recent scientific survey off the coast of Sulawesi Island in Indonesia suggests that some shallow-water corals may be less vulnerable to global warming than previously thought. Between 2014 and 2017, the world's reefs endured the worst coral bleaching event in history, as the cyclical El Niño climate event combined with anthropogenic warming to cause unprecedented increases in water temperature. But the June survey, funded by Microsoft co-founder Paul Allen's family foundation, found the Sulawesi reefs were surprisingly healthy. In fact, they were in better condition than when they were originally surveyed in 2014, a surprise for British scientist Dr Emma Kennedy, who led the research team. "After several depressing years as a coral reef scientist, witnessing the worst-ever global coral bleaching event, it is unbelievably encouraging to experience reefs such as these," she said.
The ability to learn something new while you sleep, known as hypnopedia, has long been the dream of students cramming for upcoming exams. However, a new study suggests the practice might be impossible. Experts hooked participants up to advanced brain scanners to monitor them while they slept and while they were awake. Scientists then played the participants sounds either randomly or as part of three distinct patterns. Sleeping volunteers showed no brain activity associated with detecting the similarities in the sounds, while the participants who were awake had no trouble picking out the pattern in the recording.
A new machine-learning system is as good as the best human experts at detecting eye problems and referring patients for treatment, say scientists. The groundbreaking artificial intelligence system, developed by the AI outfit DeepMind with Moorfields eye hospital NHS foundation trust and University College London, was capable of correctly referring patients with more than 50 different eye diseases for further treatment with 94% accuracy, matching or beating world-leading eye specialists. "The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients," said Prof Sir Peng Tee Khaw, the director of the NIHR Biomedical Research Centre at Moorfields eye hospital and the UCL Institute of Ophthalmology. The two-stage AI system takes a more human-like and intelligible approach to analysing the highly complex optical coherence tomography (OCT) scans of patient retinas. These are commonly used to triage patients with sight problems into four clinical categories: urgent, semi-urgent, routine and observation only.
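To make the two-stage structure concrete, here is a minimal illustrative sketch, not DeepMind's actual model. In the real system, stage one is a deep segmentation network that turns the raw OCT scan into a tissue map, and stage two is a classification network that maps that intermediate representation onto one of the four referral categories. All function names, feature names and thresholds below are hypothetical stand-ins chosen for illustration.

```python
# Illustrative stand-in for the two-stage triage pipeline described above.
# Feature names and thresholds are invented; the real stages are deep
# neural networks, not threshold rules.

CATEGORIES = ["urgent", "semi-urgent", "routine", "observation only"]

def segment_scan(raw_scan):
    """Stage 1 (stand-in): reduce a raw scan to per-feature severity scores.

    The real system produces a 3D tissue-segmentation map; here we simply
    clamp a dict of severity scores into the range [0, 1].
    """
    return {name: min(max(score, 0.0), 1.0) for name, score in raw_scan.items()}

def triage(tissue_map):
    """Stage 2 (stand-in): map the segmentation to a referral category."""
    worst = max(tissue_map.values(), default=0.0)
    if worst > 0.8:
        return "urgent"
    if worst > 0.5:
        return "semi-urgent"
    if worst > 0.2:
        return "routine"
    return "observation only"

# Example: a scan with a severe fluid finding is referred urgently.
scan = {"fluid": 0.9, "drusen": 0.3}
print(triage(segment_scan(scan)))  # urgent
```

The value of the intermediate tissue map, as the article notes, is intelligibility: clinicians can inspect stage one's output to see why stage two chose a given referral category.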