One researcher is putting real humans into computerized driving simulations to help self-driving cars learn human behavior. In the not-too-distant future, Americans will be sharing the road with self-driving cars. Companies are pouring billions of dollars into developing self-driving vehicles. Waymo, formerly the Google self-driving-car project, says that its self-driving cars have already driven millions of miles on the open road. Stanford University assistant professor Dorsa Sadigh has ridden in self-driving cars.
After Dara Khosrowshahi took over as Uber's chief executive last August, he considered shutting the company's money-losing autonomous vehicle division. A visit to Pittsburgh this spring changed that. In town for a leadership summit, Mr. Khosrowshahi and other Uber executives were briefed on the state of the company's self-driving vehicle research, which is based in Pittsburgh. The group was impressed by the progress its autonomous division had made in testing driverless cars in Pittsburgh and in Arizona, according to three people familiar with the ride-hailing company, who were not authorized to speak publicly. They left the meeting energized, convinced that Uber needed to forge ahead with self-driving cars, the people said.
New teachers, new backpacks, new crushes--and algorithms trawling students' social media posts. Blake Prewitt, superintendent of the Lakeview school district in Battle Creek, Michigan, says he typically wakes up each morning to twenty new emails from a social media monitoring system the district activated earlier this year. The system uses keywords and machine learning algorithms to flag public posts on Twitter and other networks that mention or tag district schools or communities and contain language or images suggesting conflict or violence. In recent months the alert emails have flagged an attempted abduction outside one school--Prewitt checked whether the school's security cameras could aid police--and a comment about the dress code from a student's relative--district staff contacted the family. Prewitt says the alerts help him keep his 4,000 students and 500 staff safe.
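The two-part filter described above--a post must both reference the district and contain worrying language--can be sketched in a few lines. This is a minimal illustration, not the vendor's actual system: the keyword lists and function names here are hypothetical, and a real monitoring service would pair keyword matching with trained text and image classifiers rather than rely on word lists alone.

```python
import re

# Hypothetical word lists for illustration only; a production system
# would use far richer signals, including a trained classifier.
CONFLICT_KEYWORDS = {"fight", "threat", "weapon", "hurt"}
DISTRICT_TERMS = {"lakeview", "battlecreek"}

def tokenize(text):
    """Lowercase a post and split it into alphabetic tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def should_flag(post):
    """Flag a public post that both mentions the district and
    contains language suggesting conflict or violence."""
    words = tokenize(post)
    return bool(words & DISTRICT_TERMS) and bool(words & CONFLICT_KEYWORDS)

print(should_flag("Big game at Lakeview tonight!"))            # False: no conflict language
print(should_flag("There's going to be a fight at Lakeview"))  # True: both conditions met
```

Requiring both conditions is what keeps the alert volume manageable--conflict words alone would flag huge amounts of unrelated chatter, while district mentions alone would flag every routine school post.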
Can robots and workers co-exist? Workers, policymakers, and the media are concerned that automation, or technological change, will displace millions of American workers--and they are partially right. Andrew Yang, an early 2020 Presidential hopeful, is already running on the idea that "the robots are coming"--though the story is not so simple. There have been, and will continue to be, technological breakthroughs that replace workers and reshape our economy. The next big worker-displacing technology is supposedly artificial intelligence (AI), which is thought to have the potential to replace millions of workers performing routine and menial tasks.
A novel encryption method devised by MIT researchers secures data used in online neural networks without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that rely on sensitive data. Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that perform computation-heavy tasks, such as running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get results back in several hours.
Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications, but its use in high performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets like those used to study stars and planets. A team from Intel, the National Energy Research Scientific Computing Center (NERSC), and Stanford changed that, achieving a peak rate between 11.73 and 15.07 petaflops (single-precision) when running its data set on the Cori supercomputer.
Amazon.com and Microsoft have made the partnership between their two voice assistants, Alexa and Cortana, official, a year after announcing plans to expand the reach and abilities of the competing assistants. But how does this relationship between rivals work? To talk to one assistant through the other, customers say either "Cortana, open Alexa" or "Alexa, open Cortana." From there, they can talk to the other assistant as usual. What they don't get is access to each other's data, according to statements from both companies Wednesday.
Forget peer pressure: future generations are more likely to be influenced by robots, a study suggests. The research, conducted at the University of Plymouth, found that while adults were not swayed by robots, children were. The fact that children tended to trust robots without question raises ethical issues as the machines become more pervasive, the researchers said. They called on the robotics community to build in safeguards for children. Participants in the study completed a simple test, known as the Asch paradigm, which involves judging which of several comparison lines matches a reference line in length.
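The line-judgment task itself is deliberately easy, which is the point of the Asch paradigm: when a participant gives the wrong answer anyway, it is taken as evidence of social conformity rather than perceptual error. A minimal sketch of the underlying judgment, with hypothetical lengths chosen for illustration:

```python
# Sketch of the Asch line-judgment task: given a reference line and
# several comparison lines, pick the one whose length matches best.
def matching_line(reference, candidates):
    """Return the index of the candidate closest in length to the reference."""
    return min(range(len(candidates)),
               key=lambda i: abs(candidates[i] - reference))

# The correct answer is unambiguous, so deviations from it in the
# experiment are attributed to peer (or robot) influence.
print(matching_line(10.0, [6.0, 10.0, 14.0]))  # -> 1
```

In the study, the interesting measurement is not whether the software-style answer is right, but whether children abandon that obviously correct answer after hearing robots give a wrong one.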
DeepMind's artificial intelligence can now spot key signs of eye disease as well as the world's top doctors. Anonymized diagnostic data from almost 15,000 NHS patients was used to help the AI learn to spot 10 key features of eye disease in complex optical coherence tomography (OCT) retinal scans. An OCT scan uses light rather than X-rays or ultrasound to generate 3D images of the back of the eye, revealing abnormalities that may be signs of disease. The system has the potential to prevent irreversible sight loss by ensuring that patients with the most serious eye conditions receive early treatment. DeepMind's new system was developed alongside scientists at Moorfields Eye Hospital and University College London.