Google I/O 2016 Preview: Android VR, Project Tango, Android N, Chrome OS, Self-Driving Cars And More

International Business Times

We'll find out this week what the tech giant is planning and how virtual and augmented reality factor into its goal of remaining the world's most valuable company, along with updates on the world's most popular operating system, Android, and possibly details of a return to China. On Wednesday, Google CEO Sundar Pichai will take the stage at the Shoreline Amphitheatre -- a 15-minute walk from his office at Google's Mountain View, California, headquarters -- to deliver an update on the company's current and coming projects. Nominally a conference for developers, Google I/O is really Google's way of keeping everyone up to date on what it is working on: giving underperforming products a promotional push and keeping long-gestating projects from slipping out of public memory. Last year it was all about a big update to Android; the previous year Google pushed Android TV and Android Wear; in 2013 it was the launch of Google Music; and in 2012 a team of skydivers dropped onto the Moscone Center stage in San Francisco during the keynote, shooting their exploits on Google Glass and live-streaming them to an awestruck audience. This year, while we will hear about Android, Chrome OS, driverless cars and Project Ara, the big focus will be on the tech world's hot topic: virtual reality.


How Drive.ai Is Mastering Autonomous Driving with Deep Learning

#artificialintelligence

Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area--even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars and to find out how it's using deep learning to master autonomous driving.


Google I/O is calling all Android robot programmers

#artificialintelligence

Pepper the robot participates in a Japanese ribbon-cutting ceremony earlier this year. Its manufacturer, SoftBank Robotics, is opening new offices in San Francisco and releasing a development kit for Android programmers. MOUNTAIN VIEW, Calif. - Pepper the robot is coming to our shores later this year, and its creators want Android developers' help in making it smarter. Japan-based SoftBank Robotics announced Wednesday at Google I/O, Google's annual developer conference, that it is opening a new Pepper-focused outpost in San Francisco and unveiling an Android SDK, or software development kit, in hopes of enticing programmers to write code for the robot. "Pepper is ultimately an unfinished product, and we just wanted to incentivize developers to expand the ways in which people can engage with a humanoid robot," says Steve Carlin, vice president of SoftBank Robotics Americas, which has an existing office in Boston.


It's a facial-recognition bonanza: Oakland bans it, activists track it, and pics taken from dating-site OkCupid feed it

#artificialintelligence

We'll be talking about everyone's favorite topic at the moment: facial recognition. First San Francisco, Somerville ... now Oakland: Oakland, California, has become the third US city to ban its local government from using facial recognition technology, after its council passed an ordinance this week. Council member Rebecca Kaplan submitted the ordinance for city officials to consider in June. The document describes the shortcomings of the technology and why it should be banned. "The City of Oakland should reject the use of this flawed technology on the following basis: 1) systems rely on biased datasets with high levels of inaccuracy; 2) a lack of standards around the use and sharing of this technology; 3) the invasive nature of the technology; 4) and the potential abuses of data by our government that could lead to persecution of minority groups," according to the ordinance.