How Much Can Autonomous Cars Learn from Virtual Worlds?
To drive safely and reliably, autonomous cars need a comprehensive understanding of what's going on around them. They need to recognize other cars, trucks, motorcycles, bikes, humans, traffic lights, street signs, and everything else that may end up on or near a road. They also have to do this in all kinds of weather and lighting conditions, which is why most (if not all) companies developing autonomous cars are spending a ludicrous (but necessary) amount of time and resources collecting data in an attempt to gain experience with every possible situation.

In most cases, this technique depends on humans annotating enormous sets of data in order to train machine learning algorithms: hundreds or thousands of people looking at snapshots or videos taken by cars driving down streets, drawing boxes around vehicles and road signs and labeling them, over and over.

Researchers from the University of Michigan think there's a better way: doing the whole thing in simulation instead. They've shown that it can actually be more effective than using real data annotated by humans.
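To make the annotation workload concrete, here is a minimal sketch of what one human-drawn label might look like in a typical object-detection pipeline: a bounding box in pixel coordinates plus a class name, converted to the normalized center/size representation many detectors train on. All names and the 1280x720 image size are illustrative assumptions, not anything from the Michigan work.

```python
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    """One human-drawn bounding box: pixel corners plus a class label.
    (Hypothetical format for illustration only.)"""
    label: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int

def to_normalized(ann: BoxAnnotation, img_w: int, img_h: int):
    """Convert pixel corners to (center_x, center_y, width, height),
    each scaled to [0, 1] -- a common input format for detector training."""
    cx = (ann.x_min + ann.x_max) / 2 / img_w
    cy = (ann.y_min + ann.y_max) / 2 / img_h
    w = (ann.x_max - ann.x_min) / img_w
    h = (ann.y_max - ann.y_min) / img_h
    return (cx, cy, w, h)

# A labeler marks a car in a 1280x720 street snapshot.
car = BoxAnnotation("car", x_min=320, y_min=180, x_max=960, y_max=540)
print(to_normalized(car, 1280, 720))  # (0.5, 0.5, 0.5, 0.5)
```

A simulator can emit records like this automatically for every rendered frame, since it already knows where each object is; that is the labor the human annotators are otherwise supplying by hand.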
Jun-8-2017, 20:00:14 GMT