Machines Are Developing Language Skills Inside Virtual Worlds

MIT Technology Review

Both the DeepMind and CMU approaches use deep reinforcement learning, popularized by DeepMind's Atari-playing AI. A neural network is fed raw pixel data from a virtual environment and uses rewards, like points in a computer game, to learn by trial and error (see "10 Breakthrough Technologies 2017: Reinforcement Learning"). By running through millions of training scenarios at accelerated speeds, both AI programs learned to associate words with particular objects and characteristics, which let them follow the commands. That need for millions of training runs is why Domingos is not convinced pure deep reinforcement learning will ever crack the real world.
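The trial-and-error reward loop described above can be illustrated with a minimal sketch. This is not the DeepMind or CMU system: those use deep networks trained on raw pixels, whereas this toy uses a tabular Q-learning agent on a hypothetical five-position corridor. The environment, reward values, and hyperparameters are all illustrative assumptions; only the core idea, learning which actions to take purely from reward signals, matches the article.

```python
import random

random.seed(0)

# Toy corridor: positions 0..4, agent starts at 0, reward at position 4.
# (Illustrative stand-in for the virtual environments in the article.)
N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action; return (next_state, reward, episode_done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# Q-table: estimated value of each (state, action) pair, learned
# purely by trial and error from the reward signal.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Standard Q-learning update toward the reward-backed target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After many runs, the greedy policy moves right toward the reward.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told where the reward is; it discovers the "always move right" policy only by accumulating reward over many episodes, which is also why such methods need so many training runs.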


This shuttle bus will serve people with vision, hearing, and physical impairments, and drive itself

#artificialintelligence

It's been 15 years since a degenerative eye disease forced Erich Manser to stop driving. Today, he commutes to his job as an accessibility consultant via commuter trains and city buses, but he sometimes has trouble locating empty seats and must ask strangers for guidance. A step toward solving Manser's predicament could arrive as soon as next year. Manser's employer, IBM, and an independent carmaker called Local Motors are developing a self-driving, electric shuttle bus that combines artificial intelligence, augmented reality, and smartphone apps to serve people with vision, hearing, physical, and cognitive disabilities. The buses, dubbed "Olli," are designed to transport people around neighborhoods at speeds below 35 miles per hour and will be sold to cities, counties, airports, companies, and universities.