Machines Are Developing Language Skills Inside Virtual Worlds

MIT Technology Review

Machines are learning to process simple commands by exploring 3-D virtual worlds. Devices like Amazon's Alexa and Google Home have brought voice-controlled technology into the mainstream, but they still handle only simple commands. Making machines smart enough to hold a real conversation remains a very tough challenge, and it may be difficult to achieve without some grounding in the way the physical world works. Attempts to solve the problem by hard-coding relationships between words, objects, and actions require endless rules, leaving a machine unable to adapt to new situations.
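As a rough illustration of the idea (not the systems described here; every name below is hypothetical), a few lines of Python show the difference between hard-coding and grounding: the agent starts with no rule linking a word like "red" to any object, and learns the association purely from a reward signal, so a new word costs no new rules.

    import random
    from collections import defaultdict

    # Toy sketch of grounded language learning; all names here are hypothetical.
    # Instead of a hand-written rule saying what "red" refers to, the agent
    # learns the word-object association purely from a reward signal.

    COLORS = ["red", "green", "blue"]

    def run_episode(beliefs, command):
        """One trial: the agent picks the object it thinks the command names."""
        target = command.split()[-1]              # the environment knows the answer
        scores = beliefs[command]
        if random.random() < 0.2:                 # explore occasionally
            choice = random.choice(COLORS)
        else:                                     # otherwise act on current belief
            choice = max(COLORS, key=lambda c: scores[c])
        reward = 1.0 if choice == target else 0.0
        # Nudge the chosen object's score toward the reward it earned.
        scores[choice] += 0.5 * (reward - scores[choice])

    beliefs = defaultdict(lambda: defaultdict(float))
    for _ in range(500):
        run_episode(beliefs, "find " + random.choice(COLORS))

    # No rule was ever written down, yet each word is now grounded in experience.
    for command, scores in sorted(beliefs.items()):
        print(command, "->", max(COLORS, key=lambda c: scores[c]))

A hard-coded approach would instead need an explicit entry for every word-object-action combination, which is exactly the rule explosion described above.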

This shuttle bus will serve people with vision, hearing, and physical impairments, and drive itself

It's been 15 years since a degenerative eye disease forced Erich Manser to stop driving. Today, he commutes to his job as an accessibility consultant via commuter trains and city buses, but he sometimes has trouble locating empty seats and must ask strangers for guidance. A step toward solving Manser's predicament could arrive as soon as next year. His employer, IBM, and an independent carmaker called Local Motors are developing a self-driving, electric shuttle bus that combines artificial intelligence, augmented reality, and smartphone apps to serve people with vision, hearing, physical, and cognitive disabilities. The buses, dubbed "Olli," are designed to transport people around neighborhoods at speeds below 35 miles per hour and will be sold to cities, counties, airports, companies, and universities.