Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic waste clean-up and desert and space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
Elon Musk has hired a new director of AI research at Tesla, and the move may signal a plan to rethink the way its automated driving works. This week, Musk poached Andrej Karpathy, an expert in computer vision, deep learning, and reinforcement learning, from OpenAI, a nonprofit funded by Musk and others that is dedicated to "discovering and enacting the path to safe artificial general intelligence." After Stanford, Karpathy interned at DeepMind, where reinforcement learning is a major focus. Appointing Karpathy as Tesla's director of AI research indicates something else about the challenge of autonomous driving: there is some distance left to go before it is solved (see "What to Know Before You Get in a Self-Driving Car").
To borrow a cliché opening from the last high school commencement or maid-of-honor speech you heard, the dictionary defines artificial intelligence (AI) as 1: a branch of computer science dealing with the simulation of intelligent behavior in computers; and 2: the capability of a machine to imitate intelligent human behavior. But do these definitions really explain the difference between an artificially intelligent system and one that's just programmed to be useful? What is "intelligent" behavior or, more specifically, "intelligent human behavior"? For many, the term "artificial intelligence" brings to mind humanoid robots like C-3PO from "Star Wars" or Dolores from "Westworld."
The likes of China -- which, among other things, is building cruise missiles with a certain degree of autonomy -- are nipping away at America's heels. The Pentagon has put artificial intelligence at the centre of its strategy to maintain the United States' position as the world's dominant military power, earmarking $US18 billion ($23.5 billion) over the next three years for developing the technology. Speaking from San Francisco ahead of a major AI industry conference, Prof Walsh said that unlike previous arms races, much of the progress in AI development was being made by private corporations. "It's the same sort of technology that is going to go into autonomous cars, which is going to be a good thing ... but giving it the right to make life or death decisions (on the battlefield) is probably a bad idea," Prof Walsh said.
One particular challenge is to ground human language in a robot's internal representation of the physical world. Although copresent in a shared environment, humans and robots have mismatched capabilities in reasoning, perception, and action. A robot not only needs to incorporate collaborative effort from human partners to better connect human language to its own representation, but also needs to make extra collaborative effort to communicate its representation in language that humans can understand. This article gives a brief introduction to this research effort and discusses several collaborative approaches to grounding language to perception and action.
Amato, Christopher (University of New Hampshire) | Amir, Ofra (Harvard University) | Bryson, Joanna (University of Bath) | Grosz, Barbara (Harvard University) | Indurkhya, Bipin (Jagiellonian University) | Kiciman, Emre (Microsoft Research) | Kido, Takashi (Rikengenesis) | Lawless, W. F. (Paine College) | Liu, Miao (Massachusetts Institute of Technology) | McDorman, Braden (University of Southern California) | Mead, Ross (Semio) | Oliehoek, Frans A. (University of Amsterdam) | Specian, Andrew (University of Pennsylvania) | Stojanov, Georgi (American University of Paris) | Takadama, Keiki (University of Electro-Communications)
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.
Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security. "Now is the time to consider the design, ethical, and policy challenges that AI technologies raise," said Grosz. The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation; home/service robots; health care; education; entertainment; low-resource communities; public safety and security; and employment and the workplace. Some of the biggest challenges in the next 15 years will be creating safe and reliable hardware for autonomous cars and health care robots; gaining public trust for AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.
Department of Transportation Secretary Anthony Foxx recently released a 116-page policy document that aims to guide automakers and technologists on best practices for the manufacturing and deployment of autonomous vehicle features. Apple, which has been rumored to be building a car, recently laid off employees of its automotive project and pivoted from making a car to creating autonomous software, according to reports. Another aftermarket self-driving tech company recently completed a successful 120-mile beer delivery without anyone at the wheel. A big-rig cab equipped with sensors made by Otto, a startup recently bought by Uber for $670 million, made the delivery of Budweiser beer while its driver rested in the sleeper berth during most of the trip down Colorado's Interstate 25.
The new effort by Toyota is also the latest indication of a changing of the guard in Silicon Valley's basic technology research. In September, when Dr. Pratt joined Toyota, the company announced an initial artificial intelligence research effort committing $50 million in funding to the computer science departments of both Stanford and M.I.T. In addition to focusing on navigation technologies, the new research corporation will also apply artificial intelligence technologies to Toyota's factory automation systems, Dr. Pratt said. A version of this article appears in print on November 6, 2015, on page B3 of the New York edition with the headline: Toyota Planning an Artificial Intelligence Research Center in California.