If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
To borrow a cliché opening from the last high school commencement or Maid of Honor speech you heard, the dictionary defines artificial intelligence (AI) as 1: a branch of computer science dealing with the simulation of intelligent behavior in computers; and 2: the capability of a machine to imitate intelligent human behavior. But do these definitions really explain the difference between an artificially intelligent system and one that's just programmed to be useful? What is "intelligent" behavior or, more specifically, "intelligent human behavior"? For many, the term "artificial intelligence" brings to mind humanoid robots like C-3PO from "Star Wars" or Dolores from "Westworld."
The likes of China, which among other things is building cruise missiles with a certain degree of autonomy, are nipping away at America's heels. The Pentagon has put artificial intelligence at the centre of its strategy to maintain the United States' position as the world's dominant military power, earmarking $US18 billion ($23.5 billion) over the next three years for developing the technology. Speaking from San Francisco ahead of a major AI industry conference, Prof Walsh said that, unlike in previous arms races, much of the progress in AI development was being made by private corporations. "It's the same sort of technology that is going to go into autonomous cars which is going to be a good thing ... but giving it the right to make life or death decisions (on the battlefield) is probably a bad idea," Prof Walsh said.
When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds.
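The idea of automatically detecting and combining such cues can be illustrated with a toy sketch. The cue names, weights, and threshold below are purely hypothetical and are not taken from the studies described above; a real system would learn these from interaction data.

```python
# Hypothetical sketch: combining multimodal cues to estimate whether a
# speaker is yielding the turn. Positive weights suggest turn-yielding,
# negative weights suggest turn-holding. All values are illustrative.

CUE_WEIGHTS = {
    "gaze_at_listener": 0.4,   # speaker looks at the listener
    "falling_pitch": 0.3,      # prosodic completion
    "breath_in": -0.2,         # inhalation suggests the speaker continues
    "hesitation_sound": -0.3,  # "uh"/"um" signals turn-holding
}

def turn_yield_score(cues):
    """Weighted sum over detected cues (each cue value in [0, 1])."""
    return sum(CUE_WEIGHTS[name] * value for name, value in cues.items())

def is_yielding(cues, threshold=0.3):
    """Decide whether the robot should take the turn."""
    return turn_yield_score(cues) >= threshold
```

In this sketch, gaze toward the listener plus falling pitch crosses the threshold, while hesitation sounds and an in-breath keep the score below it.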
One particular challenge is to ground human language in the robot's internal representation of the physical world. Although copresent in a shared environment, humans and robots have mismatched capabilities in reasoning, perception, and action. A robot not only needs to incorporate collaborative effort from human partners to better connect human language to its own representation, but also needs to make extra collaborative effort to communicate its representation in language that humans can understand. This article gives a brief introduction to this research effort and discusses several collaborative approaches to grounding language to perception and action.
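A minimal toy example can make the grounding problem concrete. The object list and matching rule below are invented for illustration, not drawn from the article: a referring expression is grounded by matching its content words against the attributes of objects the robot has perceived.

```python
# Toy illustration of grounding a referring expression to a robot's
# internal object representation. Objects and attributes are made up.

PERCEIVED = [
    {"id": "obj1", "color": "red", "shape": "cube"},
    {"id": "obj2", "color": "blue", "shape": "cube"},
    {"id": "obj3", "color": "red", "shape": "ball"},
]

def ground(expression):
    """Return ids of perceived objects matching every content word."""
    words = [w for w in expression.lower().split() if w not in ("the", "a")]
    return [obj["id"] for obj in PERCEIVED
            if all(w in obj.values() for w in words)]
```

Here "the red cube" grounds uniquely to one object, while "red" alone remains ambiguous between two; resolving such ambiguity is exactly where the collaborative effort described above comes in.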
Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security. "Now is the time to consider the design, ethical, and policy challenges that AI technologies raise," said Grosz. The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation; home/service robots; health care; education; entertainment; low-resource communities; public safety and security; and employment and the workplace. Some of the biggest challenges in the next 15 years will be creating safe and reliable hardware for autonomous cars and health care robots; gaining public trust for AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.
The rise of AI is a cause for concern for humanity, with some experts even saying that it could be more devastating than the nuclear bomb. However, if Sophia the robot is anything to go by, we have nothing to worry about. Developed by Hanson Robotics, the extremely lifelike machine was recently on show at the Global Sources Electronics show in Hong Kong, where she interacted with passers-by and even cracked a few jokes. As it stands, Sophia has 62 facial and neck mechanisms to give her that realistic human look, with cameras in the eyes that are capable of facial recognition.
The new effort by Toyota is also the latest indication of a changing of the guard in Silicon Valley's basic technology research. In September, when Dr. Pratt joined Toyota, the company announced an initial artificial intelligence research effort committing $50 million in funding to the computer science departments of both Stanford and M.I.T. In addition to focusing on navigation technologies, the new research corporation will also apply artificial intelligence technologies to Toyota's factory automation systems, Dr. Pratt said. A version of this article appears in print on November 6, 2015, on page B3 of the New York edition with the headline: Toyota Planning an Artificial Intelligence Research Center in California.
Traffic congestion costs the U.S. economy $121 billion a year, mostly due to lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions, Carnegie Mellon University professor of robotics Stephen Smith told the audience at a White House Frontiers Conference last week. In urban areas, drivers spend 40 percent of their time idling in traffic, he added. The next step is to have traffic signals talk to cars. Pittsburgh is the test bed for Uber's self-driving cars, and Smith's work on AI-enhanced traffic signals that talk with self-driving cars is paving the way for fluid, efficient autonomous intersections.
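The core idea behind adaptive signal control can be sketched with a toy greedy policy. This is not Smith's actual algorithm (which plans schedules over predicted vehicle arrivals); it simply illustrates the principle of allocating green time based on observed demand rather than a fixed cycle. All names and numbers are illustrative.

```python
# Toy sketch of demand-driven signal control: give the green to the
# approach with the longest queue, then discharge some of that queue.
# Illustrative only; real adaptive systems plan over arrival forecasts.

def pick_phase(queues):
    """queues: dict mapping approach name -> number of waiting vehicles."""
    return max(queues, key=queues.get)

def step(queues, green, service_rate=2):
    """Advance one cycle: the green approach discharges up to service_rate cars."""
    updated = dict(queues)
    updated[green] = max(0, updated[green] - service_rate)
    return updated
```

Running `step(queues, pick_phase(queues))` repeatedly drains the busiest approach first, which is the intuition behind letting signals respond to real-time traffic instead of a preset timing plan.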