Collaborating Authors: Pomerleau


The Past and Present of Imitation Learning: A Citation Chain Study

Kumar, Nishanth

arXiv.org Artificial Intelligence

INTRODUCTION. Imitation Learning is a promising area of active research. Early research in 'programming by example' began in Software Development [9] before attracting the interest of Robotics and Artificial Intelligence (AI) researchers, who began using the terms 'Learning from Demonstration' and 'Imitation Learning' to describe their line of work. Over the last 30 years, Imitation Learning has advanced significantly and has been used to solve difficult tasks ranging from Autonomous Driving [12] to playing Atari games [5]. In the course of this development, different methods for performing Imitation Learning have fallen into and out of favor. In this paper, I will explore the development of these different methods and attempt to examine how the field has progressed. I will discuss four landmark papers that sequentially cite and inform each other.
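The core 'learning from demonstration' recipe the paper surveys can be sketched in a few lines: treat imitation as supervised learning from expert-visited states to the expert's actions (behavioral cloning). Everything below -- the synthetic expert, the linear policy, the least-squares fit -- is an illustrative assumption, not the method of any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend expert whose action (e.g. a steering angle) is a fixed
# linear function of a 3-dimensional state.
true_w = np.array([0.5, -1.2, 0.3])
states = rng.normal(size=(200, 3))   # states the expert visited
actions = states @ true_w            # the expert's demonstrated actions

# Behavioral cloning: fit a policy to the (state, action) demonstrations
# by ordinary least squares.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy now reproduces the expert's choices on these states.
print(np.allclose(states @ w, actions, atol=1e-6))
```

A known weakness of this naive recipe, and one driver of the methodological shifts the paper traces, is that the learned policy is only trained on states the expert visited, so its own small errors can drift it into states it has never seen.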


Tesla's promise of 'full-self-driving' angers autonomous vehicle experts

#artificialintelligence

Washington, DC (CNN Business) Tesla is selling its cars with the option of "full self-driving capability," a feature that's drawing criticism from experts on self-driving technology. They say CEO Elon Musk is playing fast and loose with definitions, overselling the technology and potentially creating safety issues. When Tesla announced the $35,000 Model 3 Thursday, it said it would come with an optional $5,000 feature: full self-driving capability. The system will offer "automatic driving on city streets" as an update later this year, according to Tesla's website. A Tesla spokeswoman declined to comment on details around the automatic driving option, and pointed CNN Business to fine print on Tesla's order page that tells buyers the currently enabled features require "active" driver supervision and do not make the vehicle autonomous.


The Fake News Challenge Puts AI to the Test - MediaShift

#artificialintelligence

Long before Nov. 8, 2016, research scientist Dean Pomerleau was concerned about fake news. His Facebook News Feed had been filled with political disinformation during the presidential campaign, and he saw that the stories appeared to be influencing readers' attitudes toward the candidates. Though skeptical, he wondered whether it was possible to create a machine learning tool that could flag fake news stories, and discussed the idea with colleagues on Twitter. Then he issued a challenge: he bet that it was not possible, and asked his colleagues to prove him wrong. Delip Rao, the founder of Joostware, which builds Artificial Intelligence products, contacted Pomerleau and offered his help – and thus began the Fake News Challenge.


Artificial Intelligence is Going to Destroy Fake News

#artificialintelligence

With the rise of email came the rise of spam filling inboxes. Email filtering has grown sophisticated faster than spamming technology, and now the internet's junk mail is usually caught in a folder; out of sight and out of mind are messages with the subject line "Kindly get back to me urgently" and the greeting "Dear Beneficiary." There's good news for anybody who sees fake news -- not the sort that's simply true but politically difficult for the president, but actual, fake, conspiracy theory-baiting chum -- as another form of spam. At least that's what Dean Pomerleau, research scientist at Carnegie Mellon University's Robotics Institute, said recently during a panel in New York on the proliferation of fake news. We solved the spam problem using artificial intelligence, he argued, and with A.I. we can solve the problem of fake news by separating credible news from misinformation.


The Bittersweet Sweepstakes to Build an AI That Destroys Fake News

WIRED

Autonomous 18-wheelers are now driving the highways. Coffee table gadgets are recognizing spoken English nearly as well as humans. Smartphone apps instantly translate conversations between people speaking as many as nine different languages. But for Dean Pomerleau, none of this is all that surprising. Pomerleau built a self-driving car way back in 1989, when the first George Bush was president, and it navigated private roads using a neural network, the same AI technology that underpins modern gadgetry like the Amazon Echo and Microsoft Translator.


Can We Open the Black Box of AI?

#artificialintelligence

Dean Pomerleau can still remember his first tussle with the black-box problem. The year was 1991, and he was making a pioneering attempt to do something that has now become commonplace in autonomous-vehicle research: teach a computer how to drive. This meant taking the wheel of a specially equipped Humvee military vehicle and guiding it through city streets, says Pomerleau, who was then a robotics graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania. With him in the Humvee was a computer that he had programmed to peer through a camera, interpret what was happening out on the road and memorize every move that he made in response. Eventually, Pomerleau hoped, the machine would make enough associations to steer on its own.


Fatal Crash May Slow Advance of Self-Driving Cars

WSJ.com: WSJD - Technology

Advocates of driverless cars worry that the fatal crash of a Tesla Motors Inc. vehicle in self-driving mode will provoke additional regulatory oversight and slow deployment on U.S. roads of the rapidly advancing technology. The National Highway Traffic Safety Administration aims this month to release a framework for regulating self-driving cars, which could include requiring auto makers to win approval for their technologies before releasing them. That sort of approval process wasn't applied to Tesla's Autopilot system to enable hands-free driving on highways, which the electric-car maker made available on Tesla vehicles via a software update in October. Regulators said Thursday that an Ohio man was using Autopilot when his Tesla Model S crashed into an 18-wheel truck in Florida on May 7, killing him. "There will be repercussions" in regulations, said Dean Pomerleau, a Carnegie Mellon University professor who has worked on driverless cars for 25 years and led several NHTSA research programs.


DARPA wants to teach machines how to learn -- GCN

#artificialintelligence

"When we look at what's happening with artificial intelligence, we see something that is very, very powerful, very valuable for military applications, but we also see a technology that is still quite fundamentally limited," DARPA Director Arati Prabhakar said at the Atlantic Council on May 2. Aiming to define those limits, a new Defense Advanced Research Projects Agency program will try to answer a singular question: "What are the fundamental limitations inherent in machine learning systems?" Through a series of research areas of interest, the Fundamental Limits of Learning, or Fun LoL, program will assess the potential of focused investigations by developing, validating and applying a theoretical framework for learning, according to the recently released request for information. The two primary research areas focus on articulating a general mathematical framework to measure learning and applying that framework to existing machine learning methods to characterize capabilities of current techniques. Machines can master chess, Jeopardy! Because complex threats require machines to adapt and learn quickly, it is important that they be able to generalize creatively from previously learned concepts.