Devil's in the details in Historic AI debate ZDNet

#artificialintelligence

Yoshua Bengio, left, has been a machine learning researcher for decades and runs Montreal's MILA institute for AI; Gary Marcus is a psychologist at NYU and a frequent critic of the puffed-up hype around the field. Marcus, the NYU professor and entrepreneur who has made himself a gadfly of deep learning with his frequent skewering of headline hype, and Bengio, a leading practitioner of deep learning who was awarded computing's highest honor for his pioneering work, went head to head Monday night in a two-hour debate webcast from Bengio's MILA institute headquarters in Montreal. The two scholars found a good deal of common ground on the broad strokes of where artificial intelligence needs to go, such as the need to bring reasoning into AI. But when the discussion periodically turned to particular terminology or historical assertions, the two were suddenly at odds. The recorded stream of the video is posted on the organization's Facebook page if you want to go back and watch it.


DEBATE: YOSHUA BENGIO GARY MARCUS -- LIVE STREAMING

#artificialintelligence

Gary Marcus thinks that symbol manipulation is critical for causality. In biology, in a complex creature such as a human, one finds many different brain areas, and expecting a monolithic architecture to replicate that seems to Marcus deeply unrealistic. Yoshua Bengio believes that sequential reasoning can be performed while staying within a deep learning framework, one that makes use of attention mechanisms along with new kinds of modularity and new training frameworks. Bringing causality, in something like the rich form in which it is expressed in humans, into deep learning would be a real and lasting contribution to general artificial intelligence.
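To make the attention-mechanism side of Bengio's position concrete, here is a minimal sketch of scaled dot-product attention in Python with NumPy. It is purely illustrative: the function names, shapes, and toy data are assumptions made for this example, not code from MILA or from either debater.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(queries, keys, values):
    # queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # similarity of each query to each key
    weights = softmax(scores, axis=-1)       # attention weights sum to 1 per query
    return weights @ values, weights         # weighted mix of the stored values

# Toy usage: one query attending over three "memory slots".
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(q, k, v)
print(w.round(3), out.shape)
```

The relevant point for the debate is that the selection step is a differentiable weighting over stored vectors, which is what lets this kind of "reasoning" primitive remain inside an end-to-end trainable deep learning system.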


Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws ZDNet

#artificialintelligence

Artificial intelligence has to go in new directions if it's to realize the machine equivalent of common sense, and three of its most prominent proponents are in violent agreement about exactly how to do that. Yoshua Bengio of Canada's MILA institute, Geoffrey Hinton of the University of Toronto, and Yann LeCun of Facebook, who have called themselves co-conspirators in the revival of the once-moribund field of "deep learning," took the stage Sunday night at the Hilton hotel in midtown Manhattan for the 34th annual conference of the Association for the Advancement of Artificial Intelligence. The three, who were dubbed the "godfathers" of deep learning by the conference, were being honored for having received last year's Turing Award for lifetime achievements in computing. Each of the three scientists got a half-hour to talk, and each one acknowledged numerous shortcomings in deep learning, things such as "adversarial examples," where an object recognition system can be tricked into misidentifying an object just by adding noise to a picture. "There's been a lot of talk of the negatives about deep learning," LeCun noted.


Can This AI Pioneer Make Algorithms Understand Cause and Effect?

#artificialintelligence

Known as the "Nobel Prize of computing," the Turing Award is regarded as the highest honor in computer science. The three researchers received this prestigious accolade for their contributions to deep learning, a subset of artificial intelligence (AI) development that's largely responsible for the technology's current renaissance. While deep learning has unlocked vast advances in facial recognition, natural language processing, and autonomous vehicles, it still struggles to explain causal relationships in data. Not one to rest on his laurels, Bengio is now on a new mission: to teach AI to ask "Why?" Bengio views AI's inability to "connect the dots" as a serious problem.


Back-of-a-napkin AI economics

#artificialintelligence

I felt her article was lacking in discussion of causal inference and digital experimentation, two crucial ingredients for building a software platform for decision-making. So I include links to two upcoming workshops and conferences on these subjects. "Do the right thing": machine learning and causal inference for improved decision making -- NeurIPS 2019 workshop
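As a hedged illustration of the "digital experimentation" ingredient, the sketch below estimates the average treatment effect of a product change from a simulated randomized A/B test using a simple difference in means; randomization is what licenses the causal reading. The data, variable names, and effect size are invented for the example and are not drawn from the article or the linked workshops.

```python
# Toy A/B test: difference-in-means estimate of a causal lift (simulated data).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
treated = rng.integers(0, 2, size=n)               # random assignment to variant B
baseline = rng.normal(loc=5.0, scale=2.0, size=n)  # e.g. minutes of engagement
outcome = baseline + 0.3 * treated                 # true lift of 0.3 under treatment

# Because assignment is randomized, a simple difference in means estimates the
# average treatment effect.
ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
se = np.sqrt(outcome[treated == 1].var(ddof=1) / (treated == 1).sum()
             + outcome[treated == 0].var(ddof=1) / (treated == 0).sum())
print(f"estimated lift: {ate:.3f} +/- {1.96 * se:.3f} (95% CI)")
```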