Landing AI hires vision expert Dechow to correct the Big Data fallacy

#artificialintelligence

The field of deep learning has been suffering from what you might call a Big Data fallacy, the belief that more and more data is always a good thing. It may be time to focus on quality rather than just quantity. "There's a very fundamental problem that a lot of AI faces," said Andrew Ng, founder and CEO of Landing AI, a startup working to perfect the technology for industrial uses, in an interview with ZDNet this week. "A lot of AI is focused on maximizing the number of calories, which works up to a certain point," he said. "And sometimes you do have a lot of data, but when you have a small data set, it's more the quality of the data rather than the sheer volume."


Artificial intelligence brings our century's Era of Enlightenment

#artificialintelligence

In 2017, Google DeepMind developed an artificial intelligence (AI) program called AlphaZero. It was programmed to play chess against an earlier program called Stockfish. The difference between the two: Stockfish, then the dominant program in chess, was programmed with all the moves that could be made in chess matches and made its choices from this database. AlphaZero was different: it used logic of its own, informed by the ability to recognize patterns of moves across a vast series of possibilities, many not conceived by human minds. It learned from these patterns of possibilities, actually playing against itself to build this knowledge base.


What is AI? Stephen Hanson in conversation with Terry Sejnowski

AIHub

Hanson: Terry, thanks so much for joining this videocast or podvideo, I don't really know what to call it. When I started trying to conceptualize what I was getting at, I wanted to talk to people who had a clear and obvious perspective on what they thought AI is. And you're particularly unique, and special in this context, because you have been consistent since… Well, there's a great book that you have a chapter in, which I think Jim Anderson edited in 1981, called "Parallel Models of Associative Memory".

Sejnowski: It's interesting you brought that up, because I met Geoff Hinton in San Diego in 1979 at a workshop he and Jim organized that resulted in that book. It was my first neural network workshop. We were all interested in the same things. There was no neural network organization or community at that time – we were a bunch of isolated researchers working on our own.

Hanson: And probably not well appreciated, for talking about neural networks, or neural modelling.

Sejnowski: We were the outliers. But we had a great time talking with each other.

Hanson: Going back to the book, you had a chapter called "Skeleton filters in the brain" – I think that was the name of it. Perhaps not the best title in the world, but still… "Skeleton filters" is a little scary, I gotta say. But it was a really incredibly easy read – I just read it the other day again. And in it, you're really going in a subtle way from biophysics – modelling a neuron, and referencing everybody, you know, Cowan, and everybody who'd developed a differential equation or anything – up to semantics and cognition. But biophysical modelling, this kind of category you might associate with the biophysics of neural modelling, holds that neurons and circuits matter and that's what we're modelling – that's the purpose of it. For example, I think you mentioned Hartline and Ratliff, and the Limulus crab retina. And this provided an enormous amount of data well into the 60s, where people were actually modelling, and there were predictions, and it was very tightly tied to the crab.

Sejnowski: By the way, although it's called a Horseshoe Crab, and looks like one, Limulus has eight legs, so it's an arachnid.


Neural Network From Scratch

#artificialintelligence

In this edition of Napkin Math, we'll invoke the spirit of the Napkin Math series to establish a mental model for how a neural network works by building one from scratch. In a future issue we will do napkin math on performance, as establishing the first-principles understanding is plenty of ground to cover for today! Neural nets increasingly dominate the field of machine learning / artificial intelligence: the most sophisticated models for computer vision, machine translation (e.g. Google Translate), and more are based on neural nets. When these artificial neural nets reach some arbitrary threshold of neurons, we call it deep learning. A visceral example of deep learning's unreasonable effectiveness comes from this interview with Jeff Dean, who leads AI at Google.
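To make the "from scratch" idea concrete, here is a minimal sketch in pure Python – not the Napkin Math author's actual code – of a tiny 2-2-1 network trained on XOR with plain stochastic gradient descent. The architecture, learning rate, and epoch count are illustrative choices, not from the article.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network trained on XOR.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

# Hidden layer: 2 neurons, each with 2 weights + a bias; output: 1 neuron.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 0.5
loss_before = mse()
for _ in range(5000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        # Backpropagate the squared-error gradient through both layers.
        d_o = (o - y) * o * (1 - o)
        for j in range(2):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        w_o[0] -= lr * d_o * h[0]
        w_o[1] -= lr * d_o * h[1]
        w_o[2] -= lr * d_o
loss_after = mse()
```

The whole mechanism is three ideas: a weighted sum squashed by a sigmoid, a loss measuring how wrong the output is, and the chain rule pushing that error back through each weight.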


Game Theory In Artificial Intelligence

#artificialintelligence

I want to start off with a quick question – can you recognize the two personalities in the image below? I'm certain you got one right. For most of us early-age math enthusiasts, the movie "A Beautiful Mind" is inextricably embedded in our memory. Russell Crowe plays John Nash in the movie, a Nobel prize winner in economics (and the person on the left-hand side above). Now, you might remember the iconic scene, often summed up as "Don't go after the blonde": "…the best outcome would come when everyone in the group is doing what's best for himself and the group."
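Nash's idea – an outcome where no player can gain by deviating alone – can be checked mechanically. As an illustrative sketch (the payoff numbers below are the textbook prisoner's dilemma, not values from the article), this brute-forces the pure-strategy Nash equilibria of a 2x2 game:

```python
# Payoff matrices for the prisoner's dilemma (row player, column player).
# Strategies: 0 = cooperate, 1 = defect.
ROW = [[3, 0],
       [5, 1]]
COL = [[3, 5],
       [0, 1]]

def is_nash(r, c):
    """(r, c) is a Nash equilibrium if neither player gains by deviating alone."""
    row_best = all(ROW[r][c] >= ROW[alt][c] for alt in range(2))
    col_best = all(COL[r][c] >= COL[r][alt] for alt in range(2))
    return row_best and col_best

equilibria = [(r, c) for r in range(2) for c in range(2) if is_nash(r, c)]
# The only equilibrium is mutual defection (1, 1), even though
# mutual cooperation (0, 0) would pay both players more.
```

This is exactly the tension the bar scene dramatizes: the stable outcome is not the one that is best for the group.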


MIT Researchers Just Discovered an AI Mimicking the Brain on Its Own

#artificialintelligence

In 2019, The MIT Press Reader published a pair of interviews with Noam Chomsky and Steven Pinker, two of the world's foremost linguistic and cognitive scientists. The conversations, like the men themselves, vary in their framing and treatment of key issues surrounding their areas of expertise. When asked about machine learning and its contributions to cognitive science, however, their opinions gather under the banner of skepticism and something approaching disappointment. "In just about every relevant respect it is hard to see how [machine learning] makes any kind of contribution to science," Chomsky laments, "specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed." While Pinker adopts a slightly softer tone, he echoes Chomsky's lack of enthusiasm for how AI has advanced our understanding of the brain: "Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition -- mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence."


DeepMind Solves AGI, Summons Demon

#artificialintelligence

In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. It turns out, we may not have to wait much longer. This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions.


2021 in review: AI firm DeepMind solves human protein structures

New Scientist

IT TOOK decades for scientists to unlock the structure of just 17 per cent of the proteins in the human body. But UK-based AI company DeepMind raised the bar to 98.5 per cent in July when it announced that its AlphaFold model could quickly and reliably calculate the way proteins fold. This could lead to targeted drugs that bind to specific parts of molecules. We caught up with Pushmeet Kohli at DeepMind to see how work is progressing with mapping almost every one of the more than 100 million known proteins that have been sequenced from across the tree of life. Were you surprised at the success of AlphaFold, considering that figuring out protein folding previously required vast supercomputers?


How the DeepMind Scholarship Benefits Our Students

#artificialintelligence

We caught up with Darius to discuss a bit about his experience with the DeepMind scholarship and how it has supported both his educational and professional development. This interview was lightly edited for clarity. Tell us a bit about your experience as a DeepMind Scholar. I was very happy to be selected as a DeepMind Scholar. It created many opportunities for me by allowing me to live and work in New York, where it was very easy to network and learn from a diverse group of accomplished data scientists.