Trained neural nets perform much like humans on classic psychological tests


In the early part of the 20th century, a group of German experimental psychologists began to question how the brain acquires meaningful perceptions of a world that is otherwise chaotic and unpredictable. To answer this question, they developed the notion of the "gestalt effect"--the idea that when it comes to perception, the whole is something other than the sum of its parts. Since then, psychologists have discovered that the human brain is remarkably good at perceiving complete pictures on the basis of fragmentary information. A good example is the figure shown here: the brain perceives two-dimensional shapes such as a triangle and a square, and even a three-dimensional sphere.

The brain is (mostly) not a computer - ten pence piece


I recently had my attention drawn to this essay from May 2016 – The Empty Brain – written by psychologist Robert Epstein (thanks Andrew). In it, Epstein argues that the dominant information processing (IP) model of the brain is wrong. He states that human brains do not use symbolic representations of the world and do not process information like a computer. Instead, the IP model is one chained to our current level of technological sophistication. It is just a metaphor, with no biological validity.

What Is Machine Learning - A Complete Beginner's Guide


Computers have helped us to calculate the vastness of space and the minute details of subatomic particles. When it comes to counting and calculating, or following logical yes/no algorithms, computers outperform humans thanks to the electrical signals moving through their circuitry at nearly the speed of light. But we generally don't consider them "intelligent" because, traditionally, computers haven't been able to do anything themselves without being taught (programmed) by us first. So far, even if a computer had access to all of the information in the world, it couldn't do anything "smart" with it. It could find us a picture of a cat – but only because we had told it that certain pictures contain cats.

Sophisticated New AI Performs Better When It Can Sleep And Dream


In humans, evidence suggests sleep has a whole range of benefits, including this one: it keeps the brain healthy by letting neurons prune the unnecessary synaptic connections we make during the day. This process, called synaptic homeostasis, prevents the brain from being overrun by useless memories. It may also help to improve our cognitive performance, while dreams allow us to process our memories. As it turns out, something similar may be occurring when artificial neural networks are allowed to sleep and dream. Yep, you read that correctly.

How machine learning, drones, and robotics will transform the NHS and healthcare


The UK's National Health Service continues to suffer the longest funding squeeze since it was established 71 years ago. That financial pressure has resulted in the service missing, for the past three years, its targets for how soon cancer patients should be referred for treatment, and in waiting times in Accident and Emergency departments being at record levels. Such is the financial and staffing pressure on the service that talking about how recent advances in artificial intelligence (AI) could be applied to the NHS might seem fanciful. Yet Professor Tony Young, national clinical director for innovation at NHS England, believes healthcare is at an inflection point, where machine-learning technology could fuel huge advances in what's possible. "I think that healthcare is heading for one of those giant-leap moments in the next five to 10 years and AI is going to be a key tool in enabling us to take that giant leap," he said, speaking at an event in London organized by The King's Fund and IBM Watson Health.

What is the opposite of Artificial Intelligence?


When someone asked me the question "what is the opposite of Artificial Intelligence?", I didn't have a ready illustration. But just a few days ago, one of my colleagues, who shares my enthusiasm (or one could say geekaism) for AI-based technology, sent me a GIF that illustrates so well the old way of thinking about automation and autonomous software, and I thought it was the perfect illustration of the answer to that question. Nested IF/ELSE statements are NOT Artificial Intelligence; they depict exactly the opposite. The answer lies in the Artificial Neural Network, ANN for short. ANNs were developed when we realized that the human mind performs tasks in a manner that is significantly different from the way a conventional digital computer performs the same tasks.
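To make that contrast concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the article): a hand-written nested IF/ELSE "classifier" whose rules a programmer must fully specify, next to a single artificial neuron that learns the same AND-gate behaviour from examples using the classic perceptron learning rule. The function names and the learning rate are assumptions chosen for the example.

```python
# The "old way": a rule-based classifier built from nested IF/ELSE.
# Every rule is hand-written; nothing is learned.
def rule_based_and(x1, x2):
    if x1 == 1:
        if x2 == 1:
            return 1
        else:
            return 0
    else:
        return 0

# The ANN way, in miniature: a single perceptron whose weights are
# adjusted from labelled examples instead of being hand-coded.
def train_perceptron(samples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - pred          # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Labelled examples of the AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

The point of the toy: after training, `predict` agrees with `rule_based_and` on all four inputs, yet no IF/ELSE rule for the task was ever written down; the behaviour emerged from weight adjustments, which is the difference the GIF was poking fun at.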

Combining AI's Power with Self-centered Human Nature Could Be Dangerous


If we could shrink the entire history of our planet to one year, humans would have shown up at roughly 11pm on 31 Dec. In the grand scheme of things, we are insignificant. However, if we expand our thinking to the entire observable universe, our evolutionary success is a stroke of near-impossible luck that comprises all the biological conditions and chances required for us to become the dominant species on this planet. Of the 300 billion star systems in the Milky Way, Earth is the only planet on which we know life exists. Out of the estimated 8.7 million species on Earth, we became the first general intelligence.

DeepMind and Google: the battle to control artificial intelligence


One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage. Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: "So today I'm going to be talking about different approaches to building…" He stalled, as though just realising that he was stating his momentous ambition out loud. And then he said it: "AGI". AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human. AGI will be able to complete discrete tasks, such as recognising photos or translating languages, which are the single-minded focus of the multitude of artificial intelligences (AIs) that inhabit our phones and computers. But it will also add, subtract, play chess and speak French. It will also understand physics papers, compose novels, devise investment strategies and make delightful conversation with strangers. It will monitor nuclear reactions, manage electricity grids and traffic flow, and effortlessly succeed at everything else. AGI will make today's most advanced AIs look like pocket calculators. The only intelligence that can currently attempt all these tasks is the kind that humans are endowed with. But human intelligence is limited by the size of the skull that houses the brain. Its power is restricted by the puny amount of energy that the body is able to provide. Because AGI will run on computers, it will suffer none of these constraints. Its intelligence will be limited only by the number of processors available.

Ten big global challenges technology could solve

MIT Technology Review

Carbon sequestration: Cutting greenhouse-gas emissions alone won't be enough to prevent sharp increases in global temperatures. We'll also need to remove vast amounts of carbon dioxide from the atmosphere, which not only would be incredibly expensive but would present us with the thorny problem of what to do with all that CO2. A growing number of startups are exploring ways of recycling carbon dioxide into products, including synthetic fuels, polymers, carbon fiber, and concrete. That's promising, but what we'll really need is a cheap way to permanently store the billions of tons of carbon dioxide that we might have to pull out of the atmosphere.

Grid-scale energy storage: Renewable energy sources like wind and solar are becoming cheaper and more widely deployed, but they don't generate electricity when the sun isn't shining or the wind isn't blowing.

Artificial Intelligence And The Fourth Age Of Humanity - Disruption Hub


From the most basic tools of the past to the complicated machines we use today, technology has changed the course of human history. In a fundamental sense, technology augments our abilities – it helps us to solve problems, and allows us to achieve things that would never have been possible before. But there's a clear difference between the simple tools used by our distant ancestors and the artificially intelligent computer programmes currently shaping the world: one enhances our bodies, and the other supplements the workings of our brains. This raises a whole host of philosophical questions about the nature of intelligent machines, their ethical use, and the state of humanity itself. One man who has more than a passing interest in this subject is Byron Reese, GigaOm publisher, futurist, and author of the recent book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.