Recent AI advances in speech recognition, game-playing, image understanding, and language translation have all been based on a simple concept: multiply some numbers together, set some of them to zero, and then repeat. Since "multiplying and zeroing" doesn't inspire investors to start throwing money at you, these models are instead presented under the much loftier banner of "deep neural networks." Ever since the first versions of these networks were invented by Frank Rosenblatt in 1957, there has been controversy over how "neural" these models are. The New York Times proclaimed these first programs (which could accomplish tasks as astounding as distinguishing shapes on the left side versus shapes on the right side of a paper) to be "the first device to think as the human brain." Deep neural networks remained mostly a fringe idea for decades, since they typically didn't perform very well, due (in retrospect) to the limited computational power and small dataset sizes of the era.
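The "multiply and zero" description above is literal: a neural-network layer multiplies its inputs by a matrix of weights, sums the products, and sets negative results to zero (the ReLU activation); "repeat" means stacking such layers. Here is a minimal sketch in plain Python, with made-up illustrative weights:

```python
# One "layer" of a neural network, reduced to its essence:
# multiply inputs by weights, sum, then zero out negatives (ReLU).
# The weight values below are arbitrary, for illustration only.

def relu(x):
    """Set a negative value to zero; pass positives through."""
    return max(0.0, x)

def layer(inputs, weights):
    """Multiply an input vector by a weight matrix, then apply ReLU."""
    outputs = []
    for row in weights:  # one row of weights per output unit
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(relu(total))
    return outputs

# "Repeat": stack two layers to form a tiny network.
x = [1.0, -2.0]
w1 = [[0.5, -1.0], [1.0, 1.0]]  # first layer: 2 inputs -> 2 units
w2 = [[1.0, -1.0]]              # second layer: 2 inputs -> 1 unit
h = layer(x, w1)   # -> [2.5, 0.0]
y = layer(h, w2)   # -> [2.5]
```

Everything a deep network "learns" lives in those weight matrices; training is the process of nudging their values so the final outputs become useful.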
Today things are a little different. Thanks to the rollout of the internet, the proliferation of mobile, data-gathering phones and other devices, and the adoption of online, connected technology in industry, we have more data than we know what to do with. No human brain can hope to process even a fraction of the digital information now available. The idea that a machine can is one half of what is driving the world-changing breakthroughs we are seeing today. The other half is the "brain" of machine learning: beyond simply ingesting data, a machine has to process it in order to learn.
Deep learning is not as complex a concept as people outside the sciences often assume. Scientific progress has reached a stage where much exploratory and applied research needs the assistance of artificial intelligence. Since machines follow a given set of algorithms to understand and react to tasks within seconds, working with them broadens the scope of scientific breakthroughs, leading to techniques and procedures that make human life simpler and richer. To work alongside us, however, machines need to understand and recognize things much the way the human brain does. For example, we may recognize an apple through its shape and colour.
"Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition." We often use human consciousness as the ultimate benchmark for artificial exploration. The human brain is ridiculously intricate. While weighing only three pounds, it contains about 100 billion neurons and 100 trillion connections between those.
Deep learning is much more like the human brain than traditional machine learning is. Consider the way your brain interprets faces, for example. Your conscious self recognizes the whole face as a distinct person by interpreting the relationships between its parts at an astounding pace. You can't label each relationship your brain has identified, or even quantify and write out the variables it is weighing. These things happen without your knowledge, so to speak.