New computational algorithms make it possible to build neural networks with many input nodes and many layers; this scale is what distinguishes the "deep learning" of these networks from previous work on artificial neural nets.
Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to reconstruction using low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to matching against exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into the hierarchical features of a deep neural network (DNN) for the same input image, providing a way to exploit information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method in which the pixel values of an image are optimized to make its DNN features, across multiple layers, similar to those decoded from human brain activity. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery.
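The core of the method described above is an optimization loop: start from an arbitrary image and adjust its pixels by gradient descent so that its multi-layer feature representation matches the features decoded from brain activity. The sketch below illustrates only that loop, under heavy simplification: two fixed random linear-tanh maps stand in for the layers of a pretrained DNN (the actual work uses a deep convolutional network), and the "decoded" features are simply computed from a hidden stimulus vector rather than from fMRI data. All names here (`W1`, `features`, `decoded_f1`, etc.) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a DNN's layer-wise feature extractors
# (two fixed random linear maps followed by tanh).
W1 = rng.normal(size=(64, 256)) / np.sqrt(256)  # "layer 1": 256 pixels -> 64 features
W2 = rng.normal(size=(16, 64)) / np.sqrt(64)    # "layer 2": 64 -> 16 features

def features(img):
    f1 = np.tanh(W1 @ img)
    f2 = np.tanh(W2 @ f1)
    return f1, f2

# Pretend these feature vectors were decoded from fMRI activity;
# here we derive them from a hidden "stimulus" so the target is attainable.
stimulus = rng.normal(size=256)
decoded_f1, decoded_f2 = features(stimulus)

# Optimize pixel values so the image's multi-layer features match
# the decoded ones (gradient descent on a summed squared error).
img = np.zeros(256)
lr = 0.05
for step in range(2000):
    f1 = np.tanh(W1 @ img)
    f2 = np.tanh(W2 @ f1)
    e1 = f1 - decoded_f1
    e2 = f2 - decoded_f2
    # Backpropagate the feature-matching loss down to the pixels.
    g2 = (e2 * (1 - f2**2)) @ W2      # layer-2 error pulled back to f1
    g1 = (e1 + g2) * (1 - f1**2)      # combined error at layer-1 pre-activation
    img -= lr * (W1.T @ g1)

f1, f2 = features(img)
final_loss = np.sum((f1 - decoded_f1)**2) + np.sum((f2 - decoded_f2)**2)
initial_loss = np.sum(decoded_f1**2) + np.sum(decoded_f2**2)  # loss at img = 0
```

In the real method the loss is computed over the features of many layers of a pretrained network at once, which is what pushes the reconstruction to agree with brain-decoded representations at multiple levels of the visual hierarchy rather than at a single level.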
There is a new race in Silicon Valley involving artificial intelligence, and no, it's not HealthTech, FinTech, or voice commerce, nor does it involve Google, Facebook, or Microsoft... this race involves the brain, and more specifically brain-computer interfaces. This race also involves technology royalty, the US government, billion-dollar defence companies, a big connection to PayPal, and years of medical research to better understand the human brain and implant devices that could make a consumer brain-computer interface a reality. The race is called "Neural implants, merging the human brain with AI". So what exactly are neural implants? Brain implants, often referred to as neural implants, are technological devices that connect directly to a biological subject's brain, usually placed on the surface of the brain or attached to the brain's cortex. A common purpose of modern brain implants, and the focus of much current research, is establishing a biomedical prosthesis that circumvents areas of the brain that have become dysfunctional after a stroke or other head injury.
Elon Musk says merging biological intelligence and artificial intelligence is important to help human beings deal with the AI apocalypse. Almost exactly a month ago, Elon Musk introduced a room of engineers and curious consumers to a sci-fi-sounding invention made by his neurotechnology startup Neuralink: an implantable "brain chip" that will "merge biological intelligence with machine intelligence." Per Musk's description, this chip will be installed in a person's brain by drilling a two-millimeter hole in the skull. "The interface to the chip is wireless, so you have no wires poking out of your head," he assured. Musk argued that such devices will help humans deal with the so-called AI apocalypse, a scenario in which artificial intelligence outpaces human intelligence and takes control of the planet away from the human species.
But that's not such a good idea, says cognitive psychologist Susan Schneider. In fact, she wrote this week in an op-ed for the Financial Times that the project could amount to "suicide for the human mind." To make sure that humanity is neither conquered nor left behind by future AI, Musk's plan is to merge the human brain with computers that would turbo-boost our intelligence. But Schneider, a researcher at the University of the Pacific, argues that merging your brain with machines could amount to accidentally killing yourself. "You could augment your intelligence with chips, but there will be a point at which you end your life," she wrote.
For neuroscientist Professor Katharina von Kriegstein from TU Dresden, however, the human brain remains the "most admirable speech processing machine." "It works much better than computer-based speech processing and will probably continue to do so for a long time to come," comments Professor von Kriegstein, "because the exact processes of speech processing in the brain are still largely unknown." In a recent study, the neuroscientist from Dresden and her team discovered another building block in the mystery of human speech processing. In the study, 33 participants were examined using functional magnetic resonance imaging (fMRI). The participants received speech signals from different speakers.
These new ultrafast artificial intelligence algorithms (inspired by slow brain dynamics) have the potential to outperform the learning rates achieved by state-of-the-art learning algorithms to date. Through this technology, the scientists, from Bar-Ilan University, are aiming to rebuild a bridge between neuroscience and advanced artificial intelligence algorithms, one that was proposed some 70 years ago. Discussing what is involved, lead researcher Professor Ido Kanter says: "The current scientific and technological viewpoint is that neurobiology and machine learning are two distinct disciplines that advanced independently." The Israeli researchers have challenged this dichotomy. They contend there is merit in studying the slower human brain even in the era of super-fast computers, since the human brain is still capable of doing and perceiving many things that artificial intelligence cannot.
Miniature brains grown in a lab exhibit remarkably similar activity to preterm babies' brains. This dispels the idea that human brains need to develop in a womb or be connected to other organs to function. Scientists have long been trying to grow realistic models of human brains to better understand how our brains work and make it easier to test new treatments for neurological disorders. However, until now, it was assumed that these models wouldn't be able to recreate the sophisticated connections found in real brains. "We previously assumed that the human brain needs some input from other organs and from the mother's uterus to thrive," says Alysson Muotri at the University of California, San Diego.
We learn from our personal interaction with the world, and our memories of those experiences help guide our behaviors. Experience and memory are inextricably linked, or at least they seemed to be before a recent report on the formation of completely artificial memories. Using laboratory animals, investigators reverse engineered a specific natural memory by mapping the brain circuits underlying its formation. They then "trained" another animal by stimulating brain cells in the pattern of the natural memory. Doing so created an artificial memory that was retained and recalled in a manner indistinguishable from a natural one.
There is still much confusion about this point in the AI community. With this article, I want to present my view on the relationship between duplication and simulation, because it is of great importance that there is clarity here. In one of my previous articles (in German), "Können Maschinen menschliches Bewusstsein hervorbringen?" ("Can machines produce human consciousness?"), I briefly touched on the subject, noting von Neumann's skepticism about the possibility of using a computer to duplicate the activities of the human brain. Now we will try to get to the bottom of this question a little more thoroughly. The philosopher John Searle has attached great importance to this point by explaining that a simulation is not a duplication, and that a machine cannot duplicate human thought but at best simulate it.