A scientist from Russia has developed a new neural network architecture and tested its learning ability on the recognition of handwritten digits. The network's intelligence was amplified by chaos, and its classification accuracy reached 96.3%. The network can run on microcontrollers with a small amount of RAM and be embedded in household items such as shoes or refrigerators, making them 'smart'. The study was published in Electronics. Today, the search for new neural networks that can operate on microcontrollers with limited RAM is of particular importance.
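The article does not spell out the architecture, but the appeal of "amplification by chaos" on small devices is easy to illustrate: a deterministic chaotic map can generate a fixed projection layer on the fly, so the weights never need to be stored in RAM, and only a small output layer is trained. The sketch below is a hypothetical illustration of that general idea, not the published architecture; the function names, the logistic-map parameters, and the synthetic two-class data are all assumptions made for the demo.

```python
import numpy as np

def chaotic_weights(n_out, n_in, r=3.9, x0=0.1):
    """Fill a fixed projection matrix with values from a logistic map.

    The sequence x_{k+1} = r * x_k * (1 - x_k) is chaotic but fully
    deterministic, so the matrix can be regenerated from just (r, x0)
    instead of being stored -- attractive on RAM-constrained
    microcontrollers.
    """
    x = x0
    vals = np.empty(n_out * n_in)
    for k in range(vals.size):
        x = r * x * (1.0 - x)
        vals[k] = x
    return (vals.reshape(n_out, n_in) - 0.5) * 2.0  # centre around 0

def features(X, W):
    """Project inputs through the fixed chaotic layer, tanh nonlinearity."""
    return np.tanh(X @ W.T / np.sqrt(W.shape[1]))

# Toy stand-in for digit data: two Gaussian clusters in 64 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(1.0, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

W = chaotic_weights(32, 64)   # only the seed values (r, x0) need storing
H = features(X, W)

# Train only the small output layer: least squares on projected features.
coef, *_ = np.linalg.lstsq(H, np.where(y == 0, -1.0, 1.0), rcond=None)
pred = (H @ coef > 0).astype(int)
acc = (pred == y).mean()
```

On this easy synthetic task the tiny trained readout separates the classes well; the point of the sketch is the memory trade-off, not the accuracy figure.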
Earlier this year, researchers from Russia's Neurobotics Corporation and a team at the Moscow Institute of Physics and Technology worked out how to visualize human brain activity as images mimicking what a person observes in real time. This breakthrough in artificial neural network technology will eventually enable post-stroke rehabilitation devices controlled by signals from the brain. The team uploaded their research as a preprint on the bioRxiv website and also shared a video showcasing their 'mind-reading' device at work. To develop devices that can be controlled by humans, as well as treatments for cognitive disorders and post-stroke rehabilitation, neurobiologists must understand how the brain encodes information. A critical step in creating these technologies is the ability to study brain activity using visual perception as a marker.
Researchers from the Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images mimicking what they observe in real time. This will enable new post-stroke rehabilitation devices controlled by brain signals. The team published its research as a preprint on bioRxiv and posted a video online showing their "mind-reading" system at work. To develop devices controlled by the brain and methods for cognitive disorder treatment and post-stroke rehabilitation, neurobiologists need to understand how the brain encodes information. A key aspect of this is studying the brain activity of people perceiving visual information, for example while watching a video.
It has long been the stuff of science fiction, but mind-reading machines may actually be here, and they may not be invasive. Researchers from the Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images without the use of invasive brain implants. The work has the potential to enable new non-invasive post-stroke rehabilitation devices controlled by brain signals, as well as novel cognitive disorder treatments. To achieve such applications, neurobiologists need to understand how the brain encodes information by studying it in real time, such as when a person is watching a video. This is where the new brain-computer interface developed by the researchers comes in.
As part of the NeuroNet NTI Assistive Neurotechnology project, employees of the Neurobotics Group of Companies and the Moscow Institute of Physics and Technology have trained neural networks to recreate images from the electrical activity of the brain. No such experiments had previously been performed on EEG data (other scientists used fMRI or analyzed signals directly from neurons). In the future, this discovery will enable a new type of device for post-stroke rehabilitation.
Despite the widespread consensus on the brain's complexity, sprouts of a single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including "grandmother" (concept) cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality long seemed an unsolvable problem. Nevertheless, the idea of a blessing of dimensionality is gradually gaining popularity. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons, we revisit the background ideas of statistical physics, which over the course of the 20th century were developed into the concentration of measure theory. New stochastic separation theorems reveal the fine structure of data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can a high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? Two critical applications exemplify the approach: one-shot correction of errors in intellectual systems and the emergence of static and associative memories in ensembles of single neurons.
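The "blessing of dimensionality" behind these separation theorems can be seen numerically: for random points in high dimension, a single sample can almost always be cut off from all the others by one simple linear test of the Fisher type, <x, y> < <x, x> for every other point y — which is what makes one-shot, single-functional error correction feasible. The sketch below is an illustrative experiment under assumed conditions (points drawn uniformly from a cube, hypothetical sample sizes and dimensions), not a reproduction of the paper's theorems.

```python
import numpy as np

def fisher_separable(x, others):
    """Fisher-type separability test: <x, y> < <x, x> for every other y.

    If it holds, the single linear functional f(z) = <x, z> - <x, x>
    separates x from the whole rest of the sample -- a "simple unit"
    that can carve out one point (e.g. one misclassified example).
    """
    return np.all(others @ x < x @ x)

rng = np.random.default_rng(1)

def separability_rate(dim, n_points=1000, trials=50):
    """Fraction of trials in which the first sample point is separable."""
    hits = 0
    for _ in range(trials):
        pts = rng.uniform(-1, 1, (n_points, dim))
        if fisher_separable(pts[0], pts[1:]):
            hits += 1
    return hits / trials

low = separability_rate(3)     # low dimension: the test usually fails
high = separability_rate(200)  # high dimension: it almost always succeeds
```

The contrast between the two rates is the phenomenon the theorems quantify: concentration of measure makes inner products between independent high-dimensional points small relative to the squared norm, so the naive linear test succeeds with overwhelming probability.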
Many of the biggest names in the technology industry are consumed with developing an artificial general intelligence, or AGI. Unlike today's leading artificial intelligence software, an AGI wouldn't need flesh-and-blood trainers to figure out how to translate English to Mandarin or spot tumors in an X-ray. In theory, it would have some measure of independence from its creators, solve complex, novel problems on its own, and herald an era in which humankind is no longer superior to machines. The consensus among our pitiful fleshbrains is that if humans ever manage to create an AGI, it'll arise in Mountain View, Calif., Beijing, or Moscow. All three cities are near world-class AI research universities and are home to companies that have pumped billions into the AGI race. There exists, however, a chance that the breakthrough will come from the Swiss city of Lugano. The picturesque slice of Switzerland's southern tip is home to about 60,000 people, including a computer scientist named Jürgen Schmidhuber. He's a professor, a researcher, and the co-founder of a 25-employee AI startup called Nnaisense.
Artificial intelligence (AI) has made a few spectacular-sounding headlines this year, with various organizations showing off their own digital creations' cognitive capabilities. These tend to relate to "logical" intelligence, dealing with mathematics, rationality, and decision-making. However, just this month, a team of researchers from the National Research Nuclear University Moscow Engineering Physics Institute (NRNU MEPhI) announced that they are developing an AI with both narrative and emotional intelligence. If this so-called "Virtual Actor" (VA) is able to understand human emotions, it will buck the trend in the types of AIs emerging from other research teams across the world. Google's DeepDream, for example, is a convolutional neural network that can "dream up" truly surreal, hallucinogenic images.
A new app that combines a neural network with facial recognition software to put names to faces in random photographs by scanning social network data is being used in Russia to identify and harass young women who have previously appeared in pornographic films. Trinity Digital, an app developer in Russia, released a free iOS and Android app called FindFace in February that lets users identify a stranger from a single photograph, using its neural network to work out the person's name, location, occupation, and other details. A neural network is a machine learning model, loosely inspired by the structure of the brain, that is trained on large amounts of data to solve complex pattern-recognition problems such as matching faces. On 9 April, members of a disreputable imageboard called Dvach, a Russian cousin of 4chan, launched a campaign to deliberately locate and identify actresses who appear in pornography, as well as women listed on Intimcity, a Russian website advertising prostitution and escort services. Using FindFace, the Dvach users not only identified the women but also publicly shared archived copies of their profiles on Vkontakte (the Russian equivalent of Facebook) and repeatedly messaged the women's families to tell them they had been outed as porn stars and prostitutes.