Russia


New form of AI can read human brain activity in real time

#artificialintelligence

Russian researchers have developed a new artificial intelligence (AI) system capable of reading a person's brain activity in real time and simultaneously visualising it as an image. Their research could be a first step towards a real-time brain-computer interface, and the discovery could lead to new non-invasive methods for post-stroke rehabilitation. The system was developed by researchers from the Moscow Institute of Physics and Technology (MIPT) in Russia, as reported by the website Developpez.com, in collaboration with the Russian company Neurobotics, which specialises in the design of algorithms based on brain activity.


Artificial Neural Networks Are Reconstructing Human Thoughts in Real Time

#artificialintelligence

Earlier this year, researchers from Russia's Neurobotics Corporation and a team at the Moscow Institute of Physics and Technology worked out how to visualize human brain activity as images mimicking what a person observes in real time. This breakthrough in artificial neural network technology will eventually enable post-stroke rehabilitation devices controlled by signals from the brain. The team uploaded their research as a 'preprint' on the bioRxiv website and also shared a video showcasing their 'mind-reading' device at work. To develop devices that can be controlled by the human brain, as well as treatments for cognitive disorders and post-stroke rehabilitation, neurobiologists must understand how the brain encodes information. A critical step in creating these technologies is the ability to study brain activity using visual perception as a marker.


The security implications of Artificial Intelligence

#artificialintelligence

On 11 April 2019, Daniel Fiott was invited by the EU's Political and Security Committee (PSC) to participate in a lunch debate on Artificial Intelligence (AI). The event was part of the PSC's initiative to enhance dialogue with think tanks, NGOs and academia on key challenges for EU foreign, security and defence policy. The event brought together PSC Ambassadors, as well as representatives from the European Commission and the European External Action Service. Daniel joined experts from the Centre for the Study of Existential Risk (CSER) at the University of Cambridge and from Tilburg University, and he outlined recent AI developments and their implications for the defence sector, with a particular focus on the EU and on AI developments in Russia, China and the United States. The legal challenges and ethical dilemmas of AI were also discussed.


Zenia is using computer vision to build an AI-driven fitness trainer

#artificialintelligence

As in just about every area of the health and fitness market, technology is increasingly infiltrating yoga, with startups and investors pushing to capitalize on the $80 billion market. Last year, Germany-based Asana Rebel raised more than $17 million from notable backers including Greycroft to grow its virtual yoga platform, while New York's Mirror has raised sizable funding rounds for a connected mirror that delivers virtual fitness classes such as yoga and Pilates. Zenia recently entered the fray with a mobile app that leverages machine learning, computer vision, and motion tracking with the promise of helping improve your yoga poses. The company calls it "the world's first AI-powered yoga assistant" and plans to expand its technology to cover all areas of health and fitness. Zenia was officially founded out of Belarus in May of this year by software engineer Alexey Kurov, and the company has secured an undisclosed investment from notable backers including Misha Lyalin, CEO and chair of Russia-based game developer ZeptoLab, and Bulba Ventures, a Belarusian venture capital (VC) firm that invests in AI startups.


Neural network reconstructs human 'thoughts' from brain waves in real time -- Moscow Institute of Physics and Technology

#artificialintelligence

Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images mimicking what they observe in real time. This will enable new post-stroke rehabilitation devices controlled by brain signals. The team published its research as a preprint on bioRxiv and posted a video online, showing their "mind-reading" system at work. To develop devices controlled by the brain and methods for cognitive disorder treatment and post-stroke rehabilitation, neurobiologists need to understand how the brain encodes information. A key aspect of this is studying the brain activity of people perceiving visual information, for example, while watching a video.


Mona Lisa 'brought to life' with deepfake AI

#artificialintelligence

The subject of Leonardo da Vinci's famous Mona Lisa painting has been brought to life by AI researchers. The video, generated from a single photo, shows the model in the portrait moving her head, eyes and mouth. The latest iteration of so-called deepfake technology came out of Samsung's AI research laboratory in Moscow. Some are concerned that the rise of convincing deepfake technology has huge potential for misuse. Samsung's algorithms were trained on a public database of 7,000 images of celebrities gathered from YouTube.


Mind-reading A.I. analyzes your brainwaves to guess what video you're watching - AIVAnet

#artificialintelligence

When it comes to things like showing us the right search results at the right time, A.I. can often seem like it's darn close to being able to read people's minds. But engineers at Russian robotics research company Neurobotics Lab have shown that artificial intelligence really can be trained to read minds -- and guess what videos users are watching based on their brain waves alone. "We have demonstrated that observing visual scenes of different content affects the human brain waves, so that we can distinguish the scene categories from [one another] by analyzing the corresponding EEG (electroencephalogram) signal," Anatoly Bobe, an engineer at Neurobotics Lab in Moscow, told Digital Trends. "We [then] created a system for reconstructing the images from EEG signal features." The researchers trained the A.I. by showing it video clips of different objects alongside the brain wave recordings of the people watching them.
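The category-decoding step Bobe describes -- telling scene categories apart from EEG features -- can be sketched as a simple classifier. The code below is illustrative only, not the team's actual method: it uses synthetic stand-in data and a nearest-centroid decoder, whereas the real system works on recorded EEG and a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 scene categories, each EEG trial summarized
# as a feature vector (e.g., per-channel band power).
n_categories, n_features, trials_per_cat = 4, 32, 50

# Synthetic stand-in for EEG features: each category has its own
# mean pattern, and individual trials are noisy samples around it.
means = rng.normal(size=(n_categories, n_features))
X = np.vstack([m + 0.5 * rng.normal(size=(trials_per_cat, n_features))
               for m in means])
y = np.repeat(np.arange(n_categories), trials_per_cat)

# Nearest-centroid decoding: assign each trial to the category whose
# mean training pattern it lies closest to.
centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_categories)])

def decode(trial):
    return int(np.argmin(np.linalg.norm(centroids - trial, axis=1)))

preds = np.array([decode(t) for t in X])
accuracy = (preds == y).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated synthetic patterns the decoder is near-perfect; on real EEG, category signatures overlap heavily, which is why the published systems rely on learned feature extractors rather than raw distances.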


Auriga Attends Intel Experience Day 2019

#artificialintelligence

Intel Experience Day 2019, organized by Intel, one of the world's major innovative hardware and technology corporations, took place in Moscow at the end of October. Intel and partner companies presented the latest Intel hardware and software implementations advancing IoT, AI, computer vision, machine learning, object recognition, and more. Many speakers shared their ideas and insights on trending industrial innovations such as cloud computing, Big Data, and analytics, including Al Diaz, Intel Vice President; Natalya Galyan, Intel Regional Director for Russia; and Marina Alekseeva, head of Intel R&D in Russia. Intel Experience Day 2019 attracted many IT market players who use Intel solutions in their daily work, and Auriga experts were among them. Several years ago, Auriga became an early adopter of the Intel Multi-OS Engine tool, using it to develop an innovative iPad application for patient monitoring.


First Guidelines For Robocar Test Drivers

#artificialintelligence

[Photo: experimental self-driving car, based on a modified Ford, with lidar and other sensors visible, in the Mission Bay neighborhood of San Francisco, California, June 10, 2019.] Testing self-driving vehicles on public roads remains a scary prospect for citizens of communities where that's happening, but a six-month-old consortium of major automakers and ride-share companies has taken a step towards removing some of that fear by addressing the human element. [Photo: an operator sits in the driver's seat of a Toyota Motor Corp. Prius hybrid operated by Yandex.Taxi, part of Yandex N.V., during a self-driving taxi trial on open roads in Moscow, Russia, Aug. 21, 2019.] Yandex, Russia's largest search engine, which successfully expanded into online taxis and absorbed Uber Technologies Inc.'s operations in the country, started testing self-driving cars in 2017. These humans are what's known as in-vehicle fallback test drivers.


Using artificial intelligence to track solar power

#artificialintelligence

What they did: Cape Analytics analyzed visual data on tens of millions of homes in major metro areas nationwide by working with partners like the location data company Nearmap. That enabled a fine-grained analysis of residential solar power at the neighborhood level. Why it matters: The firm intends its localized data to help policymakers better understand where solar power is being adopted and why -- and help homeowners understand whether they can get state-specific incentives for going solar. What they found: Every "super solar" neighborhood in the U.S. -- one with over 500 homes with solar systems -- is in California, except for one in Saint Petersburg, Florida, which is 13.2% solar. The big picture: Cape Analytics examined the entire U.S., Farzaneh tells Axios.