New research indicates the whole universe could be a giant neural network

#artificialintelligence

The core idea is deceptively simple: every observable phenomenon in the entire universe can be modeled by a neural network. And that means, by extension, the universe itself may be a neural network. Vitaly Vanchurin, a professor of physics at the University of Minnesota Duluth, published an incredible paper last August entitled "The World as a Neural Network" on the arXiv pre-print server. It managed to slide past our notice until today, when Futurism's Victor Tangermann published an interview with Vanchurin discussing the paper. "We discuss a possibility that the entire universe on its most fundamental level is a neural network," the paper's abstract begins.


Our emotions might not stay private for long

#artificialintelligence

If there is any doubt in your mind that we are headed to a future where mind-machine melding is the new norm, just look at Elon Musk's Neuralink BCI. The animal trials are already underway: as Musk claims, a monkey with a wireless implant in its skull, connected by tiny wires, can play video games with its mind. Although designed to treat a wide variety of diseases, the experiment aligns with Musk's long-term vision of a brain-computer interface able to keep pace with increasingly powerful AIs. However, Neuralink's proposed device is an invasive one, requiring fine threads to be implanted in the brain. And as if these invasive devices were not scary enough for a person like me, new breakthroughs in neuroscience and artificial intelligence might infiltrate our emotions -- the last bastion of personal privacy. Don't get me wrong, I am all for using the novel tech for healthcare purposes, but who is to say that it can't be used by nefarious players for mind control, or for "thought policing" by the state?


The Map of Artificial Intelligence (2020)

#artificialintelligence

Notice: This map is not a precise reflection of the state of the AI field, but just my subjective representation of it. This is my first map as of the end of 2020, and it will be extended in the future. It contains more than 200 words and phrases, so describing all of them would be too extensive and overkill. It is much more interesting (and useful for me) to tell how this map gradually took shape in my head. I will not explain everything, just the main things, so it is fine not to understand some of it. The story begins in 2013, after I finished my bachelor's degree in applied physics. I was researching heterostructures, which are at the core of the solid-state lasers used for Internet transmission. In short, I was not satisfied: the field is good and promising, but not in Ukraine. So I decided to write my master's thesis in a different field, one I would be truly interested in. I had two options, and I chose one -- the field of artificial intelligence.


Thought-detection: AI has infiltrated our last bastion of privacy

#artificialintelligence

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction. Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person's emotional state by analyzing wireless signals that are used like radar. Participants in the study watched a video while radio signals were sent towards them and the reflections were measured. Analysis of body movements in those reflections revealed "hidden" information about an individual's heart and breathing rates.
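The article stops short of the model itself, but the pipeline it describes -- radar-derived heart and breathing signals fed to a deep network that predicts an emotional state -- can be sketched compactly. The architecture, signal shapes, and label set below are illustrative assumptions, not the Queen Mary model:

```python
# Illustrative sketch only -- shapes, labels, and architecture are
# assumptions, not the Queen Mary University of London model.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]  # assumed label set

class EmotionFromVitals(nn.Module):
    """Classify emotion from radar-derived heart/breathing waveforms."""
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        # Input: (batch, 2 channels [heart, breathing], time steps)
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool features over time
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

model = EmotionFromVitals()
vitals = torch.randn(1, 2, 600)  # e.g. 60 s of signals at 10 Hz (assumed)
probs = model(vitals).softmax(dim=-1)
print(dict(zip(EMOTIONS, probs[0].tolist())))
```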


Combining a convolutional neural network with computational neuroscience to simulate cochlear mechanics

#artificialintelligence

A trio of researchers at Ghent University has combined a convolutional neural network with computational neuroscience to create a model that simulates human cochlear mechanics. In their paper published in Nature Machine Intelligence, Deepak Baby, Arthur Van Den Broucke and Sarah Verhulst describe how they built their model and the ways they believe it can be used. Over the past several decades, great strides have been made in speech and voice recognition technology. Customers are routinely serviced by phone-based agents, for example. Also, voice recognition and response systems on smartphones have become ubiquitous.
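The paper's model (CoNNear) is an encoder-decoder CNN that maps an audio waveform directly to basilar-membrane displacement along the cochlea. The miniature sketch below shows that waveform-in, multi-channel-displacement-out shape; the layer sizes, sampling rate, and number of cochlear channels are illustrative assumptions rather than the published architecture:

```python
# Minimal sketch of a waveform-to-cochlea CNN in the spirit of CoNNear.
# Layer sizes and the number of cochlear channels are assumptions.
import torch
import torch.nn as nn

class CochleaCNN(nn.Module):
    """Map a mono audio waveform to displacement at N cochlear positions."""
    def __init__(self, n_sections=201):
        super().__init__()
        self.encoder = nn.Sequential(  # downsample the waveform in time
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )
        self.decoder = nn.Sequential(  # upsample back to the audio rate
            nn.ConvTranspose1d(64, 32, kernel_size=16, stride=2, padding=7),
            nn.Tanh(),
            nn.ConvTranspose1d(32, n_sections, kernel_size=16, stride=2,
                               padding=7),
        )

    def forward(self, wave):  # wave: (batch, 1, samples)
        return self.decoder(self.encoder(wave))  # (batch, n_sections, samples)

audio = torch.randn(1, 1, 2048)   # ~0.1 s at 20 kHz (assumed rate)
bm_displacement = CochleaCNN()(audio)
print(bm_displacement.shape)      # torch.Size([1, 201, 2048])
```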


Building Conscious Artificial Intelligence: How far are we and Why?

#artificialintelligence

The Internet has been replete with news headlines about GPT-3 writing articles, Google's neural network creating eerie artwork, artificial intelligence (AI) models composing music, and what not. While these feats may seem quite intriguing to a tech enthusiast, for an average person they may be overwhelming. Not only might such a person worry about the ever-increasing capabilities of artificial intelligence; the news also feeds a fear of AI and robots dominating humans, as portrayed in dystopian movies. Hence, all these milestones achieved by AI beg the question: will artificial intelligence be conscious someday? Artificial intelligence tries to solve real-world problems by simulating human brain intelligence to perform an assigned task. Generally, it can be categorized into two distinct types: Weak AI and Strong AI.



AI And Creativity: Why OpenAI's Latest Model Matters

#artificialintelligence

When prompted to generate "a mural of a blue pumpkin on the side of a building," OpenAI's new deep learning model DALL-E produces a series of original images. OpenAI has done it again. Earlier this month, OpenAI--the research organization behind last summer's much-hyped language model GPT-3--released a new AI model named DALL-E. While it has generated less buzz than GPT-3 did, DALL-E has even more profound implications for the future of AI. In a nutshell, DALL-E takes text captions as input and produces original images as output. For instance, when fed phrases as diverse as "a pentagonal green clock," "a sphere made of fire" or "a mural of a blue pumpkin on the side of a building," DALL-E is able to generate shockingly accurate visual renderings.
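OpenAI has described the recipe, if not released the model: the caption and the image are flattened into a single stream of discrete tokens, a GPT-style transformer learns to continue text tokens with image tokens, and a separate decoder (a trained discrete VAE in the real system) turns those image tokens back into pixels. The sketch below shows only that control flow, with toy vocabulary sizes and sequence lengths that are placeholder assumptions:

```python
# Control-flow sketch of a DALL-E-style text-to-image model.
# Vocabulary size, sequence lengths, and model size are toy assumptions.
import torch
import torch.nn as nn

TEXT_LEN, N_IMG_TOKENS, VOCAB, DIM = 16, 64, 512, 128  # toy sizes (assumed)

class TinyDalle(nn.Module):
    """Autoregressive transformer over a shared text+image token stream."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(TEXT_LEN + N_IMG_TOKENS, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.to_logits = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):  # tokens: (batch, seq)
        seq = tokens.shape[1]
        x = self.embed(tokens) + self.pos.weight[:seq]
        # Causal mask: each position may attend only to earlier positions.
        mask = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        return self.to_logits(self.blocks(x, mask=mask))

@torch.no_grad()
def generate_image_tokens(model, text_tokens):
    """Continue the caption with image tokens, one at a time."""
    tokens = text_tokens
    for _ in range(N_IMG_TOKENS):
        logits = model(tokens)[:, -1]  # next-token distribution
        nxt = torch.multinomial(logits.softmax(-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens[:, TEXT_LEN:]  # image tokens; a dVAE decoder would
                                 # map these back to pixels

caption = torch.randint(VOCAB, (1, TEXT_LEN))  # stand-in tokenized caption
print(generate_image_tokens(TinyDalle(), caption).shape)  # (1, 64)
```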


Interpretable Models for Granger Causality Using Self-explaining Neural Networks

arXiv.org Machine Learning

Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.
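The key move here -- a network whose output is an interpretable, time-varying coefficient matrix rather than a black-box prediction -- can be sketched in a few dozen lines. In the toy model below, a small MLP maps the previous observation to a full p-by-p coefficient matrix; the time-averaged magnitude of entry (i, j) scores whether series j Granger-causes series i, and its sign gives the direction of the effect. The single lag, the layer sizes, and the absence of a sparsity penalty are simplifying assumptions, not the paper's exact setup:

```python
# Sketch of a self-explaining model for Granger causality.
# One lag, toy layer sizes, and no sparsity penalty (all assumptions).
import torch
import torch.nn as nn

P = 4  # number of time series

class CoefficientNet(nn.Module):
    """Predict x_t from x_{t-1} via an input-dependent coefficient matrix."""
    def __init__(self, p=P):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(p, 32), nn.ReLU(),
                                 nn.Linear(32, p * p))
        self.p = p

    def coeffs(self, x_prev):  # (batch, p) -> (batch, p, p)
        return self.psi(x_prev).view(-1, self.p, self.p)

    def forward(self, x_prev):
        # The prediction is explicitly linear in x_{t-1}: A(x_{t-1}) @ x_{t-1}
        return torch.einsum("bij,bj->bi", self.coeffs(x_prev), x_prev)

# Toy data: series 0 drives series 1 (x1_t depends on x0_{t-1}).
T = 500
x = torch.randn(T, P)
x[1:, 1] = 0.8 * x[:-1, 0] + 0.1 * torch.randn(T - 1)

model = CoefficientNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):  # fit one-step-ahead predictions
    opt.zero_grad()
    loss = ((model(x[:-1]) - x[1:]) ** 2).mean()
    loss.backward()
    opt.step()

# Aggregate |A_ij| over time: entry (i, j) scores "j Granger-causes i";
# the sign of the mean coefficient gives the effect direction.
with torch.no_grad():
    A = model.coeffs(x[:-1])
    print(A.abs().mean(0))   # strength matrix; entry (1, 0) should stand out
    print(A.mean(0).sign())  # signs of the Granger-causal effects
```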


Meta-Reinforcement Learning for Adaptive Motor Control in Changing Robot Dynamics and Environments

arXiv.org Artificial Intelligence

This work developed a meta-learning approach that adapts the control policy on the fly to changing conditions for robust locomotion. The proposed method constantly updates the interaction model, samples feasible sequences of actions to estimate the state-action trajectories, and then applies the optimal actions to maximize the reward. To achieve online model adaptation, our method learns a different latent vector for each training condition, which is selected online given the newly collected data. Our work designs an appropriate state space and reward functions, and optimizes feasible actions in an MPC fashion; actions are sampled directly in the joint space subject to constraints, hence requiring no prior design of specific walking gaits. We further demonstrate the robot's capability to detect unexpected changes during interaction and to adapt its control policy quickly. Extensive validation on the SpotMicro robot in a physics simulation shows adaptive and robust locomotion skills under varying ground friction, external pushes, and different robot models, including hardware faults and changes.
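The control loop the abstract describes -- a learned dynamics model conditioned on a per-condition latent vector, with sampling-based MPC over joint-space actions -- looks roughly like the sketch below. The dynamics network, reward, horizon, and latent-selection rule are illustrative assumptions, not the paper's implementation:

```python
# Sketch of latent-conditioned, sampling-based MPC for adaptive locomotion.
# Dynamics model, reward, horizon, and latent selection are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

STATE, ACT, LATENT, HORIZON, N_SAMPLES = 12, 8, 4, 10, 256

class Dynamics(nn.Module):
    """Predict the next state from (state, action, condition latent)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE + ACT + LATENT, 64),
                                 nn.ReLU(), nn.Linear(64, STATE))

    def forward(self, s, a, z):
        return s + self.net(torch.cat([s, a, z], dim=-1))  # residual step

def select_latent(model, latents, s_hist, a_hist, s_next_hist):
    """Pick the training-condition latent that best explains recent data."""
    errs = [(model(s_hist, a_hist, z.expand(len(s_hist), -1))
             - s_next_hist).pow(2).mean() for z in latents]
    return latents[torch.stack(errs).argmin()]

def reward(states):  # placeholder reward: e.g. a forward-velocity proxy
    return states[..., 0].sum(dim=0)

def mpc_action(model, s, z):
    """Random-shooting MPC: sample joint-space action sequences, roll out."""
    a_seqs = torch.rand(HORIZON, N_SAMPLES, ACT) * 2 - 1  # joint limits [-1, 1]
    states, s_t = [], s.expand(N_SAMPLES, -1)
    for t in range(HORIZON):
        s_t = model(s_t, a_seqs[t], z.expand(N_SAMPLES, -1))
        states.append(s_t)
    best = reward(torch.stack(states)).argmax()
    return a_seqs[0, best]  # execute the first action, then replan

model = Dynamics()
latents = torch.randn(5, LATENT)  # one latent per training condition
s_hist, a_hist = torch.randn(32, STATE), torch.randn(32, ACT)
s_next_hist = torch.randn(32, STATE)
z = select_latent(model, latents, s_hist, a_hist, s_next_hist)
print(mpc_action(model, torch.randn(STATE), z))
```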