Our emotions might not stay private for long

#artificialintelligence

If there is any doubt in your mind that we are headed toward a future where the mind-machine meld is the new norm, just look at Elon Musk's Neuralink BCI. Animal trials are already underway: according to Musk, a monkey with a wireless implant in its skull, connected by tiny wires, can play video games with its mind. Although the device is designed to treat a wide variety of diseases, the experiment aligns with Musk's long-term vision of building a brain-computer interface that can compete with increasingly powerful AIs. However, Neuralink's proposed device is invasive, requiring fine threads to be implanted in the brain. And as if these invasive devices were not scary enough for a person like me, new breakthroughs in neuroscience and artificial intelligence might infiltrate our emotions, the last bastion of personal privacy. Don't get me wrong, I am all for using novel tech for healthcare purposes, but who is to say it can't be used by nefarious players for mind control, or for "thought policing" by the State?


Combining convolutional neural network with computational neuroscience to simulate cochlear mechanics

#artificialintelligence

A trio of researchers at Ghent University has combined a convolutional neural network with computational neuroscience to create a model that simulates human cochlear mechanics. In their paper published in Nature Machine Intelligence, Deepak Baby, Arthur Van Den Broucke and Sarah Verhulst describe how they built their model and the ways they believe it can be used. Over the past several decades, great strides have been made in speech and voice recognition technology. Customers are routinely served by automated phone agents, for example, and voice recognition and response systems on smartphones have become ubiquitous.
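To make the idea concrete, here is a hedged sketch of what a CNN-based cochlear simulator can look like: a 1-D convolutional encoder-decoder that maps an audio waveform to vibration patterns across many cochlear channels, trained to reproduce the output of a slower biophysical model. The layer sizes and channel count below are illustrative assumptions, not the authors' published design.

```python
# Hedged sketch (not the authors' architecture): a 1-D conv encoder-decoder
# mapping an audio waveform to simulated vibration at many cochlear positions.
import torch
import torch.nn as nn

CHANNELS = 201  # number of simulated cochlear positions (an assumption)

class CochleaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, stride=2, padding=7),
            nn.Tanh(),
            nn.Conv1d(64, 128, kernel_size=15, stride=2, padding=7),
            nn.Tanh(),
            nn.ConvTranspose1d(128, 64, kernel_size=16, stride=2, padding=7),
            nn.Tanh(),
            nn.ConvTranspose1d(64, CHANNELS, kernel_size=16, stride=2, padding=7),
        )

    def forward(self, audio):      # audio: (batch, 1, samples)
        return self.net(audio)     # output: (batch, CHANNELS, samples)

wave = torch.randn(1, 1, 2048)     # 2048 audio samples
print(CochleaCNN()(wave).shape)    # torch.Size([1, 201, 2048])
```

Once trained against the biophysical reference, a feed-forward network like this can run orders of magnitude faster than solving the underlying cochlear equations directly.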



AI And Creativity: Why OpenAI's Latest Model Matters

#artificialintelligence

When prompted to generate "a mural of a blue pumpkin on the side of a building," OpenAI's new deep learning model DALL-E produces a series of original images. OpenAI has done it again. Earlier this month, OpenAI, the research organization behind last summer's much-hyped language model GPT-3, released a new AI model named DALL-E. While it has generated less buzz than GPT-3 did, DALL-E has even more profound implications for the future of AI. In a nutshell, DALL-E takes text captions as input and produces original images as output. For instance, when fed phrases as diverse as "a pentagonal green clock," "a sphere made of fire" or "a mural of a blue pumpkin on the side of a building," DALL-E is able to generate shockingly accurate visual renderings.
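OpenAI has described DALL-E as an autoregressive transformer trained on text tokens followed by discretized image tokens, so generating an image from a caption reduces to next-token prediction. The sketch below illustrates that recipe in miniature; the vocabulary sizes, layer counts and random stand-in tokens are assumptions, and the real system obtains its image tokens from a separately trained discrete VAE rather than at random.

```python
# Conceptual sketch of the text-to-image recipe (simplified assumptions,
# not OpenAI's code): text tokens and discretized image tokens share one
# sequence, and a causal transformer learns to continue it.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB, D = 16384, 8192, 512

class TextToImageLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D)
        layer = nn.TransformerEncoderLayer(D, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):
        h = self.embed(tokens)
        # Causal mask makes the model autoregressive: each position only
        # attends to earlier tokens, so image tokens condition on the caption.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(h, mask=mask))

# Caption tokens followed by image tokens (offset into the image vocabulary).
# In the real system the image tokens come from a discrete VAE; random
# integers stand in here.
caption = torch.randint(0, TEXT_VOCAB, (1, 32))
image = torch.randint(TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB, (1, 64))
logits = TextToImageLM()(torch.cat([caption, image], dim=1))
```

At generation time, the model is fed only the caption tokens and samples image tokens one at a time, which a decoder then turns back into pixels.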


Interpretable Models for Granger Causality Using Self-explaining Neural Networks

arXiv.org Machine Learning

Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.
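The underlying notion is easiest to see in the linear case: a series x Granger-causes y if x's past improves the prediction of y beyond what y's own past provides. The numpy sketch below implements that classical baseline test, not the paper's self-explaining network; the simulated series, lag order and coefficients are illustrative.

```python
# Minimal linear Granger-causality check: does adding lags of x reduce
# the prediction error for y compared with y's own lags alone?
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 2  # series length and lag order

# Simulate two series where x Granger-causes y with a one-step delay.
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(target, predictors, p):
    """Residual sum of squares from a least-squares fit of target[t]
    on p lags of each predictor series."""
    X = np.column_stack(
        [s[p - k : len(s) - k] for s in predictors for k in range(1, p + 1)]
    )
    Y = target[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.sum((Y - X @ beta) ** 2)

rss_restricted = rss(y, [y], p)     # y's own past only
rss_full = rss(y, [y, x], p)        # y's past plus x's past

# F-style statistic: a large value means x's past adds predictive power.
n = T - p
F = ((rss_restricted - rss_full) / p) / (rss_full / (n - 2 * p))
print(f"F = {F:.1f}  (large F suggests x Granger-causes y)")
```

The paper's contribution replaces this linear regression with self-explaining neural networks, so that nonlinear interactions, their signs, and their variability over time can all be read off the fitted model.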


HySTER: A Hybrid Spatio-Temporal Event Reasoner

arXiv.org Artificial Intelligence

The task of Video Question Answering (VideoQA) consists in answering natural language questions about a video and serves as a proxy for evaluating a model's performance at scene sequence understanding. Most methods designed for VideoQA to date are end-to-end deep learning architectures, which struggle with complex temporal and causal reasoning and provide limited transparency in their reasoning steps. We present HySTER, a Hybrid Spatio-Temporal Event Reasoner that reasons over physical events in videos. Our model combines the strength of deep learning methods for extracting information from video frames with the reasoning capabilities and explainability of symbolic artificial intelligence in an answer set programming framework. We define a method based on general temporal, causal and physics rules which can be transferred across tasks. We apply our model to the CLEVRER dataset and demonstrate state-of-the-art results in question answering accuracy. This work sets the foundations for the incorporation of inductive logic programming in the field of VideoQA.
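As a rough illustration of the hybrid recipe, hypothetical event facts from a neural frame extractor can be combined with general, task-transferable temporal rules. The Python sketch below mimics the flavor of such answer-set-programming-style rules; its event names, frame indices and the two-frame causal window are all assumptions, not HySTER's actual rule base.

```python
# Illustrative sketch of the hybrid idea: a perception module emits
# symbolic event facts, and general temporal/causal rules are applied
# to answer questions such as "what happened before the collision?".
from itertools import combinations

# Hypothetical output of a neural frame-level extractor: (event, frame).
events = [("ball_enters", 12), ("ball_hits_cube", 30), ("cube_moves", 31)]

# General temporal rule, transferable across tasks: E1 happens before E2.
before = {(e1, e2) for (e1, t1), (e2, t2) in combinations(events, 2) if t1 < t2}

# Simple physics-style causal rule: a collision immediately followed by
# motion of an object counts as its cause (two-frame window is assumed).
causes = {
    (e1, e2)
    for (e1, t1) in events
    for (e2, t2) in events
    if "hits" in e1 and "moves" in e2 and 0 < t2 - t1 <= 2
}

print(before)
print(causes)  # {('ball_hits_cube', 'cube_moves')}
```

In an actual ASP framework such rules are declarative logic programs rather than Python set comprehensions, which is what gives the approach its transparency: every answer can be traced back to the rules and facts that derived it.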


Understanding in Artificial Intelligence

arXiv.org Artificial Intelligence

However, this progress is largely driven by increased computational power, namely GPUs, and bigger data sets, not by radically new algorithms or knowledge representations. Artificial Neural Networks and Stochastic Gradient Descent, popularized in the 1980s [3], remain the fundamental building blocks for most modern AI systems. While very successful for many applications, especially in vision, the purely deep-learning-based approach has significant weaknesses. For instance, CNNs struggle with same-different relations [4], fail when long-chained reasoning is needed [5], are non-decomposable, cannot easily incorporate symbolic knowledge, and are hampered by a lack of model interpretability. Many current methods essentially compute higher-order statistics over basic elements such as pixels, phonemes, letters or words to process inputs, but do not explicitly model the building blocks and their relations in a (de)composable and interpretable way.


CES 2021: LG's press conference featured a virtual person presenting

USATODAY - Tech Top Stories

Typically, the presenters at a CES press conference don't get a lot of attention. Wearing a pink hooded sweatshirt with the phrase "Stay punk forever," Reah Keem was among the presenters highlighting some of LG's offerings, ranging from appliances to personal technology. LG describes her as a "virtual composer and DJ made even more human through deep learning technology." Keem was there to introduce the LG CLOi robot, which can disinfect high-traffic areas using ultraviolet light. You can watch Reah make her debut during LG's press conference Monday morning, at roughly the 22-minute mark.


Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals

arXiv.org Artificial Intelligence

Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entities that are multiple hops away in the Knowledge Base (KB) from the entities in the question. A major challenge is the lack of supervision signals at intermediate steps. Therefore, multi-hop KBQA algorithms can only receive the feedback from the final answer, which makes the learning unstable or ineffective. To address this challenge, we propose a novel teacher-student approach for the multi-hop KBQA task. In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network. The major novelty lies in the design of the teacher network, where we utilize both forward and backward reasoning to enhance the learning of intermediate entity distributions. By considering bidirectional reasoning, the teacher network can produce more reliable intermediate supervision signals, which can alleviate the issue of spurious reasoning. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our approach on the KBQA task.
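One way to picture the training signal is as follows: the teacher produces per-hop entity distributions by combining its forward and backward reasoning passes, and the student is penalized both for missing the final answer and for diverging from those intermediate targets. The PyTorch sketch below is a hedged illustration of that loss structure, not the authors' code; the random distributions, hop count, averaging rule and loss weight are placeholders.

```python
# Hedged sketch of the teacher-student signal: teacher hop-wise entity
# distributions supervise the student's intermediate steps, alongside
# the usual final-answer loss.
import torch
import torch.nn.functional as F

num_entities, hops = 1000, 3

# Hypothetical per-hop entity scores from the student, and per-hop
# distributions from the teacher's forward and backward reasoning.
student_logits = [torch.randn(num_entities, requires_grad=True) for _ in range(hops)]
fwd = [torch.softmax(torch.randn(num_entities), dim=0) for _ in range(hops)]
bwd = [torch.softmax(torch.randn(num_entities), dim=0) for _ in range(hops)]

answer = torch.tensor(42)  # index of the gold answer entity (placeholder)

# Teacher target: a simple average of forward and backward distributions
# (one possible way to combine the two passes, assumed here).
targets = [(f + b) / 2 for f, b in zip(fwd, bwd)]

# Intermediate supervision: KL divergence between student hops and targets.
kl = sum(
    F.kl_div(F.log_softmax(s, dim=0), t, reduction="sum")
    for s, t in zip(student_logits, targets)
)

# Final-answer loss on the last hop's distribution.
ce = F.cross_entropy(student_logits[-1].unsqueeze(0), answer.unsqueeze(0))

loss = ce + 0.1 * kl  # the 0.1 weight is an illustrative choice
loss.backward()
```

The point of the bidirectional teacher is that hops which look plausible in the forward direction but cannot be reached backward from the answer get down-weighted, which is how spurious reasoning paths are suppressed.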


A Brief Survey of Associations Between Meta-Learning and General AI

arXiv.org Artificial Intelligence

This paper briefly reviews the history of meta-learning and describes its contribution to general AI. Meta-learning improves a model's generalization capacity and devises general algorithms that are potentially applicable to both in-distribution and out-of-distribution tasks. General AI replaces task-specific models with general algorithmic systems, introducing a higher level of automation in solving diverse tasks with AI. We summarize the main contributions of meta-learning to developments in general AI, including memory modules, meta-learners, coevolution, curiosity, forgetting and AI-generating algorithms. We present connections between meta-learning and general AI and discuss how meta-learning can be used to formulate general AI algorithms.
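As one concrete instance of the meta-learner idea surveyed here, a MAML-style inner/outer loop learns an initialization that adapts to a new task in a few gradient steps. The sketch below is illustrative and not drawn from the survey; the toy regression tasks, learning rates and step counts are assumptions.

```python
# MAML-style meta-learning in miniature: the outer loop learns an
# initialization w that adapts well to each task after one inner step.
import torch

def loss_fn(w, task):
    """Hypothetical per-task regression loss: fit y = a * x with weight w."""
    x, a = task
    return ((w * x - a * x) ** 2).mean()

w = torch.zeros(1, requires_grad=True)   # shared meta-initialization
meta_opt = torch.optim.SGD([w], lr=0.1)
inner_lr = 0.05

for step in range(100):
    meta_opt.zero_grad()
    for a in (0.5, 1.0, 2.0):            # a small family of tasks
        task = (torch.randn(16), a)
        # Inner loop: one adaptation step, kept differentiable w.r.t. w.
        g, = torch.autograd.grad(loss_fn(w, task), w, create_graph=True)
        w_adapted = w - inner_lr * g
        # Outer loop: evaluate the adapted weight on fresh task data and
        # backpropagate through the adaptation step into w.
        loss_fn(w_adapted, (torch.randn(16), a)).backward()
    meta_opt.step()
```

The learned initialization is what generalizes: it encodes no single task's solution, but a starting point from which any task in the family is a short gradient descent away.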