Why "Good" Research Ideas Fail

#artificialintelligence

One day in my life as a machine learning researcher, I had a new idea, and it felt like a good idea. I had a rush of excitement, but then… some hesitation. As always, I knew that having an idea that feels good is different from having an idea that's actually good. The ultimate test of whether an idea is actually good is to see if it works in the real world. Testing in the real world, though, requires careful implementation and experimentation.


Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics

Kwon, Minhae, Daptardar, Saurabh, Schrater, Paul, Pitkow, Xaq

arXiv.org Artificial Intelligence

A fundamental question in neuroscience is how the brain creates an internal model of the world to guide actions using sequences of ambiguous sensory information. This is naturally formulated as a reinforcement learning problem under partial observations, where an agent must estimate relevant latent variables in the world from its evidence, anticipate possible future states, and choose actions that optimize total expected reward. This problem can be solved by control theory, which allows us to find the optimal actions for a given system dynamics and objective function. However, animals often appear to behave suboptimally. Why? We hypothesize that animals have their own flawed internal model of the world, and choose actions with the highest expected subjective reward according to that flawed model. We describe this behavior as rational but not optimal. The problem of Inverse Rational Control (IRC) aims to identify which internal model would best explain an agent's actions. Our contribution generalizes past work on Inverse Rational Control, which solved this problem for discrete control in partially observable Markov decision processes. Here we accommodate continuous nonlinear dynamics and continuous actions, and impute sensory observations corrupted by unknown noise that is private to the animal. We first build an optimal Bayesian agent that learns an optimal policy generalized over the entire model space of dynamics and subjective rewards using deep reinforcement learning. Crucially, this allows us to compute a likelihood over models for experimentally observable action trajectories acquired from a suboptimal agent. We then find the model parameters that maximize the likelihood using gradient ascent.
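The paper's final step, gradient ascent on the likelihood of observed actions over model parameters, can be sketched on a deliberately tiny stand-in problem. Everything below is illustrative (a linear policy with known Gaussian action noise), not the authors' continuous nonlinear, deep-RL setting:

```python
import numpy as np

# Toy stand-in for the IRC likelihood step: the agent's policy under a
# subjective model parameter theta is a_t = theta * s_t + Gaussian noise,
# and we recover theta by gradient ascent on the log-likelihood of an
# observed state/action trajectory.

rng = np.random.default_rng(0)
true_theta, sigma = 0.7, 0.1
states = rng.normal(size=200)                                  # observed states s_t
actions = true_theta * states + sigma * rng.normal(size=200)   # observed actions a_t

def grad_log_likelihood(theta):
    # d/dtheta of sum_t -(a_t - theta * s_t)^2 / (2 * sigma^2)
    return np.sum((actions - theta * states) * states) / sigma**2

theta = 0.0    # initial guess for the animal's internal model parameter
lr = 1e-5      # small step size; the gradient is large at this noise level
for _ in range(500):
    theta += lr * grad_log_likelihood(theta)

# theta should now be close to true_theta (up to estimation noise)
```

In this linear-Gaussian toy the maximum-likelihood answer is just least squares, so the gradient ascent can be checked against a closed form; the point of the paper's framework is that the same likelihood-maximization idea still applies when the policy comes from a learned Bayesian agent and no closed form exists.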


A non-cooperative meta-modeling game for automated third-party calibrating, validating, and falsifying constitutive laws with parallelized adversarial attacks

Wang, Kun, Sun, WaiChing, Du, Qiang

arXiv.org Artificial Intelligence

The evaluation of constitutive models, especially for high-risk and high-regret engineering applications, requires efficient and rigorous third-party calibration, validation and falsification. While there are numerous efforts to develop paradigms and standard procedures to validate models, difficulties may arise from the sequential, manual and often biased nature of the commonly adopted calibration and validation processes. These difficulties slow down data collection, hamper progress toward discovering new physics, increase expenses and may lead to misinterpretations of the credibility and application ranges of proposed models. This work introduces concepts from game theory and machine learning techniques to overcome many of these existing difficulties. We introduce an automated meta-modeling game where two competing AI agents systematically generate experimental data to calibrate a given constitutive model and to explore its weaknesses, in order to improve experiment design and model robustness through competition. The two agents automatically search for the Nash equilibrium of the meta-modeling game in an adversarial reinforcement learning framework without human intervention. By capturing all possible design options of the laboratory experiments in a single decision tree, we recast the design of experiments as a game of combinatorial moves that can be resolved through deep reinforcement learning by the two competing players. Our adversarial framework emulates idealized scientific collaborations and competitions among researchers to achieve a better understanding of the application range of the learned material laws and to prevent misinterpretations caused by conventional AI-based third-party validation.
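The Nash-equilibrium search at the heart of the game can be illustrated in miniature. The sketch below replaces the paper's deep adversarial RL with fictitious play on a hand-made zero-sum payoff matrix; the protocols, tests, and payoff numbers are all hypothetical:

```python
import numpy as np

# Two competing players: an "experimentalist" choosing a calibration
# protocol (rows, maximizes model accuracy) and an "adversary" choosing
# a validation test meant to expose weaknesses (columns, minimizes it).
# Fictitious play approximates the Nash equilibrium of this zero-sum game.

# payoff[i, j]: model accuracy when calibrated with protocol i, attacked with test j
payoff = np.array([
    [0.90, 0.40, 0.60],
    [0.50, 0.80, 0.70],
    [0.60, 0.60, 0.65],
])

counts_row = np.ones(3)   # experimentalist's empirical strategy counts
counts_col = np.ones(3)   # adversary's empirical strategy counts
for _ in range(5000):
    # each player best-responds to the opponent's empirical mixture
    col_mix = counts_col / counts_col.sum()
    counts_row[np.argmax(payoff @ col_mix)] += 1
    row_mix = counts_row / counts_row.sum()
    counts_col[np.argmin(row_mix @ payoff)] += 1

row_mix = counts_row / counts_row.sum()   # equilibrium calibration mixture
col_mix = counts_col / counts_col.sum()   # equilibrium attack mixture
```

For this matrix no pure strategy is safe for either side, so both mixtures end up randomized, which is the game-theoretic analogue of the paper's claim that robust experiment design must anticipate the worst-case test.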


DeFINE: Delayed Feedback based Immersive Navigation Environment for Studying Goal-Directed Human Navigation

Tiwari, Kshitij, Kyrki, Ville, Cheung, Allen, Yamamoto, Naohide

arXiv.org Artificial Intelligence

With the advent of consumer-grade products for presenting an immersive virtual environment (VE), there is a growing interest in utilizing VEs for testing human navigation behavior. However, preparing a VE still requires a high level of technical expertise in computer graphics and virtual reality, posing a significant hurdle to embracing the emerging technology. To address this issue, this paper presents Delayed Feedback based Immersive Navigation Environment (DeFINE), a framework that allows for easy creation and administration of navigation tasks within customizable VEs via intuitive graphical user interfaces and simple settings files. Importantly, DeFINE has a built-in capability to provide performance feedback to participants during an experiment, a feature that is critically missing in other similar frameworks. To demonstrate the usability of DeFINE from both experimentalists' and participants' perspectives, a case study was conducted in which participants navigated to a hidden goal location with feedback that differentially weighted speed and accuracy of their responses. In addition, the participants evaluated DeFINE in terms of its ease of use, required workload, and proneness to induce cybersickness. Results showed that the participants' navigation performance was affected differently by the types of feedback they received, and they rated DeFINE highly in the evaluations, validating DeFINE's architecture for investigating human navigation in VEs. With its rich out-of-the-box functionality and great customizability due to open-source licensing, DeFINE makes VEs significantly more accessible to many researchers.
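The case study's feedback that "differentially weighted speed and accuracy" can be pictured as a single blended score. The function below is a hypothetical illustration; the weights, caps, and 0-100 scale are assumptions, not DeFINE's actual settings-file options:

```python
def feedback_score(time_s, error_m, w_speed=0.5, w_accuracy=0.5,
                   max_time_s=60.0, max_error_m=5.0):
    """Blend trial speed and goal accuracy into one 0-100 feedback score."""
    speed_term = max(0.0, 1.0 - time_s / max_time_s)        # faster is better
    accuracy_term = max(0.0, 1.0 - error_m / max_error_m)   # closer is better
    return 100.0 * (w_speed * speed_term + w_accuracy * accuracy_term)

# The same trial scored under speed-weighted vs accuracy-weighted feedback:
speed_weighted = feedback_score(10.0, 3.0, w_speed=0.8, w_accuracy=0.2)
accuracy_weighted = feedback_score(10.0, 3.0, w_speed=0.2, w_accuracy=0.8)
```

Shifting the weights changes which behavior the score rewards, which is how a single feedback mechanism can push different participant groups toward faster or more accurate navigation.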


Accelerating electrocatalyst discovery with machine learning

#artificialintelligence

Researchers are paving the way to total reliance on renewable energy as they study both large- and small-scale ways to replace fossil fuels. One promising avenue is converting simple chemicals into valuable ones using renewable electricity, including processes such as carbon dioxide reduction or water splitting. But to scale these processes up for widespread use, we need to discover new electrocatalysts: substances that increase the rate of an electrochemical reaction that occurs on an electrode surface. To do so, researchers at Carnegie Mellon University are turning to a new method to accelerate the discovery process: machine learning. Zack Ulissi, an assistant professor of chemical engineering (ChemE), and his group are using machine learning to guide electrocatalyst discovery.


Shaping animal, vegetable and mineral

Robohub

Nature has a way of making complex shapes from a set of simple growth rules. The curve of a petal, the swoop of a branch, even the contours of our face are shaped by these processes. What if we could unlock those rules and reverse engineer nature's ability to grow an infinitely diverse array of shapes? Scientists from Harvard's Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that. In a paper published in the Proceedings of the National Academy of Sciences, the team demonstrates a technique to grow any target shape from any starting shape.


How AI algorithms could help design new drugs - Futurity

#artificialintelligence

A new kind of AI algorithm, designed to work with a small amount of data, may be able to assist in the early stages of drug development. Artificially intelligent algorithms can learn to identify amazingly subtle information, enabling them to distinguish between people in photos or to screen medical images as well as a doctor. But in most cases their ability to perform such feats relies on training that involves thousands to trillions of data points. This means artificial intelligence doesn't work all that well in situations where there is very little data, such as drug development.