Counterfactual Vision-and-Language Navigation: Unravelling the Unseen

Neural Information Processing Systems

The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments. A prominent challenge is to train an agent capable of generalising to new environments at test time, rather than one that simply memorises trajectories and visual details observed during training. We propose a new learning strategy that learns both from real observations and from generated counterfactual environments. We describe an effective algorithm to generate counterfactual observations on the fly for VLN, as linear combinations of existing environments. Simultaneously, we encourage the agent's actions to remain stable between original and counterfactual environments through our novel training objective, effectively removing the spurious features that otherwise bias the agent. Our experiments show that this technique provides significant improvements in generalisation on benchmarks for Room-to-Room navigation and Embodied Question Answering.
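As a rough illustration of the idea rather than the authors' implementation, the sketch below mixes visual features from two environments with per-feature weights and adds a consistency term that keeps the policy's action distribution similar on the original and counterfactual features. The `policy` callable, the symmetric-KL form of the consistency term, and all variable names are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def counterfactual_features(feats_orig, feats_other, alpha):
    """Mix visual features from two environments with per-feature weights alpha in [0, 1]."""
    return (1.0 - alpha) * feats_orig + alpha * feats_other

def action_consistency_loss(policy, feats_orig, feats_cf, instruction):
    """Symmetric KL between action distributions on original vs. counterfactual features.
    `policy(features, instruction)` is an assumed interface returning action logits."""
    logits_orig = policy(feats_orig, instruction)
    logits_cf = policy(feats_cf, instruction)
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_cf, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```

In practice this consistency term would be added to the usual imitation or reinforcement learning loss, so the agent cannot rely on environment-specific visual cues that the counterfactual mixing destroys.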


Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems

Neural Information Processing Systems

Recently, graph neural networks have attracted considerable attention for simulating dynamical systems, since their inductive nature enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. Then, we evaluate them on spring, pendulum, gravitational, and 3D deformable solid systems to compare performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
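For concreteness, here is a minimal sketch of the kind of evaluation described above: roll a learned one-step model forward from an initial state and measure rollout error against a ground-truth trajectory, along with drift in a conserved quantity such as total energy. The `step_fn` and `energy_fn` interfaces and the exact metric definitions are assumptions for illustration, not the paper's code.

```python
import numpy as np

def rollout_metrics(step_fn, energy_fn, state0, true_traj):
    """Roll a learned one-step model forward and compare to a ground-truth trajectory.

    step_fn(state) -> next state and energy_fn(state) -> scalar energy are assumed
    interfaces; the metrics below are one common choice, not the paper's exact ones.
    """
    states = [state0]
    for _ in range(len(true_traj) - 1):
        states.append(step_fn(states[-1]))
    pred_traj = np.stack(states)

    # Mean squared error per timestep, averaged over all state dimensions.
    rollout_error = np.mean((pred_traj - true_traj) ** 2,
                            axis=tuple(range(1, true_traj.ndim)))
    # Deviation of the predicted total energy from its initial value.
    energies = np.array([energy_fn(s) for s in pred_traj])
    energy_drift = np.abs(energies - energies[0])
    return rollout_error, energy_drift
```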


Unravelling the mystery of the earliest life on Earth: Scientists uncover fresh chemical evidence of microbes in rocks more than 3.3 BILLION years old

Daily Mail - Science & tech

In 1996 Nasa and the White House made the explosive announcement that the rock contained traces of Martian bugs. The meteorite, catalogued as Allen Hills (ALH) 84001, crashed onto the frozen wastes of Antarctica 13,000 years ago and was recovered in 1984. Photographs were released showing elongated segmented objects that appeared strikingly lifelike.



Review for NeurIPS paper: Counterfactual Vision-and-Language Navigation: Unravelling the Unseen

Neural Information Processing Systems

Summary and Contributions: This paper introduces a method for generating *counterfactual* visual features for augmenting the training of vision-and-language navigation (VLN) models (which predict a sequence of actions to carry out a natural language instruction, conditioning on a sequence of visual inputs). Counterfactual training examples are produced by perturbing the visual features in an original training example with a linear combination of visual features from a similar training example. Weights (exogenous variables) in the linear combination are optimized to jointly minimize the edit to the original features and to maximize the probability that a separate speaker (instruction-generation) model assigns to the true instruction conditioned on the resulting counterfactual features, subject to the constraint that the counterfactual features change the interpretation model's predicted action at every timestep. Once these counterfactual features are produced, the model is trained to encourage it to assign equal probability to the actions in the original example when conditioning on the original and the counterfactual features (in imitation learning), or to obtain equal reward (in reinforcement learning). The method improves performance on unseen environments for the R2R benchmark for VLN, and also shows improvements on embodied question answering.
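To make the weight-optimization step concrete, here is a hedged sketch under assumed names and losses (not the reviewed paper's code): per-feature mixing weights are parameterised through a sigmoid, and a short gradient loop trades off the size of the edit against the speaker model's negative log-likelihood of the true instruction; the constraint that the follower's predicted action must change is omitted for brevity.

```python
import torch

def optimise_mixing_weights(feats_orig, feats_near, speaker_nll,
                            n_steps=50, lam=1.0, lr=0.1):
    """Find per-feature mixing weights (a stand-in for the review's 'exogenous variables').

    speaker_nll(features) is assumed to return the speaker model's negative log-likelihood
    of the true instruction given visual features; it and all names here are illustrative.
    """
    logits = torch.zeros_like(feats_orig, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        alpha = torch.sigmoid(logits)                    # weights constrained to (0, 1)
        feats_cf = (1.0 - alpha) * feats_orig + alpha * feats_near
        edit_size = alpha.abs().mean()                   # keep the intervention minimal
        loss = lam * edit_size + speaker_nll(feats_cf)   # keep the instruction plausible
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()
```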



Stars, Stripes, and Silicon: Unravelling the ChatGPT's All-American, Monochrome, Cis-centric Bias

Torrielli, Federico

arXiv.org Artificial Intelligence

This paper investigates the challenges associated with bias, toxicity, unreliability, and lack of robustness in large language models (LLMs) such as ChatGPT. It emphasizes that these issues primarily stem from the quality and diversity of data on which LLMs are trained, rather than the model architectures themselves. As LLMs are increasingly integrated into various real-world applications, their potential to negatively impact society by amplifying existing biases and generating harmful content becomes a pressing concern. The paper calls for interdisciplinary efforts to address these challenges. Additionally, it highlights the need for collaboration between researchers, practitioners, and stakeholders to establish governance frameworks, oversight, and accountability mechanisms to mitigate the harmful consequences of biased LLMs.


Unravelling the mystery of the 'world's ugliest animal': Scientists reveal why male proboscis monkeys have large, phallic noses - and say they're crucial for mating success

Daily Mail - Science & tech

It's safe to say that proboscis monkeys are some of the strangest-looking creatures in the animal kingdom. While female monkeys have pointy noses, the males have large, rather phallic noses – earning them the title of the 'world's ugliest animals'. Now, a study has finally got to the bottom of this unusual facial feature. Scientists from the Australian National University say that their large noses are more than just an eyesore. Instead, they offer several major benefits – especially when it comes to attracting a female partner.


Unravelling the Breakthrough in Energy Efficient Artificial Intelligence – IAM Network

#artificialintelligence

Spiking Neural Networks communicate less frequently and involve fewer calculations to perform a task. Neural networks are the brain of Artificial Intelligence: just like the neurons in the human body, they underpin every AI process. Modern neural networks are effective at performing tasks but lack energy efficiency. That is why tasks such as speech recognition, ECG analysis, and gesture recognition consume extensive amounts of energy.
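As a rough illustration of why spiking networks can be frugal, the sketch below implements a textbook leaky integrate-and-fire neuron: the membrane potential integrates its input and decays, and a binary spike is emitted only when a threshold is crossed, so most timesteps produce no output that needs to be communicated. The parameter values and function name are illustrative, not taken from the article.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron over a sequence of input currents."""
    v = v_reset
    spikes = np.zeros(len(input_current), dtype=np.int8)
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_reset) + i_t)   # leaky integration of the input
        if v >= v_thresh:                        # fire only when the threshold is reached
            spikes[t] = 1
            v = v_reset                          # reset the membrane potential after a spike
    return spikes
```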

