AIhub monthly digest: January 2022 – new voices in AI, bug bounties, and arXiv hits two million
Welcome to our first monthly digest of 2022! This is the place where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we cover our new series New voices in AI, hear from an ACML award winner, and celebrate an arXiv milestone. We're excited to announce the launch of a new series for AIhub: New voices in AI. Hosted by Joe Daly, this series will highlight the work of PhD students, early career researchers, and those in the field of AI with a fresh perspective.
#NeurIPS2021 invited talks round-up: part three – the collective intelligence of army ants
The 35th conference on Neural Information Processing Systems (NeurIPS2021) featured eight invited talks. In the last of our series of round-ups, we give a flavour of the final presentation. Radhika Nagpal's research focusses on collective intelligence, with the overarching goal of understanding how large groups of individuals, following local interaction rules, can cooperate to achieve globally complex behaviour. Each individual is minuscule compared to the massive phenomena the group creates, and, with only a limited view of the actions of the rest of the swarm, they achieve striking coordination. From an algorithmic point of view, collective intelligence emerges from many individuals interacting using simple rules.
#NeurIPS2021 invited talks round-up: part two – benign overfitting, optimal transport, and human and machine intelligence
The 35th conference on Neural Information Processing Systems (NeurIPS2021) featured eight invited talks. Continuing our series of round-ups, we give a flavour of the next three presentations. In his talk, Peter Bartlett focussed on the phenomenon of benign overfitting, one of the surprises to arise from deep learning: deep neural networks seem to predict well even with a perfect fit to noisy training data. The presentation began with a broader perspective on theoretical progress inspired by large-scale machine learning problems. Peter took us back to 1988, and to a NeurIPS paper by Eric Baum and David Haussler, who were interested in the question of generalization for neural networks.
#NeurIPS2021 invited talks round-up: part one – Duolingo, the banality of scale and estimating the mean
The 35th conference on Neural Information Processing Systems (NeurIPS2021) started on Monday 6 December 2021. There were eight invited talks at the conference this year. In this post, we give a flavour of the first three, which covered a diverse range of topics. Duolingo is the world's most downloaded educational app, with around 500 million downloads to date. In his talk, co-founder and CEO Luis von Ahn described the different ways in which the Duolingo team use AI.
#NeurIPS2020 invited talks round-up: part three – causal learning and the genomic bottleneck
In this post we conclude our summaries of the NeurIPS invited talks from the 2020 meeting. In this final instalment, we cover the talks by Marloes Maathuis (ETH Zurich) and Anthony M Zador (Cold Spring Harbor Laboratory). Marloes began her talk on causal learning with a simple example of the phenomenon known as Simpson's paradox, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined. She also talked about the importance of considering causality when making decisions based on such data. Marloes went on to explain the difference between causal and non-causal questions. Non-causal questions are about predictions in the same system, for example, predicting the cancer rate among smokers.
#NeurIPS2020 invited talks round-up: part two – the real AI revolution, and the future for the invisible workers in AI
In this post we continue our summaries of the NeurIPS invited talks from the 2020 meeting. Here, we cover the talks by Chris Bishop (Microsoft Research) and Saiph Savage (Carnegie Mellon University). Chris began his talk by suggesting that now is a particularly exciting time to be involved in AI. What he termed "the real AI revolution" has nothing to do with artificial general intelligence (AGI), but is driven by the way we create software, and hence new technology. Machine learning is becoming ubiquitous and can be used to solve many problems that cannot yet be solved using other methods.
#NeurIPS2020 invited talks round-up: part one
There were seven interesting and varied invited talks at NeurIPS 2020. Here, we summarise the first three, which were given by Charles Isbell (Georgia Tech), Jeff Shamma (King Abdullah University of Science and Technology) and Shafi Goldwasser (UC Berkeley, MIT and Weizmann Institute of Science). The invited talks kicked off in style with a presentation from Charles Isbell. He had posted a teaser on Twitter indicating that he was trying something new with the format, and it certainly did not disappoint. The talk received rave reviews, both in the live chat channel and afterwards on social media.