This viral Dutch Fish Doorbell is peak internet

PCWorld

The Dutch Fish Doorbell mixes livestreams, crowdsourcing, and conservation in all of the best ways. Every spring in the Dutch city of Utrecht, thousands of fish attempt to migrate through the city's canals to reach spawning grounds, but flood gates stay shut for long stretches to manage water levels. So the city came up with a weirdly charming solution: a fish doorbell. The site, called Visdeurbel (or Fish Doorbell), lets anyone in the world help the fish out.


Developing active and flexible microrobots

Robohub

Leiden researchers Professor Daniela Kraft and Mengshi Wei have created microscopic robots that move without sensors, software, or external control. Instead, their behaviour emerges entirely from their shape and the way they interact with their environment. This class of robots opens up entirely new possibilities for biomedical applications. Inspiration to build these robots came from nature. Kraft: "Animals like worms and snakes constantly adapt their shape as they move, which helps them to navigate their environments. Macroscopic robots similarly use flexibility for their function. However, until now, microrobots were either small and rigid, or large and flexible. We wondered if we could realize small and flexible microrobots in our lab."


Calibeating Prediction-Powered Inference

van der Laan, Lars, van der Laan, Mark

arXiv.org Machine Learning

We study semisupervised mean estimation with a small labeled sample, a large unlabeled sample, and a black-box prediction model whose output may be miscalibrated. A standard approach in this setting is augmented inverse-probability weighting (AIPW) [Robins et al., 1994], which protects against prediction-model misspecification but can be inefficient when the prediction score is poorly aligned with the outcome scale. We introduce Calibrated Prediction-Powered Inference, which post-hoc calibrates the prediction score on the labeled sample before using it for semisupervised estimation. This simple step requires no retraining and can improve the original score both as a predictor of the outcome and as a regression adjustment for semisupervised inference. We study both linear and isotonic calibration. For isotonic calibration, we establish first-order optimality guarantees: isotonic post-processing can improve predictive accuracy and estimator efficiency relative to the original score and simpler post-processing rules, while no further post-processing of the fitted isotonic score yields additional first-order gains. For linear calibration, we show first-order equivalence to PPI++. We also clarify the relationship among existing estimators, showing that the original PPI estimator is a special case of AIPW and can be inefficient when the prediction model is accurate, while PPI++ is AIPW with empirical efficiency maximization [Rubin et al., 2008]. In simulations and real-data experiments, our calibrated estimators often outperform PPI and are competitive with, or outperform, AIPW and PPI++. We provide an accompanying Python package, ppi_aipw, at https://larsvanderlaan.github.io/ppi-aipw/.
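
A minimal sketch of the calibration idea, using synthetic data and scikit-learn's IsotonicRegression rather than the ppi_aipw package (the variable names, data-generating process, and estimator form below are illustrative assumptions, not the authors' code):

    # Isotonic calibration of a black-box score on the labeled sample,
    # followed by a PPI-style semisupervised estimate of E[Y].
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    n, N = 200, 20_000                              # small labeled, large unlabeled sample
    f_lab = rng.uniform(size=n)                     # black-box scores on labeled points
    y_lab = 2.0 * f_lab**2 + rng.normal(scale=0.1, size=n)   # score is miscalibrated
    f_unl = rng.uniform(size=N)                     # scores on unlabeled points

    # Post-hoc calibration on the labeled sample; no retraining of the prediction model.
    iso = IsotonicRegression(out_of_bounds="clip").fit(f_lab, y_lab)
    g_lab, g_unl = iso.predict(f_lab), iso.predict(f_unl)

    # Mean of the score over the unlabeled data plus a labeled-sample bias correction.
    theta_ppi   = f_unl.mean() + (y_lab - f_lab).mean()      # original score
    theta_calib = g_unl.mean() + (y_lab - g_lab).mean()      # calibrated score
    print(theta_ppi, theta_calib)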


Pugs and Frenchies could find breathing relief for squishy faces with new treatment

Popular Science

Humans bred dogs that can't breathe; science may finally give them some relief. Snoretox-1 uses inactive tetanus to help keep airways open.


Decentralized Machine Learning with Centralized Performance Guarantees via Gibbs Algorithms

Bermudez, Yaiza, Perlaza, Samir, Esnaola, Iñaki

arXiv.org Machine Learning

In this paper, it is shown, for the first time, that centralized performance is achievable in decentralized learning without sharing the local datasets. Specifically, when clients adopt an empirical risk minimization with relative-entropy regularization (ERM-RER) learning framework and a forward-backward communication between clients is established, it suffices to share the locally obtained Gibbs measures to achieve the same performance as that of a centralized ERM-RER with access to all the datasets. The core idea is that the Gibbs measure produced by client $k$ is used, as reference measure, by client $k+1$. This effectively establishes a principled way to encode prior information through a reference measure. In particular, achieving centralized performance in the decentralized setting requires a specific scaling of the regularization factors with the local sample sizes. Overall, this result opens the door to novel decentralized learning paradigms that shift the collaboration strategy from sharing data to sharing the local inductive bias via the reference measures over the set of models.
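
The chaining argument is easy to see in a toy setting. The sketch below is my own simplification (finite model set, squared-error loss, and a shared regularization parameter lam applied to the summed local losses, which is one way the regularization factor effectively scales with the local sample size); it checks numerically that passing each client's Gibbs measure to the next client as its reference measure reproduces the centralized ERM-RER Gibbs measure over the pooled data:

    # Sequential ERM-RER over a finite model set: q(theta) ∝ ref(theta) * exp(-loss(theta) / lam).
    import numpy as np

    rng = np.random.default_rng(1)
    thetas = np.linspace(-2.0, 2.0, 201)            # finite set of candidate models
    lam = 0.5                                       # common regularization factor
    prior = np.full(thetas.size, 1.0 / thetas.size)

    def gibbs_update(ref, data):
        """ERM-RER solution: tilt the reference measure by the summed empirical loss."""
        total_loss = ((data[:, None] - thetas[None, :]) ** 2).sum(axis=0)
        q = ref * np.exp(-total_loss / lam)
        return q / q.sum()

    datasets = [rng.normal(0.7, 0.3, size=m) for m in (30, 50, 20)]   # three clients

    q = prior                                       # decentralized: chain the Gibbs measures
    for data_k in datasets:
        q = gibbs_update(q, data_k)

    q_central = gibbs_update(prior, np.concatenate(datasets))         # centralized ERM-RER

    print(np.max(np.abs(q - q_central)))            # ≈ 0 up to floating-point error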


Adversarial Label Invariant Graph Data Augmentations for Out-of-Distribution Generalization

Zhang, Simon, DeMilt, Ryan P., Jin, Kun, Xia, Cathy H.

arXiv.org Machine Learning

Out-of-distribution (OoD) generalization is needed when representation learning encounters a distribution shift, which happens frequently in practice when training and testing data come from different environments. Covariate shift is a type of distribution shift that affects only the input data, while the concept distribution stays invariant. We propose RIA - Regularization for Invariance with Adversarial training, a new method for OoD generalization under covariate shift. Motivated by an analogy to $Q$-learning, it performs an adversarial exploration of counterfactual data environments. These new environments are induced by adversarial label-invariant data augmentations that prevent a collapse to an in-distribution-trained learner. It works with many existing OoD generalization methods for covariate shift that can be formulated as constrained optimization problems. We develop an alternating gradient descent-ascent algorithm to solve the problem in the context of causally generated graph data, and perform extensive experiments on OoD graph classification for various kinds of synthetic and natural distribution shifts. We demonstrate that our method achieves high accuracy compared with OoD baselines.
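
The paper operates on causally generated graph data; as a rough, generic sketch of the alternating descent-ascent idea only (not the authors' RIA implementation), the toy below trains on tabular features while an adversarial, label-invariant additive shift plays the role of the counterfactual environment. The mask, invariance penalty, and norm budget are illustrative placeholders:

    # Alternating gradient descent-ascent: descend on the model, ascend on a
    # label-preserving augmentation that induces a hard covariate-shifted environment.
    import torch

    torch.manual_seed(0)
    d, n = 10, 256
    X = torch.randn(n, d)
    y = (X[:, 0] > 0).float()                       # label depends only on feature 0

    model = torch.nn.Linear(d, 1)
    delta = torch.zeros(d, requires_grad=True)      # augmentation: additive covariate shift
    mask = torch.ones(d); mask[0] = 0.0             # never perturb the label-relevant feature
    bce = torch.nn.BCEWithLogitsLoss()
    opt_model = torch.optim.SGD(model.parameters(), lr=0.1)
    opt_delta = torch.optim.SGD([delta], lr=0.1)

    for step in range(300):
        # Descent on the model: fit the original and augmented environments,
        # plus a simple invariance penalty standing in for the paper's regularizer.
        loss_orig = bce(model(X).squeeze(-1), y)
        loss_aug = bce(model(X + mask * delta).squeeze(-1), y)
        obj = loss_orig + loss_aug + (loss_orig - loss_aug) ** 2
        opt_model.zero_grad(); obj.backward(); opt_model.step()

        # Ascent on the augmentation: make the counterfactual environment as hard
        # as possible, within a norm budget, without touching the label mechanism.
        adv = bce(model(X + mask * delta).squeeze(-1), y)
        opt_delta.zero_grad(); (-adv).backward(); opt_delta.step()
        with torch.no_grad():
            delta.clamp_(-1.0, 1.0)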


Back to school: robots learn from factory workers

Robohub

What if training a robot to handle dirty, dangerous work on the factory floor was as simple as showing it how? Czech startup RoboTwin is doing exactly that, helping factory workers teach robots new skills by demonstration. Instead of writing complex code, workers perform the job once and RoboTwin's technology turns those movements into a robot programme - opening the door to automation for smaller manufacturers. Founded in Prague in 2021, RoboTwin builds handheld devices and no-code software that capture human movements and translate them into instructions for industrial robots. The aim is to make automation faster, simpler and more accessible to manufacturers that do not have specialist robotics programmers.


Emergence of fragility in LLM-based social networks: an interview with Francesco Bertolotti

AIHub

What is the topic of the research in your paper? In our paper, we study how social structures emerge when the "individuals" in a network are artificial agents powered by large language models. To do so, we analyzed a platform called Moltbook - a social network entirely populated by AI agents, specifically LLM-based agents, that interact with each other through posts and comments. This social network creates a very unusual but powerful setting: instead of observing human behavior, we can study a brand new society made only of artificial entities and observe whether it organizes itself in similar ways. To understand the structure of interactions in this system, we modelled the platform as a network, where each agent is a node and each interaction is a connection between them.


mlr3torch: A Deep Learning Framework in R based on mlr3 and torch

Fischer, Sebastian, Burk, Lukas, Zhang, Carson, Bischl, Bernd, Binder, Martin

arXiv.org Machine Learning

Deep learning (DL) has become a cornerstone of modern machine learning (ML) praxis. We introduce the R package mlr3torch, which is an extensible DL framework for the mlr3 ecosystem. It is built upon the torch package, and simplifies the definition, training, and evaluation of neural networks for both tabular data and generic tensors (e.g., images) for classification and regression. The package implements predefined architectures, and torch models can easily be converted to mlr3 learners. It also allows users to define neural networks as graphs. This representation is based on the graph language defined in mlr3pipelines and allows users to define the entire modeling workflow, including preprocessing, data augmentation, and network architecture, in a single graph. Through its integration into the mlr3 ecosystem, the package allows for convenient resampling, benchmarking, preprocessing, and more. We explain the package's design and features and show how to customize and extend it to new problems. Furthermore, we demonstrate the package's capabilities using three use cases, namely hyperparameter tuning, fine-tuning, and defining architectures for multimodal data. Finally, we present some runtime benchmarks.


We might finally know how to use quantum computers to boost AI

New Scientist

Quantum computers might eventually be able to handle some AI applications that currently require huge amounts of conventional computing power. Such a development would be a major boost to machine learning and similar artificial intelligence algorithms. Quantum computers hold the promise of eventually being able to complete certain calculations that are impossible for conventional computers. For years, researchers have been debating whether these advantages over conventional computers extend to tasks that involve lots of data, and the algorithms that learn from them - in other words, the machine learning that underlies many AI programs. Now, Hsin-Yuan Huang at the quantum computing firm Oratomic and his colleagues argue that the answer ought to be "yes". Their mathematical work aims to lay the foundations for a future where quantum computers offer a broad boost to AI. "Machine learning is really utilised everywhere in science and technology and also everyday life.