Identifiability of Potentially Degenerate Gaussian Mixture Models With Piecewise Affine Mixing

Xu, Danru, Lachapelle, Sébastien, Magliacane, Sara

arXiv.org Machine Learning

Causal representation learning (CRL) aims to identify the underlying latent variables from high-dimensional observations, even when the variables are dependent on each other. We study this problem for latent variables that follow a potentially degenerate Gaussian mixture distribution and that are only observed through a piecewise affine mixing function. We provide a series of progressively stronger identifiability results for this challenging setting, in which the probability density functions are ill-defined because of the potential degeneracy. For identifiability up to permutation and scaling, we leverage a sparsity regularization on the learned representation. Based on our theoretical results, we propose a two-stage method to estimate the latent variables by enforcing sparsity and Gaussianity in the learned representations. Experiments on synthetic and image data highlight our method's effectiveness in recovering the ground-truth latent variables.
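The abstract's idea of enforcing sparsity and Gaussianity on a learned representation can be sketched as a combined penalty. This is a minimal illustration, not the paper's actual objective: the L1 term, the moment-matching form of the Gaussianity penalty, and the weights `lam_sparse`/`lam_gauss` are all assumptions made here for clarity.

```python
import numpy as np

def sparsity_gaussianity_loss(z, recon, x, lam_sparse=0.1, lam_gauss=0.1):
    """Illustrative training loss for a representation z (n_samples, n_dims):
    reconstruction error plus an L1 sparsity penalty on z plus a crude
    Gaussianity penalty that matches z's moments to zero mean / unit variance.
    All terms and weights are assumptions for illustration only."""
    recon_loss = np.mean((recon - x) ** 2)          # fit the observations
    sparse_pen = np.mean(np.abs(z))                 # encourage sparse latents
    mean_pen = np.mean(z, axis=0) ** 2              # per-dim squared mean
    var_pen = (np.var(z, axis=0) - 1.0) ** 2        # per-dim variance gap to 1
    gauss_pen = np.mean(mean_pen + var_pen)         # moment-matching penalty
    return recon_loss + lam_sparse * sparse_pen + lam_gauss * gauss_pen
```

In practice such a loss would be minimized over an encoder/decoder pair; here only the scalar objective is shown.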






Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation

Neural Information Processing Systems

To this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets. Our formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume and obtain its estimated location in 3D. We achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector).
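The decoding step of a heatmap-based point localizer, as described in the abstract, is typically a soft-argmax: a softmax-weighted average of voxel coordinates. The sketch below shows only that decoding step under stated assumptions (a flattened heatmap and an explicit coordinate grid); the neural field that produces a different heatmap per queried body point is not modeled here.

```python
import numpy as np

def soft_argmax_3d(heatmap, grid):
    """Decode a heatmap into a continuous 3D location estimate.

    heatmap: shape (N,), unnormalized scores over N voxels.
    grid:    shape (N, 3), the 3D coordinate of each voxel.
    Returns the softmax-weighted average coordinate, shape (3,).
    This is the standard soft-argmax decoding, shown for illustration.
    """
    w = np.exp(heatmap - heatmap.max())  # stable softmax weights
    w /= w.sum()
    return w @ grid                      # expected 3D location
```

A sharply peaked heatmap yields a location near the peak voxel; a flatter heatmap averages over the volume, which is what makes the estimate differentiable end to end.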



Google DeepMind wants to know if chatbots are just virtue signaling

MIT Technology Review

Google DeepMind is calling for the moral behavior of large language models--such as what they do when called on to act as companions, therapists, medical advisors, and so on--to be scrutinized with the same kind of rigor as their ability to code or do math. As LLMs improve, people are asking them to play more and more sensitive roles in their lives. Agents are starting to take actions on people's behalf. LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks. With coding and math, you have clear-cut, correct answers that you can check, William Isaac, a research scientist at Google DeepMind, told me when I met him and Julia Haas, a fellow research scientist at the firm, for an exclusive preview of their work, which is published today. That's not the case for moral questions, which typically have a range of acceptable answers: "Morality is an important capability but hard to evaluate," says Isaac. "In the moral domain, there's no right and wrong," adds Haas.