Müller, Thomas
Modelling collective motion based on the principle of agency
Ried, Katja, Müller, Thomas, Briegel, Hans J.
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals. In theoretical models, these rules are normally \emph{assumed} to take a particular form, possibly constrained by heuristic arguments. We propose a new class of models that describes the individuals as \emph{agents}, capable of deciding for themselves how to act and of learning from their experiences. In this model, the local interaction rules need not be postulated, since they \emph{emerge} from the learning process. We apply this ansatz to a concrete scenario involving marching locusts, in order to model the phenomenon of density-dependent alignment. We show that our learning agent-based model can reproduce the collective dynamics described by a Fokker-Planck equation and, most notably, that the agents can learn the appropriate local interactions without strong prior assumptions about their form. These results suggest that learning agent-based models are a powerful tool for studying a broader class of problems involving collective motion and animal agency in general.
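To make the ansatz concrete, the following is a minimal, heavily simplified sketch (not the authors' implementation) of a learning-agent model of marching locusts on a one-dimensional ring: each agent perceives the majority heading of its neighbours and reinforces, via a small table of action weights (a crude stand-in for whatever learning scheme the agents actually use in the paper), the actions that turn out to be aligned with that majority. All parameter names and values here are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy model: N agents on a ring of circumference 1, each heading left (-1) or
# right (+1). Percept 0/1/2 = local majority is left / none / right; action
# 0/1 = move left / right. Aligning with the majority is rewarded.
N, STEPS, RADIUS, LR = 50, 500, 0.1, 0.1
pos = rng.uniform(0, 1, N)            # positions on the ring
vel = rng.choice([-1, 1], N)          # current heading of each agent
weights = np.ones((N, 3, 2))          # weights[agent, percept, action]

for _ in range(STEPS):
    for i in range(N):
        d = np.abs(pos - pos[i])
        neighbours = (np.minimum(d, 1 - d) < RADIUS) & (np.arange(N) != i)
        majority = np.sign(vel[neighbours].sum()) if neighbours.any() else 0
        percept = int(majority) + 1
        p = weights[i, percept] / weights[i, percept].sum()
        action = rng.choice(2, p=p)
        vel[i] = -1 if action == 0 else 1
        if majority != 0 and vel[i] == majority:   # reward alignment
            weights[i, percept, action] += LR
    pos = (pos + 0.002 * vel) % 1.0

alignment = np.abs(vel.mean())        # order parameter: 1 = fully aligned
print(f"final alignment: {alignment:.2f}")
\end{verbatim}
In this crude sketch the interaction rule (align with the local majority) is not hard-coded into the dynamics; it only becomes dominant to the extent that the reward structure makes aligned actions more probable over time, which is the qualitative point of the learning-agent approach.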
Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks
Kallweit, Simon, Müller, Thomas, McWilliams, Brian, Gross, Markus, Novák, Jan
We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds---e.g., the characteristic silver lining and the "whiteness" of the inner body---challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enables it to learn faster and to predict with high accuracy while using few coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds, at interactive rates. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, also for high-quality production of animated content.
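As an illustration of the progressive-feeding idea, here is a minimal PyTorch sketch (not the authors' architecture): each residual block receives the current hidden state together with the next level of a hierarchical descriptor, and a final layer predicts a single radiance value per shading configuration. The descriptor dimensions, network width, and data are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected residual block that also ingests one level of the
    hierarchical descriptor (a simplification of progressive feeding)."""
    def __init__(self, hidden_dim, descriptor_dim):
        super().__init__()
        self.fc1 = nn.Linear(hidden_dim + descriptor_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, h, level):
        x = self.act(self.fc1(torch.cat([h, level], dim=-1)))
        x = self.fc2(x)
        return self.act(h + x)          # residual connection

class RadiancePredictor(nn.Module):
    """Toy radiance-predicting network: descriptor levels are fed in one by
    one, block by block, rather than all at once at the input layer."""
    def __init__(self, descriptor_dims, hidden_dim=64):
        super().__init__()
        self.input = nn.Linear(descriptor_dims[0], hidden_dim)
        self.blocks = nn.ModuleList(
            [ResidualBlock(hidden_dim, d) for d in descriptor_dims[1:]]
        )
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, levels):
        h = torch.relu(self.input(levels[0]))
        for block, level in zip(self.blocks, levels[1:]):
            h = block(h, level)
        return self.head(h)

# Example: a 3-level descriptor sampled at 8 shading points (random stand-in data).
levels = [torch.randn(8, d) for d in (16, 32, 64)]
model = RadiancePredictor(descriptor_dims=[16, 32, 64])
radiance = model(levels)                # shape: (8, 1)
\end{verbatim}
The point of the staged design, as described in the abstract, is that coarse geometric context conditions the early layers while finer detail is injected later, which in this toy form corresponds to each block seeing only one descriptor level in addition to the running hidden state.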