Dogs can tell when you want to give them a treat – even if you don't

New Scientist

Pet dogs know when you intend to give them a treat, even if you drop it where they can't get to it.

Dogs can understand when humans mean well, even if they don't get what they want from us. Prior to this work, the ability to distinguish between a human being unwilling or unable to perform a task had been found only in non-human primates. The close social bond between humans and canines is well established, but researchers have a limited understanding of whether and how dogs comprehend human intent. To see if pet dogs can distinguish between intentional and accidental actions by strangers, Christoph Völter at the University of Veterinary Medicine Vienna in Austria and his colleagues ran tests in which humans offered dogs food while the animals' body movements were tracked using eight cameras. Each dog and human were separated by a transparent plastic panel with holes through which a slice of sausage could be passed.

Deep learning lets algorithm produce best solutions to molecules' Schrödinger equations yet


A new deep-learning algorithm from researchers in Austria produces more accurate numerical solutions to the Schrödinger equation than ever before for a number of different molecules, at relatively modest computational cost. Surprisingly, the researchers found that, whereas some 'pre-training' of the algorithm could improve its predictive abilities, more substantial training was actively harmful. As the Schrödinger equation can be solved analytically only for the hydrogen atom, researchers wishing to estimate the energies of molecules are forced to rely on numerical methods. Simpler approximations such as density functional theory and the Hartree-Fock method, which is almost as old as the Schrödinger equation itself, can treat far larger systems but often give inaccurate results. Newer techniques such as complete active space self-consistent field (CASSCF) give results closer to experiment, but require much more computation.
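As an illustration of the variational principle on which such numerical solvers, including the neural-network ones, build, the sketch below minimises the energy of a trial wavefunction for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 (in units with hbar = m = omega = 1). The Gaussian ansatz and grid sizes are illustrative choices, not the method from the article.

```python
import numpy as np

# Variational principle: E[psi] >= E_ground for any trial wavefunction.
# Trial ansatz psi(x; a) = exp(-a * x**2) for the 1D harmonic oscillator;
# the exact ground state is recovered at a = 0.5 with energy 0.5.
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

def energy(a):
    psi = np.exp(-a * x**2)
    dpsi = np.gradient(psi, dx)              # numerical derivative psi'
    kinetic = 0.5 * np.sum(dpsi**2) * dx     # (1/2) integral |psi'|^2 dx
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    norm = np.sum(psi**2) * dx
    return (kinetic + potential) / norm

# Scan the single variational parameter; deep-learning approaches replace
# this fixed ansatz with a neural network and gradient-based optimisation.
avals = np.linspace(0.1, 2.0, 200)
energies = [energy(a) for a in avals]
best = avals[int(np.argmin(energies))]
print(f"best a = {best:.3f}, E = {min(energies):.4f}")
```

In the neural-network setting the ansatz has millions of parameters instead of one, but the objective is the same: drive the energy expectation value down toward the true ground-state energy.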



- Are passionate about contributing to solutions that benefit science and business at the same time;
- Are able to communicate with university and business stakeholders;
- Have knowledge of several of the following techniques: mathematical programming, dynamic programming, reinforcement learning, supervised learning, simulation, business analytics, heuristics, etc.;
- Can code in one or more of the following programming languages: Python, Java, C, Delphi, Matlab, and R;
- Have, or will shortly acquire, an MSc degree in Industrial Engineering, Operations Research, Applied Mathematics, or a related programme;
- Possess excellent communication skills and are proficient in English.

A Formal Approach to Identifying the Impact of Noise on Neural Networks

Communications of the ACM

The past few years have seen an incredible rise in the use of smart systems based on artificial neural networks (ANNs), owing to their remarkable classification capability and decision making comparable to that of humans. Yet, as shown in Figure 1, the addition of even a small amount of noise to the input may trigger these networks to give incorrect results [13]. This is an alarming limitation of ANNs, particularly for those deployed in safety-critical applications such as autonomous vehicles, aviation, and healthcare. For instance, consider a self-driving car using an ANN to perceive traffic signs, as shown in Figure 2; correct classification by the ANN in noisy real-world environments is crucial for the safety of humans and objects in the vicinity of the car. [Figure 1: magnitudes of the image input and the noise applied to it.]
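The fragility described above can be sketched with a toy linear classifier: a perturbation aligned with the weight vector, of magnitude just over the decision margin, flips the predicted label. The weights and input below are hypothetical numbers chosen for illustration; the same mechanism, scaled up, underlies adversarial examples for full ANNs.

```python
import numpy as np

# Toy linear classifier: label = sign(w . x).
w = np.array([0.3, -0.5, 0.2])       # hypothetical weights
x = np.array([1.0, 0.4, 0.9])        # hypothetical "clean" input

score = w @ x
# Smallest-norm perturbation that crosses the decision boundary:
# move against w just past the margin |score| / ||w||.
noise = -1.01 * score * w / (w @ w)
x_noisy = x + noise

print(np.sign(w @ x), np.sign(w @ x_noisy))  # the decision flips
print(np.linalg.norm(noise), np.linalg.norm(x))
```

For a high-dimensional image classifier the analogous perturbation can be imperceptibly small relative to the input, which is what makes the limitation alarming in safety-critical settings.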

DeepMind breaks 50-year math record using AI; new record falls a week later


Matrix multiplication is at the heart of many machine learning breakthroughs, and it just got faster, twice. Last week, DeepMind announced it had discovered a more efficient way to perform matrix multiplication, breaking a 50-year-old record. This week, two Austrian researchers at Johannes Kepler University Linz claimed they had bested that new record by one step. Matrix multiplication, which involves multiplying two rectangular arrays of numbers, is often found at the heart of speech recognition, image recognition, smartphone image processing, compression, and the generation of computer graphics. Graphics processing units (GPUs) are particularly good at performing matrix multiplication due to their massively parallel nature.
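The 50-year-old record in question traces back to Strassen-style schemes that trade extra additions for fewer multiplications. As a minimal sketch of the kind of scheme being optimised, Strassen's classic 1969 recipe multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8; AlphaTensor and its successors search for analogous schemes for larger matrices.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)   # matches [[19, 22], [43, 50]]
```

Applied recursively to matrix blocks, saving even one multiplication per level compounds, which is why shaving single steps off these schemes matters for large matrices.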

AI mathematician, tumour fungi and Africa's coronavirus genomes


AlphaTensor was designed to perform matrix multiplications, but the same approach could be used to tackle other mathematical challenges. Credit: DeepMind

An artificial intelligence (AI) developed by machine-learning company DeepMind in London has tackled a type of calculation called matrix multiplication. The system, called AlphaTensor, leverages the skills that DeepMind's game-playing AIs use to beat human players at games such as Go and chess. Matrix multiplication is a widely used mathematical technique that involves multiplying numbers arranged in grids, or matrices, that might represent sets of pixels in images, air conditions in a weather model or the internal workings of an artificial neural network. AlphaTensor broke ground by finding shortcuts to solve these problems with fewer steps. The same general approach could have applications in other kinds of mathematical operation, its developers say, such as decomposing complex waves or other mathematical objects into simpler ones.

DeepMind AI invents faster algorithms to solve tough maths puzzles


AlphaTensor was designed to perform matrix multiplications, but the same approach could be used to tackle other mathematical challenges. Credit: DeepMind

Researchers at DeepMind in London have shown that artificial intelligence (AI) can find shortcuts in a fundamental type of mathematical calculation, by turning the problem into a game and then leveraging the machine-learning techniques that another of the company's AIs used to beat human players in games such as Go and chess. The AI discovered algorithms that break decades-old records for computational efficiency, and the team's findings, published on 5 October in Nature [1], could open up new paths to faster computing in some fields. "It is very impressive," says Martina Seidl, a computer scientist at Johannes Kepler University in Linz, Austria. "This work demonstrates the potential of using machine learning for solving hard mathematical problems." Advances in machine learning have allowed researchers to develop AIs that generate language, predict the shapes of proteins [2] or detect hackers.

On Tackling Explanation Redundancy in Decision Trees

Journal of Artificial Intelligence Research

Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches based on so-called intrinsic interpretability, and it is at the core of recent proposals for applying interpretable ML models in high-risk applications. The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct. Indeed, in the case of DTs, explanations correspond to DT paths. Since decision trees are ideally shallow, and so paths contain far fewer features than the total number of features, explanations in DTs are expected to be succinct, and hence interpretable. This paper offers both theoretical and experimental arguments demonstrating that, as long as the interpretability of decision trees equates with the succinctness of explanations, decision trees ought not to be deemed interpretable. The paper introduces logically rigorous path explanations and path explanation redundancy, and proves that there exist functions for which decision trees must exhibit paths with explanation redundancy that is arbitrarily larger than the actual path explanation. The paper also proves that only a very restricted class of functions can be represented with DTs that exhibit no explanation redundancy. In addition, the paper includes experimental results substantiating that path explanation redundancy is observed ubiquitously, not only in decision trees obtained using different tree learning algorithms, but also in a wide range of publicly available decision trees. The paper also proposes polynomial-time algorithms for eliminating path explanation redundancy, which in practice require negligible time to compute. Thus, these algorithms serve to indirectly attain irreducible, and so succinct, explanations for decision trees. Furthermore, the paper includes novel results related to the duality and enumeration of explanations, based on using SAT solvers as witness-producing NP oracles.
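The notion of path explanation redundancy can be sketched with a toy example: a tree that computes f(x1, x2) = x2 but splits on x1 first, so every root-to-leaf path mentions a feature irrelevant to the prediction. The brute-force sufficiency check below is an illustrative stand-in for the paper's polynomial-time algorithms, not a reproduction of them.

```python
from itertools import product

# Hypothetical tree computing f(x1, x2) = x2, but splitting on x1 first,
# so both branches repeat the same test on x2.
def tree(x1, x2):
    if x1 == 0:
        return 1 if x2 == 1 else 0
    else:
        return 1 if x2 == 1 else 0

def path_literals(x1, x2):
    # Literals tested along the path taken by input (x1, x2).
    return {("x1", x1), ("x2", x2)}

def is_sufficient(literals, prediction):
    # A literal set suffices as an explanation if every input
    # consistent with it receives the same prediction.
    fixed = dict(literals)
    for x1, x2 in product([0, 1], repeat=2):
        point = {"x1": x1, "x2": x2}
        if all(point[name] == val for name, val in fixed.items()):
            if tree(x1, x2) != prediction:
                return False
    return True

x = (0, 1)
pred = tree(*x)
path = path_literals(*x)
# Drop each literal in turn: if the remainder still entails the
# prediction, the dropped literal is explanation redundancy.
redundant = {lit for lit in path if is_sufficient(path - {lit}, pred)}
print(redundant)   # {('x1', 0)}: the x1 test on this path is redundant
```

Here the path {x1 = 0, x2 = 1} is the "explanation" a user would read off the tree, yet x2 = 1 alone entails the prediction, illustrating why path succinctness does not automatically follow from tree shallowness.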

#IJCAI2022 invited talk: Insights in medicine with Mihaela van der Schaar


The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July in Vienna. As part of the conference there were eight fascinating invited talks, one of which was given by Mihaela van der Schaar. The title of her talk was "Panning for insights in medicine and beyond: New frontiers in machine learning interpretability". Mihaela began by explaining why the field of medicine is so complex. Differences between individuals, due to factors such as genetic background, environmental exposure, and lifestyle, lead to variations in symptoms, disease trajectories, and responses to treatments.

Using reinforcement learning for control of direct ink writing


Closed-loop printing enhanced by machine learning. Using fluids for 3D printing may seem paradoxical at first glance, but not all fluids are watery. Many useful materials, from inks to hydrogels, are more viscous and thus qualify for printing. Yet their potential has remained relatively unexplored because of the limited control over their behaviour. Now, researchers in the Bickel group at the Institute of Science and Technology Austria (ISTA) are employing machine learning in virtual environments to achieve better results in real-world experiments.