 Styria


Interview with Lea Demelius: Researching differential privacy

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy. I am studying at Graz University of Technology in Austria. My research focuses on differential privacy, which is widely regarded as the state-of-the-art for protecting privacy in data analysis and machine learning.
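For readers unfamiliar with the mechanism, here is a minimal sketch of one classic differential privacy primitive, the Laplace mechanism for a counting query. The interview does not discuss this code; the function name and the epsilon value are illustrative choices, not details from Demelius's work.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    Adds Laplace noise calibrated to the query's sensitivity and the
    privacy budget epsilon: a smaller epsilon means more noise and
    stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release how many records match a predicate.
# A counting query changes by at most 1 if one person's data changes,
# so its sensitivity is 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```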


Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning (Bettina Könighofer)

Neural Information Processing Systems

In many Deep Reinforcement Learning (RL) problems, decisions in a trained policy vary in significance for the expected safety and performance of the policy. Since RL policies are very complex, testing efforts should concentrate on states in which the agent's decisions have the highest impact on the expected outcome. In this paper, we propose a novel model-based method to rigorously compute a ranking of state importance across the entire state space. We then focus our testing efforts on the highest-ranked states. In this paper, we focus on testing for safety. However, the proposed methods can be easily adapted to test for performance.
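To make "decisions vary in significance" concrete, here is a minimal sketch that ranks states by how much the policy's chosen action matters, assuming a tabular policy and model-based action-value estimates. The gap-based score is an illustrative proxy, not the ranking method proposed in the paper.

```python
import numpy as np

def rank_states_by_importance(q_values: np.ndarray, policy: np.ndarray) -> np.ndarray:
    """Rank states by how much the policy's chosen action matters.

    q_values: shape (n_states, n_actions), action values from a model of the MDP.
    policy:   shape (n_states,), the action the trained policy takes in each state.
    The score is the gap between the best available action value and the value of
    the policy's action; a large gap means the decision in that state has a large
    effect on the expected outcome, so it deserves testing effort first.
    """
    chosen = q_values[np.arange(len(policy)), policy]
    gap = q_values.max(axis=1) - chosen
    return np.argsort(gap)[::-1]  # most important states first

# Example with 4 states and 2 actions: state 2 has the largest gap and is ranked first.
Q = np.array([[1.0, 0.9], [0.5, 0.5], [0.2, 0.9], [0.7, 0.6]])
pi = np.array([0, 0, 0, 0])
print(rank_states_by_importance(Q, pi))
```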


Anomaly Detection in Complex Dynamical Systems: A Systematic Framework Using Embedding Theory and Physics-Inspired Consistency

arXiv.org Artificial Intelligence

TU Graz, Institute for Technical Informatics, Inffeldgasse 16/I, 8010 Graz, Austria; AI AUSTRIA, RL Community, Wollzeile 24/12, 1010 Vienna, Austria.
Abstract: Anomaly detection in complex dynamical systems is essential for ensuring reliability, safety, and efficiency in industrial and cyber-physical infrastructures. Predictive maintenance helps prevent costly failures, while cybersecurity monitoring has become critical as digitized systems face growing threats. Many of these systems exhibit oscillatory behaviors and bounded motion, requiring anomaly detection methods that capture structured temporal dependencies while adhering to physical consistency principles. In this work, we propose a system-theoretic approach to anomaly detection, grounded in classical embedding theory and physics-inspired consistency principles. We build upon the Fractal Whitney Embedding Prevalence Theorem, extending traditional embedding techniques to complex system dynamics. Additionally, we introduce state-derivative pairs as an embedding strategy to capture system evolution. To enforce temporal coherence, we develop a Temporal Differential Consistency Autoencoder (TDC-AE), incorporating a TDC-Loss that aligns the approximated derivatives of latent variables with their dynamic representations. We evaluate our method on the C-MAPSS dataset, a benchmark for turbofan aeroengine degradation. TDC-AE outperforms LSTMs and Transformers while achieving a 200x reduction in MAC operations, making it particularly suited for lightweight edge computing. Our findings support the hypothesis that anomalies disrupt stable system dynamics, providing a robust, interpretable signal for anomaly detection. Introduction: Anomaly detection in complex physical dynamical systems is a critical research area as industrial and engineered systems become more sophisticated. Identifying deviations from expected behavior is essential for ensuring reliability, safety, and efficiency.
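One plausible reading of the TDC-Loss idea is sketched below, assuming the latent vector is split into a "state" half and a "derivative" half and that consecutive time steps are available. This is an illustration of aligning finite differences of latent states with latent derivatives, not the authors' exact formulation.

```python
import torch

def tdc_loss(z_t: torch.Tensor, z_next: torch.Tensor, dt: float) -> torch.Tensor:
    """Temporal differential consistency penalty (illustrative).

    The latent vector is split into a "state" half and a "derivative" half.
    The finite difference of the state half across consecutive time steps
    should match the derivative half, mirroring dx/dt ~ (x_{t+1} - x_t) / dt.
    """
    dim = z_t.shape[-1] // 2
    state_t, deriv_t = z_t[..., :dim], z_t[..., dim:]
    state_next = z_next[..., :dim]
    finite_diff = (state_next - state_t) / dt
    return torch.mean((finite_diff - deriv_t) ** 2)

# In training, this term would be added to the usual reconstruction loss:
# loss = mse(x_hat, x) + lambda_tdc * tdc_loss(z_t, z_next, dt)
```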


Some of the world's smartest traffic lights are getting smarter

Popular Science

Urban planners in Vienna, Austria, installed their first smart traffic lights specifically designed to increase pedestrian safety in 2018. After years of analysis and improvement, researchers at Graz University of Technology (TU Graz) have now rolled out a second generation of far more sophisticated, deep learning-based software to 21 lights at four crosswalks. Unlike its predecessor, the new system is programmed to provide greater help to pedestrians with walking aids, wheelchairs, and even baby strollers. People with disabilities are disproportionately at risk when crossing busy streets. Pedestrians using wheelchairs, for example, are 36 percent more likely to die in a car-related accident than pedestrians struck while standing.


Machine learning-based classification for Single Photon Space Debris Light Curves

arXiv.org Artificial Intelligence

The growing amount of man-made debris in Earth's orbit poses a threat to active satellite missions due to the risk of collision. Characterizing unknown debris is, therefore, of high interest. Light Curves (LCs) are temporal variations of object brightness and have been shown to contain information such as shape, attitude, and rotational state. Since 2015, the Satellite Laser Ranging (SLR) group of the Space Research Institute (IWF) Graz has been building a space debris LC catalogue. The LCs are captured on a Single Photon basis, which sets them apart from CCD-based measurements. In recent years, Machine Learning (ML) models have emerged as a viable technique for analyzing LCs. This work aims to classify Single Photon space debris LCs using ML. We have explored LC classification using k-Nearest Neighbour (k-NN), Random Forest (RDF), XGBoost (XGB), and Convolutional Neural Network (CNN) classifiers in order to assess the difference in performance between traditional and deep models. Instead of performing classification on the direct LC data, we first extracted features from the data using an automated pipeline. We apply our models to three tasks: classifying individual objects, objects grouped into families according to origin (e.g., GLONASS satellites), and objects grouped into general types (e.g., rocket bodies). We successfully classified space debris LCs captured on a Single Photon basis, obtaining accuracies as high as 90.7%. Further, our experiments show that the classifiers achieve better accuracy with automatically extracted features than with the direct LC data.
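The classify-on-extracted-features pattern the abstract describes can be sketched as follows. The summary statistics, synthetic data, and classifier settings here are placeholders; the paper's automated pipeline extracts far richer features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def light_curve_features(lc: np.ndarray) -> np.ndarray:
    """Map a raw light curve to a small feature vector (illustrative).

    Real pipelines extract richer features (periodicity, skewness,
    wavelet coefficients, ...); these summary statistics only show
    the feature-then-classify workflow.
    """
    return np.array([lc.mean(), lc.std(), lc.min(), lc.max(),
                     np.median(np.abs(np.diff(lc)))])

# X_raw: variable-length light curves, y: object / family / type labels (synthetic here).
rng = np.random.default_rng(0)
X_raw = [rng.random(rng.integers(200, 400)) for _ in range(60)]
y = np.repeat([0, 1, 2], 20)
X = np.vstack([light_curve_features(lc) for lc in X_raw])

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random Forest", RandomForestClassifier(n_estimators=200))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```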


Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning

arXiv.org Artificial Intelligence

In many Deep Reinforcement Learning (RL) problems, decisions in a trained policy vary in significance for the expected safety and performance of the policy. Since RL policies are very complex, testing efforts should concentrate on states in which the agent's decisions have the highest impact on the expected outcome. In this paper, we propose a novel model-based method to rigorously compute a ranking of state importance across the entire state space. We then focus our testing efforts on the highest-ranked states. In this paper, we focus on testing for safety. However, the proposed methods can be easily adapted to test for performance. In each iteration, our testing framework computes optimistic and pessimistic safety estimates. These estimates provide lower and upper bounds on the expected outcomes of the policy execution across all modeled states in the state space. Our approach divides the state space into safe and unsafe regions upon convergence, providing clear insights into the policy's weaknesses. Two important properties characterize our approach. (1) Optimal Test-Case Selection: At any time in the testing process, our approach evaluates the policy in the states that are most critical for safety. (2) Guaranteed Safety: Our approach can provide formal verification guarantees over the entire state space by sampling only a fraction of the policy. Any safety properties assured by the pessimistic estimate are formally proven to hold for the policy. We provide a detailed evaluation of our framework on several examples, showing that our method discovers unsafe policy behavior with low testing effort.
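A toy sketch of the iterative scheme the abstract describes is given below, assuming per-state optimistic and pessimistic safety estimates are available from the model. Selecting the state with the widest gap between the bounds is an illustrative selection rule, and the bound-update step is left out; it is not the paper's exact algorithm.

```python
def select_next_test_state(pessimistic: dict, optimistic: dict, tested: set):
    """Pick the untested state where the safety bounds are furthest apart,
    i.e. where executing the policy is expected to be most informative."""
    gaps = {s: optimistic[s] - pessimistic[s]
            for s in optimistic
            if s not in tested and optimistic[s] > pessimistic[s]}
    return max(gaps, key=gaps.get) if gaps else None

def classify_states(pessimistic: dict, optimistic: dict, threshold: float):
    """Split the state space once the bounds have converged enough:
    a state is provably safe if even its pessimistic estimate meets the
    threshold, and provably unsafe if even its optimistic estimate misses it."""
    safe = {s for s, v in pessimistic.items() if v >= threshold}
    unsafe = {s for s, v in optimistic.items() if v < threshold}
    return safe, unsafe

# Example bounds for three states; s1 is provably safe, s2 provably unsafe,
# and s0 is where testing should continue.
pess = {"s0": 0.4, "s1": 0.95, "s2": 0.1}
opti = {"s0": 0.9, "s1": 0.99, "s2": 0.3}
print(select_next_test_state(pess, opti, tested=set()))
print(classify_states(pess, opti, threshold=0.9))
```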


A Pylon Model for Semantic Segmentation

Neural Information Processing Systems

Graph cut optimization is one of the standard workhorses of image segmentation since for binary random field representations of the image, it gives globally optimal results and there are efficient polynomial time implementations. Often, the random field is applied over a flat partitioning of the image into non-intersecting elements, such as pixels or super-pixels. In the paper we show that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency). As a result of such inference, the image gets partitioned into a set of segments that may come from different layers of the tree. We apply this formulation, which we call the pylon model, to the task of semantic segmentation where the goal is to separate an image into areas belonging to different semantic classes. The experiments highlight the advantage of inference on a segmentation tree (over a flat partitioning) and demonstrate that the optimization in the pylon model is able to flexibly choose the level of segmentation across the image. Overall, the proposed system has superior segmentation accuracy on several datasets (Graz-02, Stanford background) compared to previously suggested approaches.
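To illustrate how segments "may come from different layers of the tree", here is a much-simplified sketch that selects segments from a segmentation tree by minimizing unary costs only; the pylon model's boundary terms and graph-cut inference are omitted, and the tree and cost values are made up.

```python
def best_partition(node, unary_cost):
    """Choose segments from a segmentation tree to cover the image at
    minimum unary cost (boundary terms omitted for brevity).

    `node` is a dict {"id": ..., "children": [subtrees]}; `unary_cost[seg_id]`
    is the cost of explaining that segment with its best class label.
    Each region is explained either by the segment at this node or by a
    partition of its children, so selected segments can come from
    different layers of the tree.
    """
    if not node["children"]:
        return unary_cost[node["id"]], [node["id"]]
    child_cost, child_segments = 0.0, []
    for child in node["children"]:
        cost, segs = best_partition(child, unary_cost)
        child_cost += cost
        child_segments += segs
    own_cost = unary_cost[node["id"]]
    if own_cost <= child_cost:
        return own_cost, [node["id"]]
    return child_cost, child_segments

# Toy tree: the root region can be explained as a whole or split into its children.
tree = {"id": "root", "children": [{"id": "left", "children": []},
                                   {"id": "right", "children": []}]}
costs = {"root": 5.0, "left": 1.0, "right": 2.5}
print(best_partition(tree, costs))  # -> (3.5, ['left', 'right'])
```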


Interview with Marek Šuppa: insights into RoboCupJunior

AIHub

The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage. Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league. I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn't do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.


Robotic system offers hidden window into collective bee behavior

Robohub

The robotic system is shown in an experimental hive. Image: Artificial Life Lab/U. of Graz/Hiveopolis
Honeybees are famously finicky when it comes to being studied. Research instruments and conditions and even unfamiliar smells can disrupt a colony's behavior. Now, a joint research team from the Mobile Robotic Systems Group in EPFL's School of Engineering and School of Computer and Communication Sciences and the Hiveopolis project at Austria's University of Graz have developed a robotic system that can be unobtrusively built into the frame of a standard honeybee hive. "Many rules of bee society, from collective and individual interactions to raising a healthy brood, are regulated by temperature, so we leveraged that for this study," explains EPFL PhD student Rafael Barmak, first author on a paper on the system recently published in Science Robotics. "The thermal sensors create a snapshot of the bees' collective behavior, while the actuators allow us to influence their movement by modulating thermal fields."

