AI-assisted mammograms cut risk of developing aggressive breast cancer

New Scientist

People who are screened for breast cancer by AI-supported radiologists are less likely to develop aggressive cancers before their next screening round than those who are screened by radiologists alone, raising hopes that AI-assisted screening could save lives. "This is the first randomised controlled trial on the use of AI in mammography screening," says Kristina Lång at Lund University in Sweden. The AI-supported approach involves using the software - which has been trained on more than 200,000 mammography scans from 10 countries - to rank the likelihood of cancer being present in a mammogram on a scale of 1 to 10, based on visual patterns in the scans. Scans receiving a score of 1 to 9 are assessed by one experienced radiologist, while scans receiving a score of 10 - indicating cancer is most likely to be present - are assessed by two experienced radiologists. An earlier study found that this approach could detect 29 per cent more cancers than standard screening, in which each mammogram is assessed by two radiologists, without increasing the rate of false detections - cases where a growth is flagged but follow-up tests reveal it isn't actually there or wouldn't go on to cause problems.
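The two-tier routing described above can be sketched as a simple rule: the AI's 1-to-10 suspicion score determines how many radiologists read the scan. This is a minimal illustration of the routing logic only; the function name and interface are hypothetical, not part of the trial's software.

```python
def assign_readers(score: int) -> int:
    """Return how many radiologists review a mammogram given its AI score.

    Scores 1-9 go to one experienced radiologist; a score of 10
    (cancer most likely present) goes to two.
    """
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    return 2 if score == 10 else 1
```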


What the evolution of tickling tells us about being human

New Scientist

From bonobos and rats to tickling robots, research is finally cracking the secrets of why we're ticklish, and what that reveals about our brains. In a grey-walled room in the Dutch city of Nijmegen, a strange activity is underfoot. Wearing a cap covered in sensors and settled into a chair, a person places their bare feet over two holes in a platform. Beneath this lies a robot, which uses a metal probe to tickle their soles. Here, at Radboud University's Touch and Tickle lab, volunteers are being mercilessly tickled in the name of science. "We can manipulate how strong the stimulation is, how fast and where it is going to be applied on your foot," says Konstantina Kilteni, who runs the lab, of the robot tickling experiment.


Radboud chemists are working with companies and robots on the transition from oil-based to bio-based materials

Robohub

Chemical products such as medicines, plastics, soap, and paint are still often based on fossil raw materials. This is not sustainable, so there is an urgent need for ways to make a 'materials transition' to products made from bio-based raw materials. To achieve results more quickly and efficiently, researchers at Radboud University in the Big Chemistry programme are using robots and AI. The materials transition from fossil-based to bio-based (where raw materials are of biological origin) is a major challenge: raw materials for products must be replaced without changing the quality of those products.


ProgRAG: Hallucination-Resistant Progressive Retrieval and Reasoning over Knowledge Graphs

Park, Minbae, Yang, Hyemin, Kim, Jeonghyun, Park, Kunsoo, Kim, Hyunjoon

arXiv.org Artificial Intelligence

Large Language Models (LLMs) demonstrate strong reasoning capabilities but struggle with hallucinations and limited transparency. Recently, KG-enhanced LLMs that integrate knowledge graphs (KGs) have been shown to improve reasoning performance, particularly for complex, knowledge-intensive tasks. However, these methods still face significant challenges, including inaccurate retrieval and reasoning failures, often exacerbated by long input contexts that obscure relevant information or by context constructions that struggle to capture the richer logical directions required by different question types. Furthermore, many of these approaches rely on LLMs to directly retrieve evidence from KGs, and to self-assess the sufficiency of this evidence, which often results in premature or incorrect reasoning. To address the retrieval and reasoning failures, we propose ProgRAG, a multi-hop knowledge graph question answering (KGQA) framework that decomposes complex questions into sub-questions, and progressively extends partial reasoning paths by answering each sub-question. At each step, external retrievers gather candidate evidence, which is then refined through uncertainty-aware pruning by the LLM. Finally, the context for LLM reasoning is optimized by organizing and rearranging the partial reasoning paths obtained from the sub-question answers. Experiments on three well-known datasets demonstrate that ProgRAG outperforms existing baselines in multi-hop KGQA, offering improved reliability and reasoning quality.
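The progressive retrieve-prune-extend loop in the abstract can be sketched as a small skeleton. Everything here is a hypothetical stand-in: `decompose`, `retrieve_candidates`, `llm_prune`, and `llm_answer` represent components that ProgRAG implements with external retrievers and an LLM, and the actual framework's interfaces are not reproduced.

```python
def prograg_answer(question, decompose, retrieve_candidates, llm_prune, llm_answer):
    """Sketch of progressive multi-hop KGQA: answer sub-questions one at a
    time, extending partial reasoning paths at each step."""
    paths = [[]]  # partial reasoning paths over the knowledge graph
    for sub_q in decompose(question):
        extended = []
        for path in paths:
            candidates = retrieve_candidates(sub_q, path)  # external retrieval
            kept = llm_prune(sub_q, candidates)            # uncertainty-aware pruning
            extended.extend(path + [edge] for edge in kept)
        paths = extended
    # final context is built from the organized, rearranged partial paths
    return llm_answer(question, paths)
```

With stub functions substituted for the retriever and LLM, the loop reduces a question to a set of completed paths before the final reasoning call.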


Formalized Hopfield Networks and Boltzmann Machines

Cipollina, Matteo, Karatarakis, Michail, Wiedijk, Freek

arXiv.org Artificial Intelligence

Neural networks are widely used, yet their analysis and verification remain challenging. In this work, we present a Lean 4 formalization of neural networks, covering both deterministic and stochastic models. We first formalize Hopfield networks, recurrent networks that store patterns as stable states. We prove convergence and the correctness of Hebbian learning, a training rule that updates network parameters to encode patterns, here limited to the case of pairwise-orthogonal patterns. We then consider stochastic networks, where updates are probabilistic and convergence is to a stationary distribution. As a canonical example, we formalize the dynamics of Boltzmann machines and prove their ergodicity, showing convergence to a unique stationary distribution using a new formalization of the Perron-Frobenius theorem.
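The Hebbian rule and convergence property formalized in the paper can be illustrated numerically (this sketch is ordinary Python, not the paper's Lean 4 development, and is restricted to the same pairwise-orthogonal setting).

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian learning: W = sum over patterns of x x^T, with zero diagonal."""
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)
    return W

def update_until_stable(W, state, max_sweeps=100):
    """Asynchronous sign updates; the network energy never increases,
    so the dynamics converge to a stable state."""
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            s = np.sign(W[i] @ state)
            if s != 0 and s != state[i]:
                state[i] = s
                changed = True
        if not changed:
            break
    return state

# Two pairwise-orthogonal +/-1 patterns stored in a 4-unit network.
patterns = np.array([[1, 1, -1, -1], [1, -1, 1, -1]])
W = hebbian_weights(patterns)
noisy = np.array([1, -1, -1, -1])  # first pattern with one bit flipped
recovered = update_until_stable(W, noisy)  # converges back to the stored pattern
```

The assertion behind the demo mirrors the formalized claim: with orthogonal patterns, each stored pattern is a stable state and the asynchronous dynamics converge.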


Leveraging LLMs to support co-evolution between definitions and instances of textual DSLs

Zhang, Weixing, Hebig, Regina, Strüber, Daniel

arXiv.org Artificial Intelligence

Software languages evolve over time for various reasons, such as the addition of new features. When the language's grammar definition evolves, textual instances that originally conformed to the grammar become outdated. For DSLs in a model-driven engineering context, there exists a plethora of techniques to co-evolve models with the evolving metamodel. However, these techniques are not geared to support DSLs with a textual syntax -- applying them to textual language definitions and instances may lead to the loss of information from the original instances, such as comments and layout information, which are valuable for software comprehension and maintenance. This study explores the potential of Large Language Model (LLM)-based solutions in achieving grammar and instance co-evolution, with attention to their ability to preserve auxiliary information when directly processing textual instances. By applying two advanced language models, Claude-3.5 and GPT-4o, and conducting experiments across seven case languages, we evaluated the feasibility and limitations of this approach. Our results indicate a good ability of the considered LLMs for migrating textual instances in small-scale cases with limited instance size, which are representative of a subset of cases encountered in practice. In addition, we observe significant challenges with the scalability of LLM-based solutions to larger instances, leading to insights that are useful for informing future research.
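The migration task described above amounts to giving an LLM the old grammar, the new grammar, and a conforming instance, and asking for a migrated instance that keeps comments and layout. A hypothetical prompt-assembly sketch (the paper's actual prompts and tooling are not reproduced here):

```python
def build_migration_prompt(old_grammar: str, new_grammar: str, instance: str) -> str:
    """Assemble a co-evolution prompt for an LLM such as Claude-3.5 or GPT-4o.

    The wording below is illustrative only; the study's real prompts differ.
    """
    return (
        "The grammar of a textual DSL has evolved.\n"
        f"Old grammar:\n{old_grammar}\n\n"
        f"New grammar:\n{new_grammar}\n\n"
        "Migrate the following instance so that it conforms to the new grammar, "
        "preserving all comments and layout information:\n"
        f"{instance}\n"
    )
```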


Quantum-Enhanced Reinforcement Learning for Accelerating Newton-Raphson Convergence with Ising Machines: A Case Study for Power Flow Analysis

Kaseb, Zeynab, Moller, Matthias, Spoor, Lindsay, Guo, Jerry J., Xiang, Yu, Palensky, Peter, Vergara, Pedro P.

arXiv.org Artificial Intelligence

The Newton-Raphson (NR) method is widely used for solving power flow (PF) equations due to its quadratic convergence. However, its performance deteriorates under poor initialization or extreme operating scenarios, e.g., high levels of renewable energy penetration. Traditional NR initialization strategies often fail to address these challenges, resulting in slow convergence or even divergence. We propose the use of reinforcement learning (RL) to optimize the initialization of NR, and introduce a novel quantum-enhanced RL environment update mechanism to mitigate the significant computational cost of evaluating power system states over a combinatorially large action space at each RL timestep by formulating the voltage adjustment task as a quadratic unconstrained binary optimization problem. Specifically, quantum/digital annealers are integrated into the RL environment update to evaluate state transitions using a problem Hamiltonian designed for PF. Results demonstrate significant improvements in convergence speed, a reduction in NR iteration counts, and enhanced robustness under different operating conditions.
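Why initialization matters for NR can be seen even on a scalar equation: iteration count grows sharply with a poor starting point. This toy example is not the paper's power flow formulation, merely an illustration of the sensitivity that motivates learning good initializations.

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Plain Newton-Raphson iteration; returns (root, iterations used)."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        x -= fx / df(x)  # quadratic convergence near the root
    return x, max_iter

# Solve x^2 - 2 = 0 from a good and a poor initial guess.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

root_good, iters_good = newton(f, df, 1.5)    # near the root
root_poor, iters_poor = newton(f, df, 100.0)  # far from the root
```

Close to the root the error roughly squares each step, while a distant start spends many iterations just approaching the basin of quadratic convergence; in a power flow setting each wasted iteration is a full evaluation of the network equations.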


A new kid on the block: Distributional semantics predicts the word-specific tone signatures of monosyllabic words in conversational Taiwan Mandarin

Jin, Xiaoyun, Ernestus, Mirjam, Baayen, R. Harald

arXiv.org Artificial Intelligence

We present a corpus-based investigation of how the pitch contours of monosyllabic words are realized in spontaneous conversational Mandarin, focusing on the effects of words' meanings. We used the generalized additive model to decompose a given observed pitch contour into a set of component pitch contours that are tied to different control variables and semantic predictors. Even when variables such as word duration, gender, speaker identity, tonal context, vowel height, and utterance position are controlled for, the effect of word remains a strong predictor of tonal realization. We present evidence that this effect of word is a semantic effect: word sense is shown to be a better predictor than word, and heterographic homophones are shown to have different pitch contours. The strongest evidence for the importance of semantics is that the pitch contours of individual word tokens can be predicted from their contextualized embeddings with an accuracy that substantially exceeds a permutation baseline. For phonetics, distributional semantics is a new kid on the block. Although our findings challenge standard theories of Mandarin tone, they fit well within the theoretical framework of the Discriminative Lexicon Model.
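The permutation-baseline logic mentioned above - predictive accuracy on true pairs must beat accuracy after the pairing is destroyed - can be sketched with synthetic data. The random vectors and least-squares fit here are stand-ins for the paper's contextualized embeddings and models, not its pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 200 tokens with 5-dimensional "embeddings" and a pitch-like target.
x = rng.normal(size=(200, 5))
w = rng.normal(size=5)
y = x @ w + 0.1 * rng.normal(size=200)

# Fit on the true (embedding, target) pairing.
coef, *_ = np.linalg.lstsq(x, y, rcond=None)
err_true = np.mean((x @ coef - y) ** 2)

# Fit again after permuting the targets, which destroys the association.
y_perm = rng.permutation(y)
coef_p, *_ = np.linalg.lstsq(x, y_perm, rcond=None)
err_perm = np.mean((x @ coef_p - y_perm) ** 2)
```

When the targets really are predictable from the embeddings, `err_true` is far below `err_perm`; an accuracy that "substantially exceeds a permutation baseline" is the same comparison on real data.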