On the Use of Bagging for Local Intrinsic Dimensionality Estimation

Péter, Kristóf, Campello, Ricardo J. G. B., Bailey, James, Houle, Michael E.

arXiv.org Machine Learning

The theory of Local Intrinsic Dimensionality (LID) has become a valuable tool for characterizing local complexity within and across data manifolds, supporting a range of data mining and machine learning tasks. Accurate LID estimation requires samples drawn from small neighborhoods around each query to avoid biases from nonlocal effects and potential manifold mixing, yet limited data within such neighborhoods tends to cause high estimation variance. As a variance reduction strategy, we propose an ensemble approach that uses subbagging to preserve the local distribution of nearest neighbor (NN) distances. The main challenge is that the uniform reduction in total sample size within each subsample increases the proximity threshold for finding a fixed number k of NNs around the query. As a result, in the specific context of LID estimation, the sampling rate has an additional, complex interplay with the neighborhood size, where the two jointly determine the sample size as well as the locality and resolution considered for estimation. We analyze both theoretically and experimentally how the choice of the sampling rate and the k-NN size used for LID estimation, alongside the ensemble size, affects performance, enabling informed prior selection of these hyper-parameters depending on application-based preferences. Our results indicate that within broad and well-characterized regions of the hyper-parameter space, using a bagged estimator will most often significantly reduce variance as well as the mean squared error when compared to the corresponding non-bagged baseline, with controllable impact on bias. We additionally propose and evaluate different ways of combining bagging with neighborhood smoothing for substantial further improvements on LID estimation performance.
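The subbagging scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard maximum-likelihood (Hill-type) LID estimator over k-NN distances, and the function names and default hyper-parameters are illustrative. Each ensemble member draws a subsample without replacement at a given sampling rate, finds the query's k NNs within that subsample, and the per-subsample estimates are averaged.

```python
import numpy as np

def lid_mle(dists):
    """Maximum-likelihood (Hill-type) LID estimate from the sorted
    distances of a query point to its k nearest neighbours."""
    r_k = dists[-1]  # distance to the k-th (furthest) neighbour
    # -(1/k * sum_i log(r_i / r_k))^{-1}; the i = k term contributes 0
    return -1.0 / np.mean(np.log(dists / r_k))

def subbagged_lid(data, query, k=20, rate=0.5, n_estimators=10, rng=None):
    """Average LID estimates over subsamples drawn without replacement.

    Lowering `rate` shrinks each subsample, which enlarges the radius
    needed to capture k NNs -- the locality/resolution trade-off the
    abstract discusses.
    """
    rng = np.random.default_rng(rng)
    n = len(data)
    m = max(k + 1, int(rate * n))  # subsample size at the given rate
    estimates = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=m, replace=False)  # subbagging: no replacement
        d = np.sort(np.linalg.norm(data[idx] - query, axis=1))
        d = d[d > 0][:k]  # drop the query itself if it was sampled
        estimates.append(lid_mle(d))
    return float(np.mean(estimates))
```

For data sampled uniformly from a 2-D region, the averaged estimate at an interior query point should concentrate near the true intrinsic dimension of 2, with visibly lower spread than a single non-bagged estimate at the same k.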




Medieval elite still received fancy burials despite disease stigma

Popular Science

Wealth confers privilege, and for many people during the Middle Ages, this privilege extended into the afterlife. The trend often mirrored their relationship with religion before their deaths, too--nobility and knights frequently ensured they sat in the front pews of services. Money is only one facet of social relations, however. Communities have long discriminated against and ostracized residents with debilitating illnesses--especially those with outward physical effects.




Towards a pretrained deep learning estimator of the Linfoot informational correlation

Berg, Stéphanie M. van den, Halekoh, Ulrich, Möller, Sören, Jensen, Andreas Kryger, Hjelmborg, Jacob von Bornemann

arXiv.org Machine Learning

We develop a supervised deep-learning approach to estimate mutual information between two continuous random variables. As labels, we use the Linfoot informational correlation, a transformation of mutual information that has many important properties. Our method is based on ground truth labels for Gaussian and Clayton copulas. We compare our method with estimators based on kernel density estimation, k-nearest neighbours, and neural estimators. Our method shows generally lower bias and lower variance. As a proof of principle, future research could look into training the model with a more diverse set of examples from other copulas for which ground truth labels are available.
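The Linfoot transformation used as the training label has a simple closed form: r_L = sqrt(1 - exp(-2 I)), with mutual information I in nats, mapping I from [0, inf) onto [0, 1). One of its well-known properties is that for a bivariate Gaussian it recovers the absolute Pearson correlation. A minimal sketch (function names here are illustrative, not from the paper):

```python
import numpy as np

def linfoot_correlation(mi):
    """Linfoot's informational coefficient of correlation.

    Maps mutual information I(X; Y), given in nats, onto [0, 1):
    r_L = sqrt(1 - exp(-2 I)).
    """
    return np.sqrt(1.0 - np.exp(-2.0 * mi))

def gaussian_mi(rho):
    """Mutual information (nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * np.log(1.0 - rho ** 2)
```

Because I = -0.5 * log(1 - rho^2) for the bivariate Gaussian, composing the two functions returns |rho| exactly, which is what makes the Linfoot scale a natural correlation-like target for supervised training.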


Who built Scandinavia's oldest wooden plank boat? An ancient fingerprint offers clues.

Popular Science

Archaeologists are closer to solving the Hjortspring Boat's mysteries. Archaeologists examining an ancient boat discovered in Denmark over a century ago are getting some help from a clue usually associated with crime scenes.


Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications

Simmons-Edler, Riley, Dong, Jean, Lushenko, Paul, Rajan, Kanaka, Badman, Ryan P.

arXiv.org Artificial Intelligence

Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks -- including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight -- all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models. We argue that AI researchers must be involved throughout the regulatory lifecycle. Thus, we propose a clear, behavior-based definition of AI-LAWS -- systems that introduce unique risks through their use of modern AI -- as a foundation for technically grounded regulation, given that existing frameworks do not distinguish them from conventional LAWS. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.


Not Everything That Counts Can Be Counted: A Case for Safe Qualitative AI

Beltoft, Stine, Galke, Lukas

arXiv.org Artificial Intelligence

Artificial intelligence (AI) and large language models (LLMs) are reshaping science, with the most recent advances culminating in fully automated scientific discovery pipelines. But qualitative research has been left behind. Researchers in qualitative methods are hesitant about AI adoption. Yet when they are willing to use AI at all, they have little choice but to rely on general-purpose tools like ChatGPT to assist with interview interpretation, data annotation, and topic modeling - while simultaneously acknowledging these systems' well-known limitations of being biased, opaque, irreproducible, and privacy-compromising. This creates a critical gap: while AI has substantially advanced quantitative methods, the qualitative dimensions essential for meaning-making and comprehensive scientific understanding remain poorly integrated. We argue for developing dedicated qualitative AI systems built from the ground up for interpretive research. Such systems must be transparent, reproducible, and privacy-friendly. We review recent literature to show how existing automated discovery pipelines could be enhanced by robust qualitative capabilities, and identify key opportunities where safe qualitative AI could advance multidisciplinary and mixed-methods research.