
Overview of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management

Interactive AI Magazine

IC3K 2025 (17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management) received 163 paper submissions from 40 countries. Each submission was evaluated in a double-blind review by the Program Committee. After a stringent selection process, 31 papers were published and presented as full papers, i.e., completed work (12 pages, 25-minute oral presentation), and 81 papers were accepted as short papers (54 of them for oral presentation). The organizing committee included the IC3K Conference Chairs: Ricardo da Silva Torres, Artificial Intelligence Group, Wageningen University & Research, Netherlands, and Jorge Bernardino, Polytechnic University of Coimbra, Portugal; and the IC3K 2025 Program Chairs: Le Gruenwald, University of Oklahoma, School of Computer Science, United States; Frans Coenen, University of Liverpool, United Kingdom; Jesualdo Tomás Fernández-Breis, University of Murcia, Spain; Lars Nolle, Jade University of Applied Sciences, Germany; Elio Masciari, University of Napoli Federico II, Italy; and David Aveiro, University of Madeira, NOVA-LINCS and ARDITI, Portugal. At the closing session, the conference recognized a few papers considered excellent in their class, presenting a "Best Paper Award", "Best Student Paper Award", and "Best Poster Award" for each of the co-located conferences.



Deepfaking Orson Welles's Mangled Masterpiece

The New Yorker

A.I. re-creations of the "Magnificent Ambersons" stars Joseph Cotten, Agnes Moorehead, Dolores Costello, and Tim Holt. Edward Saatchi first saw "The Magnificent Ambersons," Orson Welles's mangled masterpiece from 1942, when he was twelve years old, in the private screening room of his family's crenellated mansion, in West Sussex. Saatchi's parents had already shown him and his brother "Citizen Kane." But "Ambersons," Welles's follow-up film, about a wealthy Midwestern clan brought low, came with a bewitching backstory: R.K.O. had ripped the movie from the director's hands, slashed forty-three minutes, tacked on a happy ending, and destroyed the excised footage in order to free up vault space, leaving decades' worth of cinephiles to obsess over what might have been. Part of this outcome was the result of studio treachery, but Welles, owing to some combination of hubris and distraction, had let his film slip from his grasp. Saatchi recalled, "Around the family dinner table, that was always such a big topic: How much was Welles responsible for this? Mum was always quite tough on him." Saatchi's father, Maurice, a baron also known as Lord Saatchi, is one of two Iraqi British brothers who founded the advertising firm Saatchi & Saatchi, in 1970, which led their family to become one of the richest in the U.K. Edward's mother, Josephine Hart, who died in 2011, was an Irish writer best known for her erotic thriller "Damage," which was adapted into a film by Louis Malle. Edward, born in 1985, grew up in London and at the sprawling country estate, surrounded by palatial gardens and classical statuary. He described his parents as "movie mad." The actor and Welles biographer Simon Callow, a Saatchi family friend, recalled, "They had a cinema of their own inside the house, and it was a ritual of theirs every week to watch a film together." 
Aside from old movies, Edward was obsessed with "Star Trek"--especially the Holodeck, a device that conjured simulated 3-D worlds populated by characters who could interact with the members of the Starship Enterprise. That kind of wizardry didn't exist in the real world, at least not yet. But the young prince of the Saatchi castle had faith that someday it would, and that it could bring the original "Ambersons" back from oblivion. "To me, this is the lost holy grail of cinema," Saatchi told me recently, like Charles Foster Kane murmuring about Rosebud. "It just seemed intuitively that there would be some way to undo what had happened."


Concentration Inequalities for Exchangeable Tensors and Matrix-valued Data

Cheng, Chen, Barber, Rina Foygel

arXiv.org Machine Learning

We study concentration inequalities for structured weighted sums of random data, including (i) tensor inner products and (ii) sequential matrix sums. We are interested in tail bounds and concentration inequalities for these structured weighted sums under exchangeability, extending beyond the classical framework of independent terms. We develop Hoeffding and Bernstein bounds under structure-dependent exchangeability assumptions. Along the way, we recover known results on weighted sums of exchangeable random variables and on i.i.d. sums of random matrices, with optimal constants. Notably, we develop a sharper concentration bound for combinatorial sums of matrix arrays than the results previously derived from Chatterjee's method of exchangeable pairs. The richer structures provide novel analytical tools for estimating the average effect in multi-factor response models and for studying fixed-design sketching methods in federated averaging. We apply our results to these problems and find that our theoretical predictions are corroborated by numerical evidence.
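As a point of reference for the kind of bound the paper generalizes, the sketch below checks the classical Hoeffding inequality for a weighted sum of bounded independent terms, P(|Σ wᵢXᵢ| ≥ t) ≤ 2·exp(−t²/(2‖w‖²)) when each Xᵢ ∈ [−1, 1]. This is a minimal illustration of the independent-terms baseline only, not the paper's exchangeable-tensor results; the weights, distribution, and threshold are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20_000
w = rng.uniform(0.5, 1.5, size=n)             # fixed weight vector
X = rng.uniform(-1.0, 1.0, size=(trials, n))  # bounded independent terms
S = X @ w                                     # one weighted sum per trial

t = 10.0
empirical = np.mean(np.abs(S) >= t)           # Monte Carlo tail estimate
# Hoeffding: each w_i * X_i lies in [-w_i, w_i], giving the bound below
hoeffding = 2.0 * np.exp(-t**2 / (2.0 * np.sum(w**2)))
print(f"empirical tail {empirical:.4f} <= Hoeffding bound {hoeffding:.4f}")
```

The empirical tail sits well below the bound, as expected; Hoeffding is loose here because it ignores the variance of the uniform terms, which is exactly the slack that Bernstein-type bounds tighten.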


Temporal Complexity and Self-Organization in an Exponential Dense Associative Memory Model

Cafiso, Marco, Paradisi, Paolo

arXiv.org Machine Learning

Dense Associative Memory (DAM) models generalize the classical Hopfield model by incorporating n-body or exponential interactions that greatly enhance storage capacity. While the criticality of DAM models has been widely investigated, mainly within a statistical-equilibrium picture, little attention has been devoted to the temporal self-organizing behavior induced by learning. In this work, we investigate the behavior of a stochastic exponential DAM (SEDAM) model through the lens of Temporal Complexity (TC), a framework that characterizes complex systems by intermittent transition events between order and disorder and by scale-free temporal statistics. Transition events associated with the birth and death of neural avalanche structures are exploited for the TC analyses and compared with analogous transition events based on coincidence structures. We systematically explore how TC indicators depend on the control parameters, i.e., noise intensity and memory load. Our results reveal that the SEDAM model exhibits regimes of complex intermittency characterized by nontrivial temporal correlations and scale-free behavior, indicating the spontaneous emergence of self-organizing dynamics. These regimes emerge in small intervals of noise intensity, which, in agreement with the extended-criticality concept, never shrink to a single critical point. Further, the noise intensity needed to reach the critical region, where self-organizing behavior emerges, slightly decreases as the memory load increases. This study highlights the relevance of TC as a complementary framework for understanding learning and information processing in artificial and biological neural systems, revealing the link between memory load and the self-organizing capacity of the network.
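To make the exponential-interaction idea concrete, here is a minimal deterministic retrieval sketch for an exponential dense associative memory: each stored pattern contributes to the update with weight exp(β × overlap), so the best-matching pattern dominates and capacity grows far beyond the classical Hopfield limit. This is an illustrative toy, not the paper's stochastic SEDAM model; the sizes, β, and corruption level are arbitrary assumptions, and the noise and avalanche analysis from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, beta = 100, 20, 0.5
xi = rng.choice([-1, 1], size=(p, n))       # p stored binary patterns

def dam_update(sigma):
    # exponential interactions: each pattern votes with weight
    # exp(beta * overlap); the closest pattern dominates the sum
    overlaps = xi @ sigma                   # shape (p,)
    weights = np.exp(beta * (overlaps - overlaps.max()))  # stabilized
    return np.sign(weights @ xi)

# corrupt pattern 0 by flipping 15% of its bits, then iterate to retrieve
cue = xi[0].astype(float).copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1
for _ in range(5):
    cue = dam_update(cue)
recovered = bool(np.array_equal(cue, xi[0]))
```

With β = 0.5 the stored pattern at overlap 70 outweighs the spurious overlaps (order √n) by many orders of magnitude, so a single update essentially snaps the cue back onto the pattern.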


2025 digest of digests

AIHub

Throughout the year we've reported on some of the larger stories, and some of the lesser-covered happenings, in our regular monthly digests. We look back through the archives and pick out one or two stories from each of our digests. This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for good performance on logic, maths, and pattern-finding tasks. The company has also launched six smaller versions of R1 that are tiny enough to run locally on laptops. In Wired, Zeyi Yang reported on who is behind the startup, whilst Tongliang Liu (in The Conversation) looked at how DeepSeek achieved its results with a fraction of the cash and computing power of its competitors.


Bayesian Empirical Bayes: Simultaneous Inference from Probabilistic Symmetries

Wu, Bohan, Weinstein, Eli N., Blei, David M.

arXiv.org Machine Learning

Empirical Bayes (EB) improves the accuracy of simultaneous inference "by learning from the experience of others" (Efron, 2012). Classical EB theory focuses on latent variables that are iid draws from a fitted prior (Efron, 2019). Modern applications, however, feature complex structure, like arrays, spatial processes, or covariates. How can we apply EB ideas to these settings? We propose a generalized approach to empirical Bayes based on the notion of probabilistic symmetry. Our method pairs a simultaneous inference problem, with an unknown prior, to a symmetry assumption on the joint distribution of the latent variables. Each symmetry implies an ergodic decomposition, which we use to derive a corresponding empirical Bayes method. We call this method Bayesian empirical Bayes (BEB). We show how BEB recovers the classical methods of empirical Bayes, which implicitly assume exchangeability. We then use it to extend EB to other probabilistic symmetries: (i) EB matrix recovery for arrays and graphs; (ii) covariate-assisted EB for conditional data; (iii) EB spatial regression under shift invariance. We develop scalable algorithms based on variational inference and neural networks. In simulations, BEB outperforms existing approaches to denoising arrays and spatial data. On real data, we demonstrate BEB by denoising a cancer gene-expression matrix and analyzing spatial air-quality data from New York City.
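For readers unfamiliar with the classical case that BEB generalizes, the sketch below shows textbook EB shrinkage in the normal-means model: θᵢ ~ N(0, A), xᵢ | θᵢ ~ N(θᵢ, 1), with the prior variance A estimated from the data themselves and plugged into the Bayes rule A/(A+1)·x. This is the iid-prior baseline described in the abstract's opening sentences, not the paper's symmetry-based method; the sample size and true A are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, A = 500, 0.5                             # many parallel problems, prior variance A
theta = rng.normal(0.0, np.sqrt(A), size=n)
x = theta + rng.normal(size=n)              # one noisy observation per problem

# classical EB: estimate the prior variance from the ensemble of
# observations, then apply the Bayes shrinkage rule with the plug-in
A_hat = max(np.mean(x**2) - 1.0, 0.0)       # E[x^2] = A + 1
theta_eb = (A_hat / (A_hat + 1.0)) * x

mse_mle = np.mean((x - theta) ** 2)         # no pooling: use x as-is
mse_eb = np.mean((theta_eb - theta) ** 2)   # borrow strength across problems
print(f"MSE unshrunk {mse_mle:.3f} vs EB shrinkage {mse_eb:.3f}")
```

Shrinking toward the common prior mean cuts the risk roughly from 1 to A/(A+1); the paper's contribution is to derive analogous estimators when the latent variables obey richer symmetries than exchangeability.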