Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more

The Guardian

Switching to big tech alternatives is easier than you might imagine. There's not much to love about big tech these days. So many ills can be laid at its door: social media harms, misinformation, polarisation, mining and misuse of personal data, environmental negligence, tax avoidance; the list goes on. Added to which, Silicon Valley's leaders seem all too keen to cosy up to the Trump administration, to shower the president with bribes - sorry, gifts - and remain silent about his worsening political overreach. And that's before we get to the rampant "enshittification", as the tech writer Cory Doctorow describes it, which means that by design many big tech products have become less useful and more extractive than they were when we originally signed up to them.



Overview of the 17th International Joint Conference on Computational Intelligence

Interactive AI Magazine

IJCCI 2025 (17th International Joint Conference on Computational Intelligence) received 146 paper submissions from 41 countries. To evaluate each submission, a double-blind paper review was performed by the Program Committee. After a stringent selection process, 36 papers were published and presented as full papers, i.e. completed work (12 pages / 25-minute oral presentation), and 83 papers were accepted as short papers (58 for oral presentation). The organizing committee included the IJCCI Conference Chair: Joaquim Filipe, Polytechnic Institute of Setubal, Portugal, and the IJCCI 2025 Program Chairs: Francesco Marcelloni, University of Pisa, Italy, Kurosh Madani, University of Paris-Est Créteil (UPEC), France, and Niki van Stein, Leiden University, Netherlands. At the closing session, the conference acknowledged a few papers that were considered excellent in their class, presenting a "Best Paper Award", "Best Student Paper Award", and "Best Poster Award" for each of the co-located conferences.


Model Selection for Bayesian Autoencoders: Supplementary Material

Neural Information Processing Systems

In this section, we review some key results on the Wasserstein distance.

W_p^p(π, ρ) ≈ (1/M) Σ_{i=1}^{M} W_p^p(R_π(t, θ_i), R_ρ(t, θ_i)),   (4)

where the approximation comes from using Monte-Carlo integration by sampling θ_i uniformly on the unit sphere S^{D−1} [2], and M is the number of points used to approximate the integral. Calculating the Wasserstein distance with the empirical distribution function is computationally attractive. To do that, we first sort the x_m's in ascending order, such that x_{i[m]} ≤ x_{i[m+1]}, where i[m] is the index of the m-th sorted x_m. Hamiltonian Monte Carlo (HMC) [24] is a highly efficient Markov Chain Monte Carlo (MCMC) method used to generate samples from the posterior w ∼ p(w|y).
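The sorting-based empirical computation and the Monte-Carlo sliced approximation of Eq. (4) can be sketched as follows. This is a minimal NumPy illustration; the function names are ours, not the paper's, and the projections stand in for the Radon transform R of the one-dimensional slices.

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """Empirical p-Wasserstein distance between two equal-size 1-D samples:
    sort both samples and average |x_[m] - y_[m]|^p over matched ranks."""
    x_sorted = np.sort(x)
    y_sorted = np.sort(y)
    return np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p)

def sliced_wasserstein(X, Y, M=100, p=2, seed=0):
    """Monte-Carlo sliced Wasserstein as in Eq. (4): project D-dimensional
    samples onto M directions theta_i drawn uniformly on S^{D-1} and
    average the 1-D distances over the slices."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    theta = rng.standard_normal((M, D))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on unit sphere
    dists = [wasserstein_1d(X @ t, Y @ t, p) ** p for t in theta]
    return np.mean(dists) ** (1.0 / p)
```

Sorting makes the one-dimensional distance O(n log n), which is why slicing a high-dimensional problem into random 1-D projections is computationally attractive.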



803b9c4a8e4784072fdd791c54d614e2-Supplemental-Conference.pdf

Neural Information Processing Systems

This is a state-of-the-art graph-contrastive-learning-based recommendation method, which proposes random node dropout, edge dropout, and random walk as augmentations on the bipartite graph.
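As a rough illustration of what one such stochastic augmentation looks like, here is a minimal edge-dropout sketch for a bipartite user-item graph. The names and array layout are our assumptions for illustration, not the method's actual API.

```python
import numpy as np

def edge_dropout(edges, drop_rate=0.1, seed=0):
    """Randomly drop a fraction of user-item interaction edges to build a
    perturbed contrastive view of the bipartite graph.

    edges: (E, 2) integer array of (user, item) pairs.
    Returns the surviving edges; each edge is kept independently
    with probability 1 - drop_rate."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(edges)) >= drop_rate
    return edges[keep]
```

Node dropout works analogously (remove all edges incident to sampled nodes), and two independently augmented views of the same graph are then contrasted against each other during training.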


In medieval France, murderous pigs faced trial and execution

Popular Science

Animal trials helped to restore order when the unspeakable happened. In 1457, a sow and her piglets were put on trial for the murder of a child in the village of Savigny in Burgundy, France. The sow was ultimately found guilty and her piglets were acquitted. It's a common scene in many films set in medieval Europe: a wooden cart wheeling its way through a jeering crowd of townsfolk, taking a condemned prisoner to the gallows.


A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem

Floridi, Luciano, Jia, Yiyang, Tohmé, Fernando

arXiv.org Artificial Intelligence

This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, in order to argue that LLMs do not solve but circumvent the symbol grounding problem.


LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples

Liu, Yezi, Chen, Hanning, Huang, Wenjun, Ni, Yang, Imani, Mohsen

arXiv.org Artificial Intelligence

Large language models (LLMs) possess vast knowledge acquired from extensive training corpora, but they often cannot remove specific pieces of information when needed, which complicates privacy protection, bias mitigation, and knowledge correction. Traditional model unlearning approaches require computationally expensive fine-tuning or direct weight editing, making them impractical for real-world deployment. In this work, we introduce LoRA-based Unlearning with Negative Examples (LUNE), a lightweight framework that performs negative-only unlearning by updating only low-rank adapters while freezing the backbone, thereby localizing edits and avoiding disruptive global changes. Leveraging Low-Rank Adaptation (LoRA), LUNE targets intermediate representations to suppress (or replace) requested knowledge with an order-of-magnitude lower compute and memory than full fine-tuning or direct weight editing. Extensive experiments on multiple factual unlearning tasks show that LUNE: (I) achieves effectiveness comparable to full fine-tuning and memory-editing methods, and (II) reduces computational cost by about an order of magnitude.
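The low-rank adapter mechanism the abstract describes can be sketched as follows. This is a generic LoRA-style layer in NumPy, not LUNE's implementation; all names are illustrative. The key point is that the backbone weight W stays frozen while only the small factors A and B are trained, so an unlearning edit stays localized to the adapter.

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A (rank r).

    Only A and B would receive gradients during (un)learning, which is
    why the edit is cheap: the adapter holds r * (d_in + d_out)
    parameters instead of the backbone's d_out * d_in."""
    def __init__(self, W, r=4, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                   # frozen backbone weight
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))                # zero-init: no change at start

    def forward(self, x):
        # Effective weight is W + B @ A; computed without materializing it.
        return self.W @ x + self.B @ (self.A @ x)
```

In a negative-only setup, one would then update A and B to push the model's outputs away from the targeted ("forget") responses, leaving all backbone parameters untouched.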


40,000 Roman-era coins discovered in French village

Popular Science

The town was important to the Celtic Mediomatrici tribe before it was conquered by Julius Caesar. Archeologists recently discovered over 40,000 Roman-era coins during a dig in a French village. The treasure trove of ancient coins was found in three ceramic storage vessels that had been buried between 1,700 and 1,800 years ago. The team from the National Institute for Preventive Archaeological Research (INRAP) was digging in the village of Senon in northeastern France, roughly 60 miles from the Luxembourg border.