
A mathematical framework for time-delay reservoir computing analysis

Clabaut, Anh-Tuan, Auriol, Jean, Boussaada, Islam, Mazanti, Guilherme

arXiv.org Machine Learning

Reservoir computing is a well-established approach for processing data with much lower complexity than traditional neural networks. Despite two decades of experimental progress, the core properties of reservoir computing (namely separation, robustness, and fading memory) still lack rigorous mathematical foundations. This paper addresses this gap by providing a control-theoretic framework for the analysis of time-delay-based reservoir computers. We introduce formal definitions of the separation property and fading memory in terms of functional norms, and establish their connection to well-known stability notions for time-delay systems, such as incremental input-to-state stability. For a class of linear reservoirs, we derive an explicit lower bound for the separation distance via Fourier analysis, offering a computable criterion for reservoir design. Numerical results on the NARMA10 benchmark and continuous-time system prediction validate the approach with a minimal digital implementation.
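The NARMA10 benchmark mentioned in the abstract is a standard nonlinear system-identification task for reservoir computers. As a point of reference, a minimal sketch of the usual NARMA10 target-series generator (the standard recurrence with inputs drawn uniformly from [0, 0.5]; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def narma10(u):
    """Generate the NARMA10 target series from an input sequence u.

    Standard recurrence:
      y[t+1] = 0.3*y[t] + 0.05*y[t]*sum(y[t-9:t+1]) + 1.5*u[t-9]*u[t] + 0.1
    """
    T = len(u)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 0.5, 1000)  # standard input range for NARMA10
y = narma10(u)
```

A reservoir computer is then trained to predict y[t] from the input history of u, which exercises exactly the fading-memory and separation properties the paper formalizes.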


Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more

The Guardian

Switching to big tech alternatives is easier than you might imagine. There's not much to love about big tech these days. So many ills can be laid at its door: social media harms, misinformation, polarisation, mining and misuse of personal data, environmental negligence, tax avoidance, the list goes on. Added to which, Silicon Valley's leaders seem all too keen to cosy up to the Trump administration, to shower the president with bribes - sorry, gifts - and remain silent about his worsening political overreach. And that's before we get to the rampant "enshittification", as the tech writer Cory Doctorow describes it, which means that by design many big tech products have become less useful and more extractive than they were when we originally signed up to them.



Overview of the 17th International Joint Conference on Computational Intelligence

Interactive AI Magazine

IJCCI 2025 (17th International Joint Conference on Computational Intelligence) received 146 paper submissions from 41 countries. To evaluate each submission, a double-blind paper review was performed by the Program Committee. After a stringent selection process, 36 papers were published and presented as full papers, i.e. completed work (12 pages/25' oral presentation), 83 papers were accepted as short papers (58 as oral presentation). The organizing committee included the IJCCI Conference Chair: Joaquim Filipe, Polytechnic Institute of Setubal, Portugal, and the IJCCI 2025 Program Chairs: Francesco Marcelloni, University of Pisa, Italy, Kurosh Madani, University of Paris-EST Créteil (UPEC), France, and Niki van Stein, Leiden University, Netherlands. At the closing session, the conference acknowledged a few papers that were considered excellent in their class, presenting a "Best Paper Award", "Best Student Paper Award", and "Best Poster Award" for each of the co-located conferences.



Model Selection for Bayesian Autoencoders: Supplementary Material

Neural Information Processing Systems

In this section, we review some key results on the Wasserstein distance. The sliced Wasserstein distance is approximated as

W_p^p(π, ρ) ≈ (1/M) Σ_{i=1}^{M} W_p^p(R_π(t, θ_i), R_ρ(t, θ_i)),   (4)

where the approximation comes from using Monte Carlo integration by sampling θ_i uniformly in S^{D-1} [2], and M is the number of points used to approximate the integral. Calculating the Wasserstein distance with the empirical distribution function is computationally attractive. To do that, we first sort the x_m's in ascending order, such that x_{i[m]} ≤ x_{i[m+1]}, where i[m] is the index of the m-th sorted x_m. Hamiltonian Monte Carlo (HMC) [24] is a highly efficient Markov Chain Monte Carlo (MCMC) method used to generate samples from the posterior w ∼ p(w|y).
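The sort-based computation described above can be sketched in a few lines: for two empirical 1-D distributions with the same number of samples, W_p^p reduces to sorting both samples and averaging the p-th powers of the coordinate-wise gaps (a minimal illustration; the function name is ours, not from the supplementary material):

```python
import numpy as np

def wasserstein_p_empirical(x, y, p=2):
    """W_p^p between two equal-size empirical 1-D distributions:
    sort both samples, then average |x_[m] - y_[m]|^p over matched ranks."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p)

# Two point clouds that differ by a shift of 1: every matched gap is 1,
# so W_2^2 = 1.0.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = x + 1.0
print(wasserstein_p_empirical(x, y, p=2))  # 1.0
```

This O(M log M) sort is what makes the sliced approximation in (4) cheap: each random projection θ_i reduces the D-dimensional problem to this 1-D case.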



803b9c4a8e4784072fdd791c54d614e2-Supplemental-Conference.pdf

Neural Information Processing Systems

This is a state-of-the-art graph contrastive learning-based recommendation method, which applies random node dropout, edge dropout, and random walk as augmentations on the bipartite graph.
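Of the three augmentations named above, edge dropout is the simplest to state concretely: each user-item edge is kept independently with probability 1 - ρ, producing a perturbed graph view for the contrastive objective. A minimal sketch under that reading (the edge-list representation and names are our assumptions, not the method's actual implementation):

```python
import numpy as np

def edge_dropout(edges, drop_rate=0.1, rng=None):
    """Return an augmented view of a bipartite graph by randomly dropping
    edges. `edges` is an (E, 2) array of (user, item) index pairs; each
    edge is kept independently with probability 1 - drop_rate."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(len(edges)) >= drop_rate
    return edges[keep]

edges = np.array([[0, 0], [0, 1], [1, 1], [2, 0], [2, 2]])
view = edge_dropout(edges, drop_rate=0.4, rng=np.random.default_rng(0))
```

Node dropout works analogously (remove all edges incident to sampled nodes), and the random-walk variant resamples a different edge subset at each propagation layer.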


Russia-Ukraine war: List of key events, day 1,435

Al Jazeera

Could Ukraine hold a presidential election right now? Will Europe use frozen Russian assets to fund war? How can Ukraine rebuild China ties? 'Ukraine is running out of men, money and time' The death toll from a Russian attack on a passenger train in Ukraine's Kharkiv region on Tuesday rose to six, after the remains of several bodies were recovered from the wreckage, the Kharkiv Regional Prosecutor's Office said on the Telegram messaging app. At least six people were injured in a Russian missile attack on Ukraine's Zaporizhia region, the head of the regional military administration, Ivan Fedorov, said on Telegram.


A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem

Floridi, Luciano, Jia, Yiyang, Tohmé, Fernando

arXiv.org Artificial Intelligence

This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, in order to argue that LLMs do not solve but circumvent the symbol grounding problem.