Traitement quantique des langues : état de l'art

Campano, Sabrina, Nabil, Tahar, Bothua, Meryl

arXiv.org Artificial Intelligence

This article presents a review of quantum computing research works for Natural Language Processing (NLP). Their goal is to improve the performance of current models, and to provide a better representation of several linguistic phenomena, such as ambiguity and long range dependencies. Several families of approaches are presented, including symbolic diagrammatic approaches, and hybrid neural networks. These works show that experimental studies are already feasible, and open research perspectives on the conception of new models and their evaluation.


Deep learning for classification of noisy QR codes

Leygonie, Rebecca, Lobry, Sylvain, Wendling, Laurent (LIPADE)

arXiv.org Artificial Intelligence

We wish to define the limits of a classical classification model based on deep learning when applied to abstract images, which do not represent visually identifiable objects. QR codes (Quick Response codes) fall into this category of abstract images: with each bit corresponding to one encoded character, QR codes were not designed to be decoded manually. To understand the limitations of a deep learning-based model for abstract image classification, we train an image classification model on QR codes generated from information obtained when reading a health pass. We compare this classification model with a classical (deterministic) decoding method in the presence of noise. This study allows us to conclude that a model based on deep learning can be relevant for the understanding of abstract images.
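The "presence of noise" comparison described above can be sketched by flipping a fraction of the modules (bits) of a binary QR-code matrix before handing it to either the classifier or the deterministic decoder. The function name and noise model below are illustrative assumptions, not the paper's exact protocol:

```python
import random

def add_noise(matrix, flip_prob, seed=0):
    """Flip each module (bit) of a binary QR-code matrix with probability flip_prob.

    A deterministic decoder tolerates such noise only up to its error-correction
    capacity, whereas a learned classifier may degrade more gracefully.
    """
    rng = random.Random(seed)
    return [[1 - bit if rng.random() < flip_prob else bit for bit in row]
            for row in matrix]

# Toy 3x3 binary matrix standing in for a real QR code.
clean = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
noisy = add_noise(clean, flip_prob=0.3)
```

With `flip_prob=0.0` the matrix is returned unchanged; sweeping `flip_prob` upward gives the noise levels at which the two decoding strategies can be compared.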


Algorithme EM régularisé

Houdouin, Pierre, Jonckheere, Matthieu, Pascal, Frederic

arXiv.org Artificial Intelligence

The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing maximum likelihood estimates in Gaussian Mixture Models (GMMs). When the sample size is smaller than the data dimension, the covariance matrix estimates can become singular or poorly conditioned, reducing performance. This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with small sample sizes. The method maximizes a penalized GMM likelihood, where regularized estimation ensures positive definiteness of the covariance matrix updates by shrinking the estimators towards structured target covariance matrices. Finally, experiments on real data highlight the good performance of the proposed algorithm for clustering.
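A minimal sketch of the shrinkage idea described above: a convex combination of the sample covariance with a structured target stays positive definite even when the sample covariance is singular. The weight `rho` and the scaled-identity target are illustrative choices, not the paper's exact penalty:

```python
import numpy as np

def shrunk_covariance(X, rho=0.3):
    """Shrink the sample covariance towards a scaled-identity target.

    (1 - rho) * S + rho * T is positive definite whenever rho > 0 and T is
    positive definite, even if the sample covariance S is singular (n < d).
    """
    n, d = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # sample covariance (may be singular)
    T = (np.trace(S) / d) * np.eye(d)        # structured target: scaled identity
    return (1.0 - rho) * S + rho * T

# Fewer samples (3) than dimensions (5): S is singular, the shrunk estimate is not.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))
Sigma = shrunk_covariance(X, rho=0.3)
```

In a regularized EM loop, this replaces the plain covariance update in the M-step for each mixture component.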


First steps towards quantum machine learning applied to the classification of event-related potentials

Cattan, Grégoire, Quemy, Alexandre, Andreev, Anton

arXiv.org Machine Learning

Low information transfer rate is a major bottleneck for brain-computer interfaces based on non-invasive electroencephalography (EEG) in clinical applications, which has led to the development of more robust and accurate classifiers. In this study, we investigate the performance of a quantum-enhanced support vector classifier (QSVC). The training (prediction) balanced accuracy of the QSVC was 83.17% (50.25%). This result shows that the classifier was able to learn from the EEG data, but that more research is required to obtain a higher prediction accuracy. This could be achieved by a better configuration of the classifier, such as increasing the number of shots.
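Balanced accuracy, the metric reported above, is the mean of the per-class recalls, so a prediction score near 50% on a two-class problem is chance level even if the classes are imbalanced. A minimal sketch of the metric (not the paper's code):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Imbalanced example: always predicting the majority class scores
# 90% plain accuracy but only 50% balanced accuracy (chance level).
y_true = [0] * 9 + [1]
y_pred = [0] * 10
```

This is why the gap between the 83.17% training score and the 50.25% prediction score indicates the model fit the training set without generalizing.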


A new step for computing

Vasques, Xavier

arXiv.org Artificial Intelligence

The data center of tomorrow will be made up of heterogeneous systems running heterogeneous workloads, with the systems located as close as possible to the data. These heterogeneous systems will be equipped with binary, biologically inspired, and quantum accelerators, and these architectures will be the foundation for addressing upcoming challenges. Like an orchestra conductor, the hybrid cloud will orchestrate these systems through a layer of security and intelligent automation.


Une approche totalement instanciée pour la planification HTN

Ramoul, Abdeldjalil, Pellier, Damien, Fiorino, Humbert, Pesty, Sylvie

arXiv.org Artificial Intelligence

Many planning techniques have been developed to allow autonomous systems to act and make decisions based on their perceptions of the environment. Among these techniques, HTN (Hierarchical Task Network) planning is one of the most used in practice. Unlike classical planning approaches, HTN planning operates by decomposing tasks into sub-tasks until each sub-task can be achieved by an action. This hierarchical representation provides a richer description of planning problems, better guides the plan search, and supplies more knowledge to the underlying algorithms. In this paper, we propose a new approach to HTN planning in which, as in conventional planning, all planning operators are instantiated before the search process starts. Instantiation has proven its effectiveness in classical planning and is necessary for the development of effective heuristics and for encoding planning problems in other formalisms such as CSP or SAT. It is used by most modern planners but has never been applied in an HTN-based planning framework. We present a generic instantiation algorithm that implements many simplification techniques, inspired by those used in classical planning, to reduce the complexity of the process. Finally, we present results obtained on a range of problems used in the international planning competitions with a modified version of the SHOP planner using fully instantiated problems.
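The full-instantiation step described above can be sketched as enumerating every type-consistent binding of each operator's parameters over the problem's objects. The operator and object names below are illustrative, not those of the paper's benchmark domains, and the sketch omits the simplification techniques that prune unreachable actions:

```python
from itertools import product

def instantiate(operators, objects_by_type):
    """Ground each operator schema into all its type-consistent actions."""
    ground_actions = []
    for name, param_types in operators.items():
        domains = [objects_by_type[t] for t in param_types]
        for binding in product(*domains):
            ground_actions.append((name,) + binding)
    return ground_actions

# Illustrative logistics-style domain.
operators = {"move": ["truck", "location", "location"],
             "load": ["package", "truck"]}
objects_by_type = {"truck": ["t1"],
                   "location": ["l1", "l2"],
                   "package": ["p1", "p2"]}
actions = instantiate(operators, objects_by_type)
```

The number of ground actions grows as the product of the parameter domains (here 1x2x2 = 4 for `move` plus 2x1 = 2 for `load`), which is why the simplification techniques mentioned in the abstract matter.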