

Artificial Intelligence / Human Intelligence: Who Controls Whom?

Jacquemot, Charlotte

arXiv.org Artificial Intelligence

Using the example of the film 2001: A Space Odyssey, this chapter illustrates the challenges posed by an AI capable of making decisions that go against human interests. But are human decisions always rational and ethical? In reality, human decision-making is influenced by cognitive biases that affect our behavior and choices. AI not only reproduces these biases, but can also exploit them, with the potential to shape our decisions and judgments. Behind AI algorithms, there are sometimes individuals who show little concern for fundamental rights and impose their own rules. To address the ethical and societal challenges raised by AI and its governance, the regulation of digital platforms and education are key levers. Regulation must reflect ethical, legal, and political choices, while education must strengthen digital literacy and teach people to make informed and critical choices when facing digital technologies.


Analyse comparative d'algorithmes de restauration en architecture dépliée pour des signaux chromatographiques parcimonieux (Comparative analysis of restoration algorithms with unfolded architectures for sparse chromatographic signals)

Gharbi, Mouna, Villa, Silvia, Chouzenoux, Emilie, Pesquet, Jean-Christophe, Duval, Laurent

arXiv.org Artificial Intelligence

Data restoration from degraded observations under sparsity hypotheses is an active field of study. Traditional iterative optimization methods are now complemented by deep learning techniques. The development of unfolded methods benefits from both families. We carry out a comparative study of three architectures on parameterized chromatographic signal databases, highlighting the performance of these approaches, especially when employing metrics adapted to physico-chemical peak signal characterization.
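The unfolding idea mentioned in the abstract can be illustrated with a minimal sketch in which each network layer corresponds to one iteration of a sparse-restoration solver such as ISTA. The Python/NumPy code below assumes a generic linear degradation model y = Hx + noise with a Gaussian peak shape; it is illustrative only and does not reproduce the paper's trained architectures or chromatographic data.

```python
# Minimal sketch of algorithm unfolding (unrolled ISTA) for sparse
# restoration under the model y = H x + noise. In a learned (unfolded)
# network, the step size and threshold would be trained per layer.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(y, H, n_layers=10, lam=0.1):
    """Each 'layer' is one ISTA iteration for min 0.5*||Hx-y||^2 + lam*||x||_1."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(H.shape[1])
    for _ in range(n_layers):
        grad = H.T @ (H @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Toy usage: a sparse spike train blurred by a Gaussian peak shape.
rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n); x_true[[20, 55, 70]] = [1.0, 0.7, 1.3]
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
H = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
y = H @ x_true + 0.01 * rng.standard_normal(n)
x_hat = unrolled_ista(y, H, n_layers=200, lam=0.05)
```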


Interpolation pour l'augmentation de données : Application à la gestion des adventices de la canne à sucre à la Réunion (Interpolation for data augmentation: application to weed management in sugarcane in Réunion)

Ferber, Frederick Fabre, Gay, Dominique, Soulie, Jean-Christophe, Diatta, Jean, Maillard, Odalric-Ambrym

arXiv.org Machine Learning

Data augmentation is a crucial step in the development of robust supervised learning models, especially when dealing with limited datasets. This study explores interpolation techniques for the augmentation of geo-referenced data, with the aim of predicting the presence of Commelina benghalensis L. in sugarcane plots in La Réunion. Given the spatial nature of the data and the high cost of data collection, we evaluated two interpolation approaches: Gaussian processes (GPs) with different kernels and kriging with various variograms. The objectives of this work are threefold: (i) to identify which interpolation methods offer the best predictive performance for various regression algorithms, (ii) to analyze the evolution of performance as a function of the number of observations added, and (iii) to assess the spatial consistency of augmented datasets. The results show that GP-based methods, in particular with combined kernels (GP-COMB), significantly improve the performance of regression algorithms while requiring less additional data. Although kriging shows slightly lower performance, it is distinguished by a more homogeneous spatial coverage, a potential advantage in certain contexts.
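As a minimal sketch of GP-based interpolation for augmenting geo-referenced data, in the spirit of the GP-COMB variant described above: the exact kernel combination and target variable used by the authors are not specified here, so the RBF + Matérn choice and the synthetic target below are assumptions for illustration.

```python
# Sketch of GP interpolation to augment a small geo-referenced dataset.
# Kernel combination and synthetic target are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

rng = np.random.default_rng(42)
X_obs = rng.uniform(0, 10, size=(50, 2))                      # observed plot coordinates
y_obs = np.sin(X_obs[:, 0]) + 0.1 * rng.standard_normal(50)   # stand-in target variable

kernel = (RBF(length_scale=2.0) + Matern(length_scale=2.0, nu=1.5)
          + WhiteKernel(noise_level=0.01))                    # combined kernel ("GP-COMB"-like)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_obs, y_obs)

# Interpolate at new locations and append them as augmented samples.
X_new = rng.uniform(0, 10, size=(100, 2))
y_new, y_std = gp.predict(X_new, return_std=True)             # y_std can gate low-confidence points
X_aug = np.vstack([X_obs, X_new])
y_aug = np.concatenate([y_obs, y_new])
```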


Algorithme EM régularisé (Regularized EM algorithm)

Houdouin, Pierre, Jonckheere, Matthieu, Pascal, Frederic

arXiv.org Artificial Intelligence

The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing maximum likelihood estimates for Gaussian Mixture Models (GMMs). When the sample size is smaller than the data dimension, this can lead to a singular or poorly conditioned covariance matrix and, thus, to degraded performance. This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with small sample sizes. The method maximizes a penalized GMM likelihood in which regularized estimation can ensure positive definiteness of the covariance matrix updates by shrinking the estimators towards structured target covariance matrices. Finally, experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
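The shrinkage idea can be sketched as a modified covariance update in the M-step: blend the classical EM estimate with a structured target. The scaled-identity target and linear blending below are illustrative assumptions; the paper's exact penalty may differ.

```python
# Sketch of a regularized covariance update inside EM for a GMM:
# shrink the M-step covariance toward a structured target (here a
# scaled identity) so the update stays positive definite even when
# the component sample size is smaller than the dimension.
import numpy as np

def regularized_m_step_cov(X, resp_k, mu_k, rho=0.3):
    """X: (n, p) data; resp_k: (n,) responsibilities for component k;
    mu_k: (p,) component mean; rho: shrinkage weight in [0, 1]."""
    n, p = X.shape
    Nk = resp_k.sum()
    diff = X - mu_k
    sigma_emp = (resp_k[:, None] * diff).T @ diff / Nk   # classical EM update
    target = (np.trace(sigma_emp) / p) * np.eye(p)       # structured target covariance
    return (1.0 - rho) * sigma_emp + rho * target        # positive definite for rho > 0
```

For any rho > 0 the result adds a positive multiple of the identity to a positive semi-definite matrix, so it is positive definite even when the empirical covariance is rank-deficient.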


Contribution à l'Optimisation d'un Comportement Collectif pour un Groupe de Robots Autonomes (Contribution to the Optimization of a Collective Behavior for a Group of Autonomous Robots)

Bendahmane, Amine

arXiv.org Artificial Intelligence

This thesis studies the domain of collective robotics, and more particularly the optimization problems of multi-robot systems in the context of exploration, path planning, and coordination. It includes two contributions. The first is the use of the Butterfly Optimization Algorithm (BOA) to solve the unknown-area exploration problem with energy constraints in dynamic environments; to the best of our knowledge, this algorithm had not previously been used to solve robotics problems. We proposed a new version of this algorithm, called xBOA, based on the crossover operator, to improve the diversity of the candidate solutions and speed up the convergence of the algorithm. The second contribution is the development of a new simulation framework for benchmarking dynamic incremental problems in robotics, such as exploration tasks. The framework is designed to be generic, so that different metaheuristics can be compared quickly with minimal modifications, and to adapt easily to single- and multi-robot scenarios. It also provides researchers with tools to automate their experiments and generate visuals, allowing them to focus on more important tasks such as modeling new algorithms. We conducted a series of experiments that showed promising results and allowed us to validate our approach and model.
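A rough sketch of BOA with a simple crossover step, loosely in the spirit of the xBOA idea described above: the fragrance model follows the standard BOA formulation, but the crossover operator, its rate, and the toy objective are assumptions for illustration; the thesis's actual exploration objective is not reproduced.

```python
# Sketch of the Butterfly Optimization Algorithm (BOA) with a simple
# uniform crossover added for diversity (xBOA-like). Minimizes a toy
# sphere function; all hyperparameters are illustrative.
import numpy as np

def xboa(f, dim=2, pop=20, iters=100, c=0.01, a=0.1, p=0.8, cx_rate=0.2):
    rng = np.random.default_rng(1)
    X = rng.uniform(-5, 5, size=(pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[np.argmin(fit)].copy()
        frag = c * np.abs(fit) ** a                  # fragrance of each butterfly
        for i in range(pop):
            r = rng.random()
            if rng.random() < p:                     # global search toward the best solution
                cand = X[i] + (r**2 * best - X[i]) * frag[i]
            else:                                    # local random walk between two peers
                j, k = rng.integers(pop, size=2)
                cand = X[i] + (r**2 * X[j] - X[k]) * frag[i]
            if rng.random() < cx_rate:               # crossover step for diversity (xBOA-like)
                mate = X[rng.integers(pop)]
                mask = rng.random(dim) < 0.5
                cand = np.where(mask, cand, mate)
            fc = f(cand)
            if fc < fit[i]:                          # greedy replacement
                X[i], fit[i] = cand, fc
    return X[np.argmin(fit)], fit.min()

sol, val = xboa(lambda x: np.sum(x**2))              # toy sphere objective
```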


Un jeu à débattre pour sensibiliser à l'Intelligence Artificielle dans le contexte de la pandémie de COVID-19 (A debate game to raise awareness of Artificial Intelligence in the context of the COVID-19 pandemic)

Adam, Carole, Lauradoux, Cédric

arXiv.org Artificial Intelligence

Artificial Intelligence is more and more pervasive in our lives. Many important decisions are delegated to AI algorithms: accessing higher education, determining prison sentences, autonomously driving vehicles... Engineers and researchers are trained in this field, while the general population has very little knowledge about AI. As a result, people are very sensitive to the (more or less accurate) ideas disseminated by the media: an AI that is unbiased and infallible, and that will either save the world or lead to its demise. We therefore believe, as highlighted by UNESCO, that it is essential to give the population a general understanding of AI algorithms, so that they can choose wisely whether or not to use them. To this end, we propose a serious game in the form of a civic debate aimed at selecting an AI solution to control a pandemic. The game targets high school students; it was first tested during a science fair and is now freely available.


A new step for computing

Vasques, Xavier

arXiv.org Artificial Intelligence

The data center of tomorrow will be made up of heterogeneous systems running heterogeneous workloads, with the systems located as close as possible to the data. These heterogeneous systems will be equipped with binary, biologically inspired, and quantum accelerators. Such architectures will provide the foundations for addressing the challenges ahead. Like an orchestra conductor, the hybrid cloud will orchestrate these systems through a layer of security and intelligent automation.


État de l'art sur l'application des bandits multi-bras (State of the art on applications of multi-armed bandits)

Bouneffouf, Djallel

arXiv.org Artificial Intelligence

Multi-armed bandits offer the advantage of learning and exploiting already-acquired knowledge at the same time. This capability allows the approach to be applied in different domains, from clinical trials, where the goal is to investigate the effects of different experimental treatments while minimizing patient losses, to adaptive routing, where the goal is to minimize delays in a network. This article reviews recent results on applying bandits to real-life scenarios and summarizes the state of the art for each of these fields. Different techniques have been proposed to solve this problem setting, such as epsilon-greedy, Upper Confidence Bound (UCB), and Thompson Sampling (TS). We show here how these algorithms were adapted to solve the different exploration-exploitation problems.
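The three strategies named in the abstract can be sketched on a simple Bernoulli bandit (e.g., success/failure of a treatment or of a route). The arm probabilities and horizon below are toy assumptions; the sketch shows the core exploration-exploitation logic of each method, not any application from the survey.

```python
# Epsilon-greedy, UCB1, and Thompson Sampling on a toy Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])   # hidden arm reward probabilities (assumed)
K, T = len(true_means), 10_000

def pull(arm):
    return rng.random() < true_means[arm]

def epsilon_greedy(eps=0.1):
    counts, values = np.zeros(K), np.zeros(K)
    for t in range(T):
        arm = rng.integers(K) if rng.random() < eps else int(np.argmax(values))
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]   # incremental mean update
    return values

def ucb1():
    counts = np.ones(K)
    values = np.array([float(pull(a)) for a in range(K)])  # pull each arm once
    for t in range(K, T):
        bonus = np.sqrt(2 * np.log(t) / counts)            # optimism-in-face-of-uncertainty bonus
        arm = int(np.argmax(values + bonus))
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return values

def thompson():
    alpha, beta = np.ones(K), np.ones(K)                   # Beta(1, 1) priors per arm
    for t in range(T):
        arm = int(np.argmax(rng.beta(alpha, beta)))        # sample beliefs, act greedily on them
        if pull(arm):
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return alpha / (alpha + beta)
```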


Interprétabilité des modèles : état des lieux des méthodes et application à l'assurance (Model interpretability: a review of methods and an application to insurance)

Delcaillau, Dimitri, Ly, Antoine, Vermet, Franck, Papp, Alizé

arXiv.org Machine Learning

Since May 2018, the General Data Protection Regulation (GDPR) has introduced new obligations for industries. By setting a legal framework, it notably imposes strong transparency on the use of personal data: people must be informed of the use of their data and must consent to it. Data is the raw material of many models that today make it possible to increase the quality and performance of digital services. Transparency on the use of data also requires a good understanding of its use through different models. The use of models, even efficient ones, must be accompanied by an understanding at all levels of the process that transforms data (upstream and downstream of a model), making it possible to define the relationships between an individual's data and the choice that an algorithm could make based on their analysis (for example, the recommendation of a product or a promotional offer, or an insurance rate representative of the risk). Model users must ensure that models do not discriminate and that their results can be explained. The widening panel of predictive algorithms, made possible by the evolution of computing capacities, leads scientists to be vigilant about the use of models and to consider new tools to better understand the decisions deduced from them. Recently, the community has been particularly active on model transparency, with a marked intensification of publications over the past three years. The increasingly frequent use of more complex algorithms (deep learning, XGBoost, etc.) with attractive performance is undoubtedly one of the causes of this interest. This article thus presents an inventory of methods for interpreting models and their uses in an insurance context.
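As one concrete, model-agnostic example of the kind of interpretation method such a survey covers, permutation feature importance measures how much a model's score degrades when a feature is shuffled. The dataset and model below are toy assumptions, not the article's insurance data.

```python
# Permutation feature importance: a common model-agnostic interpretation
# method. Toy regression data; features 2 and 3 are pure noise by design.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the score drop.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i, (m, s) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {m:.3f} +/- {s:.3f}")
```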


Classification des Séries Temporelles Incertaines par Transformation Shapelet (Classification of Uncertain Time Series by Shapelet Transformation)

Mbouopda, Michael, Nguifo, Engelbert Mephu

arXiv.org Artificial Intelligence

Time series classification is used in a diverse range of domains, such as meteorology, medicine, and physics. It aims to classify chronological data. Many accurate approaches have been developed during the last decade, and shapelet transformation is one of them. However, none of these approaches takes data uncertainty into account. Using uncertainty propagation techniques, we propose a new dissimilarity measure based on the Euclidean distance. We also show how to use this new measure to adapt shapelet transformation to uncertain time series classification. An experimental assessment of our contribution is carried out on state-of-the-art datasets.
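A plausible sketch of the underlying idea: propagate per-time-step uncertainties through the Euclidean distance using standard first-order propagation rules, so each distance comes with its own uncertainty. The paper's exact dissimilarity measure may differ in its details.

```python
# Uncertainty-propagated Euclidean distance between two uncertain time
# series (value +/- uncertainty at each time step), via first-order
# propagation. Illustrative; not necessarily the paper's exact measure.
import numpy as np

def uncertain_euclidean(x, dx, y, dy):
    """x, y: series values; dx, dy: per-step standard uncertainties.
    Returns (distance, propagated uncertainty of the distance)."""
    e = x - y
    de = np.sqrt(dx**2 + dy**2)                # uncertainty of each difference
    d = np.sqrt(np.sum(e**2))
    if d == 0.0:                               # avoid division by zero for identical series
        return 0.0, float(np.sqrt(np.sum(de**2)))
    dd = np.sqrt(np.sum((e * de) ** 2)) / d    # first-order propagation through sqrt-of-sum
    return float(d), float(dd)

# Toy usage with three-step series.
x = np.array([1.0, 2.0, 3.0]); dx = np.array([0.1, 0.1, 0.2])
y = np.array([1.1, 1.8, 3.5]); dy = np.array([0.1, 0.2, 0.1])
dist, unc = uncertain_euclidean(x, dx, y, dy)
```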