A Brief Overview of Methods to Explain AI (XAI)

#artificialintelligence

I know this topic has been discussed many times, but I recently gave some talks on interpretability (for SCAI and France Innovation) and thought it would be good to include some of that work in this article. The importance of explainability for decision-making in machine learning no longer needs to be argued. Users are demanding more explanations, and although there are no uniform, strict definitions of interpretability and explainability, the number of scientific papers on explainable artificial intelligence (XAI) is growing exponentially. As you may know, there are two ways to design an interpretable machine learning process: build a model that is interpretable by design, or explain a black-box model after the fact.
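
The excerpt stops short of illustrating those two routes, so here is a minimal sketch of the contrast between them: an intrinsically interpretable model versus post-hoc explanation of a black box. The use of scikit-learn, the breast-cancer dataset, and permutation importance are my assumptions for illustration, not anything prescribed by the article (SHAP or LIME would play the same post-hoc role).

```python
# Illustrative sketch (assumes scikit-learn is available): the two broad
# routes to an interpretable ML process -- a model that is interpretable
# by design, versus post-hoc explanation of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Route 1: intrinsically interpretable -- the fitted coefficients are
# themselves the explanation of each feature's contribution.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("first coefficients:", linear.coef_[0][:5])

# Route 2: a black-box model explained after the fact with a
# model-agnostic method (permutation importance here).
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
print("most important features:", result.importances_mean.argsort()[::-1][:5])
```

The trade-off is the usual one: the interpretable model can be read directly but may fit less well, while the black box needs a separate explanation step.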


Policymakers want to regulate AI but lack consensus on how

#artificialintelligence

According to a new Clifford Chance survey of 1,000 tech policy experts across the United States, U.K., Germany and France, policymakers are concerned about the impact of artificial intelligence, but perhaps not nearly enough. Though policymakers rightly worry about cybersecurity, it's perhaps too easy to focus on near-term, obvious threats while the longer-term, not-obvious-at-all threats of AI get ignored. Or, rather, not ignored, but there is no consensus on how to tackle emerging issues with AI. When YouGov polled tech policy experts on behalf of Clifford Chance and asked about priority areas for regulation ("To what extent do you think the following issues should be priorities for new legislation or regulation?"), ethical use of AI and algorithmic bias ranked well down the pecking order from other issues. Maybe this isn't a big deal, except that AI (or, more accurately, machine learning) finds its way into higher-ranked priorities like data privacy and misinformation. Indeed, it's arguably the primary catalyst for problems in these areas, not to mention the "brains" behind sophisticated cybersecurity threats.


EU Artificial Intelligence Regulation Risks Undermining Social Safety Net - AI Summary

#artificialintelligence

The European Union's (EU) proposed plan to regulate the use of artificial intelligence (AI) threatens to undermine the bloc's social safety net, and is ill-equipped to protect people from surveillance and discrimination, according to a report by Human Rights Watch. Drawing on case studies from Ireland, France, the Netherlands, Austria, Poland and the UK, the non-governmental organisation (NGO) found that Europe's trend towards automation is discriminating against people in need of social security support, compromising their privacy, and making it harder for them to obtain government assistance. Arguing that self-regulation is not good enough, the report echoes claims made by digital civil rights experts, who previously told Computer Weekly that the regulatory proposal is stacked in favour of organisations – both public and private – that develop and deploy AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress. "As a result, it is likely that critically important information about a broad range of law enforcement technologies that could impact human rights, including criminal risk assessment tools and crime analytics software that parse large datasets to detect patterns of suspicious behaviour, would remain secret," the report said. However, according to Laure Baudrihaye-Gérard, legal and policy director at NGO Fair Trials, the extension of Europol's mandate in combination with the AIA's proposed exemptions would effectively allow the crime agency to operate with little accountability and oversight when it came to developing and using AI for policing.


Can artificial intelligence help close gender gaps at work?

#artificialintelligence

Is it because she is a mother? Or perhaps she is perceived as lacking ambition, or leadership qualities? Gender stereotypes continue to hold women back at work, but a handful of tech firms say they have developed artificial intelligence (AI) systems that can help break biases in hiring and promotion to give female candidates a fairer chance. Employers and the wider economy could stand to gain, too.


Wormholes might be more stable than previously thought

Daily Mail - Science & tech

Wormholes might be more stable than previously predicted, according to a new study, which found they could be used to transport spacecraft across the universe. Also known as an Einstein-Rosen bridge, the theoretical interstellar phenomenon works by tunnelling between two distant points in space, much like the hole burrowed by a worm. These portals between black holes were once thought to collapse instantly upon forming unless an unknown form of exotic matter could be deployed as a stabiliser. However, a new study by physicist Pascal Koiran, from the École normale supérieure de Lyon in France, looked at them using a different set of techniques. He found that a particle could be documented crossing the event horizon into the wormhole, passing through it and reaching the other side in a finite amount of time.


EU: Artificial Intelligence Regulation Threatens Social Safety Net, Warns HRW

#artificialintelligence

The European Union's plan to regulate artificial intelligence is ill-equipped to protect people from flawed algorithms that deprive them of lifesaving benefits and discriminate against vulnerable populations, Human Rights Watch said in a report on the regulation. The European Parliament should amend the regulation to better protect people's rights to social security and an adequate standard of living. The 28-page report in the form of a question-and-answer document, "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net," examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.


Minister Champagne attends Global Partnership on Artificial Intelligence Paris Summit

#artificialintelligence

Artificial intelligence (AI) offers powerful new solutions across sectors of the economy to improve the lives of Canadians, from advanced health care to more efficient and sustainable resource development and agriculture. International collaboration and coordination will help realize the full potential of AI to benefit all citizens and accelerate trustworthy technology development, while fostering diversity and inclusion across the AI domain. Today, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, joined leading international AI experts, including representatives from 18 GPAI member countries and the European Union, for the second annual plenary of the Global Partnership on Artificial Intelligence (GPAI) in Paris, France. During the opening ceremony, Minister Champagne highlighted the progress that GPAI has made in its first year under Canada's chairmanship and formally passed the torch to France, which is taking over as 2021-2022 GPAI Council Chair. The Minister was joined by Cédric O, France's Minister of State for the Digital Transition and Electronic Communication and the new GPAI Lead Council Chair.


Using tools helps you understand language and vice versa

New Scientist

Practising a tool-using task helps people do better in a test of complex language understanding – and the benefits go the other way too. The crossover may happen because some of the same parts of the brain are involved in tool use and language, says Claudio Brozzoli at the National Institute of Health and Medical Research in Lyon, France. One idea is that language evolved by co-opting some of the brain networks involved in tool use. Both abilities involve sequences of precise physical movements – whether of the hands or of the lips, jaws, tongue and voice box – which must be done in the right order to be effective. Brozzoli's team asked volunteers to lie in a brain scanner while carrying out tasks involving either tool use or understanding complex sentences.


EU: Artificial Intelligence Regulation Threatens Social Safety Net

#artificialintelligence

The European Parliament should amend the regulation to better protect people's rights to social security and an adequate standard of living. The 28-page report in the form of a question-and-answer document, "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net," examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms. "The EU's proposal does not do enough to protect people from algorithms that unfairly strip them of the benefits they need to support themselves or find a job," said Amos Toh, senior researcher on artificial intelligence and human rights at Human Rights Watch.


EU artificial intelligence regulation risks undermining social safety net

#artificialintelligence

The European Union's (EU) proposed plan to regulate the use of artificial intelligence (AI) threatens to undermine the bloc's social safety net, and is ill-equipped to protect people from surveillance and discrimination, according to a report by Human Rights Watch. Social security support across Europe is increasingly administered by AI-powered algorithms, which are being used by governments to allocate life-saving benefits, provide job support and control access to a variety of social services, said Human Rights Watch in its 28-page report, "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net". Drawing on case studies from Ireland, France, the Netherlands, Austria, Poland and the UK, the non-governmental organisation (NGO) found that Europe's trend towards automation is discriminating against people in need of social security support, compromising their privacy, and making it harder for them to obtain government assistance. It added that while the EU's Artificial Intelligence Act (AIA) proposal, which was published in April 2021, does broadly acknowledge the risks associated with AI, "it does not meaningfully protect people's rights to social security and an adequate standard of living". "In particular, its narrow safeguards neglect how existing inequities and failures to adequately protect rights – such as the digital divide, social security cuts, and discrimination in the labour market – shape the design of automated systems, and become embedded by them."