The European Commission will this week present its proposal on Artificial Intelligence (AI), seen as a step toward the new regulatory framework promised by Commission President Ursula von der Leyen in her State of the Union address, writes Marie-Françoise Gondard-Argenti. Marie-Françoise Gondard-Argenti is a member of the Employers' Group at the European Economic and Social Committee. It is clear that there is currently no country or company leader in Europe that does not support the development of a trustworthy and innovative AI ecosystem: one that promotes a human-centric approach, primarily serves people, and increases their well-being. There is no company in Europe that does not understand the need to leverage the EU market to spread the EU's approach to AI regulation globally. However, at the moment, the EU lags behind.
Machine learning (ML), artificial intelligence (AI), and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration of potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. We believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting of their work; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit.
The potential uses include improving diagnostic accuracy,[1] more reliably predicting prognosis,[2] targeting treatments,[3] and increasing the operational efficiency of health systems.[4] Among potentially disruptive technologies, image based diagnostic applications of ML/AI have shown the earliest clinical promise (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians[5]), as has natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.[2] Although we are only just …
When Microsoft spends $19.7 billion on a company whose specialties include voice recognition and artificial intelligence (AI) as part of its health sector strategy, you know that AI in the medical field is here to stay. It only makes sense, then, that regulations regarding the technology would not be far behind. Thanks to a leaked document first reported by Politico, we now have our first look at what such regulations might look like in the European Union. The regulation document largely concerns "high-risk" usages of AI. That's not surprising, as the European Commission originally published a whitepaper in February 2020 outlining ideas for regulating such uses of the technology.
As technology becomes a more intimate part of our everyday lives, artificial intelligence is driving progress and helping solve some of society's greatest challenges. BI Norwegian Business School is researching the impact of technology and artificial intelligence on sustainability, gender equality, health and wellbeing, justice, social responsibility, and responsible investment and education. Matilda Dorotic, an associate professor in marketing at the business school, has been involved in extensive research on the impact of technology and big data on diverse aspects of well-being, civil-mindedness, and smart cities. She has recently been invited by the European Commission to talk about the societal issues of implementing artificial intelligence and its far-reaching impact on citizens. It's something we all need to pay more attention to as, according to Dorotic, "citizens know so little about the ways in which artificial intelligence is fundamentally changing their everyday lives". According to the WHO, cancer is the second leading cause of death worldwide and is responsible for around 10 million deaths per year.
Europe is already the world's tech privacy cop. Now it might become the AI cop too. Companies using artificial intelligence in the EU could soon be required to get audited first, under new rules set to be proposed by the European Union as soon as next week. The regulations were partly sketched out in an EU white paper last year and aim to ensure the responsible application of AI in high-stakes situations like autonomous driving, remote surgery or predictive policing. Officials want to ensure that such systems are trained on privacy-protecting and diverse data sets.
A make-up artist has become an internet sensation after transforming herself into popular celebrities, even fooling her friends and phone. Liss Lacao, 29, has recreated the recognizable features of celebrities such as Gordon Ramsay, Dolly Parton, the Queen and British Prime Minister Boris Johnson. She's so good, she's even fooled her iPhone, which has facial recognition, and her friends into thinking she was one of the A-listers.
Poppy Gustafsson runs a cutting-edge and gender-diverse cybersecurity firm on the brink of a £3bn stock market debut, but she is happy to reference pop culture classic the Terminator to help describe what Darktrace actually does. Launched in Cambridge eight years ago by an unlikely alliance of mathematicians, former spies from GCHQ and the US, and artificial intelligence (AI) experts, Darktrace provides protection that enables businesses to stay one step ahead of increasingly smart and dangerous hackers and viruses. Marketing its products as the digital equivalent of the human body's ability to fight illness, Darktrace's AI security works as an "enterprise immune system", can "self-learn and self-heal" and has an "autonomous response capability" to tackle threats without instruction as they are detected. "It really does feel like we're in this new era of cybersecurity," says Gustafsson, the chief executive of Darktrace. "The arms race will absolutely continue. I really don't think it's very long until this [AI] innovation gets into the hands of attackers, and we will see these very highly targeted and specific attacks that humans won't necessarily be able to spot and defend themselves from. "It's not going to be these futuristic Terminator-style robots out shooting each other, it's going to be all these little pieces of code fighting in the background of our businesses."
A European Union plan to regulate artificial intelligence could see companies that break proposed rules on mass surveillance and discrimination fined millions of euros. Draft legislation, leaked ahead of its official release later this month, suggests the EU is attempting to find a "third way" on AI regulation, between the free market US and authoritarian China. The draft rules include an outright ban on AI designed to manipulate people "to their detriment", carry out indiscriminate surveillance or calculate "social scores". Much of the wording is currently vague enough that it could cover the entire advertising industry or nothing at all. In any case, the military and any agency ensuring public security are exempt.
The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications. The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected as soon as next week. European member states would be required to appoint assessment bodies to test, certify and inspect the systems, according to the document. Companies that develop prohibited AI services, supply incorrect information, or fail to cooperate with the national authorities could be fined up to 4% of global revenue.
The letter urges the Commissioner to support enhanced protection for fundamental human rights. A group of 51 digital rights organizations has called on the European Commission to impose a complete ban on the use of facial recognition technologies for mass surveillance, with no exceptions allowed. Comprising activist groups from across the continent, such as Big Brother Watch UK, AlgorithmWatch and the European Digital Society, the call was coordinated by advocacy network European Digital Rights (EDRi) in the form of an open letter to the European commissioner for Justice, Didier Reynders. It comes just weeks before the Commission releases much-awaited new rules on the ethical use of artificial intelligence on the continent on 21 April. The letter urges the Commissioner to support enhanced protection for fundamental human rights in the upcoming laws, particularly in relation to facial recognition and other biometric technologies when these tools are used in public spaces to carry out mass surveillance.