"Today, we are presenting our ambition to shape Europe's digital future. It covers everything from cybersecurity to critical infrastructures, digital education to skills, and democracy to media. I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident," says Ursula von der Leyen, President of the Commission. Von der Leyen said that Europe needs to step up its efforts to create a truly digital economy and to make better use of the massive amounts of data being collected, arguing that within five years Europe alone will generate as much data as is collected worldwide today.
Self-driving cars are among the high-risk artificial intelligence applications the European Union wants to regulate. The European Commission today unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from the more freewheeling approaches to the technology taken in the United States and China. The Commission will draft new laws, including a ban on "black box" AI systems that humans cannot interpret, to govern high-risk uses of the technology, such as medical devices and self-driving cars. Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference announcing the plan that the goal is to promote "trust, not fear." The plan also includes measures to update the European Union's 2018 AI strategy and pump billions into R&D over the next decade.
Just-released proposals out of Europe calling for new rules to regulate high-risk artificial intelligence systems provide another marker for U.S. insurers and regulators as they weigh the opportunities and risks of this evolving technology, industry experts say. The proposals and accompanying data strategy, unveiled Feb. 19, are part of the EU's broader digital strategy, which aims to set global standards for technological development that put people first. In its report, the European Commission says that while artificial intelligence can bring advances by tackling climate change and making production more efficient, it also "entails a number of risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives, or being used for criminal purposes." Jon Godfread, insurance commissioner for North Dakota, said the policy document is "another fencepost and guideline that we can all take a look at" as international discussions on how to regulate artificial intelligence continue to develop. The risk-based regulatory approach outlined in the report calls for clear rules for high-risk artificial intelligence systems in recruitment, health care, transport, energy and law enforcement, so that they are "transparent, traceable and guarantee human oversight."
Artificial intelligence is a technology used to plan for the future. Planning implies intelligibility, calculability, and systematization. The future as a concept has, in Western cultures, been closely tied to monotheism and the development of a linear narrative about societies, with a predicted end of the world in which individuals end up in either paradise or hell. This was a radical change from the narratives of classical cultures, which had no notion of the past or of prehistory, but rather a narrative of a cultural, god-given origin similar to the present. That narrative did not anticipate change in the way future narratives do. Future narratives see the time to come as a time when evolution happens, when neither clothes nor context nor social habits remain the same. With the development of Protestantism and capitalism, the future became more than a point in time when the story would end. It became an unwritten point of opportunity to be shaped by human beings.
Three members of the legal affairs committee are currently working to ensure the EU is prepared for the legal and ethical aspects of developments in artificial intelligence (AI). Find out more in our interview. German EPP member Axel Voss, the member responsible for issues relating to the civil liability regime for artificial intelligence, speaks about how the EU can resolve the legal uncertainties created by the use of AI. What problems does the Parliament want to solve? Although Europe's existing civil liability framework covers most upcoming scenarios, new technologies based on AI will nevertheless expose several unresolved issues.
"The Five" discussed the media reaction to reports on Russia's involvement, or prospective involvement, in the 2020 presidential election Monday, with particular focus on cable news channels CNN and MSNBC. "In terms of these talking heads on TV, the makeup-wearing misery mongers, you're never, ever, ever going to hear them apologize for getting it wrong literally for the last four years," Fox Business Network's Dagen McDowell said. "Because in their arrogance and insecurity, they'll never be able to admit that they are tools for Putin and also fools." A U.S. intelligence official told Fox News Sunday that, contrary to numerous recent media reports, there is no evidence to suggest that Russia is making a specific "play" to boost President Trump's reelection bid. The official added that top election security official Shelby Pierson, who briefed Congress on Russian election interference efforts earlier this month, may have overstated intelligence regarding the issue.
The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives. A browse through our 'ethics' category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind. Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but could have a serious negative impact on lives if implemented incorrectly. "Technology is a force for good which has improved people's lives but we must make sure it is developed in a safe and secure way. Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development. I'm pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre's recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society."
The European Union has published its European data strategy [PDF], intended to provide the framework for what it describes as human-centric artificial intelligence. The white paper, said President of the European Commission, Ursula von der Leyen, is intended to "shape Europe's digital future". She continued: "It covers everything from cybersecurity to critical infrastructures, digital education to skills, democracy to media. I want that digital Europe reflects the best of Europe - open, fair, diverse, democratic, and confident." However, while the strategy has been pitched as boosting the EU's technology sector and preparing the bloc for a shift to an ever-more data-driven economy, increasingly governed by AI, the strategy is driven by a desire to regulate artificial intelligence and data platforms before they take off.
Oliver Letwin's strange and somewhat alarming new book begins at midnight on Thursday 31 December 2037. In Swindon – stay with me! – a man called Aameen Patel is working the graveyard shift at Highways England's traffic HQ when his computer screen goes blank and the room is plunged into darkness. He tries to report these things to his superiors, but can get no signal on his mobile. Looking at the motorway from the viewing window by his desk, he observes not an orderly stream of traffic but a dramatic pile-up of crashed cars and lorries – at which point he realises something is seriously amiss. In the Britain of 2037, everything, or almost everything, is controlled by 7G wireless technology, from the national grid to the traffic (not only are cars driverless; a vehicle cannot even join a motorway without logging into an "on-route guidance system"). There is, then, only one possible explanation: the entire 7G network must have gone down. It sounds like I'm describing a novel – and it's true that Aameen Patel will soon be joined by another fictional creation in the form of Bill Donoghue, who works at the Bank of England, and whose job it will be to tell the prime minister that the country is about to pay a heavy price for its cashless economy, given that even essential purchases will not be possible until the network is back up. (Bill's mother-in-law is also one of thousands of vulnerable people whose carers will soon be unable to reach them, the batteries in their electric cars having gone flat.)
On Wednesday the European Commission launched a blizzard of proposals and policy papers under the general umbrella of "shaping Europe's digital future". The documents released included: a report on the safety and liability implications of artificial intelligence, the internet of things and robotics; a paper outlining the EU's strategy for data; and a white paper on "excellence and trust" in artificial intelligence. In their general tenor, the documents evoke the blend of technocracy, democratic piety and ambitiousness that is the hallmark of EU communications. That said, it is also the case that in terms of doing anything to get tech companies under some kind of control, the European Commission is the only game in town. In a nice coincidence, the policy blitz came exactly 24 hours after Mark Zuckerberg, supreme leader of Facebook, accompanied by his bag-carrier – a guy called Nicholas Clegg who looked vaguely familiar – had called on the commission graciously to explain to its officials the correct way to regulate tech companies.