If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The European Union has published a new framework to regulate the use of artificial intelligence across the bloc's 27 member states. The proposal, which will take years to pass into law and will be subject to many tweaks and amendments along the way, nevertheless constitutes the most ambitious set of AI regulations seen globally to date. The regulations cover a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrive at a time when countries around the world are grappling with the ethical ramifications of artificial intelligence. Similar to the EU's data privacy law, the GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global revenues, though such punishments are extremely rare. "It is a landmark proposal of this Commission. It's our first ever legal framework on artificial intelligence," said European Commissioner Margrethe Vestager during a press conference.
On 21 April 2021, the European Commission proposed new rules and actions aimed at making Europe the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will endeavour to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users' trust in new products. Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: "On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake."
AI took center stage in recently announced updates to the Alexa virtual voice assistant, and in charges this week from the European Commission that Amazon is breaking EU competition rules. During Amazon's Alexa Live event held in July, the company announced a major update to Alexa's developer toolkit that brings AI improvements. Since launching in 2014, Amazon's voice assistant has shipped hundreds of millions of units, supported by a sizable developer community offering voice apps, called Skills, that extend Alexa's default feature set. Just as the large selections of third-party applications for Android and iOS differentiate those operating systems, Skills play an important role in Amazon's growth strategy for Alexa, according to a recent account in SiliconANGLE. Amazon added deep learning models for natural language understanding that the company said will enable Skills to recognize users' voice commands with 15 percent higher accuracy on average.
The European Commission today unveiled a sweeping set of proposals that it hopes will establish the region as a leader in artificial intelligence by focusing on trust and transparency. The proposals would lead to changes in the way data is collected and shared in an effort to level the playing field between European companies and competitors from the U.S. and China. The EC wants to prevent potential abuses while also building confidence among citizens in order to reap the benefits promised by the technology. In a series of announcements, EC leaders expressed optimism that AI could help tackle challenges such as climate change, mobility, and health care, along with a determination to keep private tech companies from influencing regulation and dominating the data needed to develop these algorithms. "We want citizens to trust the new technology," said Ursula von der Leyen, president of the European Commission.
The potential for the Internet of Things to distort market competition is troubling European Union lawmakers, who have today kicked off a sectoral inquiry. They aim to gather data from hundreds of companies operating in the smart home and connected device space, via some 400 questionnaires sent to companies big and small across Europe, Asia and the US. The intelligence gleaned will feed a public consultation slated for early next year, when the Commission will also publish a preliminary report. In a statement on the launch of the sectoral inquiry today, the European Union's competition commissioner, Margrethe Vestager, said the risks to competition and open markets linked to the data collection capabilities of connected devices and voice assistants are clear. The aim of the exercise is therefore to get ahead of any data-fuelled competition risks in the space before they lead to irreversible market distortion. "One of the key issues here is data. Voice assistants and smart devices can collect a vast amount of data about our habits. And there's a risk that big companies could misuse the data collected through such devices, to cement their position in the market against the challenges of competition. They might even use their knowledge of how we access other services to enter the market for those services and take it over," said Vestager.
The EU competition watchdog is taking another look at whether big tech is helping itself to too large a slice of the digital market, this time in the space of connected devices. The bloc's competition commissioner, Margrethe Vestager, announced the launch of a sector probe to make sure that the companies behind smart products and digital assistants aren't building monopolies that could threaten consumer rights in the EU. While the technologies have great potential, the commissioner warned that they should be deployed carefully. "We'll only see the full benefits – low prices, wide choice, innovative products and services – if the markets for these devices stay open and competitive. And the trouble is that competition in digital markets can be fragile," said Vestager.
When DeepMind, the artificial intelligence company owned by Google parent Alphabet Inc., released its predictions about some of the building blocks of the virus that causes Covid-19 in early March, it gave medical researchers a small but potentially important clue that could help them develop a vaccine and treatments for the respiratory illness. The company's deep learning system, AlphaFold, which predicts the shapes of proteins when no similar structures are available, is just one example of the powerful role AI is playing in the fight against the novel coronavirus. The innovations that DeepMind and others are rapidly rolling out could be complicated by AI laws to be unveiled by the European Union this year. Even as the coronavirus upends business, economic, and legislative plans the world over, the EU is pushing ahead with its AI policy proposal, which would make it a global leader in regulating the sector. The European Commission, the bloc's executive body, released its plan in February, calling for public feedback by the end of May.
One area that the Commission is particularly concerned about is facial recognition. At the moment, the processing of biometric data in order to identify people is illegal in most cases, under data privacy laws. However, the EU is now looking at whether there should be certain exceptions. Speaking to journalists in Brussels, Margrethe Vestager, the EU's head of competition policy, said: "Artificial intelligence is not good or bad in itself, it all depends on why and how it is used." In an exclusive interview with CNBC Tuesday, Vestager said that the EU is taking a "double-sided" approach where it will enable this technology, while also ensuring it's not harmful to EU citizens.
Artificial Intelligence technologies carrying a high risk of abuse that could potentially lead to an erosion of fundamental rights will be subjected to a series of new requirements, the European Commission announced on Wednesday (19 February). As part of the executive's White Paper on AI, a series of 'high-risk' technologies have been earmarked for future oversight, including those in 'critical sectors' and those deemed to be of 'critical use.' Critical sectors include healthcare, transport, police, recruitment, and the legal system, while technologies of critical use include those carrying a risk of death, damage or injury, or with legal ramifications. Artificial Intelligence technologies falling under those two categories will be obliged to abide by strict rules, which could include compliance tests and controls, the Commission said on Wednesday. Sanctions could be imposed should certain technologies fail to meet such requirements.
The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China. On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region's competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push. Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. These systems would require human oversight and audits, according to a widely leaked draft of the new rules.