You now have access to a treasure trove of government info through your smart speaker if you live in the UK. The British government has made over 12,000 pieces of Gov.uk information available through Alexa and Google Assistant, saving you the trouble of wading through official pages. Some queries are simple, such as the date of the next bank holiday, while others cover more involved topics such as how to obtain a passport. Not everything is available, so you can't completely depend on a voice assistant just yet. However, there are promises of expansion.
"Artificial intelligence" can be defined as the theory and development of computer systems able to perform tasks that normally require human intelligence. Artificial intelligence (AI) is being used in new products and services across numerous industries and for a variety of policy-related purposes, raising questions about the resulting legal implications, including its effect on individual privacy. The aspects of AI that raise privacy concerns include the ability of systems to make decisions and to learn by adjusting their code in response to inputs received over time, drawing on large volumes of data. Following the European Commission's declaration on AI in April 2018, its High-Level Expert Group on Artificial Intelligence (AI HLEG) published Draft Ethics Guidelines for Trustworthy AI in December 2018. A consultation process regarding this working document concluded on February 1, 2019, and a revised draft based on the comments received is expected to be delivered to the European Commission in April 2019.
Many are concerned about the amount of time we – and our children – spend on devices. Soon to be a father, Prince Harry recently suggested "social media is more addictive than drugs and alcohol, yet it's more dangerous because it's normalised and there are no restrictions to it". But worries are not just limited to personal use. Many schools and workplaces are increasingly delivering content digitally, and even adopting gamification – game-playing elements like point scoring and competition with others, applied in non-game contexts – to drive better performance. This "always on" lifestyle means many can't just "switch off".
This week, the European Union published a set of ethical guidelines detailing how businesses and governments can achieve trustworthy artificial intelligence (AI) – that is, AI that is lawful, ethical, and socially and technologically robust. While these guidelines are not laws, they set out a framework for lawmakers and companies to achieve trustworthy AI. "The EU's new Ethics guidelines for trustworthy AI are a considered and constructive step toward addressing the impact of trustworthy AI on humankind, and toward laying the groundwork for necessary further discussion between key stakeholders in the private, public and governmental sectors," Juan Miguel de Joya, a consultant at the International Telecommunication Union and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic. The EU's new guidelines should start conversations among businesses worldwide that may not have the resources to independently assess the impact of the technology, de Joya said. "Perhaps most fundamentally and significantly, release of the new guidelines is an opportunity for government, business, computing professionals and other stakeholders – particularly in the United States – to capture and channel the momentum of these discussions into real understanding of AI's potential and pitfalls," de Joya said. These guidelines are "a welcome, solid and significant step forward," Lorraine Kisselburgh, a visiting fellow in the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic.
BRUSSELS (Reuters) - Companies working with artificial intelligence need to install accountability mechanisms to prevent it from being misused, the European Commission said on Monday, under new ethical guidelines for a technology open to abuse. AI projects should be transparent, have human oversight and use secure and reliable algorithms, and they must be subject to privacy and data protection rules, the Commission said, among other recommendations. The European Union initiative taps into a global debate about when or whether companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without risking killing off innovation. "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies," the Commission's digital chief, Andrus Ansip, said in a statement.
London (CNN Business) Social media faces a crisis of trust. Europe wants to make sure artificial intelligence doesn't go the same way. The European Commission on Monday unveiled ethics guidelines that are designed to influence the development of AI systems before they become deeply embedded in society. The intervention could help break the pattern of regulators being forced to play catch-up with emerging technologies that lead to unanticipated negative consequences. The importance of doing so was underscored Monday when Britain proposed new rules that would make internet companies legally responsible for ridding their platforms of harmful content.
The European Commission will launch a pilot project this summer designed to test the ethical guidelines it has developed for the use of artificial intelligence. Companies, public agencies, and other organizations can now join the European AI Alliance, which will officially notify members when the pilot starts. "The ethical dimension of AI is not a luxury feature or an add-on," said Vice-President for the Digital Single Market Andrus Ansip in a statement. "It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."
There has, for years, been one thing that just about all of the tech industry agrees on: regulation is coming. Recently, they have even realised that it's necessary. But if there is one thing that has split tech behemoths, politicians and the public apart more than perhaps any other issue, it's what that regulation should look like. Now the UK government thinks it has alighted on an answer, offering perhaps the first comprehensive attempt to limit the harm that technology companies are doing to the people – in particular the children – who use them. For the most part, the solution it has chosen focuses on shifting the responsibility for content that appears on these platforms onto the people who run them.
Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving "trustworthy" artificial intelligence. On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology. "The ethical dimension of AI is not a luxury feature or an add-on," said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. "It is only with trust that our society can fully benefit from technologies." The EU defines artificial intelligence as systems that show "intelligent behavior," allowing them to analyze their environment and perform tasks with some degree of autonomy.
Britain's leading position in developing self-driving cars could produce a £62bn economic boost by 2030, the car industry claimed – but warned that such potential could be jeopardised by a no-deal Brexit. A report published by the Society of Motor Manufacturers and Traders said the UK has significant advantages over other countries in pushing connected and autonomous vehicles, including forward-looking legislation allowing autonomous cars to be insured and driven on a greater proportion of roads than elsewhere. Mike Hawes, the chief executive of the SMMT, said more than £500m had been invested in research and development by industry and government, and another £740m in communications infrastructure to enable autonomous cars to work. He said: "The opportunities are dramatic – new jobs, economic growth and improvements across society. The UK's potential is clear. We are ahead of many rival nations but to realise these benefits we must move fast."