Parliamentary Responses to Artificial Intelligence

While artificial intelligence (AI) has been developing for decades, recent years have seen increasing attention to its societal impacts, which range from positive and helpful to harmful and, in some cases, even life-threatening. Parliaments have responded to these developments by undertaking various programmes of work. What have they done, and what can Scotland learn from these approaches? This short review provides a snapshot of the work that Parliaments around the world have undertaken on AI, outlining the approaches they have adopted and highlighting common themes. In noting the key points for Scotland, it is designed to inform and guide the Scottish Parliament and others as Scotland considers its own approach to the many opportunities and challenges AI presents. The report was written by Robbie Scarff on an internship supported by the Scottish Graduate School of Social Science. From this work, it draws out some key areas and questions for the Scottish Parliament to consider.


Legal regulation of artificial intelligence in Kazakhstan and abroad

In our understanding, the question of who owns intellectual property rights in AI-related works is also important when determining who is liable for AI-caused harm. For that reason, further development of legislation in that direction is expected. As mentioned before, one of the defining characteristics of AI is its use of data, through collection and analysis, and this includes personal data. Some experts have opined that AI systems can develop more quickly in jurisdictions where the use and protection of personal data is lightly regulated, or not regulated at all, because AI needs data to accomplish the tasks set for it. The EU is governed by the General Data Protection Regulation (GDPR), which aims to protect personal data against unlawful use, and in light of the GDPR it has already prepared a list of prohibited AI practices.


Artificial intelligence: MEPs want the EU to be a global standard-setter

On Tuesday, the European Parliament adopted the final recommendations of its Special Committee on Artificial Intelligence in a Digital Age (AIDA). The text, adopted with 495 votes in favour, 34 against and 102 abstentions, says that the public debate on the use of artificial intelligence (AI) should focus on the technology's enormous potential to complement human labour. It notes that the EU has fallen behind in the global race for tech leadership, and that there is a risk of standards being developed elsewhere, often by non-democratic actors; MEPs therefore believe the EU needs to act as a global standard-setter in AI. MEPs also say the EU should not always regulate AI as a technology: the level of regulatory intervention should be proportionate to the type of risk associated with the particular use of an AI system. The report will feed into upcoming parliamentary work on AI, in particular the AI Act, which is currently being discussed in the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees.


Artificial Intelligence and Automated Systems Legal Update (1Q22)

Among the quarter's legislative developments, the update quotes statutory language directing that the Secretary "shall support a program of fundamental research, development, and demonstration of energy efficient computing and data center technologies relevant to advanced computing applications, including high performance computing, artificial intelligence, and scientific machine learning."


Artificial intelligence: filling the gaps

Stronger legislation than the European Commission envisages is needed to regulate AI and protect workers. Artificial intelligence (AI) is of strategic importance for the European Union: the European Commission frequently affirms that 'artificial intelligence with a purpose can make Europe a world leader'. Recently, the commissioner for the digital age, Margrethe Vestager, again insisted on AI's 'huge potential' but admitted there was 'a certain reluctance', a hesitation on the part of the public: 'Can we trust the authorities that put it in place?' One had to be able to trust in technology, she said, 'because this is the only way to open markets for AI to be used'. Trust is indeed central to the acceptance of AI by European citizens.


AI researcher says police tech suppliers are hostile to transparency

Artificial intelligence (AI) researcher Sandra Wachter says that although the House of Lords inquiry into police technology "was a great step in the right direction" and succeeded in highlighting the major concerns around police AI and algorithms, the conflict of interest between criminal justice bodies and their suppliers could still hold back meaningful change. Wachter, who was invited to the inquiry as an expert witness, is an associate professor and senior research fellow at the Oxford Internet Institute who specialises in the law and ethics of AI. Speaking with Computer Weekly, Wachter said she is hopeful that at least some of the recommendations will be taken forward into legislation, but is worried about the impact of AI suppliers' hostility to transparency and openness. "I am worried about it mainly from the perspective of intellectual property and trade secrets," she said. "There is an unwillingness or hesitation in the private sector to be completely open about what is actually going on for various reasons, and I think that might be a barrier to implementing the inquiry's recommendations."


US Companies Must Deal with EU AI law, Like It or Not

Don't look now, but using Google Analytics to track your website's audience might be illegal. That's the view of a court in Austria, which in January found that Google's data product was in breach of the European Union's General Data Protection Regulation (GDPR) because it was not doing enough to ensure that data transferred from the EU to the company's servers in the US was protected (from, say, US intelligence agencies). For those working in AI and biotech, this matters, especially to those working outside Europe with a view to expansion there. For a start, this is a major precedent that threatens to upend the way many tech companies work, since the tech sector relies heavily on the safe use and transfer of large quantities of data. Whether you use Google Analytics is neither here nor there; the case has shown that Privacy Shield -- the EU-US framework that governs the transfer of personal information in compliance with GDPR -- may not be compliant with European law after all.


New EU rules would allow it to shut down AI before it got dangerous

Artificial intelligence is everywhere: the rise of "thinking" machines has been one of the defining developments of the past two decades – and it will only become more prominent as computing power increases. The European Union has been working on a framework to regulate AI for some time, starting way back in March 2018, as part of its broader Digital Decade regulations. Work on AI regulation has been relatively slow while the EU focuses on the Digital Markets Act and the Digital Services Act, which are aimed at reining in the American tech giants, but the work definitely continues. Any worthwhile legislative process should be open to critique and analysis, and the EU's AI Act is undergoing a thorough treatment by the UK-based Ada Lovelace Institute, an independent research institution working on data policy. The full report (via TechCrunch) includes a lot of detail on the pros and cons of the regulation, which is a global first, with the main takeaway being that the EU is setting itself up to have some pretty powerful tools at its disposal.


EU's AI Act 'contains powers to order AI models destroyed' – TechCrunch

The European Union's planned risk-based framework for regulating artificial intelligence includes powers for oversight bodies to order the withdrawal of a commercial AI system, or to require that an AI model be retrained if it is deemed high risk, according to an analysis of the proposal by a legal expert. That suggests there is significant enforcement firepower lurking in the EU's (still not yet adopted) Artificial Intelligence Act -- assuming the bloc's patchwork of Member State-level oversight authorities can effectively direct it at harmful algorithms to force product change in the interests of fairness and the public good. The draft Act continues to face criticism over a number of structural shortcomings -- and may still fall far short of the goal of fostering broadly "trustworthy" and "human-centric" AI, which EU lawmakers have claimed for it. But, on paper at least, there appear to be some potent regulatory powers. The European Commission put out its proposal for an AI Act just over a year ago, presenting a framework that prohibits a small list of AI use cases (such as a China-style social credit scoring system) considered too dangerous to people's safety or EU citizens' fundamental rights to be allowed, while regulating other uses based on perceived risk, with a subset of "high risk" use cases subject to a regime of both ex ante (before-market) and ex post (after-market) surveillance.


EU Act 'must empower those affected by AI systems to take action'

The Ada Lovelace Institute, an independent research organisation, has published a series of proposals on how the European Union (EU) can amend its forthcoming Artificial Intelligence Act (AIA) to empower those affected by the technology at both an individual and a collective level. The proposed amendments also aim to expand and reshape the meaning of "risk" within the regulation, which the Institute says should be based on "reasonably foreseeable" purpose and should extend beyond the current focus on individual rights and safety to include systemic and environmental risks. "Regulating AI is a difficult legal challenge, so the EU should be congratulated for being the first to come out with a comprehensive framework," said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute. "However, the current proposals can and should be improved, and there is an opportunity for EU policymakers to significantly strengthen the scope and effectiveness of this landmark legislation." As it currently stands, the AIA, which was published by the European Commission (EC) on 21 April 2021, adopts a risk-based, market-led approach to regulating the technology, focusing on establishing rules around the use of "high-risk" and "prohibited" AI practices.