On Thursday, 26 November, Professor Andrew Murray joined us online to deliver his lecture 'Almost Human: Law and Human Agency in the Time of Artificial Intelligence'. More than five hundred people from around the world watched the sixth Annual T.M.C. Asser Lecture, and a recording is now available on our YouTube page. Artificial intelligence (AI) is all around us: from the smartphones in our hands to drone strikes thousands of miles away. While this technology has many benefits, such as simplifying complex data and making daily tasks easier, it also has dangerous implications.
On Thursday, 26 November, Prof. Andrew Murray will deliver the Sixth T.M.C. Asser Lecture – 'Almost Human: Law and Human Agency in the Time of Artificial Intelligence'. Asser Institute researcher Dr. Dimitri Van Den Meerssche had the opportunity to speak with Professor Murray about his perspective on the challenges that artificial intelligence poses to our human agency and autonomy – the backbone of the modern rule of law. A conversation on algorithmic opacity, the peril of dehumanisation, the illusory ideal of the 'human in the loop' and the urgent need to go beyond 'ethics' in the international regulation of AI. One central observation in your Lecture is how artificial intelligence threatens human agency. Could you elaborate on your understanding of human agency and how it is being threatened? In my Lecture I refer to the definition of agency by legal philosopher Joseph Raz. He argues that to be fully in control of one's own agency and decisions, you need to have capacity, the availability of options and the freedom to exercise that choice without interference. My claim is that there are four ways in which the adoption and use of algorithms affect our autonomy, and particularly Raz's third requirement: that we are to be free from coercion. First, there is an internal and positive impact. This happens when an algorithm gives us choices that have been limited by pre-determined values – values that we cannot observe. The second impact is internal and negative. In this scenario, choices are removed because of pre-selected values.
Since late 2016, the Chinese government has subjected the 13 million ethnic Uyghurs and other Turkic Muslims in Xinjiang to mass arbitrary detention, forced political indoctrination, restrictions on movement, and religious oppression. Credible estimates indicate that under this heightened repression, up to one million people are being held in "political education" camps. The government's "Strike Hard Campaign against Violent Terrorism" (Strike Hard Campaign, 严厉打击暴力恐怖活动专项行动) has turned Xinjiang into one of China's major centers for using innovative technologies for social control. This report provides a detailed description and analysis of a mobile app that police and other officials use to communicate with the Integrated Joint Operations Platform (IJOP, 一体化联合作战平台), one of the main systems Chinese authorities use for mass surveillance in Xinjiang. Human Rights Watch first reported on the IJOP in February 2018, noting the policing program aggregates data about people and flags to ...
Singapore and Australia have formally signed off on a digital economy agreement following months of negotiation. It marks the second such pact, following a first with New Zealand and Chile, that the Singapore government has inked covering several areas of cooperation, including cross-border data flows, digital payments, and artificial intelligence (AI). The Singapore-Australia Digital Economy Agreement was signed virtually during a videoconference Thursday between Singapore's Minister for Trade and Industry Chan Chun Sing and Australia's Minister for Trade, Tourism, and Investment Simon Birmingham. Discussions between the two countries had kicked off last October before wrapping up in March, with both sides agreeing to establish a framework that facilitated "deeper cooperation" to "shape" international rules and establish interoperability between digital systems. The country's government is setting aside more than SG$500 million ($352.49
In the coming years, space activities are expected to undergo a radical transformation with the emergence of new satellite systems and services incorporating artificial intelligence and machine learning – a field covering a wide range of innovations, from autonomous objects with their own decision-making power to increasingly sophisticated services exploiting very large volumes of information from space. This chapter identifies some of the legal and ethical challenges linked to the use of AI in space activities. These challenges call for solutions that the international treaties in force are insufficient to determine and implement. For this reason, a legal methodology must be developed that makes it possible to link intelligent systems and services to a system of rules applicable to them. The chapter also discusses existing AI-based legal tools for making space law actionable, interoperable and machine-readable for future compliance tools.
This book discusses the necessity, and perhaps urgency, of regulating the algorithms on which new technologies rely – technologies that have the potential to reshape human societies. From commerce and farming to medical care and education, it is difficult to find any aspect of our lives that will not be affected by these emerging technologies. At the same time, artificial intelligence, deep learning, machine learning, cognitive computing, blockchain, virtual reality and augmented reality belong to the fields most likely to affect law and, in particular, administrative law. The book examines universally applicable patterns in administrative decisions and judicial rulings. First, similarities and divergences in behaviour among the different cases are identified by analysing parameters ranging from geographical location and administrative decisions to judicial reasoning and legal basis. As it turns out, in several of the cases presented, sources of general law, such as competition or labour law, are invoked as a legal basis, due to the lack of current specialised legislation. The book also investigates the role and significance of national and indeed supranational regulatory bodies for advanced algorithms, and considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms. Lastly, it discusses the involvement of representative institutions in algorithmic regulation.
EU trade policy should carve out space for the regulation of ethical and responsible artificial intelligence (AI) in future trade talks. This is the finding of a new study by researchers from the University of Amsterdam's (UvA) Institute for Information Law. The Dutch Ministry of Foreign Affairs commissioned the study to generate further knowledge about the interface between international trade law and European norms and values when it comes to the use of AI. As AI seeps ever more comprehensively into our daily lives--through our phones, our cars, even in our doctors' offices--the need to ensure responsible use of such technologies becomes ever greater. Responsible use of AI is therefore a top priority for the Dutch government and for the EU as a whole.
Tesla and SpaceX CEO Elon Musk says that AI, including the kind his companies make, should be better regulated. Musk's comments on the dangers of letting AI proliferate unfettered were prompted by a report published in MIT Technology Review about the changing company culture at OpenAI, a technology company that helps develop new AI. Musk formerly helmed the company but left due to conflicts of interest. The report claims that OpenAI has shifted from its goal of equitably distributing AI technology to become a more secretive, funding-driven company. 'OpenAI should be more open imo,' he tweeted.
As we enter a new decade, we take with us the growing challenges we face in many fields, including artificial intelligence and conducting business while ensuring human rights. These hot topics are not going away any time soon. Given the speed of innovation and technology, the responsibility of keeping up with developments and regulating practices is all the more crucial to ensuring a just world. Our upcoming winter academies on AI and international law, and on due diligence as a key to responsible conduct, will empower you with the skills and knowledge you need to tackle these issues in your daily work. Winter academy on Artificial Intelligence and International Law (20 – 24 January): 2020 will be a critical year in setting the tone for the next decade of innovations in artificial intelligence (AI), one of the most complex technologies to monitor or regulate.
After a U.S. airstrike kills Iranian Gen. Qassem Soleimani in Iraq, Secretary of State Mike Pompeo tells 'Fox & Friends' that President Trump's decision was necessary to deter further aggression. The U.N. special rapporteur on extrajudicial killing on Friday said the President Trump-approved drone strike against Qassem Soleimani, Iran's top general, violated international human rights law. In a lengthy Twitter thread, Agnès Callamard said that "outside the context of active hostilities, the use of drones or other means for targeted killing is almost never likely to be legal," adding that the U.S. would need to prove the person targeted constituted an imminent threat to others. She also took issue with the justification for using drones in another country on the basis of self-defense. "Under customary international law States can take military action if the threatened attack is imminent, no other means would deflect it, and the action is proportionate," she wrote.