International Law


Artificial intelligence in space

arXiv.org Artificial Intelligence

In the coming years, space activities are expected to undergo a radical transformation with the emergence of new satellite systems and services that incorporate artificial intelligence and machine learning, understood here as covering a wide range of innovations, from autonomous objects with their own decision-making power to increasingly sophisticated services exploiting very large volumes of information from space. This chapter identifies some of the legal and ethical challenges linked to the use of these technologies. These challenges call for solutions that the international treaties in force are not sufficient to determine and implement. For this reason, a legal methodology must be developed that makes it possible to link intelligent systems and services to a system of rules applicable to them. The chapter also discusses existing legal AI-based tools that could make space law actionable, interoperable and machine readable for future compliance tools.


Elon Musk warns AI like the kind used in Tesla's autopilot should be regulated by international law

Daily Mail - Science & tech

Tesla and SpaceX CEO Elon Musk says that AI like the kind his companies build should be better regulated. Musk's opinion on the dangers of letting AI proliferate unfettered was prompted by a report published in MIT Technology Review about the changing company culture at OpenAI, a technology company that helps develop new AI. Musk formerly helmed the company but left due to conflicts of interest. The report claims that OpenAI has shifted from its goal of equitably distributing AI technology to become a more secretive, funding-driven company. 'OpenAI should be more open imo,' he tweeted.


Start your year with high-quality training in the fields of AI and international law, and business and human rights.

#artificialintelligence

As we enter a new decade, we take with us the growing challenges we face in many fields, including artificial intelligence and conducting business while ensuring human rights. These hot topics are not going away any time soon. Given the speed of innovation and technology, keeping up with developments and regulating practices is all the more crucial to ensuring a just world. Our upcoming winter academies on AI and international law, and on due diligence as a key to responsible conduct, will empower you with the skills and knowledge you need to tackle these issues in your daily work. Winter Academy on Artificial Intelligence and International Law (20 – 24 January): 2020 will be a critical year in setting the tone for the next decade of innovation in artificial intelligence (AI), one of the most complex technologies to monitor or regulate.


Program Director, Technology and International Affairs Program - Washington, DC

#artificialintelligence

Recent efforts include working with government cyber-security agencies and insurance and financial industry experts to develop principles of responsible private sector response against attackers; collaborating with G-20 finance ministries and central banks, international financial institutions such as SWIFT, and global banks and insurers to develop practical norms to protect the integrity of financial data and transactions; initiatives in Silicon Valley and China to develop compatible approaches to promote Artificial Intelligence safety; and an effort to map how diverse stakeholders in China, India, and the United States assess risks associated with bioengineering techniques such as gene-editing.


A Vision for the Future of Private International Law and the Internet

#artificialintelligence

There are countless news stories and scientific publications illustrating how artificial intelligence (AI) will change the world. As far as law is concerned, discussions largely center on how AI systems such as IBM's Watson will disrupt the legal industry. However, little attention has been directed at how AI might benefit the field of private international law. Private international law has always been a complex discipline, and its application in the online environment has been particularly challenging, marked by both jurisdictional overreach and jurisdictional gaps. This is primarily because the near-global reach of a person's online activities so easily exposes that person to the jurisdiction and laws of a large number of countries.


Artificial Intelligence and International Security: The Long View (Ethics & International Affairs, Cambridge Core)

#artificialintelligence

How will emerging autonomous and intelligent systems affect the international landscape of power and coercion two decades from now? Will the world see a new set of artificial intelligence (AI) hegemons just as it saw a handful of nuclear powers for most of the twentieth century? Will autonomous weapon systems make conflict more likely or will states find ways to control proliferation and build deterrence, as they have done (fitfully) with nuclear weapons? And importantly, will multilateral forums find ways to engage the technology holders, states as well as industry, in norm setting and other forms of controlling the competition? The answers to these questions lie not only in the scope and spread of military applications of AI technologies but also in how pervasive their civilian applications will be.


Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility and ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.


Japan drafting guidelines to stop technology leaks from universities working with foreign firms

The Japan Times

The government will set guidelines by the end of March next year for preventing technology leaks from universities that conduct research with foreign firms, sources close to the matter said Wednesday. The move comes as the United States and China grow cautious about advanced technologies such as artificial intelligence being converted for military use. While Japan already regulates the disclosure of sensitive technologies and products by the nation's state organizations and companies to overseas firms under a foreign exchange and foreign trade law, university laboratories have been managing infrequent arrangements on their own, leading some experts to voice concerns about the risk of information leaks. The envisioned guidelines would require universities and other research institutions to set regulations on joint projects involving foreign entities. They will be based on the comprehensive innovation strategy adopted by the Cabinet in 2018, which aims to promote university research on AI, biotechnology and other leading technologies.


Deep Models, Machine Learning, and Artificial Intelligence Applications in National and International Security

AI Magazine

These technologies have revolutionized many commercial applications, but they are not currently designed to solve security problems; at best, they are suitable for fast recommendations. There is also a fundamental instability in the learned functions. This trust issue is only one of the major problems with using deep learning for security applications; a second is the data requirements.


Japan's Komeito political party seeks international regulations on robotic weapons

The Japan Times

A project team of Komeito, the junior partner in the Liberal Democratic Party-led ruling coalition, has presented to Foreign Minister Taro Kono its proposals for an international agreement to regulate robotic weapons development. Deployment of lethal autonomous weapons systems, or LAWS, cannot be overlooked in terms of international humanitarian law and ethics, according to the proposals released Monday. Komeito called for agreeing on a document, such as a political declaration or a code of conduct, within the framework of the Convention on Certain Conventional Weapons. Kono said he will refer to the proposals. Ethical issues and military advantages of such weapons have been under discussion within the framework of the convention since 2014.