Tesla and SpaceX CEO Elon Musk says that AI like the systems his companies build should be better regulated. Musk's comments on the dangers of letting AI proliferate unchecked were prompted by a report published in MIT Technology Review about the changing company culture at OpenAI, an AI research company. Musk previously helped lead the company but left, citing conflicts of interest. The report claims that OpenAI has shifted from its founding goal of distributing AI technology equitably to becoming a more secretive, funding-driven company. 'OpenAI should be more open imo,' he tweeted.
As we enter a new decade, we carry with us growing challenges in many fields, including artificial intelligence and business and human rights. These topics are not going away any time soon. Given the speed of innovation and technology, keeping up with developments and regulating practice is all the more crucial to ensuring a just world. Our upcoming winter academies on AI and international law, and on due diligence as a key to responsible conduct, will equip you with the skills and knowledge you need to tackle these issues in your daily work. Winter Academy on Artificial Intelligence and International Law (20–24 January): 2020 will be a critical year in setting the tone for the next decade of innovation in artificial intelligence (AI), one of the most complex technologies to monitor or regulate.
Recent efforts include working with government cyber-security agencies and insurance and financial industry experts to develop principles of responsible private sector response against attackers; collaborating with G-20 finance ministries and central banks, international financial institutions such as SWIFT, and global banks and insurers to develop practical norms to protect the integrity of financial data and transactions; initiatives in Silicon Valley and China to develop compatible approaches to promote Artificial Intelligence safety; and an effort to map how diverse stakeholders in China, India, and the United States assess risks associated with bioengineering techniques such as gene-editing.
There are countless news stories and scientific publications illustrating how artificial intelligence (AI) will change the world. As far as law is concerned, discussions largely center on how AI systems such as IBM's Watson will disrupt the legal industry. However, little attention has been directed at how AI might prove beneficial for the field of private international law. Private international law has always been a complex discipline, and its application in the online environment has been particularly challenging, marked by both jurisdictional overreach and jurisdictional gaps. This is primarily because the near-global reach of a person's online activities can so easily expose that person to the jurisdiction and laws of a large number of countries.
How will emerging autonomous and intelligent systems affect the international landscape of power and coercion two decades from now? Will the world see a new set of artificial intelligence (AI) hegemons just as it saw a handful of nuclear powers for most of the twentieth century? Will autonomous weapon systems make conflict more likely or will states find ways to control proliferation and build deterrence, as they have done (fitfully) with nuclear weapons? And importantly, will multilateral forums find ways to engage the technology holders, states as well as industry, in norm setting and other forms of controlling the competition? The answers to these questions lie not only in the scope and spread of military applications of AI technologies but also in how pervasive their civilian applications will be.
A project team of Komeito, the junior partner in the Liberal Democratic Party-led ruling coalition, has presented to Foreign Minister Taro Kono its proposals for an international agreement to regulate robotic weapons development. The deployment of lethal autonomous weapons systems, or LAWS, cannot be overlooked in terms of international humanitarian law and ethics, according to the proposals released Monday. Komeito called for agreement on a document, such as a political declaration or a code of conduct, within the framework of the Convention on Certain Conventional Weapons. Kono said he would take the proposals into consideration. The ethical issues and military advantages of such weapons have been under discussion within the framework of the convention since 2014.
Oracle on Tuesday released a series of updates to its Transportation Management and Global Trade Management clouds. The updates aim to help companies streamline and simplify compliance with shifting global trade regulations, as well as speed up customer fulfillment, Oracle said. Key to the new features is the incorporation of richer data into shipment routing and automated event handling. For instance, routing decisions will now take into account factors such as historic traffic patterns, hazardous material restrictions and tolls when planning shipments. Changes to the transportation planning software are designed to improve outbound order fulfillment.
In the medium to long term, AI expertise must not reside in only a small number of countries – or solely within narrow segments of the population. Governments worldwide must invest in developing and retaining home-grown talent and expertise in AI if their countries are to be independent of the dominant AI expertise that is now typically concentrated in the US and China. And they should work to ensure that engineering talent is nurtured across a broad base in order to mitigate inherent bias issues. Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals. The humanitarian sector could derive significant benefit from such systems, which might for example decrease response times in emergencies.
This report examines some of the challenges for policymakers that may arise from the advancement and increasing application of AI. It draws together strands of thinking about the impact that AI may have on selected areas of international affairs – from military, human security and economic perspectives – over the next 10–15 years. The report sets out a broad framework to define and distinguish between the types of roles that artificial intelligence might play in policymaking and international affairs: these roles are identified as analytical, predictive and operational. In analytical roles, AI systems might allow fewer humans to make higher-level decisions, or automate repetitive tasks such as monitoring sensors set up to ensure treaty compliance. In these roles, AI may well change – and in some ways has already changed – the structures through which human decision-makers understand the world.
A top officer at the technology company IBM gave a presentation at the Elliott School of International Affairs Monday discussing the future of artificial intelligence in the workforce. The event, which featured Martin Fleming, the chief analytics officer and chief economist at IBM, was the last installment of the Institute for International Economic Policy's 10th-anniversary speaker series, which ran throughout this academic year. Previous speakers included Louise Fox, a chief economist at the United States Agency for International Development, and Bob Koopman, a chief economist at the World Trade Organization. Fleming began the event by describing artificial intelligence's increasingly important role in modern society – and its ability to structure typically unstructured data and make predictions about the future. He added that many forms of AI can improve productivity without eliminating human jobs – one of the most common fears about expanding robotics in the workforce.