If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Amid a global pandemic, an economic recession and simmering racial tensions around the world, Israel's threat to formally annex parts of occupied Palestinian territory presents yet another international crisis in the making. With this outrageous move, the Israeli government threatens to unravel the rules-based system of international relations. Today's international law regime was established in the first half of the 20th century not only to regulate relations between states but also to assist movements for self-determination across the world and oversee the end of colonialism. The looming Israeli annexation of Palestinian land, and the global inaction on it, evidence the failure of this regime to help end colonialism and put its very raison d'etre in question. Much of the narrative in international diplomatic circles around the issue of annexation has revolved around deterrence, the rationale being that the threat of tangible consequences will lead Israel to reconsider the move. Yet this narrative fails to acknowledge that we have reached a point where Israel will annex yet another chunk of Palestinian territory precisely because deterrence has not worked.
In a move that caused a ripple effect across the Middle East, Iranian General Qassem Soleimani was killed in a US drone strike near Baghdad's international airport on January 3. That day, the Pentagon announced the attack had been carried out "at the direction of the president". In a new report examining the legality of armed drones and the Soleimani killing in particular, Agnes Callamard, UN special rapporteur on extrajudicial and arbitrary killings, said the US strike that killed Soleimani was "unlawful". Callamard presented her report to the Human Rights Council in Geneva on Thursday. The United States, which quit the council in 2018 and is no longer a member, rejected the report, saying it gave "a pass to terrorists". In Callamard's view, states have neglected the consequences of targeted killings by armed drones.
On June 30, US Secretary of State Mike Pompeo's address to the UN Security Council calling for an extension of the arms embargo on Iran was expected to dominate the international news agenda. However, Iran's judiciary stole the morning's headlines by issuing an arrest warrant for Donald Trump the day before. Tehran prosecutor Ali Alqasimehr said on Monday that Trump, along with more than 30 others accused of involvement in the January 3 drone attack that killed Iran's top general, Qassem Soleimani, faces "murder and terrorism charges". The prosecutor added that Tehran had asked Interpol for help in detaining the US president. The same day, the US special envoy for Iran, Brian Hook, denounced the warrant as a "propaganda stunt" at a press conference in the Saudi capital, Riyadh.
In the coming years, space activities are expected to undergo a radical transformation with the emergence of new satellite systems and services that incorporate artificial intelligence and machine learning. These technologies cover a wide range of innovations, from autonomous objects with their own decision-making power to increasingly sophisticated services exploiting very large volumes of information from space. This chapter identifies some of the legal and ethical challenges linked to their use. These challenges call for solutions that the international treaties in force are not sufficient to determine and implement. For this reason, a legal methodology must be developed that makes it possible to link intelligent systems and services to a system of rules applicable to them. The chapter also discusses existing AI-based legal tools that could make space law actionable, interoperable and machine-readable for future compliance tools.
Tesla and SpaceX CEO Elon Musk says that AI like the kind his companies develop should be better regulated. Musk's comments on the dangers of letting AI proliferate unfettered were prompted by a report published in MIT Technology Review about the changing company culture at OpenAI, a technology company that develops new AI. Musk formerly helmed the company but left due to conflicts of interest. The report claims that OpenAI has shifted from its goal of equitably distributing AI technology to becoming a more secretive, funding-driven company. "OpenAI should be more open imo," he tweeted.
As we enter a new decade, we take with us the growing challenges we face in many fields, including artificial intelligence and conducting business while ensuring human rights. These hot topics are not going away any time soon. Given the speed of innovation and technology, keeping up with developments and regulating practices is all the more crucial to ensuring a just world. Our upcoming winter academies on AI and international law, and on due diligence as a key to responsible conduct, will empower you with the skills and knowledge you need to tackle those issues in your daily work. Winter Academy on Artificial Intelligence and International Law (20–24 January): 2020 will be a critical year for setting the tone for the next decade of innovation in artificial intelligence (AI), one of the most complex technologies to monitor or regulate.
WASHINGTON – The Pentagon on Monday distanced itself from U.S. President Donald Trump's assertions that he would bomb Iranian cultural sites despite international prohibitions on such attacks. Defense Secretary Mark Esper said the U.S. will "follow the laws of armed conflict." When asked if that ruled out targeting cultural sites, Esper said pointedly, "That's the laws of armed conflict." The split between the president and his Pentagon chief came amid heightened tensions with Tehran following a U.S. drone strike that killed Gen. Qassem Soleimani, the head of Iran's elite Quds Force. Trump had twice warned that he would hit Iranian cultural sites if Tehran retaliates against the U.S. Esper's public comments reflected the private concerns of other defense and military officials, who cited legal prohibitions on attacks on civilian, cultural and religious sites, except under certain, threatening circumstances.
Recent efforts include working with government cyber-security agencies and insurance and financial industry experts to develop principles of responsible private sector response against attackers; collaborating with G-20 finance ministries and central banks, international financial institutions such as SWIFT, and global banks and insurers to develop practical norms to protect the integrity of financial data and transactions; initiatives in Silicon Valley and China to develop compatible approaches to promote Artificial Intelligence safety; and an effort to map how diverse stakeholders in China, India, and the United States assess risks associated with bioengineering techniques such as gene-editing.
Russia started sabotaging the discussion from the very first session. Throughout the morning of Aug. 21, its diplomats at the United Nations in Geneva took the floor, nitpicking language in a document meant to pave the way for an eventual ban on lethal autonomous weapons, also known as killer robots, an emerging category of weapons that would be able to fight on their own and decide who to target and kill. "They were basically trying to waste time," says Laura Nolan of the International Committee for Robot Arms Control, who watched with frustration in the hall. But while Russia vigorously worked to derail progress, it had a quieter partner: China. "I very much get the impression that they're working together in some way," says Nolan. "[The Chinese] are letting the Russians steamroll the process, and they're happy to hang back."
There are countless news stories and scientific publications illustrating how artificial intelligence (AI) will change the world. As far as law is concerned, discussion largely centers on how AI systems such as IBM's Watson will disrupt the legal industry. However, little attention has been paid to how AI might prove beneficial for the field of private international law. Private international law has always been a complex discipline, and its application in the online environment has been particularly challenging, with both jurisdictional overreach and jurisdictional gaps. This is primarily because the near-global reach of a person's online activities so easily exposes that person to the jurisdiction and laws of a large number of countries.