international law


Q&A: Global challenges surrounding the deployment of AI

#artificialintelligence

The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI. The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here. Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?


Principles for the Ethical Use of Artificial Intelligence in the United Nations System

#artificialintelligence

In September 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, developed through the High-level Committee on Programmes (HLCP), which approved the Principles at an intersessional meeting in July 2022. These Principles were developed by a workstream co-led by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Office of Information and Communications Technology of the United Nations Secretariat (OICT), within the HLCP Inter-Agency Working Group on Artificial Intelligence. The Principles are based on the Recommendation on the Ethics of Artificial Intelligence adopted by UNESCO's General Conference at its 41st session in November 2021. This set of ten principles, grounded in ethics and human rights, aims to guide the use of artificial intelligence (AI) across all stages of the AI system lifecycle throughout United Nations system entities. It is intended to be read alongside other related policies and international law, and includes the following principles: do no harm; defined purpose, necessity and proportionality; safety and security; fairness and non-discrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and inclusion and participation.


La veille de la cybersécurité

#artificialintelligence

By focusing on dialogue with consumers through more robust conversational AI, brands can deliver the best customer experience possible while also respecting consumer privacy. With increasing regulation and industry shifts by big tech companies, brands of all sizes need to reevaluate their data practices. Christian Ward, chief data officer at Yext, discusses how breakthroughs in conversational AI and natural language processing enable a consent-based dialogue between brands and consumers that provides a personalized customer experience at scale. We stand at a decisive moment for brands as they look to the future of consumer data strategies. With disparate state-by-state legislation, numerous proposals for a national regulatory framework in the US, and differing international laws, including the GDPR, the data governance guidelines brands must follow are inconsistent – and potentially costly should they run afoul of the rules.


Catalyzing Innovation via Centers, Labs, and Foundries

#artificialintelligence

The cornerstone of collaboration is knowledge transfer; the sharing of research tools, methodologies, and findings; and sometimes the pooling of funding resources to meet shortfalls in building prototypes and commercializing technologies. Collaborations often involve combinations of government, industry, and academia working together to meet difficult challenges and cultivate new ideas. A growing trend among leading companies is creating technology-specific innovation centers, labs, and foundries to accelerate collaboration and invention. As the development of new technologies continues to grow exponentially and globally, collaboration gains value as a resource for adapting to the rapidly emerging technology landscape by establishing pivotal connections between companies, technologies, and stakeholders. In the US federal government, the National Labs (including Lawrence Livermore, Oak Ridge, Argonne, Sandia, Idaho National Laboratory, Battelle, and Brookhaven), Federally Funded Research and Development Centers (FFRDCs), and federally funded Centers of Excellence have been outlets for innovation and public/private cooperation.


Rules-based order key to Indo-Pacific security, Japan defense chief tells ASEAN

The Japan Times

PHNOM PENH – Defense Minister Nobuo Kishi said Wednesday during talks with his ASEAN counterparts that maintaining a rules-based international order in the Indo-Pacific region is important, apparently with China's growing maritime assertiveness in mind. In pushing for Japan's vision of a "free and open" Indo-Pacific, Kishi called for a regional code of conduct in the South China Sea to be "effective, substantial and consistent with international law," his ministry said in a press release.


La veille de la cybersécurité

#artificialintelligence

Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues only so much can be automated and shows the value of human judgment. "All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war," said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, "Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War," published in International Security.
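The four-component loop described above is easy to make concrete. Below is a minimal Python sketch (our illustration, not code from the paper or its authors): the prediction step stands in for a learned model, while the judgment step encodes goals and values and remains a human input; the function names, data, and value weights are all hypothetical.

```python
# Sketch of the data -> prediction -> judgment -> action loop.
# Prediction is the part machine learning automates; judgment encodes
# goals and values and is where a human stays in the loop.

from dataclasses import dataclass
from typing import Dict

@dataclass
class Decision:
    action: str
    rationale: str

def predict(data: Dict[str, float]) -> float:
    """Stand-in for a learned model: estimate the probability of an outcome."""
    # Hypothetical toy score; a real system would call a trained model here.
    return min(1.0, max(0.0, 0.5 * data.get("signal", 0.0)))

def judge(p: float, values: Dict[str, float]) -> Decision:
    """Human-supplied judgment: weigh the prediction against goals and values."""
    # The trade-off below expresses values (cost of error), not statistics.
    if p * values["benefit"] > (1 - p) * values["cost_of_error"]:
        return Decision("act", f"expected benefit outweighs risk (p={p:.2f})")
    return Decision("hold", f"uncertainty is too costly (p={p:.2f})")

# Usage with hypothetical inputs: under a cautious cost_of_error,
# the system declines to act on a 0.60 prediction.
decision = judge(predict({"signal": 1.2}), {"benefit": 1.0, "cost_of_error": 3.0})
print(decision)  # Decision(action='hold', rationale='uncertainty is too costly (p=0.60)')
```

Note the division of labor: better predictions shift more weight onto the value parameters in judge(), which is exactly why the paper argues prediction machines make human judgment more important, not less.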


INTERNATIONAL LAW: AI USE IN LETHAL WEAPONS?

#artificialintelligence

The 10 principles for ethical AI (formulated in 2017) included: applicability of International Humanitarian Law (IHL); non-delegation of human responsibility; accountability for the use of force in accordance with international law; weapons reviews before deployment; incorporation of physical, non-proliferation, and cybersecurity safeguards; risk assessment and mitigation during technology development; and consideration of the use of emerging technologies in the area of lethal autonomous weapon systems (LAWS) in compliance with IHL.


A global law for artificial intelligence?

#artificialintelligence

Editor's note: Rostam J. Neuwirth is a professor at the Faculty of Law, University of Macau. The article reflects the author's opinions, and not necessarily the views of CGTN. At the end of last year, the General Conference of the United Nations Educational, Scientific and Cultural Organization adopted the Recommendation on the Ethics of Artificial Intelligence as the first global standard-setting instrument responding to the ethical concerns related to artificial intelligence (AI). This recommendation is to be strongly welcomed, because – albeit legally non-binding – it reflects an emerging global consensus on the ethical concerns raised and the serious risks posed by AI. Most of all, it recognizes the need to work on global solutions to this problem of fundamental importance for present and future generations.


Killer Robots are No Longer Science Fiction

#artificialintelligence

With geopolitical instability during the omicron surge, militaries' use of AI is under the microscope. With the Russia/Ukraine, China/India, and China/Taiwan borders under pressure, there is a greater danger of AI being misused amid geopolitical tensions. Terminators were once just a movie. Engineers in Korea have developed a highly dexterous robotic hand capable of crushing beer cans or gently clutching an egg. It looks almost exactly like the hands in those old movies.


There is an elephant in the room: Towards a critique on the use of fairness in biometrics

arXiv.org Artificial Intelligence

In 2019, the UK's Immigration and Asylum Chamber of the Upper Tribunal dismissed an asylum appeal, basing the decision on the output of a biometric system alongside other discrepancies. The fingerprints of the asylum seeker were found in a biometric database, which contradicted the appellant's account. The Tribunal found this evidence unequivocal and denied the asylum claim. Nowadays, the proliferation of biometric systems is shaping public debates around their political, social, and ethical implications. Yet while concerns about the racialised use of this technology for migration control have been on the rise, investment in the biometrics industry and its innovation is increasing considerably. Moreover, fairness has recently been adopted by the biometrics community to mitigate bias and discrimination in biometric systems. However, algorithmic fairness cannot distribute justice in scenarios that are broken or whose intended purpose is to discriminate, such as biometrics deployed at the border. In this paper, we offer a critical reading of recent debates about biometric fairness and show its limitations, drawing on research in fairness in machine learning and critical border studies. Building on previous fairness demonstrations, we prove that biometric fairness criteria are mathematically mutually exclusive. The paper then moves on to illustrate empirically that a fair biometric system is not possible, by reproducing experiments from previous works. Finally, we discuss the politics of fairness in biometrics by situating the debate at the border. We claim that bias and error rates have different impacts on citizens and asylum seekers. Fairness has overshadowed the elephant in the room of biometrics by focusing on the demographic biases and ethical discourses of algorithms rather than examining how these systems reproduce historical and political injustices.
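The mutual-exclusivity claim echoes a well-known impossibility result in fair machine learning (Chouldechova, 2017): predictive parity (equal PPV) and equal error rates (FPR and FNR) cannot all hold across groups with different base rates unless prediction is perfect, because the three quantities are linked by the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), where p is the group's prevalence. A minimal Python sketch of the arithmetic follows; the identity is standard, but the prevalences and rates below are hypothetical and not taken from this paper's experiments.

```python
# Chouldechova's identity links a group's prevalence p, positive predictive
# value (PPV), and false-negative rate (FNR) to its false-positive rate:
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# If two groups have different base rates, equalizing PPV and FNR forces
# their FPRs apart, so the fairness criteria cannot all hold at once.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False-positive rate implied by prevalence, PPV, and FNR."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical groups with unequal base rates in the matching task.
p_a, p_b = 0.05, 0.20      # prevalence of true matches in each group
ppv, fnr = 0.80, 0.10      # PPV and FNR equalized across both groups

print(f"Group A FPR: {implied_fpr(p_a, ppv, fnr):.4f}")  # ~0.0118
print(f"Group B FPR: {implied_fpr(p_b, ppv, fnr):.4f}")  # ~0.0563
# Equal PPV and FNR with unequal prevalence yields unequal FPR: whichever
# group bears the higher false-positive rate absorbs more wrongful flags.
```

That asymmetry is the political point the paper presses at the border: formally "fair" parameter choices still determine which population absorbs the residual error.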