The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI. The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.
Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?
In September 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, developed through the High-level Committee on Programmes (HLCP), which approved the Principles at an intersessional meeting in July 2022. These Principles were developed by a workstream co-led by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Office of Information and Communications Technology of the United Nations Secretariat (OICT), in the HLCP Inter-Agency Working Group on Artificial Intelligence. The Principles are based on the Recommendation on the Ethics of Artificial Intelligence adopted by UNESCO's General Conference at its 41st session in November 2021. This set of ten principles, grounded in ethics and human rights, aims to guide the use of artificial intelligence (AI) across all stages of an AI system's lifecycle within United Nations system entities. It is intended to be read with other related policies and international law, and includes the following principles: do no harm; defined purpose, necessity and proportionality; safety and security; fairness and non-discrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and inclusion and participation.
By focusing on a dialogue with consumers through more robust, conversational AI, brands can deliver the best customer experience possible while also respecting consumer privacy. With increasing regulation and industry shifts by big tech companies, brands of all sizes need to reevaluate their data practices. Christian Ward, chief data officer, Yext, discusses how breakthroughs in conversational AI and natural language processing enable a consent-based dialogue between brands and consumers that provides a personalized customer experience at scale. We stand at a decisive moment for brands as they look to the future of consumer data strategies. With disparate state-by-state legislation, numerous proposals for a national regulatory framework in the US, and differing international laws, including the GDPR, the data governance guidelines for brands to follow are inconsistent – and potentially costly should they run afoul of the rules.
PHNOM PENH – Defense Minister Nobuo Kishi said Wednesday during talks with his ASEAN counterparts that maintaining a rules-based international order in the Indo-Pacific region is important, apparently with China's growing maritime assertiveness in mind. In pushing for Japan's vision of a "free and open" Indo-Pacific, Kishi called for a regional code of conduct in the South China Sea to be "effective, substantial and consistent with international law," his ministry said in a press release.
Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues only so much can be automated and shows the value of human judgment. "All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war," said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, "Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War," published in International Security.
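The four-component framing above (data, prediction, judgment, action) can be sketched in code. This is a minimal illustration, not the paper's model: all names, the stand-in heuristic, and the threshold are invented, and the point is only that the prediction step can be automated while the judgment step, which encodes goals and values, is supplied by a human.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Situation:
    sensor_readings: dict  # raw data about the situation (component 1: data)

def predict(situation: Situation) -> float:
    # Component 2: prediction. A machine-learned model would estimate, e.g.,
    # a probability of hostile intent; here a toy heuristic stands in for it.
    return min(1.0, situation.sensor_readings.get("threat_signals", 0) / 10)

def decide(p_hostile: float, human_judgment: Callable[[float], str]) -> str:
    # Components 3 and 4: judgment and action. Prediction is cheap to
    # automate; the value trade-off stays with a person, per the argument above.
    return human_judgment(p_hostile)

# Usage: the human supplies the goals-and-values component as a policy.
situation = Situation(sensor_readings={"threat_signals": 3})
action = decide(predict(situation), lambda p: "engage" if p > 0.9 else "monitor")
```

The design point is that swapping in a better predictor changes nothing about who owns the judgment function, which is the paper's claim about where humans remain essential.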
Try the demonstration tool for automatically classifying goods from their commercial descriptions and experience how AI could assist core Customs operations. As awareness among Customs administrations of the importance of data analytics grows, along with interest in its application, the BACUDA expert team, with the support of CCF-Korea, continues to deliver state-of-the-art methods and training material to meet Members' demands. Complementing the development of the neural network model to support the classification of goods in the Harmonized System, an online advanced data analytics course, including a practical module on the HS recommendation algorithm, was published on CLiKC!, the WCO e-learning platform. The BACUDA team of experts collaborated on the development of an AI model that recommends HS codes, supporting commodity classification for Customs officials by using historical data to predict HS codes from the commercial descriptions of goods. An accompanying tool demonstrates the functions the model offers.
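The core idea of recommending an HS code from a goods description can be sketched very simply. This is not the BACUDA neural network model; it is a hypothetical stdlib-only illustration in which a new description is matched against historical declarations by token overlap, and the sample data and function names are invented:

```python
from collections import Counter

# Invented sample of historical declarations: (description, HS subheading).
history = [
    ("fresh apples in cartons", "0808.10"),
    ("men's cotton t-shirts", "6109.10"),
    ("laptop computers, 14 inch", "8471.30"),
]

def tokenize(text: str) -> Counter:
    # Bag-of-words representation of a commercial description.
    return Counter(text.lower().split())

def recommend_hs(description: str) -> str:
    # Score each historical record by shared-token count and return the HS
    # code of the best match; a real system would rank several candidates.
    query = tokenize(description)
    best = max(history, key=lambda rec: sum((query & tokenize(rec[0])).values()))
    return best[1]

recommend_hs("cotton t-shirts for men")  # returns "6109.10"
```

A production classifier would replace the overlap score with a learned text model, but the input/output contract, description in, HS code out, is the same one the demonstration tool exposes.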
The event attracted more than 700 attendees and provided insights into how advanced technologies can help Customs administrations facilitate the flow of goods across borders. The publication, titled "The role of advanced technologies in cross-border trade: A customs perspective," describes the current state of play and sheds light on the opportunities and challenges Customs face when deploying these technologies. It outlines the key findings of the WCO's 2021 Annual Consolidated Survey and its results on Customs' use of advanced technologies such as blockchain, the internet of things, data analytics, and artificial intelligence to facilitate trade and enhance safety, security, and fair revenue collection. The joint publication highlights the benefits that can result from the adoption of these advanced technologies, such as enhanced transparency of procedures, real-time sharing of information among all relevant stakeholders, better risk management, and improved data quality, leading to greater efficiency in Customs processes and procedures. In his remarks, WCO Deputy Secretary General Ricardo Treviño Chapa said, "Technologies will assist implementation of international trade facilitation rules and standards, such as the WCO Revised Kyoto Convention and the WTO Trade Facilitation Agreement. We are therefore delighted to be partnering with the WTO, to ensure that our work in assisting our Members' digital transformation journeys is complementary, that we bring all relevant partners to the same table, and that we avoid duplication."
The 10 principles for ethical AI (formulated in 2017) included: applicability of International Humanitarian Law (IHL); non-delegation of human responsibility; accountability for the use of force in accordance with international law; weapons reviews before deployment; incorporation of physical, non-proliferation and cyber security safeguards; risk assessment and mitigation during technology development; and consideration of the use of emerging technologies in the area of lethal autonomous weapon systems (LAWS) in compliance with IHL.
Editor's note: Rostam J. Neuwirth is a professor at the Faculty of Law, University of Macau. The article reflects the author's opinions, and not necessarily the views of CGTN. At the end of last year, the General Conference of the United Nations Educational, Scientific and Cultural Organization adopted the Recommendation on the Ethics of Artificial Intelligence as the first global standard-setting instrument responding to the ethical concerns related to artificial intelligence (AI). This recommendation is to be strongly welcomed because, albeit legally non-binding, it reflects an emerging global consensus on the ethical concerns raised and serious risks posed by AI. Most of all, it recognizes the need to work on global solutions to this problem of fundamental importance for present and future generations.
Knowledge of changing traffic is critical in risk management. Customs offices worldwide have traditionally relied on local resources to accumulate knowledge and detect tax fraud. This naturally leaves countries with weak infrastructure at risk of becoming havens for potentially illicit trade. This paper proposes DAS, a memory bank platform that facilitates knowledge sharing across multinational customs administrations so they can support each other. We propose a domain adaptation method that shares transferable knowledge of frauds as prototypes while safeguarding local trade information. Data encompassing over 8 million import declarations were used to test the feasibility of the new system, showing that participating countries may improve fraud detection by a factor of 2 to 11 with the help of shared knowledge. We discuss implications for substantial tax revenue potential and strengthened policy against illicit trade.
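The prototype-sharing idea described above can be sketched as follows. This is an illustrative toy, not the DAS implementation: each administration computes the mean feature vector (a "prototype") of its known fraud cases locally, and only that aggregate crosses borders, never the raw declarations. The two-dimensional features and all names here are invented for illustration.

```python
def prototype(records):
    # Mean feature vector over local records, e.g. (declared value, weight).
    # Only this aggregate is shared; individual declarations stay local.
    n = len(records)
    return tuple(sum(r[i] for r in records) / n for i in range(len(records[0])))

def fraud_score(declaration, shared_prototypes):
    # Smaller distance to any shared fraud prototype -> higher suspicion,
    # so a screening system would flag the lowest-scoring declarations.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(declaration, p) for p in shared_prototypes)

# Country A summarizes its local fraud cases and shares the prototype.
local_frauds_a = [(100.0, 5.0), (120.0, 7.0)]
shared = [prototype(local_frauds_a)]  # only this tuple crosses borders

# Country B screens an incoming declaration against the shared knowledge.
score = fraud_score((110.0, 6.0), shared)
```

The design choice this illustrates is the privacy trade-off in the abstract: sharing class-level aggregates transfers fraud patterns between administrations without exposing any single trader's data.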