Want to develop a risk-management framework for AI? Treat it like a human.

#artificialintelligence

Artificial intelligence (AI) technologies offer profoundly important strategic benefits, and hazards, for global businesses and government agencies. One of AI's greatest strengths is its ability to engage in behavior typically associated with human intelligence, such as learning, planning, and problem solving. AI, however, also brings new risks to organizations and individuals, and manifests those risks in perplexing ways. It is inevitable that AI will soon face increased regulation.


The scientist and the AI-assisted, remote-control killing machine

The Japan Times

Iran's top nuclear scientist woke up an hour before dawn, as he did most days, to study Islamic philosophy before his day began. That afternoon, he and his wife would leave their vacation home on the Caspian Sea and drive to their country house in Absard, a bucolic town east of Tehran, where they planned to spend the weekend. Iran's intelligence service had warned him of a possible assassination plot, but the scientist, Mohsen Fakhrizadeh, had brushed it off. Convinced that Fakhrizadeh was leading Iran's efforts to build a nuclear bomb, Israel had wanted to kill him for at least 14 years. But there had been so many threats and plots that he no longer paid them much attention. Despite his prominent position in Iran's military establishment, Fakhrizadeh wanted to live a normal life. And, disregarding the advice of his security team, he often drove his own car to Absard instead of having bodyguards drive him in an armored vehicle. It was a serious breach of security protocol, but he insisted. So shortly after noon on Friday, Nov. 27, he slipped behind the wheel of his black Nissan Teana sedan, his wife in the passenger seat beside him, and hit the road. Since 2004, when the Israeli government ordered its foreign intelligence agency, the Mossad, to prevent Iran from obtaining nuclear weapons, the agency had been carrying out a campaign of sabotage and cyberattacks on Iran's nuclear fuel enrichment facilities.


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


Reports of the Workshops Held at the 2021 AAAI Conference on Artificial Intelligence

Interactive AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirty-Fifth Conference on Artificial Intelligence was held virtually on February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, ...


A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized

arXiv.org Artificial Intelligence

Artificial intelligence (AI) systems operate in increasingly diverse areas, from healthcare to facial recognition, the stock market, autonomous vehicles, and so on. While the underlying digital infrastructure of AI systems is developing rapidly, each area of implementation is subject to different degrees and processes of legitimization. By combining elements from institutional theory and information systems theory, this paper presents a conceptual framework for analyzing and understanding AI-induced field change. The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions, while existing institutional infrastructures determine the scope and speed at which organizational change is allowed to occur. Where institutional infrastructure and governance arrangements, such as standards, rules, and regulations, are still underdeveloped, the field can move fast but is also more likely to be contested. The institutional infrastructure surrounding AI-induced fields is generally underdeveloped, which could be an obstacle to the broader institutionalization of AI systems going forward.


Artificial Intelligence and Automated Systems Legal Update (2Q21)

#artificialintelligence

After a busy start to the year, regulatory and policy developments related to Artificial Intelligence and Automated Systems ("AI") have continued apace in the second quarter of 2021. Unlike the comprehensive regulatory framework proposed by the European Union ("EU") in April 2021,[1] more specific regulatory guidelines in the U.S. are still being proposed on an agency-by-agency basis. President Biden has so far sought to amplify the emerging U.S. AI strategy by continuing to grow the national research and monitoring infrastructure kick-started by the 2019 Trump Executive Order[2] and by remaining focused on innovation and competition with China in transformative innovations like AI, semiconductors, and robotics. Most recently, the U.S. Innovation and Competition Act of 2021--sweeping, bipartisan R&D and science-policy legislation--moved rapidly through the Senate. While there has been no major shift away from the previous "hands off" regulatory approach at the federal level, we are closely monitoring efforts by the federal government and enforcers such as the FTC to make fairness and transparency central tenets of U.S. AI policy.


Vehicle Fuel Optimization Under Real-World Driving Conditions: An Explainable Artificial Intelligence Approach

arXiv.org Artificial Intelligence

Fuel optimization of diesel and petrol vehicles within industrial fleets is critical for mitigating costs and reducing emissions. This objective is achievable by acting on fuel-related factors, such as driving behaviour. In this study, we developed an Explainable Boosting Machine (EBM) model to predict the fuel consumption of different types of industrial vehicles, using real-world data collected from 2020 to 2021. This machine learning model also explains the relationship between the input factors and fuel consumption, quantifying the individual contribution of each one. The explanations provided by the model are compared with domain knowledge to check whether they are aligned. The results show that 70% of the categories associated with the fuel factors are consistent with the previous literature. With the EBM algorithm, we estimate that optimizing driving behaviour would decrease fuel consumption by between 12% and 15% in a large fleet (more than 1,000 vehicles).
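
As a rough illustration of the modeling approach described in the abstract, the sketch below fits an EBM with the open-source `interpret` package and reads off per-factor contributions. The feature names and synthetic data are hypothetical stand-ins for the paper's fleet telemetry, not the authors' dataset.

```python
# A minimal sketch, assuming the open-source `interpret` package
# (interpretml); the feature names and synthetic data below are
# hypothetical stand-ins for the paper's fleet telemetry.
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical driving-behaviour and vehicle factors.
X = np.column_stack([
    rng.uniform(0, 10, n),   # harsh accelerations per 100 km
    rng.uniform(20, 90, n),  # mean speed (km/h)
    rng.uniform(0, 0.4, n),  # fraction of time spent idling
    rng.uniform(0, 1, n),    # normalized vehicle load
])
# Synthetic fuel consumption (L/100 km), driven mostly by behaviour.
y = (8.0 + 0.6 * X[:, 0] + 0.002 * (X[:, 1] - 60.0) ** 2
     + 9.0 * X[:, 2] + 4.0 * X[:, 3] + rng.normal(0.0, 0.5, n))

ebm = ExplainableBoostingRegressor(
    feature_names=["harsh_accel", "mean_speed", "idling_share", "load"])
ebm.fit(X, y)

# Global explanation: the quantified contribution of each factor,
# which can then be compared against domain knowledge.
overall = ebm.explain_global().data()
for name, score in zip(overall["names"], overall["scores"]):
    print(f"{name}: importance {score:.3f}")
```

Because an EBM is a generalized additive model, each term's learned shape function can also be plotted and checked against domain expectations, which is the kind of alignment check the abstract describes.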


The Threat of Artificial Intelligence

#artificialintelligence

The technologies referred to as "artificial intelligence" or "AI" are more momentous than most people realize. Their impact will be at least equal to, and may well exceed, that of electricity, the computer, and the internet. What's more, their impact will be massive and rapid, faster than what the internet has wrought in the past thirty years. Much of it will be wondrous, giving sight to the blind and enabling self-driving vehicles, for example, but AI-engendered technology may also devastate job rolls, enable an all-encompassing surveillance state, and provoke social upheavals yet unforeseen. The time we have to understand this fast-moving technology and establish principles for its governance is very short. The term "AI" was coined by a computer scientist in 1956.


Levels of explainable artificial intelligence for human-aligned conversational explanations

arXiv.org Artificial Intelligence

Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day, and the public needs to understand the decision-making process to accept the outcomes. However, the vast majority of applications of XAI/IML focus on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's beliefs and motivations; its hypotheses about other (human, animal or AI) agents' intentions; its interpretation of external cultural expectations; or the processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust an AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level 'strong' explanations.
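
For a concrete sense of the low-level 'narrow' explanations the paper says dominate current XAI/IML practice, the hedged sketch below prints the decision path a tree classifier followed for a single datum; the dataset and model are illustrative stand-ins chosen for brevity, not drawn from the paper.

```python
# A minimal sketch of a narrow, per-decision explanation: the rule
# path a decision tree followed for one datum. The dataset and model
# are illustrative stand-ins, not taken from the paper.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

x = data.data[[0]]                    # the single instance to explain
path = clf.decision_path(x).indices   # node ids visited, root to leaf
leaf = clf.apply(x)[0]
tree = clf.tree_

for node in path:
    if node == leaf:
        pred = data.target_names[clf.predict(x)[0]]
        print(f"leaf {node}: predicted class '{pred}'")
    else:
        j = tree.feature[node]
        thr = tree.threshold[node]
        op = "<=" if x[0, j] <= thr else ">"
        print(f"node {node}: {data.feature_names[j]} = {x[0, j]:.2f} {op} {thr:.2f}")
```

A trace like this explains one decision in terms of feature thresholds but says nothing about the agent's beliefs, intentions, or cultural context, which is exactly the gap the paper's higher levels of explanation are meant to fill.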