Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.


Best surveillance drone in 2021

ZDNet

Security and surveillance is one of the biggest growth areas in the ever-expanding UAV sector. While it's a relatively recent addition to enterprise toolkits in many industries, the use of drones to provide aerial assessments of activities on the ground is actually a return to form for the technology, which has seen some of its most ambitious development in defense applications. Aerial vehicles can cover vastly more terrain than slower, clumsier ground-based surveillance systems, which is why they have been a key component of military and law enforcement operations for decades. But drones, which are smaller, cheaper, and more efficient than manned aircraft like helicopters, have very quickly democratized access to aerial security and surveillance and opened up the skies to companies of all sizes across sectors.


Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

arXiv.org Artificial Intelligence

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global, and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
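The abstract does not include code, but the core idea of a probabilistic contrastive counterfactual can be sketched from input-output data alone. Below is a minimal, illustrative Monte Carlo estimate of counterfactual "necessity" and "sufficiency" scores for one binary attribute. This is not the authors' LEWIS implementation: it assumes the attribute is exogenous, so that an intervention can be simulated by flipping the column, and the dataset, model, and function names are all invented for the example.

```python
# Hedged sketch (not the LEWIS system itself): estimating probabilistic
# contrastive counterfactual scores for a single binary attribute,
# assuming the attribute is exogenous so that intervening on it is
# equivalent to flipping its column. Data and model are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy input-output data standing in for a black box's decision logs.
n = 5000
X = rng.integers(0, 2, size=(n, 3)).astype(float)
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.3, n)) > 1).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def necessity_score(model, X, attr, baseline=0):
    """Fraction of positively decided individuals with attr != baseline
    whose decision flips to 0 when attr is set to baseline: a Monte Carlo
    estimate of the counterfactual 'necessity' of attr (under exogeneity)."""
    pos = (model.predict(X) == 1) & (X[:, attr] != baseline)
    X_cf = X[pos].copy()
    X_cf[:, attr] = baseline            # the contrastive intervention
    return float((model.predict(X_cf) == 0).mean())

def sufficiency_score(model, X, attr, target=1):
    """Fraction of negatively decided individuals with attr != target
    whose decision flips to 1 when attr is set to target: an estimate of
    the counterfactual 'sufficiency' of attr."""
    neg = (model.predict(X) == 0) & (X[:, attr] != target)
    X_cf = X[neg].copy()
    X_cf[:, attr] = target
    return float((model.predict(X_cf) == 1).mean())

print("necessity of attr 0:", necessity_score(black_box, X, attr=0))
print("sufficiency of attr 0:", sufficiency_score(black_box, X, attr=0))
```

A high necessity score here would suggest the attribute drove the positive decision, which is also the kind of information actionable recourse builds on; the real framework additionally uses a causal model to handle non-exogenous attributes.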


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks, and have therefore only been applied in systems where their benefits were considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT) as part of the DEEL Project.


Drones With 'Most Advanced AI Ever' Coming Soon To Your Local Police Department

#artificialintelligence

Three years ago, Customs and Border Protection placed an order for self-flying aircraft that could launch on their own, rendezvous, locate and monitor multiple targets on the ground without any human intervention. In its reasoning for the order, CBP said the level of monitoring required to secure America's long land borders from the sky was too cumbersome for people alone. To research and build the drones, CBP handed $500,000 to Mitre Corp., a trusted nonprofit Skunk Works that was already furnishing border police with prototype rapid DNA testing and smartwatch hacking technology. They were "tested but not fielded operationally" as "the gap from simulation to reality turned out to be much larger than the research team originally envisioned," a CBP spokesperson says. This year, America's border police will test automated drones from Skydio, the Redwood City, Calif.-based startup that on Monday announced it had raised an additional $170 million in venture funding at a valuation of $1 billion. That brings the total raised for Skydio to $340 million.


Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

The societal and ethical implications of the use of opaque artificial intelligence systems for consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholder groups, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multi-faceted framework that brings more conceptual precision to the present debate by (1) identifying the types of explanations that are most pertinent to artificial intelligence predictions, (2) recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and (3) demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.


Los Angeles man admits flying drone that struck LAPD helicopter over Hollywood

Los Angeles Times

A Los Angeles man admitted in federal court Thursday that he flew a drone that struck a Los Angeles Police Department helicopter that was responding to a crime scene in Hollywood. Andrew Rene Hernandez, 22, made the admission in pleading guilty to one count of unsafe operation of an unmanned aircraft, a misdemeanor. A spokesman for the U.S. attorney's office in Los Angeles said Hernandez is believed to be the first person in the country to be convicted of that offense, which carries a punishment of up to one year in prison. In his plea agreement, Hernandez admitted that he "recklessly interfered with and disrupted" the operation of the LAPD helicopter, which was responding to a burglary of a pharmacy, and that his actions "posed an imminent safety hazard" to the chopper's occupants. Reached by phone Thursday, Hernandez declined to comment.


A Survey on the Explainability of Supervised Machine Learning

Journal of Artificial Intelligence Research

Predictions obtained by, e.g., artificial neural networks can be highly accurate, but humans often perceive the models as black boxes: the reasoning behind their decisions is largely opaque. Understanding that decision making is of paramount importance, particularly in highly sensitive areas such as healthcare or finance, and the decision making behind these black boxes needs to become more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
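As a concrete illustration of the kind of model-agnostic, post-hoc explanation method such surveys classify, the sketch below computes permutation feature importance for a black-box classifier: shuffle one feature at a time and measure how much held-out accuracy drops. This is a generic technique, not one taken from the paper, and the dataset and model are illustrative choices.

```python
# Hedged illustration (not from the survey itself): permutation feature
# importance, a simple model-agnostic probe of a black-box classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
base_acc = model.score(X_te, y_te)

rng = np.random.default_rng(0)
importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])           # break the feature-label link
    importances.append(base_acc - model.score(X_perm, y_te))

# Features whose shuffling hurts accuracy most are the most "important".
top = np.argsort(importances)[::-1][:5]
print("top features by permutation importance:", top)
```

The appeal of this family of methods, and the reason surveys group them as model-agnostic, is that they only require query access to the model's predictions, never its internals.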