The Autonomous-Car Chaos of the 2004 Darpa Grand Challenge

WIRED

When the Inquisition required him to drop his study of what the Roman Catholic Church insisted was not a heliocentric solar system, Galileo Galilei turned his energy to the less controversial question of how to stick a telescope onto a helmet. The king of Spain had offered a hefty reward to anyone who could solve the stubborn mystery of determining a ship's longitude at sea: 6,000 ducats up front and another 2,000 per year for life. Galileo thought his headgear, with the telescope fixed over one eye and making its wearer look like a misaligned unicorn, would net him the reward. Determining latitude is easy for any sailor who can pick out the North Star, but finding longitude escaped the navigators of the 17th century because it required a precise knowledge of time. That's based on a simple principle: say you set your clock before sailing west from Greenwich.
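
The excerpt breaks off mid-explanation, but the principle it gestures at is pure arithmetic: the Earth rotates 15 degrees per hour, so the gap between local solar time (observable from the sun) and a clock still set to Greenwich time gives longitude directly. A minimal illustration; the function and values below are ours, not the article's:

# Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour, so the
# difference between local solar time and a home-port clock yields longitude.

def longitude_from_clocks(greenwich_time_h: float, local_solar_time_h: float) -> float:
    """Degrees of longitude; negative means west of Greenwich."""
    return (local_solar_time_h - greenwich_time_h) * 15.0

# Local solar noon while the Greenwich-set clock reads 15:00 puts the ship
# 45 degrees west of Greenwich.
print(longitude_from_clocks(greenwich_time_h=15.0, local_solar_time_h=12.0))  # -45.0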


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green argues forcefully in his book The Smart Enough City, incorporating technology into city environments does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and how to design them. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities, and globally there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, makes the case in his book Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore the key challenges, including security, robustness, interpretability, and ethics, facing a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of them may lead to others or help in solving them. The paper also discusses current limitations, pitfalls, and future directions of research in these domains, and how future work can fill current gaps and lead to better solutions.


DARPA CODE Autonomy Engine Demonstrated on Avenger UAS

#artificialintelligence

General Atomics Aeronautical Systems, Inc. (GA-ASI) has demonstrated the DARPA-developed Collaborative Operations in Denied Environment (CODE) autonomy engine on the company's Avenger Unmanned Aircraft System (UAS). CODE was used to gain further understanding of cognitive Artificial Intelligence (AI) processing on larger UAS platforms for air-to-air targeting. Using a network-enabled Tactical Targeting Network Technology (TTNT) radio for mesh-network mission communications, GA-ASI was able to demonstrate the integration of emerging Advanced Tactical Data Links (ATDL) as well as separation between flight-critical and mission-critical systems. During the autonomous flight, CODE software controlled the manoeuvring of the Avenger UAS for over two hours without human pilot input. GA-ASI extended the base software's behavioural functions to support a coordinated air-to-air search with up to six aircraft, five of them virtual for the demonstration.


AI-Powered Sensing Technology to be Developed for MQ-9 UAS

#artificialintelligence

General Atomics Aeronautical Systems, Inc. (GA-ASI) has been awarded a contract by the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC) to develop enhanced autonomous sensing capabilities for unmanned aerial vehicles (UAVs). The JAIC Smart Sensor project aims to advance drone-based AI technology by demonstrating object recognition algorithms and employing onboard AI to automatically control UAV sensors and direct autonomous flight. GA-ASI will deploy these new capabilities on an MQ-9 Reaper UAV equipped with a variety of sensors, including GA-ASI's Reaper Defense Electronic Support System (RDESS) and Lynx Synthetic Aperture Radar (SAR). GA-ASI's Metis Intelligence, Surveillance and Reconnaissance (ISR) tasking and intelligence-sharing application, which enables operators to specify effects-based mission objectives and receive automatic notification of actionable intelligence, will be used to command the unmanned aircraft. J.R. Reid, GA-ASI Vice President of Strategic Development, commented: "GA-ASI is excited to leverage the considerable investment we have made to advance the JAIC's autonomous sensing objective. This will bring a tremendous increase in unmanned systems capabilities for applications across the full range of military operations."


Assessment of System-Level Cyber Attack Vulnerability for Connected and Autonomous Vehicles Using Bayesian Networks

arXiv.org Artificial Intelligence

This study presents a methodology, based on probabilistic graphical models, for quantifying the vulnerability of intelligent transportation systems to cyber attacks and the impact of those attacks under a connected and autonomous vehicles (CAV) framework. Vulnerabilities to various attack types and their impacts are calculated for intelligent signals and cooperative adaptive cruise control (CACC) applications, based on selected performance measures. Numerical examples show the impact of these vulnerabilities in terms of average intersection queue lengths, number of stops, average speed, and delays. For a signalized network with redundant systems, attacks can increase average queues and delays by 3% and 15%, respectively; without redundancy, the increases reach 4% and 17%. For the CACC application, impacts reach a 50% delay difference on average when even a small amount of speed information is perturbed. When an attacker inserts significantly different speed characteristics, the delay difference grows beyond 100% of normal traffic conditions.
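
To make the idea concrete, here is a minimal sketch, entirely our own toy example rather than the paper's model, of how a discrete Bayesian network can turn attack assumptions into a quantified impact on a traffic performance measure, using the pgmpy library:

# Toy structure: an attack may compromise a roadside sensor, which in turn
# raises the probability of degraded intersection performance (longer queues).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Attack", "SensorCompromised"),
                         ("SensorCompromised", "QueueIncrease")])

cpd_attack = TabularCPD("Attack", 2, [[0.95], [0.05]])  # P(attack) = 0.05 (assumed)
cpd_sensor = TabularCPD(
    "SensorCompromised", 2,
    [[0.99, 0.30],   # P(not compromised | no attack), P(not compromised | attack)
     [0.01, 0.70]],
    evidence=["Attack"], evidence_card=[2])
cpd_queue = TabularCPD(
    "QueueIncrease", 2,
    [[0.90, 0.20],   # P(no queue increase | sensor ok), P(... | compromised)
     [0.10, 0.80]],
    evidence=["SensorCompromised"], evidence_card=[2])

model.add_cpds(cpd_attack, cpd_sensor, cpd_queue)
assert model.check_model()

# The gap between degradation probability given an attack and the baseline
# is a simple vulnerability-impact measure in the spirit of the paper.
infer = VariableElimination(model)
print(infer.query(["QueueIncrease"], evidence={"Attack": 1}))
print(infer.query(["QueueIncrease"]))

Scaling this up to the paper's setting would mean replacing the toy nodes with application-specific ones (signal timing, CACC speed feeds, redundant sensing) and calibrating the conditional probability tables from traffic simulation, rather than the assumed values above.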


The State of AI Ethics Report (October 2020)

arXiv.org Artificial Intelligence

The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and the future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference on the latest thinking in the field of AI ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.


Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data

arXiv.org Machine Learning

While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially-crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, showing that such natural variation presents a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation which can be used to vary data over a range of natural conditions. Such models may be either known a priori or else learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that across a variety of naturally-occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms and norm-bounded robust deep learning algorithms.
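
As a rough sketch of the training idea, assume a pre-trained model of natural variation G(x, z) that re-renders an input x under a sampled nuisance condition z (lighting, weather). One of the paper's model-based algorithms can then be approximated as a worst-case-over-samples training step; the code below is our simplified PyTorch reading, not the authors' implementation:

import torch
import torch.nn.functional as F

def model_based_robust_step(f, G, x, y, optimizer, n_samples=4, z_dim=8):
    """One training step: minimize the worst-case loss over sampled natural variations."""
    losses = []
    for _ in range(n_samples):
        z = torch.randn(x.size(0), z_dim, device=x.device)  # sampled nuisance code
        x_var = G(x, z)                                     # naturally varied version of x
        losses.append(F.cross_entropy(f(x_var), y, reduction="none"))
    # Per-example worst case over the sampled variations, then averaged.
    worst = torch.stack(losses, dim=0).max(dim=0).values
    loss = worst.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Taking the max over sampled variations targets robustness against the hardest condition G can produce for each input; replacing the max with a mean would recover a data-augmentation-style variant, which is another natural reading of training against a model of variation.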


'Machines set loose to slaughter': the dangerous rise of military AI

#artificialintelligence

Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.


Drones – the New Critical Infrastructure

#artificialintelligence

Be prepared, in the near future, when you gaze into the blue skies, to perceive a whole series of strange-looking things: no, they will not be birds, nor planes, nor even Superman. They may temporarily, and in some cases startlingly, be mistaken for UFOs, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions that extend the capabilities of our vital infrastructures, such as transportation, utilities, the electric grid, agriculture, and emergency services. Rapidly advancing technologies have given unmanned aerial vehicles (UAVs, or drones) the dramatic capability to perform functions that were inconceivable a mere few years ago.


Scientists use big data to sway elections and predict riots -- welcome to the 1960s

Nature

Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018 [1]. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history [2]. 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.