
Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence

#artificialintelligence

Artificial intelligence (AI) has the potential to deliver significant social and economic benefits, including reducing accidental deaths and injuries, making new scientific discoveries, and increasing productivity.[1] However, an increasing number of activists, scholars, and pundits see AI as inherently risky, creating substantial negative impacts such as eliminating jobs, eroding personal liberties, and reducing human intelligence.[2] Some even see AI as dehumanizing, dystopian, and a threat to humanity.[3] As such, the world is dividing into two camps regarding AI: those who support the technology and those who oppose it. Unfortunately, the latter camp is increasingly dominating AI discussions, not just in the United States, but in many nations around the world. There should be no doubt that nations that tilt toward fear rather than optimism are more likely to put in place policies and practices that limit AI development and adoption, which will hurt their economic growth, social ...


Inside AI: Technology Landscape of Artificial Intelligence

@machinelearnbot

AI Clouds: By packaging cloud-based services into Lego-like building blocks with developer kits, large general-purpose AI companies are enabling developers to deploy algorithms via SDKs within their cloud-hosted platforms. From Microsoft's Azure AI platform to Amazon's AWS AI offerings, these organizations provide the pre-trained models, GPUs, and storage needed for more effective continuous deployment, testing, and quality assurance (QA). AI Languages: Beyond software applications that onboard users onto AI platforms, companies are standardizing new languages and frameworks so that developers keep building against their libraries. Uber's AI Labs, for example, released Pyro, a probabilistic programming language built on Python and PyTorch. Wit.ai is another platform developers use to build cross-device applications.
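
To make the Pyro mention concrete, here is a minimal sketch of what a probabilistic model looks like in that language; the coin-flip model, its name, the Beta/Bernoulli choices, and the toy data are illustrative assumptions rather than anything drawn from the article.

```python
# Minimal sketch of a Pyro probabilistic model (illustrative only).
import torch
import pyro
import pyro.distributions as dist

def coin_model(observations):
    # Latent fairness of a coin, drawn from a Beta prior.
    fairness = pyro.sample("fairness", dist.Beta(10.0, 10.0))
    # Each observed flip is a Bernoulli draw conditioned on that fairness.
    for i, obs in enumerate(observations):
        pyro.sample(f"flip_{i}", dist.Bernoulli(fairness), obs=obs)

# Three hypothetical coin flips (1 = heads, 0 = tails).
data = [torch.tensor(1.0), torch.tensor(0.0), torch.tensor(1.0)]
coin_model(data)
```

In practice such a model would be paired with one of Pyro's inference routines; the point here is only to show the style of model definition the abstract alludes to.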


Towards a Framework for Certification of Reliable Autonomous Systems

arXiv.org Artificial Intelligence

The capability and spread of autonomous systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with such systems: how, for example, could we certify an Unmanned Aerial System for autonomous use in civilian airspace? Here we analyse what is needed to provide verified, reliable behaviour of an autonomous system, survey what the state of the art in automated verification can already deliver, and propose a roadmap towards developing regulatory guidelines, articulating challenges to researchers, to engineers, and to regulators. Case studies in seven distinct domains illustrate the article. Keywords: autonomous systems; certification; verification; artificial intelligence. Since the dawn of human history, humans have designed, implemented, and adopted tools to make tasks easier to perform, often improving efficiency, safety, or security.
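
As a rough illustration of the kind of automated verification the roadmap concerns (not code from the paper), the sketch below exhaustively explores a toy UAV-controller state machine and checks a safety property over every reachable state; all states, transitions, and the property itself are hypothetical.

```python
# Toy model-checking sketch: breadth-first exploration of a small state machine
# with a safety check ("never FLYING with a dead battery") on each state.
from collections import deque

TRANSITIONS = {
    ("GROUNDED", "ok"): [("TAKEOFF", "ok")],
    ("TAKEOFF", "ok"):  [("FLYING", "ok"), ("FLYING", "low")],
    ("FLYING", "ok"):   [("FLYING", "low")],
    ("FLYING", "low"):  [("LANDING", "low")],
    ("LANDING", "low"): [("GROUNDED", "low")],
}

def is_safe(state):
    mode, battery = state
    return not (mode == "FLYING" and battery == "dead")

def check(initial):
    """Explore all reachable states; return (True, None) or (False, counterexample)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not is_safe(state):
            return False, state
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

print(check(("GROUNDED", "ok")))  # (True, None): the safety property holds here
```

Real certification evidence would of course come from industrial-strength model checkers and theorem provers over far richer models, but the exhaustive-check pattern is the same.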


Ultra Low Power Deep-Learning-powered Autonomous Nano Drones

arXiv.org Artificial Intelligence

Flying in dynamic, urban, highly populated environments represents an open problem in robotics. State-of-the-art (SoA) autonomous Unmanned Aerial Vehicles (UAVs) employ advanced computer vision techniques based on computationally expensive algorithms, such as Simultaneous Localization and Mapping (SLAM) or Convolutional Neural Networks (CNNs), to navigate in such environments. In the Internet-of-Things (IoT) era, nano-size UAVs capable of autonomous navigation would be extremely desirable as self-aware mobile IoT nodes. However, autonomous flight is considered unaffordable in the context of nano-scale UAVs, where the ultra-constrained power envelopes of tiny rotorcraft limit on-board computation to low-power microcontrollers. In this work, we present the first vertically integrated system for fully autonomous deep neural network-based navigation on nano-size UAVs. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and is deployed on a 27 g commercial, open-source Crazyflie 2.0 nano-quadrotor. We discuss a methodology and software mapping tools that enable the SoA CNN presented in [1] to be fully executed on-board within a strict 12 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 94 mW on average, about 1% of the power envelope of the deployed nano-aircraft.
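
As a quick sanity check on the figures quoted in the abstract (a sketch, not part of the paper), the snippet below works out the per-frame latency budget implied by the 12 fps constraint, the compute energy spent per frame at 94 mW, and the total power envelope implied if that 94 mW is roughly 1% of it.

```python
# Back-of-the-envelope arithmetic from the abstract's figures (illustrative only).
fps = 12              # required frame rate for real-time on-board navigation
avg_power_w = 0.094   # 94 mW average processing power on GAP8

frame_budget_ms = 1000.0 / fps                        # latency budget per frame
energy_per_frame_mj = avg_power_w * (1.0 / fps) * 1e3 # compute energy per frame, in mJ
implied_envelope_w = avg_power_w / 0.01               # if 94 mW is ~1% of the drone's power envelope

print(f"Per-frame latency budget: {frame_budget_ms:.1f} ms")       # ~83.3 ms
print(f"Compute energy per frame: {energy_per_frame_mj:.2f} mJ")   # ~7.83 mJ
print(f"Implied total power envelope: {implied_envelope_w:.1f} W") # ~9.4 W
```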