Amazon's Latest Gimmicks Are Pushing the Limits of Privacy

WIRED

At the end of September, amidst its usual flurry of fall hardware announcements, Amazon debuted two especially futuristic products within five days of each other. The first is a small autonomous surveillance drone, Ring Always Home Cam, that waits patiently inside a charging dock to eventually rise up and fly around your house, checking whether you left the stove on or investigating potential burglaries. The second is a palm recognition scanner, Amazon One, that the company is piloting at two of its grocery stores in Seattle as a mechanism for faster entry and checkout. Both products aim to make security and authentication more convenient--but for privacy-conscious consumers, they also raise red flags. Amazon's latest data-hungry innovations are not launching in a vacuum.


Amazon's new Ring camera is actually a flying drone -- for inside your home

#artificialintelligence

Ring's Always Home Cam is an indoor security camera drone. Ring on Thursday introduced a new product to its growing roster of smart home devices -- the Ring Always Home Cam. Unlike the Amazon-owned company's other security cameras, the Always Home Cam is a flying camera drone that docks when it isn't in use. The Ring Always Home Cam will be available in 2021 for $250. Along with this hardware announcement, Ring says you'll be able to turn on end-to-end encryption in the Ring app's Control Center "later this year" in an effort to improve the security of its devices.


Of course I want an Amazon drone flying inside my house. Don't you?

ZDNet

I always know a new product is excellent when its makers describe it as "next-level." I hear you moan, on seeing the new, wondrous Ring Always Home Cam. Oh, how can you be such a killjoy? When Amazon's Ring describes it as "Next-Level Compact, Lightweight, Autonomously Flying Indoor Security Camera," surely you leap toward your ceiling and exclaim: "Finally, something from Amazon I actually want! A drone that flies around my living room!"


Always Home Cam: Amazon's robot drone flying inside our homes seems like a bad idea

ZDNet

I actually had to double-check my calendar to make sure today wasn't April Fools' Day. Because watching the intro video of an indoor surveillance drone operated by Amazon seemed like just the sort of geeky joke you'd expect on April 1. But it isn't April Fools' Day, and besides, Google has always been the one with the twisted sense of humor. Amazon has always been the one with the twisted sense of world domination. This was a serious press briefing.


A Survey of FPGA-Based Robotic Computing

arXiv.org Artificial Intelligence

Recent research on robotics has shown significant improvement, spanning from algorithms and mechanics to hardware architectures. Robots, including manipulators, legged robots, drones, and autonomous vehicles, are now widely applied in diverse scenarios. However, the high computation and data complexity of robotic algorithms pose great challenges to their applications. On the one hand, the CPU platform is flexible in handling multiple robotic tasks, and GPU platforms offer higher computational capacity and easy-to-use development frameworks, so they have been widely adopted in several applications. On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios. With specially designed hardware logic and algorithm kernels, FPGA-based accelerators can surpass CPUs and GPUs in performance and energy efficiency. In this paper, we give an overview of previous work on FPGA-based robotic accelerators covering different stages of the robotic system pipeline. An analysis of software and hardware optimization techniques and main technical issues is presented, along with some commercial and space applications, to serve as a guide for future work.

Over the last decade, we have seen significant progress in the development of robotics, spanning from algorithms and mechanics to hardware architectures. Various robotic systems, such as manipulators, legged robots, unmanned aerial vehicles, and self-driving cars, have been designed for search and rescue [1], [2], exploration [3], [4], package delivery [5], entertainment [6], [7], and many more applications and scenarios. These robots are on the rise and demonstrating their full potential. Take drones, a type of aerial robot, for example: the number of drones grew by 2.83x between 2015 and 2019, the registered number reached 1.32 million in 2019, and the FAA expects it to reach 1.59 million by 2024. However, the computation and storage complexity, as well as the real-time and power constraints of the robotic system, hinder its wide application in latency-critical or power-limited scenarios [13]. It is therefore essential to choose a proper compute platform for the robotic system. CPUs and GPUs are two widely used commercial compute platforms. The CPU is designed to handle a wide range of tasks quickly and is often used to develop novel algorithms; a typical CPU can achieve 10-100 GFLOPS with below 1 GOP/J power efficiency [14]. In contrast, the GPU is designed with thousands of processor cores running simultaneously, which enables massive parallelism; a typical GPU can deliver up to 10 TOPS and is a good candidate for high-performance scenarios.
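The quoted platform figures can be compared directly with a little arithmetic. A minimal sketch, in which the CPU and GPU rows use the rough numbers cited above and the FPGA row is a purely illustrative assumption:

```python
# Back-of-the-envelope comparison of the compute platforms discussed above.
# CPU and GPU rows use the rough figures cited in the survey (CPU: up to
# ~100 GFLOPS at under 1 GOP/J; GPU: up to ~10 TOPS); the FPGA row is a
# hypothetical accelerator, included only for illustration.

platforms = {
    # name: (peak throughput in ops/s, power draw in watts)
    "CPU": (100e9, 100.0),
    "GPU": (10e12, 250.0),
    "FPGA (assumed)": (1e12, 20.0),
}

def efficiency_gops_per_joule(ops_per_s: float, watts: float) -> float:
    """Energy efficiency in GOP/J: operations per second divided by watts."""
    return ops_per_s / watts / 1e9

for name, (ops, watts) in platforms.items():
    print(f"{name}: {ops / 1e12:.2f} TOPS, "
          f"{efficiency_gops_per_joule(ops, watts):.1f} GOP/J")
```

Even under these assumed power numbers, the ordering matches the survey's point: the accelerator wins on operations per joule rather than necessarily on raw throughput.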


Vision-Based Autonomous Drone Control using Supervised Learning in Simulation

arXiv.org Artificial Intelligence

Limited power and computational resources, absence of high-end sensor equipment and GPS-denied environments are challenges faced by autonomous micro aerial vehicles (MAVs). We address these challenges in the context of autonomous navigation and landing of MAVs in indoor environments and propose a vision-based control approach using Supervised Learning. To achieve this, we collected data samples in a simulation environment which were labelled according to the optimal control command determined by a path planning algorithm. Based on these data samples, we trained a Convolutional Neural Network (CNN) that maps low-resolution image and sensor input to high-level control commands. We have observed promising results in both obstructed and non-obstructed simulation environments, showing that our model is capable of successfully navigating a MAV towards a landing platform. Our approach requires shorter training times than similar Reinforcement Learning approaches and can potentially overcome the limitations of manual data collection faced by comparable Supervised Learning approaches.
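The supervised pipeline described above (label simulated states with a planner's optimal command, then train a model to imitate it) can be sketched in miniature. In this sketch, a 1-D position stands in for the low-resolution image input and a 1-nearest-neighbour rule stands in for the CNN; the commands, thresholds, and noise levels are illustrative assumptions, not the paper's:

```python
# Toy behaviour-cloning sketch: an expert planner labels simulated states,
# and a simple classifier learns to imitate it from noisy observations.
import random

random.seed(0)

def expert_command(x: float) -> str:
    """Path-planner stand-in: steer toward the landing pad at x = 0, then descend."""
    if x > 0.5:
        return "move_left"
    if x < -0.5:
        return "move_right"
    return "descend"

# 1. Collect labelled samples in "simulation": noisy observations of the
#    true position, labelled with the planner's optimal command.
dataset = []
for _ in range(2000):
    x = random.uniform(-5.0, 5.0)
    obs = x + random.gauss(0.0, 0.05)     # simulated sensor noise
    dataset.append((obs, expert_command(x)))

train, test = dataset[:1500], dataset[1500:]

# 2. "Train" and predict with 1-nearest-neighbour imitation of the expert
#    (a minimal stand-in for the paper's CNN).
def predict(obs: float, train_set) -> str:
    nearest = min(train_set, key=lambda sample: abs(obs - sample[0]))
    return nearest[1]

# 3. Measure how often the cloned policy matches the expert on held-out data.
accuracy = sum(predict(o, train) == cmd for o, cmd in test) / len(test)
```

The held-out accuracy is high because the expert's decision boundaries are simple; the paper's contribution is doing the same imitation from images rather than from a clean low-dimensional state.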


Flightmare: A Flexible Quadrotor Simulator

arXiv.org Artificial Intelligence

Currently available quadrotor simulators have a rigid and highly specialized structure: they are either really fast, physically accurate, or photo-realistic, but not all three at once. In this work, we propose a paradigm shift in the development of simulators: moving the trade-off between accuracy and speed from the developers to the end users. We use this design idea to develop a novel modular quadrotor simulator: Flightmare. Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. These two components are fully decoupled and can run independently of each other. This makes our simulator extremely fast: rendering achieves speeds of up to 230 Hz, while physics simulation runs at up to 200,000 Hz. In addition, Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point cloud of the scene; (ii) an API for reinforcement learning that can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. We demonstrate the flexibility of Flightmare by using it for two completely different robotic tasks: learning a sensorimotor control policy for a quadrotor and path planning in a complex 3D environment.
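Flightmare's central design choice, decoupling the physics engine from the rendering engine so each runs at its own rate, can be sketched with a toy vertical point-mass "quadrotor". The dynamics, rates, and names below are illustrative assumptions, not Flightmare's actual API or model:

```python
# Minimal sketch of a decoupled simulator: the physics loop integrates at a
# high fixed rate, while the "renderer" merely samples the latest state at a
# much lower rate. All numbers and the dynamics model are illustrative.

PHYSICS_HZ = 10_000     # dynamics integration rate (Flightmare: up to 200 kHz)
RENDER_HZ = 100         # frame sampling rate (Flightmare: up to ~230 Hz)
G = 9.81                # gravitational acceleration, m/s^2

class QuadrotorPhysics:
    """Tiny fixed-step Euler integrator, independent of any renderer."""
    def __init__(self):
        self.z, self.vz = 0.0, 0.0
    def step(self, thrust_acc: float, dt: float):
        self.vz += (thrust_acc - G) * dt
        self.z += self.vz * dt

def simulate(duration_s: float, thrust_acc: float):
    """Run physics at PHYSICS_HZ; 'render' by sampling state at RENDER_HZ."""
    physics = QuadrotorPhysics()
    frames = []
    dt = 1.0 / PHYSICS_HZ
    steps_per_frame = PHYSICS_HZ // RENDER_HZ
    for i in range(int(duration_s * PHYSICS_HZ)):
        physics.step(thrust_acc, dt)
        if (i + 1) % steps_per_frame == 0:
            frames.append(physics.z)   # the renderer only reads state
    return physics, frames

# Constant thrust of 2g gives a net upward acceleration of g.
physics, frames = simulate(duration_s=1.0, thrust_acc=2 * G)
```

Because the renderer only samples state, either rate can be changed (or the renderer dropped entirely for headless reinforcement learning rollouts) without touching the dynamics code, which is the trade-off Flightmare hands to the end user.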


Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

arXiv.org Artificial Intelligence

Recent successes have combined reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, Cycle-of-Learning develops an effective transition between policies learned using human demonstrations and interventions and reinforcement learning. The theoretical foundation developed by this research opens new research paths to human-agent teaming scenarios, where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
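The three interaction modalities the abstract names (demonstration, intervention, evaluation) can be illustrated with a toy bandit-style learner that each modality updates in turn. The task, actions, and update rules below are illustrative stand-ins, not the Cycle-of-Learning algorithm itself:

```python
# Toy cycle of human interaction modalities feeding one learner:
# 1) demonstration seeds the policy, 2) intervention vetoes catastrophic
# actions during exploration, 3) evaluation scores the agent's own choices.
import random

random.seed(1)
ACTIONS = ["hover", "land", "crash_dive"]
TRUE_REWARD = {"hover": 0.2, "land": 1.0, "crash_dive": -5.0}

prefs = {a: 0.0 for a in ACTIONS}       # learner's estimated action values

def greedy() -> str:
    """The agent's current policy: pick the highest-valued action."""
    return max(prefs, key=prefs.get)

# 1. Demonstration: the human shows the desired action; imitate it directly
#    by boosting its value.
for _ in range(5):
    prefs["land"] += 1.0

# 2. Intervention: the agent explores, and the human vetoes catastrophic
#    actions before they execute.
for _ in range(20):
    action = random.choice(ACTIONS)
    if action == "crash_dive":          # human steps in
        prefs[action] -= 1.0            # the veto itself is a learning signal
        action = "hover"                # safe fallback chosen by the human
    prefs[action] += 0.1 * (TRUE_REWARD[action] - prefs[action])

# 3. Evaluation: the human scores the agent's own (mostly greedy) choices,
#    driving a standard incremental value update.
for _ in range(50):
    action = greedy() if random.random() > 0.1 else random.choice(ACTIONS)
    score = TRUE_REWARD[action]         # stands in for human evaluative feedback
    prefs[action] += 0.1 * (score - prefs[action])
```

The point of the cycle is visible even in this toy: the demonstration makes the very first greedy choice sensible, the interventions keep the catastrophic action's value negative without it ever being executed, and the evaluations refine the values from the agent's own experience.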


A bald eagle takes on a government drone. The bald eagle wins

#artificialintelligence

When a bald eagle tangled unexpectedly with a government drone last month in Michigan, it won, emerging from the scene unscathed. The drone, officials say, is somewhere in Lake Michigan. The Michigan Department of Environment, Great Lakes and Energy disclosed the attack on Thursday, almost one month after the eagle sent the $950 drone into the Great Lake. The trouble began when Hunter King, an environmental quality analyst with the department, sent a drone over Michigan's Upper Peninsula to map shoreline erosion, the department said. His drone's reception started to sputter, so he commanded it to return home.


Apes Spotted Flying Drone and Smiling

#artificialintelligence

In a new short video that has surfaced on TikTok, apes have been spotted flying drones. The drone is an Autel Robotics Evo, and the apes live at Myrtle Beach Safari in South Carolina. The video was taken by photographer Nick B. and shows two apes with a drone: one stands up operating the drone's controller while the other sits beside him holding the drone's case. The video is particularly impressive as the ape seems very much in control of the drone.