AI Comes to Edge Computing

#artificialintelligence

Powerful local processors can remove the need for a device to have a cloud connection. Along the coastline of Australia's New South Wales (NSW) state hovers a fleet of drones, helping to keep the waters safe. Earlier this year, the drones helped lifeguards at the state's Far North Coast rescue two teenagers who were struggling in heavy surf. The drones are powered by artificial-intelligence (AI) and machine-vision algorithms that constantly analyze their video feeds and highlight items that need attention: say, sharks, or stray swimmers. This is the same kind of technology that enables Google Photos to sort pictures, a home security camera to detect strangers, and a smart fridge to warn you when your perishables are close to their expiration dates.
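
To make the on-device pattern concrete, here is a minimal sketch of local video analysis with a pretrained detector; the model choice, camera index, and score threshold are illustrative assumptions, not details of the NSW drone system.

```python
# Minimal sketch of on-device video analysis: a pretrained detector scans
# each camera frame locally, with no cloud connection required.
# Model, camera index, and the 0.8 threshold are illustrative assumptions.
import cv2
import torch
from torchvision.models import detection

model = detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
model.eval()  # inference only

cap = cv2.VideoCapture(0)  # local camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Convert BGR uint8 frame to a CHW float tensor in [0, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        (result,) = model([tensor])
    # Highlight anything the detector is confident about for human review
    for box, score in zip(result["boxes"], result["scores"]):
        if score > 0.8:
            x0, y0, x1, y1 = box.int().tolist()
            cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 0, 255), 2)
    cv2.imshow("edge-detector", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```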


Is AI an Existential Threat?

#artificialintelligence

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI. To understand what ANI is, you simply need to understand that every single AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty; for example, an autonomous vehicle uses AI designed with the sole purpose of moving the vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess: even if the chess program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.


A Survey of FPGA-Based Robotic Computing

arXiv.org Artificial Intelligence

Recent research on robotics has shown significant improvement, spanning from algorithms and mechanics to hardware architectures. Robots, including manipulators, legged robots, drones, and autonomous vehicles, are now widely applied in diverse scenarios. However, the high computation and data complexity of robotic algorithms pose great challenges to their application. On the one hand, the CPU platform is flexible enough to handle multiple robotic tasks, and GPU platforms offer higher computational capacity and easy-to-use development frameworks, so they have been widely adopted in several applications. On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios. With specially designed hardware logic and algorithm kernels, FPGA-based accelerators can surpass CPUs and GPUs in performance and energy efficiency. In this paper, we give an overview of previous work on FPGA-based robotic accelerators covering different stages of the robotic system pipeline. An analysis of software and hardware optimization techniques and main technical issues is presented, along with some commercial and space applications, to serve as a guide for future work. Over the last decade, we have seen significant progress in the development of robotics, spanning from algorithms and mechanics to hardware architectures. Various robotic systems, like manipulators, legged robots, unmanned aerial vehicles, and self-driving cars, have been designed for search and rescue [1], [2], exploration [3], [4], package delivery [5], entertainment [6], [7], and more applications and scenarios. These robots are on the rise and demonstrating their full potential. Take drones, a type of aerial robot, for example: the number of drones grew by 2.83x between 2015 and 2019, the number registered reached 1.32 million in 2019, and the FAA expects this number to reach 1.59 million by 2024. However, the computation and storage complexity, as well as the real-time and power constraints of robotic systems, hinder their wide application in latency-critical or power-limited scenarios [13]. Therefore, it is essential to choose a proper compute platform for the robotic system. CPUs and GPUs are two widely used commercial compute platforms. The CPU is designed to handle a wide range of tasks quickly and is often used to develop novel algorithms. A typical CPU can achieve 10-100 GFLOPS with below 1 GOP/J power efficiency [14]. In contrast, the GPU is designed with thousands of processor cores running simultaneously, which enables massive parallelism; a typical GPU can deliver up to 10 TOPS and is a good candidate for high-performance scenarios.
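
To make the platform trade-off concrete, here is a back-of-the-envelope sketch. The CPU and GPU throughput/efficiency figures are the ones quoted above; the FPGA numbers, the GPU efficiency, and the workload itself (20 GOP per frame at 30 FPS) are illustrative assumptions, not numbers from the survey.

```python
# Back-of-the-envelope platform comparison for a robotic perception workload.
# CPU/GPU throughput figures are from the survey text; the FPGA figures,
# GPU efficiency, and the workload size are illustrative assumptions.
platforms = {
    #        (throughput GOP/s, efficiency GOP/J)
    "CPU":  (100.0,    1.0),    # ~100 GFLOPS, below 1 GOP/J (from the text)
    "GPU":  (10_000.0, 50.0),   # ~10 TOPS (from the text); efficiency assumed
    "FPGA": (1_000.0,  100.0),  # assumed: lower peak, better energy efficiency
}

ops_per_frame = 20.0  # GOP per frame (assumption)
fps = 30.0            # real-time budget: 1000/30 = 33.3 ms per frame

for name, (gops, gop_per_joule) in platforms.items():
    latency_ms = ops_per_frame / gops * 1e3
    # Power needed to sustain the frame rate, if latency permits it at all
    power_w = ops_per_frame * fps / gop_per_joule
    real_time = "yes" if latency_ms <= 1e3 / fps else "no"
    print(f"{name:4s}  {latency_ms:6.1f} ms/frame  {power_w:6.1f} W  real-time: {real_time}")
```

On these assumed numbers, the CPU misses the real-time budget, the GPU meets it at roughly 12 W, and the FPGA meets it at roughly 6 W, which is exactly the latency-critical, power-limited niche the survey highlights.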


Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

arXiv.org Artificial Intelligence

Recent successes combine reinforcement learning algorithms and deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, Cycle-of-Learning develops an effective transition from policies learned using human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths to human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
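
The following is a minimal schematic of how the three modalities could be cycled within one training loop, assuming classic Gym-style environment semantics; every agent/human method name here is a hypothetical placeholder, not the authors' actual implementation.

```python
# Schematic sketch of a Cycle-of-Learning-style loop, cycling the three
# modalities named above: demonstration, intervention, and evaluation.
# All agent/human method names are hypothetical placeholders.

def cycle_of_learning(env, human, agent, n_demos=10, n_episodes=500):
    # 1) Task demonstration: seed the policy via behavior cloning
    demos = [human.demonstrate(env) for _ in range(n_demos)]
    agent.behavior_clone(demos)

    for episode in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            # 2) Intervention: a human can override catastrophic actions
            if human.wants_to_intervene(obs, action):
                action = human.correct(obs, action)
                agent.store_intervention(obs, action)
            next_obs, _, done, _ = env.step(action)
            # 3) Evaluation: sparse human feedback supervises a reward model
            feedback = human.evaluate(obs, action)  # scalar rating or None
            if feedback is not None:
                agent.update_reward_model(obs, action, feedback)
            reward = agent.reward_model(obs, action)  # learned reward signal
            agent.store_transition(obs, action, reward, next_obs, done)
            obs = next_obs
        # Blend human data into updates early on, then transition to pure RL
        # (the halfway switch here is a placeholder criterion).
        agent.update(mix_human_data=episode < n_episodes // 2)
```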


How to Improve Computer Vision in AI Drones Using Image Annotation Services?

#artificialintelligence

Autonomous flying drones use computer vision technology to hover in the air, avoiding objects while staying on the right path. Apart from security surveillance and aerial-view monitoring, AI drones are now used by online retail giant Amazon to deliver products to customers' doorsteps, revolutionizing transportation and delivery for logistics and supply chain companies. Cogito and AWS SageMaker Ground Truth have partnered to accelerate your training data pipeline and are organising a webinar on how to "Build High-Quality Training Data for Computer Vision and NLP Applications".
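
For context on what such annotation work produces, here is a minimal sketch of a COCO-style bounding-box annotation record for one aerial image; the file names, IDs, and categories are illustrative assumptions, not output of any specific service.

```python
# Minimal COCO-style annotation record for one drone image, written as a
# training-data sketch. File names, IDs, and categories are illustrative.
import json

annotation = {
    "images": [{"id": 1, "file_name": "aerial_0001.jpg", "width": 1920, "height": 1080}],
    "categories": [{"id": 1, "name": "pedestrian"}, {"id": 2, "name": "vehicle"}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 2,
            "bbox": [640, 360, 128, 96],  # COCO format: [x, y, width, height]
            "area": 128 * 96,
            "iscrowd": 0,
        }
    ],
}

with open("annotations.json", "w") as f:
    json.dump(annotation, f, indent=2)
```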


Top Computer Vision Trends for the Modern Enterprise

#artificialintelligence

The increased sophistication of artificial neural networks (ANNs), coupled with the availability of AI-powered chips, has driven an unparalleled enterprise interest in computer vision (CV). This exciting new technology will find myriad applications in several industries and, according to GlobalData forecasts, will reach a market size of $28bn by 2030. The increasing adoption of AI-powered computer vision solutions and consumer drones, along with rising Industry 4.0 adoption, will drive this phenomenal change. Deep learning has brought a new change to the role of machine vision in smart manufacturing and industrial automation: its integration enables machine vision systems to adapt themselves to manufacturing variations.


Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence

#artificialintelligence

Artificial intelligence (AI) has the potential to deliver significant social and economic benefits, including reducing accidental deaths and injuries, making new scientific discoveries, and increasing productivity.[1] However, an increasing number of activists, scholars, and pundits see AI as inherently risky, creating substantial negative impacts such as eliminating jobs, eroding personal liberties, and reducing human intelligence.[2] Some even see AI as dehumanizing, dystopian, and a threat to humanity.[3] As such, the world is dividing into two camps regarding AI: those who support the technology and those who oppose it. Unfortunately, the latter camp is increasingly dominating AI discussions, not just in the United States, but in many nations around the world. There should be no doubt that nations that tilt toward fear rather than optimism are more likely to put in place policies and practices that limit AI development and adoption, which will hurt their economic growth, social ...


Rick Mills – "The Promise of AI" Prospector News

#artificialintelligence

In 'The Terminator' series of action films starring Arnold Schwarzenegger, a cybernetic organism (cyborg) is sent from the future to kill the mother of the man who will lead the fight against Skynet, an artificial intelligence system that will cause a nuclear holocaust. Terrifying and at times comical ("I'll be back"), the Terminator cyborg was among the first presentations of artificial intelligence (AI) to a global audience. While numerous facets of AI have been developed over the past couple of decades, many with positive outcomes, the fear of AI being programmed to do something devastating to the human race, of computers "going rogue", persists. On the other hand, AI holds tremendous potential for benefiting humanity in ways we are only just starting to recognize. This article gives an overview of artificial intelligence, including some of its most interesting manifestations. The first step is defining what we mean by artificial intelligence. One definition of AI is "the simulation of human intelligence processes by machines, especially computers." Such processes include learning by acquiring information, understanding the rules around using that information, employing reasoning to reach conclusions, and self-correcting.


Deep Learning-Based Real-Time Multiple-Object Detection and Tracking via Drone

#artificialintelligence

Target tracking has been one of the most popular applications of unmanned aerial vehicles (UAVs), used in a variety of missions from intelligence gathering and surveillance to reconnaissance. Target tracking by autonomous vehicles could also prove to be a beneficial tool for the development of guidance systems: pedestrian detection, dynamic vehicle detection, and obstacle detection can all improve the features of a guidance assistance system. An aerial vehicle equipped with object recognition and tracking could play a vital role in drone navigation and obstacle avoidance, as well as in video surveillance, aerial traffic management, self-driving systems, road-condition monitoring, and emergency response. Target detection capability in drones has made stupendous progress of late; earlier drone systems mostly relied on vision-based target-finding algorithms.
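
To make the detect-then-track pipeline concrete, here is a minimal sketch of the pattern described above: a per-frame detector (stubbed out here, since any model returning [x0, y0, x1, y1] boxes would work) feeds a simple greedy IoU-based tracker that keeps object identities across frames. The class and the 0.3 threshold are illustrative assumptions, not the system described in the article.

```python
# Minimal detect-then-track sketch: per-frame detections are associated
# with existing tracks by greedy intersection-over-union (IoU) matching.

def iou(a, b):
    """Intersection-over-union of two [x0, y0, x1, y1] boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class IoUTracker:
    """Keeps object identities across frames via greedy IoU matching."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track id -> last known box
        self.next_id = 0

    def update(self, detections):
        """Match this frame's detections to tracks; return {id: box}."""
        assigned = {}
        unmatched = list(self.tracks.items())
        for box in detections:
            best = max(unmatched, key=lambda t: iou(t[1], box), default=None)
            if best and iou(best[1], box) >= self.iou_threshold:
                track_id = best[0]        # continue an existing track
                unmatched.remove(best)
            else:
                track_id = self.next_id   # start a new track
                self.next_id += 1
            assigned[track_id] = box
        self.tracks = assigned  # tracks with no match this frame are dropped
        return assigned

if __name__ == "__main__":
    tracker = IoUTracker()
    print(tracker.update([[0, 0, 10, 10]]))                    # {0: [0, 0, 10, 10]}
    print(tracker.update([[1, 1, 11, 11], [50, 50, 60, 60]]))  # ids 0 and 1
```

Production trackers add motion models and re-identification on top of this association step, but the greedy IoU matcher shows the core idea in a few lines.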