Axes for Sociotechnical Inquiry in AI Research

arXiv.org Artificial Intelligence

The development of artificial intelligence (AI) technologies has far exceeded the investigation of their relationship with society. Sociotechnical inquiry is needed to mitigate the harms of new technologies whose potential impacts remain poorly understood. To date, subfields of AI research develop primarily individual views on their relationship with sociotechnics, while tools for external investigation, comparison, and cross-pollination are lacking. In this paper, we propose four directions for inquiry into new and evolving areas of technological development: value (what progress and direction a field promotes), optimization (how the defined system within a problem formulation relates to broader dynamics), consensus (how agreement is achieved and who is included in building it), and failure (what methods are pursued when the problem specification is found wanting). The paper provides a lexicon for sociotechnical inquiry and illustrates it through the example of consumer drone technology.


Machine Learning-Based Automated Design Space Exploration for Autonomous Aerial Robots

arXiv.org Artificial Intelligence

Building domain-specific architectures for autonomous aerial robots is challenging due to the lack of a systematic methodology for designing onboard compute. We introduce a novel performance model called the F-1 roofline to help architects understand how to build a balanced computing system for autonomous aerial robots, considering both the cyber components (sensor rate, compute performance) and the physical components (body dynamics) that affect the performance of the machine. We use F-1 to characterize commonly used learning-based autonomy algorithms on onboard platforms to demonstrate the need for cyber-physical co-design. To navigate the cyber-physical design space automatically, we subsequently introduce AutoPilot, a push-button framework that automates the co-design of cyber-physical components for aerial robots from a high-level specification, guided by the F-1 model. AutoPilot uses Bayesian optimization to automatically co-design the autonomy algorithm and hardware accelerator, while considering various cyber-physical parameters, to generate an optimal design under different task-level complexities for different robots and sensor framerates. As a result, designs generated by AutoPilot lower mission time by up to 2x, on average, over baseline approaches, while conserving battery energy.
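The abstract describes co-designing the autonomy algorithm and the hardware accelerator with Bayesian optimization. As a rough illustration of that style of search (not AutoPilot itself), the sketch below runs scikit-optimize's gp_minimize over a toy cyber-physical design space; the parameter names and the mission-time model are invented for illustration.

```python
# Illustrative Bayesian-optimization co-design loop (toy model, not AutoPilot).
# Requires: pip install scikit-optimize
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical design space: accelerator parallelism, clock, and sensor rate.
space = [
    Integer(8, 256, name="num_pes"),      # processing elements in the accelerator
    Real(0.2, 1.0, name="clock_ghz"),     # accelerator clock frequency
    Integer(15, 120, name="sensor_fps"),  # camera framerate
]

def mission_time(params):
    """Toy surrogate for mission time: faster compute and sensing shorten the
    mission, but only up to the point where body dynamics dominate."""
    num_pes, clock_ghz, sensor_fps = params
    compute_latency = 1.0 / (num_pes * clock_ghz)  # seconds per frame (made up)
    sense_latency = 1.0 / sensor_fps
    # The slower of sensing and compute bounds the control-loop rate.
    loop_latency = max(compute_latency, sense_latency)
    physical_floor = 0.05  # body dynamics limit how fast the robot can react
    return 100.0 * max(loop_latency, physical_floor)

result = gp_minimize(mission_time, space, n_calls=30, random_state=0)
print("best design:", result.x, "estimated mission time:", result.fun)
```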


Top 100 Artificial Intelligence Companies in the World

#artificialintelligence

Artificial Intelligence (AI) is not just a buzzword, but a crucial part of the technology landscape. AI is changing every industry and business function, which results in increased interest in its applications, subdomains, and related fields. This makes AI companies the top leaders driving the technology shift. AI helps us to optimise and automate crucial business processes, gather essential data and transform the world, one step at a time. From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. While big enterprises are busy acquiring or merging with emerging innovators, small AI companies are also working hard to develop their own intelligent technology and services. By leveraging artificial intelligence, organizations gain an innovative edge in the digital age. AI consultancies are also working to provide companies with expertise that can help them grow. In this digital era, AI is also a significant area for investment. AI companies are constantly developing the latest products to provide the simplest solutions. Hence, Analytics Insight brings you the list of the top 100 AI companies that are leading the technology drive towards a better tomorrow. AEye develops advanced vision hardware, software, and algorithms for autonomous vehicles. An artificial perception pioneer, AEye is the creator of iDAR, a new form of intelligent data collection that acts as the eyes and visual cortex of autonomous vehicles. Since demonstrating its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. Its mission is to acquire the most information with the fewest ones and zeros, allowing AEye to drive the automotive industry into the next realm of autonomy. Algorithmia invented the AI Layer.


5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence

arXiv.org Artificial Intelligence

Due to the advancements in cellular technologies and the dense deployment of cellular infrastructure, integrating unmanned aerial vehicles (UAVs) into the fifth-generation (5G) and beyond cellular networks is a promising solution to achieve safe UAV operation as well as enabling diversified applications with mission-specific payload data delivery. In particular, 5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in three-dimensional (3D) space. On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference. Besides the requirement of high-performance wireless communications, the ability to support effective and efficient sensing as well as network intelligence is also essential for 5G-and-beyond 3D heterogeneous wireless networks with coexisting aerial and ground users. In this paper, we provide a comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques (e.g., intelligent reflecting surface, short packet transmission, energy harvesting, joint communication and radar sensing, and edge intelligence) to meet the diversified service requirements of next-generation wireless systems. Moreover, we highlight important directions for further investigation in future work.


AI Comes to Edge Computing

#artificialintelligence

Powerful local processors can remove the need for a device to have a cloud connection. Along the coastline of Australia's New South Wales (NSW) state hovers a fleet of drones, helping to keep the waters safe. Earlier this year, the drones helped lifeguards at the state's Far North Coast rescue two teenagers who were struggling in heavy surf. The drones are powered by artificial-intelligence (AI) and machine-vision algorithms that constantly analyze their video feeds and highlight items that need attention: say, sharks, or stray swimmers. This is the same kind of technology that enables Google Photos to sort pictures, a home security camera to detect strangers, and a smart fridge to warn you when your perishables are close to their expiration dates.
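To give a concrete sense of what such onboard analysis looks like, here is a minimal video-analysis loop using OpenCV's DNN module. The model file name is a placeholder and output decoding is model-specific, so treat this as a schematic sketch rather than the system the article describes.

```python
# Schematic on-device video analysis loop (illustrative; model path is a placeholder).
import cv2

# Load a detection model exported to ONNX; "detector.onnx" is hypothetical.
net = cv2.dnn.readNetFromONNX("detector.onnx")

cap = cv2.VideoCapture(0)  # onboard camera; index 0 is the default device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize the frame into the network's expected input tensor.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(640, 640))
    net.setInput(blob)
    outputs = net.forward()
    # Decoding detections (boxes, classes, scores) depends on the model's
    # output layout; a real system would threshold and flag items of interest,
    # such as sharks or swimmers in distress.
    print("raw output shape:", outputs.shape)
cap.release()
```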


Is AI an Existential Threat?

#artificialintelligence

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI. To understand what ANI is, you simply need to understand that every AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty; for example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess, and even if the chess program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.


A Survey of FPGA-Based Robotic Computing

arXiv.org Artificial Intelligence

Recent research on robotics has shown significant progress, spanning from algorithms and mechanics to hardware architectures. Robots, including manipulators, legged robots, drones, and autonomous vehicles, are now widely applied in diverse scenarios. However, the high computation and data complexity of robotic algorithms pose great challenges to their application. On the one hand, the CPU platform is flexible enough to handle multiple robotic tasks, and GPU platforms have higher computational capacity and easy-to-use development frameworks, so they have been widely adopted in several applications. On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios. With specially designed hardware logic and algorithm kernels, FPGA-based accelerators can surpass CPUs and GPUs in performance and energy efficiency. In this paper, we give an overview of previous work on FPGA-based robotic accelerators covering different stages of the robotic system pipeline. An analysis of software and hardware optimization techniques and main technical issues is presented, along with some commercial and space applications, to serve as a guide for future work. Over the last decade, we have seen significant progress in the development of robotics, spanning from algorithms and mechanics to hardware architectures. Various robotic systems, like manipulators, legged robots, unmanned aerial vehicles, and self-driving cars, have been designed for search and rescue [1], [2], exploration [3], [4], package delivery [5], entertainment [6], [7], and more applications and scenarios. These robots are on the rise of demonstrating their full potential. Take drones, a type of aerial robot, for example: the number of drones grew by 2.83x between 2015 and 2019, the registered number reached 1.32 million in 2019, and the FAA expects this number to reach 1.59 million by 2024. The computation and storage complexity, as well as the real-time and power constraints of the robotic system, hinder its wide application in latency-critical or power-limited scenarios [13]. Therefore, it is essential to choose a proper compute platform for the robotic system. CPU and GPU are two widely used commercial compute platforms. The CPU is designed to handle a wide range of tasks quickly and is often used to develop novel algorithms; a typical CPU can achieve 10-100 GFLOPS with below 1 GOP/J power efficiency [14]. In contrast, the GPU is designed with thousands of processor cores running simultaneously, which enables massive parallelism; a typical GPU can deliver up to 10 TOPS and is a good candidate for high-performance scenarios.
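To make the GFLOPS/TOPS and GOP/J figures concrete, the following back-of-the-envelope sketch checks whether a platform meets a robot's real-time and power budget. The workload and platform numbers are illustrative assumptions in the spirit of the figures above, not measurements from the survey.

```python
# Back-of-the-envelope real-time/energy check for robotic compute platforms.
# All numbers are illustrative assumptions, not figures from the survey.

WORKLOAD_GOP_PER_FRAME = 20.0  # hypothetical perception pipeline cost
TARGET_FPS = 30                # control-loop requirement

platforms = {
    # name: (sustained throughput in GOPS, efficiency in GOP/J)
    "CPU":  (100.0, 1.0),
    "GPU":  (10_000.0, 50.0),
    "FPGA": (1_000.0, 200.0),
}

for name, (gops, gop_per_joule) in platforms.items():
    fps = gops / WORKLOAD_GOP_PER_FRAME  # achievable frames per second
    watts = gops / gop_per_joule         # power draw at full utilization
    verdict = "meets" if fps >= TARGET_FPS else "misses"
    print(f"{name}: {fps:.0f} FPS at ~{watts:.0f} W, "
          f"{verdict} the {TARGET_FPS} FPS target")
```

Under these assumed numbers, the CPU misses the framerate target, the GPU meets it at roughly 200 W, and the FPGA meets it at a few watts, which is the latency/power trade-off the survey highlights.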


Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

arXiv.org Artificial Intelligence

Recent successes have combined reinforcement learning algorithms and deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, Cycle-of-Learning develops an effective transition between policies learned from human demonstrations and interventions and reinforcement learning. The theoretical foundation developed by this research opens new research paths toward human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
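The abstract names three interaction modalities, demonstration, intervention, and evaluation, that are cycled into the learning loop. The skeleton below shows one plausible way to wire those modalities together; the environment, human callbacks, and data collection are stubs invented for illustration, not the authors' Cycle-of-Learning implementation.

```python
# Schematic human-in-the-loop RL skeleton (stubs are illustrative only).
import random

def human_demonstration(state):
    """Stub: a human teammate's action for this state."""
    return random.choice([0, 1])

def human_wants_to_intervene(state, action):
    """Stub: whether the human overrides a potentially catastrophic action."""
    return random.random() < 0.05

def human_evaluation(trajectory):
    """Stub: scalar feedback on how well the episode went."""
    return random.uniform(0.0, 1.0)

def policy(state):
    """Stub learned policy."""
    return random.choice([0, 1])

def env_step(state, action):
    """Stub environment transition: returns (next_state, done)."""
    return state + 1, state >= 10

demonstrations, interventions, evaluations = [], [], []

for episode in range(3):
    state, done, trajectory = 0, False, []
    while not done:
        if episode == 0:
            # Early on, learn from task demonstrations.
            action = human_demonstration(state)
            demonstrations.append((state, action))
        else:
            action = policy(state)
            if human_wants_to_intervene(state, action):
                # A human override becomes a corrective training sample.
                action = human_demonstration(state)
                interventions.append((state, action))
        state, done = env_step(state, action)
        trajectory.append((state, action))
    # Episode-level evaluations can train a learned reward model.
    evaluations.append((trajectory, human_evaluation(trajectory)))

print(len(demonstrations), "demos,", len(interventions), "interventions,",
      len(evaluations), "evaluations collected")
```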


How to Improve Computer Vision in AI Drones Using Image Annotation Services?

#artificialintelligence

Autonomous flying drones use computer vision technology to hover in the air, avoiding objects while staying on the right path. Apart from security surveillance and aerial-view monitoring, AI drones are now used by online retail giant Amazon to deliver products to customers' doorsteps, revolutionizing transportation and delivery for logistics and supply chain companies. Cogito and AWS SageMaker Ground Truth have partnered to accelerate your training data pipeline and are organising a webinar on how to "Build High-Quality Training Data for Computer Vision and NLP Applications".
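For readers unfamiliar with what image annotation actually produces, here is a minimal example of a COCO-style bounding-box record, the kind of labeled training data such services deliver for drone vision models. The file name, categories, and coordinates are made up for illustration.

```python
# Minimal COCO-style annotation record (all values invented for illustration).
import json

dataset = {
    "images": [
        {"id": 1, "file_name": "drone_frame_0001.jpg", "width": 1920, "height": 1080},
    ],
    "categories": [
        {"id": 1, "name": "pedestrian"},
        {"id": 2, "name": "vehicle"},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, per the COCO convention.
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [412, 230, 64, 158]},
        {"id": 2, "image_id": 1, "category_id": 2, "bbox": [901, 540, 220, 120]},
    ],
}

print(json.dumps(dataset, indent=2))
```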