Results


Top 100 Artificial Intelligence Companies in the World

#artificialintelligence

Artificial Intelligence (AI) is not just a buzzword, but a crucial part of the technology landscape. AI is changing every industry and business function, which has driven growing interest in its applications, subdomains and related fields, making AI companies the leaders of this technology shift. AI helps us optimise and automate crucial business processes, gather essential data and transform the world, one step at a time. From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. While big enterprises are busy acquiring or merging with emerging innovators, small AI companies are also working hard to develop their own intelligent technology and services. By leveraging artificial intelligence, organizations gain an innovative edge in the digital age. AI consultancies are also working to provide companies with the expertise that can help them grow. In this digital era, AI is also a significant area for investment, and AI companies are constantly developing new products that offer the simplest solutions. Hence, Analytics Insight brings you the list of the top 100 AI companies that are leading the technology drive towards a better tomorrow.

AEye develops advanced vision hardware, software, and algorithms that act as the eyes and visual cortex of autonomous vehicles. AEye is an artificial perception pioneer and creator of iDAR, a new form of intelligent data collection that acts as the eyes and visual cortex of autonomous vehicles. Since demonstrating its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. Its mission is to acquire the most information with the fewest ones and zeros, allowing AEye to drive the automotive industry into the next realm of autonomy. Algorithmia invented the AI Layer.


Deep Learning based Multi-Modal Sensing for Tracking and State Extraction of Small Quadcopters

arXiv.org Artificial Intelligence

This paper proposes a multi-sensor approach to detect, track, and localize a quadcopter unmanned aerial vehicle (UAV). Specifically, a pipeline is developed to process monocular RGB and thermal video (captured from a fixed platform) to detect and track the UAV in our FoV. Subsequently, a 2D planar lidar is used to allow conversion of pixel data to actual distance measurements, and thereby enable localization of the UAV in global coordinates. The monocular data is processed through a deep learning-based object detection method that computes an initial bounding box for the UAV. The thermal data is processed through a thresholding and Kalman filter approach to detect and track the bounding box. Training and testing data are prepared by combining a set of original experiments conducted in a motion capture environment with publicly available UAV image data. The new pipeline compares favorably to existing methods and demonstrates promising tracking and localization capability in sample experiments.
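
The thermal branch above pairs simple thresholding with a Kalman filter to keep the bounding box locked onto the UAV between detections. Below is a minimal sketch of such a tracker using a constant-velocity model; the state layout, noise levels, and frame rate are illustrative assumptions rather than the paper's implementation.

# Minimal constant-velocity Kalman filter for tracking the centroid of a
# thermal bounding box, in the spirit of the "thresholding and Kalman filter"
# step described above. Matrices and noise levels are illustrative
# assumptions, not the paper's parameters.
import numpy as np

class CentroidKalmanTracker:
    def __init__(self, dt=1.0 / 30.0, process_var=1.0, meas_var=4.0):
        # State: [x, y, vx, vy]; measurement: [x, y] centroid in pixels.
        self.x = np.zeros(4)
        self.P = np.eye(4) * 500.0           # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt     # constant-velocity motion model
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                    # predicted centroid

    def update(self, z):
        # z: centroid measured by thresholding the current thermal frame.
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                    # filtered centroid

A typical loop calls predict() on every frame and update() only when thresholding yields a detection, which keeps the track alive through brief dropouts.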


Search and Rescue with Airborne Optical Sectioning

arXiv.org Machine Learning

We show that automated person detection under occlusion conditions can be significantly improved by combining multi-perspective images before classification. Here, we employed image integration by Airborne Optical Sectioning (AOS), a synthetic aperture imaging technique that uses camera drones to capture unstructured thermal light fields, to achieve this with a precision/recall of 96/93%. Finding lost or injured people in dense forests is not generally feasible with thermal recordings alone, but becomes practical with the use of AOS integral images. Our findings lay the foundation for effective future search and rescue technologies that can be applied in combination with autonomous or manned aircraft. They can also be beneficial for other fields that currently suffer from inaccurate classification of partially occluded people, animals, or objects.
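
AOS improves detection by combining many single-perspective thermal views into one integral image before the classifier runs. The sketch below illustrates only that integration step, under the simplifying assumption that the frames have already been registered to a common synthetic focal plane; the actual AOS registration, which depends on drone poses and scene geometry, and the person classifier itself are not shown.

# Illustrative integration step: average thermal frames that are already
# registered to a common synthetic focal plane, producing an integral image
# in which occluders blur out while the person signal accumulates.
import numpy as np

def integrate_registered_frames(frames):
    """frames: list of HxW float arrays, all registered to the same plane."""
    stack = np.stack(frames).astype(np.float32)
    return stack.mean(axis=0)

def classify_integral_image(integral, classifier):
    # `classifier` stands in for any person detector; the key point is that
    # it runs on the combined integral image, not on the single views.
    return classifier(integral)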


How to Improve Computer Vision in AI Drones Using Image Annotation Services?

#artificialintelligence

Autonomous flying drones use computer vision technology to hover in the air, avoid objects, and stay on the right path. Apart from security surveillance and aerial-view monitoring, AI drones are now used by online retail giant Amazon to deliver products to the customer's doorstep, revolutionizing the transportation and delivery systems of logistics and supply chain companies. Cogito and AWS SageMaker Ground Truth have partnered to accelerate your training data pipeline. We are organising a webinar to help you "Build High-Quality Training Data for Computer Vision and NLP Applications". After registering, you will receive a confirmation email containing information about joining the webinar.
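
Annotation services of the kind described above typically deliver training data as labeled bounding boxes attached to each image. The snippet below sketches a single COCO-style annotation file for one drone frame; the file names, IDs, and category labels are invented for illustration.

# A minimal, COCO-style bounding-box annotation for one drone image, the
# kind of record annotation services produce as detector training data.
# Field names follow the common COCO convention; all values are made up.
import json

annotation_file = {
    "images": [{"id": 1, "file_name": "aerial_0001.jpg", "width": 1920, "height": 1080}],
    "categories": [{"id": 1, "name": "obstacle"}, {"id": 2, "name": "person"}],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, per the COCO convention.
        {"id": 10, "image_id": 1, "category_id": 2, "bbox": [640, 410, 55, 120]},
    ],
}

with open("train_annotations.json", "w") as f:
    json.dump(annotation_file, f, indent=2)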


Top Computer Vision Trends for the Modern Enterprise

#artificialintelligence

The increased sophistication of artificial neural networks (ANNs), coupled with the availability of AI-powered chips, has driven an unparalleled enterprise interest in computer vision (CV). This exciting new technology will find myriad applications in several industries, and according to GlobalData forecasts, it will reach a market size of $28bn by 2030. The increasing adoption of AI-powered computer vision solutions, consumer drones, and rising Industry 4.0 adoption will drive this phenomenal change. Deep learning has brought a new change to the role of machine vision in smart manufacturing and industrial automation. The integration of deep learning enables machine vision systems to adapt themselves to manufacturing variations.


Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

arXiv.org Machine Learning

Abstract-- In this work, we propose a robust network-in-the-loop control system that allows an Unmanned Aerial Vehicle (UAV) to navigate and land autonomously on a desired target. To estimate the global pose of the aerial vehicle, we develop a deep neural network architecture for visual-inertial odometry, which provides a robust alternative to traditional techniques for autonomous navigation of UAVs. We first provide experimental results on the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in the accuracy of the pose estimation of up to 25% over the baseline. Second, we use AirSim, a simulator available as a plugin for Unreal Engine, to create new datasets of photorealistic images and inertial measurements to train and test our model. We finally integrate the proposed architecture for global localization with the AirSim closed-loop control system, and we provide simulation results for the autonomous landing of the aerial vehicle.

I. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) can provide significant support for many applications, such as rescue operations, environmental monitoring, package delivery, and surveillance. To guarantee a high safety level in UAV operation, it is crucial to have continuous monitoring of the state of the vehicle. Currently, the most standard techniques deployed for pose estimation are Visual-Inertial Odometry (VIO) [1, 2] and Simultaneous Localization and Mapping (SLAM) [3-5].
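
The abstract describes a deep network that regresses the vehicle's global pose from camera and IMU data. As a rough illustration of what such a visual-inertial pose regressor can look like, here is a minimal sketch; it is not the paper's architecture, and the layer sizes, inputs, and output parameterization (position plus quaternion) are assumptions.

# Minimal visual-inertial pose-regression sketch: a CNN encodes the image,
# a recurrent branch encodes the IMU window, and a fusion head regresses a
# 7-D pose (xyz + unit quaternion). NOT the paper's architecture; all layer
# sizes and input shapes are assumptions.
import torch
import torch.nn as nn

class VIONet(nn.Module):
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # image branch
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.imu_rnn = nn.LSTM(imu_dim, hidden, batch_first=True)  # IMU branch
        self.head = nn.Sequential(
            nn.Linear(32 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),                       # xyz + quaternion
        )

    def forward(self, image, imu_seq):
        # image: (B, 3, H, W); imu_seq: (B, T, 6) accelerometer + gyro samples.
        img_feat = self.cnn(image)
        _, (h, _) = self.imu_rnn(imu_seq)
        fused = torch.cat([img_feat, h[-1]], dim=1)
        pose = self.head(fused)
        quat = nn.functional.normalize(pose[:, 3:], dim=1)  # keep a valid rotation
        return torch.cat([pose[:, :3], quat], dim=1)

Such a model would be trained with a pose-regression loss on sequences like the EuRoC MAV recordings or AirSim renderings mentioned above.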


Thumb PC uses Google software to give computer vision to robots and drones

#artificialintelligence

A new USB stick computer uses Google's machine-learning software to give drones and robots the equivalent of a human eye, and to add new smarts to cameras. Rather than serving as a general-purpose computer, it is designed to analyze pixels and provide the right context for images. Fathom provides the much-needed horsepower for devices like drones, robots and cameras to run computer vision applications such as image recognition; on their own, these devices typically don't have the processing power to do so. Fathom uses an embedded version of Google's TensorFlow machine-learning software for vision processing.
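
Since the stick runs an embedded version of TensorFlow, the workload it accelerates is ordinary image-recognition inference. The sketch below shows that kind of inference with standard TensorFlow on a host machine; the model choice (MobileNetV2) and file names are assumptions, and converting or deploying a model to the stick itself relies on vendor tooling that is not shown.

# Plain TensorFlow image classification of the kind a vision accelerator
# offloads from drones, robots, and cameras. Model and file names are
# illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path):
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # Top-3 human-readable labels for the captured frame.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]

print(classify("frame_0001.jpg"))  # hypothetical frame from a drone camera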