Argo AI Model Jointly Plots Vehicle Pose and Shape From LiDAR Data

#artificialintelligence

Any good driver who is about to change lanes knows it's important to glance over their shoulder to ensure there are no vehicles in their blind spot, and such real-time awareness of nearby vehicles is no less critical for autonomous driving systems. That's why self-driving technologies rely on a robust perception backbone that is expected to identify all relevant agents in the environment, including accurate "pose and shape" estimation of other vehicles sharing the road. Autonomous vehicle systems have evolved their own digital approach to shoulder-checking, leveraging data from one of their most common sensing modalities, LiDAR. Now, a team of researchers from Pittsburgh-based autonomous vehicle technology company Argo AI, Microsoft, and CMU has introduced a novel network architecture for jointly estimating the shape and pose of vehicles even from partial LiDAR observations. Existing state-of-the-art methods for pose and shape prediction typically first estimate the pose of an unaligned partial point cloud, then apply that pose to the partial input before estimating the shape. However, in this two-stage design (an encoder with a pose decoder followed by an encoder with a shape decoder), any error in the pose network's output is carried into the shape stage, ultimately degrading completion performance.
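
The two-stage baseline the article critiques is easy to picture in code. Below is a minimal, hypothetical PyTorch sketch of such a sequential pose-then-shape pipeline; the module names, feature sizes, and pose parameterization are illustrative assumptions, not Argo AI's actual architecture.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Toy PointNet-style encoder: a shared per-point MLP followed by a max pool."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):                        # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values     # global feature: (B, feat_dim)

class SequentialPoseShape(nn.Module):
    """Hypothetical two-stage baseline: estimate the pose first, re-align the
    partial cloud with that estimate, then estimate the complete shape.
    Any heading or translation error baked into the aligned cloud is invisible
    to the shape stage, which is the error-propagation problem described above."""
    def __init__(self, num_shape_pts=1024):
        super().__init__()
        self.pose_encoder = PointEncoder()
        self.pose_head = nn.Linear(256, 4)         # heading as (sin, cos) + x/y offset
        self.shape_encoder = PointEncoder()
        self.shape_head = nn.Linear(256, num_shape_pts * 3)

    def forward(self, partial):                    # partial: (B, N, 3)
        pose = self.pose_head(self.pose_encoder(partial))
        sin_t, cos_t = pose[:, 0:1], pose[:, 1:2]
        rot = torch.stack([
            torch.cat([cos_t, -sin_t], dim=1),
            torch.cat([sin_t,  cos_t], dim=1),
        ], dim=1)                                  # (B, 2, 2) rotation from the *estimated* heading
        xy = torch.bmm(partial[:, :, :2], rot.transpose(1, 2)) - pose[:, 2:4].unsqueeze(1)
        aligned = torch.cat([xy, partial[:, :, 2:]], dim=2)
        shape = self.shape_head(self.shape_encoder(aligned))
        return pose, shape.view(partial.size(0), -1, 3)

# Usage: a batch of 2 partial clouds with 128 points each.
model = SequentialPoseShape()
pose, shape = model(torch.randn(2, 128, 3))
print(pose.shape, shape.shape)                     # torch.Size([2, 4]) torch.Size([2, 1024, 3])
```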


LiDAR explained: What this laser tech can do for your new iPhone

Mashable

Another September means another new iPhone launch. Naturally, Apple's probably got all kinds of weird new features cooked up for its flagship device. Rumor has it that LiDAR integration is just one of the things we can expect from the theoretical iPhone 12 when it comes out later this year. "Hold on," you must be thinking. "What the heck is LiDAR?"


Lidar sensor manufacturer Ouster raises $42 million

#artificialintelligence

Ouster, a San Francisco-based lidar startup that launched out of stealth in December 2017, today secured $42 million in funding, bringing its total raised to $140 million. Cofounder and CEO Angus Pacala says the fresh capital will be used to fund product development and support sales internationally. Lidar sensors are at the core of autonomous vehicle systems like those from Waymo, Uber, Aurora, and Cruise. These sensors measure the distance to objects by illuminating them with light and measuring the reflected pulses, and their use cases extend beyond the automotive sector. Lidar sensors are often tapped for obstacle detection and mapping in mining robots, atmospheric studies for space, forestry management, wind farm optimization, speed limit enforcement, and even video games.
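
As a quick illustration of the time-of-flight principle mentioned above, the range to an object is half the pulse's round-trip time multiplied by the speed of light. The snippet below is a minimal sketch; the example timing value is made up for illustration.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_seconds: float) -> float:
    """Range from a single lidar pulse: the light travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 667 nanoseconds corresponds to roughly 100 m.
print(f"{tof_range_m(667e-9):.1f} m")
```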


Ibeo's LiDAR systems to provide higher autonomy to autonomous vehicles - Geospatial World

#artificialintelligence

Germany's Ibeo Automotive Systems, which specializes in lidar systems for autonomous driving, has signed a contract to provide China's Great Wall Motor Company (GWM) with its latest solid-state design. Ibeo said that it has commissioned key partner ZF Friedrichshafen – which in 2016 acquired a major stake in Ibeo – to produce the sensors and control unit for the "Level 3" system, which will provide partial autonomy. GWM has contracted one of its own subsidiaries to develop the system, which will be based around vertical cavity surface-emitting lasers (VCSELs) produced by Austria's AMS. Ibeo points out that, after signing a letter of intent in 2019, it has already been in pre-development with GWM for a year. Officially, the project started with the signing of an additional contract by the two parties last month.


Artificial Intelligence and Machine Learning – Path to Intelligent Automation

#artificialintelligence

With evolving technologies, intelligent automation has become a top priority for many executives in 2020. Forrester predicts the industry will continue to grow from $250 million in 2016 to $12 billion in 2023. As more companies identify and implement Artificial Intelligence (AI) and Machine Learning (ML), the enterprise is gradually being reshaped. Industries across the globe integrate AI and ML into their businesses to enable swift changes to key processes like marketing, customer relationship management, product development, production and distribution, quality checks, order fulfilment, resource management, and much more. AI encompasses a wide range of technologies, such as machine learning, deep learning (DL), optical character recognition (OCR), natural language processing (NLP), voice recognition, and so on, which, when combined with robotics, create intelligent automation for organizations across multiple industrial domains.


Computer Vision: Python OCR & Object Detection Quick Starter

#artificialintelligence

This is the third course in my Computer Vision series. Image Recognition, Object Detection, Object Recognition, and Optical Character Recognition are among the most widely used applications of Computer Vision. Using these techniques, the computer can recognize and classify either the whole image or multiple objects inside a single image, predicting the class of each object along with a percentage accuracy score. Using OCR, it can also recognize text in images and convert it to a machine-readable format such as plain text or a document. Object Detection and Object Recognition are widely used in many simple applications as well as complex ones like self-driving cars.
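
To make the OCR use case described above concrete, here is a minimal sketch using the open-source pytesseract wrapper around the Tesseract engine (not necessarily the tooling the course uses); the image path is a placeholder, and the Tesseract binary must be installed separately.

```python
from PIL import Image
import pytesseract  # pip install pytesseract; requires the Tesseract binary on the system

def image_to_text(path: str) -> str:
    """Run OCR on an image file and return the recognized text."""
    return pytesseract.image_to_string(Image.open(path))

if __name__ == "__main__":
    # "scanned_page.png" is a placeholder filename for illustration.
    print(image_to_text("scanned_page.png"))
```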


Intro To Computer Vision - Classification

#artificialintelligence

Thanks to advancements in deep learning & artificial neural networks, computer vision is increasingly capable of mimicking human vision & is paving the way for self-driving cars, medical diagnosis, scanning recorded surveillance, manufacturing & much more. In this introductory workshop, Sage Elliott will give an overview of deep learning as it relates to computer vision, with a focused discussion around image classification. You will also learn about careers in computer vision & who some of the biggest users of this technology are. About Your Instructor: Sage Elliott is a Machine Learning Developer Evangelist for Sixgill with about 10 years of experience in the engineering space. He has a passion for exploring new technologies & building communities.
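
For a concrete feel of the image-classification workflow such a workshop covers, here is a minimal sketch using a pretrained ResNet from torchvision; the model choice, ImageNet weights, and image filename are illustrative assumptions, not material from the workshop itself.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Load a ResNet-18 pretrained on ImageNet and switch to inference mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(path: str) -> int:
    """Return the index of the most likely ImageNet class for an image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return int(logits.argmax(dim=1))

# "street_scene.jpg" is a placeholder filename.
print(classify("street_scene.jpg"))
```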


PictoBlox AI: Artificial Intelligence (AI) and Machine Learning for Kids

#artificialintelligence

Artificial intelligence (AI) and machine learning (ML) have turned out to be the frontrunners of the fourth Industrial Revolution, with technologies such as face detection systems, self-driving cars, and virtual assistants influencing our lives more than we realize. As a result, it has become important for kids of the 21st century to understand AI and ML, since they are the future leaders and innovators of our world and these technologies are going to become a huge part of their lives. Over the last year, PictoBlox has helped hundreds of thousands of children learn to code the fun way, a must-have skill for living in and leading the 21st century. Now, PictoBlox is here with its latest version to teach artificial intelligence and machine learning to kids in an interactive and playful manner. Find out more about PictoBlox AI's services for kids to learn AI and machine learning in a fun way HERE.


Cheaper lidar systems aid mass adoption of driverless cars

#artificialintelligence

Light Detection And Ranging (lidar) systems enable vehicles to 'see' in real time by mapping three-dimensional images of their surroundings. The systems use large, rotating mirrors that reflect laser beams off surrounding objects. A University of Colorado Boulder team has been working on a different way of steering these laser beams, called wavelength steering. This technique involves pointing each wavelength of laser light to a unique angle. This allows for a lidar system that is far less bulky and expensive, and can easily be made smaller than current devices.
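
The wavelength-steering idea can be illustrated with a toy model: if the optics send each laser wavelength to a distinct angle, sweeping the wavelength sweeps the beam with no moving mirror. The linear dispersion constant and wavelengths below are made-up illustrative values, not figures from the Colorado Boulder work.

```python
import numpy as np

# Hypothetical linear dispersion: degrees of beam deflection per nanometre of
# wavelength change around a centre wavelength. Purely illustrative numbers.
CENTER_WAVELENGTH_NM = 1550.0
DEG_PER_NM = 0.5

def steering_angle_deg(wavelength_nm: float) -> float:
    """Map a laser wavelength to a beam angle under a toy linear-dispersion model."""
    return DEG_PER_NM * (wavelength_nm - CENTER_WAVELENGTH_NM)

# Sweeping the wavelength +/- 20 nm sweeps the beam across +/- 10 degrees.
for wl in np.linspace(1530, 1570, 5):
    print(f"{wl:.0f} nm -> {steering_angle_deg(wl):+.1f} deg")
```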


Learning local and compositional representations for zero-shot learning - Microsoft Research

#artificialintelligence

In computer vision, one key property we expect of an intelligent artificial model, agent, or algorithm is that it should be able to correctly recognize the type, or class, of objects it encounters. This is critical in numerous important real-world scenarios: from biomedicine, where an intelligent system might be tasked with distinguishing between cancerous cells and healthy ones, to self-driving cars, where being able to discriminate between pedestrians, other vehicles, and road signs is crucial to successfully and safely navigating roads. Deep learning is one of the most significant tools for state-of-the-art systems in computer vision, and its use has resulted in models that have reached or can even exceed human-level performance in important and challenging real-world image classification tasks. Despite their successes, these models still have difficulty generalizing, or adapting to tasks in testing or deployment scenarios that don't closely resemble the tasks they were trained on. For example, a visual system trained under typical weather conditions in Northern California may fail to properly recognize pedestrians in Quebec because of differences in weather, clothes, demographics, and other features.
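
Zero-shot recognition, the topic of the work above, is commonly framed as scoring how compatible an image embedding is with a class description (for example an attribute vector), so that classes never seen during training can still be ranked. The sketch below shows that compatibility-scoring idea in its simplest linear form; it is a generic illustration with made-up attributes and a random compatibility matrix, not the local-and-compositional method from the Microsoft Research paper.

```python
import numpy as np

def zero_shot_predict(image_feat, class_attributes, W):
    """Score each class by the compatibility image_feat . (W @ attributes) and
    return the best-matching class name, which may be a class never seen during
    training as long as its attribute description is known."""
    scores = {name: image_feat @ (W @ attrs)
              for name, attrs in class_attributes.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
d_img, d_attr = 8, 4
W = rng.normal(size=(d_img, d_attr))           # learned from seen classes in practice; random here
classes = {                                    # toy attribute descriptions
    "pedestrian": np.array([1.0, 0.0, 1.0, 0.0]),
    "vehicle":    np.array([0.0, 1.0, 0.0, 1.0]),
    "road_sign":  np.array([1.0, 1.0, 0.0, 0.0]),
}
image_feat = rng.normal(size=d_img)
print(zero_shot_predict(image_feat, classes, W))
```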