These days, machine learning and computer vision are all the rage. We've all seen the news about self-driving cars and facial recognition, and probably imagined how cool it'd be to build our own computer vision models. However, it's not always easy to break into the field, especially without a strong math background. Libraries like PyTorch and TensorFlow can be tedious to learn if all you want to do is experiment with something small. In this tutorial, I present a simple way for anyone to build a fully functional object detection model with just a few lines of code.
Self-driving cars, home automation, virtual assistants…it's clear we've already seen some outstanding technological advances and are on the brink of more significant breakthroughs. Alain Fiocco, CTO of OVHcloud, calls 2020 "a new era" for technology. But with so many advances underway, which will pull ahead in 2020? Here is a breakdown of the top five telecom trends to watch in the year ahead. Right now, the world runs on 4G, also known as LTE.
Hardly a month goes by without another devastating earthquake somewhere in the world reminding us that we all remain at the mercy of major seismic events that strike without warning. But a new branch of geophysics powered by machine learning is uncovering fresh insights into the earth's slipping faults that often trigger these catastrophic earthquakes. Machine learning, which often goes by the catchier moniker of artificial intelligence, has captured the public's imagination with its promises of fully autonomous cars and the approaching "singularity" when machines out-think people. The current state of the art, however, shows few signs of true intelligence, such as the ability to abstract the principles behind a given phenomenon. In image recognition, AI systems learn by rote memorization to identify objects and are, therefore, often fooled.
Car companies have been feverishly working to improve the technologies behind self-driving cars. But so far even the most high-tech vehicles still fail when it comes to safely navigating in rain and snow. This is because these weather conditions wreak havoc on the most common approaches for sensing, which usually involve either lidar sensors or cameras. In the snow, for example, cameras can no longer recognize lane markings and traffic signs, while the laser pulses of lidar sensors are scattered by falling snow and rain. MIT researchers have recently been wondering whether an entirely different approach might work.
Pony.ai, a self-driving startup based in Silicon Valley and Guangzhou, China, is deepening its ties to Toyota. The two companies announced a pilot program to test self-driving cars on public roads in two Chinese cities, Beijing and Shanghai. The Japanese auto giant plans to invest $400 million in Pony.ai, valuing the startup at $3 billion. Pony.ai has been working with Toyota since 2019 on public autonomous vehicle testing. With this new investment, their relationship will become even closer, with the automaker and the startup "co-developing" mobility products and services.
By 2030, a tenth of vehicles worldwide will be self-driving, and the market for fully automated cars is expected to be worth $13.7bn by then, according to the latest DossierPlus report from Statista. The analyst's study said that after billions of miles of tests in simulations and on public roads, self-driving cars are beginning to leave the test tracks. Autonomous driving has come a long way since Waymo (previously named the Google Self-Driving Car Project) started testing self-driving cars. The report noted that digital taxi firm Uber has invested more than $1bn over three years on self-driving cars. Statista also observed that overall automotive startup funding had increased ten-fold over the past five years, reaching a record-breaking $27.5bn in 2018, the year General Motors subsidiary Cruise received $3.4bn in funding.
HONG KONG/BEIJING – Autonomous driving firm Pony.ai said on Wednesday it has raised $462 million in its latest funding round, led by an investment from Japan's largest automaker, Toyota Motor Corp. Toyota invested around $400 million (¥44.2 billion) in the round, Pony.ai said in a statement, marking its biggest investment in a Chinese-backed autonomous driving company. The latest fundraising values the three-year-old firm, already backed by Sequoia Capital China and Beijing Kunlun Tech Co, at slightly more than $3 billion. The investment by Toyota comes at a time when global carmakers, technology firms, startups and investors -- including Tesla, Alphabet Inc's Waymo and Uber -- are pouring capital into developing self-driving vehicles. Over the past two years, 323 deals related to autonomous cars raised a total of $14.6 billion worldwide, according to data provider PitchBook, even amid concerns about the technology given its high cost and complexity. The Silicon Valley-based startup Pony.ai -- co-founded by CEO James Peng, a former executive at China's Baidu, and chief technology officer Lou Tiancheng, a former Google and Baidu engineer -- is already testing autonomous vehicles in California, Beijing and Guangzhou.
Southwest Research Institute, a leading innovator of machine learning technologies, has developed a motion prediction system that enhances pedestrian detection for automated vehicles. The computer vision tool uses a novel deep learning algorithm to predict motion by observing real-time biomechanical movements, with the pelvic area being a key indicator of changes in direction. "For instance, if a pedestrian is walking west, the system can predict if that person will suddenly turn south," said SwRI's Samuel E. Slocum, a senior research analyst who led the internally funded project. "As the push for automated vehicles accelerates, this research offers several important safety features to help protect pedestrians." Recent accidents involving automated vehicles have heightened the call for improved detection of pedestrians and other moving obstacles.
Researchers from MIT have developed a new self-driving car system capable of navigating in low-visibility settings, including fog and snow. The system relies on Localizing Ground Penetrating Radar (LGPR), which uses electromagnetic pulses to take readings of the shape and composition of the road directly below and around the car. Other self-driving car systems use a combination of lidar, radar, and cameras to develop a real-time topographical model of where the car is in space. These systems are generally reliable but are vulnerable to visual tricks like fake road signs and lane markers, and can become significantly less reliable in bad weather. The LGPR system aims to address these vulnerabilities by focusing on the road itself rather than the open space in front of the car.
The simplest way to think about artificial intelligence is in the context of a human: it describes systems that work intelligently and independently, perform autonomously in complex environments, and adapt to those environments by learning. From Siri to self-driving cars, AI has taken the world by storm and has the potential to disrupt nearly every sector one can think of, boosting efficiency in an increasingly competitive market.