By the end of 2018, something will be very different about the harbor area in the northern Chinese city of Caofeidian. If you were to visit, whirring cranes and tractor-trucks shuttling containers to and fro would be the only things in sight -- no human operators anywhere. Caofeidian is set to become the world's first fully autonomous harbor by the end of the year. The US-Chinese startup TuSimple, a specialist in self-driving trucks, will replace human-driven terminal tractor-trucks with 20 autonomous models. A separate company handles crane automation, and a central control system will coordinate the movements of both.
AI Clouds: Assembling cloud-based services like Lego blocks via developer kits, large general-purpose AI companies are enabling developers to deploy algorithms through SDKs on their cloud-hosted platforms. From Microsoft's Azure AI platform to Amazon's AWS AI offerings, these organizations provide the pre-trained models, GPUs and storage needed for more effective continuous deployment, testing and quality assurance (QA). AI Languages: Beyond software applications that onboard users onto AI platforms, companies are standardizing new languages to encourage developers to keep building with their libraries. Uber's AI Labs, for example, released Pyro, a probabilistic programming language built on Python. Wit.ai is another platform developers can use to build cross-device applications.
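For a sense of what a probabilistic programming language automates, the toy below expresses the core workflow Pyro supports -- write a generative model, then infer a latent variable from observed data -- in plain Python using brute-force rejection sampling. Pyro itself builds on PyTorch and ships real inference engines; nothing below uses Pyro's actual API, and the coin-flip model is invented for illustration:

```python
import random

def model(rng):
    # Generative model. Latent variable: is the coin biased?
    # Observation: number of heads in 10 flips.
    biased = rng.random() < 0.5
    p = 0.8 if biased else 0.5
    heads = sum(rng.random() < p for _ in range(10))
    return biased, heads

def infer_p_biased(observed_heads, draws=20000, seed=0):
    """Rejection sampling: run the model many times, keep only the
    runs that reproduce the observation, and report how often the
    latent variable was True among the kept runs."""
    rng = random.Random(seed)
    kept = [b for b, h in (model(rng) for _ in range(draws))
            if h == observed_heads]
    return sum(kept) / len(kept)

# Seeing 9 heads out of 10 puts most posterior mass on "biased"
print(round(infer_p_biased(9), 2))
```

A language like Pyro lets you write the model once and swap in scalable inference algorithms (variational inference, MCMC) instead of this brute-force loop.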
I originally wrote and published a version of this article in September 2016. Since then, quite a bit has happened, further cementing my view that these changes are coming and that the implications will be even more substantial, so I decided it was time to update the article with some additional ideas and a few changes. As I write this, Uber has just announced an order for 24,000 self-driving Volvos. Tesla just released an electric, long-haul tractor trailer with extraordinary technical specs (range, performance) and self-driving capabilities -- UPS just preordered 125! And Tesla just announced what will probably be the quickest production car ever made -- perhaps the fastest. It will go zero to sixty in about the time it takes you to read "zero to sixty." And, of course, it will be able to drive itself. The future is quickly becoming now.
For toddlers, playing with toys is not all fun and games -- it's an important way for them to learn how the world works. Using a similar methodology, researchers from UC Berkeley have developed a robot that, like a child, learns from scratch and experiments with objects to figure out how best to move them around. In doing so, the robot is essentially able to see into its own future. The robotic learning system, developed at Berkeley's Department of Electrical Engineering and Computer Sciences, visualizes the consequences of its future actions to discover ways of moving objects through time and space. Called Vestri, and built on a technology called visual foresight, the system can manipulate objects it has never encountered before, and can even avoid obstacles that might be in the way.
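The imagine-then-act loop behind visual foresight can be caricatured in a few lines: predict the outcome of many candidate action sequences with an internal model, then execute the sequence whose predicted future lands closest to the goal. The sketch below substitutes a trivial hand-written dynamics model for the learned video-prediction model, so everything here (the `predict` function, the goal coordinates, the candidate count) is illustrative, not Berkeley's actual code:

```python
import random

def predict(state, action):
    # Toy stand-in for the robot's "imagination": where does the
    # object end up after this push? (Vestri predicts video frames.)
    return (state[0] + action[0], state[1] + action[1])

def plan(state, goal, horizon=3, candidates=200, rng=None):
    """Random-shooting planner: imagine many action sequences and
    keep the one whose predicted outcome lands nearest the goal."""
    rng = rng or random.Random(0)
    best_seq, best_dist = None, float("inf")
    for _ in range(candidates):
        seq = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
               for _ in range(horizon)]
        s = state
        for a in seq:
            s = predict(s, a)  # roll the model forward in imagination
        dist = ((s[0] - goal[0]) ** 2 + (s[1] - goal[1]) ** 2) ** 0.5
        if dist < best_dist:
            best_seq, best_dist = seq, dist
    return best_seq, best_dist

seq, dist = plan(state=(0.0, 0.0), goal=(2.0, 1.5))
print(f"best predicted miss distance: {dist:.3f}")
```

In the real system the "imagination" is a learned video-prediction network and the search is more sophisticated, but the structure -- simulate futures, score them, act -- is the same.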
In a previous post, we studied various open datasets that could be used to train a model for pixel-wise semantic segmentation of urban scenes. Here, we look at deep learning architectures that cater specifically to time-sensitive domains like autonomous vehicles. In recent years, deep learning has surpassed traditional computer vision algorithms by learning a hierarchy of features from the training data itself. This eliminates the need for hand-crafted features, so such techniques are being extensively explored in both academia and industry. Before deep learning, semantic segmentation models relied on hand-crafted features fed into classifiers such as Random Forests and SVMs.
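To make the contrast concrete, here is a stdlib-only toy of that pre-deep-learning pipeline: a hand-crafted per-pixel feature (intensity plus 3x3 neighbourhood mean) fed to a simple classifier -- a nearest-centroid stand-in for the Random Forests and SVMs mentioned above. The image, features and labels are invented for illustration; a deep network would instead learn the features directly from the data:

```python
def pixel_features(img, r, c):
    # Hand-crafted feature: pixel value plus the mean of its 3x3
    # neighbourhood, a crude smoothness/texture cue.
    h, w = len(img), len(img[0])
    nbhd = [img[i][j] for i in range(max(0, r - 1), min(h, r + 2))
                      for j in range(max(0, c - 1), min(w, c + 2))]
    return (img[r][c], sum(nbhd) / len(nbhd))

def train_centroids(samples):
    # samples: list of (feature_vector, label); average features per class.
    sums, counts = {}, {}
    for f, y in samples:
        s = sums.setdefault(y, [0.0] * len(f))
        for k, v in enumerate(f):
            s[k] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(v / counts[y] for v in s) for y, s in sums.items()}

def classify(f, centroids):
    # Assign the pixel to the class with the nearest feature centroid.
    return min(centroids, key=lambda y: sum((a - b) ** 2
               for a, b in zip(f, centroids[y])))

# Toy 4x4 "image": bright region (0.9, "road") vs dark (0.1, "background")
img = [[0.9, 0.9, 0.1, 0.1] for _ in range(4)]
train = [(pixel_features(img, 0, 0), "road"),
         (pixel_features(img, 0, 3), "background")]
centroids = train_centroids(train)
mask = [[classify(pixel_features(img, r, c), centroids) for c in range(4)]
        for r in range(4)]
print(mask[0])  # ['road', 'road', 'background', 'background']
```

The weakness of this approach is exactly what the paragraph notes: the features are fixed by hand, so the classifier can only ever be as good as the features someone thought to engineer.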
With an operating footprint of up to 50km from the mining pit to iron ore carriers, it was easy for Citic Pacific Mining, Australia's largest magnetite mining company, to lose track of its assets, such as light vehicles, buses and service trucks.
KITCHENER, Ontario--(BUSINESS WIRE)--Miovision, a global leader in smart city technology, today launched the next generation of its traffic technology. Using a type of artificial intelligence (AI) called deep learning, Miovision SmartSense brings AI to the roadside to help cities sense and understand what's happening at the intersection in real time. SmartSense can detect the presence and movement of vehicles, pedestrians and cyclists and use this data to reduce congestion and improve safety. The new Miovision SmartSense technology completes the company's TrafficLink solution, which also includes a 360-degree fisheye camera and an IoT-connected hub that lets traffic professionals remotely access the intersection. Together, these components make up a powerful AI toolkit that uncovers insights about the intersection.
It's okay for me, since I did not put that much effort into it; I feel bad for the other students, though. There are two electives: semantic segmentation and functional safety. Functional safety is interesting, but I chose semantic segmentation because it is a coding project, whereas the functional safety project is to write a document. I still learned about the concept of functional safety, and about the frameworks used to ensure vehicles are safe at both the system and component levels.
Major developments in current technology are now opening up the marketplace for secure data, providing a vast, untapped store of opportunity for both enterprises and individuals. New business models based on blockchain will enable us to take advantage of technological growth in transport. One of the biggest changes we are likely to see in the next era is an enormous increase in the number of connected devices on which our lives will depend. The IoT space is gearing up for colossal expansion and will touch numerous facets of our lives. We are likely to see over 20 billion connected devices, from traffic lights to cash machines, self-serve kiosks in coffee shops and retail outlets, and sensors and robots on production-line floors.
Driverless cars are already ferrying passengers around Las Vegas and other limited areas, and automakers expect models to hit showroom floors in just a few years. But what are the implications of this gigantic shift for our roads and for society? Self-driving cars have been involved in accidents, and there will almost certainly be more going forward. By 2030, however, self-driving cars should be able to demonstrate a tangible reduction in road deaths, among both passengers and pedestrians. Their real-time intelligence will detect potential collisions faster and more reliably than any human can.
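One concrete building block behind that collision-detection claim is time-to-collision (TTC): the gap to the vehicle ahead divided by the closing speed, a quantity a computer can recompute many times per second. The sketch below is a deliberately simplified constant-velocity version; the 2-second braking threshold is an illustrative assumption, not a production value:

```python
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Constant-velocity time-to-collision: gap / closing speed.
    Returns None when the gap is opening (no collision predicted)."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return gap_m / closing

def should_brake(gap_m, own_speed_mps, lead_speed_mps, threshold_s=2.0):
    # threshold_s is an illustrative cutoff, not a production parameter
    ttc = time_to_collision(gap_m, own_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < threshold_s

# Closing at 10 m/s on a car 15 m ahead -> TTC = 1.5 s, so brake
print(should_brake(15.0, 25.0, 15.0))  # True
```

Real systems fuse radar, lidar and camera estimates and model acceleration rather than constant speeds, but the advantage over a human is the same: this arithmetic runs continuously, with no reaction-time lag.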