Robotics & Automation


Cabinet paves way for self-driving vehicles on Japan's roads next year with new rules

The Japan Times

The Cabinet on Friday approved rules for operating partially self-driving vehicles, paving the way for the use of autonomous vehicles on public roads. Autonomous driving technology is classified into five levels, ranging from Level 1, which allows either steering, acceleration or braking to be automated, to fully automated Level 5. The government plans to enforce an ordinance defining violations and setting penalties by May next year as it envisions the use of Level 3 vehicles, which allow conditionally automated driving, on expressways in 2020. The newly approved penalties apply to the inappropriate use of Level 3 autonomous driving technologies, which require users to switch to manual operation when preset conditions regarding road type, driving speed, weather, time of day and other factors are no longer met. Violators of the ordinance will face fines of up to ¥12,000 ($110), depending on vehicle size.


Neuromodulated Patience for Robot and Self-Driving Vehicle Navigation

#artificialintelligence

Robots and self-driving vehicles face a number of challenges when navigating through real environments. Successful navigation in dynamic environments requires prioritizing subtasks and monitoring resources. Animals are under similar constraints. It has been shown that the neuromodulator serotonin regulates impulsiveness and patience in animals. In the present paper, we take inspiration from the serotonergic system and apply it to the task of robot navigation.
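To make that idea concrete, here is a minimal sketch of how a serotonin-like patience signal might gate a navigation decision: patience decays while a subgoal is blocked, recovers when progress resumes, and the robot replans once it drops below a threshold. The class name, rates, and threshold are illustrative assumptions, not the model from the paper.

# Illustrative sketch (not the paper's model): a serotonin-like "patience"
# level that decays while progress is blocked and recovers otherwise.

class PatienceController:
    def __init__(self, decay=0.05, recovery=0.2, threshold=0.3):
        self.level = 1.0          # current patience, in [0, 1]
        self.decay = decay        # loss per step while blocked
        self.recovery = recovery  # gain per step while making progress
        self.threshold = threshold

    def update(self, making_progress):
        """Update patience and return the navigation decision for this step."""
        if making_progress:
            self.level = min(1.0, self.level + self.recovery)
        else:
            self.level = max(0.0, self.level - self.decay)
        return "continue" if self.level >= self.threshold else "replan"

# Example: an obstacle blocks the path for many steps, so patience runs out.
controller = PatienceController()
for step in range(20):
    if controller.update(making_progress=False) == "replan":
        print(f"Replanning at step {step}, patience={controller.level:.2f}")
        break

In this toy version, lowering the decay rate plays the role of raising the "serotonin" level and makes the robot more willing to wait out transient obstacles.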


What Are The Risks And Benefits Of Artificial Intelligence?

#artificialintelligence

What are the risks and benefits of artificial intelligence? It's a complicated topic, but I'll try to unpack a few key points here. Let's start with a quick definition: AI is the simulation of human intelligence by machines. Examples of AI systems used regularly in developed countries include Amazon's Alexa, smart replies in Gmail, chatbots, predictive searches in Google, and recommendation systems. At a baseline level, AI helps improve our everyday lives by solving pain points, streamlining processes, and advancing human knowledge.


Man made software in his own image

#artificialintelligence

In 2002, a couple of Japanese visitors to Australia swapped passports with each other before walking through an automatic biometric border control gate being tested at Sydney airport. The facial recognition algorithm falsely matched each of them to the other's passport photo. These gentlemen were in fact part of an international aviation industry study group and were in the habit of trying to fool biometric systems then being trialed around the world. When I heard about this successful prank, I quipped that the algorithms were probably written by white people - because we think all Asians look the same. Colleagues thought I was making a typical sick joke, but actually I was half-serious.


Teaching a self-driving car the emergency stop is harder than it seems

#artificialintelligence

Much self-driving-car research focuses on pedestrian safety, but it is important to consider passenger safety and comfort, too. When braking to avoid a collision, for example, a vehicle should ideally ease, not slam, into a stop. In machine-learning parlance, this idea constitutes a multi-objective problem. Objective one: spare the pedestrian. Objective two: keep the stop gentle enough to spare the passengers. Researchers at Ryerson University in Toronto took on this challenge with deep reinforcement learning.
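One common way to cast such a multi-objective problem for reinforcement learning is to fold the objectives into a single reward with tunable weights. The sketch below illustrates only that general idea, not the Ryerson team's formulation; the weights, thresholds, and function names are hypothetical.

# Hypothetical reward sketch for learning to brake: weights and thresholds
# are illustrative, not the published formulation.

def braking_reward(deceleration_mps2, collided, stopped_safely,
                   w_safety=1.0, w_comfort=0.1):
    """Scalarize the two objectives: avoid the pedestrian, brake smoothly."""
    # Objective one (safety): a collision dominates everything else.
    safety = -100.0 if collided else (10.0 if stopped_safely else 0.0)
    # Objective two (comfort): quadratic penalty on braking harder than ~2 m/s^2,
    # so the learned policy prefers easing into a stop over slamming the brakes.
    comfort = -max(0.0, abs(deceleration_mps2) - 2.0) ** 2
    return w_safety * safety + w_comfort * comfort

# A gentle stop scores better than a hard one, as long as neither collides.
print(braking_reward(2.0, collided=False, stopped_safely=True))   # 10.0
print(braking_reward(8.0, collided=False, stopped_safely=True))   # 6.4
print(braking_reward(1.0, collided=True, stopped_safely=False))   # -100.0

Tuning the comfort weight trades off stopping distance against passenger comfort, which is exactly the tension the article describes.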


THE SEDUCTIVE BUSINESS LOGIC OF ALGORITHMS

#artificialintelligence

Certain machine behaviors never cease to amaze me. I'm astounded by their ability to learn from their accomplishments and from their interactions with us humans. Unfortunately, many business managers still think of artificial intelligence (AI) and machine learning algorithms as something that will be impossible for them to understand. But I believe that knowing the fundamental principles that underlie the new technologies behind autonomous vehicles, shopping recommendation engines, Alexa and the rest can boost managers' confidence in them and help them make their companies more innovative. The two key drivers of major smart technologies today are machine learning and deep learning.


Ceva launches AI processor architecture and API for edge computer vision applications

#artificialintelligence

Wireless connectivity and smart sensing technology provider Ceva has introduced its second-generation AI processor architecture, NeuPro-S, to support deep neural network inferencing at the network edge. The company also launched the CDNN-Invite API, which provides deep neural network compiler technology to support heterogeneous co-processing of NeuPro-S cores with custom neural network engines in run-time firmware for unified neural network optimization. The company says the API represents an industry first and is ideal for vision-based devices requiring edge AI processing, such as autonomous vehicles, smartphones, surveillance and consumer cameras, AR and VR headsets, and robotics. NeuPro-S is designed to optimize segmentation, detection, and classification of objects from edge device input with neural network processing. Ceva says it delivers significant performance improvements with system-aware enhancements, such as support for multi-level memory systems, multiple weight compression options, and heterogeneous scalability to enable various combinations of AI engines in a single unified architecture.


Papers in Production Lightning Talks

#artificialintelligence

Shoup: I'm going to share very little of my personal knowledge, in fact, none of it, but I'm going to talk about a cool paper that I really like. Then Gwen [Shapira] is going to talk about another cool paper and Roland [Meertens] is going to talk about yet another cool paper. The one I want to talk about is a paper that's around using machine learning to do database indexing better. This is a picture of my bookshelf at home. A while ago, I bought myself a box set of "The Art of Computer Programming", which has basically all the algorithms of computer science, written or assembled by Don Knuth. There's a volume 4A, so he's still working on completing the thing; hopefully, that will happen.

When we're choosing a data structure, typically we're choosing it in this way: we're looking for time complexity, how fast is it going to run, and space complexity, how big is it going to be. We typically evaluate those things asymptotically; we're not looking as much at real-world workloads, but at the complexity characteristics of this thing at the limit, when things get very large. We're also, and this is critical, looking at those things without having seen the data and without having seen, typically, the usage pattern. What we're doing is saying: what is the least worst time and space complexity, given an arbitrary data distribution and an arbitrary usage pattern? It seems like we could do a little better than that; that's what this paper is about. What we'd like to be able to ask, or to be able to answer, is: how could we achieve the best time/space complexity given a specific real-world data distribution and a specific real-world usage pattern?
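The paper Shoup is describing proposes learned index structures: instead of a generic B-tree, you fit a model that predicts where a key sits in the sorted data and then correct the prediction within a known error bound. The toy sketch below uses a single linear model purely for illustration; the paper itself builds a recursive hierarchy of models, and none of the names here come from it.

# Toy learned-index sketch: fit a linear model mapping key -> position in a
# sorted array, then correct the prediction with a bounded local search.
# A stand-in for the paper's learned models, which form a recursive hierarchy.

import bisect

class LearnedIndex:
    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # "Train" the model: position ~= slope * key + intercept.
        low_key, high_key = sorted_keys[0], sorted_keys[-1]
        self.slope = (n - 1) / (high_key - low_key) if high_key != low_key else 0.0
        self.intercept = -self.slope * low_key
        # Record the worst-case prediction error so lookups stay exact.
        self.max_err = max(abs(self._predict(k) - i)
                           for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        """Return the position of key, or -1 if it is absent."""
        guess = self._predict(key)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        pos = bisect.bisect_left(self.keys, key, lo, hi)
        return pos if pos < len(self.keys) and self.keys[pos] == key else -1

keys = sorted(range(0, 1_000_000, 7))   # ~143,000 evenly spaced keys
index = LearnedIndex(keys)
print(index.lookup(700))   # 100: the model lands almost exactly on the right slot
print(index.lookup(701))   # -1: not present

Because the worst-case prediction error is recorded at build time, the bounded search still returns exact answers even when the model misses; on well-behaved data the search window stays tiny, which is where the time and space savings the talk alludes to come from.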