'Robotaxis' roaming around Yokohama are winning over unlikely fans

The Japan Times

Nissan Motor Co. is carrying out Japan's largest demonstration to date of autonomous vehicles in Yokohama as the automaker moves toward rolling out a commercial automated service. Although regulations and the need for further technology improvements mean a full-scale launch is still some years away, Nissan has been testing the self-driving taxi service near its corporate headquarters. Passengers can book rides via a smartphone app that covers some 650 routes and can embark and disembark at 23 points around the city. For now, the autonomous taxis have operators sitting ready to take over in case the vehicles' sensors encounter a situation that requires human assistance. The goal is to eventually have these "safety drivers" monitor a fleet of taxis remotely.


Trends To Watch In 5G Connected Vehicles

#artificialintelligence

We all know 5G networks are here. Although autonomous vehicles get a substantial amount of attention, it is connectivity, delivered over high-speed, low-latency cellular networks, that is rapidly changing this industry through connected cars. As vehicles continue to advance, they are becoming more software-defined, automated, and electrified. In-vehicle connectivity is now considered a requirement from the outset. The transition to intelligent, connected vehicles is well underway, and the smartphone will act as a personal assistant, providing music, the latest reports, and the maps drivers need.


Machine Learning Explained.

#artificialintelligence

Machine learning is a branch of artificial intelligence that focuses on the use of algorithms to make decisions. These algorithms are trained on historical data and, based on what they infer from that data, can make predictions, classifications, and numerous other decisions, all without being explicitly programmed to do so. While the term machine learning has only recently become a buzzword, it dates back to 1959, when it was coined by Arthur Samuel, a pioneer in the field. One of the earliest applications was in checkers, in which self-proclaimed checkers master Robert Nealey lost to a program running on an IBM 7094. From those humble beginnings, technological developments in storage and processing power have enabled more powerful and widespread applications of machine learning, such as Amazon's recommendation engine and Google's self-driving cars.
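
To make the train-then-predict pattern described above concrete, here is a minimal Python sketch using scikit-learn's LogisticRegression. The maintenance-style dataset, feature meanings, and labels are purely hypothetical illustrations, not drawn from any system mentioned in these articles.

```python
# Minimal sketch of training on historical data and then predicting on new data.
# The dataset below is hypothetical; a real application would supply its own
# features and labels.
from sklearn.linear_model import LogisticRegression

# Historical observations: [hours_of_use, num_errors] -> 1 = needed service
X_train = [[10, 0], [200, 3], [15, 1], [300, 7], [50, 0], [250, 5]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # learn a decision rule from the data

# The trained model now classifies an unseen case without being explicitly
# programmed with a rule for it.
print(model.predict([[180, 4]]))     # e.g. -> [1] (likely needs service)
```

The point of the sketch is only the workflow: fit on past examples, then generalize to new ones.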


Machines that see the world more like humans do

#artificialintelligence

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it. Move that computer vision system to a self-driving car and the stakes become much higher -- for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street. To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects. The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene.
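
To illustrate the cross-checking idea in general terms, here is a minimal, hypothetical Python sketch of probabilistic-programming-style scene analysis. It is not MIT's actual framework: the scene representation, the toy "render" step, and the scoring are all invented for illustration. Candidate scenes are proposed, each is used to predict an observation, and the candidate that best explains the camera data is kept.

```python
import random

def render(scene):
    """Predict a crude 1-D 'observation' from a candidate scene
    (a stand-in for rendering an image from hypothesized objects)."""
    return [sum(obj["height"] for obj in scene if abs(obj["x"] - x) < 0.5)
            for x in range(5)]

def likelihood(observed, predicted, noise=0.5):
    """Gaussian-style log score: how well does the candidate explain the data?"""
    return sum(-((o - p) ** 2) / (2 * noise ** 2)
               for o, p in zip(observed, predicted))

def propose_scene():
    """Sample a random candidate scene containing one to three objects."""
    return [{"x": random.uniform(0, 4), "height": random.uniform(0.1, 1.0)}
            for _ in range(random.randint(1, 3))]

observed = [0.0, 0.8, 0.0, 0.3, 0.0]  # hypothetical stand-in for camera input

# Cross-check many candidate scenes against the input data; keep the one
# whose predicted observation is the most likely match.
best = max((propose_scene() for _ in range(2000)),
           key=lambda s: likelihood(observed, render(s)))
print(best)
```

Real probabilistic-programming systems replace the random search with proper inference and the toy renderer with a richer generative model, but the underlying logic of scoring candidate scenes against observed images is the same.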


DeepMind makes bet on AI system that can play poker, chess, Go, and more

#artificialintelligence

DeepMind, the AI lab backed by Google parent company Alphabet, has long invested in game-playing AI systems. It's the lab's philosophy that games, while lacking an obvious commercial application, are uniquely relevant tests of cognitive and reasoning capabilities, which makes them useful benchmarks of AI progress. In recent decades, games have given rise to the kind of self-learning AI that now powers computer vision, self-driving cars, and natural language processing. In a continuation of that work, DeepMind has created a system called Player of Games, which the company first revealed in a research paper published on the preprint server arXiv.org this week.


Feds talking to Tesla about video game feature that you can use while driving

FOX News

Tesla claims the Model S Plaid is the world's quickest car and that the $129,990 sedan can accelerate to 60 mph in less than two seconds. The National Highway Traffic Safety Administration (NHTSA) has reached out to Tesla to discuss a new feature that allows passengers to play certain video games on the central touchscreen display while the car is in drive. Tesla's infotainment system can play games and videos. The capability became available earlier this year and was demonstrated by owners in numerous YouTube videos before being highlighted in a New York Times article published on Tuesday. The latest Model S also has an entertainment screen for rear-seat passengers.


How AIOps is charting paths to fully autonomous networks

#artificialintelligence

AIOps (AI for IT operations) adoption is on the rise as organizations invest in AI to make their IT operations smarter, faster, and more secure. Those who have adopted AIOps view the technology as no longer a nice-to-have but a necessity in the post-pandemic, work-from-home era, in which IT leaders must manage third-party cloud applications used by devices and remote workers scattered across numerous locations. The insights come from the recently published State of AIOps Study, conducted by ZK Research and sponsored by Masergy, a software-defined wide-area networking (SD-WAN) services company. In August 2021, ZK Research surveyed more than 500 IT decision-makers in the U.S. across seven industries. IT decision-makers believe AIOps offers their organizations several business benefits, including improved productivity, cloud application performance, and security.


Autonomous driving startup Deeproute.ai prices L4 solution at $10,000 – TechCrunch

#artificialintelligence

Deeproute.ai, an autonomous vehicle startup with offices in Shenzhen and Fremont, California, unveiled an ambitious self-driving solution on Wednesday. The package, named DeepRoute-Driver 2.0, is a production-ready Level 4 system that costs approximately $10,000. The price is strikingly low given the hardware used: five solid-state lidar sensors, eight cameras, a proprietary computing system, and an optional millimeter-wave radar. Lidar accounts for roughly half of the total cost, a Deeproute spokesperson told TechCrunch. "As the whole supply chain is getting more developed and scale[s] up, we can expect the cost can go further down."


Stellantis' AI strategy targets $22.6b in revenues by 2030

#artificialintelligence

Carmaker Stellantis announced a strategy Tuesday to embed AI-enabled software in 34 million vehicles across its 14 brands, targeting 20 billion euros ($22.6 billion) in annual revenues by 2030. CEO Carlos Tavares heralded the move as part of a strategy to transform the car company into a "sustainable mobility tech company," with business growth coming from over-the-air features and services. The plan includes key partnerships with BMW on autonomous driving, with iPhone manufacturer Foxconn on customized cockpits, and with Waymo to expand their autonomous driving partnership into light commercial vehicle delivery fleets. Stellantis' embrace of AI and expansion of software-enabled vehicles is part of a broad transformation in the auto industry, with a race toward more fully electric and hybrid powertrains, more autonomous driving features, and increased connectivity in automobiles. Stellantis, which was formed from the combination of PSA Peugeot and FCA Fiat Chrysler, said the software would integrate seamlessly into customers' lives, with live updates providing upgraded services over time.