Top tweets: NASA's Modular Robotic Vehicle (MRV) and more

#artificialintelligence

Verdict lists five of the top tweets on robotics in Q1 2022, ranked by total engagements (likes and retweets) on tweets from more than 380 robotics experts tracked by GlobalData's Technology Influencer platform during the first quarter (Q1) of 2022. Massimo, an engineer, shared an article on the NASA Johnson Space Center building the MRV in collaboration with an automotive partner. The fully electric vehicle is regarded as suitable for busy urban environments, large resort areas, and industrial complexes, the article detailed. The MRV has no mechanical connections to the steering, propulsion, or brake actuators.


Federal report on self-driving car crashes is important but incomplete

#artificialintelligence

Earlier this month, the National Highway Traffic Safety Administration (NHTSA) released a report documenting crashes involving cars with automated driving components. The report looked at data on Automated Driving Systems (commonly referred to as "self-driving cars") and Advanced Driver Assistance Systems (cars equipped with lane-keeping technology and adaptive cruise control, such as Tesla's Autopilot). The New York Times covered the report's release. A quick scroll through Twitter showed that the public was divided: Is this technology something to praise, or something to fear? Ultimately, the NHTSA report, while an essential first step, doesn't give a clear picture of whether self-driving cars will prevent crashes when they arrive in the future.


New Method Helps Self-Driving Cars Create 'Memories'

#artificialintelligence

A team of researchers at Cornell University has developed a new method enabling autonomous vehicles to create "memories" of previous experiences, which can then be used in future navigation. This will be especially useful when these self-driving cars can't rely on their sensors in bad weather. Current self-driving cars that use artificial neural networks have no memory of the past, meaning they are constantly "seeing" things for the first time, regardless of how many times they've driven the exact same road. Kilian Weinberger, a professor of computer science, is the senior author of the research.
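
As a rough illustration of the underlying idea, the sketch below caches features from earlier traversals of a road, keyed by map location, and fuses them with the current scan so a detector can draw on past observations when the live view is degraded. The class names, cell size, and feature shapes are illustrative assumptions, not Cornell's actual implementation.

```python
# Illustrative sketch: give a perception stack a "memory" of past drives by
# caching features per map cell and fusing them with the current features.
# All names and shapes here are assumptions for illustration only.
from collections import defaultdict
from typing import Optional

import numpy as np


class DriveMemory:
    """Caches feature vectors from past traversals, keyed by coarse map cell."""

    def __init__(self, cell_size_m: float = 5.0):
        self.cell_size_m = cell_size_m
        self.bank = defaultdict(list)  # (cell_x, cell_y) -> past feature vectors

    def _cell(self, x: float, y: float):
        return (int(x // self.cell_size_m), int(y // self.cell_size_m))

    def store(self, x: float, y: float, features: np.ndarray) -> None:
        # Record what was observed at (x, y) on this drive, for future drives.
        self.bank[self._cell(x, y)].append(features)

    def recall(self, x: float, y: float) -> Optional[np.ndarray]:
        # Aggregate what past drives saw near (x, y), if anything.
        past = self.bank.get(self._cell(x, y))
        return np.mean(past, axis=0) if past else None


def fuse_with_memory(current: np.ndarray, memory: DriveMemory,
                     x: float, y: float) -> np.ndarray:
    # Concatenate live features with remembered ones so a downstream detector
    # can lean on past observations when the current view is poor (rain, snow).
    remembered = memory.recall(x, y)
    if remembered is None:
        remembered = np.zeros_like(current)
    return np.concatenate([current, remembered])


# Usage: store features on a clear-weather drive, recall them on a later one.
memory = DriveMemory()
memory.store(12.0, 40.0, np.random.rand(16))
fused = fuse_with_memory(np.random.rand(16), memory, 12.0, 40.0)
```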


This self-driving car remembers the past using neural networks - Dataconomy

#artificialintelligence

Researchers at Cornell University have developed a technique that helps self-driving cars recall past events and use them as references while navigating, especially in bad weather when the vehicle's sensors cannot be trusted. The artificial neural networks typically used in cars retain no memory of the past and effectively see the world for the first time on every drive, with no recollection of previous trips down the same road. This is not the only recent initiative in self-driving development; roboticists recently pushed an off-road car to its limits to gather data for self-driving ATVs. The researchers have written three concurrent papers to address this problem. Two will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), which will take place June 19-24 in New Orleans.


Sweden's Einride to Test Autonomous Trucks on U.S. Roads

WSJ.com: WSJD - Technology

Swedish autonomous-truck startup Einride AB will test its self-driving freight vehicles on public roads in the U.S. in an operation with GE Appliances after getting approval from federal regulators. Einride plans to put one of its chunky electric vehicles, which have no cabs for drivers, on a one-mile stretch of road between two warehouses in Tennessee for GE Appliances, a subsidiary of home appliances company Haier. "This is a step-by-step approach, and this is a major step forward, in that it's actually now on public roads," said Robert Falck, chief executive of the six-year-old Stockholm-based company. Einride is joining a growing field of autonomous-truck startups in the race to get their technology on the road and bring in revenue. Companies including San Diego-based TuSimple Holdings Inc., Pittsburgh-based Aurora Innovation Inc., and Waymo LLC, a division of Google parent Alphabet Inc., have announced tests of their driverless-truck technology in commercial operations carrying freight.


Partner Content

#artificialintelligence

Three years ago, a Georgia Tech study uncovered a major flaw in self-driving vehicles: they find it much harder to see darker-skinned pedestrians. The researchers were testing how accurately the vehicles' artificial intelligence–based object detection models detected pedestrians of different races. But no matter what variables they changed -- how big the person was in the image, whether they were partially blocked, what time of day it was -- the imbalance remained, raising fears that in real-life applications, racialized people could be at higher risk of being hit by a self-driving car. It's just one of far too many examples showing how AI can be biased and, as a result, harm already-marginalized groups. "Think of something like melanoma detection," says Shingai Manjengwa, director of technical education at the Vector Institute for Artificial Intelligence.
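
The measurement behind such a finding is straightforward to state: compute the detection rate separately for each annotated group and compare, while holding the other variables fixed. Below is a hedged sketch of that comparison; the group labels, field names, and example numbers are hypothetical stand-ins, not the study's data or code.

```python
# Hypothetical per-group evaluation of a pedestrian detector, in the spirit of
# the study described above. Labels and numbers are illustrative only.
from dataclasses import dataclass


@dataclass
class Record:
    group: str       # annotated skin-tone category (hypothetical labels)
    detected: bool   # did the detector find this ground-truth pedestrian?


def recall_by_group(records):
    """Fraction of ground-truth pedestrians detected, broken out by group."""
    totals, hits = {}, {}
    for r in records:
        totals[r.group] = totals.get(r.group, 0) + 1
        hits[r.group] = hits.get(r.group, 0) + int(r.detected)
    return {g: hits[g] / totals[g] for g in totals}


# Purely illustrative numbers, not the study's results: a gap that persists
# across image size, occlusion, and time of day is the kind of imbalance reported.
records = ([Record("lighter-skinned", True)] * 90 + [Record("lighter-skinned", False)] * 10
           + [Record("darker-skinned", True)] * 85 + [Record("darker-skinned", False)] * 15)
print(recall_by_group(records))  # {'lighter-skinned': 0.9, 'darker-skinned': 0.85}
```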


Researchers release open-source photorealistic simulator for autonomous driving

Robohub

VISTA 2.0 is an open-source simulation engine that can make realistic environments for training and testing self-driving cars. Hyper-realistic virtual worlds have been heralded as the best driving schools for autonomous vehicles (AVs), since they've proven to be fruitful test beds for safely trying out dangerous driving scenarios. Tesla, Waymo, and other self-driving companies all rely heavily on data to power expensive, proprietary photorealistic simulators, since nuanced "I almost crashed" data is usually neither easy nor desirable to gather and recreate in the real world. To that end, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created "VISTA 2.0," a data-driven simulation engine in which vehicles can learn to drive in the real world and recover from near-crash scenarios. What's more, all of the code is being open-sourced to the public.


Researchers release open-source photorealistic simulator for autonomous driving

#artificialintelligence

VISTA 2.0 builds on the team's previous model, VISTA, and it is fundamentally different from existing AV simulators in that it is data-driven -- meaning it was built and photorealistically rendered from real-world data -- thereby enabling direct transfer to reality. While the initial iteration supported only single-car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized. Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data. "This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity," says Alexander Amini, CSAIL PhD student and co-lead author on two new papers, together with fellow PhD student Tsun-Hsuan Wang.
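
To make the closed-loop, data-driven idea concrete, the sketch below shows the general shape of training a driving policy inside a simulator that re-renders sensor views from logged real-world drives as the agent deviates from the recorded trajectory. The DataDrivenSimulator and Policy classes are hypothetical stand-ins for illustration, not the actual open-source VISTA 2.0 API.

```python
# Generic closed-loop training loop against a data-driven simulator.
# These classes are illustrative placeholders, not the real VISTA 2.0 interface.
import random


class DataDrivenSimulator:
    """Stands in for an engine that replays logged drives and synthesizes new
    viewpoints when the agent strays from the recorded path."""

    def reset(self):
        self.t = 0
        return {"camera": [0.0] * 8}              # placeholder rendered observation

    def step(self, steering: float):
        self.t += 1
        off_road = abs(steering) > 0.9            # toy failure condition
        obs = {"camera": [random.random() for _ in range(8)]}
        reward = -1.0 if off_road else 1.0
        done = off_road or self.t >= 100
        return obs, reward, done


class Policy:
    def act(self, obs):
        return random.uniform(-1.0, 1.0)          # stand-in for a learned controller

    def update(self, transition):
        pass                                      # stand-in for a gradient step


sim, policy = DataDrivenSimulator(), Policy()
for episode in range(5):
    obs, done, total = sim.reset(), False, 0.0
    while not done:
        action = policy.act(obs)
        next_obs, reward, done = sim.step(action)
        policy.update((obs, action, reward, next_obs))
        obs, total = next_obs, total + reward
    print(f"episode {episode}: return {total:.1f}")
```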


I am not a robot: iOS verification update marks end of 'captchas'

The Guardian

An annoyance, an important security feature, an uncomfortable existential request: however you feel about being asked to prove you are not a robot, it has become a daily occurrence for most of us, but perhaps not one we would miss. A new feature in the upcoming versions of iOS and macOS, Apple's operating systems for iPhones and computers, promises to give the boot to "captchas" once and for all. Called "automatic verification", the technology will allow sites to verify you are not a robot without you having to do anything at all. Captchas – that's "Completely Automated Public Turing test to tell Computers and Humans Apart" – are the little tests you sometimes see when signing up to a website to help stop fraud. It may ask you to spot all the traffic lights in a picture, or type out some wobbly looking letters and numbers. If you get it wrong, it may ask you to start again, leading you to wonder if you really know what a traffic light looks like – or if you might be a robot after all.
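
At the level described here, the server-side logic amounts to: if the request carries a valid, privacy-preserving token attesting that a trusted party (such as the device's operating system) has already checked for a human, skip the captcha; otherwise fall back to a conventional challenge. Below is a hedged sketch of that flow; the header format, token check, and endpoint are assumptions for illustration, not Apple's or any standard's exact mechanism.

```python
# Hypothetical sketch of "automatic verification" on the server side: accept a
# signed token in place of a captcha, fall back to a captcha otherwise.
# The header name and verify_private_token() are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)


def verify_private_token(token: str) -> bool:
    """Placeholder: a real deployment would validate the token's signature with
    a Privacy Pass-style library and the issuer's public key, not this stub."""
    return token.startswith("valid-")  # stand-in check for illustration only


@app.route("/signup", methods=["POST"])
def signup():
    auth = request.headers.get("Authorization", "")
    if auth.startswith("PrivateToken ") and verify_private_token(auth.split(" ", 1)[1]):
        # Token checks out: the client already proved "not a robot" automatically.
        return jsonify(status="ok", captcha_required=False)
    # No token (or an invalid one): serve a conventional captcha challenge.
    return jsonify(status="challenge", captcha_required=True), 401
```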