puddle


Off-Road LiDAR Intensity Based Semantic Segmentation

Viswanath, Kasi, Jiang, Peng, PB, Sujit, Saripalli, Srikanth

arXiv.org Artificial Intelligence

LiDAR is used in autonomous driving to provide 3D spatial information and enable accurate perception in off-road environments, aiding obstacle detection, mapping, and path planning. Learning-based LiDAR semantic segmentation applies machine learning techniques to automatically classify objects and regions in LiDAR point clouds. Such models struggle in off-road environments because diverse objects with varying colors, textures, and undefined boundaries make it difficult to classify and segment objects accurately using traditional geometry-based features alone. In this paper, we address this problem by harnessing the LiDAR intensity parameter to enhance object segmentation in off-road environments. Our approach was evaluated on the RELLIS-3D dataset and, as a preliminary analysis, yielded promising results, with improved mIoU for the classes "puddle" and "grass" compared to more complex deep-learning benchmarks. The methodology was also evaluated for compatibility across both Velodyne and Ouster LiDAR systems, ensuring cross-platform applicability. This analysis advocates incorporating calibrated intensity as a supplementary input to enhance the prediction accuracy of learning-based semantic segmentation frameworks. https://github.com/MOONLABIISERB/lidar-intensity-predictor/tree/main
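The abstract does not spell out the calibration procedure, but the core idea of "calibrated intensity as a supplementary input" can be sketched: range-normalize the raw intensity (here assuming a hypothetical range^alpha falloff model) and append it as a fourth per-point feature channel. All function names and the falloff exponent below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def calibrate_intensity(points, intensity, alpha=2.0):
    """Range-normalize raw LiDAR intensity so returns from distant
    objects are comparable to nearby ones (hypothetical model:
    received power falls off with range**alpha)."""
    ranges = np.linalg.norm(points[:, :3], axis=1)
    calibrated = intensity * (ranges ** alpha)
    # Rescale to [0, 1] for use as a network input channel.
    return (calibrated - calibrated.min()) / (np.ptp(calibrated) + 1e-9)

def add_intensity_channel(points, intensity):
    """Append calibrated intensity as a 4th per-point feature: (x, y, z, i)."""
    return np.hstack([points[:, :3],
                      calibrate_intensity(points, intensity)[:, None]])

# Toy cloud: four points with raw intensity returns.
pts = np.array([[1.0, 0.0, 0.0],
                [5.0, 0.0, 0.0],
                [0.0, 3.0, 4.0],
                [2.0, 2.0, 1.0]])
raw = np.array([0.9, 0.2, 0.4, 0.6])
features = add_intensity_channel(pts, raw)
print(features.shape)  # (4, 4)
```

A segmentation network would then consume the 4-channel points instead of bare xyz coordinates.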


How might JPL look for life on watery worlds? With the help of this slithering robot

Los Angeles Times

Engineers at NASA's Jet Propulsion Laboratory are taking artificial intelligence to the next level -- by sending it into space disguised as a robotic snake. With the sun beating down on JPL's Mars Yard, the robot lifts its "head" off a glossy surface of faux ice to scan the world around it. It maps its surroundings, analyzes potential obstacles and chooses the safest path through a valley of fake boulders to the destination it has been instructed to reach. Once it has a plan in place, the 14-foot-long robot lowers its head, engages its 48 motors and slowly slithers forward. Its cautious movements are propelled by the clockwise or counterclockwise turns of the spiral connectors that link its 10 body segments, sending the cyborg in a specific direction.


Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting

Jiang, Zifan, Moryossef, Amit, Müller, Mathias, Ebling, Sarah

arXiv.org Artificial Intelligence

This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system. Our work seeks to address the lack of out-of-the-box support for signed languages in current MT systems and is based on the SignBank dataset, which contains pairs of spoken language text and SignWriting content. We introduce novel methods to parse, factorize, decode, and evaluate SignWriting, leveraging ideas from neural factored MT. In a bilingual setup--translating from American Sign Language to (American) English--our method achieves over 30 BLEU, while in two multilingual setups--translating in both directions between spoken languages and signed languages--we achieve over 20 BLEU. We find that common MT techniques used to improve spoken language translation similarly affect the performance of sign language translation. These findings validate our use of an intermediate text representation for signed languages to include them in natural language processing research.
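As a rough illustration of the factored-MT idea mentioned above (not the paper's actual implementation), the sketch below represents each SignWriting symbol by several factors, a glyph id plus quantized x/y positions, embeds each factor separately, and concatenates the results into one token vector. All vocabulary sizes and dimensions are invented.

```python
import numpy as np

# Factored token embeddings, as in factored NMT: one embedding table
# per factor, combined into a single vector per symbol.
rng = np.random.default_rng(0)
E_glyph = rng.normal(size=(1000, 32))  # glyph-id embedding table
E_x = rng.normal(size=(64, 8))         # quantized x-position table
E_y = rng.normal(size=(64, 8))         # quantized y-position table

def embed(glyph_id, x, y, grid=8):
    """Concatenate factor embeddings into one 48-dim token vector;
    positions are bucketed into a coarse grid before lookup."""
    return np.concatenate([E_glyph[glyph_id], E_x[x // grid], E_y[y // grid]])

vec = embed(glyph_id=42, x=500, y=480)
print(vec.shape)  # (48,)
```

Factoring keeps the glyph vocabulary small while still letting the model condition on spatial placement, which in SignWriting carries linguistic meaning.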


DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing

Kim, Seulbae, Liu, Major, Rhee, Junghwan "John", Jeon, Yuseok, Kwon, Yonghwi, Kim, Chung Hwan

arXiv.org Artificial Intelligence

Autonomous driving has become real: semi-autonomous vehicles in an affordable price range are already on the streets, and major automotive vendors are actively developing full self-driving systems to deploy this decade. Before rolling the products out to end-users, it is critical to test and ensure the safety of autonomous driving systems, which consist of multiple layers intertwined in complicated ways. However, while safety-critical bugs may exist in any layer and even across layers, relatively little attention has been given to testing the entire driving system across all layers. Prior work mainly focuses on white-box testing of individual layers and preventing attacks on each layer. In this paper, we aim at holistic testing of autonomous driving systems that have a whole stack of layers integrated in their entirety. Instead of looking into the individual layers, we focus on the vehicle states that the system continuously changes in the driving environment. This allows us to design DriveFuzz, a new systematic fuzzing framework that can uncover potential vulnerabilities regardless of their locations. DriveFuzz automatically generates and mutates driving scenarios based on diverse factors, leveraging a high-fidelity driving simulator. We build novel driving test oracles based on real-world traffic rules to detect safety-critical misbehaviors, and guide the fuzzer toward such misbehaviors through driving-quality metrics that reflect the physical states of the vehicle. DriveFuzz has discovered 30 new bugs in various layers of two autonomous driving systems (Autoware and CARLA Behavior Agent) and three additional bugs in the CARLA simulator. We further analyze the impact of these bugs and how an adversary may exploit them as security vulnerabilities to cause critical accidents in the real world.
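To make the quality-guided idea concrete, here is a minimal, self-contained sketch; every name is illustrative and none of it is DriveFuzz's actual API. Scenario parameters are mutated, a stubbed simulator returns a driving-quality score, and mutants that lower the score are kept, steering the search toward misbehavior.

```python
import random

def mutate(scenario, rng):
    """Randomly perturb one scenario factor (e.g., an NPC's speed)."""
    mutant = dict(scenario)
    key = rng.choice(list(mutant))
    mutant[key] = mutant[key] + rng.uniform(-1.0, 1.0)
    return mutant

def driving_quality(scenario):
    """Stub oracle: lower is worse. A real system would measure
    physical vehicle states (acceleration, lane deviation, collisions)
    from a high-fidelity simulator run."""
    return -abs(scenario["npc_speed"] - 8.0) - abs(scenario["fog"])

def fuzz(seed_scenario, iterations=200, seed=0):
    rng = random.Random(seed)
    worst, worst_q = seed_scenario, driving_quality(seed_scenario)
    for _ in range(iterations):
        cand = mutate(worst, rng)
        q = driving_quality(cand)
        if q < worst_q:  # guided: keep mutants that degrade driving quality
            worst, worst_q = cand, q
    return worst, worst_q

scenario, score = fuzz({"npc_speed": 8.0, "fog": 0.0})
print(score < 0.0)  # True: the fuzzer found a lower-quality scenario
```

The real framework replaces the stub oracle with traffic-rule checks over simulator state, but the feedback loop has this shape.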


Puddles - AI Generated Artwork

#artificialintelligence

AI Art Generator App. ✅ Fast ✅ Free ✅ Easy. Create amazing artworks using artificial intelligence.


Challenges in Visual Anomaly Detection for Mobile Robots

Mantegazza, Dario, Giusti, Alessandro, Gambardella, Luca M., Rizzoli, Andrea, Guzzi, Jérôme

arXiv.org Artificial Intelligence

We consider the task of detecting anomalies for autonomous mobile robots based on vision. We categorize relevant types of visual anomalies and discuss how they can be detected by unsupervised deep learning methods. We propose a novel dataset built specifically for this task, on which we test a state-of-the-art approach; we finally discuss deployment in a real scenario.


OFFSEG: A Semantic Segmentation Framework For Off-Road Driving

Viswanath, Kasi, Singh, Kartikeya, Jiang, Peng, B., Sujit P., Saripalli, Srikanth

arXiv.org Artificial Intelligence

Off-road image semantic segmentation is challenging due to the presence of uneven terrains, unstructured class boundaries, irregular features, and strong textures. These aspects affect the perception of the vehicle, whose information is used for path planning. Current off-road datasets exhibit difficulties like class imbalance and varying environmental topography. To overcome these issues we propose a framework for off-road semantic segmentation called OFFSEG that involves (i) pooled-class semantic segmentation with four classes (sky, traversable region, non-traversable region, and obstacle) using state-of-the-art deep learning architectures, and (ii) a colour segmentation methodology to segment out specific sub-classes (grass, puddle, dirt, gravel, etc.) from the traversable region for better scene understanding. The framework is evaluated on two off-road driving datasets, namely RELLIS-3D and RUGD. We also tested the proposed framework on frames captured at the IISERB campus. The results show that OFFSEG achieves good performance and provides detailed information on the traversable region.
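As an illustration of the second stage, a colour-rule pass over the traversable region, here is a toy per-pixel classifier. The HSV thresholds are invented for the example; OFFSEG's actual thresholds and method are not given in this summary.

```python
import colorsys

def classify_pixel(rgb):
    """Assign a traversable-region sub-class from an (R, G, B) tuple
    in 0-255, using simple hand-picked HSV rules (illustrative only)."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    if s < 0.15 and v > 0.5:
        return "puddle"   # low saturation, bright: sky-reflecting water
    if 0.20 < h < 0.45 and s > 0.3:
        return "grass"    # green hues
    if h < 0.12 and v < 0.6:
        return "dirt"     # brownish, darker
    return "gravel"       # fallback for the remaining pixels

print(classify_pixel((60, 180, 60)))    # saturated green -> grass
print(classify_pixel((200, 200, 210)))  # pale grey -> puddle
```

In practice such rules would run only on pixels the first stage labeled traversable, refining a coarse 4-class mask into finer sub-classes.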


Evolution of circuits for machine learning

#artificialintelligence

The operation of the circuit is tuned by applying voltages to control wires. Inputs to the circuit are provided by voltages on input wires, and the circuit's output is determined by whether or not charge flows through an output wire.


Robots learn by 'following the leader' -- GCN

#artificialintelligence

Scientists at the Army Research Laboratory and Carnegie Mellon University's Robotics Institute are teaching robots how to be better mission partners to soldiers -- starting with how to find their way with minimal human intervention. Given that autonomous vehicles have been navigating streets in many U.S. cities for over a year, that may seem like not that big a deal. But according to ARL researcher Maggie Wigness, the challenges facing military robots are much greater. Specifically, unlike the self-driving cars being developed by Google, Uber and others, military robots will be operating in complex environments that don't have the benefit of standardized markings like lanes, street signs, curbs and traffic lights. "Environments that we operate in are highly unstructured compared to [those for] self-driving cars," Wigness said.


Talking Killer Robots at Davos

#artificialintelligence

Artificial intelligence is already a top topic at this year's World Economic Forum in Davos, Switzerland. Monday night, amid a driving blizzard that snarled traffic around town, I hosted a small dinner featuring Carnegie Mellon University's Justine Cassell. She is associate dean of technology, strategy, and impact at the university's school of computer science, and an expert on the human role in artificial intelligence. Cassell let loose the best one-liner I've heard that combats Elon Musk's fear that the robots will kill us all. "If you're afraid of the android revolution," she said, "just stand in a puddle."