Supplementary Materials A Extended Related Work (2)

Neural Information Processing Systems

We first discuss attacks that use physical objects as triggers, then a few related works that use light as a trigger. We conclude by discussing the single proposed defense against physical backdoor attacks. As mentioned briefly in Section 2, [10] designs a backdoor attack against lane detection systems for autonomous vehicles. This attack expands the scope of physical backdoor attacks by targeting detection rather than classification models. Furthermore, it confirms the result from [43] that even when digitally altered images are used to poison a dataset, the triggers can be activated by physical objects (traffic cones in this setting) in real-world scenarios. A second work [31] evaluates the effectiveness of using facial characteristics as backdoor triggers.


Knowledge Graphs as World Models for Semantic Material-Aware Obstacle Handling in Autonomous Vehicles

Bheemaiah, Ayush, Yang, Seungyong

arXiv.org Artificial Intelligence

The inability of autonomous vehicles (AVs) to infer the material properties of obstacles limits their decision-making capacity. While AVs rely on sensor systems such as cameras, LiDAR, and radar to detect obstacles, this study suggests combining sensors with a knowledge graph (KG)-based world model to improve AVs' comprehension of physical material qualities. Beyond sensor data, AVs can infer qualities such as malleability, density, and elasticity using a semantic KG that depicts the relationships between obstacles and their attributes. Using the CARLA autonomous driving simulator, we evaluated AV performance with and without KG integration. The findings demonstrate that the KG-based method improves obstacle management, allowing AVs to use material qualities to make better decisions about when to change lanes or apply emergency braking. For example, the KG-integrated AV changed lanes for rigid obstacles like traffic cones and successfully avoided collisions with flexible items such as plastic bags by passing over them. Compared to the control system, the KG framework demonstrated improved responsiveness to obstacles by resolving conflicting sensor data, triggering emergency stops in 13.3% more cases. In addition, our method exhibits a 6.6% higher success rate in lane-changing maneuvers in experimental scenarios, particularly for larger, high-impact obstacles. While we focus particularly on autonomous driving, our work demonstrates the potential of KG-based world models to improve decision-making in embodied AI systems and scale to other domains, including robotics, healthcare, and environmental simulation.
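As a rough illustration of the idea described in the abstract, the sketch below chains two hops through a toy knowledge graph (obstacle → material → property) to pick a maneuver. All names, triples, and the decision rule are illustrative assumptions, not the paper's actual schema or system.

```python
# Toy knowledge graph as a list of (subject, predicate, object) triples.
# Schema and values are hypothetical, for illustration only.
KG = [
    ("traffic_cone", "made_of", "hard_plastic"),
    ("plastic_bag", "made_of", "film_plastic"),
    ("hard_plastic", "has_property", "rigid"),
    ("film_plastic", "has_property", "flexible"),
]

def query(subject, predicate):
    """Return all objects linked to `subject` by `predicate`."""
    return [o for s, p, o in KG if s == subject and p == predicate]

def plan_maneuver(obstacle):
    """Chain obstacle -> material -> property to choose a maneuver."""
    for material in query(obstacle, "made_of"):
        props = query(material, "has_property")
        if "rigid" in props:
            return "change_lane"   # avoid hard obstacles
        if "flexible" in props:
            return "drive_over"    # safe to pass over soft items
    return "emergency_brake"       # unknown material: conservative fallback

print(plan_maneuver("traffic_cone"))  # change_lane
print(plan_maneuver("plastic_bag"))   # drive_over
```

The conservative fallback for unrecognized obstacles mirrors the abstract's point that material inference, not detection alone, is what lets the vehicle choose between braking, lane changes, and driving over an item.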


What video game ephemera tell us about ourselves

The Guardian

I just finished writing a feature about the Video Game History Foundation in Oakland, California, and how it is preparing to share its digital archive of games magazines. From 30 January, you'll be able to visit the institute's website and explore a collection of about 1,500 publications from throughout the history of games, all scanned in high detail, all searchable for keywords. It's a magnificent resource for researchers and those who just want to find the first-ever review of Tetris or Pokémon. I can't wait to visit. While researching the article, I spoke to John O'Shea and Ann Wain from the National Videogame Museum in Sheffield, which is also collecting games mags and other printed ephemera.


Vision-based Manipulation of Transparent Plastic Bags in Industrial Setups

Adetunji, F., Karukayil, A., Samant, P., Shabana, S., Varghese, F., Upadhyay, U., Yadav, R. A., Partridge, A., Pendleton, E., Plant, R., Petillot, Y., Koskinopoulou, M.

arXiv.org Artificial Intelligence

This paper addresses the challenges of vision-based manipulation for autonomous cutting and unpacking of transparent plastic bags in industrial setups, aligning with the Industry 4.0 paradigm. Industry 4.0, driven by data, connectivity, analytics, and robotics, promises enhanced accessibility and sustainability throughout the value chain. The integration of autonomous systems, including collaborative robots (cobots), into industrial processes is pivotal for efficiency and safety. The proposed solution employs advanced machine learning algorithms, particularly Convolutional Neural Networks (CNNs), to identify transparent plastic bags under varying lighting and background conditions. Tracking algorithms and depth sensing technologies are utilized for 3D spatial awareness during pick-and-place operations. The system addresses challenges in grasping and manipulation, considering optimal grasp points, compliance control with vacuum gripping technology, and real-time automation for safe interaction in dynamic environments. The system was successfully tested and validated in the lab with the FRANKA robot arm, demonstrating its effectiveness in automating the unpacking and cutting of transparent plastic bags for an 8-stack bulk-loader under specific requirements and rigorous testing, and showcasing its potential for widespread industrial application.


Bagging by Learning to Singulate Layers Using Interactive Perception

Chen, Lawrence Yunliang, Shi, Baiyu, Lin, Roy, Seita, Daniel, Ahmad, Ayah, Cheng, Richard, Kollar, Thomas, Held, David, Goldberg, Ken

arXiv.org Artificial Intelligence

Many fabric handling and 2D deformable material tasks in homes and industry, such as opening a bag or arranging garments for sewing, require singulating layers of material. In contrast to methods requiring specialized sensing or end effectors, we use only visual observations with ordinary parallel-jaw grippers. We propose SLIP: Singulating Layers using Interactive Perception, and apply SLIP to the task of autonomous bagging. We develop SLIP-Bagging, a bagging algorithm that manipulates a plastic or fabric bag from an unstructured state and uses SLIP to grasp the top layer of the bag to open it for object insertion. In physical experiments, a YuMi robot achieves a success rate of 67% to 81% across bags of a variety of materials, shapes, and sizes, significantly improving on prior work in success rate and generality. Experiments also suggest that SLIP can be applied to tasks such as singulating layers of folded cloth and garments. Supplementary material is available at https://sites.google.com/view/slip-bagging/.


ShakingBot: Dynamic Manipulation for Bagging

Gu, Ningquan, Zhang, Zhizhong, He, Ruhan, Yu, Lianqing

arXiv.org Artificial Intelligence

Bag manipulation by robots is complex and challenging due to the deformability of the bag. Based on a dynamic manipulation strategy, we propose a new framework, ShakingBot, for bagging tasks. ShakingBot utilizes a perception module to identify the key region of the plastic bag from arbitrary initial configurations. Guided by the segmentation, ShakingBot iteratively executes a novel set of actions, including Bag Adjustment, Dual-arm Shaking, and One-arm Holding, to open the bag. The dynamic action, Dual-arm Shaking, can effectively open the bag without the need to account for its crumpled configuration. Then, we insert the items and lift the bag for transport. We perform our method on a dual-arm robot and achieve a success rate of 21/33 for inserting at least one item across various initial bag configurations. In this work, we demonstrate the performance of dynamic shaking actions compared to quasi-static manipulation in the bagging task. We also show that our method generalizes across variations in bag size, pattern, and color.


Koala: An Index for Quantifying Overlaps with Pre-training Corpora

Vu, Thuy-Trang, He, Xuanli, Haffari, Gholamreza, Shareghi, Ehsan

arXiv.org Artificial Intelligence

In recent years, increasing attention has been paid to probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs). Despite its importance, there is no public tool that supports such analysis of pre-training corpora at large scale. To help research in this space, we launch Koala, a searchable index over large pre-training corpora built on compressed suffix arrays, offering a highly efficient compression rate and fast search support. In its first release, we index the public portion of the OPT 175B pre-training data. Koala provides a framework for forensic analysis of current and future benchmarks, as well as for assessing the degree of memorization in LLM outputs. Koala is available for public use at https://koala-index.erc.monash.edu/.


Inside the Dark Industry Where Old Cellphones and Computers Go to Die

Slate

NEW DELHI--As dawn breaks, hundreds of men move in and out of the congested alleys of Seelampur, pulling carts and driving dump trucks loaded with discarded cellphones, computers, air conditioners, and almost any other electronic waste imaginable. Located on the outskirts of New Delhi, Seelampur is the country's largest market dedicated to dismantling old tech, and it's home to an estimated 50,000 men, women, and children whose livelihoods depend on e-waste. Inside the labyrinth of alleys, hundreds of small establishments are packed with different electronic gadgets, which workers take apart mostly with their bare hands, a hammer, and pliers, hoping to extract precious metals like gold, silver, and tin--or any other useful item. Children move through the nooks and corners of the market with plastic bags on their shoulders, collecting potentially useful scraps among the e-waste leftovers piled in front of doorways. Aftab, 15, is one of them.


The role of computer vision in autonomous vehicles

AIHub

Recent advances in computer vision have revolutionized many areas of research including robotics, automation, and self-driving vehicles. The self-driving car industry has grown markedly in recent years, in no small part enabled by use of state-of-the-art computer vision techniques. However, there remain many challenges in the field. One of the most difficult problems in autonomous driving is perception. Once autonomous vehicles have an accurate perception of the world around them, planning and control become easier.