Ultralightweight sonar plus AI lets tiny drones navigate like bats
To help small aerial robots navigate in the dark and other low-visibility environments, my colleagues and I developed an ultrasound-based perception system inspired by bat echolocation. Current robots rely heavily on cameras or light detection and ranging, known as lidar, or both. But these sensors fail in visually challenging conditions, such as smoke, fog, dust, snow or complete darkness. I'm a scientific engineer who develops bio-inspired microrobots. To solve this challenge, my research team looked at nature's experts at navigating in poor visibility: bats.
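The core physical principle behind both bat echolocation and an ultrasound ranger is time-of-flight: emit a ping, time the echo, and halve the round-trip distance. As a minimal sketch (illustrative names only, not the actual sensor's API):

```python
# Minimal time-of-flight sketch of the echolocation principle described above.
# Sound travels at roughly 343 m/s in air at room temperature; an echo covers
# the distance to the obstacle twice (out and back), so range is half the
# round-trip distance.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_range(round_trip_s: float) -> float:
    """Distance to an obstacle from the round-trip time of an ultrasound ping."""
    return SPEED_OF_SOUND * round_trip_s / 2

print(echo_range(0.01))  # a 10 ms echo puts the obstacle ~1.715 m away
```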
0a630402ee92620dc2de3b704181de9b-Paper-Conference.pdf
In this paper, we address the "dual problem" of multi-view scene reconstruction, in which we utilize single-view images captured under different point lights to learn a neural scene representation. Different from existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
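Single-view photometric methods of this kind build on the standard point-light image-formation model. A hedged sketch using plain Lambertian shading (the simplest BRDF, not the paper's actual neural renderer): intensity = albedo * max(0, n . l) / d^2, where l is the unit direction from the surface point to the light and d is the distance to it.

```python
# Lambertian shading under a point light: the basic image-formation model
# that photometric reconstruction methods invert. Purely illustrative;
# the paper's method replaces the fixed Lambertian BRDF with a learned one.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(albedo, normal, point, light):
    """Radiance at a surface point with the given unit normal, lit by a point light."""
    to_light = [l - p for l, p in zip(light, point)]
    d2 = dot(to_light, to_light)              # squared distance (inverse-square falloff)
    l_dir = [c / d2 ** 0.5 for c in to_light]  # unit direction toward the light
    return albedo * max(0.0, dot(normal, l_dir)) / d2

# Light directly above a point with an upward-facing normal, 2 m away:
print(shade(0.8, (0, 0, 1), (0, 0, 0), (0, 0, 2)))  # 0.8 * 1 / 4 = 0.2
```

Moving the point light between captures changes both l and d per pixel, which is what makes the inverse problem of recovering normals and reflectance well posed from a single viewpoint.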
CamoPatch: An Evolutionary Strategy for Generating Camouflaged Adversarial Patches
Deep neural networks (DNNs) have demonstrated vulnerabilities to adversarial examples, which raises concerns about their reliability in safety-critical applications. While the majority of existing methods generate adversarial examples by making small modifications to the entire image, recent research has proposed a practical alternative known as adversarial patches. Adversarial patches have been shown to be highly effective in causing DNNs to misclassify by distorting a localized area (patch) of the image. However, existing methods often produce clearly visible distortions since they do not consider the visibility of the patch. To address this, we propose a novel method for constructing adversarial patches that approximates the appearance of the area it covers. We achieve this by using a set of semi-transparent, RGB-valued circles, drawing inspiration from the computational art community. We utilize an evolutionary strategy to optimize the properties of each shape, and employ a simulated annealing approach to optimize the patch's location. Our approach achieves better or comparable performance to state-of-the-art methods on ImageNet DNN classifiers while achieving a lower $l_2$ distance from the original image. By minimizing the visibility of the patch, this work further highlights the vulnerabilities of DNNs to adversarial patches.
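The circle-based evolutionary search can be sketched with a toy (1+1) evolution strategy. This is not the paper's implementation: no classifier is attacked here, so the fitness below is only the appearance-matching term (the $l_2$ distance between the rendered circles and the image region they cover), standing in for the paper's combined attack-plus-visibility objective. All names and constants are illustrative.

```python
import random

# Toy (1+1)-ES sketch of CamoPatch's circle representation: a patch is a set
# of semi-transparent circles (grayscale here for simplicity), and mutation
# resamples one circle at a time. Elitist selection keeps a child only if it
# matches the covered region at least as well as its parent.

SIZE = 16       # patch is SIZE x SIZE pixels
N_CIRCLES = 8   # number of circles composing the patch

def render(circles):
    """Alpha-blend circles (cx, cy, r, value, alpha) onto a black canvas."""
    canvas = [[0.0] * SIZE for _ in range(SIZE)]
    for cx, cy, r, val, alpha in circles:
        for y in range(SIZE):
            for x in range(SIZE):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    canvas[y][x] = (1 - alpha) * canvas[y][x] + alpha * val
    return canvas

def l2(a, b):
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(SIZE) for x in range(SIZE)) ** 0.5

def random_circle():
    return (random.uniform(0, SIZE), random.uniform(0, SIZE),
            random.uniform(1, SIZE / 2), random.random(), random.random())

def evolve(target, iters=300):
    """(1+1)-ES: resample one circle per step; keep the child if it fits better."""
    parent = [random_circle() for _ in range(N_CIRCLES)]
    initial = best = l2(render(parent), target)
    for _ in range(iters):
        child = list(parent)
        child[random.randrange(N_CIRCLES)] = random_circle()
        fit = l2(render(child), target)
        if fit <= best:
            parent, best = child, fit
    return initial, best

random.seed(0)
# A smooth gradient stands in for the image region the patch covers.
target = [[(x + y) / (2 * SIZE) for x in range(SIZE)] for y in range(SIZE)]
initial, final = evolve(target)
print(final <= initial)  # elitist selection never worsens the match
```

The full method additionally moves the patch's location with simulated annealing and scores candidates against the target classifier; this sketch isolates only the appearance-matching half of that objective.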