A furry antelope robot is keeping tabs on its organic cousins

Popular Science

Roboticists in China have developed a life-sized, furry, AI-enabled antelope designed to monitor the migration patterns of its real-life counterpart. This "bionic" antelope is part of a growing arsenal of somewhat convincing-looking robots used to observe wildlife up close in ways human researchers often can't. The robot was first reported on by Chinese news agency Xinhua and was reportedly co-designed by DEEP Robotics and the Chinese Academy of Sciences. It was built to fill a gap in current efforts to monitor the once-endangered Tibetan antelope (Pantholops hodgsonii).


Antelope: Potent and Concealed Jailbreak Attack Strategy

Zhao, Xin, Chen, Xiaojun, Gao, Haoyu

arXiv.org Artificial Intelligence

Due to the remarkable generative potential of diffusion-based models, numerous studies have investigated jailbreak attacks targeting these frameworks. A particularly concerning threat within image models is the generation of Not-Safe-for-Work (NSFW) content. Despite the implementation of security filters, numerous efforts continue to explore ways to circumvent these safeguards. Current attack methodologies primarily encompass adversarial prompt engineering or concept obfuscation, yet they frequently suffer from slow search efficiency, conspicuous attack characteristics, and poor alignment with targets. To overcome these challenges, we propose Antelope, a more robust and covert jailbreak attack strategy designed to expose security vulnerabilities inherent in generative models. Specifically, Antelope exploits the confusion of sensitive concepts with similar ones: it searches in the semantically adjacent space of these related concepts and aligns them with the target imagery, thereby generating sensitive images that are consistent with the target yet capable of evading detection. In addition, we successfully exploit the transferability of model-based attacks to penetrate online black-box services. Experimental evaluations demonstrate that Antelope outperforms existing baselines across multiple defensive mechanisms, underscoring its efficacy and versatility.


Representer Point Selection for Explaining Deep Neural Networks

Yeh, Chih-Kuan, Kim, Joon, Yen, Ian En-Hsu, Ravikumar, Pradeep K.

Neural Information Processing Systems

We propose to explain the predictions of a deep neural network by pointing to the set of what we call representer points in the training set, for a given test point prediction. Specifically, we show that we can decompose the pre-activation prediction of a neural network into a linear combination of activations of training points, with the weights corresponding to what we call representer values, which thus capture the importance of each training point to the learned parameters of the network. This decomposition provides a deeper understanding of the network than training-point influence alone: positive representer values correspond to excitatory training points and negative values to inhibitory ones, which, as we show, yields considerably more insight. Our method is also much more scalable, allowing for real-time feedback in a manner not feasible with influence functions.
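The decomposition described in this abstract can be sketched concretely for the simplest case: a binary logistic final layer with L2 regularization trained on fixed last-layer features. At a stationary point of the regularized loss, each representer value is a rescaled per-sample loss gradient, and the pre-activation prediction of any test point equals a weighted sum of feature inner products with the training points. The snippet below is a minimal illustration under those assumptions, using synthetic data; all names and the hyperparameters are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
F = rng.normal(size=(n, d))                            # last-layer features of training points
y = np.where(F[:, 0] + 0.5 * F[:, 1] > 0, 1.0, -1.0)   # synthetic binary labels in {-1, +1}
lam = 1e-2                                             # L2 regularization strength

# Train the final linear layer (logistic loss + lam * ||theta||^2) by gradient descent.
theta = np.zeros(d)
for _ in range(20000):
    phi = F @ theta                                    # pre-activation predictions
    grad_phi = -y / (1.0 + np.exp(y * phi))            # per-sample dL_i / dphi_i
    theta -= 0.5 * (F.T @ grad_phi / n + 2 * lam * theta)

# Representer values: alpha_i = -(1 / (2 * lam * n)) * dL_i / dphi_i.
phi = F @ theta
alpha = (y / (1.0 + np.exp(y * phi))) / (2 * lam * n)

# At a stationary point, the test prediction decomposes over training points:
#   phi(x_t) = sum_i alpha_i * <f_i, f_t>
f_t = rng.normal(size=d)
recon = alpha @ (F @ f_t)
direct = theta @ f_t
print(abs(recon - direct))  # should be ~0 once training has converged
```

Training points with large positive `alpha` push this test prediction up (excitatory), large negative values push it down (inhibitory), which is what lets the decomposition be read as an explanation.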


Niger will use drones to protect almost extinct antelope species

#artificialintelligence

When we think about endangered animals in Afrika at risk of being poached, we usually think of elephants and rhinos. This can be attributed to various factors, including increased publicity around the growing threats that rhinos and elephants face from poachers. However, there are other endangered animal species in Afrika that require just as much protection and publicity. Take the addax antelopes in Niger as an example. In 2016, the Sahara Conservation Fund (SCF) released a research report stating that likely only three addax antelopes remained in the wild in Niger.


How brain-inspired AI and neuroscience advances machine learning

#artificialintelligence

While building artificial systems does not necessarily require copying nature -- after all, airplanes fly without flapping their wings like birds -- the history of AI and machine learning convincingly demonstrates that drawing inspiration from neuroscience and psychology can lead to significant breakthroughs, with deep neural networks and reinforcement learning being perhaps the two most prominent examples. Taking inspiration from the brain, our IBM Research team recently used machine learning techniques to develop computational models of attention and memory. Our ultimate goal is to build lifelong learning AI systems, able to adapt to new environments while retaining what they have learned so far. This challenge can be broken down into short-term adaptation, where there is little time to change a system and train it on what to pay attention to, and long-term adaptation, which is inspired by how the human brain forms memory and how neuroplasticity (e.g., adult neurogenesis) affects this process. Our team developed two important innovations that enable short-term and long-term adaptation, based respectively on reward-driven attention techniques and on enabling network "plasticity."