Deep Learning Optimization of Two-State Pinching Antennas Systems

Karagiannidis, Odysseas G., Galanopoulou, Victoria E., Diamantoulakis, Panagiotis D., Ding, Zhiguo, Dobre, Octavia

arXiv.org Artificial Intelligence

The evolution of wireless communication systems requires flexible, energy-efficient, and cost-effective antenna technologies. Pinching antennas (PAs), which can dynamically control electromagnetic wave propagation through binary activation states, have recently emerged as a promising candidate. In this work, we investigate the problem of optimally selecting a subset of fixed-position PAs to activate in a waveguide, when the aim is to maximize the communication rate at a user terminal. Due to the complex interplay between antenna activation, waveguide-induced phase shifts, and power division, this problem is formulated as a combinatorial fractional 0-1 quadratic program. To efficiently solve this challenging problem, we use neural network architectures of varying complexity to learn activation policies directly from data, leveraging spatial features and signal structure. Furthermore, we incorporate user location uncertainty into our training and evaluation pipeline to simulate realistic deployment conditions. Simulation results demonstrate the effectiveness and robustness of the proposed models.
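The abstract does not give the exact channel model, but the combinatorial structure it describes can be illustrated with a toy sketch: each binary activation vector divides the transmit power equally among active antennas, each antenna contributes a phase set by its in-waveguide position and its distance to the user, and the rate is maximized by exhaustive search. All numbers (wavelengths, positions, noise power) are illustrative assumptions, not the paper's parameters; the neural networks in the paper learn to approximate this search without enumerating all 2^N patterns.

```python
import itertools
import numpy as np

def achievable_rate(active, positions, user, wavelength=0.01,
                    guide_wavelength=0.007, power=1.0, noise=1e-6):
    """Rate for one binary activation vector (toy free-space model)."""
    idx = np.flatnonzero(active)
    if idx.size == 0:
        return 0.0
    d = np.linalg.norm(positions[idx] - user, axis=1)  # antenna-to-user distances
    # Equal power division across active antennas; phase accumulates with the
    # guided wavelength inside the waveguide and the free-space wavelength outside.
    phase = 2 * np.pi * (d / wavelength + positions[idx, 0] / guide_wavelength)
    h = np.sum(np.sqrt(power / idx.size) / d * np.exp(-1j * phase))
    return np.log2(1 + np.abs(h) ** 2 / noise)

def best_activation(positions, user):
    """Exhaustive search over all 2^N activation patterns (tractable only for small N)."""
    best, best_rate = None, -1.0
    for bits in itertools.product([0, 1], repeat=len(positions)):
        rate = achievable_rate(np.array(bits), positions, user)
        if rate > best_rate:
            best, best_rate = bits, rate
    return best, best_rate

# 8 fixed PA positions along a waveguide mounted at height 3 m (illustrative).
positions = np.array([[x, 0.0, 3.0] for x in np.linspace(0.0, 5.0, 8)])
user = np.array([2.0, 4.0, 0.0])
pattern, rate = best_activation(positions, user)
```

The fractional 0-1 structure the abstract mentions is visible in `achievable_rate`: the number of active antennas appears in the denominator of the per-antenna power, so adding an antenna can lower the rate even when its channel is good.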


Worm towers are all around us

Popular Science

Biologists estimate that four out of five animals on Earth are nematodes (AKA roundworms). The tiny, wriggling, transparent invertebrates are the most abundant creatures on the planet and are found nearly everywhere, from permafrost to the deep ocean. More than one million species make up this ubiquitous group, which includes parasites, decomposers, predators, and more. "They're not about to take over the world, because they already did," Serena Ding, a biologist at the Max Planck Institute of Animal Behavior in Konstanz, Germany, tells Popular Science. "Global worming has already happened."


GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction

Neural Information Processing Systems

Graph-structured data is integral to many applications, prompting the development of various graph representation methods. Graph autoencoders (GAEs), in particular, reconstruct graph structures from node embeddings. Current GAE models primarily utilize self-correlation to represent graph structures and focus on node-level tasks, often overlooking multi-graph scenarios. Our theoretical analysis indicates that self-correlation generally falls short in accurately representing specific graph features such as islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts. To address these limitations, we introduce a cross-correlation mechanism that significantly enhances the representational capabilities of GAEs. Additionally, we propose GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks and ensures robust structural reconstruction through a mirrored encoding-decoding process.
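The self-correlation limitation for directional edges can be seen directly: a decoder of the form sigmoid(Z Zᵀ) is symmetric by construction, so it can never assign different probabilities to the edges i→j and j→i. A minimal numpy sketch (the two-embedding decoder below is a generic cross-correlation form, not necessarily GraphCroc's exact architecture):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 8))          # one embedding per node

# Self-correlation decoder: A_hat = sigmoid(Z @ Z.T) is symmetric by
# construction, so a directed edge i->j always implies the reverse edge
# with the same probability.
A_self = sigmoid(Z @ Z.T)

# Cross-correlation decoder: two embeddings per node (a "query" and a
# "key" role) break the forced symmetry, so asymmetric and directed
# structures become representable.
Zq = rng.normal(size=(4, 8))
Zk = rng.normal(size=(4, 8))
A_cross = sigmoid(Zq @ Zk.T)
```

Here `A_self` equals its own transpose for any `Z`, while `A_cross` generically does not, which is the representational gap the abstract's cross-correlation mechanism closes.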


A flow-based latent state generative model of neural population responses to natural images

Neural Information Processing Systems

We present a joint deep neural system identification model for two major sources of neural variability: stimulus-driven and stimulus-conditioned fluctuations. To this end, we combine (1) state-of-the-art deep networks for stimulus-driven activity and (2) a flexible, normalizing flow-based generative model to capture the stimulus-conditioned variability including noise correlations. This allows us to train the model end-to-end without the need for sophisticated probabilistic approximations associated with many latent state models for stimulus-conditioned fluctuations. We train the model on the responses of thousands of neurons from multiple areas of the mouse visual cortex to natural images. We show that our model outperforms previous state-of-the-art models in predicting the distribution of neural population responses to novel stimuli, including shared stimulus-conditioned variability.
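The decomposition the abstract describes, a stimulus-driven mean plus a flow over the stimulus-conditioned residuals, can be sketched in its simplest form: a single linear (Cholesky) flow z = L⁻¹(r − μ), z ~ N(0, I), which already captures noise correlations and has an exact likelihood. This is a Gaussian toy stand-in, not the paper's normalizing-flow architecture; the mean `mu` stands in for a deep network's prediction, and richer flows would stack learned nonlinear layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "recorded" responses: stimulus-driven mean mu plus correlated noise.
n_trials, n_neurons = 5000, 3
mu = np.array([2.0, 1.0, 3.0])                 # stand-in for a deep net's output
L_true = np.array([[1.0, 0.0, 0.0],
                   [0.6, 0.8, 0.0],
                   [0.3, 0.2, 0.9]])
responses = mu + rng.normal(size=(n_trials, n_neurons)) @ L_true.T

# Fit the linear flow by maximum likelihood: for the Gaussian case this is
# just the Cholesky factor of the empirical noise covariance.
resid = responses - mu
L_hat = np.linalg.cholesky(np.cov(resid, rowvar=False))

def log_prob(r):
    """Exact log-likelihood under the flow: log N(z; 0, I) - log|det L|."""
    z = np.linalg.solve(L_hat, r - mu)
    return (-0.5 * z @ z - 0.5 * len(z) * np.log(2 * np.pi)
            - np.log(np.abs(np.diag(L_hat))).sum())
```

The exactness of `log_prob` is the point of the flow-based approach: the model trains end-to-end on likelihood, with no variational approximations over latent states.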


The CLIP Model is Secretly an Image-to-Prompt Converter

Neural Information Processing Systems

The Stable Diffusion model is a prominent text-to-image generation model that relies on a text prompt as its input, which is encoded using the Contrastive Language-Image Pre-Training (CLIP) model. However, text prompts have limitations when it comes to incorporating implicit information from reference images. Existing methods have attempted to address this limitation by employing expensive training procedures involving millions of training samples for image-to-image generation. In contrast, this paper demonstrates that the CLIP model, as utilized in Stable Diffusion, inherently possesses the ability to instantaneously convert images into text prompts. Such an image-to-prompt conversion can be achieved by utilizing a linear projection matrix that is calculated in a closed form. Moreover, the paper showcases that this capability can be further enhanced by either utilizing a small amount of similar-domain training data (approximately 100 images) or incorporating several online training steps (around 30 iterations) on the reference images.
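The "closed-form linear projection" idea can be sketched generically: given paired image embeddings X and prompt embeddings Y, a ridge-regression solve yields a projection matrix in one step, with no gradient training. The data below is synthetic and the dimensions are illustrative (real CLIP embeddings are e.g. 768-dimensional); the paper derives its specific matrix from CLIP's own weights, so this is the general recipe rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for paired embeddings: image embeddings X and the
# corresponding prompt (text) embeddings Y.
n, d_img, d_txt = 400, 64, 48
X = rng.normal(size=(n, d_img))
W_true = rng.normal(size=(d_img, d_txt)) / np.sqrt(d_img)
Y = X @ W_true + 0.01 * rng.normal(size=(n, d_txt))

# Closed-form (ridge) projection: W = (X^T X + lam I)^{-1} X^T Y.
# A single linear solve; converting a new image embedding into a prompt
# embedding is then one matrix multiply.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(d_img), X.T @ Y)

prompt_emb = X[:1] @ W   # image-to-prompt conversion for one image
```

The small ridge term `lam` keeps the solve well-conditioned when the embedding dimension approaches the number of paired samples.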


BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs

Neural Information Processing Systems

Detecting which nodes in graphs are outliers is a relatively new machine learning task with numerous applications. Despite the proliferation of algorithms developed in recent years for this task, there has been no standard comprehensive setting for performance evaluation. Consequently, it has been difficult to understand which methods work well and when under a broad range of settings. To bridge this gap, we present what is, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND. Based on our experimental results, we discuss the pros and cons of existing graph outlier detection algorithms, and we highlight opportunities for future research.


Biologically Inspired Dynamic Thresholds for Spiking Neural Networks

Neural Information Processing Systems

The dynamic membrane potential threshold, as one of the essential properties of a biological neuron, is a spontaneous regulation mechanism that maintains neuronal homeostasis, i.e., the constant overall spiking firing rate of a neuron. As such, the neuron firing rate is regulated by a dynamic spiking threshold, which has been extensively studied in biology. Existing work in the machine learning community does not employ bioinspired spiking threshold schemes. This work aims at bridging this gap by introducing a novel bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs). The proposed BDETT scheme mirrors two bioplausible observations: a dynamic threshold has 1) a positive correlation with the average membrane potential and 2) a negative correlation with the preceding rate of depolarization.
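The two correlations the abstract states can be written directly into a leaky integrate-and-fire update: the threshold rises with the running average membrane potential and falls with the preceding rate of depolarization. The sketch below is a toy illustration of those two signs, not the BDETT scheme itself; the coefficients, the leak constant, and the input are all illustrative assumptions.

```python
import numpy as np

def lif_dynamic_threshold(inputs, tau=0.9, theta0=1.0, a=0.5, b=0.5):
    """Toy LIF neuron with a BDETT-style dynamic threshold:
    positively correlated with the average membrane potential (a > 0),
    negatively correlated with the preceding depolarization rate (b > 0)."""
    v, v_prev = 0.0, 0.0
    v_hist, spikes, thresholds = [], [], []
    for x in inputs:
        v = tau * v + x                      # leaky integration
        v_hist.append(v)
        depol_rate = v - v_prev              # preceding rate of depolarization
        theta = theta0 + a * np.mean(v_hist) - b * depol_rate
        thresholds.append(theta)
        if v >= theta:                       # spike and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
        v_prev = v
    return np.array(spikes), np.array(thresholds)

spikes, thetas = lif_dynamic_threshold(np.full(50, 0.6))
```

Under constant drive the threshold settles into a regular rhythm with the membrane potential, which is the homeostatic effect the abstract describes: the dynamic threshold keeps the overall firing rate roughly constant.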


Driverless cars are mostly safer than humans – but worse at turns

New Scientist

One of the largest accident studies yet suggests self-driving cars may be safer than human drivers in routine circumstances – but it also shows the technology struggles more than humans during low-light conditions and when performing turns. The findings come at a time when autonomous vehicles are already driving in several US cities. The GM-owned company Cruise is trying to restart driverless car testing after a pedestrian-dragging incident in March led California to suspend its operating permit. Meanwhile, Google spin-off Waymo has been gradually expanding robotaxi operations in Austin, Los Angeles, Phoenix and San Francisco. "It is important to improve the safety of autonomous vehicles under dawn and dusk or turning conditions," says Shengxuan Ding at the University of Central Florida.