Learning To Communicate Over An Unknown Shared Network

Agarwal, Shivangi, Asija, Adi, Kaul, Sanjit K., Bhattacharya, Arani, Anand, Saket

arXiv.org Artificial Intelligence

As robots (edge-devices, agents) find uses in an increasing number of settings and edge-cloud resources become pervasive, wireless networks will often be shared by flows of data traffic that result from communication between agents and the corresponding edge-cloud. In such settings, an agent communicating with the edge-cloud is unaware of the state of the network resource, which evolves in response not just to the agent's own communication at any given time but also to communication by other agents, which remains unknown to the agent. We address the challenge of an agent learning a policy that allows it to decide whether or not to communicate with its cloud node, using the limited feedback it obtains from its own attempts to communicate, to optimize its utility. The policy must generalize well to any number of other agents sharing the network and must not require training for any particular network configuration. Our proposed policy is a DRL model, Query Net (QNet), that we train using a proposed simulation-to-real framework. Our simulation model has just one parameter and is agnostic to the specific configuration of any wireless network. By suitably randomizing this parameter, it allows an agent's policy to be trained over a wide range of outcomes that the agent's communication with its edge-cloud node may face when using a shared network. We propose a learning algorithm that addresses challenges observed in training QNet. We validate our simulation-to-real approach through experiments conducted on real wireless networks, including WiFi and cellular. We compare QNet with other policies to demonstrate its efficacy. The WiFi experiments involved as few as five agents, resulting in barely any contention for the network, to as many as fifty agents, resulting in severe contention. The cellular experiments spanned a broad range of network conditions, with baseline RTT ranging from a low of 0.07 seconds to a high of 0.83 seconds.
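As a rough illustration of the idea described in this abstract (not the paper's actual model or training procedure), the sketch below pairs a one-parameter "shared network" simulator with a tiny query/no-query policy that learns only from the feedback of its own attempts. All names (SharedNetworkSim, QueryPolicy), the history length, the query cost, and the REINFORCE-style update are hypothetical choices for the sketch.

```python
# Illustrative sketch only: one-parameter shared-network simulator plus a
# tiny query/no-query policy trained on the agent's own limited feedback.
import numpy as np

rng = np.random.default_rng(0)

class SharedNetworkSim:
    """Single-parameter abstraction of a shared network: `load` in [0, 1]
    controls how likely a query misses the agent's deadline."""
    def __init__(self, load):
        self.load = load
    def query(self):
        # Returns 1 if the response arrived in time, 0 otherwise.
        return int(rng.random() > self.load)

class QueryPolicy:
    """Logistic policy over a short history of the agent's own outcomes."""
    def __init__(self, hist_len=4, lr=0.05):
        self.w = np.zeros(hist_len + 1)   # +1 for a bias term
        self.lr = lr
    def prob_query(self, hist):
        x = np.append(hist, 1.0)
        return 1.0 / (1.0 + np.exp(-self.w @ x)), x
    def update(self, x, action, reward, p):
        # REINFORCE-style gradient of the log-probability of the taken action.
        self.w += self.lr * reward * (action - p) * x

def train(episodes=2000, steps=50, query_cost=0.2):
    policy = QueryPolicy()
    for _ in range(episodes):
        net = SharedNetworkSim(load=rng.random())   # randomize the one parameter
        hist = np.zeros(4)
        for _ in range(steps):
            p, x = policy.prob_query(hist)
            act = int(rng.random() < p)
            if act:
                ok = net.query()
                reward = ok - query_cost            # timely response minus cost
                hist = np.roll(hist, 1); hist[0] = ok
            else:
                reward = 0.0
            policy.update(x, act, reward, p)
    return policy

if __name__ == "__main__":
    print("learned weights:", np.round(train().w, 3))
```

Randomizing the single `load` parameter per episode mirrors the abstract's point that training covers a wide range of network outcomes without committing to any particular network configuration.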


QSViT: A Methodology for Quantizing Spiking Vision Transformers

Putra, Rachmad Vidya Wicaksana, Iftikhar, Saad, Shafique, Muhammad

arXiv.org Artificial Intelligence

Vision Transformer (ViT)-based models have shown state-of-the-art performance (e.g., accuracy) in vision-based AI tasks. However, realizing their capability in resource-constrained embedded AI systems is challenging due to their inherently large memory footprints and complex computations, which incur high power/energy consumption. Recently, Spiking Vision Transformer (SViT)-based models have emerged as an alternative class of low-power ViT networks. However, their large memory footprints still hinder their applicability to resource-constrained embedded AI systems. Therefore, there is a need for a methodology to compress SViT models without significantly degrading accuracy. To address this, we propose QSViT, a novel design methodology that compresses SViT models through a systematic quantization strategy across different network layers. To do this, our QSViT employs several key steps: (1) investigating the impact of different precision levels in different network layers, (2) identifying the appropriate base quantization settings to guide bit-precision reduction, (3) performing a guided quantization strategy based on those base settings to select the appropriate quantization setting, and (4) developing an efficient quantized network based on the selected quantization setting. The experimental results demonstrate that our QSViT methodology achieves 22.75% memory saving and 21.33% power saving while maintaining accuracy within 2.1% of the original non-quantized SViT model on the ImageNet dataset. These results highlight the potential of the QSViT methodology to pave the way toward efficient SViT deployment on resource-constrained embedded AI systems.
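To make the layer-wise quantization idea concrete, here is a minimal sketch of uniform per-layer weight quantization with a greedy bit-width search guided by reconstruction error. The candidate bit-widths, the error budget, and the layer names are placeholders, not QSViT's actual settings or selection criterion.

```python
# Illustrative sketch only: uniform per-layer quantization and a greedy
# bit-width search; thresholds and layer names are hypothetical.
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    wmax = np.max(np.abs(w))
    scale = wmax / qmax if wmax > 0 else 1.0
    return np.round(w / scale).clip(-qmax, qmax) * scale

def greedy_bit_search(layers, candidate_bits=(8, 6, 4), budget=0.05):
    """Assign each layer the lowest candidate bit-width whose relative
    quantization error stays within the per-layer budget."""
    config = {}
    for name, w in layers.items():
        chosen = candidate_bits[0]
        for b in candidate_bits:                      # from high to low precision
            err = np.linalg.norm(w - quantize_uniform(w, b)) / (np.linalg.norm(w) + 1e-12)
            if err <= budget:
                chosen = b
        config[name] = chosen
    return config

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = {f"block{i}.attn.qkv": rng.normal(size=(64, 64)) for i in range(4)}
    print(greedy_bit_search(layers))
```

In the paper the guidance comes from accuracy on the task rather than a weight-reconstruction proxy, but the structure of the search (per-layer precision, a base setting, then guided selection) is the same kind of loop.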


Magnetic Resonance Spectroscopy Quantification Aided by Deep Estimations of Imperfection Factors and Macromolecular Signal

Chen, Dicheng, Lin, Meijin, Liu, Huiting, Li, Jiayu, Zhou, Yirong, Kang, Taishan, Lin, Liangjie, Wu, Zhigang, Wang, Jiazheng, Li, Jing, Lin, Jianzhong, Chen, Xi, Guo, Di, Qu, Xiaobo

arXiv.org Artificial Intelligence

Objective: Magnetic Resonance Spectroscopy (MRS) is an important technique for biomedical detection. However, accurately quantifying metabolites with proton MRS is challenging due to serious overlaps of metabolite signals, imperfections arising from non-ideal acquisition conditions, and interference from strong background signals, mainly from macromolecules. The most popular method, LCModel, adopts complicated non-linear least squares to quantify metabolites and addresses these problems by designing empirical priors such as basis sets and imperfection factors. However, when the signal-to-noise ratio of the MRS signal is low, its solution may deviate substantially. Methods: Linear Least Squares (LLS) is integrated with deep learning to reduce the complexity of the overall quantification. First, a neural network is designed to explicitly predict the imperfection factors and the overall signal from macromolecules. Then, metabolite quantification is solved analytically with the introduced LLS. In our Quantification Network (QNet), the LLS takes part in the backpropagation of network training, which feeds the quantification error back into metabolite spectrum estimation. This scheme greatly improves generalization to metabolite concentrations unseen during training compared to end-to-end deep learning methods. Results: Experiments show that, compared with LCModel, the proposed QNet has smaller quantification errors on simulated data and presents more stable quantification for 20 healthy in vivo datasets over a wide range of signal-to-noise ratios. QNet also outperforms other end-to-end deep learning methods. Conclusion: This study provides an intelligent, reliable, and robust MRS quantification. Significance: QNet is the first LLS quantification aided by deep learning.
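The key mechanism here is a least-squares solve that sits inside the training graph, so the analytic fit participates in backpropagation. The sketch below shows that mechanism only, with synthetic data: a small network predicts a smooth background (standing in for the imperfection and macromolecule terms), and metabolite amplitudes are then obtained by a differentiable LLS solve. The basis matrix, network shape, and the reconstruction loss (used here because the sketch has no labeled concentrations) are assumptions, not the paper's actual model or training objective.

```python
# Illustrative sketch only: differentiable linear least squares after a
# background-predicting network, so the fit error trains the network.
import torch

torch.manual_seed(0)
n_points, n_metab = 256, 5

# Synthetic "metabolite basis" spectra (columns) and a synthetic measurement.
basis = torch.randn(n_points, n_metab)
true_conc = torch.rand(n_metab)
true_bg = torch.linspace(0, 1, n_points) ** 2           # smooth background
spectrum = basis @ true_conc + true_bg + 0.01 * torch.randn(n_points)

# Network predicting the background component from the spectrum itself.
net = torch.nn.Sequential(
    torch.nn.Linear(n_points, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, n_points),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def lls_concentrations(residual):
    """Analytic least-squares fit of metabolite amplitudes (differentiable)."""
    gram = basis.T @ basis
    return torch.linalg.solve(gram, basis.T @ residual)

for step in range(500):
    bg_hat = net(spectrum)
    conc_hat = lls_concentrations(spectrum - bg_hat)     # LLS inside autograd
    recon = basis @ conc_hat + bg_hat
    loss = torch.mean((recon - spectrum) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

print("estimated concentrations:", conc_hat.detach())
print("true concentrations:     ", true_conc)
```

Because `torch.linalg.solve` is differentiable, the quantification error flows back through the analytic solution into the background estimate, which is the feedback loop the abstract describes.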


Learning quantum symmetries with interactive quantum-classical variational algorithms

Lu, Jonathan Z., Bravo, Rodrigo A., Hou, Kaiying, Dagnew, Gebremedhin A., Yelin, Susanne F., Najafi, Khadijeh

arXiv.org Artificial Intelligence

A symmetry of a state $\vert \psi \rangle$ is a unitary operator of which $\vert \psi \rangle$ is an eigenvector. When $\vert \psi \rangle$ is an unknown state supplied by a black-box oracle, the state's symmetries provide key physical insight into the quantum system; symmetries also boost many crucial quantum learning techniques. In this paper, we develop a variational hybrid quantum-classical learning scheme to systematically probe for symmetries of $\vert \psi \rangle$ with no a priori assumptions about the state. This procedure can be used to learn various symmetries at the same time. In order to avoid re-learning already known symmetries, we introduce an interactive protocol with a classical deep neural net. The classical net thereby regularizes against repetitive findings and allows our algorithm to terminate empirically with all possible symmetries found. Our scheme can be implemented efficiently on average with non-local SWAP gates; we also give a less efficient algorithm with only local operations, which may be more appropriate for current noisy quantum devices. We simulate our algorithm on representative families of states, including cluster states and ground states of Rydberg and Ising Hamiltonians. We also find that the numerical query complexity scales well with qubit size.
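As a purely classical stand-in for the variational search described above (not the paper's circuit or protocol), the sketch below optimizes a parameterized unitary U(theta) = Rz(t1) tensor Rz(t2) to maximize the symmetry fidelity |<psi|U(theta)|psi>|^2 for a Bell state, and adds a simple distance-based penalty, standing in for the paper's classical neural-net regularizer, to discourage re-finding symmetries already discovered. The ansatz, penalty form, and optimizer are assumptions made for the sketch.

```python
# Illustrative sketch only: classical optimization of a symmetry ansatz with
# a penalty against previously found solutions.
import numpy as np
from scipy.optimize import minimize

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # Bell state (|00> + |11>)/sqrt(2)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def unitary(theta):
    return np.kron(rz(theta[0]), rz(theta[1]))

def infidelity(theta, found, strength=0.5):
    loss = 1.0 - abs(psi.conj() @ unitary(theta) @ psi) ** 2
    # Penalize parameters close to previously discovered symmetries.
    for f in found:
        loss += strength * np.exp(-np.sum((theta - f) ** 2))
    return loss

found = []
for trial in range(3):
    theta0 = np.random.default_rng(trial).uniform(-np.pi, np.pi, size=2)
    res = minimize(infidelity, theta0, args=(found,), method="Nelder-Mead")
    found.append(res.x)
    fid = abs(psi.conj() @ unitary(res.x) @ psi) ** 2
    print(f"symmetry {trial}: theta = {np.round(res.x, 3)}, fidelity = {fid:.3f}")
```

On real hardware the overlap |<psi|U|psi>|^2 would be estimated with SWAP-test-style measurements rather than computed exactly, which is where the non-local SWAP gates mentioned in the abstract come in.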