Hardware: Overviews
Towards Mobile Sensing with Event Cameras on High-agility Resource-constrained Devices: A Survey
Wang, Haoyang, Guo, Ruishan, Ma, Pengtao, Ruan, Ciyu, Luo, Xinyu, Ding, Wenhua, Zhong, Tianyang, Xu, Jingao, Liu, Yunhao, Chen, Xinlei
With the increasing complexity of mobile device applications, these devices are evolving toward high agility. This shift imposes new demands on mobile sensing, particularly in terms of achieving high accuracy and low latency. Event-based vision has emerged as a disruptive paradigm, offering high temporal resolution, low latency, and energy efficiency, making it well-suited for high-accuracy and low-latency sensing tasks on high-agility platforms. However, the presence of substantial noisy events, the lack of inherent semantic information, and the large data volume pose significant challenges for event-based data processing on resource-constrained mobile devices. This paper surveys the literature from 2014 to 2024 and provides a comprehensive overview of event-based mobile sensing systems, covering fundamental principles, event abstraction methods, algorithmic advancements, and hardware and software acceleration strategies. We also discuss key applications of event cameras in mobile sensing, including visual odometry, object tracking, optical flow estimation, and 3D reconstruction, while highlighting the challenges associated with event data processing, sensor fusion, and real-time deployment. Furthermore, we outline future research directions, such as improving event camera hardware with advanced optics, leveraging neuromorphic computing for efficient processing, and integrating bio-inspired algorithms to enhance perception. To support ongoing research, we provide an open-source Online Sheet with curated resources and recent developments. We hope this survey serves as a valuable reference, facilitating the adoption of event-based vision across diverse applications.
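To make the event-abstraction step above concrete, the following minimal sketch (Python with NumPy) accumulates a stream of (x, y, timestamp, polarity) events into a fixed-size 2D frame, one common low-cost representation for downstream processing on constrained devices; the event tuple format and sensor size are illustrative assumptions, not tied to any particular camera SDK or to the survey's own pipeline.

import numpy as np

def events_to_frame(events, height, width):
    # Accumulate (x, y, t, polarity) events into a signed 2D histogram.
    # events: iterable of (x, y, t, p) with p in {-1, +1}.
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        frame[y, x] += p  # +1 for ON events, -1 for OFF events
    return frame

# Toy usage: three synthetic events on a 4x4 sensor.
demo = [(0, 0, 0.001, +1), (1, 2, 0.002, -1), (0, 0, 0.003, +1)]
print(events_to_frame(demo, 4, 4))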
Conditional Synthesis of 3D Molecules with Time Correction Sampler
Diffusion models have demonstrated remarkable success in various domains, including molecular generation. However, conditional molecular generation remains a fundamental challenge due to an intrinsic trade-off between targeting specific chemical properties and generating meaningful samples from the data distribution.
Towards Hardware Supported Domain Generalization in DNN-Based Edge Computing Devices for Health Monitoring
Loh, Johnson, Dudchenko, Lyubov, Viga, Justus, Gemmeke, Tobias
Deep neural network (DNN) models have shown remarkable success in many real-world scenarios, such as object detection and classification. Unfortunately, these models are not yet widely adopted in health monitoring due to exceptionally high requirements for model robustness and deployment in highly resource-constrained devices. In particular, the acquisition of biosignals, such as electrocardiogram (ECG), is subject to large variations between training and deployment, necessitating domain generalization (DG) for robust classification quality across sensors and patients. The continuous monitoring of ECG also requires the execution of DNN models in convenient wearable devices, which is achieved by specialized ECG accelerators with small form factor and ultra-low power consumption. However, combining DG capabilities with ECG accelerators remains a challenge. This article provides a comprehensive overview of ECG accelerators and DG methods and discusses the implications of combining both domains, enabling multi-domain ECG monitoring with emerging algorithm-hardware co-optimized systems. Within this context, an approach based on correction layers is proposed to deploy DG capabilities on the edge. Here, fine-tuning for unknown domains is limited to a single layer, while the remainder of the DNN model remains unmodified. Thus, the computational complexity (CC) of DG is reduced with minimal memory overhead compared to conventional fine-tuning of the whole DNN model. The DNN model-dependent CC is reduced by more than 2.5x compared to whole-model fine-tuning, with an average increase in F1 score of more than 20% on the generalized target domain. In summary, this article provides a novel perspective on robust DNN classification on the edge for health monitoring applications.
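A minimal sketch of the correction-layer idea described above, in PyTorch: a pretrained classifier is frozen and only one small inserted layer is fine-tuned for an unseen domain, so adaptation touches very few parameters. The backbone, layer placement, and signal dimensions here are placeholders chosen for illustration, not the authors' architecture.

import torch
import torch.nn as nn

# Hypothetical pretrained ECG classifier (stand-in for an accelerator-friendly model).
backbone = nn.Sequential(nn.Conv1d(1, 8, 7), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                         nn.Flatten(), nn.Linear(8, 5))

# Correction layer: a tiny learnable transform inserted at the input.
correction = nn.Conv1d(1, 1, kernel_size=1, bias=True)
model = nn.Sequential(correction, backbone)

# Freeze the backbone so domain adaptation trains only the correction layer.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(correction.parameters(), lr=1e-3)

x = torch.randn(4, 1, 256)                     # dummy batch from the unseen target domain
y = torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()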
Towards Quantum Tensor Decomposition in Biomedical Applications
Burch, Myson, Zhang, Jiasen, Idumah, Gideon, Doga, Hakan, Lartey, Richard, Yehia, Lamis, Yang, Mingrui, Yildirim, Murat, Karaayvaz, Mihriban, Shehab, Omar, Guo, Weihong, Ni, Ying, Parida, Laxmi, Li, Xiaojuan, Bose, Aritra
Tensor decomposition has emerged as a powerful framework for feature extraction in multi-modal biomedical data. In this review, we present a comprehensive analysis of tensor decomposition methods, including Tucker, CANDECOMP/PARAFAC, and spiked tensor decomposition, and their diverse applications across biomedical domains such as imaging, multi-omics, and spatial transcriptomics. To systematically investigate the literature, we applied a topic modeling-based approach that identifies and groups distinct thematic sub-areas in biomedicine where tensor decomposition has been used, thereby revealing key trends and research directions. We evaluate challenges related to the scalability of latent spaces and to determining the optimal tensor rank, which often hinder the extraction of meaningful features from increasingly large and complex datasets. Additionally, we discuss recent advances in quantum algorithms for tensor decomposition, exploring how quantum computing can be leveraged to address these challenges. Our study includes a preliminary resource estimation analysis for quantum computing platforms and examines the feasibility of implementing quantum-enhanced tensor decomposition methods on near-term quantum devices. Collectively, this review not only synthesizes current applications and challenges of tensor decomposition in biomedical analyses but also outlines promising quantum computing strategies to enhance its impact on deriving actionable insights from complex biomedical data.
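As a small worked example of the CANDECOMP/PARAFAC (CP) model mentioned above, the sketch below fits a low-rank decomposition of a 3-way array with plain alternating least squares in NumPy; it is an illustrative classical baseline, not the spiked or quantum-enhanced variants discussed in the review, and the helper names are ours.

import numpy as np

def khatri_rao(U, V):
    # Column-wise Khatri-Rao product; rows indexed by (row of U, row of V) in row-major order.
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=50, seed=0):
    # Fit factor matrices A, B, C of a rank-`rank` CP model to a 3-way array X.
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), X.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), X.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), X.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# Toy check: recover a synthetic rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # should be small after convergence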
Qubit-Based Framework for Quantum Machine Learning: Bridging Classical Data and Quantum Algorithms
This paper dives into the exciting and rapidly growing field of quantum computing, explaining its core ideas, current progress, and how it could revolutionize the way we solve complex problems. It starts by breaking down the basics, like qubits, quantum circuits, and how principles like superposition and entanglement make quantum computers fundamentally different from, and for certain tasks far more powerful than, the classical computers we use today. We also explore how quantum computing deals with complex problems and why it is uniquely suited for challenges classical systems struggle to handle. A big part of this paper focuses on Quantum Machine Learning (QML), where the strengths of quantum computing meet the world of artificial intelligence. By processing massive datasets and optimizing intricate algorithms, quantum systems offer new possibilities for machine learning. We highlight different approaches to combining quantum and classical computing, showing how they can work together to produce faster and more accurate results. Additionally, we explore the tools and platforms available, such as TensorFlow Quantum, Qiskit, and PennyLane, that are helping researchers and developers bring these theories to life. Of course, quantum computing has its hurdles. Challenges like scaling up hardware, correcting errors, and keeping qubits stable are significant roadblocks. Yet, with rapid advancements in cloud-based platforms and innovative technologies, the potential of quantum computing feels closer than ever. This paper aims to offer readers a clear and comprehensive introduction to quantum computing, its role in machine learning, and the immense possibilities it holds for the future of technology.
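As a concrete illustration of the qubit, superposition, and entanglement ideas summarized above, the short sketch below prepares a two-qubit Bell state with Qiskit and inspects its statevector; it is a minimal circuit example, not a QML workflow, and assumes a standard Qiskit installation.

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Two-qubit circuit: Hadamard creates superposition, CNOT entangles the qubits.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

state = Statevector.from_instruction(qc)
print(state)                       # (|00> + |11>) / sqrt(2)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}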
How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits
Mohseni, Masoud, Scherer, Artur, Johnson, K. Grace, Wertheim, Oded, Otten, Matthew, Aadit, Navid Anjum, Alexeev, Yuri, Bresniker, Kirk M., Camsari, Kerem Y., Chapman, Barbara, Chatterjee, Soumitra, Dagnew, Gebremedhin A., Esposito, Aniello, Fahim, Farah, Fiorentino, Marco, Gajjar, Archit, Khalid, Abdullah, Kong, Xiangzhou, Kulchytskyy, Bohdan, Kyoseva, Elica, Li, Ruoyu, Lott, P. Aaron, Markov, Igor L., McDermott, Robert F., Pedretti, Giacomo, Rao, Pooja, Rieffel, Eleanor, Silva, Allyson, Sorebo, John, Spentzouris, Panagiotis, Steiner, Ziv, Torosov, Boyan, Venturelli, Davide, Visser, Robert J., Webb, Zak, Zhan, Xin, Cohen, Yonatan, Ronagh, Pooya, Ho, Alan, Beausoleil, Raymond G., Martinis, John M.
In the span of four decades, quantum computation has evolved from an intellectual curiosity to a potentially realizable technology. Today, small-scale demonstrations have become possible for quantum algorithmic primitives on hundreds of physical qubits and proof-of-principle error-correction on a single logical qubit. Nevertheless, despite significant progress and excitement, the path toward a full-stack scalable technology is largely unknown. There are significant outstanding quantum hardware, fabrication, software architecture, and algorithmic challenges that are either unresolved or overlooked. These issues could seriously undermine the arrival of utility-scale quantum computers for the foreseeable future. Here, we provide a comprehensive review of these scaling challenges. We show how the road to scaling could be paved by adopting existing semiconductor technology to build much higher-quality qubits, employing system engineering approaches, and performing distributed quantum computation within heterogeneous high-performance computing infrastructures. These opportunities for research and development could unlock certain promising applications, in particular, efficient quantum simulation/learning of quantum data generated by natural or engineered quantum systems. To estimate the true cost of such promises, we provide a detailed resource and sensitivity analysis for classically hard quantum chemistry calculations on surface-code error-corrected quantum computers given current, target, and desired hardware specifications based on superconducting qubits, accounting for a realistic distribution of errors. Furthermore, we argue that, to tackle industry-scale classical optimization and machine learning problems in a cost-effective manner, heterogeneous quantum-probabilistic computing with custom-designed accelerators should be considered as a complementary path toward scalability.
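To give a rough feel for the kind of surface-code resource estimation discussed above, the sketch below applies the usual back-of-the-envelope scaling, a logical error rate of roughly A*(p/p_th)^((d+1)/2) and about 2*d^2 physical qubits per logical qubit at code distance d; the constants, error rates, and logical-qubit count are illustrative assumptions, not figures from the paper.

def distance_for_target(p_phys, p_target, p_th=1e-2, A=0.1):
    # Smallest odd code distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target.
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

p_phys, p_target = 1e-3, 1e-12       # assumed physical and per-logical-qubit error rates
d = distance_for_target(p_phys, p_target)
phys_per_logical = 2 * d ** 2        # rough count including ancilla qubits
n_logical = 1000                     # assumed logical qubits for a chemistry-scale run
print(d, phys_per_logical, n_logical * phys_per_logical)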
Digital Twin in Industries: A Comprehensive Survey
Zami, Md Bokhtiar Al, Shaon, Shaba, Quy, Vu Khanh, Nguyen, Dinh C.
Industrial networks are undergoing rapid transformation driven by the convergence of emerging technologies that are revolutionizing conventional workflows, enhancing operational efficiency, and fundamentally redefining the industrial landscape across diverse sectors. Amidst this revolution, Digital Twin (DT) emerges as a transformative innovation that seamlessly integrates real-world systems with their virtual counterparts, bridging the physical and digital realms. In this article, we present a comprehensive survey of the emerging DT-enabled services and applications across industries, beginning with an overview of DT fundamentals and components before moving to a discussion of key enabling technologies for DT. Unlike prior literature, we investigate and analyze the capabilities of DT across a wide range of industrial services, including data sharing, data offloading, integrated sensing and communication, content caching, resource allocation, wireless networking, and metaverse. In particular, we present an in-depth technical discussion of the roles of DT in industrial applications across various domains, including manufacturing, healthcare, transportation, energy, agriculture, space, oil and gas, as well as robotics. Throughout the technical analysis, we delve into real-time data communications between physical and virtual platforms to enable industrial DT networking. Subsequently, we extensively explore and analyze a wide range of major privacy and security issues in DT-based industry. Taxonomy tables and the key research findings from the survey are also given, emphasizing important insights into the significance of DT in industries. Finally, we point out future research directions to spur further research in this promising area.
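As a toy illustration of the physical-to-virtual synchronization that DT networking relies on (discussed above), the sketch below mirrors a simulated sensor reading into a virtual replica and flags drift between the two; the classes, fields, and tolerance are hypothetical and not drawn from any specific DT platform.

from dataclasses import dataclass

@dataclass
class PhysicalAsset:
    temperature: float

@dataclass
class DigitalTwin:
    temperature: float = 0.0

    def sync(self, asset: PhysicalAsset):
        # Mirror the latest physical measurement into the virtual replica.
        self.temperature = asset.temperature

    def drift_alarm(self, asset: PhysicalAsset, tol: float = 1.0) -> bool:
        # True if the virtual state has fallen out of step with the physical asset.
        return abs(self.temperature - asset.temperature) > tol

machine = PhysicalAsset(temperature=72.5)
twin = DigitalTwin()
twin.sync(machine)
machine.temperature = 74.2            # the physical process moves on
print(twin.drift_alarm(machine))      # True once the gap exceeds the tolerance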
Moving Forward: A Review of Autonomous Driving Software and Hardware Systems
Wang, Xu, Maleki, Mohammad Ali, Azhar, Muhammad Waqar, Trancoso, Pedro
With their potential to significantly reduce traffic accidents, enhance road safety, optimize traffic flow, and decrease congestion, autonomous driving systems have been a major focus of research and development in recent years. Beyond these immediate benefits, they offer long-term advantages in promoting sustainable transportation by reducing emissions and fuel consumption. Achieving a high level of autonomy across diverse conditions requires a comprehensive understanding of the environment. This is accomplished by processing data from sensors such as cameras, radars, and LiDARs through a software stack that relies heavily on machine learning algorithms. These ML models demand significant computational resources and involve large-scale data movement, presenting challenges for hardware to execute them efficiently and at high speed. In this survey, we first outline and highlight the key components of self-driving systems, covering input sensors, commonly used datasets, simulation platforms, and the software architecture. We then explore the underlying hardware platforms that support the execution of these software systems. By presenting a comprehensive view of autonomous driving systems and their increasing demands, particularly for higher levels of autonomy, we analyze the performance and efficiency of scaled-up off-the-shelf GPU/CPU-based systems, emphasizing the challenges within the computational components. Through examples showcasing the diverse computational and memory requirements in the software stack, we demonstrate how more specialized hardware and processing closer to memory can enable more efficient execution with lower latency. Finally, based on current trends and future demands, we conclude by speculating what a future hardware platform for autonomous driving might look like.
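To make the computational and memory demands mentioned above more tangible, the sketch below estimates multiply-accumulate operations and activation traffic for a single convolution layer of a perception backbone; the layer dimensions, data type, and frame size are illustrative assumptions, not measurements from any particular driving stack.

def conv2d_cost(h, w, c_in, c_out, k, bytes_per_elem=2):
    # Rough MAC count and activation memory (bytes) for one stride-1 "same" conv layer.
    macs = h * w * c_in * c_out * k * k
    act_bytes = (h * w * c_in + h * w * c_out) * bytes_per_elem  # input + output feature maps
    return macs, act_bytes

# Example: an early layer on a 1920x1080 camera frame, 3 -> 64 channels, 3x3 kernel, fp16.
macs, act_bytes = conv2d_cost(1080, 1920, 3, 64, 3)
print(f"{macs / 1e9:.1f} GMACs, {act_bytes / 1e6:.1f} MB of activations per frame")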
IBM Quantum Computers: Evolution, Performance, and Future Directions
Quantum computers represent a transformative frontier in computational technology, promising exponential speedups beyond classical computing limits. IBM Quantum has led significant advancements in both hardware and software, providing access to quantum hardware via IBM Cloud since 2016, a milestone as the world's first publicly accessible quantum computer. This article explores IBM's quantum computing journey, focusing on the development of practical quantum computers. We summarize the evolution and advancements of IBM Quantum's processors across generations, including their recent breakthrough surpassing the 1,000-qubit barrier. The paper reviews detailed performance metrics across various hardware, tracing their evolution over time and highlighting IBM Quantum's transition from the noisy intermediate-scale quantum (NISQ) computing era towards fault-tolerant quantum computing capabilities.
A Comprehensive Survey on Kolmogorov Arnold Networks (KAN)
Hou, Yuntian, Zhang, Di, Wu, Jinheng, Feng, Xiaohang
Through this comprehensive survey of Kolmogorov-Arnold Networks (KAN), we have gained a thorough understanding of its theoretical foundation, architectural design, application scenarios, and current research progress. KAN, with its unique architecture and flexible activation functions, excels in handling complex data patterns and nonlinear relationships, demonstrating wide-ranging application potential. While challenges remain, KAN is poised to pave the way for innovative solutions in various fields, potentially revolutionizing how we approach complex computational problems.
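As a minimal sketch of the edge-wise learnable activation idea behind KAN, the toy layer below (in PyTorch) parameterizes every input-to-output connection as a small sum of fixed Gaussian basis functions with trainable coefficients; it is a simplified stand-in for the B-spline formulation of the original work, and all names and sizes here are ours.

import torch
import torch.nn as nn

class TinyKANLayer(nn.Module):
    # Each edge (input i -> output j) applies its own learnable 1-D function; outputs sum over inputs.
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-2, 2, n_basis), requires_grad=False)
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):                                          # x: (batch, in_dim)
        # Gaussian basis expansion of every input coordinate.
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)    # (batch, in_dim, n_basis)
        # Evaluate the per-edge functions and sum over inputs.
        return torch.einsum('bik,oik->bo', phi, self.coef)

layer = TinyKANLayer(4, 3)
print(layer(torch.randn(2, 4)).shape)   # torch.Size([2, 3])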