nanowire
QDFlow: A Python package for physics simulations of quantum dot devices
Buterakos, Donovan L., Kalantre, Sandesh S., Ziegler, Joshua, Taylor, Jacob M., Zwolak, Justyna P.
Recent advances in machine learning (ML) have accelerated progress in calibrating and operating quantum dot (QD) devices. However, most ML approaches rely on access to large, representative datasets that capture the full spectrum of data quality encountered in practice, including both high- and low-quality data for training, benchmarking, and validation, together with labels capturing key features of the device state. Collating such datasets experimentally is challenging due to limited data availability, slow measurement bandwidths, and the labor-intensive nature of labeling. QDFlow is an open-source physics simulator for multi-QD arrays that generates realistic synthetic data with ground-truth labels. QDFlow combines a self-consistent Thomas-Fermi solver, a dynamic capacitance model, and flexible noise modules to simulate charge stability diagrams and ray-based data closely resembling experiments. With an extensive set of tunable parameters and customizable noise models, QDFlow supports the creation of large, diverse datasets for ML development, benchmarking, and quantum device research.
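To give a flavor of what such a simulator computes, here is a minimal sketch of a labeled double-dot charge stability diagram under a constant-interaction model. This is deliberately far simpler than QDFlow's self-consistent Thomas-Fermi solver, and none of the names, parameter values, or functions below come from the QDFlow API; they are hypothetical stand-ins.

```python
import numpy as np

# Illustrative constant-interaction model for a double quantum dot.
# All parameter names and values are hypothetical, chosen only to
# produce a recognizable honeycomb-like stability diagram.
EC1, EC2, ECM = 1.0, 1.0, 0.25    # charging energies and mutual coupling (arb. units)
ALPHA = np.array([[1.0, 0.2],     # lever arms: gate j -> dot i
                  [0.2, 1.0]])

def ground_state_charges(vg1, vg2, n_max=4):
    """Return the (N1, N2) occupation minimizing the electrostatic energy."""
    n1g, n2g = ALPHA @ np.array([vg1, vg2])  # gate-induced charges
    best, best_u = (0, 0), np.inf
    for n1 in range(n_max + 1):
        for n2 in range(n_max + 1):
            u = (0.5 * EC1 * (n1 - n1g) ** 2
                 + 0.5 * EC2 * (n2 - n2g) ** 2
                 + ECM * (n1 - n1g) * (n2 - n2g))
            if u < best_u:
                best, best_u = (n1, n2), u
    return best

# Sweep both plunger gates to build a ground-truth-labeled diagram:
# each pixel stores the total electron number of the ground state.
v = np.linspace(0, 3, 120)
diagram = np.array([[sum(ground_state_charges(v1, v2)) for v1 in v] for v2 in v])
```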
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.04)
- North America > Canada (0.04)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
Topological gap protocol based machine learning optimization of Majorana hybrid wires
Thamm, Matthias, Rosenow, Bernd
Majorana zero modes in superconductor-nanowire hybrid structures are a promising candidate for topologically protected qubits with the potential to be used in scalable structures. Currently, disorder in such Majorana wires is a major challenge, as it can destroy the topological phase and thus reduce the yield in the fabrication of Majorana devices. We study machine learning optimization of a gate array in proximity to a grounded Majorana wire, which allows us to reliably compensate for even strong disorder. We propose a metric for optimization that is inspired by the topological gap protocol and which can be implemented based on measurements of the non-local conductance through the wire.
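As a rough illustration of what a topological-gap-protocol-inspired score might look like, the toy function below rewards zero-bias conductance peaks at both wire ends while favoring a clean gap in the non-local conductance. Every detail here (array shapes, the in-gap window, the weighting) is a hypothetical stand-in, not the authors' metric.

```python
import numpy as np

# Toy stand-in for a topological-gap-protocol-inspired score.
# The paper defines its metric from measured non-local conductance;
# this sketch only illustrates the general shape of such an objective.
def tgp_score(g_ll, g_rr, g_lr, bias):
    """Score a wire configuration from conductance traces vs. bias voltage.

    g_ll, g_rr : local conductances at the left/right wire ends
    g_lr       : non-local (left-to-right) conductance
    bias       : bias-voltage grid, assumed symmetric around zero
    """
    i0 = np.argmin(np.abs(bias))            # index closest to zero bias
    zbp = g_ll[i0] * g_rr[i0]               # reward zero-bias peaks at both ends
    # Reward a hard bulk gap: small non-local conductance inside the gap
    # (the 10% window is an arbitrary illustrative choice).
    in_gap = np.abs(bias) < 0.1 * bias.max()
    gap_quality = 1.0 / (1.0 + np.mean(np.abs(g_lr[in_gap])))
    return zbp * gap_quality
```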
- Europe > Germany > Saxony > Leipzig (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
Machine learning using magnetic stochastic synapses
Ellis, Matthew O. A., Welbourne, Alex, Kyle, Stephan J., Fry, Paul W., Allwood, Dan A., Hayward, Thomas J., Vasilaki, Eleni
The impressive performance of artificial neural networks has come at the cost of high energy usage and CO$_2$ emissions. Unconventional computing architectures, with magnetic systems as a candidate, have potential as alternative energy-efficient hardware but still face implementation challenges such as stochastic behaviour. Here, we present a methodology for exploiting the traditionally detrimental stochastic effects of magnetic domain-wall motion in nanowires. We demonstrate functional binary stochastic synapses alongside a gradient learning rule that allows their training, with applicability to a range of stochastic systems. The rule, utilising the mean and variance of the neuronal output distribution, finds a trade-off between synaptic stochasticity and energy efficiency depending on the number of measurements of each synapse. For single measurements, the rule results in binary synapses with minimal stochasticity, sacrificing potential performance for robustness. For multiple measurements, synaptic distributions are broad, approximating better-performing continuous synapses. This observation allows us to choose design principles depending on the desired performance and the device's operational speed and energy cost. We verify performance on physical hardware, showing it is comparable to a standard neural network.
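A minimal sketch of the measurement trade-off the abstract describes: a binary stochastic synapse transmits with probability p, and averaging k readouts approximates a continuous weight at the cost of more measurements. This shows only the averaging trade-off, not the authors' mean-and-variance learning rule; all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary stochastic synapse: each synapse transmits its
# input with probability p (the trainable parameter); averaging k
# measurements trades energy for a better estimate of the mean p*x.
def stochastic_synapse_output(p, x, k=1):
    """Average of k Bernoulli(p) samples gating the input x."""
    samples = rng.random((k,) + np.shape(p)) < p   # k stochastic readouts
    return x * samples.mean(axis=0)                # approaches p*x as k grows

p = np.array([0.2, 0.8, 0.5])   # per-synapse transmission probabilities
x = np.ones(3)
print(stochastic_synapse_output(p, x, k=1))    # noisy, strictly binary behaviour
print(stochastic_synapse_output(p, x, k=100))  # approximates continuous synapses
```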
Spray-on smart skin uses AI to rapidly understand hand tasks
A new smart skin developed at Stanford University might foretell a day when people type on invisible keyboards, identify objects by touch alone, or communicate by hand gestures with apps in immersive environments. In a just-published paper in the journal Nature Electronics, the researchers describe a new type of stretchable, biocompatible material that gets sprayed on the back of the hand, like suntan spray. Integrated in the mesh is a tiny electrical network that senses as the skin stretches and bends and, using AI, the researchers can interpret myriad daily tasks from hand motions and gestures. The researchers say it could have applications and implications in fields as far-ranging as gaming, sports, telemedicine, and robotics. So far, several promising methods, such as measuring muscle electrical activity using wristbands or wearable gloves, have been actively explored to enable various hand tasks and gesturing.
Semi-supervised machine learning model for analysis of nanowire morphologies from transmission electron microscopy images
Lu, Shizhao, Montz, Brian, Emrick, Todd, Jayaraman, Arthi
In the field of materials science, microscopy is the first and often only accessible method for structural characterization. There is a growing interest in the development of machine learning methods that can automate the analysis and interpretation of microscopy images. Typically, training machine learning models requires large numbers of images with associated structural labels; however, manual labeling of images requires domain knowledge and is prone to human error and subjectivity. To overcome these limitations, we present a semi-supervised transfer learning approach that uses a small number of labeled microscopy images for training and performs as effectively as methods trained on significantly larger image datasets. Specifically, we train an image encoder with unlabeled images using self-supervised learning methods and use that encoder for transfer learning of different downstream image tasks (classification and segmentation) with a minimal number of labeled images for training. We test the transfer learning ability of two self-supervised learning methods, SimCLR and Barlow-Twins, on transmission electron microscopy (TEM) images. We demonstrate in detail how this machine learning workflow applied to TEM images of protein nanowires enables automated classification of nanowire morphologies (e.g., single nanowires, nanowire bundles, phase separated) as well as segmentation tasks that can serve as groundwork for quantification of nanowire domain sizes and shape analysis. We also extend the application of the machine learning workflow to classification of nanoparticle morphologies and identification of different types of viruses from TEM images.
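A minimal sketch of the transfer-learning step, assuming an encoder has already been pretrained with a self-supervised method such as SimCLR: freeze the backbone and train a small classification head on the few labeled TEM images. The ResNet-18 backbone, the checkpoint path, and the three morphology classes are stand-ins; the paper's actual architectures and training details may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone; a SimCLR-pretrained encoder would be loaded here.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()          # keep the 512-d features, drop the classifier
# encoder.load_state_dict(torch.load("simclr_pretrained.pt"))  # hypothetical checkpoint

for p in encoder.parameters():      # freeze the pretrained backbone
    p.requires_grad = False

head = nn.Linear(512, 3)            # e.g. single wire / bundle / phase separated
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One linear-probe update using a small labeled batch of TEM images."""
    with torch.no_grad():
        feats = encoder(images)     # (batch, 512) frozen features
    loss = loss_fn(head(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```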
- North America > United States > Massachusetts (0.04)
- Europe > Finland > North Karelia > Joensuu (0.04)
- Materials > Chemicals > Commodity Chemicals > Petrochemicals (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.68)
- Health & Medicine > Diagnostic Medicine (0.68)
- Education (0.68)
Cryogenic Neuromorphic Hardware
Islam, Md Mazharul, Alam, Shamiul, Hossain, Md Shafayat, Roy, Kaushik, Aziz, Ahmedullah
The revolution in artificial intelligence (AI) brings enormous storage and data-processing requirements. Large power consumption and hardware overhead have become the main challenges for building next-generation AI hardware. To mitigate this, neuromorphic computing has drawn immense attention due to its excellent capability for data processing with very low power consumption. While relentless research has been underway for years to minimize the power consumption of neuromorphic hardware, we are still a long way from reaching the energy efficiency of the human brain. Furthermore, design complexity and process variation hinder the large-scale implementation of current neuromorphic platforms. Recently, the concept of implementing neuromorphic computing systems at cryogenic temperatures has garnered intense interest thanks to their excellent speed and power metrics. Several cryogenic devices can be engineered to work as neuromorphic primitives with ultra-low power demand. Here, we comprehensively review cryogenic neuromorphic hardware. We classify the existing cryogenic neuromorphic hardware into several hierarchical categories and sketch a comparative analysis based on key performance metrics. Our analysis concisely describes the operation of the associated circuit topologies and outlines the advantages and challenges encountered by the state-of-the-art technology platforms. Finally, we provide insights to circumvent these challenges for the future progression of research.
- North America > United States > Tennessee > Knox County > Knoxville (0.14)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- (3 more...)
- Information Technology (1.00)
- Energy (1.00)
- Health & Medicine > Therapeutic Area (0.46)
Machine learning optimization of Majorana hybrid nanowires
Thamm, Matthias, Rosenow, Bernd
As the complexity of quantum systems such as quantum bit arrays increases, efforts to automate expensive tuning are increasingly worthwhile. We investigate machine-learning-based tuning of gate arrays using the CMA-ES algorithm for the case study of Majorana wires with strong disorder. We find that the algorithm is able to efficiently improve the topological signatures, learn intrinsic disorder profiles, and completely eliminate disorder effects. For example, with only 20 gates, it is possible to fully recover Majorana zero modes destroyed by disorder by optimizing gate voltages.
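The CMA-ES loop itself is straightforward with the open-source `cma` Python package; the sketch below mocks the expensive physics with a toy residual-disorder objective. The 20-gate count comes from the abstract, but the disorder profile and the objective are hypothetical stand-ins for the topological-signature metric the paper actually optimizes.

```python
import numpy as np
import cma  # pip install cma

N_GATES = 20
rng = np.random.default_rng(42)
disorder = rng.normal(0.0, 1.0, N_GATES)   # hypothetical onsite disorder (arb. units)

def objective(gate_voltages):
    """Residual disorder after gate compensation; lower is better.

    A real objective would be a topological-signature metric evaluated
    from transport simulations or measurements, not this toy residual.
    """
    return float(np.sum((disorder + gate_voltages) ** 2))

# Ask/tell optimization loop: sample candidate gate-voltage sets,
# evaluate them, and let CMA-ES adapt its search distribution.
es = cma.CMAEvolutionStrategy(N_GATES * [0.0], 0.5, {"maxfevals": 20000})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [objective(x) for x in candidates])

best = es.result.xbest
print("max residual disorder:", np.max(np.abs(disorder + best)))
```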
- Europe > Germany > Saxony > Leipzig (0.04)
- North America > United States > New York > New York County > New York City (0.04)
Sustainable AI Processing at the Edge
Ollivier, Sébastien, Li, Sheng, Tang, Yue, Chaudhuri, Chayanika, Zhou, Peipei, Tang, Xulong, Hu, Jingtong, Jones, Alex K.
Deep neural networks have become a popular algorithm for a variety of applications on mobile devices, including smartphones, and have recently been expanding to connected and autonomous vehicles (CAVs), robotics, unmanned aerial vehicles (UAVs), and other smart infrastructure. Convolutional Neural Networks (CNNs) have been demonstrated to provide solutions to these problems with relatively high accuracy. While there have been many proposals to improve the performance and energy efficiency of CNN inference, these algorithms are too compute- and data-intensive to execute directly on mobile nodes, which typically operate with limited computational and energy capabilities. Thus, edge servers, now often deployed in conjunction with advanced (e.g., 5G) wireless networks, have become a popular target to accelerate CNN inference. Moreover, due to their deployment in the field, edge servers must operate under size, weight, and power (SWaP) constraints while serving many concurrent requests from mobile clients. Thus, to accelerate CNNs, these edge servers often use energy-efficient accelerators, reduced precision, or both to achieve fast response times while balancing requests from multiple clients and maintaining a low operational energy cost. Recently, there has been a trend to push online training to edge server nodes to avoid communicating large datasets from edge to cloud servers [1]. However, online training typically requires much higher precision and floating-point computation compared to inference. Unfortunately, the proliferation of computing, in both the mobile devices and the edge servers themselves, can come with negative environmental impacts.
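Reduced precision is one of the levers mentioned above. As a concrete if simplified example, the sketch below applies symmetric int8 post-training quantization to a weight tensor; real edge accelerators use calibrated, often per-channel schemes, so this scalar version is illustrative only and not tied to any particular hardware.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus a scale factor (symmetric scheme)."""
    scale = np.max(np.abs(w)) / 127.0          # symmetric range [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"int8 storage is 4x smaller than float32; max abs rounding error = {err:.4f}")
```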
- North America > United States > Texas (0.05)
- North America > United States > New York (0.05)
- North America > United States > California (0.05)
- (2 more...)
- Education > Educational Setting > Online (1.00)
- Energy > Renewable (0.94)
- Energy > Power Industry (0.93)
Electronic skin has a strong future stretching ahead
A material that mimics human skin in strength, stretchability and sensitivity could be used to collect biological data in real time. Electronic skin, or e-skin, may play an important role in next-generation prosthetics, personalized medicine, soft robotics and artificial intelligence. "The ideal e-skin will mimic the many natural functions of human skin, such as sensing temperature and touch, accurately and in real time," says KAUST postdoc Yichen Cai. However, making suitably flexible electronics that can perform such delicate tasks while also enduring the bumps and scrapes of everyday life is challenging, and each material involved must be carefully engineered. Most e-skins are made by layering an active nanomaterial (the sensor) on a stretchy surface that attaches to human skin.
- Information Technology > Architecture > Real Time Systems (0.80)
- Information Technology > Artificial Intelligence > Robots (0.57)
Artificial eye that 'sees' like a human could transform robotics
Scientists have developed an artificial eye that could provide vision for humanoid robots, or even function as a bionic eye for visually impaired people in the future. Researchers from the Hong Kong University of Science and Technology built the ElectroChemical Eye – dubbed EC-Eye – to resemble the size and shape of a biological eye, but with vastly greater potential. The eye mimics the human iris and retina using a lens to focus light onto a dense array of light-sensitive nanowires. Information is then passed through the wires, which act like the brain's visual cortex, to a computer for processing. During tests, the computer was able to recognise the letters 'E', 'I' and 'Y' when they were projected onto the lens.