Artificial synapse


Brain-inspired continual pre-trained learner via silent synaptic consolidation

Ran, Xuming, Yao, Juntao, Wang, Yusong, Xu, Mingkun, Liu, Dianbo

arXiv.org Artificial Intelligence

Pre-trained models have demonstrated impressive generalization capabilities, yet they remain vulnerable to catastrophic forgetting when incrementally trained on new tasks. Existing architecture-based strategies face a central challenge: integrating a pre-trained network with a trainable sub-network complicates the delicate balance between learning plasticity and memory stability across evolving tasks. In this study, we introduce Artsy, a framework inspired by the activation of silent synapses via spike-timing-dependent plasticity observed in mature brains, to enhance the continual learning capabilities of pre-trained models. Artsy integrates two key components. During training, it mimics mature brain dynamics by maintaining memory stability for previously learned knowledge within the pre-trained network while promoting learning plasticity in task-specific sub-networks. During inference, artificial silent and functional synapses establish precise connections between pre-synaptic neurons in the pre-trained network and post-synaptic neurons in the sub-networks, facilitated through synaptic consolidation, thereby enabling effective extraction of relevant information from test samples. Comprehensive experimental evaluations show that our model significantly outperforms conventional methods on class-incremental learning tasks, while also providing enhanced biological interpretability for architecture-based approaches. Moreover, we propose that Artsy offers a promising avenue for simulating biological synaptic mechanisms, potentially advancing our understanding of neural plasticity in both artificial and biological systems. Pre-trained artificial neural networks have demonstrated notable generalization capabilities; however, they are prone to catastrophic forgetting when exposed to sequential training on new datasets, as outlined in previous studies (Wang et al., 2024).
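The training/inference split described above maps naturally onto a small amount of code: a frozen pre-trained backbone provides memory stability, each new task adds a trainable sub-network for plasticity, and a per-task gate plays the part of a synapse that starts silent and is consolidated into a functional connection. The sketch below is a minimal PyTorch illustration under those assumptions; the names (SilentSynapseLearner, gates, add_task) and the sigmoid-gate mechanism are inventions for this example, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SilentSynapseLearner(nn.Module):
    """Hypothetical Artsy-style learner: frozen backbone + gated task heads."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # memory stability: pre-trained weights stay fixed
        self.subnets = nn.ModuleList()       # learning plasticity: one sub-network per task
        self.gates = nn.ParameterList()      # "synapses": near 0 = silent, near 1 = functional
        self.feat_dim, self.num_classes = feat_dim, num_classes

    def add_task(self) -> None:
        self.subnets.append(nn.Linear(self.feat_dim, self.num_classes))
        # New connections begin essentially silent (sigmoid(-4) ~ 0.02) and are
        # consolidated, i.e. driven toward functional, during training.
        self.gates.append(nn.Parameter(torch.tensor([-4.0])))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)             # pre-synaptic activity from the frozen network
        logits = torch.zeros(x.shape[0], self.num_classes)
        for subnet, gate in zip(self.subnets, self.gates):
            logits = logits + torch.sigmoid(gate) * subnet(feats)
        return logits

# Usage with a toy backbone: only the sub-networks and gates receive gradients.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
model = SilentSynapseLearner(backbone, feat_dim=128, num_classes=10)
model.add_task()
print(model(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```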


Mimicking the brain with single transistor artificial neurons - Advanced Science News

#artificialintelligence

The fourth industrial revolution is well underway, with artificial intelligence (AI) at its heart powering new technologies and Internet of Things (IoT) devices, from smartwatches to smart fridges, autonomous cars to home assistants, and security systems to a vast array of sensors. Using conventional computer architecture for practical AI in IoT devices leads to large power demands arising from the repetitive shifting of tremendous amounts of data between processors and memory units. These demands are only set to increase as AI improves and even larger amounts of data are generated. This increased power consumption comes with a potential environmental impact via the greenhouse gases emitted when the electricity is generated by burning fossil fuels. The need to lower energy consumption in IoT technology has led to a search for low-power alternatives that can implement AI.


'Artificial synapse' could make neural networks work more like brains

New Scientist

A resistor that works in a similar way to nerve cells in the body could be used to build neural networks for machine learning. Many large machine learning models rely on increasing amounts of processing power to achieve their results, but this has vast energy costs and produces large amounts of heat. One proposed solution is analogue machine learning, which works like a brain by using electronic devices similar to neurons to act as the parts of the model. However, these devices have so far not been fast, small or efficient enough to provide advantages over digital machine learning. Murat Onen at the Massachusetts Institute of Technology and his colleagues have created a nanoscale resistor that transmits protons from one terminal to another.


Developing an ultra-scalable artificial synapse

#artificialintelligence

A research team, led by Assistant Professor Desmond Loke from the Singapore University of Technology and Design (SUTD), has developed a new type of artificial synapse based on two-dimensional (2D) materials for highly scalable brain-inspired computing. Brain-inspired computing, which mimics how the human brain functions, has drawn significant scientific attention because of its uses in artificial intelligence and its low energy consumption. For brain-inspired computing to work, synapses that remember the connections between two neurons are necessary, just as in the human brain. In developing brains, synapses can be grouped into functional synapses, which are active, and silent synapses, which are inactive under normal conditions.
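To make the functional/silent grouping concrete, here is a toy model of a single synapse in that spirit: a silent synapse carries a latent weight that contributes nothing until repeated paired pre-/post-synaptic activity unsilences it. The class, threshold, and counts are invented for illustration and do not describe the SUTD device.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    weight: float = 0.5
    functional: bool = False   # silent synapses are inactive under normal conditions
    paired_events: int = 0

    def transmit(self, presynaptic_input: float) -> float:
        # A silent synapse passes nothing; a functional one scales its input.
        return self.weight * presynaptic_input if self.functional else 0.0

    def pair_activity(self, unsilence_after: int = 3) -> None:
        # Coincident pre- and post-synaptic activity can convert a silent
        # synapse into a functional one (cf. AMPA-receptor insertion).
        self.paired_events += 1
        if self.paired_events >= unsilence_after:
            self.functional = True

syn = Synapse()
assert syn.transmit(1.0) == 0.0      # silent: no contribution
for _ in range(3):
    syn.pair_activity()
assert syn.transmit(1.0) == 0.5      # now functional
```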


Researchers develop fast, low-energy artificial synapse for advanced AI systems

#artificialintelligence

Brain-inspired computing is a promising candidate for next-generation computing technologies. Developing advanced artificial intelligence (AI) systems that can be as energy-efficient, lightweight, and adaptable as the human brain has attracted significant interest. However, mimicking the brain's neuroplasticity, its ability to change neural network connections, in traditional artificial synapses at ultralow energy is extremely challenging. An artificial synapse, comprising a gap across which two neurons pass electrical signals and communicate with each other, can emulate the brain's efficient neural signal transmission and memory formation. To improve the energy efficiency of the artificial synapse, Loke's research team introduced a nanoscale deposit-only-metal-electrode fabrication process for artificial synapses for the first time. Using deposit-only nanopillar-based germanium-antimony-telluride memristive devices, the team designed a phase-change artificial synaptic device that achieved an all-time-low energy consumption of 1.8 pJ per pair-pulse-based synaptic event, about 82% lower than that of traditional artificial synapses. "The experiments have demonstrated that the artificial synapse based on phase-change materials could perform pair-pulse facilitation/depression, long-term potentiation/depression and spike-timing-dependent plasticity with ultralow energies."
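For readers unfamiliar with the learning rules named in that quote, the sketch below shows the standard exponential form of spike-timing-dependent plasticity that such devices are reported to emulate: a pre-before-post spike pair strengthens the synapse, while the reverse order weakens it. The amplitudes and time constant are generic textbook values, not measurements from this device.

```python
import math

def stdp_dw(dt_ms: float, a_plus: float = 0.01, a_minus: float = 0.012,
            tau_ms: float = 20.0) -> float:
    """Weight change for a spike interval dt = t_post - t_pre.

    dt > 0 (pre before post) -> potentiation; dt < 0 -> depression.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# Pre-before-post by 5 ms strengthens; post-before-pre by 5 ms weakens.
print(stdp_dw(5.0))    # ~ +0.0078
print(stdp_dw(-5.0))   # ~ -0.0093
```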


A bio-inspired mechano-photonic artificial synapse

#artificialintelligence

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory, and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience, and materials science in China and the US presented a bio-inspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team composed the artificial synapse from an optoelectronic transistor made of a graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator. They controlled the charge transfer/exchange in the heterostructure with the triboelectric potential and readily modulated the optoelectronic synapse behaviors, including postsynaptic photocurrents, photosensitivity, and photoconductivity. The mechano-photonic artificial synapse is a promising route to mimicking the complex biological nervous system and promoting the development of interactive artificial intelligence. The work is published in Science Advances.


Synthesizing an artificial synapse for artificial intelligence

#artificialintelligence

In reality, the opposite is true: a human brain, which today is still more proficient than CPUs at cognitive tasks like pattern recognition, needs only 20 watts of power to complete a task, while a supercomputer requires more than 50,000 times that amount of energy. For that reason, researchers are turning to neuromorphic computers and artificial neural networks that work more like the human brain. However, with current technology, it is both challenging and expensive to replicate the spatio-temporal processes native to the brain, like short-term and long-term memory, in artificial spiking neural networks (SNNs). Feng Xiong, PhD, assistant professor of electrical and computer engineering at the University of Pittsburgh's Swanson School of Engineering, received a $500,000 CAREER Award from the National Science Foundation (NSF) for his work developing the missing element, a dynamic synapse, that will dramatically improve the energy efficiency, bandwidth, and cognitive capabilities of SNNs. "When the human brain sees rain and then feels wetness, or sees fire and feels heat, the brain's synapses link the two ideas, so in the future, it will associate rain with wetness and fire with warmth. The two ideas are strongly linked in the brain," explains Xiong.
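As a rough illustration of what a "dynamic synapse" means in spiking-network models, the sketch below implements a Tsodyks-Markram-style synapse whose strength changes from spike to spike: each spike transiently raises release probability (facilitation) while depleting transmitter resources (short-term depression). The parameter values are generic defaults and are not taken from Xiong's work.

```python
import math

class DynamicSynapse:
    """Tsodyks-Markram-style short-term plasticity (illustrative parameters)."""

    def __init__(self, w=1.0, U=0.1, tau_rec=200.0, tau_fac=600.0):
        self.w, self.U = w, U                  # weight and baseline release probability
        self.tau_rec, self.tau_fac = tau_rec, tau_fac
        self.x, self.u = 1.0, U                # available resources, utilization

    def spike(self, dt_ms: float) -> float:
        """Return the post-synaptic response to a spike dt_ms after the last one."""
        # Resources recover and facilitation decays between spikes.
        self.x = 1.0 - (1.0 - self.x) * math.exp(-dt_ms / self.tau_rec)
        self.u = self.U + (self.u - self.U) * math.exp(-dt_ms / self.tau_fac)
        # The spike facilitates release, then consumes a fraction of resources.
        self.u += self.U * (1.0 - self.u)
        psp = self.w * self.u * self.x
        self.x -= self.u * self.x
        return psp

syn = DynamicSynapse()
print([round(syn.spike(20.0), 3) for _ in range(5)])  # amplitude rises, then falls
```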


The Machine Learning Potential of a Combined Tech Approach

#artificialintelligence

This is the first in a five-part series exploring the potential of unified deep learning with CPU, GPU, and FPGA technologies. This post explores the machine learning potential of combining these advanced technologies. Deep learning and complex machine learning have quickly become among the most important computationally intensive applications for a wide variety of fields. The combination of large data sets, high-performance computational capabilities, and evolving and improving algorithms has enabled many successful applications that were previously difficult or impossible to consider. This series explores the challenges of deep learning training and inference, and discusses the benefits of a comprehensive approach that combines CPU, GPU, and FPGA technologies with the appropriate software frameworks in a unified deep learning architecture.


Scientists May Have Found A Missing Piece for 'Artificial Brains'

#artificialintelligence

In a breakthrough that could usher in a new era of artificial intelligence (AI), scientists at the US National Institute of Standards and Technology (NIST) in Colorado have created a neuromorphic superconducting switch that "learns" to mimic the neuro-biological architectures present in the human brain. In a paper published in Science Advances on January 26, the NIST research team headed by physicist Mike Schneider said the synthetic switch they call a "superconducting synapse", named after the structure in the brain that permits neurons to communicate with each other, exhibits synapse-like properties in a completely inorganic device. The researchers also note that the new neuromorphic hardware, shaped like a metallic cylinder 10 micrometres in diameter, is designed to process incoming flows of electricity and produce appropriate output signals. NIST further explains that multiple artificial synapses could be stacked in 3D to achieve three-dimensional interconnectivity, high device density, and flexibility. Another advantage is that synthetic synapses, which can also connect processors and store the memories of neuromorphic systems in units of magnetic flux, could learn through experience or from the surrounding environment via fault-tolerant computation, offering a viable way to mitigate the barriers of interconnect scaling in advanced computer systems.


Brain-Like Chips Now Beat the Human Brain in Speed and Efficiency

#artificialintelligence

Neuromorphic computing, the next big thing in artificial intelligence, is on fire. Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain. The first, published in Nature Materials, found a solution to unpredictability at synapses, the gaps between neurons across which information is transmitted and stored. The second, published in Science Advances, further amped up the system's computational power, filling synapses with nanoclusters of supermagnetic material to bolster information encoding. The result: brain-like hardware systems that compute faster, and more efficiently, than the human brain. "Ultimately we want a chip as big as a fingernail to replace one big supercomputer," said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.