neural networks
Bayesian Reasoning Enabled by Spin-Orbit Torque Magnetic Tunnel Junctions
Xu, Yingqian, Li, Xiaohan, Wan, Caihua, Zhang, Ran, He, Bin, Liu, Shiqiang, Xia, Jihao, Kong, Dehao, Xiong, Shilong, Yu, Guoqiang, Han, Xiufeng
The rapid development of artificial intelligence (AI) over the past few decades has been nourished by advancements in machine learning algorithms, increased computational power, and the availability of vast amounts of data[1], which has in turn revolutionized numerous fields including, but not limited to, medical science and healthcare, information technology, finance, and transportation. This regenerative feedback between AI and its applications drives further explosive growth of data and expansion of model scale, which calls for a paradigm shift toward efficient and fast computing and memory technologies, in particular advanced algorithms and emerging AI hardware enabled by nonvolatile memories[2]. In this respect, emerging memory technologies such as magnetic random-access memories[3], ferroelectric random-access memories[4], resistive random-access memories[5, 6], and phase-change random-access memories[7] have been implemented to accelerate AI computing, for instance matrix multiplication[8]. Thanks to their high energy efficiency, fast speed, long endurance, and versatile functionality, spintronic devices based on spin-orbit torques (SOTs), one prominent example among emerging memories, have shown great potential for hardware-accelerated true random number generation (TRNG)[9-18] beyond matrix multiplication. For instance, high-quality true random number generators with stable and reconfigurable probability tunability have been demonstrated using SOT magnetic tunnel junctions (SOT-MTJs)[19-21].
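The probability-tunable bit generation mentioned above can be sketched numerically. The sigmoid switching-probability model and every parameter below (`i50`, `slope`, the pulse currents) are illustrative assumptions for a software sketch, not values or equations taken from the paper:

```python
import math
import random

def switching_probability(current, i50=100e-6, slope=2e5):
    """Illustrative sigmoid model: probability that the MTJ free layer
    switches under a given SOT current pulse. i50 is the (hypothetical)
    current at which switching is a 50/50 coin flip."""
    return 1.0 / (1.0 + math.exp(-slope * (current - i50)))

def generate_bits(current, n, seed=0):
    """Draw n random bits; each bit is 1 if the junction switched."""
    rng = random.Random(seed)
    p = switching_probability(current)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# Biasing at i50 yields an unbiased bit stream; shifting the current
# reconfigures the 1-probability, which is the tunability the text describes.
p_half = switching_probability(100e-6)
bits = generate_bits(100e-6, 10_000)
mean = sum(bits) / len(bits)
```

Tuning `current` away from `i50` skews the stream toward 0 or 1, mimicking the reconfigurable probability tunability demonstrated with real SOT-MTJs.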
- Health & Medicine (1.00)
- Semiconductors & Electronics (0.95)
DAC: Deep Autoencoder-based Clustering, a General Deep Learning Framework of Representation Learning
Clustering plays an essential role in many real-world applications, such as market research, pattern recognition, data analysis, and image processing. However, due to the high dimensionality of the input features, the data fed to clustering algorithms usually contains noise and can therefore lead to inaccurate clustering results. While traditional dimension reduction and feature selection algorithms can be used to address this problem, the simple heuristic rules they employ rest on particular assumptions; when those assumptions do not hold, these algorithms may fail. In this paper, we propose DAC, Deep Autoencoder-based Clustering, a generalized data-driven framework for learning clustering representations using deep neural networks. Experimental results show that our approach can effectively boost the performance of the K-Means clustering algorithm on a variety of datasets.
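The pipeline the abstract describes (compress noisy high-dimensional data with an autoencoder, then run K-Means on the learned code) can be sketched in miniature. The toy blobs, the tied-weight linear autoencoder, and all hyperparameters below are illustrative assumptions, not the DAC architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for noisy high-dimensional input: two Gaussian blobs in 10-D.
x = np.vstack([rng.normal(+5.0, 0.5, (50, 10)),
               rng.normal(-5.0, 0.5, (50, 10))])

# Minimal linear autoencoder with tied weights: encoder W, decoder W.T,
# trained by gradient descent on the reconstruction error ||x W W^T - x||^2.
w = rng.normal(scale=0.1, size=(10, 2))
lr = 1e-6
for _ in range(500):
    err = x @ w @ w.T - x                      # reconstruction residual
    w -= lr * 2 * (x.T @ err @ w + err.T @ x @ w)

z = x @ w  # 2-D learned representation fed to K-Means

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on the encoded features."""
    centers = points[:: len(points) // k][:k].copy()  # spread-out init
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels

labels = kmeans(z, 2)
```

With well-separated blobs, K-Means on the 2-D code recovers the two groups; the point of the sketch is only the two-stage structure (representation learning, then clustering), which is the framework's core idea.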
AI Uncertainty Based on Rademacher Complexity and Shannon Entropy
In this paper, from a communication channel coding perspective, we present both a theoretical and a practical discussion of AI's uncertainty, capacity, and evolution for pattern classification, based on the classical Rademacher complexity and Shannon entropy. First, AI capacity is defined as in communication channels. We show qualitatively that, for a pattern classification problem whose complexity is measured by Rademacher complexity, the classical Rademacher complexity and the Shannon entropy used in communication theory are closely related by their definitions. Second, based on Shannon's mathematical theory of communication coding, we derive several sufficient and necessary conditions for an AI's error rate to approach zero in classification problems. A 1/2 criterion on Shannon entropy is derived so that the error rate can approach zero, or equal zero, for AI pattern classification problems. Last but not least, we illustrate our analysis and theory with examples of AI pattern classification whose error rate approaches zero or is zero.
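The quantity at the heart of the abstract's criterion is the Shannon entropy of the label distribution. The paper's actual 1/2 criterion and derivation are not reproduced here; the sketch below only computes the empirical entropy in bits, the measure the criterion is stated over:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of the empirical distribution of labels:
    H = -sum_i p_i * log2(p_i)."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform binary source is maximally uncertain (1 bit); a skewed one
# carries less entropy; a deterministic one carries none.
h_uniform = shannon_entropy([0, 1] * 50)
h_skewed = shannon_entropy([0] * 90 + [1] * 10)
h_constant = shannon_entropy([0] * 100)
```

For a binary classification problem, entropy ranges over [0, 1] bits, which is why thresholds such as 1/2 are natural reference points on this scale.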
Deep Learning – PyTorch from 0 to 1 - 128mots.com
When I wrote this blog post, I remembered the challenge I set for myself at the beginning of the year: to learn deep learning, when I did not even know Python at the time. What makes things difficult is not necessarily the complexity of the concepts; the difficulty starts with questions like: Which framework should I use for deep learning? Which activation function should I choose? Which cost function is best suited to my problem? I still have a lot to learn and practice in the field, but through this blog post I would like to share an overview of what I have learned about deep learning this year.
Derby scientists help make international artificial intelligence breakthrough
The proposed method, called Sparse Evolutionary Training, also brings full artificial-intelligence capability to inexpensive computers, meaning it will be possible to turn any internet device into an intelligent Internet of Things object that can send and receive data. Artificial neural networks (ANNs) are widely used in industry, science, and medicine to make sense of vast amounts of data, for example in medical diagnostics and personalised medicine. But ANNs are typically made up of many layers and millions of nodes, which limits their scalability and demands supercomputing power. The team of scientists from the University of Derby, Eindhoven University of Technology in the Netherlands, and the University of Texas at Austin joined forces to develop a method that could push artificial intelligence well beyond its current boundaries. The new system substantially accelerates machine learning by replacing the typical densely connected layers with sparse layers, enabling the use of artificial intelligence in major problems such as genetic disease diagnostics.
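The core mechanic of Sparse Evolutionary Training is to keep each layer sparse throughout training: periodically prune the weakest connections and regrow the same number at random empty positions. The sketch below illustrates one such rewiring step on a single weight matrix; the density, pruning fraction, and initialization scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def init_sparse(shape, density=0.1):
    """Erdos-Renyi-style sparse layer: only ~density of entries are active."""
    mask = rng.random(shape) < density
    w = rng.normal(scale=0.1, size=shape) * mask
    return w, mask

def evolve(w, mask, zeta=0.3):
    """One SET-style rewiring step (illustrative): prune the fraction zeta
    of smallest-magnitude active weights, then regrow the same number of
    connections at random currently-empty positions."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    weakest = active[np.argsort(np.abs(w.flat[active]))[:k]]
    mask.flat[weakest] = False
    w.flat[weakest] = 0.0
    regrown = rng.choice(np.flatnonzero(~mask), size=k, replace=False)
    mask.flat[regrown] = True
    w.flat[regrown] = rng.normal(scale=0.1, size=k)
    return w, mask

w, mask = init_sparse((100, 100), density=0.1)
n_before = int(mask.sum())
w, mask = evolve(w, mask)
n_after = int(mask.sum())
```

The invariant that makes the approach cheap is visible here: the number of active connections stays constant across rewiring steps, so the parameter count never approaches that of a dense layer.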
- North America > United States > Texas > Travis County > Austin (0.27)
- Europe > Netherlands > North Brabant > Eindhoven (0.27)
Blockchain Powered AI Doctors to Revolutionize Medicine
Doc.ai hopes to revolutionize the medical industry by bringing AI doctors to everyone through their smartphones. Using blockchain technology, the platform will be able to collect masses of medical data globally and generate insights from that information. Furthermore, through machine learning, the collected data will be analyzed and processed to provide users with personalized feedback about their own medical issues. Doc.ai's platform aims to give users the ability to essentially call on a doctor in their pocket. Because medical practitioners are in short supply, hard to access, and expensive, the company is turning to powerful technologies rather than humans.
SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system
Chevallier, Sylvain, Paugam-Moisy, Hélène, Sebag, Michèle
Many complex systems, ranging from neural cell assemblies to insect societies, involve and rely on some division of labor. This paper tackles how to enforce such a division in a decentralized and distributed way, using a spiking neuron network architecture. Specifically, a spatio-temporal model called SpikeAnts is shown to enforce the emergence of synchronized activities in an ant colony. Each ant is modelled with two spiking neurons; the ant colony is a sparsely connected spiking neuron network. Each ant makes its decision (among foraging, sleeping, and self-grooming) through the competition between its two neurons, based on the signals received from its neighbor ants. Interestingly, three types of temporal patterns emerge in the ant colony: asynchronous, synchronous, and synchronous periodic foraging activities, similar to the actual behavior of some living ant colonies. A phase diagram of the emergent activity patterns with respect to two control parameters, accounting respectively for ant sociability and receptivity, is presented and discussed.
A Self-Learning Neural Network
We propose a new neural network structure that is compatible with silicon technology and has built-in learning capability. The thrust of this work is a new synapse function for an artificial neuron to be used in a neural network. The synapses have the feature that the learning parameter is embodied in the thresholds of MOSFET devices and is local in character. The network is shown to be capable of learning by example, as well as exhibiting the desirable features of Hopfield-type networks.
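The Hopfield-type behavior the abstract refers to is associative recall: a stored pattern is recovered from a corrupted input. The sketch below is a plain software version of that behavior with a Hebbian outer-product weight matrix, not the paper's MOSFET-threshold synapse circuit:

```python
import numpy as np

# Store one bipolar pattern via the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(w, 0.0)  # no self-connections

def recall(state, w, steps=5):
    """Synchronous sign-threshold updates; each neuron fires according to
    the sign of its weighted input, as in a Hopfield network."""
    state = state.astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)
    return state

noisy = pattern.copy()
noisy[0] *= -1            # corrupt one bit
recovered = recall(noisy, w)
```

A single update already repairs the flipped bit here, because the weighted input of every neuron is dominated by the stored pattern; this content-addressable recovery is the "desirable feature of Hopfield-type networks" that the proposed silicon synapse is meant to support.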
- North America > United States > New York (0.05)
- North America > United States > California > San Diego County > San Diego (0.04)