perceptron model
- North America > United States > Massachusetts (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
Sampling through Algorithmic Diffusion in non-convex Perceptron problems
Demyanenko, Elizaveta, Straziota, Davide, Baldassi, Carlo, Lucibello, Carlo
We analyze the problem of sampling from the solution space of simple yet non-convex neural network models by employing a denoising diffusion process known as Algorithmic Stochastic Localization, where the score function is provided by Approximate Message Passing. We introduce a formalism based on the replica method to characterize the process in the infinite-size limit in terms of a few order parameters, and, in particular, we provide criteria for the feasibility of sampling. We show that, in the case of the spherical perceptron problem with negative stability, approximate uniform sampling is achievable across the entire replica symmetric region of the phase diagram. In contrast, for the binary perceptron, uniform sampling via diffusion invariably fails due to the overlap gap property exhibited by the typical set of solutions. We discuss the first steps in defining alternative measures that can be efficiently sampled.
- North America > United States (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Italy (0.04)
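The sampling scheme described in the abstract above can be illustrated with a toy stochastic localization loop. In this sketch the posterior-mean denoiser is closed-form for a standard Gaussian prior, standing in for the Approximate Message Passing score used in the paper; the prior, dimensions, and step count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, steps = 50, 10.0, 1000
dt = T / steps

def posterior_mean(y, t):
    # Toy denoiser: closed-form E[x | y_t] when x ~ N(0, I) and
    # y_t = t*x + B_t with Var(B_t) = t. In the paper this role is
    # played by Approximate Message Passing (this is only a stand-in).
    return y / (t + 1.0)

y = np.zeros(n)        # the localization process starts at y_0 = 0
t = 0.0
for _ in range(steps):
    m = posterior_mean(y, t)                                 # current score/denoiser
    y = y + m * dt + rng.normal(scale=np.sqrt(dt), size=n)   # Euler step of dy = m dt + dW
    t += dt

x_hat = posterior_mean(y, t)   # as t grows, y/t localizes on a sample from the prior
print(x_hat.shape)
```

As t grows the process concentrates: y ≈ t·x for a sample x from the prior, which is the sense in which the diffusion "samples" from the target measure.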
Quantum Perceptron Models
We demonstrate how quantum computation can provide non-trivial improvements in the computational and statistical complexity of the perceptron model. We develop two quantum algorithms for perceptron learning. The first algorithm exploits quantum information processing to determine a separating hyperplane using a number of steps sublinear in the number of data points N, namely O(√N).
- North America > United States > Washington > King County > Redmond (0.04)
- North America > United States > Massachusetts (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
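For contrast with the quantum O(√N) search step, here is a minimal classical online perceptron, where locating a misclassified point costs a full O(N) scan per update (a toy baseline, not from the paper; the data and epoch budget are illustrative):

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classical online perceptron: each epoch scans all N points, so
    finding a misclassified example costs O(N) per update -- the step
    that Grover-style search can reduce to O(sqrt(N))."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:    # misclassified (or on the boundary)
                w += yi * xi          # classic perceptron update
                updated = True
        if not updated:               # converged: all points separated
            break
    return w

# Linearly separable toy data
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(X @ w))
```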
A Bio-Inspired Chaos Sensor Model Based on the Perceptron Neural Network: Machine Learning Concept and Application for Computational Neuro-Science
Velichko, Andrei, Boriskov, Petr, Belyaev, Maksim, Putrolaynen, Vadim
The study presents a bio-inspired chaos sensor model based on the perceptron neural network for estimating the entropy of spike trains in neurodynamic systems. After training, the sensor, a perceptron with 50 neurons in the hidden layer and 1 neuron at the output, approximates the fuzzy entropy of a short time series with high accuracy, with a determination coefficient of R² ≈ 0.9. The Hindmarsh-Rose spike model was used to generate time series of spike intervals and to build datasets for training and testing the perceptron. The selection of the hyperparameters of the perceptron model and the estimation of the sensor accuracy were performed using the K-block cross-validation method. Even for a hidden layer with one neuron, the model approximates the fuzzy entropy well, with R² ≈ 0.5-0.8. In a simplified model with one neuron and equal weights in the first layer, the approximation reduces to a linear transformation of the average value of the time series into the entropy value. An example of using the chaos sensor on a spike train of action potential recordings from the L5 dorsal rootlet of a rat is provided. The bio-inspired chaos sensor model based on an ensemble of neurons is able to dynamically track the chaotic behavior of a spike signal and transmit this information to other parts of the neurodynamic model for further processing. The study will be useful for specialists in computational neuroscience, and also for creating humanoid and animal robots, and bio-robots with limited resources.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Russia > North Caucasian Federal District > Republic of Karelia > Petrozavodsk (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (11 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
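The simplified single-neuron case described in the abstract above amounts to an affine map from the mean of the time series to an entropy estimate. A sketch, where the coefficients `a` and `b` are hypothetical placeholders for values that the paper learns during training:

```python
import numpy as np

def one_neuron_entropy(intervals, a=-1.5, b=2.0):
    """Simplified sensor: with one hidden neuron and equal input weights,
    the network output reduces to an affine function of the mean of the
    series. Coefficients a, b are placeholders, not the paper's values."""
    return a * np.mean(intervals) + b

spikes = np.array([0.8, 1.1, 0.9, 1.2, 1.0])   # toy inter-spike intervals
print(one_neuron_entropy(spikes))
```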
Simulation of a Variational Quantum Perceptron using Grover's Algorithm
Innan, Nouhaila, Bennai, Mohamed
Recently, there has been an increasing number of studies combining the disciplines of quantum information and machine learning, and a variety of theories merging the two fields have been put forward: machine learning is under pressure from the growing amount of data in the world, while quantum computing offers the needed computational power. The combination of the two fields has led to massive interest in innovative information-processing mechanisms that open up a new and improved range of solutions across application domains. The first such concept was research on quantum models of neural networks; it was essentially biologically inspired, in the hope of finding explanations for brain function within the framework of quantum theory [1]. In 2013, this combination was named quantum machine learning by Lloyd et al. [2], defining an area of research that explores the combination of quantum information and ML principles. The development of quantum machine learning algorithms has since made progress; several well-known classical ML algorithms already have quantum analogs, such as the quantum support vector machine (QSVM), quantum k-means clustering, the quantum Boltzmann machine (QBM), and the quantum perceptron (QP), a model surveyed in several papers. Zhou et al. [3] developed a quantum perceptron approach based on the quantum phase, capable of computing the XOR function using only one neuron. Siomau et al. [4] then introduced an autonomous quantum perceptron based on calculating a set of positive operator-valued measurements (POVM). After that, Sagheer and Zidane [5] proposed a quantum perceptron based on Siomau's method, capable of constructing its own set of activation operators to be applied widely in both quantum and classical applications, overcoming the linearity limitation of the classical perceptron. In 2018, a multidimensional input quantum perceptron (MDIQP) was proposed by Yamamoto et al. [6]; their model had an arbitrary number of inputs with different synaptic weights and can form large quantum artificial neural networks (QANNs).
- Asia > China (0.04)
- Africa > Middle East > Morocco > Rabat-Salé-Kénitra Region > Rabat (0.04)
- Africa > Middle East > Morocco > Casablanca-Settat Region > Casablanca (0.04)
A Brief History of Deep Learning
Human inventions find their inspiration from nature. Likewise, deep learning was an attempt to model the human brain, one of the most complicated structures in the universe. The attempt was not to mimic every detail of the brain. Instead, artificial neural networks were inspired by biological neural networks, eventually leading to deep learning. So what is deep learning?
Predict Customer Churn with Neural Network
In real-world situations, data scientists often start an analysis with a simple, easy-to-implement model such as linear or logistic regression. This approach has various advantages, such as getting a sense of the data at minimal cost and providing food for thought on how to solve a business problem. In this blog post, I decided to start from the opposite side by applying a multilayer perceptron model (neural network) to predict customer churn. I think it is quite fun and exciting to try different algorithms, or at least to know how you can solve a problem in a more sophisticated way. Customer churn is when a customer decides to stop using services, content, or products from a company.
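A minimal sketch of the workflow using scikit-learn's `MLPClassifier` on synthetic stand-in data (the features and labeling rule are invented for illustration; the post works with a real customer dataset):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for churn data (toy features and a toy churn rule)
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                    # e.g. tenure, usage, spend, support calls
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = churned (invented rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))           # held-out accuracy
```

In practice you would compare this against the logistic-regression baseline the post mentions, on the same train/test split.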
Proof of the Contiguity Conjecture and Lognormal Limit for the Symmetric Perceptron
Abbe, Emmanuel, Li, Shuangping, Sly, Allan
We consider the symmetric binary perceptron model, a simple model of neural networks that has gathered significant attention in the statistical physics, information theory and probability theory communities, with recent connections made to the performance of learning algorithms in Baldassi et al. '15. We establish that the partition function of this model, normalized by its expected value, converges to a lognormal distribution. As a consequence, this allows us to establish several conjectures for this model: (i) it proves the contiguity conjecture of Aubin et al. '19 between the planted and unplanted models in the satisfiable regime; (ii) it establishes the sharp threshold conjecture; (iii) it proves the frozen 1-RSB conjecture in the symmetric case, conjectured first by Krauth-Mézard '89 in the asymmetric case. In a recent concurrent work of Perkins-Xu [PX21], the last two conjectures were also established by proving that the partition function concentrates on an exponential scale. This left open the contiguity conjecture and the lognormal limit characterization, which are established here. In particular, our proof technique relies on a dense counterpart of the small graph conditioning method, which was developed for sparse models in the celebrated work of Robinson and Wormald.
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- Europe > Switzerland > Vaud > Lausanne (0.04)
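For concreteness, the partition function in question counts the solutions of the symmetric binary perceptron; in the notation standard for this model (stated here as background, with m constraints of margin κ at constraint density α):

```latex
Z_n \;=\; \sum_{\sigma \in \{-1,+1\}^n} \;\prod_{a=1}^{m}
\mathbf{1}\!\left\{ \frac{\lvert \langle g_a, \sigma \rangle \rvert}{\sqrt{n}} \le \kappa \right\},
\qquad m = \lfloor \alpha n \rfloor,
```

where the g_a are i.i.d. standard Gaussian vectors; the theorem above states that Z_n / E[Z_n] converges to a lognormal law.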
How to Manually Optimize Neural Network Models
Deep learning neural network models are fit on training data using the stochastic gradient descent optimization algorithm. Updates to the weights of the model are made using the backpropagation of error algorithm. The combination of these optimization and weight-update algorithms was carefully chosen and is the most efficient approach known for fitting neural networks. Nevertheless, it is possible to use alternate optimization algorithms to fit a neural network model to a training dataset. This can be a useful exercise to learn more about how neural networks function and about the central nature of optimization in applied machine learning. It may also be required for neural networks with unconventional model architectures and non-differentiable transfer functions.
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Gradient Descent (0.57)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.34)
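One such alternate optimizer is stochastic hill climbing over the weights, which needs no gradients and therefore works even with a non-differentiable hard-threshold transfer function. A sketch on toy data (the data, step size, and iteration budget are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

def accuracy(w, X, y):
    preds = (X @ w > 0).astype(int)   # hard-threshold "transfer function" (non-differentiable)
    return (preds == y).mean()

# Stochastic hill climbing: perturb the weights, keep the candidate
# whenever accuracy does not get worse -- no gradients involved.
w = rng.normal(size=2)
best = accuracy(w, X, y)
for _ in range(1000):
    cand = w + rng.normal(scale=0.1, size=2)
    score = accuracy(cand, X, y)
    if score >= best:
        w, best = cand, score

print(round(best, 2))
```

Any black-box optimizer (simulated annealing, evolution strategies) can be slotted in the same way: it only needs to evaluate the objective, not differentiate it.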
Analysis of Models for Decentralized and Collaborative AI on Blockchain
Machine learning has recently enabled large advances in artificial intelligence, but these results can be highly centralized. The large datasets required are generally proprietary; predictions are often sold on a per-query basis; and published models can quickly become out of date without effort to acquire more data and maintain them. Published proposals to provide models and data for free for certain tasks include Microsoft Research's Decentralized and Collaborative AI on Blockchain. The framework allows participants to collaboratively build a dataset and use smart contracts to share a continuously updated model on a public blockchain. The initial proposal gave an overview of the framework but omitted many details of the models used and of the incentive mechanisms in real-world scenarios. For example, the Self-Assessment incentive mechanism proposed in their work could have problems such as participants losing deposits and the model becoming inaccurate over time if the proper parameters are not set when the framework is configured. In this work, we evaluate the use of several models and configurations in order to propose best practices when using the Self-Assessment incentive mechanism, so that models can remain accurate and well-intentioned participants that submit correct data have the chance to profit. We have analyzed simulations for each of three models: Perceptron, Naive Bayes, and a Nearest Centroid Classifier, with three different datasets: predicting a sport with user activity from Endomondo, sentiment analysis on movie reviews from IMDB, and determining if a news article is fake. We compare several factors for each dataset when models are hosted in smart contracts on a public blockchain: their accuracy over time, balances of a good and bad user, and transaction costs (or gas) for deploying, updating, collecting refunds, and collecting rewards.
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (3 more...)
- Media > Film (0.36)
- Banking & Finance > Trading (0.30)
- Information Technology > e-Commerce > Financial Technology (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.38)
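Of the three models simulated in the last abstract, the Nearest Centroid Classifier has particularly cheap single-sample updates (a per-class running mean), which matters when every update is an on-chain transaction with gas costs. A minimal sketch, not the framework's actual contract code:

```python
import numpy as np

class NearestCentroid:
    """Incremental nearest-centroid classifier: one running mean per class,
    so a single-sample update is O(features) -- a property that keeps
    per-transaction costs low when the model is hosted in a smart contract."""
    def __init__(self, n_features, n_classes):
        self.centroids = np.zeros((n_classes, n_features))
        self.counts = np.zeros(n_classes)

    def update(self, x, label):
        self.counts[label] += 1
        # Running-mean update of the class centroid
        self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def predict(self, x):
        dists = np.linalg.norm(self.centroids - x, axis=1)
        return int(np.argmin(dists))

clf = NearestCentroid(n_features=2, n_classes=2)
for x, lbl in [([0.0, 0.0], 0), ([0.2, 0.1], 0), ([1.0, 1.0], 1), ([0.9, 1.1], 1)]:
    clf.update(np.array(x), lbl)
print(clf.predict(np.array([0.1, 0.0])), clf.predict(np.array([1.0, 0.9])))
```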