The receptron is a nonlinear threshold logic gate with intrinsic multi-dimensional selective capabilities for analog inputs
Paroli, B., Borghi, F., Potenza, M. A. C., Milani, P.
Threshold logic gates (TLGs) have been proposed as artificial counterparts of biological neurons, with classification capabilities based on a linear predictor function that combines a set of weights with the feature vector. The linearity of TLGs limits their classification capabilities, requiring networks of units to accomplish complex tasks. A generalization of the TLG model called the receptron, characterized by input-dependent weight functions, allows a significant enhancement of classification performance even with a single unit. Here we formally demonstrate that a receptron with nonlinear input-dependent weight functions exhibits intrinsic selective activation properties for analog inputs when the input vector lies within cubic domains in a 3D space. The proposed model can be extended to the n-dimensional case for multidimensional applications. Our results suggest that receptron-based networks can represent a new class of devices capable of managing a large number of analog inputs, for edge applications requiring high selectivity and classification capabilities without the burden of complex training.
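A minimal sketch of the contrast between the two models, in Python: the TLG combines fixed weights linearly, while a receptron-style unit lets each weight depend on the input. The window-shaped weight functions below are purely illustrative assumptions (the paper's actual nonlinear weight functions are not reproduced here); they show how input-dependent weights can make a single unit fire only inside a cubic domain in 3D.

```python
import numpy as np

def tlg(x, w, theta):
    """Classic threshold logic gate: fires iff a fixed linear
    combination of the inputs exceeds the threshold."""
    return int(np.dot(w, x) > theta)

def receptron(x, weight_fns, theta):
    """Receptron-style unit: each weight is itself a nonlinear
    function of the input vector, so the decision region need
    not be a single half-space."""
    w = np.array([f(x) for f in weight_fns])
    return int(np.dot(w, x) > theta)

# Hypothetical window-shaped weight functions (illustrative only):
# each weight is positive when its component lies inside an interval
# and negative otherwise, producing activation on a cubic 3D domain.
def window(lo, hi, i):
    return lambda x: 1.0 if lo < x[i] < hi else -1.0

weight_fns = [window(0.2, 0.5, 0), window(0.3, 0.6, 1), window(0.1, 0.4, 2)]

x_inside = np.array([0.3, 0.4, 0.2])    # every component in its window
x_outside = np.array([0.3, 0.4, 0.9])   # third component out of range

w_fixed = np.ones(3)
print(tlg(x_inside, w_fixed, 0.5))            # 1
print(tlg(x_outside, w_fixed, 0.5))           # 1: a hyperplane cannot reject this point
print(receptron(x_inside, weight_fns, 0.5))   # 1: fires inside the cubic domain
print(receptron(x_outside, weight_fns, 0.5))  # 0: suppressed outside it
```

The last two prints show the selectivity a single linear TLG cannot express: rejecting a point that lies on the positive side of every fixed hyperplane.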
- Europe > Italy > Lombardy > Milan (0.05)
- Europe > Italy > Basilicata > Potenza Province > Potenza (0.05)
- North America > United States > District of Columbia > Washington (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
I have a concern that the quantitative results are misleading and/or wrong. I suspect that the model may be performing similarly to a computer graphics engine ... where it generates very good naturalistic images, but where most real images would be assigned extremely low probability. This could make for a fine paper, but the results would need to be presented in a way that makes this clear. Detailed comments follow:
45 - 'indicating a better density model...': I don't think this part follows.
57-61 - Should note these aren't actually generative models as typically defined. Though p(visible | hidden) is straightforward, p(hidden) is complex and difficult to sample from.
- Research Report > Strength High (0.30)
- Research Report > Experimental Study (0.30)
Document Author Classification Using Parsed Language Structure
Moon, Todd K., Gunther, Jacob H.
Over the years there has been ongoing interest in detecting the authorship of a text based on its statistical properties, such as the occurrence rates of noncontextual words. In previous work, these techniques have been used, for example, to determine the authorship of all of The Federalist Papers. Such methods may also be useful in modern times to detect fake or AI authorship. Progress in statistical natural language parsers introduces the possibility of using grammatical structure to detect authorship. In this paper we explore a new possibility for detecting authorship using grammatical structural information extracted with a statistical natural language parser. The paper provides a proof of concept, testing author classification based on grammatical structure on a set of "proof texts", The Federalist Papers and Sanditon, which have been used as test cases in previous authorship detection studies. Several features extracted from the statistical natural language parser were explored: all subtrees of some depth from any level; rooted subtrees of some depth; part of speech; and part of speech by level in the parse tree. It was found to be helpful to project the features into a lower-dimensional space. Statistical experiments on these documents demonstrate that information from a statistical parser can, in fact, assist in distinguishing authors.
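A hedged sketch of the pipeline described above: per-document feature count vectors (standing in for parse-tree subtree counts, which a statistical parser would supply), projection into a lower-dimensional space, and a simple classifier. The Poisson-generated counts and the nearest-centroid rule are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-document feature vectors: counts of (say) rooted
# parse-tree subtrees, one column per subtree shape. Two authors,
# six documents each, with different characteristic rates.
author_a = rng.poisson(lam=[8, 2, 5, 1, 3], size=(6, 5)).astype(float)
author_b = rng.poisson(lam=[2, 7, 1, 6, 3], size=(6, 5)).astype(float)
X = np.vstack([author_a, author_b])
y = np.array([0] * 6 + [1] * 6)

# Normalize counts to rates so document length does not dominate.
X /= X.sum(axis=1, keepdims=True)

# Project into a lower-dimensional space via SVD (dimension 2 is an
# arbitrary choice here; the paper reports such a projection as helpful).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the projected space.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
preds = np.argmin(np.linalg.norm(Z[:, None, :] - centroids, axis=2), axis=1)
print("training accuracy:", np.mean(preds == y))
```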
- North America > United States > Virginia (0.04)
- North America > United States > New York (0.04)
- North America > United States > Utah > Cache County > Logan (0.04)
- (5 more...)
Mixture models (MMs) assume that instances are drawn from a mixture of K component distributions with unknown coefficients. Topic models (TMs), on the other hand, assume that samples/documents have different mixing weights over an underlying topic distribution on words. This paper tries to close the gap between MMs and TMs. The proposed model assumes that several samples are drawn from the same underlying K distributions but, as in TMs, with different mixing weights, while instances are treated as feature vectors, as in MMs. This is a theory paper that provides two algorithms that can recover the underlying structure of this model.
Using multiple samples to learn mixture models
The goal is to associate instances with their generating distributions, or to identify the parameters of the hidden distributions. In this work we assume access to several samples drawn from the same K underlying distributions, but with different mixing weights. As with topic modeling, having multiple samples is often a reasonable assumption. Instead of pooling the data into one sample, we prove that it is possible to use the differences between the samples to better recover the underlying structure. We present algorithms that recover the underlying structure under milder assumptions than the current state of the art when either the dimensionality or the separation is high. The methods, when applied to topic modeling, allow generalization to words not present in the training data.
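A small numerical illustration of why differences between samples help (an assumed Gaussian setting, not the paper's algorithms): two samples share the same two components but have different mixing weights, so the difference of the sample means points along the line joining the component means, exposing a separating direction without any clustering.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
mu1, mu2 = np.zeros(d), np.zeros(d)
mu1[0], mu2[0] = 1.0, -1.0           # components separated along axis e_1

def draw(n, w):
    """n points from the mixture w*N(mu1, I) + (1-w)*N(mu2, I)."""
    z = rng.random(n) < w
    return np.where(z[:, None], mu1, mu2) + rng.standard_normal((n, d))

# Two samples from the SAME components but DIFFERENT mixing weights.
A = draw(2000, w=0.8)
B = draw(2000, w=0.3)

# E[A] - E[B] = (0.8 - 0.3) * (mu1 - mu2): the difference of the sample
# means points along the line joining the components, with no clustering.
direction = A.mean(axis=0) - B.mean(axis=0)
direction /= np.linalg.norm(direction)
print("recovered direction (should be ~e_1):", np.round(direction[:3], 3))

# Associate pooled instances with components by projecting onto it.
labels = (np.vstack([A, B]) @ direction > 0).astype(int)
```

Pooling A and B into one sample would discard exactly the information the mean difference exploits, which is the intuition behind using multiple samples.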
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > India > Andhra Pradesh > Bay of Bengal (0.04)
Celestial Machine Learning: Discovering the Planarity, Heliocentricity, and Orbital Equation of Mars with AI Feynman
Khoo, Zi-Yu, Rajiv, Gokul, Yang, Abel, Low, Jonathan Sze Choong, Bressan, Stéphane
Can a machine or algorithm discover or learn the elliptical orbit of Mars from astronomical sightings alone? Johannes Kepler required two paradigm shifts to discover his First Law regarding the elliptical orbit of Mars: first, a shift from the geocentric to the heliocentric frame of reference; second, the reduction of the orbit of Mars from a three- to a two-dimensional space. We extend AI Feynman, a physics-inspired tool for symbolic regression, to discover the heliocentricity and planarity of Mars' orbit and to emulate Kepler's discovery of his First Law.
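As a sanity check on the target relation (not AI Feynman's symbolic-regression search, which the paper extends), the orbital equation r = p / (1 + e cos θ) can be recovered from synthetic heliocentric observations by ordinary least squares, since 1/r is linear in cos θ and sin θ. The Mars-like eccentricity and semi-latus rectum below are standard values used only to generate the toy data.

```python
import numpy as np

# Synthetic heliocentric, planar observations of Mars in polar form.
# True relation (Kepler's First Law): r = p / (1 + e*cos(theta)), with
# Mars-like eccentricity e and semi-latus rectum p (in AU).
e_true, p_true = 0.0934, 1.51
theta = np.linspace(0.0, 2.0 * np.pi, 200)
r = p_true / (1.0 + e_true * np.cos(theta))
r += np.random.default_rng(2).normal(0.0, 1e-3, r.shape)  # observation noise

# 1/r = 1/p + (e/p)*cos(theta) is linear in [1, cos(theta), sin(theta)],
# so ordinary least squares recovers p and e directly.
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, 1.0 / r, rcond=None)
p_hat = 1.0 / coef[0]
e_hat = np.hypot(coef[1], coef[2]) * p_hat
print(f"recovered p = {p_hat:.3f} AU, e = {e_hat:.4f}")  # ~1.510, ~0.0934
```

The fit only works once the data are expressed heliocentrically and in the orbital plane, which is exactly why those two reductions were the paradigm shifts.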
- North America > United States > New York > New York County > New York City (0.14)
- Asia > Singapore (0.05)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (3 more...)