High-Accuracy Population-Based Image Search - DZone AI

#artificialintelligence

Established in 2018, the Machine Intelligence Technology Laboratory comprises a group of outstanding scientists and engineers, with research centers located in Hangzhou, Beijing, Seattle, Silicon Valley, and Singapore. The Machine Intelligence Technology Laboratory is Alibaba's core team responsible for the research and development of artificial intelligence technologies. Drawing on Alibaba's massive data and machine learning/deep learning technologies, the lab has developed image recognition, speech interaction, natural language understanding, intelligent decision-making, and other core artificial intelligence technologies. These power Alibaba Group's key businesses such as e-commerce, finance, logistics, social interaction, and entertainment, and are also provided to ecosystem partners to jointly build a smart future. Image Search is an intelligent image search product that enables search by image, using image recognition and retrieval functions built on deep learning and large-scale machine learning technologies.


Asymptotic Bayesian Generalization Error in a General Stochastic Matrix Factorization for Markov Chain and Bayesian Network

arXiv.org Machine Learning

Stochastic matrix factorization (SMF) can be regarded as a restriction of non-negative matrix factorization (NMF). SMF is useful for inference of topic models, NMF for binary matrix data, Markov chains, and Bayesian networks. However, SMF needs strong assumptions to reach a unique factorization, and its theoretical prediction accuracy has not yet been clarified. In this paper, we study the maximum pole of the zeta function (the real log canonical threshold) of a general SMF and derive an upper bound of the generalization error in Bayesian inference. The results give a foundation for a widely applicable and rigorous factorization method based on SMF, and mean that the Bayesian generalization error of SMF is smaller than that of regular statistical models.
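For context, the claim of being "smaller than regular statistical models" can be read against the standard asymptotic expansion from singular learning theory; the following is a sketch of that general relation, not the paper's specific bound:

```latex
% Standard asymptotic form of the Bayesian generalization error
% (Watanabe's singular learning theory); \lambda is the real log
% canonical threshold (RLCT) and d is the number of parameters.
\[
  \mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
  \qquad \lambda \le \frac{d}{2}.
\]
% Equality \(\lambda = d/2\) holds for regular statistical models, so an
% upper bound on \(\lambda\) below \(d/2\) implies a smaller
% generalization error than a regular model of the same dimension.
```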


Exhaustive search for sparse variable selection in linear regression

arXiv.org Machine Learning

We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to Type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by a shortage of data.
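A minimal sketch of the exhaustive part of such a K-sparse search is shown below; the function name and the toy data are hypothetical, and this is not the paper's ES-K implementation (which also builds the density of states and the approximate AES-K variant):

```python
import itertools
import numpy as np

def k_sparse_exhaustive_search(X, y, k):
    """Exhaustively evaluate every k-subset of columns of X by least squares.

    Returns the best subset (lowest residual sum of squares) plus the RSS of
    every subset, which could be histogrammed into a density of states.
    """
    n_features = X.shape[1]
    results = []
    for subset in itertools.combinations(range(n_features), k):
        X_sub = X[:, subset]
        # Ordinary least squares restricted to the chosen columns.
        coef, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
        rss = float(np.sum((y - X_sub @ coef) ** 2))
        results.append((subset, rss))
    best_subset, best_rss = min(results, key=lambda r: r[1])
    return best_subset, best_rss, results

# Toy usage: 50 samples, 10 candidate variables, true support {0, 3, 7}.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + 0.1 * rng.normal(size=50)
best, rss, all_results = k_sparse_exhaustive_search(X, y, k=3)
print(best, rss)
```

The cost of this enumeration grows combinatorially with the number of candidate variables, which is why the abstract's AES-K method replaces full enumeration with replica-exchange Monte Carlo sampling for large problems.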


The Effect of Singularities in a Learning Machine when the True Parameters Do Not Lie on such Singularities

Neural Information Processing Systems

Many learning machines with hidden variables used in information science have singularities in their parameter spaces. At these singularities, the Fisher information matrix degenerates, so the learning theory of regular statistical models does not hold. Recently, it was proven that, if the true parameter is contained in the singularities, then the coefficient of the Bayes generalization error is equal to the maximum pole of the zeta function of the Kullback information.
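For reference, the zeta function mentioned here has the standard form from singular learning theory; this is a sketch of that general construction, not the paper's specific analysis:

```latex
% Zeta function of the Kullback information K(w), with prior \varphi(w):
\[
  \zeta(z) = \int K(w)^{z}\,\varphi(w)\,dw .
\]
% Its analytic continuation is meromorphic with poles on the negative real
% axis; if the maximum pole is at z = -\lambda, then \lambda is the
% coefficient of the 1/n term of the Bayes generalization error.
```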


Reading the Behavior Signature: Predicting Leader Personality from Individual and Group Actions

AAAI Conferences

The personality of a leader can be used to predict that leader's actions as well as those of the group that he or she leads. However, except for a small number of well-known leaders, the personality of leaders must be inferred from actions and other evidence. We have developed a Bayesian network to infer leader personality variables related to violence from evidence of leader and group actions and the situational demands and context in which the actions occur. The network was applied to a historical situation, and its ability to distinguish extreme personalities was established.
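The abstract does not specify the network structure or its probabilities; a toy sketch of the kind of inference involved (posterior over a leader trait given observed actions, with invented names and numbers) might look like this:

```python
import numpy as np

# Hypothetical two-evidence illustration: a binary leader trait and two
# observed binary actions. All probabilities are invented for illustration
# and are not taken from the paper's network.
p_trait = np.array([0.7, 0.3])  # P(trait = low hostility), P(trait = high hostility)

# P(action observed | trait) for two conditionally independent evidence variables.
p_aggressive_rhetoric = np.array([0.2, 0.8])
p_group_violence = np.array([0.1, 0.6])

# Posterior over the trait given both actions were observed, by Bayes' rule
# under the naive (conditionally independent) evidence model.
unnormalized = p_trait * p_aggressive_rhetoric * p_group_violence
posterior = unnormalized / unnormalized.sum()
print(dict(zip(["low hostility", "high hostility"], posterior.round(3))))
```

In the toy numbers above, observing both actions shifts most of the posterior mass to the high-hostility trait, illustrating how evidence of leader and group actions can drive inference about personality variables in such a network.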