Artificial Intelligence-enabled audience profiling in MoMAGIC

#artificialintelligence

"Sell the right product to the right customer" is a dream of every marketers and advertisers. As a leading digital marketing company in Asia, MoMAGIC is approaching this ultimate goal by launching two data-driven solutions: TrueReach and TrueInsight, which is capable of understanding and targeting audiences from both macroscopic and microscopic point of view. In this article, we will share the core ideas behind our solutions and some insights we learned when playing with large-scale real data. As the backbone of all MoMAGIC services, TrueInsight integrates large-scale, heterogeneous data sources (e.g., AD-request, web behaviors, etc.) and transforms those high-frequent, noisy dataflows into structured datasets. In order to complete those challenging tasks quickly and accurately, we first carefully designed the data processing pipelines of TrueInsight in a parallel and distributed manner to guarantee its performance scalability, which means TrueInsight can process large-scale data from multiple sources without scarifying its performance.


Can science writing be automated?

MIT News

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.


r/MachineLearning - [P] I used a Variational Autoencoder to build feature-based face editing software

#artificialintelligence

In my latest weekend project, I used a Variational Autoencoder to build a feature-based face editor. The model is explained in my YouTube video. Feature editing is based on modifying the latent distribution of the VAE. After training of the VAE is complete, the latent space is mapped by encoding the training data once more. A latent space vector for each feature is then determined from the labels of the training data.
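As a rough sketch of how such per-feature latent vectors might be computed and applied, assuming a trained VAE with stand-in `encoder` and `decoder` functions and binary attribute `labels` (the names here are illustrative, not the author's code):

```python
import numpy as np

# Illustrative sketch: derive a latent direction for one facial attribute
# (e.g., "smiling") from labeled training data, then edit a face by moving
# its latent code along that direction. `encoder`, `decoder`, `images`,
# and `labels` are stand-ins for the trained VAE and its training set.

def attribute_direction(encoder, images, labels):
    """Mean latent code with the attribute minus mean latent code without it."""
    z = encoder(images)                       # shape: (n_samples, latent_dim)
    return z[labels == 1].mean(axis=0) - z[labels == 0].mean(axis=0)

def edit_face(encoder, decoder, image, direction, strength=1.0):
    """Shift one face's latent code along the attribute direction and decode."""
    z = encoder(image[None])                  # encode a single image
    return decoder(z + strength * direction)  # decoded, edited face
```

Varying `strength` then interpolates the attribute smoothly, which is what makes the latent space of a VAE convenient for this kind of editor.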


10 Algorithms Every Machine Learning Enthusiast Should Know

#artificialintelligence

It is crucial for machine learning enthusiasts to know and understand the basic and most important machine learning algorithms in order to keep up with current trends. In this article, we list 10 basic algorithms that play very important roles in the machine learning era. Logistic regression, also known as the logit classifier, is a popular mathematical modelling procedure used in the analysis of data. It is used when the dependent variable is binary, i.e., takes the values 0 and 1. In logistic regression, the logistic function describes the mathematical form on which the logistic model is based.
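As a minimal sketch of that model form (illustrative code, not from the article):

```python
import numpy as np

# The logistic (sigmoid) function maps any real number to (0, 1),
# which lets the model output a probability for the binary class.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def predict_proba(X, weights, bias):
    """P(y = 1 | x) = sigmoid(w . x + b) for each row of X."""
    return sigmoid(X @ weights + bias)

X = np.array([[0.5, 1.2],
              [2.0, 0.3]])            # two samples, two features
w = np.array([0.8, -0.4])             # example coefficients
print(predict_proba(X, w, bias=0.1))  # probability of class 1 per sample
```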


Machine Learning Basics: Scalars, Vectors, Matrices and Tensors

#artificialintelligence

A scalar is just a single number, in contrast to most of the other objects, like vectors, which are usually arrays of multiple numbers. We write scalars in italics and usually give them lower-case variable names. When we introduce them, we specify what kind of number they are: for example, "Let n ∈ ℕ be the number of units" defines a natural-number scalar.
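In code, the distinction is simply one of array rank; for instance, with NumPy:

```python
import numpy as np

# The four objects as arrays of increasing rank (number of axes).
scalar = np.float64(3.0)               # a single number, rank 0
vector = np.array([1.0, 2.0, 3.0])     # 1-D array, rank 1
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # 2-D array, rank 2
tensor = np.zeros((2, 3, 4))           # array with 3 (or more) axes

for name, obj in [("scalar", scalar), ("vector", vector),
                  ("matrix", matrix), ("tensor", tensor)]:
    print(name, "rank:", np.ndim(obj), "shape:", np.shape(obj))
```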


In praise of the autoencoder

#artificialintelligence

When you consider all the machine learning (ML) algorithms, you'll find there is a subset of very pragmatic ones: neural networks. They usually require no statistical hypothesis and no specific data preparation except for normalization. The power of each network lies in its architecture, its activation functions, its regularization terms, plus a few other features. Among neural network architectures there is a very versatile one that can serve a variety of purposes, two in particular: detection of unknown, unexpected events and dimensionality reduction of the input space. This neural network is called an autoencoder.
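A minimal sketch of such a network, with illustrative layer sizes and random stand-in data (using Keras):

```python
import numpy as np
from tensorflow import keras

# The encoder compresses the input to a small bottleneck; the decoder tries
# to reconstruct the input from it. A large reconstruction error on new data
# is the usual signal for an unknown, unexpected event.

input_dim, bottleneck_dim = 64, 8
autoencoder = keras.Sequential([
    keras.layers.Dense(bottleneck_dim, activation="relu",
                       input_shape=(input_dim,)),          # encoder
    keras.layers.Dense(input_dim, activation="sigmoid"),   # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, input_dim)          # stand-in for normalized data
autoencoder.fit(X, X, epochs=5, verbose=0)   # target equals the input

errors = np.mean((autoencoder.predict(X) - X) ** 2, axis=1)
print("mean reconstruction error:", errors.mean())
```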


How To Build A Successful AI PoC

#artificialintelligence

As an example, I will take a system that classifies documents. It answers the question "What kind of document is this?" with classes like "electric invoice" or "to-do list". You can find great tutorials on the web on how to architect your server or your data conciliation layer. The simplest solution for an AI PoC in Python is to use Flask and a SQL database, but it depends heavily on your needs and what you already have. Here is a tutorial on using Flask with SQLAlchemy.
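A bare-bones version of such a PoC service might look like the following sketch; the `classify` rule, route name, and schema are placeholders rather than anything from the article:

```python
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

# Minimal PoC sketch: one endpoint that accepts a document, calls a
# stand-in classifier, and logs the result to SQLite.

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///poc.db"
db = SQLAlchemy(app)

class Prediction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    label = db.Column(db.String(64))

def classify(text):
    # Placeholder rule: a real PoC would call the trained model here.
    return "electric invoice" if "kWh" in text else "to-do list"

@app.route("/classify", methods=["POST"])
def classify_endpoint():
    label = classify(request.get_json()["text"])
    db.session.add(Prediction(label=label))
    db.session.commit()
    return jsonify({"label": label})

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run()
```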


Contextual Compositionality Detection with External Knowledge Bases and Word Embeddings

arXiv.org Artificial Intelligence

When the meaning of a phrase cannot be inferred from the individual meanings of its words (e.g., hot dog), that phrase is said to be non-compositional. Automatic compositionality detection in multi-word phrases is critical in any application of semantic processing, such as search engines; failing to detect non-compositional phrases can hurt system effectiveness notably. Existing research treats phrases as either compositional or non-compositional in a deterministic manner. In this paper, we operationalize the viewpoint that compositionality is contextual rather than deterministic, i.e., that whether a phrase is compositional or non-compositional depends on its context. For example, the phrase "green card" is compositional when referring to a green-colored card, whereas it is non-compositional when meaning permanent residence authorization. We address the challenge of detecting this type of contextual compositionality as follows: given a multi-word phrase, we enrich the word embedding representing its semantics with evidence about its global context (terms it often collocates with) as well as its local context (narratives where that phrase is used, which we call usage scenarios). We further extend this representation with information extracted from external knowledge bases. The resulting representation incorporates both localized context and more general usage of the phrase, and allows us to detect its compositionality in a non-deterministic and contextual way. Empirical evaluation of our model on a dataset of phrase compositionality, manually collected by crowdsourcing contextual compositionality assessments, shows that our model notably outperforms state-of-the-art baselines on detecting phrase compositionality.
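As a toy illustration of the underlying intuition only (this is not the paper's model): compositionality is often scored by comparing a phrase's own embedding with a composition of its word embeddings, and the paper's contribution is to make that comparison context-dependent.

```python
import numpy as np

# Toy compositionality score: cosine similarity between the phrase's own
# embedding and the average of its word embeddings. A contextual variant
# would first adjust these vectors with global/local context evidence, as
# the paper describes; that machinery is omitted here.

def compositionality_score(phrase_vec, word_vecs):
    composed = np.mean(word_vecs, axis=0)     # compose the parts' meanings
    return np.dot(phrase_vec, composed) / (
        np.linalg.norm(phrase_vec) * np.linalg.norm(composed))

# A low score (e.g., for "hot dog") suggests non-compositional usage.
```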


How Machine (Deep) Learning Helps Us Understand Human Learning: the Value of Big Ideas

arXiv.org Artificial Intelligence

I use simulations of two multilayer neural networks to gain intuition into the determinants of human learning. The first network, the teacher, is trained to achieve high accuracy in handwritten digit recognition. The second network, the student, learns to reproduce the output of the first network. I show that learning from the teacher is more effective than learning from the data under the appropriate degree of regularization. Regularization allows the teacher to distinguish the trends and to deliver "big ideas" to the student. I also model other learning situations, such as expert and novice teachers, high- and low-ability students, and biased learning experience due to, e.g., poverty and trauma. The results from computer simulation accord remarkably well with findings of the modern psychological literature. The code is written in MATLAB and will be publicly available from the author's web page.
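The teacher-student setup resembles knowledge distillation; here is a sketch of that idea in Python (the paper's own code is in MATLAB, so this is purely illustrative):

```python
import numpy as np

# The student is trained on the teacher's softened outputs instead of the
# raw labels; a temperature > 1 spreads probability mass across classes,
# exposing the teacher's learned structure ("big ideas").

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def student_targets(teacher_logits, temperature=2.0):
    """Soft targets the student fits by minimizing cross-entropy."""
    return softmax(teacher_logits / temperature)

logits = np.array([[4.0, 1.0, 0.5]])   # teacher's output for one digit
print(student_targets(logits))         # softened target distribution
```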


Autoregressive Models for Sequences of Graphs

arXiv.org Artificial Intelligence

This paper proposes an autoregressive (AR) model for sequences of graphs, which generalises traditional AR models. The first novelty consists in formalising the AR model for a very general family of graphs, characterised by a variable topology and by attributes associated with nodes and edges. A graph neural network (GNN) is also proposed to learn the AR function associated with the graph-generating process (GGP) and subsequently predict the next graph in a sequence. The proposed method is compared with four baselines on synthetic GGPs, showing significantly better performance on all considered problems.
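Schematically, the generalisation keeps the familiar order-p AR recursion and replaces scalar observations with graph-valued ones (the notation below is illustrative, not necessarily the paper's):

```latex
% Order-p autoregressive model over graph-valued observations g_t:
% f is the AR function (here learned by a GNN) and \eta_t an
% innovation (noise) term acting on graphs.
\[
  \hat{g}_{t+1} = f\!\left(g_t,\, g_{t-1},\, \dots,\, g_{t-p+1}\right), \qquad
  g_{t+1} = \hat{g}_{t+1} + \eta_t .
\]
```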