similarity


How to Build AI Products by Ria Sankar, Microsoft Advisor

#artificialintelligence

Artificial Intelligence (AI) and Machine Learning (ML) products are unique: they hold enormous power and are, by definition, constantly changing. Due to the level of sophistication involved, the development process for AI products is distinct from that of traditional products. In this presentation, Ria Sankar, Director of Program Management at Microsoft, introduces best practices for developing AI products with insight, integrity, and consistency. Ria Sankar is a founding member of the AI for Good Research Lab at Microsoft.


How does artificial intelligence work in customer service?

#artificialintelligence

Our clients often ask us how our language technology is able to detect questions that are formulated differently, contain errors, or combine multiple questions in one text. To answer them, we have written this article as a general overview of our technology; the descriptions are intentionally kept simple to ensure easy understanding. The article is primarily intended for service managers and business managers who want to automate their customer service. We have been working in the field of NLU (Natural Language Understanding) / NLP (Natural Language Processing) for several years.
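
The excerpt does not disclose the vendor's method; a standard way to match differently worded (and misspelled) user questions to a known intent is to compare sentence embeddings rather than surface words. A minimal sketch using the sentence-transformers library follows; the model name and threshold are example choices, not the vendor's settings:

```python
from sentence_transformers import SentenceTransformer, util

# Example model choice; any sentence-embedding model works similarly.
model = SentenceTransformer('all-MiniLM-L6-v2')

known_questions = [
    "How do I reset my password?",
    "Where can I see my invoices?",
    "How do I cancel my subscription?",
]
known_emb = model.encode(known_questions, convert_to_tensor=True)

def match_intent(user_text, threshold=0.6):
    """Return the best-matching known question, or None if the user's
    wording (typos included) isn't close enough to any of them."""
    emb = model.encode(user_text, convert_to_tensor=True)
    scores = util.cos_sim(emb, known_emb)[0]
    best = int(scores.argmax())
    return known_questions[best] if float(scores[best]) > threshold else None

# Misspellings and rephrasings land near the canonical question in
# embedding space, which is how such systems tolerate errors.
print(match_intent("i forgott my pasword, how to change it??"))
```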


Neural Style Transfer -- Using Deep Learning to Generate Art

#artificialintelligence

Wouldn't it be nice if Vincent van Gogh had painted your portrait? Or imagine Claude Monet interpreting your hometown instead of the French countryside. Alas, these great artists are no longer around to paint new masterpieces, but they have left their creations behind for us to learn from. Both artists had a definitive, characteristic style of painting: their choice of colors and brush strokes gave their paintings a character that made them so prized and unique.
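
The excerpt is all motivation; the classic approach behind the title (Gatys-style neural style transfer) optimizes an image to match one photo's content features and another image's style statistics, summarized as Gram matrices of CNN activations. Below is a minimal sketch of that loss; the layer indices and style weight are illustrative assumptions, not the article's settings:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained VGG features are the usual backbone for Gatys-style transfer.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def features(x, layer_ids):
    """Collect activations at the given layer indices of the VGG stack."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats[i] = x
    return feats

def gram(f):
    """Gram matrix of a (1, C, H, W) feature map: channel co-activation
    statistics, which capture texture/style independent of layout."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

# Illustrative layer choices, not the original paper's exact ones.
CONTENT_LAYERS, STYLE_LAYERS = {21}, {0, 5, 10, 19, 28}

def style_transfer_loss(generated, content_img, style_img, style_weight=1e6):
    g = features(generated, CONTENT_LAYERS | STYLE_LAYERS)
    with torch.no_grad():  # targets are fixed; only `generated` is optimized
        c = features(content_img, CONTENT_LAYERS)
        s = features(style_img, STYLE_LAYERS)
    content_loss = sum(F.mse_loss(g[i], c[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram(g[i]), gram(s[i])) for i in STYLE_LAYERS)
    return content_loss + style_weight * style_loss
```

Optimizing the pixels of `generated` against this loss (typically with L-BFGS or Adam) then yields an image with the photo's content rendered in the painter's style.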


(PDF) Input Similarity from the Neural Network Perspective

#artificialintelligence

The results in figure 6 show the average and standard deviation over 60 runs for each curve. Similar results were observed in all cases. More specifically, the model is composed of 4 neural networks. The resulting number of similarities to compute would be around half a billion. We thus obtain 3045 patches representing the dataset.
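
The excerpt is fragmentary, so for orientation: the paper's central object is a similarity between inputs as seen by the network itself, built from parameter gradients (a gradient step that changes the output at one input also moves the output at similar inputs). A minimal sketch under that reading, assuming a scalar-output model for simplicity:

```python
import torch

def network_similarity(model, x1, x2):
    """Gradient-based input similarity: the inner product of
    d f(x)/d theta at the two inputs. Intuitively, it measures how much
    a training step on x1 would also move the network's output at x2.
    A scalar output is assumed; the paper handles richer cases."""
    def param_grad(x):
        out = model(x).sum()  # scalar output assumed
        grads = torch.autograd.grad(out, model.parameters())
        return torch.cat([g.flatten() for g in grads])
    g1, g2 = param_grad(x1), param_grad(x2)
    # Cosine-normalized version, so an input compared to itself scores 1.
    return torch.dot(g1, g2) / (g1.norm() * g2.norm())
```

Computing this for every pair in a large dataset is what blows up to the half-billion similarities mentioned above, which motivates compressing the dataset into 3045 representative patches.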


Do Deep Neural Networks 'See' Faces Like Brains Do?

#artificialintelligence

Recognizing faces is as natural and habitual as can be for human beings. Even with their undeveloped vision, babies can recognize their mother's face within days, while adults typically know some 5,000 faces. But what actually happens inside our brains during the process of recognizing a face? How are different facial features encoded in our brains? And can artificial intelligence learn to recognize faces the way humans do?


Improving Cross-Lingual Transfer Learning by Filtering Training Data : Alexa Blogs

#artificialintelligence

This type of cross-lingual transfer learning can make it easier to bootstrap a model in a language for which training data is scarce, by taking advantage of more abundant data in a source language. But sometimes the data in the source language is so abundant that using all of it to train a transfer model would be impractically time-consuming. Moreover, linguistic differences between source and target languages mean that pruning the training data in the source language, so that its statistical patterns better match those of the target language, can actually improve the performance of the transferred model. In a paper we're presenting at this year's Conference on Empirical Methods in Natural Language Processing, we describe experiments with a new data selection technique that let us halve the amount of training data required in the source language, while actually improving a transfer model's performance in a target language. For evaluation purposes, we used two techniques to cut the source-language data set in half: one was our data selection technique, and the other was random sampling.
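
The excerpt names the technique only generically; a well-known instance of this kind of source-data filtering is cross-entropy difference selection (Moore and Lewis, 2010), which keeps the sentences that a target-like language model scores much better than a general one. A sketch under that assumption, where `lm.logprob` is a placeholder interface rather than a real library call:

```python
def cross_entropy(lm, sentence):
    """Average per-token negative log-probability under a language model.
    `lm.logprob(token, context)` is a placeholder interface, not a real API."""
    tokens = sentence.split()
    nll = -sum(lm.logprob(tok, tokens[:i]) for i, tok in enumerate(tokens))
    return nll / max(len(tokens), 1)

def select_half(source_sentences, target_like_lm, general_lm):
    """Moore-Lewis style selection: keep the half of the source-language
    corpus whose statistics look most like the target language. A lower
    cross-entropy difference means a better match to the target-like model."""
    scored = sorted(
        source_sentences,
        key=lambda s: cross_entropy(target_like_lm, s)
                      - cross_entropy(general_lm, s),
    )
    return scored[: len(scored) // 2]
```

Random sampling, the baseline mentioned in the excerpt, corresponds to replacing the sort key with a random one.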


The Space of ArXiv Papers - WebSystemer.no

#artificialintelligence

A TeX file contains basic instructions for typesetting, and can be rendered to various formats, most frequently PDFs. LaTeX contains a set of convenience macros that make it easy to define and re-use writing essentials, such as titles, headers, equations, sections, and footers. Having documents stored in TeX means that all you need to recreate these pretty, structured, and formatted research papers in their full glory are their source .tex files.
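
As a minimal illustration of those conveniences, here is a tiny hand-written LaTeX source (not an actual arXiv paper's source); compiling it with, e.g., pdflatex produces a typeset PDF in which the title, section, and equation are laid out automatically:

```latex
\documentclass{article}
\title{A Minimal Example}
\author{A. Author}

\begin{document}
\maketitle            % renders title/author from the macros above

\section{Introduction}
% A reusable macro for a recurring symbol:
\newcommand{\loss}{\mathcal{L}}

The training objective is
\begin{equation}
  \loss(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i; \theta).
\end{equation}
\end{document}
```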


Adversarial Fisher Vectors for Unsupervised Representation Learning

arXiv.org Machine Learning

We examine Generative Adversarial Networks (GANs) through the lens of deep Energy Based Models (EBMs), with the goal of exploiting the density model that follows from this formulation. In contrast to a traditional view where the discriminator learns a constant function when reaching convergence, here we show that it can provide useful information for downstream tasks, e.g., feature extraction for classification. To be concrete, in the EBM formulation, the discriminator learns an unnormalized density function (i.e., the negative energy term) that characterizes the data manifold. We propose to evaluate both the generator and the discriminator by deriving corresponding Fisher Score and Fisher Information from the EBM. We show that by assuming that the generated examples form an estimate of the learned density, both the Fisher Information and the normalized Fisher Vectors are easy to compute. We also show that we are able to derive a distance metric between examples and between sets of examples. We conduct experiments showing that the GAN-induced Fisher Vectors demonstrate competitive performance as unsupervised feature extractors for classification and perceptual similarity tasks. Code is available at \url{https://github.com/apple/ml-afv}.
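
In symbols (a reconstruction from the abstract's description, not the paper's exact notation): the EBM reading treats the discriminator as an unnormalized log-density, and the Fisher score and normalized Fisher vector follow from the standard definitions:

```latex
% EBM reading of the discriminator:
p_\theta(x) = \frac{\exp(D_\theta(x))}{Z(\theta)}

% Fisher score of an example x:
U_x = \nabla_\theta \log p_\theta(x)
    = \nabla_\theta D_\theta(x)
      - \mathbb{E}_{x' \sim p_\theta}\!\left[\nabla_\theta D_\theta(x')\right]

% Fisher information:
I = \mathbb{E}_{x \sim p_\theta}\!\left[U_x U_x^{\top}\right]

% Normalized Fisher vector used as the feature representation:
V_x = I^{-1/2}\, U_x
```

The abstract's assumption that generated examples estimate the learned density is what makes the expectations tractable: both can be approximated by averaging over generator samples G(z).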


Neural Similarity Learning

arXiv.org Machine Learning

Inner product-based convolution has been the cornerstone of convolutional neural networks (CNNs), enabling end-to-end learning of visual representation. By generalizing the inner product with a bilinear matrix, we propose the neural similarity, which serves as a learnable parametric similarity measure for CNNs. Neural similarity naturally generalizes the convolution and enhances flexibility. Further, we consider neural similarity learning (NSL) in order to learn the neural similarity adaptively from training data. Specifically, we propose two different ways of learning the neural similarity: static NSL and dynamic NSL. Interestingly, dynamic neural similarity makes the CNN become a dynamic inference network. By regularizing the bilinear matrix, NSL can be viewed as learning the shape of the kernel and the similarity measure simultaneously. We further justify the effectiveness of NSL from a theoretical viewpoint. Most importantly, NSL shows promising performance in visual recognition and few-shot learning, validating the superiority of NSL over its inner product-based convolution counterparts.
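
Concretely (my notation, reconstructing the abstract's description rather than quoting the paper): at each spatial location a standard convolution scores a patch x against a filter w with an inner product, and neural similarity generalizes this with a learnable bilinear matrix M:

```latex
% Inner product-based convolution at one location:
s = \langle w, x \rangle = w^{\top} x

% Neural similarity: a bilinear generalization with learnable M
s_M = w^{\top} M x

% Static NSL:  M is a fixed parameter learned from the training data.
% Dynamic NSL: M = g_\phi(x) is predicted per input, which is what
%              turns the CNN into a dynamic inference network.
```

Setting M to the identity recovers ordinary convolution, which is why the abstract describes neural similarity as a natural generalization of it.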


PerceptNet: A Human Visual System Inspired Neural Network for Estimating Perceptual Distance

arXiv.org Machine Learning

Alexander Hepburn, Valero Laparra, Jesús Malo, Ryan McConville, Raul Santos-Rodriguez (Department of Engineering Mathematics, University of Bristol; Image Processing Lab, Universitat de València). Traditionally, the vision community has devised algorithms to estimate the distance between an original image and images that have been subject to perturbations. Inspiration was usually taken from the human visual perceptual system and how the system processes different perturbations, in order to replicate to what extent it determines our ability to judge image quality. While recent works have presented deep neural networks trained to predict human perceptual quality, very few borrow any intuitions from the human visual system. To address this, we present PerceptNet, a convolutional neural network where the architecture has been chosen to reflect the structure and various stages in the human visual system. We evaluate PerceptNet on various traditional perception datasets and note strong performance on a number of them as compared with traditional image quality metrics.
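
The abstract states the design principle but not the layers; as a hedged illustration of "architecture stages mirroring the visual system", here is a toy cascade of convolution plus divisive normalization, the kind of gain-control operation (GDN) that perceptual models often use to mimic processing in the visual pathway. This is a generic sketch, not PerceptNet's actual parameterization:

```python
import torch
import torch.nn as nn

class DivisiveNorm(nn.Module):
    """Simplified divisive normalization (GDN-like): each channel is
    divided by a learned combination of squared channel activations,
    mimicking contrast gain control in the visual pathway."""
    def __init__(self, channels, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.eye(channels) * 0.1)
        self.beta = nn.Parameter(torch.ones(channels))
        self.eps = eps

    def forward(self, x):                      # x: (B, C, H, W)
        sq = x.pow(2)
        # Mix squared activations across channels (a learned 1x1 mixing).
        denom = torch.einsum('bchw,dc->bdhw', sq, self.gamma)
        denom = denom + self.beta.view(1, -1, 1, 1)
        return x / torch.sqrt(denom.clamp_min(self.eps))

# Toy perceptual encoder: each stage is conv -> divisive normalization,
# loosely echoing successive stages of the human visual system.
toy_perceptnet = nn.Sequential(
    nn.Conv2d(3, 8, 5, padding=2), DivisiveNorm(8),
    nn.Conv2d(8, 16, 5, stride=2, padding=2), DivisiveNorm(16),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), DivisiveNorm(32),
)

def perceptual_distance(a, b):
    """Perceptual distance = distance between encoded representations."""
    return (toy_perceptnet(a) - toy_perceptnet(b)).pow(2).mean().sqrt()
```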