similarity


Hybrid content-based and collaborative filtering recommendations with {ordinal} logistic regression (1): Feature engineering

@machinelearnbot

Content-based (CB) systems rely on describing items as feature vectors, and then recommend novel items to users by computing a similarity metric between those items and the items the user has already rated. The content-based component of the system encompasses two matrices: the user-user and the item-item proximity matrices, both obtained by applying the relevant distance metric over a set of features that characterize users and items, respectively. The collaborative filtering (CF) component of the system relies on the usual user-user and item-item similarity matrices computed from the known, past user-item ratings, providing the memory component of the recommender. The Jaccard distance, on the other hand, neglects the covariance of the rating scales altogether, preserving only the information about the extent of the shared ratings.
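
To make that last point concrete, here is a minimal sketch (my own illustration, not the article's code) of Jaccard similarity computed from two users' rated-item sets, ignoring the rating values entirely:

```python
def jaccard_similarity(rated_by_a, rated_by_b):
    """Jaccard similarity of two users' rated-item sets.

    Only set membership matters: the rating values themselves are ignored,
    which is exactly why Jaccard discards rating-scale information.
    """
    a, b = set(rated_by_a), set(rated_by_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Example: the users share 2 of the 4 distinct items they rated.
print(jaccard_similarity({"item1", "item2", "item3"},
                         {"item2", "item3", "item4"}))  # 0.5
```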


Deduplicating Massive Datasets with Locality Sensitive Hashing

#artificialintelligence

The similarity of those documents can then simply be defined as the Jaccard similarity of the two sets of shingles: the number of elements (shingles) they have in common as a proportion of the combined size of the two sets, i.e. the size of the intersection divided by the size of the union. This should all be fine, however, since we've already defined the task as finding near-duplicate documents rather than semantically similar ones; for collections of longer documents this method should work very well. The problem is that finding those duplicates took quite a long time: computing the Jaccard similarity requires comparing every document to every other document, and that approach is clearly not scalable. Locality Sensitive Hashing (LSH) is a generic hashing technique that aims, as the name suggests, to preserve the local relations of the data while significantly reducing the dimensionality of the dataset.
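
As a rough illustration of the building block LSH speeds up (my own simplified sketch, not the post's implementation), MinHash signatures approximate the Jaccard similarity of two shingle sets without comparing the full sets; LSH then bands and buckets these signatures so that only likely duplicates are compared:

```python
import random

def shingles(text, k=5):
    """Character k-shingles of a document."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=128, seed=42):
    """MinHash signature: for each salted hash function, keep the minimum value."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, s)) for s in shingle_set) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc1 = "locality sensitive hashing for near duplicate detection"
doc2 = "locality sensitive hashing for near-duplicate detection"
sig1 = minhash_signature(shingles(doc1))
sig2 = minhash_signature(shingles(doc2))
print(estimated_jaccard(sig1, sig2))  # close to the true Jaccard similarity
```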


Engineering Hyper Personalisation at Scale – Building Fynd

@machinelearnbot

Along with generating user memory, Stream Mapper also computes the similarity between brands and collections that can also be recommended to users. Using the affinity scores calculated by Stream Mapper, the Pipeline predicts the influence of brands, categories and collections on users and ranks them to generate dynamic page content. With the help of the FyndRank algorithm, we dynamically generate gender-based recommendation content for various sections of the app -- For You, Brand and Collection. Existing content (Brand and Category products) is combined with the predicted sequence of new feed cards on the Feed, Brand and Collection pages.
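
A purely hypothetical sketch of the ranking step described above; the affinity_scores structure and its values are illustrative assumptions, not Fynd's actual data model or API:

```python
# Hypothetical affinity scores a stream job might have computed for one user.
affinity_scores = {
    ("brand", "BrandA"): 0.82,
    ("brand", "BrandB"): 0.41,
    ("collection", "Summer"): 0.77,
    ("category", "Footwear"): 0.63,
}

# Rank candidate page sections by the user's affinity, highest first,
# to decide the order of dynamic content on a page.
ranked_sections = sorted(affinity_scores.items(), key=lambda kv: kv[1], reverse=True)
for (kind, name), score in ranked_sections:
    print(f"{kind}:{name} -> {score:.2f}")
```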



Your Artificial Intelligence Is Not Bias-Free

#artificialintelligence

A new wave of algorithmic issues has recently hit the news, bringing the bias of AI into greater focus. To put it simply, "machine bias is human bias." When the data is skewed by human bias, the AI results will be skewed as well -- in this case impacting something as serious as human freedom. Rather than showing users all news options, such an algorithm shows them only the options they are most likely to agree with -- a situation that further compounds political issues on both sides.


Boosting Product Categorization with Machine Learning

#artificialintelligence

To recap our goal: We want to build a machine learning system that predicts a broad range of product categories from names, images or descriptions. We tested prediction accuracies for a range of machine learning models from the scikit-learn library: Naive Bayes, Logistic Regression, k-Nearest Neighbors, Random Forests, Support Vector Machines and Gradient Boosting. Even though descriptions have more complex syntax than product names, the same combination of Logistic Regression and TF-IDF achieved the highest accuracies. To account for the variance in the similarities between matches, we multiply the probabilities of our class predictions by these similarity scores to quantify our confidence in the final category predictions.
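
A hedged sketch of the winning combination the post describes, TF-IDF features fed into Logistic Regression with scikit-learn; the toy product names and categories are my own assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: product names and their categories (illustrative only).
names = ["red running shoes", "wireless bluetooth headphones",
         "leather office chair", "noise cancelling earbuds"]
categories = ["footwear", "electronics", "furniture", "electronics"]

# TF-IDF text features + Logistic Regression, the pairing that scored best in the post.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(names, categories)

# predict_proba gives the class probabilities that can later be multiplied
# by similarity scores to quantify confidence in the final prediction.
print(model.predict(["ergonomic desk chair"]))
print(model.predict_proba(["ergonomic desk chair"]).max())
```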


277 Data Science Key Terms, Explained

@machinelearnbot

Read on to find terminology related to Big Data, machine learning, natural language processing, descriptive statistics, and much more. This post presents a collection of data-science-related key terms with concise, no-nonsense definitions, organized into 12 distinct topics. Deep learning, enjoying a surge in research and industry due mainly to its incredible successes in a number of different areas, is the process of applying deep neural network technologies - that is, neural network architectures with multiple hidden layers - to solve problems. Like data mining, deep learning is a process; it employs deep neural network architectures, which are particular types of machine learning algorithms.


How Will Artificial Intelligence Kickstart the Internet of Things? Let's Find Out

#artificialintelligence

Undoubtedly, finding actionable insights in terabytes of machine data is not a cakewalk; just ask a data scientist. The only feasible way to keep up with the terabytes of data generated by IoT devices and sensors, and to extract the hidden insights they hold, is to use Artificial Intelligence, commonly known as AI. In an IoT environment, AI can help business enterprises take the billions of data points they have and prune them down to what's really helpful and actionable. Gartner has predicted that by the end of next year, 6 billion connected devices will be requesting support, which means that processes, technologies, and strategies will have to be in place to respond to them.


Indicator Based Recommenders – The One We Missed

@machinelearnbot

With Collaborative Filtering you use fairly straightforward matrix factorization to build tables that identify relationships between users with similar tastes (histories on your web site) and the items they consume. Perhaps the most distinctive characteristic of the Log Likelihood Ratio is that it can identify uncommon relationships (anomalous co-occurrences). It's not the most commonly occurring similarities among users that make for good recommendations. In the case of indicator-based recommenders, the ease of development rests on the Mahout-based 'RowSimilarityJob' command, which greatly simplifies creating the recommendation matrix.
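
As a sketch of the Log Likelihood Ratio test mentioned above, here is the entropy-based formulation popularized by Mahout, applied to hypothetical 2x2 co-occurrence counts:

```python
from math import log

def x_log_x(x):
    return x * log(x) if x > 0 else 0.0

def entropy(*counts):
    """Unnormalized Shannon entropy of a set of counts."""
    return x_log_x(sum(counts)) - sum(x_log_x(c) for c in counts)

def log_likelihood_ratio(k11, k12, k21, k22):
    """LLR for a 2x2 co-occurrence table:
    k11 = users who interacted with both items A and B,
    k12 = users with A but not B, k21 = users with B but not A,
    k22 = users with neither.
    Large values flag anomalous (surprisingly frequent) co-occurrence."""
    row_entropy = entropy(k11 + k12, k21 + k22)
    col_entropy = entropy(k11 + k21, k12 + k22)
    matrix_entropy = entropy(k11, k12, k21, k22)
    return 2.0 * (row_entropy + col_entropy - matrix_entropy)

# Hypothetical counts: an item pair that co-occurs far more often than chance.
print(log_likelihood_ratio(k11=30, k12=10, k21=20, k22=940))
```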


Recommendation System Algorithms: An Overview

@machinelearnbot

There are two basic approaches in CF: user-based collaborative filtering and item-based collaborative filtering. Imagine we're building a big recommendation system where collaborative filtering and matrix decompositions would take too long to run. According to the study "Deep Neural Networks for YouTube Recommendations", the YouTube recommendation system consists of two neural networks: one for candidate generation and one for ranking. Taking events from a user's history as input, the candidate generation network significantly decreases the number of candidate videos, selecting a group of the most relevant ones from the large corpus.
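
A minimal sketch of item-based collaborative filtering on a toy rating matrix (my own illustrative example, not the article's code): items are compared by the cosine similarity of their rating columns, and a user's unrated items are scored from the items they already rated:

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Item-item cosine similarity computed from the rating columns.
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

def score_unrated(user_idx):
    """Score a user's unrated items as a similarity-weighted sum of their ratings."""
    user = ratings[user_idx]
    scores = item_sim @ user
    scores[user > 0] = -np.inf  # never re-recommend already rated items
    return scores

print(score_unrated(0))  # the highest score marks the item to recommend to user 0
```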