recommender


It's No Joke: AI Beats Humans At Making You Laugh

#artificialintelligence

We all enjoy sharing jokes with friends, hoping a witty one might elicit a smile--or maybe even a belly laugh. A lawyer opened the door of his BMW, when, suddenly, a car came along and hit the door, ripping it off completely. When the police arrived at the scene, the lawyer was complaining bitterly about the damage to his precious BMW. "Officer, look what they've done to my Beeeeemer!" he whined. "You lawyers are so materialistic, you make me sick!" retorted the officer.


Uncovering the Data-Related Limits of Human Reasoning Research: An Analysis based on Recommender Systems

arXiv.org Artificial Intelligence

Understanding the fundamentals of human reasoning is central to the development of any system built to closely interact with humans. Cognitive science pursues the goal of modeling human-like intelligence from a theory-driven perspective with a strong focus on explainability. Syllogistic reasoning, one of the core domains of human reasoning research, has seen a surge of computational models being developed in recent years. However, recent analyses of models' predictive performances revealed a stagnation in improvement. We believe that most of the problems encountered in cognitive science are not due to the specific models that have been developed but can instead be traced back to the peculiarities of behavioral data. Therefore, we investigate potential data-related reasons for the problems in human reasoning research by comparing model performances on human and artificially generated datasets. In particular, we apply collaborative filtering recommenders to investigate the adversarial effects of inconsistencies and noise in data and illustrate the potential for data-driven methods in a field of research predominantly concerned with gaining high-level theoretical insight into a domain. Our work (i) provides insight into the levels of noise to be expected from human responses in reasoning data, (ii) uncovers evidence for an upper bound of performance that is close to being reached, urging an extension of the modeling task, and (iii) introduces the tools and presents initial results to pioneer a new paradigm for investigating and modeling reasoning focusing on predicting responses for individual human reasoners.
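The collaborative-filtering angle described in the abstract can be sketched with a toy example: predict an individual reasoner's response to a task from the answers of the most similar participants. All participant names, tasks, and responses below are invented for illustration; the paper's actual recommenders and data differ.

```python
from collections import Counter

# Toy response data: each known participant's answers to reasoning tasks.
responses = {
    "p1": {"t1": "valid", "t2": "invalid", "t3": "valid"},
    "p2": {"t1": "valid", "t2": "invalid", "t3": "valid"},
    "p3": {"t1": "invalid", "t2": "valid", "t3": "invalid"},
}
new = {"t1": "valid", "t2": "invalid"}   # new reasoner; response to t3 unknown

def agreement(p):
    # Fraction of shared tasks on which the new reasoner agrees with p.
    shared = set(new) & set(responses[p])
    return sum(new[t] == responses[p][t] for t in shared) / len(shared)

def predict(task, k=2):
    # Majority answer among the k participants most similar to the new reasoner.
    ranked = sorted(responses, key=agreement, reverse=True)
    votes = Counter(responses[p][task] for p in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict("t3"))  # "valid" -- the most similar reasoners answered "valid"
```

The same neighborhood-based scheme, scaled up, is what makes noise in human responses visible: when even the closest neighbors disagree, prediction accuracy hits a ceiling.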


Content-based Recommender Using Natural Language Processing (NLP) - KDnuggets

#artificialintelligence

When we provide ratings for products and services on the internet, all the preferences we express and the data we share (explicitly or not) are used by recommender systems to generate recommendations. The most common examples are those of Amazon, Google, and Netflix. In this article, I have combined movie attributes such as genre, plot, director, and main actors to compute cosine similarity between movies. The dataset is the IMDB top 250 English movies, downloaded from data.world. Exploring the dataset, there are 250 movies (rows) and 38 attributes (columns).
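A minimal sketch of that approach: flatten a movie's textual attributes into one bag of tokens and compare films by cosine similarity. The movies and attribute values below are invented, not taken from the actual dataset, and a real pipeline would typically use TF-IDF weighting rather than raw counts.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine of the angle between two sparse count vectors.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def profile(genre, director, actors, plot_words):
    # Flatten selected attributes into a single token multiset.
    return Counter(genre + [director] + actors + plot_words)

m1 = profile(["crime", "drama"], "coppola", ["pacino"], ["family", "mafia"])
m2 = profile(["crime", "drama"], "coppola", ["deniro"], ["family", "mafia"])
m3 = profile(["comedy"], "anderson", ["murray"], ["hotel"])

print(round(cosine_similarity(m1, m2), 2))  # 0.83: shared genre/director/plot
print(cosine_similarity(m1, m3))            # 0.0: no overlapping tokens
```

Recommendation then reduces to ranking the catalog by similarity to a seed movie.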


How to evaluate a machine learning model - part 3

#artificialintelligence

This blog post is the continuation of my previous articles, part 1 and part 2. The average per-class accuracy is a variation of accuracy, defined as the average of the accuracy for each individual class. Accuracy is an example of what is known as a micro-average, while average per-class accuracy is a macro-average. In general, when there are different numbers of examples per class, the average per-class accuracy will differ from the accuracy. This matters because when the classes are imbalanced, i.e., there are far more examples of one class than of the other, plain accuracy gives an imprecise picture: the class with more examples dominates the statistic.
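The difference is easy to see on a small synthetic example (the labels below are made up): a majority-class classifier looks strong under plain (micro-averaged) accuracy but is exposed by the macro-average.

```python
# Synthetic imbalanced labels: 90 examples of class 0, 10 of class 1.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [0] * 10   # classifier always predicts the majority class

# Micro-average (plain accuracy): every example weighted equally.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Macro-average: accuracy computed per class, then averaged across classes.
def per_class_accuracy(cls):
    idx = [i for i, t in enumerate(y_true) if t == cls]
    return sum(y_true[i] == y_pred[i] for i in idx) / len(idx)

macro = (per_class_accuracy(0) + per_class_accuracy(1)) / 2

print(accuracy)  # 0.9 -- looks good, but the minority class is never right
print(macro)     # 0.5 -- exposes the total failure on class 1
```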


The machine learning techniques used in Cloud IAM Recommender Google Cloud Blog

#artificialintelligence

To help you fine-tune your Google Cloud environment, we offer a family of 'recommenders' that suggest ways to optimize how you configure your infrastructure and security settings. But unlike many other recommendation engines, which use policy-based rules, some Google Cloud recommenders use machine learning (ML) to generate their suggestions. In this blog post, we'll take a behind-the-scenes look at one of these engines, the Cloud Identity and Access Management (IAM) Recommender, and the ML that powers its functionality. IAM Recommender helps security professionals enforce the principle of least privilege by identifying and removing unwanted access to GCP resources. It does this by using machine learning to determine what access users actually need, analyzing their permission usage over a 90-day period.
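The underlying least-privilege principle can be illustrated with a deliberately simplified rule: flag granted permissions a principal has not used within the lookback window. This is not Google's actual model (which, per the post, uses ML rather than a plain rule), and the permission names and timestamps below are invented.

```python
from datetime import datetime, timedelta

# Assumed example data: permissions granted to one principal, and when
# each was last exercised (missing entry = never observed in use).
granted = {"storage.objects.get", "storage.objects.delete",
           "compute.instances.list"}
last_used = {
    "storage.objects.get": datetime(2024, 6, 1),
    "compute.instances.list": datetime(2024, 1, 5),
}

def unused_permissions(now, window_days=90):
    # Flag permissions never used, or not used within the window.
    cutoff = now - timedelta(days=window_days)
    return sorted(p for p in granted
                  if p not in last_used or last_used[p] < cutoff)

print(unused_permissions(datetime(2024, 6, 15)))
# ['compute.instances.list', 'storage.objects.delete'] -- removal candidates
```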


Democratization Social trading into Digital Banking using ML - K Nearest Neighbors

#artificialintelligence

Social trading is an alternative way of trading that involves observing what other traders are doing, then comparing and copying their techniques and strategies. It allows traders to trade online with the help of others, and some claim it shortens the learning curve from novice to experienced trader. By copying trades, traders can learn which strategies work and which do not. Social trading is often used for speculation; in a moral context, speculative practices are viewed negatively, the argument being that individuals should instead maintain a long-term horizon and avoid short-term speculation. For an example, look at eToro, one of the biggest social trading platforms.
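The k-nearest-neighbors technique named in the title fits the "who should I copy?" question naturally: represent each trader as a feature vector and return the closest ones to a query profile. The features, trader names, and numbers below are all invented for illustration; a real system would normalize features and use far richer data.

```python
import math

# Each trader: (avg return %, win rate, trades per month) -- assumed features.
traders = {
    "alice": (12.0, 0.62, 30),
    "bob":   (11.5, 0.60, 28),
    "carol": (-3.0, 0.40, 90),
}

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_traders(query, k=1):
    # Rank all traders by distance to the query profile; return the k closest.
    ranked = sorted(traders, key=lambda name: euclidean(traders[name], query))
    return ranked[:k]

# A user whose profile resembles alice and bob more than carol:
print(nearest_traders((11.8, 0.61, 29), k=2))  # ['alice', 'bob']
```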


LIDA: Lightweight Interactive Dialogue Annotator

arXiv.org Artificial Intelligence

Dialogue systems have the potential to change how people interact with machines but are highly dependent on the quality of the data used to train them. It is therefore important to develop good dialogue annotation tools which can improve the speed and quality of dialogue data annotation. With this in mind, we introduce LIDA, an annotation tool designed specifically for conversation data. As far as we know, LIDA is the first dialogue annotation system that handles the entire dialogue annotation pipeline from raw text, as may be the output of transcription services, to structured conversation data. Furthermore, it supports the integration of arbitrary machine learning models as annotation recommenders and also has a dedicated interface for resolving inter-annotator disagreements, such as after crowdsourcing annotations for a dataset. LIDA is fully open source, documented, and publicly available [ https://github.com/Wluper/lida ].


Collaborative Filtering with A Synthetic Feedback Loop

arXiv.org Machine Learning

We propose a novel learning framework for recommendation systems, assisting collaborative filtering with a synthetic feedback loop. The proposed framework consists of a "recommender" and a "virtual user." The recommender is formalized as a collaborative-filtering method, recommending items according to observed user behavior. The virtual user estimates rewards from the recommended items and generates the influence of the rewards on observed user behavior. Together, the recommender and the virtual user form a closed loop that recommends items to users and imitates the users' unobserved feedback to the recommended items. The synthetic feedback is used to augment observed user behavior and improve recommendation results. Such a model can be interpreted as inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework is able to boost the performance of existing collaborative filtering methods on multiple datasets.
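The closed-loop structure can be caricatured in a few lines (this is not the paper's algorithm, just the shape of the loop): a toy recommender scores items from counts of observed interactions, a "virtual user" simulates feedback on the top recommendation, and accepted recommendations are fed back to augment the observed data. All items and the reward rule are invented.

```python
from collections import Counter

observed = ["a", "a", "b"]           # items the user interacted with
catalog = ["a", "b", "c"]

def recommend(interactions):
    # Toy recommender: the catalog item with the highest interaction count.
    counts = Counter(interactions)
    return max(catalog, key=lambda i: counts[i])

def virtual_user(item):
    # Assumed reward model: the simulated user accepts items seen before.
    return item in observed

for _ in range(3):                   # closed loop: recommend -> feedback -> augment
    item = recommend(observed)
    if virtual_user(item):
        observed.append(item)        # synthetic feedback augments observed data

print(recommend(observed))  # "a" -- the loop reinforced the dominant item
```

In the paper, both components are learned models and the loop is trained via rollout; the sketch only shows how synthetic interactions flow back into the recommender's training data.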