Why games may not be the best benchmark for AI

#artificialintelligence

In 2019, San Francisco-based AI research lab OpenAI held a tournament to tout the prowess of OpenAI Five, a system designed to play the multiplayer battle arena game Dota 2. OpenAI Five defeated a team of professional players -- twice. And when made publicly available, OpenAI Five managed to win against 99.4% of people who played against it online. OpenAI has invested heavily in games for research, developing libraries like CoinRun and Neural MMO, a simulator that plops AI in the middle of an RPG-like world. But that approach is changing.


Europe is seeing a hiring boom in tech industry machine learning roles

#artificialintelligence

Europe was the fastest-growing region for machine learning hiring among tech industry companies in the three months ending October. Roles in Europe made up 9.4% of total machine learning jobs, up from 7.7% in the same quarter a year earlier, a 1.7 percentage point gain. The next-ranked region, the Middle East & Africa, saw a -0.2 percentage point year-on-year change in its share of machine learning roles. The figures are compiled by GlobalData, which tracks the number of new job postings from key companies in various sectors over time. Using textual analysis, the job advertisements are then classified thematically.


Automated Detection of GDPR Disclosure Requirements in Privacy Policies using Deep Active Learning

arXiv.org Artificial Intelligence

Since the GDPR came into force in May 2018, companies have reworked their data practices to comply with the privacy law. In particular, because the privacy policy is the essential channel through which users understand and control their privacy, many companies updated their privacy policies after the GDPR was enforced. However, most privacy policies are verbose, full of jargon, and describe companies' data practices and users' rights only vaguely, so it is unclear whether they comply with the GDPR. In this paper, we create a dataset of privacy policies from 1,080 websites labeled with the 18 GDPR requirements and develop a Convolutional Neural Network (CNN) based model that classifies the privacy policies with an accuracy of 89.2%. We apply our model to measure compliance across the privacy policies. Our results show that even after the GDPR went into effect, 97% of websites still fail to comply with at least one requirement of the GDPR.
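The abstract does not include code, but the described classifier, a CNN over privacy-policy text with one output per GDPR requirement, can be sketched roughly as below. The vocabulary size, sequence length, layer widths, and the multi-label sigmoid head are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a CNN privacy-policy classifier (not the paper's code).
    # Assumes policy segments are tokenized to fixed-length integer sequences and
    # each segment carries binary labels for the 18 GDPR disclosure requirements.
    import tensorflow as tf

    VOCAB_SIZE = 20000   # assumed vocabulary size
    NUM_LABELS = 18      # one output per GDPR requirement

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 128),
        tf.keras.layers.Conv1D(128, 5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),  # multi-label output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

A sigmoid head with binary cross-entropy fits the multi-label setting, since a single policy can satisfy or violate several requirements at once.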


Most Successful Machine Learning Companies

#artificialintelligence

The core expertise of InData Labs is in Artificial Intelligence and Data Science, and the company is proficient in advanced analytics languages and tools such as Python, R, TensorFlow, Keras, Alteryx, and others. InData Labs offers AI consulting and development services, as well as AI-powered mobile app development, to help clients grow their businesses; its services include developing customized AI solutions from scratch and building AI-based products. Indium Software's Machine Learning (ML) service enables companies to gain a competitive edge through capabilities such as customer lifetime value prediction, proactive maintenance, spam detection, and more. Indium's motto is "making technology work," and the company provides best-in-class machine learning algorithms as well as machine learning consulting solutions.


Top 100 Artificial Intelligence Companies in the World

#artificialintelligence

Artificial Intelligence (AI) is not just a buzzword but a crucial part of the technology landscape. AI is changing every industry and business function, resulting in increased interest in its applications, subdomains, and related fields. This makes AI companies the leaders driving the technology shift. AI helps us optimise and automate crucial business processes, gather essential data, and transform the world, one step at a time. From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. As big enterprises are busy acquiring or merging with emerging innovators, small AI companies are also working hard to develop their own intelligent technology and services. By leveraging artificial intelligence, organizations gain an innovative edge in the digital age. AI consultancies are also working to provide companies with expertise that can help them grow. In this digital era, AI is also a significant area for investment, and AI companies are constantly developing new products to provide the simplest solutions. Hence, Analytics Insight brings you the list of the top 100 AI companies that are leading the technology drive towards a better tomorrow. AEye develops advanced vision hardware, software, and algorithms that act as the eyes and visual cortex of autonomous vehicles. An artificial perception pioneer, AEye is the creator of iDAR, a new form of intelligent data collection for autonomous vehicles. Since demonstrating its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. Its mission is to acquire the most information with the fewest ones and zeros, allowing AEye to drive the automotive industry into the next realm of autonomy. Algorithmia invented the AI Layer.


Self-supervised Learning for Large-scale Item Recommendations

arXiv.org Machine Learning

Large-scale recommender models find the most relevant items from huge catalogs, and they play a critical role in modern search and recommendation systems. To model an input space with large-vocabulary categorical features, a typical recommender model learns a joint embedding space through neural networks for both queries and items from user feedback data. However, with millions to billions of items, power-law user feedback makes labels very sparse for a large number of long-tail items. Inspired by the recent success of self-supervised representation learning in both computer vision and natural language understanding, we propose a multi-task self-supervised learning (SSL) framework for large-scale item recommendations. The framework is designed to tackle the label sparsity problem by learning more robust item representations. Furthermore, we propose two self-supervised tasks applicable to models with categorical features within the proposed framework: (i) Feature Masking (FM) and (ii) Feature Dropout (FD). We evaluate our framework using two large-scale datasets with 500M and 1B training examples, respectively. Our results demonstrate that the proposed framework outperforms traditional supervised-learning-only models and state-of-the-art regularization techniques in the context of item recommendations. The SSL framework shows larger improvements with less supervision compared to the counterparts. We also apply the proposed techniques to a web-scale commercial app-to-app recommendation system and significantly improve top-tier business metrics via A/B experiments on live traffic. Our online results also verify our hypothesis that the framework improves model performance on slices that lack supervision.
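As a rough sketch of the Feature Dropout (FD) idea: two independently corrupted views of the same item should map to nearby embeddings, trained with a contrastive objective. The tensor shapes, the encoder, the dropout rate, and the loss details below are assumptions for illustration; the paper's actual architecture differs in specifics.

    # Hypothetical sketch of a feature-dropout SSL task (not the paper's code).
    # item_features: (batch, num_features, embed_dim) stacked feature embeddings.
    import torch
    import torch.nn.functional as F

    def feature_dropout(features: torch.Tensor, p: float = 0.3) -> torch.Tensor:
        # Zero out each categorical-feature embedding with probability p.
        mask = (torch.rand(features.shape[:2]) > p).float().unsqueeze(-1)
        return features * mask

    def ssl_loss(encoder, item_features: torch.Tensor, temperature: float = 0.1):
        # Encode two independently corrupted views of the same batch of items.
        z1 = F.normalize(encoder(feature_dropout(item_features)), dim=-1)
        z2 = F.normalize(encoder(feature_dropout(item_features)), dim=-1)
        logits = z1 @ z2.T / temperature      # batch x batch similarity matrix
        labels = torch.arange(z1.size(0))     # the matching view is the positive
        return F.cross_entropy(logits, labels)

In a multi-task setup, this auxiliary loss would be added to the supervised recommendation loss, which is how the label sparsity on long-tail items gets offset by self-supervision.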


Reinforcement Learning for Strategic Recommendations

arXiv.org Machine Learning

Strategic recommendations (SR) refer to the problem where an intelligent agent observes the sequential behaviors and activities of users and decides when and how to interact with them to optimize some long-term objectives, both for the user and the business. These systems are in their infancy in industry and in need of practical solutions to some fundamental research challenges. At Adobe Research, we have been implementing such systems for various use cases, including points-of-interest recommendations, tutorial recommendations, next-step guidance in multimedia editing software, and ad recommendation for optimizing lifetime value. There are many research challenges when building these systems, such as modeling the sequential behavior of users, deciding when to intervene and offer recommendations without annoying the user, evaluating policies offline with high confidence, safe deployment, non-stationarity, building systems from passive data that do not contain past recommendations, resource-constrained optimization in multi-user systems, scaling to large and dynamic action spaces, and handling and incorporating human cognitive biases. In this paper, we cover various use cases and the research challenges we solved to make these systems practical.
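Concretely, the "long-term objectives" here can be read as the standard reinforcement-learning objective: with user state s_t, recommendation action a_t, a reward r (e.g., engagement or lifetime value), and discount factor gamma, the agent seeks a policy maximizing the expected discounted return. This is the generic formulation, not notation taken from the paper:

    J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right],
    \qquad \pi^{\star} = \arg\max_{\pi} J(\pi)

Framing recommendation this way is what makes the listed challenges (offline policy evaluation, safe deployment, learning from passive logs without past actions) specifically RL problems rather than standard supervised ranking.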


Iterative Boosting Deep Neural Networks for Predicting Click-Through Rate

arXiv.org Machine Learning

The click-through rate (CTR) reflects the ratio of clicks on a specific item to its total number of views and has a significant impact on websites' advertising revenue. Learning sophisticated models to understand and predict user behavior is essential for maximizing the CTR in recommendation systems. Recent works have suggested new methods that replace the expensive and time-consuming feature engineering process with a variety of deep learning (DL) classifiers capable of capturing complicated patterns from raw data; these methods have shown significant improvement on the CTR prediction task. While DL techniques can learn intricate user behavior patterns, they rely on vast amounts of data and do not perform as well when data is limited. We propose XDBoost, a new DL method for capturing complex patterns that requires only a limited amount of raw data. XDBoost is an iterative three-stage neural network model influenced by the traditional machine learning boosting mechanism. XDBoost's components operate sequentially, much like boosting; however, unlike conventional boosting, XDBoost does not sum the predictions generated by its components. Instead, it uses these predictions as new artificial features and enhances CTR prediction by retraining the model with them. Comprehensive experiments on two datasets demonstrate XDBoost's ability to outperform existing state-of-the-art (SOTA) models for CTR prediction.
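The described training loop, where each stage's predictions are appended as artificial features before the next stage is trained, can be sketched as follows. The stage count, sklearn-style model interface, and feature handling are assumptions for illustration; the paper defines the actual three-stage neural architecture.

    # Hypothetical sketch of XDBoost-style iterative retraining (not the authors' code).
    # Each stage's predicted CTR is appended to the features as a new artificial
    # feature, and the next stage is trained on the augmented input.
    import numpy as np

    def train_iterative(make_model, X, y, n_stages=3):
        stages, X_aug = [], X
        for _ in range(n_stages):
            model = make_model(X_aug.shape[1])
            model.fit(X_aug, y)
            preds = model.predict_proba(X_aug)[:, 1]    # this stage's CTR estimate
            X_aug = np.hstack([X_aug, preds[:, None]])  # append as artificial feature
            stages.append(model)
        return stages

    def predict_iterative(stages, X):
        X_aug = X
        for model in stages:
            preds = model.predict_proba(X_aug)[:, 1]
            X_aug = np.hstack([X_aug, preds[:, None]])
        return preds  # the final stage's prediction

Note how this differs from classical boosting: the stage outputs are never summed; only the last stage's prediction is returned, with earlier predictions acting purely as extra input features.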


Explainable Artificial Intelligence: a Systematic Review

arXiv.org Artificial Intelligence

The growing demand for explanations of machine learning (ML) models has led to the development of a plethora of domain-dependent and context-specific methods for interpreting models and forming explanations for humans. This trend is far from over, and the abundance of knowledge in the field is scattered and needs organisation. The goal of this article is to systematically review research works in the field of XAI and to try to define some boundaries in the field. From several hundred research articles focused on the concept of explainability, about 350 were selected for review using the following search methodology. In the first phase, Google Scholar was queried to find papers related to "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". Subsequently, the bibliographic sections of these articles were thoroughly examined to retrieve further relevant scientific studies. The first noticeable thing, as shown in figure 2 (a), is the distribution of the publication dates of the selected research articles: sporadic in the 1970s and 1980s, receiving preliminary attention in the 1990s, showing rising interest in the 2000s, and becoming a recognised body of knowledge after 2010. The earliest research concerned the development of an explanation-based system and its integration into a computer program designed to help doctors make diagnoses [3]. Some of the more recent papers focus on clustering methods for explainability, motivating the need to organise the XAI literature [4, 5, 6].


Parallelizing Machine Learning as a Service for the End-User

arXiv.org Artificial Intelligence

As ML applications become ever more pervasive, fully trained systems are made increasingly available to a wide public, allowing end-users to submit queries with their own data and to efficiently retrieve results. As such services grow more sophisticated, a new challenge is how to scale up to ever-growing user bases. In this paper, we present a distributed architecture that can be exploited to parallelize a typical ML system pipeline. We propose a case study consisting of a text mining service and discuss how the method can be generalized to many similar applications. We demonstrate the significance of the computational gains delivered by the distributed architecture by way of an extensive experimental evaluation.
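The paper describes the architecture abstractly; since user queries to a deployed service are independent of one another, the pipeline maps naturally onto a worker pool. A minimal sketch of that idea follows, where the function names and the stand-in pipeline are illustrative assumptions, not the paper's system.

    # Hypothetical sketch: parallelizing a text-mining service across processes.
    # Each query is independent, so the pipeline maps cleanly onto a worker pool.
    from multiprocessing import Pool
    from collections import Counter

    def text_mining_pipeline(query_text: str) -> dict:
        # Stand-in for a fully trained ML pipeline: tokenize and count terms.
        tokens = query_text.lower().split()
        return {"query": query_text, "top_terms": Counter(tokens).most_common(3)}

    def serve_queries(queries, n_workers=4):
        with Pool(processes=n_workers) as pool:
            return pool.map(text_mining_pipeline, queries)

    if __name__ == "__main__":
        results = serve_queries(["what is explainable AI",
                                 "privacy policy GDPR text"])
        print(results)

In a real deployment the pool would be replaced by distributed workers behind a queue, but the structure, fan out independent queries, run the same trained pipeline on each, and gather results, is the same.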