Machine Learning is being used in countless applications today. It is a natural fit in domains where no single algorithm works perfectly and where a model must predict the right output for large amounts of unseen data. Unlike traditional algorithmic problems, where we expect exact optimal answers, machine learning applications can tolerate approximate answers. Deep Learning with neural networks has been the dominant methodology for training new machine learning models over the past decade. Its rise to prominence is often attributed to the ImageNet competition in 2012.
For more than 30 years, Geoffrey Hinton hovered at the edges of artificial intelligence research, an outsider clinging to a simple proposition: that computers could think like humans do--using intuition rather than rules. The idea had taken root in Hinton as a teenager when a friend described how a hologram works: innumerable beams of light bouncing off an object are recorded, and then those many representations are scattered over a huge database. Hinton, who comes from a somewhat eccentric, generations-deep family of overachieving scientists, immediately understood that the human brain worked like that, too--information in our brains is spread across a vast network of cells, linked by an endless map of neurons, firing and connecting and transmitting along a billion paths. He wondered: could a computer behave the same way? The answer, according to the academic mainstream, was a deafening no. Computers learned best by rules and logic, they said. And besides, Hinton's notion, called neural networks--which later became the groundwork for what is now called "deep learning"--had already been disproven. In the late '50s, a Cornell scientist named Frank Rosenblatt had proposed the world's first neural network machine. It was called the Perceptron, and it had a simple objective--to recognize images. The goal was to show it a picture of an apple, and it would, at least in theory, spit out "apple." The Perceptron ran on an IBM mainframe, and it was ugly.
The emergence of and continued reliance on the Internet and related technologies has resulted in the generation of large amounts of data that can be made available for analysis. However, humans do not possess the cognitive capabilities to understand such large amounts of data. Machine learning (ML) provides a mechanism for humans to process large amounts of data, gain insights about the behavior of the data, and make more informed decisions based on the resulting analysis. ML has applications in various fields. This review focuses on some of these fields and applications, such as education, healthcare, network security, banking and finance, and social media. Within these fields, there are multiple unique challenges. However, ML can provide solutions to these challenges, as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields and presents some of the prior work that has tackled them. Moreover, it suggests several research opportunities that benefit from the use of ML to address these challenges.
Abstract--In this paper, several Collaborative Filtering (CF) approaches with latent variable methods were studied using user-item interactions to capture important hidden variations of the sparse customer purchasing behaviors. The latent factors are used to generalize the purchasing pattern of the customers and to provide product recommendations. CF with Neural Collaborative Filtering (NCF) was shown to produce the highest Normalized Discounted Cumulative Gain (NDCG) performance on the real-world proprietary dataset provided by a large parts supply company. Different hyperparameters were tested using Bayesian Optimization (BO) for applicability in the CF framework. External data sources like click-data and metrics like Clickthrough Rate (CTR) were reviewed for potential extensions to the work presented. The work shown in this paper provides techniques the Company can use to provide product recommendations to enhance revenues, attract new customers, and gain advantages over competitors.
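The latent-factor idea behind this abstract can be sketched with plain matrix factorization, the simplest CF latent-variable method (the paper's NCF replaces the dot product with a neural network), together with the NDCG metric it evaluates against. This is a minimal illustrative sketch; the function names and hyperparameters are invented here, not taken from the paper:

```python
import numpy as np

def factorize(ratings, k=2, lr=0.01, reg=0.05, epochs=300, seed=0):
    """Plain matrix factorization: approximate the sparse user-item
    matrix R as P @ Q.T, learning latent factors by SGD over the
    observed (non-zero) entries only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    observed = np.argwhere(ratings > 0)
    for _ in range(epochs):
        for u, i in observed:
            err = ratings[u, i] - P[u] @ Q[i]
            pu = P[u].copy()  # use the pre-update value in both gradients
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

def ndcg_at_k(ranked_relevance, k):
    """NDCG@k: DCG of the predicted ranking divided by the DCG of the
    ideal (relevance-sorted) ranking of the same items."""
    rel = np.asarray(ranked_relevance, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (rel[:k] * discounts[: len(rel[:k])]).sum()
    ideal = np.sort(rel)[::-1][:k]
    idcg = (ideal * discounts[: len(ideal)]).sum()
    return dcg / idcg if idcg > 0 else 0.0
```

Ranking the unobserved items of each user by `P[u] @ Q.T` yields recommendations, and `ndcg_at_k` rewards placing highly relevant items near the top of that list.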
Recommender systems trained in a continuous learning fashion are plagued by the feedback loop problem, also known as algorithmic bias. This causes a newly trained model to act greedily and favor items that have already been engaged by users. This behavior is particularly harmful in personalised ads recommendations, as it can also cause new campaigns to remain unexplored. Exploration aims to address this limitation by providing new information about the environment, which encompasses user preference, and can lead to higher long-term reward. In this work, we formulate a display advertising recommender as a contextual bandit and implement exploration techniques that require sampling from the posterior distribution of click-through rates in a computationally tractable manner. Traditional large-scale deep learning models do not provide uncertainty estimates by default. We approximate these uncertainty measurements of the predictions by employing a bootstrapped model with multiple heads and dropout units. We benchmark a number of different models in an offline simulation environment using a publicly available dataset of user-ads engagements. We test our proposed deep Bayesian bandits algorithm in the offline simulation and online A/B setting with large-scale production traffic, where we demonstrate a positive gain of our exploration model.
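The bootstrapped multi-head idea can be illustrated with a toy tabular analogue: each "head" maintains an independent CTR estimate, acting greedily on a randomly sampled head approximates Thompson sampling from the posterior, and feeding each observation to only a random subset of heads keeps the heads diverse. This is a hedged sketch of the exploration mechanism, not the paper's deep model; the class and parameter names are invented for illustration:

```python
import numpy as np

class BootstrappedCTRBandit:
    """Toy tabular analogue of a multi-head bootstrapped bandit: each
    arm (ad) keeps several independent click/view counters ("heads").
    Each round we sample one head per arm and act greedily on the
    sampled CTR estimates, approximating Thompson sampling."""

    def __init__(self, n_arms, n_heads=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.clicks = np.ones((n_arms, n_heads))   # optimistic prior
        self.views = 2.0 * np.ones((n_arms, n_heads))

    def select(self):
        n_arms, n_heads = self.clicks.shape
        heads = self.rng.integers(n_heads, size=n_arms)
        ctr = (self.clicks[np.arange(n_arms), heads]
               / self.views[np.arange(n_arms), heads])
        return int(np.argmax(ctr))

    def update(self, arm, clicked):
        # Online bootstrap: each head sees this observation with
        # probability 0.5, so the spread across heads reflects
        # uncertainty about the arm's true CTR.
        mask = self.rng.random(self.clicks.shape[1]) < 0.5
        self.views[arm, mask] += 1.0
        self.clicks[arm, mask] += clicked
```

Arms with few observations keep widely disagreeing heads, so they are still occasionally chosen; this is how exploration prevents new campaigns from starving under a greedy policy.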
Recently, neural networks have been widely used in e-commerce recommender systems, owing to the rapid development of deep learning. We formalize the recommender system as a sequential recommendation problem, intending to predict the next items with which the user might interact. Recent works usually derive an overall embedding from a user's behavior sequence. However, a unified user embedding cannot reflect the user's multiple interests during a period. In this paper, we propose a novel controllable multi-interest framework for sequential recommendation, called ComiRec. Our multi-interest module captures multiple interests from user behavior sequences, which can be exploited for retrieving candidate items from the large-scale item pool. These items are then fed into an aggregation module to obtain the overall recommendation. The aggregation module leverages a controllable factor to balance recommendation accuracy and diversity. We conduct experiments on sequential recommendation with two real-world datasets, Amazon and Taobao. Experimental results demonstrate that our framework achieves significant improvements over state-of-the-art models. Our framework has also been successfully deployed on the offline Alibaba distributed cloud platform.
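The controllable accuracy/diversity trade-off can be sketched as a greedy aggregation over candidate scores, in the spirit of ComiRec's aggregation module, which greedily maximizes relevance plus a lambda-weighted diversity term over item categories. The function below is an illustrative sketch with invented names, not the paper's implementation:

```python
def aggregate(scores, item_cats, k, lam=0.5):
    """Greedy controllable aggregation: repeatedly pick the candidate
    that maximizes its relevance score plus lam times the diversity
    gain of adding it, where the gain counts already-selected items
    from a different category. lam=0 ranks purely by accuracy; a
    larger lam trades accuracy for category diversity."""
    selected = []
    candidates = list(range(len(scores)))
    while candidates and len(selected) < k:
        def marginal_value(i):
            gain = sum(item_cats[i] != item_cats[j] for j in selected)
            return scores[i] + lam * gain
        best = max(candidates, key=marginal_value)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam=0` the top-k items by score are returned; raising `lam` lets a lower-scored item from an under-represented category displace a near-duplicate of something already selected.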
Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods. The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years--as well as additional data, including local weather, traffic conditions, and competitors' actions--the company cut the number of errors by three-quarters. It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.
The Canadian artificial intelligence (AI) industry has been growing fast, and the country has been aiming for more through massive AI research. There are signs all over that Canada already has an AI-driven digital economy, as cities emerge as hubs for AI labs and deep learning research. There is an increase in the number of AI startups in cities such as Montreal, Vancouver, and Toronto, among others. Canada has become a breeding ground for AI innovations. Amazon.com Inc. (NASDAQ: AMZN), Intel Corp (NASDAQ: INTC), and Uber Technologies (NYSE: UBER) have invested significantly in AI research in the country.
Artificial intelligence has to go in new directions if it's to realize the machine equivalent of common sense, and three of its most prominent proponents are in violent agreement about exactly how to do that. Yoshua Bengio of Canada's MILA institute, Geoffrey Hinton of the University of Toronto, and Yann LeCun of Facebook, who have called themselves co-conspirators in the revival of the once-moribund field of "deep learning," took the stage Sunday night at the Hilton hotel in midtown Manhattan for the 34th annual conference of the Association for the Advancement of Artificial Intelligence. The three, who were dubbed the "godfathers" of deep learning by the conference, were being honored for having received last year's Turing Award for lifetime achievements in computing. Each of the three scientists got a half-hour to talk, and each one acknowledged numerous shortcomings in deep learning, things such as "adversarial examples," where an object recognition system can be tricked into misidentifying an object just by adding noise to a picture. "There's been a lot of talk of the negatives about deep learning," LeCun noted.