Artificial Intelligence cuts down packaging issues for Amazon - Cybersecurity Insiders

#artificialintelligence

From January 3rd, 2022, Amazon will be solving most of its packaging issues with the help of AI-based machine learning tools. In practice, the Jeff Bezos-led company will combine computer vision and natural language processing to 'guesstimate' the right amount of packaging required to pack the millions of products it ships to its customers. According to an update released to the media, Amazon said that the use of AI tech has reduced packaging consumption per shipment by over 33%, which corresponds to some 3 million tons of packaging used to prepare over 2 billion boxes of different sizes. Since 2019, Amazon has been testing the machine learning model of packaging in its facilities across the United States and announced that it was fully satisfied with the results. To achieve its vision, the American retail giant had to upgrade its packaging and distribution tunnels with software-driven cameras and sensors.


Forecasting: theory and practice

arXiv.org Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, along with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.


Evolution of Cresta's machine learning architecture: Migration to AWS and PyTorch

#artificialintelligence

Cresta Intelligence, a California-based AI startup, makes businesses radically more productive by using Expertise AI to help sales and service teams unlock their full potential. Cresta brings together world-renowned AI thought leaders, engineers, and investors to create a real-time coaching and management solution that transforms sales effectiveness and service productivity within weeks of deployment. Cresta enables customers such as Intuit, Cox Communications, and Porsche to realize a 20% improvement in sales conversion rate, 25% greater average order value, and millions of dollars in additional annual revenue. This post discusses Cresta's journey as they moved from a multi-cloud environment to consolidating their machine learning (ML) workloads on AWS. It also gives a high-level view of their legacy and current training and inference architectures.


Utilizing Textual Reviews in Latent Factor Models for Recommender Systems

arXiv.org Machine Learning

Most existing recommender systems are based only on rating data, and they ignore other sources of information that might increase the quality of recommendations, such as textual reviews or user and item characteristics. Moreover, the majority of those systems are applicable only to small datasets (with thousands of observations) and are unable to handle large datasets (with millions of observations). We propose a recommender algorithm that combines a rating modelling technique (i.e., a Latent Factor Model) with a topic modelling method based on textual reviews (i.e., Latent Dirichlet Allocation), and we extend the algorithm so that it allows adding extra user- and item-specific information to the system. We evaluate the performance of the algorithm using Amazon.com datasets of different sizes, corresponding to 23 product categories. After comparing the built model to four other models, we found that combining textual reviews with ratings leads to better recommendations. Moreover, we found that adding extra user and item features to the model increases its prediction accuracy, which is especially true for medium and large datasets.
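
For illustration, here is a minimal sketch of the general idea of fusing a latent factor model with LDA topics extracted from review text. The toy data, the topic-to-factor coupling term, and every hyperparameter below are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch: latent factor model whose item factors are nudged toward
# a linear map of LDA topic proportions learned from review text (hypothetical data).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

rng = np.random.default_rng(0)

# Toy (user, item, rating, review_text) interactions.
interactions = [
    (0, 0, 5.0, "great battery life and fast shipping"),
    (0, 1, 2.0, "screen cracked after a week, poor quality"),
    (1, 0, 4.0, "battery lasts long, sturdy build"),
    (1, 2, 5.0, "excellent sound quality, very happy"),
    (2, 1, 1.0, "broke quickly, bad quality control"),
    (2, 2, 4.0, "good sound, decent price"),
]
n_users, n_items, k, n_topics = 3, 3, 4, 2

# Topic-model the per-item review text with LDA.
item_docs = ["" for _ in range(n_items)]
for _, i, _, text in interactions:
    item_docs[i] += " " + text
counts = CountVectorizer().fit_transform(item_docs)
item_topics = LatentDirichletAllocation(n_components=n_topics,
                                        random_state=0).fit_transform(counts)

# Latent factor model trained with SGD; item factors are regularised toward a
# linear map of their LDA topic proportions (one simple way to fuse the two views).
P = 0.1 * rng.standard_normal((n_users, k))        # user factors
Q = 0.1 * rng.standard_normal((n_items, k))        # item factors
W = 0.1 * rng.standard_normal((n_topics, k))       # topic -> factor map
mu = np.mean([r for _, _, r, _ in interactions])   # global rating mean
lr, reg, lam_topic = 0.05, 0.05, 0.1

for epoch in range(200):
    for u, i, r, _ in interactions:
        err = r - (mu + P[u] @ Q[i])
        topic_target = item_topics[i] @ W
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i] - lam_topic * (Q[i] - topic_target))
        W += lr * lam_topic * np.outer(item_topics[i], Q[i] - topic_target)

print("predicted rating for user 0, item 2:", mu + P[0] @ Q[2])
```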


End-to-End Conversational Search for Online Shopping with Utterance Transfer

arXiv.org Artificial Intelligence

Successful conversational search systems can present a natural, adaptive and interactive shopping experience for online shopping customers. However, building such systems from scratch faces real-world challenges from both imperfect product schema/knowledge and a lack of training dialog data. In this work we first propose ConvSearch, an end-to-end conversational search system that deeply combines the dialog system with search. It leverages the text profile to retrieve products, which is more robust against imperfect product schema/knowledge compared with using product attributes alone. We then address the lack-of-data challenge by proposing an utterance transfer approach that generates dialogue utterances by using existing dialogs from other domains and leveraging search behavior data from an e-commerce retailer. With utterance transfer, we introduce a new conversational search dataset for online shopping. Experiments show that our utterance transfer method can significantly improve the availability of training dialogue data without crowd-sourcing, and that the conversational search system significantly outperforms the best tested baseline.
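
As a rough illustration of the text-profile retrieval idea (not the ConvSearch system itself), the sketch below ranks hypothetical product profiles against the accumulated dialogue using TF-IDF cosine similarity; the catalogue entries and the scoring choice are assumptions made for the example.

```python
# Illustrative sketch: retrieve products from free-text profiles given the
# dialogue so far, using TF-IDF + cosine similarity (hypothetical catalogue).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Text profiles standing in for imperfect or partial product schemas.
products = {
    "P1": "wireless over-ear headphones with noise cancelling, 30 hour battery",
    "P2": "budget wired earbuds with microphone, tangle-free cable",
    "P3": "portable bluetooth speaker, waterproof, deep bass",
}

vectorizer = TfidfVectorizer()
profile_matrix = vectorizer.fit_transform(products.values())

def retrieve(dialogue_utterances, top_k=2):
    """Rank products against the concatenated dialogue so far."""
    query = " ".join(dialogue_utterances)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, profile_matrix).ravel()
    ranked = sorted(zip(products.keys(), scores), key=lambda x: -x[1])
    return ranked[:top_k]

# The dialogue state here is simply the list of user utterances across turns.
dialogue = ["I'm looking for headphones", "something wireless with long battery life"]
print(retrieve(dialogue))
```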


Did the Model Change? Efficiently Assessing Machine Learning API Shifts

arXiv.org Artificial Intelligence

Machine learning (ML) prediction APIs are increasingly widely used. An ML API can change over time due to model updates or retraining. This presents a key challenge in the usage of the API because it is often not clear to the user if and how the ML model has changed. Model shifts can affect downstream application performance and also create oversight issues (e.g. if consistency is desired). In this paper, we initiate a systematic investigation of ML API shifts. We first quantify the performance shifts from 2020 to 2021 of popular ML APIs from Google, Microsoft, Amazon, and others on a variety of datasets. We identified significant model shifts in 12 out of 36 cases we investigated. Interestingly, we found several datasets where the API's predictions became significantly worse over time. This motivated us to formulate the API shift assessment problem at a more fine-grained level as estimating how the API model's confusion matrix changes over time when the data distribution is constant. Monitoring confusion matrix shifts using standard random sampling can require a large number of samples, which is expensive as each API call costs a fee. We propose a principled adaptive sampling algorithm, MASA, to efficiently estimate confusion matrix shifts. MASA can accurately estimate the confusion matrix shifts in commercial ML APIs using up to 90% fewer samples compared to random sampling. This work establishes ML API shifts as an important problem to study and provides a cost-effective approach to monitor such shifts.
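
To make the monitoring setup concrete, here is a minimal sketch of the random-sampling baseline the abstract mentions: query two simulated "API versions" on a labelled sample and difference their empirical confusion matrices. The simulated API functions and the class set are hypothetical, and this is not the MASA algorithm, which allocates samples adaptively to cut the number of paid API calls.

```python
# Illustrative sketch: estimate a confusion-matrix shift between two API
# versions by random sampling (baseline only; the paper's MASA is adaptive).
import numpy as np

rng = np.random.default_rng(0)
classes = [0, 1, 2]

def api_2020(x):
    # Simulated old model: mostly correct, with a small uniform error rate.
    return x if rng.random() > 0.1 else rng.choice(classes)

def api_2021(x):
    # Simulated updated model: noticeably worse on class 2, mimicking the kind
    # of regression the paper reports for some commercial APIs.
    if x == 2 and rng.random() < 0.3:
        return 1
    return x if rng.random() > 0.1 else rng.choice(classes)

def confusion_matrix(api, labels):
    # Empirical joint frequency of (true label, predicted label) over the sample.
    C = np.zeros((len(classes), len(classes)))
    for y in labels:
        C[y, api(y)] += 1
    return C / len(labels)

# Each call to a real API costs a fee, which is why reducing the sample size
# (as MASA does) matters in practice.
sample = rng.choice(classes, size=500)
shift = confusion_matrix(api_2021, sample) - confusion_matrix(api_2020, sample)
print("estimated confusion-matrix shift:\n", np.round(shift, 3))
```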


The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI

arXiv.org Artificial Intelligence

There is mounting public concern over the influence that AI-based systems have on our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart city stakeholders, to students protesting the academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could get locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.


Changing trends in Retail Industry -- using AI

#artificialintelligence

AI technology, which includes advanced analytics, deep learning, machine learning and other cognitive solutions, is driving a digital transformation toward more successful businesses in the retail market. Infinite Analysis was founded in 2012 with the intention of becoming the premier AI and personalization engine for retail and e-commerce search. By making use of natural language processing (NLP), machine learning and extensive predictive analytics, Infinite Analysis predicts user behaviour for retail and e-commerce applications. Standard Cognition is a software development company based in San Francisco. It uses artificial intelligence technology that enables consumers to buy and check out without waiting in line to scan or pay.


Markdowns in E-Commerce Fresh Retail: A Counterfactual Prediction and Multi-Period Optimization Approach

arXiv.org Artificial Intelligence

In this paper, by leveraging abundant observational transaction data, we propose a novel data-driven and interpretable pricing approach for markdowns, consisting of counterfactual prediction and multi-period price optimization. First, we build a semi-parametric structural model to learn individual price elasticity and predict counterfactual demand. This semi-parametric model takes advantage of both the predictability of a nonparametric machine learning model and the interpretability of an economic model. Second, we propose a multi-period dynamic pricing algorithm to maximize the overall profit of a perishable product over its finite selling horizon. Unlike traditional approaches that assume deterministic demand, we model the uncertainty of counterfactual demand, since the prediction process inevitably involves randomness. Based on the stochastic model, we derive a sequential pricing strategy via a Markov decision process and design a two-stage algorithm to solve it. The proposed algorithm is very efficient: it reduces the time complexity from exponential to polynomial. Experimental results show the advantages of our pricing algorithm, and the proposed framework has been successfully deployed to the well-known e-commerce fresh retail scenario - Freshippo.
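
As a toy illustration of the multi-period markdown setting (not the paper's algorithm), the sketch below solves a finite-horizon dynamic program over a small price grid with a hypothetical deterministic demand curve; the paper instead learns price elasticity from transaction data and explicitly models demand uncertainty.

```python
# Illustrative sketch: multi-period markdown pricing as a finite-horizon DP.
# The price grid, horizon, and expected_demand curve are hypothetical stand-ins
# for the paper's learned counterfactual demand model.
import numpy as np

prices = [10.0, 8.0, 6.0, 4.0]       # allowed markdown levels (non-increasing)
T = 3                                 # remaining selling periods
max_inventory = 10

def expected_demand(price):
    # Hypothetical elasticity curve: lower price, higher expected demand.
    return max(0.0, 12.0 - 1.2 * price)

# V[t][inv][p_idx] = best expected profit from period t with `inv` units left,
# given the lowest price offered so far is prices[p_idx] (markdowns only go down).
V = np.zeros((T + 1, max_inventory + 1, len(prices)))
policy = np.zeros((T, max_inventory + 1, len(prices)), dtype=int)

for t in range(T - 1, -1, -1):
    for inv in range(max_inventory + 1):
        for p_idx in range(len(prices)):
            best, best_a = -1.0, p_idx
            for a in range(p_idx, len(prices)):            # can only mark down further
                sold = min(inv, int(round(expected_demand(prices[a]))))
                value = prices[a] * sold + V[t + 1][inv - sold][a]
                if value > best:
                    best, best_a = value, a
            V[t][inv][p_idx] = best
            policy[t][inv][p_idx] = best_a

print("price to charge now with 10 units left:", prices[policy[0][10][0]])
print("expected profit over the horizon:", round(V[0][10][0], 2))
```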


MLSys 2021: Bridging the divide between machine learning and systems

#artificialintelligence

Amazon distinguished scientist and conference general chair Alex Smola on what makes MLSys unique -- both thematically and culturally. The Conference on Machine Learning and Systems (MLSys), which starts next week, is only four years old, but Amazon scientists already have a rich history of involvement with it. Amazon Scholar Michael I. Jordan is on the steering committee; vice president and distinguished scientist Inderjit Dhillon is on the board and was general chair last year; and vice president and distinguished scientist Alex Smola, who is also on the steering committee, is this year's general chair. As the deep-learning revolution spread, MLSys was founded to bridge two communities that had much to offer each other but that were often working independently: machine learning researchers and system developers. Registration for the conference is still open, with the very low fees of $25 for students and $100 for academics and professionals. "If you look at the big machine learning conferences, they mostly focus on, 'Okay, here's a cool algorithm, and here are the amazing things that it can do. And by the way, it now recognizes cats even better than before,'" Smola says. "They're conferences where people mostly show an increase in capability. At the same time, there are systems conferences, and they mostly care about file systems, databases, high availability, fault tolerance, and all of that. Now, why do you need something in-between? Well, because quite often in machine learning, approximate is good enough. You don't necessarily need such good guarantees from your systems."