Optimization


La veille de la cybersécurité

#artificialintelligence

Getting the software right is important when developing machine learning models, such as recommendation or classification systems. But at eBay, optimizing the software to run on a particular piece of hardware using distillation and quantization techniques was absolutely essential to ensure scalability. "[I]n order to build a truly global marketplace that is driven by state of the art and powerful and scalable AI services," Kopru said, "you have to do a lot of optimizations after model training, and specifically for the target hardware." With 1.5 billion active listings from more than 19 million active sellers trying to reach 159 million active buyers, the e-commerce giant has a global reach that is matched by only a handful of firms. Machine learning and other AI techniques, such as natural language processing (NLP), play big roles in scaling eBay's operations to reach its massive audience. For instance, automatically generated descriptions of product listings are crucial for displaying information on the small screens of smartphones, Kopru said.
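
To make the quantization idea concrete, here is a minimal sketch of post-training dynamic quantization with PyTorch. It illustrates the general technique the article mentions, not eBay's actual pipeline; the toy model and sizes are invented for the example.

```python
# Minimal post-training dynamic quantization sketch (illustrative only).
import torch
import torch.nn as nn

# Hypothetical float32 model standing in for a trained ranking/classification model.
model_fp32 = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)
model_fp32.eval()

# Convert Linear layers to int8 weights; activations are quantized dynamically at runtime.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

# The quantized model keeps the same interface but uses less memory and often
# runs faster for CPU inference.
x = torch.randn(1, 128)
print(model_int8(x))
```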


Complete Step-by-Step Gradient Descent Algorithm from Scratch

#artificialintelligence

If you've been studying machine learning long enough, you've probably heard terms such as SGD or Adam. They are two of many optimization algorithms. Optimization algorithms are the heart of machine learning: they do the intricate work that lets machine learning models learn from data. It turns out that optimization has been around for a long time, even outside of the machine learning realm. Investors, for example, seek to create portfolios that avoid excessive risk while achieving a high rate of return.
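
As a taste of what "from scratch" looks like, here is a minimal gradient descent sketch for simple linear regression. The data, learning rate, and step count are made up for illustration; it shows the core loop behind optimizers such as SGD.

```python
# From-scratch gradient descent for linear regression (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=100)  # true w=3, b=2

w, b = 0.0, 0.0
lr = 0.1
for step in range(200):
    y_hat = w * X[:, 0] + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(error * X[:, 0])
    grad_b = 2.0 * np.mean(error)
    # Step against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should end up close to 3 and 2
```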


Why Silicon Valley's Optimization Mindset Sets Us Up for Failure

TIME - Tech

In 2013 a Silicon Valley software engineer decided that food is an inconvenience--a pain point in a busy life. Buying food, preparing it, and cleaning up afterwards struck him as an inefficient way to feed himself. And so was born the idea of Soylent, Rob Rhinehart's meal replacement powder, described on its website as an International Complete Nutrition Platform. Soylent is the logical result of an engineer's approach to the "problem" of feeding oneself with food: there must be a more optimal solution. It's not hard to sense the trouble with this crushingly instrumental approach to nutrition.


Fast AutoML with FLAML + Ray Tune - KDnuggets

#artificialintelligence

FLAML is a lightweight Python library from Microsoft Research that finds accurate machine learning models in an efficient and economical way, using cutting-edge algorithms designed to be resource-efficient and easily parallelizable. FLAML can also utilize Ray Tune for distributed hyperparameter tuning to scale these AutoML methods across a cluster. AutoML is known to be a resource- and time-consuming operation, as it involves trial and error to find a hyperparameter configuration with good performance. Since the space of possible configuration values is often very large, there is a need for an economical AutoML method that can search it more effectively. To address both of these factors, Microsoft researchers developed FLAML (Fast Lightweight AutoML).
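
A minimal sketch of what running FLAML's AutoML looks like, assuming flaml and scikit-learn are installed; the dataset and 60-second time budget are arbitrary choices for illustration, not a recommendation.

```python
# Tiny FLAML AutoML example on a toy classification task (illustrative only).
from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
# Search learners and hyperparameters under a 60-second budget.
automl.fit(X_train, y_train, task="classification", time_budget=60)

print(automl.best_estimator)                       # e.g. "lgbm"
print(accuracy_score(y_test, automl.predict(X_test)))
```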


Artificial Intelligence and Its Application in Optimization under Uncertainty

#artificialintelligence

Nowadays, the increase in data acquisition and availability, together with the growing complexity of optimization problems, makes it imperative to use artificial intelligence (AI) and optimization jointly to devise data-driven, intelligent decision support systems (DSS). A DSS can be successful if it processes large amounts of interactive data quickly and robustly and extracts useful information and knowledge to support decision-making. In this context, the data-driven approach has gained prominence because it provides insights for decision-making and is easy to implement. It can discover various patterns in databases without relying on prior knowledge, while also handling flexible objectives and multiple scenarios. This chapter reviews recent advances in data-driven optimization, highlighting the promise of approaches that integrate mathematical programming and machine learning (ML) for decision-making under uncertainty, and identifies potential research opportunities. It provides guidelines and implications for researchers, managers, and practitioners in operations research who want to advance their decision-making capabilities under uncertainty using data-driven optimization. A comprehensive review and classification of the relevant publications on data-driven stochastic programming, data-driven robust optimization, and data-driven chance-constrained programming is then presented. The chapter also identifies fertile avenues for future research on deep data-driven optimization, deep data-driven models, and online learning-based data-driven optimization. Perspectives on reinforcement learning (RL)-based data-driven optimization and deep RL for solving NP-hard problems are discussed. We investigate the application of data-driven optimization in different case studies to demonstrate improvements in operational performance over conventional optimization methodology. Finally, some managerial implications and future directions are provided.
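
To give a flavor of the data-driven (scenario-based) idea the chapter surveys, here is a small sample average approximation sketch for a newsvendor-style problem, where the unknown demand distribution is replaced by observed samples. The problem, prices, and data are hypothetical and chosen only to illustrate the approach, not taken from the chapter's case studies.

```python
# Sample average approximation of a newsvendor problem (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
demand_samples = rng.poisson(lam=50, size=1000)  # stand-in for historical demand data

price, cost = 10.0, 6.0  # unit selling price and unit purchase cost

def sample_average_profit(q, demand):
    # Average profit over the observed demand scenarios for order quantity q.
    sales = np.minimum(q, demand)
    return np.mean(price * sales - cost * q)

# Data-driven decision: pick the order quantity that maximizes the sample average.
candidates = np.arange(0, 101)
profits = [sample_average_profit(q, demand_samples) for q in candidates]
q_star = candidates[int(np.argmax(profits))]
print(f"data-driven order quantity: {q_star}")
```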


Run:AI integrates GPU optimization tool with MLOps platforms

#artificialintelligence

Run:AI today announced it has added support for both MLflow, an open source tool for managing the lifecycle of machine learning algorithms, and Kubeflow, an open source framework for machine learning operations (MLOps) deployed on Kubernetes clusters, to its namesake tool for graphics processing unit (GPU) resource optimization. The company also revealed that it has added support for Apache Airflow, open source software that can be employed to programmatically create, schedule, and monitor workflows. The overall goal is to enable GPU optimization, as well as training of AI models, from within an MLOps platform, Run:AI CEO Omri Geller told VentureBeat. "It can be managed more end-to-end," he said.


Lagrange Multiplier Approach with Inequality Constraints

#artificialintelligence

In a previous post, we introduced the method of Lagrange multipliers to find local minima or local maxima of a function with equality constraints. The same method can be applied to those with inequality constraints as well. In this tutorial, you will discover the method of Lagrange multipliers applied to find the local minimum or maximum of a function when inequality constraints are present, optionally together with equality constraints.
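
As a hedged numerical sketch of the idea, here is a small inequality-constrained minimization solved with SciPy's SLSQP solver. The objective and constraint are invented for illustration (they are not the tutorial's own example), and the solver stands in for working the KKT conditions out by hand.

```python
# Minimize f(x, y) = x^2 + y^2 subject to x + y >= 1 (illustrative only).
import numpy as np
from scipy.optimize import minimize

objective = lambda v: v[0] ** 2 + v[1] ** 2
constraint = {"type": "ineq", "fun": lambda v: v[0] + v[1] - 1.0}  # g(v) >= 0

result = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=[constraint])
print(result.x)  # close to [0.5, 0.5], where the inequality constraint is active
```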


Optimization Case Study: Defining the problem -- Part 1

#artificialintelligence

This is a two-part case study where we define the optimization problem in part one, and we use the PuLP Python library as a tool to solve the business problem in part two. Optimization problems are the dilemma of any business. They come in two forms of decision-making: either maximization or minimization, be it of profit or cost. A typical optimization problem can be solved with an optimization method, which is itself mathematical. Therefore, we need to represent our component definitions above mathematically.
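
As a preview of the tooling, here is a minimal sketch of formulating and solving a small profit-maximization linear program with PuLP. The products, coefficients, and constraints are invented for illustration; they are not the case study's actual business problem.

```python
# Tiny profit-maximization LP with PuLP (illustrative only).
from pulp import LpProblem, LpMaximize, LpVariable, LpStatus, value

# Decision variables: units of two hypothetical products to produce.
x = LpVariable("product_a", lowBound=0)
y = LpVariable("product_b", lowBound=0)

prob = LpProblem("profit_maximization", LpMaximize)
prob += 20 * x + 30 * y, "total_profit"          # objective: maximize profit
prob += 2 * x + 4 * y <= 100, "machine_hours"    # resource constraint
prob += x + y <= 40, "labour_hours"              # resource constraint

prob.solve()
print(LpStatus[prob.status], value(x), value(y), value(prob.objective))
```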