

Silicon Valley May Never Learn Its Lesson

The Atlantic - Technology

Over and over during Sam Bankman-Fried's trial, lawyers showed pictures of the FTX founder living his best life. There he was at the Super Bowl flanked by Katy Perry and Orlando Bloom. There he was on a private jet, sleeping with his hands folded. There he was onstage, in shorts and a T-shirt, with Bill Clinton and Tony Blair. The very traits that made him a cause célèbre in Silicon Valley--his intellect, his obsession with scale, his story--turned into liabilities.


This is why AI has yet to reshape most businesses

#artificialintelligence

The art of making perfumes and colognes hasn't changed much since the 1880s, when synthetic ingredients began to be used. Expert fragrance creators tinker with combinations of chemicals in hopes of producing compelling new scents. So Achim Daub, an executive at one of the world's biggest makers of fragrances, Symrise, wondered what would happen if he injected artificial intelligence into the process. Would a machine suggest appealing formulas that a human might not think to try? Daub hired IBM to design a computer system that would pore over massive amounts of information--the formulas of existing fragrances, consumer data, regulatory information, on and on--and then suggest new formulations for particular markets. The system is called Philyra, after the Greek goddess of fragrance.


Forget about Chanel No. 5. IBM is now making perfume using AI.

#artificialintelligence

The creation of a perfume is often treated as a bespoke art. The French pride themselves on centuries in the olfactory business, and professional scent masters -- often referred to as "noses" -- spend decades learning the craft, apprenticing under masters. Giant cosmetic companies such as Coty and Estée Lauder write huge checks to storied fragrance agencies, which will employ meticulous perfume chemists, scrupulous in the art of aromachology. A common theme here is that the skill of developing a fragrance is extremely valuable -- and extremely human. Scent is, after all, the sense that science says has the strongest ability to evoke memories, or trigger emotions and moods.


Selecting Near-Optimal Learners via Incremental Data Allocation

Sabharwal, Ashish (Allen Institute for AI) | Samulowitz, Horst (IBM T. J. Watson Research Center) | Tesauro, Gerald (IBM T. J. Watson Research Center)

AAAI Conferences

We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper-parameters. Inspired by the principle of "optimism under uncertainty," we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on $n$ samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.
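The abstract describes DAUB only at a high level. As a rough illustration, here is a minimal Python sketch of the idea in the paper's idealized setting, where the expected accuracy of a learner trained on n samples is assumed known exactly. The learner names, the power-law learning curves, and the linear-extrapolation upper bound below are simplified stand-ins invented for this sketch, not the paper's exact procedure or bounds.

```python
# Hedged sketch of the DAUB idea: repeatedly give a larger data allocation
# to whichever learner has the highest *projected* full-data accuracy.
# Here each "learner" is just a known accuracy curve f(n), matching the
# paper's idealized analysis setting; all names and curves are invented.

def daub_select(curves, N, start=64, growth=2.0):
    """Sequentially allocate data; return (selected learner, samples spent).

    curves: dict mapping learner name -> f(n), expected accuracy at n samples.
    N: full training-set size.
    """
    alloc = {name: start for name in curves}   # next allocation per learner
    history = {name: [] for name in curves}    # observed (n, accuracy) pairs
    spent = 0

    def upper_bound(name):
        pts = history[name]
        if len(pts) < 2:
            return 1.0                         # optimism under uncertainty
        (n1, a1), (n2, a2) = pts[-2], pts[-1]
        slope = max((a2 - a1) / (n2 - n1), 0.0)  # accuracy assumed monotone
        return min(a2 + slope * (N - n2), 1.0)   # extrapolate to full data

    while True:
        name = max(curves, key=upper_bound)    # most promising learner
        n = min(alloc[name], N)
        history[name].append((n, curves[name](n)))  # "train" on n samples
        spent += n
        if n >= N:
            return name, spent                 # received all data: select it
        alloc[name] = int(n * growth)          # geometric allocation growth

# Illustrative learning curves (invented): learner_A starts weak but has the
# best full-data accuracy; learner_C starts strong but plateaus low.
curves = {
    "learner_A": lambda n: 0.9 - 2.0 / n ** 0.5,
    "learner_B": lambda n: 0.8 - 0.5 / n ** 0.5,
    "learner_C": lambda n: 0.7 - 0.1 / n ** 0.5,
}
best, cost = daub_select(curves, N=10000)
```

With these curves the procedure initially spreads data across all three learners (each starts with an optimistic bound of 1.0), then concentrates its budget on learner_A once the weaker learners' extrapolated bounds collapse, which is the "optimism under uncertainty" behavior the abstract describes; the regret corresponds to the samples spent on learners other than the one finally selected.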

