

Mixture Density Networks for Classification with an Application to Product Bundling

Gugulothu, Narendhar, Bhat, Sanjay P., Bodas, Tejas

arXiv.org Artificial Intelligence

While mixture density networks (MDNs) have been extensively used for regression tasks, they have rarely been used for classification. One reason is that how MDNs can be applied to classification is neither clear nor straightforward. In this paper, we propose two MDN-based models for classification tasks. Both models fit mixtures of Gaussians to the data and classify a given sample by evaluating the learnt cumulative distribution function at the given input features. While the proposed MDN-based models perform slightly better than, or on par with, five baseline classification models on three publicly available datasets, the real utility of our models emerges in a real-world product bundling application. Specifically, we use our MDN-based models to learn the willingness-to-pay (WTP) distributions for two products from synthetic sales data of the individual products. The Gaussian mixture representation of the learnt WTP distributions is then exploited to obtain the WTP distribution of the bundle consisting of both products. The proposed MDN-based models approximate the true WTP distributions of both products and the bundle well.
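The abstract does not spell out how the bundle distribution is derived, but one standard way to exploit the Gaussian mixture representation is as follows (a minimal sketch, assuming the two products' WTPs are independent; the `bundle_mixture` helper and the toy numbers are illustrative, not from the paper): the sum of two independent mixture random variables is itself a Gaussian mixture, with one component per pair of input components, whose weights multiply and whose means and variances add.

```python
import numpy as np

def bundle_mixture(weights_a, means_a, vars_a, weights_b, means_b, vars_b):
    """Combine two independent 1-D Gaussian-mixture WTP distributions.

    For independent mixtures, the sum has one component per pair:
    weight w_i * w_j, mean mu_i + mu_j, variance var_i + var_j.
    """
    w, m, v = [], [], []
    for wa, ma, va in zip(weights_a, means_a, vars_a):
        for wb, mb, vb in zip(weights_b, means_b, vars_b):
            w.append(wa * wb)
            m.append(ma + mb)
            v.append(va + vb)
    return np.array(w), np.array(m), np.array(v)

# Toy WTP mixtures for two products (illustrative numbers only)
w_a, m_a, v_a = [0.6, 0.4], [10.0, 20.0], [4.0, 9.0]
w_b, m_b, v_b = [0.5, 0.5], [5.0, 15.0], [1.0, 16.0]
w, m, v = bundle_mixture(w_a, m_a, v_a, w_b, m_b, v_b)
```

A quick sanity check on this construction: the bundle mean equals the sum of the two product means, and the combined weights still sum to one.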


Improving Downstream Task Performance by Treating Numbers as Entities

Sundararaman, Dhanasekar, Subramanian, Vivek, Wang, Guoyin, Xu, Liyan, Carin, Lawrence

arXiv.org Artificial Intelligence

Numbers are essential components of text, like any other word tokens, from which natural language processing (NLP) models are built and deployed. Though numbers are typically not treated distinctly in most NLP tasks, NLP models already exhibit an underlying degree of numeracy. In this work, we attempt to tap this potential of state-of-the-art NLP models and transfer their ability to boost performance in related tasks. Our proposed classification of numbers into entities helps NLP models perform well on several tasks, including a handcrafted Fill-In-The-Blank (FITB) task and question answering using joint embeddings, outperforming BERT and RoBERTa baseline classifiers.
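The abstract does not detail the mechanism by which numbers are classified as entities; as a rough illustration only (the `mark_numbers` helper and the `[NUM]` tag are hypothetical, not the paper's method), a preprocessing pass might wrap numeric tokens so a downstream model can treat them distinctly from ordinary words:

```python
import re

def mark_numbers(text, tag="[NUM]"):
    """Wrap each numeric token (integers or decimals) in an entity tag."""
    return re.sub(r"\b\d+(?:\.\d+)?\b",
                  lambda match: f"{tag} {match.group(0)} {tag}",
                  text)

# Example: mark_numbers("add 1.5 cups") -> "add [NUM] 1.5 [NUM] cups"
```

In practice such tags would be added to the tokenizer's vocabulary so they map to dedicated embeddings rather than being split into subwords.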


How artificial intelligence 'blew up' tennis

#artificialintelligence

Bridie Lynch has been playing and coaching tennis for most of her life. As her parents run a local tennis club in Wales, she was immersed in the sport from the age of 14. One aspect she has noticed is the embrace of technology, at all levels of tennis. "Tennis is such a technical sport. These days, anyone I play or coach is into tech, be it video analysis or longest rally stats."


Making the most of MLOps

#artificialintelligence

When companies first start deploying artificial intelligence and building machine learning projects, the focus tends to be on theory. Is there a model that can provide the necessary results? How can it be built? How can it be trained? But the tools that data scientists use to create these proofs of concept often don't translate well into production systems.


Improving PPA In Complex Designs With AI

#artificialintelligence

The goal of chip design always has been to optimize power, performance, and area (PPA), but results can vary greatly even with the best tools and highly experienced engineering teams. Optimizing PPA involves a growing number of tradeoffs that can vary by application, by the availability of IP and other components, and by the familiarity of engineers with different tools and methodologies. For example, higher performance may be achieved with a larger processor, but it also can be done using smaller, more specialized processing elements with tighter integration of hardware and software. So even in the same area and with the same power budget, there are different ways of achieving the same goal, and the optimum mix may vary depending upon a specific domain or vendor's needs. This is made even more complex by the rising demand for security.


Catching the Fakes

Communications of the ACM

Counterfeiting is big business. Nearly $509 billion of fake and pirated products were sold internationally in 2016. In that year, the latest for which data was available, counterfeit goods made up 3.3% of international trade, up from 2.5% three years earlier, according to the Organization for Economic Cooperation and Development. That figure, which does not include domestic trade in fakes, means not only that companies are losing revenue and consumers are not getting their money's worth; counterfeiting also helps fund organized crime. And because they skirt safety regulations, makers of counterfeits may use toxic materials or produce unsafe products.


SAP BrandVoice: Fashion Tech India: Real-Time AI Data Drives Competitive Retail Advantage

#artificialintelligence

Consumer fashion may be among the most unpredictable markets on the planet, but one startup in India has created an AI-based demand sensing platform that combines the brilliance of data scientists with seasoned industry experts to ferret out trends with uncanny accuracy. The idea is to close the gap between supply and demand. Omni-channel retailers are using AI to synch design and merchandising decisions with breaking consumer demand trends for sustainable growth. "We help companies create demand-driven fashion forecasts from consumer data across a holistic value chain," said Ganesh Subramanian, founder and CEO at Stylumia. "Our demand sensing engine collects and analyzes publicly available global data to rank product trends, providing fashion designers, retail buyers, and merchandisers with a much deeper understanding of real-time consumer demand signals."


How intelligent workload management tools can help IT admins cut through cloud complexity

#artificialintelligence

The pace of digital transformation has notably picked up in the past decade, as enterprises invest in technology to retain their competitive edge and avoid having their market share eroded by disruptive newcomers. Organisations' ability to out-innovate their competitors in this way often requires a full-scale modernisation of the IT infrastructure stack underpinning their operations, so they are better positioned to respond to the changing needs of their customers. For many enterprises, this process of modernisation has seen them invest in making their private, virtualised datacentres and server rooms more agile, responsive, and easier to manage through software-defined networking (SDN) technologies and automation tools. Such investments can help enterprises make better and more efficient use of their existing compute capacity, but that alone may not be enough to stave off competitive threats, prompting some IT leaders to weigh up a move to the public cloud. The benefits of such an approach are well documented and proven. The public cloud offers enterprises ready access to an almost infinite supply of compute resources that can auto-scale in line with peaks and troughs in demand, meaning enterprises pay only for what they use.


AI in testing: 13 essential resources for QA pros

#artificialintelligence

What if you could make software testing simple? What if it could be done without all the conversations, questions, defect reports, and metrics? We've been promised artificial intelligence (AI) as the solution to all problems related to testing, especially by those who have never tested--those who believe that what we do as testers is little more than tapping screens to make comparisons. Although I've stated that AI is coming and will change software testing forever (eventually), we're not there yet--not even close. But that doesn't mean we can't use AI to support our testing efforts.


Learnability of Timescale Graphical Event Models

Behrendt, Philipp

arXiv.org Machine Learning

This technical report aims to fill a gap in the current literature on Timescale Graphical Event Models. I propose and evaluate different heuristics for determining hyper-parameters during the structure learning algorithm, and refine an existing distance measure. A comprehensive benchmark on synthetic data is conducted, allowing conclusions about the applicability of the different heuristics.