Natural Language Assistant: A Dialog System for Online Product Recommendation
Chai, Joyce, Horvath, Veronika, Nicolov, Nicolas, Stys, Margo, Kambhatla, Nanda, Zadrozny, Wlodek, Melville, Prem
With the emergence of electronic-commerce systems, successful information access on electronic-commerce web sites becomes essential. Menu-driven navigation and keyword search currently provided by most commercial sites have considerable limitations because they tend to overwhelm and frustrate users with lengthy, rigid, and ineffective interactions. To provide an efficient solution for information access, we have built the Natural Language Assistant (NLA), a web-based natural language dialog system to help users find relevant products on electronic-commerce sites. The system brings together technologies in natural language processing and human-computer interaction to create a faster and more intuitive way of interacting with web sites. By combining statistical parsing techniques with traditional AI rule-based technology, we have created a dialog system that accommodates both customer needs and business requirements. The system is currently embedded in an application for recommending laptops and was deployed as a pilot on IBM's web site.
Classifying with Gaussian Mixtures and Clusters
Kambhatla, Nanda, Leen, Todd K.
In this paper, we derive classifiers which are winner-take-all (WTA) approximations to a Bayes classifier with Gaussian mixtures for the class-conditional densities. The derived classifiers include clustering-based algorithms such as LVQ and k-means. We propose a constrained-rank Gaussian mixture model and derive a WTA algorithm for it. Our experiments with two speech classification tasks indicate that the constrained-rank model and the WTA approximations improve performance over the unconstrained models.

1 Introduction

A classifier assigns vectors from R^n (the n-dimensional feature space) to one of K classes, partitioning the feature space into K disjoint regions. A Bayesian classifier builds the partition based on a model of the class-conditional probability densities of the inputs (the partition is optimal for the given model).
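The WTA idea can be sketched in a toy setting (a minimal illustration, not the paper's constrained-rank model): fit k-means centers per class as stand-ins for equal-weight, isotropic Gaussian mixture components, then classify by the single nearest component rather than by summing mixture likelihoods as the full Bayes rule does. All function names here are hypothetical.

```python
import numpy as np

def farthest_init(X, k):
    # deterministic farthest-point initialisation for k-means
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    return np.array(centers)

def fit_class_centers(X, y, k, n_iter=25):
    # per-class k-means centers, standing in for the means of
    # equal-weight, isotropic Gaussian mixture components
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = farthest_init(Xc, k)
        for _ in range(n_iter):
            lab = ((Xc[:, None] - mu[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (lab == j).any():
                    mu[j] = Xc[lab == j].mean(0)
        models[c] = mu
    return models

def classify_wta(x, models):
    # winner-take-all: the single nearest component decides the class
    return min(models, key=lambda c: ((models[c] - x) ** 2).sum(1).min())

def classify_bayes(x, models, sigma2=1.0):
    # full Bayes rule: sum isotropic component likelihoods per class
    return max(models, key=lambda c:
               np.exp(-((models[c] - x) ** 2).sum(1) / (2 * sigma2)).sum())
```

With isotropic components and equal priors, the WTA rule reduces to nearest-center classification; it differs from the full Bayes rule only where several components contribute comparable likelihood.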
Fast Non-Linear Dimension Reduction
Kambhatla, Nanda, Leen, Todd K.
We propose a new distance measure which is optimal for the task of local PCA. Our results with speech and image data indicate that the nonlinear techniques provide more accurate encodings than PCA. Our local linear algorithm produces more accurate encodings (except for one simulation with image data), and trains much faster than five-layer auto-associative networks. Acknowledgments This work was supported by grants from the Air Force Office of Scientific Research (F49620-93-1-0253) and Electric Power Research Institute (RP8015-2). The authors are grateful to Gary Cottrell and David DeMers for providing their image database and clarifying their experimental results. We also thank our colleagues in the Center for Spoken Language Understanding at OGI for providing speech data.
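The local linear approach can be sketched as follows. This is a minimal illustration, not the authors' code: it partitions the data with plain Euclidean k-means, whereas the paper's proposed distance measure is reconstruction-optimal. It clusters the data, fits a separate PCA in each cluster, and encodes each point with whichever local model reconstructs it best; the function names are hypothetical.

```python
import numpy as np

def farthest_init(X, k):
    # deterministic farthest-point initialisation for k-means
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    return np.array(centers)

def local_pca_fit(X, n_clusters, n_components, n_iter=25):
    # partition with plain Euclidean k-means (a simplification: the paper
    # derives a reconstruction-optimal distance for this step), then fit a
    # separate PCA in each partition
    mu = farthest_init(X, n_clusters)
    for _ in range(n_iter):
        lab = ((X[:, None] - mu[None]) ** 2).sum(-1).argmin(1)
        for j in range(n_clusters):
            if (lab == j).any():
                mu[j] = X[lab == j].mean(0)
    lab = ((X[:, None] - mu[None]) ** 2).sum(-1).argmin(1)
    models = []
    for j in range(n_clusters):
        Xj = X[lab == j] - mu[j]
        # leading principal directions of this cluster via SVD
        _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
        models.append((mu[j], Vt[:n_components]))
    return models

def local_pca_reconstruct(x, models):
    # encode x with whichever local PCA reconstructs it best
    best, best_err = None, np.inf
    for mean, V in models:
        r = mean + (x - mean) @ V.T @ V
        e = ((x - r) ** 2).sum()
        if e < best_err:
            best, best_err = r, e
    return best
```

On data lying on two line segments with different orientations, two one-dimensional local PCAs reconstruct the points almost exactly, while a single global one-dimensional PCA cannot capture both directions at once.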