Suresha, Suhas
Domain-specific Question Answering with Hybrid Search
Sultania, Dewang, Lu, Zhaoyu, Naik, Twisha, Dernoncourt, Franck, Yoon, David Seunghyun, Sharma, Sanat, Bui, Trung, Gupta, Ashok, Vatsa, Tushar, Suresha, Suhas, Verma, Ishita, Belavadi, Vibha, Chen, Cheng, Friedrich, Michael
With the increasing adoption of Large Language Models (LLMs) in enterprise settings, ensuring accurate and reliable question-answering systems remains a critical challenge. Building upon our previous work on domain-specific question answering about Adobe products (Sharma et al. 2024), which established a retrieval-aware framework with self-supervised training, we now present a production-ready, generalizable architecture alongside a comprehensive evaluation methodology. Our core contribution is a flexible, scalable framework built on Elasticsearch that can be adapted for any LLM-based question-answering system. This framework seamlessly integrates hybrid retrieval mechanisms, combining dense and sparse search with boost matching. The contributions include: a production-ready, generalizable framework for LLM-based QA systems built on Elasticsearch; a flexible hybrid retrieval mechanism combining dense and sparse search methods; a comprehensive evaluation framework for assessing QA system performance; and empirical analysis demonstrating the effectiveness of our approach across various metrics. Through this work, we provide not only theoretical insights but also a practical, deployable solution for building reliable domain-specific question-answering systems that can be adapted to various enterprise needs.
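The abstract does not include implementation details, but the core idea of hybrid retrieval on Elasticsearch can be sketched as follows. The index name, field names, encoder model, and boost values below are illustrative assumptions rather than the configuration described in the paper; the sketch simply shows a BM25 keyword query (with a field boost) combined with dense kNN search in a single Elasticsearch request.

```python
# Minimal sketch of hybrid retrieval on Elasticsearch: sparse BM25 matching
# with a field boost, plus dense approximate kNN over passage embeddings.
# Index name ("docs"), field names, and the encoder are hypothetical.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any dense encoder works


def hybrid_search(question: str, k: int = 5):
    query_vector = encoder.encode(question).tolist()
    response = es.search(
        index="docs",
        size=k,
        # Sparse (BM25) component: keyword match, boosting the title field.
        query={
            "multi_match": {
                "query": question,
                "fields": ["title^2", "body"],
            }
        },
        # Dense component: approximate kNN over precomputed embeddings.
        knn={
            "field": "embedding",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 50,
        },
    )
    return [hit["_source"] for hit in response["hits"]["hits"]]
```

When both `query` and `knn` are supplied, Elasticsearch sums the two scores, which is one simple way to fuse sparse and dense relevance signals; per-clause boost values can then be used to weight the two components.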
Probabilistic Semantic Inpainting with Pixel Constrained CNNs
Dupont, Emilien, Suresha, Suhas
Semantic inpainting is the task of inferring missing pixels in an image given surrounding pixels and high-level image semantics. Most semantic inpainting algorithms are deterministic: given an image with missing regions, a single inpainted image is generated. However, there are often several plausible inpaintings for a given missing region. In this paper, we propose a method to perform probabilistic semantic inpainting by building a model, based on PixelCNNs, that learns a distribution of images conditioned on a subset of visible pixels. Experiments on the MNIST and CelebA datasets show that our method produces diverse and realistic inpaintings. Further, our model also estimates the likelihood of each sample, which we show correlates well with the realism of the generated inpaintings.
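To make the sampling procedure concrete, here is a minimal PyTorch sketch of how an autoregressive, pixel-constrained model can produce multiple inpaintings and a likelihood estimate for each. The model interface assumed here (per-pixel logits over 256 intensity values, conditioned on the partially observed image and a visibility mask) is for illustration only and is not the paper's actual implementation.

```python
# Sketch of probabilistic inpainting with an autoregressive (PixelCNN-style)
# model: missing pixels are sampled in raster order, visible pixels are kept
# fixed, and the log-likelihood of each completed sample is accumulated.
# `model(x, mask)` returning (1, 256, H, W) logits is an assumed interface.
import torch


@torch.no_grad()
def sample_inpaintings(model, image, mask, num_samples=8):
    """image: (1, 1, H, W) with values in [0, 255]; mask: 1 = visible pixel."""
    _, _, H, W = image.shape
    samples, log_likelihoods = [], []
    for _ in range(num_samples):
        x = image.clone() * mask  # start from the visible pixels only
        log_prob = 0.0
        for i in range(H):
            for j in range(W):
                logits = model(x, mask)                   # (1, 256, H, W)
                probs = torch.softmax(logits[0, :, i, j], dim=0)
                if mask[0, 0, i, j] == 1:
                    value = image[0, 0, i, j].long()      # keep observed pixel
                else:
                    value = torch.multinomial(probs, 1).squeeze()
                    x[0, 0, i, j] = value.float()         # fill in sampled pixel
                log_prob += torch.log(probs[value] + 1e-12).item()
        samples.append(x.clone())
        log_likelihoods.append(log_prob)  # higher ~ more plausible sample
    return samples, log_likelihoods
```

Because each missing pixel is drawn from the learned conditional distribution rather than taken as an argmax, repeated calls yield diverse inpaintings, and the accumulated log-probability gives a per-sample likelihood that can be used to rank them.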