Small Language Models for Application Interactions: A Case Study
Li, Beibin, Zhang, Yi, Bubeck, Sébastien, Pathuri, Jeevan, Menache, Ishai
Large Language Models (LLMs) are becoming pervasive in assisting humans with a wide variety of tasks, such as writing documents, presenting work, coding, and health assistance. Generative LLMs are being rapidly integrated into user-facing software for answering questions and increasing productivity through simple, language-based interactions with technology. One of the key operating principles behind LLMs is exploiting their ability to generalize to unseen tasks by providing examples through the prompt itself, an approach commonly known as in-context learning. While LLMs are being designed to support larger prompt sizes, processing very large prompts can be expensive and incur non-negligible latencies. In this paper, we consider the alternative of using Small Language Models (SLMs), which are now being developed and open-sourced by several companies.
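The in-context learning principle mentioned above can be illustrated with a minimal sketch: task examples are embedded directly in the prompt so the model can generalize without fine-tuning. The example pairs and prompt template below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of in-context learning: a few (input, output) examples are
# placed in the prompt itself, followed by the new query the model should
# complete. Note that longer example lists mean larger prompts, which is the
# cost/latency trade-off the paper discusses.

def build_few_shot_prompt(examples, query):
    """Format example pairs followed by the new query, separated by blank lines."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("translate 'hello' to French", "bonjour"),
    ("translate 'goodbye' to French", "au revoir"),
]
prompt = build_few_shot_prompt(examples, "translate 'thank you' to French")
```

The resulting string would be sent as the prompt to an LLM (or, per the paper's thesis, an SLM); each added example improves task grounding but grows the prompt.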
Large Language Models for Supply Chain Optimization
Li, Beibin, Mellou, Konstantina, Zhang, Bo, Pathuri, Jeevan, Menache, Ishai
Supply chain operations traditionally involve a variety of complex decision-making problems. Over the last few decades, supply chains have greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial effort in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide -- a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
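The what-if mechanism described in the abstract can be sketched as follows: instead of asking the LLM to guess an outcome, the framework re-runs the underlying optimizer on a modified input and compares costs. The toy data and `solve()` function below are hypothetical stand-ins for a real combinatorial solver, not OptiGuide's actual implementation.

```python
# Toy illustration of answering a what-if query by re-optimization.
# A real deployment would call a combinatorial solver; here solve() just
# assigns all demand to the cheapest supplier.

def solve(demand, unit_costs):
    """Toy 'optimizer': pick the cheapest supplier for the whole demand."""
    supplier = min(unit_costs, key=unit_costs.get)
    return supplier, demand * unit_costs[supplier]

def what_if(demand, unit_costs, forced_supplier):
    """How would total cost change if we forced a given supplier?"""
    _, baseline_cost = solve(demand, unit_costs)
    forced_cost = demand * unit_costs[forced_supplier]
    return forced_cost - baseline_cost

unit_costs = {"A": 2.0, "B": 2.5}       # hypothetical per-unit costs
delta = what_if(100, unit_costs, "B")   # extra cost of using supplier B
```

In this design the LLM's role is limited to translating the plain-text question into the modified solver input, so the quantitative answer comes from the optimizer and proprietary data never needs to leave the solver.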