Optimizing Large Language Models for Dynamic Constraints through Human-in-the-Loop Discriminators

Timothy Wei, Annabelle Miin, Anastasia Miin

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) have recently demonstrated impressive capabilities across various real-world applications. However, due to the current text-in-text-out paradigm, it remains challenging for LLMs to handle dynamic and complex application constraints, let alone devise general solutions that meet predefined system goals. Current common practices like model finetuning and reflection-based reasoning often address these issues case-by-case, limiting their generalizability. To address this issue, we propose a flexible framework that enables LLMs to interact with system interfaces, summarize constraint concepts, and continually optimize performance metrics by collaborating with human experts.

Those methods usually rely on data curation reflecting on the deliberate reasoning path in specific application areas. When it comes to complex application constraints, high-quality solutions often demand a large volume of data to cover enough data cases and the corresponding reasoning logic. This process fundamentally differs from the typical human cognition process: faced with unfamiliar problems, people first seek to capture the overview of the underlying application constraints, which are potential rules summarized from observations. Next, these learned rules will be further refined when exception cases arise. The essence of this cognitive process lies in distilling rules and identifying minimal cases for refinement rather than depending on the inefficient
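The cognitive process described above — distill candidate rules from observations, then let a human-in-the-loop discriminator surface the minimal exception cases that drive refinement — can be sketched as a simple loop. This is a hypothetical illustration, not the paper's actual implementation; all names (`Rule`, `propose_rule`, `human_discriminator`, `refine`) are assumptions, and the LLM and human steps are replaced by stand-in functions.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A constraint rule summarized from observed cases."""
    description: str
    exceptions: list = field(default_factory=list)

def propose_rule(observations):
    # Stand-in for an LLM call that summarizes a constraint
    # rule from the observed cases.
    return Rule(description=f"pattern inferred from {len(observations)} cases")

def human_discriminator(rule, case):
    # Stand-in for the human-in-the-loop discriminator: returns True
    # if the rule handles the case correctly. Here, as a toy criterion,
    # even-numbered cases are treated as exceptions.
    return case % 2 == 1

def refine(rule, cases):
    """Refine a rule by recording only the minimal exception cases
    flagged by the discriminator, rather than re-curating all data."""
    for case in cases:
        if not human_discriminator(rule, case):
            rule.exceptions.append(case)
    return rule

rule = propose_rule([1, 3, 5])
rule = refine(rule, [1, 2, 3, 4])
print(rule.exceptions)  # exception cases identified for refinement: [2, 4]
```

The design point mirrored here is that only the flagged exceptions are stored for the next refinement round, rather than the full case corpus.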
