Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering
Neural Information Processing Systems
Large Language Models (LLMs) are widely used for knowledge-seeking purposes yet suffer from hallucinations. The knowledge boundary of an LLM limits its factual understanding; beyond it, the model may begin to hallucinate. Investigating how LLMs perceive their knowledge boundary is crucial for detecting hallucinations and for reliable generation. Current studies probe LLMs' knowledge boundary on questions with concrete answers (close-ended questions) while paying limited attention to semi-open-ended questions, which correspond to many potential answers. Some researchers approach this by judging whether a question is answerable or not. However, this paradigm is ill-suited to semi-open-ended questions, which are usually "partially answerable": they admit both answerable answers and ambiguous (unanswerable) ones.
Mar-26-2025, 10:34:09 GMT