Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done!
Divya Patel, Pathik Patel, Ankush Chander, Sourish Dasgupta, Tanmoy Chakraborty
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have achieved considerable success in In-Context Learning (ICL) based summarization. However, saliency depends on each user's specific preference history, so we need reliable In-Context Personalization Learning (ICPL) capabilities within such LLMs. For an arbitrary LLM to exhibit ICPL, it must be able to discern contrasts between user profiles. A recent study proposed EGISES, the first measure of degree-of-personalization, which quantifies a model's responsiveness to differences in user profiles. However, EGISES cannot test whether a model utilizes all three types of cues provided in ICPL prompts: (i) example summaries, (ii) the user's reading history, and (iii) the contrast between user profiles. To address this, we propose the iCOPERNICUS framework, a novel In-COntext PERsonalization learNIng sCrUtiny of Summarization capability in LLMs that uses EGISES as a comparative measure. As a case study, we evaluate 17 state-of-the-art LLMs selected for their reported ICL performance and observe that the ICPL of 15 models degrades (min: 1.6%; max: 3.6%) when they are probed with richer prompts, showing a lack of true ICPL.
Sep-30-2024
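The abstract describes a comparative probing procedure: measure a model's degree-of-personalization under a lean ICPL prompt, then under a richer one, and flag degradation. Below is a minimal, hypothetical Python sketch of that idea, not the authors' code or the exact EGISES formulation. The `summarize`, `lean_prompt`, and `rich_prompt` callables are assumed placeholders for an LLM call and prompt builders, and a token-overlap Jaccard distance stands in for the summary-divergence metric EGISES would actually be instantiated with.

```python
# Hypothetical iCOPERNICUS-style probe (illustrative sketch only).
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """Toy divergence between two texts (stand-in for a real metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def responsiveness(summaries: dict, profiles: dict) -> float:
    """Simplified EGISES-flavored proxy: reward summary differences that
    track profile differences across user pairs (needs >= 2 users)."""
    scores = []
    for u, v in combinations(summaries, 2):
        d_sum = jaccard_distance(summaries[u], summaries[v])
        d_pro = jaccard_distance(profiles[u], profiles[v])
        # Penalize mismatch between profile contrast and summary contrast.
        scores.append(1.0 - abs(d_sum - d_pro))
    return sum(scores) / len(scores)

def icopernicus_probe(summarize, doc, profiles, lean_prompt, rich_prompt):
    """Compare degree-of-personalization under a lean ICPL prompt vs. a
    richer one (example summaries + reading history + profile contrast).
    A drop under the richer prompt suggests the model lacks true ICPL."""
    lean = {u: summarize(doc, lean_prompt(p)) for u, p in profiles.items()}
    rich = {u: summarize(doc, rich_prompt(p)) for u, p in profiles.items()}
    r_lean = responsiveness(lean, profiles)
    r_rich = responsiveness(rich, profiles)
    return {"lean": r_lean, "rich": r_rich, "degraded": r_rich < r_lean}
```

In this sketch, "degraded" mirrors the paper's headline observation: if richer ICPL cues lower the responsiveness score rather than raising it, the model is not genuinely exploiting the personalization signal in the prompt.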