Text-Based Approaches to Item Alignment to Content Standards in Large-Scale Reading & Writing Tests

Fu, Yanbin, Jiao, Hong, Zhou, Tianyi, Zhang, Nan, Li, Ming, Xu, Qingshu, Peters, Sydney, Lissitz, Robert W.

arXiv.org Artificial Intelligence

University of Maryland, College Park

Abstract: Aligning test items to content standards is a critical step in test development to collect validity evidence based on content. Item alignment has typically been conducted by human experts. This judgmental process can be subjective and time-consuming. This study investigated the performance of fine-tuned small language models (SLMs) for automated item alignment using data from a large-scale standardized reading and writing test for college admissions. Different SLMs were trained for alignment at both the domain and skill levels, with 10 skills mapped to 4 content domains. Model performance was evaluated on multiple criteria using two test datasets. The impact of the type and size of the training input data was also investigated. Results showed that including more item text data led to substantially better model performance, surpassing the improvement induced by sample-size increase alone. For comparison, supervised machine learning models were trained using embeddings from the multilingual-E5-large-instruct model. The results showed that fine-tuned SLMs consistently outperformed the embedding-based supervised machine learning models, particularly for the more fine-grained skill alignment. To better understand model misclassifications, multiple semantic similarity analyses were conducted, including pairwise cosine similarity, Kullback-Leibler divergence of embedding distributions, and two-dimensional projections of item embeddings.
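The two embedding-space measures named above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the toy vectors are hypothetical stand-ins for item embeddings that, in the study, would come from a model such as multilingual-E5-large-instruct.

```python
import numpy as np

def cosine_similarity(a, b):
    # Pairwise cosine similarity between two embedding vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def kl_divergence(p, q, eps=1e-12):
    # Kullback-Leibler divergence between two embeddings treated as
    # discrete distributions (normalized to sum to 1; eps avoids log(0)).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy "item embeddings" (hypothetical values for illustration only).
item_a = np.array([0.20, 0.70, 0.10])
item_b = np.array([0.25, 0.60, 0.15])
print(cosine_similarity(item_a, item_b))
print(kl_divergence(item_a, item_b))
```

In practice these scores would be computed over all item pairs to see whether misclassified items sit close to the wrong domain or skill in embedding space.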


Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior

Si, Shengyun, Wang, Xinpeng, Zhai, Guangyao, Navab, Nassir, Plank, Barbara

arXiv.org Artificial Intelligence

Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless. In practice, such "harmlessness" behavior is mainly achieved by training models to reject harmful requests, such as "Explain how to burn down my neighbor's house", where the model appropriately declines to respond. However, this approach can inadvertently result in false refusal, where models reject benign queries as well, such as "Tell me how to kill a Python process". In this work, we demonstrate that prompting safety reflection before generating a response can mitigate false refusal behavior. Building on this finding, we introduce the Think-Before-Refusal (TBR) schema and conduct safety-aware instruction fine-tuning incorporating safety reflection. In an ablation study across 15 pre-trained models, we show that models fine-tuned with safety reflection significantly reduce false refusal behavior while maintaining safety and overall performance compared to those fine-tuned without safety reflection.
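The core idea of prompting safety reflection before generation can be sketched as a prompt template. The wording below is a hypothetical illustration of the schema described in the abstract, not the paper's actual prompt or fine-tuning data.

```python
def build_tbr_prompt(user_query: str) -> str:
    # Hypothetical Think-Before-Refusal style template: the model is asked
    # to reflect on whether a request is genuinely harmful before deciding
    # to answer or refuse, reducing false refusals of benign queries.
    return (
        "Before responding, briefly reflect: is the following request "
        "actually harmful, or does it only superficially resemble a "
        "harmful one (e.g., 'kill a Python process' vs. real violence)?\n"
        f"Request: {user_query}\n"
        "If the request is benign, answer it helpfully; refuse only if it "
        "is genuinely harmful."
    )

print(build_tbr_prompt("Tell me how to kill a Python process"))
```

A benign query that merely shares surface features with a harmful one would, under this schema, pass the reflection step and be answered rather than refused.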


Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning

Hee, Ming Shan, Kumaresan, Aditi, Lee, Roy Ka-Wei

arXiv.org Artificial Intelligence

The widespread presence of hate speech on the internet, including formats such as text-based tweets and vision-language memes, poses a significant challenge to digital platform safety. Recent research has developed detection models tailored to specific modalities; however, there is a notable gap in transferring detection capabilities across different formats. This study conducts extensive experiments using few-shot in-context learning with large language models to explore the transferability of hate speech detection between modalities. Our findings demonstrate that text-based hate speech examples can significantly enhance the classification accuracy of vision-language hate speech. Moreover, text-based demonstrations outperform vision-language demonstrations in few-shot learning settings. These results highlight the effectiveness of cross-modality knowledge transfer and offer valuable insights for improving hate speech detection systems.
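The few-shot in-context learning setup described above amounts to assembling labeled demonstrations into the prompt ahead of the query. The function below is a generic sketch of that prompt construction (labels and format are assumptions, not the paper's exact setup); for a vision-language query, the text portion of the meme would be slotted in the same way.

```python
def build_few_shot_prompt(demos, query_text):
    # demos: list of (text, label) pairs used as in-context examples,
    # e.g., text-based posts labeled HATE or NOT_HATE.
    lines = ["Classify each post as HATE or NOT_HATE."]
    for text, label in demos:
        lines.append(f"Post: {text}\nLabel: {label}")
    # The query is appended last with an empty label for the model to fill.
    lines.append(f"Post: {query_text}\nLabel:")
    return "\n".join(lines)

demos = [
    ("<example of a hateful post>", "HATE"),
    ("<example of a benign post>", "NOT_HATE"),
]
print(build_few_shot_prompt(demos, "<new post to classify>"))
```

The abstract's finding is that filling `demos` with text-based examples improves classification of vision-language inputs more than vision-language demonstrations do.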