Yin, Jianghao
Boosting Conversational Question Answering with Fine-Grained Retrieval-Augmentation and Self-Check
Ye, Linhao, Lei, Zhikai, Yin, Jianghao, Chen, Qin, Zhou, Jie, He, Liang
Retrieval-Augmented Generation (RAG) aims to generate more reliable and accurate responses by augmenting large language models (LLMs) with external, vast, and dynamic knowledge. Most previous work focuses on using RAG for single-round question answering, while how to adapt RAG to the complex conversational setting, wherein the question is interdependent on the preceding context, is not well studied. In this paper, we propose a conversation-level RAG (ConvRAG) approach, which incorporates fine-grained retrieval augmentation and self-check for conversational question answering (CQA). In particular, our approach consists of three components, namely a conversational question refiner, a fine-grained retriever, and a self-check based response generator, which work collaboratively for question understanding and relevant information acquisition in conversational settings. Extensive experiments demonstrate the effectiveness of our approach.
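The abstract describes a three-stage pipeline: refine the context-dependent question, retrieve fine-grained evidence, and generate a response with a self-check step. The sketch below is a minimal, hypothetical illustration of how such a conversation-level RAG loop could be wired together; the function names, prompt wording, the `llm` callable, and the toy keyword retriever are assumptions for illustration, not details taken from the paper.

```python
from typing import Callable, List

def refine_question(history: List[str], question: str, llm: Callable[[str], str]) -> str:
    """Rewrite a context-dependent question into a self-contained one (assumed prompt format)."""
    prompt = (
        "Rewrite the final question so it can be understood without the conversation.\n"
        f"Conversation: {' | '.join(history)}\n"
        f"Question: {question}\nRewritten question:"
    )
    return llm(prompt)

def retrieve(query: str, corpus: List[str], top_k: int = 2) -> List[str]:
    """Toy fine-grained retriever: rank passages by keyword overlap with the refined query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(terms & set(p.lower().split())), reverse=True)
    return ranked[:top_k]

def generate_with_self_check(question: str, passages: List[str], llm: Callable[[str], str]) -> str:
    """Draft an answer, ask the model whether the evidence supports it, and revise if not."""
    draft = llm(f"Answer using the evidence.\nEvidence: {passages}\nQuestion: {question}\nAnswer:")
    verdict = llm(f"Evidence: {passages}\nAnswer: {draft}\nIs the answer supported? Reply yes or no:")
    if verdict.strip().lower().startswith("no"):
        return llm(
            "Revise the answer so it is supported by the evidence.\n"
            f"Evidence: {passages}\nAnswer: {draft}\nRevised answer:"
        )
    return draft

if __name__ == "__main__":
    echo_llm = lambda prompt: prompt.splitlines()[-1]  # stand-in LLM, only for a dry run
    history = ["Q: Where did the battle of Hunayn take place?", "A: Near Ta'if."]
    refined = refine_question(history, "When did the battle happen?", echo_llm)
    passages = retrieve(refined, ["The battle of Hunayn took place in 630 CE.", "Ta'if is a city."])
    print(generate_with_self_check(refined, passages, echo_llm))
```

In a real system each stage would call the same underlying LLM with its own prompt, and the retriever would operate over passage- or sentence-level chunks rather than whole documents, which is what the "fine-grained" qualifier in the abstract refers to.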
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Dan, Yuhao, Lei, Zhikai, Gu, Yiyang, Li, Yong, Yin, Jianghao, Lin, Jiaju, Ye, Linhao, Tie, Zhiyan, Zhou, Yougen, Wang, Yilei, Zhou, Aimin, Zhou, Ze, Chen, Qin, Zhou, Jie, He, Liang, Qiu, Xipeng
EduChat (https://www.educhat.top/) is a large-scale language model (LLM)-based chatbot system for the education domain. It aims to support personalized, fair, and compassionate intelligent education, serving teachers, students, and parents. Guided by theories from psychology and education, it strengthens educational functions such as open question answering, essay assessment, Socratic teaching, and emotional support on top of existing base LLMs. In particular, we learn domain-specific knowledge by pre-training on an educational corpus, and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. EduChat is available online as an open-source project, with its code, data, and model parameters released on platforms such as GitHub (https://github.com/icalk-nlp/EduChat) and Hugging Face (https://huggingface.co/ecnu-icalk). We also provide an online demonstration of its capabilities (https://vimeo.com/851004454). This initiative aims to promote research on and applications of LLMs for intelligent education.
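Since the abstract notes that model parameters are released on Hugging Face (https://huggingface.co/ecnu-icalk), the following is a minimal sketch of querying a released checkpoint with the standard transformers API. The repository id and the role-tagged system prompt are assumptions for illustration; consult the GitHub and Hugging Face pages above for the actual checkpoint names and prompt format.

```python
# Minimal sketch: chatting with a released EduChat checkpoint via Hugging Face transformers.
# The repository id below is a placeholder; see https://huggingface.co/ecnu-icalk for real names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ecnu-icalk/educhat-sft-002-7b"  # assumed/placeholder checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

# A system prompt steering the model toward its educational role (wording is illustrative only).
prompt = (
    "<|system|>: You are EduChat, an educational assistant that answers patiently.\n"
    "<|prompter|>: Please explain photosynthesis to a primary-school student.\n"
    "<|assistant|>:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```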