Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey

Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang

arXiv.org Artificial Intelligence 

Recently, Large Language Models (LLMs) have showcased remarkable versatility across a spectrum of critical tasks. An LLM is adept at tasks such as copywriting, enhancing original sentences with its distinct style and voice, responding to knowledge base queries, generating code, solving mathematical problems, and performing classification or generation tasks tailored to user requirements. Moreover, there has been a recent expansion into multi-modal variants, such as Large Visual Language Models (LVLMs) or Large Multi-modal Language Models, which broaden the input/output capabilities of these models to encompass various modalities. This evolution has significantly enhanced both their potential and their range of applications. In this survey, our primary focus is on Transformer-based Large Language Models (LLMs). The capability of LLMs is fundamentally rooted in their inference abilities, which dictate their proficiency in comprehending, processing, and providing solutions to various inquiries, as well as their adaptability to societally impactful domains.
