Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models

Lingxi Xie, Longhui Wei, Xiaopeng Zhang, Kaifeng Bi, Xiaotao Gu, Jianlong Chang, Qi Tian

arXiv.org Artificial Intelligence 

Abstract--The AI community has been pursuing algorithms, known as artificial general intelligence (AGI), that apply to any kind of real-world problem. Recently, chat systems powered by large language models (LLMs) have emerged and rapidly become a promising direction for achieving AGI in natural language processing (NLP), but the path towards AGI in computer vision (CV) remains unclear. One may attribute the dilemma to the fact that visual signals are more complex than language signals, yet we are interested in finding concrete reasons, as well as absorbing experiences from GPT and LLMs, to solve the problem. In this paper, we start with a conceptual definition of AGI and briefly review how NLP solves a wide range of tasks via a chat system. The analysis inspires us that unification is the next important goal of CV. However, despite various efforts in this direction, CV is still far from a system like GPT that naturally integrates all tasks. We point out that the essential weakness of CV lies in lacking a paradigm to learn from environments, whereas NLP has accomplished this in the text world. We then imagine a pipeline that puts a CV algorithm (i.e., an agent) in world-scale, interactable environments, pre-trains it to predict future frames with respect to its actions, and then fine-tunes it with instructions to accomplish various tasks. We expect substantial research and engineering efforts to push the idea forward and scale it up, for which we share our perspectives on future research directions.

Some researchers believed that such systems can be seen as early sparks of AGI [2]. These systems were enhanced by instruct tuning [4]. Equipped with an external knowledge base and specifically designed modules, they can accomplish complex tasks such as solving mathematical questions, generating visual contents, etc., reflecting their strong ability to understand users' intentions and perform preliminary chain-of-thought reasoning [5]. Despite known weaknesses in some aspects (e.g., telling scientific facts and relationships between named people), these pioneering studies …

… designs do not generally transfer to other problems such as image captioning [11] or visual content generation [12]. In recent years, there have been many efforts in this direction, and we roughly categorize them into five research topics, namely, (i) open-world visual recognition based on vision-language alignment [13], (ii) the Segment Anything task [14] for generic visual recognition, (iii) generalized visual encoding to unify vision tasks [15], [16], [17], (iv) LLM-guided visual understanding to enhance the logic in CV [18], [19], and (v) multimodal dialog to facilitate vision-language interaction [11], [20].
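The pipeline envisioned in the abstract (pre-training an agent in an interactive environment to predict the next frame conditioned on its action, before instruction fine-tuning) can be sketched in miniature with a toy linear world model. Every name, shape, and dynamics function below is an illustrative assumption, not a detail from the paper; real frames and actions would be high-dimensional and the predictor a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pre-training stage: the agent observes a frame,
# takes an action, and learns to predict the environment's next frame.
# Dimensions are arbitrary illustrative choices.
FRAME_DIM, ACTION_DIM = 16, 4

def environment_step(frame, action, W_true):
    # Hypothetical linear world dynamics, unknown to the agent.
    x = np.concatenate([frame, action])
    return W_true @ x

def pretrain_world_model(num_steps=3000, lr=0.02):
    # Ground-truth dynamics the agent interacts with.
    W_true = rng.normal(scale=0.3, size=(FRAME_DIM, FRAME_DIM + ACTION_DIM))
    # The agent's learned dynamics model, trained by SGD on the
    # squared next-frame prediction error.
    W = np.zeros_like(W_true)
    for _ in range(num_steps):
        frame = rng.normal(size=FRAME_DIM)
        action = rng.normal(size=ACTION_DIM)
        target = environment_step(frame, action, W_true)
        x = np.concatenate([frame, action])
        pred = W @ x
        W -= lr * np.outer(pred - target, x)  # gradient of 0.5*||pred-target||^2
    return W, W_true

W, W_true = pretrain_world_model()
err = np.abs(W - W_true).max()
print(f"max parameter error after pre-training: {err:.4f}")
```

Because the toy dynamics are noiseless and linear, the learned model converges toward the true dynamics; the point of the sketch is only the interaction loop (observe, act, predict, correct), not the model class. Instruction fine-tuning would be a second stage reusing the pre-trained model, which is omitted here.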
