Fine-tuned network relies on generic representation to solve unseen cognitive task

Lin, Dongyan

arXiv.org Artificial Intelligence 

We aim to understand the extent to which fine-tuned models depend on their pretrained representations to solve a novel task. To this end, we compare the representations after fine-tuning with those developed by GPT-2 optimized solely on this task from scratch. We chose this task not only because it is novel but also because its grounding in neuroscience allows us to explore the data with computational neuroscience methods and make direct comparisons between representations in biological and artificial neural networks.

… more on generic pretrained representation, or develop brand new task-specific solutions? Here, we fine-tuned GPT-2 on a context-dependent decision-making task, novel to the model but adapted from neuroscience literature. We compared its performance and internal mechanisms to a version of GPT-2 trained from scratch on the same task. Our results show that fine-tuned models depend heavily on pretrained representations …
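The abstract describes comparing internal representations between a fine-tuned GPT-2 and one trained from scratch. One standard metric for this kind of comparison is linear Centered Kernel Alignment (CKA); the abstract does not name the metric actually used, so the sketch below is illustrative only, with randomly generated stand-ins for layer activations:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_samples, n_features). Returns a value in [0, 1];
    1 means the representations match up to rotation and scale."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross-similarity, normalized by each matrix's self-similarity
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    self_x = np.linalg.norm(X.T @ X, "fro")
    self_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (self_x * self_y)

rng = np.random.default_rng(0)
# Hypothetical activations: 100 task inputs, 32 hidden units per model
acts_finetuned = rng.standard_normal((100, 32))
acts_scratch = rng.standard_normal((100, 32))

print(round(linear_cka(acts_finetuned, acts_finetuned), 3))  # identical reps -> 1.0
print(linear_cka(acts_finetuned, acts_scratch) < 0.5)        # independent random reps score low
```

In practice the activation matrices would come from matched layers of the two models, evaluated on the same batch of task trials; a high CKA for the fine-tuned model against its pretrained checkpoint, versus a low CKA against the from-scratch model, would be consistent with the paper's claim of reliance on pretrained representations.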
