Technical Perspective: Unsafe Code Still a Hurdle Copilot Must Clear

Communications of the ACM 

In recent years, enormous progress has been made in the field of large language models (LLMs). Built on neural network architectures, specifically transformer models, LLMs have proven highly effective at natural language processing (NLP): they are designed to understand, generate, and work with human language. Trained on large datasets of text drawn from the Internet, books, articles, and many other sources, such a model learns to predict the next word in a sentence from the preceding words. LLMs can generate not only human language but also source code, supporting humans in the implementation of software systems.
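The next-word-prediction idea can be illustrated with a deliberately tiny sketch: a bigram model that counts, over a toy corpus, which word most often follows each word. This is an assumption-laden simplification for illustration only; real LLMs learn these conditional distributions with transformer networks over vast corpora, not with frequency counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = (
    "the model predicts the next word . "
    "the model generates the next token . "
    "the model learns the next word ."
).split()

# For each word, count which words follow it (a bigram model --
# a drastic simplification of a transformer's learned distribution).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("next"))  # the word seen most often after "next"
```

Generating text then amounts to repeatedly feeding the model's own prediction back in as the new context, which is also how code completion proceeds token by token.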