Code to Comment Translation: A Comparative Study on Model Effectiveness & Errors
Junayed Mahmud, Fahim Faisal, Raihan Islam Arnob, Antonios Anastasopoulos, Kevin Moran
arXiv.org Artificial Intelligence
Automated source code summarization is a popular software engineering research topic wherein machine translation models are employed to "translate" code snippets into relevant natural language descriptions. Most evaluations of such models are conducted using automatic reference-based metrics. However, given the relatively large semantic gap between programming languages and natural language, we argue that this line of research would benefit from a qualitative investigation into the various error modes of current state-of-the-art models. Therefore, in this work, we perform both a quantitative and qualitative comparison of three recently proposed source code summarization models. In our quantitative evaluation, we compare the models based on the smoothed BLEU-4, METEOR, and ROUGE-L machine translation metrics, and in our qualitative evaluation, we perform a manual open-coding of the most common errors committed by the models when compared to ground truth captions. Our investigation reveals new insights into the relationship between metric-based performance and model prediction errors, grounded in an empirically derived error taxonomy that can be used to drive future research efforts.
Jun-15-2021