Abstract
This study investigates the use of Generative Pre-trained Transformers (GPTs) for automated code comment generation, addressing the shortcomings of manual documentation in software development.
Manual code commenting is frequently overlooked due to its time-consuming nature, resulting in insufficient documentation that impedes code comprehension and maintenance.
This study seeks to assess the efficacy of GPT models, notably GPT-3, in producing accurate, contextually appropriate, and comprehensive code comments.
The methodology involves selecting pre-trained GPT models and applying them to publicly available code repositories on platforms such as GitHub, with a focus on Python and JavaScript.
The generated comments are compared against existing manual comments and a baseline model using quantitative metrics such as BLEU and ROUGE scores, as well as qualitative assessment by software development experts.
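To illustrate what such a quantitative comparison measures, the modified n-gram precision underlying BLEU can be sketched in pure Python; the function name and the example comments below are illustrative, not taken from the study's data or evaluation pipeline:

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int) -> float:
    """Clipped n-gram precision: the fraction of candidate n-grams
    that also appear in the reference, with counts clipped so a
    repeated candidate n-gram cannot be credited more times than
    it occurs in the reference (as in BLEU)."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# Hypothetical generated comment vs. an existing manual comment
generated = "returns the sum of two integers"
manual = "return the sum of two integer values"
print(ngram_precision(generated, manual, 1))  # unigram precision: 4 of 6 tokens match
```

Full BLEU combines such precisions over several n-gram orders with a brevity penalty; libraries such as NLTK provide ready-made implementations, so this sketch only conveys the intuition behind the scores reported.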
Key hypotheses include the superior efficacy of GPTs over traditional rule-based methods, the higher quality of GPT-generated comments in terms of clarity and relevance, the varying effectiveness of GPTs across programming languages, and the potential for improved code comprehension and reduced maintenance time when GPT-generated comments are used.
This study intends to show that GPTs can substantially improve code documentation standards, leading to higher software quality, developer productivity, and maintainability.
The findings are expected to provide empirical evidence supporting the use of AI in software documentation workflows, as well as directions for future research on optimizing GPT models for different programming languages and development environments.