Empirical Evaluation of Large Language Models for Automated Unit Test Generation

Organization
Software Engineering Analytics
Abstract
Developers write unit tests to ensure software correctness, but writing these tests manually is time-consuming and could benefit from automation. Recently, Large Language Models (LLMs) have been applied to various aspects of software development, including the automated generation of unit tests. This thesis aims to empirically evaluate the effectiveness of LLMs in generating unit tests by comparing their performance against that of human testers.
Graduation thesis defence year
2024-2025
Supervisor
Faiz Ali Shah
Spoken language(s)
English
Requirements for candidates
Level
Masters
Keywords
#SEA

Contact for application

Name
Faiz Ali Shah
Phone
E-mail
faiz.ali.shah@ut.ee