Evaluating Transformer Architecture for The Game of Chess

Name
Raiko Marrandi
Abstract
Transformers are state-of-the-art natural language processing models that have shown success in a variety of areas not directly related to natural language. This work evaluates the learning capabilities of transformers in the game of chess. The models are trained on an unannotated dataset of played chess games encoded in Forsyth-Edwards Notation (FEN), and their performance is compared with that of models trained on the less comprehensive datasets used in prior research. The findings show that the models fail to generalize on the richer FEN dataset and perform worse than the control models across all evaluation metrics.
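For readers unfamiliar with the notation mentioned in the abstract, the sketch below is a purely illustrative aside, not taken from the thesis: it shows what a FEN string looks like and one hypothetical, character-level way such a position could be tokenized for a sequence model. The function name fen_to_tokens and the tokenization scheme are assumptions for illustration only.

```python
# Illustrative sketch only: the thesis does not specify this tokenization.

# A FEN string encodes a single chess position, e.g. the starting position:
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def fen_to_tokens(fen: str) -> list[str]:
    """Split a FEN string into simple character-level tokens.

    Digit runs (counts of empty squares) are expanded so every board
    square maps to exactly one token, keeping sequence lengths uniform.
    """
    board, side, castling, _en_passant, _halfmove, _fullmove = fen.split()
    tokens = []
    for ch in board:
        if ch.isdigit():
            tokens.extend(["."] * int(ch))   # '.' marks an empty square
        elif ch != "/":
            tokens.append(ch)                # piece letters: K, q, N, ...
    tokens.append(side)                      # side to move: 'w' or 'b'
    tokens.extend(list(castling))            # castling rights, e.g. KQkq
    return tokens

if __name__ == "__main__":
    print(fen_to_tokens(START_FEN)[:16])
```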
Graduation Thesis language
English
Graduation Thesis type
Bachelor - Computer Science
Supervisor(s)
Eduard Barbu
Defence year
2023
 