Exploring Social Biases of Large Language Models
Organization name
Computational Social Science Group
Abstract
Pretrained language models, particularly large language models (LLMs) such as ChatGPT, have achieved remarkable success across a wide range of NLP tasks. However, considerable evidence shows that these models absorb the cultural biases present in their training data, unintentionally reinforcing biased patterns and potentially causing harm. This thesis investigates such biases across categories including race and gender, in both Estonian and English language models.
Thesis defense year
2024-2025
Supervisor
Ahmed Sabir
Language(s) of communication
English
Requirements for applicants
Level
Master's
Keywords
Application contact
Name
Ahmed Sabir
Phone
E-mail