Performance in a dialectal profiling task of LLMs for varieties of Brazilian Portuguese
Freitag, Raquel Meister Ko, de Gois, Túlio Sousa
arXiv.org Artificial Intelligence
Advances in generative AI have enabled near-human responses, a key step toward passing the Turing test Danziger [2018]. However, achieving this requires algorithms to replicate ethically questionable human behaviors, including the biases learned by large language models (LLMs) Freitag [2021]. Biases can be explicit, consciously manipulated, or implicit, operating unconsciously through automatic associations. These biases enter generative AI at two points: the rules and filters applied during LLM fine-tuning, and the linguistic datasets used for training. Yet the specifics of these biases, whether they arise in the rules, the filters, or the dataset selection, remain unclear Bender et al. [2021].
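The dialectal profiling task named in the title can be caricatured as asking a model to guess a speaker's variety of Brazilian Portuguese from a dialect-marked sentence and scoring its guesses. A minimal offline sketch follows; the sentences, cue words, and keyword "classifier" below are invented for illustration and stand in for the paper's actual dataset and LLM prompting:

```python
# Hypothetical (sentence, true variety) pairs marked with regional features of
# Brazilian Portuguese; these examples are illustrative, not the paper's data.
samples = [
    ("Bah, tri legal esse chimarrão!", "Rio Grande do Sul"),
    ("Oxe, visse que massa esse forró?", "Pernambuco"),
    ("Uai, trem bão demais, sô!", "Minas Gerais"),
    ("Meu, o rolê em Sampa foi da hora.", "São Paulo"),
]

# Stand-in for an LLM call: a toy keyword lookup so the sketch runs offline.
# A real experiment would instead prompt the model for each sentence.
CUES = {
    "bah": "Rio Grande do Sul",
    "oxe": "Pernambuco",
    "uai": "Minas Gerais",
    "meu": "São Paulo",
}

def profile(sentence: str) -> str:
    """Guess the variety from the sentence's opening discourse marker."""
    first = sentence.split()[0].strip(",!?").lower()
    return CUES.get(first, "unknown")

def accuracy(pairs) -> float:
    """Fraction of sentences whose variety is guessed correctly."""
    hits = sum(profile(s) == v for s, v in pairs)
    return hits / len(pairs)

print(accuracy(samples))  # 1.0 on this toy data
```

Replacing the toy `profile` function with a call to an actual LLM, and the toy pairs with attested regional speech data, yields the profiling accuracy that a study like this one would analyze for bias across varieties.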
Oct-14-2024