From Priest to Doctor: Domain Adaptation for Low-Resource Neural Machine Translation Conference Proceeding

Overview

abstract

  • Many of the world's languages have insufficient data to train high-performing general neural machine translation (NMT) models, let alone domain-specific models, and often the only available parallel data are small amounts of religious texts. Hence, domain adaptation (DA) is a crucial issue faced by contemporary NMT and has, so far, been underexplored for low-resource languages. In this paper, we evaluate a set of methods from both low-resource NMT and DA in a realistic setting, in which we aim to translate between a high-resource and a low-resource language with access to only: a) parallel Bible data, b) a bilingual dictionary, and c) a monolingual target-domain corpus in the high-resource language. Our results show that the effectiveness of the tested methods varies, with the simplest one, DALI, being most effective. We follow up with a small human evaluation of DALI, which shows that there is still a need for more careful investigation of how to accomplish DA for low-resource NMT.

publication date

  • January 1, 2025

Date in CU Experts

  • February 18, 2025 6:50 AM

Full Author List

  • Marashian A; Rice E; Gessler L; Palmer A; von der Wense K

Full Editor List

  • Rambow O; Wanner L; Apidianaki M; Al-Khalifa H; Eugenio BD; Schockaert S

author count

  • 5

Additional Document Info

start page

  • 7087

end page

  • 7098