The Machine Translation Post-Editing Annotation System (MTPEAS)
A standardized and user-friendly taxonomy for student post-editing quality assessment
Machine translation post-editing quality evaluation has received relatively little attention in translation pedagogy to date. It is a time-consuming process that involves comparing three texts (the source text, the machine translation, and the student's post-edited text) and systematically identifying and classifying students' edits (or the absence thereof) to the machine translation (MT) output. As yet, there is no widely available, standardized, user-friendly annotation system for use in translator education. In this article, we address this gap by describing the Machine Translation Post-Editing Annotation System (MTPEAS). MTPEAS comprises a taxonomy of seven categories, each labelled in easy-to-understand terms:
Value-adding edits, Successful edits, Unnecessary edits, Incomplete edits, Error-introducing edits, Unsuccessful edits, and
Missing edits. We then assess the robustness of the MTPEAS taxonomy in a pilot study of 30 students’ post-edited texts and offer
some preliminary findings on students’ MT error identification and correction skills.
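For readers who want to tally these labels computationally, the seven-category taxonomy maps naturally onto a small data model. The sketch below is our own Python illustration, not part of the published system; the class and field names (`MTPEASCategory`, `Annotation`, `segment_id`, and so on) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class MTPEASCategory(Enum):
    """The seven MTPEAS annotation categories, as listed in the abstract."""
    VALUE_ADDING_EDIT = "Value-adding edit"
    SUCCESSFUL_EDIT = "Successful edit"
    UNNECESSARY_EDIT = "Unnecessary edit"
    INCOMPLETE_EDIT = "Incomplete edit"
    ERROR_INTRODUCING_EDIT = "Error-introducing edit"
    UNSUCCESSFUL_EDIT = "Unsuccessful edit"
    MISSING_EDIT = "Missing edit"


@dataclass
class Annotation:
    """One annotated span in a student's post-edited text (hypothetical record layout)."""
    segment_id: int           # segment compared across the three texts
    mt_span: str              # the relevant stretch of raw MT output
    student_span: str         # the student's version of that stretch
    category: MTPEASCategory  # evaluator's MTPEAS label


# Example: the student left an MT error untouched, so the edit is "missing".
example = Annotation(
    segment_id=3,
    mt_span="He runned home.",
    student_span="He runned home.",
    category=MTPEASCategory.MISSING_EDIT,
)
print(f"Segment {example.segment_id}: {example.category.value}")
```

A record layout like this also makes downstream steps, such as per-category counts or agreement comparisons between two evaluators, straightforward to compute.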
Article outline
- 1. Introduction: Post-editing quality evaluation in translator education
- 2. The Machine Translation Post-Editing Annotation System (MTPEAS)
- 2.1 The MTPEAS annotation system
- 2.2 The MTPEAS taxonomy
- 2.3 The MTPEAS decision tree: A user-friendly, graphical representation of the annotation system
- 3. Pilot Study: Data and methodology
- 4. Pilot Study: Results and discussion
- 4.1 Inter-rater agreement on segment identification
- 4.2 Inter-rater agreement on MTPEAS annotation
- 4.3 Preliminary findings
- 5. Concluding remarks
- Acknowledgments
- Notes
- References