This article aims to show how interpreter trainers holistically grade student performances. For this purpose, experimental rating sessions were held with four trainers of undergraduate interpreters. The raters were asked to think aloud while holistically assessing six recordings of consecutive interpretation, verbalising their quality judgments as they made them. Their concurrent verbal reports, along with reflective reports, interview transcripts, and video recordings of computer screen activity, were collected and analysed in detail. The findings reveal various facets of interpreting performance assessment, including the procedures the raters followed, the aspects of the performance they focused on, the criteria they relied on when making judgment decisions, and why two ratings of the same performance diverged. The article also presents a tentative model for the holistic rating of consecutive interpretation.