What is it about?
This paper explores a fresh approach to live interlingual subtitling in Korea, aiming to streamline the process by merging two traditionally separate roles into one. Instead of relying on both a transcription specialist and a simultaneous interpreter, the proposed model uses respeaking, a technique in which a professional listens to the original speech and re-voices it in the target language, with speech recognition software converting that output into real-time subtitles. By comparing the current method with this new approach, the paper highlights the potential for respeakers to make live subtitling more efficient. To support this shift, it also suggests that training programs for translators and interpreters begin incorporating respeaking technology, preparing the next generation of professionals for this evolving landscape.
Why is it important?
The paper's call to integrate respeaking into the training of translators and interpreters is forward-thinking. It anticipates the evolving demands of the industry and would help ensure that future professionals are equipped with the skills needed to meet these challenges. This makes the paper not just a technical proposal, but a roadmap for adapting to the future of translation and interpreting services.
Read the Original
This page is a summary of: A model of live interlingual subtitling using respeaking technology, Babel: International Journal of Translation, October 2020, John Benjamins. DOI: 10.1075/babel.00182.jin.