What is it about?
We took state-of-the-art tools designed to detect when a language learner mispronounces a word and built them into an app that taught beginner French through dialogues. Absolute beginners of French used the app for about an hour, and we graded their improvement. We found that a state-of-the-art mispronunciation detection tool and a tool that simply guessed at which words were mispronounced led to similar improvements in learners' pronunciation. The guessing tool's feedback also agreed more closely with teachers' feedback.
Why is it important?
When building AI software, engineers often focus on technical goals that resemble their real-life counterparts but are easier to conceptualize and measure. For pronunciation training, we might end up trying to detect a learner's mispronunciations with 100% accuracy. When a teacher engages a learner, however, she must balance a variety of language goals while keeping her feedback consistent and focused. That the guessing tool kept up with the state-of-the-art system is evidence that these technical goals are not as well aligned with real-world performance as we might have thought. We advocate instead for evaluating language learning tools by how well they serve their intended task, rather than by what is convenient from an engineering perspective.
Read the Original
This page is a summary of: Designing Pronunciation Learning Tools, April 2018, ACM (Association for Computing Machinery), DOI: 10.1145/3173574.3173930.