Around 2019, some of our international students at the University of Illinois College of Medicine (UICOM) met with me to talk about the automatic transcripts created for all of our video content in our video storage system, Echo360. We had turned on the transcripts feature in spring 2019 as a pilot to determine the accuracy of the built-in automatic speech recognition (ASR) in the program and figure out how to best provide students with additional ways to consume the videos.
What we discovered quickly is that the accuracy was sometimes quite poor. After discussions with Echo360 representatives, we learned that the system does not include a built-in dictionary for medical, pharmacologic, and certain higher-level scientific terms. For example, if a faculty member said ‘hypoallergenic’, the system might mishear the word and the transcript would show ‘hyper-allergenic’, which is incorrect.
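While a vendor-side dictionary is the real fix, errors like this can be partially cleaned up with a simple find-and-replace pass over the exported transcript text. The sketch below is purely illustrative: the term pairs are assumptions based on the kinds of errors we observed, and this is not a feature of Echo360 itself.

```python
import re

# Illustrative sketch: known ASR mis-hearings mapped to correct terms.
# These pairs are hypothetical examples, not an Echo360 feature.
CORRECTIONS = {
    "hyper-allergenic": "hypoallergenic",
    "hyper allergenic": "hypoallergenic",
}

def correct_transcript(text: str) -> str:
    """Replace each known mis-heard term, matching whole words case-insensitively."""
    for wrong, right in CORRECTIONS.items():
        pattern = re.compile(r"\b" + re.escape(wrong) + r"\b", re.IGNORECASE)
        text = pattern.sub(right, text)
    return text

print(correct_transcript("This soap is hyper-allergenic."))
# -> This soap is hypoallergenic.
```

A pass like this can only handle unambiguous substitutions; a word such as ‘eggs’ might be a genuine word in context, which is why a human editor listening to the lecture is still essential.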
Initially, some of these brave medical students tried to correct the transcripts themselves and soon discovered just how long the corrections can take. We determined that a better option, until Echo360 adds these dictionaries to its system, was to hire non-pre-med students to edit the transcripts. I worked with my colleague Dr. Elizabeth Balderas to interview and hire five students to do this work. This is still a work in progress, and we hope the company will eventually incorporate a medical dictionary.
Here is an example of how the system misheard a term: a faculty member said ‘ECG’, and the transcript rendered it as ‘eggs’. After listening to that part of the lecture, it was very clear he said ECG, not eggs.
It is imperative that we correct these transcripts. Some students have learning disabilities or other accommodations. Some faculty have thick accents because English is not their first language, and some students’ first language is also not English. We need to provide the best education to all students.