Closed captions are the primary assistive technology that allows individuals who are deaf or hard of hearing to watch television and movies. They consist of transcribed text that runs across the bottom of the screen and includes dialogue as well as other audible occurrences such as music, laughter, and sound effects. Captions can be produced by professionals live in the moment or overlaid on a pre-recorded video file; similarly, technology is available to caption videos live, or a pre-existing video file can be processed and captioned. This review focuses on the technology versions of this process; however, a quick Google search can yield many professional stenographers and captioners in your area.
Efficacy of Closed Captions
While some research focuses on populations with disabilities, much of the research on closed captions focuses on universal design and the benefits of captions to all users, both with and without disabilities. As access-for-all is increasingly mandated by law, companies are being required to provide services such as closed captions for all content rather than on an ad hoc basis. Furthermore, with online education becoming increasingly popular in post-secondary institutions, many researchers have focused on how closed captions improve academic performance. One such study showed that students using online video material with professionally developed captions scored higher on assessments than students using video materials without closed captions (Dallas, McCarthy & Long, 2016). Another study replicated these results and also asked for student perspectives on the matter; students reported that captions were useful in that they increased comprehension, helped with the spelling of keywords, and assisted with note taking (Morris et al., 2016).
Research on the effects of captions on students with disabilities reports similar findings. One study found that captions and other video accessibility features (e.g., described video) give students access to information they could not otherwise access (Rodriguez & Diaz, 2017). One review that compiled the results of over 100 empirical studies concluded that captioning a video improves attention to and comprehension of the video, as well as retention of its content. These effects were particularly strong for people watching videos that are not in their first language, for children and adults learning to read, and for people who are deaf or hard of hearing (Gernsbacher, 2015). So while we can be confident that closed captions benefit individuals with disabilities and typically developing users alike, how successful are the technologies available to create these captions? As professional captioning can be very expensive, it will be increasingly important that captioning software be accurate and easy to use if it is to become ubiquitous.
Live Captioning
Live captioning uses voice recognition software to provide captions as someone is talking. This technology is developing rapidly but is far from perfect. One study that gathered experiences of live captioning software from users who are deaf or hard of hearing found that accuracy and usability remain the most commonly reported issues (Kawas, Karalis, Wen & Ladner, 2016). Participants further detailed that currently available captioning technologies tend to limit students' autonomy in the classroom and can present a variety of user experience shortcomings, such as complicated setups and poor feedback on caption presentation (Kawas et al., 2016). One key requirement for live captioning is low latency: captions need to appear close in time to the visual and auditory presentation of the speech, and delays in this presentation can be troublesome (Lasecki et al., 2012). These authors reported that currently the only reliable source of live captions is stenographers, who must have extensive training, use special keyboards, and be hired in advance. While automatic live captioning is less expensive and available on demand, it has low accuracy, is highly sensitive to ambient noise, and requires speakers to train the system beforehand, which can render it unusable in many real-world contexts (Lasecki et al., 2012).
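To make the latency concern concrete, delay can be measured per caption as the gap between when a phrase is spoken and when its caption appears. A minimal sketch, with entirely hypothetical timestamps:

```python
# Sketch: quantifying live-caption latency.
# Each pair is (speech_onset_seconds, caption_display_seconds);
# all values here are invented for illustration.
events = [(0.0, 2.8), (3.5, 6.9), (7.2, 11.4), (12.0, 14.5)]

# Per-caption delay between speech and caption display.
latencies = [shown - spoken for spoken, shown in events]
mean_latency = sum(latencies) / len(latencies)

print(f"mean latency:  {mean_latency:.2f} s")
print(f"worst latency: {max(latencies):.2f} s")
```

Even a mean delay of a few seconds forces a viewer to reconcile captions with speech that has already moved on, which is why low latency is treated as a core requirement rather than a refinement.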
Automatic Video File Captioning
Automatic video captioning is a process in which a program takes an existing video file, transcribes the audio into written text, and then, with the user's assistance, overlays the appropriate text over the appropriate video sections. One example is the automatic captioning available on YouTube. The transcripts these programs output often contain errors that the user must correct manually (Gernsbacher, 2015). However, because this process is not as time sensitive as live captioning, this is less of an issue; some light editing is much less cumbersome than creating captions from scratch. The programs used to transcribe these videos and place captions on the appropriate video segments are constantly being developed and revised to increase accuracy (Hazen, 2006).
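Overlaying text on the appropriate video sections ultimately amounts to pairing each transcript segment with a start and end time. As a minimal sketch (the segment data is invented for illustration), timed segments can be serialized into the widely supported SubRip (.srt) caption format:

```python
# Sketch: writing timed transcript segments as a SubRip (.srt) caption file.
# Segments are (start_seconds, end_seconds, text); values are hypothetical.
segments = [
    (0.0, 2.5, "Welcome to the lecture."),
    (2.5, 6.0, "[upbeat music]"),
    (6.0, 9.75, "Today we cover closed captions."),
]

def srt_timestamp(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Serialize (start, end, text) tuples into numbered SRT cue blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(to_srt(segments))
```

The hard part that captioning programs automate is producing those timings in the first place (forced alignment of text to audio); once segments are timed, the file format itself is simple, which is why manual correction of a generated transcript is feasible.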
Research Rating: Because the information cited in this description comes from experimental research, it can be trusted as valid and reliable.
More affordable than professional captioning
Develops captions that are useful for students with and without disabilities
Web based editing programs are available for users to manually create captions for free
Transcriptions are not perfect; most contain errors that need to be manually corrected
The high quality programs are often very expensive
When buying closed-caption software, consider that while high-quality programs currently offer superior accuracy, improvements to free software are making it increasingly viable, especially in the years ahead.
Special Consideration: Workflow
Exact prices change frequently, which is why only approximate ranges are listed.
$ - Under $5
$$ - Between $6 and $50
$$$ - Between $51 and $250
$$$$ - Over $250
Dallas, B. K., McCarthy, A. K., & Long, G. (2016). Examining the Educational Benefits of and Attitudes toward Closed Captioning among Undergraduate Students. Journal of the Scholarship of Teaching and Learning, 16(2), 50-65.
Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1), 195-202.
Hazen, T. J. (2006). Automatic alignment and error correction of human generated transcripts for long speech recordings. In Ninth International Conference on Spoken Language Processing.
Kawas, S., Karalis, G., Wen, T., & Ladner, R. E. (2016, October). Improving real-time captioning experiences for deaf and hard of hearing students. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 15-23). ACM.
Lasecki, W., Miller, C., Sadilek, A., Abumoussa, A., Borrello, D., Kushalnagar, R., & Bigham, J. (2012, October). Real-time captioning by groups of non-experts. In Proceedings of the 25th annual ACM symposium on User interface software and technology (pp. 23-34). ACM.
Morris, K. K., Frechette, C., Dukes III, L., Stowell, N., Topping, N. E., & Brodosi, D. (2016). Closed Captioning Matters: Examining the Value of Closed Captions for "All" Students. Journal of Postsecondary Education and Disability, 29(3), 231-238.
Rodriguez, J., & Diaz, M. V. (2017). Media with Captions and Description to Support Learning among Children with Sensory Disabilities. Universal Journal of Educational Research, 5(11), 2016-2025.
Written by Harrison McNaughtan, Last Revision May 2018