Optical Character Recognition
Optical Character Recognition (OCR) is a technology that converts different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. OCR systems as we know them were developed in the 1970s. Over the past four decades, OCR technology has evolved well beyond the criteria originally used to evaluate it. OCR products currently on the market offer near-perfect conversion accuracy, recognize dozens of fonts and languages, and can handle low-contrast and poorly lit source material. OCR can undoubtedly be used effectively by students, and it can help address the specific needs of students with reading deficits.
This type of software has been shown to help college students with reading disabilities complete a reading comprehension task more effectively than using a human reader or no assistance at all; the more severe the disability, the greater the benefit of the technology.
While many different OCR apps are available, many actually use the same back-end programming; those programs are compared below.
Research Rating: Because the information cited in this description comes from experimental research, it can be considered valid and reliable.
Quickly converts inaccessible text into accessible formats and prepares documents for text-to-speech (TTS)
OCR apps are available in both computer-based and mobile-based versions; consider the student's environment and the format of the material when comparing the options in the two charts below.
Special Consideration: Workflow
Exact prices change frequently, which is why only approximate ranges are listed.
$ - Under $5
$$ - Between $6 and $50
$$$ - Between $51 and $250
$$$$ - Over $250
Higgins, E. L., & Raskind, M. H. (1997). The compensatory effectiveness of optical character recognition/speech synthesis on reading comprehension of postsecondary students with learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 8(2), 75-87.
Written by Harrison McNaughtan, Last Revision May 2018