Auto summary technology takes a passage and automatically condenses it to its most essential points, shortening the text while preserving its meaning. In function, auto summary is similar to Scalable Vocabulary, which also aims to increase readability. Auto summary tools can be mounted on mobile devices and typically do not require an internet connection. Auto summarization is based on the principle that condensed, critical information is essential for learning (Marzano, Pickering, & Pollock, 2001). Four features measure the quality of an auto summary tool: information preservation, cohesion, condensation, and redundancy (Huang, He, Wei, & Li, 2010).
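Of these four quality features, condensation and redundancy are the easiest to quantify mechanically. The sketch below is purely illustrative (these are not the metrics defined in the cited papers): condensation as the fraction of the original's words kept in the summary, and redundancy as the fraction of summary sentences that repeat an earlier sentence verbatim.

```python
# Illustrative only -- rough stand-ins for two of the four quality
# features (condensation, redundancy), not the measures from the
# cited literature.

def condensation(original: str, summary: str) -> float:
    """Fraction of the original's word count retained by the summary."""
    return len(summary.split()) / len(original.split())

def redundancy(summary: str) -> float:
    """Fraction of sentences that exactly repeat an earlier sentence."""
    sentences = [s.strip().lower() for s in summary.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return 1 - len(set(sentences)) / len(sentences)

original = "one two three four five six seven eight nine ten"
summary = "one two three"
print(condensation(original, summary))  # 0.3
```

A lower condensation ratio means a shorter summary; a redundancy score of 0 means no sentence is repeated word for word.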
Auto summary technology is improving, and there are dozens of tools available with frequent updates (Yang, 2011). Currently, the most effective auto summary tools reduce text size by identifying content-specific phrases and removing redundant statements (Gambhir & Gupta, 2016). These extractive methods do not link phrases and words to form new sentences. Summarizers that generate entirely new statements use abstractive methods, which require more complicated natural language processing algorithms (Yang & Wang, 2008). Because summarizers that can successfully replicate human abstraction are more useful to readers, some computer scientists have begun developing such tools (e.g., Yang & Wang, 2008).
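The extractive approach described above can be sketched in a few lines: score each sentence by the frequency of its words across the passage, keep the top-scoring sentences, and output them in their original order. This is a minimal illustration of the general technique, not the algorithm from any cited tool.

```python
# Minimal extractive summarizer sketch: frequency-scored sentence
# selection. Illustrative only -- not the method of any specific tool.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(s: str) -> float:
        # Average corpus frequency of the sentence's words.
        toks = re.findall(r"\w+", s.lower())
        return sum(freq[t] for t in toks) / len(toks) if toks else 0.0

    top = sorted(range(len(sentences)),
                 key=lambda i: score(sentences[i]),
                 reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in their original order.
    return " ".join(sentences[i] for i in sorted(top))

print(extractive_summary("Cats purr. Dogs bark loudly. Cats sleep. Birds fly."))
# Cats purr. Cats sleep.
```

Because every output sentence is copied verbatim from the source, no new statements are generated; an abstractive system would instead paraphrase and recombine the content.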
The effectiveness of auto summary tools has been tested experimentally (Yang et al., 2013). Yang and colleagues gave 25 participants passages to study on topics in which they had no background knowledge. Some participants received the original passage (1,500 words), while others received auto summarized versions of various lengths (400, 250, or 100 words). Participants with auto summary passages made more mistakes but completed their test in significantly less time: those with 400-word summaries (about 30% of the passage) made 10% more mistakes than the full-passage group yet finished just over twice as fast, and the shorter summaries produced still more mistakes and faster responses. The authors therefore recommend auto summaries of approximately 30% of the total text size to maximize efficiency without greatly sacrificing accuracy (Yang et al., 2013).
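The 30% recommendation can be applied as a simple word budget when building an extractive summary. The sketch below assumes sentences have already been ranked elsewhere and simply takes leading sentences until the budget is reached; it is an illustration of the ratio, not the procedure used by Yang et al. (2013).

```python
# Illustrative: accumulate sentences until the summary reaches roughly
# 30% of the original word count, the ratio recommended by Yang et al.
# (2013). Sentence ranking is assumed to happen elsewhere; here we
# naively take leading sentences in order.
import re

def summary_to_ratio(text: str, target_ratio: float = 0.3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    budget = target_ratio * len(text.split())  # word budget, e.g. 30% of total
    picked, used = [], 0
    for s in sentences:
        n = len(s.split())
        if used + n > budget and picked:
            break
        picked.append(s)
        used += n
    return " ".join(picked)
```

For a 1,500-word passage this budget works out to roughly 450 words, close to the 400-word condition that gave the best speed/accuracy trade-off in the study.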
A separate case study by Edyburn (2002) followed a single student with academic difficulties and poor reading and writing skills. The student was trained to use auto summary and asked to complete a research task. The student's writing grade improved from a C+ to an A, and the length of the student's written assignments increased from roughly 80 words to 180 words. Auto summary tools therefore appear capable of supporting reading and writing skills.
Research Rating: Because the information cited in this description comes from experimental research, it can be regarded as valid and reliable.
Almost no learning curve; users need only to copy and paste information
Effectively reduces size of documents while preserving meaning, which can reduce information processing load
Can be mounted onto mobile devices
Significantly faster than summarizing by hand
Some information is lost in the auto summary process, and no system will ever be perfect
Has been shown to increase errors when used to study for tests
Requires information in electronic text form
Summarization "by hand," that is, summarization done by the reader, has been shown to be an excellent study tool that significantly increases comprehension and test scores on related content (Friend, 2000). Part of this benefit comes from the act of creating the summary itself, which improves retention, so using auto summary does not produce the same effect. As such, auto summary is not recommended as a study tool for every child; it is best used when there is an excess of information that needs to be reduced.
A wide range of summarization algorithms (the "brain" of the summary tool) are in development, each experimenting with different methods. Once released, these algorithms have been used to build numerous free summary tools on various websites. According to Gambhir and Gupta (2016), extractive methods (pulling exact sentences and phrases) should be preferred over abstractive methods (reconstructing text) until the software matures.
Special Consideration: Workflow
Exact prices change frequently, which is why only approximate ranges are listed.
$ - Under $5
$$ - Between $6 and $50
$$$ - Between $51 and $250
$$$$ - Over $250
Edyburn, D. (2002). Research and practice. Journal of Special Education Technology, 17(4), 53-60.
Gambhir, M., & Gupta, V. (2016). Recent automatic text summarization techniques: A survey. Artificial Intelligence Review, 47(1), 1-66. doi:10.1007/s10462-016-9475-9
Huang, L., He, Y., Wei, F., & Li, W. (2010). Modeling document summarization as multi-objective optimization. In Proceedings of the Third International Symposium on Intelligent Information Technology and Security Informatics (pp. 382-386).
Friend, R. (2000). Teaching Summarization as a Content Area Reading Strategy. Journal of Adolescent & Adult Literacy, 44(4), 320-329.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement (2nd ed.). Alexandria, VA: ASCD.
Yang, C. C., & Wang, F. L. (2008). Hierarchical summarization of large documents. Journal of the American Society for Information Science and Technology, 59(6), 887-902. doi:10.1002/asi.20781
Yang, G., Chen, N.-S., Kinshuk, Sutinen, E., Anderson, T., & Wen, D. (2013). The effectiveness of automatic text summarization in mobile learning contexts. Computers & Education, 68, 233-243. doi:10.1016/j.compedu.2013.05.012
Yang, F. (2011). Study on core Technologies of Query-oriented Automatic Summarization. Procedia Engineering, 15, 3600-3603. doi:10.1016/j.proeng.2011.08.674
Written by Francis Wall, Last Revision May 2018