Judy F. Chen & Clyde A. Warden


Teachers of English as a foreign language (EFL) face numerous difficulties that computer technology may be especially well situated to relieve. An apparently intractable problem, especially in Asian language classrooms, is the combination of large class sizes and low skill levels. If computer software could assist in the correction of assignments, the time saved could be redirected to teaching and to more effective teacher/student interaction. However, when one explores the application of computer technology to error correction, and to language learning in general, it is never long before the first qualifier is raised: the technology simply has not reached a high enough level yet to do _____. The blank can be filled in with almost anything. No matter what language-learning task one wishes to apply computer software to, it would always work much better if the software were only more advanced. While the present level of technology is nowhere near perfect, automated correction in the EFL/ESL field has met with some success. This paper reviews the application of error correction software in EFL/ESL research and finds that while the software is far from flawless, successful implementation of automated error correction depends more on the role it is expected to play in the classroom than on its ability to replicate the teacher's intelligence.


The key to perfect computer-based language error correction is machine intelligence, or AI (artificial intelligence). This goal is difficult to define, but progress is being made every year. There is little doubt that the better the AI characteristics software can exhibit, the more useful its application in language learning and teaching environments can be. Harrington (1996) points out that intelligent computer-assisted language learning software is made up of three parts: the domain knowledge (often the L2 grammar), a student model (which tracks what the student knows), and an instructional component (tasks and activities). It is the domain knowledge that really separates intelligent software from more traditional types of learning software. The ability to understand the underlying meaning and correctness of a student's response is central to creating accurate feedback rather than short, canned comments.

In the attempt to build a better domain knowledge base, many language researchers have been writing software, called parsers, intended to understand language. The most accessible approach involves the use of corpora: collections of words that represent the vocabulary commonly used in certain types of writing. Liou (1992) used this approach to build a large knowledge base from which language learning software could be developed.
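The frequency-counting idea behind such corpus building can be sketched briefly; the sample essays and the word pattern below are invented for illustration and are not from any of the cited systems:

```python
from collections import Counter
import re

def build_corpus_frequencies(texts):
    """Count word frequencies across a collection of texts."""
    counts = Counter()
    for text in texts:
        # Lowercase and extract word-like tokens (letters and apostrophes).
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(words)
    return counts

# Two invented learner sentences standing in for a real corpus.
essays = [
    "The student write a essay yesterday.",
    "The essay was write by the student.",
]
freq = build_corpus_frequencies(essays)
print(freq.most_common(3))
```

From such counts, software can report which words and patterns occur most often in a given kind of writing, which is the raw material for the knowledge bases discussed above.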

Once a knowledge base is built, software can be written to compare student input with it. Such programs are often not a single program but a group of large programs that use huge amounts of memory and take long periods of time to process data (Coniam, 1991). Efforts to create small, fast programs to parse sentences have been documented (Coniam, 1991; Webster, 1991; Baldry et al., 1991; Xu, 1994), but the search for a completely reliable program with true understanding of the English language has proven elusive.

One of the major difficulties a computer faces in understanding language is that meaning exists at so many levels. As Peng (1993) points out, focusing on the character level is achievable and is well executed in many spell checkers. Such an approach does not, however, look at phrase-level meanings. Additionally, since we are discussing language software in the context of teaching and learning, one of the basic assumptions is that non-proficient writers will use the software. This leads to the largest stumbling block for such software: parsing.
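Character-level checking of the kind Peng describes can be approximated by comparing input words against a word list using edit distance; the word list below is an invented stand-in, not taken from any of the cited systems:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative dictionary; a real checker would carry tens of thousands of words.
WORD_LIST = {"receive", "research", "teacher"}

def suggest(word, max_dist=2):
    """Character-level check: suggest dictionary words within max_dist edits."""
    return sorted(w for w in WORD_LIST if edit_distance(word, w) <= max_dist)

print(suggest("recieve"))
```

Note that this technique operates purely on characters: it can propose "receive" for "recieve", but it knows nothing about whether the word fits the phrase around it, which is exactly the limitation noted above.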

A program can break input into individual vocabulary words and check spelling and the most basic allowable combinations (a pre-parse stage); however, when an attempt is made to truly parse the sentence, any error will stop the program in its tracks (Bolt, 1991; Tschichold et al., 1994). Parsing a sentence calls for creating a parse tree that matches all of its structures against allowable linguistic structures. When a word has been misspelled, or a verb tense used incorrectly, how can the parsing continue? While a teacher can usually guess what a student was trying to say, such guessing is not in the nature of computers. Even teachers can have difficulty deciphering what a student wanted to express in muddled writing.
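Why a single error halts a parse can be seen even in a toy example; the lexicon and the one sentence pattern below are invented for illustration and are vastly simpler than any real parser:

```python
# Tiny illustrative lexicon mapping words to parts of speech.
LEXICON = {
    "the": "DET", "a": "DET",
    "student": "NOUN", "essay": "NOUN",
    "writes": "VERB", "reads": "VERB",
}

def parse_simple_sentence(sentence):
    """Try to match the pattern DET NOUN VERB DET NOUN.
    Returns the parse as (word, tag) pairs, or None on failure."""
    expected = ["DET", "NOUN", "VERB", "DET", "NOUN"]
    words = sentence.lower().rstrip(".").split()
    if len(words) != len(expected):
        return None
    parse = []
    for word, tag in zip(words, expected):
        # A misspelled or unknown word stops the parse in its tracks.
        if LEXICON.get(word) != tag:
            return None
        parse.append((word, tag))
    return parse

print(parse_simple_sentence("The student writes the essay."))  # succeeds
print(parse_simple_sentence("The studnet writes the essay."))  # one typo fails
```

The second sentence differs by a single transposed letter, yet the parse returns nothing at all; a teacher would simply read past the typo.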

What we are left with are numerous programs that can do some things well, but none that can do it all. In the area of text description and corpus creation, software is well developed and proving very useful. Teachers can use software to find commonly occurring patterns and then use the feedback in class (Tribble, 1991; Pickard, 1994; Ma, 1994; Shillaw, 1994; Li & Pemberton, 1994; Milton & Tsang, 1993; McEnery & Wilson, 1993), or to inform teaching pedagogy. Programs that address vocabulary building are also well adapted to the computer. Since computers excel at search-and-match techniques in large databases, the ability to quickly find the many lexical meanings of a word has been achieved by numerous programs (Clarke, 1992; Goodfellow, 1993).
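The search-and-match strength referred to here is essentially what concordancing software exploits; a minimal keyword-in-context sketch, with invented sample sentences, might look like:

```python
def concordance(texts, keyword, width=3):
    """Keyword-in-context: show each occurrence of `keyword`
    with `width` words of context on either side."""
    lines = []
    for text in texts:
        words = text.lower().rstrip(".").split()
        for i, w in enumerate(words):
            if w == keyword:
                left = " ".join(words[max(0, i - width):i])
                right = " ".join(words[i + 1:i + 1 + width])
                lines.append(f"{left} [{keyword}] {right}")
    return lines

# Invented sentences standing in for a student-writing corpus.
sample = ["The student depends on the teacher.",
          "Success depends on practice."]
results = concordance(sample, "depends")
for line in results:
    print(line)
```

A teacher scanning such output can quickly see, for example, which prepositions students actually use after a given verb, which is the kind of commonly occurring pattern the studies above put to use in class.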


While we wait for perfection in AI software, many computer programs are making their way into the EFL writing class. Such opportunities have not led to widespread adoption on the part of ESL teachers, most often because commercially available programs do not go beyond simple drills (often referred to as drill & kill). Most EFL teachers nevertheless stay fairly optimistic about CALL, mostly because its economic benefits are so clear, as pointed out by Donaldson and Morgan (1994, p. 43):

. . . to maximize the effectiveness of classroom time and obtain optimal results from each moment of preparation time. The solution for us has been to expand the arsenal of available aids by employing CALL.

The novelty and game orientation of some drill-based software can help motivate students (Healey, 1992), even if the actual drills are not particularly helpful. At the other extreme are word processing (text-manipulation) programs, which have been around for quite a while. Kenning (1991), confirming what others have found, showed in her study of student preferences that learners prefer text-manipulation software over other structured programs. The benefits computer software brings to writing are especially important for EFL students (Bernard, 1993), who are struggling with their writing. Computers allow these students to concentrate on the process of writing, not on the mechanics of a typewriter or the problems of handwriting.

In contrast to drill & kill exercises, programs that allow students to edit text are clearly superior. The next step towards automation is bringing some of the more basic, refined abilities of AI to bear on open-ended text exercises. The result can be very helpful in relieving teachers' burden and increasing feedback to students. Jamieson et al. (1993) had students complete open-ended writing tasks. The assignments were scored first by computer and then by humans, with a resulting score correlation coefficient of .90, confirming that computers can score open-ended tasks. The software checked mainly for key phrases within students' writing to make sure the development of the writing structures was correct. While not checking for character-level errors or conceptual problems, such an application of computer software reveals a road to better uses of technology in EFL.
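The key-phrase approach can be illustrated with a short sketch; the phrases, the sample answer, and the proportional scoring scheme below are invented stand-ins, not the actual routine Jamieson et al. used:

```python
def score_response(text, key_phrases):
    """Score an open-ended answer by the fraction of expected
    key phrases that appear in it (a crude stand-in for a real
    scoring routine). Returns (score, matched_phrases)."""
    found = [p for p in key_phrases if p.lower() in text.lower()]
    return len(found) / len(key_phrases), found

# Invented writing task: describe how to make tea.
answer = ("First, boil the water. Then add the tea leaves "
          "and let them steep for three minutes.")
keys = ["boil the water", "add the tea", "steep"]
score, matched = score_response(answer, keys)
print(score, matched)
```

Because the check operates on whole phrases rather than individual characters, it can confirm that the expected structure of the writing is present while remaining blind to spelling and conceptual errors, exactly the trade-off noted above.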

While benefits do accrue from the use of word processors in writing classes (Daiute, 1983; Fisher, 1983; Gula, 1983; Wresch, 1988), EFL class settings often face numerous constraints. Lam and Pennington (1995) addressed this problem in their study of Hong Kong secondary-level students. All software use was limited by Commodore PCs with single 360K disk drives, meaning that the operating system as well as any CALL software had to fit onto one 360K disk. Simple editing software was used, and 18 essays were prepared by two groups, one writing on computer and one by hand. The result was that even with minimal hardware and software, the group using the computers showed significantly better writing in organization, vocabulary, language use and mechanics.

The direction pointed to here is one of adaptation to the specific needs of EFL students. While computers can play many roles in native English speakers' classrooms, teachers overseas are required to adapt and make computers fit in a more fundamental way. Advanced networked computer systems, such as Daedalus, may work well for native English speakers, or advanced English majors (Downs-Gamble, 1994), but at the lower EFL levels we may need to depend more on the fundamental advantages computers and text-manipulation software can bring to our students. As Ellen Lange (1993) points out:

...my own experiences all indicate that grammar checkers written for mainstream composition students are not compatible with ESL students' needs. As concerned composition teachers, even if we are not software writers, we can at least use the computer for grammar exercises to teach our students to become better independent editors of their own texts.

Yet ESL composition instructors can address this situation by creating or adapting their own highly effective grammar-based exercises for the computer. The way students work on grammar-based exercises on the computer is what I believe helps ESL composition students most. It closely resembles the editing process we want them to use in their own writing. For example, while doing these exercises on the computer, students change, puzzle over, and think through possible answers just as we want them to do when they are writing a text on the computer.

It does appear that the most useful application of computer software in EFL classes lies neither in the most advanced AI arena nor in the most basic drill & kill exercises, but in a middle ground that brings the fundamental benefits of computers without requiring prohibitively expensive machines, while addressing some of the medium-level concerns of EFL teachers and students: systems that allow students to be semi-autonomous in the process of their writing but still give guidance and feedback. Ken Hyland (1993) sounds a word of caution over CALL optimism. He points out that unrealistic expectations may lead to a repeat of the language lab fiasco, the results of which we still live with in Taiwan today. Twenty years after the movement began, and more than ten years past its theoretical downfall, nearly every school in Taiwan has a language lab that few credit with having any appreciable impact on students' ability.


Application of grammar checking software is a logical step in CALL; however, due to the above-cited shortcomings in software and the lack of hardware in most EFL settings, such application has not become widespread. Numerous commercially available programs claim to check grammar errors in English writing. Bolt (1992) has described most of these programs in detail, including Correct Grammar, Right Writer, Grammatik, CorrecText, Reader, Power Edit and LINGER. While these programs make differing demands on hardware, most do run on a PC. Bolt points out the very important characteristic of transparency, which he defines as the degree to which the program's underlying functions and logic can be seen and changed. Grammatik was found to offer the greatest access to its rule base, as well as allowing existing rules to be changed or new rules to be added.

Healey (1992, p. 14) also examined such programs and attempted to add some new rules to Grammatik, but found that such an exercise required considerable work on the part of the teacher. She did go on to find that "though the grammar checker may not find every error, its work in 'consciousness raising' can be very helpful for language learners." Brock (1990), teaching in Hong Kong, also found that modifying Grammatik was helpful when rules were programmed for some common errors of Cantonese speakers learning English.

Garton and Levy (1994), using a later version of Grammatik, version 5, found it to be much improved over earlier versions. Although on first use Grammatik 5 seems very inaccurate, after modification it improves greatly. In their study, Garton and Levy gathered a large database of students' writing and ran some documents through the Grammatik grammar/style checker. The results directed them towards which rules to turn off, because those rules were inaccurate or did not apply to EFL students, and towards the creation of new rules to find errors in the students' writing that the program had missed. After the changes were made, the database could again be used to verify the accuracy of the new rules programmed into Grammatik. While the computer could not replace a teacher or even a tutor, it could be a useful tool in helping to raise students' awareness. More recently, new versions of Grammatik, as well as other grammar checkers, have moved away from programmability, and while their default modes often outperform earlier versions, they are not well suited for EFL research.
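The customization workflow described here, turning unhelpful rules off and adding new ones, can be mimicked with a small rule table; the regular-expression rules below are invented illustrations, not Grammatik's actual rules:

```python
import re

# Each rule: name -> (compiled pattern, message shown to the student).
# Rules can be disabled or new ones added, mirroring the kind of
# customization Garton and Levy performed.
RULES = {
    "double_the": (re.compile(r"\bthe the\b", re.I),
                   "Repeated article."),
    "a_vowel":    (re.compile(r"\ba [aeiou]\w*", re.I),
                   "'a' before a vowel sound; consider 'an'."),
}
ENABLED = {"double_the", "a_vowel"}

def check(text, rules=RULES, enabled=ENABLED):
    """Run every enabled rule over the text; return (rule, message, match) hits."""
    hits = []
    for name in sorted(enabled):
        pattern, message = rules[name]
        for m in pattern.finditer(text):
            hits.append((name, message, m.group()))
    return hits

hits = check("She wrote a essay about the the school.")
for hit in hits:
    print(hit)
```

Disabling a rule is just removing its name from the enabled set, and verifying a new rule against a database of student writing amounts to running `check` over the stored essays and inspecting the hits, which is essentially the cycle Garton and Levy describe.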

Liou (1991, 1992, 1993a, 1994) has performed a number of experiments using Grammatik as well as custom-designed software to gauge its impact on EFL students in Taiwan. Although the studies usually involve small samples, the results tend to be positive, showing that groups using CALL perform somewhat better than those not using it. When grammar-oriented CALL was applied in a process-oriented class setting, Liou (1993b, p. 25) found that the CALL group was able to rectify more of its errors during redrafts and made fewer errors than the non-CALL group:

It is evident that subjects (non-CALL) were not able to correct most of their mistakes by themselves even after some devices to raise their consciousness as to form, such as marks, were used.

The inability of EFL students to overcome some errors was also observed by Dalgish (1991) in his search for the common errors of students learning English in Sweden. The same topic was pursued by Brehony and Ryan (1994), with the understanding that an EFL learner's mistakes often reflect the usage or structure of his/her native language. These interlingual errors can be addressed by CALL precisely because they can be easily identified and then codified in software. Simple matching procedures can be used to flag such common errors.
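Such simple matching can be sketched directly; the two transfer patterns below are hypothetical examples of L1-influenced errors, not drawn from the studies cited:

```python
import re

# Hypothetical patterns for common L1-transfer errors, each paired
# with a short explanation for the student.
TRANSFER_PATTERNS = [
    (re.compile(r"\bdiscuss about\b", re.I),
     "'discuss' takes a direct object: drop 'about'."),
    (re.compile(r"\balthough\b[^.]*\bbut\b", re.I),
     "Use 'although' or 'but', not both."),
]

def flag_transfer_errors(sentence):
    """Return the explanation for every transfer pattern found in the sentence."""
    return [msg for pat, msg in TRANSFER_PATTERNS if pat.search(sentence)]

flags = flag_transfer_errors(
    "Although the test was hard, but we will discuss about it.")
for msg in flags:
    print(msg)
```

Because each pattern targets one known, recurring error, no parsing is required: the procedure works even on otherwise muddled sentences, which is what makes interlingual errors such a tractable target for CALL.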


Overall, we can see expressed in the literature a great disappointment that computers cannot be as intelligent as we would like. However, with modest goals in mind, most researchers are generally positive towards CALL. Of special interest are the findings that simply using computers in the language classroom appears to improve students' abilities as well as their attitudes. Grammar/style checking software can be modified to fit specific EFL settings better, and may actually do a good job of consciousness raising as well as of addressing errors (mostly carried over from the L1) that are difficult to treat under normal classroom conditions. Caution should always be exercised, lest we bite off more than we can chew. CALL should answer questions that do not already have easily accessible answers (Pennington, 1992), rather than simply adopting every new technology.


Baldry, A., Piastra, M. & Bolognesi, R. (1991). A new parsing technique for building CALL applications on small computers. Computer Assisted Language Learning, 4(3), 183-189.
Bernard, S. (1993). ESL/EFL process writing with computers. CAELL Journal, 4(2), 16-22.
Bolt, P. (1991). eL: A computer-based system for parsing and correcting written English. Computer Assisted Language Learning, 4(3), 173-182.
Bolt, P. (1992). An evaluation of grammar-checking programs as self-help learning aids for learners of English as a foreign language. Computer Assisted Language Learning, 5(1-2), 49-91.
Brehony, T., & Ryan, K. (1994). Francophone stylistic grammar checking (FSGC) using link grammars. Computer Assisted Language Learning, 7(3), 257-269.
Brock, M. N. (1990). Customizing a computerized text analyzer for ESL writers: Cost versus gain. CALICO Journal, 8, 51-60.
Clarke, M. (1992). Vocabulary learning with and without computers: Some thoughts on a way forward. Computer Assisted Language Learning, 5(3), 139-146.
Coniam, D. (1991). A simple syntax analyzer? In J. Milton & K. Tong (Eds.), Text analysis in computer assisted language learning, 132-138. Hong Kong: Hong Kong University of Science & Technology.
Daiute, C. A. (1983). The computer as stylus and audience. College Composition and Communication, 32(2), 134-145.
Dalgish, G. (1991). Computer-assisted error analysis and courseware design: Applications for ESL in the Swedish context. CALICO Journal, 9(2), 39-56.
Donaldson, R. & Morgan, L. (1994). Making the most of scarce resources: A small college language department's experience with HyperCard. CALICO Journal, 11(4), 41-59.
Downs-Gamble, M. (1994). The Daedalus integrated writing environment: Interactive humanities education. Computer Assisted Language Learning, No. 7 (July), 4-5.
Fisher, G. (1983). Word processing – Will it make all kids love to write? Instructor and Teacher, 92(6), 87-88.
Garton, J., & Levy, M. (1994). A CALL model for a writing advisor. CAELL Journal, 4(4), 15-20.
Goodfellow, R. (1993). CALL for vocabulary requirements, theory & design. Computer Assisted Language Learning, 6(2), 99-122.
Gula, R. J. (1983). Beyond the typewriter: An English teacher looks at the word processor. Independent School, 42(3), 41-43.
Harrington, M. (1996). Intelligent computer-assisted language learning. ON-CALL, 10(3), 2-9.
Healey, D. (1992). Where's the beef? Grammar practice with computers. CAELL Journal, 3(1), 10-16.
Hyland, K. (1993). ESL computer writers: What can we do to help? System, 21(1), 21-30.
Jamieson, J., Campbell, J., Norfleet, L., & Berbisada, N. (1993). Reliability of a computerized scoring routine for an open-ended task. System, 21(3), 305-322.
Kenning, M. (1991). CALL evaluation: The learner's view. Computer Assisted Language Learning, 4(1), 21-27.
Lam, F., & Pennington, M. (1995). The computer vs. the pen: A comparative study of word processing in a Hong Kong secondary classroom. Computer Assisted Language Learning, 8(1), 75-92.
Lange, E. (1993). Using computer-based grammar exercises in ESL composition classes. CAELL Journal, 4(3), 15-19.
Li Siu-leung, E. & Pemberton, R. (1994). An investigation of students' knowledge of academic and subtechnical vocabulary. In L. Flowerdew & A.K.K. Tong (Eds.), Entering text, 183-196. Hong Kong: The Hong Kong University of Science & Technology.
Liou, H. (1991). Development of an English grammar checker: A progress report. CALICO Journal, 9(2), 57-70.
Liou, H. (1992). An automatic text-analysis project for EFL writing revision. System, 20(4), 481-492.
Liou, H. (1993a). Integrating text-analysis programs into classroom writing revision. CAELL Journal, 4(1), 21-27.
Liou, H. (1993b). Investigation of using text-critiquing programs in a process-oriented writing class. CALICO Journal, 10(4), 17-38.
Liou, H. (1994). Practical considerations for multimedia courseware development: an EFL IVD experience. CALICO Journal, 11(3), 47-74.
Ma Ka-Cheung, B. (1994). Learning strategies in ESP classroom concordancing: An initial investigation into data-driven learning. In L. Flowerdew & A.K.K. Tong (Eds.), Entering text, 197-214. Hong Kong: The Hong Kong University of Science & Technology.
McEnery, T. & Wilson, A. (1993). The role of corpora in computer-assisted language learning. Computer Assisted Language Learning, 6(3), 233-248.
Milton, J. & Tsang, E. (1993). A corpus-based study of logical connectors in EFL students' writing: Directions for future research. In R. Pemberton & E.S.C. Tsang (Eds.), Studies in lexis, 215-246. Hong Kong: The Hong Kong University of Science & Technology.
Peng, Y. (1993). Answer markup on computer assisted language learning. CALICO Journal, 10(3), 31-40.
Pennington, M. (1992). Beyond off-the-shelf computer remedies for student writers: Alternatives to canned feedback. System, 20(4), 423-437.
Pickard, V. (1994). Producing a concordanced-based self-access vocabulary package: Some problems and solutions. In L. Flowerdew & A.K.K. Tong (Eds.), Entering text, 215-227. Hong Kong: The Hong Kong University of Science & Technology.
Shillaw, J. (1994). Using a corpus to develop vocabulary tests. In L. Flowerdew & A.K.K. Tong (Eds.), Entering text, 166-182. Hong Kong: The Hong Kong University of Science & Technology.
Tribble, C. (1991). Applications of stylistics in English language teaching. In J. Milton & K. Tong (Eds.), Text analysis in computer assisted language learning, 158-166. Hong Kong: Hong Kong University of Science & Technology.
Tschichold, C., Bodmer F., Cornu, E., Grosjean, F., Grosjean, L., Kubler, N., & Tschumi, C. (1994). Detecting and correcting errors in second language texts. Computer Assisted Language Learning, 7(2), 151-160.
Webster, J. (1991). Text analysis using the functional grammar processor. In J. Milton & K. Tong (Eds.), Text analysis in computer assisted language learning, 139-157. Hong Kong: Hong Kong University of Science & Technology.
Wresch, W. (1988). Six directions for computer analysis of student writing. The Computing Teacher, 42(April), 13-16.
Xu, L. (1994). Smart Marker--an efficient GPSG parser. In L. Flowerdew & A.K.K. Tong (Eds.), Entering text, 144-156. Hong Kong: The Hong Kong University of Science & Technology.


Clyde Warden lives in Taiwan where he is an associate professor of English business communication. Research specialties include CALL (Computer Assisted Language Learning) and custom computer software development. E-mail: warden@dec8.cyit.edu.tw World Wide Web: http://www.cyit.edu.tw/~warden/ Judy Chen teaches business English in Taiwan where she is an associate professor. She has presented papers at both domestic and international conferences. Her main academic emphasis is on CALL as well as professional English. E-mail: jfc@rs1.occc.edu.tw World Wide Web: http://www.occc.edu.tw/~jfc/

Copyright (C) 1997-99 All rights reserved