Abstract
Assessing the reading comprehension ability of EFL learners has always been a contentious issue in language testing. Since reading comprehension is one of the most significant skills in learning a foreign language, it is essential for EFL teachers and experts to evaluate learners’ reading performance. They therefore continually search for and examine different methods to assess students’ reading comprehension ability more accurately.
It is sometimes argued that the established means of assessment may not measure the actual learning of some students. In other words, the tasks that follow reading passages can affect the way we evaluate our learners’ comprehension ability. We can therefore conclude that finding an appropriate method for estimating the reading ability of EFL learners is a crucial concern in language testing and teaching.
The literature offers many techniques for assessing reading comprehension in a foreign or second language context; however, none of them can be accepted as a perfect method for evaluating reading ability. The present paper introduces the different assessment means used to measure EFL learners’ reading comprehension. In other words, this article is a brief review of the literature on the most common techniques and their strengths and weaknesses in evaluating students’ reading comprehension.
Introduction
Reading comprehension is one of the skills crucial to second (SL) or foreign language (FL) learning among EFL learners. As Eskey (2005, 563) declared, many EFL students rarely need to speak the language in their day-to-day lives but may need to read it in order to “access the wealth of information” recorded exclusively in English. In fact, the ability to read the written language at a reasonable rate and with good comprehension has long been recognized as being as important as oral skills, if not more important (Eskey 1988).
There are a number of reasons for this. First, in most EFL situations, learning to read the FL is the ultimate goal of the students, because most of the time EFL learners aim to read for information or for study purposes. Second, extensive reading can enhance the process of language acquisition and provide good models for writing, and sometimes for learning vocabulary or idioms. Thus, we can claim that reading is of the utmost importance to both teachers and students (Richards and Renandya 2002).
As Grabe (1995) stated, reading for comprehension is the primary purpose of reading. EFL teachers should therefore try their best to teach students how to comprehend texts better. Reading can be defined as an interactive process between a reader and a text in which the reader constantly interacts with the text, trying to elicit its meaning by using both linguistic and semantic knowledge (Alyousef 2006).
Most research on reading now focuses on the effective reading strategies that increase students’ comprehension, and different methods are used to develop students’ reading comprehension ability (Chamot and O’Malley 1994); however, less attention is paid to the assessment techniques that should be used to analyze students’ strengths and weaknesses in reading the target language. Most studies (Mohan 1990; Carrell 1991; 1992; Devine 1993; Koda 1994) are concerned with different ways of teaching reading, whereas improving teaching practice and reading comprehension skills may require useful assessment techniques for evaluating our own teaching methods.
Many components are involved in the process of reading. Grabe and Stoller (2002) classified them into two different processes for proficient readers: lower-level (bottom-up) processes, related to grammar and vocabulary recognition, and higher-level (top-down) processes, related to comprehension, schemata, and the interpretation of a text. Thus, becoming a competent reader requires a combination of both processes. Brown (2004) stated that a proficient reader is one who has mastered fundamental bottom-up and top-down strategies as well as appropriate content and formal schemata.
The ultimate aim of teachers in reading comprehension classes is to identify successful and unsuccessful students through assessment practices. In the case of foreign language reading, evaluation should collect information about students’ reading abilities and then use that information to plan and implement better reading classes (Gersten 1999). Teaching reading comprehension and assessing it should therefore go hand in hand. Aweiss (1993) mentioned that assessment is a crucial element of successful instruction since it makes teachers aware of students’ learning and therefore makes it possible to prepare and apply more effective teaching methods. FL reading assessment should focus on recognizing the readers in the classroom so that non-proficient readers can receive more attention and proficient readers can further develop their abilities.
These days, however, there appears to be a certain intrinsic disagreement between the goals of student evaluation and its means. The aim is usually to assess the students’ learning ability in order to obtain enough information for more efficient instruction. The means, however, are often restricted to estimating the students’ present performance level. This challenge was recognized as early as 1934 by Vygotsky (1934/1986; see also Minick 1987; Kozulin 1998). Vygotsky believed that socially meaningful cooperative activity is the normal learning situation for a student. New cognitive functions and learning abilities emerge within this interpersonal interaction, and only later are they internalized and transformed into the student’s internal cognitive processes. Thus, under conditions of joint or assisted performance, students may show certain developing functions that have not yet been internalized.
Given the above, which task or method can serve as a good tool for measuring EFL students’ reading comprehension ability? Stavans and Oded (1993, 481) suggested that “the established means of assessment may not measure the actual learning of some students with particular learning styles. Recent studies on reading comprehension strategies used by unsuccessful language learners have revealed that some of these learners use the same kind of strategies at the same frequency as do successful learners. Yet their performance on reading comprehension assessments is appreciably lower.”
The aim of this study is to take a brief look at the literature to identify the different methods used to assess the reading performance of EFL learners in English classrooms. In other words, we analyze the strengths and weaknesses of testing methodologies in reading comprehension from the viewpoints of English experts and researchers in order to investigate whether task performance can be influential in helping students become successful readers.
The next section presents a short review of the literature concerning reading comprehension and its most common assessment techniques, which are the focus of this descriptive paper.
Review of the related literature
As Kobayashi (2002) maintained, if tests are designed to provide an exact measure of learners’ language abilities, examiners have to reduce the effect of confounding factors such as text organization and response format on test results. In her study of the results of reading comprehension tests taken by 754 Japanese university students, text organization and test format were found to have a significant influence on the students’ performance. When texts are clearly structured, the more proficient students obtain better scores in summary writing and on open-ended questions, whereas text structure has little effect on the performance of the less proficient students. This implies that coherent texts make it easier to differentiate between students of different proficiency levels. Kobayashi believed that examiners have so far paid little attention to the impact of text structure and test format on students’ results; by considering these factors, they can improve the validity of their tests.
According to Cross and Paris (1987), reading comprehension assessment serves three particular purposes. The first is sorting, which is used to predict a learner’s academic success or to show mastery of an instructional program. The second is diagnosing, which aims at collecting information about learners’ strategies and processes so that the teacher can choose the best instruction. The final purpose is evaluation, which refers to determining the effect of a program on a specific community.
Grabe and Stoller (2002) suggested that the main goal of foreign language reading evaluation should be to initiate assessment practices that include the following: fluency and reading speed; automaticity and rapid word recognition; search processes; vocabulary knowledge; morphological knowledge; syntactic knowledge; awareness of text structure and discourse organization; comprehension of main ideas; recall of relevant details; inferences about text information; strategic processing abilities; summarization; synthesis skills; evaluation; and, finally, critical reading. The authors emphasized that assessment tasks should be based on realistic needs and activities.
Aweiss (1993) claimed that assessment techniques range from the unstructured gathering of information throughout instruction to structured tests with clearly defined outcomes and guidelines for administration and scoring. Aebersold and Field (1997) distinguished forms of assessment that are informal, alternative, developmental, learning-based, and student-centered, including journals, portfolios, homework, teacher assessment, self-assessment, and peer assessment, from others that are formal, teacher-controlled, conventional, and regular, including quizzes and exercises. In a study of assessment instruments used for foreign language teaching, Frodden, Restrepo and Maturana (2004) categorized assessment instruments as hard and soft. Hard assessment instruments are traditional means that emphasize objectivity, precision, and reliability and consider the result rather than the process. Soft assessment instruments, on the other hand, are concerned with naturalistic, alternative, and purposeful ways of assessing.
Although there is a great variety of assessment and testing measures for evaluating reading ability, no single method can be regarded as the best, as Alderson (2000, 204) explained: “It is certainly sensible to assume that no method can possibly fulfill all testing purposes… certain methods are commonplace merely for reasons of convenience and efficiency, often at the expense of validity, and it would be naive to assume that because a method is widely used it is therefore valid”. Consequently, “it is now generally accepted that it is inadequate to measure the understanding of text by only one method, and that objective methods can usefully be complemented by more subjectively evaluated techniques. This makes good sense, since in real life reading, readers typically respond to texts in a variety of different ways” (Alderson 2000, 207).
Cioffi and Carney (1983) stated that typical assessment procedures are best at measuring students’ skill knowledge but are inadequate for estimating students’ learning potential and provide little help in recognizing the conditions under which development can take place.
In second language acquisition there is no single definition of ‘task’. As Bygate, Skehan and Swain (2001) suggested, task definitions are usually “context-free”; in other words, tasks are defined differently from different perspectives. For example, Bialystok (1990) and Pica et al. (1991) defined tasks as a way to meet criteria for information control, information flow, and the objectives of the study. Other researchers view tasks entirely in terms of classroom interaction; for example, tasks are viewed as products (Horowitz 1986) or as “real academic assignments” situated in a disciplinary context (Swales 1990). Crookes (1986, 1) defined a task as “a piece of work or activity, usually of a specified objective, undertaken as part of an educational course or at work.” Willis (1996, 53) defined a classroom task as “a goal-oriented activity in which learners use language to achieve a real outcome.” Nunan (1989, 10) regarded tasks as classroom work which “involves learners in comprehending, manipulating, producing, or interacting in the target language while their attention is principally focused on meaning rather than form.”
A third type of definition includes the perspectives of both the classroom and research. Skehan (1996) considered classroom and L2 research tasks to be activities that are meaning-focused, generally bear some resemblance to real-life language use, and whose success is assessed in terms of reaching an outcome.
Perfetti (1997) suggested that, depending on the types of texts used and the types of tasks carried out, readers may build up a complex combination of information that can be learned.
In upper-level foreign language courses, the ability of students to read articles and literary selections and to respond to them in an intuitive and critical manner plays an important role (Ruiz-Funes 1999a; 1999b). Writing a journal inspires students’ reflection on learning and thus improves learning (Todd et al. 2001). As Olson (2003) claimed, asking students to write a journal makes them more aware of the strategies they use in reading and writing. Journals help students to associate reading and writing by combining the two, allowing students to build their own meaning (Atwell 1987; Parsons 1990; Tierney and Shanahan 1991). Journals therefore give students an informal chance to increase their understanding of learning and help the teacher learn what each individual student is doing and thinking (Tierney and Readence 2000).
It is believed that the method used to evaluate reading comprehension affects how readers perform on a test of reading comprehension (Wolf 1993). Additionally, Alderson (2000) stated that there is no single best technique for testing reading. Common reading assessment measures include multiple-choice, written and oral recall, cloze, summary, sentence completion, short-answer, open-ended-question, true/false, matching, checklist, ordering, and fill-in-the-blank tests. Researchers have noted that the result of each individual assessment task presents an incomplete representation of reading comprehension (Alderson 2000; Bernhardt 1991; Brantmeier 2001). Thus, to see the complete picture and to be able to generalize research outcomes, various assessment tasks are needed (Bernhardt 1991). Correspondingly, Anderson, Bachman, Perkins, and Cohen (1991, 61) argued that “more than one source of data needs to be used in determining the success of reading comprehension test items.” Moreover, because test performance may be influenced by test method, Bachman (1990) considered it important to use various task types to reduce such effects.
A common method used to quantify L2 comprehension is the written recall task (Barnett 1988; Brantmeier 2001; 2003; Carrell 1983; Lee 1986a; 1986b; Maxim 2002). Bernhardt (1991) asserted that administering a free recall does not affect a reader’s understanding of the text in any way. She argued that with multiple-choice or open-ended questions, extra interaction exists among the text, the reader, the questioner, and the questions. When students are asked to write freely, they are not restricted by prearranged assessment tasks; in other words, the free-written recall acknowledges the role of the individual reader in meaning construction.
Multiple-choice questions are another common way of assessing learners’ reading comprehension because they are familiar to subjects and easy for researchers to score (Wolf 1993). Alderson (2000, 211) proposed that multiple-choice test items are so popular because they give testers the chance to control test-takers’ thought processes while responding; they “allow testers to control the range of possible answers.” Although preparing a multiple-choice test is time-consuming, it is easy to score. Weir (1990) also mentioned that multiple-choice questions are popular since they are completely objective. Statman (1988, 367) suggested that “multiple choice items which have the format of a question with one of four distracters giving the correct answer are a clearer, authentic and more valid way of testing the reading comprehension of foreign learners of English at university level than is the common format in which the testee has to complete a sentence stem by choosing one of four distracters.” However, multiple-choice tests have some drawbacks. First, distracters may deliberately mislead test-takers, which leads to a false evaluation. Second, being a good reader does not guarantee success on a multiple-choice test, because this type of test involves a separate ability. Third, test-takers may not connect the stem and the answer in the same way the tester presumes (Cohen 1998).
Another type of reading comprehension test is the short-answer question. As Weir (1993) emphasized, short-answer tests are extremely helpful for testing reading comprehension, and as Alderson (2000, 227) declared, they are seen as “a semi-objective alternative to multiple choice.” Cohen (1998) noted that open-ended questions allow test-takers to copy the answer from the text, but to do so the testee still needs to understand the text in order to write the correct answer. Test-takers have to answer a question briefly by inferring from the text, not simply by responding “yes” or “no.”
However, short-answer tests are not easy to construct, as the test designer must consider all possible answers, and scoring depends on careful preparation of the answer key. As Hughes (2003, 144) noted, “The best short-answer questions are those with a unique correct response.” He also suggested that this technique is particularly useful when the test designer wants to test the ability to recognize referents.
In a study on assessing EFL reading comprehension, Stavans and Oded (1993) found that, in comparison with multiple-choice questions and recall tasks, the open-ended test format is the most facilitating assessment tool for reading comprehension. In her study of testing methods in reading comprehension, Shohamy (1984) indicated that each testing method posed a different degree of difficulty for the test-takers, and that these effects were strongest for low-level students. She recommended the use of multiple-choice questions for testing reading comprehension.
Many professionals have ignored the difference between the ‘cloze test’ and ‘gap-filling’ and used the terms interchangeably (Razi 2005). Alderson (2000, 207) defined the cloze test as “…typically constructed by deleting from selected texts every n-th word … and simply requiring the test-taker to restore the word that has been deleted”. He claimed that ‘n’ regularly varies from every 5th word to every 12th word; however, ‘n’ is a number between 5 and 11 according to Weir (1990) and just 5 to 7 according to McNamara (2000).
Designing a cloze test requires the tester only to decide which word to delete first; the other deletions follow systematically. Cloze tests can therefore be prepared easily, but as testers cannot control which words are deleted, apart from the first one, they do not know exactly what their tests assess (Alderson 2000). Cohen (1998) concluded that cloze tests do not evaluate overall reading ability, but they do measure local-level reading. These tests can be marked easily, since the testers know in advance which words were deleted; testers are also advised to accept other answers that are meaningful in the given blanks.
To prepare a gap-filling test, however, the tester must decide which words to remove one by one. The deletion of words does not rely on any fixed system, so making a gap-filling test is not as easy as designing a cloze test; on the other hand, because deletion is done on a rational basis, the tester can control the test. Weir (1993) nonetheless criticized gap-filling tests because this type of test does not require extracting information by skimming. The marking process of gap-filling tests is almost the same as that of cloze tests; the contrast between the two deletion procedures is illustrated in the sketch below.
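The mechanical versus rational nature of the two deletion procedures can be shown with a minimal sketch, given here in Python purely for illustration; the passage, the deletion interval of seven, and the chosen target words are invented examples and are not taken from any of the studies cited above.

```python
import re

def make_cloze(text, n=7, start=7):
    """Mechanical cloze: delete every n-th word, starting from the start-th word."""
    words = text.split()
    answer_key = []
    for i in range(start - 1, len(words), n):
        answer_key.append(words[i])
        words[i] = "______"
    return " ".join(words), answer_key

def make_gap_filling(text, targets):
    """Gap-filling: delete only the words the tester has rationally chosen."""
    words = text.split()
    answer_key = []
    for i, w in enumerate(words):
        if re.sub(r"\W", "", w).lower() in targets:
            answer_key.append(w)
            words[i] = "______"
    return " ".join(words), answer_key

passage = ("Reading in a foreign language is an interactive process in which "
           "the reader draws on both linguistic knowledge and background "
           "knowledge to construct the meaning of the text.")

cloze_text, cloze_key = make_cloze(passage, n=7)
gap_text, gap_key = make_gap_filling(passage, {"interactive", "background", "meaning"})

print(cloze_text)  # blanks fall wherever the counting interval lands
print(gap_text)    # blanks fall only on the tester's chosen content words
```

The point of the sketch is simply that in the cloze version the blanks fall wherever the counting lands, whereas in the gap-filling version every blank is a deliberate choice, which is why the latter gives the tester more control at the cost of more preparation work.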
Yamashita (2003) indicated that the gap-filling test produced text-level processing and distinguished well between skilled and less skilled readers; she therefore supported the claim that a gap-filling test can be used to measure higher-order processing ability. In contrast, Alderson (1979) believed that most of the research conducted with native speakers of English cannot produce clear-cut evidence that the cloze test is a valid test of reading comprehension. It has been argued that cloze tests are not suitable for testing higher-order language skills, although they are useful for testing lower-order skills.
Another method used extensively by test designers is the ‘true/false’ technique. Hughes (2003) stated that the problem with this technique is that testees have a 50% chance of guessing the right answer without understanding the text. By adding one more option, such as ‘not given’, we may decrease this probability to 33.3%; however, such statements essentially test the ability to infer meaning rather than comprehension. Another way to address the guessing problem is to ask students to correct the false sentences. Both designing and scoring these tests are easy (Ur 1996).
Finally, matching activities and ordering tasks are among the other popular techniques for assessing reading comprehension. In ‘matching tests’, every option acts as a distracter for all items except the one it matches. As Alderson (2000, 219) stated, since “… there is only one final choice”, giving more alternatives than items is more reasonable. He claimed that these tests are difficult to construct because of the need to prevent answers from being reached unintentionally, for instance by simple elimination. The scoring process of this task is easy, because test-takers receive points for each correct match.
In ‘ordering tasks’, testees are asked to put scrambled words, sentences, paragraphs, or texts into the correct order. Although such tasks test “… the ability to detect cohesion, overall text organization or complex grammar…” (Alderson 2000, 221), their administration is somewhat problematic. First, the test-takers may propose another reasonable order different from the tester’s. The second problem is scoring: the tester may have difficulty deciding how to award marks to those who order only part of the sequence correctly, as the sketch below illustrates.
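To make the scoring difficulty concrete, here is a deliberately simple partial-credit scheme, sketched in Python for illustration only; it is not the weighted marking protocol proposed by Razi (2005), and the sentence labels and the response are invented. The scheme awards credit for each adjacent pair of items that the test-taker keeps in the correct relative order, assuming every item appears exactly once in the response.

```python
def adjacent_pair_score(correct_order, response):
    """Partial credit: the proportion of adjacent pairs kept in the correct relative order."""
    position = {item: i for i, item in enumerate(response)}
    pairs = list(zip(correct_order, correct_order[1:]))
    preserved = sum(1 for a, b in pairs if position[a] < position[b])
    return preserved / len(pairs)

correct = ["S1", "S2", "S3", "S4", "S5"]    # the intended order of five scrambled sentences
response = ["S2", "S1", "S3", "S4", "S5"]   # a test-taker swaps only the first two sentences

print(adjacent_pair_score(correct, response))  # 0.75: three of the four adjacent pairs preserved
```

Under an all-or-nothing key this response would receive no credit even though most of the sequence is intact, which is exactly the marking dilemma noted above; any partial-credit scheme, including this one, embodies a judgment about how much a single misplacement should cost.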
Conclusion
From the issues presented above, it is clear that L2 reading is a multifaceted process involving a variety of text and reader characteristics. Thus, to assess the reading comprehension performance of our learners, we need to apply different kinds of methods to evaluate their reading ability; in other words, we need a combination of various techniques to identify the strengths and weaknesses of our students and help them become good readers. However, the question remains which techniques should be combined in order to obtain acceptable results. Further studies can therefore be carried out to find an appropriate way to test the reading comprehension performance of EFL learners because, as mentioned before, reading is one of the skills that helps students obtain information from texts written in the target language.
References
Aebersold, J. A., and Field, M. L. 1997. From reader to reading teacher. Cambridge: Cambridge University Press.
Alderson, J. C. 1979. The cloze procedure and proficiency in English as a foreign language. TESOL Quarterly 13(2): 219-227.
Alderson, J. C. 2000. Assessing reading. Cambridge: Cambridge University Press.
Alyousef, H. S. 2006. Teaching reading comprehension to ESL/EFL learners. Journal of Language and Learning 5(1): 63-73.
Anderson, N. J., Bachman, L., Perkins, K., and Cohen, A. 1991. An exploratory study into the construct validity of a reading comprehension test: Triangulation of data sources. Language Testing 31: 67-86.
Atwell, N. 1987. In the middle: Writing, reading, and learning with adolescents. Portsmouth, NH: Heinemann.
Aweiss, S. 1993. Meaning construction in foreign language reading. Paper presented at the Annual Meeting of the American Association for Applied Linguistics, Atlanta. Eric Digest ED360850.
Bachman, L. F. 1990. Fundamental considerations in language testing. Oxford: Oxford University Press.
Barnett, M. A. 1988. Reading through context: How real and perceived strategy use affects L2 comprehension. Modern Language Journal 72: 150-160.
Bernhardt, E. B. 1991. Reading development in a second-language. Norwood, NJ: Ablex.
Bialystok, E. 1990. Communication strategies. Oxford: Basil Blackwell.
Brantmeier, C. 2001. Second language reading research on passage content and gender: Challenges for the intermediate level curriculum. Foreign Language Annals 34: 325-333.
Brantmeier, C. 2003. Does gender make a difference? Passage content and comprehension in second language reading. Reading in a Foreign Language 15(1): 1-24.
Brown, H. D. 2004. Language assessment: Principles and classroom practices. New York: Pearson Education.
Bygate, M., Skehan, P., and Swain, M., eds. 2001. Researching pedagogic tasks: Second language learning, teaching and testing. London, UK: Longman.
Carrell, P. L. 1983. Some issues in studying the role of schemata, or background knowledge, in second language comprehension. Reading in a Foreign Language 1: 81-92.
Carrell, P. 1991. Strategic reading. In Linguistics and language pedagogy: The state of the art. ed. J. E. Alatis, Washington, DC: Georgetown University Press.
Carrell, P. 1992. Awareness of text structure: Effects on recall. Language Learning 42(1): 1-20.
Chamot, A. U., and O’Malley, M. 1994. The CALLA handbook. Reading, MA: Addison-Wesley.
Cioffi, G., and Carney, J. 1983. Dynamic assessment of reading disabilities. The Reading Teacher 36: 764-768.
Cohen, A. D. 1998. Strategies in learning and using a second language. London: Longman.
Crookes, G. 1986. Task classification: A cross-disciplinary review. Technical report, No. 4. Honolulu, Center for Second Language Research, Social Science Research Institute, University of Hawaii at Manoa.
Cross, D. R., and Paris, S. G. 1987. Assessment of reading comprehension: Matching tests purposes and tests properties. Educational Psychologist 22: 313-322.
Devine, J. 1993. The role of metacognition in second language reading and writing. In Reading in the composition classroom. eds. J. G. Carson, and I. Leki. Boston: Heinle and Heinle Publishers.
Eskey, D. E. 1988. Holding in the bottom: An interactive approach to the language problems of second language readers. In Interactive approaches to second language reading. eds. P. L. Carrell, J. Devine, and D. E. Eskey. Cambridge: Cambridge University Press.
Eskey, D. E. 2005. Reading in a second language. In Handbook of research in second language teaching and learning. ed. E. Hinkel. Mahwah, NJ: Lawrence Erlbaum.
Frodden, C., Restrepo, M. and Maturana, L. 2004. Analysis of assessment instruments used in foreign language teaching. Íkala, Revista de Lenguaje y Cultura 9(15): 171-201.
Gersten, R. 1999. Lost opportunities: Challenges confronting four teachers of English-language learners. The Elementary School Journal 100(1): 37- 56.
Grabe, W. 1995. Dilemmas for the development of second language reading abilities. Prospect 10(2): 38-51.
Grabe, W., and Stoller, F. L. 2002. Teaching and researching reading. Harlow: Pearson Education Limited.
Horowitz, D. 1986. What professors actually require: Academic tasks for the ESL classroom. TESOL Quarterly.
Hughes, A. 2003. Testing for language teachers. New York: Cambridge University Press.
Kobayashi, M. 2002. Method effects on reading comprehension test performance: Text organization and response format. Language Testing 19(2): 193-220.
Koda, K. 1994. Second language learning research: Problems and possibilities. Applied Psycholinguistics 15(1): 1-28.
Kozulin, A. 1998. Psychological tools: A sociocultural approach to education. Cambridge, MA: Harvard University Press.
Lee, J. F. 1986a. On the use of the recall task to measure L2 reading comprehension. Studies in Second Language Acquisition 8: 201-12.
Lee, J. F. 1986b. Background knowledge and L2 reading. Modern Language Journal 70: 350-354.
Maxim, H. 2002. A study into the feasibility and effects of reading extended authentic discourse in the beginning German language classroom. Modern Language Journal 86: 20-35.
McNamara, T. 2000. Language testing. Oxford: Oxford University Press.
Minick, N. 1987. Implications of Vygotsky’s theory for dynamic assessment. In Dynamic assessment. ed. C. Lidz, 116-140. New York: Guilford Press.
Mohan, B. 1990. LEP students and the integration of language and content: Knowledge structure and tasks. In Proceedings of the First Research Symposium on Limited English Proficient Students’ Issues. ed. C. Simich-Dudgeon, 113-160. Washington, DC: Office of Bilingual Education and Minority Language Affairs.
Nunan, D. 1989. Designing tasks for the communicative classroom. New York: Cambridge University Press.
Olson, C. B. 2003. The reading/writing connection: Strategies for teaching and learning in the secondary classroom. Pearson Education, Inc.
Parsons, L. 1990. Response Journals. Markham, Ontario: Pembroke Publishers Limited.
Perfetti, C. A. 1997. The psycholinguistics of spelling and reading. In Learning to spell: Research, theory, and practice across languages. eds. C. A. Perfetti, L. Rieben, and M. Fayol, 21-38. Mahwah, NJ: Erlbaum.
Pica, T., Holliday, L., Lewis, N., Berducci, D., and Newman, J. 1991. Language learning through interaction: What role does gender play? Studies in Second Language Acquisition, 343-376.
Razi, S. 2005. A fresh look at the evaluation of ordering tasks in reading comprehension: WEIGHTED MARKING PROTOCOL. The Reading Matrix 5(1): 1-15.
Richards, J. C., and Renandya, W. A. eds. 2002. Methodology in language teaching: An anthology of current practice. Cambridge: Cambridge University Press.
Ruiz-Funes, M. 1999a. The process of reading to write by a skilled Spanish-as-a foreign language student: A case study. Foreign Language Annals 32(1): 45-62.
Ruiz-Funes, M. 1999b. Writing, reading, and reading-to-write in a foreign language: A critical review. Foreign Language Annals 32(4): 514-526.
Shohamy, E. 1984. Does the testing method make a difference? The case of reading comprehension. Language Testing 1(2): 140-170.
Skehan, P. 1996. A framework for the implementation of task-based instruction. Applied Linguistics 17: 38-62.
Statman, S. 1988. Ask a clear question and get a clear answer: An enquiry into the question/answer and sentence completion formats of multiple choice items. System 16(3): 367-376.
Stavans, A., and Oded, B. 1993. Assessing EFL reading comprehension: The case of Ethiopian learners. System 21(4): 481-494.
Swales, J. M. 1990. Genre analysis. Cambridge: Cambridge University Press.
Tierney, R. J., and Readence, J. E. 2000. Reading strategies and practices. Needham Heights: Allyn & Bacon.
Tierney, R. J., and Shanahan, T. 1991. Research on the reading-writing relationship: Interactions, transactions, and outcomes. In Handbook of reading research, vol. 2. eds. R. Barr, M. L. Kamil, P. Mosenthal, and P. D. Pearson, 246-280. New York: Longman.
Todd, R. W., Mills, N., Palard, C., and Khamcharoen, P. 2001. Giving feedback on journals. ELT Journal 55(4): 354-359.
Ur, P. 1996. A course in language teaching. Cambridge: Cambridge University Press.
Vygotsky, L. 1934/1986. Thought and language. Rev. ed. Cambridge, MA: MIT Press.
Weir, C. J. 1990. Communicative language testing. New York: Prentice Hall.
Weir, C. J. 1993. Understanding & developing language tests. New York: Prentice Hall.
Willis, J. 1996. A framework for task-based learning. Harlow, Essex: Addison Wesley Longman Limited.
Wolf, D. 1993. A comparison of assessment tasks used to measure FL reading comprehension. Modern Language Journal 77: 473-89.
Yamashita, J. 2003. Processes of taking a gap-filling test: Comparison of skilled and less skilled EFL learners. Language Testing 20(3): 267-293.