By Akram Faravani
Abstract
This study investigates the effect of portfolios on students’ critical thinking ability, reading comprehension ability, and reading achievement. Thirty-two students participated in the project and responded to three tests, (a) a schema-based reading test, (b) a disclosed reading comprehension subtest of the TOEFL, and (c) a Persian critical thinking test, administered as pre-test and post-test. Reading portfolios were implemented in the experimental group, while in the control group the readings were practiced in a traditional way. The results indicated that portfolios could promote students’ critical thinking ability. Findings also demonstrated that reading portfolios improved students’ reading achievement and reading comprehension ability.
1. Introduction
In the past, according to Tannenbaum (1996; cited in Coombe & Barlow, 2004), educators came to realize that alternative forms of assessment are an important means of gaining a dynamic picture of students’ academic and linguistic development. One of the most exciting developments in the reform of teacher-education programs is the use of alternative forms of assessment to evaluate students’ learning, and one of the most popular forms of authentic assessment is the use of portfolios (Barret, 2001). Portfolio assessment is in the forefront of alternative assessment approaches (Coombe & Barlow, 2004). According to Wiggins (1989; cited in Coombe & Barlow, 2004), there has been a growing recognition that a single measure is incapable of estimating the diversity of skills, knowledge, processes, and strategies that combine to determine students’ progress. Altan (2002) indicated that the most pervasive inadequacy in traditional and standardized testing was the assumption that all students could be assessed using the same instrument. “Since we do not learn in the same way, we cannot be assessed in a uniform fashion. Therefore, teachers must seek to assess their students’ learning in ways which will give an accurate overview of their strength and weaknesses” (Altan, 2002, p. 57).
According to Bloom (1956; mentioned in Seiter, 1995, p.9), standards that were content laden with few references to analysis, synthesis, and evaluation skills served little purpose for today’s students. Newmann (1991; cited in Seiter, 1995) believed that these thinking skills were critical because they permitted knowledge to be applied to the solution of new problems.
Costa (1993; cited in Seiter, 1995) maintained that teachers should “replace some of their obsolete, traditional views of education… and let go of their obsession with content acquisition and knowledge retention as merely ends in themselves”. Gardner (1982; cited in Seiter, 1995) indicated that standards and assessment tools should reflect a school environment where the knowledge obtained is useful and where there is a strong relationship between the skills taught and the problems students find outside of school.
“Using higher order thinking skills provide educators with numerous benefits. Educators are able to break down territorial content walls and find it easier to form consensus on what students should know and be able to do” (Seiter, 1995, p.10). New standards based upon higher order skills have generated a demand for assessment procedures (Seiter, 1995). According to Marzano (1993; cited in Seiter, 1995), traditional standardized tests required students to recall or recognize fragmented and isolated bits of information. They rarely asked students to apply that information. Stake (1991; cited in Seiter, 1995) believed that standardized tests are intended to provide concrete information concerning what the students have achieved. Information about the quality of education is not what the tests provide.
Gray (1993; cited in Seiter, 1995, p.11) said, “Humans do not accumulate skills and facts in a neat orderly fashion”. In assessment, focus is on documenting student growth over time, rather than on comparing students with one another (Valdez-Pierce & O’Malley, 1992). One such assessment tool utilized for ongoing assessment is the portfolio.
The key element in the portfolio is reflection. Reflective thinking is utilized in portfolios when students evaluate their works and seek improvement based on rational and alternative actions (Seiter, 1995).
1.2 Theoretical Framework
The theoretical framework underlying portfolios is constructivism. Constructivists, such as Piaget (1977; cited in Wang, 2004), regarded learning as an active process and believed that students must have opportunities to construct their knowledge through their own experiences in a meaningful context, since learners learn best when they actively construct their own understanding. In their view, learning should be whole, authentic, and real. Vygotsky (1978; cited in Brown, 2000) maintained that social interaction was foundational in cognitive development. He claimed that it is the collaboration between people that causes learning to occur, not just a rich, interesting environment. Therefore, cooperative and collaborative skills should be used in constructivist classes.
According to Dewey (1963), learning is a mental process that involves “thinking, using intelligence, making judgments, and looking for meanings”. In addition, learning does not happen only in the mind. “It occurs both in the mind and in a social medium through interaction” (Dewey, 1963, p. 7).
According to Dewey (1963), learning requires some outside guidance from teachers, parents or social institutions and it should meet the students’ needs.
Piaget believed that students must have the opportunities to construct knowledge through their own experiences and this is what we must do in constructivist classrooms (Wang, 2004).
According to Wang (2004), assessment from a constructivist perspective seeks to know what the learner knows. Since meaning making is a complex and multifaceted phenomenon, assessment of learners’ knowledge must also be multifaceted and multi-modal. Therefore we need to develop more diverse and complex ways of assessing learning.
1.3 Assessment
The term assessment refers to a variety of ways of collecting information on a learner’s language ability or achievement (Brindley, 1989). Altan (2002) indicated:
Assessment is an ongoing process through which student learning is not only monitored but in which students are involved in making decisions about the degree to which their performance matches their ability. Therefore assessment can be considered an interactive process that engages both teacher and student in monitoring the students’ performance and progress. Since assessment is part of the process of learning, it must provide multiple opportunities for the measurement of processes and yield data that a teacher can use to develop a course grade. (p. 57)
Altan (2002) believed that assessment in the classroom promotes the meaningful involvement of the students with material that is central to the teaching objectives of a course. Also, regular assessment of learning can provide learners with feedback about their language performance at various stages in the developmental process. Shepard (2000) believed that assessment should be moved into the middle of the teaching and learning process instead of being postponed as only the end-point of instruction.
Research has told us that students learn in various ways. Therefore, we can assume that students will need to be tested in a variety of ways in order to be fair to the individual (Belle, 1999). “Traditional ways of assessing have been widely used in the past. But many teachers feel that alternative testing is the most accurate way of evaluating students’ higher order thinking skills” (Belle, 1999, p. 6).
In order for authentic assessment to be utilized successfully, a number of elements must be understood (Gordon, 1998). Students must be actively involved in solving problems and in working cooperatively and the activity needs to be meaningful to the students. The problem must be real or relevant to their lives.
Portfolios, as one type of alternative assessment, allow students to review, reflect, and determine what caused them to change.
1.4 The Concept of Portfolio
The concept of portfolio development was adopted from the field of fine arts, where portfolios are used to display illustrative samples of an artist’s work (Moya & O’Malley, 1994). According to Kaczmarek (1994), a portfolio is a collection of samples of student work developed over time, chosen according to specific criteria to reflect student progress and achievement, and presented with an introduction, explanation, or assessment of the contents. Portfolio assessment is an ongoing process involving the student and the teacher in selecting samples of student work, and its main purpose is to show the student’s progress (Altan, 2002).
Paulson and Meyer (1991; cited in Coombe & Barlow, 2004) stated that portfolios must include student participation in four important areas: (1) the selection of portfolio contents, (2) the guidelines for selection, (3) the criteria for judging merit, and (4) evidence of student reflection. Hamp-Lyons and Condon (2000; cited in Coombe & Barlow, 2004) offered nine characteristics of good portfolios. However, they stressed that all these characteristics may not be found in all portfolio systems equally.
- Collection: The portfolio judges more than a single performance.
- Range: The writer is able to use different genres that show off different areas of expertise.
- Content richness: Writers bring their experiences with them into the assessment.
- Delayed evaluation: Students can go back and revise their work.
- Selection: Students participate in the selection process.
- Student–centered control: The learner is responsible for his/her success.
- Reflection and self–assessment: The learner self assesses and/or reflects on what he/she has learned.
- Growth along specific parameters: Portfolios allow evaluators to ask specific questions such as “has the writer developed over time/become a better speller?”
- Development over time: Readers can trace the development of each piece. (p. 20)
1.5 The Importance of a Reflective Element in Portfolios
According to O’Malley and Valdez-Pierce (1996; cited in Coombe & Barlow, 2004), one of the main benefits of portfolio assessment is the promotion of learner reflection. By having reflection as part of the portfolio process, students are asked to think about their needs, goals, weaknesses, and strengths in language learning (Coombe & Barlow, 2004). “By having a reflective element in a portfolio, the process is more personalized. Learner reflection enhances the feeling of learner ownership of their work and increases opportunities for dialogs between students and teachers about curricular goals and learner progress” (Coombe & Barlow, 2004, p. 19).
“Portfolios make students the agents of reflection and decision-making and give them control of their own learning. They encourage students to reflect on their own learning and to assess their own strengths and weaknesses” (Genesee & Upshur, 1996, p. 105).
Sunstein (1992; cited in Hirvela & Pierson, 2000, p.114) said, “Portfolios provide an invitation for self evaluation. As we reflect on growth, we grow still more”. In addition to growth, learner directed assessment situated in portfolio pedagogy changes the entire learning dynamic of the classroom (Hirvela & Pierson, 2000). Courts and Amiran (1991; cited in Hirvela & Pierson, 2000, p.114) pointed out: “The sort of critical analysis and reflexive thinking which the portfolio requires invites genuine engagement”. Furthermore, as Murphy (1994; cited in Hirvela & Pierson, 2000, p.114) noted, “When students are encouraged to make choices and to reflect on those choices, they are encouraged to take responsibility for their own learning”.
Porter and Cleland (1995; cited in Hirvela & Pierson, 2000, p.114) summarized the benefits of reflection accruing from portfolio pedagogy.
Table 2.6
Benefits of reflection in portfolios
1. Reflection allows learners to examine their language process.
2. Reflection allows learners to take responsibility for their own learning.
3. Reflection allows learners to see “gaps” in their learning.
4. Reflection allows learners to determine strategies that supported their learning.
5. Reflection allows learners to see changes and development over time.
1.6 Critical Thinking
Critical thinking is the cornerstone of a well-rounded and complete education. Students graduating with a strong critical thinking background tend to use their talents quickly and maintain high levels of success (Shawn, 1996). “Almost everyone agrees that one of the main goals of education, at whatever level, is to help students develop general thinking skills, especially critical thinking skills” (Gelder, 2005, p.1). Gelder indicated that students do not acquire these skills as much as they could and should. Therefore, we need to generally improve our teaching and our educational system.
Although many psychologists and others have proposed several definitions for the term critical thinking, these definitions tend to be similar in content (Halpern, 1997). Halpern (1997) defined critical thinking in a way that captured the main concepts:
Critical thinking is the use of those cognitive skills or strategies that increase the probability of a desirable outcome. It is used to describe thinking that is purposeful, reasoned and goal directed—the kind of thinking involved in solving problems, formulating inferences, calculating likelihoods, and making decisions when the thinker is using skills that are thoughtful and effective for the particular context and type of thinking task. (p.4)
The present study was undertaken to explore the effect of reading portfolios on students’ achievement, reading comprehension, and critical thinking ability. The following research questions were raised to guide the study:
- Do portfolios have any effect on students’ critical thinking ability?
- Can reading portfolios increase students’ reading comprehension ability more than traditional tests?
- Can reading portfolios increase students’ achievement more than traditional tests?
2. Method
2.1 Participants
The participants in this study were 32 students who studied general English at a private language institute. The students took part in an intensive term that lasted one month. The term had 21 sessions, and each session lasted one hour and forty-five minutes; therefore, the students attended five days a week. The participants were pre-intermediate students from three EFL classes, and their backgrounds differed. They all had a high school diploma. There were 13 university students and 17 university graduates, and only two of them held M.A./M.S. degrees. They ranged in age from 20 to 32, except for one participant who was 57. All of them were female. The participants took all the tests as part of a course requirement offered by the researcher.
2.2 Instrumentation
2.2.1 Schema-Based Cloze MCIT
Schema-based cloze multiple choice item tests (MCITs) were first developed by Khodadady (1997) to remove the shortcomings of traditional cloze MCITs. In constructing traditional cloze MCITs, test writers usually either write the reading passages themselves or choose the texts which best yield well-functioning items. Then the test writers delete a number of words from these texts and present them as the keyed responses of their items. For example, Hale, Stansfield, Rock, Hicks, Butler, and Oller (1988) designed the following traditional cloze multiple-choice item.
“Folk song,” then has come to be the inclusive term, covering many … of music.
a. varieties*
b. advances
c. conclaves
d. adherents
As can be seen, the distracters of the item above, i.e., advances, conclaves and adherents, have no semantic relationships with each other. To solve the problem of writing or choosing appropriate texts, Khodadady (1997) stated that all authentic texts written for the purpose of reading can be used to develop well-functioning multiple choice items if they are based on schema theory.
According to Khodadady and Herriman (2000), each and all words comprising an authentic text form its schemata because they represent the writer’s background knowledge of whatever topic is conveyed by the text. A given test will have construct validity if it measures the test takers’ knowledge of each schema in relation to other schemata which have semantic, syntactic, and discoursal relationships with the constituting schemata of the text. Khodadady and Seif (2006), for example, deleted the schema ground to construct the following item:
Dinosaur footprints, bones, and teeth were preserved, or saved, in the …
A. land
B. earth
C. ground*
D. dust
In contrast to the semantically unrelated distracters of traditional cloze multiple choice items, the choices employed in schema-based cloze multiple choice items, e.g., land, earth and dust, share the semantic feature of soil with the deleted schema ground. Although dust as a choice can mean very small particles of soil, it is not used by the author of the text because it cannot preserve bones. Similarly, the readers of the text should know that the author used neither land nor earth because they refer to the purposeful use of a piece of ground. These two choices also differ from ground in embodying the distinctive semantic features of air and water, respectively.
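To make the structure of such an item concrete, the sketch below models the Khodadady and Seif (2006) example above as a small data object. This is only an illustration; the `ClozeItem` class and its field names are hypothetical names introduced here, not part of the original study or of the schema-based test itself.

```python
# Illustrative sketch of a schema-based cloze multiple-choice item.
# The class and field names are assumptions, not from the original study.
from dataclasses import dataclass

@dataclass
class ClozeItem:
    stem: str                # sentence with the deleted schema shown as "..."
    options: dict[str, str]  # choice letter -> candidate word
    key: str                 # letter of the keyed response (the deleted schema)

    def is_correct(self, answer: str) -> bool:
        return answer.upper() == self.key

# The Khodadady and Seif (2006) example quoted above: all distractors
# share the semantic feature "soil" with the deleted schema "ground".
item = ClozeItem(
    stem="Dinosaur footprints, bones, and teeth were preserved, or saved, in the ...",
    options={"A": "land", "B": "earth", "C": "ground", "D": "dust"},
    key="C",
)

print(item.is_correct("c"))  # True
```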
2.2.2 Watson-Glaser Critical Thinking Appraisal (WGCTA)
According to Psychcorp (2000), the Watson-Glaser Critical Thinking Appraisal (WGCTA) is an assessment tool designed to measure an individual’s critical thinking skills. The examinee is asked to evaluate reading passages that include problems, statements, arguments, and interpretations. The original version of the test had 80 items and could be completed in 60 minutes.
The WGCTA is useful for determining an individual’s ability to think critically and for assessing whether employees have improved their critical thinking as a result of training and instructional programs. It can also be used for conducting research on the critical thinking construct.
The test is divided into five sections, each of which measures a different aspect of critical thinking: inference, recognition of assumptions, deduction, interpretation, and evaluation of arguments.
According to Psychcorp (2000), each section is preceded by its own instructions with clear examples of the type of questions to be answered. The sections are not individually timed, nor are they administered or scored separately. Each section carries the same weighting towards the overall test results.
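Since the five sections carry equal weight and are not scored separately, the overall result is effectively a simple sum of section scores. The snippet below is a minimal sketch of that scoring scheme; the section scores are invented for illustration, and this is not an official WGCTA scoring routine.

```python
# Minimal sketch: equally weighted WGCTA sections combine into one total.
# The scores below are invented placeholders, not real norms or data.
sections = {
    "inference": 12,
    "recognition of assumptions": 13,
    "deduction": 10,
    "interpretation": 11,
    "evaluation of arguments": 9,
}

# Equal weighting means the overall score is just the sum of sections.
total = sum(sections.values())
print(f"WGCTA total: {total} / 80")  # 80 items in the original version
```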
2.2.3 A disclosed reading comprehension subtest of the TOEFL (ETS)
The TOEFL book published by Educational Testing Service (2003) has four subtests: listening, structure, written expression, and reading comprehension. In this study, the disclosed reading comprehension subtest of the TOEFL (2003) was used to measure the participants’ command of reading comprehension and to find out whether they had improved. The reading comprehension part had five passages with 50 multiple-choice items.
2.3 Procedure
The researcher gave the participants in both the experimental and control groups the schema-based cloze MCIT as a pre-test. The researcher then administered two further pre-tests: the Watson-Glaser critical thinking test and a disclosed reading comprehension subtest of the TOEFL (ETS). The same tests were administered at the end of the term as post-tests. In the experimental group, the students practiced six passages with the new way of assessment, i.e., portfolios. In the control group, the same passages were practiced using the traditional way of assessment: the students read the passage in class, made some questions, and answered them. They tried to understand the meaning of the words they did not know by looking them up in the dictionary. Next, they summarized the text. The next day, they reviewed the previous text in class: the students asked their classmates the questions they had made from the passage, and the teacher asked two or three students to summarize the text.
The students in both groups were not allowed to take the passages home; they only had access to them in class. The amount of time spent on the passages was the same in the two groups: the students studied each passage for 20-30 minutes. Then, in the experimental and control groups, the students followed their own procedures for about one hour. Each session lasted one hour and fifty minutes.
In the first meeting with the participants in the experimental group, the researcher presented the idea of the portfolio. However, it was the teacher who kept the portfolios in a folder, not the students. The researcher told the students what to do with the portfolios and how to self- and peer-assess, and explained the benchmarks in the self/peer-assessment sheets. For instance, if they participated in small group discussions most of the time, they should tick “Almost Always” (see Appendix 1). The researcher also explained the structural, visual, and meaning cues and the other strategies in the table.
In this study, the number of passages in the portfolio was six. According to Genesee and Upshur (1996), the number of pieces in a portfolio should be limited, since portfolios that are constantly expanding and never cleaned out become difficult to store, review, and assess.
The teacher introduced the topics of the six passages, and all students showed interest in them. Students practiced in groups of three.
The implementation of reading portfolios in this study had the following stages:
In the class,
(a) The students were required to read the passage in the class in their groups.
(b) They tried to guess the meaning of the words, think about cues (structural, meaning, or visual) and other strategies in reading.
(c) They made some questions and answered their friends’ questions.
(d) They had a group discussion (groups of 3) about the passage and about the strategies they used in reading. This activity was important because it allowed the students to learn from each other.
(e) They related the reading to their real life and talked about it.
At home,
(a) They completed the self or peer assessment sheets (see Appendix 1).
(b) They answered the questions on the reflection forms (see Appendix 2). In this part, reflection is incorporated into the portfolio. (They could answer in Farsi where they could not express their meaning in English.)
In class, the next day,
Students’ portfolios were reviewed by small groups of students and by the whole class with the help of the teacher; in this way, students were taught how to provide positive, constructive feedback to one another.
2.4 Design
This study employed a quasi-experimental design. One type of quasi-experimental design is the non-equivalent pre-test post-test design. “This design is often used in classroom experiments when experimental and control groups are such naturally assembled groups as intact classes which may be similar” (Best, 1977, p. 104). Quasi-experimental designs are used when it is not possible to guarantee the random selection of the subjects (Best, 1977). Therefore, the design of the study was quasi-experimental, since the groups were not randomly selected.
To analyze the scores obtained from the pre-tests and post-tests, the researcher used SPSS (version 11.5). To estimate the reliability of the test scores, coefficient alpha was used. The scores of the students on the three pre-tests were correlated with their scores on the post-tests. To find out whether there was any significant difference between the two groups on the pre-tests, analysis of variance (ANOVA) was used. Paired t-tests were used to determine whether there was any significant difference within each group. The researcher used two-tailed tests, since there was no logical or theoretical reason to expect one of the means to be higher than the other (Brown, 1988). The significance level of the tests was also reported.
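For readers who want to reproduce this style of analysis, the sketch below shows the three procedures named above (coefficient alpha, one-way ANOVA between groups, and a two-tailed paired t-test within a group) using NumPy and SciPy rather than SPSS. The data are fabricated placeholders, not the study’s scores, and the helper `cronbach_alpha` is an assumed implementation of the standard coefficient-alpha formula.

```python
# Illustrative sketch only: fabricated data stand in for the study's scores.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha; rows = test takers, columns = item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(29, 69))  # e.g., 29 takers x 69 dichotomous items
print("alpha:", round(cronbach_alpha(answers), 2))

# One-way ANOVA on hypothetical pre-test scores of the two groups.
exp_pre = rng.normal(22, 7, size=15)
ctl_pre = rng.normal(22, 7, size=14)
print(stats.f_oneway(exp_pre, ctl_pre))

# Two-tailed paired t-test within the experimental group (pre vs. post).
exp_post = exp_pre + rng.normal(5, 4, size=15)
print(stats.ttest_rel(exp_pre, exp_post))  # two-tailed by default
```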
3. Results and Discussion
Table 3.1 shows the descriptive statistics for the three tests of critical thinking, reading comprehension (TOEFL), and schema-based cloze MCIT. The most reliable test was the schema-based cloze MCIT post-test (0.89), and the least reliable was the reading comprehension pretest (0.65).
Table 3.1
Descriptive statistics for the scores of the participants on the three tests of critical thinking, reading comprehension, and schema-based cloze MCIT

Test                                  Items   Test takers   Mean    SD      Kurtosis   Std. Error   α
Schema-based cloze MCIT (Pretest)     69      29            22.34   7.48    -0.39      0.85         0.78
Schema-based cloze MCIT (Post test)   69      25            44.36   10.88   -0.32      0.90         0.89
Reading Comprehension (Pretest)       50      26            21.77   5.57    0.12       0.89         0.65
Reading Comprehension (Post test)     50      26            27.54   7.59    2.00       0.89         0.82
Critical Thinking (Pretest)           80      27            51.56   7.67    3.04       0.87         0.76
Critical Thinking (Post test)         80      28            55.21   7.98    2.65       0.86         0.80

Note. α = coefficient alpha (reliability).
Looking at the means presented in Table 3.1, it can be concluded that the most difficult test was the reading comprehension pretest (mean 21.77), i.e., the least reliable test, which also had, as its standard deviation (5.57) indicates, the lowest discrimination power. On the other hand, the easiest test was the critical thinking post-test, with a mean of 55.21.
The results of a one-way ANOVA suggest that there was no significant difference between the groups in the pre-test scores on the three tests of critical thinking, reading comprehension, and schema-based cloze MCIT.
A paired-samples t-test for each group was performed to see whether there was any significant difference within each group. The results show that while there is no significant difference between the pre-test and post-test scores on the Persian critical thinking test in the control group, there is a significant difference between the pre-test and post-test scores on the same test in the experimental group after the implementation of reading portfolios (see Tables 3.2 and 3.3).
Table 3.2
Paired Samples Test of Critical Thinking (control group)
(The difference within the control group)

Pair 1: CThink Pre – CThink Post
Mean = -2.73   Std. Deviation = 9.870   Std. Error Mean = 2.976
95% Confidence Interval of the Difference: -9.36 to 3.90
t = -.916   df = 10   Sig. (2-tailed) = .381
The difference is not significant at the .05 level.
Table 3.3
Paired Samples Test of Critical Thinking (experimental group)
(The difference within the experimental group)

Pair 1: CThink Pre – CThink Post
Mean = -4.92   Std. Deviation = 7.170   Std. Error Mean = 1.989
95% Confidence Interval of the Difference: -9.26 to -.59
t = -2.476   df = 12   Sig. (2-tailed) = .029

The difference is significant at the .05 level.
Restatement of hypothesis (1). There is no significant difference between the means of pre-tests and post-tests of critical thinking for the experimental and control groups after the implementation of reading portfolios.
This hypothesis can be rejected for the experimental group on the basis of the paired t-test results (see Tables 3.2 and 3.3). In the control group, the observed t-value (.916) is less than the critical value (2.228), so the difference is not significant. Since the observed t-value (2.476) in the experimental group is greater than the critical value (2.179) at the .05 level of significance, we can reject the hypothesis for the experimental group and say that there is a significant difference between the means of its critical thinking pre-test and post-test after the implementation of reading portfolios.
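As a quick arithmetic check, the two-tailed critical values cited here can be reproduced from the inverse CDF of the t-distribution. The short sketch below is illustrative, not from the paper; it recovers 2.228 for df = 10 (control) and 2.179 for df = 12 (experimental).

```python
# Two-tailed critical t-values at alpha = .05 for the degrees of freedom
# reported above (df = 10 control, df = 12 experimental).
from scipy import stats

for df in (10, 12):
    crit = stats.t.ppf(1 - 0.05 / 2, df)  # upper tail cutoff for two-tailed test
    print(f"df = {df}: critical t = {crit:.3f}")
# df = 10: critical t = 2.228
# df = 12: critical t = 2.179
```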
A paired-samples t-test was also used to find out whether the students’ reading comprehension ability improved significantly as a result of implementing reading portfolios in the experimental group. Tables 3.4 and 3.5 show the results.
Table 3.4
Paired Samples Test of Reading Comprehension (experimental group)
(The difference within the experimental group)

Pair 1: TOEFL Pre – TOEFL Post
Mean = -5.29   Std. Deviation = 4.462   Std. Error Mean = 1.193
95% Confidence Interval of the Difference: -7.86 to -2.71
t = -4.432   df = 13   Sig. (2-tailed) = .001

The difference is significant at the .05 level.
Table 3.5
Paired Samples Test of Reading Comprehension (control group)
(The difference within the control group)

Pair 1: TOEFL Pre – TOEFL Post
Mean = -5.00   Std. Deviation = 12.773   Std. Error Mean = 4.516
95% Confidence Interval of the Difference: -15.68 to 5.68
t = -1.107   df = 7   Sig. (2-tailed) = .305
The difference is not significant at the .05 level.
Restatement of hypothesis (2). There is no significant difference between the means of pre-tests and post-tests of a disclosed reading comprehension subtest of TOEFL for the experimental and control groups after the implementation of reading portfolios.
According to the results obtained from the paired t-test (see Tables 3.4 and 3.5), this hypothesis can be rejected at the .05 level of significance for the experimental group. The results show that in the experimental group, the observed t-value (4.432) is much greater than the critical value (2.160). The results also reveal that the observed t-value (1.107) in the control group is smaller than the critical value (2.365), so the control group’s difference is not significant. Therefore, we can reject the null hypothesis for the experimental group and say that there is a significant difference between the means of its pre-test and post-test on the disclosed reading comprehension subtest of the TOEFL after the implementation of reading portfolios.
Investigating the effect of reading portfolios on students’ achievement was another hypothesis in this study. A paired-samples t-test was employed for both groups to find out whether there was a significant difference in the scores obtained on the schema-based cloze MCIT. The results show that there is a significant difference between the pre-test and post-test scores on the schema-based reading comprehension test in the experimental group after the implementation of reading portfolios (Tables 3.6 and 3.7). As Table 3.7 shows, the p-value in the control group is .008, which is also less than .05.
Table 3.6
Paired Samples Test of Schema-Based Cloze MCIT (experimental group)
(The difference within the experimental group)

Pair 1: Schema Pre – Schema Post
Mean = -27.13   Std. Deviation = 9.054   Std. Error Mean = 2.338
95% Confidence Interval of the Difference: -32.15 to -22.12
t = -11.606   df = 14   Sig. (2-tailed) = .000

The difference is significant at the .05 level.
Table 3.7
Paired Samples Test of Schema-Based Cloze MCIT (control group)
(The difference within the control group)

Pair 1: Schema Pre – Schema Post
Mean = -14.44   Std. Deviation = 12.269   Std. Error Mean = 4.090
95% Confidence Interval of the Difference: -23.88 to -5.01
t = -3.532   df = 8   Sig. (2-tailed) = .008

The difference is significant at the .05 level.
Restatement of hypothesis (3). There is no significant difference between the means of pre-tests and post-tests of schema-based MCIT (achievement) for the experimental and control groups after the implementation of reading portfolios.
This hypothesis can also be rejected at the .05 level of significance based on the results obtained from the paired t-tests (see Tables 3.6 & 3.7). These results show that the p-value in the experimental group (.000) is less than .05 and the observed t-value (11.606) is much greater than the critical value (2.145). Therefore, the difference within the experimental group is highly significant. The observed t-value (3.532) in the control group is also greater than the critical value (2.306), so there is a significant difference between the pre-test and post-test scores on the schema-based reading MCIT in the control group as well. Since the gap between the observed and critical values is much greater in the experimental group, we can reject the null hypothesis and say that there is a significant difference between the means of the pre-tests and post-tests of the schema-based MCIT (achievement) for the experimental and control groups after the implementation of reading portfolios.
4. Conclusion
In this study, the researcher investigated the effect of reading portfolios on students’ critical thinking ability, reading comprehension ability, and achievement.
The participants were Iranian EFL students. A Persian critical thinking test, a disclosed reading comprehension subtest of the TOEFL, and a schema-based cloze MCIT were the instruments used to collect data.
The results of the paired t-tests showed that portfolios increased higher order thinking skills, especially critical thinking. The results also revealed that reading portfolios increased students’ reading comprehension ability and achievement. Portfolios, as an alternative assessment tool, measure the performance of authentic or real-life tasks (Hypki, 1994). According to Hypki (1994), in using portfolios, learners acquire important critical thinking, organizational, and communicative skills. In portfolios, the students think about their learning and learn about the way in which they think. Metacognition, or thinking about one’s thought processes, is a higher order thinking skill which is found, used, and developed through the use of portfolios (Hypki, 1994). The ability of the students to assess their own progress revealed that they were actively engaged in the process of metacognition. It also showed that they were thinking about the skills which they had learned. Therefore, one reason could be the metacognition element in portfolios that increased students’ critical thinking ability.
If an individual is thinking about his own thought processes and progress, it follows that he is learning how to learn. Therefore, one reason for the students’ progress in reading comprehension and achievement might be that the students knew how to learn. Their awareness of both successful and unsuccessful learning strategies could be another reason.
According to Richards and Rodgers (2001), cooperative language learning develops learners’ critical thinking skills. They also believed that cooperative learning can raise the achievement of all students, including those who are gifted or academically handicapped. In addition, cooperative learning provides opportunities for learners to develop successful learning and communicative strategies. In the implementation of reading portfolios, the students had a discussion about the passage and their experiences related to the topic of the passage. According to Wang (2004), in constructivist learning, a rich learning environment alone is not enough for students to learn. They need to communicate, interact, and collaborate with their peers and teachers. Vygotsky (1978; mentioned in Wang, 2004) claimed that it was the collaboration between people that caused learning to occur, not a rich, interesting environment. In this study, nearly all of the students collaborated with peers or sought advice from the teacher during portfolio implementation. Therefore, cooperative learning and the students’ interaction and communication with peers could be other factors that caused progress in students’ critical thinking ability, reading comprehension, and achievement.
References
Altan, M. Z. (2002). Assessment for multiple intelligences. MET, 11, 56-60.
Au, K. H. (1994). Portfolio assessment: Experiences at the Kamehameha elementary education program (chapter 5). In S. W. Valencia, E. H. Hiebert, & P. P. Afflerbach (Eds.), Authentic reading assessment: Practices and possibilities. Newark, DE: International Reading Association.
Barret, H. (2001). Electronic portfolio development strategies. Presented at the PT3 annual grantees meeting, Washington, D.C.
Belle, D. (1999). Traditional assessment versus alternative assessment. ERIC: The Educational Resources Information Center. Retrieved January 20, 2006, from http://SearchERIC.org/ericdc/ED431012.htm
Best, J. W. (1977). Research in education (3rd ed.). Prentice-Hall, Inc.
Brindley, G. (1989). Assessment (chapter 20). In R. Carter & D. Nunan (Eds.), Teaching English to speakers of other languages (pp. 137-143). Cambridge: Cambridge University Press.
Brown, H. D. (2000). Principles of language learning and teaching (4th ed.). White Plains, NY: Pearson Education.
Brown, J. D. (1988). Understanding research in second language learning.
Cambridge: Cambridge University Press.
Coombe, C., & Barlow, L. (2004). The reflective portfolio: Two case studies from the United Arab Emirates. English Teaching Forum, 18-22.
Dewey, J. (1963). Experience and education. New York: Macmillan.
Educational Testing Service. (2003). Reading for TOEFL. Princeton, NJ: ETS.
Genesee, F., & Upshur, J. A. (1996). Classroom-based evaluation in second language education. Cambridge: Cambridge University Press.
Gordon, R. (1998). A curriculum for authentic learning. The Journal of Education Digest, 4-8.
Hale, G. A., Stansfield, C. W., Rock, D. A., Hicks, M. M., Butler, F. A., & Oller, J. W., Jr. (1988). Multiple-choice cloze items and the Test of English as a Foreign Language (TOEFL Research Report No. 26). Princeton, NJ: Educational Testing Service.
Halpern, D. F. (1997). Critical Thinking across the curriculum: A brief edition of thought and knowledge. Mahwah: Lawrence Erlbaum Associates.
Hirvela, A., & Pierson, H. (2000). Portfolios: Vehicles for authentic self-assessment (chapter 6). In G. Ekbatani & H. Pierson (Eds.), Learner-directed assessment in ESL (pp. 105-126). Mahwah: Lawrence Erlbaum Associates.
Hypki, C. (1994). Thinking about learning and learning about thinking: Using portfolio assessment in adult education. A handbook for instructors and tutors. ERIC: The Educational Resources Information Center. Retrieved January 20, 2006, from http://SearchERIC.org/ericdb/ED384778.htm
Kaczmarek, N. (1994). Using portfolios: How do you begin? Paper presented at the National Catholic Educational Association Convention and Exposition. ERIC: The Educational Resources Information Center. Retrieved January 20, 2006, from http://SearchERIC.org/ericdb/ED405341.htm
Khodadady, E. (1997). Schemata theory and multiple choice item tests measuring reading comprehension. Unpublished PhD thesis, University of Western Australia.
Khodadady, E., & Herriman, M. (2000). Schemata theory and selected response item tests: From theory to practice. In A. J. Kunnan (Ed.), Fairness and validation in language assessment (pp. 201-222). Cambridge: Cambridge University Press.
Khodadady, E., & Seif, S. (2006). Measuring translation ability and achievement: A schema-based approach. Paper presented at the TELLSI 2006 Conference: Language Education in Iran and Beyond, Razi University of Kermanshah, Iran.
Mohammad Yari, A. (2004). Investigating people’s critical thinking ability and its effect in the workplace. Unpublished MA thesis, Ferdowsi University of Mashhad.
Moya, S. S., & O’Malley, J. M. (1994). A portfolio assessment model for ESL. The Journal of Educational Issues of Language Minority Students, 13, 13-36. Retrieved January 16, 2006, from http://www.ncbe.gwu.edu
Psychcorp. (2000). Watson-Glaser critical thinking test. Retrieved January 29, 2006, from http://www.pantesting.com/products/PsychCorp/WGCTA.asp
Richards, J. & Rodgers, T. (2001). Approaches and methods in language teaching. Cambridge: Cambridge University Press.
Seiter, D. M. (1995). Assessing the influence of portfolios on higher order thinking skills. Retrieved January 16, 2006, from http://SearchERIC.org/ericdb/ED391737.htm
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.
Valdez-Pierce, L., & O’Malley, J. M. (1992). Performance and portfolio assessment for language minority students. NCBE Program Information Guide Series, 9. Retrieved January 16, 2006, from http://www.ncbe.gwu.edu/ncbepubs/pigs/pig9.htm
Wang, T. (2004). Learning experiences in developing electronic portfolios in a master’s educational technology program: A case study. Unpublished PhD thesis, University of Ohio.
Appendix 1
Self/Peer-Assessment
Name: ______    Family name: ______
Reading No.: ______    Date: ______

Please put a check mark (✓) in the box which best describes your/your classmate’s reading activities. Each activity is rated as Almost Always, Sometimes, or Almost Never.

Activities:
1. Participates in small group discussion.
2. Shares responses to the reading.
3. Comprehends questions.
4. Gives quality responses during small group discussions.
5. Uses structure and background knowledge (look-in strategy).
6. Uses other sources (dictionary, thesaurus, content, and text).
7. Integrates strategies and sources.
8. Uses meaning cues.
9. Uses structural cues.
10. Uses visual cues.
11. Integrates cues (meaning, structural, visual).
12. Makes predictions and reads to find out if they were right.
13. Reads the sentences before and after a word he doesn’t know.
14. Guesses the meanings of the words he doesn’t know from the context.
15. Looks for the main idea.
16. Discusses what he reads with others.
17. Has the ability to self-correct.
18. Recognizes cause-and-effect relationships between/among sentences.
19. Draws inferences.
20. Can provide examples from personal experience and/or prior knowledge and uses relevant examples from the text.
21. Recognizes logical order.
22. Recognizes paraphrasing.
Adapted from Au, K. H. (1994).
Appendix 2
Response to the readings
(Self-reflection)
Name: Date:
Reading Number: Group:
The main topic in this reading is ….
Have you read this reading before?
The author concludes that …
How is the reading related to your everyday life? What does the author want you to learn?
One of the new words my group talked about was ….
How did your group figure out its meaning?
Did you like the reading? Why or why not?
What problems did you have when you read this passage? (What were your weaknesses in reading?)
What do you think about your reading now? Do you think you made progress? What are your strengths in reading now?
Adapted from Au, K. H. (1994).
***This research paper is the intellectual property of Akram Faravani. Contact Akram at afaravani AT yahoo DOT com.