#73, Research Paper: ‘The Relationship Between the Use of Portfolio Assessment and Students’ Improvement in Listening Comprehension’ by Majid Ghelichi

** This paper is submitted by Majid Ghelichi (MA in TEFL) of Iran University of Science and Technology.

Abstract

Much criticism of the traditional methods of assessment has been presented in the literature, along with convincing arguments in favor of alternative methods of assessment. The aim of this study, therefore, was to investigate the effect of a form of assessment called portfolio assessment on the improvement of the listening comprehension of Iranian EFL university students. Two groups of freshman students at Allameh Tabatabaee University served as the subjects of the study. They attended laboratory classes two days a week, two hours a day. One class served as the experimental group and the other as the control group. The experimental group received the treatment, the use of portfolio assessment over a ten-week period, while the control group received no treatment and was managed by the ordinary method of teaching and evaluation. At the end of the experiment, a test of listening comprehension was administered to both groups. The experimental group performed significantly better than the control group, showing that portfolio assessment had a positive effect on the listening comprehension of the students exposed to this method of assessment.

Key words: Portfolio Assessment, Listening Comprehension Improvement, Conference Sessions, Portfolio, Learner Reflection, Learner Involvement, Critical Thinking, Self-Evaluation, Self-Monitoring

Introduction

The current trend in educational systems for measuring students’ ability in general, and language ability in particular, is that teachers administer a single examination at the end of an instructional course. Whatever grade the students get on this final examination determines whether they are qualified to pass the course. Scholars, however, believe that we can never obtain the true score of an individual’s language ability from just one single test (Bachman, 1990; Lefrancois, 1991; Gipps, 1994; Genesee and Upshur, 1996). The only feasible approach, as Bachman (1990) puts it, is that “If we could obtain measures for an individual under all the different conditions specified in the universe of possible measures, his average score on these measures might be considered the best indicator of his ability” (p. 191).

Some research studies have been conducted on the effectiveness of new methods of testing and assessment, which are assumed to be more rigorous tools both for measuring learners’ ability and for helping learners become real assessors of their own ability, in such a way that they can proceed towards real, meaningful learning. Some instances of research on the relationship between new forms of assessment and learners’ achievement have appeared in the literature (Gimenez, 1995; Matsumoto, 1996; Rubin et al.; Puhl, 1997; Baker, 1991; Black and Wiliam, 1998). Of course, there is no total agreement among the findings of these pieces of research about the effectiveness of new trends in assessment, but they mostly suggest that such trends are helpful.

These research studies have been directed towards investigating the relationship between alternative forms of assessment and learner progress in the areas of composition writing and ESP. Listening, as one of the four major skills of language, however, has always been neglected in terms of both teaching and testing. The major reason for this neglect is the belief that this skill is automatically acquired by the learner as he/she learns how to speak the language; in other words, merely exposing the students to the spoken language is assumed to be adequate instruction for mastering listening (Taylor, 1981; Call, 1985; Nicholas, 1988). Furthermore, no empirical research on the relationship between alternative forms of assessment, as opposed to traditional ones, and learner progress in listening comprehension has been observed in the literature, at least in Iran. The present study, therefore, has tried to investigate the degree of effectiveness of using portfolio assessment, as an alternative form of assessment, for improving the listening comprehension of university students in an EFL setting in Iran. It has aimed at involving the students in the process of teaching, learning, and assessing listening comprehension in the form of portfolio assessment and corrective/supportive feedback, rather than mere resort to traditional forms of assessment such as single, one-shot tests, which have deprived learners of any active involvement and participation in the complex teaching-learning-testing processes of the classroom.

The expectation has been that using portfolio assessment would help alleviate the aforementioned inadequacies of the existing commonly-used methods of assessment and evaluation which not only fail to help us approximate the true scores as reliable indicators of learners’ ability but also make  learners  form negative and undesirable attitudes towards the whole process of learning and assessment.

Portfolio Assessment

“Portfolio assessment is an ongoing process involving the students and teacher in selecting samples of student work for inclusion in a collection, the main purpose of which is to show the students’ progress… perhaps the greatest overall benefit of using portfolio assessment is that the students are taught by example to become independent thinkers, and the development of their autonomy as learners is facilitated” (Gipps, 1994, p.3). Assessment, according to Gipps, is “a wide range of methods for evaluating pupil performance and attainment including formal testing and examination, practical and oral assessment, classroom-based assessment carried out by teachers, and portfolios” (p.vii). Formative assessment takes place during the course of teaching and is used essentially to feed back into the teaching/learning process. The role of feedback in teaching, in the form of comments or information learners receive on the success of a learning task, either from the teacher or from other learners, is something to be taken as axiomatic.

Portfolios

A good account of portfolios and their application is given by Genesee and Upshur (1996) as follows:

A portfolio is a purposeful collection of students’ work that demonstrates to the students and others their efforts, progress, and achievements in given areas. Student portfolios have been inspired by professionals such as photographers and architects as a means of keeping a record of their accomplishments to show to others. Second language portfolios can have a very specific focus, such as writing, or a broad focus that includes examples of all aspects of language development. Students should have their own portfolios, which can be a conventional file folder, a small cardboard box, a section of a file drawer, or some other such receptacle (p.99).

Genesee and Upshur maintain that the primary value of portfolios is in the assessment of student achievement. They are particularly useful in this respect because they provide a continuous record of students’ language development that can be shared with others.

Genesee and Upshur clearly state that reviewing portfolios can increase the students’ involvement in and ownership of their own learning. The positive effects of portfolios on student learning arise from the opportunities they afford students to become actively involved in assessment and learning (p.99).

Uses, Benefits, and Advantages of Portfolios

Genesee and Upshur (pp.99-100) cite the benefits of portfolios as follows:

  • a continuous, cumulative record of language development,
  • a holistic view of student learning,
  • insights about progress of individual students,
  • opportunities for collaborative assessment and goal setting with students,
  • tangible evidence of student learning to be shared with parents, other educators, and other students,
  • opportunities to use metalanguage to talk about language.

Portfolios promote:

  • student involvement in assessment,
  • responsibility for self-assessment,
  • interaction with teachers, parents, and students about learning,
  • students’ ownership of and responsibility for their own learning,
  • excitement about learning,
  • students’ ability to think critically about school work,
  • collaborative, sharing classrooms.

A comprehensive account of the advantages and functions of portfolios is given by Brown and Hudson (pp.664-665), which further encourages the use of portfolios in education:

The literature reports at least three advantages for portfolio assessments. We see these advantages as falling into three categories : strengthening students’ learning, enhancing the teacher’s role, and improving testing processes.

According to Brown and Hudson, portfolios may strengthen student learning in that they :

a) capitalize on work that would normally be done in the classroom anyway; b) focus learners’ attention on learning processes; c) facilitate practice and revision processes; d) help motivate students; e) increase students’ involvement in the learning processes; f) foster student-teacher and student-student collaboration; g) provide means for establishing minimum standards for classroom work and progress; h) encourage students to learn the metalanguage necessary for students and teachers to talk about language growth; and i) permit the assessment of the multiple dimensions of language learning.

Portfolios and Records

Inger (1993) refers to the usefulness and versatility of portfolios. An eminent characteristic of portfolios, as he mentions (p.2), is that there is more insistence on individual as opposed to collaborative work. Another important characteristic is that evaluation schemes tend to be analytic, breaking the work down into component parts and features and analyzing each separately, rather than holistic, i.e., giving an assessment of the work as a whole.

Inger goes on to describe another technique of assessment, which he calls ‘records’. He says that:

Some assessment models emphasize student-maintained records as an ideal way of encouraging students to assess their own strengths and weaknesses. As students engage in keeping records of their own work, they develop skills that are crucial to success in school and the work place, i.e., the capacity to organize information and store it in such a way that they can easily retrieve it (p.2).

Portfolio Assessment and Learner Reflection

Gottlieb (1995) and O’Malley and Valdez Pierce (1996) hold that one of the main benefits of portfolio assessment is generally recognized to be the promotion of learner reflection (cited in Santos, 1997). Huerta-Macias (1995) maintains: “As part of the portfolio process, students are asked to think about their needs, goals, weaknesses, and strengths in language learning” (cited in Santos, 1997).

The benefits of learner reflection in portfolios have been described by Brookfield (1995) as being of crucial importance to teachers in their professional practice.

According to Santos (1997), portfolio reflection benefits learners, teachers, and the curriculum in the following ways:

1. It helps us take informed action.

Learner reflection encourages students to examine their efforts and the consequences of their actions. It also helps them to see connections between their goals and beliefs about their learning and their actual learning behavior. Students can adjust their goals according to what they want and know they can do. Teachers, too, can take informed action based on what they learn about their students from their reflections. They can see whether adjustments in the curriculum or teaching approach need to be made. They can also see which classroom activities were most valuable to the learners and, thus, worth keeping as part of the curriculum.

2. It helps us avoid pointless blaming.

Without reflection, learners often suffer (quietly), wondering why they are not improving. They may blame themselves, the teacher, the text, or the course activities for their boredom, failure, or lack of progress. Learner reflection encourages students to identify obstacles in their learning and quickly begin considering solutions. Based on their reflection, teachers can adjust the curriculum or change the teaching approach to transform the obstacles into achievable goals.

3. It creates a healthy learning environment.

Through learner reflection, students find that self-analysis can be a rewarding experience. Reflection in portfolios is an essential activity which enables teachers and learners to dispel an assessment myth — that assessment is ‘something done to students on their work’ (Sweet, 1995, cited in Santos, 1997). Santos maintains that learner reflection allows students to contribute their own insights about learning to the assessment process. It enhances feelings of learners’ ownership of their work. It increases opportunities for dialogue between teachers and students about curriculum goals and learning process.

Assessment should create an atmosphere in which students feel responsible for decision making and for their own learning. In the course of teaching and assessing, students should not be under the absolute control of the teacher without actively and creatively participating in classroom activities and procedures. Genesee and Upshur (1996) maintain:

Many methods of assessment treat students as objects of evaluation and place the responsibility and the task of assessment in the hands of teachers or other adults. There is little opportunity provided by these methods for students to assume positions of responsibility and control. In comparison, portfolios make students the agents of reflection and decision making and, thus, give them control of their own learning. They encourage students to reflect on their own learning, to assess their own strengths and weaknesses, and to identify their own goals for learning. Teachers do this by asking students to reflect on their work and by being supportive and attentive during such reflections (p.105).

Genesee and Upshur suggest some ways of accomplishing this, some of which are presented below:

1. During portfolio conferences, allow students to control the review process; ask them to describe their current strengths and weaknesses and to indicate where they have made progress; ask them to give evidence of this progress;

2. Be interested, supportive, and constructive when providing response to or feedback about portfolio pieces and students’ reflections on their work;

3. Ask students how they think they can strengthen weaknesses and what the teacher can do to help;

4. Collaborate with the students to set goals for language development;

5. Encourage students to reflect on their work in the presence of other students so that they see this as an integral aspect of classroom teaching and learning and so that they become comfortable with self-assessment and adept at giving supportive feedback to their peers; it is important when students share their portfolios that the interaction be non-competitive and student-centered (pp.105-106).

So, it is important that portfolios create an interactive situation in which students are more and more involved in learning.

As is implied by these arguments and guidelines, what makes portfolio assessment an enhancing device in the service of meaningful learning and the development of critical thinking is the washback effect of the assessment and feedback created by portfolio assessment. A number of studies (Alderson and Hamp-Lyons, 1996; Shohamy, Donitsa-Schmidt, and Ferman, 1996; Wall, 1996; Watanabe, 1992, 1996a, 1996b) have confirmed the existence and complex nature of the washback effect.

Listening Comprehension Improvement

Students’ improvement in listening comprehension is operationally defined as the difference between the scores obtained by the experimental and control groups on a test of listening comprehension after the treatment.

Methodology

This study was carried out to investigate whether a shift from traditional methods of assessment towards new alternative methods, in this case portfolio assessment, could help students to improve their listening comprehension, among other things by helping them to develop the habits of reflective thinking and self-involvement in the teaching-learning processes.

Selection of the participants

The students who participated in this research project were 63 freshman students at Allameh Tabatabaee University: 35 female and 28 male, between 20 and 23 years of age. They had enrolled in two laboratory classes of listening comprehension, which they attended two days a week, two hours a day. One class, consisting of 31 students, served as the experimental group, and the other class, consisting of 32 students, served as the control group.

Instruments

Two tests were used in this study. The first, administered at the outset of the study, was a Michigan Test of general language proficiency consisting of four subtests: grammar, vocabulary, reading comprehension, and listening comprehension. It was administered to check the degree of homogeneity of the two groups’ general language proficiency. The other test, given at the end of the research project, was the same listening comprehension subtest of the Michigan Test that had been administered at the beginning of the treatment. It was administered to check whether there was any significant difference in the performance of the two groups. The whole Michigan Test consisted of 120 items, and the listening comprehension subtest consisted of 20 items. The textbook ‘Person to Person’ was used as the course book in the lab classes. During the research project, some questions about the contents of the course book were asked. The answers to these questions and the follow-up comments and feedback given by the teacher and the students formed part of the portfolio completion. Eight questions which were repeatedly answered and commented on by the students throughout the treatment were also used as part of the portfolio assessment. These eight questions were not related to the contents of the course book but were aimed at involving the students in classroom procedures such as teaching, learning, and assessing.

Procedures

At the beginning of the study, the Michigan Test was administered to check the degree of homogeneity of the two groups’ general language proficiency. The results showed that the two groups were homogeneous in their general language proficiency. Then the ten-week experiment began as follows:

1. In the experimental class, the principles of portfolio assessment were explained to the students in English as follows:

One) They were told that the criterion for passing or failing the course would not be a single test score at the end of the semester; rather, the whole range of classroom activities, such as answering questions, reviewing and assessing one’s own performance, active participation in all classroom activities and procedures, and the teacher’s comments and evaluation both in class and in the portfolios throughout the semester, would be the criteria for passing or failing.

Two) The concepts of reflective thinking, self-involvement, self-monitoring, self-assessment, and portfolio assessment were defined and explained to the students in the experimental class.

Three) The students were required to provide folders with some blank sheets of paper on which to write the contents of their portfolios and to bring these folders with them every session.

Four) The following eight questions were explained and distributed to all the experimental subjects. The students were told in detail that these questions were aimed at involving them in the process of portfolio assessment, i.e., the questions encouraged reflective thinking and critical analysis of their own learning and of the teacher’s method of teaching. The students were also required to keep these questions and their answers in their own portfolios.

The eight questions were as follows :

1. What do you think of listening comprehension? Do you think you are making progress in LC? If yes, to what extent; if no, what do you think is the cause of your failure? Give reasons to support your answers.

2. What parts of the classroom activities did you like most and what parts did you not enjoy? Why?

3. What activities can help you make more progress and overcome your weaknesses in LC?

4. How did you feel while you were listening to the tape-recorded materials today? Explain the reasons for your feelings.

5. Give your own suggestions about what the instructor should do in class and how he should assess your performance.

6. What are your strengths and weaknesses in LC so far?

7. Record any incorrect pronunciation that you hear in class.

8. Ask the lecturer one or two questions about the contents of his/her lecture. If you don’t have any questions, mention one or two important points about what he/she said.

Five) The nature of the conference sessions was explained to the students. They were told that they had to be ready to discuss the contents of their portfolios in the conference sessions.

2. The students listened to tape-recorded materials in their booths, and the researcher checked each individual student’s performance by asking some questions about the listening material. Then particular feedback and, where needed, correction and explanation were given to that student. This routine went on until all the students had been checked and given particular feedback on the quality of their work.

3. Some questions about the contents of the listening materials were asked of the whole class, and the students were required to answer them and keep the answers in their portfolios. Feedback was given to some students, along with possible explanation of the causes of failure, as far as time allowed. This part usually took about ten minutes of class time. The students had been told that their answers to these questions and to other questions asked at their booths, as well as their participation in conference sessions and the quality of their portfolios, would be considered the pass-fail criteria. Although the purpose of this stipulation was to check whether a change in students’ attitudes towards assessment, i.e., a shift from single-case tests towards formative assessment in the form of portfolio assessment, could help enhance students’ achievement and learning, none of these factors was counted towards the Listening Comprehension test scores at the end of the treatment. Rather, the sole criterion of the final LC test scores was the performance of the two groups of subjects on that test after the experiment had finished.

4. At the end of each class, five to ten minutes were devoted to the students’ answering the eight questions and keeping the answers in their portfolios.

5. After each class, the folders were collected by the researcher and evaluated at home. The researcher tried to give comments, suggestions, and solutions for each particular student’s portfolio. These were in the form of qualitative remarks on a scale of average, good, very good, excellent, etc., complete explanations of each particular problem, or full comments on each piece of student answer. For example, one student had answered question number 6, “What are your strengths and weaknesses so far?”, as follows:

“ I think not knowing every new word is my main problem.”

The researcher’s suggestion was: “You don’t have to worry about that. You usually do not need to know every new word while you are listening. What you can or should do is to make use of the whole utterance (sentence) and the context in which that utterance is used to guess the meaning of that new word and so understand the entire message.”

Another student had answered question number 7, “Record any incorrect pronunciation that you hear in class.”, in this way:

“One of the students said ‘feorite’ instead of ‘favorite’.”

The researcher commented “very good”.

After the portfolios had been assessed by the researcher, they were returned to the students, and those student suggestions about what the teacher should do in class that were appropriate and practical were incorporated into the classroom procedures. For example, more repetition, work on stress and intonation patterns, and the teacher’s modeling of difficult sentences were some of the suggestions given by the students.

6. Conference sessions were held every other week for the students to discuss their portfolios fully and for the researcher to make sure that the students were really actively engaged in the process of portfolio assessment, self-reflection, and self-assessment, since giving instantaneous oral reports on the portfolios in class could signal that the students had been thinking about their own portfolios and about their learning strengths and weaknesses. In these conference sessions, the students had to compare their answers to the eight questions in previous sessions with their answers in subsequent sessions to check whether they had made any progress. The whole class was required to listen attentively while each individual student gave his/her report. Another purpose of these sessions was to identify the students’ strengths and weaknesses, as they themselves reported them and as the researcher himself understood them from their reports, and, if necessary, to provide feedback and remedies and/or to make modifications in the teaching-assessing methods. For example, assessing through exams, i.e., written questions; explaining listening passages sentence by sentence and part by part; orderliness of the classroom activities; and more and more repetition were some of the bases for modification of the teaching-assessing procedures suggested by the students.

In the control group, however, the classroom was managed and conducted in the usual way. The students listened to the tape-recorded materials and worked on the exercises without any reflection on their own work and without any feedback from the teacher unless they asked for it. No conference sessions were held. Instead of the conference sessions and the self-reflection and self-monitoring activities that were part of the classroom procedures in the experimental group, the students in the control group were required to tell stories as a placebo.

Results

The data gathered were analyzed through some statistical tests which are presented below:

Analysis 1.1. Estimating the Normality of Distribution of the Scores of the Experimental and Control Groups on the Michigan GLP Test

Table 1.1. Descriptive Statistics of EG and CG Scores on the Michigan GLP Test

Group          Variance   Skewness   Kurtosis
Experimental   166.54     .026       -.273
Control        188.24     .276       -.682

The nature of this research study involves the use of the statistical T-Test to compare the means of the two groups for the purposes of 1) determining the homogeneity of the subjects and 2) checking the differences between the performance of the two groups on the Listening Comprehension test. Some basic assumptions must be met for the T-Test to be used appropriately. Farhady (1995) mentions four assumptions underlying the T-Test as follows:

The first assumption is that the scores are measured on an interval scale. That is, the scores are continuous. The second assumption is that every subject should be assigned to only one group in independent group T-test. That is, one subject cannot be a member of both experimental and control groups. The third assumption is that every subject’s score must be independent of any other subject’s score. And the last assumption is that the scores should be approximately normally distributed, and the variances of the groups should not be significantly different from each other ( pp.351-2).

The results of Analysis 1.1 clearly show that these assumptions have been met in this set of scores. The skewness and kurtosis values obtained in the analysis show that the distribution of the data is not abnormal. The other three assumptions have also been met: the scores are measured on an interval scale and are thus continuous; each subject’s score is independent of every other subject’s score; and every subject was assigned to only one group, i.e., there were two groups of students in two distinct classes, one of which served as the experimental group and the other as the control group. Therefore, all the assumptions underlying the T-Test have been met, and we are justified in using the T-Test for our data analyses.
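The normality check above relies on sample variance, skewness, and kurtosis. The paper’s raw scores are not published, so the sketch below is only illustrative: it simulates two sets of scores whose means and standard deviations mimic Table 1.2, and shows how such descriptive statistics are typically computed with Python’s scipy.

```python
import numpy as np
from scipy import stats

# Illustrative data only -- the study's raw scores are not published,
# so we simulate scores with roughly the group statistics of Table 1.2.
rng = np.random.default_rng(0)
eg_scores = rng.normal(loc=84.84, scale=12.91, size=31)  # experimental group
cg_scores = rng.normal(loc=83.22, scale=13.72, size=32)  # control group

for name, scores in [("Experimental", eg_scores), ("Control", cg_scores)]:
    variance = np.var(scores, ddof=1)      # sample variance
    skewness = stats.skew(scores)          # asymmetry of the distribution
    kurt = stats.kurtosis(scores)          # excess kurtosis (0 for a normal)
    print(f"{name}: var={variance:.2f}, skew={skewness:.3f}, kurt={kurt:.3f}")
```

A common rule of thumb, and the threshold applied in Analysis 1.3 below, is that skewness and kurtosis values within roughly +/- 1.5 are consistent with an approximately normal distribution.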

Analysis 1.2. Comparison of the Means of the Two Groups on the Michigan GLP Test Using an Independent Samples T-Test

Table 1.2. Group Statistics of EG and CG on the Michigan GLP Test

Group   N    Mean    Std. Deviation   Std. Error Mean
EG      31   84.84   12.91            2.32
CG      32   83.22   13.72            2.43

Table 1.3. Independent Samples T-Test for the Michigan GLP Test

Levene's Test for Equality of Variances:
                              F       Sig.
Equal variances assumed       .228    .635

T-test for Equality of Means:
                              t      df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI of the Difference (lower, upper)
Equal variances assumed       .482   61       .631              1.62              3.36                    -5.10, 8.33
Equal variances not assumed   .483   60.949   .631              1.62              3.35                    -5.09, 8.33

The Michigan Test measures a whole construct called general language proficiency. It was given to the subjects prior to the experiment to determine the degree to which they were homogeneous in their background language ability. The results of the T-Test analysis clearly show that there was no significant difference in the background language ability of the two groups at the outset. The t-observed value obtained in this analysis is .482, and the P-value is .631. The critical value of t corresponding to 61 degrees of freedom at the .05 level of significance is almost 2.00. Thus, the t-observed value is smaller than the t-critical value, and the P-value is greater than the level of significance, implying that there was no significant difference between the two groups. We can therefore be reasonably sure that the subjects were, to a great extent, homogeneous in general language proficiency prior to the treatment, which contributes to the internal validity of the research project.
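This comparison can be reproduced from the published group statistics alone. The sketch below (the paper does not name its analysis software; scipy’s `ttest_ind_from_stats` is simply one standard routine) feeds the Ns, means, and standard deviations from Table 1.2 into an independent-samples T-Test and also recomputes the critical value of t.

```python
from scipy import stats

# Group statistics reported in Table 1.2 (Michigan GLP Test).
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=84.84, std1=12.91, nobs1=31,   # experimental group
    mean2=83.22, std2=13.72, nobs2=32,   # control group
    equal_var=True,                      # Levene's test supported equal variances
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # t ~ 0.48, p ~ 0.63 (Table 1.3)

# Two-tailed critical value of t for df = 61 at the .05 level.
t_critical = stats.t.ppf(1 - 0.05 / 2, df=61)
print(f"t-critical = {t_critical:.2f}")        # almost 2.00
```

Since the observed t (about .48) is well below the critical value and p exceeds .05, the computation confirms the conclusion that the groups did not differ in background proficiency.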

Analysis 1.3. Estimating the Normality of Distribution of the LC Test Scores

Table 1.4. Descriptive Statistics of the LC Test Scores

Group          Variance   Skewness   Kurtosis
Experimental   2.57       -.264      -.749
Control        2.49       .858       1.15

Table 1.4 shows that there is no significant difference between the variances of the two groups on the LC test. It also demonstrates that the scores are approximately normally distributed, because the skewness and kurtosis values are within +/- 1.5. So one of the assumptions underlying the T-Test has been met. The other three assumptions have also been met, as explained under Analysis 1.1 above. The use of the T-Test for comparing the means of the LC test scores is therefore justified.

Analysis 1.4. Examining the Null Hypothesis: Comparison of the Means of the Two Groups on the LC Test

Table 1.5. Group Statistics of EG and CG on the LC Test

Group    N     Mean    Std. Deviation   Std. Error Mean
EG       31    16.03        1.60              .29
CG       32    12.34        1.58              .28

Table 1.6. Independent Samples T-Test for the LC Test

Levene's Test for Equality of Variances
                                  F       Sig.
                                .031      .862

t-test for Equality of Means

                                  t       df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
Equal variances assumed        9.206      61            .000              3.69                  .40                 2.89           4.49
Equal variances not assumed    9.204     60.865         .000              3.69                  .40                 2.89           4.49

Table 1.5 shows that the mean score of the experimental group on the Listening Comprehension Test is 16.03 and the mean score of the control group is 12.34. Table 1.6 shows the results of the t-test analysis. The t-observed value in this analysis is 9.206, and the p-value is .000. The null hypothesis of the study was tested at the .05 level of significance. The critical value of t for 61 degrees of freedom [the total number of subjects in the two groups was 63] at the .05 level for a two-tailed test is almost 2.00. The t-observed value is far greater than the t-critical value, and the p-value is well below the .05 level of significance. Therefore, the null hypothesis, "There is no relationship between portfolio assessment and listening comprehension improvement of Iranian EFL freshmen university students," can be safely rejected. The results clearly indicate that the experimental group, which was homogeneous with the control group prior to the experiment, outperformed it after the treatment. This difference in favor of the experimental group indicates that the treatment was effective as far as this research project is concerned.
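Because Table 1.5 gives full summary statistics for both groups, this result can be checked directly with SciPy's summary-statistics t-test. The following is a verification sketch under the assumption of equal variances (which Levene's test supported); the raw scores themselves are not published:

```python
from scipy import stats

# Summary statistics from Table 1.5 (LC test)
t, p = stats.ttest_ind_from_stats(
    mean1=16.03, std1=1.60, nobs1=31,  # experimental group (EG)
    mean2=12.34, std2=1.58, nobs2=32,  # control group (CG)
    equal_var=True,                    # Levene's test was non-significant (Sig. = .862)
)

# The paper reports t = 9.206 from unrounded means; the rounded inputs give ~9.21
print(round(t, 2), p < .05)
```

The recomputed t agrees with the tabled value to rounding error, and the p-value is far below .05, confirming the rejection of the null hypothesis reported above.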

Conclusions

In the previous section we saw that the data analysis revealed a significant difference between the experimental group and the control group in their performance on a test of listening comprehension. Such a significant difference after the treatment, i.e., portfolio assessment, clearly indicates that getting learners actively involved in their own learning processes and in classroom procedures, through self-monitoring and teacher feedback, enhances their learning and facilitates their performance on learning tasks, compared with merely requiring them to attend class and finally sit for tests without any reflection. Placing classroom activities and procedures in an interactive framework in the form of portfolio assessment, within which learners can be involved in and aware of their own learning processes and can monitor and modify their own performance, can lead to what all teaching activities should pursue: meaningful learning. Teaching accompanied by portfolio assessment is teaching which, as Lefrancois (1991) puts it, empowers students; in his words, it "empowers students by enabling them to do things they could not otherwise do" (p. 10). Beyond performance on particular tasks, it enables students to develop the habits of critical thinking and self-evaluation. Considering the nature and results of this study, we may be in a position to say that learners' critical thinking about and evaluation of their own work in portfolio assessment, together with teacher feedback, can foster meaningful, long-lasting learning in a way that merely memorizing and regurgitating unrelated, non-internalized pieces of information cannot.

Critical, reflective thinking and feedback, which are important elements in portfolio assessment, can help learners benefit increasingly from teaching and identify strengths and weaknesses both in their own work and in the teacher's work. Awareness of strengths leads to a feeling of success, which results in greater motivation for further learning. Awareness of weaknesses, gained through the teacher's corrective feedback and self-evaluation, leads to conscious attempts to rectify those weaknesses and make progress, provided that such awareness is managed through self-reflection and teacher feedback in a way that does not lead to disillusionment and a sense of defeat. Such are the effects of portfolio assessment. The findings of this study are in accordance with those reached by Gimenez (1996), Matsumoto (1996), Puhl (1997), and Black and Wiliam (1998), who all found positive relationships between continuous and portfolio assessment and substantial learning gains.

References

Alderson, J. C., & Wall, D. (1993a). Does washback exist? Applied Linguistics, 14(2), 115-129.

Alderson, J. C., & Wall, D. (1993b). Examining washback: The Sri Lankan impact study. Language Testing, 10, 41-69.

Allwright, R. L. (1984). The importance of interaction in classroom language learning. Applied Linguistics, 5, 156-171.

American Psychological Association. (1991). Publication manual of the American Psychological Association (3rd ed.). Washington, DC: Author.

Anonymous. (1999). APA home page. Internet article, 30(9).

Ashouri, V. (1997). The impact of repeated measurement on the language achievement of EFL students. Unpublished MA thesis, Islamic Azad University.

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.

Bailey, K. M. (1996). Working for washback: A review of the washback concept in language testing. Language Testing, 13, 257-279.

Baker, N. W. (1993). The effect of portfolio-based instruction on composition students' final examination scores, course grades, and attitudes towards writing. Research in the Teaching of English, 27(2). Southern Missouri State University.

Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C.-L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85(2), 89-99.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Online article, Kappan Professional Journal.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 7-74.

Blanche, P., & Merino, B. J. (1989). Self-assessment of foreign language skills: Implications for teachers and researchers. Language Learning, 39, 313-340.

Bowen, J. D., Madsen, H., & Hilferty, A. (1985). TESOL: Techniques and procedures. Newbury House, 73-99.

Brown, J. D., & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32(4), 653-675.

Celce-Murcia, M. (1991). Teaching English as a second or foreign language. University of California, Los Angeles, 63-79.

Chastain, K. (1988). Developing second language skills: Theory and practice. University of Virginia, 190-210.

Cunningham, G. K. (1998). Assessment in the classroom: Constructing and interpreting tests. The Falmer Press.

Davies, A. (1990). Principles of language testing. Basil Blackwell.

Dietel, R. J., Herman, J. C., & Knuth, R. A. (1991). What does research say about assessment? NCREL, Oak Brook, 1-22.

Elliot, S. N. (1994). Creating meaningful performance assessment. ERIC Digest E531. From the ERIC database.

Faez, F. (1999). The washback effect of frequent quizzes on reading ability of EFL students. Unpublished MA thesis, University of Tehran.

Farhady, H. (1995). Research methods in applied linguistics. Payame Noor Publications.

Farhady, H., Jafarpour, A., & Birjandi, P. (1995). Language skills testing: From theory to practice. SAMT Publications, 213-230.

Filer, A. (2000). Assessment: Social practice and social product. 1-10, 83-86, 151-167.

Genesee, F., & Upshur, J. A. (1996). Classroom-based evaluation in second language education. Cambridge: Cambridge University Press.

Gimenez, J. C. (1996). Process assessment in ESP: Input, throughput, and output. English for Specific Purposes, 15(3).

Gipps, C. V. (1994). Beyond testing: Towards a theory of educational assessment. The Falmer Press, 123-143.

Glaser, R., & Silver, E. (1994). Assessment, testing, and instruction: Retrospect and prospect. CSE Technical Report.

Hamp-Lyons, L. (1997). Washback, impact, and validity: Ethical concerns. Language Testing, 14.

Hatch, E., & Farhady, H. (1981). Research design and statistics for applied linguistics. Tehran: Rahnama Publications, 18-32.

Heaton, J. B. (1988). Writing English language tests. London: Longman, 5-14.

Heilenman, L. K. (1990). Self-assessment of second language ability: The role of response effects. Language Testing, 7, 174-201.

Herman, J. L., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Huerta-Macias, A. (1995). Alternative assessment: Responses to commonly asked questions. TESOL Journal, 5(1), 8-11.

Hughes, A. (1989). Testing for language teachers. Cambridge: Cambridge University Press.

Inger, M. (1993). Authentic assessment in secondary education. Internet testing article, No. 6.

Lefrancois, G. R. (1991). Psychology for teaching. University of Alberta, 365-393.

Madsen, H. S. (1983). Techniques in testing. Oxford: Oxford University Press.

Matsumoto, K. (1996). Helping L2 learners reflect on classroom learning. ELT Journal, 50(2).

McNamara, M. G., & Deane, D. (1995). Self-assessment activities: Toward autonomy in language learning. TESOL Journal, 5(1), 17-21.

Messick, S. (1996). Validity and washback in language testing. Language Testing, 13(3), 241-256.

Nicholas, L. N. (1988). Teaching listening comprehension. English Teaching Forum.

Nunan, D. (1988). The learner-centered curriculum. Cambridge: Cambridge University Press.

Nunan, D. (1992). Collaborative language learning and teaching. Cambridge: Cambridge University Press.

O'Malley, J. M., Chamot, A. U., & Kupper, L. (1989). Listening comprehension strategies in second language acquisition. Applied Linguistics, 10(4).

Oscarson [Oskarsson], M. (1989). Self-assessment of language proficiency: Rationale and applications. Language Testing, 6, 1-13.

Paulston, C. B., & Bruder, M. N. (1976). Teaching English as a second language: Techniques and procedures. Cambridge, MA, 127-155.

Perrone, V. (Ed.). (1991). Expanding student assessment. Association for Supervision and Curriculum Development, 22-31, 47-71.

Prodromou, L. (1995). The backwash effect: From testing to teaching. ELT Journal, 49(1).

Puhl, C. A. (1997). Develop, not judge: Continuous assessment in the ESL classroom. English Teaching Forum.

Richards, J. C., Platt, J., & Platt, H. (1992). Longman dictionary of language teaching and applied linguistics. Longman Group UK Limited.

Rivers, W. M. (1968). Teaching foreign-language skills. Chicago: University of Chicago Press, 151-172.

Rubin, J. (1994). A review of second language listening comprehension research. The Modern Language Journal, 78(2).

Salehpour, S. F. (1996). The effect of regular testing on the reading comprehension of Iranian EFL learners. Unpublished MA thesis, Islamic Azad University.

Santos, M. G. (1997). Portfolio assessment. Internet article, 35(2).

Seliger, H. W., & Shohamy, E. (1989). Second language research methods. Oxford: Oxford University Press.

Shohamy, E., Donitsa-Schmidt, S., & Ferman, I. (1996). Test impact revisited: Washback effect over time. Language Testing, 13, 298-317.

SPSS for Windows (Version 9.05) [Computer software].

Taylor, H. M. (1981). Learning to listen to English. TESOL Quarterly, 15, 41-50.

Ur, P. (1984). Teaching listening comprehension. Cambridge: Cambridge University Press, 22-31, 127-147.

Vogely, A. (1995). Perceived strategy use during performance on three authentic listening comprehension tasks. The Modern Language Journal, 79(1).

Wall, D. (1996). Introducing new tests into traditional systems: Insights from general education and from innovation theory. Language Testing, 13, 234-354.

Wiliam, D., & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment. British Educational Research Journal, 22, 537-548.

Zoufan, S. (1997). The effect of discourse markers used in EFL teachers' academic lectures on listening comprehension of EFL students. Unpublished MA thesis, Isfahan, Iran.

Appendix

Contextual Questions during Eight Sessions

Session One

1. What is the topic/subject of the meeting?
2. Who is going to deliver/give a lecture?
3. When does the meeting begin?
4. When does the meeting end?
5. Which Wednesday will the meeting be held on?
6. What is the admission fee for the meeting?
7. Why did the wife suggest going to a dance?
8. What did the husband mean by 'doing something different'?
9. What does 'if' mean in 'I wonder if you …'?
10. Why does the girl say "We have got everything here." to the woman?
11. What does 'pretty' mean in 'It was a pretty hard day.'?
12. Why didn't the man accept his wife's suggestion to go to a dance?
13. How did the husband feel during the day?
14. What does the husband suggest doing?
15. What is the husband doing now?
16. How do you know that he is calling someone?
17. What time does the wife suggest for playing Greedge?

Session Two

1. What were the boys discussing?
2. Why was the small boy crying and screaming?
3. What did the other boy call the event described by his friend who was running?
4. What did the mother ask her son, and what did he say to make his mother believe him?
5. What did the mother order her son to do then?
6. What did the mother promise her son?
7. What did the boy ask the strange creature?
8. Why did the boy leave the strange creature?
9. What did the small boy ask his only brother?
10. Why was the mother surprised, and what did she ask her son?
11. What did the mother say then?

Session Three

1. What was the lecturer's viewpoint about Helen Keller?
2. How do you evaluate her pronunciation? Clear, unclear, or bad? Give your opinion.
3. What was the topic of the conversation?
4. Where did the man want to go?
5. Who gave him the directions?
6. What did the man mean by saying 'I see'?
7. What did he have to do to get to his destination?
8. What part of the conversation helps you to give this answer?
9. What did the woman say in the end?
10. Which of these words or phrases did you hear in the conversation, and which ones did you not hear?

two avenues
main street
straight down
turn right
square
highway
go up the stairs
turn around
reception desk
information desk
hotel reservation

Session Four

Pay attention to the differences in the meaning of these sentences and decide what makes the differences.

1. You go to work.                   You go to work.
2. I don't like this film at all.
   You don't.                        You don't.
3. You are missing the point.        You are missing the point.
4. I don't agree with you.           I don't agree with you.
5. You really don't like Felimi.     You really don't like Felimi.
6. Are you going to give a lecture?  Are you going to give a lecture?

Decide whether the following sentences are pronounced with question intonation or declarative intonation.

1. That is a good book.           2. I know English very well.
3. They bought a new car.         4. You have to study hard.
5. We had better go to school.    6. This is a nice job.
7. You enjoyed the book.          8. They have bought a car.
9. I have to stay here.           10. She

Session Five

1. Write 8 to 10 sentences which you completely understood.
2. Write those sentences which you couldn't understand completely.
3. Write the translation of the sentences that you un

Session Six

1. Where was the boy born, and when?
2. Where did he grow up?
3. What did he do first when he grew up?
4. Was he satisfied with his graduation?
5. How did the dancer feel when she was offered a job?
6. Was she an amateur or a professional dancer?
7. What plans did she have for the future?
8. What did the man mean by 'the best of luck in all you do'?

Session Seven

1. Who taught Ernest Hemingway the outdoor affairs?
2. What were these outdoor affairs?
3. How did he develop his journalistic style?
4. Why was he rejected for military service?
5. What did he do in Europe?
6. Why did he return to the States?
7. What did he do in Paris?
8. What is the theme of 'Son of Horizons'?
9. What was his fourth novel? What was it about?
10. What was 'Across the River and into the Trees' based on?
11. Why did Hemingway kill himself, and how?
12. What is your opinion about Hemingway's life?
13. How do you evaluate his talents, style, and behavior according to what you heard?

Session Eight

1. What did the girl say she was looking for?
2. What did the woman answer her?
3. What was the mother concerned about, and for whom?
4. What did the girl ask about?
5. Why did the girl's mother die?
6. What did the girl claim to be her own?
7. Why was the girl arrested?
8. What did she say to show she was innocent?
9. What did the imprisoned man want the girl to do?
10. What did he say about himself?
11. How did the girl describe her previous conditions?
12. How did the jailer say the girl could escape from the prison?
