Presentation on the MOARS Peer Assessment Module, Thursday 5 June 2014 in Amsterdam
Written by Bill Pellowe
Wednesday, 23 April 2014 11:22
On June 5th, Trevor Holster, J. Lake, and Bill Pellowe will give a paper on the Peer Assessment add-on for MOARS at the 36th Language Testing Research Colloquium (4-6 June 2014) in Amsterdam.

Title: Many-faceted Rasch analysis of peer-assessment as a diagnostic tool
Presenter: Trevor A. Holster (Fukuoka Women's University, Japan)
Venue: VU University Amsterdam, The Netherlands
Date: Thursday 5 June

Language proficiency frameworks can guide curriculum planners and classroom teachers, exemplified by Hadley's (2001) accessible introduction to the ACTFL framework as the basis for instructional planning. The use of rubrics in instruction, however, requires learners both to understand the rubric and to assess the strengths and shortcomings of their own work relative to it, presupposing that learners have sufficient metalinguistic knowledge to understand the rubric, an assumption that Tokunaga (2010) challenges. Previous research has supported the formative use of peer assessment but found peer assessors to be inconsistent in their interpretation of rating rubrics (Cheng & Warren, 2005; Farrokhi, Esfandiari, & Schaefer, 2012; Mok, 2011; Saito, 2008). However, Topping's (1998) notion of "learning by assessing" holds that interaction with the rubric during peer assessment can drive learning, suggesting that instructional explanations based on proficiency frameworks may be less effective than using peer assessment as a mechanism to improve students' understanding of the rubric. Additionally, fit analysis of peer assessors' interpretation of the rubric can guide remedial instruction by identifying rubric items that students struggle to interpret.

Peer assessment was piloted in academic writing classes at a Japanese women's university (n = 24). Students assessed each other's essays using a 9-item rubric and entered their ratings into an on-line database. Many-faceted Rasch analysis found general agreement in rank ordering between students' ratings and teachers' ratings, but also that students tended to rate holistically, did not use the full range of the rating scale, and were much more lenient than teachers. The rating patterns provide evidence that students were unable to interpret the rubric clearly, rather than simply interpreting it differently than teachers, meaning that, as well as being unable to provide diagnostic feedback to each other, students were unlikely to understand feedback from teachers. By anchoring the difficulty of the rubric items against teacher ratings using the Facets software package, the items that misfit most when rated by peer assessors could be identified: "Introduction", "Thesis statement", and "Conclusion". In the second essay, all rubric items improved substantively, but "Thesis statement" was the only item showing substantively and statistically significant improvement greater than the overall improvement. These pilot results indicated points of weakness in the instructional materials and also suggested that the observed gains in proficiency were more likely due to practice and learning by assessing than to instruction and feedback. Revised instructional materials were produced to address the problematic rubric items and are being operationally piloted during the second semester of 2013 (n = 105). Results due in February 2014 will confirm or disconfirm whether the revised instruction improved students' understanding of the rubric and how this affected gains between the first and second essays produced by students.
Reference:
Cheng, W., & Warren, M. (2005). Peer assessment of language proficiency. Language Testing, 22(1), 93-121. doi:10.1191/0265532205lt298oa
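For readers unfamiliar with the technique named in the abstract, a sketch of the standard many-facet Rasch model that packages such as Facets estimate may help; the symbol names here are illustrative, not taken from the paper itself:

$$
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - C_j - D_i - F_k
$$

where $P_{nijk}$ is the probability that essay $n$ receives rating $k$ (rather than $k-1$) from rater $j$ on rubric item $i$; $B_n$ is the writer's proficiency, $C_j$ the rater's severity, $D_i$ the item's difficulty, and $F_k$ the difficulty of scale step $k$. Under this formulation, anchoring the $D_i$ values at estimates derived from teacher ratings, as the abstract describes, lets fit statistics (e.g., infit mean-square, where values well above 1 indicate unexpectedly noisy ratings) flag the rubric items that peer raters interpret inconsistently.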