Dr Stuart Grey



Peer Evaluation of Student-Generated Content

Dec 22, 2021

What follows is based on my reading of the paper below; all errors are my own.

George Gyamfi, Barbara Hanna & Hassan Khosravi (2021). Supporting peer evaluation of student-generated content: a study of three approaches. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2021.2006140


Having students develop study materials is an effective strategy for building a repository of revision items. However, selection criteria are needed to separate high-quality from low-quality resources. The paper describes three peer-assessment approaches for evaluating student-created content. Students were asked to evaluate the quality of a range of resource types and to give feedback on what they considered important when judging quality. Analysis of their comments showed that students went beyond the criteria supplied in the rubrics and applied their own understanding of resource quality.

Student-generated content is not always of high quality, and low-quality student-created resources can hinder effective learning and waste peers' time when shared. At the same time, teachers need guidelines for evaluating both the process and the product of student work before making it available to other students. Peer evaluation offers a way to assess the quality of this type of content.

The first approach is a copy-and-paste template that provides no guidance on the criteria to use when rating.

The second approach is a simple rubric with four categories: Accuracy, Clarity, Organization and Design.

The third and final approach is a data-informed rubric: a refinement of the original, non-data-driven rubric. An assessment rubric is a tool that both guides students' productions and facilitates the assessment process, enabling instructors to evaluate, measure, provide feedback, and report on achievement. (A simple rating sketch follows the list of benefits below.)

These approaches have two key benefits:

  • Student involvement in the creation of content can build large repositories of useful learning resources in a short time, thus reducing the burden on teachers.

  • Content creation deepens students' understanding of course material and enables them to engage meaningfully with their work and become aware of what they produce.
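
To make the rubric-based approaches concrete, here is a minimal sketch of recording a reviewer's per-criterion scores and averaging them into an overall rating. The four criteria come from the simple rubric above; the 1-5 scale and the plain-mean aggregation are my own assumptions, not the paper's design.

```python
# Hypothetical rubric-based peer rating; the scale and averaging
# scheme are illustrative assumptions, not the paper's method.
from statistics import mean

CRITERIA = ["Accuracy", "Clarity", "Organization", "Design"]

def overall_rating(scores: dict[str, int]) -> float:
    """Average one reviewer's per-criterion scores (1-5 each)."""
    return mean(scores[c] for c in CRITERIA)

review = {"Accuracy": 5, "Clarity": 4, "Organization": 4, "Design": 3}
print(overall_rating(review))  # 4.0
```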

The critical questions asked are: does the student's work reflect understanding? Is there evidence of creativity or innovation? Does it demonstrate planning and effort?

The peer evaluation process successfully engaged students in reflection and self-evaluation, fostering learning. Students evaluated their revision materials using a set of criteria they had developed themselves and revised before submission, and reported increased knowledge and understanding of the subject. In addition, the original rubric became more data-informed through observations of student behaviour and feedback.

The moderation process in this study is mainly unguided: student-moderators rate their agreement with the statement 'This resource should be added to the repository' and score their confidence in that assessment. The original rubric was developed after an extensive review of existing rubrics for evaluating student-generated content. The challenges in developing it were manifold, including accounting for the subjective nature of the content and providing some degree of consistency across reviewing students.
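
As an illustration of how such agreement and confidence scores might feed an inclusion decision, here is a small sketch. The 5-point agreement scale, the confidence weighting, and the threshold are all my assumptions; the paper does not specify how the scores are combined.

```python
# Hypothetical moderation rule: confidence-weighted mean agreement
# with "This resource should be added to the repository".
# Scale, weighting, and threshold are illustrative assumptions.
def include_resource(reviews: list[tuple[int, float]],
                     threshold: float = 3.5) -> bool:
    weighted = sum(agree * conf for agree, conf in reviews)
    total_conf = sum(conf for _, conf in reviews)
    return weighted / total_conf >= threshold

reviews = [(4, 0.9), (5, 0.6), (3, 0.4)]  # (agreement 1-5, confidence 0-1)
print(include_resource(reviews))  # True (weighted mean is about 4.1)
```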

The original rubric described the qualities of a good resource. The new rubric aimed to describe a quality learning resource better and to account for students' understanding of the appropriate standards for a given assignment, while also addressing lenient marking by students. Overall, users of the rubric found it clear and accurate, and they considered its criterion-related themes valuable to learning, though not all equally so.

A Kruskal-Wallis H test showed that the data-informed rubric had a statistically significant impact on the consistency of ratings, while the original rubric did not. The original-rubric group reported higher average confidence in their assessments than the other two groups. Average comment length was longest in the data-informed rubric group, followed by the no-rubric group and finally the original-rubric group.
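
For readers unfamiliar with the test, a Kruskal-Wallis H test checks whether samples from several independent groups come from the same distribution, without assuming normality. Here is a minimal sketch with invented placeholder ratings (the paper's data are not reproduced here):

```python
# Kruskal-Wallis H test across the three rating groups.
# The rating values below are invented placeholders.
from scipy.stats import kruskal

no_rubric     = [3, 4, 2, 5, 3, 4, 2]
original      = [4, 4, 3, 5, 4, 3, 4]
data_informed = [4, 5, 4, 4, 5, 4, 4]

h_stat, p_value = kruskal(no_rubric, original, data_informed)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 would indicate that at least one group's ratings differ
# significantly in distribution from the others.
```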

This study provides evidence that data-informed rubrics promote better agreement: rating a resource against explicit criteria produced higher inter-rater agreement than either the no-rubric or original-rubric condition.

Key Findings

  • The data-informed rubric group had a lower mean confidence score. This was most likely due to the additional criteria, 'appropriateness of difficulty' and 'encouragement of critical thinking and reasoning'.
  • All groups showed increased confidence from the first to the second assessment, and those whose judgments were validated by others gained more than those who did not.

In the experiment, the data-informed rubric group showed the greatest agreement with the original development team. At the same time, the comment analysis suggests that asking students to rate quality without guidance was not as haphazard as might be expected, but rested on a solid intuitive sense of appropriate quality standards for learning resources.

It was found that the data-informed rubric helped students to better identify the learning goals of a course and increased student participation in evaluating a learning resource.

In asking for consideration of the appropriateness of the level of difficulty and the encouragement of critical thinking, the data-informed rubric seems to have made greater demands on students' evaluative abilities.

Peer evaluation of student-developed revision materials is an important aspect of the learning process. It can lead to more sustainable resources, higher-level learning skills, and greater student involvement in learning.

Conclusion

By analysing student comments, the study provides evidence that instructors can nurture the development of students' evaluative abilities and enhance their learning using rubrics, particularly data-informed rubrics.

Initial analysis of this pilot study indicated that the rubric enhanced student engagement and resulted in a higher-quality repository. However, it is unclear whether more prolonged use of the rubric would result in higher confidence levels. Further work aims to explore the impact of the rubric's wording on inclusion decisions and resource quality.