Changing the Game through Formative Assessment
According to Morgan and O’Reilly (1999), many in higher education believe that assessment is a game played by faculty and students:
“Assessment seemed to be viewed as something of a game by teachers and learners alike. Success seemed to rest on one’s capacity to play the game: to ‘crack the code,’ to work out what was wanted by individual assessors, to feed back their most favored theories, and so on.”
The problem with this approach to learning is that many students are simply cue-deaf, unable to crack the code. To maximize the potential for learning, faculty should implement good feedback practices. According to Hattie (1987), feedback is the single most powerful influence on student achievement.
Seven Principles of Good Feedback Practice
- helps clarify what good performance is (goals, criteria, expected standards);
- facilitates the development of self-assessment (reflection) in learning;
- delivers high quality information to students about their learning;
- encourages teacher and peer dialogue around learning;
- encourages positive motivational beliefs and self-esteem;
- provides opportunities to close the gap between current and desired performance;
- provides information to teachers that can be used to help shape the teaching.
(Nicol & Macfarlane-Dick, 2006)
Classroom and Online Examples of Good Feedback
In a study of first-year biology students, Orsmond and his colleagues gave students exemplars of biology posters, which the students used to develop grading criteria that they then applied to their own posters through self- and peer assessment. Two results of using exemplars are worth noting: 1. “The use of exemplars forms a focus for meaningful formative feedback.” 2. “The use of exemplars can help students demonstrate greater understanding of both marking criteria and subject standards” (Orsmond, Merry & Reiling, 2002). If you are unfamiliar with exemplars, the following definition will help: “Exemplars are key examples chosen so as to be typical of designated levels of quality or competence. The exemplars are not the standards themselves, but are indicative of them; they specify standards implicitly” (Sadler, 1987).
“In an online or blended learning context, exemplars are easily made available to students for consultation, for example, within a virtual learning environment (VLE). However, it might be more effective to supplement this strategy with additional activities that encourage students to interact with, and externalize, criteria and standards. For instance, groups of students might be required, before carrying out an assignment, to examine two exemplars of a completed task (e.g. a good and a poor essay) and to post within an online discussion board their reasons why one is better than the other including the criteria they had used to make this judgment. The teacher might then clarify any areas of misunderstanding (mismatches in conceptions) and publish online a criterion sheet that draws on this student-generated discussion” (Nicol & Milligan, 2006).
“Forbes & Spence (1991) reported a study of assessment on an engineering course at Strathclyde University. When lecturers stopped marking weekly problem sheets because they were simply too busy, students did indeed stop tackling the problems, and their exam marks went down as a consequence. But when lecturers introduced periodic peer assessment of the problem sheets — as a course requirement but without the marks contributing — students’ exam marks increased dramatically to a level well above that achieved previously when lecturers did the marking. What achieved the learning was the quality of student engagement in learning tasks, not teachers doing lots of marking. The trick when designing assessment regimes is to generate engagement with learning tasks without generating piles of marking.” (Gibbs & Simpson, 2004)
References
Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3-31.
Hattie, J. A. (1987). Identifying the salient facets of a model of student learning: A synthesis of meta-analyses. International Journal of Educational Research, 11, 187-212.
Morgan, C., & O’Reilly, M. (1999). Assessing open and distance learners. Sterling, VA: Stylus Publishing.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Nicol, D. J., & Milligan, C. (2006). Rethinking technology-supported assessment in terms of the seven principles of good feedback practice. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Taylor and Francis.
Orsmond, P., Merry, S., & Reiling, K. (2002). The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 27(4), 309-323.
Sadler, D. R. (1987). Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), 191-209.