With 20 blank faces staring back at me, it was clear that no formative assessment was required. We had definitely not met the success criteria for this session: ‘To understand the features of an effective poetry recital and the criteria set out in the poetry recital rubric.’
After painstakingly working through the criteria on the rubric, discussing what each might look like and clarifying unfamiliar words, my students were no clearer about the expectations of the poetry recital assessment. I, like many other teachers, thought that rubrics clearly articulated the expectations of assessments and would naturally promote achievement. However, there is little empirical evidence to support this intuitive belief (Andrade, Du, & Wang, 2008) and the bewildered faces staring back at me certainly did not add any anecdotal weight to the argument either.
When the time came to begin preparing to teach poetry again, the blank faces still haunted me. The near uselessness of the rubric for my students troubled me, and I continued to question how I could make the expectations of assessments clear to students. Professional reading as part of my school’s Explicit Improvement Agenda led me to Dylan Wiliam’s research on formative assessment. His 2015 book with Siobhan Leahy, Embedding formative assessment, suggested one small change to my practice to better utilise rubrics: start with samples of work, rather than rubrics, to communicate quality.
Following Wiliam and Leahy’s advice, I went back to recordings of previous student recitals. I shared two performances with my class and opened up a discussion of each. Together we had a rich discussion exploring the features, strengths and weaknesses of both performances. We then benchmarked both performances against the rubric. This enabled students to anchor the once vague and decontextualised descriptors in the rubric to concrete, achievable examples. It also sparked fierce debate amongst students about the level achieved by each performance.
This, in turn, forced students to delve into the language of the rubric and use differences in the level of each criterion to justify their thinking. Not only could my students now explain why the performances reflected particular levels set out in the rubric, they could also explain why they were above or below other levels.
In subsequent sessions, we also developed practical steps for improving the weaknesses we identified in each performance. Providing peer feedback and self-assessment provided students with opportunities to use these skills in developing and implementing constructive feedback to lift the standard of their performances.
Combining rubrics with samples of work has created powerful opportunities for my students to develop a clear picture of assessment tasks and develop the ability to improve their work.
How to use student samples of work
- Starting with a whole class discussion about two samples of work: In my experience, students find it easier to identify the qualities of good work when contrasting two samples than when examining a single sample, irrespective of the strength of either piece.
- Asking students to decide which piece is better and describe why: I’ve found the Think-Pair-Share technique is a terrific tool to use in these discussions to give students time to think about the qualities of strong and weak pieces. This technique also provides the opportunity to develop and use vocabulary for assessing and improving their work.
- Allowing students to annotate the sample: I encourage my students to identify and justify the strengths, weaknesses and limitations of the piece, the qualities they want to incorporate into their own work, the features they recognise and, most importantly, how the piece could be improved. Discussions around improving the work are key: they generate ideas for improving one’s own work, they send a clear message that every piece of work can be improved, and they prevent students from simply copying a sample out of a misconception that what is presented is the only way it should be done.
What samples to use
When beginning this approach, I found it preferable to have one sample that was relatively weak and one that was strong. However, as students have become more adept at identifying differences in quality, I’ve used more samples and pieces that are much closer in quality.
Anonymous samples of student work are the key – ideally from previous years, students in other classes or from other schools. Wiliam and Leahy highlight how using anonymous work takes the emotion out of analysing a sample. ‘Assessing one’s work, as well as assessing the work of one’s peers in the classroom is emotionally charged, and the emotional resonances can often interfere with engaging in the demands of the task. However, assessing the work of the anonymous other is emotionally neutral, so students are able to focus more effectively on the task.’ (Wiliam & Leahy, 2015)
Once you have a collection of samples, you can make deliberate choices about which ones you will use with your students. I’ve found that selecting samples that show common errors students make, and that dispel common misconceptions – such as longer answers always being better – is particularly effective. There will be errors specific to your subject disciplines that you know students make year after year.
When students notice the mistakes of others, they are less likely to make the same mistakes in their own work. You can also select samples that include the specific features of strong and weak pieces you want to highlight. This broadens students’ sense of what is possible in their own work and develops their skills in improving it.
Developing independent learners is at the heart of formative assessment. I would, therefore, encourage teachers to seek out resources on formative assessment, particularly those focused on activating students as owners of their own learning and as resources for one another.
References
Andrade, H., Du, Y., & Wang, X. (2008). Putting Rubrics to the Test: The Effect of a Model, Criteria Generation, and Rubric-Referenced Self-Assessment on Elementary School Students' Writing. Educational Measurement: Issues and Practice, 27(2), 3-13.
Andrade, H. & Valtcheva, A. (2009). Promoting Learning and Achievement Through Self-Assessment. Theory Into Practice, 48(1), 12-19.
Black, P. & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.
Wiliam, D. & Leahy, S. (2015). Embedding formative assessment. Victoria: Hawker Brownlow Education.
Choose a topic you’re going to be teaching this year that you’ve taught before: What were some of the common student mistakes and misconceptions in previous years? How could you incorporate samples of work into your practice to highlight these? How will you ensure the pieces of student work that you select remain anonymous?