‘That’s him pushing the stone up the hill, the jerk. I call it a stone – it’s nearer the size of a kirk. When he first started out, it just used to irk, but now it incenses me, and him, the absolute berk.’
Carol Ann Duffy, ‘Mrs Sisyphus’
Marking has always been for me, I confess, something of an onerous task. Time-consuming, draining, laborious and – to be brutally honest with myself – quite ineffective at advancing learning.
Over the last few years my use of verbal feedback and continual checking for understanding has developed enormously. I frequently use hinge questions, multiple-choice quizzes, thinking routines, exit tickets, and bursts of silent writing during which I circulate around the room and add instant comments while pupils write. Through this, I have become reasonably effective at preventing major misconceptions from creeping into essays before they are submitted. That said, marking full VCE essays remains my metaphorical boulder. I have always been guilty of over-marking essays with the usual litany of methods: highlighting errors, comments in the margins, SMART targets at the end of the piece with a summative comment, and so on. In part, this is because it was the default expectation of teachers when I first entered the profession, reinforced by the then expectations of Ofsted (the UK’s national inspectorate).
It took me a long time to fully face up to the one question that every English teacher should ask themselves when they are about to pick up their red pen:
How exactly is that mark on the page going to help the student improve?
Much recent discussion of marking, such as from Jo Facer at Michaela school in the UK, argues eloquently in favour of not writing comments on students’ work and instead prioritising continual whole-class feedback on common errors and weaknesses. There is a great deal of merit (and sanity) in that approach and I have begun to use it increasingly with Years 7-9 classes, often with single-paragraph pieces. I will share my own methodology on this in a future post. (For a more comprehensive review of feedback approaches, the recent EEF report is essential reading.) That said, for VCE English essays I still find it difficult to ‘let go’ of marking and have sought a balance between offering a clear indicator of what to address and minimising my annotations themselves. As a consequence, over the last couple of years I have experimented with two different feedback approaches: marking codes and mastery grids.
I first began using marking codes back in 2010, when I was appointed Head of English at Silcoates School in the UK. One of my colleagues, Russell Carey, was Chief Assessor for the Cambridge IGCSE Literature exam paper and explained to me their process of using internal marking codes for assessing pupil scripts. We rolled this out across the department to create a consistent system of annotating work (a brief aside: what does a tick on a page actually mean to a pupil? Do they know? Do all teachers mean the same thing when they tick work?), and pupils appreciated the continuity as they moved through year levels and between teachers.
When I moved to Australia in 2013, I had to learn the VCE system and appreciate the subtle differences in approaches to essay writing here. Consequently, my well-honed system of marking codes for GCSE and A Level responses was forgotten and I reverted to default mode: lots of generalised comments, ticks and summary targets, with very little class time devoted to re-writes. Having now assessed the VCE English examination for the last two years, I feel more confident in my judgements of quality and have worked out the most common errors I try to correct via annotations. The EEF report offers this summary finding on the need to distinguish between errors (fundamental misconceptions) and mistakes (carelessness) when marking:
‘Careless mistakes should be marked differently to errors resulting from misunderstanding. The latter may be best addressed by providing hints or questions which lead pupils to underlying principles; the former by simply marking the mistake as incorrect, without giving the right answer.’
A link to my new marking codes sheet can be found here:
My workflow when using this is as follows:
- Skim-read the essay to spot major errors quickly, then close-read, ticking relevant points and circling SPAG mistakes
- After reading, highlight the 5-6 most pressing errors on the piece and assign a code in the margin
- Write a ‘one quick win’ target at the bottom of the piece, making the target as specific as possible
Time taken per essay (800-1000 words) = 9 minutes
As I mark the essays, I keep a Word document open and dot point any major conceptual or knowledge problems that form a pattern among the class. An example of a summary sheet I give out can be found here (Note: I am teaching Measure for Measure this year for Text Response). The feedback lesson then runs as follows:
- Explain and correct the main conceptual errors to the whole class
- Show two examples of the best writing and annotate why they are strong
- Give pupils around 20 minutes to read through their essays, process the codes and write their corrections on the essay itself. I circulate and conference with students as they work on this.
The major strengths of this approach, for me, are:
- Time saved – around 6 minutes per essay (It would ordinarily take me 15 minutes per essay)
- Efficiency – I no longer write out the same annotations twenty-odd times
- DIRT (Directed Improvement and Reflection Time) – students have to act on the feedback, with dedicated time set aside for this
Marking codes are not without their challenges, however. The main one is that students often struggle to write meaningful corrections due to poor text knowledge or weak expression. It is essential that the codes are as specific as possible, too, since they replace individualised comments (Note: I am still very much in the process of refining these). Over time, my VCE classes have got used to the system and have become more willing to think through their errors.
This year, I decided to pilot a mastery grid approach with my Y12 class to compare it against marking codes. Dylan Wiliam describes one method of using them in Embedded Formative Assessment, pages 122-127. I have taken the general principle of a mastery grid and adapted it for tracking structural components of essays. You can find a copy of it here:
The marking process I use is more or less the same as for the codes sheet. This time, however, I am aiming to offer a relative indicator of quality for the major structural and technical aspects of their essays (each element still contains a letter code which I can add in the margins to signal that a correction is needed). Interestingly, the majority of my students prefer this system because they can gauge both how good each essay is overall and how each skill is developing across a series of essays. The major challenge is that without explicit marking codes, students need far more ‘worked examples’ of successful and weaker pieces to really engage with the self-correction.
Neither of these systems is perfect and a claim can be made that marking codes and mastery grids offer little tangible benefit over traditional annotation and correction. Ultimately, you have to ask yourself: what compromise are you prepared to accept? For me, these methods offer better written feedback than before in less time.
If you feel that this post has been helpful or useful in any way, please let me know in the comments section. I would be particularly keen to hear any suggestions for improvement you may have, or any alternative approaches you may use.
Edit: Based on Emily’s comment, I will also include links to my feedback sheets for Argument Analysis. If you use any of my materials, please let me know how well (or badly!) it worked for you.