The Research Files Special: Research Conference highlights 2022

This podcast from Teacher is supported by Bank First – the bank built by teachers for teachers. Visit or speak to us to find out how your home loan or savings can care for the community like you do.

Dominique Russell: Thanks for downloading this podcast from Teacher magazine. I’m Dominique Russell.

Zoe Kaskamanidis: And I’m Zoe Kaskamanidis.

DR: Welcome to this special episode of The Research Files – which is becoming a bit of an annual tradition for us as our regular listeners will know – where we take a look back on last month’s Research Conference and share some of our highlights with you.

The theme for the 2022 ACER Research Conference was ‘Reimagining assessment’ and, like last year, the sessions and masterclasses were fully online, which meant that, as well as renowned experts in the field here in Australia, we were able to hear from leading researchers from the UK, the US and Indonesia. Zoe and I were there to take it all in across the 4 days of the conference, and we’ve selected some clips to share with you, which I’m sure will prompt some further thinking amongst yourselves and your colleagues.

Let’s jump straight in then. Zoe, what was your first highlight from the conference?

ZK: Well, I’d like to kick us off with a highlight from Day 2, delivered by Dr Carly Steele and Associate Professor Graeme Gower from Curtin University in Western Australia. Their presentation was titled ‘Reimagining assessment in culturally responsive ways’.

The session was split into 3 parts – sociocultural and social justice perspectives in assessment (including cultural and linguistic bias); the importance of aligning culturally responsive pedagogies with culturally responsive assessment; and several practical recommendations for improvement.

Carly and Graeme unpacked the cultural relevance of large-scale standardised assessments, explaining that these assessments often include questions which lack validity for students from culturally and linguistically diverse backgrounds. They raised that, often, these assessments are underpinned by dominant Western knowledge systems and disadvantage culturally and linguistically diverse students not only through testing, but through classroom practice as teachers seek to improve student performance in assessments.

They argued that we need to shift instead toward culturally responsive assessment. Here’s Carly explaining what culturally responsive assessment means, and its significance in communicating the values of the education system more broadly:

A culturally responsive perspective of assessment is one that acknowledges that just as teaching should be responsive to the student context, so too should assessment. Not only is it important in terms of alignment between teaching and learning, but what is assessed, and how it is assessed, communicates the values of the education system. That is, what is deemed to be important. When teaching becomes more culturally responsive, but assessment does not, a powerful message is communicated about what really counts.

DR: What a great point to highlight, and such a good illustration of the importance of aligning assessment with teaching practice. That idea also came up in the opening session on Day 3 of the conference, which we won’t be sharing highlights from in this episode. But you might recall that in our last episode in The Research Files series, we spoke with conference presenters Louisa Rosenheck and YJ Kim in detail about their session, which was on the topic of playful assessment. They discussed how, if you’re a teacher who has implemented game-based learning practices in your classroom on a regular basis, it makes most sense to approach assessment in a similar way, rather than reverting to a more traditional form of assessment. Onto my first highlight now.

My first highlight was the twilight session from Day 2 of the conference. The session was titled ‘Sharing and securing learners' performance standards across schools’ and was delivered by Emeritus Professor Richard Kimbell all the way from Goldsmiths, University of London.

After a career in classroom teaching, Richard founded the Technology Education Research Unit at Goldsmiths. And in his session, he described the new version of the Adaptive Comparative Judgement (or ACJ) online assessment tool he helped develop about a decade ago now. The original ACJ has been used in schools primarily as a formative assessment tool, and it allows teachers to compare pairs of student work to ascertain performance standards within their own school setting. This new version of ACJ that he was telling conference attendees about includes the ability for schools to make paired judgements of work from multiple schools, which would then help to ascertain standards of performance beyond their own school setting, and overall give a more accurate picture.

It was a really interesting session, but what I’d like to highlight with you now is his explanation of comparative judgement, and why it allows for accurate responses. It was such a simple explanation but it really stood out to me. Here’s Richard:

You may think you’ve had little contact with comparative judgement, but in fact, you have. It’s an everyday phenomenon. And this is just one example of it – when you go to an optician and they’re trying to test one lens against another lens in that little gadget that they put on your head, you’ll often be asked ‘which is the sharper image, the red or the green?’ Notice that they don’t ask you to just look at the red ones, and on a scale of 1-10, tell them how sharp the image is, because that data would be very unreliable. But, comparing the red and the green makes it possible for you to be very accurate in your replies.

ZK: It sounds like a fantastic presentation, and I love that example of optical assessments which many of us would be familiar with. It’s so interesting too, to hear about assessment tools developed by educators for educators – and the passion to share expertise and knowledge within the education community is always great to see.

We’ll take a look at my next highlight after this quick message from our sponsor.

You’re listening to a podcast from Teacher magazine, supported by Bank First. Bank First is proudly Customer Owned – built by people just like you. Being Customer Owned means we can divert our profits to support initiatives you care about. Like $750,000 towards grassroots initiatives in schools since 1983. Your Bank First home loan can get you into your own home and support education for our kids. Email for more information.

ZK: So, the next highlight I’d like to share with you is the keynote presentation from Day 3 of the conference. This session, delivered by Associate Professor Lenore Adie from Australian Catholic University, was titled ‘Assessment moderation: Is it fit for purpose?’

Lenore took us through 2 examples of fit-for-purpose moderation, to illustrate how moderation – being a shared understanding between teachers of the quality of work, or agreement on a grading decision – can be reimagined. She approached this question in the context of how teachers can better meet student needs, while acknowledging the intensity of teacher workloads in a changing world.

The examples she shared came from work she has done with her colleagues on investigating contemporary expectations of practice, and how these expectations can be met by using digital technologies to support teachers’ collaborative professionalism, agency, and decision making around assessment.

The first project she shared involved working with researchers in Western Australia, Queensland and Canada to investigate the development and use of scaled exemplars in online moderation. The second is a longitudinal analysis looking at the introduction of teaching performance assessments for final-year pre-service teachers at all Australian universities.

I really liked how Lenore pulled together the different factors of fit-for-purpose moderation which were explored through the 2 examples. Here she is explaining what she and her colleagues found:

Assessment of complex performances will always remain a subjective process. However, I think what we’ve found is that when we work to utilise the best of statistical and social moderation processes, when we utilise digital technologies to collect, store and analyse the data and to reach beyond the local to bring people together, to bring various perspectives together, and when we support the judgement process through customised resources like the cognitive commentary, we can shift moderation beyond an end process following summative assessment to an ongoing process that occurs throughout all stages of planning, teaching and assessing, and is used to interrogate and improve teaching and learning.

DR: That clip is such a fantastic summary of how crucial it is to ensure assessment is fit for purpose for all learners. It's such a great one to pick out and reflect on.

My next highlight was actually the last session of the conference, so it’s probably a nice one to end this episode on. It was another twilight session and it was hosted by Dr Sladana Krstic and Dr Sarah Richardson from ACER, alongside Sarah Manlove from the International Baccalaureate Organization.

In their presentation, they shared the findings of a research project ACER was commissioned to complete that set out to answer the question, ‘What is the state of the field for teacher assessment literacy design and development?’

The result of the research project was the development of an assessment literacy and design competency framework, which the speakers said takes a very holistic approach, focusing on schools as a whole rather than on individual teachers. They found the framework to comprise 7 elements.

I really liked the interactivity that was included in this session – after the framework was introduced, attendees were actually invited to complete an online poll in real time on which of the 7 elements they’d like to be discussed in more detail throughout the session, because there certainly wouldn’t have been time to look at them all. The 2 elements that attendees chose were how to engage learners, and assessment identity.

The clip I’d like to share with you is a snippet of Sarah’s explanation of assessment identity, which I’m sure will give our listeners a lot to think about and reflect on. Here she is:

So in terms of assessment identity, there’s a lot of evidence that even if teachers are given professional development opportunities in how to implement assessment and how to develop items; if they don’t believe themselves that assessment is a positive thing and it’s a way of generating data to inform improvements in either teaching or learning, it’s really not going to work.

And so it’s very much around recognising that people’s personal experiences, personal attitudes are very important. And that means that rather than professional learning around assessment simply being a very mechanical thing, it’s important to give teachers an opportunity to talk about their values – what has been their personal experience of assessment? Has it been a negative one? Do they regard it as judgemental? Do they regard it as an alien thing? Or do they see it as a positive thing? And it’s really around encouraging them to have positive beliefs around assessment.

ZK: I really like how Sarah draws attention to the importance of teachers’ own experiences and perspectives when it comes to utilising assessment models. And it’s quite a nice clip to end on, because I think we can see a bit of a running theme throughout the conference sessions about reimagining assessment to be meaningful and engaging for those involved, to really reflect the diverse experiences and needs of teachers and learners.

DR: That’s all for this episode. Thanks for listening. If you’d like to read more, you can find a link to an article we published where keynote speaker Dr Diane DeBacker unpacks the idea of making learning visible. You can find that link in the transcript of this podcast episode under the podcast tab at our website. And as I mentioned earlier in this podcast, if you missed our last episode of The Research Files with conference presenters Louisa Rosenheck and YJ Kim, you can catch up on that now in our podcast feed.

It would also be great if you could take just a few moments to give our podcast a rating if you’re listening on Apple Podcasts or Spotify. If you’re listening on the Spotify app, just click on the three dots, then ‘Rate show’; if you’re on the Apple Podcasts app, you’ll find the rating section by scrolling to the bottom of our podcast channel page. On Apple Podcasts, you’re also able to leave a short review for us. Leaving us a rating or a review helps more people like you to find our podcast, and is a really big support for our team. Thanks for taking the time to support the work we’re doing. We’ll catch you in our next episode very soon.

You’ve been listening to a podcast from Teacher, supported by Bank First – the bank built by teachers for teachers. Visit or speak to us to find out how your home loan or savings can care for the community like you do.