Q&A: Student assessment – looking at data from different angles

When analysing student assessment data, what techniques are you using? How often are you sorting data in different ways to better understand student learning needs? And, how do you use patterns in assessment data to inform your next steps in teaching?

Dr Katie Richardson is a Research Fellow and Online Facilitator at the Australian Council for Educational Research (ACER). She has taught the Graduate Certificate in Education: Assessment of Student Learning since 2015, and is currently part of a team researching teachers’ use of data during COVID-19. Richardson recently facilitated a webinar for the Educational Collaborative for International Schools (ECIS) on the topic of using assessment data to inform remote teaching practice. In this Q&A with Teacher, she expands on some of the themes discussed.

In your webinar, you demonstrated how educators could look at data from different angles to get a clearer picture of student achievement. Can you describe how teachers can approach doing this, and why it is useful? What insights can it provide?

There are many different factors that can affect how a student responds to an assessment. Analysing how these factors might influence our students means that we may gain a deeper understanding of their learning.

The most obvious factor affecting how students respond to assessment questions is the level of question difficulty. That is, students will answer questions correctly until the difficulty increases to the point where they answer some questions correctly and others incorrectly. Finally, they reach the point where the concepts and skills are beyond their level of proficiency. Being able to pinpoint the extent of our students’ knowledge and skill helps us to plan more effectively for learning and teaching.

However, sometimes students are highly proficient in some aspects of a domain, but find other skill sets challenging. For example, a student might be able to understand inferences in a complex narrative, but not be able to interpret persuasive texts to the same level. So, we can group the data into different skill sets or knowledge sets to see where our students might have gaps in their learning.
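For teachers or data leads who export results to a spreadsheet and are comfortable with a little scripting, this kind of regrouping can be done in a few lines. The sketch below is a minimal illustration only; the item names, skill labels and responses are invented.

```python
from collections import defaultdict

# Hypothetical mapping of assessment items to the skill each one targets.
item_skill = {
    "Q1": "inference in narrative", "Q2": "inference in narrative",
    "Q3": "interpreting persuasive texts", "Q4": "interpreting persuasive texts",
}

# Hypothetical results for one student: True = correct, False = incorrect.
student_responses = {"Q1": True, "Q2": True, "Q3": False, "Q4": False}

# Regroup item-level results by skill and report the percentage correct per skill.
by_skill = defaultdict(list)
for item, correct in student_responses.items():
    by_skill[item_skill[item]].append(correct)

for skill, results in by_skill.items():
    pct = 100 * sum(results) / len(results)
    print(f"{skill}: {pct:.0f}% correct ({sum(results)}/{len(results)} items)")
```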

Sometimes, domain-specific language might hinder a student from demonstrating what they know and can do. For example, a student might be able to solve a mathematical equation correctly but not yet be able to interpret a worded question that focuses on the same skills. This may give a teacher insight into a student’s understanding of language and how it is used within different domains.

Some students may become less focused over the course of an assessment task. This may result in accidental errors that become more frequent as time progresses. Information like this helps us to understand whether the length of the test is appropriate for our students, or whether individual students need their teachers to take environmental factors into consideration, such as the time of day or the length of the task.
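One simple check for this kind of fading focus is to compare a student’s accuracy across the early, middle and late sections of the task. The sketch below assumes responses are recorded in the order the questions were attempted; the data is invented for illustration.

```python
# Hypothetical responses in the order the questions were attempted (True = correct).
responses_in_order = [True, True, True, True, False, True,
                      True, False, False, False, True, False]

# Split the task into thirds and compare accuracy in each section.
third = len(responses_in_order) // 3
sections = {
    "first third": responses_in_order[:third],
    "middle third": responses_in_order[third:2 * third],
    "final third": responses_in_order[2 * third:],
}

for name, section in sections.items():
    accuracy = 100 * sum(section) / len(section)
    print(f"{name}: {accuracy:.0f}% correct")
# A clear drop in the final third may suggest fatigue or loss of focus
# rather than a gap in knowledge or skill.
```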

In other words, sorting data in different ways helps us, as teachers, to work out which aspects to focus on when understanding our students’ needs and planning for learning and teaching.

Why is looking for patterns in student assessment data useful? What kinds of patterns should educators be looking for, and what could these patterns suggest?

Patterns in data provide clues for educators about where students are ready to learn. For example, we often see patterns that show the level of difficulty at which students are currently working. In these cases, we often see correct responses for the least difficult questions, incorrect responses for the most difficult questions, and a scatter of correct and incorrect responses in the middle. It is in this area of scatter that we may be able to identify a student’s point of ‘readiness to learn’.
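One way to make this pattern visible is to order a student’s responses from the easiest to the hardest item (for example, using the proportion of the class who answered each item incorrectly as a rough difficulty estimate) and look at where correct and incorrect responses begin to mix. The following is a minimal sketch with invented item names, difficulty values and results.

```python
# Hypothetical items: (item name, difficulty estimate, student's result).
# Here difficulty is approximated by the proportion of the class answering incorrectly.
items = [
    ("Q3", 0.10, True), ("Q1", 0.15, True), ("Q7", 0.30, True),
    ("Q5", 0.45, False), ("Q2", 0.50, True), ("Q8", 0.55, False),
    ("Q4", 0.60, True), ("Q6", 0.80, False), ("Q9", 0.90, False),
]

# Order items from easiest to hardest and print the response pattern.
items.sort(key=lambda item: item[1])
pattern = "".join("1" if correct else "0" for _, _, correct in items)
print(pattern)  # e.g. 111010100 - the mixed middle section is the zone of interest

# Flag the 'scatter' zone: from the first incorrect response to the last correct one.
first_wrong = pattern.find("0")
last_right = pattern.rfind("1")
if 0 <= first_wrong < last_right:
    zone = [name for name, _, _ in items[first_wrong:last_right + 1]]
    print("Items in the mixed zone (possible point of readiness to learn):", zone)
```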

However, it’s not always this straightforward. Sometimes we need to organise the data in different ways to find patterns. For example, we might group the questions by similar skills or knowledge. This way we may find patterns where students know a lot about one aspect of the subject, but have gaps in other areas.

Patterns may also tell us about misconceptions that groups of students might have. For example, if a considerable number of students responded to a question with the same incorrect answer, it may indicate a common misconception within the class. The teacher can then use this information to plan teaching and learning activities to address the misconception.
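For a multiple-choice task, a quick way to surface this is to tally the incorrect options chosen for each question and check whether one wrong answer dominates. The sketch below uses invented class responses and an assumed correct answer.

```python
from collections import Counter

# Hypothetical class responses to one question, with the correct answer being "B".
correct_answer = "B"
class_responses = ["C", "B", "C", "A", "C", "B", "C", "C", "D", "C", "B", "C"]

# Tally only the incorrect answers.
wrong_answers = Counter(r for r in class_responses if r != correct_answer)
most_common_wrong, count = wrong_answers.most_common(1)[0]

share = 100 * count / len(class_responses)
print(f"Most common incorrect answer: {most_common_wrong} "
      f"({count} students, {share:.0f}% of the class)")
# If one wrong option accounts for a large share of the class, it may point to a
# shared misconception worth addressing in the next lesson.
```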

Being able to spot patterns in the data also means that we can identify when there are no clear patterns, or where one student’s pattern is very different from those of other students. When this happens, teachers need to ask questions about the student’s responses, the questions in the task, and the student’s context to work out possible explanations. This enables teachers to interpret the information more accurately, and it means they can respond in ways that enable learning to progress. A teacher’s knowledge of their students is crucial in interpreting meaning from assessment data, because they know their students best.

Can you outline the difference between analysing and interpreting assessment data? Why must we do a complete analysis of the data before beginning to interpret it?

When we analyse, we bring data together in ways that allow us to use different techniques to uncover information hidden within it. Sometimes we use statistics; other times we might arrange the data in ways that show logical or visual patterns. If we analyse qualitative data, we might look for patterns in how words or ideas are used.

On the other hand, when we interpret data, we figure out what it means. To do this, we need to understand our students’ contexts (there might be something they are experiencing or working through that affects how they are learning or how they respond to a question). We also need to take into consideration the class context, the questions in the assessment task, and our own knowledge of the content and of our students. When we put all this information together, we are likely to be more accurate in our interpretations. However, sometimes the meaning is unclear. In these cases, it’s important to ask questions that help us find the relevant information to interpret more accurately.

It’s always important to begin with analysis of the data before we try to make meaning from it. If we start to interpret too quickly, we may not draw accurate conclusions, which affects our ability to target teaching and learning to our students’ needs.

Why is it beneficial for teachers to work in teams when analysing and interpreting assessment data?

Collaboration between teachers when analysing and interpreting assessment data is important on several levels. Firstly, it means that teachers do not need to be experts in everything. Teachers can focus on their areas of strength and combine their knowledge with that of their colleagues. For example, one teacher may be excellent at analysing data, while another in the group may have deep knowledge of the students and another deep knowledge of the content. These strengths can combine to inform teaching and learning in very powerful ways. Even when teachers within a team have similar strengths, different perspectives can reveal other ways to understand the data. From a practical perspective, working effectively in teams also reduces the workload for each person, because it is shared.

What have we learnt about effectively using assessment data during remote learning over the past year? What did educators do well? What did they have difficulty with?

At the moment I am part of a team researching teachers’ use of data during the COVID-19 pandemic restrictions. We are still completing the analysis of our data, but there are a few striking points of interest. Prior to COVID-19, most teachers did not have experience teaching in remote or online environments, although a few had studied online. On the other hand, it was encouraging to see that many of the participating teachers used a range of different types of data to understand how their students were coping. Our report will be posted on the British Educational Research Association (BERA) Blog soon.


Returning to data from previous assessment tasks, can you pinpoint a case where a considerable number of students responded to a question with the same incorrect answer? How can you use this information to plan teaching and learning activities to explore and address any misconceptions?