
Assessment online: informing teaching and learning

Online assessments are capable of providing significantly improved feedback to teaching and learning. Experience in schools is demonstrating the potential of online assessment – provided the foundations are right.

The advantages of online assessment are often described in terms of its administrative convenience, efficiency and lower costs. However, well-constructed online assessments are also capable of providing more timely and more instructionally useful feedback to teaching and learning. For a number of years ACER has been investigating ways to enhance the educational value of online assessments.

Informing classroom practice

Online assessments are most useful to teaching and learning when they are designed to reveal where students are in their learning. This usually means establishing where individuals are in relation to a ‘learning progression’ – the sequence in which knowledge, skills and understandings typically develop in a learning area.

The Progressive Achievement Tests (PAT online) illustrate how this works. Students are assessed against learning progressions in aspects of literacy, numeracy and science. Teachers select and administer online PAT tests at difficulty levels appropriate to the students in their classes and at times of their choosing. Students' responses are scored automatically and provide estimates of where individuals are in their long-term progress, regardless of their age or year level.

An obvious benefit of online assessment with automatic scoring is that it frees teachers to focus their energies on understanding and using assessment results to maximise the effectiveness of their teaching. It removes the need to transport, distribute, store and handle test booklets and to mark, record and manually analyse student results. And it provides instant feedback to teaching.

By establishing where students are in their learning, online PAT assessments identify starting points for teaching. The online PAT Teaching Resources Centre recommends teaching and learning activities for students at different points in their long-term progress. And, because all assessments are made and reported against the same learning progressions, teachers who choose to use PAT can use results obtained on different occasions and with different tests to monitor student growth over time.

Online assessments also make it easier to explore the details of students' performances. Teachers can analyse, summarise and display data, identify items on which entire classes experience difficulty and inspect the details of individuals' performances. Easier electronic access to diagnostic detail is a significant advantage of online assessments over traditional test booklets.

PAT online has been developed over a number of years in response to feedback from teachers. In the past four years, teachers in Australian schools have successfully administered 8.5 million online PAT tests. This has included 10 000 students sitting PAT simultaneously, and over 100 000 tests being administered on one day.

Marking online assessments

Immediate feedback to teaching and learning requires tasks that can be scored by machine. This is straightforward for multiple-choice questions, tasks that require students to manipulate information on screen and simpler constructed-response questions. When assessments involve extended writing or more complex answers, responses are usually captured online for later human marking, either in marking centres or through electronic distribution to markers' homes. ACER uses this approach in a number of our assessment programs, including the International Schools Assessment, which assesses students in more than 80 countries against learning progressions in reading, mathematics and science.

In our experience, when students complete writing tasks online they often write more than they would on paper, and markers are not distracted by the quality of students' handwriting. And when markers evaluate the quality of student writing online, it is possible to monitor the consistency of marking standards and to identify in real time markers who are unusually lenient or harsh in their marking.
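The consistency monitoring described above can be sketched as a simple severity check: each marker's average awarded score is compared with the overall average across all markers. The threshold, the rubric scale and the scores below are all invented for illustration, and operational monitoring would also account for which scripts each marker happened to receive.

```python
from statistics import mean

def marker_severity(scores_by_marker, tolerance=0.5):
    """Flag markers whose average awarded score drifts from the overall
    average by more than `tolerance` points (an illustrative threshold).

    scores_by_marker: dict of marker id -> list of scores awarded.
    Returns a dict of marker id -> 'lenient', 'harsh' or 'consistent'.
    """
    overall = mean(s for scores in scores_by_marker.values() for s in scores)
    flags = {}
    for marker, scores in scores_by_marker.items():
        delta = mean(scores) - overall
        if delta > tolerance:
            flags[marker] = "lenient"
        elif delta < -tolerance:
            flags[marker] = "harsh"
        else:
            flags[marker] = "consistent"
    return flags

# Invented scores on a hypothetical 0-6 writing rubric
flags = marker_severity({
    "marker_a": [4, 5, 5, 4, 5],
    "marker_b": [3, 3, 4, 3, 3],
    "marker_c": [2, 2, 3, 2, 2],
})
```

Run in real time over incoming scores, a check of this kind lets unusually lenient or harsh markers be identified while marking is still in progress.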

Some assessment programs also now use machine-marking of student writing. Machine-marking systems are based on computer analyses of relationships between features of student writing and scores assigned by human markers. These systems learn to ‘mimic’ human marking and can produce results as consistent as those obtained across different human markers. Machine marking greatly reduces the time and costs of marking and offers immediate feedback to teachers. ACER's eWrite program assesses and machine-marks narrative, descriptive, report and persuasive writing in Years 5 to 8.

Adaptive testing

A number of assessment programs now allow teachers to choose tests appropriate to the different points students have reached in their learning. This minimises the likelihood of individuals being given tests that are much too easy or much too difficult and provides superior information for teaching and learning.

An alternative to teachers choosing tests of appropriate difficulty is to have computers do the choosing. If a student is answering questions correctly, the computer chooses a more difficult question; if they are not, the computer chooses an easier question. In a ‘computer adaptive’ test of this kind, the computer chooses questions one at a time from a bank of questions with known difficulties. Individuals are administered different questions, but their test scores (adjusted for the difficulties of the questions administered) locate them on the same learning progression.
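The selection logic above can be sketched as a fixed step-up/step-down rule over a bank of items with known difficulties. This is a simplification: operational computer adaptive tests re-estimate ability with item response theory after each response. The item bank and the simulated student below are hypothetical.

```python
def adaptive_test(bank, answer, start=0.0, step=1.0, n_items=5):
    """Administer `n_items` questions one at a time from `bank`, a dict
    of item id -> known difficulty. After a correct answer the target
    difficulty steps up; after an incorrect one it steps down.

    `answer(item_id, difficulty)` returns True or False.
    Returns the list of (item_id, correct) pairs administered.
    """
    remaining = dict(bank)  # items not yet administered
    target = start
    results = []
    for _ in range(n_items):
        if not remaining:
            break
        # choose the unused item whose difficulty is closest to the target
        item = min(remaining, key=lambda i: abs(remaining[i] - target))
        difficulty = remaining.pop(item)
        correct = answer(item, difficulty)
        results.append((item, correct))
        target = difficulty + step if correct else difficulty - step
    return results

# Hypothetical bank, and a simulated student who succeeds on items
# of difficulty 2 or below
bank = {"q1": -2, "q2": -1, "q3": 0, "q4": 1, "q5": 2, "q6": 3, "q7": 4}
administered = adaptive_test(bank, lambda item, d: d <= 2)
```

In this run the test steps upward through q3, q4 and q5 while the student answers correctly, then probes harder items once answers become incorrect, homing in on the point the student has reached.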

In other assessment programs, rather than having the computer choose questions individually, students are directed automatically to alternative sections of a test based on their performances to that point. ‘Multi-stage’ testing of this kind is again intended to provide tests at appropriate levels of difficulty for individual students. ACER is using this approach in our development and delivery of the online Scottish National Standardised Assessments. In that program, Scottish teachers will decide when online assessments are to be administered and results will be reported on learning progressions in reading, numeracy and writing skills.
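The routing decision in a two-stage design can be sketched as follows. The cutoff and the two-branch structure are illustrative assumptions only, not the design of any particular program; operational multi-stage tests may use more stages, more branches and IRT-based routing rules.

```python
def multistage_route(stage1_correct, cutoff=0.6):
    """Route a student to a second-stage section based on the
    proportion of first-stage items answered correctly.

    stage1_correct: list of booleans for the common first stage.
    Returns 'harder' or 'easier' (illustrative two-branch design).
    """
    proportion = sum(stage1_correct) / len(stage1_correct)
    return "harder" if proportion >= cutoff else "easier"
```

Because whole sections rather than single items are assembled in advance, routing of this kind gives test developers more control over the content balance each student sees than item-by-item adaptation does.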

New forms of assessment

Finally, and perhaps most importantly, online assessments introduce the possibility of collecting new kinds of information about student learning using novel computer-based activities. Work is underway on innovative forms of online assessment in a range of learning areas.

For example, early years assessments often now use tablet-based tasks in which children are given audio instructions through headphones and manipulate objects on screen. When used to assess listening comprehension, this approach allows children to progress at their own rates, listening and re-listening as necessary. Such assessments have been used in ACER's work in the Northern Territory, in the PAT Early Years Assessment, in the Scottish Primary 1 assessment, and in the OECD's International Early Learning and Child Well-being Study. In remote locations in Lesotho and Afghanistan, tablet-based assessments were completed offline for automatic uploading later.

Online tasks to assess ICT literacy commonly involve students working directly with word processing software, spreadsheets, presentation software, graphics packages and multimedia applications to find, select, evaluate, transform and communicate information online. ACER has developed assessments of this kind for the IEA's International Computer and Information Literacy Study and the National Assessment Program – ICT Literacy.

Such assessments also raise the possibility of extracting information from the processes students follow, for example, in working with a spreadsheet. Records of sequences followed, options explored, corrections made, and time spent on an activity can shed light on levels of student skill and understanding. ACER has explored this possibility through its Centre for Assessment Reform and Innovation using process data from PISA assessments of digital reading and online problem solving. Information about the processes students follow has the potential to further enhance feedback to teaching and learning.

Getting the foundations right

The quality of online assessments available to schools is highly variable. Some online assessments are designed primarily to achieve more efficient test delivery. Others appear to be shaped by what is technologically possible, rather than educationally desirable.

Instructionally useful assessments draw on empirically based understandings of how knowledge, skills and understandings develop in an area of learning. They are aligned with well-constructed learning progressions that describe the nature of student progress. They are designed with an appreciation of how learning builds on earlier learning and lays the foundations for future learning; the crucial role of prerequisite skills and knowledge in learning success; the kinds of misunderstandings students commonly develop; and the common errors that students make. In other words, they begin with a deep understanding of the learning domain itself and are designed to establish and understand where students are in their long-term progress through that domain – for the purpose of improving teaching and learning.
