Recent technological advancements in artificial intelligence (AI) have enabled high school students to produce cohesive and informative responses to everyday questions or educational activities with the click of a button (Kasneci et al., 2023; Yan et al., 2023).
Ease of access to generative AI raises ethical and practical considerations for educators. Turnitin reports that one in 10 tertiary student submissions contains AI-generated text (Turnitin, 2024). While it is widely acknowledged within the education community that secondary school students are also interacting with large language model (LLM) chatbots, there remains limited understanding of their motivations for doing so.
Balmoral State High School – a large inner-city school in Brisbane with approximately 1,000 students – recognised the growing need to examine the impact of students' use of AI and began to question its implications for student success and academic integrity. This year, a group of cross-curricular Balmoral teachers is collaborating with The University of Queensland (UQ) Learning Lab through the Partner Schools Program to explore why students engage with AI tools and how this insight can shape future teaching strategies and assessment design.
With the support of the UQ Learning Lab, the team recently completed a literature review and student survey to lay the foundations of the investigation and gain a greater understanding of students' current relationship with AI.
What motivates students to plagiarise?
Before running a series of focus groups with our students, we wanted to find out what motivates them to plagiarise.
During our review of research on motivation, we recognised a clear gap in the existing literature around students' motivation to use LLMs. As a result, we focused primarily on students' motivation to cheat or plagiarise.
Research reveals that factors such as easy access to information online, pressure to perform, and misinformation have changed students' attitudes towards plagiarism (Ma et al., 2008; Evering & Moorman, 2012). Other motivating factors include a weak sense of integrity, immaturity, a lack of confidence or experience with certain genres of writing, and a limited understanding of how to complete certain tasks. Notably, students who do not plagiarise have been found to refrain out of fear of getting caught rather than for ethical reasons (Evering & Moorman, 2012).
These research insights provided a backdrop for understanding how students might approach the use of LLMs in their learning, and will therefore help guide our practice as teachers with regard to assessment design and the explicit teaching of ethical AI use in the future.
Considerations for teachers
Overall, the academic literature indicates a promising outlook on the integration of LLMs in education, with potential benefits in content creation, personalised learning experiences, and accessibility improvements for students with disabilities (Kasneci et al., 2023). However, concerns persist regarding interpretability, ethical implications, and the potential for LLMs to diminish critical thinking skills when they are overly relied upon for complex tasks (Kasneci et al., 2023; Grose, 2023).
Currently, students are using AI in education to perform time-consuming and repetitive tasks, such as completing assessment items (Yan et al., 2023). Overwhelmingly, the research suggests that AI and LLMs are promising tools that can enhance educational experiences by automating tasks and supporting learning, but their integration must be carefully managed to maximise their benefits and avoid their pitfalls.
In secondary settings, we believe AI will enhance engagement and personalise learning experiences. In particular, it stands to benefit students who require intensive language-learning support, and to help develop problem-solving skills in all students. However, researchers caution against over-reliance on LLMs, as this could stifle critical thinking and independent learning skills. Most studies advocate redesigning traditional assessment methods to foster creativity and mitigate the risk of LLMs undermining academic integrity (Crawford et al., 2023).
How are Balmoral students currently using AI?
In Term 2 this year, our students were asked to complete an anonymous survey about their use of AI platforms for assessments and general schoolwork. A total of 399 students (163 females, 203 males, 15 non-binary, and 18 who did not disclose their gender) from years 7 to 12 participated. This survey has provided us with valuable insights into how and why students use AI in their learning, which will help us shape our future focus groups to better understand student motivations.
From the survey data: 53% of students said they had not used AI in their schooling. Of this group, 41% felt that they did not need to use AI, whilst only 11% avoided it out of fear of getting caught and 13% admitted to not knowing how to use it.
Of the students currently engaging with AI, the majority (80%) use it for English tasks, followed by Science (50%) and Humanities (42%). Their main motivations were gaining examples of 'good quality' work or creating a starting point for writing, fixing grammatical and spelling errors, and improving the overall quality of their own work prior to submission.
Student feedback on motivation and ethical use
When students were asked to comment on their motivations, these were some of the reflections:
If I need some feedback on a specific part of work I am doing, it is helpful when finding errors or expanding on a certain idea and acts as a guideline.
I feel like I don’t understand how to complete the task and I don’t feel confident to complete the task
I use it to give me a structure for English so that my stories are planned and I can start writing faster with a more sound plan
Trying to answer questions by googling is too inefficient nowadays. AI allow you to ask a question and get an immediate relevant answer. It’s a great tool for research.
It helps a lot if I miss out on schoolwork, and if I need feedback if the teacher isn't there or a substitute.
The survey also explored the ethical considerations of using AI in the classroom, asking students whether they considered it cheating. On average, only one-third (34%) of students felt using AI was cheating, and this percentage decreased as year levels increased: 42% of year 7s, compared to 29% of year 12s. Male students across all year levels were slightly more likely to consider it cheating.
When students were asked to comment on their stance about whether AI use is considered cheating, their explanations highlighted the complexity of this issue. Many respondents acknowledged that AI should be used as a co-writer and not as a ghost writer for their work:
… it is easy and accessible for everyone, and while it's not good for doing the work for you, it's helpful when you need feedback on a certain aspect of what you're writing etc.
It doesn't give anyone a particular advantage or understanding of the topic, but they need to use some part of their brain to access and use the information AI gives them.
Depends how AI is used in school, if it is used to complete your own work without the student having any understanding, I see it as cheating. If a student uses it to teach them or explain something in an assessment, it’s not really cheating.
Writing a whole essay with AI would be considered cheating. But using it for editing small things and research is fine.
Whilst there is still much more analysis to perform on our survey results and responses, our action research is starting to paint a picture: our students see AI as a valuable tool for improving their work. However, it also highlights how important it is for teachers to guide students in using that tool ethically.
Where to next?
From here, we plan to conduct further analysis of our whole-school survey data to deepen our understanding of students' use of AI in their learning.
The aim of this initial survey was to provide insight into our students' perceptions of AI and to guide the development of authentic focus groups as the next stage of our research project. We anticipate that these focus groups will give us a more comprehensive understanding of student perceptions of AI, and ultimately answer our research questions: 'What are the motivations of students for engaging with LLMs in their learning?' and 'What implications for learning and assessment design do these insights provide?'
References
Crawford, J., Cowling, M., & Allen, K. A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02
Evering, L. C., & Moorman, G. (2012). Rethinking plagiarism in the digital age. Journal of Adolescent & Adult Literacy, 56(1), 35–44. https://www.jstor.org/stable/23367758
Grose, T. K. (2023). Disruptive influence. ASEE Prism, 32(3), 14–17. https://www.jstor.org/stable/10.2307/48734149
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
Ma, H. J., Wan, G., & Lu, E. Y. (2008). Digital cheating and plagiarism in schools. Theory Into Practice, 47(3), 197–203. http://www.jstor.org/stable/40071543
Turnitin. (2024, April 9). Turnitin marks one year anniversary of its AI writing detector with millions of papers reviewed globally [Press release]. https://www.turnitin.com/press/press-detail_17793
Yan, L., Sha, L., Zhao, L., Li, Y., Martinez‐Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2023). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90–112. https://doi.org/10.1111/bjet.13370
How are students in your own school setting engaging with chatbots and other AI tools? What are their motivations for doing so? What implications does this have for your own teaching and assessment practices?