Podcast Special: ChatGPT and the future of learning

Rebecca Vukovic: Hello and thanks for listening to this special podcast episode from Teacher. I’m Rebecca Vukovic.

Dominique Russell: And I’m Dominique Russell.

RV: Today, we're talking about ChatGPT, the language model developed by OpenAI that's been making waves in the tech world. So, what exactly is ChatGPT? Well, GPT stands for 'Generative Pre-trained Transformer', which is a type of neural network that's been pre-trained on massive amounts of text data. Essentially, this means that ChatGPT has been exposed to a huge amount of human language and has learned to generate text that sounds natural and coherent.

But what really sets ChatGPT apart is its ability to engage in conversations with humans. By feeding it prompts or questions, ChatGPT can generate responses that sound like they were written by a real person. It can understand context, follow a conversation thread, and even display a sense of humour or personality. In fact, ChatGPT has become so advanced that it's been used in a variety of applications, from customer service chatbots to virtual writing assistants.

So why is everyone talking about ChatGPT? Well, it's partly because of the impressive technical achievements that have gone into developing this language model. But it's also because of the potential impact it could have on how we interact with machines and each other. Some experts have even speculated that ChatGPT could be a step towards creating true artificial intelligence, capable of understanding and engaging with humans on a level we've never seen before.

DR: In addition to its applications in the tech world, ChatGPT is also having an impact on education and schooling. With its ability to generate natural language responses, ChatGPT has been used to create virtual tutoring and mentorship programs that can engage with students and provide personalised feedback. For example, ChatGPT can assist students with homework questions or essay writing, and even provide explanations for complex concepts in a way that's easy to understand. Some educators are also exploring how ChatGPT can be used to create virtual classrooms, allowing students to interact with a machine in a more natural and conversational way. While the use of ChatGPT in education is still in its early stages, it's clear that this technology has the potential to transform the way we teach and learn, opening up new avenues for personalised and interactive education.

RV: You may be thinking … that doesn’t sound like a typical introduction from a Teacher magazine podcast, and you’d be right. The introduction we just read was written entirely by ChatGPT itself. I asked the platform to write a podcast introduction explaining what ChatGPT is and why everyone is talking about it. I asked it to mention how ChatGPT is impacting education, schooling, and students as well. It’s scary to think how quickly and, most of the time, how accurately this platform can generate text for its users.

DR: Yeah, it’s generated a pretty impressive podcast introduction for us. I’m sure our listeners have been having conversations with their colleagues and students over the past few weeks or even months about ChatGPT, and have been reflecting on how this tool might change the way we do things and what the future of AI in education looks like. There are so many questions to explore on this topic, and in today’s episode, we’re going to take you along on the journey as we have a quick play around with ChatGPT. We’ll be posing some questions to it and will share with you what it comes back with. We’ve also sat down with several education experts to hear their thoughts on the potential impact that ChatGPT will have on teaching and learning – from assessment to equity in education and policy decisions as well.

RV: Okay so I sat down with ChatGPT to ask it a couple of questions. It’s been said that future versions of ChatGPT will be voice activated and will generate audio responses, but in its current form, ChatGPT only generates written responses – which isn’t ideal for a podcast medium. So Dom, for this next section of the episode, could you pretend to be ChatGPT and read out its responses for listeners?

DR: Sure!

RV: Okay let’s go! ChatGPT, how can you improve teaching and learning in schools?

ChatGPT: As a language model, ChatGPT can potentially improve teaching and learning in schools in several ways. Firstly, it can be used as an educational resource for students, providing access to a wealth of knowledge and information on a wide range of subjects. This can help to supplement classroom teaching and allow students to explore topics in greater depth. Additionally, ChatGPT can assist with language learning by providing language practice and feedback to students, improving their fluency and comprehension.

Furthermore, ChatGPT can be used as a tool for teachers to help them develop lesson plans and teaching materials. It can also provide automated grading and feedback on assignments, freeing up teachers' time and enabling them to focus on providing more personalised instruction. ChatGPT can also help to identify knowledge gaps and misconceptions in students, allowing teachers to adjust their teaching approach accordingly. Ultimately, ChatGPT has the potential to enhance the teaching and learning experience in schools by providing greater access to knowledge, personalised instruction, and feedback.

DR: So, there are a few things to pick up on from what ChatGPT told us there. It’s interesting to hear the suggestion of automated grading for assignments – I wonder if any of our listeners have tested this out?

I’m sure some of you listening now are a bit daunted by what we just heard from ChatGPT and how it says it can be used in classrooms, but to pull back a bit and perhaps ease our nerves slightly, we thought it would be interesting to reflect on how the education system embraced new technologies of the past – think computers, the internet and calculators. I asked Dr Catherine McClellan, who is the Deputy CEO of Research and Assessment at the Australian Council for Educational Research (ACER), to expand on this, but also share how ChatGPT offers something quite different to past technology.

Catherine also had some really interesting things to say about ChatGPT and assessment. We won’t have the time to get into that in this episode, but we did publish an article just yesterday looking at more elements of ChatGPT’s impact, where we share Catherine’s expertise on assessment, and more insights from the other experts you’ll hear from in this podcast. You can find that article by searching for ‘ChatGPT’ at our website, teachermagazine.com. Now though, here’s Catherine:

Catherine McClellan: It’s an interesting question because there's been, as you said, what I would describe as ‘the moral panic of the end of learning’ for thousands of years actually, probably goes back – there’s some evidence that the Egyptian pharaoh, who first had writing introduced in Egypt – ‘oh, we will lose all of our oral history!’. It goes way back – paper, slates, erasers, calculators, like you said, the internet. It's been going on ever since there's been education, learning of any sort.

And so, this is another one. And it will be similar in some ways, I think, to the past ones and different in other ways. And I think the similarity will be, as everyone fears, maybe the initial uses will be things we might consider non-optimal. Will students use it to write their essays? Almost certainly. Just like they use Wikipedia and to copy chunks of [things]. That’s not going to be different. And so, I do think maybe it's just an age thing (because I'm old), but students are ahead of us in adopting technology. So, they'll see the uses first.

And so, I think there will be a transition period. I think maybe this is the source of a lot of fear – that there will be this period in which they’re ahead of us and they're finding ways to escape the boundaries we’re setting around academic work or assessment or achievement measures. So, I do think that transition period always happens and it’s always a little shaky. And it's scary because we don't already know. It goes outside our established boundaries. That's always anxiety-provoking – the fear of the unknown. What is this going to do?

I always take reassurance in the idea human beings are learning animals. We love to learn things. We're not going to stop. I do think it will change how things are learnt, as nearly every tool we've introduced in the history of ever has in fact changed it. So, you know, going back to our Egyptian pharaoh, who may or may not be apocryphal, he said ‘we’ll lose all our oral histories’. You probably did lose that tradition of always learning history as an oral thing. So, of course it changed. But you didn't lose the history. It got written down and then could be saved. We still have those papyri, so, it's still there, they didn't lose the history. I think what they lost was this tradition of always learning it orally and then everybody knowing their family history for 20 generations back verbally. So, it changed, but it didn't disappear, people just learned in a different way.

And I think all of them have been like that. I was a secondary Math teacher many years ago and I came through a cohort – this will tell you how old I am – when graphing calculators first became a thing. They were very popular all of a sudden and I was in Graduate School at the time, getting my Masters in Teaching, and I was trained to use them. So, I thought it was the technology … ‘oh, great we can use these things and we can do all this stuff’. And I thought it was a tool, and joined a faculty who mostly thought of them as a horrifying thing – ‘students will never learn how to graph, they'll never learn how to solve equations, they’ll never learn all these things’ – ‘Oh, actually they will’, because I had the training. So, they sort of: ‘you do it’ and so I did do it and it was great fun.

And when I taught precalculus I taught students on my graphing calculators and had them draw hundreds, thousands, of graphs, really complicated graphs, really amazing things that to learn by hand would have taken days to graph something – boop, and there it is. Because I was having those [conversations]: ‘and what are the patterns you’re seeing? What do you see as the behaviour with the numbers?’ ‘Really, really, large’, ‘really, really, small’, ‘really negative’, ‘really positive’, ‘what happens at zero?’

With this thinking about how a graph behaves at different parts of it, then you could see as many examples as you wanted to show. You just keep hitting the button and get a new one and it was great. And then the following year when they learned calculus, the topic of limits – limits is usually a miserable topic to teach. Students hate it, it's hard to understand, there’s a lot of fiddly notation. But these students who’d thought about how a graph behaves at a very large number or a very small number (near zero) – they breezed through limits. Three days, we’re out of here. They completely understood it because they'd been thinking about that sort of end behaviour of a graph, so they said ‘oh, I know what that is, that's end behaviour’.

So, it turns into a tool. But it doesn't turn into a tool immediately. First, it's a threat, then it's a tool. So, I think we'll see that with ChatGPT too and it's going to be, I think in this case, more of a revolution maybe in the humanities at least initially, because it can write. And that's new. We really haven't had a machine that could write. Now it doesn't write the same way people do. It doesn't think. It's a bunch of high probability linkages underneath all that complex structure and regression, which is great, but it doesn't read, it doesn't think, it doesn't write, as we think of it. It's a different process with a very similar outcome. It’s not quite the same outcome. So, I suspect it will become a tool like it did for my students learning to graph, where, okay, have a draft, but you need to put your stamp on it. You need to edit it, it needs to be in your voice, in your vernacular.

RV: I love that example there, the graphing calculator, and how it shows that emerging technologies are initially a ‘threat’ when first introduced and become a ‘tool’ over time. I'm going to come back to my conversation with ChatGPT now. The next question I asked it was: where do you get your information from? Can we trust what you tell us? This is what it had to say.

ChatGPT: As a language model, I get my information from a large corpus of text data that has been curated and pre-processed by OpenAI. This corpus includes a wide range of sources, such as books, articles, and websites, among others. Additionally, I have been trained on a massive dataset of text through machine learning techniques, and my responses are generated based on patterns and associations learned from that dataset.

It's important to note that, like any other source of information, my responses may not always be accurate or up to date. I strive to provide the best possible answers based on the information available to me, but I do not have personal experiences or access to real-time information. Therefore, it's always a good practice to cross-check any information I provide with other reliable sources and exercise critical thinking.

Overall, while I am a machine learning model and not a human, I am designed to provide [informative] and helpful responses, and I strive to be a trustworthy source of information.

DR: So that response from ChatGPT is quite important to remember, I think, and shows that we must really know and understand how this tool works. It’s something Professor George Siemens, who is the Director of the Centre for Change and Complexity in Learning at the University of South Australia, shared with me, when I asked him whether there are any misconceptions we might have about ChatGPT. Here's George.

George Siemens: So, first of all, we're starting to understand what its limitations are in some ways. Like we're introducing language into our vocabulary like when it produces a citation that literally doesn't exist, we're calling that a ‘hallucination’. And so, we're adding, you know, we're building our vocabulary to be able to describe what we're interacting with. We're also starting to see some limitations on what it can do. It's particularly bad at math and doing a few things in that regard. It's particularly good at programming and being able to do some even code debugging and editing, which you know has been an interesting insight that we haven't heard enough about. So much so that someone at ACM wrote an article recently that said, this is the end of programming, like ‘this is the most significant disruption I've seen in my career’, was basically the author's perspective on it. ACM is probably the top publication in the computer science field. So, it's not an insignificant statement.

We're also seeing that there's a very fluid relationship to truth. In fact, it doesn't even care about truth, and there's no sense in which that's a priority for it, which makes for pretty interesting results and outputs. And so, it changes how we relate to information, ‘cause we've traditionally seen information as it's accurate information or it's not accurate. And we've been talking a lot about misinformation over the last few years. With the current version of ChatGPT, we're actually talking about complete ambivalence to truthfulness, like truth just isn't a part of a thing that it does, or it aspires to be.

Even with Google search, we look at the relevance of a resource, the quality of the resource, and even Facebook with its fact checking mechanisms that they've had to add over the last few years. With ChatGPT, there's literally zero relationship to truthfulness, and it doesn't even aspire to it in its current form. So, I think some of the bigger gaps that I see, to your question, is we don't have a way of effectively, yet, describing what it is in relationship to what we're familiar with. So, I think our biggest gap right now is language. What is this thing and what does it mean?

I've heard people describe it as ‘well, it's a layer, an interaction layer, to information that we haven't had before’, a colleague, Anisha, out of University of Queensland has made that statement, that it's kind of a new way of interacting with information. And so maybe it's kind of like, you know, Windows 3, where we move from a text-based interface with computing infrastructure to a graphical user interface interaction. It could be that. I read an article recently that talked about it sort of like a blurry JPEG and, you know, that's another angle. So, you can see all of us are trying to find the language to describe what this is and why it matters.

So, I'm not that worried. We now know the limitations of ChatGPT computationally and some things that it’s particularly bad at, some things it’s particularly good at. So, we understand that. We know its tenuous or, actually, outright ambivalent relationship towards truthfulness as a concept. We're aware of that now. I think the biggest gap we see emerging right now is identifying what it is as a tool. It's the language to describe how we fit it into our school system.

Is it something that we add on to our curriculum? Is it something that schools ban? Is it something that promotes self-regulation? Is it something that will make students lazier? We don't know that yet. So, I think that sense-making gap is the biggest one, which interestingly is a social cultural gap. It's not a technological gap.

RV: I think that concept of ‘truth’ and ensuring factual accuracy is something that teachers and schools will continue to grapple with as students use ChatGPT more and more. So much of it is about teaching students those critical thinking skills and the importance of fact checking. It will be interesting to see how educators approach it. Moving on and back to my conversation with ChatGPT, the next thing I wanted to ask it was about equity in education, and how it can support students experiencing disadvantage. I asked it: How can ChatGPT be used in education to level the playing field between our most advantaged and disadvantaged students? Here was its response.

ChatGPT: ChatGPT can be used in education to level the playing field between advantaged and disadvantaged students by providing access to high-quality educational resources and personalised instruction.

One way ChatGPT can help level the playing field is by providing equal access to information and knowledge. Disadvantaged students may not have access to the same resources and materials as their advantaged peers, but with ChatGPT, they can have access to a wealth of information and learning materials regardless of their socio-economic background.

Additionally, ChatGPT can assist with personalised instruction, which is especially important for disadvantaged students who may benefit from more individualised attention. By analysing students' responses and providing tailored feedback, ChatGPT can help students better understand difficult concepts and identify knowledge gaps. This can be particularly helpful in subjects where students may struggle, such as math or science.

Furthermore, ChatGPT can also assist with language learning, which can be a significant barrier for disadvantaged students. With language practice and feedback, ChatGPT can help students improve their fluency and comprehension, which can be essential for success in other subjects.

Overall, ChatGPT has the potential to level the playing field in education by providing access to information, personalised instruction, and language learning to support disadvantaged students, helping them overcome barriers to success and achieve their full potential.

RV: It raises some really interesting points there, although it does sound too good to be true. I posed a similar question to Dr Pasi Sahlberg, a professor of educational leadership at the Melbourne Graduate School of Education at the University of Melbourne. He says he doesn't believe that education would become more equitable just by having ChatGPT. Instead, he says it’s important that students understand the technology behind ChatGPT, and this could help to break down some barriers.

Pasi Sahlberg: I think it’s critically important that in schools we make sure that each and every child, before they get into using ChatGPT, understands what it is – the ethical issues, the safety issues, the identity issues, all those things that everybody has to understand. If we don’t do those things carefully in our schools and we just let our children play with this as if it was the kind of ultimate truth that comes out of it, I think we’re going to see a lot of problems and issues and challenges, in that there will be young people who know more about the limits and conditions under which we can use ChatGPT – for example, for their own learning and progress in schools – compared to those who don’t. From the equity point of view, the critically important thing is that we make sure that each and every child understands – what are we facing here and what are we likely to be experiencing in the future when it comes to the technology? And then make sure that they know what they’re doing with this technology when they are studying in school.

RV: So, is this the end of education as we know it? Of course not. While we’re only just beginning to understand the potential impact that ChatGPT and similar AI chatbots will have on teaching and learning, the role of teachers and schools is just as critical as ever before. In fact, Pasi says that ‘because of the absence of research to inform us what to do with AI now, educators and policymakers need to rely on professional wisdom and innovation of practitioners more than before’.

PS: Especially here in Australia, we have heard so much about these evidence-based ideas and philosophies – that schools and teachers can only do things in a school that are based on evidence. And evidence is often regarded as research. We here at the universities think about evidence as rigorous, reliable, trustworthy, academic research papers.

But when I said what you read there, I was thinking that we don’t have time for that type of research. Our way of producing research is too slow for this issue. At the same time, artificial intelligence and these chatbots are changing so quickly that even if we do a study now and report it next year, it’s going to be outdated already because there will be a next-generation thing. The next one will probably be something that can speak, so that you don’t need to type anything anymore – you get spoken responses back and it’s a completely new type of thing.

So, if we want to find good ways or new ways to think about what to do with this current situation now within the next or this school year, what I wrote was that young people often have new and fresher ideas than we do about what to do and what it means for their own learning, particularly in high school. In Australian high schools, a good way to find ways forward is to combine the collective professional wisdom of teachers about teaching and learning with this fresh insight and experience of students as learners. How do they see the best use of this type of chatbot in their own learning? I think we are undermining young students’ potential and capabilities if we only concentrate on how to avoid them cheating in writing essays or responding to exam questions. I think most young people, if they understand what this is all about, will be using this in good and smart ways for their own learning.

DR: That’s all for this episode. Thanks for listening. Don’t forget to subscribe to our podcast channel on Spotify or Apple podcasts, and if you’d like to leave us a rating or a review, it really helps us out. We’ll be back with a brand-new episode very soon.

Consider the 3 questions that were posed to ChatGPT in this podcast.

How do you see ChatGPT improving teaching and learning in schools? How will you be ensuring that students learn which parts of ChatGPT to trust, and which to be wary of? How could you use ChatGPT to lessen the gap between your most advantaged and disadvantaged students?