TER #230 feature - Generative AI and Education with Linda Corrin

    10:17PM Sep 27, 2023

    Speakers:

    Cameron Malcher, Linda Corrin

    Keywords:

    ai, generative, students, learning, education, tools, generate, work, chatgpt, teaching, question, gpt, stephen sondheim, assessment, create, content, outputs, chat, give, bit

    Joining me now from Deakin University is the Associate Director of Learning Innovation, Linda Corrin. Linda, welcome to the podcast.

    Thanks for having me.

    While we're about to have a conversation about your research and work in the field of education technology, particularly AI and its implications for higher ed and the transition from high school to higher ed, we have just spent the last 40 minutes reminiscing about our shared history in the community theater groups of Wollongong. And that's something you've kept much more active in than I have, by the sounds of things.

    Yes, look, I love everything to do with theater. It's been great to get back involved post-pandemic. And I'm also starting to explore a little more in my work the connections you can sometimes draw between education and theater. So nothing's wasted.

    Well, from those community theater days, you've gone from being a theater technician into researching and working in educational technology, and I went on to be a high school drama teacher. So I think there's an obvious difference in where our focuses were all those many years ago.

    Yeah, yeah, both very valid ways to go, I think.

    Oh, absolutely. And I dare say you are in much higher demand than performers. There's a certain oversupply there, I think, especially in the age of YouTube,

    and an undersupply of technicians. So it's something for people out there to think about, I guess.

    So, on that topic, as you said, you have gone on to research and work in the field of educational technology. What have been some of your key research areas over the last two decades?

    Well, I started off in this area because I was interested in looking at what technology meant for education. So I got in fairly early, when things like e-learning and the use of technology to support educational delivery were really in their infancy in some ways. I've watched that development and evolution over the years: lots of new technologies have come along, and lots of different ways that we can use everyday technologies as well. And then, of course, we've been hit quite firmly in the face at the moment with generative AI, which is just starting to emerge now, and that's really shaking things up. So it's never been a boring space to research in. But one of the things I've really been interested in the whole way along is: how do we learn from the research that we do and put it into practice in the classroom? And the data that some of these technologies throw up as part of everyday work with them, how can we use that data in a more effective way to work out how to better support students?

    Well, let's dive right into that topic of AI in education, because it is what brought us together for this conversation. And I suppose it's worth flagging the fact that ChatGPT, which is probably the example of text-based generative AI that most people are familiar with, was only launched in November of 2022, so less than a year ago, and already we are starting to see its proliferation and its use in many different sectors. But when it comes to that kind of AI in education, what do you see as its defining parameters? What does it offer in the education space that has the potential to be so transformative?

    I think what tools like ChatGPT have done recently is take us that extra step forward. AI has been around for a long time, and we've been able to do lots of different things with different forms of AI, but generative AI generates things, and there are so many different ways you can use it. It'll help you do a lot of the groundwork that doesn't require an awful lot of creativity and thinking; it can do a lot of the easy stuff, and then we as humans can come in on top of that, improve it, and take it up to the next level. So in some ways it can make learning a little bit more efficient. However, I think we've only just seen the tip of the iceberg in terms of the ways we can use it in education. In any conversation around generative AI we talk about all the possibilities, but a lot of people also go straight to the threats, and I don't want to do that in this conversation. I'd really like to talk about what we can do with it, and then obviously it would be remiss of me not to talk about the things we need to consider in other ways. But I think generative AI can really enhance the level of creativity that our students can have, if we use it in appropriate and respectful ways, because there are a lot of different considerations behind how this technology works and how it generates new content. It's a really exciting time, because there's a lot of possibility we could run with: lots of different examples of how we can use it as a tool for education, and also as part of the outcome of education that students can take forward into whatever their life holds, whether that be the workplace or community. So it's a big thing, and it's definitely a turning point in how we consider the way we teach children and adults. But definitely an exciting time.

    Well, as you said, the key difference in this current generation of AI is that notion of it being generative, of being able to create something. But you've touched on the fact that there are ethical concerns about its way of going about that, because there are a lot of people who accuse the current, relatively new model of generative AI of being more of a copying process. So are you able to explain the difference between what might be considered computers and algorithms copying something, versus the actual generative processes that these AI engines engage in?

    So I guess it comes down to the fact that a lot of this is driven by machine learning. You're right that technology can take bits from different places and reproduce them in certain ways, and that's one way the technology can work. But generative AI looks at a really broad corpus of information. And that's, again, where some of the controversy comes in: where is it getting its information from? How broad is the information it's drawing on? And so on, and so forth. But generative AI is really aiming to learn from what is out there, whether it's on the web or elsewhere. I heard our friend Elon Musk is going to be trying to create generative AI from what people post on Twitter, which I find slightly disturbing, because we know that what people post on Twitter is not necessarily always representative of society as a whole.

    But I'm also reminded of the story from, I think it was about 2015 or 2016, when Microsoft first tried to launch an AI chatbot that had been trained on Twitter, and had to shut it down because the discourse on Twitter led it to very quickly become an antisemitic, racist character; that was the volume of content it was taking in. So I'll be curious to see how Musk will make a new Twitter-trained AI that's less of that,

    especially with the way he's been changing Twitter, or shall we say X. But coming back to your question about how it generates: it learns from all of this material, and it really does try to draw together different elements of what it's learned to create something new. The whole idea is that each time you go to a tool like ChatGPT, or Midjourney if you're doing graphics, it tries to give you something new; it tries to create something that's a little bit different to what it's created before. My team at Deakin actually went to ChatGPT with the same question. We asked the same question from different computers on different days, just to have a look at what the difference would be in what ChatGPT would generate. And it's really interesting, because we asked it to summarize the main points from a particular book we were looking at at that particular time, and each different person who asked ChatGPT the same question, 'please, what are the main points of this book?', got a different response. It highlighted different elements of the book; sometimes it totally ignored the second half of the book. In its attempt to give something new each time, it was sometimes very verbose, and sometimes very succinct. We found that with some of the generative AIs, when you ask maths questions, it will only give the right answer a certain number of times, and then it actually starts to give the wrong answer because it's trying to generate something new. So it is problematic in a lot of ways. And I think that leads us to the conversation we'll inevitably have about how we support our students to be able to critically analyze the outputs of any kind of generative AI tool, to determine how robust, how reliable, and in some cases how true the outputs can be.
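
    One way to picture why identical prompts come back different: generative models typically sample the next word from a probability distribution rather than always taking the single most likely option. The following is a minimal Python sketch of that sampling idea, a toy illustration only; it is not how ChatGPT is actually implemented, and the words and scores in it are invented for the example.

```python
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Pick one word from a score distribution (softmax sampling).

    Because we sample rather than always taking the top word,
    running this repeatedly on identical input can return
    different words each time, which is one reason the same
    prompt can produce a different answer on different days.
    """
    # Softmax: scale scores by temperature, exponentiate, normalize.
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    r = random.random()
    cumulative = 0.0
    for word, e in exps.items():
        cumulative += e / total
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

# Invented scores standing in for a model's preferences for the
# next word of a book summary; same "prompt", three runs:
scores = {"concise": 2.0, "verbose": 1.6, "detailed": 1.2}
print([sample_next_word(scores) for _ in range(3)])
```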

    But on that issue of teaching students to engage more effectively with AI content, and perhaps be more critical users when engaging with it: as we said at the beginning of this interview, ChatGPT has only been publicly available for a little under a year, and already the development it's made has been huge. You mentioned Midjourney before as well. I remember playing with Midjourney when it first came out, putting in particular prompts and getting back these mangled human bodies that looked almost like horror movie monsters. I used exactly the same prompts about six months later and got back very crisp images that actually represented what was being asked for. And just that scale of development in such a short period of time makes me wonder: what is the trajectory of growth and improvement going to be like, and how long until it's at a point where it will be difficult to impossible to actually teach people to be effective critical users of it, because it will be so indistinguishable from natural human communication?

    That's a really good question. And I think it really highlights the importance of helping our students to understand the kind of foundational knowledge they need to be able to use these technologies. A lot of people are assuming now that we don't need to teach people some of the foundational concepts in our different disciplines, because they'll just be able to ChatGPT it. And really, that's not the case. You'll know yourself, trying to use something like Midjourney, that it's very difficult to get it to create artwork in a particular style, or form, or resolution, if you don't understand a little bit about art and about the quality of image outputs. And it's going to be much the same with ChatGPT: you've got to know how to craft the prompts that you put to the AI to get it to give you material that's going to be really good. Having said that, and coming back to your point, it is improving a lot; ChatGPT can actually write its own prompts now. So yes, we will get to that point where it's going to be really indistinguishable. But then the question comes back to: what are we asking it to do in the first place? And I think that's where a lot of the problems have come up in the conversations around education, because a lot of people, parents, government, educators, their minds have gone to the worst-case scenario. They've gone to 'well, our students are just going to cheat by using AI now'. So academic integrity has become a real focal point of the conversation around generative AI. And whilst it's a very important point, and I'm not saying we should ignore it in any way, I really think we need to start at the other end. I do a lot of work in professional development of academic staff, and I'm really trying not to have the conversation about the learning and teaching implications and the academic integrity implications until I've had a conversation with them about, well, how does this thing work? What is generative AI? Let's have a play with it; let's actually see how easy or hard it is to get generative AI to do what you want it to do. Because a lot of people assume it's a lot easier than it sometimes is. It's exactly what you said before: trying to get it to create images is not as easy as people assume. One of the hardest things I find is getting it to generate an image that incorporates a piece of text. If I say 'create a logo with this word on it', it's actually very difficult, and you need to do quite a lot of prompting, and refining of those prompts, to get to that final outcome. And all the time that I'm doing that, or a student is working through that process, they're learning. So it's a good thing, in a sense, that you are learning in order to get the kind of output you need. As generative AI gets better, maybe that learning will reduce a little bit, but it's not going to go away. You still need to know what you require the outcome to be, and you do need to have that ability to reflect, refine, and to some extent critically analyze. So give the students the tools to be critical thinkers around the outputs of generative AI. Maybe you don't necessarily use generative AI to do that process. Maybe there are other things you can use to get that foundational knowledge into the heads of the students who are going to be using generative AI. But they definitely need some of that foundational stuff. You can't successfully engage with generative AI without understanding some of those basic principles of whatever discipline you're approaching it from.

    Hmm. Well, let's go back for a moment to the issue you raised there of academic integrity, and how immediately people's minds go to the worst-case scenario, because obviously one of the biggest threats, and we've been using that word, to normal educational practice has been that it allows students to replicate or produce work without actually themselves understanding the content of the material being put in. And I'm thinking about the countless social media posts I've seen where teachers have taken photos or screenshots of student work that has been cut and pasted from ChatGPT, including the 'as a generative AI model, I cannot...' line, which just gives the game away straight away, because their understanding of the subject is so poor that they copy and paste without even engaging with the content. But the question I really want to ask is this: so far, we've seen a lot of different policy reactions to AI in different sectors. Currently, the standard response in the K to 12 space has been an initial ban while an effective AI engagement framework is developed. Just to start this conversation, what is Deakin's current policy and approach to AI in university assessment?

    Look, it's a really good, interesting and difficult question to answer, in the sense that Deakin got on board pretty early with the whole generative AI thing, and they had a lot of opinions. We have a lot of experts in both generative AI and assessment research at Deakin, through the Centre for Research in Assessment and Digital Learning. And one of the first things Deakin did is they didn't ban it. They said, look, we can't ban it; we can't stop students from using it. And as you've seen, there's been such a proliferation of different AI tools coming out every day now that it would be impossible to try to monitor all of that. So the Deakin approach has really been to embrace generative AI, and then to start to say: okay, if we know our students are going to use it, how do we look at the way we teach with it, and the way we assess our students, so that we're giving them assessments where generative AI can't answer 100% of the assessment task for them, and the student has to demonstrate some of their own thinking as part of that. And I think that's really the answer for all different levels. I was reading the consultation paper on the Australian framework for generative AI in schools, and it says much the same thing: we need to look at the functionality of it, we need to look at the learning and teaching approaches we use, we need to recognize the inherent biases and problems with everything from intellectual property to cultural appropriation, and we need to make our students aware of all of those issues so they can work out how to use this. But it comes very much back to how we design, not just the assessment, because again we always jump to that assessment end of the piece, and we don't always have enough of a chat about the learning and teaching that happens in the meantime. It could be that we set up activities for our students where we actually encourage them to use generative AI to develop structures and drafts of particular works, and then they can show the progress and the process they go through. So a lot of our assessment is starting to be less about the outcome, less about that artifact that comes at the very end, and a little bit more about how the students can demonstrate the process, their thinking, their input, how they've taken the outputs of generative AI tools and made them into something of their own. There are a lot of people writing about this particular idea at the moment. A colleague I co-edit a journal with, Associate Professor Jason Lodge, has written a lot about this idea that it's going to become less about the artifact of the outcome and more about the process and the learning that occurs.

    Yeah, well, from my experience, particularly in the high school space, I think when we think about AI as a threat, what has really been highlighted is how much of an assumption there is around the artifact as evidence of learning, as opposed to marking or evaluating the artifact itself. Suddenly, generative AI separates the artifact from whatever learning has or has not happened within the individual, and so these underlying assumptions, around which whole education systems have been built, are suddenly disrupted and exposed for the assumptions that they are. But what you're describing there, the idea of process over product, effectively, is one of the things that first comes to my mind as something that will very much challenge the industrial model of higher education. One of the things we hear about a lot in higher education is academics talking about how they're only paid for so many minutes per paper to mark, because not only has the assignment become a product with an assumption of learning, but the whole hiring of staff has been built around the assumption that you can do it that way. So, the reason for this very long prelude is: when you think about implementing that more process-driven assessment of learning, especially in a higher education setting, what does that look like? How does that work within current hiring practices and structures? Or are you looking at significant changes that require more resourcing, or less resourcing? How does that work?

    Look, we'd always like more resourcing, but as far as what's going to happen, I think it's interesting. I've heard two very different arguments about this sort of idea. A lot of universities are starting to think, well, maybe the only way we can assess our students is to talk to them. So we go back to that very, you know, two-, three-, four-hundred-year-old model where we teach someone throughout a semester or trimester, and then we sit down with them for however long and we talk about it, and we really get them to demonstrate in an oral way that they have understood the concepts of the particular course they're taking. And we are seeing some of the universities, especially the Group of Eight, starting to bring some of those practices to the fore. That has an interesting implication in terms of resourcing, because with the technologies we now have for teaching, marking oral exams on the fly could actually not be too much of a problem from a workload perspective, if you've got enough people to do all the oral exams, and enough time to do them. Because, exactly as you say, the massification of higher education has put a lot of strain on academic staff, and especially the sessional staff who end up doing a lot of that marking and assessment work. The other side of the argument, of course, is this process idea: really focusing in at several points throughout a semester or trimester to look at that process, and to mark and grade it as students progress through. And you're right, there are some implications there. But in some cases it may just be a tweaking of the assessment design, so that the marking load doesn't change substantially but the focus of the marking changes. But yeah, you're right, there are some big conversations to be had going forward about the role of education. It's probably, I don't want to say easier, but a little more straightforward in the K to 12 space, where there's a definite purpose to having students attend schools at certain times in their development, on all sorts of different scales. Whereas for adult education, like higher education and vocational education, there's a really interesting tension now around the need for a university as an education provider. I can go to ChatGPT right now and say, 'ChatGPT, write me a curriculum outline for a course on X, and generate questions that I can answer to test my knowledge', and it can do all of that for me. So why do I need to go to university now? Well, the answer is really: to be accredited in what I know. So how does the university then turn around the way that it assesses and accredits someone's learning? But I'd hate to see universities disappear completely. I think there's a very important function of universities, and some of it is purely social. It's about letting people learn together, letting people build networks and friendship groups, and all the different things that you get. University used to be about going and finding yourself, and maybe learning a few things on the side. It's changed a lot since then, but I'd hate to lose all of that. I don't think it would be a wonderful world if ChatGPT was teaching us everything.

    Oh, look, I'm sure we could wander down a very long trail of discussing the philosophy of university and education as a guide towards output in an economic sense versus self-actualization and development. And forgive me for introducing a slightly depressive sentiment into the interview, but I think there's definitely a sense that, at the moment, the economic drivers win the argument socially over any notion of humans developing into thinking, educated individuals. And I suppose that is part of the challenge, because while we're talking about the capacity and the possibilities of generative AI, we are talking about them in the broader context of what is driving its role in society. We've already heard many stories around the world of call center staff being laid off en masse, or being replaced by chatbots for online systems and that sort of thing. And currently there's a significant strike happening in the US film industry, because generative AI is suddenly threatening the work of not only writers but also actors, particularly background actors, who can be replaced by generated images. I know we said we were trying to hold off on talking about the threats of AI, but obviously the big question for a lot of people in the education space is: how long until generative AI can actually be an effective replacement for a human teacher in a given context, especially if you're operating in that economic model of schools as delivering content that allows someone to serve an economic purpose? What are your thoughts in that regard? At what point does generative AI stop being a potentially helpful aid to education and start to become a replacement for human-driven education?

    Look, I believe very strongly that the human teacher is central to education. I really think if we ever got to the point where generative AI starts to take over and do that function of teaching on a large scale, then we really have lost our way. That's my personal opinion, obviously. However, you're right, there is a very big economic drive there. We also know that we don't necessarily have enough people to be teachers in this country, in some particular areas and in some places, and it does really worry me that people might say, well, we'll just get generative AI to do some of that work. I really don't think that's going to be a good way for us to go forward. Generative AI is everywhere; it's already in the workplace, we know that. One of the things that education, regardless of the level, is going to need to do is prepare students for that kind of world. So really, we need to be teaching the things that are beyond the generative AI, so that people can use generative AI. And I really think the human still has a really important role to play in that. It's still early days, and we really do need to see how this evolves, but I don't think you could ever take the human out of the education equation to the extent that generative AI could overtake it. But I agree with what you said before: it certainly is always going to be a helpful tool, and I think we should try to keep it at that level, and always recognize the value of the human.

    Well, let's focus on that slightly more positive aspect. In these discussions around what assessment might look like and how AI might support the education process, one of the examples you gave already was using ChatGPT to help draft and develop, say, a curriculum outline, or possibly to generate ideas that might inform, say, a lecture structure. What are some of the more effective ways that you're aware of it being used to enhance and support education, rather than necessarily replace it?

    Look, I've heard a lot of examples. There was a really good one from the UK, where a student was using ChatGPT as a tool to help generate feedback on the work he was preparing to submit for his assignments. He wasn't getting the artificial intelligence to write the assessment for him; he was using it as a sort of feedback loop, I guess, a critical friend, if you want to put it that way. He'd do a bit of work, he'd put it into ChatGPT, he'd get a bit of feedback, and he would refine the work over time. And it was really interesting: he also apparently submitted a piece of work that was purely ChatGPT, and of course it didn't do very well; the assignment wasn't marked very highly. But the one where he actually had his own intellectual input into the process, where he'd had a conversation with a generative AI to develop some work, did much better. So helping students to understand how they can do that for themselves is a really good example. In terms of learning activities, as I said before, you can get ChatGPT to do a bit of the groundwork, and then the students can work either individually or together to really improve the output and its quality. There's an awful lot that generative AI can't do yet; there are a lot of contextual things, and you find that it can be very vague in its language if it doesn't understand the context all that well. And of course, again, it depends on how you prompt it as to how well it will understand the context. A friend of mine was saying the other day that a piece of work was obviously written by ChatGPT because there were far too many adjectives. And then they went on to say you could actually write a prompt to say 'reduce the number of adjectives', so it looks a little bit more like a human's written it. So there's a lot we can learn about how humans write and how humans create by looking at what generative AI comes up with, and looking at the difference between those two outputs. But again, I always come back to this idea. You were talking before about that economic drive and the jobs of the future that students are going to go into, and of course one of the biggest rising jobs at the moment is the job of prompt engineering: people who basically get paid to write prompts for generative AI. But in order to know how to do that, and to do it well, there's a lot of stuff you need to know in the background, and education is always going to have that role of teaching it. But yeah, lots of different possibilities. There are lots and lots of pages on the internet starting to emerge now that are quite interesting, where people are really trying to share this practice. That's been a nice thing about this whole situation: people panicked a little bit, and then they said, right, let's go to the community, let's learn from each other and find out how other people are doing these sorts of things in their classes and in their learning design. So it's been quite positive in that way.

    Well, just to connect that back to the issue you raised before about how integrity of assessment has obviously been a key focus: when we think about AI's ability to engage with other content, one of the things that's been a big talking point is the large-scale failure of programs to be able to detect when something was written by an AI. You just mentioned that a colleague said, oh, this was clearly written by an AI, and again you see countless examples online of people saying, I wrote this, and the AI detection software told me it was written by an AI. So does it just come down to human intuition? You've mentioned that assessment integrity is one of the key things that AI challenges, but if AI cannot detect itself, what are some of the more effective ways to help identify it and to support students? And I suppose I'll throw one other thought in there, because we have for so long had such a focus on the artifact as a proxy for actual learning. We have an unfortunately performance-oriented culture in a lot of education: mastery is touted as the goal, but performance is what's marked. And so we have this huge performance-oriented culture where some people will really struggle to get beyond the thought of 'it's done, I don't have to worry about it'. So how do we help people overcome that, especially when the AI detection software may not even be able to help students detect when something's written by an AI? Because I imagine, and I'm sure somewhere someone on the internet is studying this, there must be a date in the future when the majority of content on the internet will go from being human-generated to at least AI-assisted, if not AI-generated. And so students using generative AI may not even have the ability to tell if what they're reading is authentic or AI-generated. So how do we go about that issue of integrity and addressing the authenticity of content, especially for novice learners who may not be able to tell the difference?

    That is a big question, but a really important one. I'll take it in a couple of parts if I can, because the first thing you were saying there is: how can we know whether or not the student has done the work themselves? And again, I think it comes back to a word you used as you were asking the question, which is culture. In our education systems at the moment, the culture is very much that the students go away and do some work, and they hand us a piece of something, whether it's a report or an essay or an artwork or whatever it is, and we mark that in our own offices, and then we give them feedback. I think we need to change that culture. We need to be in the classroom more with our students. It's my big thing as a higher education educator: I look very longingly at the K to 12 area, where you get to spend lots of time in classrooms with students, and you see their development, and you know whether or not that student is capable of doing the thing they've just handed in. Whereas in higher education, we may not see a student at all in a particular teaching period, and

    they're one of a sea of 700 faces in a lecture.

    That's it. And a lot of our tutorials and things are taken by sessional staff, who come in for maybe one or two semesters, and then that's it. There's not that continuity of seeing the students develop. So I really do think we need to rethink how we do that. For people coming into higher education in particular, we're all very much about being flexible and giving people options now, which is all wonderful, and I don't disagree with that in any way. But sometimes it means that the education experience becomes quite disjointed; we don't have a lot of continuity, and we can't see some of that progress. So I do think we need to take a more top-level view. At the moment, when we teach, especially in higher education, we're very much at the level of the individual unit, or course, or subject, whatever institution you work at, they call them slightly different things. An individual teacher looks after an element of the learning, and then they hand the students over into another unit, and of course they lose all sight of that. So I think, as an institution, we need to look at how we follow the student journey a lot more, and how we have more time with the students, so that we can really understand: are they capable of doing the things they're handing in to us? Do they need to continue to hand things in? Are there other ways we can actually monitor and look at that progress? I think K to 12 is doing some really wonderful things in that sort of capability development and tracking area. I know there's a group in the Assessment Research Centre at the University of Melbourne doing some really wonderful work with K to 12 institutions around Australia and the world, looking at the progress of students over time, and not just relying on the individual points and artifacts. So I think higher education can learn a lot from that conversation. Coming to the second part of your question, around how we help our students to know what out there on the big wide internet is real and what's not. Well, it's all real to some extent, but what has been generated by an AI, and what's a human contribution? That's a really difficult question. I went to post something on LinkedIn the other day, and before I could even start to type anything, it asked: would you like AI to generate the content of your LinkedIn post for you? To which I said, all right, I'll give it a go. And to be honest, it came up with something I thought was a bit rubbish, so I wrote it myself. But these are the sorts of things where, eventually, we won't know. And that's why it becomes so important for us to look critically at the sources that come to us. I'm reading a fantastic book at the moment by a guy called Tim Harford, called How to Make the World Add Up. He's an economist and also a journalist, and he basically gives people ten ways of looking at information that might come your way, especially information that involves statistics or graphs, and all the different ways you can interrogate it.
    So: looking at past trends and those sorts of things, looking at how that information makes you feel, and why you're so willing to accept it or not accept it. It really helps you develop some of that critical literacy towards the information we see through the media and the internet. And so I think people like that, who are trying to help us understand how to interpret the world now, are going to be really interesting for us to follow, as people in the world but also as educators.

    Yeah. The thing for me about the capacity of AI is just how many assumptions it exposes that are quite foundational to social institutions and practices, and that we don't yet have an answer for. Education is just one of them. But that idea of content creation: as it is, the generation of news and media has been in rapid decline for a long time, and there are already clickbait, advertising-dependent sites that just use AI to generate articles. That idea of training people to be critical readers sounds like an ideal, but the mind boggles at the scale of what would need to happen for people to be able to genuinely engage with and identify the different nature of content to that degree, especially in another year, when four or five more generations of chatbots have been developed.

    It is a bit overwhelming, and I really think we're going to have to wait and see the extent to which this does happen. I know at the moment, as you say, it's rapidly improving. But one of the experiments somebody told me to do when I first started playing around with tools like ChatGPT was: go in there and ask it to write a bio for you. I hadn't even thought of it until someone told me to do this, and I thought, okay, I'm a person who has written a few academic articles and things like that that are out on the internet, so it should be able to find a little bit about me. And of course, my first version, ChatGPT-3 or a very much earlier version, wrote back saying: I don't know who this person is. So I gave it a bit more, you know, 'she works at Deakin, and she's an associate professor of blah, blah, blah'. And it said, obviously they work in a very niche area, or they're just not all that well known. And that's where I realized that the data that had been used to train ChatGPT-3 to that point was from 2021-ish or earlier, and so there wasn't anything from 2022, when I actually moved to Deakin. So I'd given it information that, as far as it knew, didn't exist. But of course, with ChatGPT-4 and various other tools now coming on board, their ability has improved incredibly, and when I went back in and asked it to write me a bio, I think only about eight weeks after I tried that first time, it wrote quite an extensive bio. It recognized who I was; it recognized my research area of learning analytics. However, it told me that I'd written two books that I hadn't written. It told me that I'd won a very prestigious ARC grant, which I would love to say I had won, but I didn't. So again, it's looking more confident with the text that it's generating, but a lot of it was just false.

    Well, that's actually a really interesting point, and I suppose I want to test whether my understanding of this particular issue is correct, because for a while the term 'hallucinating' was used to say that ChatGPT hallucinated information, and people who had tried to use it to write essays would talk about how it had made up references. Now, my understanding of ChatGPT in particular is that it is a closed ecosystem: it gets trained on a fixed data set at a point in time, but its function and purpose is not to be factual; it is to generate realistic-sounding language responses. And so, in the cases where people say, oh, it's hallucinated these academic references that don't exist, or it's made up information to add to the biography, that's because the particular tool that is ChatGPT, unlike, say, the one connected to Microsoft Bing, which actually is connected to the internet, is a closed ecosystem whose goal and function is to generate realistic-sounding uses of language, not necessarily factually accurate ones. Am I understanding that correctly? Because I feel like there's a lot of misunderstanding about ChatGPT based on that single piece of information.

    So ChatGPT is a large language model. Basically, all it's trying to do is predict the next word. So you're right, it isn't necessarily designed to be factual. And I guess that's where there's going to be an interesting fork in the road coming pretty soon, and I think we've perhaps even seen the beginnings of it with some of those other tools, as you mentioned, that are now linked to the live internet. There'll be a difference between the traditional ChatGPT, and I'm not saying that will always be the case for ChatGPT, but, you know, those tools that work on a fixed training set, versus the ones that can pull in anything from the internet. Again, coming back to my point from before: when people actually get in there and try to use these different tools, and I've used a number of different tools over the last few months, they're not as good as you think they are. We hear in the media that they can do all of these amazing things, but when you actually start to look at some of the outputs, as I said before, it takes a lot of time to get the output into a state that I think is acceptable. There are a lot of examples I can give. I follow a particular newsletter where they give you prompts to try out each day, and I was trying one the other day where it said, if you want to do some online shopping, you ask a particular tool, I think it was the Bing one, to create a table of the top three, I don't know, noise-cancelling headphones or something like that, and you ask it to give columns of different things, you know, price and quality and so on, and it creates that for you. When I tried to run it myself, asking about something other than noise-cancelling headphones, all it really did was go to one website and summarize some of that information. It didn't go online shopping for me. It just found some text that matched what I was asking it to do and gave me information back. I won't say it was the best information, and certainly I don't think they were that great as deals, to be honest, when I looked. So it's actually a lot more complex than we think. Yes, we need to be mindful of it; yes, we need to be wary of a lot of the ethical issues around it, as we said before. But it's still got a long way to go to give us the quality that would very easily replace a lot of things. You hear a lot in the media about the kinds of jobs that ChatGPT might start to replace, and it's concerning, although I do think some of them perhaps aren't quite as concerning. At the same time, we do need to be really mindful of the sorts of things we ask generative AI to do. And of course, who takes the risk when something goes wrong? You've seen people using generative AI to create legal cases, and then they go to court and they don't win. What do you do? Do you sue ChatGPT because it created this text? Who's responsible for the creation of generative AI outputs? I guess you could argue that you can't hold a generative AI responsible; a human can be held responsible, but not a piece of code.
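
    Linda's description, a model 'trying to predict the next word', can be made concrete with a toy bigram model. The sketch below is an assumption-laden miniature: real large language models use neural networks trained on enormous token corpora, not a word-pair counter, but the objective it illustrates, picking a likely continuation rather than a true one, is the same, and it is why fluent output can still be false.

```python
from collections import Counter, defaultdict

# A tiny training corpus; real models train on vastly more text.
corpus = ("the model predicts the next word "
          "the model learns patterns from text "
          "the next word is chosen because it is likely").split()

# Count which word follows which: a bigram table, the simplest
# possible stand-in for what a large language model learns.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training.

    Nothing here checks whether the continuation is *true*,
    only whether it is *likely*; that gap is where fluent but
    false output ("hallucination") comes from.
    """
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'model' (ties broken by first-seen order)
print(predict_next("next"))  # 'word'
```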

    Yeah, well, we've already seen the ruling out of a US court that content produced by an AI can't be copyrighted, because it's not attributable to an individual human. But I have to wonder, when you mention the range of tools currently available, each of those is the proprietary product of a company, if not an individual, and I imagine there might be efforts on the part of the owners of a model to claim ownership of its output. But similarly, I can imagine that we might eventually see the owners of a model start to be held accountable for its output as well.

    You'll notice on ChatGPT and other tools they actually have disclaimers now that say, you know, this is information generated by this tool, however we do not recommend you rely on it, and there's a whole bunch of legalese starting to come through in some of those tools, which I think is interesting. I think the intellectual property thing is interesting in the sense of: how did they get the material to train these models in the first place? We have a lot of authors and artists and people coming forward saying, I never gave my permission for these large language models to use this information. And yet now someone can go into a generative AI tool and ask for something to be created in the style of, insert name of artist. So yeah, there are some really big issues there from an intellectual property perspective, from an ethical perspective, and from a risk and legal liability perspective. And it's difficult when we look at it as educators, because it's really great to be able to say to our students, well, why don't you go into Midjourney or one of these other tools and explore what this thing might look like if you did it in the style of a particular artist. If you're teaching art, wouldn't it be amazing to say, right, here's a piece, now go and explore what it would look like if Picasso had drawn it versus Monet drawing it, those sorts of things. I created a small piece of description around a unit, and I asked ChatGPT to write it in the form of a musical by Stephen Sondheim. And that was interesting. I tweeted about it, and the next thing I knew it was in the Campus Morning Mail. But anyway, long story short, it was,

    You know, it also provided the sheet music. Yeah.

    That's it. But I also felt a little bit dodgy doing that, because Stephen Sondheim is my favorite musical theater composer, and Stephen Sondheim passed away a couple of years ago now. The material I put in there obviously had nothing to do with theater, and what it gave me was basically a dodgy poem. I'd hate for people to think that that was representative of Stephen Sondheim's work. So again, it's having that knowledge of what the genuine article is like, so that you can compare the output. Anybody who looked at the particular piece of writing that ChatGPT threw up as a Stephen Sondheim-esque piece and knows Stephen Sondheim would know they were poles apart. But I guess if we're teaching young people who don't have that in-depth knowledge of the original genuine article, they may not be able to tell the difference.

    Yeah, and that's, especially in the K to 12 space, an issue with students accessing resources who, to go back to that previous question, simply won't have the experience or the knowledge to be able to tell the genuine article from the imitation, or, in some cases, possibly the deliberate forgery or act of fraud.

    And there is the question of whether generative AI can ever really create anything new, because it's really a predictive model based on what has come before. I know there are different answers to that, and I'm honestly not up with the literature enough to give anything definitive. But there are definitely ways that generative AI can take in a much broader range of data than a human ever could, and it might give different perspectives, but it's still learning on the basis of what we feed it. And of course, one of the biggest issues we have is that we know our data from the past can be very biased, very skewed against particular minority groups and ethnicities. We have a lot of trouble trying to use Midjourney to create images that are representative of what our students look like. Every time I go in and say, give me a picture of a researcher, it gives me a man, quite often a white man. Sometimes now it's not always white, which is good; at least it's getting there to some extent. But every time, I need to specify if I want a female. So yeah, there are lots of those sorts of things that will get better over time, but we have to be really conscious of what we're training these models on. And as I said before, Elon Musk training it on Twitter is really dangerous. A lot of content providers and content owners are now saying, I don't want generative AI to be able to use my material, which means there will be holes in generative AI's knowledge, and some of that material may start to disappear if we start to rely a lot more on generative AI. So it's kind of one for the too-hard basket, because you don't want people's work to be exploited; you want people to get credit for the creativity and the ideas they bring into the world. But at the same time, if people are going to use this as an education tool, you don't want too many holes in there, otherwise you do get a very lopsided view of the world and the outputs and culture and all the rest of it.

    Yeah, well, I suppose that comes back to this question of how, particularly, ChatGPT actually uses a large language model, because when we talk about the data it's trained on, to your understanding, there is obviously a process by which it takes in information. But is that process able to apply a critical eye to the content coming in? Is it able to make any kind of judgment? Or is simple volume enough to skew the way that it responds to things?

    That is a very good question, and if I was a machine learning expert I'd be able to give you the answer. I don't know, but I assume that volume would have an impact. I guess if it's looking at a whole bunch of pieces of work that say 'this is the view of the world', it will eventually believe that that is the view of the world. I mean, 'believe' is not the right term, but yeah. So I think you're right: we do need to think very carefully about what we feed it. We know that tools of the past, I wouldn't call them generative AI, but other AI tools, especially tools used for things like image recognition and CCTV, were very biased towards people of particular races, because of the material they were fed to learn from. We find that, yes, researchers are males, because a lot of the earlier AIs were only trained on pictures of males. That's a legacy of how these things have been built over time, the basis of the AI as it's grown. But, again trying to bring it back to a positive, the really good thing is that we're aware of that a lot more now. When I see people writing consultation papers for the government, or universities trying to come up with guides of good practice and things like that, we're calling out that sort of stuff a lot more. We haven't fixed it yet; I don't know whether we'll ever fix it. There are always going to be biases of one sort or another in the world. But if we can at least recognize that they may exist, and then try to mitigate against them as best we can, that's going to be a positive thing going forward. But as you said before, how we develop that skill in our students is the thing we need to start on from day one, when they're coming to school. And I think we already do that: how do we build our children, slash young adults, to be critical but creative members of society?
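
    As a back-of-the-envelope illustration of that volume point, here is a minimal Python sketch, using invented numbers rather than any real dataset, of how a purely frequency-driven generator reproduces whatever imbalance dominates its training data. Note that no step in it applies a critical eye; volume is the only signal.

```python
import random
from collections import Counter

# An invented, deliberately skewed "training set": 90 captions
# describe researchers as men, 10 as women. Not a real dataset,
# just the historical imbalance in miniature.
training_captions = ["a male researcher"] * 90 + ["a female researcher"] * 10
counts = Counter(training_captions)

def generate_caption():
    """Sample a caption in proportion to its training frequency.

    There is no judgment step anywhere here: sheer volume alone
    decides what the generator most often produces.
    """
    return random.choices(list(counts), weights=list(counts.values()))[0]

outputs = Counter(generate_caption() for _ in range(1000))
print(outputs)  # roughly 9:1 male to female, mirroring the input skew
```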

    It also makes me think that maybe the immediate future, particularly of generative AI in education, will be in dedicated AI systems that have very carefully curated data sets to train on. I was quite a science fiction nerd growing up, and there was a concept in a lot of near-future science fiction of expert systems: not true AI, but systems trained to be highly specialized in a particular function. In the space of education, I can imagine that it won't be long before we're seeing an expert system AI for science, or even for particular fields of science, for biology, for physics. And similarly, I know there have been some experiments in feeding an AI the work of a particular author and then creating a chatbot that's meant to be like interacting with that author, for example. You mentioned at the beginning of this interview that we have had types of AI before, particularly in the fields of algorithmic learning and supports for learning in that way. How much of a gap do you think it would be to jump from that kind of algorithmic learning that existed before to more of a specialized AI that might focus on a subject?

    So I think, to some extent, that exists and has existed for a little while in the form of intelligent tutoring systems. If the educational institution we're talking about has a lot of money, we see these used a lot in K to 12, and we also see them used in higher ed as well. There are very specialist systems that are very intelligent but very targeted at teaching certain areas. As you say, it could be part of a discipline, it could be a particular skill. A lot of that revolves around a certain body of knowledge you want to expose the students to, and also what we call a student model. The student model tries to take into consideration different attributes of a particular student, and then the model, the system, the AI if you want to put it that way, will adapt the learning. So if a student is really struggling, it might give the student more questions to answer, or a different way of looking at things. I have a friend at the University of Melbourne who did an amazing PhD, probably about ten years ago now, where he created a chatbot to help students who were learning first-year programming in Java. The chatbot could talk to the student, it could recommend questions, and it also helped to match students who were struggling with students who were doing really well, getting the expert student to help the novice student as well. So it wasn't just the bot doing the teaching; it was also matching ideal students to each other, because of course the other great thing about the expert student teaching the novice student is that it helps to reinforce the expert's knowledge as well as helping the other student improve. So some of that already exists. How generative AI comes into this, and whether generative AI could be used to help create even more adaptations and personalizations for students, I think that's a really interesting and exciting space. And as long as we've got some sort of oversight to make sure it isn't generating things that aren't true, that's a really, really interesting space we could be in. I saw an article, I think from one of the American universities, I always forget who works where at the minute, where they have been doing some studies on educational video, actually using generative AI to create text summaries of longer educational videos so that students have a smaller summary to digest, and then looking at whether or not that's an effective way of learning. I'd actually question why the video needed to be that long in the first place, but that's an interesting question in multimedia learning. So again, there are lots of different ways we can use generative AI. One of the things it can do is give you feedback, as I mentioned earlier with the student in the UK. In some ways it can be like a formative tool for students to use to check their progress, where a teacher may not have the time to do a long-form assessment, or even a formative assessment with a lot of feedback. So yes, there are lots of possibilities, lots of really cool ways it could be used. But you're right: the leap from early artificial intelligence, to intelligent tutoring systems, to predictive analytics, through to generative AI, it's definitely on its way.
    And it's going to be really interesting to see what comes up in the next few years, because I think there'll be some really innovative and really helpful ways we can do it. So it's all about balancing the risk from the people who want to do bad things with these tools against the real benefits we could get for education.
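
    The 'student model' idea can be sketched in a few lines. The Python below keeps a single mastery estimate and adapts the next question to it, including flagging when a struggling student might be paired with an expert peer, loosely in the spirit of the Java tutoring chatbot described above. The update rule, thresholds, and labels are all invented for illustration; real intelligent tutoring systems model many skills with much richer statistics.

```python
class StudentModel:
    """A one-skill caricature of an intelligent tutoring system's
    student model: track an estimate of mastery and adapt the next
    question to it. The numbers here are invented for illustration."""

    def __init__(self):
        self.mastery = 0.5  # estimated probability the skill is mastered

    def update(self, answered_correctly):
        # Nudge the estimate toward 1 on a correct answer, toward 0 otherwise.
        target = 1.0 if answered_correctly else 0.0
        self.mastery += 0.3 * (target - self.mastery)

    def next_step(self):
        # Struggling students get easier material (or a peer tutor);
        # strong students get stretched, or tutor a novice themselves.
        if self.mastery < 0.4:
            return "easier question; consider pairing with an expert peer"
        if self.mastery < 0.7:
            return "medium question"
        return "harder question; could tutor a struggling peer"

model = StudentModel()
for correct in [False, False, True, True, True]:
    model.update(correct)
    print(f"mastery={model.mastery:.2f} -> {model.next_step()}")
```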

    So, Linda, when it comes to thinking particularly about the next few years of AI in education, in terms of technology, in terms of policy change, and in terms of how it might change our approach to education, what are the things that are front of mind for you, that teachers, education systems, and policymakers should really be focusing on as this technology continues to develop and become more widespread?

    I'd say the really important thing is for all of those groups you just mentioned, educators, policymakers, management, all of those different areas, to just get in there, play around, and understand what it is. There's a lot out there in the media saying 'this is what generative AI is', but it's only when you get your hands dirty and play around with it that you see what it can generate. I think it's really great that we're sharing lots of examples of good practice amongst educators at all different levels of education. But really, really understand what it is, what it does, and where it's going, and think about how we use it as part of our learning and teaching. Then, yes, we do need to think about assessment and things like that, but don't go straight to assessment and academic integrity issues first. Think about the technology itself and the opportunities it has, and then we can work out how to change the culture so that our students benefit from this fantastic advance of science and technology.

    Well, I do hope that we might get the chance to catch up again in the future, look back on whether that has indeed been the approach people have taken, and consider the state we'll be in at some point in the next couple of years. Linda, thank you for a fascinating, very far-ranging conversation on the topic of generative AI in education. I will make sure there's a link to your profile in the show notes for anybody who'd like to read more of your work, get in contact, or read more about the Australasian Journal of Educational Technology. But once again, thank you very much for your time.

    Thank you. It's been great to chat about all this stuff. And yeah, let's see what the future brings.