TER 233 Feature - A Whole-School Approach to AI with Michelle Dennis

    2:04PM Nov 8, 2023

    Speakers:

    Cameron Malcher

    Michelle Dennis

    Keywords:

    ai

    students

    teachers

    school

    tools

    put

    assessment

    generative

    idea

    foundational skills

    dangers

    task

    conversation

    principle

    language

    technology

    work

    discussions

    people

    write

    Joining me now is the Head of Digital from Haileybury in Victoria, Michelle Dennis. Michelle, welcome to the podcast.

    Thank you so much for having me.

    So, your presentation at the ACL conference was about creating a whole-school approach to AI, which I'm sure is a topic that a lot of schools will be trying to figure out how to grapple with in the coming months and years. But before we get into that aspect of it, can we just take a step back and talk about how you see the current state of AI, in terms of both its potential but also some of its challenges for educational practice?

    Well, the interesting thing is that the current state is movable. A lot of the time when we're building educational policies, what we're looking at is a fairly stable idea or situation that we're dealing with. When it comes to AI, this is changing on a day-by-day, week-by-week basis. At the moment, in the week of this recording, we've had issues like the writers' strike: just literally yesterday they signed an agreement saying that AI would not be able to be used without the consent of the writers, so protecting writers' rights. We've had ChatGPT adding a vision feature, Midjourney is updating on a weekly basis, Firefly is available for schools to use. So we are in a time where there are a lot of changes happening really, really quickly, some of which are tech changes (the difference between last year and this year is absolutely massive), but also policy and legal changes as well. So the current state of things from an educational standpoint is that we need to be outwardly looking and learning, in order to translate that and try to start predicting how we can best support our students for this changeable world. I do think that we will see more and more jobs changing because of AI. The Writers Guild is one example, and obviously the actors' guild as well. But beyond that, if you even just look at Photoshop: Firefly, which is the AI within it, is just another tool inside it. You select a background, tell it what you want the background to be, and the photo background just changes. Microsoft is bringing it in with Copilot. So it's not going to be like a ChatGPT website that we can just ban and pretend doesn't exist. In the tools that we interact with on a day-by-day basis, we're going to see more and more generative AI popping up in our day-to-day experience, which means that it's going to be harder to segment it and treat it separately from the rest of our practice.

    Yeah, I mean, it's fascinating to look at. You mentioned Microsoft before: they own LinkedIn, and they've already built tools into that where, if you start to make a post, you can just click a button that will have an AI tool generate a post for you on a topic. So far I've not seen it generate anything I'd want to post in my name, but it certainly seems very popular with certain sectors. That idea of integration, I suppose: once we do start to see that kind of generative AI built into, let's say, Microsoft Word, it's entirely likely that Word and Google Docs will soon have the equivalent of ChatGPT, or their version of it, functioning in the background. What do you see that offering to school practice? And let's keep in mind, we are recording this at the end of September; by the time this episode goes online, who knows, as you said, what the change will be. But given that the current response in school systems has largely started with a ban, what do you see as the potential for, you know, Microsoft Word with a chatbot built in that can co-write, co-design, and research for you?

    Since generative AI started to make its appearance, I've had some very robust discussions within my team. I have a team member who is an English teacher and a drama teacher, who very much believes in the power and the authenticity of text, the power of the author, and the importance of being intentional about what you write. And then you have me: I focus on ideas, and on tools that let you shine a light on those ideas, put them into action, and articulate them better. Those kinds of tools, for me, I've seen as immensely powerful. So we've had a lot of very robust debates, and we've both moved a little bit on our spectrum as we've discussed it. I do think we're in a time where we're going to have to re-look at what these tools can do. The benefit is that they can give a voice to people who otherwise might not have had one. I've spoken to people from different backgrounds who say: I can't express myself the same way as someone who's had a different educational background; my resumés and my cover letters have already been sorted before I even get to the interview and get a chance to make my case. Tools like this can help with that. With Excel, you'll be able to write plain-language questions to query the data. You won't need to know the formulas; you'll be able to say, oh, I know the answer to this question. So again, we see that democratisation of information, where rather than necessarily knowing the right formula, it's about knowing the right question, and putting these tools into the hands of the people who might be the ones who need the answers more, or know the better questions to ask, because they're the ones dealing with the face-to-face class or the situation they're trying to problem-solve. I think that's why we saw ChatGPT go viral so quickly: you don't need a special degree, and for all that talk about prompt training and prompt engineering, you really don't need much to get started with it. A few tips will help you do it better, but you don't need a degree, you don't need a specialist qualification. So people saw something that they knew they could use, and they didn't need to go through any of those barriers to use it. Now, as an educator I can talk to both sides of the debate. There are dangers to that: sometimes knowing the right formula means that you also know the problems with the formula, like whether it includes outliers in its calculations; or the complexity of the question itself might mean that AI translates it more simply than perhaps it should. Is it accounting for where people are coming from, or for what a student's suburb is? However, even with those dangers, even with the dangers of potentially giving similar words to more people and allowing people to express their thoughts in a more consistent way (and there are dangers with that), there are also benefits, and it can change the way we have our conversations. And certainly with our students, it can mean that students who were being left behind have tools they can use to help translate and be part of the story. We need, along the way, to make sure that we start identifying those foundational skills that we think will help you ask the better questions, help you be able to think critically, help you understand enough that AI is a tool you are using with enough knowledge to use it wisely, and not let AI replace those foundational skills.
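    As a rough illustration of the plain-language data querying Michelle describes here, the sketch below has a language model translate an English question into a pandas expression over a small table. It assumes the openai Python package and an API key are available; the model name gpt-4o-mini is an arbitrary stand-in, and evaluating model-written code is exactly the kind of step her critical-thinking caution applies to.

        # Sketch: plain-English question -> pandas expression via an LLM.
        # Assumes: `pip install openai pandas` and OPENAI_API_KEY set.
        import pandas as pd
        from openai import OpenAI

        client = OpenAI()

        sales = pd.DataFrame({
            "region": ["North", "South", "North", "East"],
            "units": [120, 85, 60, 150],
        })

        question = "Which region sold the most units in total?"
        prompt = (
            "A pandas DataFrame named `sales` has columns "
            f"{list(sales.columns)}. Reply with only a single pandas "
            f"expression that answers: {question}"
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice, not prescribed
            messages=[{"role": "user", "content": prompt}],
        )
        expression = response.choices[0].message.content.strip().strip("`")

        # The expression is model-generated: read it before trusting or
        # running it, which is where the critical-thinking habit comes in.
        print(expression)
        print(eval(expression, {"sales": sales, "pd": pd}))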

    Well, that gets to a key issue on the topic of assessment. And, you know, I'm an English and drama teacher myself, so I can hear where your teammate is coming from, not so much because of there necessarily being an issue with the authenticity of text, but: where are your discussions with your team at your school on topics like this? Consider that most school subjects, in their syllabus documents at the K-12 level (particularly K-6, but also 7-12), have within them learning the correct use of subject-specific language and vocabulary, and suddenly AI is doing that for students in a way that means they may not actually be engaging with and using the correct language themselves, as the curriculum requires. The way it gets described by other people I've spoken to has been as an issue of authenticating assessment, or whether it constitutes plagiarism or not. So I'm just curious where your thinking and your conversations are currently at on that potential clash between what AI can do for students, and the times when it is actually replacing the intended learning outcome of a curriculum document.

    One of the things I'm trying to do is build in intentionality. If we pretend it doesn't exist, the kids are going to be using it anyway. You need to know it's out there and design your assessment tasks with the idea that it's out there in mind. And when you're intentional about what you're assessing and why you're assessing it, you start going: okay, in this case, I need to explicitly design my tasks so that those foundational skills are learned. Or: in this task, that's less important to me, because I've already assessed it earlier; for this task, what I want to assess is instead this level of thinking, or I want them to learn how to unpack things like ChatGPT, which can come up with inaccurate information; I want them to be able to think critically and ask questions about the material they've been given. So I do think it is a balanced diet. It's not about AI being used in every task, and it's not about AI being completely bad. It is actually about being intentional, as teachers but also as assessors, about what skills we are targeting and how we are designing the assessment tasks to pull those out. Now, I will say, as someone whose teaching methods include media, I've been playing around with AI-generated images for a while now, and I've found that my vocab has gotten better since starting to use it. Because the more accurate my vocab is, the more I talk about art history in my prompts, the better the results are. So it's actually made my use of language more intentional and more subject-specific, and it's given me a better understanding as I've worked with AI to get better results. Most of the time, if you're just treating, let's say, ChatGPT as rubbish in, rubbish out (I'll just put something in, I'll take whatever it puts out), that kind of cheating is very easy for a teacher to detect; we don't really need a machine, because it often doesn't look like the student's writing, or it misses the core purpose, or it will be bland. And if you look at an assessment task and that's the kind of response you get, it's not necessarily a good assessment task either. So for me, as we're building our assessment tasks with AI in mind, there can actually be better learning outcomes. The assumption that AI, simply by existing, will detract from the use of language is, I'm finding, not always true in practice if it's used well. I'm talking about a world where the teacher says to students: with this assessment task, you will be allowed to use AI; however, we need to look for bias, we need to question whether it's accurate, you need to do the research to back up what it says, and you need to look at the language, because it needs to include these words. So designing it going: one of the things I want them to develop is that subject-specific language, so they need to edit the answers to include those words, and they need to be able to discuss with me afterwards, or discuss with the class, what those words mean, or unpack them. Actually building what's important into the task. A lot of the time, we just kind of assume that will happen when a student's writing their response, rather than explicitly saying to them: this is what I need; this is the kind of language I mean when I'm talking about subject-specific language; this is the glossary you should be referring to.

    A lot of what you've described there, about the ways that engaging with AI may have actually helped improve vocabulary and subject language in a classroom context, conjures for me images of students engaging with AI as a regular part of their learning practice in some subjects, as opposed to it being something they might tap into come assessment time, or when projects are complete. So I suppose this really does take us to the topic of a whole-school approach to AI. And again, at the time we are recording, the current state of things is that the federal government is working on a nationwide set of principles for incorporating AI into education, and a number of states started with blanket bans on students accessing it at school: some still have them, some don't. With the work that you're doing to foster and develop a whole-school approach to AI, what are the current driving principles that you're working towards at the moment?

    So when we first came out with our policy, day one, term one, it was a very specific policy about a very specific technology, and looking at the rapid pace of change, I knew that it was not a long-term plan. It was an interim "we need to have something out there." And it was guided by the technology rules at the time as well, because I do think we have an ethical as well as a legal obligation to comply with the terms and conditions of the products we use and to respect student privacy. I think that's really important and can't be forgotten. But I knew that that policy, being so specific, had a shelf life of, ideally, a term, but less than that in reality. So when it came time to come up with a policy that I hoped could steer us through change, I went to one that, first of all, is principle-driven. It isn't about specific technologies; it's about what we as a school care about, so that as technology changes, we have those principles explicitly there to make our decisions faster. The first one, and for me actually the most important one, is critical thinking and ethics. Because we are about to enter a time where students are going to be voters; we know they get a lot of their news from social media and that they don't go to traditional news sources, and generative AI might make it harder to separate fact from fiction. And we know that AI can also be biased, because it's based on the information that goes into it. So we need to be really explicit about training our students in how to approach AI with that critical-thinking mindset and that questioning. It will not only prepare them for that future (there are many problems with that future, but I think it is a reality we might have to deal with very, very soon, if we're not already dealing with it now); it's also a valuable skill beyond AI, something we should be doing already. Ethics as well: our students are going to have to make decisions and advocate for themselves about what they want the future to be. Do we want a world where, if an actor dies in the middle of a film, we can use deepfakes to replace them? Is that ethically okay? As a society, we need to work out if we're okay with ringing Telstra and dealing with an AI that sounds very, very human and acts like a human, and not being able to tell the difference. And that's where, you know, we have the European Union, for example, coming up with ideas about what should and shouldn't be allowed. I want my students and my teachers to be able to advocate for what society says is okay or not with these new situations we haven't considered before. So that's my first principle, Haileybury's first principle, and I genuinely think it is the most important one. The next one is privacy and security. I've alluded to that earlier: we're making decisions about what's happening with student data. We know that when we use AI, we're training it; anything you put into an AI is adding to the machine. So we need to be careful about what information goes into it, and we need to be careful about when students are old enough to make that decision, and when we're making the decision on their behalf. We're not in a perfect world; sometimes we have to weigh up opportunity versus risk, and opportunity versus cost.

    And sometimes we go: that's worthwhile; and sometimes it's not. And we have to be explicit about that. So any tool we use at Haileybury has to go through our procedure for approval. It goes through me: I look for the educational use case. Is it worth it? Because the more tools we use, the more difficult it is for students to navigate their day-to-day schooling; there is a bit of a bonsai approach. Why use five generative video tools when one will do it well, and potentially minimise how much stuff is out there? Then the next person it goes through is the head of IT, who asks: is there a security risk? What happens to the data? Where does it live? And then risk and compliance, who look at the privacy: does it comply with our obligations as a school, and with what we promised our parents about the way we make those decisions? Now, I'm okay with teachers going out there and trying things; they just can't do it with student data. I want them to go out and explore, not with student data, and if they find something that's good, they bring it to us and we go: okay, let's look at it through our lenses. Can this be used with students? It comes out at the end with a yes or no. And we either build it into our programs, add it to our list so other teachers can use it, and add it to our training; or we say no, and I try to always go: this is why not for this one, but tell me what your educational case is, and I'll try to find you a way of achieving your educational needs in another way. So security and privacy is core to what's really important to us as a school. But I also think every school needs to do that; I genuinely think that, as educational leaders, we need to be making good decisions on behalf of the students in our care. The next thing (we've got five, I apologise) is creative uses. At Haileybury, we like to think of ourselves as an innovative school; we want to create the very best learning environment for our students, and so as a school we're always looking at how we can do things better. But we also want our students to do that. We don't want our students to go: I only do things the same way I did last year, or the year before, or the way that my teachers have always shown me to, because we know that's not the best long-term outcome for them, and we need to prepare them for a changing world where you need to adapt, and sometimes you need to change your ways. So with creative uses, we do want to highlight to students that there are creative uses for new technologies. We've had, for example, an ex-student working as an intern with our Latin teacher, collaborating on an OpenAI project together and looking at how they can use AI to bring about new outcomes. And I do think this is one of those whole-school things that's really important: you need to embrace the fact that there is change in the world.

    Can I just ask you about that? How does the school go about balancing a desire to be innovative, always changing and adapting, against obligations to curriculum documents that have a much slower cycle of adaptation?

    It's always a challenge. I do think, though, that they're not incompatible. The way I see it is that you can put guardrails in place to provide safety, to provide certainty that we're not missing those core skills, which is actually the next pillar: making sure that we do keep those core skills, a core curriculum, that we prepare our students and we meet our requirements. But the interesting thing I have found is that when you are open to change and open to challenge, often (not always, but often) it's not only achievable, it can actually help the traditional successes as well. So our middle school head of humanities went: okay, we need to look at our assessments differently if AI is available; how do we make sure that students are actually doing it themselves? And we know that one thing AI, as of recording, is not great at is footnoting and research; it will in fact make up about 75% of its quotes, depending on which one you use.

    It's funny you should mention that; it's something that's come up in a couple of conversations I've had previously. And my understanding is that it seems to be a bit of an issue between an understanding of what some of these large language models actually are and are intended to do, versus perhaps the misconceptions people have about what they're aiming to do. Do you have a process, in your assessment of AI tools, for clarifying the parameters and the realistic expectations of a tool?

    So what I've been trying to do with our teachers is educate them. We hire our English teachers, our drama teachers, our art teachers for their amazing skills in their discipline and their ability to reach students. They aren't computer scientists, and so expecting them to keep up by themselves and to understand what's happening behind the box is not fair. So on the first staff day of term three, I met with the whole school to start that conversation about what AI language models are, to talk to them a bit about why generative AI will produce hallucinations, what hallucinations are and why they're there, and about the idea of reasoning and that concept of what AI actually is. I've also got an online course to support our teachers in that as well, to try to keep that continuous discussion going. So it is something we're working on with our teachers, because they need to know it if they're going to be helping students understand it as well. However, it is a challenging idea, and it is incredibly abstract if you think about it. That's one of the reasons I actually would question the use of generative AI at too low a student age. It's hard enough for adults to get their heads around; we know people are already using AI to get feedback on mental health issues.
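    As a toy illustration of the hallucination point Michelle raises with staff, the sketch below builds a tiny bigram model that only knows which word tends to follow which in its training text, then samples plausible next words. Real systems are far more sophisticated transformers, but the generate-the-likely-continuation principle, and the fluent yet ungrounded output it can produce, is the same in spirit.

        # Toy next-word model: fluent output without any notion of truth.
        import random
        from collections import defaultdict

        corpus = (
            "the school uses ai tools . "
            "the school reviews ai tools for privacy . "
            "teachers review student work . "
            "ai tools review student privacy ."
        ).split()

        # Record which words follow which (a bigram model).
        following = defaultdict(list)
        for current_word, next_word in zip(corpus, corpus[1:]):
            following[current_word].append(next_word)

        # Generate by repeatedly sampling a statistically likely next word.
        word, output = "the", ["the"]
        for _ in range(8):
            word = random.choice(following[word])
            output.append(word)
            if word == ".":
                break

        # A run can yield "the school uses ai tools review student privacy ." --
        # grammatical-sounding, yet a sentence that exists nowhere in the
        # corpus. That is a hallucination in miniature.
        print(" ".join(output))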

    Yes, and I noticed that, you know, a couple of months before recording this, ChatGPT significantly restricted its ability to respond to questions about mental health, because people were perhaps using it in inappropriate ways.

    Absolutely. But if you use it in a non-critical sense (and this is where the critical thinking comes in) and take it at a surface level, sometimes it sounds very valid. If you wrap something up in beautiful writing, you can make anything sound convincing, and that is absolutely one of the dangers of AI. Again, we need to talk about it; we need students to experience it and see it. Showing them examples like that, which make the bias more obvious, and discussing it as a class (why is it coming up with this?) is really, really important. But we do need to train our teachers along the way, and schools, in an ideal world. And I know that I am coming from a very privileged school, which has someone like me, who has a computer science degree and is an educational expert; that is absolutely not something every school has. Which is why I try to do things like this, because I do want to make myself available as a resource to as many schools as possible. But it is something on a system level that we really need to work at, in terms of putting time aside to train teachers and allow them the space to explore and question, and to themselves develop these critical-thinking skills and ask those deep ethical questions. The hours of debate that I've been having within my team have shaped my thinking so much, and have been such a valuable part of my PD, more so than any reading. Being questioned and being challenged is a great way to help you see all the different sides of the story. What we need to do is look at how we can do this on the system level, where teachers are already very, very busy, very overwhelmed, and this is not necessarily their discipline area. I've been doing it on a system level within my school, four campuses in Melbourne, trying to give a thousand staff access to those resources and start that debate and start that questioning. But the bigger you are, the harder that gets in some ways. So if we look at the whole of a state, we do need to look at how we can create the space for these conversations. And I have ideas about how that could be done: I think we need to look at getting more and more people activated, and giving more and more people within different communities the access and space to talk about it within their communities. There are ways this can be done through distributed leadership. And for me, from a democratic perspective, I actually think it's vitally important, in a societal sense, that we don't do what we did with social media, which was form a social contract going: we're okay with giving away privacy in return for connectedness. If we as a society are going to make sure that AI does the least amount of harm (I won't say doesn't harm, but does the least amount of harm) to what we see as important, we need as many people as possible understanding it, so that they can have rich discussions about what it can do and make easier, what power it can unleash for good, and how we as a society can draw those barriers and go: okay, these are the directions where we want government to step in; this is where we as an organisation will go on strike, or draw the line in the sand and say this is not okay.

    Yes. And another key question that brings up, which I realise is a little bit out of the scope of this conversation, is that simple access is still such a major issue. Coming out of the pandemic, we all learned a very hard lesson about some of the assumptions that are made about different groups of society having access to technology, and this has the potential to once again accelerate that disparity, because already AI is very much a pay-to-play space, which privileges the already privileged.

    Absolutely. I think my problem is, we already have that entrenched disadvantage, which is a problem in itself. My worry is that we're doubling down on it by taking whole blocks and groups of people and saying: no, you just can't touch it, and restricting their use artificially. So we're taking a disadvantage that's already there and applying it to something that could potentially be used to level some of those differences. I think AI absolutely could be used to help teachers differentiate a lot better. It could be used in a way that helps us reach some of those students whom teachers really struggle to reach. There are things we can do: look at Microsoft's Reading Coach, and its ability to have each kid read to the AI and get individual feedback. As a teacher in a primary school, doing that for 20 kids...

    They do it, but they lose time, yeah.

    So there's a real cost to it. There are real things we can do with AI to try to also reduce that gap. I'm not trying to minimise the access gap; I'm just worried that we're doubling down on it and creating a second gap. I'm at an independent school, where someone like me has the ability to work with my leadership team and a very forward-thinking principal to make our own balanced approach, where we have our safety guidelines there, we've thought about the risks, and we're managing those risks, but we are still giving our students the ability to interact and ask questions and learn. And then they go out into a world where AI might be changing jobs, and they go: oh, I can see what's going to happen, I'm going to move in that direction already; and they get the promotion over someone who's never seen it before and is in a reactive space. So I guess, as well as a digital divide, I worry about a second divide: an AI divide in terms of access. And that really worries me.

    Yes, it'll certainly be an interesting one to see how that's addressed in public policy, particularly in education. But sorry, we sort of diverted from the five principles of your school-wide approach. You discussed safety and privacy, you discussed creative uses, and we started to touch on number three.

    We did critical thinking, and creative uses, yes. And so the next one, which again is all-important with AI, is those foundational skills. I've called it that explicitly, because when people see AI, they're really worried about it; one of the first reactions is: well, if we don't write anymore, will this be the death of writing? Will this be the death of creative thinking? If you can have AI create artwork, why be creative? I know, though, through my use of AI, that being an expert makes me use AI better. So I actually think there are core foundational skills that we build on. Again, that comes down to what I was talking about earlier: being intentional in your use of AI, when it should and should not be used, and being explicit with students: for this task, I want you to write it without using AI, and I'll use a lockdown browser to make sure you're not accessing those tools, because I want to make sure you're practising this skill set, and I'm preparing you for that assessment along the way, and you know that's coming. You're only going to harm yourself if you use AI while we're practising, because in the assessment you will not be able to use it as a crutch. So the reason that principle is there is to build in that intentionality, but to put it in a way that I can explain to parents and students as well, because our policy and our principles aren't just for teachers; they're for the whole school community. I needed language that I knew a parent would understand: they understand that you need to be able to add, and even if AI and a calculator can do it, that doesn't take away the importance of needing to be able to add. The final principle, which for me is actually covered by the other four but needed to be addressed, is academic integrity. And the reason for that is that every teacher has that question, every parent has that question, every student has that question: how will you know? I look at university and how important it is that students don't plagiarise. We've always had students who cheat; we've always had students who copy things from the web, or get someone at home to help, and not all of it is detectable, just like AI is difficult to detect.

    Well, the contract cheating industry in higher education is massive.

    Exactly. And we know that part of being an ethical person, a virtuous person, is that you're authentic about what you put out there. So if you use AI, you need to reference it, and we've provided guidelines on how to do that. We want that transparency we have with our teachers to be there for our students as well. I've created some graphics for our teachers to use, a Canva banner to put on assessments to say: is AI allowed to be used here or not? That's one of the ways we're ensuring it. And people will look at it and say: you say there's an AI detector; we use Turnitin. I won't speak on a podcast about how accurate it is; I will say AI is difficult to detect.

    I'm happy to step in here and say that, you know, it's been made quite clear that AI detection is far behind the capacity of AI itself, and from what I've seen, the addition of attempts to detect AI has actually significantly increased the rate of false accusations: basically, human writing is increasingly being flagged as AI-written. It's a massive problem.

    We still have Turnitin there, but I have always, from the very beginning, even before it launched, said that I would never accuse a kid of plagiarism with just one thing. You need a portfolio, and I know that's workload, but unfortunately that's the reality: either you design the task so that access is completely impossible, or you build it in, saying you need to put references in, and you must include paywalled references that you can only access through the school library, things like that, that wouldn't be easily accessible by an AI large language model. And then, on top of that, having Turnitin there helps make it visible that this is something we care about, without it being the thing you would ever take to a parent and a student to say: you are plagiarising. I would use it, potentially, to go: is there something I might have missed? Have a look at it and just go: maybe I need to look at this one again; maybe I need to ask the kid a few questions, like you do with a PhD, where you need to be able to talk to what you've written. But I do think its use within our school is not about it being the proof; it's more flagging that this is something we do care about, and that we're using different methods to look at it. It would not be the only thing I would go to a student with and say: I don't think this is you. What I do think is that teachers know the voice of their students, and so for the most part I find that most of our teachers come to me going: this wasn't written by the student; I just know. At which point we've got lots of tasks we can use, produced in more test-like conditions (you don't need to do them often), so that the portfolio is there to go: well, this is the student's voice to compare against if you're ever questioning. Those benchmarking opportunities can be really useful, which means you can step away from making every task a benchmarking opportunity. So we do prepare our students for exams; again, something on LinkedIn where I got a little bit of criticism was that we do sometimes do handwritten assessments. The VCE exams are handwritten; you need to prepare your students for success, and we want our students to be able to get into university, so some of our assessments are handwritten. I will stand by that: it's part of us preparing them, part of us doing our job as an institution. But I do think those tasks, whether it's a lockdown browser or a handwritten task, become a really useful thing to look at later, to go: okay, in between those, I can step back and give them a little bit more freedom, because I've got those opportunities to moderate against.

    So how have the community and the staff at the school responded to this approach? How is it actually being rolled out school-wide?

    It's interesting, because a lot of teachers have been champing at the bit, in areas I wasn't expecting. I wasn't expecting humanities to be one of the ones going: oh, this is interesting. But our middle school humanities department actually said that they were getting better results after they designed the task with the knowledge of AI being around, because students were actually doing the research. We know that student research, particularly at the Year 7 and 8 level, can be: I'll put the bibliography at the end, and there we go, done. But forcing them to use footnotes, or forcing them to use specific resources to inform their research, saying to them, these are the first-hand accounts you need to look at, actually meant that they were questioning and going into more depth with that research. So Gabriella Welsh, who is head of humanities in the middle school, was saying they were getting better outcomes. I'm working with her on a task where the students use AI to interview a historical figure, but then they have to pull that apart and footnote it and research it and check it for accuracy and question it. The idea with those kinds of tools is not that AI is the end of the assessment; it's actually the starting place for their research journey. And along the way, knowing the kind of materials they'll get, they will uncover things that will make the downsides and the dangers of using AI very obvious. So it's a tool that's going to make them better historical researchers and help them ask better questions, but it will also educate them about the dangers and challenges that using AI as an information source brings with it.

    And I'm curious: what about your English and drama teaching colleagues who had concerns at the beginning? How are they engaging with AI at the moment?

    Look, I've pushed him a little bit further. Our English department is exceptional; the students produce excellent writing. They are absolutely questioning it, and they should. But I think what they're also aware of is that it is something that is there. Shawn Collins is the member of my team who's the drama and English teacher, and he's posted about this on LinkedIn; we're very open about our robust discussions. I think the rest of the team is getting a little bit sick of them: every time we have a coffee, there's a little bit of a battle. I don't speak for him, but I do think he's got genuine concerns about, you know, is it okay to have AI write something; he thinks that everything should be written by a person. We've had some debates about: really, every email? Have you seen how many emails I have to write a day? Which of those require a personal, carefully worded response that has my sense of self in it, and in how many do I want to take that self out, because I'm responding as an organisation, not as an individual? So where is the place of AI in that world? I genuinely see his concerns about the loss of humanity in your writing; I think it's a genuine one. But I also think, when we're looking at a world where teachers are being crushed by workload, that there are some things only a human can do, and if I can use AI to focus teachers' jobs on the things we love doing, the things only we can do, forging those connections, motivating students, then that serves the students. If I can focus them on engaging with the text and learning the things I want them to learn; if I can use AI so that, for example, students generate questions for themselves to answer, building their own study tools, or use AI to help them synthesise large pieces of information, and question that along the way: I think we need to be careful about making broad generalisations about whether it's good or not. It is something more nuanced than that, and we're going to be in a time where there are those shades of grey about it. I wonder if we will enter a time where, if you use AI, you must always acknowledge it. So if an email is generated by AI, will the EU, for example, put in regulation saying that you notify people: you are interacting with AI; I did not write this, but I stand beside it? Certainly when you look at the Writers Guild agreement, that's the direction they're going in: writers can use AI, but they can't be told to use AI by the production company, and it has to be disclosed if AI was used in any materials given to them. So AI-generated scripts can't just be handed to a writer to clean up without disclosure, for example. I don't think there are going to be easy answers. I'm also, though, someone who sees my role as preparing students for the real world, rather than the world I might like it to be. And so I don't think that clinging fully to the authenticity and power of the human word is going to prepare them as well as having that discussion with them, as well as showing them what can be done.

    Yeah, I think that question of preparing for the real world is also an interesting challenge, and we're still not sure what impact all this will have on the world of work. It's interesting hearing a lot of the conversation about ethics; we've had a lot of talk here at this conference about virtues in education and ethical conduct, when there are many times when they might be in conflict with how the real world implements some of these technologies in the workforce. But that's possibly a whole separate conversation.

    Probably. The thing with AI is that it's very multifaceted, and we could probably spend a week trying to unpack it, because there aren't set answers. But I do think that when I talk about preparing students for the real world, I'm also talking about preparing them to have those ethical conversations. The fact that we don't know what the answers will be yet makes it even more important to me to be really transparent with them about what the tools can do and what my questions are. So when I'm using Adobe Firefly with students, I'm very, very upfront with them about how it has been trained, and about the questions people have about image generation and what that means for people who make their living being creatives. So I understand what you're saying, and I don't disagree. But I think it means it's even more important that we have those messy conversations with students, ones where we're not telling them what to think; we're just exposing them to the different points of view on the same issue.

    We'll take a pause at this moment to acknowledge that we've had to continue this conversation on the next day of the conference, because we've just had so much fun and so much to talk about that it didn't fit within our scheduled time. So, after thinking about those issues of ethical use and implementation and getting students to think ethically about their engagement with AI, one thing you did talk about, which I'd like to draw you out on a bit further, was the idea of age-appropriate implementation. When we think about how AI interacts with the curriculum, and your notion of ensuring foundational skills and elements of the curriculum are still there, how do you see the introduction of AI into schools at particular ages and particular year groups, to ensure that it doesn't interfere with that key foundational knowledge?

    Yeah, so for me, part of this is foundational knowledge and foundational knowledge building, but I have a bigger concern, and this is more about child development. So if we look at children and AI (I'll start by looking at AI, actually): most adults struggle to understand AI. We like to personify things and say that they're human, and we already have an issue with adults seeking help from artificial intelligence and using it as a confidant. In fact, overnight, since the last time we spoke, Meta has released personality-driven AI chatbots, so that you can have a Dwayne Johnson-type character offering you health advice and giving you all the enticement and encouragement you need to become a good bodybuilder, I don't know. And we can see as adults that that's problematic. My problem with AI, when looking at when we introduce it, is that our brains are not necessarily designed to understand that level of abstraction too early. So I think we need to socialise the idea of AI, and we need to be aware that our students will be using Google Home, giving instructions to Siri (I apologise if there are any Siri devices I might set off by saying that), so they will be interacting with AI on a daily level; they will see robot vacuum cleaners that move around. But when we get to generative AI, which is able to sound and look human and create videos that you might respond to in a real way, that's where I think we need to have some very deep discussions, not just in education but from a child development perspective, bringing in other sciences as well, about when it is safe to introduce these, and how we first prepare students for living, as a child, in a world where you will be interacting with AI: acknowledging that they're machines, and machines come with inbuilt biases and things that are wrong; they only do what you tell them to do; and they're not driven by the same human rules that we like to think humans are. They don't truly experience kindness, or true feelings, or a genuinely deep conversation. So I am a little worried that we'll have unforeseen consequences from introducing it too early. I remember back when I was studying (and I will say that child development is not my area of expertise; I did the teacher level of child development, and I've taught Prep upwards, but I'm not an expert in this area), when I was doing my teacher education, I was told that about 12 is when you're about ready to do algebra, because that's when your brain can handle the level of abstraction needed to understand it. Taking that idea (and I'm going to accept at face value that hopefully my memory's not too far off), I do think there's not a lot of evidence about what harms might be done by introducing this too early. So for me, there are a couple of responsibilities. I have a school and educator responsibility not to do harm to my students: do no harm. I think that belongs beyond medicine; it applies to education as well. I also have a responsibility to my community and to my parents, to engage them in this conversation about what age they're allowing students to access certain things. For example, just looking at the level of access to AI at the moment, we can see students being allowed on Snapchat very early, because parents like being able to track their kids on Snapchat and communicate as a family. I have personal opinions about the appropriateness of that.

    But we can also see that there is AI now built into that program, and overnight Meta has announced that every single platform they have will have generative AI built into it. So we're going to see all these platforms come out there, and we need to be having a conversation with our parents about the fact that we're in a time where we don't know the answers about what the implications of this might be for a developing mind. I do think we need to be very careful about how we scaffold this journey for students and for children. And I love technology; I just think we need to look at it from a nuanced view. Look at social media: it brought a lot of joy and a lot of connectedness, and also a lot of things that have been very damaging, social issues. So I do think that's something I haven't heard discussed enough, and I'm really interested in seeing more attempts to gather evidence about it in an ethical way, with a developing technology. It's definitely something we need to look at, and within schools I certainly am looking at it from that perspective: when do you have the capacity to understand that AI is abstract, and that it can present as human, but it's not human?

    The comparison to social media, I think, is an interesting one, because I've heard and read social media described as this great unregulated experiment that would never have met the ethical considerations of actual research, and it's only in glaring hindsight that we can see both the positives and the potential negatives of it. And I think what you've brought up is an interesting consideration with AI for the same reason, because the drivers of this very rapid rollout of AI are primarily commercial. Will we see that same unregulated, experimental rollout, and only recognise what we should have seen coming in hindsight, 20 years later, much as we have with social media?

    I hope that we've learned the lessons of the past. The reason I say that: I think we're still in a time of great danger, and I'm not trying to minimise that, but we are having the discussion. We have the industry reaching out and asking for regulation; we've got leaders in industry writing letters saying, we want governments to step in. And we can see, for example, what the EU has come up with, where they are taking a stance: these are the things we need to look at if you're developing; these are just no-go areas. In what they've written they have said that anything designed to change human behaviour is just a red line; we don't go there with AI, and we don't develop that. Now, we're in a complex, globalised market, and there are very interesting debates about whether we need a globalised approach to how we look after AI and what we as a world perceive as safe or not safe. That said, we are still slower, in a regulatory sense and a system sense, than we are in a technology-growth sense. And we can try to artificially slow down the technology; there have been requests for that to happen. OpenAI, in fact, put out something saying: we would like no further developments until we know more. Obviously, we've never seen that before; tech companies don't tend to take that kind of mindset as a whole. But I do think it's a real concern. So I'm hopeful that we will have the discussions, but I am concerned that we might be having them too late.

    And certainly, once it's out there, it's very hard to put the genie back in the bottle, as the saying goes.

    Absolutely. And people are changing their work habits, it's true. Companies are changing the way they work already; you talk to different people, and they now use AI as a tool, just like they use other things as tools. I'm worried about having a conversation that is too negative or too positive about AI, because I think you can look at those risks and go: it's too much, we can't do it. And I do think that's where the reality of what is already out there comes in. So when I get overwhelmed by looking at the global problem, I restrict my view back down to my role within my school and go: what can I do for my community, and what action should I take, knowing the realities and the things I can't fix around me? I can't put the genie back in the bottle myself, but I can raise a community that will question, and push back at government, and say: we want there to be regulation. So rather than responding to these fears with a ban, which is a completely understandable response, responding with education, for me, is the way to create a community of dialogue that we know can create change. And there's the idealist in me coming out.

    Well, to close, then, thinking about your context and what you're working on: what are you looking forward to as you continue to develop your school's approach to AI and engage the community and teachers? Where are the next big steps that you're heading towards?

    I've been meeting with every area of the school, and when I say every area of the school, I've been meeting with facilities staff as well, running workshops with them, talking about: these are the tools; what are the problems? Because I think we need to look at how we can use these tools to solve the problems that are on the ground, listen to people, and give more people the power to make the school system work better, so that we can do our jobs better and be there for students more. There are so many little things that we can resolve. So that's one of the things I'm really excited about, because from that we're having discussions about all sorts of things that I can see educational implications for. I'm seeing teachers doing things like developing a large number of individual quizzes, with individualised feedback on every single response. No human has the time to build that many, but a human has the time to check them for accuracy and to look at them and make sure they're safe, and as a result, students are getting a better impact. So I'm looking forward to all those small projects, as well as those big discussions about what we can do. I'm really interested in seeing what happens on a system level when you look at places like South Australia, where they're looking at how they can wrap around the OpenAI platform and build their own educationally oriented approaches.
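    As a sketch of the individualised-quiz workflow Michelle describes above: a model drafts a question with feedback attached to every answer option, and the output goes to a teacher for checking before any student sees it. The openai package, the gpt-4o-mini model name, and the JSON shape here are illustrative assumptions, not a description of the school's actual tooling.

        # Sketch: draft one quiz question with per-option feedback via an LLM.
        # Assumes: `pip install openai` and OPENAI_API_KEY set.
        import json
        from openai import OpenAI

        client = OpenAI()

        prompt = (
            "Write one multiple-choice question on photosynthesis for Year 8. "
            "Return JSON with keys: question, options (list of 4 strings), "
            "answer (index 0-3), and feedback (list of 4 strings, one per "
            "option, explaining why that option is right or wrong)."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # ask for parseable JSON
        )
        quiz = json.loads(response.choices[0].message.content)

        # The human-in-the-loop step: print the draft for teacher review
        # before it reaches students.
        print(quiz["question"])
        for i, (option, note) in enumerate(zip(quiz["options"], quiz["feedback"])):
            marker = "*" if i == quiz["answer"] else " "
            print(f"{marker} {option}: {note}")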

    Well, there's certainly a lot to look forward to, and I imagine a lot of unexpected change yet to come. So thank you very much for this fascinating exploration of what sounds like a very comprehensive and, to my mind, particularly principled approach to integrating AI in education. Michelle, thank you very much for your time, and I can't wait to catch up with you again in the future to hear how it's been going.

    Thank you so much.