With the current sort of consumer-facing generative AI, we're still kind of wrapping our heads around it. It's probable we're just starting to see exponential growth. In a year, we could be thinking, okay, well, that's like every day, basically, we're having to pivot as new tools come out. That's unlikely, maybe, but very possible, and that world will be very strange.

With tools such as ChatGPT and other adaptive machine learning technologies, students of almost all ages are able to quickly compile written work, ranging from term papers and reading summaries to legal briefs and job application letters. Understandably, this concerns many in higher education. But this complication just scratches the surface of what emerging tools, including artificial intelligence, are capable of doing. This is Random Acts of Knowledge, presented by Heartland Community College. I'm your host, Steve Fast. Today we're speaking with an educator, scholar, and musical composer who has dived deeply into the benefits and disruption of AI and other machine learning tools. We'll talk about how this emerging technology has created an energized and somewhat chaotic present, and presents an uncertain future.
I'm Roy Magnuson. I'm an associate professor of music composition at Illinois State University. This year, I'm also serving as a provost fellow, assisting the Division of Academic Affairs with the university's response to disruptive technology. I'm also going to be teaching a continuing education class on AI called Reimagining Living, looking at life in the age of artificial intelligence.
Your background is in acoustic music composition. How did you start to get into learning about and using virtual environments for music composition?
Yeah, it's a long story. I mean, it's time, right? I explain this to my students a lot: you look at people's careers and it looks like magic, and it's kind of weird, but it's just time, lots of time. My whole background's in acoustic music. I have a doctorate in music from Illinois, and I've been teaching music at Illinois State since 2011, most of it acoustic music: music theory, aural skills, composition, things like that. In 2017, VR was coming online as a consumer tech, with the Vive and the original Oculus coming out, and I thought, this looks kind of cool, I wonder what it's about. I happened to get some internal funding from ISU that needed to be used on equipment, so it was a natural connection: have ISU buy a Vive, that'll be awesome. So I got one, and I started going into VR and just thinking, wow, I want to write music for this. This is just such a cool thing, all these different spaces. And that evolved into, man, I want to write music with this. And there wasn't really any software for that. So that became the long rabbit hole of Googling: how do you make this? What is this? Okay, I need a game engine, so I started learning Unity, learning to code, all of that. I had no background in coding, but it's maybe representative of the type of world we live in that that's possible now. It's just time, and how much time do you want to expend? That's your resource, right? And it was a lot. But I'm happy I did it. It's been really interesting, these last almost seven years now, exploring that technology. We're really at an inflection point right now, I think. The Vision Pro is coming out tomorrow, February 2.
And that's when Apple enters the market. They're such a user-focused company that the early returns on it are really impactful, because it just feels comfortable and familiar. So it's very exciting. The tech is malleable; there's tons of stuff to do with it. But if you look at my background, it's sort of like, how did that happen? It was circuitous, because life is circuitous at times, but in a lot of ways, I think, very similar to writing music and the creative process.
It's interesting that you've fully embraced this technology, or at least to an extent; you're definitely working with it all the time. You are someone who is really in the know about what's happening with AI. And it's interesting to me, as a creative person, as an artist, to see your comfort in talking about it, understanding it, using it, thinking about it as a tool and about the possibilities of AI, when so much of what you see in the general communication space is artists saying no, no, no: AI is stealing our work, it's creating something that's inferior, it's going to take away how we make a living. How do you reckon with that? Is what you're hearing people say a little bit too hyperbolic? I guess I know why artists might be scared of AI and the things we're seeing, but what do you say in relation to those things that we do hear?
Yeah, so I want to be really clear about a couple of things. One, I'm not a futurist, really. I know a lot of the stuff I'm doing seems very futurist-y. I like VR as a technology because I think it's transformative, and there are things you can only do with it. That's what really appeals to me: how do you create experiences that are only possible because of that, but not replace reality? We're very far, I think, from any of these devices actually being, you know, Ready Player One, The Matrix, or something like that. That's quite a ways in the future, and that's dystopic; there are obviously lots of things that could go wrong there. But as a tool that can do things, I think it's really interesting. Same with AI: I think it's a tool that can do things that is really interesting. To your point about artists: coming from music, as a composer, there are tools now that can generate what I think most people would not recognize as AI, just fairly normal, stock commercial music at this point. If you're not really paying attention, it's really good. And that's obviously really disruptive and very scary, because I teach the students that are going to go out and do those things. They want to write video game music or something like that, and it's able to do that at a fairly competent level at this point, in 2024. And the way these things accelerate, it could be tomorrow, it could be six months, where it's extraordinarily good at that. The visual artists, obviously, were the first ones through the wall, so to speak, in the generative AI explosion that's happened in the last 18 months. And I mean, there's a duality to this space, right?
I think it's entirely possible, and very likely, that the companies at the forefront of this are committing massive crimes. It's very difficult to prove because of the nature of the technology, and that's its whole thing; the legal process needs to work it out. I think it's going to be similar to Napster and things in the late '90s with file sharing. But it's almost certain that they illegally scraped things and are infringing on rights. On the other hand, and this is again not to diminish that, part of my role is to be familiar with these things, to try to look at the trends and where it's going and, for lack of a better way of saying it, throw a dart at a target 18 months, three years out, for us institutionally to think: okay, what do we need to be preparing students for? If these tools are here, and X, Y, Z happens in the legal system, or they trend open source, or they become more personalizable, so you can use them more as a generic tool to assist you, run locally on your computer: what does that look like? How does that affect our curriculum? How does it affect what our students are going to be doing? So it's not a blanket endorsement. I think it is extremely scary and, again, for lack of a better word, likely illegal, and there are going to be consequences for that. I don't want the other half of my position, the way I'm looking at it, to look like I'm diminishing that pain or the anxiety that people have, because it's very real.
When I think about the development of machine learning and the tools that we have seen just in the last 30 or so years, and this goes beyond AI, we're just talking about tools people use every day, whether it be something to help you with grammar, or, when you talk about musical performance: you started out seeing synthesizers and things try to duplicate the sounds of, like, a woodwind. You'd have a bus-sized church organ that had "oboe" as a button on it. It didn't sound like an oboe, and people were comfortable with that. They're like, okay, this gives you the general idea of an oboe. But now, with technology developing, you could probably create a synthesized oboe that's not played by a human being. Is there a point where artists, musicians, are not able to offer anything different than what you can get from a synthesized performance?
Yeah, I mean, it's a great question. You're right. There are samples, and then there's the fully synthetic, from sine waves and things like that. I hesitate to say we're at a point where those sound realistic, but they can get very close if you know what you're doing. But the other half, where people are actually recording an oboist in the London Symphony or whatever: you can record it in such high fidelity, and use multiple recordings of them, that you can use that in an audio workstation and compose with essentially that player as your oboe player. We are absolutely at a point where you can't tell. If you know what you're doing as a, I guess, synthestrator, working in Logic or Pro Tools, it's indistinguishable. So we're definitely there. Your question about what the artist offers: when I talk to people about AI, one of the things I always come back to is that this technology, maybe paradoxically, has the potential to free us to be more human. Because so much of what we do, just generally, broadly, is algorithmic, and we have human error in it all the time. I spent most of my morning just responding to emails, going through all these different threads, which is fine, because you want to have those communications. But a lot of that stuff, you know, scheduling something, could be done by an AI. And it can be done faster, better, cheaper, more accurately at this point, not to say in the future, when it will certainly be more frictionless and just part of our life.
But as an artist, if this thing can free us to be more human, I think there are so many opportunities to embody the opposite of that. Okay, sure, someone can type into their generator, "I want a piece of music that sounds like X, Y, or Z," and it kicks it back, and it sounds like that. If what I do is create that thing, then I'm in trouble. But if you talk to a lot of artists, it's not about that; it's a holistic thing. People want to have a connection. They want to commission you to write a piece; they want to commission you to draw something. I tell my students this all the time: if that didn't exist, there's no reason why most of us would get work, because there are enough people doing this in the concert music space that they could commission the top 2% all the time. But they don't, because people want to work with individuals. It's the same sort of thing with something like Etsy. My mom has a very active Etsy shop and always talks about how it seems so recession-proof: no matter what, people are buying her small craft things, regardless of ups and downs in the economy, because people want to support her. They know her, they talk to her. They may never have met her, but they like knowing that it's this bespoke thing. It's not algorithmic, it's not mass-produced. So I see that as a feature, and that's really attractive. It's the same sort of thing where, if you go hear a band at a bar in town, your local band, they're probably not as good as the really highly produced top 1% of musicians, but you begin to form a relationship with them. You get to know them, you hang out with them, you talk to them. And that's fine, right?
That's what I see as a really intriguing future: that more of our life as artists becomes that.
Well, I want to get back to that in a moment. But to continue the thread, and this might get a little bit into what is and isn't AI right now, in early 2024: there's this idea that has been around in machine learning and computer science since the '50s, the Turing test. Correct me if I'm mischaracterizing it, but it's: can a computer make something that is indistinguishable from what a person would make?
Yeah, I think specifically conversation, right? Interacting with it and not being able to tell.
And I think this is where it is particularly challenging. We see it among faculty in education. They say: I assign a student to write a paper; the student can then take an AI tool, ChatGPT, whatever; it scrapes information and writes a paper for them; they hand it in, and they're not using a lot of brainpower for it. And this terrifies people. They're like, wait a minute, I can't teach this person, because the metric, the evaluation that I have for the student, no longer works. And what if they can figure out a way to use this tool better, faster than I can figure out a way to check them? And that's going to evolve. So I wonder, are we at the point now where things are effectively passing the Turing test? I think about, again, that probably terrible comparison to the organ: oh, it has the oboe button. And then I think about what we see in movies, this photorealistic CGI technology, which is made by people, but also probably assisted by machine learning as well.

Definitely assisted.

And you think: okay, that's bad, I can tell; that's bad, I can tell; oh, I can't really tell. And I think, as people, we learn to figure out the fake, too, right? It's not realistic enough. And you talk about the bespoke thing; that's what attracts us, the flaw, the realism of it. Are the machines smart enough to fool us most of the time now? Or are we as humans adapting to what we see and what we experience, so we can sort of tell?
Yeah, that's a really great question. I tend to think, if you're not familiar with the tech and you're not really paying attention, you're hearing a robocall or something, or you're reading an email, it could absolutely fool you; you'd have no idea. I've read enough AI output at this point that when I'm reading letters, I just kind of look at it like, that's not right. But again, as you said, it's sort of an arms race, and it's really easy to mask that, to make it not read like AI. And this is part of the problem with the teacher-student relationship, and faculty wanting to rely on AI detectors, plagiarism detectors, to filter that work and then build some sort of punitive system for the students who were using it. And rightfully: if they're infringing on the code of conduct or your class policy, it should be penalized, right? But more generally, I think that's not the future we probably want, the sort of police state we're setting up in classes, because it's going to be very, very difficult to check all of these things and catch students. Any of the AI detectors, paid, unpaid, whatever, you can crack them in 30 seconds if you know what you're doing. There's no way they're going to stay in front of it. You're going to catch the low-hanging fruit; you're going to catch, I think, a lot of the time, students who are rightfully using it just to fix grammar, language, things like that. We have a lot of non-native speakers, international students, who just want help: they know their language needs to be cleaned up, they see a tool and it does it, and they're maybe not able to read the output and see the difference. More broadly, if the assignments you assign are ones a student can feed into a free AI and get back something that resembles what you would consider passable, then those assignments, and this is my opinion as a faculty member, probably needed to be revised a long time ago. Think about the learning outcomes. Think about what you're teaching. What is the purpose of this? If it's critical inquiry, critical thinking, writing skills, all of that, there are really good ways to do it that completely circumvent this problem, if you want to call it that, with generative AI; ways that in the past may have seemed totally impossible because of the workforce they would need, or the time commitment, or whatever, but that are real and good. There is, I think, such an interesting, in so many ways better, future for how we teach students. And again, I hesitate; I want to make sure I'm not characterizing things for our provost or for ISU; this is my personal opinion. I think it would be so great. But it's really scary. It's going to take a lot of learning on the faculty's part, a lot of reflection about what we do, existential reflection: what am I teaching? If at the core what we're doing as teachers is transferring information into someone's head, then we probably should have quit 20 years ago, because the internet took a lot of that away. But I don't think anyone would say that what I'm doing is just giving you information. If our goal is to turn information into knowledge, to go through the synthesis process; if the role of a liberal arts education is to create critical thinkers and good citizens, to make people into a better version of themselves; then all of that is possible, and I think possible in a more profound way, with this technology. It's just really different. And it's scary.
Again, I don't want to throw any institution under the bus, but is higher education keeping pace with, grappling with, this? Because it seems to me, if you look at this technology, it is developing at an exponential rate, and without being too speculative, you could look a year into the future and say this is going to affect every single student in a composition class everywhere, and we maybe didn't think about how we're having our faculty approach a composition class.

Yeah, I mean, that's part of what I've been doing in my fellowship at ISU this year: drinking from a firehose, essentially, just trying to absorb information. And specifically, I think it's really important to point out, I'm not a computer scientist, I'm not a machine learning scholar. I happen to have this weird, circuitous path through technology with VR, and I was sort of following AI because it was always impacting that technology. But I'm late to the game on generative AI and machine learning; there are people who study this. So I'm looking at this as a non-native, and part of the role is: okay, what does this work mean, how does it apply to the people I've had conversations with, and how do we prepare? One of the great things about universities is that they're very slow to adapt, right? There are lots of measured conversations and thought, and it allows you to weather things really well. It's like an ocean liner: you just kind of keep moving, and there's great debate. And that's wonderful; shared governance and all that. It's very powerful. However, it's not particularly nimble, because of those very mechanisms. You can't expect, okay, we'll take that to this committee, they're going to meet in May, and then we'll have a discussion in October. That just won't work with this tech. You're right, we are seeing what we understand to be exponential growth right now. With the current sort of consumer-facing generative AI, we're still kind of wrapping our heads around it, and it's probable we're just starting to see exponential growth, which, again, means in a year we could be thinking, okay, well, that's like every day, basically, we're having to pivot as new tools come out. That's unlikely, maybe, but very possible.
And that world will be very strange, because institutions will be incapable of dealing with it. All we can do is hope to set up mechanisms where we can start having discussions, pull levers, and work on guidelines, so we can get to our students, get them what they need, and help them not break the law or something, which is very possible with these tools. But that pace is antithetical to what we do, right? And that's maybe the larger existential thing for education. We are exposed to the tech. It is going to be very easy to have an infinitely patient tutor that can speak 40 languages, give you accurate information, and help you reason through it. I mean, that's possible now, but it certainly will be very easy in the next 18 months, and it will be marketed and attractive to students. How do we deal with that? If there's something that disruptive for composition, how do we pivot? How do we fold it into our curriculum? Do we have policy, or at least guidelines, in place that allow for that to happen? Those are the conversations.
So, with the caveat that you said earlier, that you are not a futurist...
I don't feel like one.
You've claimed to not be an expert, but you obviously have studied this and have a better handle on it than most. How do we determine where we are on the curve of where AI is going? We mentioned the things that we know this tool can do now. I think anytime you have a new technology, you see a lot of hiccups, and it's maybe easy to see how it's wrong. But how far away do you think we are from these tools not being wrong, learning more quickly from their mistakes, getting the data, discerning the data, and getting to the point where they really become the automatic-checkout-at-the-grocery-store equivalent of: I used to see one of these, it had its glitches, but now it's the standard?
Yeah, because it's just faster. So, again, with the caveat of how I'm approaching this: I'm absorbing as much as I can, and in a lot of cases, that's enabled by AI. I'm able to read a paper on something and then kind of reason through it with an AI, and then check that reasoning; I'm going through that circular thing. The guess that I've told a lot of people is 2026; by then we'll sort of have our heads wrapped around it. You can see, from 2022, and again, that's the public-facing generative AI with ChatGPT, but it really goes back to 2016 or so, there's been a curve where you can see large language models starting to make sense, and you say, okay, this is not a toy; this looks powerful. And we've seen essentially exponential growth from there. But our brains don't understand exponential growth, really; it doesn't make sense to me, honestly. But the idea of GPT-4 right now, and then GPT-5: GPT-5 could go to a trillion parameters, where GPT-3.5, I think, was something like 175 billion. What does that mean? We don't really know. It will probably be much more accurate, with a much broader understanding of things, more understanding of reasoning. And nuance is probably a huge thing: it could sort of infer, reason, where at this point it may make something up, or hallucinate. We're really at a point where we could start running out of data; that's very possible. However, there's a lot of recent research showing that these models can train themselves on synthetic data; essentially, they can serve as teachers for themselves.
And if that pans out, and we don't have limitations in the algorithms, the whole structure isn't breaking down, then it's really just time, money, and compute power. You can think of it as two different things. There's the energy you use to train these big models, which you can think of like a university you can go and ask questions of. But then there's also what's called inference, the energy that's used to actually query it: you're asking a question, and what does that take? There's obviously a lot of discussion about that and the environmental impact, and all of that is very valid. However, there's a lot of research showing, too, that if you pour more power into the inference, essentially letting it think about the question longer rather than answering in an instant, there's also scaling happening there. So it's not only that these models can generate more and have a broader lexicon of knowledge, but also that you can put more power in and have it think more patiently: spend a day thinking through a math problem, and then it can just crack it, because you're pouring billions of dollars' worth of compute into it over a day or something. But that's insane. That's where it's like, oh, fusion energy or something: we just figured it out. Or a cure for cancer. That's the sort of, whoa, what is happening? And that could be, I mean, 2030. It's really possible.
This is such a fun subject to talk about right now.

It is fun, yeah.

Because everything that you say, we could do 10 more discussions on. As you get into those things, we are entering into, feasibly, what I would call the Star Trek element of this, where you could have something like a universal translator; where you could have an AI assistant tool so that you won't have to learn to speak Italian or Spanish. You just have it there.
It exists now. This is actually something I was telling our provost: I said, hey, I figure this year we'll see a universal translator, something like a minimum viable product for that, right? My guess was this year, and it came out: a company put it out on January 9. It's by this company called Timekettle. It's this little box you put in the middle of the desk; you put in an earpiece, someone else puts in an earpiece, you choose a language, and you just talk, and they hear you in their language, and you hear them in your language, with essentially zero latency. Which, again, thinking back to the phrase "to be more human": that's possible now, right? With phones, you hold it up and you're typing in words or whatever, or you're going through a human translator, which is obviously fine, but there's slowness, and there's sort of that feeling of, well. I mean, I've been very fortunate to be around English speakers as a native English speaker, but I would feel shame and embarrassment if I were going to Italy and trying to get my point across. I'm sure people feel that way when they come to speak with me, if English isn't their first language. But that disappears. You just look someone in the eyes and talk to them. And that's awesome, right?
And there are language instructors, if any are listening now, probably shuddering somewhere metaphysically right now, who are just terrified by this idea. That's one way the world could change dramatically. But another way: you were talking about resources and energy, and right now there is a lot of power in holding and controlling information, right? Globally, it seems just as likely to me that there are very powerful entities that have the billions of dollars to win the arms race of AI. And I don't see them then saying, well, now that we have the power of owning this tool, of owning this information, we're going to share it freely and equitably across the world.
I mean, Sam Altman from OpenAI said, and I forget his exact wording, something like: it's futile to try and catch us, as OpenAI, because we're too far ahead, and the way that scaling works, you're not going to do it. He has a lot of reasons to be very invested in that company, though he apparently owns no stock in it; but it's a multitrillion-dollar concept, right? I think you're right. This comes down to a massive geopolitical thing; there are all kinds of questions. This is as big an arms race as literally anything anyone's ever experienced. I said in an interview once, I think this could be like humans inventing fire, like inventing or controlling fire. It could be that crazy. So the knee-jerk is to regulate, right? We want to get our heads wrapped around it. And the tension is this: we definitely want regulation; we need to be having measured discussions and figuring out the licensing and copyright issues with image generation. That needs to be figured out, and we have to make an equitable future for artists. But we also have to look at other countries, like Russia, China, Iran, North Korea, countries that are going to have this technology and are absolutely not going to do that. They are going to go full bore forward and use it to do any number of things.
So when we regulate, in the EU, the West, however you want to look at it, there's this balance between how we stay competitive in the world and over-regulating internally. If we over-regulate in our country, that can really quickly run the open-source component of this into the ground while consolidating the power in the hands of Google, OpenAI, Microsoft, Tesla, Meta, or whatever companies end up having these models. That could lead to a dystopia where there are essentially 30 people who control the most powerful thing we've ever invented, which would be, I think we can agree, pretty bad. But there's this opportunity, right, for open-source things. You can imagine the impact of a very accurate translator that can also do medical diagnostics, that is open source and free. That's as impactful, I think, as something like penicillin. It takes cognitive load off of physicians, and it gives people access to quick answers, like, is this growth on my arm cancerous? Even just really fast things that people would otherwise ignore, or not have access to, or not have the money to go see a doctor about. And then, like you said, the cognitive load: the physicians don't see as many patients for the things that are benign, the yeah, you have a cold or whatever. So then they can be doctors, and nurses can nurse, and really dig in to what they probably imagined their jobs being, which is the personable, one-on-one stuff. That seems really great. But it's really delicate, right? It could go very wrong with the wrong sort of regulations, or with under-regulation. It's going to be tricky. You're going to have to thread the needle.
All right, that's maybe not the brightest place to stop, but I think we'll stop there for now. I can't thank you enough for coming in, talking about this, and sharing what you've learned and how it applies all over the place.

Thank you. This is great. And if you have more questions, I'm happy to continue the conversation.
I think that will probably happen.
Roy Magnuson is an associate professor of music composition at Illinois State University and is assisting the university's Division of Academic Affairs in matters of disruptive technology. He is also teaching a continuing education course at Heartland Community College on the subject of artificial intelligence. He spoke with us today about AI, creativity, education, and machine learning. If you are interested in other interviews about technology, education, or other topics, subscribe to Random Acts of Knowledge on Spotify, Apple Podcasts, or wherever you found this one. Thanks for listening.