Thank you, Erin. I'm going to stick with it, and we're going to do what Erin said. My presentation is behind me, so I apologize if it's too small, but I'll try to keep it lively and quick, since we don't have too much time. I'll cover a little bit about what's happened this summer, then the AI phenomenon itself and some key insights that I'd like everybody who is in business or development, or dealing with bringing this into the state, to have in mind. Then I'll share a little about what's happening this year moving forward, and if we have time, a little Q&A; otherwise I can stick around. So the ICD AI is a newly formed group, less than two years old. It's a non-academic institute within the research office: we do workshops, outreach, and education, but we don't issue credit. Instead, we galvanize resources from across the university in data science, computing, computer science, and information systems, and we try to organize and catalyze them so that everybody can find each other and work more closely together. Importantly for my work, we can now reach out into the state and to industry, because we want this to go beyond the boundaries of the university. The task force was formed this summer, and one of the highlights is that more than 70 faculty, students, and staff joined on a voluntary basis, which gave a clue to how popular and impactful the topic of artificial intelligence had become; there were just so many people over the summer who were willing to get involved. It continues now in an academic-year context, and it's up to 400 individuals from all across different parts of campus.
Some are from our campuses in Phoenix, but all want to bring guidance to the AI phenomenon as it impacts the university and our society. One of the key highlights was that we made a lot of progress on syllabus creation, so the University of Arizona is not caught flat-footed: it has policy for how we should take AI into the classroom. We have some follow-up events coming, and again, I mentioned my industry outreach and the need for communal dialogue, which is coming in the form of several town hall events. Steve, how are we doing on time? Okay, you're fine. All right. So briefly, the working groups that were formed, and you can follow up with any of these if you like: access and equity; communications; AI and data; documents and events; my group, the industry group; integrity; education; and syllabus guidance and training. Let me show you really quickly what the site looks like; that's Michelle's site, and I was just checking it out as she was talking. So we have the Artificial Intelligence at the University of Arizona website. This is where we post our upcoming events, and you can sign up and register there, as well as check out the work of the working groups. Here are the steering committees, and you can find contact information and get involved with those groups if that's of interest to you. The key findings to share from that summer experience were that resistance to this new generative AI phenomenon is somewhat higher than enthusiasm. I think that's worth keeping in mind: broadly across society, people know about ChatGPT, they know about generative AI, and for various reasons, resistance to the AI is a bit higher than the excitement for it, and that's culture-wide. And we found that students are more likely to have used, or to use, generative AI than faculty.
Although I don't think our faculty are laggards; the students just got to it fast. Attitudes are shifting toward enthusiasm and curiosity, and it's very interesting that once people begin to play with generative AI tools, whether the image generators or ChatGPT, they tend to warm up pretty quickly if they haven't tried them before, because it can be such a rewarding experience to have it return results for you, and it can be quite chatty and friendly. So people seem to warm up to it. But it's worth noting that as a culture, people are a bit freaked out by this, and there is a huge appetite for talking about it; that's what motivated our town hall series, to create dialogue. We're doing okay on time? Yeah, we're fine. Social disruption, or the potential for social disruption and job replacement, are among the top concerns. I think it's funny, because I explain to people that my job is, ironically, to go find how we can make jobs out of AI. If I have to go make new jobs with AI, that's the problem I'm wrestling with, and I'm really grateful for the chance to do it. But I will admit it's a head-scratcher, because the artificial intelligence technology itself does things that human beings do, and it does them well, particularly cognitive labor; we can get into that later. The most important point is that what has happened recently is that ethics and regulation have become front and center of almost every conversation, whether at the university, at the government level, or in other parts of the world. So this is happening; there is an effort to regulate it, and I want to show you something real quick. Steve, I'll share these slides if you'd like, so that people can follow up with the resources.
For example, there's one group that has emerged called the Center for AI and Digital Policy, and if you're interested in regulation discussions, globally, I really recommend subscribing to their newsletter. It comes out once a week and it's a dense read, but there's just so much going on globally about efforts to regulate AI in all the countries of the world. So that's fascinating if that's your thing; it's worth taking a look. Here are the slides with the URLs. Now, the insights I would most like to share with you all: I don't know the technical level on the call, so I'm going to presume it's a spectrum from not very much to quite a bit, and I hope this will be helpful to pretty much all of us. We know that there's noise, and we know that there's enormous investment, tens or hundreds of billions of dollars being sunk into AI, so what's real? There is a tremendous amount of hype and a growing concern about it. I mentioned the regulatory environment, but something to keep in mind is that it's estimated that our compute power has increased by a factor of 10 every year since about 2010, and it's expected to keep growing at that rate for at least another three to five years. What that means is that the AI models being built now could be 1,000 to 10,000 times larger within a few years. So we are at a very special time, because one thing that's not well understood about the generative AI models like ChatGPT, Bard, and the image generators is that they seem to learn things on their own, just by sheer size.
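The scaling arithmetic behind that claim is just compound growth. Here is a minimal sketch; the 10x-per-year figure is the estimate quoted above, not a measured constant, and the horizon is illustrative only.

```python
# Compound-growth sketch of the compute-scaling estimate above:
# if effective compute grows ~10x per year, three to five more
# years of that implies models 1,000x to 100,000x today's scale.
def projected_scale(annual_factor: float, years: int) -> float:
    """Multiplier on today's model scale after `years` of growth."""
    return annual_factor ** years

for years in (3, 4, 5):
    print(f"{years} years at 10x/year -> {projected_scale(10, years):,.0f}x today")
```

The talk's "1,000 to 10,000 times larger" corresponds to the three- and four-year points of this projection.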
I think many of you might have heard that ChatGPT, or GPT-4, taught itself all human languages, even though it wasn't trained on all human languages. Through the vast lexicon of vocabulary and linguistic weights it has built, it can infer them: it can read Hungarian, it can read Persian, it can read ancient languages as well. Those are called emergent properties. So something to keep an eye on in the next few years, as these models get super large, with companies like Google and Microsoft competing to build the biggest models, is whether new capabilities emerge or not. As a thought experiment, imagine how different reality on this planet is if we have chatbots that are as intelligent as us and they plateau there, really smart but not smarter than us. That's a world sort of like Data from Star Trek, where you have robot companions who are kind of like people, versus a world where the AI gets so intelligent that it just dwarfs us in intelligence. There's a lot to watch, and within the next six to twelve months we'll probably see a little bit of science as to whether it's going to plateau or keep rolling. Other details that are good to keep in mind: text-to-text, meaning you type a prompt and it generates a document, and text-to-image, where you type a prompt and it generates an image, are completely solved now. The imagery AI can generate right now is so convincing that it doesn't take a whole lot to create something that looks absolutely real. And the thing to keep in mind is that text-to-video is right behind.
I was just in Virginia this week, in Washington, D.C., for a conference, and the National Security Agency had a representative there talking about the concerns the intelligence community has from a security perspective. They were pretty concerned about the near-term possibility of deepfake technology disrupting our election cycle and causing groups of people to fight with each other, because of the way social media can propagate these completely false but very, very realistic stories about people doing terrible things. So that's something to think about. I hate that AI is kind of a bummer this way, so I'm going to do my best to also make this optimistic and uplifting, but there are things like that to think about. The most important thing I would like you to know, if you're not aware already, is that in this last year, 2023, open-source alternatives to the giant models have been established. I'd like to explore what that means for just one second more; I have a little graphic so that it's not all bullet points. I found this graphic this morning and thought it might get the idea across. I chose the motif of gunpowder because that was a pretty transformative technology. As another thought experiment, think about the world if you were in the 1400s in Europe, for example, when gunpowder arrived: at first only a few have it, but over a few decades the commoner has it as well. Think about how differently our world has evolved because gunpowder was a technology that was not controlled just by the elites; consider the American Revolution, for example, and all of the gunpowder empires. In other words, it's a very different world now that open-source models are downloadable and can be used by organizations, by governments, and by small businesses.
If you can build your own models, then you're not completely beholden to Google, to Microsoft, and to Amazon. You can build your own chatbot, a Commerce Authority chatbot, say; this is possible, and it is playing out right now. So it's also a very interesting thing to think about how the world is going to go. The last bits I want to share about the AI phenomenon are that it's moving very swiftly. Just this last week, Amazon put $4 billion into a company called Anthropic, so now they're in the game, and they're probably going to be a very large player in artificial intelligence services. Microsoft released something called AutoGen, an AI agent framework, which they made open source, so it's not owned by them. It is capable of automating generative AI so that it has its own drives, which, again, is why we need a regulatory environment, but it makes it much more powerful. And Google released size data on their new model; I want to show you a quick visualization of size that I put together last night to help you understand. This little black dot at the top is the compute and parameter size of the ChatGPT that came out back in November of last year. This bigger dot is the size of the current cutting-edge frontier model, GPT-4, the one everybody's using. And this is the size of the one Google is building. That's where we're going in the next 18 months, and like I mentioned before, it's going to be really fun to watch: does this giant model do things we can't understand, or does it just plateau as a really expensive version of GPT-4? That's a question that will play out. The most important part I want to leave you with is that there is the potential for acquiring an open-source language model.
There's a website called Hugging Face, which is quite popular; I'll pull it up, it takes a minute usually. Here, every day, the most recent versions of these fine-tuned models are published in a shared environment, and you can have them for free. Many of them you can use commercially; not all of them, but some you can. With that, it's possible to build a language model that has your company's or your organization's information in it, and you don't have to give that information out to OpenAI, or to Microsoft, or to Google; you can keep it private if you want to go that route. So there's a new world of technology development made possible by this. The thing that I think is relevant for our group today is that with all of this massive bandwidth coming into the state, at least potentially, there can and needs to be something we do with it, and I think artificial intelligence will be very happy to soak up that bandwidth. So that's why it's good to be thinking about it now. Just to wrap up, our strategy this year and next is to continue outreach, continue holding public workshops, and help people understand what the technology can do. Again, with the model I just tried to explain, I wish I had a graphic to make it a little more tangible. But this idea of building your own local model that connects to a larger model, to get the best of both worlds, is something I'm very passionate about. That's why I chose a historical analog of a technology that, if it's in everybody's hands, makes for a different world than if it's just in a few organizations' hands. If you want to continue that conversation, I'm really happy to stick around. And finally, my last slide: at the U of A, we are engaging to build our own local language models now.
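The "local model connected to a larger model" idea can be sketched as a simple routing policy: prompts that touch private information stay on a self-hosted open-source model, and everything else can go to a big commercial model. This is a hypothetical illustration only; the two model functions are stand-in stubs, not real APIs, and the keyword list is an invented example of a privacy rule.

```python
# Hypothetical sketch of the "best of both worlds" pattern described
# above: sensitive prompts stay in-house, generic prompts go out.
PRIVATE_TERMS = {"payroll", "contract", "patient"}  # invented example rule

def local_model(prompt: str) -> str:
    # Stand-in for a self-hosted open-source model kept on-premises.
    return f"[local] {prompt}"

def hosted_model(prompt: str) -> str:
    # Stand-in for a large commercial API (OpenAI, Google, etc.).
    return f"[hosted] {prompt}"

def route(prompt: str) -> str:
    """Keep sensitive prompts in-house; use the big model otherwise."""
    words = set(prompt.lower().split())
    if words & PRIVATE_TERMS:
        return local_model(prompt)
    return hosted_model(prompt)
```

In a real deployment the routing rule would be more careful than keyword matching, but the shape is the same: your organization's data never leaves the local model's side of the boundary.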
So this is something the U of A is starting to do. I don't know that it'll be a university-wide model; it might be a health sciences model, it might be research models, but this is beginning to happen. So it's a thing. The one that I'm most proud of and passionate about is the AI student internship dream that we have, which right now I'm calling the LLM workforce, or the LLM army, or whatever. We have so many students attracted to the University of Arizona for computer science, machine learning, data analytics, all of these science-and-math fields, and they're all dabbling in, fascinated by, or adapting to artificial intelligence. I don't have hard numbers yet, but I imagine a thousand or more students are looking for a way to get into AI. So what we're trying to build is an exploratory workforce, where students would be mentored within our institute in how to build these models and then paired up with organizations like yours, hopefully, that are forward-thinking and ready to take a chance working with students. It's really important that you appreciate the value of mentoring people who are at the beginning stages of their careers as they adapt to your environment. Then maybe we can kick this language-model, generative-AI power out at the state level, instead of letting it just be a Google-versus-Microsoft thing: can we distribute this out into the state of Arizona, where people are ready to play with it, learn how it works, and put it to use? Productivity gains and entirely new types of work, products, and services are possible with that student relationship. So that's what I'm very passionate about. And the last thing is that I was just in Washington yesterday. Sorry, I'm just wrapping up.