E82: The Rhetoric of A.I. Hype (w/ Dr. Emily M. Bender)
12:21PM Jul 28, 2023
Speakers:
Alex Helberg
Calvin Pollak
Dr. Emily M. Bender
Keywords: ai, work, students, technology, writing, emily, point, synthetic, tropes, gpt, people, talk, hype, metaphor, parrots, podcast, stochastic, idea, calvin, artificial intelligence
Hello everyone, and welcome once again to reverb. My name is Alex Helberg, and I am joined on the mic by my co-host and co-producer, Calvin Pollak. Hey, how you doing, Calvin?
Doing good, Alex, how about you?
I'm doing very well, thank you. And especially today, we are honored and beyond excited to be joined by Dr. Emily M. Bender, a professor in the Department of Linguistics at the University of Washington, the director of UW's master's program in computational linguistics, and the director of the Computational Linguistics Laboratory. Emily, thank you so much for being with us on reverb.
This is fun. I'm excited.
So are we. So today, for all of our listeners out there, we wanted to bring in our guest to talk with us about so-called "generative AI" technologies, which is how you'll hear them talked about a lot in the media. They're currently causing a lot of buzz in many spheres of society, such as education, medicine, law, and the military and intelligence communities, among many others. We thought it was important, as a podcast that looks at language and argumentation surrounding these kinds of issues, to have someone who not only has expertise in how these technologies work, but who also has insight into the way that language is used to hype, normalize, and even obfuscate how these technologies work and their potential effects on our lives. Calvin, does that about cover our exigency for this episode?
Yeah, I think so, Alex. There are only two other things I would mention, just to explain why we wanted to do this show and bring on such an eminent expert in the field. For me, academia is a place where a lot of these discourses about so-called generative AI are being legitimized, in ways that I find extremely problematic. As someone who attends a lot of writing studies conferences, virtually every panel nowadays is about AI, and I think that a lot of what's going on in those panels is problematic not just ideologically, but technically, in terms of what these technologies actually do. So that's the first thing I would say. And the second is funding. There's a ton of funding going into this domain within academia, and outside of academia there's a lot of government funding going into, for example, the Army AI Task Force at my alma mater, Carnegie Mellon. So the stakes are high financially as well as academically. So yeah, I'm really excited to hear what Emily has to say.
Absolutely. So we'll throw it over to Emily right off the bat here. We wanted to start with some basics of our current situation. We're recording this in the summer of 2023, just for those of you who are clocking this on the timeline of AI developments and hype. So let's start with the basics of the current hype around so-called artificial intelligence. Since we, and I think you are too, are primarily concerned with language as our field of study: in your opinion, should we even be using the language of "AI" or "artificial intelligence" to frame software tools like synthetic text and image generators? If so, why? If not, what are some more useful or precise alternatives, given what the tools actually do?
Yeah, that question very generously suggested that I might say it's a good idea to use this term. Of course not. So "artificial intelligence" is what Drew McDermott, and Melanie Mitchell after him, called a "wishful mnemonic." The idea is that when you're writing computer code, if you want to make it readable to you as a human, so you can go back and fix it, you name the functions after what you want them to do. A good function name points very specifically to: okay, what's the input, what's the output? Something like "artificial intelligence," or maybe a function called "understand sentence," is instead wishful: this is what I wish it was doing. And "mnemonic" is just something helping you remember, right? So "artificial intelligence," in that sense, is a wishful mnemonic. It is also highly ambiguous. It refers to an area of research or a field of study; there are artificial intelligence conferences, and those have been going on for decades. It refers to a collection of technologies, approaches to doing things often called machine learning, I think better called pattern matching, maybe data-hungry pattern matching at scale. And it's sometimes used to refer to specific systems, so people will say, you know, "I forced an AI to read 100 Seinfeld scripts, and here's what it came up with." Right, "an AI" there. My co-host for our podcast, Alex Hanna, coined the term "mathy math" to show how ridiculous that is.
Love that, love it.
But there's another one, which is great: Stefano Quintarelli, an Italian researcher, came up with "salami," and I just want to get right what that stands for. He says, let's forget the term AI, let's call them Systematic Approaches to Learning Algorithms and Machine Inferences.
That is amazing.
Yeah. And then you can ask: does the salami understand me? Does the salami have feelings? Is the salami intelligent? And you can see just how absurd it is. So let's not use the term AI. But there's another problem, which is that it lumps together lots of different things. There's something called automated decision systems: if you've got a system that is set up to either advise a banker, or just tell the banker, who should get a loan and who shouldn't, that's one kind of thing that falls under it. You've got these text and image synthesis machines, which also get called AI. You've got automatic transcription, medical automatic speech recognition, which gets called AI. And all these things work differently, have different affordances, can be tested in different ways, and may be more or less appropriate. If we say it's all "AI," it really muddies the waters.
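[A minimal editorial sketch of the "wishful mnemonic" point, not from the conversation: both functions below perform the identical computation, and only the names differ. The function names are illustrative inventions, not from any real system.]

def count_words(text: str) -> int:
    # Precise name: a string goes in, a whitespace-token count comes out.
    return len(text.split())

def understand_sentence(text: str) -> int:
    # Wishful name: the exact same computation, but the name claims far
    # more than the code actually does.
    return len(text.split())

print(count_words("the salami is not intelligent"))          # 5
print(understand_sentence("the salami is not intelligent"))  # 5, no understanding involved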
Yeah, I really appreciate all of those ways of delineating the different technologies that often get lumped together here. I was thinking, and I don't actually have a rationale for this, that bologna could be a good metaphor as well, because you think of a mystery meat that combines a whole bunch of things, and you don't really quite know what gets put into it.
Yes, or scrapple could be a good one.
But yeah, continuing with the meat metaphors. Just to really get precise here, because I do think that even what we call these things hypes them up. I really like the term you used, the "wishful mnemonic," as a way of condensing this down into a shorthand. The other thing: in practice, as a researcher at a university who has to be in the strange position of playing the expert explaining large language models to people, because I'm one of about five people in a writing program without a linguistics department, what I most often refer people to is the paper you co-authored with several other researchers, such as Timnit Gebru, on "stochastic parrots." That was the conceptual metaphor you used instead of going with these intelligence metaphors, which imply that the technologies can think, that they can make meaning, that they have some sort of meaningful cognitive functions. Instead, you called them stochastic parrots. So why did you choose that metaphor? What does it help us see about these technologies that the AI framing does not?
Yeah, thanks. So first of all, no shade to actual parrots. In the phrase "stochastic parrots," we're talking about the English verb "to parrot," which means to repeat back without understanding. And "stochastic," aside from being fun to say and sounding fancy, means randomly, according to a probability distribution. This was meant to counter the narratives we were already seeing in late 2020, when we wrote that paper, where people were claiming that language models were understanding and doing much more than they actually did. We were just starting to see language modeling take off, which, by the way, is a very old technology. It goes back to the work of Claude Shannon in the 1940s, and to Markov before that. Basically, it's the idea that you can model the distribution of word forms in text, and that can be useful. It's an incredibly useful part of spell checkers, right? You two are probably old enough to remember when spell checkers went from just putting the squiggle under any word that wasn't in the dictionary to being able to catch when you wrote the wrong version of "there," because it's the wrong one in context. That's language modeling: is this a plausible sequence of English words? But that takes a sequence of words as input. What's happened with the text synthesis machines, and what we were just starting to react to in 2019 and 2020, is turning that around and saying: rather than taking text as input and asking what's plausible here, maybe just have the machine say plausible next word, plausible next word, plausible next word. And people make sense of that; we can't help it. So the idea with the stochastic parrots metaphor was to help people get some critical distance and see: okay, we're the ones doing all the interpretation work here. That's just a very large Magic 8 Ball.
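[A minimal editorial sketch of the language modeling idea described above, not from the conversation: a toy bigram model that picks each next word at random, weighted by how often it followed the previous word in its training text. The corpus, names, and output here are all illustrative; real systems use vastly larger models, but the "plausible next word" loop is the same in miniature.]

import random
from collections import defaultdict

def train_bigrams(text):
    # Count how often each word follows each other word in the text.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    # Repeatedly sample a plausible next word; no meaning involved.
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        nexts, weights = zip(*followers.items())
        word = random.choices(nexts, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = "the parrot repeats the words the parrot heard the parrot repeats"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the parrot heard the parrot repeats the words the ..."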
Right, yeah. And to me, the key difference here is in terms of meaning-making. I'm also, in some ways, parroting the paper you co-wrote with Alexander Koller on the differentiation between form and meaning. These technologies have gotten to a point, and again, I'm not going to attribute them agency here, where they can produce plausible output in the form of text, the form of language, but semantically they are not actually there. There is no sense-making happening by any real, concrete agent; there is no meaning being understood in any way by these systems. Is that right?
Yeah. And you started that by saying that you are parroting our paper, and I'm going to disagree there, because you understood those ideas, you took something from them. That's why you are making them your own and repeating them. This is something I get asked about all the time, especially by people who work in the arts and literature. There's this idea, which I think probably traces to Foucault, that nothing is new, that every time we talk we're referring back to other things. But the point is, we are referring back to something. That's the pairing of form and meaning again. What matters is: someone had an idea, they expressed it, we interpreted it, made it our own, and now we are re-expressing it with reference to them, to that idea in its original context, and to our new context. And that is not just "here's a bunch of forms, likely to come next, likely to come next."
Yeah. And I think part of what drove us to invite you on the show and have these conversations is this question of stakes, right? Because a lot of the discourse around AI hype is financially driven. It's tech executives and media outlets that parrot, to bring that verb back in, expert discourses about AI, in ways that dramatically inflate the stakes around these technologies. So someone might think that the stakes are human intelligence being eclipsed by too-powerful AI systems, everyone losing their jobs, a complete overhaul of our education system, the collapse of civilization, et cetera. So before we start critiquing some specific examples, we wanted to give you a chance to give us your take on what the stakes really are for these technologies being adopted more and more in different realms of society.
Yeah, so I spent some time thinking about what's at stake here, and I like that framing. One thing that's at stake is the health of our information ecosystem. With synthetic media machines now set up so that anybody can go turn on the tap and pour more synthetic media out into the world, we are getting to a point where it is very hard to identify trustworthy sources of information, and then to trust them when we've found them. That, I think, is a huge thing that's at stake, and it's really, really important. I was asked by a French journalist: given that, how do we prove our own authenticity? How do I prove, if I'm writing something, that it was written by a human? I think that's kind of the wrong question. What we can do as individuals is value authenticity. So a news organization can say: we don't use synthetic media, we will not use synthetic images, we won't use synthetic text, and if we need to report on it, we're going to quote it and mark it for what it is. They can state that. And as individuals, we can say: as I'm putting information out into the world, I don't do that. That is one way to push back on the problem with the information ecosystem. But I think we're also going to need regulation, and we're going to need transparency about when we're encountering synthetic media. Another huge thing that's at stake is the re-inscription of systems of discrimination. Abeba Birhane points out that machine learning, this pattern recognition at scale, is inherently conservative, in the sense that it takes the patterns of the past and uses them to create the future. And we know there's a lot that's bad in those patterns of the past. We also know, and we go into this in the Stochastic Parrots paper, that these systems amplify the bias in their training data. Part of that just has to do with the way the math works down inside. But also, you frequently hear people saying that ChatGPT is trained on the entire internet. It's not; there's no way to go somewhere and download the entire internet. And the internet itself is not representative of humanity. Huge chunks of the internet were already synthetic text, there basically to do search engine optimization, and the people who collected these datasets tried to avoid that stuff. But think about who has access to the internet, think about which parts of the internet are being used as the starting points, think about who can comfortably participate and express their point of view without being harassed off the internet, and so on. It's already very much not representative. So, asking what's at stake, we have this risk of systems of discrimination being re-inscribed. There are also questions of labor rights and resisting exploitative labor practices, which are hugely important here and becoming more and more clear through the work of some wonderful journalists; scholars like Veena Dubal are also really great in this space. And then another piece of this is that all of this pattern recognition technology that is presented to us as fun to play with also enables quite a bit of additional surveillance, both in terms of normalizing all this data collection and in terms of what can be done with that pattern recognition. So there's a lot at stake here.
And it has nothing to do with some supposedly artificial alien intelligence turning against us, right? It's all about what people do, and possibly, hopefully, choose not to do, with this technology.
I was also thinking about the necessity of massive amounts of compute power, which you mentioned, another thing that all too often gets overlooked here. People who don't understand the actual material, technological infrastructure behind this don't understand, and you and your co-authors point this out beautifully in the Stochastic Parrots paper, the amount of energy required to train a model like GPT-3, or GPT-4, or any of the other contemporary ones out there. I mean, it's massive, right?
Yeah, it is very, very large. It's not the worst thing we're doing in terms of carbon impact, but it's measurable and worth thinking about. And that is something the Stochastic Parrots paper frequently gets cited for, when people really ought to be citing the work that we're citing in the paper. So I'm thinking of authors like Emma Strubell and others who brought this up in the first place. The one thing we added in Stochastic Parrots was the environmental racism angle. Because when we talk about costs versus benefits, people like to talk about how, oh, there's so much upside to this, it's just in its infancy, which is one of those problematic tropes, you know, let's see where it's going to go. And it's like: well, costs to whom, and benefits to whom? The same thing can be said for a lot of the energy-intensive things that we do, and it absolutely applies here.
Totally. So now we can get into what we wanted to do here, combining our various areas of expertise, with a particular focus on language and argumentation: the way that language is actually used to hype up, inflate, or otherwise proclaim these technologies as the next big thing, the thing that's coming at us and that's going to change our lives fundamentally. AI hype comes in many forms, but there are certainly patterns that recur across the different discourses. We wanted to ask you first, because we have some ideas of our own: what do you see as the most ubiquitous and most pernicious AI hype tropes operating in the media today?
Yeah, so there's a lot. There was one, and thankfully I think this has died down a bit: when ChatGPT first hit the scene, it was, over and over again, intro paragraph to the article, then "haha, that was ChatGPT." I have a standing search on the phrase "stochastic parrots," so I was getting all of these new journalistic efforts from all around the world and seeing that over and over again. It's like: guess what, journalists? This isn't original, and it wasn't actually a good idea the first time. It really harms the credibility of the news outlet if they publish synthetic text and only tell me below the fold, oh, by the way, that wasn't real. The number of times I saw that was astonishing. It seems to have died down a little bit, but I'm still kind of raw. That's one. Another one is to talk about AI systems, or the technology, or the field as being "in its infancy," as I was just mentioning. I think that's pernicious because it draws on our reflexes of nurturing and caring for something that's an infant, and it also displaces accountability: the companies putting this stuff together aren't infants. That one tends to go hand in hand with contrasting real present harms with imagined future benefits, which is also problematic. There I'm drawing on the work of Anna Lauren Hoffmann, who noticed those two tropes and points out how problematic they are. Another one is this idea that progress is a linear race to a known destination, and it's all about how fast you can get there, and whether you'll get there faster than somebody else. Here, I really appreciated listening to Beth Singler as a guest on the Radical AI podcast. She was talking about how you can look backwards into the past and trace the steps that brought us to whatever particular technology you're thinking about today. And then people will turn around and do that same thing into the future, taking a future point from science fiction to trace toward, and assuming that it's given, that it's necessarily going to happen, and it's just a question of how fast we run down that path. And that's not how science works. Science is an exploration, and not just science but all scholarship: different researchers going off in different directions, being in conversation with each other, making decisions, and deciding which directions to pursue. So that one gets under my skin. Another trope is that data is just out there for the taking: if we can grab it, it's ours. Big problem. A very pernicious one is this idea that intelligence is something by which you can rank people, and you can rank machines, and you can put them all on the same scale. Here, Timnit Gebru and Émile Torres are doing amazing work tracing what they call the TESCREAL bundle of ideologies, which links a whole bunch of the ideas behind the discourses around so-called artificial general intelligence with eugenics, with really awful notions of what makes a better person, all based on things like IQ tests, and it's really repulsive. Anytime someone talks about artificial intelligence, or maybe superhuman intelligence, or things like that, they are drawing on those discourses, maybe not necessarily knowing it, but it's there and it shapes the thinking.
There's a wonderful paper by some neuroscientists, Baria and Cross, where they point out that the metaphor that pervades neuroscience and also computer science is bidirectional: the brain is a computer is a brain. And that's inherently very dehumanizing. People will frequently say to me, "people are just stochastic parrots too," and this is basically folks who want so badly to believe that what they've built with these large language models is actually AI that they're going to bring down what it means to be human until the two match. And then finally, there's this trope that technology is moving too fast to be effectively regulated, that there's just no hope, regulators can't keep up, so we've just got to trust Silicon Valley and the tech companies to do the right thing. Fortunately, we're starting to see that discourse, that trope, being resisted by US regulatory agencies, like the FTC saying: there's no AI loophole. We regulate the operations of companies, and if you're using automation to do it, we're still regulating it. So I love that.
Yeah, I was seeing all the news about the different tech companies that are currently being brought to heel, not just by the US government but by governments elsewhere too, for everything from data privacy and data capture to just failing to pay their bills. It's kind of ridiculous the way the emperor's clothes are being revealed to be invisible now. The other thing I wanted to pick up on, based on something you said earlier, and I think it's latent in some of the other tropes you were pointing out, is these specious attributions of agency, where we talk about technologies as if they are acting agents doing these things, like "ChatGPT understands that." You're attributing speech acts or thought acts to a technology, when really that is completely illogical and specious to do. My personal least favorite has become "the AI has seemingly evolved." That's one I see in a lot of different places, where it's sort of like: wait, this is now a biological entity with the ability to grow on its own? And we can talk a little bit about why that's so specious: there is training data, there are human evaluators, at just about every stage of the production of these technologies. That one, for me personally, is the most dangerous, because, and I know you've talked about this elsewhere too, by placing agency onto the technology, you're setting yourself up later down the line to say: oh, but if this technology does something terrible, like it's already done for the eating disorder and mental health crisis hotlines that have had to take chatbots offline because they were giving people bad advice, it was the technology that did that, not the people who made the decision to use the technology or the people who programmed it.
Yeah. And you put that one together with the infancy metaphor: "the technology did that, we have to train it to do better, we have to help it grow up." Like, no. The very best reporting on all of this is the reporting that keeps the people in the frame. Reporting that talks about the people making decisions, the people who created the original data, the people who did the data labeling. When the people are in the frame, the discourse is much more solid.
Yeah, the infancy metaphor is really wild, I just have to say. Not only does it remove agency from the specific companies or developers, it also implies moral goodness, right? Because we have all of these moral associations with infants, with babies: that they're inherently good, and that if we do everything right, things will be good in the future. But it removes the fact that designing the thing in the first place was a human choice. It didn't just spring out of nature.
Yeah, absolutely. And I love this conversation, because this is a lot of what I do as I'm trying to deal with AI hype, coming from a linguist's perspective: saying, okay, what presuppositions are being brought in here? What do I have to go along with to even make sense of this? And it is pernicious when stuff is brought in as a presupposition, because you actually have to work harder to deny it. You have to say: no, I'm not going to go along with that, I'm going to take issue with the way you framed that, before I even answer the question or try to refute the claim that's there.
Yes, 100%. I really thought that was such a succinct way of putting a finger on what bugs me the most about this. You talked about this earlier with Timnit Gebru and Émile Torres's work dissecting, and trying to help people understand, the TESCREAL ideology, which we could do a whole episode on; we'll probably refer out to some other sources, because I don't know that we could do it justice today. But it really gets at this notion that there's a kind of teleology to a lot of AI hype, where it's presumed, oh, we all want the same thing, we all want this future that AI is taking us towards, or that these technologies are going to hasten, and it's never questioned whether people actually want that. Which brings us to a couple of the framing strategies we wanted to talk about today as well. You sent us some really good particulars; these are a little more general, but we can see them operating. We're pulling this from the work of rhetoricians like Jeanne Fahnestock, who, in her work on accommodating science, the way science often gets talked about in public media, describes what she calls the wonder frame: framing a technology as ushering in this "wow, look at this big, flashy thing that this new technology can do." She also emphasizes the practicality frame: here are the nine cool tricks that generative AI can do to improve your work. Both of those are not as extreme as what we might get from the TESCREAL set, but they do also imply a kind of teleology, an endpoint we are all heading towards, something that is supposed to impress us or get us really excited about a practicality, when we don't even know if that's what we want. And that's what we wanted to do next with you. The reason I'm setting this up with those two frames is that we have a textual artifact we want to look at with Emily here, and critique the different tropes of AI hype we see going on in it. The one that I think will be interesting, especially to our audience, pertains to synthetic text generators and writing. The article I want us to look at is called "Why I'm excited about ChatGPT: here are 10 ways ChatGPT will be a boon to first-year writing instruction," by Jenny Young. I'll just go paragraph by paragraph; feel free to stop me at any point you want to flag something that's going on. Calvin and Emily, feel free to weigh in, because I think we're going to have a little bit of fun going through this. And I should say, off the bat, I want to be collegial here: I don't know Jenny Young, but this is also something I've heard from a lot of people in writing and rhetoric studies, as well as elsewhere across the university. It is not just Jenny Young who believes this.
Anyone who went to 4Cs this year heard this. Yeah, yes, yeah.
Anybody who went to the Conference on College Composition and Communication certainly heard this. So here we go. "Since the launch of ChatGPT in November, many faculty members in higher education have been worried about how use of the artificial intelligence text generator will harm student learning. Headlines about the end of high school English, the educational crisis, and of course ChatGPT and cheating have predominated. But from my perspective as a first-year writing program director, I'm excited about how this emerging technology will help students from all kinds of educational backgrounds learn and focus on higher-order thinking skills faster. Here are the 10 reasons I'm excited about ChatGPT." So I'll stop there. Emily, what do you make of that lead paragraph?
So it seems to be partially coming from a place of defensiveness, right? There's another trope, not so much in the media coverage but in the way computer scientists talk about their own work: that they are solving problems, and it doesn't matter where the problem comes from, because they're problem solvers and they can solve it. If you read a lot of papers that use machine learning techniques, they'll start by saying, well, it's really expensive to have people do this, so we're going to automate it. It's always sort of saying: we don't want humans doing this work. And so with ChatGPT, you probably had writing instructors all around the country and the world being told, "you are useless now." This feels like a defense against that. But it's a defense that stays within the frame, rather than more bravely critiquing the ridiculous claims of ChatGPT taking over education.
Yeah. And I would just flag that it assumes, like you're saying, from a posture of defensiveness, that this stuff isn't that bad, we're definitely going to use it, and here's why. The other thing I noticed is this idea that "this will help students from all kinds of educational backgrounds learn and focus on higher-order thinking skills faster." "Higher-order" and "faster" jumped out at me as problematic terms there. We're going to need some definitions of what kinds of higher-order thinking this instructor believes are important for writing students. But also, there's this idea that speed is what we should be going for in the writing classroom. I don't know if I agree with that.
That sentence also previews some stuff that's going to come up lower in the article: "all kinds of educational backgrounds." There's this idea that we have a problem, that not everyone's got equal access to K-12 education, and here's a technological solution for that, rather than actually addressing the underlying inequities.
That's exactly right. And that is actually the first thing that comes up here specifically. So let's dive into number one: "It's going to help level the playing field. Here's the truth about the achievement gap in writing skills. Students who have professional parents and who went through K-12 in higher socioeconomic school districts tend to graduate knowing how to structure an essay and write grammatically correct sentences (for the most part)." And I'm glad she added that parenthetical, because as a writing instructor I can tell you the claim is otherwise false. "First-generation students who went through K-12 in under-resourced, lower socioeconomic school districts do not graduate with these skills nearly as often. Here's the very important takeaway from this disparity: the disparity is environmental, not biological. In other words, the students who know how to structure an essay and write grammatical sentences are not more intelligent than those who don't. I've taught in the suburbs, the inner city, and the rural Midwest, and if I consider the quality of ideas, logical reasoning, and critical thinking that my students submit, it tends to be fairly similar across similar populations, first-year writers in this case. Unfortunately, the students who have not yet learned to properly structure and punctuate will always receive lower grades, because their work doesn't present as well. If we allow AI to help students generate first drafts, and I'm saying we should, then they'll all be starting from a similar place." Ah, yeah.
So first of all, so much effort spent on saying there's an achievement gap, but it's in the environment. I'm reminded of the work of Megan Figueroa, a linguist who has made it something of a career goal to get rid of the so-called "word gap." There's this idea that kids from disadvantaged backgrounds learn less language because they're exposed to fewer words, and she says this is BS, right? Coming at this from a deficit perspective is all BS. But there's so much effort here: we start by talking about a deficit, but we're being very careful to say it's not biological. Gosh, I'd like to live in a world where nobody thought it was biological. But we're still talking from this deficit perspective, right? And more importantly, you get to the end of that paragraph, and it's: rather than having the students start from their own ideas, which she just got done saying are very good, let's start with extruded text that came out of this synthetic text machine.
Yeah, I'm not exactly sure what model of cognitive processes, or just general best practices in writing instruction, comes out of the school of thought that first drafts have to be grammatically correct. That really conflicts with the most established pedagogy in our field.
That's the opposite order of operations from what I've recommended in every first-year writing course I've taught. The other thing is this idea that they'll all be starting from a similar place. Like, look, yeah, they'll all be starting with, as Emily said, extruded text that's incredibly cliched and corny, versus starting from their own ideas. There's just so much there.
Yeah. Number two: "Moreover, just getting that first draft will serve as a teaching tool." I'm not exactly sure how this is different from the way first drafts have been taught heretofore. "Students whose middle and high school teachers didn't show them how to craft some basic essay structures will now receive explicit modeling from ChatGPT and the like. Over time, using such tools as a starting point only, they will begin to internalize those structures."
So again, you two can see my facial expression, which the listeners can't. This is so...
Ah, I'm so sorry for what this is doing to you. Yes. So.
So, one of the points that Timnit Gebru makes, and she's an engineer, is that if you're going to build a system, the first thing you do as an engineer is scope what it's for, so that you can then evaluate: does it work well enough for that? Has she, or anybody else proposing this, actually sat down and asked whether ChatGPT gives well-structured essays consistently, no matter what the prompt? This claim seems baseless to me. But also, again: who would want to work on an essay that isn't about their own ideas?
Right, yeah. And again, I don't think I buy the framing about "the students whose middle and high school teachers didn't show them how to craft some basic essay structures." Now we're getting into the territory of proposing a technology to make up for all the inadequacies of the public education system. Nowhere in here does it say that we have to properly fund our education systems. In many ways, you could read this as a cop-out from that: instead of worrying about all those pesky bad middle and high school teachers, we can just get ChatGPT to replace their labor.
Yeah. And I would just add that this idea of seeing basic essay structures, receiving explicit modeling, internalizing those structures, that's genre-based pedagogy, right? It's something that's been around for a long time in first-year writing. But it sounds like they're saying ChatGPT will do the genre-based pedagogy for you, or make it easier, and it's not clear at all that it does that. The only way, I think, for students to internalize those structures is to understand why those structures exist, what rhetorical purposes they serve, and how they make meaning in certain contexts. And ChatGPT seems to strip that context away.
Yeah, absolutely. Crucially, number three here: "It will allow both students and teachers to focus on higher-order thinking." There's that word, Calvin. "Let's say we have a draft generated by AI," and there's the attribution of agency, "generated by AI," "it's going to be grammatically correct, yes, but boring and generic. It's merely a starting place. From there, students and faculty will begin clarifying and concretizing ideas, adding detail and nuance, and rooting out vague abstractions. In other words, we can now get to the important parts of writing much more quickly."
So, ideas from where?
Yes, right. The ideas. Yeah, clarifying and concretizing ideas, but the actual generation of ideas, we can just outsource that to the system. Oh man, the higher-order thinking processes. In contrast, number four here: "It prioritizes the process of revision, which is the true work of writing. If we allow students to recruit AI," and I love that, that's an interesting conceptual framing, recruiting AI, "to assist in initial drafting, then we get to start with revision. This will be a game changer. Anyone who's ever taught first-year writing knows it's not that hard, nor that productive, to get students to spit out a first draft. It's extraordinarily hard to get them to revise that draft, and to understand that this is where the good writing begins. If we use AI effectively, then we can skip a lot of nonsense and immediately begin meaningful writing practices."
Alright, so this is wrong about teaching first-year writing. In my experience, the exact opposite is true. The students who are having the most trouble have trouble getting a first draft down, because they don't know what they want to write about, or maybe they haven't fully bought into the course, and you have to have those conversations with them to help them with the invention process, with creation, with buying in and understanding the stakes of the writing task. So yeah, I just think this is totally backwards. And I think idea generation and invention are really under-theorized by people writing in this practical frame about ChatGPT. They've bracketed how that process works so that they can say it will be so helpful to you.
And it also seems inconsistent with what she's saying; this paragraph is inconsistent with itself. She's saying it's easy to get students to do this, and you've just said that's false. But so therefore we should get the AI to do it instead? Like, what's the win there?
Yeah, that's a good point. It's hard to parse because of that self-contradiction. But I also wanted to say, from a rhetoric perspective, we don't have to cleave precisely to the traditional canons of rhetoric, but invention is obviously one of the most important ones. It's first for a reason: it's the most murky and mysterious, and it's one of the things that everyone from cognitive scientists to linguists to rhetoricians has been trying to interrogate. What are the different ways that we invent ideas? Or, if you want to pick apart that framing, we can, but how do we develop meaning and put it out there in our own voices? To strip that away from students is actually doing them an incredible disservice, in my mind. Number five: "It will better prepare students for the world of work. Many professionals are already using AI daily. We are obligated to prepare our students to use these technologies professionally."
So there's the inevitability narrative, right? There it is. Yep, yep.
Inevitability, inevitability. And also the idea that the point of teaching students to write is to prepare them to be good workers, and that there's no distinction between those two things: if writing classrooms are doing that, then they're doing well.
That's right. That's exactly right. Number six: "It will allow for better collaboration between students and faculty. Using AI depersonalizes the first stage of the writing process and gives the team, whether it's students or student-faculty, a common artifact upon which to begin work." I mean, like...
So, "depersonalizes." The whole reason you get invested in writing is that you're expressing your own ideas. And I have to say, my initial reaction to the whole "students are going to use ChatGPT to cheat" thing was: the only thing I do when I'm evaluating student writing is give them formative feedback on how to refine what they're doing. If they ever gave me something that was synthetic text, it would be an enormous waste of my time and completely useless to them.
Yes. This one's just kind of baffling. It sounds like it's saying this is good because it depersonalizes, creating a common artifact we can all look at. I guess the idea is that your instructor won't be offended by what you submit for the assignment, because it's depersonalized? I don't know why you would want a more depersonalized writing classroom. It's really supposed to be a place where we nurture, not to use some of the same metaphors used for the technologies, but where we nurture students' thinking and their expression. And that is personal. You can't take the personal out of that.
I was gonna say, Calvin, that metaphor actually works here, because you're talking about nurturing a human being. Yeah, exactly. This one here, I think, is the one that grated on me the most. Number seven: "It will improve linguistic skills by forcing what is now being called prompt engineering. This works on two dimensions: for instructors, because we have to account for AI in assignment design, and for students, because in order to effectively get AI to generate applicable content, they will need to practice and tweak the prompts they feed it." So, prompt engineering. We should... oh yeah, go ahead, Emily.
I couldn't even work out what "linguistic skills" meant in this one.
I was gonna say, and you're a linguist! This is why we needed to have a linguist on, to show us that, yeah, "linguistic skills" doesn't mean much here. Improving linguistic skills through prompt engineering, which is just basically tweaking the prompt that you give a generative synthetic text generator to get it to spit something else out. I've heard that talked about now as a workplace skill. I don't know if I buy that, and I certainly don't buy into the inevitability narrative surrounding it. But again, the "linguistic skills" there are not very clear. Number eight, ooh, this one's enticing: "It will make grading faster and easier. Imagine how much more quickly you could assess for logic, structure, and flow if you weren't tripping over fragments and comma splices, or getting lost in one three-page paragraph."
Imagine how much more quickly you could assess for logic, structure, and flow if you weren't wasting your own time nitpicking punctuation and paragraph structure.
Right, yeah. This is actually more of a critique of this person's ideas about teaching and what effective writing teaching looks like. But it's also presuming that there's nothing meaningful to be made of the way a student comprehends a grammar, which in and of itself can be a fascinating exploration of how to structure your voice. To say that's something that should be automated away, I think, is also a kind of spurious notion.
That's what I was going to add. It implies that grammar is not important, and I have students who specifically request instruction on the finer points of grammar. And of course, there are all kinds of rhetorical aspects of different grammatical forms, right?
Yeah, exactly. And I think, sometimes, even when I'm assessing writing, and I'm in a different space because I'm looking at research papers in linguistics, there really is value in focusing on the craft of the structure, and other times you want to be focusing on the ideas and the overall argument. It's all about the student: what they need, what's being focused on in the instruction at that point. And if the student doesn't do that practice, if they always say, well, I'm just going to take some synthetic text that doesn't express my own ideas and then concretize those ideas, then there's also that lost learning opportunity.
Exactly. Number nine goes off of number eight: "It will make grading more authentic. For the same reasons as number eight, you'll be able to focus on assessing the important stuff." So yeah, again, this de-emphasizes the teaching of grammar as something that is meaningful, something that has a contextual richness to it. I again don't really agree with that one.
On "assessing the important stuff": I think in a lot of cases, as the instructor, you're going to be doing what Emily mentioned earlier, which is that you can tell immediately this is synthetic text. So you're not going to assess it; you're going to say, resubmit this, not in synthetic text.
Right, right. And then finally, number 10, here's our nice little moment to end on: "It's going to force all of us to revise our curricula in a way that's going to be more creative, more generative, and more fun." It's going to force it to be more fun! "I truly do believe this. It's going to be work, but it's work worth doing." So I don't know, Emily, do you think it's good that this is going to force our curricula to be more creative, generative, and fun?
So the most charitable reading I can give this is that it can be worthwhile to revise curriculum. Having something that says, oh no, I need to really change how I'm doing this, could be valuable. But it wouldn't be to work with ChatGPT. My initial reaction, when I was asked back in December, you know, what happens when students cheat with this, was: if someone is cheating with a synthetic text machine, the problem was upstream. The problem was that we didn't create conditions where the students had time and motivation to actually dig in and do the work. So revise curricula, sure, but not to embrace text synthesis as the heart of it.
Yeah, I feel like I've gone red in the face saying that too, and trying to be polite about it, telling other faculty that if you have a writing assignment that is easily hacked by a synthetic text generator, it's probably not a very good writing assignment. Again, I can find kinder ways to say that, but I appreciate you reinforcing that point, Emily, because it is in line with what I believe about it, too.
And I would just add that in our previous episode with Scott Graham, one of the things he pointed out, and one of the things I've heard from my students as well in technical communication, is that using these things is not easy. It's actually incredibly labor-intensive, because the output that you get is so shitty so much of the time. So the idea that this is just going to be fun, and we're going to revise our curriculum and it's all going to be fun for student and instructor, I don't think it actually works that way in practice.
And it's free to access for now, but OpenAI could start charging for it at any point. And if you've redesigned your whole curriculum around students using it, then you're going to have to redesign it again.
That's exactly it. That's also a point that does not get talked about often enough: whenever you yoke yourself to a new technology, that technological infrastructure could change. There is nothing saying that this product's funding structure or access structure couldn't be changed at the drop of a hat, and then you'll have to think on your feet, or, yeah, get an institutional license for it. I think there's a lot of danger in accepting that. But just to close us out here, I want to go back to what Emily said earlier, that this article feels like it's coming from a place of defensiveness. The general takeaway I'm getting from this conversation, and that I want to impart to other writing instructors, is that many of us were trained in how to do good writing pedagogy, and we know that at the end of the day, this shouldn't radically shift the way we think about our pedagogy, because it fundamentally is not based on good practices. Trying to force-fit our pedagogical style into the parameters of this new technology is not only reinforcing the inevitability hype, that it's going to be everywhere so we have to play into it, or that we have to do this to get funding for our writing programs. I really do think there's danger in it, because I don't think our writing instruction is going to get better with it.
Yeah. And there's no reason to expect that it would come from good practice, because that's not what it was designed for. It's not based on what's known about how to do this. I mean, I assume there's a body of knowledge there; it's yours, it's not mine, but there's a literature, there's scholarship there, and that was not an input to the creation of this product.
That's exactly right. Yeah. Anyway, that is our critique of "10 ways ChatGPT will be a game changer for first-year writing." We want to say thank you very much to Emily M. Bender for coming on and helping us critique some of the hype surrounding writing-based AI. And since we did this as a kind of read-through tracking the different hype tropes, I also wanted to plug Emily Bender's podcast with co-host sociologist Alex Hanna, Mystery AI Hype Theater 3000, which is now available on podcast streaming platforms, with video episodes available on PeerTube. Is that correct, Emily?
That is correct. Yeah, PeerTube is the Fediverse answer to YouTube. We were initially a Twitch stream, with the recordings up on PeerTube, and now, with wonderful production assistance from Christie Taylor, we're actually rolling out the episodes as a podcast. As I'm speaking to you, episode four has just dropped in the podcast format.
Magnificent. Fantastic.
We will certainly link over to that. So if you want more of Emily M. Bender's perspective on critiquing AI hype, along with the brilliant Alex Hanna, we highly recommend that show. Emily, anything else about where our listeners can find out more about you, or read more things by you?
Yeah, I'm pretty easy to find on the internet. If you search "Emily M. Bender" and add "linguistics," I'll pop up. I have my own faculty page that I still maintain at the University of Washington. It's got all my publications, and there's a media tab where you can find links to all the podcasts I've been on; this one will be there by the time people can hear it, along with the other public-facing media work that I do. And let me tell you, it's been a busy six months with the media.
I can only imagine. And for that reason, we also thank you very much for being so generous with your time and talking with us here on reverb. It was a joy getting to speak with you.
My pleasure. And I'm grateful for the boost of our new podcast and for a chance to reach your audience.
Absolutely. Thank you so much.
All right. And from all of us here at reverb, best to you all. Stay non-synthetic out there, and we will talk to you again soon. Bye bye, everybody. Bye bye. Bye. Our show today was produced by Alex Helberg and Calvin Pollak, with editing by Alex. reverb's co-producers-at-large are Olivia Burnett, Sophie Wanza, and Ben Williams. You can subscribe to reverb and leave us a review on Apple Podcasts, Stitcher, Android, or wherever you listen to podcasts, and check out our website at www.reverbcast.com. You can also like us on Facebook and follow us on Twitter, where our handle is @reverb_cast, that's r-e-v-e-r-b underscore c-a-s-t. If you've enjoyed our show and want to help amplify more of our public scholarship work, please consider leaving us a five-star review on your podcast platform of choice and telling a friend about us. We sincerely appreciate the support of our listeners. Thanks so much for tuning in.