Rob, hello. I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we riff on vibe coding, and we talk about the human and economic impacts of this agentic coding paradigm and the use of AI. How will it impact people just starting their careers and people building real applications? What is the trajectory for AI-generated systems and code? What is the risk of AI slop? All these questions and more are part of this conversation. I hope you enjoy it.
Do you think companies are going to start evaluating people based on their graduation timing around ChatGPT?
The graduation timing? What?
So do you think companies are going to start judging candidates based on the timing of their graduation relative to the timing of the release of ChatGPT?
Why? I don't understand the corollary.
There's a huge uprise of people using ChatGPT to do all of their assignments.
Ah, okay,
And teachers using ChatGPT to generate their assignment curriculum. The quality of education is just ChatGPT interfacing with itself.
Oh, my God, it's funny, because I was going to talk about vibe coding today. I do think that there is a very serious dilemma, even more broadly, on the AI impact on this. Yeah, because a lot of people are graduating this year, and I think it's going to get even worse, especially with coding and things like that. You have to question whether they've really learned that much, right? Is that what you're thinking, that they're just not going to be well-educated candidates?
So there's a mixture of AI slop and the dependence on AI and the known side effect of it hurting critical thinking skills.
I read something about that four days ago.
Yeah, the over-reliance on large language models to do your critical thinking, mixed with the teachers and students both using it. Do you think that companies in the near future will have to start filtering candidates based on that kind of thing, or is it just a never-ending loop?
Well, I mean, you're going to have to have ways to test that people have maintained their critical thinking skills somehow, right? Because this is what's weird: can you imagine jobs in the future that aren't using AI, to some extent, or to a large extent, to improve performance?
I think it depends on the position. Go ahead.
I see a problem, because let's look at the classical model, right? To get senior, experienced engineers, they have to start as juniors. And if you're using AI to do all of your junior-level engineering work, how do you get experienced engineers?
Just to take a counterargument with you: being a junior engineer is a different skill set now than it used to be. The skills that you need to be a successful engineer in this next generation are different than the skills that you needed to be a junior engineer before. I don't know what that difference is; I'd love to spend this time talking about what it looks like. But I'd start from: I just handed you a much better tool to do grunt work, right? The question is, does it replace reasoning? And Tom, I'm happy for you to take the position that grunt work is necessary. I think it's a useful discussion.
Yeah, there's an argument to be made there, certainly, because sometimes when you have to do the repetitive work, you train yourself to be able to, I don't want to say dissociate from the work, but to be able to perform it on a more automatic basis, without having to apply critical thinking in that particular task, freeing up your mind to do other critical thinking in other places.
You're also learning pattern matching, right? When one of my kids was going through college, they would call me up and complain, like, oh, you know, my sophomore year, I'm doing nothing but problem sets all night, right, over and over and over again. I'm not learning anything. No: you're learning the pattern so well that you understand how to apply it to the next solution without giving it much thought, right? I actually do the same thing with dancing. When I'm learning dance, I have to spend time learning the pattern before I can do other stuff. So I agree with you. But what are people doing instead of learning the pattern, or what pattern are they learning, when they're using AI to do the grind? I haven't used it enough to start having that experience, right? As the AI is writing the code for you, what's your brain doing in the meantime?
That is a great question that I don't have an answer to.
Isaac, you brought up this point of, you know, cognitive effort. What was the phrase you used for that?
Dependence on LLMs inhibiting critical thinking.
Depending on them to provide critical thinking. Like, what should I do in this case?
Yeah, it would be as if you went to someone else to answer all of your actual problems that needed thinking.
I have witnessed that type of thing. And I think agentic AI is going to get even closer to that, because you're going to have something that says, what steps should I do to analyze this decision, and it lays out the steps. And then, yeah, go ahead.
There's a good trade-off, because I found my productivity increased by not having to have encyclopedic knowledge of error codes, and by having an LLM able to have, like, a big-picture knowledge of the code base and be like, oh yeah, you have this line run twice, it can't run twice. But, and I'll go on a brief tangent here, the bulk of the code that these LLMs are trained on is not written by, like, the greatest programmers. It's by average programmers, right? And average programmers aren't solving particularly hard problems. They're not doing, like, ridiculous map traversal. You can get, like, depth-first searches, you can get certain recursive algorithms. But if you use it to do those kinds of problems, or subsequent kinds of those problems, you're not going to be able to troubleshoot things that go wrong if you've only ever had AI write those problems for you.
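(As an aside for readers: here is a minimal sketch of the kind of textbook algorithmic code being described, a recursive depth-first search in Python. The graph and the names are invented for illustration, not taken from the conversation.)

    # A minimal recursive depth-first search: the kind of textbook
    # algorithmic code the conversation says LLMs reproduce well.
    def dfs(graph, node, visited=None):
        if visited is None:
            visited = set()
        visited.add(node)
        order = [node]
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                order.extend(dfs(graph, neighbor, visited))
        return order

    # Invented example graph: a -> b, c; b -> d; c -> d.
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(dfs(graph, "a"))  # prints ['a', 'b', 'd', 'c']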
There's an assumption there that the AI has better insight than you, right?
If you don't have insight, yeah,
Gosh, this is the dilemma, right? I mean, it can scan your code base faster, or ingest data and, you know, actually consider more components than a human is considering.
I mean, are we getting to a point where AIs... I think we are at the point where AIs are better at reading X-rays than humans. It's a limited use case, but medical diagnosis... I'm hearing stories about people, and I do not recommend this, putting medical symptoms into ChatGPT and it giving a diagnosis or solving problems that their doctors had not been able to address.
It reminds me of the original IBM thing that would take in those kinds of symptoms and garble it together and give you results. And something that was interesting when I was reading that article was that there was a paired article with it: the medical professionals who gave you potentially the best diagnoses were the people who were just out of school, because they had everything fresh in their mind. But the people who gave the more unreliable ones were people who had been working for longer, because they had seen more, and the more unusual cases were just that, unusual, so they would respond with the things that were more likely rather than the things that were actually happening. And I think there's probably a mixture of those with AI. Like, I think it can do what you were saying, the big picture, but at the same time...
I mean, I use it a lot. I'll dictate my thoughts on an item, and then ask it to do a summary or a short script, or sometimes I'll ask it to put a graphic together, just to see what it's thinking. And it's like a, you know, funhouse mirror in some ways, right? It gives me a reflection back of what I said. Sometimes it adds some things, sometimes it misses my key points. But, you know, it's about as good as a person would do, just faster. I mean, how carefully do you review the code that it generates?
Usually, I'm having it do, like, completions for menial tasks. So I'll write a comment, and I'll say, I want you to generate this boilerplate based on this content, and then it does that perfectly. I don't have to go and format things and do it a certain way. The autocomplete is usually pretty good. When it's actually generating algorithmic code that's not boilerplate, usually I have to rearrange some stuff. And the thing I was telling you about the other day: it's like an intern. It'll write, like, intern code, and you'll be able to, I guess, kind of code review it. But if you trust the intern that doesn't have architectural knowledge to write everything, you're gonna end up with architectural flaws down the line.
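(As an aside for readers: here is a hypothetical sketch of that comment-driven boilerplate pattern in Python. The Server record, its fields, and the JSON helpers are invented for illustration; the point is that the human writes the descriptive comment and the assistant fills in the mechanical code below it.)

    import json
    from dataclasses import dataclass, asdict

    # Prompt-style comment the human writes:
    # "Generate boilerplate for a Server record with id, hostname,
    #  and ip fields, plus JSON round-trip helpers."
    @dataclass
    class Server:
        id: str
        hostname: str
        ip: str

        def to_json(self):
            return json.dumps(asdict(self))

        @classmethod
        def from_json(cls, raw):
            return cls(**json.loads(raw))

    # Round-trip check of the generated boilerplate.
    print(Server.from_json(Server("1", "rack01", "10.0.0.5").to_json()))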
That's been the concern I've been hearing about. We're moving from vibe coding to agentic coding, by the way, in what I'm hearing people talk about.
Or vibe coding is agent coding?
Agentic coding. There are people who are weirdly making a distinction. Why don't you make a distinction?
So originally, the meme for vibe coding was that someone made a game entirely just by, like, loosely prompting Claude. So they would say, hey, I'm looking to make this game, make it multiplayer, and then it just generated everything through the agent model. The vibe part was that you were less holding the AI's hand and more, like, being the lazy product manager, and you're just saying, hey, I want it to go this way. And I think with something like a game, I assume you can get to a certain point. But something like, I guess, the topic for these calls in general, data center operations: the AI is not going to have that training. Most people running enterprise data centers are not posting exactly what their architecture looks like in any place that's going to get scraped. So its knowledge is just going to be guessing based on whatever articles it has.
Well, I mean, when I look at, you know, will AI replace what we do, which is always an interesting question to me, obviously, as somebody with a material interest, it's an interesting question of how much expertise is baked into what we do. It would be an interesting exercise: what if you told an AI, I want you to build a, you know, RAID and BIOS configuration engine?
I don't think there's enough historical code data for somebody to do that, necessarily. I suspect it would likely build something very narrow and bespoke to solve that problem, rather than... I mean, I guess you could prompt it to keep building it back out until you had a more general system. I'm about to take this one step further, but I'm interested in people's thoughts before I do. Do you understand what I'm saying? Yeah, go ahead.
I think when you say take our jobs, there are, like, five people on this call, probably with wildly different positions. Mine is more of an engineer. Tom, I think, was, like, an architect. You are a CEO, right? And seeing things like the head of security posting JFK files into ChatGPT and asking which of these can be released: some of the jobs, depending on the person, could be replaced. This is something I saw in a video about AI slop: who is the AI for? The question is, who is benefiting from having all of this stuff be AI? So for you, having your entire job be replaced by AI would be beneficial if that's what benefited whoever owns the company. So if you're a publicly traded company, the shareholders would want to benefit from that. If the AI is not doing a good job, then the AI is not going to take that job, right? And an example that they used in this video was art and writing and books. A lot of the people who are making books with AI, the question is, who are these AI books for? A lot of writers enjoy, like, actually the writing part of the process, not just making ideas and having some machine garble it all out. And the people reading the stories enjoy knowing that thought has been put into the actual word choice, not just the overall plot elements that are now, like, linked together by AI slop, as they're calling it.
Yeah. I think AI slop is exactly the right word, yeah.
So for something like programming, if it's okay for it to be milestone-based, kind of like the book analogy: if you're okay with saying, I want this overarching goal and I don't care about the pieces in between the goals, though in some aspects you need to care, like for security, then those positions are easier to replace. If you don't care about how you get from A to B, it's replaceable. But if you care about how you get from A to B, then it's less replaceable.
Are you arguing the craftsmanship argument?
I'm arguing that if you want it to be average, or, like, it's okay for it to be a black box... I don't think the craftsmanship... I think it will matter less and less as we get, like, supremely better AI. I don't think we're gonna have, like, ASI anytime soon without destroying the entire planet. Sure.
Yeah, well, but, I mean, I definitely think we are going to enter a world of handcraft, you know, where there's value in human-crafted, human-touch stuff. Because with the AI, it's sort of like, we've seen this with manufacturing, where you get knockoffs that are much lower quality. There's slop, and we see this all the time. There's a lot of junk clothing, there's a lot of stuff that's mass-produced cheaply, and it has very minimal human thought to it. But, you know, hey, I need a sneaker that looks like this Adidas sneaker, but isn't, and is manufactured for $10 instead of $50? Go, and you can get that. Now we're getting that with, like, hey, Claude, I need a program that will boot and provision Dell servers and install this operating system, create it for me. But I don't know that, over time, there's a lot of value in having a human... whether you would pay a premium to get that same function done by a person, or with people involved, because it's such a business function. I mean, we're not writing software that's new. We're trying to get people out of provisioning a server, but the process by which we do it requires a significant amount of knowledge and context and intent to build a system that is able to do that in a reliable, repeatable way.
Do you think that what we're building is enabling an easier AI takeover of operations positions? Because if you can have a solid API-driven foundation, with content that's built on knowledge that you understand of your own infra, then the human element becomes less necessary, because we are taking care of that human aspect of it.
Ah, I don't know. I think that the current systems that we're replacing are usually bespokely created by a small team of people out of parts, to do things that are very specific to what is immediately in front of them, right? They're trying to solve their problem with their servers to install the current operating system. And so even if they're working on automation, they're not doing automation that has any broader scope than their immediate problem set. They're automating, but they're not really automating. And when we approach that problem, we take a much more standardization- and process-focused approach than the people that we're involved with. And so I don't think that's quite anti-person. It's moving you to standardized process. I have an analogy for this. You might have heard me use it before, but it's appropriate here, and we'll see if we can bend it into the AI generation next. The analogy I love to use is boilers, literal boilers. So back at the dawn of centralized heat, the late 1800s, people wanted to heat buildings, and so they built radiators, and boilers boiled water and flowed it through the radiators. But there weren't any standards for this. So each building would hire somebody, not even an engineer, right, who would build a boiler for that building, connect it up with a whole bunch of pipes, and heat the building. And every boiler was custom-built, right? They'd buy pipes, they'd buy a boiler; each one was custom. And the consequence was they had a lot of fires. They had a lot of exploding boilers. The systems didn't work well. And the people who maintained them were maintaining a bespoke system. You had to know the ins and outs of that system, and so they had janitors who basically maintained the old system. And it's funny, every once in a while you hear a report about somebody, and they have a name for their boiler, and it's some bespoke, specialized system, and it costs a ton of money to run and you can't fix it. What I see us doing is fixing that problem. That was dangerous and expensive and caused a lot of harm, because each one was unique. What happened after that is they had regulations and standards and requirements, and standard boilers and standard pipes and standard fixtures. And now, when you build a system like that, there's very minimal customization. You are putting together standard parts in a standard way. And the more you do that, the higher value you create, because you get standard parts, you get standard results, more people can maintain it. You don't spend time thinking about your heating system, right? If something breaks, you go to the store, you get a new piece. I see us doing that. So if somebody uses Digital Rebar in their data center, their servers become interchangeable. Their operating systems become interchangeable. The way things operate is standardized. They don't have to learn the difference between a Dell system or an HP system. They can just show up and run Digital Rebar. We have abstracted that. That's a lot of words to get there. So I don't see us as, I mean, we're definitely taking away that manual labor, but we're moving it into a higher-value system. Did I lose you? Or does that make sense?
I think it makes sense.
The thing that's weird to me about AI right now is, if you go back to the heating analogy: if somebody said, hey, ChatGPT, design me a heating system for my house... I don't know to what extent it's going to know the rules and the regulations of design, right? Or coding is even worse. Is it worried about best practice and pattern? You know, it's using what it has. But if you're vibe coding, and it's like, oh yeah, I know how to solve this problem, all of a sudden it's out rewriting the operating system, because it couldn't find the library that it should have had, or because you constrained it in some way you didn't anticipate, causing an over-constraint.
Actually, I have a story along those lines for vibe coding. I was reading somebody's notes about vibe coding, and they said, oh yeah, I started vibe coding it, and it was writing it all in Node with this back end. But I added a parameter, and it said, wait, you shouldn't be in Node, you should be using, you know, I don't remember what alternative. And the vibe coding rewrote the whole application, because they changed the target criteria, and now it's rewritten in a whole new language. Yeah. I don't know how I feel about that.
I think that's part of the experience that I had with some of the agentic-type things. It wouldn't really understand why you were doing it a certain way, because it didn't have that extra context, unless you're, like, putting comments everywhere for every single prompt or something. So it will sometimes delete an entire file if it thinks there's something wrong that can only be fixed by deleting the entire file. Or, more drastic in your case, just making the entire thing be a different language, because it doesn't realize that the use case maybe needs that specific language.
So the new agents that Anthropic has released have kind of solved that problem, because they have much wider context, and they're able to, like, view the whole project and take that into consideration. The new models that they're releasing, I mean, just a couple of weeks ago, have solved a lot of those problems with not having a bigger context and understanding some of the pieces, the "so let's just delete this" kind of thing, right?
So the systems are definitely getting better, right? I think that's important. Have you played with them? It's challenging, because we need to assume that this is the springboard, not the end result.
I used Claude to write a whole, like, web app that scrapes local ring times and kind of collates them. And it helped me come up with a way to use, like, Cloudflare Workers and kind of work around some of their limitations. And then for some of the scraping, it came up with ideas, because I was just trying to do a curl on a website to scrape, but, you know, sometimes you need to render JavaScript or do this or do that. It was able to suggest a couple of different strategies, and it was good at rapidly prototyping: okay, we tried this, that didn't work, let's try this other thing. Oh, that works better.
And so I want to pull this all the way back, though, because there's a degree of reasoning that you're applying in what you're doing. You're not reading the code as carefully, but you're still thinking through what's on your mind as you're doing that evaluation.
I mean, my goal is, does the code work, right? So I'm testing to validate. Like you say, I'm not necessarily reading line by line and going, oh, I need to fix this code or whatever. It's more of a: does it provide the output I expect, right? Like, I would expect to see this data structure, but I'm getting that data structure, and how do we modify what I'm getting to what I would expect, kind of thing.
How is that different, and I'm looking for how it's different, from what your reasoning would have been if you were writing the code?
I'm not sure. I'm not sure that it really is, but it's an interesting...
I mean, it's an interesting question from that perspective. And then, if you were going to maintain that code, would you keep involving the AI?
Probably? I mean...
Because this is where I start wondering: if you look at what we do, and you go to the boilers analogy, right, it's really hard to do physical systems. And this is one of the things I think makes us a little bit different, in that we're actually interfacing with physical machines that have constrained RAID and BIOS configurations. And if you do it wrong, bad things happen. So there's a degree of physicality with this, right? If your AI system was going to design your AC or your electrical, it wouldn't be like, hey, I need to upgrade the 20-amp circuits, and, oh, here, let me show you how to redesign your whole fuse box and get rid of everything, right? There are constraints in the system, and I guess you'd program those constraints in. With code, it's much more fluid. It could say, yeah, you know what, I decided that that's a hard way to do it, now that you're having trouble. I'm going to give you a completely new approach. Have a great day.
I mean, I think this is the element of what we're talking about. But am I riding this past the point of utility? Did I lose the thread?
Maybe I did,
because it's hard without somebody having really explored the agentic software. John, you've come the closest, by letting Claude do a fair amount of work from that perspective.
Yeah. I mean, I've been using the Claude agent stuff for, I don't know, a few weeks now. It's pretty cool, because you're basically just running a Python process locally that does kind of the communication. And it's able to do a lot of cool shit, too. Like, I told it to, you know, make a commit and write a summary and all this, and it can do it. I mean, it's basically just running bash commands and looking at the output.
but that's legitimate.
You know, it does all the tedious stuff that I can't be bothered to do.
So how much do you trust it? How do you know to trust it?
Yeah, I mean, I think it's one of those trust-but-verify kind of things, where test and validation become that much more important. And then, do you use the tool to write the tests? Because then how do you validate the tests? So you need tests to validate tests.
And have you tried to have it write tests?
Yeah, it works pretty well. Okay,
Isaac was talking to me about this, about the tests. People who write tests are better coders than people who don't, so the examples of tests in the training data are better.
Is that a... yeah.
that's funny.
I agree with that. I mean, test-driven development was all the rage for a while there. And then it was, well, it's technically test-driven development, but we do all the development, and then maybe eventually write tests.
I have done legitimate TDD, and it's amazing when you do it, because it really does force you to think through what result you want to have, and then, how am I going to verify that result? And then, when you write the code, it usually ends up being easier. And I've done, like, refactoring on it: you basically refactor with much more confidence because you've done the TDD.
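(As an aside for readers: here is a minimal, hypothetical TDD sketch in Python. The slugify function and its tests are invented to illustrate the write-the-tests-first loop being described, not taken from the conversation.)

    import re
    import unittest

    # TDD step 1: write the tests first, which forces you to decide
    # what result you want before any implementation exists.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_repeated_separators(self):
            self.assertEqual(slugify("a  --  b"), "a-b")

    # TDD step 2: write just enough code to make the tests pass.
    def slugify(text):
        # Keep letters and digits; turn every other run of characters
        # into a single hyphen, then trim stray hyphens at the ends.
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    if __name__ == "__main__":
        unittest.main()  # Step 3: refactor freely; the tests guard behavior.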
But yeah, I mean, that's one of the things I love about the Ruby community: a lot of stuff is test-driven, and so most libraries have fairly robust tests. So when you are making changes or making pull requests, you can be confident that, oh, hey, all my tests pass, probably my feature, whatever, didn't break everything.
Yeah, I would hope that people using AI end up building more tests, that they use the AI to build better test frameworks from that perspective. I haven't seen it yet. What I did see is Adrian Cockcroft, one of the architects from the original Netflix days: he was doing some agentic coding and just raving about the experience. It didn't just build the code for him. It actually checked it into GitHub. It built the repos. It did the splitting. Without being asked, it built a preliminary UX. And then he was able to continue prompting it to build more and more complex systems. After three or four days, he felt like he had gotten a lot of code written that met his objectives for his prototype.
I just feel like I'm a get-off-my-lawn sort of grump in this case, in that I don't feel like it's that legitimate.
uh, hmm,
It feels like we might have plumbed this topic to its depth at the moment. Is anybody worried about ChatGPT replacing Digital Rebar? No? I actually believe that there's a material upcoming threat of somebody at a company trying to vibe code a replacement. Our upcoming competition is not going to be MAAS and Cobbler and Foreman. It's going to be an ops person vibe coding a replacement for us. And I think the AI companies are likely to do it first. It will be fascinating to see if they're successful at that. I mean, we've watched people try to build even a quarter of our functionality and take years to do it, and there's not a lot of sample code out there to recreate it. Why, Isaac? Why are you so confident?
I think that the architectural complexity that we have is just so different from other provisioners on the market. I think the Ironic model, where it's just kind of a little stateless service that reaches out to IPMI, and it has its own kind of weird inventory or inspecting mechanic, is probably the max of the capabilities someone would be able to create. It would just be some independent tool, and then they would just have, like, a bunch of those, and they would hack them all together with something kind of awkward. And I don't think it would give you an enterprise product. I think it would give you, like, a not-made-here kind of blob of scripts that someone is banished to maintain.
That, to me, is back to why I gave the boilers analogy, right? How much of the code that AI is generating is 1800s-era boilers versus, you know, modern, reusable product? I don't know. I do have a follow-up: how do we test? How do you know which is which?
like from quality?
Yeah,
There's some big...
This is, like, back to my sneaker, right? How do I know that I'm holding an Adidas versus a knockoff Adidas?
from a product or from an AI perspective?
From an AI perspective, they have these comprehensive, like, test suites that do a whole bunch of automated problem solving and challenges and stuff. There are multiple suites of these, and they bench multiple models against them, and they develop scores for each of the sets, and they just compare the isolated scores.
That's going to test AI product versus AI product. How do you... if we're walking into this brave new world together, how am I going to know, or do I care, that I'm using a service that was written by an AI versus by humans?
I think this is where it's kind of silly. I think there's a degree of... it's a miracle that we have anything. It's a miracle that we've sent rockets to the moon. It's a miracle that we've built these giant bridges. It's really just a collection of many people working over a long period of time. And I think some AI is going to be better than some people, and some people are going to be better than some AI. And I think that if everything is AI, then maybe you will see symptoms, and it will be in the form of, like, a new wave of security being bad. Or maybe it's a new wave of, like, memory issues in a few years. Or maybe there will be an AI that fixes it. Like, imagine something like Rust, but the SAT solver is on every programming language, so there's an AI that looks at all the code and says, oh, this is bad, fix this. And then another AI just fixes it for you. I think you won't know, because there is bad code by everyone.
And maybe that's very human. I mean, this is the AI argument, right? If there's bad code by everyone, then, you know, maybe you're no worse off with AI-generated bad code than human-generated bad code.
And to that point, a lot of the people that we talked to at the Red Hat convention in Boston had been manually managing multiple, like, OpenShift clusters. Why is an AI not better than you at literally never upgrading your firmware and just running commands on three servers?
Oh, you mean having bad practice?
Yes. If AI can write bad code and you can manage an infrastructure poorly, why can't an AI manage it in your place poorly?
Oh, my goodness. You could apply that argument to your college cheating thing: they're cheating more efficiently, but the people were still cheating.
And to bring it full circle: because everyone's now dependent on AI, there's no use in having the actual people if the AI is just doing what they're doing for them. So now the question is, will jobs have to vet for people using the AI, and just hire AI instead?
Or they're planning to hire AI instead. I mean, here's the dilemma. If you went to college and didn't develop your reasoning skills, if you used AI as a way to cheat your way through college and you ended up with a degree that you didn't improve yourself getting, then when you get into the workplace, you are going to find that you aren't bringing value, and you're either going to have to decide to turn your brain on, or you're not going to keep jobs very long. I think AI is going to make it harder... this is the ultimate question, and I don't know about this. Let me ask it like this: does AI make it harder for us to find out that a person doesn't know what they're doing?
I'll answer that with another question. Did you hear about the college student who made an AI for doing code technical interviews?
Yes, yeah, where it would actually do the coding for him. Yeah.
Yes. An enterprising college student who had his diploma rescinded for this, because the company that he applied to, the one whose interviews he tested the tool on, I think it was Amazon or something, reached out to the college, and the college was like, this violates all principles.
He got the job, and then self-disclosed.
Self-disclosed, if I recall, yes, something like that.
And then, like, Amazon contacted the school for an ethics violation. I think they ended up hiring him anyway.
I don't know. I don't think so. But the application monitored your screen. It could listen to the voice and transcribe what the other person was saying. It could solve the problem for you one step at a time, prompting you what to say and how you solved it. And it would move around to make it look like your eyes were moving to different places. He had an avatar of... no, you would use your own face. You would be the avatar writing the code, but it would be a teleprompter, and it would move around on your screen so it looked like you were looking at the problem and not looking at another monitor, okay? And this super-sophisticated piece of software existed solely to fight back against the technical, like, HackerRank code interview. I think that if your interview is easy to solve like that, then you're going to fail the Turing test as the proctor. I wrote a college essay on this, actually, about how the future of Turing tests would be about the person giving the test, the quality of the person, or something like that. But the person that has to run the interview, if they use AI, then it doesn't matter. But if they don't use AI, then, I don't know, you can ask it questions like how many R's are in strawberry, and maybe it'll answer correctly.
Isn't that, like, the whole premise of Blade Runner? They're trying to tell who's human and who's a replicant.
That's true. I mean...
Yeah, the test was the Voight-Kampff test in Blade Runner. It measures your heart rate, blushing, and pupillary dilation in response to emotionally provocative questions.
But I'm gonna keep coming back to: we expect people, right? We, RackN, expect people to use AI to improve their job performance. We're not hiring fresh-outs, so we haven't hit this yet. But for every single employee we have, there is an expectation that they're going to augment their performance using AI at this point, to some extent. How do we know, when somebody is using an AI, and maybe we don't care, whether they've been able to give up reasoning?
Oh, so whether or not they have the path to use the AI?
Whether they're using AI in the right way, right? And the question becomes... there's actually a really good business school study that Laura relayed that speaks to this, of somebody who's using AI effectively and we can't tell. We're like, wow, you're producing great work, this is really good. They could basically be, like, the AI says nod, and they nod, right? It's hard for me to know where we start figuring out if somebody is not performing adequately. We're testing how well they're working on the tasks that we're giving them, and we have ways to judge the quality of the tasks. I expect AI is going to significantly improve everybody's output, and as long as that's happening, I guess I'm okay. But if somebody's truly not doing anything but plugging my requests into an AI, their job is going to be at risk pretty darn fast. If you're the interface between your boss and an AI that then does all your work, you're going to be in trouble from an adding-value perspective. Yeah.
I guess the question is, are you now paying for the prompt engineer, or can you, like, get away with just paying for a better model?
If you, as a prompt engineer, are adding knowledge and experience and rational thinking, then you're adding value. You've done something. If you are literally just saying, my boss asked me to do this work, please do it for me, then I would say you're not adding any rational thought. You're just plugging stuff into the AI, and that's not going to last very long. And I think those college students who are like, my professor gave me this assignment, give me the answer, and then turn in the answer: if that is the behavior that they've learned in college, then they're in trouble. If they used AI to learn how to reason better, or to learn better or explore things, then that curiosity is probably going to help them. I think the line between those two use cases might be incredibly thin
at the moment.
I'll leave you with a thought, with the story from Laura's business school that I found interesting. So there was a post office. This happened, I think, in the 70s, a real case study. In the post office, the mail would come in on a truck, you would sort the mail into bins for delivery, and then people would come get their bins, and then they'd shuffle off and do delivery. And that took like 10 or 15 people. A new guy came into the office who didn't want to interact with anybody else, didn't really take time, didn't learn anything, just said, all right, this is what I need to do. And he would show up and sort the mail, and he was five times faster than everybody else in the office. He didn't want to socialize. He would sort the mail and leave. Nobody liked him, and he was making everybody else look bad, because he was five times faster at doing the same job everybody else was doing. And so the post office employees, all the existing ones, went and complained to the manager about that employee. He wasn't likable. He wasn't getting on with the team. What should the manager do in this case?
I don't tell this story as well, because there are human dynamics in that. A lot of people say, hey, I want the new guy to train the other people, I want the team to get along, and, you know, whatever. Usually, in a case like this, there's a lot of pressure on the new people to work at the current pace. There are a lot of experiments about this, especially in batch work: there's a tendency for people to feel pressure not to work faster than the current mean. The right answer is, ultimately, learn the process the guy's using that makes him faster, and then fire everybody you don't need and use the new process. And I think we're in a bit of a moment like that with AI. The people who are doing the work today by hand haven't done anything wrong, but somebody showed up with a faster way to get the mail sorted.
I think there's more to people than just their performance, so it probably does not end up that clear-cut and logical. There's definitely value in having people you know are reliable, even if they don't use the god tool to solve every problem.
And right now, we're using AI without any real externality cost. Like, the cost of the power isn't baked in, or the cost of the chips. It's mostly being given away for free or at a loss. So one thing that is not baked into this whole discussion is that the actual cost of running the systems is not currently being passed on to the consumers of those same systems.
Are you insinuating that everyone's getting, like, addicted to it before they ramp up the price, and people won't be able to handle withdrawal?
Yes, oh yes. While we're searching for the business model and the killer features, we are getting people used to it. Oh, yeah, these companies all operate at tremendous losses on the assumption that they're going to improve. You know, that's not surprising.
I know. But I keep going back, full circle, to the people who are now dependent on it, and the teachers and the students and the entire infrastructure. It's already expensive as hell to run colleges, or to sell people college. Now they can just make it more expensive.
Well, or you literally just spent a quarter million dollars getting a college education, and the college student was like, ooh, I can use ChatGPT and go spend more time drinking and partying and not thinking about what I'm actually trying to get from the experience. And you walk out with... yeah. And the cost of using those tools was not obvious to you in the moment, because of the way we've monetized them. Indeed, indeed.
Talk about brutal. Yeah, that is going to be very funny or very devastating in a few years, depending on...
Or, part of the built-in assumption is that the efficiency improves dramatically over time, and what they do at a loss today, in three years they can do profitably, because the chips and the process efficiency have improved exponentially. But I still think it's... people can only, you know, I don't know. Mysteries.
Yep. And then there's also the added, like, water cost of running these data centers and everything else. So even if things are a little bit more efficient, there's still physical-location strain. There's, like, a video on the impact of electric grid noise, the noise in the electricity itself, affecting people who live near the data centers, wearing out their... oh, interesting... their hardware faster, because the electrical supply has more noise. Yeah.
that's fascinating,
especially with like groups of data centers, because they can switch on and off their generators faster than the electrical grid will be ready for it.
Oh, my goodness, then you have frequency drift in the middle of all that. Fascinating. All right, well, I appreciate the conversation, and I'm going to wrap up. This was thoughtful, thank you. Goodbye, everybody. Talk to you later. Thank you. You're welcome, Josh.
Wow, this is one of those conversations I suspect I'm going to listen to in five years and just hit my head on the table and think of how naive we were and all the things that we missed. I hope you enjoyed it. If you're listening to this, then either your podcast app is stuck on autoplay, or you did enjoy it. These are really fun conversations. Usually we have these on Thursdays. Please come in and join in. We want to hear what you're thinking and how your job is being impacted by AI. Be part of these discussions. We really enjoy them. Thanks. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast and coming to the discussions and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.