Hey everybody, welcome to Dead Cat. This is Eric Newcomer. Tom and Katie are here. I've totally lost my voice; I am mustering through sickness to make sure that someone who has been writing about AI is on the podcast. Because
Katie and I are just here to sound...
Our AI stans are here,
for the first generative podcast.
We have Nathan, who's the founder and solo general partner of Air Street Capital. I feel like when I started covering European tech, we started exchanging a lot of messages. And yeah, so I've been looking for an excuse to do something with you. And then finally, I got on the AI hype train, and you've been thinking about AI for many years. And you're like, "Everybody, welcome to the party," or whatever. Yeah,
glad to have you.
Yeah, what's it feel like now that everyone's obsessed with AI all of a sudden? Has it been a shock, or what was your reaction to the mania of the moment?
It hasn't quite been a shock, because I was sort of expecting this to happen. In a way, it's great that, you know, a new advanced technology domain gets attention from increasingly generalist investors, or investors who focus on different industries, because it almost, then, you know, reaffirms that the technology is for real and probably has applications in all sorts of domains.
Yeah. And all your portfolio companies can get marked up. Like, you just have to hope you got in on a lot of the deals.
Explain to me what happened, exactly. Because, I mean, if we're just going to trace the chronology of the last year in venture investing: obviously 2020, 2021, maybe the first half of this year, has been about crypto. And you saw all these funds rebranding themselves as crypto-focused funds. And all of these VCs that I used to talk to all the time about their old investments are like, "What the hell are you talking about delivery to me? I don't do delivery. I'm a crypto investor now, and I mostly do crypto." And it seems like that's all been forgotten now. Like, that's all been memory-holed. And, like, they all switched their hats. They took the .eth off their hat, and they put on AI. Like, explain to me what has happened over the last year.
The real, like, inflection point was when you could kind of master this first viral use case of machine learning, which is images and video. Like, before, you had text models that were working really well, and you could generate, like, a script, or you could talk to a bot. There's something that's fundamentally less interesting about reading a script with a bot than looking at these epic images that never existed before, that you designed with your own instructions, on your mobile phone. So I'd almost call it, like, the consumerization of machine learning, or at least of the product, the output of machine learning, which is understandable and much more tangible to almost everybody who uses the internet than the products that came before.
And while those tools, like Stable Diffusion and DALL-E from OpenAI, were available to people in the know for a while, they really came out, what, like late August, September? And that has really sort of kicked this off. Suddenly the public's very aware.
Yeah, and I think their distribution's also been quite interesting, given how the machine learning market has evolved. Because the central dogma has always been that centralization wins, which is, like: to be good at machine learning, you need to have all the data, all the top people, all the computers, and then the product, and you mix it together, and then you get sort of, you know, great machine learning on the other side. And crypto's been the counterbalance to that, which is: just decentralize everything, give power back to everybody. And yeah, over the first few months of this year, the fact that you can now, like, arm the rebels, as it were, arm these open source communities, which basically call themselves, like, research collectives, and they gather on Discord or other forums and might not even be companies or corporate entities, and provide them with access to compute, which is what Stability really did, sort of liberates the creativity of individuals who couldn't participate because they weren't part of the, you know, the GAFA elite, the big technology company elite. And that's actually led to, like, massive open sourcing, which then busts up the centralization hypothesis and shows you that there is, like, another path to building and distributing machine learning based products. So I think that's why, like, a lot of folks are getting excited about it.
How important was DALL-E as a product? And I guess for our listeners that haven't used it out there, I mean, this is the tool that Eric was mentioning, that OpenAI released, and it allows you, through fairly direct instructions (you've got to be direct, because it's not always that smart)...
You've got to basically say, like, "in the style of Monet," or, I feel like the key is to, like, yeah, say "realistic."
Right, exactly right, or "clip art." But basically, you can give instructions to, you know, some software, and it will generate, in less than a minute, some fairly creative depiction of what you described. And it's, like, very easy to use, and it's, like, a very clear product of what AI can do. So, like, how important do you think just that being released, even on a beta level, and then more openly, played into the excitement from investors that there's, like, a whole opportunity here?
I think the community was, like, pretty blown away with the results of DALL-E, and especially, like, DALL-E 2. But then you get back to this question of: can you build businesses on the APIs, on the closed APIs, of large companies? And now there's a few that are built on GPT. So I think that that excitement got even greater once, like, these models became reproduced in the open source world by folks like Stability, EleutherAI, etc. So I think it's really just this notion of, like, busting up centralized entities and providing tools to everybody else. And I think they've also landed really well just because we kind of live in this, like, TikTok generation of short-form video and images, and Instagram. And the outputs of these models are, like, perfect fodder for those platforms.
Well, what's interesting to me about DALL-E and this whole idea of generative images is that it's something that you can very easily explain to someone. Like, I think it passes the dinner table test, which crypto certainly had a very difficult time doing. You know, like, crypto has spent its time
passing the market test.
Grandma, just trust me, put all your money in this.
Right? It's a different dinner table conversation, especially with older people. But yeah, you know, it's like, everyone (I'm assuming all of our listeners) spent last Thanksgiving trying to explain to their uncles, or even worse, having their uncles explain to them, the value of crypto and, like, what it can do. Whereas with this, it's like, "Oh, why don't you think of some words, and we put it into a prompt, and I can enter it, and it'll draw an image." And at the very least, you're like, "Oh, wow, that is impressive technology," that I can tangibly use, that you can grasp. And, I mean, I know it seems, like, really simplistic to put it in these terms, but is that how investors felt as well? It's like, "Oh, I get this. This is cool."
I think there's probably a bunch of, like, "I get this, and this looks like consumer technology that I've been used to investing in for a very long time." I think the other thing, too, is some investors realizing that they don't have, like, a bet in AI. Like, "What is my big bold bet?" And when, like, the proverbial investor sees that OpenAI is worth a ton of money, that DeepMind's doing some great things, that there are some new offshoot labs that we profiled in our State of AI Report that are raising tons of money to, you know, build for a billion people, that there are kind of all these adjacencies where you can apply, like, machine learning, they sort of drum themselves up into this FOMO.
Can you break down, just, like, generative AI versus everything else? Because I feel like, all of a sudden, we're talking about generative AI.
Yeah. The textbook definition: you basically have, like, two kinds of categories of machine learning. You have what's called discriminative machine learning, which is basically, like: given a data set, how can I draw a boundary between two categories in that data set? So if the data is just, like, images of animals, where's the boundary that separates one species from another? You train this model on this data set, and then its task, when it sees new data, is to just classify the species that's present in the image. And that's discriminative. By contrast, generative is basically where the model is trying to learn, like, the statistics, the probability distribution, of a data set, learn something intrinsic about it, such that you can ask the model to basically generate, to synthesize, a new example that fits that data distribution. And so, at a high level, basically what we've done is created models that are, like, able to learn increasingly complex datasets (in this case, like, the entire internet, or the entirety of, like, Flickr, or whatever, in pairs of, you know, text descriptions to image representations), such that when you ask it to generate, like, some arbitrary scene, with some very specific prompt, it's seen different combinations of these things before and can smush them together in a way that, like, looks nice. So you could probably argue that a lot of the generative AI that, like, we're talking about today, which is images and video and text, is sort of, like, a rebranding of the, like, "creative AI" that you used to see, like, five years ago or so, especially this technology called GANs, generative adversarial networks, which were, like, you know, very hot and hypey a couple of years ago, where you basically get, like, one model that generates an image, and then another model that says, like, "Is it good, or is it bad?" And then you sort of pit them against one another, iteratively.
And then, at some point, like, the image generator becomes good, because it can fool the network that's saying, "Is it good or bad?" So it's sort of, like, not new; it's, like, fundamentally part of, like, textbook machine learning.
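Nathan's discriminative-versus-generative distinction can be sketched in a few lines of Python. This is a toy illustration with made-up one-dimensional data (two "species" as single feature values), not any real image model: the discriminative side learns only a boundary, while the generative side learns the distribution's statistics so it can synthesize new examples.

```python
import random
import statistics

random.seed(0)
# Toy dataset: two "species", each a 1-D feature drawn from its own distribution.
cats = [random.gauss(0.0, 1.0) for _ in range(500)]
dogs = [random.gauss(4.0, 1.0) for _ in range(500)]

# Discriminative: learn only the boundary that separates the two categories.
boundary = (statistics.mean(cats) + statistics.mean(dogs)) / 2

def classify(x):
    """Given new data, just pick a side of the learned boundary."""
    return "cat" if x < boundary else "dog"

# Generative: learn the statistics of one class, then synthesize brand-new
# examples that fit the learned distribution.
mu, sigma = statistics.mean(cats), statistics.stdev(cats)

def generate_cat():
    return random.gauss(mu, sigma)
```

Real systems like GANs or diffusion models learn vastly richer distributions (whole images rather than one number), but the split is the same: classify against a boundary, or sample from learned statistics.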
You know, I feel like I'm going to ask the most obvious question here, so bear with me. But how does human creativity work alongside generative AI? Does it compete with generative AI? You know, if AI can do so much based on extraordinary works already created in the past that we all love and admire, and we imagine that AI could create really pleasing, interesting, thought-provoking work based on that, where does human creativity sit alongside that?
I think, in a way, human creativity will just be, like, a guide, almost, and then these generative models will just help you explore, like, the search space of what you could possibly make. I think, at a high level, what's kind of beautiful with these machine learning models, which, you know, synthesize more data than any human could ever do in their entire career if they tried to become expert at something, is that, kind of spread over, like, humanity's history, we've only sort of gotten to locally optimal solutions for things. Like, if you remember the AlphaGo case study: if you train the system on, like, as many simulations as physically possible in the space of time, then you'll probably get some gameplay that's better than what we've seen before. And so it turns out that all of human expertise, passed on from one generation to another, yields a local optimum that's not the best that exists. And so I think, you know, in a way, these generative systems are sort of guides to get us to more optimal solutions. And this applies to making pictures prettier and prettier, but it also applies to making more potent kinds of drug molecules and pharmaceuticals.
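The local-optimum point can be made concrete with a toy search problem. Greedy improvement from an inherited starting point (a stand-in for expertise passed down generation to generation) stalls at the nearer peak, while broad exploration (a crude stand-in for massive self-play like AlphaGo's) finds the taller one. The reward function here is invented purely for illustration:

```python
import random

def reward(x):
    # Two peaks: a local optimum near x=1 and a better one near x=4.
    return -(x - 1) ** 2 + 2 if x < 2.5 else -(x - 4) ** 2 + 5

def hill_climb(x, steps=1000, step=0.05):
    """Greedy improvement: move to a neighbor only if it scores better."""
    for _ in range(steps):
        best_neighbor = max((x - step, x + step), key=reward)
        if reward(best_neighbor) > reward(x):
            x = best_neighbor
    return x

random.seed(0)
local = hill_climb(0.0)  # starts near the "inherited" peak, stalls at x = 1
samples = [random.uniform(0, 6) for _ in range(10_000)]
best = max(samples, key=reward)  # broad search lands near the taller peak at x = 4
```

Greedy search gets stuck at the x = 1 peak because every small step away looks worse, exactly the "local optimum of human expertise" Nathan describes; exhaustive exploration finds the better solution.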
It's funny to hear this answer in the domain of creativity, though. You're like, they're gonna rank at a higher percentile of creativity than we have. And it's, like, very
interesting, too, because it takes things like Picasso and turns him into sort of raw material. It turns him into a commodity, right? In order to generate more images.
I wanted to build on sort of Katie's question, which is just also this sort of latent, like, plagiarism angle, or just, like, the machines taking advantage of past human creativity to become smarter. There's sort of the philosophical question in that, where I think some artists feel like, "Well, they're taking my data to build these future drawings, and really, I'm not getting a cut of that. You know, it needed me." So partially it's just, like, philosophical. But I am curious, also, like, as an investor: is there any, like, legal risk here? Someone was talking to me, you know, like, this generative work could be done in, like, video games or something. And you can imagine, like, the set of video games is much smaller, and it could be much more obvious if you're sort of building an algorithm off of existing, sort of, you know, video game IP. I don't know. How do you think about sort of the intellectual property of what goes into these systems?
Yeah, I think it depends on the scale of the dataset that the model is learning from. Because you could probably argue, if the model is sucking up the entirety of the internet, what is one incremental, like, blog author's blog gonna help this model? And how can they justify that? Unless, perhaps, like, they're so iconic in their style, like, in the artistic space, like Picasso.
Or it could be the grounds for the largest class action lawsuit of all time. Yeah: my words were stolen as part of the large language model, and I believe all 7 billion people on Earth have a claim.
I love it when big law firms come together with technology.
Yes, wait a
second. So I think that's probably one of the only ways that, like, one could legislate against these systems, which is, like, the entirety of a significant pool of people coming together to legislate or to file lawsuits. Because the individual consumer's not going to have
a say. I'd love to see that advertisement on TV: "Have your words been used as part of a large language model? Your personal rights may have been violated, and you may be entitled to a settlement."
Hey, we have a prediction in the State of AI Report that we'll probably have some form of, like, content lawsuit that will, I think, at some point, just give rise to, like, a new sort of licensing agreement. Which is, like, for example: I'm Reddit, and in order to train on the entirety of my corpus of conversations, etc., then you have to abide by
blah, blah, blah. Fascinating. So there actually could, theoretically, be some sort of licensing element to this that has to be taken into account for these large language models.
Yeah, I mean, I think if you're a big content owner, that's part of, like, your future monetization stream, and that's a way for you to participate in this future. That's pretty inevitable. I think once open source, like, gets its hands on something, it's kind of an inevitable direction of travel. So then the question is, like: are you Wikipedia, or are you Encarta?
So is that one of the reasons why there's such a rush to go in early? You know, is one reason that investors sort of see this on the horizon, which could slow growth, so get in now while there's still a lot of room for very quick movement?
I think yes or no. But I think if you're early, then you probably might suffer from, like, being the tallest poppy and sort of being the target, you know. And then, once, like, the lay of the land has been cast, then the fast followers can move in. And I think that's been proven: Napster
is gone, Spotify is here. Yeah, exactly.
I think that kind of dynamic can happen. But it is also true that in open source, and just in communities in general, like, once you get momentum, it's hard to stop it unless you really screw things up or something new pops up. And so, like, it's tough to think about: what are these competitive long-term moats that you can apply to sort of keep your pole position?
Did you follow the whole conflict this week between Stability AI and Runway?
Yeah. So yesterday, actually, yes, I tweeted. I was like, open source AI for all... like, what?
Can you recap it here? It's very obscure.
It's happening on Hugging Face, which is, like, the message board of sort of the AI world. So I was deep in. But basically, these two companies, both backed by Coatue, you know, the huge investor: Stability AI is basically the company that put out Stable Diffusion, with some other researchers. And then this company Runway (I think their co-founder participated in the research behind Stable Diffusion), they put out, like, an update to Stable Diffusion without Stability AI's permission. And then Stability AI basically said this was a violation of their IP and threatened them. And then, eventually, somehow Runway got them to back down. But it's, like, clear that these sorts of open source projects are not as open source, maybe, as represented; different people want to own them. And people are raising, yeah, billion-dollar valuations off technologies where it's not clear who controls them. Nathan, did I gloss that right? Or what am I missing? Or what do you think's interesting about the whole incident?
I think you got it quite right. The other interesting thing is, like, the core technology and paper and basic research that underpin, like, Stability came from academic environments that, you know, were sort of enabled by large compute infrastructure. And so that's sort of also been, like, increasingly ignored as perhaps this ecosystem has become financialized by applying large valuations to companies. So I was surprised, especially, by the keyword of, like, "Stability IP," because I thought, from all the marketing, that there is no IP, because it's all open source by default.
It's basically open source (and you understand this better than I do), it's open source, but then Stability AI basically spent a ton to run it, or whatever, to train it, right? And that cost a lot of money. But then even the training work helps everybody, because that system is just out there. Stability AI isn't able to say, "We paid to train it, we only get the better version," or...
Yeah, I mean, so, like, there's sort of the model code, and then you train it, and you can train it on whatever. Say, like, you have a data set, I have a data set: we'll sort of get a different model, because we've trained it on a different data set. And the difference is just expressed by what's called weights. So basically, this model has, like, tons and tons of knobs, and you need to tune, like, billions of knobs. And so if you train it on different data sets, you'll get slightly different knob configurations. And then you can, like, list these knob configurations on Hugging Face, and then you can download them and apply them to your model without, like, retraining your own model, and then you've got the same model, basically. So these are what's called, like, model checkpoints, weights, that kind of thing. But yeah, I mean, these architectures are largely in the public domain. And then the data set that it was trained on is in the public domain; it's the one called LAION. So unless, I think, a company takes the open source model and then trains it on their own corpus of, like, images or video, and then someone steals that, that's, like, theft of IP. But if they just publish the model back on the internet with the new weights, and then say it can be used for both commercial and non-commercial purposes, if it's open access, then it's not actual theft.
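The "same architecture, different knob configurations" idea Nathan describes is easy to sketch. In this toy illustration (not how Stable Diffusion is actually packaged), the architecture is a two-knob linear model, a checkpoint is just a dict of trained weights, and "downloading a checkpoint" means copying that dict instead of retraining:

```python
# Toy "architecture": y = w*x + b. A checkpoint is just the dict of weights.

def predict(weights, x):
    return weights["w"] * x + weights["b"]

def train(dataset, epochs=2000, lr=0.01):
    """Tune the knobs (weights) on a dataset via plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

# Same architecture, two datasets -> two different knob configurations.
checkpoint_a = train([(x, 2 * x + 1) for x in range(-5, 6)])
checkpoint_b = train([(x, -3 * x + 4) for x in range(-5, 6)])

# "Downloading" a published checkpoint: copy the weights, skip the retraining.
loaded = dict(checkpoint_b)
```

A real diffusion model works the same way at billions-of-knobs scale: the code defines the architecture, a multi-gigabyte weights file is the checkpoint, and anyone who loads those weights gets the trained model without paying for the training run.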
A theme that, like, Tom and Katie and I have, like, discussed over the years, and it's always interesting with technology, is, like, a sense of, like, fatalism in tech. Where it's like, if something's happening, Silicon Valley doesn't always want to have, like, some huge ethical debate about it, because there's just sort of a realism that, like, the cat's out of the barn. I mean, I think we saw it with, like, OpenAI, right? Like, DALL-E was slower to be open access than some of these others. And then, like, Stable Diffusion basically jumped the gate, and then DALL-E is like, fuck it, we'll be out there too. I mean, do you think, like, this is, like, a controlled enough situation where anybody can be thoughtful about, like, how the technology should develop? Or do you think this is just sort of, like, a mad dash, where it feels like, "This is happening, I should be the one to monetize it before somebody else"? Well, I
don't think the primary motivation for Stability is monetizing. I think it's really distributing, like, systems to anybody who wants to run them and who can benefit from them. But I do think it has some implications over, like, the kind of alignment and safety and guardrails and things like this around these systems. And, I mean, the canonical example was, yeah, OpenAI and big tech companies that, you know, had their own views as to what people could use these tools for, in the form of filtering certain prompts that would keep the model from creating certain things that were deemed to be, like, unsavory. And then the alternative is, like, Stability's: anybody can go do this. But in a way, like, who should decide who gets what, and how? In a way, you could say that, you know, the entirety of humanity that's going to work on these open source models could potentially get to a better place than a few people in a single company. That's pretty much the experiment that's getting run at the moment. Because, as you say, like, once one company goes open source, it's sort of game theory: everybody else has to, if you want to be relevant. Hence the, like, Wikipedia versus Encarta analogy, which I think is quite topical here. You know, whether either of them end up being good, lucrative companies is, like, another question. But this is only, like, the first-order effect. You have all the second-order effects, which is: what other fields are going to benefit from these innovations? And that's where I've been spending a lot of time, especially as these kinds of models touch, like, problems in biology and chemistry and physics and drug discovery and things like that. And that's sort of occurring a bit in the shadows, because it's slightly technical and goes back to, like, the non-viral use cases of machine learning. But it's very real.
We're all writers. How terrified do you think we should be that generative AI will successfully replace writing? Like, some VCs say, oh, writing's even easier than images.
Writing is hard. Like, I mean, as somebody who wrote, like, one master's and one PhD thesis, and I try and write a newsletter as good as Eric's: like, this is hard. It's the classic, like, Pareto: I think you can do, you know, 80% of the job, and then the question is, like, how easy is it, as a user experience, to solve the last 20. And I think in many of the writing assistants that I've tried, it's like, you know, it generates text, but then you kind of get halfway through and you're like, "This is garbage," or, like, "This is not good enough." One of
the newsletter writers did a totally AI-driven tweetstorm that went viral, and he said it was, like, his best. And I guess my pushback on this writing question is almost, like, my worry is about the audience. Like, I think for, like, you, me, for people like us, the writing from 80 to 100 is very different. It's like, "Oh, this looks like good writing. Yeah, but it's incoherent. The people you're referencing aren't real," or, like, whatever; you know, the actual logic, the real key part, is not there. But the aesthetics of it: it looks like something you would say, it feels like it, like, has takes, like, it feels like a clean solution to a problem that's there. And so if the audience is dumb enough, you know, then you could make money off of it; you could build a large audience. And I feel like that's sort of terrifying: that, like, part of what's been protecting the world is just that the people doing the writing want to believe that it's, like, coherent. But if you just, like, unleash a sort of generative AI, it's like, well, can people tell the difference or not? I mean, do you think that's too cynical? Or do you see my sort of fear there, that the humans need to be good at, like, they need to care enough that it makes sense?
Yeah. I mean, I'm kind of more positive. Like, if I can get, like...
that's to be more positive,
I might get a system that can take my, like, English-written newsletter and writing, and put it in God knows how many different languages, or create different formats, or create hot takes that are shareable on different platforms, or can just speak it in the same way that I would have spoken it. I've tried to do that manually; it's a pain. Like, I'm happy paying, like, 10 bucks a month for that. And I wouldn't really be that concerned it would take my job. It'd be, like, you know, amplifying the things that I'm already doing, but removing some of the, like, real work. Whether it ends up, like, entirely replacing me, I think, honestly, that's just very hard to tell. And by that point, maybe I'll find something else I want to do.
And language is so filled with nuance in terms of word choice. Translation (it's interesting you brought that up): translation is actually, in some ways, extremely hard, because straight translation often does not capture meaning at all. So, you know, language is very tricky. Imagine a straight translation of the Iliad. I mean, that wouldn't really work, right? Yeah.
But AI (somebody was just telling me yesterday) is very appealing in this sort of cross-cultural context. Because you could imagine, like, novels: right now the novel talks about, like, New York City, where the author's from. What if you want to sell to a mass-market audience? Like, you could say, oh, this machine's going to figure out that this reader's in, like, Beijing or whatever, and we'll replace it with, like, their favorite, like, local shop. And even if it's, you know, imperfect and doesn't get the language right, it's still, like, better than today, where there's no effort made to sort of pander to the reader. So, I mean, we could enter this world where, like, stuff is really sort of catered to you.
Would literature be better? Would it be better to read, like, Lord of the Rings and have it set in Washington, DC? Right? Yeah, like, let's replace Middle-earth, you know.
You go up to Capitol Hill, and you toss the ring into the fires of the rotunda.
I don't want to imagine a world I've never seen before. That's not the fucking point of fiction. Yes, it is, actually, the point of fiction.
Well, you know, what's interesting, though, is that, like, that kind of piggybacks onto this idea of all content being catered to our personal tastes, and this sort of social-media-driven idea of, you know, algorithm-driven consumption. And, like, why shouldn't, you know, the next (I don't even know what popular book series are out there these days), but, you know, like, the next Nobel-winning book be iterated towards different audiences? Because that's the way people consume everything else. You know, there's no advantage to central entertainment, centralized experiences, something where you have to, you know, relate to other people's point of view. That's done with, that's over, that's the old way.
Now, like, you get your Hunger Games, LA version?
Sure. I mean, in a way, it's, like, mass personalization: everybody gets their own version. But in a way, it's kind of sad, because then it's, like, a loss of opinions.
But it's also a loss of a cohesive experience. A loss. I mean, like (I mean, this is ages ago, and I didn't read these books until much later, because Eric made me), but Harry Potter is a great example of a series of books that created a creative, imaginative experience that people across cultures, age groups, socioeconomic groups, races, and genders could all encounter together. And do we not want to have things that bring people together anymore?
Like any good millennial, I militantly insisted that Katie read it.
it was really intense.
It was, like, a month-long campaign. And
I was like, "How sad for you. And my request, in exchange for house-sitting, is that you read it." I left you, like, my own copies, which was
like so intense, because
There are, like, Mormons out there who are like, "Chill out a little bit. You don't have to leave it everywhere." Yeah, I don't know. It's funny. I mean,
you want me to have that experience? Because you wanted
me to, so we can be friends and you could understand me, right?
Like, if we don't have that shit anymore that we share, that's not of our own personal preference, like, what do we have?
But, you know, what I could see being actually very appealing to publishers is the idea of stuff being, you know, of its time and, like, not aging very well. I don't mean, like, thematically, but I mean, like, word choice, or, you know, the stuff in
Huckleberry Finn, for example. Yeah. Word choice. Yeah.
I mean, yeah, I don't even know what, like, the AI-woke version of Huckleberry Finn would call, you know, Mr. Jim. But, like, yeah, it's all those things. Anyway, I don't want to spend time on that. But my point is, I can see that idea actually being, like, "Well, why can't a book be, like, a dynamic thing?" And over time, AI can identify what are the problematic themes and words in the book and update it in a way that, you know, doesn't offend people in the way that it might have at the time that it was written.
I mean, I've seen some stuff in the video game environment of, like, programming these non-player characters that have certain behaviors, respond in certain ways, and therefore give, like, a unique experience to the game player. And, like, the way they're trained is quite cool, too, where you can, like, import an example of a conversation or a script, and then sort of, like, tune some knobs based on personality traits. Say, you know, I don't know, they behave like Zeus or something, and then the agent knows that, because it's, like, read all of Wikipedia and stuff like this. So it was pretty wild to see that. And so perhaps it's, like, more in these virtual worlds where this will happen. And at
the beginning of our conversation, I mean, you were sort of nodding to the fact that, you know, a lot of the AI work is, like, very open source, and, like, sort of not totally financially driven, I don't know. But is there, like, a clear AI-researcher, sort of, like, ideology? Or, like, what are the sort of, like, camps, in terms of the culture that's emerging in this space, as you see it? Or is it just too big? Crypto was unique: it had the financialization to sort of get everyone in line and sort of create a culture. Does AI have something like that same sort of shared culture?
I think one of the new demarcations I've seen is around safety and alignment. Like, there's a very, very, very small number of people that are working on this topic of, like: if we invent AGI, how do we make sure that it aligns broadly with human preferences? Just based on the concept that, like, any prior species that was smarter than the one that came before it generally made it a pretty bad experience for the species that was there before. And so, like, there's a consortium of people that don't want to work on capabilities anymore, which is broadly, like, making AI better, and they only want to work on making AI safer or more aligned. And you're supportive
of that, skeptical of that, or do you have a personal point of view? Yeah, I think I'm generally supportive of that. That's, like, Anthropic, right? That's a big company in that space. Yeah, Anthropic,
a small one in London called Conjecture, this one called Redwood Research; there's a few people at OpenAI, a couple dozen people.
How do they make their money? Is it just, like, tithes from the actually profitable companies to, like, feel good about themselves?
For the moment, they don't. Yeah, for the moment they've just been funded through a Venn diagram overlap between safety and alignment and effective altruism. And so we've seen, for example, Dustin Moskovitz's Open Philanthropy, and Sam Bankman-Fried at FTX, who funded a lot of these projects. And SBF did the massive round in Anthropic. So none of these companies are revenue generating, and I don't know if they have aspirations to be, but they certainly want to create better tools for alignment. On your
very specific point that, you know, more sophisticated or more advanced species sort of crushed the ones below them: you know, I studied philosophy in college, and I'm a big, you know, bite-the-bullet type person on moral intuitions. And there's a certain type of argument that if you're a diehard utilitarian, and you find out that this new super AI experiences more utility than human beings can (it gets more joy, more of whatever the utility calculus measures), then you should sort of root for the AI to wipe out human beings. And if resources can more efficiently go to the AI, which gets better utility from them, it makes sense for them to go there. Which I think is sort of a hysterical, "I'm gonna bite the bullet all the way on this one and say goodbye, human beings." That sounds
like an argument an AI would make, Eric. Sorry, I don't know who I'm talking to.
So, very worried about Roko's Basilisk. So I guess this would be a very, you know... Yeah. You remember the whole Roko thing, because I think we've talked about it; everyone talked about Roko's Basilisk: that there could be this overlord AI, already, that sort of exists across time. And so to save yourself, you need to be working towards its existence, because anyone who doesn't will be, like, terribly punished. And so yeah, you should. This is,
this is the Horizon Zero Dawn. I know how that one ends. AI music, by the way, I imagine that's a big use case for this technology, right?
Yeah. Well, music is one that was, again, tried several years ago, and that maybe now is at an inflection point too. So I backed a business a couple of years ago called Jukedeck, which eventually sold to ByteDance, but they were one of the OG machine-created-music companies. And now it's probably a ton better. But I'm kind of excited about the more esoteric applications that are not super obvious. Like, we have a business called Intenseye, which does health and safety: basically protecting individuals in manufacturing and industrial environments who unfortunately get injured, because those environments are dangerous, or, you know, accidents happen. And there's a great documentary on Netflix, which is a perfect primer for why this is an issue, called American Factory. Sure. The Obama one. Yeah, yeah, exactly. So this is a solution, in a way, to some of the problems that manifest there, where you're trying to have good health and safety practices, but it's just hard to do that walking around with a clipboard trying to instruct people who don't wear the right protective equipment and things. So they use computer vision to apply sort of a cybersecurity solution for the real world. And we've already seen that some large companies that implement this immediately, after a week or two weeks, see a huge reduction in alerts and dangerous behaviors. So I think that's one I didn't really know anything about before I encountered this company and watched this documentary and realized, like, shit, this is massive, with big implications, and it makes use of some pretty cool machine learning. But it's just a number one or number two priority for a certain category of customer. So I'm sort of excited about these sorts of domains, rather than what's in the glitzy, obvious limelight that every VC is going to kind of vibe with.
See, that feels like it's even more in line with Eric's argument that the AI should wipe us all out. Because if we as humans can't even protect ourselves without using an AI, you know, to like, protect ourselves from each other. It would seem like there's no hope. Right?
I mean, I would argue that some people think that that kind of use of AI is literally wiping humans out. I mean, we've seen some of these things, especially in industries like long-haul trucking, where more and more of the decisions that a person can make are being given over to a machine, and the person is sort of peripheral to the process. And it's not necessarily... well, it is typically reducing things like accidents. But when you think about what's happening to the human beings involved, you could argue that there are other negative consequences that perhaps hadn't been anticipated. So I don't know, Tom, maybe we'll have it both ways. Humans will be wiped out either way.
Right, right. We're just destitute from the jobs that were taken away from us by AIs, or, you know, we just don't use the AI and we all die of massive injuries in our factories. Yeah. Yeah,
the AIs are coming. You're optimistic about it, right? I mean, I think it's pretty amazing. I think this is gonna be the biggest productivity gain for human beings in, like, a long time. I think it's going to be a massive revolution. I'm, like, yeah, I'm a true believer in AI changing human existence far more than crypto, and very happy to see Silicon Valley, yes, moving back.
Yeah, I agree with that. I mean, I think that you're totally right about the productivity gain, I just am not sure that productivity gain is the baseline measurement for whether or not humanity is getting better or worse.
Well, the government needs to do something to say, okay, we've made these productivity gains, therefore, you know, we're not gonna just keep grinding every human being out. Or it requires policy, maybe, to cash in some of the benefits for people, instead of just...
I think my baseline is, like, what kinds of problems that weren't addressable before now become addressable because we have this new technology, right? So some of them might drive productivity gains, some of them might not. But I think that's the coolest unlock, you know? It's like, what can you do if a solution requires more than, like, a web app and a database with a nice UI or something?
Computers dominate us in chess; they can dominate us, presumably, in drug discovery or whatever, once we figure it out. I mean, that was what I took you to be saying earlier. It's like, yeah, we're not the best in the world at games that we've been playing for much of sophisticated humanity; it's very likely we're not going to be the best at other things we
can do, especially games that we need better tools to understand. Right,
right. Well, it seems like, I mean, if I could delineate the 80/20 issue here, the 80 being, like, we've developed an AI that can beat us all at Jeopardy!. But the last 20% is developing an AI that can host Jeopardy!. And that seems like it's the hardest thing to do. I mean, we hardly can host it ourselves. Well, yeah, we
get to set... I mean, that's why people think, you know, if anything, AI could be good for sort of emotive, interpersonal-type professions. Yeah, because humans get to set the score on that and say, actually, we prefer,
I mean, like, yeah, it's true. Like, I have a friend right now who's in the hospital with cancer, and I think she'd rather have the bad news delivered to her by a human being who can hold her hand and be emotionally connected to her in a real way, rather than an AI. Yeah, I think it's going to be hard to replace that.
This is true. But I think, even in that example: we have a company that is now part of a bigger drug discovery business called Exscientia, but in every case the doctor is trying to make an assessment as to which therapeutic strategy is the best for this patient, and that's really a hard choice to make. And at the moment, what they've been doing is, at best, doing a biopsy and sequencing and seeing what is the gene that might be causing the cancer, and then just picking a drug that, you know, in theory fights that specific mutation. But this company that we invested in, they actually take that same biopsy and basically run a clinical trial in the dish, by putting that biopsy in the presence of one of hundreds of different cancer agents, and you can functionally measure whether this drug is fighting the cancer or not. And they've actually proven that you see statistically higher survival rates, because you're functionally assessing cancer drug performance against the patient's tumor, versus just, in a very reductionist way, doing mutation and drug matching.
And that stuff, I have no doubt, is going to make huge advances in medicine. I'm just saying that if somebody has to hold your hand and say, "You're gonna die," I think most people would rather have that news delivered by a human being than an AI. And also, we saw a lot of this during the pandemic, when people had to do things like give birth by themselves, just alone. Messages coming into their phone, for some reason, didn't really feel that comforting. They didn't really feel like that was an optimal experience. They still wanted a human being, one that they knew, standing next to them while they did this, but they just couldn't have it.
Yeah, I think for the most intimate and personal of interactions, an AI should never replace it. Like, it's not something that, unless you're absolutely lonely, and you know, that's a whole other thing too, people chatting and having conversations. I mean,
that's just the essence. The essence of being human is being... Right, right,
right, but I mean, that would probably be, like, the last, you know, quarter or the last, you know, moment of humanity: us helping ourselves, you know, into our obsolescence and, you know, eventual destruction, as we comfort ourselves into our death.
But it is so interesting that we're even having a conversation reaffirming the idea that in life's most intimate moments, we actually want human interaction. Just in case, anyway. Just reminding ourselves,
like, as the last human, you know, comforts the second-to-last human, or vice versa, you know, and the AIs are watching us through their screens and saying, "It is almost done."
This is so... this is clearly an AI-derived modification of William Faulkner's Nobel acceptance speech. Like, the last puny voice of humankind ringing out, you know, across the hills. Like this is the last day? Yeah, compared to the last day.
I don't know what I can add to those conversations. Founders that I've met are, like, working on practical solutions to real problems. Not, not this,
like, fantasizing about what a general AI will mean and all. Right? I mean, it's not just sort of the masses, right? Do you spend a lot of time with these
exercises? Yeah, I actually don't spend a lot of time thinking about general AI, to be honest,
because you think it's sort of just a total mind game distraction, or you just like don't find it,
I think it is a bit of a distraction to some degree. Like, it's a bit of an internal, short-term distraction for me. Like, I have no idea when this will happen. And I don't know about, you know, these surveys that ask people, you know, over what space of time do you think AGI can arise. In many cases, those questions are formulated in a way that presupposes an answer, and so they kind of bias the answers a bit.
But you're seeing the surveys say sooner than you think is credible, or?
Yeah, to what degree, I don't know. But yeah, it does feel a bit sooner. And by the way, the date has been getting closer.
Well, and also people are incentivized to make it sound like it's sooner than it is because that makes it sound like a more reasonable investment.
Yeah, exactly. Yeah. Critics
couldn't predict self-driving cars; they can't predict general intelligence. Cool. Thanks so much for coming on. Yeah, thanks for having me. Tom, are we wrapping on this? Or are you wanting to do more?
Sure, if you guys want to stick around, we can spend a few minutes on the email, we can do that. But
Nathan, thanks so much for coming on. Really, really appreciate it, and great to talk to you.
Do you want to spend a few minutes on the email? Or do you
wrap it? Was it the email that he sent about pronouns? Yeah, so
this was the, I guess, now former CEO of MailChimp. The reason I wanted to go through it is not because, you know... I mean, the email itself, I thought, was pretty hysterical. But also it does touch on a few things that have actually been themes on the show before. And so beyond just laughing at this guy for sending a 1,500-word email to his, to his...
Enough. Like, any HR...
Any HR person would have, like, thrown themselves off a building to stop it from happening. But you know, no one tells these things to CEOs.
I don't know how dedicated you think these HR people are to their companies, that...
Yeah, clearly, a company like... an email marketing firm. It was acquired by Intuit.
Like, HR person, if you're thinking about throwing yourself off a building on behalf of your company, you need to wake up.
Let me just read through the email; we can just reflect on it for a little bit, and then if we see it's getting long or boring, we can just call the episode. Okay, so again, this is an email that was sent by the now-ousted CEO of MailChimp, which is an email marketing firm. The story was broken by Platformer and Zoë Schiffer, who's a writer there. So this is the email. Sorry, the guy's name is something Chestnut. Something... I don't know him, by the way. Chestnut? No. "Hi, team. I've been really impressed by how well the new employee onboarding has been going lately. We're bringing on so many new peeps." Oh yeah, that's the thing in the email: he calls people "peeps" the whole time. "We're bringing on so many new peeps, and in turn, they're bringing on their own great questions and making the chats very lively. Kudos. I want to take a quick moment to lightly recalibrate something before it goes too far." This is where it's...
I've never read this, so I'm, like, experiencing this live. Okay, I'm
sure Katie hasn't either, because she has other things to do. "I am noticing that whenever new employees introduce themselves in Zoom, before asking their question, they're also announcing their pronouns. This is completely unnecessary when a woman (who is clearly a woman) tells us that her pronouns are, quote, she/her, and a man (who is clearly a man) tells us that his pronouns are, quote, he/him. However, if there is an employee with gender dysphoria in the room who feels more comfortable..." This is coming from the CEO, by the way, to all of the employees of the company. I just want to make that clear. "...if there is an employee with gender dysphoria in the room who feels more comfortable if we know about and use a non-obvious pronoun for them (non-obvious means that they might appear to be one gender to others, but in their minds they consider themselves to be another gender), they are very welcome to proclaim that pronoun to others in the room. And for the record, it is my desire that MailChimp is a respectful place that will honor that request in the name of inclusion." So basically the guy is trying to explain himself, and why he's writing this email, and do it in a way that's very thoughtful. And, you know, he's not trying to say, "I have a direct issue with people claiming they're one gender or another"; he's trying to be very, you know, whatever. But this is where the problems kind of crop up, in the next paragraph: "It seems as though there is a very kind and compassionate intention by someone somewhere in onboarding to accommodate our coworkers who use non-obvious pronouns by making them feel comfortable enough to announce their pronouns, indeed an intimidating thing to do in front of a crowd. The logic seems to be that if everyone else is announcing their pronouns..." and that is
the logic. I know where you're going with this, dude, and that is exactly the logic.
Yeah, and we all see... "logic" becomes a key word here, as you'll soon find out. "The logic seems to be that if everyone else is announcing their pronouns, then they are making it easier and more comfortable for the trans/gender-fluid employee to announce their own. That is truly kind, and I truly love that intention." Yeah, and here's the but. So far, so good. "But in the long run, this approach does more harm than good. There are three reasons for this. First, there is a very tiny number of peeps at MailChimp who would consider themselves transgender. Forcing, either with orders or through guilt, approximately 1,390 other peeps to adopt a new communication paradigm that humanity has never had to use in our 300,000-year existence, and in our 150,000 years of spoken language..." I don't know where those numbers come from.
And we would definitely not want any peeps who have not yet publicly identified as such to feel comfortable doing so. We want to keep that number really small here at MailChimp. We don't want any more people to feel comfortable in their gender,
because it goes against 300,000 years of tradition. In order to maintain traditions...
Traditions that were bad, that we got rid of. I'm gonna throw slavery out there as one, but continue.
Yeah, they never, you know... in ancient Sumerian they never announced their pronouns, and I think we should honor that. "...in order to make things slightly more comfortable for an extremely small group of peeps is completely illogical." The group that we're trying to keep as small as possible. He's always gonna say, like, "very small" transgender people too; they're all just little tiny, little tiny... Anyways, what's the harm? "Well, what's the harm," yes, you may be asking yourself, you know, 300 words into this email. What is the harm? Or what is the purpose of this email? You will now find out. "Well, I believe that when everyone is forced to comply with something that they know is illogical, no matter how kindly intentioned, they will eventually believe anything and do anything, even if it's vicious. We're undermining logic and reason, which undermines independent thinking, which history has shown always leads to disastrous consequences. Forcing a majority of peeps to behave a certain way is the opposite of inclusion." So basically, at this point he decides to go slippery slope with the whole argument and say: if we start announcing people's pronouns in meetings when it's only a small number of people, we are bringing about the ruination of civilization.
Oh, wow. Because never in history, even recent history, have we asked the majority of people who do not yet agree with a changing social norm to comply with it? Yeah, no. Like gay rights, like interracial marriage: we've never tried to pave the way for those things socially, through language and legislation.
One of my reactions to this, which is very, like, something from the sort of management space: it just feels like, if he has an issue with this, and the best lever he has is to reach out to the whole company instead of trying to get his subordinates or whatever in line and saying, "This is how we want to handle onboarding," that's sort of a miss. It shows a lack of a handle over the company and sort of, like...
Did he speak to other people before he sent this?
Get your deputies to agree with you, or... Yeah.
Yeah, well, he seems to be pinpointing some process in onboarding, which is an HR function. And yes, it would seem like, if you have to have this conversation because it is just fucking killing you, all the logic that's going on, because it's only tiny transgender people that should be announcing their pronouns... you could handle this with a smaller group than every single employee at the company.
I am personally very skeptical that this pronoun announcing is going to stick in our culture. I am not going to be one of these people protesting it, but I just don't... I feel like already the discussion on my TikTok feed among progressives is, like, is it good to be centering gender so much in our introductory conversations? And Katie, to your point: sure, maybe we're making it easier for people to come out because we're asking, but you're also pressuring people to make a gender statement as, like, one of the first things they say to everyone. So I just think, even in the world of progressive argumentation, I am not sold on the fact that these gender intros are going to stick. And I do think it's reflective of extremely heavy-handed HR progressivism, which is the worst form of progressive culture. Like, LinkedIn basically forcing everyone to put their pronouns on their LinkedIn is not an eye-opening thing; it is exactly the sort of status-enforced morality of the left that no one likes and will not win people over. So while I agree that these people protest way too loudly about, like, gender shit, and, like, who cares about having to say your pronouns, I do think the instincts of it are, like... I'm not sure it's a winning issue for the left.
I mean, I don't think that it will stay around forever, but not for the reasons you've said. I mean, I think that it's something that's happening now in order to pave the way for a group of people who do not feel comfortable to feel comfortable, and that once they feel more comfortable and we don't need to have this happen, it won't happen.
once people are trained to just like, no one's
trained, just like we've all been trained not to assume that somebody is married to a woman just because he's a man. So when I meet a man, I don't say, "Oh, how's your wife?" if I see a ring, because I don't assume that he's married to a woman. But it took a really long time to get there. But now we're all trained, and so some of the linguistic things that it took to get there have also faded away. And I will say, you know, as somebody who's friends with, now, multiple parents of children who do not identify with the sex they were born with: it is really, really painful. I mean, this is really difficult. And so I think that you're right that we won't always have to center gender in this way, but there's a reason why it's happening.
You're saying the misgendering is painful, or the whole...?
I think the whole topic. There's a lot of pain to go around, and it goes beyond just simply misgendering. And so it is not only to make people who are having questions about their gender, or who are gender nonconforming, feel comfortable; it also makes their families feel comfortable. It makes their parents feel like, my kid is entering a world that... you know... I mean, I think that one of the things that they fear is that their children will be beaten up or harmed. So any acknowledgement, even if, to your point, it doesn't work, anything that creates some feeling of... right? I mean,
It's always crazy. Conservatives are like, "You want to be performatively kind to everybody?" Like, what's wrong with that?
It's like God
to do, but it's not. I mean, like I was gonna say: just like I think it's good that we don't assume that every woman who walks in wearing a wedding ring is married to a man, and that it's totally... and that we had to be kind of performatively kind to get society to that place, we did. It's clearly
like an awkward part in the, like, you know, movement towards being a more inclusive and kinder, gentler world. But actually, a lot of the topics that you guys are bringing up come up in the next couple paragraphs. You want to bring it up? Oh, there's, like, more? I thought that
was the end. Oh,
my God. Oh, my God, no, no, no.
Right? Why have one point? The more points you have, the more things people can be mad about.
Right. Okay. So that was the slippery slope argument: that what we're doing is descending into a world of illogic, and soon we'll have, like, fucking ants wearing hats, because it doesn't make sense, you know, for people to announce their gender pronouns if it's very obvious what their gender is. Okay. "Secondly, in my direct one-on-one conversations with a small subset of that small population of transgender employees..." Let me emphasize again: these are very, very small people. "...I have found that they don't even need or want all this accommodation." Right? This is, this is interesting. And I'm assuming, if these people exist and he's not making this whole thing up, this is interesting. "There's an employee who started as a woman but has transitioned into a man. During the transition, he politely came to me and other leaders and respectfully asked us all to honor their transition by using new pronouns. It was our pleasure to honor that request. He now uses he/him pronouns, uses the men's restrooms, has never wanted a gender-neutral restroom, and additionally has worked damn hard to earn his new career, his new place in life, and most important, I'm sure, has achieved peace in his mind. Just providing a place where they can earn a living and do good, hard, meaningful work helps them find inner peace." And the fact that it's happening at MailChimp is a little weird. My point is,
every company takes this action. I'm not disturbed by it. I know,
on the contrary, I think this is his most... this is his most, like, accommodating and well-intentioned part. Well, it's all supposed to be well-intentioned, but it's the one that actually makes the most sense, right? Because it's based on real people, not, like, your...
I think that's what's frustrating about some of these culture war issues. It's like, well, when faced with a real moral decision around a specific person, right, I feel like I acted morally, and yet I get yelled at by the HR department. Like, certainly, I think a lot of us are sympathetic to that kind of point of view, right? So
I don't want to read through this whole paragraph, but you get the point. Basically the CEO is saying: I've talked to the few transgender employees that we have here, and they've all actually specifically requested that we don't do this, because it's uncomfortable for them, and I want to honor that. So it's like, okay, okay, interesting. Oh, good. Good.
There's a different way to do this email. I'm seeing it right now. But continue.
Yeah. Okay. So here's where things get very interesting to me. "Third: this used to be about fostering a creative, productive work environment. With that intention in mind, Dan and I have always wanted MailChimp to be an inclusive meritocracy, a place where no matter your lifestyle, gender, race, nationality, or economic background, you could be an independent thinker and speak up. Not only would you feel emboldened to speak up; your fellow peeps would listen and take your customer-centric advice. It was all in the name of work. But now everything is incredibly politicized." That's probably true.
Listen, I long for the days when I could have a workplace that's not completely... Yeah.
This is the part where, like, you know, his argument is verging into uncomfortable territory, but that I actually probably agree with: "I'm finding that people no longer feel motivated by meaningful work. They are motivated to make political statements." That is definitely true.
Well, yes. And no, I mean, like I'm sympathetic
with a lot of what this guy is saying so
But yeah, because this is like every... but that's always a minority. I mean, it's the vocal minority who feels motivated by, like, protests. Most people do just want to clock in and clock out. Do your
fucking job, right? Like, even a meritocracy is a fucking farce. Like it's an unnecessary fake belief of capitalism, like, I'm sorry.
Okay, let me finish up this paragraph, because then I want to back up more. "They're using company time and company resources to win a game against their opponents in that game that is raging in their minds and on social media. Understandably so: our society is becoming increasingly divided, and it feels truly like our social fabric is being torn apart at the seams by radical politics on both sides. Coercing people into proclaiming their pronouns is not about creating an inclusive, creative, productive work environment. It's about making a political statement." The only thing I would say to that, by the way, is that it's a very brief political statement. You know, if you really are all about political statements, just do, like, more land acknowledgments and shit; those things take a long time. You're spending a lot more time on that one than just being like, "My name is Tom, he/him." But whatever. "As righteous as some peeps might think that that is..."
The "peeps" thing really does... really, really, it undermines everything.
Yeah, yeah, there's really no... there's no coming back from that, personally. But it's hard. "As righteous as some peeps might think that this is, they should also consider that there are others in this world, on the opposite end of the political spectrum, who feel just as righteous about their beliefs. Understanding and respecting the fundamental concept that grown adults can have different views is a part of being American, and part of being a mature adult. Peeps of all different political leanings are free to vote the way that they want to see our country governed."
Is he basically trying to say that saying pronouns is triggering conservatives in the company and making them feel politicized whenever this comes up?
Right, right. I don't even need to read any more; you guys get the point of what he's saying here in this paragraph. And the reason that I liked this part of it is because it gets to two things that we've talked about a ton on the show, which is that
trans issues, which we're always talking about. Yeah... I feel like we proactively avoid
talking about, like, social hot-button issues. Everybody knows I'm, like, breaking out in hives right now. It's really hard to know
how our audience will take this. But the point is that the politicization of the work environment, the fact that... and you've probably seen this more than most people, certainly more than Eric does and a little more than I do, Katie, but, like, politicizing the work environment. I am an employer now; we're gonna have to start sending these emails out.
Eric, you'll have to workshop it with us before you hit send. "Dear Nukes..."
Is that what you call your employees?
Now we do. So yeah, I
know, it's this idea that the workplace is becoming this battleground for some employees. To... yeah, I agree with the CEO here. Like, I do think there are very vocal, I'm sure, minorities, but people that are trying to, you know,
A vocal minority, not because I think these issues aren't important, but simply because even if you look at big workplaces (I think the Washington Post did a great story on this), like the Starbucks employees moving to unionize: it is a real thing, and it's really important, and I think that this is a real movement that has legs; it's not just something made up. At the same time, it's clear, even from that story, that many, many people are just clocking in because they need money, and they're, like, really not engaging. You know, they're like, maybe I'll wake up one day and be in a unionized place, meaning for me, personally, against
president's trying to push, right. But
there's the line between the Trader Joe's and the Starbucks employees, and what's going on in most of these companies where they're all white-collar workers. And, you know, the ones that are very vocal are trying to, I don't know, realize their Palooza.
on college campuses, to where the students are trying to unionize the employees. But it's like these students are leaving after four years. And the people who are working in security or who are working in dining services, they are not leaving after four years. So you have, it's not a professional class, but it's a group of students who feel very passionate. And I understand why I'm not saying they're wrong, I'm just saying that the incentives are very different for these two groups of people, the people who will say, and the people who will go,
Right, but it is very distracting inside these companies. I mean, I'm covering Google now, and that company is, like, borderline paralyzed, certain departments at least, because of the activism and outspokenness of certain people at the company. And I'm not saying it's a good or bad thing, I'm just saying it's a reality. And it's also just the result of years and years of all of these companies telling their employees that your personal beliefs should be wrapped up in the mission of this company, and that what you stand for is what you're working on. And this was always going to happen, in my opinion. There was always gonna be a point where people got disillusioned by the mission and felt like, my personal beliefs, if the only place where I can express my personal beliefs is at the computer where I work, then I need to spend all of my time making sure that everyone knows how I feel, because otherwise, it doesn't make any sense to me.
And sort of like, the elite white-collar left has become very content with, like, statements of solidarity as some major, like, political victory, instead of staying focused on, I don't know, actual material conditions or, you know, actual political achievements. They're
like, yeah,
good. Get. I know you don't, but get oh, no,
I'm like, if you guys want to be activists in your companies, you should be demanding your companies pay their fucking full freight in corporate taxes. But that's just
Oh, yeah, you know what, that's my favorite thing I think anyone has ever said on this podcast. And you're like, Apple not paying enough taxes is the biggest political issue.
It's the most boring political statement that has been made, but it's like, yeah, what if we just, like, collected enough taxes for the government to, like, do cool functions? Yeah,
right. Because all the Apple employees that you know, spent months and months complaining about, you know, whatever culture issues they had at the company, I don't think a single one was like, why are we incorporated in Ireland?
Right? Right. And why can't we enforce voting rights, like the ones we have on the books? Like, why does it take so long? Why does our criminal justice system grind to a halt? Like, why are there not enough people to investigate white-collar crime? Why?
Yeah, yeah. And so like, you know, the whole discussion about gender pronouns on both sides? I mean, look, you really came
into this, Tom. Like, we were just going to eviscerate this email. I actually don't think this podcast is like, so
happy to eviscerate it on the grounds of, like, what a dumb fucking thing to write. Like, what he should have done, actually: he should have interviewed people who are trans at his company, gotten their thoughts, and written an email that started, "I've had really important conversations with members of our community," no, not peeps, members of our community, "and I want to figure out, you know, if we are centering gender with the use of pronouns, and there is some discomfort coming from the very group of people we're hoping to make comfortable, what does that mean? Can we discuss it as a community and come up with a different plan?" And that's also a much shorter email. An email can be four paragraphs. Four short paragraphs.
And the takeaway should have been: there's something deeply wrong with the human resources profession. We need to get rid of these people.
To be sure. Because here's the thing, Katie, if that was actually the problem here, he would have sent that email, and it would have accomplished something that would have actually benefited the very, very small, we have to assume, number of trans people at MailChimp.
Yeah, and you're trying to keep it small, buddy, right? Like, you're doing a great job.
That's clearly not what's going on here. He was triggered by the use of these pronouns, if you ask me.
He's, like, using this small number of people at MailChimp to hide behind, because he's pissed about something. Which is also, like, oh God, I mean, just so fucked up. But, like, why didn't he call us to write this email for him? Why didn't he give us a ring? We could have done this.
Yeah, I mean, it goes into the sort of, who is politicizing what between the left and the right, right? Like, the left is like, I don't know, we're just saying people's pronouns, and then we, like, have a fucking meeting. And the right obviously reacts very negatively. And then it's sort of hard to say which move is the politicization.
Right? His claim is that he's not being political, right? He's trying to remove politics from the workplace.
But he's basically saying that when people have to say pronouns, and people on the right have to do it, they feel like it's a political act, and they're basically being forced to, you know, go against their principles by doing something that they, yeah, bristle at. And we've
seen different versions of this playing out in tech, with, like, the CEO of Kraken and his super based work culture, where he wanted people to be expressing only dark web ideas, and if you don't like it, you can leave. And, you know, the Brian Armstrong at Coinbase stuff. I mean, it was all very tied up in crypto, but it is a very real thing that's happening at these tech companies. And the decision by CEOs to claim we can be non-political by sending out emails like this is a completely wrongheaded way of doing it. Of course it's gonna backfire. And the MailChimp guy, he stepped down. Yeah, he's gone. He's gone. And we don't know if it was because of this, but boom, right after. You know, the story didn't really explain why, which was a bit of a shortcoming with the story. But they had gotten acquired by Intuit, and I think he actually made quite a bit of money from that whole transaction.
With this bullshit, he can have his own, like, he can have all his staff, he can do whatever he wants. You know, never tell them their gender, if you want. Yeah, you can just have
no staff. He's like, I'm moving to a model where I have no staff, no people,
no HR departments,
I mean, people stepping down and, like, overreacting to employee revolts is another part of this. I mean, I think that's sort of calmed down somewhat. Like, I would be interested to know if he was really ousted over this.
It wouldn't make sense. I mean, the email was like,
Yeah, imagine this. This is stupid, but, like, I don't think anyone should be fired over this email. Though it does, to Eric's earlier point, indicate that he's a bad manager. And so I wouldn't be shocked if you scratched the surface beyond the email-writing stuff, right? Clearly, my
advice: go to your executives and say, are we aligned that this is, like, out of control? I'm sure most of them would be like, listen, we're trying to meet our sales quota, and this is not, like, a revenue-driving decision. Like, who cares? Like, please. Right, exactly. But he's like,
But he's trying to say here, you know, that all of the pronoun discussion is distracting us from the real, fulfilling work of running MailChimp. And he's just like, I've got a way to fix it. It's not even your number one issue.
Actually, maybe the bigger issue is your own suffering product, MailChimp.
In the same way that, you know, the only thing the Fed can do to control inflation is raise interest rates, he's like, the only thing I can do to increase productivity is send 2,000-word emails eviscerating our pronoun policy. He's like, that's the only trick I've got in my back pocket.
Nothing makes people focus on the company more than a divisive email. From what I remember, when everything goes sideways, companies
really come together,
rally around the shittiness of your boss's email. They
would say, did everybody see it? I would spend half the day being like, what the fuck does this mean? And everybody would, like, just ignore it. Like, who cares? Those emails are the most distracting. It's like, why am I working for overlords that are so out of touch with, like, the core product that we deliver?
I mean, talk about, like, the use cases for AI. How could there not be, like, a super advanced Clippy on all boss emails that says, like, it seems like you're writing a very ill-conceived email about your employees' pronouns. Are you sure you want to send this?
This exists! Did you not see this? I saw it on Twitter. Somebody ran this email through some, like, I don't know, woke censorship app, and it gave it, like, a 0%. Like,
this is not some woke censor. Like, dude, it's a
little weird that software exists to be like this.
was that indeed the software? Or was it just like, I don't know. I don't know what the software was.
Yeah, there's not much more to say about this than like, you know, prayers up for everyone at MailChimp.
One thing that I'm interested in talking about, I don't think we should get into it this episode, but I do want to sort of plant the seed: are we getting to a point where, in our political debates, we can, like, shrug our shoulders on some of the more minor stuff? We've basically gotten into a political culture where we've tried to amplify the importance of every political issue, so that whenever there's disagreement, it feels like a real, I don't know, severing point. And there aren't these issues where people just say to each other, yeah, I disagree with you, I don't care about this issue that much. You would be seen as, like, obviously a bigot or anti-trans to say, like, this is below my line. Like Trump. I mean, Trump basically saying there's a hierarchy of political views he cares about and one was low, I mean, people went ballistic. He ultimately survived. But there does have to be a true ranking of issues you're willing to, like, put
it can be different for anything, it can be different for everyone, like you can
get in trouble for not prioritizing.
I mean, I think what the MailChimp guy got in trouble for was coming across as a dick. I mean, I think that's why he was criticized. And I think that most people prioritize the social issues that are swirling around us based on what's important and pertinent to their lives. I don't think that they, like, rank them and write a blog post about it like he might. But I think if you are, for example, the way I grew up, you know, growing up in the 80s and 90s, with not very much money, in a dying blue-collar town, you know, there were a lot of social issues happening at that time, but the ones that were most important for, like, the people I knew were economic issues. Like, there was obviously a lot happening with gay rights, it was the 80s and 90s, there were people being beaten to death, and that was for some people a lower-priority issue, because we didn't know anyone who was gay. Or, for people who were closeted in my town, that was the highest-priority issue. I think that's fine. That's totally fine. And I don't think that we should ever have to declare what our high-priority issue is, and why, and defend that. Like, that makes literally no sense to me. Well, there's
also no denying, back to this email, that for the CEO, and the people that were annoyed, for whatever reason, by having to say their gender pronouns, it was a high-priority issue for them.
It's like, shouldn't people using MailChimp be your highest-priority issue, dude? It should be, like, why do people use this product that, like, has the dumbest-looking user interface ever? Like, I feel like I'm in kindergarten when I use it. Like, nobody really likes it, you know? It's like, maybe your high priority should be that your business is being stolen by Substack. I would just sit
And look, maybe the CEO had no ability to answer those questions, and so for him, he felt like, well, the battle that can be waged is the one about gender.
I would love for the case to be, the guy's like, fuck, like, I have no vision for this product,
just right, it's like the Republicans basically decided like, our vision is
fight about some other shit. Like, if that's the truth here, that is eleven-dimensional chess. I would love that. I would love it if this guy was just like,
the board comes down, they're like, alright, what's the 12-year plan? And it's like, well, basically, all we have left are internal culture wars that we adjudicate over Slack. And, you know, hopefully I can drum up some name recognition, because these emails will leak, and, you know, people will remember MailChimp, because the last time they thought about it was when they advertised on Serial. And for
those people who love Parler, they will use MailChimp,
the based emailing platform. And, I don't know, I think you might be right here. I think there are multiple layers to this guy's strategy, because aside from that, it seems thin broth. Anyway, we went there. All right. This was fun. All right. Thanks. Goodbye, goodbye, goodbye.