Tyler Cowen, George Mason University | Stubborn Attachments
11:04AM Mar 9, 2021
Speakers:
Keywords:
people
question
book
discount rate
tyler cowen
tyler
crypto
growth
sustainable economic growth
maximizing
vaccine
existential risk
argument
economic growth
agree
thought
future
problem
country
trade offs
I'm very, very excited and happy to have Tyler Cowen join for this meeting. This meeting, like the other meetings in the series, is based loosely on a book that Mark Miller, Christine Peterson, and I are writing; more specifically, it's based on chapter three of the book. Each meeting in the series corresponds to a chapter, and last week we had a really fantastic session with Robert Axelrod, Vernon Smith, and Andrew McAfee, who all joined. So I'm really happy to continue that streak by having Tyler Cowen here today. For context, this meeting covers chapter three of the book. In that chapter, we suggest that civilization, fuelled by voluntary cooperation, is already a superintelligence that is rapidly getting better at serving our goals. In the previous meeting we had Andrew McAfee, who I think is also on the call today, and Vernon Smith join to discuss a few potential drivers of this trend. Now we have Tyler Cowen to add his perspective. The idea is really that noticing progress is the first step if we want to accelerate it or keep it going. Hopefully we can learn a little bit about the potential factors that are spurring this progress, and then have you think about how best to support them as we move more into a crypto commerce ecosystem and include them in many of the applications that many of you are building. In future sessions we're going to talk much more about crypto commerce; today we're still going to talk about commerce more broadly, but I want you to think about how we can best carry those factors into the future. Again, we start chapter three by suggesting that there's a lot of progress happening across civilization: violence is declining, global poverty is falling, life expectancy is increasing, we're rapidly becoming more educated, and so on. Those are just a few cherry-picked examples. I think the crucial thing to note is that all of those developments are positive no matter what values you hold or what goals you pursue. So I think they're really fantastic across a variety of different value spectrums, or at least they should be: more wealth provides us with more time for achieving our goals, more health gives us more energy to do so, more education makes it more likely we'll achieve them, and so on. In chapter two, if you remember, we suggested that civilization follows a Pareto-tropism: just as a plant grows toward the light, civilization grows in Pareto-preferred directions, toward mutual benefit. Tyler Cowen, in one of his fantastic books, actually has a similar metaphor to describe progress. He compares progress to a Crusonia plant, a mythical, automatically growing crop that generates more output every period. So it's really interesting that we both landed on plant-like trajectories. But of course, recently bumps in the road have become much, much more apparent. I won't go into them now, but we list a few of those in the chapter; Alex Tabarrok has written a few things about rising prices, and so on and so forth. We want to hear from Tyler today because in his work he has surveyed both how civilization has been rapidly progressing and how, recently, bumps in the road have become more apparent, especially in America, and what we may do to overcome them and continue to climb Pareto-preferred hills on much smoother terrain. So Tyler, I can't tell you how excited this group is to have you.
Thank you very, very much for joining. I'll share more info about you in the chat, though I think many people already know who you are. We couldn't be happier; please kick us off, and I'll be collecting questions in the chat. Thank you very much for joining.
Thank you, Allison, for the very kind introduction. And hello, everyone. Stubborn Attachments is the book that I spent the most time writing in my life. I worked on it for about 20 years, but never full time. Every year I would spend a month or two trying to make it better, and then I would just put it aside. So it's my philosophical book, and I thought, as a philosophical book, it needed more time. And then, just when it iterated and converged, I finally had it published. I'm very glad to have published the book with Stripe Press. Stripe, the company, is very committed to ideals of progress, and they allowed me to publish the book, with editing, essentially as is. So that was great for me. There are a few geneses of the work. One of them is simply my graduate work in economics, a lot of which focused on welfare economics: how do we know that one policy is ever better than another? Economists typically invoke cost-benefit analysis, which I think is okay as far as it goes, but it doesn't present much of a grand picture of how you might judge whole systems. It's very often quite a static analysis, useful for deciding if a municipality in Denmark should put in a new tram line, but not necessarily political philosophy. The other influence on the book came when I read Derek Parfit's Reasons and Persons, which to me is one of the all-time great books in philosophy. I read that more or less when it came out in 1984, and it left me with a long-standing question: how does this tie into economics and economic arguments? In particular, there's an appendix in Parfit's book where he argues the rate of social discount, with some qualifications, should be zero. We should not discount for time per se, though arguably we should discount for risk. So there was Parfit, there was this thing called welfare economics, and then I was reading bigger-picture thinkers in political philosophy: Parfit, Rawls, and Nozick. And of course, you read these people and you start thinking, well, what are my views? And I didn't find that my views were represented by any of the major thinkers I was reading. Then, way in the background, when I was really young I had read Adam Smith's Wealth of Nations, a book about, of course, the wealth of nations, and a book about economic growth. So, trying to put all of those influences together, I worked on developing an argument where the primary good at the social level is to maximize what I call the rate of sustainable economic growth. The underlying moral theory is what I call pluralist; that is, a lot of different things matter. We, as a group, actually cannot agree on the exact weights, on how much each thing matters and to what degree. But what's the kind of claim we could agree upon? If someone said, well, for a society as a whole, current-day America is in some macro sense a better place than current-day Albania or current-day Congo, on average, I think, I hope, maybe we could more or less agree on that. So that's something we can agree on. What is it we're really agreeing upon? I think what we're really agreeing upon is that a society that is much wealthier than some other society is very likely to be a better place to live. That sounds a little trivial, but you should unpack that intuition, think of it in terms of some axioms, and figure out its other implications.
If your view is like mine, that the future per se matters as much as the present does, then indeed you are led to maximizing sustainable economic growth. Because if you don't, you're forgoing a future that's much richer than the one you're favoring, and that's going to be like comparing America and Albania once enough years pass. Now, just a caveat or two. First, I would stress this notion of sustainable economic growth. So for instance, if economic growth circa 1968 involves too much use of coal, if it's not sufficiently green and we need to worry about climate change, I would not only accept but indeed stress that this is of paramount importance. If the future really matters, we can't be doing things that are going to wreck the future. So it's a bit different from just maximizing the rate of economic growth flat out; sustainability is all-important, and with a zero social discount rate that's going to follow very naturally. Another qualifier I put on the argument in the book, though I'm not sure it's necessarily empirically important: I do think there are absolute human rights, we should respect those, and they are a constraint on what we should do to maximize sustainable economic growth. Now, empirically, I do think that societies which are wealthier have a better record at respecting human rights, at least in current times, so those might all move together. But nonetheless, I am more than comfortable with the notion that there are some Nozickian side constraints. For me, they are not libertarian rights; I mean, taxation per se is not theft. Nor is it the complicated set of human rights that a bunch of UN lawyers might come up with if they spent nine months around a table. Just the simple view that we shouldn't kill or torture innocent people for sport, that there are rights, that we shouldn't enslave people; what you might call the bare-bones minimum that I think, or hope, we as a group could agree upon. There are human rights, and we don't want to break them to maximize growth. If you had to torture a million innocent babies to get the growth rate up a tiny bit, we still shouldn't do it. That's the side constraint in the argument. It's not what most of the book is about, but I'm very happy to have it in there. A final caveat is the way the book thinks about wealth. It's not exactly measured GDP, though again, I think in practice it's actually fairly close to measured GDP for many, but not all, comparisons. There are all kinds of household goods that don't show up in GDP; leisure time would be the most obvious, but there are many others. And again, you might argue, well, they tend to co-move with measured GDP. Mostly; that's not entirely true. But I'll just say, when I talk about maximizing wealth, we do want to value what you would call non-market goods in some manner and compare them to the market goods. So leisure time counts; my recipe is not that we all work 23 hours a day, which, by the way, is also not sustainable. So: GDP and economic wealth truly understood, human rights as a side constraint, and the future really matters. Small differences in growth rates over time make the world a very different place. In the book I give a numerical example; I think it's that if the United States from 1890 to 1980 had grown one percentage point less per year, the USA in 1980 would have been about as rich as Mexico, rather than like the actual US. So even a small increase in the rate of growth, if you can keep it, is going to be worth a lot.
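As a rough illustration of the compounding behind that 1890-to-1980 comparison, here is a minimal back-of-the-envelope sketch. The 2 percent baseline growth rate is an illustrative assumption, not a figure from the talk or the book.

```python
# Back-of-the-envelope compounding: how much one percentage point of
# annual growth matters over 90 years (1890 to 1980).
# The 2% baseline rate is an illustrative assumption, not a figure
# quoted in the talk or the book.

years = 90
baseline_growth = 0.02                  # assumed real growth per year
slower_growth = baseline_growth - 0.01  # the same economy, one point slower

baseline_multiple = (1 + baseline_growth) ** years
slower_multiple = (1 + slower_growth) ** years

print(f"Growth at {baseline_growth:.0%}/yr multiplies income {baseline_multiple:.1f}x")
print(f"Growth at {slower_growth:.0%}/yr multiplies income {slower_multiple:.1f}x")
print(f"The faster path ends up {baseline_multiple / slower_multiple:.1f}x richer")
```

Run over the 90-year horizon, the one-point-slower path ends up well under half as rich, which is roughly the kind of gap the US-versus-Mexico comparison points at.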
So this is a kind of moral framework. It's a way of thinking about politics that puts productivity, economic growth, and sustainability at the center of our thought. It acknowledges that we care about a lot of other things: we care about beautiful art, we care about justice, we care about fulfilling our duties to other people, this complex blooming, buzzing mix of stuff, in William James-like fashion, that we're never really going to fully sort out. What's the closest thing we can agree on? I'll say it's that the USA is better off than Albania. Working backwards, that's the moral theory that gets you there. And once you see it, you really ought to just follow it.
So that's really what the book is. And I just kept on writing it and rewriting it for 20 years, changing things. Early editions of the book had long sections on existential risk, which I think is very important, but I found a lot of other people were publishing the same ideas in other books, so I just took that out. So if you're wondering, what about existential risk: absolutely, that's part of sustainability. But with the book, I wanted to pare out arguments I thought other people were making in other places, starting with Richard Posner, but really a lot of the rationality community, effective altruism, and other groups. There's an interesting paradox in my argument, just to close my opening remarks: what's the stuff that's wrong with my argument? I gave a whole talk at Stanford, there's a 90-minute video on YouTube, of me steelmanning the objections to my book. I don't have time to go through all of that, but if you type into YouTube "Tyler Cowen Stanford stubborn attachments," you'll get the whole talk. But here are two of the objections that I really know I can't handle very well. The first is what you might call the nonhuman world. How do we weigh the interests of human beings and non-human animals? It's far from obvious that economic growth is good for non-human animals. You can debate, you know, pigeons against mastodons, whatever, but I don't really think the interests of animals co-move with the interests of humans in any simple way. So insofar as you might have, say, Peter Singer's view, or other views that weigh the interests of animals, I don't think my framework helps you any. And in fact, I do think animals count in some manner; I'm not sure how. But I think that's a problem for my framework, something my framework is not good at handling. We can come back to that. Here's the other issue. When you talk about sustainable economic growth, kind of in the background, in a sneaky way, there are two different values: there's the living standard you're reaching for, and then the risk that you might not get there. And those will often co-move; you might think, well, wealthier societies have a better chance of surviving. But if you think the true time horizon is very, very long, existential risk will actually predominate as a factor in your calculations. Say you think there are some paths where human society can just keep on going for tens of millions of years; maybe it doesn't get that much richer, but it can just keep on going. Then existential risk is just going to be much more important than maximizing growth; it's the price of just staying in the game. Let's say it's a billion years and you're all over the galaxy, Starship Enterprise, whatever. Existential risk becomes more important relative to economic growth rates. Empirically, that's not my view. My view is that the scope for humans in our galaxy is not billions or tens of millions of years; it's actually more limited than that. So existential risk does not predominate in the calculations across these two dimensions of living standards and the chance of not even getting there. But if you're super optimistic about duration, you're really going to want to look a lot more at existential risk and less at rates of economic growth. I don't think that refutes my framework; it's something my framework can account for. But I think it would make my arguments less novel; they would just collapse into a concern with existential risk.
At the very least, my framework tells you when you want to welcome that collapse. My best guess, as I argue in more detail in my Stanford talk, is that human civilization maybe is here for another 800 years or so. And in the meantime we want to grow, but there's no immense, incredible prize at the end of the tunnel where we just stick around for a billion years; we're too stupid and irrational and mean and self-destructive. Anyway, those are just a few opening remarks. With that, I'll open it all up to you, and I'm happy to address any question you wish on this or any other topic. And thank you all again for coming and listening.
Thank you so much. Well, this was much more than I could have hoped for, and actually much more than I had expected. I didn't know that there was supposed to be an existential risk part in the book as well, and so on. But I want to give it to participants first. Mark, do you want to start with your question?
I guess. Mark Miller. Hi, Tyler. Hi. The contention that we should have a zero discount rate sounds like exactly the kind of narrow, very specific objective value theory that you, that we, have been trying to avoid, and that I take it, in general, you've been trying to avoid. With should statements, we try to stay very neutral among people that value very different things, and only stick with things you take as universals, things that have wide applicability over many different goal structures. So I'm wondering, first of all, how you come to that very specific should claim, because for one thing, it does not correspond introspectively to my sense of values. I have a very low discount rate, I value the future much more than most people around me, but it's certainly not a zero discount rate. And as a creature built by evolution, with my caring structure being the result of selection, it makes sense that evolution would not have created us with a caring structure that would abstract into a zero discount rate. There's also a paradox here, which is, if you assume that in the future there's tremendously more population and cognition, just incomprehensibly more than the total current world population, a zero discount rate would weight the good to us now at almost zero compared to that incredibly larger population. And that puts us in a bad bargaining situation, because we need to get the agreement of people now, with their current interests, in order to set ourselves up for a situation that grows into that future.
There are a lot of different points in there. Let me cover a few, maybe not all of them, but I do discuss most of those in the book. So, if we have a zero discount rate, I don't think the current generation is in what you call the weak bargaining position. The greatest bequest we can give to the next set of generations is simply good institutions and good norms, and that's also what is good for us. So it's not like a Rawlsian savings problem where they get everything and we get nothing. What they want us to do is build a good civilization for them, and that's pretty close to what is best for us. Now, you mentioned objective value theory. I'm a big fan of that; I don't think there's a way around it. Social choices have to be made, and when you work backwards from the social choice that has to be made, you have to have some belief that you're doing the better rather than the worse thing. That's my objective value function, so to speak. If you polled people, which I'm not saying is the standard here, but it's interesting, they very often will present you with hyperbolic discount rates. That is, a pretty high discount rate for the next some number of years, but for the very distant future, decades out there, they hold very low and consistently low, near-zero discount rates. So I'm arguing for a zero rate; again, polls are not the standard, but it's actually not that far away from what polls show. In that sense, I don't think it's so counterintuitive. A simple way to think about my comparison is that I'm saying we should apply zero discounting to well-being or happiness, not to money flows that are reinvested, just to human well-being. So I give an example in the book: imagine that someday, far enough into the future, a billion people on earth have to die of painful cancer because one evening we had an extra helping of dessert. If you have a positive discount rate, there's always a comparison of that sort that will make moral sense and indeed be a good thing. I just find that too difficult to swallow. And when you work backwards from rejecting that comparison, you pretty much have to end up with a zero or near-zero discount rate. So there's more on all those questions in the book and my Stanford talk, but those are just some points I would throw out there.
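To make the discounting arithmetic behind that dessert-versus-cancer comparison concrete, here is a minimal sketch of standard exponential present-value discounting. The specific rates and the 2,000-year horizon are hypothetical, chosen only to show how any positive rate shrinks distant wellbeing toward zero.

```python
# Standard exponential discounting of future wellbeing:
#   present_value = future_wellbeing / (1 + r) ** years
# The rates and the 2,000-year horizon below are illustrative
# assumptions, not numbers from the book.

def present_value(wellbeing: float, rate: float, years: float) -> float:
    """Value today of `wellbeing` occurring `years` from now at discount rate `rate`."""
    return wellbeing / (1 + rate) ** years

future_harm = 1_000_000_000  # say, a billion units of future suffering
for rate in (0.00, 0.01, 0.03):
    pv = present_value(future_harm, rate, years=2000)
    print(f"discount rate {rate:.0%}: present value of the harm = {pv:,.2f}")

# At 0% the harm keeps its full weight; at 3% over 2,000 years it rounds
# to essentially nothing, which is the kind of comparison Cowen says we
# should refuse to accept.
```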
Lovely, thank you. Okay, next up we have Oliver, and then we have Rachel. And perhaps you could say a sentence about yourself, just so that Tyler has an idea of where you're coming from.
Thanks, Tyler, for this wonderful intro. I'm Oliver; I usually live in Berlin, but right now I'm at my parents' place in the Black Forest, enjoying nature a little bit. I have a question around what you said in your introduction about sustainable, or sustained, economic growth that can keep up living standards and reduce the risk that we may not get there. My question is also in combination with the kind of voluntary cooperation that is highlighted in the book that Allison sent around. How do you feel, or think, about how the uncapped investor returns that are prevalent in our society contribute to the kind of centralization that leads to involuntary cooperation? An example would be: I need to stay on Facebook, I need to cooperate with Facebook as a user, because they have gained so much growth and systemic power that it is not voluntary cooperation anymore. And on the other hand, when you have uncapped returns, or investors that drive companies to maximize their profits, that then leaves little economic freedom for all the social responsibility that is necessary, and little account is really taken of human wellbeing, because everything is maximizing toward a single kind of value. So what's your thought about this? How should our society move toward a future with different economic approaches that don't incentivize this kind of centralization of power and value?
I would put it this way: if Facebook or some other company has significant market power or monopoly power, and if you were then to use antitrust against that company and break them up, output would increase, and that would contribute to economic growth. My recipe would call for following those policy actions to break up harmful monopolies, so I have no problem with that. Now, there's a completely separate side question of which companies have harmful monopoly power, and you can debate whether or not that's Facebook. I'm pleased to report to you, I've not used my Facebook page in over a decade, and I'm doing just fine. So I would look elsewhere for harmful monopolies. But if you want to think it's Facebook, fine. The way Apple, say, regulates the use of its App Store seems to me clearly a violation of antitrust laws, and if we did something about it in a wise way, we would have a higher rate of economic growth. So let's do it.
For me, the question is also about how you internalize externalities. Right now there are all these externalities that are not accounted for, like environmental damage, for example. They're not accounted for because there's always this competitive dynamic over where the profits go. Do they go to dividends for investors, or to signaling a share price increase? Or do they go to paying, for example, for increasing supply chain sustainability, or for health care for workers? So it's not necessarily about harmful monopolies; it's more about what the future looks like when we cannot account for externalities, because the profits that could be used for that are put into investors' hands, and that continues, creating at the end of the food chain these harmful monopolies, which are just the most visible part.
I think you want to ask the question of which systems are the most innovative toward boosting the rate of economic growth. And from my personal point of view, and this is not from my book, I'm speaking outside of the book on a very specific issue: I think we significantly under-subsidize science. There are very high positive externalities from good science; we subsidize it somewhat, with very high returns, and I would like to do that much more. Then on environmental issues, a mix of a carbon tax and, again, more subsidization of science to solve that problem; that's another massive issue, and so is handling pandemics. Those are the particular externalities I worry about the most. I don't think they so readily map onto all the standard political categories. You might have other ones, but if you're just asking me what I think about particular issues, those are what I see as the most important externalities, on top, of course, of national defense.
Thank you. Lovely. Okay, we have Rachel next, and then I'm going to post in the chat who else is next. Rachel, one or two words about your background, perhaps? Sure.
So I'm a co-founder and CEO of MotionHall, which is a deal enablement startup; we work to finance and mobilize life sciences IP around the world. We've actually met before through Village Global, which is one of our funders, and I heard you speak with Eric Schmidt, which was really great. My question for you is pretty simple, but I want to dig into a really specific piece. Most of us here are pretty high-agency people; a lot of us are familiar with your work and have read your books. What's one thing you believe that you wish people like us would more often fully come to understand and work to mobilize around?
There are no simple questions; that's the first thing to understand. First, most of you I don't know, though I know a number of you. Thinking of the people I do know, one thing that very smart people I know frequently don't understand is the social benefits of religion. I'm not myself religious, but I find it's an issue where I disagree with a lot of very smart people that I know. Another issue, and I wouldn't say I disagree with people here, because if I bring it up they always agree with me, but I don't find that they prioritize it, is the notion that a lot of things in the world ought to get done more quickly. I think you see that with the early phases of vaccine distribution, still in the European Union, though not in Israel, not in the UK; my country finally has its act together and we're doing pretty well, but we waited many weeks and just did nothing. My hope is that this is a wake-up call, that we fundamentally have institutions built for the routinization of repetitive activities, which again is fine for many things, but we're quite bad at emergencies and new tasks. And again, when I present this to people, people like you, and I can use that phrase even though I don't know who you are, none of you disagree. But at the end of the day, I don't quite see that a lot of you want to prioritize it the way I do. And I think our failures in responding to the pandemic show that, because we've had a lot of well-governed nations completely fall flat on their faces when it comes to getting the vaccines going, among other issues.
Can we dig into that a little bit more before we move on to the next question, Tyler? Do you have specific recommendations for how we could work on that? I expect my late career to focus on that piece. From running a company, I know how much I have to architect just to get people on my own little team to mobilize quickly around emergencies, and change is very, very hard, so it's not surprising. It's an incredibly challenging problem, and we're talking about country-wide national institutions, global institutions.
I think in most institutions there are too many layers of no. Especially governments, but not only; private foundations are far, far too slow. Getting research funding to people working on COVID-related issues could take six to nine months in many cases. And I think there's far too much credentialism. And in a lot of our efforts, we use far too much software. This will be painful for some of you to hear. Call me pro-tech, but the tech only works when you do it well. A lot of what is working well for vaccinating people is getting them into huge open fields, putting them in a line, jabbing them in the arm, and giving them a file card when they're done. Whereas when you make people sign up through these cluttered systems: one county in Maryland, Montgomery County, had like nine different signup systems, all mutually incompatible, for one county. And they were all cruddy software, not even mediocre software, just terrible software; the DC systems still crash all the time. So software is great, and it's clear why we use so much of it, but at the end of the day, a lot of tasks require a kind of brute physical mobilization that we've somewhat forgotten how to do. Thank you.
Thank you, also, thanks so much. It also seems like bad software was a problem in there. Okay, next up, we have a variety of hands up. We have Peter next.
Thanks, Tyler. So, if you're publishing a grand theory of political economy in 2021, I'm curious: what can we use this for? One of the big application domains looks like artificial intelligence, and there may be a couple of ways that plays out. One is, you know, people who are building increasingly capable AI systems may ask, well, what values should we try to teach or optimize with these systems, and perhaps how do we guard against humans being tricked into wanting things that they shouldn't want, or whatever. Broadly, it sounds like a useful philosophy for how you try to align AI with human values. But there's another question in there that's a little weirder, and it seems also very important, and, in my opinion, in parallel with your unanswerable animals question: if we're starting to make digital beings in the coming years and decades, does it matter what psychology we give them, and what we teach them to want for themselves? Because whether their lives flourish, and whether economic growth produces that type of flourishing, sounds important and potentially hard to answer.
I would repeat my point about there being no simple questions. But here's what I would say. I consider myself what I sometimes call a two-thirds utilitarian: I'm mostly utilitarian, but not entirely. And there are at least three sets of issues where it seems to me utilitarianism does poorly. One of those is non-human animals, possibly even plants, I guess, and alien beings you could throw in there. Another would be AIs, if they become sentient in some manner. And then also disabled individuals. I think you encounter the same sets of problems in those cases. And it seems to me that religious thought is actually a better starting point for many of those issues, though I don't even believe in God; that there's some sense of equality of mercy which, though it cannot be applied universally, is a ruling principle, in a way secular philosophy does not give you in an equally simple manner. So I typically suggest people turn to religious philosophy to treat all of those issues; I think it will do better than secular, utilitarian philosophy. And that's one of the ways in which I'm a Straussian: I think we should have these norms, not entirely secular, even though we cannot actually explicitly justify them. That doesn't answer your question, but it's a slight macro framework for thinking about it. I think it's a great non-answer. Thank you.
All right, next one up, we had, I think...
Tyler, thank you so very much for taking the time. I'm curious whether the zero discount rate is a proposal for mechanism design, or a description of what you think has been happening or should be happening; and also whether it would affect your framework and analysis very much if we were to exchange it for an adjustable, adaptive discount rate set according to perceived total impact.
I see it as a very simple rule. I certainly don't think we've been following it; I think climate change policy shows that, and many, many other issues. So much of American government doesn't even run on a two-year cycle, right? It runs on a less-than-one-week media headline cycle, super short term, not always in bad ways, but very often in bad ways. The Senate is supposedly our grand example of people who think long term, and they're not even thinking six years out. For plenty of issues, I actually think that's fine, but for issues with long-range consequences it's often disastrous. Preparing for the next pandemic: maybe now we'll get it right, but we certainly didn't have it right two years ago; that would be another example. People sometimes ask me, does the rate of discount really have to be zero, exactly zero? And I guess I think it does. Now, you get a lot of comparisons where the numbers you're feeding in are not exact, and in that sense the true discount rate you're applying is fuzzy rather than literally being zero. But if you explicitly recognize a positive discount rate on well-being of any kind, there then always is some comparison, stretching long enough into the future, where you're having a million people die of painful cancer so Cleopatra can have an extra helping of dessert. And I don't believe in that. So I'm going to say zero; to be hardcore here, it's got to be zero. Now, it's not somewhere we're ever going to get, but it has, I grant, a kind of religious-like structure to it. Think about the afterlife, right? It's a highly religious argument.
Yes, thank you. I think to some extent the arguments that you make in the book are interesting because they're relevant whether you have a zero discount rate or not, in the sense that from both perspectives, at least for now, just accelerating sustainable growth is the thing to do. And that gives us the Crusonia plant that then gives more of what people want, to everyone. So I think that's a really appealing feature of that particular argument as well. And so perhaps it is just an appealing fact that the types of value differences that are at least being discussed here right now don't have to be in conflict, at least as far as that goes. Would you say that? Yeah. Okay. Well, we have a lot of other questions here; we have Robbie, and then Dan, as well.
I guess one question I had was just, how do you think about trade-offs? Like if some choices create higher growth for some larger number of people but slow down growth for some other set of people, whether that might be, you know, the growth of a country like China, which some people think of as an example of that, or climate change, which involves some trade-offs. And there are a couple of issues there. One of them is just from a pure utilitarian calculus point of view. But the other is bounded rationality and people just being naturally self-interested. How do you build institutions that are able to actually make trade-offs of that kind?
Well, I think part of my argument is that if you get to a wealthier society, it will grow more and innovate more over time, and over some time horizon a lot of those trade-offs vanish. So part of the argument is suggesting there are fewer trade-offs than you think once you make the growth-maximizing choice. It's just very hard to do that; it might be hard to stomach, you may not have the strength of will to do it. There will appear to be many trade-offs over, say, a five-year time horizon. Well, what about wealth versus higher inequality? You can make a very long list of all the different trade-offs, but in the final analysis, after enough decades, you're comparing America to Albania. I'm not saying Albania isn't better in some ways, but I don't actually think that comparison is a trade-off; I think you can more or less unambiguously opt for the richer place. So in part, what I'm trying to do is kick a lot of these trade-offs out the back door and get rid of them. I know that's a controversial move, but that's part of what's going on in the argument. And religion does the same thing, right? It says, don't steal from your neighbor; believe me, in the long run it will be worth it, and you'll really be glad you didn't steal from the cookie jar, and so on. It's making the same move.
Thank you so much. And next up, we have Dan with his question.
Hi, my name is Dan Elton. I work in the field of AI for medical imaging, and I live in Bethesda, Maryland. Great. I'm sorry, this question is not about Stubborn Attachments, but I read your Bloomberg View piece on sins of omission versus commission, and I've been thinking a lot about the FDA: the failure to approve the AstraZeneca vaccine, the failure to adopt first doses first, and all the things you and Alex Tabarrok have been writing about. I co-founded a small movement called Unclog the FDA.
That movement is awesome. It's great.
Yeah, so we're trying to get the word out, trying to spread a lot of the writings that you and Alex and I have written. But what we're finding is that people are not really receptive to this idea that the FDA should have moved faster. And I think it has a lot to do with this distinction between acts of omission versus acts of commission, and people just ignoring the acts of omission. I think that distinction is extremely powerful in the way people think about ethics. Of course, I'm a utilitarian and I follow Peter Singer's argument pretty closely, but that doesn't seem to be the way most people think. What's curious is that people do care about sins of omission in certain situations, like you point out. So the question, and I'm sorry if this is hard to answer, is really: how do we get the public to recognize the sins of omission? Because right now the incentives on the FDA are such that they don't get much in the way of negative feedback from omission, or from going super slowly and super, super carefully, which is not necessarily the rational thing during an emergency.
All these questions are in the book, by the way, though not with your concrete example, so you have asked about the book 100 percent. My sense is this: ideas really matter. If you speak to the public health people on FDA panels, FDA commissioners, vaccine experts, and I've gotten to know a lot of these people, they are quite conservative in their approaches. Trying to get one of them to endorse human challenge trials is very hard; I mean conservative in the literal sense, not the partisan sense. So until that changes, which will take at least a generation, I think it's a long-run effort to educate the next generation of people who do biomedical ethics, public health, vaccine research, all these other areas, and get them interested in ideas that are more congenial to many of us. So I think it has to come from the top down, in quite a slow way. I don't think it will ever be the public marching in the streets, much as I would like to see that. To me, it seems more like monetary policy: the public is upset if there's inflation or unemployment, but they're not ever upset about monetary policy. And vaccine policy is like monetary policy: people might be upset about deaths, or that they can't travel. Monetary policy is mostly shaped by elites, economists, maybe financial markets, and I think vaccine policy will be the same way. So just as Milton Friedman invested for a generation in teaching people his approach, and whether or not you agree with it, it was very influential, and then you had somewhat of a move in the other direction where Keynesians invested for a generation and opinion shifted, I think public health will be like that.
So basically, you have to work within DC and with the people within the power structure?
But here's the thing: I think the next generation of public health people have all been reading Twitter since they were age 20, seeing arguments like yours and mine and the ones other people here have made, and it has stuck in their heads. I'm not convinced they will carry it forward, but due to social media, I think there's been a breakthrough, and it's quite possible that a generation from now many more of these things will go our way. And I think the example of the UK in particular may hold up historically; the UK has done many things right as of late, and I think they'll end up looking quite good.
Oh, absolutely.
I think that will have real impact, but not on the people who are 63 years old.
I hope so. Thank you. Thank you.
Thank you so much. We have Brad next.
Yeah, I was one of those people; I'm not quite 63, but I'm Brad, I'm on record with the Institute. I'm up in Canada right now, I just got out of quarantine, but I won't be here for too long. They just started AstraZeneca here today, actually, but they're doing the full dose first rather than the half dose first, so it does take a while for things to percolate out. However, while mostly agreeing with you, I wanted to try playing devil's advocate on a few points. One of which is: sustainable exponential growth is great, but we have a pretty sucky track record on arranging for sustainability and on pricing externalities in order to arrange our system for sustainability. So is that endemic, is it inherent to the problem that we will suck at making it sustainable, or is there a way that problem can be solved in a more generic way? And the second question, which I can repeat later if you like, is: what about the people who suggest, you know, the GNH, Gross National Happiness, sort of approach? To say growth is not the thing to maximize, it's happiness, and it can be maximized in ways other than growth. How do you respond to them?
Two questions, of course. First, I don't think our track record is really so bad. Ten years ago I was very pessimistic on green energy issues; I thought we were just headed off the cliff with climate change. But now you see, you know, the market valuation of Tesla. I get that some of that is crypto, but mostly it's about electric cars. Batteries for electric vehicles appear to have made some kind of breakthrough. You see prices for solar and wind power falling, maybe, I don't know, 15 percent a year, and headed to being very low. And I know you still need to integrate those technologies into the rest of the infrastructure; that's not easy. There's even serious talk of electric planes, not for the longest flights, but for quite a few flights, and that seemed absurd even five years ago.
I work in that field, so I'm very familiar with those developments. But it's also very clear to me that they happened not because it was greener, but because people like Elon Musk and others said, no, this is just better. Do this because it's better; now, yes, it's also more sustainable.
But that's, I think, why we have a decent track record: ultimately there's a profit incentive. People want to make money. That's not always a positive motivation, but we're seeing it turn in our favor when it comes to the environment, and I think we're going to beat this thing, just like the vaccines are beating back COVID. So for a Canadian to say our track record is terrible, that strikes me as a little weird. I mean, Canada is an awesome country. There's a dirty fuels problem coming from Canada, but other than that, Canada has just done so many things right. If anyone should be optimistic, I would think it should be a Canadian.
Who lives in California.
But nonetheless, that's part of your set of options. Now, you asked about gross national happiness. I think, properly understood, it is not that different from wealth. I agree with the common claims that, say, Asian countries, East Asian countries at least, are relatively unhappy compared to their GDP, and Latin countries are happier than you might think given their GDP per capita. So it's not a complete match-up. But if you just were to list the 10 or 12 countries that immigrants want to go to, I think it's obvious which they are, and they're the wealthy countries. With high variance, of course, but people in those countries are relatively happy and have meaningful lives, and that's why they attract people. And I don't think happiness is the final standard either, right? Some notion of the meaningfulness of a life arguably is at least as important. So ultimately, I think it's clear who the winners are, even though there are different ways of measuring it. It's not the case that everyone in Bhutan is actually so happy.
I agree with you there, but what I'm actually trying to channel for them is this idea that if your metric is happiness, or your metric is meaningfulness, as you just suggested, then wealth is a means to those ends, and not itself the sort of one central goal you outlined in your thesis.
Well, I would say the ends are plural, right? So I started with pluralism. We don't agree on the plural ends. But what's the thing we can agree on that we all need to get to our plural ends? I think it's wealth, if you had to pick one thing.
Thank you. All right. Lovely. We have Eric Papazian next.
Thanks, Allison. Hi, Tyler. Just a quick question on a broad view of macroeconomics: what needs to change about the field? Why has there been something of a coming up short when it comes to understanding growth? Just recently, in the debate over the stimulus in the United States, you had people like Larry Summers, who seemed to be so supportive of stimulus with his secular stagnation theory, coming out against it, and you were sort of mixed, let's say, if not because of inflation fears, then because you thought there was an opportunity cost relative to investment. So in general, why has macroeconomics sort of failed to understand growth properly, and what would you do to fix that?
I think a lot of macro is underrated, and it's done better than people think. There are disagreements about the stimulus, but I think those are mostly disagreements about the politics. So Democrats like Krugman will say, well, this will be really popular, and then we can spend some more; they like it for political reasons. But if you listen to Krugman carefully in that debate with Summers, he's basically buying into Larry's economic analysis. And look, Larry is still saying we ought to spend a trillion; he doesn't think we should spend 1.9 trillion, and he does in fact want to spend more money on different things like green energy. I don't think there's that much disagreement about macroeconomics. So I think macro is underrated; it's done pretty well. There is something hard to understand about economic growth that comes from human culture. I'm convinced culture is the central element behind growth, that, say, the savvy Confucian systems of East Asia clearly have succeeded in delivering high rates of economic growth. I'm genuinely uncertain how generalizable those recipes are for other countries, because it's a perfectly fine position to hold to say, well, what worked for Singapore maybe you can't apply to Brazil. You don't have to disagree about the economics; maybe Brazil just can't swallow what a small city-state of 5.3 million people, the size of Fairfax County, can do. So if culture is uncertain, then growth is going to be uncertain, and you can blame that on macro, but a lot of macro is very good. I love macro, in fact.
Thank you. I just saw another question in the chat popping up, from Chris Godard, if you're here?
Yes, so my question was a fairly simple one.
More questions?
Sure. Yeah. So I'm not sure how many people here are readers of Scott Alexander, which is a pseudonym. Yep. So he recently wrote an almost provocative open question about why it is so difficult to get things done in America in particular; I forget what it was called, but it received a lot of comments. And I think one of the threads coming out of that was whether or not there are difficulties of trust at certain levels in the US in the present day. And I guess that kind of segues into my question: if having low-friction barriers to doing business is dependent on trust, and ease of doing business is correlated with economic growth and hence maximizing future wealth, which is, as you know, your primary economic good, how would one seek to optimize trust in a society?
Like I said, another simple question. This is again veering into very specific opinions of mine, but I think the United States has a much higher level of trust than most people in the media now will admit, especially in a business context, where we're probably the highest-trust country in the world. There's work at Stanford trying to measure trust in management cultures, and the US typically ranks number one. We can delegate tasks with a higher level of trust, it seems, than any other country, and that means our companies are better run. It doesn't always mean there's trust in government, say. That said, historically I see a very common pattern in American history, and you can see it in Pearl Harbor. I mean, Pearl Harbor was a disastrous mistake; we were completely unprepared, and the Japanese destroyed a significant part of our military assets in the Pacific. But then, over some longer time horizon, we came back very strong. And I'm not sure what feature of American life accounts for this. It may be that we look inward and we're provincial in some way, rather than truly global and open. So we have terrible starts. But you see that also with the vaccines. We did, what, 2.9 million doses yesterday? That's really pretty good. I know Israel is still ahead of us, but my goodness, Biden was boasting he would get us to a million doses a day, and we're at 2.9, about to break through three with J&J coming. We're already there. So maybe the right model of the US is that we have high trust but a lot of dormant assets, and we look inward, because we're a big country, and we're weird, and we live in all sorts of weird little places, and it just takes us a lot of time to get our act together. But when we get to those final squares of the chessboard, we're pretty awesome. I guess that would be my model of the United States.
Yes, and please don't get me wrong, I wasn't necessarily just targeting the US; I think my question is generalizable to more or less any society. I just mentioned the US because that was the target of that particular post. I'll see if I can dig it up, actually.
It's striking to me that the countries that did poorly in the pandemic early on are doing much better at the end, and vice versa. Germany is the opposite: pretty high levels of trust, pretty well prepared early on, a lot of contact tracing, kept cases low for quite a while. And now it's all collapsed, and they don't really have that many vaccines. So there are a lot of people with theories about how this country is good at doing things and that country is not, but somehow it's quite disaggregated.
Thank you. I'm looking forward to Chris posting the article he mentioned. We're now one minute before closing. I think your last comments make a really good segue into what we'll be discussing in the future meetings, and from now on, which is specific applications that work even in areas where there isn't much trust. Many of those are applications in crypto commerce, so perhaps parts of those bottlenecks to cooperation can be solved that way, too. So looking ahead, perhaps one last question that I have: what could you tell people who are building applications in the crypto commerce field, in the field of AI, or in other fields going forward? What might they keep in mind as they're building applications that allow an increasingly diverse set of intelligences to cooperate more smoothly?
I would close with two points. First, if any of you would like to follow up on anything, please email me; my email is online. But what I would tell people doing crypto is that no one understands what's going on in that field, in my opinion. Now, that could be a good thing. But you'll hear people voice opinions with great certainty about crypto, one way or the other. I would just take it all with a grain of salt, positive, negative, whatever. If you ask a simple question, like how do we model the correct and proper value of a crypto asset, I think you'll get nonsense from almost everyone. That's maybe exciting, it's an opportunity, but just don't get too dogmatic, don't get locked into a particular position too readily, one way or the other. That would be my advice for people doing many things, but crypto most of all. And thank you all again for coming and listening. Thank you, Allison, for being such a gracious host of the event.
Well, we can't thank you enough on behalf of the whole group. Thank you so much for joining us; it was really fantastic, and we got so many questions. I'm really, really grateful that you took the time to join in and answer so many of them. For anyone who would like to stay on for a more social part after this event, I'm posting a chat room link into the chat. This is on Gather, a social platform where you can meet, sit at different tables, and talk to each other. For that you have to break off Zoom, click on that link, hop on over there, type in the password, and then we'll see each other at different tables; that's for those who want to stay on and get to know each other, and we do that after a few of these meetings. Thank you so much again, Tyler, from everyone here, and thanks for staying on; we're not even two minutes over, and I want to be very mindful of your time. I'm going to see many of you, hopefully, for a discussion with Audrey Tang, Taiwan's digital minister, at the next meeting, and we'll have a book-focused meeting as well in a more social setting, but I'll follow up with all of that via email. From now on, we're going to move on to the technology sector for the next meetings and discuss how we can allow different intelligences to cooperate better using the types of ideas that we've discussed throughout the past few meetings. So thanks all for joining, and I'll see you on Gather. Bye bye.