Keynote: Jaron Lanier
5:55PM Jul 27, 2020
I guess that means we're live. So welcome, everybody. I met our next presenter, Jaron Lanier, in 1986 on my first day of work at a small Silicon Valley startup. I was introduced to a super friendly, intelligent freak with dreadlocks. He turned out to be the founder. You couldn't help but like this guy; you couldn't help but be sucked into his kind of visionary presence. VPL was the name of this little startup. It's where a small group of us, together pretty much by accident, created alternate reality generators out of nothing. No one knows what alternate reality generators are, which is why Jaron coined the term virtual reality. And of course, everyone knows about that now. Along the way, through the years where we worked together, Jaron and I would often have all-night talks about pretty much anything: all sorts of ideas, media, technology and its effects. And we would do this while playing on any number of the hundred-plus instruments that he had, humming, playing, talking. I've admired Jaron ever since: his thoughts, his ideas, his visions, and himself. So without further ado, I give you Jaron.
Hey, Mitch, thank you for those kind words. Isn't it cool to keep up friendships over all these years? Yes, isn't it great that we happen to be in a reality where that's possible. And it's particularly great to still know people from those early virtuality days from so long ago, which are really like a mythical reality that might or might not have existed for a lot of the younger people who've gotten into virtual reality more recently. So what I want to talk about today is the big picture of how technology is fitting into humanity, because I've had this strong feeling for a while that the mix isn't quite working. And I'm particularly concerned with a creeping feeling that I think is lurking behind a lot of current events: people are feeling that they themselves, that humanity, might be becoming obsolete, or that humanity might be left behind, or replaceable, or something like that. In a typical day's news in the last year, the first 90% of the news is horrible. It's about the pandemic, it's about unemployment, it's about police malevolence in various countries around the world, including our own. It's about profoundly childish, short-sighted leadership all around the world, including in our own country. There's this parade of horrible news. And then right at the end, the little cherry on top that's supposed to be the good news is often some little thing: oh, and today a robot proved it can interpret radiological data better than doctors. So the little good news is some little nugget placed by a tech company about how AI is replacing people. And meanwhile, if you look at the rhetoric of some of the people who strike me as the most troubled and the most dangerous in the world, you'll often see this language that they fear they'll be replaced. You find that in the most violent fringes of Islamic movements; you find it in the violent fringes of white supremacist movements in our country and elsewhere. You find this sort of weird feeling of: are we on the way out, we're struggling for our lives. And often the people who say that will blame some other group of people; they might blame some other ethnicity or whatever. But the thing is, it's not really plausible that they'd be replaced by that other group of people, so why do they feel like they're being replaced? I think the answer is that every day on the news they're being told they're being replaced. And so what I want to do is dive into the way we think about technology. The proposal I want to make is that this whole way of thinking of technology as something that can potentially make people obsolete is itself based on a mistake, a confusion. And that mistaken confusion has a whole interesting social and economic history, even a romantic history, but it's fundamentally based on mistaken ideas in the first place. And if we can get our ideas a little straightened out,
I think we might have a shot at reducing at least part of what ails us, which is this underlying feeling that people are vulnerable to being replaced by technology. Now, I should say that even those with the best intentions tend to buy into this narrative of replacement these days. For instance, some of the people talking about universal basic income have only the best intentions, and yet in the background of that idea is: we need universal basic income because people aren't going to be worth anything anymore to the economy, so they'll need to be supported. And so once again there is, underlying that, this idea that, well, hey, you used to be worth something; now you're not, now you're a charity case. And there's something about that that's very spiritually poisonous, I think, to people.
Where does this idea come from?
It has a few interesting roots that I want to explore. The first one goes back to the 19th century and the spread of factories and industrialization. At that time, in the desire to make factories efficient, the workers in the factories started to be conceived of as robots. And indeed, if you look at the early rhetoric that led to the rise of Marxism and the reactions against capitalism in that period, you'll see a kind of early version of this replacement rhetoric, and of the reaction against it, that we see today. In 19th-century factories, workers were thought of as mechanisms that were supposed to be controlled with precise specifications. There was a notation for human movement designed, in effect, to hack people, to specify bit by bit what people would do: Laban notation, which survives today, oddly, in a totally different use as a tool for choreographers to document dancers, but it was originally designed to precisely modulate the bodies of workers. Now, there are a few things to say about this phase of industrialization. One is, of course, that it was anti-human and horrible and demeaning. It's completely understandable why people reacted so badly to it. But there's another thing to say about it, which is that it didn't actually work very well. And in order to explain why it didn't work well, I want to talk about a later development that corrected it. In the early 20th century, there was a guy named Deming that some of you might have heard of. He's one of those people who, like Quincy in music, is just known by one name, because he was such a powerful force. Deming was a scientist and engineer who looked at factories in terms of systems and in terms of information flow. And he thought: you know, what we're doing here is really stupid. This is not the way to optimize a factory.
And he came up with a method for improving the quality of factory output; in fact, the term "quality" as applied to industrialization originates with him, and all the variations of the Six Sigma movement and all this stuff originally come from Deming. There were a few different prongs to what he did. He was the first person to use statistics in real-time feedback loops, so he would say: what is actually happening with the products coming from this factory?
Can we look at the data and improve it? These days, of course, we use statistics and feedback in real time all the time, but at that time it was revolutionary. But he did two other things. One of them I'll just mention in passing, which was almost like a kindergarten culture, and perhaps in its own way a little demeaning in retrospect, where he would have all the workers get together and sing songs for their morale and such. But the third one is the one that fascinates me the most. What he said is: you have all these people who are humans, after all; they're intelligent, they have brains, they're actually making this stuff. Why don't we reveal to them the data from the statistics we're gathering and see if they have any ideas on how to improve what they're building, instead of treating them as mindless robots? And that was a revolutionary idea. He framed it, well, he wouldn't have used the term "spiritual," but I will: he framed it in almost spiritual terms. He said that as soon as a person becomes aware that what they're doing matters, and is able to use their own volition to improve what they're doing, they gain a kind of depth of moral reality in the world, a kind of participation in society that's profound, where instead of just being pawns, they're real contributors. It creates a sense of responsibility, a sense of pride; it creates a moral force that should improve everyone. Now, his ideas did not take hold in the US after World War Two. However, they found an avid audience in Japan, of all places. In the immediate postwar period, Japan had a very, very poor reputation in manufacturing. I'm old enough to remember that when I was growing up in the '60s, Japanese products had a reputation for being the very worst, the things that would fall apart.
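The kind of real-time statistical feedback loop Deming pioneered can be sketched in a few lines. This is only a minimal illustration, with invented part measurements and a standard three-sigma control rule, not anything Deming himself specified:

```python
# Toy sketch of Deming-style statistical quality control: watch a stream of
# measurements and flag the process as soon as a sample drifts outside
# three standard deviations of the baseline. Numbers are invented.

from statistics import mean, stdev

def control_limits(baseline):
    """Compute lower/upper three-sigma control limits from baseline samples."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def first_out_of_control(stream, baseline):
    """Return the index of the first out-of-limits sample, or None if all pass."""
    lo, hi = control_limits(baseline)
    for i, x in enumerate(stream):
        if not (lo <= x <= hi):
            return i
    return None

baseline = [10.0, 10.1, 9.9, 10.05, 9.95, 10.0, 10.1, 9.9]   # parts measured in spec
stream = [10.0, 10.05, 9.98, 11.2, 10.0]                      # 11.2 signals drift

print(first_out_of_control(stream, baseline))  # → 3
```

Deming's further step, in this picture, was not the rule itself but showing the flagged data to the workers so they could reason about why the drift happened.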
Japanese companies like Toyota adopted the Deming method, and within a decade completely turned the tables and gained a reputation for the highest quality in the world. And this in turn had a profound impact on the world, because it created the manufacturing model for the economic rise of Asia, which worked initially in Japan and then was copied by the Tigers, Korea, Taiwan, Singapore, Hong Kong, and then famously by big China, of course. And so it turns out, anything that happens on that largest scale has both good and bad qualities, and I'm very unhappy with modern-day China on many levels: the Uyghurs, the Tibetans, et cetera. On the other hand, how many times in history has a giant new force arisen in the world as peacefully as China has risen? So it's a relatively peaceful model, and it all came from, well, not all, but it largely arose from the seed of this one scientist looking at information flows in a factory and saying: wait a second, people aren't robots. People have dignity, people can contribute; let's treat them like adults, give them responsibility, let's allow them to have pride. And things just got better. Now, why am I belaboring this point about Asian manufacturing and Deming? It's because in our modern time, engineering and the production of objects, the production of technology, has become more and more computational. To a greater and greater degree we're modeling everything; in some cases, such as with 3D printing, it's directly computational. And the more computational it gets, the more our model of how computation and data work really becomes the model of how everything works. And currently we're treating people very much like 19th-century factory workers. So if you ask where the data comes from that molds and guides the algorithms that run everything, which we call artificial intelligence algorithms, as if they're separate from people, as if they're this other intelligence: where does the data come from?
Well, it varies; there are different systems by which data gets into algorithms, but the most common one is that people are given a kind of barter relationship where they hand over their data in exchange for a free service. Often a free service that isn't necessarily that important, like a service where you can post pictures of yourself in order to feel less sexy than other people who are posting pictures of themselves, or whatever the hell it is. But at any rate, we have these things where people get addicted to services and their data is taken. And so it becomes very much like the 19th-century situation, where the person is not aware of how their data is being used. They have no ability, motivation, or understanding that would allow them to make their data better, to make the world better; they're completely removed from any connection or responsibility to the meaning of their data in the world.
I think this is a deplorable situation on multiple levels. I think it's deplorable spiritually, because it sends the message that the intelligence is in the algorithm and entirely not in the data that drives the algorithm. And, you know, you can argue about the nuances, but surely that extreme statement cannot be correct; surely the data matters at least a little. And it's deplorable because, as in the case of 19th-century factories, it's dysfunctional, because it removes human intelligence from the possibility of improving outcomes driven by humans.
So, you know, one question to ask is: is there any alternative to doing things this way? And I think there is. We ended up in this very weird situation, and I've written about this a lot, so I won't go over it in great detail. But we've ended up in a situation where probably the majority of the biggest AI companies are driven by the spider model. Not all of them, but Google is, for instance, and Facebook; those are pretty prominent examples. And because Google is, so are some of the AI efforts supported by Google, which are some of the central ones.
And similarly, in the Chinese information world, the same model holds, although with different language. And so we have this very strange situation where, if you look at a company like Google, it has kind of three identities depending on the direction from which you look at it. If you're a consumer and you look at it, it looks like a barter organization that offers you free services like search and YouTube videos and so on. If you're an investor, if you're from Wall Street, you think it's an advertising company, although perhaps a notch creepier than traditional advertising, because it engages people in real-time behavior feedback loops, which I think is a different thing that shouldn't be legal, but let's leave that aside. But if you work inside Google and you look at it, it's an AI company that's getting all this data and trying to make algorithms. Now, is there another way? And the answer is yes, of course there is, just as Deming pointed out that Toyota could improve its quality by giving people... Now, I'm trying to turn off these questions. I find it really annoying; I'm not used to using Zoom. Is there any way I can get rid of these?
Turn off the chat; I think that's all I need to do.
Now they still pop up right in front of the screen.
Okay. Tara, stop sending questions until Jaron is ready. That's the solution. Okay, wait till you're ready to resume.
I would appreciate that, because I find it a little distracting.
All right. So, let's see. Okay, let's propose a different way to do this. Is it conceivable to imagine a world where people would be aware of what data they're producing, would have some ability to influence what happens with it, and would have some ability to improve it if it's put to a purpose? I think that world is conceivable; I'll sketch it for you now. A lot of this work has been done in collaboration with some of my colleagues, and I should mention Glen Weyl especially.
In order for workers to have effective voices, it was not only necessary for factory owners to appreciate that workers could actually contribute, which is what happened with Toyota; in the bigger picture, workers also gained a seat at the table and gained power through unions. And we think something like a union, although a little different, is the right structure to bring about a change in the relationship of people to technology. This idea of a union has had different names in different publications. Sometimes it's called a data trust. I've been preferring the term MID, which stands for Mediator of Individual Data. A MID serves a number of functions, and I want to go over them with you, because it's an interesting Swiss Army device that covers a number of things. One thing a MID does, which is important, is give people economic power.
This is a tricky point. We've gotten used to the idea that when we give data, we don't get paid for it. And in fact, when I suggest that people should get paid for their data, sometimes people say that would be terrible, because it would disadvantage the poor; it would encourage the poor to sell their souls and give out even more data, and so forth. But the thing is, if you live in a market society at all and you separate yourself from the market on some level, you disenfranchise yourself. And that's a concept you just have to get used to. If we're going to reject markets and have a non-market society, that is perhaps another agenda, but you have to really do it; in that case, get rid of your bitcoins and go for socialism. And I'm open to talking about a path to that; I've never seen one that I think really works. In principle, maybe it's the future, I don't know, but for the moment, on the paths I can see from here to the future, I still see markets of different kinds. And if there are going to be markets, if you're not a full first-class participant in them, you are disenfranchised. Now, one of the interesting things about the union movement is that as soon as workers started to gain some power through collective bargaining, instead of maximizing their income absolutely, they maximized a bundle of things, including income and also quality of life. So, for instance, you started to get weekends, you started to get holidays, you started to get reasonable work hours, you started to even get leave for childcare; you started to get all of these limitations on engagement in the workplace rather than maximizing it. And so I think it's reasonable to expect that unionization of data, rather than pulling people into ever more privacy violation, would in fact be the only way to have enough power to put a stop to privacy violation. Power and privacy go together. And without market participation in a market economy, you don't have power.
So it might sound counterintuitive, but it's precisely through monetizing data that it's plausible for people to have the power to modulate how much data they're giving. And it's really the only way in a market-driven economy. I think the logic of that is sound; if it doesn't make sense to you, before you ridicule it, take some time to think it through. I think you'll come to agree that the logic is sound there. Another thing, just to state the obvious: if you don't have unionization, the value of data goes to zero. If it's each person against each other person, then everybody gets poor. And this is universally true in all market economies: there have to be little bubbles of cooperation, or else everybody destroys everybody if it's each against each. So, for instance, if we believe in the labor-capital divide, which I think is obsolete at this point, but if we believe in it: on the capital side you have corporations, and partnerships for lawyers, and all this; you have to have little bubbles of cooperation. And then of course on the labor side you have unions. That doesn't change. And when you have collective bargaining, suddenly data is worth something. I've been making this argument that people should be paid for data for quite a few years, and from early on, Facebook in particular developed a response to it that they've worked on quite a lot, in which they'll say: well, your data would actually be worth nothing; your data would be worth pennies; it's nothing, except that we collate it. When it's in the tech companies' hands, your data is worth trillions of dollars; if you own it, it's worth nothing. That's been this sort of absurd state of affairs that's treated as normal. And of course, if unionized people are giving the data, then it is worth something when people have it, and that just has to happen.
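The point about individual versus collective bargaining over data can be made with a toy model: if a data pool has diminishing returns, any one person's marginal contribution is worth almost nothing, while the pool as a whole is worth a great deal. The value function and numbers here are invented purely for illustration:

```python
# Toy model of data value under diminishing returns: one contributor's
# marginal value in a huge pool is near zero, yet the whole pool is
# valuable, which is what a union could bargain over. Numbers invented.

import math

def pool_value(n_contributors):
    """Value of a data pool, log-shaped to model diminishing returns."""
    return 1000.0 * math.log1p(n_contributors)

n = 1_000_000
marginal = pool_value(n) - pool_value(n - 1)  # what one lone seller could ask
total = pool_value(n)                         # what collective bargaining covers

print(f"one person's marginal value: {marginal:.6f}")
print(f"collective value: {total:.1f}")
```

The shape of the curve is the whole argument: the platform's "your data is worth pennies" claim prices each person at the marginal value, while the collated pool trades at the total.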
Now, the idea of the MID accomplishes a couple of other things, and this is really important. One of them is this: since early modern times, in common law, there's been this concept of fiduciary responsibility. That is, when somebody has more knowledge of or power over you than you can have over yourself, they take on a responsibility to serve only your interests. Traditionally, some examples of people who take on this responsibility are doctors, who are responsible for serving the patient, not the pharmaceutical company or the hospital or something; lawyers, who are responsible for serving the client and not somebody else; and certain financial advisors, et cetera. Now, in the case of algorithms, in the case of this digital world, you have the explicit implementation of behaviorist algorithms that are designed to manipulate people, and without their knowledge. Facebook proudly published academic articles, peer reviewed, proving they could make multitudes of people sad without those people knowing why they were made sad, and so on. So you clearly have a situation where one entity has power over an individual that the individual doesn't have. And yet the tricky thing is that there's no entity in a position to have fiduciary responsibility, which is kind of crazy. One of the things about that is that you can't have a universal fiduciary. You can't have one lawyer representing everybody, because people have competing interests; that's why you have different lawyers representing different people. Even in medicine, you might have two patients who come into conflict wanting a limited resource or something, and the doctors have to advocate for their patients. You can't have a universal fiduciary without having some kind of weird dictatorship that just controls everybody. And so you have to have some entity in the modern information world that can serve in the fiduciary role.
Facebook cannot be your fiduciary, because it's universal; but a MID can be. So what the MID can do is say: I don't represent everybody; I represent a group of people who, through free association, have chosen to band together, and they now have representation; their interests are being represented. Another crucial piece of that is complexity management. The modern digital world is extremely complex. I'm relatively skilled in it, I would think, and yet I just find it intolerable to take the amount of time I would have to in order to really make sure I'm not being spied on, that I've really cleaned up my system, all the time, et cetera. It's just too fucking complicated to keep up with. So the MID can take on that duty; the MID can become a trusted source. Then there's another thing about MIDs. I was just talking about the 19th century moving into the Deming revolution and quality; now I want to talk about something else, something some of you might have heard of, which is more from the turn of this century: microlending and the Grameen Bank. This might seem like a pretty sharp turn, but just bear with me. There's a guy named Muhammad Yunus. He's really cool. He's from Pakistan... Bangladesh, pardon me, Bangladesh. He won a Nobel for this, actually; the Nobel in economics, or it might have been the Peace Prize, actually. Anyway, Muhammad had this idea. He was concerned with the question of how you take impoverished communities, in places like Bangladesh and parts of Africa and so on, and get some credit to them, so people can start small businesses and start to create economic improvement. It's decentralized; how do you do it? And one of the difficulties has been that if some entity like a bank is going to extend credit to someone,
they have to assess the creditworthiness of that party, because if you just lend out money or extend credit without that, you will likely go unpaid and the whole thing will collapse. So how do you establish credit for somebody who has no credit history, who's impoverished? He hit upon a rather simple solution, which is that you distribute the trust. In this way, it actually reminds me a little bit of some blockchain schemes that are much more recent. He said: okay, instead of a single person applying for a loan, people will form into little groups, through free association, and they'll vouch for each other. So if any one person in the group defaults on a loan, the rest will also be responsible for it. So people suddenly became picky about who they connected with; they connected with people they knew. And these groups turned out to be incredibly trustworthy. The Grameen Bank, the first experimental bank doing this type of microlending, had better repayment rates than all the fancy rich-world banks, and so this was just an incredible revolution. And it was based on the very simple idea of distributing trust instead of trying to centralize the assessment of trust: an incredibly simple idea that combined a lot of different ideals about the best way of thinking about markets, the best way of thinking about democracy, and just giving people dignity, admitting that people are actually smart, that most of them are decent, and that they can think for themselves. Now, that's why I'm telling you the story of the Grameen Bank. And by the way, there are some criticisms of microlending; it's been around for a while and it's not the ultimate panacea for development, so I don't want to claim that it's an absolute utopia or anything, but the fundamental mechanism works. Now, if we look at the online world today, one of the
horrible things about it is that it seems to amplify bad stuff more than good stuff. Okay, so you have this pure attention economy where people just want to be noticed, and it's easier to get noticed as a jerk, or, to put it in scientific terms, it's easier to gain and maintain attention if you excite the fight-or-flight responses in others. And so you tend to have a lot of inflammatory, disturbing stuff online that tends to get naturally amplified by the algorithms that adjudicate who sees what. And this happens over and over again. An example: YouTube denies it, but when I've tried to replicate published research myself, I see exactly the same result if you let the Google algorithm start recommending. By the way, that's Potato, the cat, behind me, walking on the instruments; he likes them. I'm sure he'll have opinions in a second.
The YouTube algorithm will tend to start from any point, whether it knows who you are or not, and after somewhere in the teens, usually I find around 15 to 17 links of recommendations, you end up in some weird, paranoid, kind of creepy video. And the reason is just that those excite the fight-or-flight responses; the algorithms notice people are being excited, so those videos get recommended. The same thing is true in all the other social media systems that are algorithmically mediated. Now, this has created a universe of highly cranky individuals: cranky, paranoid, on edge, desperate for attention. There's a certain kind of personality type that we've become familiar with. My hope is that we're gradually overcoming it. I feel that some of the recent social protests in the United States, like Black Lives Matter, have been less reactive on that level, and I feel like there's an emerging, I don't know what to call it, an emerging familiarity or perhaps maturity with the current system that's very encouraging. But nonetheless, if you look at the United States' response to the pandemic, the stupid, paranoid, ridiculous stuff has dominated good-quality medical information. I've been accosted for wearing a mask, and so have many other people, just in public, which is going to be remembered historically as one of the all-time stupid things in human history; what could be stupider than that? And that's driven by this weird, paranoid, irritable thing. And so I want to combine this observation about our current culture, as it's been mediated by algorithms that prefer things that tweak the fight-or-flight responses, with the solution from the Grameen Bank and microlending. What if, instead of individuals posting on things like Twitter or whatever, people formed through free association into little groups, into little micro-publishers, where they watched out for each other?
And they created essentially a brand where they were responsible for one another. So once again, this is through free association. If you want to get together with a bunch of jerks, you're totally free to. And then what that does is distribute the process of decency, so that people would publish as a member of a group. It would be like being a member of a little underground scene in the old days, or a member of an indie band maybe, or something like that; you'd create a little identity. And then people would look out for each other, and if somebody is going to become an all-out jerk and say something incredibly stupid, their buddies are going to say: hey, wait a second, that's hurting me too; let's just think this through for a second. And then they can either work it out or split; if the band wants to break up, that's fine. But there's this mechanism, this little thing in place, based on human consideration, that I don't think interferes with free speech at all. It's through free association; it's just a layer to inject some human responsibility along with the freedom. Now, what I want to propose is that this grouping could also be the MID; the MID could also create this coarse-grained structure to promote quality. Now, this idea that civilization isn't just made of a bunch of individuals but of groups of individuals, that there's a coarse-grainedness to things, is very fundamental. If you look at the literature of people who've tried to compare societies that are relatively peaceful and decent with societies that fall apart and fail, you'll always come across this phrase "societal institution," and I think particularly good writers on this topic are Hannah Arendt and de Tocqueville, if you know who those people are,
and societal institutions are precisely the type of thing I've been talking about: free-association, quality-maintaining groupings. And in this context, if I can be permitted to nerd out a little bit: in any large learning system, there has to be some sense of coarse-grained accumulation of knowledge. In evolution, we call it a species. We don't just have a bunch of genes moving back and forth between things; we have genes forming into structures that have surfaces around them, so that you can have larger-scale experiments, and then you get cats. Without that, you wouldn't get cats or people. So that's coarse-graining in evolution. In computer science, it's the internal layers of a learning network. You can't just have a single-layer machine learning network; you need to have multiple layers, and the reason is you need coarse-grained accumulation for larger-scale experiments within the learning system. So in the abstract system, the MID is like the internal layer, like the species. So what I would argue is that this coarse-grainedness, or the lack of it, is the reason why society has turned to crap in this era, when you have only Facebook and a bunch of individuals, which destroys coarse-grainedness. Facebook's early slogan, "Move fast and break things," was precisely about breaking these coarse-grained structures in civilization. And then, without distributed trust, without distributed attention to decency, without distributed attention to quality, the worst comes out of people, which is why you can get attacked for wearing a mask on the street, which is so astonishingly stupid. So what I'm trying to build here is an overall picture in which we would recreate societal institutions, where people would have distributed trust through free association in groups. These groups would collectively bargain for the value of data.
These groups, MIDs, would have enough power to have a seat at the table; people would gain a view into what their data is being used for and would be able to make their data better. Now, I haven't gotten back to that as much. If you go back to the origins of the idea (this is a whole amazingly long and complex saga that really should be some kind of incredible movie series or something), from the very earliest days of computers there was kind of a schism in how to think about computers, and I'm old enough to have directly known the first generation of people who argued about this, or at least some of them. There were some figures who thought the computer was a new life form apart from people, and should almost be worshipped that way. My most important mentor was probably Marvin Minsky, who might have been the person who did the most to articulate that point of view. And then, opposed to that point of view, were people who thought that the computer wasn't necessarily anything by itself; it was more a new way, a new tool with which people work together. The first important figure in that was Norbert Wiener, who wrote a book called The Human Use of Human Beings, which ends with this remarkable thought experiment, and this is in like 1950 or something, really long ago. It ends with the thought experiment that maybe someday there'd be wirelessly connected computers, and everybody would have one of these computers, and the computers could enact behavior modification algorithms, and the society would go crazy and then be extinguished. It's a route to extinction comparable to nuclear weapons, and this was way back, and it's more or less what we're doing. And then some of the other figures: Doug Engelbart, who invented the computer mouse, was an extremely articulate and vocal proponent of the idea that the computer isn't really anything by itself; it's just people working with people.
And he tried to re-term AI as augmented intelligence: the computer just augmenting what people can do rather than being its own thing separately. I'm very firmly on the Wiener side of that schism.
And then there's been this whole other AI thing, which kind of won. I think the reason the AI thing won so much is, I mean, I used to argue with Marvin, my mentor, because he was like the most pro-AI person in history, and I'm super anti-AI in terms of that conception. We used to argue. The last time I saw him before he died, one of his other students called me and said, you know, Marvin is very frail, don't go and argue with him about AI. And I showed up, and he looked up and he said, "Are you ready to argue?" Like, oh hell yeah. And he loved it. We used to love arguing. But what would happen is, once in a while, when it came time to talk to the funders, and all of the funders were from the military, he would say, okay, Jaron, you know, you can shut down your anti-AI stuff for the moment, let's just play along, because we need funding for the lab. And then, more or less, you know, if you go to somebody in the military and you say, hey, these machines, you know, Skynet is going to come alive, and whoever owns Skynet is going to run the world, they're like, oh yeah, here's your money. It was a pretty easy pitch. And that's part of how the AI side gained so much ascendance in recent years: you can scare people pretty easily. It's also really flattering; it's like you're creating this new life, and it almost puts the computer scientists into a godlike position. But anyway, um,
the thing is,
I'm speaking from the other side. Like, if I'm behind the curtain being a computer scientist at a tech company, which I also am, you know, we get all this data, and the people don't know what the data is for, so it's kind of low-quality data. I'll give you an example. Right now, you've probably had the experience of trying to get into some website, and they'll say, oh, wait a second, we need to prove you're human, so you're going to play the CAPTCHA game, and then you have to identify things on the street, like which one is the street light or which one is a bicycle or something. The reason you're doing that is you're working for free for Google, tagging examples of images to help them in their self-driving car quest. Okay. But the thing is, that's really stupid. Like, let me point out a better world. A better world is one in which people who actually drive for a living, like truck drivers, are being told: listen, there are probably going to be self-driving trucks, but in the meantime, your generation can make a lot of money programming the self-driving trucks, and then we'll have time to transition to another thing for the next generation. What are the situations you worry about? This is what you're doing. Like, instead, give them the agency to tag what really matters, to tag the situations that are actually dangerous. Actually bring them into agency, bring them into pride in making the thing better, get them paid for it, to fund the transition to when we have more automated vehicles. Then we get better-quality automated vehicles, and we get them more robustly. People have more pride, and legitimately so, honestly. And we have a better economic transition. I mean, everything about it gets better: the tech gets better, the human side gets better, everything gets better. And yet instead, they're obsoleting themselves gradually and in a fumbling way with CAPTCHA, without even understanding why they're doing it.
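[Editor's note: a tiny sketch of the contrast being drawn above. Everything here is hypothetical, an editor's illustration rather than any real system: the record fields, the tagger names, and the fee amounts are all invented. The point is structural: paid, attributed labels from the people who actually drive carry agency and fund the transition, unlike anonymous free CAPTCHA labels.]

```python
from dataclasses import dataclass

@dataclass
class LabeledScene:
    tagger: str         # who contributed the label (attribution = agency)
    tag: str            # e.g. a description of the road situation
    danger_rated: bool  # drivers flag the situations that actually matter
    fee_cents: int      # contributors are paid for the labeling work

def payout(labels):
    """Total owed to each contributor for their labeling work."""
    totals = {}
    for lab in labels:
        totals[lab.tagger] = totals.get(lab.tagger, 0) + lab.fee_cents
    return totals

labels = [
    LabeledScene("trucker_ana", "cyclist entering blind merge", True, 40),
    LabeledScene("trucker_ana", "black ice near overpass", True, 40),
    LabeledScene("trucker_bo", "routine lane change", False, 10),
]
print(payout(labels))  # {'trucker_ana': 80, 'trucker_bo': 10}
```

A CAPTCHA, by contrast, would be the same records with the `tagger` stripped, `danger_rated` unavailable, and `fee_cents` fixed at zero.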
I mean, it's sort of triple stupid, stupid on multiple layers, the way we're doing things; it's not good for anybody in the loop. Now, I mentioned I'm at a tech company. I'm not representing them here, but I'm a scientist at Microsoft.
These days my title, within the Office of the Chief Technology Officer, is "Office of the Chief Technology Officer Prime Unifying Scientist," which spells out OCTOPUS, so I'm the Microsoft octopus. And we're trying a couple of things you can look up online. We have a thing called Trove, which is an early experiment in trying to set up a way that people can look for data that algorithms need; they can participate in making the data better, and they can bargain for what they get paid for it. And it's an interesting experiment, because if you look at how AI algorithms are done right now, you tend to either get a bunch of free data, like reCAPTCHA, that is kind of a little off the mark, so you can't really quite use it for what you want, and it's a little awkward and spiritually ugly for the people; or you can pay a data broker a shitload of money, and it becomes uneconomical. And there's a sweet spot in between that I think is win-win, where people get paid enough for their data that they're not being displaced, and they're in charge of their own privacy, and they have agency, they have a seat at the table, but at the same time the data quality gets better, so the algorithms get better. I know, in five minutes I'm going to be done. And so we're doing some early experiments to find that sweet spot, and I feel like there are some cases where we've found it. I think we need to reconceive of the idea of AI. It's so commonplace in the culture; there's been so much science fiction that thinks of AI as this other entity that's taking over.
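[Editor's note: a toy model of the "sweet spot" argument above. This is entirely an editor's illustration, not how Trove actually prices anything; the budget, per-label prices, and quality scores are invented numbers chosen only to show the shape of the tradeoff: free data is cheap but off-target, broker data is on-target but uneconomical, and fairly paid contributors can maximize usable training signal.]

```python
def dataset_value(budget_cents, price_per_label, usable_quality):
    """Total usable training signal under a fixed budget: the number of
    labels you can afford times how usable each label is for your task."""
    n_labels = budget_cents // max(price_per_label, 1)
    return n_labels * usable_quality

budget = 10_000  # hypothetical labeling budget, in cents
options = {
    # Free CAPTCHA-style labels: nearly free, but "a little off the mark."
    "free_captcha": dataset_value(budget, price_per_label=1, usable_quality=0.02),
    # Data broker: on-target but very expensive per label.
    "data_broker": dataset_value(budget, price_per_label=400, usable_quality=0.80),
    # Paid contributors with agency: on-target, fairly priced.
    "paid_contributor": dataset_value(budget, price_per_label=25, usable_quality=0.90),
}
print(max(options, key=options.get))  # "paid_contributor"
```

Under these made-up numbers, the paid-contributor option yields the most usable data per dollar, which is the win-win the talk describes: better algorithms and people paid enough not to be displaced.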
We see it in the Matrix movies, and over and over again AI is like this entity that's coming for us, and it's just spoken of that way matter-of-factly. But it doesn't have to be that way. We can think of aggregate intelligence, we can think of associative intelligence, we can think of augmented intelligence; we can think of all kinds of other conceptions in which the idea is that it's a new form of collaboration between people. It's a way of people knowingly, through free association, working together better by intentionally creating data to make better algorithms, for the mutual good and for the individual good, and to make some money from it too, which, if we're going to be in a market economy, you have to respect. So I think we need to reconceive of AI, bring dignity back to people,
understand technology as a wonderful human activity rather than something that will replace people. And I think if technologists can shift their own culture, my prediction is that, like magic, we're going to see a lot of other people being less freaked out and less reactive to modernity than we're currently seeing. That's enough. I'm done. I'm two minutes ahead of time. How about that?
That's pretty awesome. Well, thank you for all of those thoughts.
There are a lot of good things there for us to think about.
There are so many problems in the world being exacerbated by so-called social media right now, and I think you nailed some of the reasons why it's so powerfully so. And it's really cool to hear, so rarely if not for the first time, some solutions, things to experiment with, to try to move forward in a positive way, to bring us back to a place where we can possibly talk to one another with nuanced conversations, where you and Marvin Minsky could argue about things even though you were polar opposites and remain friends, even on his deathbed, right? And, you know, back when we used to argue all night, we didn't see eye to eye on a lot of things, but did that matter? That's part of what makes life interesting; that's part of what makes friendships; that's part of what makes us live and grow and learn and become better. And we do that individually and collectively, so the idea of MIDs, and putting that together with the market economy, which isn't going away all that quickly, is great food for thought. Thank you.
Cool, thanks, thanks for listening.
Yeah, so there are questions from the audience, but we don't really have time for them. Sorry about that. That's not a problem at all. But you got a free ticket, of course, for being a presenter, so if you're into it you could get on the Matrix rooms and answer questions there if you want to. There's a whole bunch of places there for people to interact.
I'm very grateful for that invitation, and please accept my apologies; I just have too much going on, so I'm a little overextended these days. And I also just had a motorcycle accident, so I'm a little low energy, so I'm going to have to decline right now. But I wish you all well, and I would say I'm so grateful that you were interested to listen, and thank you.
Yeah, well, I'm glad you're doing okay enough to, you know, talk after a motorcycle accident, and you took the time, so it was great to see you, and
see you next time. Take good care.
And we're done. That was super duper, thank you so much. Yeah, and Jaron is gone immediately, so he's busy; like, he barely made it, with the motorcycle accident. Wow.