Now, for all the early-stage startup founders: beginning now on the Sessions tab, TechCrunch staff will be on hand to help you fine-tune your pitches. But here on the main stage, we have a super interesting speaker coming up. Taylor Hatmaker sat down with the CEO of the Anti-Defamation League, Jonathan Greenblatt, a few days ago to chat about possible policy solutions for online content moderation.
Thank you so much for joining us. We'll just get right into it today. Your work at the Anti-Defamation League has increasingly intersected with technology, Silicon Valley and what platforms are doing. I was wondering if you could describe a little bit of the kind of work that you've been doing in recent years?
Sure, I'm happy to. So at the ADL, we're the oldest anti-hate organization in the world, but we deeply believe that today the front line in fighting hate is really on Facebook. I mean, there's just no question that social media has become a breeding ground for bigotry that is offensive and ugly in all respects. Now, we've known this for years, but when I came on board, about five and a half years ago, I really wanted to focus on this and try to get right to the heart of the problem, so we could finally turn it around. So in 2017 we actually opened an office in Silicon Valley, our Center for Technology and Society. We were the first civil rights group with an actual presence in the valley. And for me, that was sort of second nature, because I had worked in the valley for years before taking this job, you know, both raising money on Sand Hill Road and managing teams of engineers building products. So through that Center for Technology and Society, we've been doing multiple lines of work. Number one, we've been working directly with the companies, because I believe you've got to engage these businesses, their engineering talent and their capacity for innovation, and show them how it's in their interests to try to address the intolerance that's so rampant on their services. So we're engaging with all of the big businesses. But even as we typically call them in, we also don't hesitate to call them out, and we did that with our Stop Hate for Profit campaign last year, which we've talked about. Secondly, we're doing a lot of really interesting R&D. We have a really exciting fellows program, where we bring in academics, researchers and others to study specific aspects and issues of hate and harassment online. And then finally, we're looking at policy. I think it's very clear that we can't just rely on self-regulation to create a better, safer, less toxic online environment.
We need the regulators, we need the legislators, to pay attention. So we're working on some really interesting issues at the federal and state level on that front as well.
I'm glad that you mentioned the research work that you all do. I think a lot of that work is really important. You all put out a ton of data that you collect, a ton of reports on extremism: white supremacy, misinformation, deadly conspiracies, all of it. And the question I have for you is, do you think the onus is on these companies, especially major tech platforms, Facebook primarily, but also YouTube and Twitter, to take more of that research in-house? Do you think they need to invest their extraordinary resources in doing more of this kind of work themselves?
It's a really interesting question. You know, we try to be a very evidence-based, fact-oriented organization, meaning we ground our advocacy in research and data. I think it's critical. We've been doing audits of anti-Semitic incidents for 40-plus years, we've been doing attitudinal surveys for more than 50 years, and we also do surveys now of online hate and harassment. And the picture is pretty grim. In our 2020 survey, we found that 44% of social media users reported experiencing some degree of harassment on different platforms; 28% characterized it as severe, repeated harassment. I mean, think about that. That's more than one out of four users. And the platform where it happens most frequently is Facebook. That isn't a function of the fact that it's the largest social media service in the world, with more than 3 billion active users a month. It's because the platform itself has failed to take the simple, basic steps to keep its users safe and secure. So then to the question of whether the companies need to do more research on this, I'll give you a two-part answer. Yes, the companies need to be investing real resources to make sure they understand the problem on their platforms, and that they're building their products in a more safe and secure way, creating features and functionality to address these issues. That is totally within their wheelhouse. Take Facebook, for example: they brought in $70 billion in 2019. It is a company of gargantuan size and global ambition, and they are investing hundreds of millions of dollars to research their different products and services. The idea that they can't spend a little more money to more rigorously measure the hate on their own platform? That's a laughable proposition. Of course they should be doing this. If it really were a priority, they would put resources toward that priority.
We haven't seen Facebook do that yet; they have failed to take that kind of action. But on the other hand, I would say, Taylor, we can't rely only on the companies themselves. I believe that civil society organizations like the ADL, and there are many others in this space, organizations like Common Sense Media, Free Press, CDT or the EFF and so many others, have critical roles to play as watchdogs to keep the companies honest. Time and time again, the businesses have not demonstrated the kind of transparency around these issues. They've been unwilling, Facebook in particular, to publish the data. They have kept their doors closed to things like independent audits of their hate content. So I think while they need to do more, they also need to create the space for watchdogs and researchers and academics to study this issue on an independent, third-party basis.
While we're talking about tech companies doing more and taking their own steps, I'd love to talk a little bit about January 6 and the attack on the Capitol.
You know, it's a really interesting event, because I feel like if you are somebody involved in this space, or you're keeping tabs on domestic extremism, white supremacy, things like that, the attack on the Capitol felt inevitable. Nothing about it was surprising in some ways, other than the utter violence, which was shocking regardless. But to a lot of people it was a wake-up call, or at least they pretended it was a wake-up call; it's hard to know how tech companies actually perceived those events. Overnight, basically, we saw tech companies take a number of policy measures that for years they had said were impossible, outside the bounds of what they could do. Do you think those actions are meaningful, and that we're now, particularly because of the Capitol attack, entering a new era of accountability for tech companies?
Well, first of all, I would just say you are entirely right in the way you set up the question. To those of us who've been tracking violent extremists for years, this was not a surprise at all. This was the most predictable terror attack in American history. Literally, these groups told us in advance what they were going to do. And the attack itself was the culmination of years, and in the months prior, of intense campaigning by the president himself to undermine the integrity of the election, to question the democratic process, to call on individuals to interrupt the certification of the election based on this big lie, this totally contrived idea that somehow the election was rigged. I mean, truly, it was bananas. And I could talk at length about what we saw that day in terms of the militants, in terms of the role of the president. But as you pointed out, the tech companies, who for years had told us there was a political exemption, that they wouldn't necessarily take action when presidents or other politicians said things that were outrageous, committed slander or incited violence on the platform, suddenly, because of public pressure from groups like Stop Hate for Profit and the ADL, and internal pressure from their own employees and, I believe, their boards, took action instantaneously, overnight. All their other concerns sort of fell by the wayside. I think it was really important that Facebook and Twitter and YouTube took down President Trump. That was critical; we called for them to do that, and I'm really pleased that they did. We had called on them previously to take down the militia groups and to take down QAnon content, and I'm really glad that they did; it had a huge impact. You know, we've seen QAnon content on Twitter drop 97% just days after the attack, because the company actually took action.
So I think it really laid bare the myth that somehow, some way, the companies couldn't do anything about this. Clearly they could, and they did. And I think their services, and society as a whole, are better for it.
In terms of how some of those groups that attacked the Capitol came together, I'd love to talk a little bit about the role of algorithms. Something that was particularly shocking and confusing for people trying to make sense of the events at the Capitol is that we saw hardcore white supremacists; folks who were anti-maskers and believed a lot of the COVID misinformation floating around online; QAnon; well-organized militias that had been organizing out in the open on platforms like Facebook for years; and also kind of run-of-the-mill MAGA supporters, all converging into this 10,000-plus-person event. I would love to know what you think about how much platforms are responsible for the algorithms that arguably bring these people together or connect them to begin with. Tech companies like to defend themselves by saying, well, you can turn on the television and see all the same stuff on Fox News. But the fact of the matter is that the algorithms that drive their businesses work really differently than traditional television. So I'd love to know what you think about how we can specifically address the problem of algorithms driving people toward extremism.
There is so much to unpack in what you've laid out; in many ways you've gotten right to the heart of the problem. So let's acknowledge a few things. Number one, social media is indeed media, like broadcast media, like print media, like outdoor media, like radio. But it is also inherently different, and it's inherently different because algorithms give it a kind of interactivity that those other flat, two-dimensional mediums just lack. So it is a kind of media, it is animated by advertising, right? That is how it works. And yet the algorithms electrify it and give it a kind of life of its own that these other mediums just don't have. I also should point out that the comparisons to broadcast media are appropriate, and oftentimes we see these tech companies use them only when it's convenient to them. But the reality is, you can't turn on Fox News, as much as I personally dislike Fox News, or cable news generally, or open up the New York Times, and see the things there that you see on the social media services. I mean, there's a reason why Alex Jones doesn't have a show in primetime on CNN, right? There's a reason why Richard Spencer doesn't get op-eds published in The Washington Post. There's a reason why these traditional media companies go to great lengths to observe certain standards of decorum. Why? Because under the law they are liable for what they print or what they broadcast. The only category of media that bears no responsibility, no liability, is social media, because of Section 230 of the Communications Decency Act. So it's worth noting that we need to have a serious societal conversation about amending 230 so that these companies behave with the same kind of accountability that we see all other media exercise. That's for sure. Now to the specific question of algorithms.
So look, algorithmic amplification has a lot to do with the dilemma we find ourselves in, and extremists are, if nothing else, innovative; they exploit loopholes. And indeed, they have used the kind of libertarian, laissez-faire attitude of the companies to their own advantage for a number of years. From Facebook groups to YouTube channels to accounts on Twitter, let alone all the other platforms, they've exploited them with tremendous adeptness. What's interesting is, and many people I know have seen this, I've seen it myself, you may have too, I'm sure your audience has: it wasn't too long ago that you might watch a YouTube video and, one click or two clicks over, suddenly find yourself down the rabbit hole of some crazy QAnon or anti-vaxxer or, you know, Boogaloo content. Same thing on Facebook. When you search a piece of content, suddenly you're served up Facebook groups that may be from accelerationists, or white supremacists, or other racists and anti-Semites. But the reality we've got to confront is that algorithmic amplification isn't a right, if you will; it isn't a privilege that should be accorded to everyone. It's a responsibility the companies have, to make sure that their products give users what they want, but that the products are not abused, and that the users themselves are not abused by being served the kinds of things to which they might be susceptible. So we deeply believe that algorithmic amplification is very problematic. That's why we've been supporting legislation on Capitol Hill that would finally address this. If you could basically turn off the algorithms for some of these worst elements, Taylor, you could have curbed these issues a long time ago.
To get into that a little bit, you mentioned Section 230, and that's one of the many policy avenues right now that could address some of these problems. I think the tech industry knows that regulation is coming. Congress moves slowly, but at this point it seems pretty inevitable. It's one thing that is a bipartisan issue, which means there could be more activity on it. And it's a big conversation we're having: we're seeing federal antitrust suits, we're seeing state lawsuits, we're seeing all kinds of regulatory pressure. And the big one is Section 230. I know the ADL came out and supported the SAFE TECH Act, a bill introduced recently to reform Section 230. I was curious about the ADL's decision to support that bill, and whether you feel it's kind of the perfect bill, or one step on the way to the legislation that you want. The SAFE TECH Act does seem to have a somewhat disproportionate focus on paid content and advertising. So I'm wondering what the ideal bill would look like.
Well, that's a really good question. So, a few things. Look, the ADL has been literally fighting for a more just country, fighting for civil rights, fighting hate, for over 100 years. And we are fierce, ferocious defenders of the First Amendment. But freedom of speech isn't the freedom to slander people, right? Freedom of expression isn't the freedom to incite violence against individuals or groups of people based on their immutable characteristics. And so I think what we've seen is the First Amendment being warped and weaponized online in ways that are completely beyond the pale of what the founding fathers ever could have imagined. So 230 does need to be addressed, and I think the Warner-Hirono bill that you pointed out is a step in the right direction; it is definitely not sufficient. And I also think you were wise in how you framed your question: it might actually not be the federal government but the states that push the companies to do more. We've seen California do some innovative stuff on privacy that's pushed the companies, and I think you may see more state action. What does the ideal legislation look like? Well, that's hard to say. But I do believe that my friend Roger McNamee is onto something when he talks about the core of the business model itself. You know, Shoshana Zuboff from Harvard has written about this: when the product is free, you're the product. So really thinking about the core business model of these companies, and, honestly, the incredibly anti-competitive nature of much of what happens here. Again, we come back to Facebook. And look, I know a number of the executives at Facebook; I know they're well-intentioned people. The service has connected people across culture and geography and done a decent amount of good, for sure.
But I do think we have to reckon with the wreckage that's actually been left in its wake, and the reality we're all still struggling with because of what Facebook has sort of wrought upon the world. So in my mind, the reason why they haven't dealt with these issues more effectively is what I would characterize as Mark Zuckerberg's sort of monopolistic indifference. He doesn't need to change, because the markets haven't made him. And he just hasn't exercised the kind of moral leadership that you would see in any other corporate environment, in large part, again, because he's not liable for that which gets published on his platform. So, again, I think antitrust is a part of it, the states are going to be a part of it, and Warner-Hirono is a step in the right direction.
I like how you described it as wreckage: all of this stuff we're going to have to be accountable for, that we're going to have to sort through as a society to make sense of and move forward. I think there's something interesting, and probably disturbing, about the tech industry in the sense that it doesn't like looking backward, kind of by definition. You fail, you move forward. It's the hustle economy: you raise a new round, whatever, you don't look back. Do you think that inherently is going to make this whole process of reckoning really difficult?
You know, a year ago, for example, if you went on Facebook, you could see militia groups, hundreds, possibly thousands of them, organizing easily, in plain sight. They used Facebook as their primary recruitment network. We saw that a year, year and a half prior with the Proud Boys, as we've reported. But then Facebook makes a tweak to their platform policy and it's like, oh, we've dealt with this problem, we're moving on. But this is all really recent history. How do you think we grapple with all of that?
Well, it's funny. I think Silicon Valley is almost rooted in this American tradition of Manifest Destiny, right? Conquering the frontier. It's ironic, but altogether appropriate, that it's happening in California, the land of the Gold Rush, where people went to make their fortunes. And now they're doing it today in Silicon Valley, in tech, and even that's continued to evolve: it was the internet 15 years ago, social media five years ago; today it's Clubhouse, and I don't know what comes next. But I do think the whole industry needs to undergo a serious self-examination. And you've seen people who've come out of the industry, I think about Chris Sacca, the former Googler, I think about Alexis Ohanian of Reddit, and a few others, start to grapple with these issues. You know, my friend Tristan Harris at the Center for Humane Technology has also done this; the film The Social Dilemma really plays this out. Whereas Silicon Valley often has a very short memory, the reality is that there will be a long road ahead of us. And if we don't wrestle with these demons, and if we don't sort through the wreckage of what they've wrought, I think the future is very unclear. So I'm optimistic, or maybe I should say I'm hopeful, that the industry will find its conscience, maybe led by some of the people I've mentioned, to really wrestle with these issues and, once and for all, create a better, safer, more secure kind of social environment for all of us.
I think that's something we could definitely all look forward to. Well, thank you so much for your time. I think we're about out of time for today; obviously this is a huge topic, and we could talk about all of it forever. But your insights have been really useful, and we really appreciate your time.
I appreciate it. Thank you very much. Thank you.
Missing from some of these conversations have been some of the biggest companies in the tech industry. I'll be discussing with representatives from these tech titans what it means to make real progress on diversity in tech. Please welcome Netflix's Wade Davis and Uber's Bo Young Lee. Also, be sure to ask any questions you may have via Slido, and I'll be sure to get them answered.