1030-Fri-062521-PR-Fact Checking During the Pandemic-FINAL
3:24AM Jun 18, 2021
Welcome to the Online News Association 2020 conference panel. I'm Baybars Orsek, the director of the International Fact-Checking Network at the Poynter Institute, and I'm joined today by Karen Goldshlager from Facebook and Aaron Sharockman from Poynter's PolitiFact. Today, we'll touch on one of the largest efforts to combat misinformation at Internet scale: the third-party fact-checking program facilitated by Facebook, which relies on the input of fact-checkers all over the world. We'll discuss this program from the standpoint of addressing the pandemic, and we'll hear from both Karen and Aaron on how they have managed to help users during this unprecedented time. Please kindly note that being a verified signatory to the IFCN's principles is a necessary condition to be part of the third-party fact-checking program, though it is not a sufficient one, since Facebook reserves the right to choose which fact-checking organizations it works with. Moderation is a challenging task in such a vibrant discussion as far as time management goes, so I'll do my best to make sure that we can cover the essentials of the program, both from a provider perspective, as in the case of PolitiFact, and from the platform perspective of Facebook. I hope you will enjoy this panel, and we will do our best to make sure that the panelists have the opportunity to touch on the most essential aspects of this program. Without further ado, I'd like to kindly ask our panelists to introduce themselves. Aaron, can you please introduce yourself to our audience?
Yeah. Hi, my name is Aaron Sharockman. I'm the executive director of PolitiFact. I've been with PolitiFact since 2010. I'm a journalist; I've written about government and politics all my life. And since 2015, I've kind of shifted a little bit more to managing our partnerships at PolitiFact, including our work with Facebook. I'm excited to be with you all today and to talk about our work on the platform.
That's wonderful, thanks so much, Aaron. Karen, can you also please introduce yourself to our audience?
Yeah, happy to be with you guys today. Hi, everyone. I'm Karen. I lead our news integrity partnerships at Facebook, which includes our fact-checking programs. I previously worked at the New York Times. And the last time I was at ONA, it was 2015, and I was speaking on behalf of a startup called Beacon that was building crowdfunding tools for journalists. So I'm happy to be back today to talk about our work with fact-checkers at Facebook.
Great. Thanks so much, Karen. Aaron, you mentioned a little bit about your background with PolitiFact. But just to get us started, can you also give us some background on the origins of PolitiFact, and how you became interested in fact-checking, particularly fact-checking claims on social media?
Yeah. Well, so PolitiFact started in 2007, and really, the concept behind it was, I think, that readers want to know what's true or not, and they didn't necessarily have the time themselves to figure it out. And I also think there was a layer, in that period of time, of journalists who weren't necessarily answering those questions for readers themselves, because we were kind of in a period where there was a lot of thought about providing all sides to a story and then letting a reader come to their own conclusion. And while I understand the concept of fairness and making sure people have access to all types of perspectives, I think PolitiFact was created with the idea that, no, there is an answer to a lot of these very verifiable questions, and we should provide that answer, and we shouldn't make it hard for a reader to find it. So we started by fact-checking only politicians in the 2008 presidential election. From there, we kind of branched out and started fact-checking local politicians: governors, senators, congresspeople. And then in 2013, we started fact-checking television pundits, media pundits, bloggers, radio talk show hosts. The concept there, again, is: a politician, you can hold accountable. If they lie enough, you can vote them out of office. A host on Fox News or MSNBC or CNN, you really don't have a lot of control over, as a citizen, and so we thought we should hold them accountable. I really think for PolitiFact, we did not fully understand the way viral misinformation spread and the effect it could have until after the 2016 election, or in the 2016 election. So during that election, we saw posts online that we knew were fake, were false. The most famous one is probably that the Pope endorsed Donald Trump. Lots of people saw that story. We saw that story. We assumed, and I assume everyone watching this today knows, that that story is false. We assumed everyone kind of got that. We were wrong, I think, in that assumption.
And so what we did, really, after the 2016 election, is we started to pivot more towards this type of viral misinformation that was always there. In 2008, in 2010, in 2012, we fact-checked viral misinformation; it was just called chain email at that time. The process has gotten more sophisticated, and I think we've had to really expand our efforts and energy to fact-check claims online. I'll end with this: For our first, what, seven, eight years, from 2007 to 2014/2015, probably 80% of what we fact-checked came from a politician's mouth. Today, probably that same number, 80%, is a viral piece of social misinformation, something coming from Facebook, Snapchat, YouTube, TikTok, Twitter, wherever. And so we've really recalibrated based on, I think, how people are receiving information.
This is really helpful, Aaron, and I really appreciate it. I think none of us would have imagined those chain emails could have turned into something at that large a scale, and here we are, I guess. Karen, the third-party fact-checking program is probably one of the most covered and reported-on initiatives against misinformation. But for the sake of our audience who may not have that background on the program, and even for those who have had the chance to read through what the program entails, just to make things a little more comprehensible for everyone, can you give us a 101 on how the program actually works?
Yeah, so I'll start really high level, but I'm happy to go into more detail. Our fact-checking program is one piece of our much broader integrity efforts at Facebook. And when we talk about those integrity efforts, we often talk about three main approaches: remove, reduce and inform. So in understanding how the fact-checking program works, it's useful to understand those broader approaches, and I'll start with remove. We remove content when it violates our community standards, and those are the rules about what you can and can't post on our platform. So for example, we've removed things like fake accounts or hate speech. For problematic content that doesn't violate our policies, but that our users have still told us they don't want to see a lot of, we focus on taking action by reducing its spread so fewer people see it, and then surfacing more information so that people can really decide for themselves what to read, trust, and perhaps ultimately share on our platform. And those last two buckets, reduce and inform, are really where our fact-checking program sits, and I'm happy to go into more detail if that's useful.
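[Editor's note: the "remove / reduce / inform" split Karen outlines can be sketched as a simple decision function. The three categories come from the talk; the function, field names, and logic below are hypothetical, for illustration only, and are nothing like Facebook's actual systems.]

```python
# Toy sketch of the "remove / reduce / inform" integrity approach.
# Categories are from the panel; everything else is hypothetical.

def moderate(post):
    """Decide which integrity action(s) apply to a post."""
    if post.get("violates_community_standards"):
        # e.g. fake accounts, hate speech -> taken down entirely
        return ["remove"]
    if post.get("rated_false_by_fact_checker"):
        # output of the fact-checking program -> demote in feed and label it
        return ["reduce", "inform"]
    return []  # no action

# Example: a post rated false by a fact-checker is demoted and labeled,
# not removed.
actions = moderate({"rated_false_by_fact_checker": True})
```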
Thanks so much, Karen. I want to stick with you, if you don't mind. Can you also explain a bit more how Facebook partners with the third-party fact-checking organizations in the third-party fact-checking program, and how users ultimately interact with the fact-checked content?
Yeah. Aaron mentioned 2016 as kind of a pivot point for PolitiFact, and that's really where our efforts began as well. We started the fact-checking program at the very end of 2016, and since then, we've been building this program that combines the journalistic insights of fact-checkers with the technical scale of our platform to reduce the spread of that exact kind of viral misinformation that Aaron was talking about. So to give you a sense of the size of the program today, we work with more than 80 fact-checking organizations in more than 60 languages to reduce the spread of this viral content. And, as you mentioned earlier, Baybars, this is one of the biggest efforts by platforms to work directly with fact-checking organizations and integrate their input directly into our integrity efforts. So just to break down a bit more, step by step, what the fact-checking process looks like: The first challenge that we have is that there are billions of pieces of content posted to Facebook all the time. So we need to identify a pool of content that might be misinformation, and to do that, in many languages, we use machine learning models. And these models look at things like: Does the content include a fact-checkable claim? Does it look similar to something that's been fact-checked in the past? And then we also have user inputs to these models. So one of the things the model looks at is: have users flagged this post in their own newsfeed as being potentially false, or have users commented on the post saying things like, "This can't be true," or "I can't believe this," which we can detect through natural language processing. From there, we surface this potential misinformation to fact-checking partners. One thing that's really important to note is that fact-checking partners then independently decide what they want to fact-check from this pool of content that we've detected and surfaced to them through our technology.
When they choose a post that they want to rate, they'll begin their reporting process. I probably don't need to tell the ONA audience what that process looks like, but it's worth calling out that fact-checking partners are calling primary sources. They're consulting public data and documents. They might be analyzing videos or images for distortions, and within our partner set, we have a number of different organization sizes represented and a number of different focus areas as well. Ultimately, once that reporting process is done, fact-checking partners write an article so you can actually see their findings, and they apply a rating, which is what lets us take action. So they might say, this image we found to be altered, or this claim is false or partly false. And then once the fact-checking partner rates something as false, we significantly reduce its spread. And in terms of what users might see, there are a few different ways that we surface this information. First, we will notify people who have already shared that content to let them know there's new reporting available. If someone tries to share it, we will show them a pop-up which links to the fact-checker's article. And if people come across this in their newsfeed, we'll prominently label it, again giving people the option to look at the fact-checker's article and see what they found in their reporting. So that's kind of what the process looks like, end to end. I'll also just note that we've found these labels to be really effective. Just as one example, in March 2020, at the beginning, or spike point, of COVID in many of the countries where we live, we were able to label 40 million posts based on about 4,000 articles by our fact-checking partners, and we found that when people saw those prominent labels, 95% of the time they didn't click through to view the original content. So that gives us a lot of confidence about the ultimate output of this program from a user-impact perspective.
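[Editor's note: the detection signals Karen describes (user flags, skeptical comments detected via NLP, similarity to previously fact-checked claims, virality) could in principle be combined into a single priority score for surfacing posts to fact-checkers. The sketch below is a toy illustration under that assumption; the weights, names, and keyword matching are hypothetical and far simpler than any real system.]

```python
# Toy sketch: combine misinformation-detection signals into a priority
# score, so higher-scoring posts are surfaced to fact-checkers sooner.
# All weights, caps, and names here are hypothetical.

SKEPTICAL_PHRASES = ("this can't be true", "i can't believe this", "fake")

def comment_skepticism(comments):
    """Fraction of comments containing a skeptical phrase (stand-in for NLP)."""
    if not comments:
        return 0.0
    hits = sum(any(p in c.lower() for p in SKEPTICAL_PHRASES) for c in comments)
    return hits / len(comments)

def priority_score(user_flags, comments, similarity_to_known_hoax, shares):
    """Weighted combination of detection signals, each capped at 1.0."""
    virality = min(shares / 10_000, 1.0)           # share-count signal, capped
    return (0.4 * min(user_flags / 50, 1.0)        # user "false news" reports
            + 0.3 * comment_skepticism(comments)   # skeptical-comment signal
            + 0.2 * similarity_to_known_hoax       # match to prior fact checks
            + 0.1 * virality)

score = priority_score(
    user_flags=30,
    comments=["This can't be true!", "wow", "Fake news"],
    similarity_to_known_hoax=0.8,
    shares=12_000,
)
```

A post with many user flags, skeptical comments, and a close match to an already-debunked claim would rank near the top of the review queue, while the fact-checkers still decide independently what to check.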
Thank you, Karen. And I guess it will also be helpful for us now to hear from PolitiFact's Aaron Sharockman, from a retrospective point of view, on how the program looks right now, because PolitiFact was one of the first third-party fact-checkers to join the program back in 2016, immediately after the elections. Aaron, thinking back to that time, could you talk a little bit about why PolitiFact decided to become a Facebook fact-checking partner, and how misinformation on social media has evolved since 2016? You talked about the early days of viral misinformation, from old chain emails to what we have right now. Can you specifically talk about that evolution from 2016 to today?
Yeah, sure. You know, I think the first thing I would say is, we were among a group of fact-checkers who were interested in talking to social media platforms, not just Facebook, about how we identified a problem. I think the social media companies identified a problem, and we wanted to work on a solution together. And I actually think, at the time, and still a little bit today, there's a little bit of backlash, right? Because some people would say, "No, your job is... you just report the news or do the fact checks, and let Facebook or Twitter do their thing." And we thought that, as fact-checkers, we offered a unique perspective on this specific type of work, and that we could actually offer a really good service. And so we kind of fundamentally thought from the very beginning: we'd rather be part of those discussions, so that we might help Facebook shape what its program looks like, or TikTok or YouTube, rather than let them come up with a solution themselves. And in those early stages, early days, it was much, much different. I think there were five original partners, maybe six, and the program was, frankly, because it was so new, probably completely ineffective. Let's be honest. I think we were fact-checking, at that point, kind of hoax generators. You can create a fake tweet, and so people will do that, or create a fake story, and people will do that. And we were fact-checking that. There was no way for Facebook to match that content to other pieces of content. There was no really sophisticated way for us, as fact-checkers, to see something online that was going viral and say, "Hey, we need to fact-check this to kind of stop it." It was very much an early-stage version of a product. To Facebook's credit, however, it's probably something that, if they were building a dating app or whatever, they would have worked on behind closed doors for two or three years before it went public.
They kind of took us public very quickly. I mean, basically, we started talking to the company in November of 2016, and I think we were fact-checking in December, so they ramped something up really quickly. Certainly, the program has become what I would simply say is way more sophisticated. It gives us, the fact-checkers, a lot more opportunities to, one, spot viral misinformation that we might not have seen without Facebook's computers. But in addition, it allows us to do our job, I think, very effectively: one, spotting things that we see that Facebook might not have seen, but then two, letting us make the decisions about what we fact-check and what we don't, how we rate a piece of content, and how that process works. And basically, what I would say to people who are kind of on the outside, asking what does PolitiFact do with Facebook: to me, Facebook is a really good tip generator as far as things we should be looking at, because we get signals about the virality of a post. We now have other ways of knowing if something posted in the system might be false, or something like that. We have some kind of indicator there that can help us. And so it allows us to make our process more efficient. So when people talk about fact-checking, I say a fact check takes one to two days to do; it has for the longest time. It's not an easy process. Even when you see something and you're like, I know that's false, to demonstrate and verify that something is indeed false takes some work. So this program allows us as fact-checkers to cut down a couple of steps, because we don't have to go out every day scouring transcripts or scouring the internet looking for posts. They kind of are presented to us to look at and curate. It's important. And you mentioned this, I think Karen mentioned this, but we still choose everything we fact-check. We still choose how we rate a piece of content.
That's still our choosing, so we still retain control. It's just that the process on our end is a little more efficient now, because we're starting to see things maybe a little sooner, a little earlier, using Facebook's algorithms and their computing power, which a fact-checker like us (we're the biggest political fact-checker in the United States, and we're 15 people) could never afford or develop ourselves.
That's really important, Aaron. And I think it's still, to some extent, like searching for a needle in a haystack when it comes to finding pieces of misinformation to fact-check. But at least those signals that you receive from Facebook on a daily basis, basically in real time, help a lot in prioritizing what to fact-check, because detection is a really important tool in fact-checkers' workflow. Just related to that workflow: I am wondering, Karen, if you can help us understand better how Facebook, at the beginning of 2020, was continuing its preparations to protect against misinformation related to several elections around the world, including the 2020 presidential elections in the US. And when the pandemic hit the US, how did your team adjust to address misinformation stemming from it? And how did you see Facebook's fact-checking partners adapt to that?
Yeah, so I'm smiling because there have been many times in the program where we thought we were all going to work toward preparing for one event, and then a completely different event ended up being what we spent all of our time on. And I think COVID was a good example of that. I think the good news is that we really built our strategy to be nimble and to acknowledge that misinformation is like the news cycle: we're always going to have to be responding to new claims, and the systems we build are always going to have to take in those new claims. So Aaron talked a bit about the development of the product and how it has gotten more sophisticated. I think we also want to make sure our systems can think about and account for new claims as they arise. And certainly when COVID hit, there were many new false claims that we had never seen before. To talk about how we handled that in different parts of our approach: First is the remove part of our strategy. We remove misinformation when it could contribute to the risk of imminent physical harm. We've had that policy in place for several years, and we began applying it to claims about COVID. And to do that, we worked really closely with leading health experts to understand the harms that were stemming from misinformation. So just as an example, early on in the pandemic, we were really focused on fake cures, like the hoax that drinking bleach can cure COVID. Fast forward to December of 2020, when the first vaccines were approved: we really started working with these experts to understand what hoaxes were spreading about vaccine side effects and vaccine safety, and we started adding some of those claims to the list of content that we remove.
We also started removing conspiracy theories, like, for example, that vaccines were being secretly tested on certain groups, all as part of this approach to remove misinformation that could lead to physical harm. At the same time, and in parallel, our fact-checking partners were looking at claims that didn't violate our policies but that we wanted to make sure didn't go viral, and we wanted to make sure we were surfacing more information to people. And that part also really caused us to think on our feet, even more than normal, about how to support the fact-checking ecosystem during a time like this, when fact-checkers were also learning how to do their work from home for the first time while dealing with this giant influx of misinfo. So one of the things we did, as an example, is we launched a grant program. We did this in partnership with the IFCN to help fact-checkers pursue editorial projects that would advance their ability to debunk misinfo about the pandemic in particular. And just to highlight the global nature of some of these programs: there were more than a dozen projects that came out of that. For example, fact-checkers in Indonesia launched WhatsApp groups focused on spreading facts to combat the groups that they knew were spreading health hoaxes. A fact-checker in Italy worked with a local hospital to launch a chatbot on their website, where users could actually ask questions about what they were seeing and get responses. And fact-checkers in Congo developed a public SMS phone number that people could text to get back a list of fact-checked articles about the pandemic. So each fact-checker knew that they had to respond in a unique way, based on how misinformation was spreading in their country, and we tried to create opportunities to enable that work.
Thanks a lot, Karen. I think you touched on a very important topic about the importance of innovation when it comes to doing fact-checking, in particular distributing facts to the people. And I was just wondering, Aaron, can you elaborate a bit more on how you adapted to that new need to address health misinformation? You rightly said that fact-checking is a time-consuming effort; it usually takes a day or two, and that's probably for the cases where you have the expertise. But when it comes to health-related misinformation, how did PolitiFact manage the transition to a niche, particularly to address health-related misinformation, when I assume you didn't have resources like doctors or health experts on your team at the time?
Yeah, you know, it's definitely a challenge. PolitiFact is a team of journalists, so we have a lot of knowledge about a lot of things, but we're not experts on any one given thing, right? And you layer on a completely new virus. So never mind journalists: even the top medical experts in the country, in the world, were still learning about what was happening. A couple of things I would say. We looked at this, and continue to look at this, as obviously something that affects public health. Not to be too dramatic, but it has life-and-death consequences. And as such, I think we need to be very mindful, and we have been, about what we're fact-checking and the burden of proof if someone's claiming something. You know, one of the hardest things is that it's hard to prove a negative, that something doesn't work. So let's take hydroxychloroquine. I have been in meetings and talked to people who said, my wife, or my partner, or my spouse took hydroxychloroquine and it saved their life from COVID. It is hard to have a conversation with a person where you say, "I don't deny that those two events happened, that your partner took hydroxychloroquine and your partner is still alive. I don't deny that. But what I do want to point out is that there's a body of meta-research, this big research, that shows that might not be the reason. The first thing might not be the reason for the second thing." We tried, I think, to approach this with humility, in the sense that we are trying to answer questions for readers. We are going to acknowledge that we don't have all the answers in the course of doing so, but also still put a really high burden, a high threshold, on the speaker making the claim. That can be controversial at times. So, for instance, claims about how the virus was created are, I think, a good example. People cannot definitively say one thing or the other.
And as a fact-checker, I kind of lean in and say, if you say it was definitely created in a lab, you're not right. I think we can also probably say that if you say it was definitely not created in a lab, you're probably not right. And so I think we have a high standard at PolitiFact. In the court system, you're innocent until you're proven guilty, right? At PolitiFact, you're kind of guilty until you're proven innocent. And I think with COVID, that, to me, makes a lot of sense, because you're dealing with really important things about how someone might make decisions about their health. And so the stakes are pretty high, and I think that's kind of how we tried to attack it. Luckily for PolitiFact, we have a history of working with a lot of really great experts in a wide variety of fields. And so I think, over time, virologists and immunologists and anyone working on COVID-19 came to trust us as a place where they could answer questions, and we would relay that information without spin or any kind of coloring. So over time, we built up a good bank of resources, but it definitely did take time. The last thing I would say about this is, with some of our work, you might look at it and say, "Why did you fact-check that?" So, for instance, drinking bleach. Drinking bleach cures the coronavirus. I think a lot of people in this virtual room would probably say, "You did not need to fact-check that, because everyone would know that's not true." I think we have to break down that barrier: there are a lot of people who are passive news consumers, who are going to see something online and not necessarily research it and try to figure it out, and they might get subconsciously affected by this stuff. And the best part about the Facebook project is... I want them to read my story. I wish they would click on my fact check and read the whole thing.
But by doing that fact check and giving that information back to Facebook, Facebook can then take action to reduce the spread of that type of misinformation online, so it's less likely that people see it in the first place. And so I'm providing information as a fact-checker, and that's great, but I'm also giving information to Facebook so they can take action that lets fewer people see the garbage. So you might come to PolitiFact and be like, "Oh, I see this really in-depth story about health care policy, and then one about tax policy, and then this, honestly, maybe stupid-sounding conspiracy about COVID." Just realize that there are different rationales as to why we do that. It's, one, so you can get the information, but two, so that maybe fewer people see that bad, offensive post in the first place.
This is really helpful. Because if there were two questions most asked of the fact-checking community, one would be, "Who fact-checks the fact-checkers?" And the second would probably be, "Well, how do you decide which content to fact-check?" So you, to some extent, touched on how PolitiFact decides editorially what to fact-check, and that is, I guess, based on the judgment that you have as a journalist on the virality and the danger of that particular claim circulating on the internet. So any effort to diminish that distribution is a very positive step. Maybe to help our audience relate to that, can you give an example of how readers have reacted to one of your fact checks? Or is there a story that you can share with us showing the impact of your fact checks on people?
Yeah, so stories, I have lots of them. (laughs) I would say, there is impact. Okay, so let's be honest, what would you want? You would want someone to write to you and say, "Aaron, I believed that there was a microchip in the coronavirus vaccine, but then I read your fact check, and now I'm convinced that there's not." We don't get a lot of reaction like that, granted. So what are we doing? I think here's how I measure and think about our impact. One is this Facebook program. For the first time, our work has a direct, measurable impact on the bad, harmful content, right? Let's go back to when we fact-checked Barack Obama versus John McCain. If John McCain or Barack Obama got a Pants on Fire, we could hope that people would be like, "Stop lying, John McCain or Barack Obama." And maybe they would vote for the other guy based on something we fact-checked, but we'd have no way of knowing that. And frankly, it's immeasurable, right? With pundits, in the first year of our pundit fact-checking project, we found 150 pundits who made a false statement. 150. Of the 150, how many corrected themselves? Six. Six out of 150. Not a really measurable or substantial impact. With this project, if we say something is false, all the people who shared that post will get our fact check, will get sent it. That is a great opportunity for us to try to reach those readers. Now, what is the reaction? In most cases, they are very mad, which makes sense. No one wants to be told they shared something that was false. But I do think a couple of things: they do read the story, or at least they try to, they start to, and I think that can help. So that's part one.
Part two is, I think we're working for that person who thinks they know what's going on in the world, but doesn't necessarily know how to talk about it, doesn't necessarily know how to interact with someone who might share or spread false or misleading information. We're trying to provide them with that ammunition: the ability to have that conversation with their uncle or their coworker or their colleague and say, "Hey, you know that thing you shared the other day? I'm not so sure about it, and here's why. Let's start a conversation." So I think we're providing that kind of ammunition to people in the middle of the road, of whom I think there are millions and millions in the United States. We're also kind of blunt-forcing, frankly, the people who share the false misinformation to say, "Hey, that's wrong. Here's the truth." I will say, one thing that we've done with our team is talk about that exact step. If someone's getting a notification saying, "You shared something that's false. Here's the fact check," we want to have a story that is sympathetic, or at least acknowledges that these people may not be sharing this because they're bad third-party state actors, right? They might be just normal people who thought this was true and got duped. And so we want to try to write with compassion. You don't want a headline that says, "I can't believe these idiots shared this post x," or "They're wrong again." We want to be thoughtful about why someone would believe this misinformation, and how we might have a conversation with them, in our words, that maybe convinces a couple of them. We're lucky to have tons of great readers who support our work in a number of different ways, and they thank us, most often, because they consider us nonpartisan, meaning they just want to know what's true, and they think they can trust us to provide that answer. And those are the people we write for. We don't write for partisans. They often will not like us.
And unfortunately, COVID as a topic has gotten very politicized, and so it's kind of filtered into a bubble. We, for the most part, stay out of that and work for the people who I think are most Americans, most citizens: people who think that everyone lies, that life is gray, not black and white, and that we all can use a little help understanding what's true.
Wow, thank you so much, Aaron. I think it was really helpful to conceptualize fact-checking as public interest journalism, because at the end of the day, you also help people to be better equipped and empowered with factual information, which can help foster a healthier public discourse as well. So I think that was a great example of how fact-checking serves that purpose. The last question I have is going to be for you, Karen, and I might also jump in with some insights that I have from working with fact-checking organizations all around the world. Obviously, we have spoken mostly about the U.S. today, and I hope that was useful for our audience, but fact-checkers globally have also been grappling with elections, crises around race, and pandemic events like coronavirus outbreaks that are constantly taking place in different parts of the world as we speak right now. So how does Facebook partner with the international community to address such unique misinformation events when they happen?
Yeah, great question. As Aaron mentioned, the program when it started looked quite different, in that it was a small handful of partners in the U.S. and Western Europe where there were a few elections coming up. I think we had a global roundtable early on, before I joined the company, where all of our partners fit in a small room, which is certainly not what our global events look like today. As we've expanded and gotten to the 80 partner organizations we work with today, I think the main thing we've learned is that there are certainly things every fact checking organization has in common, or shared experiences when it comes to their work in our program. But at the same time, if we didn't have fact checking partners on the ground where misinformation is spreading, speaking the languages that misinformation is spreading in, we really would not be able to respond to breaking news events the way that we can. And I think there are a couple of good examples of that. We have 10 partners here in the U.S. We also now have 10 partners in India, and they cover almost a dozen local languages there, which is really critical to reach different populations. It was so fascinating to learn from these partners during the recent COVID surge in India, long after COVID was mostly no longer surging in most parts of the U.S. Our India fact checkers were actually finding hoaxes specific to India that we'd never even seen here, and were able to consult with our teams on that. They were also playing a really active role in their communities: working with their local health authorities, or conducting trainings in newsrooms there to help people understand misinformation and how to spot it. So having that group of partners who were really focused on that country, or even on the language they work in, was so important to get us through that type of event.
And that's been true for any event you can name, almost anywhere. The other thing I'll say, which we haven't touched on but I think is a big, really globally focused evolution, is that we've had to think about the actual platforms people use to communicate. One of the things we have piloted and really grown in the past year is figuring out how we can work with fact checkers on WhatsApp. In a lot of the countries where we know misinformation is spreading, that is actually the most important platform where people are getting their information, rather than Facebook. And it's a really different platform. It's end-to-end encrypted. It's designed for one-to-one communication. So what fact checking looks like on WhatsApp is this: can we get users to actually share tips with fact checking organizations about what they're seeing in the WhatsApp groups they're part of? What we're trying to do as a company is think about tools that can help fact checkers field these tips and respond to them at scale. You can imagine someone forwarding a meme they got from a family group chat to a trusted fact checking organization, and the fact checking organization actually being able to reply back, in a one-to-one messaging setting, with a debunking article. So we're piloting this type of tip line in places like India, Brazil, Spain, Indonesia, and Mexico, where we know WhatsApp is really important, and we're really excited to learn about that from the perspective of global misinformation. So I'll stop there.
Thank you so much, Karen. That was a great conversation, even for someone like me who spends his time working closely with fact checking organizations around the world, following the third-party fact-checking program, and interacting with fact checkers like Aaron all over the world. It helped me to see how misinformation is a complex cycle at Internet scale, but at the same time how fact checking is a personal and journalistic effort to produce those fact checks and inform people. Because at the end of the day, I think it's fair to say that our job as fact checkers is to help people be better informed and empowered with factual information. And to Aaron's point, I think the Facebook partnership is very helpful for fact checkers on many different fronts, but especially for detecting misinformation, because there's a whole ocean of misinformation out there, and any signal or guidance to see what is going viral is very helpful. Many fact checkers also use tools provided by Facebook, like CrowdTangle, to see what is going viral on the internet. All those efforts add up to empower fact checkers to see how information flows. So I really thank our panelists, and I would just like to ask if they have any final thoughts to share with our audience before we wrap up and transition to the Q&A. Aaron?
Yeah, I would simply say that for us, we are learning every day to get better as a fact checker. And I think we're also learning how to work with tech platforms. That's one thing that's interesting here: we were a very independent organization, and so we've been building these kinds of integrations with Facebook. We have a partnership with TikTok. We have relationships with Google and Snapchat. We have had to learn how to interact in these spaces, which is new to us, and they present different demands. I think the biggest demand, and this isn't Facebook demanding it of us, is that generally everyone wants our information faster, because in these moments of crisis, the quicker you have a fact check, a professional piece of journalism that verifies or debunks a piece of information, the more helpful it is. So we've had to work really hard on making all of our processes more efficient to respond to these kinds of crises in the moment. Some of them you can plan for. The perfect example is the "Plandemic" videos from last year. The first "Plandemic" happened, and no one was really ready for it because it kind of just dropped, and we were all scrambling. It took us, I think everyone, the whole world, a couple days to really figure out what had happened. But by "Plandemic" two, we were prepared, and I think most people probably don't remember "Plandemic" two, because fact checkers did a lot of work to be ready, working with Facebook and other social media companies, to slow the spread of that video. One quick story: I was thinking about the 2020 election, and we spent a lot of time in September and October thinking about what might happen after November 6. And frankly, I guess I was just too naive. I just thought it wouldn't be this bad, right?
So as we were talking to folks, they said, "Okay, you've got to be prepared to be going until December, at least through the end of November." And I said, "Oh no, it'll be fine." And I was amazed by the amount of misinformation that spread, starting in the hours and then the days and the weeks afterward. For us, and for our team, I'm just really proud of how, in a year in which every day was a crisis or felt like it, they were able to keep coming back and keep going. I don't know how many fact checks we published in November of 2020, but I know it was an order of magnitude greater than anything we've ever done, because the need was there. And so we've learned. I think we handled that situation about as well as we could have. Generally, it's been nice to have a partner with the technology abilities and the ability to have an impact that helps us do our work more efficiently, have more reach, and hopefully more impact. Sometimes there's friction, right, but that tension is okay. And so we've been excited. We've been doing this for five years, and I don't think we're stopping, so that's great.
Thank you, Aaron. Karen?
I'll just piggyback off what Aaron said. Figuring out how to collaborate with many people who work at different companies on issues this complicated is not straightforward. But I think it's a really interesting question to think about: what is the role of a tech company here? Building technology, thinking about ways to make the process more efficient, providing transparency about the signals we do have about content, and then thinking about the role of fact checking organizations in providing that journalistic input, and finding a way for all of that to work together. It's a really fascinating thing to work on. So I echo everything Aaron said, and thanks for giving us the opportunity to talk about it a bit today.
Great, thank you so much. And I think we are just on time, so I'm really happy, as a moderator, to have been able to manage the clock. It's always a challenge, especially with speakers like you, Aaron and Karen, who have a lot to share with our audience about how challenging it is to tackle misinformation at scale, and who can give our audience an insider look at how fact checking is done on a daily basis in such an established and respected newsroom as PolitiFact. I really appreciate you taking the time and walking our audience through these steps and policies. I hope it has been a useful and fruitful conversation for those who are watching, and I wish you all a great rest of the conference. Thank you so much for watching.