People Are Not Equal to Bots - or How Researchers Delegitimize Social Movements
12:00AM Jul 27, 2020
Good morning, good evening, or happy whatever time of day it is to you, wherever you may be. Our speaker from Germany has just joined us and is working on getting his computer up and running. Welcome to Michael Kreil. He is a data scientist from Berlin who works with an open data journalism agency, looking at how to tell stories with data. He has been working in data science for quite a number of years now, and for the last few years he has worked extensively with Twitter, looking at the way that people tweet and at the question of bots versus humans. I'm sure you've read the brief for the talk, and I think Michael is ready to take it away. So with no further ado: welcome to HOPE, Michael, and "People Are Not Bots".
Hello. Yeah, my name is Michael. For security reasons I don't have a camera today. Just a joke; we have technical problems. But the slides will be enough. And it's quite hard, because right now it's actually two o'clock in the morning here; I drank a lot of energy drinks to be fit enough to give this talk. As you can see, the talk is "People Are Not Bots", or how researchers delegitimize social movements. I've also added a URL where I'll upload the slides; if there's any kind of data or question, I will add it to that GitHub repo. About me: I'm a data scientist and a data journalist. I've worked for the last 10, 20 years in different areas on data, and you can see some of my work, usually about maps. But what I really love is working with huge amounts of data. And when, at the end of 2016, all these stories came up that social bots were trying to influence elections and the Brexit referendum, I really fell in love with the idea, because it was a perfect storm: I could fetch the data, analyze it, make interactive data visualizations, and explain to the public what social bots are actually about. But unfortunately, this was a big failure, because I was not able to reproduce the scientific papers behind it. So what I do now is basically tell you the story of my failure, and I think it's important to share it. To begin with, there is the question: what are social bots? Basically, social bots are automated accounts. We all know examples like the Arduino kit that you stick into your plant, which tweets automatically if the plant is thirsty, and stuff like that. That is, of course, an automated account. You probably also know things like Instagram.
You can connect Instagram to Twitter, so whenever you post a photo on Instagram, your account automatically tweets it. There are other ways of connecting your Twitter account to other automation, like the PlayStation Network: "check out my new broadcast for Fortnite" or whatever. And here's an example of how you can connect it to Facebook, so that whenever you post something on Facebook, your account automatically tweets it. This automation is quite useful, and I think that's actually something special about the Twitter platform: it's easy to use the API to automate things. The New York Times is using that too, with tools like SocialFlow. They themselves say that SocialFlow "automatically delivers content to target audiences as soon as relevant kinds of conversation are gaining momentum." So, to be clear: the New York Times Twitter account is also a social bot, and we have to keep that in mind when we talk about social bots. To summarize: we have things like Instagram and Facebook piping content into Twitter; also IFTTT ("If This Then That"), PlayStation, Bitly; I saw something like a McDonald's campaign in Japan that I think used automation; but also media like the Washington Post, the New York Times, the BBC, the Guardian, and so on. Basically every newspaper has some kind of mechanism where the content management system pipes content into Twitter. One thing that I want to make clear is that sock puppets are not social bots. A lot of people mix that up, so just to be sure: you probably heard about the Russian Internet Research Agency. They used sock puppets to impersonate people or groups, for example the fake "Tennessee Republicans" account, or here, a Twitter account of a "Pamela Moore" who probably never existed. Twitter deleted up to 55,000 accounts of the Internet Research Agency. But these are sock puppets; these are not social bots. And there's more, because there is also a kind of fantasy of human-like social bots. There are rumors that they can mimic human behavior. One sentence I really like: "These elusive bots could easily infiltrate a population of unaware humans and manipulate them to affect their perception of reality." These are quotes from a scientific paper from 2015. Now we have 2020, and in six years no one has been able to demonstrate this kind of sophistication. When I talk to people doing machine learning, they say it's hard enough to build a somewhat interactive chatbot; having an army of human-like bots is more of a fantasy. It's really a narrative from James Cameron: the Terminator. They look like flesh and blood, but under the skin they are machines. That's the same narrative. And here is the picture for "Terminator: Rise of the Machines"; it's quite similar to the title of the scientific paper, "The Rise of Social Bots". By the way, the next page in that journal has a picture of Twitter birds looking like the Terminator. But it's still a fantasy. James Cameron could actually make a movie out of it: the rise of the social bots, they mimic human behavior, infiltrate the population, and manipulate our perception of reality. This is not science.
This is fiction.
Or, especially given that we need a scientific basis for these claims, I would even call it bullshit. The actual consensus of the scientific community is that social bots are accounts that use automation. Now let's have a look at these social bots. For example, during the Brexit referendum there was an article: "EU scientists beware of the Brexit bots: the Twitter spam out to swing our vote." This whole article is based on a scientific paper from the University of Oxford, from COMPROP, the Computational Propaganda Project. This paper, specifically, is by Philip Howard and Bence Kollanyi. Now, if you want to prove that social bots influenced the Brexit referendum, you need to do two things: first, show that these accounts actually had any kind of influence, and second, show that you actually found bots. So how did they prove that these accounts had influence? Basically, they just counted hashtags. And that's a problem, because reading a hashtag has no political influence on you. Also, the hashtag itself is not enough; you need the context. For example, I could tweet "support #Brexit" or "stop #Brexit": the same hashtag, but two different meanings. And if you look deeper into Twitter, you find a lot of messages like this one, addressed to a South Korean pop star: "I'm your biggest fan, I love you, heart heart heart. #JustinBieber #PokemonGo #Brexit #GameOfThrones." So just counting the number of #Brexit hashtags is not a good basis for calculating influence.
So, yeah, for many reasons; I could talk longer about that. For example, if a hashtag is trending, a lot of spammers use it, like the Justin Bieber fan with #Brexit, and spam is not political opinion. And just having a strong hashtag doesn't mean it has political influence; by that measure, K-pop fans might be the most politically influential group in the world, because they can push hashtags. But the core of the paper is: how did they find the bots? These Russian bots who supposedly manipulated us, which, as I mentioned before, some scientists believe can mimic human behavior. So what is the really smart criterion to find them? The University of Oxford defines heavy automation as accounts that post at least 50 times a day. That is the basic definition of social bots, and it is not a good criterion. Just tweeting 50 times a day: if you have, for example, a normal conversation with somebody else, Twitter becomes a kind of chat client. And if you look deeper into this criterion, I can name a lot of accounts that are even more active than 50 tweets per day. Starbucks: 150 tweets per day. British Airways: 280 tweets per day. The journalist Glenn Greenwald has more than 50 tweets per day, and Cory Doctorow (thank you for the keynote) is three times over the bot threshold, at 150 tweets per day. Also the New York Times, CNN, the NBA, the NFL, and so on. It would be quite interesting if these news media, if the journalists writing about a social bot paper, would check whether the news media themselves would be categorized as social bots. I can show you examples from the Brexit referendum: CNN used the hashtag #Brexit, voters used the hashtag #Brexit, and here is a tweet from Cory Doctorow using the hashtag #remain.
By this definition, these would all be social bots trying to politically influence public opinion. So I would call it bullshit. I'm sorry to say that, but this paper does not have any scientific value: it neither proved influence nor found social bots. And if EU scientists publish an article about it, they should check whether they themselves would be called social bots. Okay, but that was only the beginning. Everybody knows that the idea of social bots really took off during the US presidential election, and there were a lot of articles about that, from the New York Times, the BBC, the Washington Post. At the end of 2016 and the beginning of 2017 I checked what kind of news articles I could find. I think it was about 40 articles; in this chart every dot represents one article, grouped by outlet, so for example the BBC published three articles, and so on. Then I read every single article and checked what its scientific basis was. And this is the result: most articles used the paper from Oxford. The second most used paper is from Southern California and Indiana, and the third one is from Berkeley and Swansea. We can have a look at all three papers. The first one, from Oxford, is again by Bence Kollanyi and Philip Howard, and of course they again define a high level of automation as accounts that post at least 50 times a day, and I'm sorry to say that's bullshit; that's not science. But the paper from Berkeley and Swansea might have been a better approach. It is hard: this paper is very complex, especially their bot detection method. I could explain to you how it works, but I would use a lot of swear words. So instead I'll point you to a blog article by Mike Hearn.
He is one of the very few people in the world who has actually fought bots on social media platforms: as a member of the Google abuse team from 2010 to 2013, he spent a large amount of time working on anti-spam and anti-automation platforms. He basically took the scientific paper apart, and his conclusion is: "It's the most irresponsible abuse of math I've seen in a long time."
But it's not just about how they define social bots; it's also a problem how they calculate the actual influence of them, and you can see that the scientists obviously had problems understanding causality. For example, they had the idea that if in a state there is more support for Trump, there will also be more votes for Trump; that makes total sense, that's a causality. Also, if there is more support for Trump, there are more tweets for Trump; that makes sense too. So, they reason, if social bots push more tweets out there, by this causality that will also create more votes. They don't even consider that causality doesn't work in every direction, and that not every voter is on Twitter. But it doesn't matter; a paper is good when it produces huge numbers, so they suggest that 3.23 percentage points of the actual vote could be explained by the influence of bots. I had a hard time reproducing that, but just by generating hashtags you don't get votes. Now let's come to the Southern California and Indiana project. That's the paper "Social bots distort the 2016 U.S. Presidential election online discussion", by Alessandro Bessi and Emilio Ferrara. I'm not sure if you know Emilio Ferrara; you probably know him from the movie "The Rise of the Social Bots".
In this paper, the core technology they use for detecting bots is the Botometer, also known as BotOrNot, as they named it. The idea of the Botometer is that they basically have two lists as training data: one is a list of social bots, and the other is a list of humans. Then they fetch the metadata from Twitter and put it into a machine learning algorithm, which then supposedly distinguishes between bots and humans with high accuracy. That sounds like a promising approach: use machine learning to decide, based on metadata, whether a Twitter account is a bot or a human. But this system has problems; actually, in the paper itself there are two main problems. First of all, you need a list of social bots. How do you get a list of social bots if you don't have a detection method yet, because you're trying to build one with the Botometer? You need something that you think might be social bots, and what they did was basically just relabel a dataset from another research team. What that team had found were "content polluters"; even they were not sure whether these were bots, or humans, or spam accounts, or whatever. So the Botometer may well have been trained on something like spam accounts; maybe it's not bot detection but spam account detection, and they never noticed. The second problem: you want to test whether your algorithm works or not, and the worst thing you can do is to test your algorithm on your own training data. Data scientists don't do that, especially with highly biased training data, where you've selected a group of perfect bots and a group of perfect humans.
It's basically like having a machine learning algorithm trained to decide whether a picture shows a cat or a tree: you find out that the algorithm can perfectly distinguish pictures of cats from pictures of trees, and then you give it a picture of an elephant, and the algorithm doesn't know what to do. That's the problem with the Botometer: you don't know what it will do when it sees new accounts with metadata it has never seen before. Having a high accuracy on your training data is not enough; you have to verify it in the wild, and they never did that. But the good thing is, the Botometer is online. You can test it there, and you can test the false positive rate. The easiest way to test the false positive rate is to use a perfect list, for example a list of Twitter accounts where we know that every one of them is a human being, and run the Botometer on it. I did that with the US Congress, and the result is that almost half of the US Congress are social bots. And this is bad, really, really bad. If you have a false positive rate of over 40%, that's almost like flipping a coin and being right half of the time. So according to the Botometer, about half of the US Congress members are social bots, and I guess a lot of Congress members read articles about bot research, and none of them suspected that the Botometer might also be claiming that they themselves are social bots. Here is a comparison of the social bot shares according to the different methods. US election, Oxford project: 0.1%; I think out of almost 4 million Twitter accounts they found about 4,000 of them, so one in a thousand. US election discussion, Botometer: 14%. But the same system on the US Congress: 40%. So the social bot share in the US Congress is higher than in the public during the US election.
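The evaluation mistake described above, judging a model on the same data it was trained on, is easy to demonstrate. In the sketch below, a classifier that simply memorises its training examples scores 100% on that data regardless of how well it generalises; only held-out accounts reveal its real error rate. Everything here is synthetic and hypothetical: the two metadata features and their distributions are invented for illustration and are not real Twitter data or the actual Botometer model.

```python
# Sketch: why accuracy measured on your own training data is meaningless.
# A 1-nearest-neighbour classifier simply memorises the training set, so it
# scores 100% there no matter how well it generalises. All data is synthetic.
import random

random.seed(0)

def make_account(is_bot):
    # Two hypothetical metadata features: tweets/day and account age (days).
    if is_bot:
        return (random.gauss(60, 30), random.gauss(200, 150)), 1
    return (random.gauss(20, 15), random.gauss(900, 400)), 0

def nearest_neighbour(train, x):
    # Predict the label of the closest training example (pure memorisation).
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def accuracy(train, data):
    return sum(nearest_neighbour(train, x) == y for x, y in data) / len(data)

train = [make_account(i % 2) for i in range(200)]
held_out = [make_account(i % 2) for i in range(200)]

print("accuracy on training data:", accuracy(train, train))    # always 1.0
print("accuracy on held-out data:", accuracy(train, held_out)) # noticeably lower
```

The first number is always perfect by construction; only the second says anything about new, unseen accounts, which is exactly the verification step the talk says was skipped.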
And this is a really, really bad result; it's actually a PR disaster. So we contacted the Botometer team. We didn't get a response; the only response we got so far is a tweet: "Sadly, our research project and social bot detection tool Botometer are under attack by some German academic trolls, including data journalist Michael Kreil and Dr. Florian Gallwitz." Florian Gallwitz is a professor of computer science in Germany and an expert on pattern recognition and deep learning, so he can actually read these papers. So here are my questions to Emilio: Did the Botometer algorithm estimate, during the US election, that 14% of the accounts involved were social bots? Did the same algorithm classify 40% of the US Congress as social bots? And the third one, which I think is the most important: don't you think this false positive rate is a problem, given that your research has national and international political consequences?
This could be a big PR disaster, so magically, in 2018, the Botometer got an update, and now the US Congress is magically bot-free. If you know how the algorithm works, it would be quite easy to just put the US Congress on a whitelist. But then you didn't improve your method; that's not a fix, it's just hiding the fact that you really screwed up. And they can't put every Twitter list onto a whitelist, so you can still find many other lists of human beings that the Botometer fails to identify. For example: of 65 Nobel Prize winners, 12% are "bots". Of 137 female directors, 40% are "bots". Of the NASA accounts, 14% are "bots", including a lot of astronauts. And here are some more: Reuters journalists, UN Women staff, the German news agency dpa, and also the Bavarian state parliament, 30% "bots". And just to be sure, we can check the other direction: there is Botwiki, where I could find 937 known bots, and according to the Botometer, 66% of them are humans. So the Botometer fails in both directions: it can't reliably detect humans, and it can't reliably detect bots. And the problem is that a lot of researchers believe that the Botometer is a scientific tool because it comes out of a scientific paper, but that's not true. It's not a scientific tool; you would actually have to prove that it is scientifically correct.
For me that's bullshit.
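The false positive check described above boils down to a few lines: take a list of accounts known to be operated by humans, apply the classifier, and count how many it flags. A minimal sketch, where the scores are invented for illustration and are not real Botometer output:

```python
# Sketch: estimating a bot classifier's false positive rate from a list of
# accounts known to be operated by humans (e.g. members of Congress).
# The scores below are hypothetical; a real tool returns one score per account.

def false_positive_rate(bot_scores, threshold=0.5):
    """Fraction of known humans wrongly flagged as bots at this threshold."""
    flagged = sum(1 for score in bot_scores if score >= threshold)
    return flagged / len(bot_scores)

# Hypothetical bot scores for ten accounts we know belong to humans:
known_human_scores = [0.10, 0.72, 0.31, 0.64, 0.22, 0.81, 0.40, 0.55, 0.15, 0.90]

fpr = false_positive_rate(known_human_scores)
print(f"false positive rate: {fpr:.0%}")  # 5 of 10 known humans flagged: 50%
```

A classifier that flags 40 to 50 percent of known humans as bots carries almost no information; as the talk puts it, it is close to flipping a coin.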
And if you want to read about this in more detail, without any swear words, I can recommend Adrian Rauchfleisch and Jonas Kaiser. They wrote a great paper about the false positive problem of automatic bot detection, specifically the Botometer, where they ran a detailed experiment measuring its accuracy. So, in the end, the narrative that social bots influenced the US election rests on these three papers. All three: bullshit. Sorry to say that, but none of them has proven that they found social bots, none of them has proven any kind of malicious interference, and none of them is scientifically reproducible. And that's important; that's the core idea of science. Others should be able to use the same data and the same code and get the same results, and neither data nor code is available; it's closed source, or at least not published. Yet all of them received very large media coverage, probably because the narrative of a Terminator in social media is something that people like. When I talk to other people, for example members of parliament here in Germany, journalists, NGOs, scientists, there is a lot of discussion about social bots, however unreliable the research is. I get a lot of questions, so here is a kind of FAQ. One of the typical questions: "But many Trump followers are social bots, because they are accounts without profile pictures." No, they're not social bots. Many new users are passive and use Twitter like a feed reader; if you just joined Twitter, you may not even have taken the time to set up a profile picture. You're passive and just want to follow, for example, Donald Trump, because his tweets got a lot of media coverage.
Therefore these accounts have no tweets, no pictures, and no followers; they are just passive users. Having an inactive account is not a bot criterion; this is normal human behavior. Another question: "But a friend of mine knows a guy who says he can identify social bots." First of all, that's not a question, that's a claim. And second, I would say this guy is a liar. Seriously: we need a real discussion, not gut feelings or rumors or "I know someone who knows something". We need facts, especially if we believe that social bots are pushing society to its limits. Next: "But didn't Twitter admit that there are millions of social bots on the platform, and isn't it deleting them?" The answer is no. Twitter announced that it is fighting malicious activities like spam, fake accounts, and artificial amplification, but it is not fighting automation; social bots as such will not be deleted. And I want to make this clear as a journalist: journalists should check the accuracy of their quotes. A lot of journalists are not quoting Twitter correctly; if Twitter talks about spam and journalists report that as "social bots", that's simply not true. Next: "There are millions of very young accounts like Mark or Megan followed by an eight-digit number; of course these are social bots." No: this is the default pattern for new Twitter account names. If you don't believe me, you can see for yourself: I'm not sure when they introduced it, but if you create a new Twitter account, you automatically get this kind of name, a first name and then eight random digits. So this is also not a bot criterion; it is normal human behavior. Next: "There are millions of social bots out there, but you can't see them because they look and act exactly like people."
So this is the Terminator argument. But people who believe that might as well believe that there are billions of reptiloids out there that you can't see because they look exactly like people. Come on. The idea that social bots are so perfectly human is a conspiracy theory. And by the way, if bots behaved exactly like humans, the science wouldn't work anymore, because then papers that just count how often an account tweets would not be a reliable way of detecting them. You have postulated a level of sophistication at which your own methods can't work. And this is really annoying to me, getting the same questions again and again. Next: social bots want you to smoke e-cigarettes. I did not notice them myself, but here is the headline from Scientific American: social media bots deceive e-cigarette users. And this is the underlying paper. The core of the paper: "in order to distinguish between human users and social bots", they used the BotOrNot algorithm. So the Botometer was used, and we know that this algorithm is not able to reliably detect humans or bots; if an astronaut tweeted about e-cigarettes, this method might well call them a social bot. Next: social bots deny climate change. The BBC reported on that: "study finds quarter of climate change tweets from bots."
I'm not sure which university it was from; I think Brown University, Thomas Marlow. But the core of this paper is also that they used the machine learning algorithm Botometer, and we know that's bullshit. Next, not climate change but the coronavirus: bots allegedly trying to distort the discussion about corona and push for America to reopen. And now we have a new player we never had before: Kathleen Carley from Carnegie Mellon University. The story is based on a press release, so I tried to find the scientific paper behind it, to look at the algorithm, the method for actually detecting bots and measuring their influence. And the interesting thing is: nope. There is no scientific paper, just claims and no proof. And what about social bots and Black Lives Matter? "Researchers: one in three accounts tweeting about Black Lives Matter protests are guaranteed to be bots," and the author is Kathleen Carley again. Scientific paper? Nope, just claims, no proof. I have read social bot research papers for the last four years, and my summary is: social bot research is a disaster. Every single paper I looked at in depth fell apart. And the problem is that it has real-life impact.
And to show you that, I will point you to an example; I'm not sure if you noticed it, and I think there are more, but here is one from the EU copyright reform from last year. Basically, the European Union wanted to update copyright law in Europe, and one of the ideas was to force YouTube, Facebook, and other platforms to install upload filters: whenever you upload a picture or video, an algorithm decides whether it is a copyright infringement or not, even if the algorithm can't tell whether it's satire. A lot of NGOs put up websites like "Save Your Internet" and "Pledge 2019", and tens of thousands of emails and calls were sent to the MEPs. One of the MEPs from the conservative party said: I get thousands of emails from different Gmail accounts, but all of them have the same content; that can only be a Google bot attack. So now the narrative was that the protesters are actually not humans, that they are just bots. Then hundreds of thousands of people went to the streets chanting "we are not bots"; there was even a music video, "We are not the bots". The bot narrative is just a new way of demonizing opposition. That's the problem. And I think Twitter itself put it quite well: "Binary judgments of who's a bot or not have real potential to poison our public discourse, particularly when they are pushed out through the media." I want to add something: there are proprietary, closed-source, unscientific algorithms that decide which social movement is legitimate and which one is not. I think this is a new and dangerous dystopia, and we have to fix that.
And here in Germany we have a lot of discussion around this; more and more scientists are publicly saying that we have to be really careful about these social bot studies. For example, the Science Media Center here in Germany has a handout for journalists saying: if there is a social bot research paper, just be careful. But the problem is, and I want to say this clearly, more and more of this bullshit is coming from universities in the United States, and we constantly have to fight it. So here is what I am begging you for: please start a debate on the scientific validity of social bot research in the United States. It would really help us, because this is hurtful. So, I was a little bit faster than planned, maybe thanks to the energy drinks. Thank you all. People are not bots, and at the bottom you can see the link to the page where we will put up all the data and slides. Thank you.
Thank you very much, Michael; I think that was a great introduction to the topic for everyone. We've seen a lot in the Q&A here, and there are some questions we'll get to in a minute, but first there was a sentiment raised, which I'll just read out as written: "I think it's fair to say that I disagree with the speaker. I think automated accounts run by powerful groups can affect debates. That said, I really appreciate his taking the piss out of shoddy research and journalism." And I think all of us should be able to agree with the second part of that sentiment, at least. So without further ado, I'm going to take questions posted in the Matrix chat; if you're not in there, go in and put your question into the Q&A area. First of all, we have a question: do you have any thoughts, Michael, on verified accounts, and whether you think social bots should be identified?
Okay, sorry, what was the question?
Yeah, so there are two parts to the question. It's asking about the classification and identification of accounts by Twitter: whether they're identified as verified, whether they're your average common-or-garden account, or whether they're flagged as social bots. Do you have any thoughts about what that might mean and what might be necessary for that?
So, I'm not sure what Twitter is actually doing; I don't work there and have no connections to them. But I know at least that the verified flag involves a process that you can't really circumvent; creating 100,000 verified social bots is, as far as I understand, not possible. And that's one of the things I find quite interesting: none of the scientific papers I found so far actually looked at whether an account is verified or not. Verified accounts usually belong to companies or people who went through a process with Twitter, probably even a video call, to ensure that they are real human beings, but no one is checking that. I could find a lot of verified accounts that were categorized as social bots and counted as influential in, for example, the elections.
Okay, thank you. The next question we have: what is the difference between social bots and artificial amplification? Because artificial amplification of trends sounds like an effort to manipulate political discourse, and if it's automated, then it sounds like social bots being used to affect debates. So the question is: where is the line between social bots and people?
Yeah, into, of course, of course I can buy fake followers and actually I did I bought fake followers what second account and analyze them. What are the companies behind them, and so on. And of course you can fake the number of followers you can fake the number of likes or retweets or pushing the hashtag. But the question is, are, for example trending hashtags, the core element of, of our public discussion. I would never check the trending hashtags to find out which party or president I should vote for. I can't stand it. So I'm having a lot of likes for example is something you can usually see for, for example, right wing radicals, they're jumping on a specific topic and shitstorm in and stuff like that. So the question is, is Twitter or general social media. A good platform for measuring the quality of public discussion. I don't think so that it's a good idea and I know that the general is still looking at that what what isn't isn't hashtag important or how many followers a specific politician has, so that's that's that's about amplification, and, but but to put in perspective to to this talk. What I saw a lot of times was that people try to implement. So for example, people use one hashtag, 20 times in one tweet and retweeted it and push it again and push it again because they want to have a specific hashtag, and if you talk to the people you find out that they're afraid of migration or whatever conspiracy or whatever and they want to. They want to show the public that here's the real danger and that's why they're trying to push hashtags, but they don't do it automatically they do it manually. And if you have thousands of followers of a conspiracy theory, they can be generated quite momentum. But this is not artificial amplification that's, yeah. Oh, it's not automated. Yes So official of course about chatbots advanced. The question I almost forgot it. I think, I think you definitely
I think you definitely touched on a lot of the topics there, so let me give you the next question. There was an early Twitter bot that would retweet any tweet it detected had been deleted by a predefined list of politicians' Twitter accounts. Twitter decided to pull that bot's account, claiming that even deleting a tweet was free speech. What do you think of that?
Yeah, that's a good point. I think it was a project from the Netherlands, at least the one I knew, called Politwoops. And it is a good point: if you, for example, tweet something while you're really drunk and you don't want it there the next day. I delete tweets too, because I notice, okay, out of context this isn't a good idea. So is it a good idea to save deleted tweets that way? Then again, somebody else could just make screenshots anyway. So the question is really how we want to talk on this public platform, and deleted tweets could be approached from both sides. Whether you personally think it's a good idea or not, that's your decision.
Okay. You touched on Trump at the start of your slides and in your talk. We have a question here: did anti-Trump sentiment allow otherwise good researchers to poison their work with bad data? That is, since researchers could not believe that Trump could win, were they more easily allowing themselves to use that data?
I'm not sure about that. Of course, almost every research group has to choose its hashtags before it begins, so there will always be a bias in the research. That's a general problem of research: there is always a bias. That's why you want it to be reproducible, so that other scientists with other biases might come to different conclusions. But what the question touches on, and I think this is more important, is that the voting result was quite surprising. Nobody would have bet that Donald Trump would win the election; everybody expected the opposite. All the data journalists who made these interactive visualizations came to the same conclusion. And the same was true during the Brexit referendum. So in 2016, within just a few months, we had two major events that were astonishing, and frightening for some, and we needed an explanation. And there was an explanation, from previously unknown social scientists, who said: okay, social bots manipulated the public. I think that is the most important reason why so many journalists and news media talked about social bots after the US election: we couldn't find any other good answers. That doesn't fully answer your question, but I just want to mention that there might be bias in the reporting about, for example, social bots.
Thanks, Michael. We have another question here which maybe plays into that as well, talking about the shallowness of American electoral politics. The question is: to what extent do you think this may be a deliberate effort to keep people from more radical politics with specific goals, and does it matter if it's deliberate or not?
One thing I learned as a journalist and as a scientist is to never speculate about what somebody's motivation could possibly be, because I can neither prove it nor can the other person disprove it. So I don't think about motivations, and I'll have to pass on that question.
We have a question that may play into what you've already answered, but let's see if you want to give an answer on this one. Do you count paid posters, such as China's 50 Cent Party, as bots, or do you consider them something else?
Yeah, well, those would be like the sock puppets that I mentioned at the beginning of the talk.
Yeah, that's a problem with the question. Maybe asking whether something is a bot or not is not a good approach; maybe we should instead think about what is malicious activity and what is not. Specifically, if you have an account and you don't know whether it's automated or whether, for example, a Chinese person is behind it: is it really important who's behind it, or is it enough to just look at the malicious activity? Maybe that's what we should focus on. I believe it will be harder and harder to figure out whether someone is a bot. I think the amount of data you can get about a Twitter account is simply not good enough to really detect social bots, and that's why almost every, probably every, social bot paper fails on that.
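The kind of account-metadata classifier the answer criticizes can be sketched in a few lines. This is a hypothetical illustration, not any specific paper's method: a commonly cited heuristic in the social-bot literature flags any account averaging more than some fixed number of tweets per day. The account data and threshold below are invented for the example; the point is that a threshold-only classifier happily flags a very active human.

```python
# Sketch of a naive metadata-only "bot" classifier of the kind many
# social-bot papers rely on: flag an account as a bot if it averages
# more than a fixed number of tweets per day. Account data here is
# hypothetical; no real API is called.
from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    handle: str
    tweet_count: int  # lifetime number of tweets
    created: date     # account creation date


def tweets_per_day(acc: Account, today: date) -> float:
    """Average tweets per day over the account's lifetime."""
    days = max((today - acc.created).days, 1)
    return acc.tweet_count / days


def naive_is_bot(acc: Account, today: date, threshold: float = 50.0) -> bool:
    # Threshold-only classification: no content, no context, no ground truth.
    return tweets_per_day(acc, today) > threshold


today = date(2020, 7, 27)
# A human activist tweeting heavily for six years crosses the threshold.
activist = Account("very_active_human", tweet_count=120_000,
                   created=date(2014, 7, 27))
print(naive_is_bot(activist, today))  # flagged as a "bot"
```

With no ground truth to validate against, every choice of threshold just encodes the researcher's assumption about how much activity a human can produce, which is exactly the reproducibility problem the talk describes.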
Okay. The next question we have has a shout-out for Skytalks, so thank you to whoever wrote that. There's been some wonderful bot research presented at Skytalks and other US-based conferences, featuring researchers based in North America. Most of the research presented was the result of AI researchers' and data scientists' personal work, rather than something sponsored by larger organizations. What role do you think funding plays in bot research in the US compared to elsewhere in the world?
I don't know, and I hope it doesn't play any role; scientists and journalists should always be as objective as possible. But you probably get funding for working on a specific area, and I think we can see an effect there: when social bot researchers say, okay, social bots are the big new dangerous dystopia that's coming up, that has the side effect that their research probably gets more money, because they're the only ones who can actually research it. So that may create a bias: this dependence on third-party money.
So you're saying, though, that you don't think there are research organizations that explicitly come to a conclusion that is in line with the funding they're receiving?
I'm not sure, I don't know. I saw a lot of research done by companies. For example, there's a company in Germany whose business is fighting bots; I don't remember the name. Whatever they publish has to stay in line with the claim that social bots exist and are a big threat, because if they said otherwise, it would undermine their business model. So there's a lot of bias in there. And I hope that if the research were more reproducible, we could fact-check the science.
Okay, Michael, thank you very much.
You're welcome, and thank you for the invitation.
Take care, bye.
All right, and if you haven't seen it, I would encourage you to take a look at the URL on the screen behind me and go and put your entry in.
I will do it.
It's three o'clock in the morning, so let's leave that to the audience. But you can put in an entry too, Michael.
Okay, right. Thank you. Bye.