09_Trust_as_the_Bedrock_of_Internet_Interaction

9:49PM Mar 1, 2024

Speakers:

Jonathan Zuck

Wolfgang Kleinwächter

Derrick L. Cogburn

Keywords:

trust

ai

means

internet

people

governments

question

data

information

online

mechanisms

misuse

countries

transparency

digital

services

model

cybercrime

world

multi stakeholder

So in this first session, we're going to start our conversation about trust. The subject of the session, I think, is trust as the bedrock of the Internet and the importance of trust online. We did a little survey about what the things are that we think can lead to trust, and the process is an interesting one.

Right, in a number of ways. I had the opportunity, within the ICANN context, to chair the review of that previous round, and a lot of things were really easy to measure, right? Was there more choice? Yes, because there are now 1,500 domains to choose from; if you're a photographer, you can choose .gallery instead of .com, for example. Competition was also something that was relatively easy to measure, but perhaps too soon, and there was this notion of how the choices people made, in terms of what they registered, played out. One of the surprises was seeing that gap, even when I knew there wasn't a great deal of speculation, and so it's complicated whether or not it created the kind of competition that was anticipated. A lot of folks think that ccTLDs are better competition for the legacy gTLDs than the new gTLDs are, for example, because they have that local feel to them, with .uk or .pr, for example. So that was competition and choice, and then trust. How do we go about measuring that? That was our third task, and what we endeavored to do was two things.

One was a survey, and they don't tell us this; it's all part of the ICANN contract. And there was an increase. But it's difficult to say, because what people say and what they do are sometimes two different things, right? In some instances, I can say I trust something, but unless I'm really willing to put a credit card down on a website, I probably don't actually trust it, no matter what I tell somebody doing a survey. So what you want to do, at least in addition to the survey, is to try to track people's behavior, and try to understand how people are using the Internet, whether there's growth of Internet use, and what their resistance is to friction on the Internet.

So you can think of it like pricing a beer: how do you price it so that you sell the most of them, right? If the price is too high, then people don't want to buy it. If it's too low, lots of people want to buy it, but you don't make any money. So how do you make the most money? By finding that perfect price, right? And it's very similar if we think about trust in a very analytical, non-emotional way. What is the ideal level, the ideal balance, that maximizes Internet activity with just the right amount of friction? The right amount of friction: do I have to do two-stage authentication, for example? That's friction. It makes it a little bit harder to do the things that I want to do. But it's worth it to me to do that on my banking site, and maybe not worth it when I log into Flickr, right? And so finding that ideal place that maximizes my activities, my trusting activities online, is sort of what we might be after when we think about the ideal level of trust.

I did a little bit of research to see some of the work that others have done out there, and there was a pretty interesting study on digital trust, conducted in 2018 and then again in 2021, out of Tufts University; I'll put it into the chat if you want to take a look at it. They looked at 42 different countries and examined different aspects: the attitudes, how users feel about the digital trust environment; the behavior, how users engage in the digital environment; the environment itself, in other words what sort of digital protections were in place and what level of digital evolution there was within that particular country; and then the experience, how users experience the digital trust environment. So those were the four categories they looked at, across 42 different countries. And it's interesting, because this ended up dividing the countries into four categories. There were the standout countries, where you had a high level of digital evolution (in other words, a lot of the mechanisms were in place for protection, there was legislation for privacy, and there was some sophistication to the digital environment) and there was still continuing momentum of growth in digital activity. At the opposite end, you might have a stall-out, and many of these are in Europe, where you didn't have this momentum even though you had a sophisticated digital environment. But in, say, the lesser-developed countries, where the environment is still new but demand is so high, my willingness to put up with a lack of trust, or with friction, is higher just because of this high demand. So there's this tension between the sophistication of the digital environment and the newness of that environment as well. And the countries that did the best are the ones that have managed to evolve the digital environment without losing the momentum associated with growth of that activity. So I'll put the link to this; it's interesting. And one of the tools on it that is interesting: there's actually an interactive tool that allows you to go through and look at the givers, in other words us, the people willing to give their trust, versus the guarantors, the people that are trying to gain our trust, and what the different drivers are.
And it sort of tracks them across these different countries so that you can see how these countries fall into these different categories, the standout, the stall-out, the watch-out, and the break-out, along these axes between the level of digital sophistication and the level of digital activity, right? It's an interactive tool that lets you choose two countries and compare them, and things like that. I won't spend a lot of time on it, but I thought you might find it interesting that that was some of the research that has been done on this issue of trust, and I will put that into the Zoom here at the end of the session. So that was just by way of introducing this topic of trust as the bedrock of the Internet. And now what I want to do is go to our esteemed panel here, Derrick Cogburn and Wolfgang Kleinwächter; thank you for being here. We're going to have a little conversation about it. I thought I'd just start by getting you to give your sense of what you think the importance of trust is for the Internet. Derrick, do you want to start?
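As a minimal sketch of the pricing analogy above (purely illustrative, not something presented in the session; the demand curve and all numbers are invented for the example), the "find the friction level that maximizes trusted activity" idea is the same optimization as finding a revenue-maximizing price:

```python
# Purely illustrative toy model of the "beer pricing" analogy: revenue is
# price * demand(price), and we search for the price that maximizes it,
# just as one might look for the level of friction (extra authentication
# steps, etc.) that maximizes trusted activity online.

def demand(price: float) -> float:
    """Hypothetical linear demand curve: fewer buyers as the price rises."""
    return max(0.0, 100.0 - 10.0 * price)

def revenue(price: float) -> float:
    return price * demand(price)

# Scan candidate prices from 0.00 to 10.00 in one-cent steps and keep the best.
best_price = max((p / 100 for p in range(0, 1001)), key=revenue)
print(f"revenue-maximizing price: {best_price:.2f}")  # 5.00 for this toy curve
```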

So, really? Yes. I'm really happy to be here, and I have a number of ideas that I wanted to share about trust. I love the theme of this panel, really thinking about trust as the bedrock of Internet interaction. And what I thought about was this: if we think about the kind of meaningful connections that we want to have online and that we want to engage with, it's those types of activities, these active online interactions with people, with platforms, and with other information sources, that are at the heart of what we mean by trust. So trust is shaping our experiences with relationships, and our experiences with institutions and information. When we search for information online, we're searching for answers, we're searching for information that answers the questions that we have. And while many users have developed a critical lens around how to evaluate the information that they see, we still want to assume that the information we find is what we were looking for. So for example, if you're applying for college, or you're applying for a position somewhere, and you think you're interacting with the University of Michigan or American University, and you're starting to send your sensitive information in an application or something like that, you are having trust in that institution: that you're talking to who you think you're talking to, and that you're sending that information to that website. And that level of trust is required for us to feel comfortable conducting business online. This came up very heavily in early conversations around electronic commerce: if we're going to take advantage of being able to buy goods and services online, we have to be able to trust that we can safely put in our credit card information, and that if we do, and we order something, we'll be able to receive the product that we ordered. The same goes for banking, and for medical and healthcare facilities. We're trusting that the private information we're sharing stays private. Increasingly, we like the flexibility that comes from having applications that hold our health information and so forth, but we want to trust that that information is private, and that it's only for us and our family members. So those are some of the initial ideas that I have around experiences, and why trust is important in facilitating our engagement in the online environment.

Yeah, thank you very much. I always say that trust is the basic currency of the Internet. I will go back a little bit into the history, because the whole development of the Internet was based on trust. There was no central top-down order or anything like that. The people who started to develop the Internet and invented all the small things, the dot, the @ sign, the DNS and all this, trusted each other. They said: okay, what you are doing, I accept; I trust you to do the right things. And I will make my contribution, and you can trust me. So it was a mutual relationship based on mutual trust. This is really important. And remember, before ICANN was established, top-level domains were delegated by Jon Postel, and he delegated them to so-called trusted managers. A trusted manager: Postel said, it's too big for me to manage all the top-level domains, I need somebody, and this will be a person I have trust in. So trust was the key element from the very early days of the Internet; everything was based on trust. And in the early days, it was still the 1990s, 30 years ago, the trust emerged based on a certain set of rules, which was called netiquette. That means people had to respect certain rules, because you cannot do anything in the world without certain guidelines and rules. But the rules came not top-down, by governmental regulation or by international convention; they were developed bottom-up by the community itself, which said: okay, these are the basic rules, and based on these rules, you know, we trust each other. And this was netiquette; you can go back to the IETF documents. It's defined in an RFC, the latest version from the mid-1990s. It's forgotten; it's 30 years ago. But if you read the text of the RFC regarding netiquette, you say: oh, this is what we need today. Unfortunately, it's forgotten. But it's like a universal document, netiquette. It's like the human rights declaration: it was adopted 75 years ago, but it's still relevant. So I would say the netiquette, and the elements of trust included in it as a basis for communication on the Internet, are still very, very valid.

So what happened in the last 30 years? There have been some developments which were not really foreseen, but, to be realistic, one could have expected them, because an Internet with 1 million users is different from an Internet with 5 billion users. In a community of 1 million, it's already difficult to expect that everybody is a good guy; there are some bad guys even among 1 million. But 5 billion, you know, reflects the whole world. Vint Cerf always argued: the Internet is not a world in itself, it mirrors the real world. We have the virtual world, that's true, but the virtual world is not distinct from the real world; it mirrors the real world. I think this is also an important basic understanding of what the Internet is. It has not created a new world, as some visionaries expected in the 1990s, including our good friend Mr. Barlow, who said: okay, this will be a totally new world. No, the real world does not disappear if we build a virtual world.
And insofar, if we speak about trust, all the problems of the real world then appear in the virtual world. With the commodification of the Internet, and with 5 billion users, you have the hate preachers, the pedophiles, the criminals; the bad guys are also using the Internet. Or I would say not using but misusing the Internet, and here comes the problem for trust in today's Internet world: we have to discuss not only the use of the Internet but also the misuse of the Internet. And misuse undermines trust. That's the problem. And it can be misused by all stakeholders. It can be misused by individuals doing bad things; we have individual criminals. It can be misused by corporations: as in the real economy, there are companies, corporations, startups or SMEs who try to make money and misuse your trust by looking for their own benefit. And it can be misused by governments; we have a lot of things there which undermine trust. So the challenge for the future, if it comes to trust, is to have, I would say, a holistic approach, and to see it as a layered system. It needs trust among the people who are managing the infrastructure. That means the operator of a root server has to trust the operators of the name servers: if they send a communication from place A to place B, that they will do what they have to do. The whole system of name servers, computers, and root servers is based on mutual trust. Everybody expects, just as you expect it from the salesperson in the bakery, that people understand their responsibility and do what they have to do as an operator, an ISP, or a technical service provider in the Internet field. But then there is the application layer, in using the Internet, and that's a different category. And probably we can discuss a little bit later the difference between trust on the infrastructure layer and trust on the application layer.

I think that raises some interesting questions about those different layers, because when you talk about trust in the infrastructure layer, I think what you're talking about is trust between infrastructure players, which is different from my trust of the infrastructure layer. That's another aspect of trust. In other words, there are different actors on the Internet that are sort of in between me and the things that I want to do or accomplish. Some of them are infrastructure players, some of them are businesses, some of them are criminals. It's like a gauntlet that I have to go through, and I need to act on my level of trust in each of those different areas. So my trust in the infrastructure is what Derrick brought up: that when I type in a domain name, it actually goes to the place I meant it to go. I mean, you mentioned showing trust in, say, American University. But that's not about my trust in American University; that's my trust in the Internet to get me to American University, right? And that's separate from my trust in the university itself to handle the data I'm giving them, to make sure they're going to secure it and not reveal my private information. So there are two levels of trust taking place. So I'm wondering, and this is meant for both of you: are we doing all that we can to ensure trust? And I'm thinking more of trust by the users themselves, as opposed to trust between infrastructure players, because some of it is about the incentives that are in place, right? The infrastructure players, like the early academics, have had very low stakes and mostly reward from this trusting behavior. And as our activities become more sophisticated, the stakes go up. Money is being spent, more people are involved, and the incentives are more complicated. Do you think that we're doing what we need to, to ensure trust online?

I think one of the areas that is really concerning is the level of fear that gets created in online spaces. Fear that people have of not being able to trust being safe in an environment: being cyberbullied, or being subjected to ransomware attacks, or hearing the stories that if you click on this link, you might have your computer camera opened up and be subjected to these kinds of spying episodes and things like that. There is also the question of whether or not we can trust the information that we're seeing online. So the increasing rise of deepfakes and computer-generated images, and video, and audio that sounds like something we should trust, but in fact is not coming from the source we think it's coming from. These are all problems and issues that significantly undermine our trust in being online and being in these environments. And I think one of the things that's happening is that you do have communities that are walling themselves off and having their own private spaces. Take social media platforms: if you look at the fediverse, these federated social media platforms, as opposed to centralized social media platforms, there are many cases in the fediverse where instances decide they're not going to federate with another instance because of its rules and so forth; even though they could receive information via the ActivityPub standard, they're not going to get it from that other site. So I think there are processes in place to try to protect some users in that way. But I think the level of potential harm, the reduction in trust that people have online, is significant, Jonathan.

You mentioned the incentives: what are the incentives to build trust and to follow the rules, quote unquote? In the early days, it was this interdependence; it was either win-win or lose-lose. I trust you, and if you do good, then I will have a benefit; if I fail, then it fires back on myself. So it was a mutual trust relationship: if you fail, I will fail; if I do my job, then it's good for you. What is good for you is good for me; what is bad for you is bad for me. This was an incentive to say: okay, I have to follow the rules, because I will benefit if everybody accepts them. But this is yesterday. Unfortunately, today it's sometimes like a battle; we are on the way back into zero-sum games. If you lose, I will win; if I win, you will lose, and vice versa. This brings us back to the slogan we know: trust is good, control is better. So this leads us to the question of the oversight mechanisms. Normally the Internet is bottom-up, and bottom-up means you have to trust each other. We have for decades rejected a top-down approach. But there are certain moments, in particular if you deal with 5 billion Internet users, where you have to find the right mixture between a bottom-up approach and a top-down approach, and for a top-down approach you need institutions, oversight bodies, some groups who oversee this. You yourself were chair of this review committee; that means you did oversee what happened. That means you have to have a certain review and control mechanism in place, and then make recommendations to correct what goes wrong. And insofar, we have to discuss: what are the real mechanisms which can enhance trust? We rejected, for many, many years, that this oversight mechanism should be an intergovernmental or a governmental mechanism, so that governments control the Internet. No, no, no: this is not what we want; the Internet is free. If governments intervene (probably we have good governments and bad governments, liberal and autocratic governments), in general, government oversight is bad for the Internet. It will reduce liberties, it will reduce freedoms, and it will also be bad for competition and innovation. But I think here we really do have a problem: how to find the right mix. And what I see now in the latest developments with social networks and AI, intermingled with the whole Internet governance debate, is that even big companies are saying: okay, we want to be on the safe side, so please give us some governmental regulation, so that we have guidelines. So we are in the midst of this debate about how to find the right mix, because nobody really wants top-down control of everything; that's the Chinese model, and we do not want to have this. But on the other hand, with the growing misuse of the Internet, we realize something has to be done. Cybercrime is a very good example, certainly ransomware. This is really a fundamental misuse of the Internet, and you have to do something about it. The means to fight against crime are primarily a question for law enforcement, for the government. So the answer we gave in the community was: what we need is a multistakeholder approach.
That means you cannot leave it in the hands of governments alone. We need governments, but governments have to include the private sector, civil society, and the technical community to find the right answers. And that's the essential challenge. If we talk about trust building, or let's say the control through which trust is really established, then we have to look at not delegating everything to the governments, but at creating multistakeholder mechanisms, where the different stakeholders, playing their different roles, can bring their expertise and their power to, let's say, promote trust and reduce risks.

Let me frame that up a little bit, because in the word cloud at the beginning, in terms of the things the group thought led to trust, number one was transparency, and the second was education. So what I want to bring into this conversation, jumping off what you said about people's fear: people are learning more about the dangers and are therefore more fearful. But are they also therefore more careful, and more educated? Or do we need to add an educational component so that their sophistication online increases, and thereby their trust in themselves (which was the other question) increases? It's a balance between all those things, because you want them to know enough to have good digital hygiene, even though knowing more makes them more fearful and less trusting, right?

Yeah, absolutely. I think that's perfect for me. I wanted to first follow up on something Wolfgang said, and I'll address both of those. The first is transparency. One of the challenges when we talk about deepfakes, and generative AI, and the ways these tools are being used to create content, is a key area: transparency of the AI. What data was it trained on? What do the algorithms look like? The levels of transparency have been very low in AI. People think about it as a black box, or as if it's magical. And many of the companies building these generative AI models don't want to tell you what they're trained on; there are some legal reasons why they don't want to do that. You may have seen the most recent case, where the New York Times is suing OpenAI for training its models on the Times's copyrighted content, and telling them to pull it out, and so forth. So this lack of transparency has led to the movement called explainable AI, where the idea is that you don't just rely on what the AI tells you; you want to be able to understand and explain how it arrived at that decision, who the engineers were who worked on it, and what the potential bias is in the data sources. And then the second piece of this is education. We talked about this a little bit yesterday, and I didn't feel like I got the reception I wanted when I was making this point, so I'm going to try again. What I was saying is: if we talk about the question of disinformation, and the difficulties in stopping disinformation on the platform side, one of the things we can do is change it on the consumption side, helping consumers become more educated about disinformation. The example I used yesterday is that we've made progress on helping consumers understand the danger of phishing exploits. We've made some progress, and I'm still completely worried about this, because the phishing exploits have gotten more sophisticated, and the vectors have increased to text messages, not just emails anymore, and so forth. But in that space, we've been able to help consumers understand: don't just respond to a message that looks like it comes from your mother; look at the reply-to address; hover over the link to see if it's really going where you think it's going. We have some basic steps that we've taken, and that education is getting better. Part of the reason it's getting better is that we have Cybersecurity Awareness Month, and institutions, at least universities and companies, are implementing training where they say: all month, we'll be talking about the dangers and the strategies for how to protect yourself, and so forth. So we should take those kinds of public education strategies to disinformation as well, to say: it is increasing, it is becoming more sophisticated, it is becoming more realistic, you can be more easily fooled by somebody's voice or a photo, and these are the ways that you critique and guard against believing this disinformation. I think that's one strategy in the mix of strategies we need to consider.
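As a minimal sketch of the "hover over the link" advice in code (my illustration, not something shown in the session; the domains are hypothetical), one could flag anchors whose visible text looks like a URL but whose actual href points to a different host, the classic phishing mismatch:

```python
# Illustrative only: detect anchors where the displayed URL text and the
# real href disagree on the hostname, a common phishing pattern.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self._href = None      # href of the anchor we are currently inside
        self._text = []        # visible text collected inside that anchor
        self.suspicious = []   # (shown_text, real_href) pairs that mismatch

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            if shown.startswith("http"):  # visible text itself looks like a URL
                shown_host = urlparse(shown).hostname
                real_host = urlparse(self._href).hostname
                if shown_host and real_host and shown_host != real_host:
                    self.suspicious.append((shown, self._href))
            self._href = None

# Hypothetical example: the text shows a bank, the href goes somewhere else.
auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/login">http://bank.example</a>')
print(auditor.suspicious)  # [('http://bank.example', 'http://evil.example/login')]
```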

Yeah, thank you very much. And I think these are the two key issues, transparency and education, which are much better than governmental oversight and control. You need government, as I outlined a little bit earlier, to a certain degree; to what degree is still subject to discussion. But let me add some remarks on transparency and education. First, transparency. A couple of years ago, there was a discussion about trust in e-commerce, because there was a lot of misuse. There was a network called the GBDe, the Global Business Dialogue on electronic commerce, around 20 years ago, and they debated how we can enhance trust in selling products online; it was the early days. And the idea came up: okay, in the real world we have a trademark system. If you buy a trademark product, Nike shoes, or you go to an InterContinental Hotel, this has a trademark, and you can expect a certain quality of service or whatsoever. When you buy a trademarked product or service, it gives certain guarantees; you can trust the trademark. So the idea was to introduce a trustmark system for e-services. The idea was great, but it was never implemented, because it's very complex. You need an institution, you need an evaluation process for whether somebody offering a service online gets the trustmark. And people have to understand the trustmark. The trademark system is very simple, everybody understands it; the trustmark system is very complex. So while the idea was great, and it was also meant to bring transparency and certain criteria for quality checks into the system, it never worked. What we have on the discussion list now is watermarks for AI services. That means, if you see a picture and ask, okay, can I trust this picture? (Remember the Pope in his puffer coat?) Then you have to look: is there a watermark? Is this real, or is this virtual? Here we still have a long way to go, but we have to do something: not only to enhance transparency, but to translate transparency into certain mechanisms. And this is also complicated, because it needs institutions, it needs mechanisms. The first thing governments have in mind is to create new intergovernmental organizations; now the discussion is about a new intergovernmental organization for AI. The Secretary-General of the United Nations, Guterres, has proposed to take the IAEA in Vienna, an intergovernmental organization, as a model; also von der Leyen, the President of the European Commission, says: oh, the intergovernmental climate panel could be a model for AI. My answer would be: why not use ICANN as a model, a multistakeholder organization, where you have the technical community, private sector, civil society, and governments together in an organization which discusses the future of AI, to build trust in AI, and then has mechanisms to certify certain services and to define the categories, as in the EU AI Act, where you have high risk or low risk and things like that. This could be done much better by a multistakeholder mechanism than by an intergovernmental body. So we are waiting now for the UN high-level panel on AI and what it will recommend in the summer.
In their interim report, which probably you have seen, they analyzed 35 different organizations as models, but the models they looked at were intergovernmental organizations. And I think this is what I would reject. If you want to build trust in the Internet world, which is more and more based on AI, and you think about an institution which could be helpful to build trust, then this institution has to be multistakeholder, not intergovernmental. My second point, on education: I think it has to start really in the kindergarten. We teach our children: at the crossing, when it's green you can go; when it's red, no. We teach our children, if they leave the house, to close the door and lock it. These are simple things. And unfortunately, you do not have cyber or digital in the basic teaching programs in the kindergarten or in basic schools. So people live in this world unprepared, and the children learn by doing. Learning by doing is very good, because then you make a mistake only once; but sometimes the first mistake is disastrous, and that can be avoided. So this is really not only a matter for companies that train their customers (companies are doing a lot for capacity building, "we have to inform our customers"; all this is important, this is good), but this is really a governmental duty: to go through the curricula of basic schools and say, this is our basic responsibility as a government, to prepare the next generation, and also the older generation, for this totally new environment. And this will get more complex with AI. People are confused, and millions of people out there benefit from the confusion and say: oh, this is an opportunity. Cybercrime is now a service; you can buy the tools. You can make more money with cybercrime than with the arms trade or with drugs. This is really incredible, and they make use of the low level of information of the individual. Arguably the highest risk on the Internet is the end user: 5 billion users, 5 billion risks. That means you have to start with the end user if you want to lower the risk. You will never get to 100%, but 60% is better than 40%, and 80% is better than 60.

Yeah, it's interesting that you use the example of a traffic light as part of the education, because the truth of the matter is that the fact that the light is green is not really sufficient information to drive through, because people run red lights. It's an indication of whether or not it's legal to go; no, that's right. And so we teach our children: wait for a green light, but still look both ways. So it's trust, but verify, a little bit, and that becomes part of education, because everything becomes a leap of faith. And we as human beings want to generalize; in other words, we want our decision matrix to be easier. We want a green light to mean go, even though it doesn't really, but we want that. And that's how accidents occur: I just see it turn green, I don't live here, I just go, and I get T-boned, right? It's because I want something simple to be the indicator I can trust to make this leap of faith, which is to enter the intersection. And I think that's part of the problem: there's an even greater level of sophistication needed on the Internet. Again, we're throwing terms around. Do we trust AI is one question. But do we trust that what I'm seeing was created by AI is a completely separate question. In other words, do I have faith that when I'm using AI, the images I'm creating aren't violating the law, et cetera? Do I trust that the footnotes it's creating actually exist? That's trust in the AI itself. But then there's this other issue: the Internet itself now has AI as a sort of mixed blessing for the texts we read, the images we see, and so on. And so it has added to the difficulty of our trust in the Internet itself.

I just wanted to add two things to what Wolfgang said, going back; and I love what you just said, Jonathan, about the green light not necessarily meaning it's okay to go across; we also teach them to look left and right. So, these last couple of summers, I've been doing a lot of community-based work with young people, and I introduced a cybersecurity session for 10- to 12-year-olds. People were saying, why are you doing that? And it's exactly for this reason: it's about starting early. So I introduced this cybersecurity session, and you should have seen the look on the faces of these young people when I started talking about the cyber threats that are out there in tools they're using every day. They were mortified. They had not considered that. And it does need to be pushed down to this lower level, so that they're exposed to these responses earlier on. And then also, Wolfgang, on the models: I think you're absolutely right in terms of how we look at multistakeholder models for AI governance. The multistakeholder model that we've talked about the last couple of days, and that we've talked about for decades, has benefited from the expertise that comes from all of the stakeholders. Governments don't have all of the expertise they need; a lot of that expertise comes from civil society and the private sector. And so there's a recognition that the insights that come from this multistakeholder model should be applied to AI governance as well. In the US, NIST, the National Institute of Standards and Technology, has just launched an AI safety consortium, and I'm pleased to say that we are a part of it. It is explicitly multistakeholder: multiple US government institutions, a really good collection of private sector players, academics, and civil society. So it's a national-level model, but it's a national-level multistakeholder model for AI safety and governance.

The other question that sort of comes to the foreground is about balance, right? If we go back to the traffic light metaphor: it would be possible (we talked about government solutions) to have barriers come up when the light turns red, which would in fact prevent people from running the red light, but we've decided that what we have is good enough. There's a balance there: we've taught people that they should slow down on the yellow instead of speeding up, right? But some people still hit the accelerator when the light turns yellow. And yet we've decided that it's enough. So I guess the question I have for both of you is: how do we decide whether what we're doing is enough? It's a combination of government action; corporate best practices, some of which might be driven by government (things like privacy legislation have implications there as well) or by industry multistakeholder models of behavior; and education as the third piece. If those are the three points of our triangle for trying to increase trust, how do we decide whether we're doing enough? How do we measure that? I mean, the study I mentioned from Tufts was sort of looking at whether or not we're maximizing economic activity online. Is that the measure? Is it okay to be cold like that and say, well, people are using it, so it's okay; there's enough trust, and enough need to do online banking, that I'm going to stretch my trust to do those activities? Or do we need a more paternalistic view that says we should constantly strive to increase trust online? I'm curious about that.

Right, balance: that is still the million-dollar question. There is no one-size-fits-all, so my first reaction would be: you have to go case by case. There is no silver bullet. Probably, if you deal with bank transactions, you need a different system than for everyday things. And insofar, even if it's criticized in the United States, I like the risk-based approach in the AI Act of the European Commission, because it gives you a certain differentiation. The problem is how you certify certain services and categorize them; this creates a new bureaucracy, and who controls the bureaucracy, and all this. So the basic approach is okay, but how to implement it is then the problem with AI. But coming back to the differentiated approach: the benefit of these four risk categories is that you can have different balances. If the risk is to lose, let's say, $10, then you could say: okay, if you are careless and you lose the $10, you have made a mistake; millions of people in the world lose money, to a much higher degree, without using the Internet. Life is risky, and everybody has their own responsibility. If you have a certain service which is just part of the day-to-day things, and you lose something now and then, this is part of real life. If the stakes go higher, then you need another balance, and how far this should go has to be the subject of discussion. As I said, there is no silver bullet; you have to figure it out. Certainly, if it comes to, let's say, bank transactions, then we expect that the banks, together with governments, have a system in place which really tries to get the criminals and all the ransomware activities under control; as an individual, you are powerless to deal with these types of people. It's now part of organized crime, and as an individual you are powerless if you deal with the mafia. So you expect that law enforcement has a certain responsibility. But if you deal with, say, public transport, which also meanwhile uses AI, so you travel without tickets, just using your phone, then probably there is another balance. So I do not have the answer; that's why I said it's the million-dollar question. But what I would propose is a differentiated system, which is issue-based: 100 issues, 100 different solutions.

So, I think I'm generally known as an optimist, an eternal optimist, but in this case I'm very pessimistic. I am concerned. I don't think we will be able to make the regulatory changes that we need. At least in the US, our political environment is so dysfunctional at the moment that it's very difficult to get bipartisan support for very obvious things that need to be done. So I have a concern, and that concern is exacerbated because we're going into the 2024 election season in the US, where the stakes are very high; it is not a traditional election season. And there are elections happening around the world at the same time that are facing the same kinds of challenges. I talked a little bit yesterday about micro-targeting of disinformation. This is something that I'm petrified about in this current period, because of what we talked about with AI-generated deepfakes and voice. It has already started to happen in New Hampshire: in the primaries, there was a robocall with President Biden's voice telling people not to vote. These robocalls automatically go out, and you get what purports to be the President's voice telling voters not to vote. This is going to be exacerbated and increased with micro-targeting. We talked about this a little bit yesterday: with the extensive profiles and data being built up on users, cleavages in societies can be identified and exacerbated with very targeted, very specific information. These predictive models can figure out what it would take for you to do or not do something, to vote or not vote. What's the message you need to receive to do or not do something? And because that can be determined with very accurate predictive models, thanks to all this data from our use of the Internet and all these services, it is going to be far worse than in the 2016 election, which was notorious for how much of this there was; we've got the data, we've got the tweets, and so forth. This election cycle is going to be horrendous. Some people are talking about this as the AI election, around the world, and we don't have guardrails in place to protect us from it. So my normal optimism is reduced, Jonathan.

All I can say to that, all I can say to that is: thank goodness for Taylor Swift, right? Because I think a lot more people are listening to her than to President Biden anyway, and she's going to tell everyone to go out and vote. So I guess competition plays a role in some of these environments in which you have concerns. In other words, there's just as much money being spent by both parties in each case, all trying to use whatever tools are available to get people to engage in the behaviors they want them to engage in. Is your concern that more effort will be spent on voter suppression than on getting votes out, or something like that? Because it seems like competition can play a role, since everybody's using the same AI tools, et cetera, to try to get the behavior they want.

Yeah, so my concern really centers on the level of sophistication of these attacks, which I think will be much greater in this cycle. Even in 2016, we had bots that were created, fake organizations taking on names of organizations that seemed to have affinity with certain groups, where they were trying to exploit the cleavages. So, African Americans against Hillary, or African Americans for Hillary, but then telling them to vote on the wrong day, or through the wrong mechanisms; LGBTQ groups that got set up the same way; they can push pro, they can push con. Now add to that the ability to quickly create content that's really crafted to hit just the right messages: messages that make people say, well, I'm just going to stay home; or so much information and misinformation flooding the system that people just say, I'm going to check out and I'm not going to participate. In the US, the margins of victory are so thin (we're talking about thousands of votes, in some cases tens of thousands of votes, not large numbers) that this can have a definitive impact on the election. And I don't want to go into everything that can come from what happens if, you know, President Trump is re-elected, shall we say.

I think that's a problem for the United States, but it's also a problem for the world. And being the European on the panel, I think we really have to face the challenge: we need a global, I would not say solution, but a global strategy for how to deal with these challenges. As we know, the Internet has no technical borders, but we live in a world with 193 jurisdictions. So we have this conflict: one world, one Internet on the transport layer, where everybody can communicate with everybody, from Australia to Paraguay, and from San Juan to my home city, Leipzig. But the reality is, we live in nation states. And unfortunately, the dreams of the 1990s, that we would really be united nations, have more or less disappeared; we now have this autocracy-versus-democracy difficulty. Certainly, we can create a coalition of like-minded countries, as the United States government tried to do with the Declaration for the Future of the Internet. But President Biden had to realize that he would not get more than 70 countries behind the declaration, and we have 193 member states. So the challenge for the democracies, the roughly 70 countries that signed the Declaration for the Future of the Internet, is that we have to have a double strategy: a strategy to be united within our own democracies and to make ourselves safe against attacks from outside, but at the same time, because we live in one world, a strategy for how to deal with the autocracies. And here we probably also have a layered system, where we have to confront them in certain areas, if it comes to basic values, basic human rights, or things like that, but on the other hand we have to cooperate on universal things where we need, quote unquote, them. It starts with climate change, but next to climate change comes cybercrime. If we just try to build a democratic fortress, the cybercriminals are then on the other side of the fortress, and they will find a way in; you create safe havens for them. If they have safe places in countries which did not sign the Declaration for the Future of the Internet, and we legitimize this and say, okay, you are outside and you can do what you want, no; we have to have them in a system. So we now have negotiations on the Cybercrime Convention under the United Nations Ad Hoc Committee; they have had seven sessions already, and a draft was negotiated in New York in late January, early February. The hope was that we would finalize this convention, but there is no agreement, because the basic disagreement is: what is cybercrime? Western countries want a narrow definition, illegally intercepting networks or data banks; autocracies want to include content-related crimes. That means if you criticize your government, which is seen in China as a crime, this would then be internationally legalized. So this is, to me, one of the main conflicts as to how to move forward in the next round, this summer in New York. The Cybercrime Convention has 60 articles. Are we able to make compromises, dealing with autocracies, to fight a bigger evil which works against all of us? And how far can we go? I think this is really a big challenge. And this is trust not on the infrastructure layer.
That is trust among governments, and trust among governments is now close to zero. Nobody trusts anybody anymore. That is what we have to change. I do not have the answer, but I want to raise the point. Yeah.

So I'm glad that tomorrow we're going to talk about hope, because today we're talking about the lack of trust, and trust as the bedrock of the Internet. I just wanted to quickly go back to something Jonathan said. He was joking when he mentioned Taylor Swift, but it's not a joke; it's true, what she commands. But think about what happened in the last couple of weeks: sexually explicit deepfake images and videos of her were created, and because they were created and then circulated, the social media platforms decided to stop allowing searches on her name, supposedly to protect her from those images. But that's just when her announcement came out telling people to go and register and vote, and so forth. So it's a very challenging situation that we're facing. Tomorrow, we'll have hope.

Today is Dante's Inferno, all right: abandon all hope, ye who enter here. So now I want to invite folks, either online or in the room, to ask questions.

I don't have a question, but I have a comment. I was telling the professor that I read one of his books on cybersecurity law, and the biggest thing he mentions in the book, which is not talked about, is that the risk of AI is really not in the output; the risk of AI is in the data collection. Because we cannot regulate what AI is going to replicate, or what AI is going to produce. But each time you and I download an AI app on our phones, there's a disclaimer. And because I'm a data privacy attorney, I have read the disclaimer. Part of the sentence in that disclaimer, as much as they put it in very small writing, says: you accept that we will use your information for research purposes. In accepting that, you're actually accepting that whatever you put into that AI, they're going to use for their own purposes, and they can publish it, with the rights you've granted, as long as you hit Accept. And Accept is actually in bold, it is in green, and it is actually in bigger letters than the terms and conditions themselves. So I am not as pessimistic about AI, if we look at it in that way, because you and I can be aware of those terms and conditions before we click the Accept button, which is a very tempting thing, all lit up in green and in those bold letters. It also says: you accept that we may track your phone. I don't know if you've seen that, if you read the terms and conditions of the AI on your phone; there's a tab that says, by clicking Accept, you also let us track your phone, and your location. At some point, I think the iPhone has a disclaimer to that point as well. But the moment you click Accept, you've literally given someone your data, your date of birth, whatever you store on your phone. The biggest education or awareness point I would like to raise is for people to be aware of what they click Accept on. Don't just click Accept for things you're not aware of. I know, I'm one of those attorneys who says: read it, if we're able to.

I think that also raises the issue of misplaced trust. When I hit Accept, that involves an element of trust, an article of faith. And so that comes back to education: we're trying to increase trust online, but separate from that is the trustworthiness of the tools themselves. How do those two things interact, right?

Yeah, just thank you for the comment, and I think you're absolutely right. Very few people read EULAs; they click and sign, you know, they agree to these end-user licensing agreements. And particularly with these AI systems, the crutch is that people are relying on them increasingly and not thinking about what happens when you upload your company's spreadsheet so that it'll make a report for you, or what happens when you upload your family's genealogical report; all of these things will now be used to continue to feed the research and feed the models of these organizations. And so this level, Jonathan, of misplaced trust: you're thinking that these tools are helping you, and they are, they're immensely helpful; I'm very positive in terms of AI. But I'm also eyes wide open in terms of what the downsides are as well.

Yeah, trust is difficult to measure, and measurement of trust was one of the questions Jonathan asked in the beginning. Trust is the result of a process, and it's based on experience, very often on very personal experience. Look at interpersonal relationships, the friends around you: do you trust him, do you trust her? Sometimes you trust her or him, and then something happens and you say: oh, he has misused my trust, and you turn around. Trust is a very fluid thing. And this applies also to trusted institutions, where you're getting your information from. Probably you have a trusted relationship with the New York Times; you say: okay, if it's in the New York Times, I trust them, they have experienced people, and if they pick something and put it on the front page, they will have checked it three or four times, and double-checked it. So I can trust that they have a reason to publish this, and I will take it on board into my own opinion. If you go to, let's say, a social media publication you have seen for the first time, then probably you have a natural mistrust. If they tell you Putin is a criminal, probably you'll say: okay, that's right. If they tell you Putin is a democrat, then you have mistrust. But in general, you should have mistrust in sources you do not know. You should not open documents which come from an unknown email address. All this is part of forming, let's say, a natural mistrust in yourself. So clicking Accept in one circumstance can be okay: because I trust the New York Times, and if they ask me to accept, then I accept it, because I trust the institution, even if I do not understand the mechanism behind it; what's underway between me and the New York Times, how many intermediaries are taking my data. This is the risk of life; you cannot avoid it. But it looks nice.

The point about the New York Times is that it has a responsibility to try to be as truthful as possible; not only are there all those layers of accountability, but they can be sued, whereas these social media platforms cannot be sued for the dissemination of the information that they provide.

We have a couple of questions.

Online, there are two comments. One is from Marita Mall, who says that, sadly, most of us have been accepting those kinds of terms over the years, because otherwise we are locked out of the system and cannot use it. And the other: we here are not average members of the Internet community, and it's not wise to expect that the average user will be educated.

All right, you wanted to make a comment?

I had a follow-up question about that. Again, this is not me defending myself and the attorneys who draft those terms and conditions; this is just me putting it out there. Put me down as the devil's advocate, okay? I do draft those terms and conditions. Where is the line between your ignorance and the trust? That's the question I'm posing to the panel, and to everyone. Where do we draw the line between ignorance and trust? Because when I give you the choice of Accept or Deny, you have two options, right? But also, do you look at me as a bad person when I give you long terms and conditions that you're not able to read? So, for the panelists: where do we draw the line between trust and ignorance?

I think the bottom line is what Wolfgang was alluding to. There are different types of trust; I've studied trust for a long time. We're using the word "trust," but there are various kinds: there's swift trust, there's cognitive trust; there are various ways of thinking about and measuring trust. But one key component is the ability to disclose and share information, and not have that information abused in some way that I wouldn't appreciate. That's one of the ways personal trust starts to get built: somebody can share something, like through an app, that information gets shared, and it's not abused; it doesn't come back to harm you in some way. So you're absolutely right: we're making those choices. We have free will to decide, as the online commenter said, to opt in or opt out of the system. You don't have to use Gmail, you don't have to use X, you don't have to use these other systems. But if you do, you're now subject to what happens, and to the level of trust that they do or do not warrant.

By the way, a probably strange comment: do we need all these new services for our daily life? You know, I remember a very old story from Greek times, 2,000 years ago. Socrates and his wife Xanthippe had constant conflicts; Xanthippe wanted to go shopping in the marketplace in Athens, and Socrates said: no, you know, it's a waste of time, I have to think, not go shopping. And one Saturday morning he said: okay, I will go to the marketplace. And Xanthippe took him through the whole marketplace, and when they finished, she said: oh, what is your impression now, how many things are here? And he said: okay, my conclusion is, how many things are available which I do not need! So probably this is also good advice for all the services which are around us in the Internet world: a lot of things you really do not need. You know, train your needs. This is harder, probably.

There was a question; yeah, there's a question in the back. And we'll make this the last one, because we're at the end of our time.

Okay, thank you. Morning, everyone. My name is Chanel McPherson; I'm a student at the University of the Commonwealth Caribbean, and I'm studying networking and cybersecurity, so you know that I'm already invested in this topic. I think one of the core values of my profession is to build trust, of course, and to protect our cyberspace, so that persons are trusting in its reliability, its usefulness, and its security. Now, one of the questions that I have: Jonathan pointed out, in an initial point of his presentation, some countries that are lagging, and I'm sure Jamaica is one of those countries that are looking into digital transformation. So how do we bridge the gap with a country that is not trusting of anything regarding digital? How do we bring that country up to speed with what's happening, and with trust in the cyberspace?

That's a big question to get answered in the next 30 seconds. But part of it is evolving our own understanding of what policies actually make a difference in trust. That's why the Tufts study is interesting: it looks at the types of laws that were put in place in different countries, the kind of digital evolution that has happened, and the best practices of corporations; it looks at a number of those things that all have an impact on trust. And so if someone's a latecomer to building a more trusted environment, they at least have more information now, so that they're doing less experimentation and they kind of know what's more likely to work. In lesser-developed countries, a lot of it has more to do with things like EULAs that can be read on a mobile phone, for example. That's another question, right? I'm really not going to read three pages of a EULA if it's on a phone. How can I present that information differently, to build trust and help people make intelligent decisions, if they're doing the one-handed review and the single-thumb Accept? And how is that different? So the things that you ought to be doing to build trust vary from country to country.

I want to do one thing before we close: my second book giveaway. Yesterday I gave my book on transnational advocacy networks to Benjamin back there; he won. The book today is The Turn to Infrastructure in Internet Governance. So the first person who can tell me the difference between disinformation and misinformation will get the second book.

How are they going to do that? They can come up to you afterwards; come on up. All right, excellent. So please join me in thanking our panelists for getting this conversation started. It's a whole day on this topic, so hopefully we haven't depressed everyone with our lack of hope on our day about trust. But thanks, Derrick and Wolfgang, for your participation, and let's continue the conversation over the course of the day.