And thank you to State of the Net for having us as well. I'm glad we've gotten the doggie bag etiquette out of the way.
As I was yelling over there, you're welcome.
Do away with those questions.
Absolutely.
Great. So I wanted to start off this conversation by talking about one of the big topics, of course, at this year's event, and in DC tech policy circles, which is AI. I was just wondering if you could speak to what you see as the most pressing issue posed by the technology that the agency needs to grapple with?
Sure. First of all, let me say thank you, Cristiano, for volunteering to do this. So we've got to say thank you to Cristiano, thank you to Amy, thank you to Tim, and all of you for being here.
I think there are two things that are front of mind right now. Obviously, we need to stay on top of generative AI, large language models, these very sophisticated systems that are very quickly coming into regular consumer use and business use as well. And so, on that front, we have recently subpoenaed some of the most prominent companies in the space to figure out some of the competitive dynamics at play, some of the relationships between the companies that may or may not be apparent from the outside. We have convened artists, we've convened creators, we've convened writers, to make sure that we don't get lost in the dazzle of the technology, and that we remember there are real people's livings, their work, their art, at play, and that they get a fair shake. Lastly, we have, through our Alexa settlement, I think, made clear that "we need to train our machine learning model" is not a justification to break privacy law. And so, of course, we are trying to stay ahead of the curve on this, and I think we're succeeding, thanks to Chair Khan, thanks to Commissioner Slaughter, and the staff working on all this.
At the same time, I feel very strongly, and I know my colleagues do as well, that we cannot lose sight of the fact that while these very advanced generative systems are rolling out, simpler, but no less consequential, systems are pervading decision making in our lives. We're used to AI being used to, you know, prioritize messages in our inbox, help us avoid traffic, things like that. But, in 2024, increasingly, absolutely critical decisions about our lives, our health care, who is hired, who is fired, how much we pay for rent, that kind of thing, are set by algorithm, and we are keen to make sure that those decisions are made fairly. And so, for me, one case that has gotten too little attention is our settlement in the Rite Aid matter.
So, those are the two fronts that, I would say, are front of mind when you ask: AI, what are you doing, FTC?
So, on the Rite Aid settlement, in your statement on that you talked about how the settlement was, quote, "a strong baseline for what an algorithmic fairness program should look like." How might the agency look to apply this baseline to try to rein in use of algorithms more broadly?
So, let's take a step back and talk about what Rite Aid was and did, and then I want to get to that question. So, in Rite Aid, we allege, and everything I'm going to say, these are allegations that were part of a settlement, we allege that the company, the pharmacy, the retailer, rolled out a face surveillance system over eight years that disproportionately flagged women, disproportionately flagged people of color, falsely accusing them of being persons of interest who'd engaged in shoplifting or other illegal activities. And some of the cases that came up were shocking. There are instances where an 11 year old girl walks into a store, is falsely accused of shoplifting, and is stopped and searched. Her mother later says, you know, I had to miss work because my 11 year old daughter was so distraught at this. You had a woman who was stopped and searched, I think the police were called to come and get her, who was flagged in response to an image that was later described... the woman in question was African American, and the image was later described as depicting a white, blonde lady, right? You had instances where people were out with their co-workers, with their families, and were audibly, publicly, falsely accused of breaking the law. These were people who had done nothing wrong.
And so, why am I, you know, giving you this litany of harms? Because we need to remember that these systems can hurt people, and the legal term is 'substantial injury' under our unfairness analysis. And so, when you say, how might this be used in the future? One reason I want to underline the settlement, like 16 times, is because, if you are a company using an algorithmic decision making system in a way that may substantially injure people, in a way that they cannot reasonably avoid, and in a way where those harms aren't outweighed by the benefits, then you should expect to familiarize yourself with the Rite Aid settlement, because we will be very interested in applying it if this comes to the attention of our staff. I see it as a way to comprehensively assess bias, root out bias, ensure that the right people are running these systems, ensure that people know about the systems, ensure that they're told when they're used on them, and that they have an opportunity to contest them. So, I see it as a framework for addressing algorithmic unfairness. I hope we don't have to use it again, but I suspect we might.
On the front of AI and competition: you touched on the inquiry that the FTC recently launched into some of the investments that major tech companies are making into this space, including into companies such as OpenAI and Anthropic, and Politico recently reported that the DOJ and FTC have had discussions about which agency should lead a potential competition investigation into some of those partnerships. What's your personal level of concern at the moment about whether those types of investments pose a threat to competition? And do you see this as an issue that's in the FTC's lane?
We'll see, but we're interested. We're very obviously interested. And this is why, you know, we're conducting a 6(b) study, which is a study where you can really peer under the hood of these companies. You send them compulsory process, which is a fancy word for a subpoena, to understand exactly what the relationships are, what the competitive dynamics are. I really benefited from reading a report from one of our, I guess you could call, sister agencies in the UK. The Competition and Markets Authority did a terrific report on these large systems, and helpfully pointed out that it's not as if there are limitless resources here. There are already bottlenecks, in the words of the CMA. I don't know if they technically used the word bottleneck, but they said, you need skilled personnel, staff with certain qualifications, and that's a limiting factor; you also need to have massive compute, and that's a limiting factor; and you also need access to the platforms through which users or other businesses will engage with you. So, each of these joints, junctures, whatever you want to call them, could limit the ability of companies to compete on a level playing field, and so that was very helpful for mapping out what may be some of the competitive bottlenecks. My hope is that the study that we're conducting will shed light on that. Obviously, some of the information that comes out, or that we receive, will be information that's already out in public, but some of it will not be, and I think we will benefit from that.
Could you talk a little more about what types of questions you're hoping that study, that inquiry, will answer, that might help shape the agency's direction on this going forward?
So, I don't want to mischaracterize... I mean, they're all right there. I'm pretty sure that the questions from the 6(b) are public, so I would suggest folks look at that. But, for me, the most interesting thing is the relationships between who controls those potential bottlenecks, and how that will affect the playing field for new entrants, who don't have those legs up already.
So, shifting gears a little bit: you yourself and the agency, of course, have been very involved in discussions at the federal level about children's online safety. A big piece of that is the FTC plan to bar Meta from monetizing children's data. I've spoken to a number of advocates in the children's safety space who have argued that this should be a roadmap for how the agency, from a policy perspective, tackles this issue across the tech sector. Do you agree, and how do you think that could be applied?
So, I cannot talk about that litigation, but, in other public settings, I have endorsed legislation that calls for a ban on targeted ads for children. So, as a policy matter, I think that it is a very compelling proposal to reduce the desire to keep children online in perpetuity, and targeted advertising is a key part of that. As a policy matter, I absolutely think it's a logical step; as to whether...
Yep, I'll leave it at the policy side of it, and not touch on that litigation.
Okay. Another sort of key decision from the agency recently was the settlement around Fortnite and Epic Games, which was over allegations that the company used deceptive design features to trick players into making unwanted purchases. I'm wondering if you could talk about how you see the standard there potentially being applied more broadly.
Yeah. So, one really important thing to appreciate about Fortnite is that it goes at one of the key contributors to what are alleged to be teen mental health harms online. If you go online, and Danielle Estrada, my colleague, and I did this, and just download all of the social science research on this question of what is driving alleged teen mental health harms online, you're generally going to find three buckets. One is the one that probably most people spend time talking about, which is content. I'm not making a claim either way on that, but the allegation is, in research and elsewhere, that teens are exposed to content online, let's say, extreme dieting content, pro-anorexia content, pro-bulimia content, etc., and that that is harmful. So, that's one bucket of harms.
The second bucket of harms goes to extended engagement; it goes to children and teens spending much more time online than they want to, or normally would, because of certain techniques, technologies, and design features that keep people online. The most commonly trotted out ones are endless scroll, autoplay, etc., but there's any number of features that continually drive people to return to the platform when they wouldn't normally think to do so.
The third bucket, the one that Fortnite gets at, another generator of teen mental health harms online, is abuse and harassment. One thing we alleged was happening, in the case of Fortnite, is that the company set privacy settings so low that children and teens were being verbally, audibly harassed while they were playing Fortnite. So much so that, I think, one of the most jaw-dropping parts of that complaint was that you would have Fortnite engineers visit, you know, their little cousin's house and ask, hey, why is the volume off on the television? They'd say, yeah, these weird guys are harassing us, so we just turn it off, because we can't figure out how to turn off the audio. The engineer went back to their team and said, and I'm speaking loosely here, this is obviously not exactly what's in the complaint, the complaint will tell you what's in the complaint: how can it possibly be that we've built something that's so hard to manage, in terms of privacy settings, that perfectly savvy kids and teens can't figure it out, and instead are turning off the volume on the television?
So, we required the company, and this was the staff's idea, and I think it was a terrific one, to set the privacy settings for kids and teens to their highest level. That is something that, I think, has come up in some other cases, but making sure that kids and teens have to make the deliberate decision to connect with other people, that they can choose who they connect to, and that it isn't imposed on them, is, I think, a very important step forward that will help address this third bucket of harms around abuse and harassment.
Another major related effort, of course, around children's privacy is the FTC's proposal to strengthen some of the requirements of the COPPA rule. This was a process that was initiated in 2019, when the agency began soliciting comments; just this past year, they put this plan forward. I'm wondering if you could speak to the pace at which the agency has been able to move on some of these issues, and whether you're concerned that it's not able to keep up with some of these concerns as they spring up.
Let me say two things that are both true at the same time. If you compare the FTC staff in charge of privacy to the population of the United States, that's a ratio. Compare that ratio to those of our peer countries. We are, compared to our peer G7, G8 countries, a small fraction of what they are in terms of the amount of money that their governments spend on privacy at the national level. With that said, our DPF staff is truly top flight, and I knew it from the outside, when I was a Senate staffer working with the FTC, calling them as witnesses, but now, on the inside...
One thing I would point out is that the same month that we proposed the COPPA rule update, we also reached the first ever algorithmic fairness settlement, with Rite Aid. In that same matter of weeks, we settled a matter with an advertiser that was literally creating categories of people by the doctors they visit and selling that to the highest bidder. All this is just the public stuff. Behind the scenes, you have other processes, other cases, other matters. And so, with the resources we are given, pound for pound, DPF and the FTC are knocking it out of the park, but I think it's a shame that our commission is not funded to the same level as...
Well, I'll just say... I'll reiterate what I said earlier, that we receive a fraction of the funding, in relation to our peer nations, when it comes to privacy, and I think things may look different if that were not true.
We have just a few more minutes, but I'm going to try to sneak in a couple more questions here. This debate around children's online safety, of course, is happening at the federal level, it's happening on Capitol Hill, and it's happening at the state level. There's a major proposal in the Senate, the Kids Online Safety Act, that seeks to expand some of these requirements, but it has faced some pushback from advocates over concerns that it could harm privacy and also chill speech online. You're, of course, a longtime privacy advocate; I wanted to get your thoughts on the approach of that legislation, and whether it strikes the right balance on these issues.
So, let me quickly answer a few things. First, this is hotly debated, and I have not read the most recent drafts. I've publicly said two lodestars for me are: yes, I do think there need to be new tools that help folks in law enforcement, at the FTC, and elsewhere protect teen mental health online. At the same time, the Internet is a lifeline for LGBT people, and we cannot cut that lifeline. So, those are my two lodestars. But I won't speak to the current debate, because I have not been following these latest iterations closely.
As for what we're doing? We're doing every single thing we can, using every resource at our disposal. We are an active part of the President's Task Force on Teen Mental Health Online, number one. Number two, we recently discussed the COPPA rule proposal; one of the things that's part of that rule proposal is making sure that the special permissions that are allowed for contact information aren't misused to keep nudging children and teens to come back to the platform over and over and over again. That's a proposal, and I encourage people to comment on it. And then, lastly, we're bringing enforcement actions like Fortnite.
So, we're doing everything we can to address this problem. And, sorry, I should mention, we're also working to bring psychologists, pediatricians, etc., on staff this fall, so that we follow agencies like the CMA, and some of our peer agencies, that have these interdisciplinary teams. So we're building capacity, using law enforcement, using rulemaking, using every tool at our disposal. That's what we're trying to do.
A number of states, including Florida and Utah, are taking a different approach: either banning teens up to a certain age from accessing social media altogether, or requiring parental consent. Do you think that's the right approach?
Absolutely not. Meet a teenager; they will find a way to get around that, number one. Number two, I am not a First Amendment expert, but I have a hard time seeing how that would survive First Amendment scrutiny. So, no. That I can say clearly: I do not support that.
And last question, before we wrap up here: you are involved with the task force that President Biden created dealing with this issue, along with other agencies, and you guys are working on developing best practices for industry to try to tackle it. I'm wondering what you see as some of the key areas that those best practices could potentially address.
Let me just suggest one frame that I want to encourage anyone working on best practices to think of when it comes to teen mental health online. I'm sure everyone, or a lot of people in the room, are familiar with the Surgeon General's advisory on teen mental health, and people are also familiar with the National Academies' report on teen mental health online, which came to a slightly different conclusion. I just want to point out a subtle but critical difference between these two documents, because if you read Surgeon General Murthy's advisory, he is asking one question, and if you read the National Academies' report, it is answering a different question. The National Academies' report is asking: is there enough evidence that social media is harmful? The burden is to prove the danger. Whereas Surgeon General Murthy is answering a different question. He's asking: is there enough evidence to prove that social media use is safe? So, the burden is on the company, on the research, to prove safety.
And so, I think that we, as a law enforcement agency, have to answer this question. We have to establish causation. We can't bring a case against someone unless we can prove that the technology, technique, etc., caused a harm. Of course, absolutely. But, when we're dealing with best practices, I think the more relevant question is: is it safe? Can you prove it is safe? And, for me, what the Surgeon General said is the most informed word, I would say, on the subject, and the conclusion he reaches is that, while there are benefits, there is not enough evidence to establish safety, and we do need to take precautions. So, that's what I would say: we need to ask, is there evidence to establish safety, when it comes to best practices.
We've covered a lot of ground, Commissioner. Thank you so much for your time.