Thank you, Tim. I'll just say thank you for that intro for both of us. But particularly, I will say, congratulations to Tim on the 20th of these, it's pretty incredible. And, I will say he's right. I was at CDT at the time that this conference started. I remember it very well, because I was also on this IEF board. I remember when the idea of this conference came up, a lot of us were skeptical, and I remember saying, like, this is a brilliant idea, but, really, can you do more than one or two of these? Anyway, clearly, completely wrong. Clearly, there was a lot to talk about. I'll just say congratulations to Tim Lordan, and his whole team here, for 20 years of incredible conversations that have really made us all smarter in this space. So, congratulations, Tim.
Sorry.
Well, Alan, thank you so much for that opening. It kind of brings us right into a topic I wanted to start out with, which is AI. You've been in Washington for many tech policy battles over the years. What do you think makes the AI moment we're in right now different?
Well, I think AI has captured the public's imagination, and that's really powerful, and something that we should all really take a moment and lean into. I'll just say, as a starting point, I think we've all realized the power of AI, that machine learning and these new systems are really going to transform so much of our economy, and I think there's a strong feeling within the administration that responsible AI innovation is going to benefit a lot of people. But the key is, we're only going to realize that benefit if we are also able to manage the very real risks that these systems are posing, and posing today, and so I think that has captured people's attention: both the potential benefit, and the fact that we need to really get ahead of addressing the risk. The large language models have really brought it to the fore. We've been talking about it, you've been covering it, for years, but it's really something unusual and special how much people are paying attention right now, so we have to take advantage of that moment.
On that point, last year, you did a request for comment on AI accountability. Can you tell us anything about the initial findings from that comment period?
Thank you. We started this project about 18 months ago, before large language models were cool. The idea behind it was to say: if you really want responsible innovation, and you want trustworthy systems, we have to have some way to assess these systems and hold them accountable. The big question we raised in this project is, what does it take to be able to hold a system accountable? What do you need to know? We put out a request for comment last year. We got over 1,400 comments, which for us is a huge amount; it clearly touched a nerve. Thank you to everybody who submitted, many of you are here. I think what we've heard is that there is real action we can take to help make systems more accountable. We have a report, I would say it's coming out in weeks, not months, so this winter. I see the report's primary author over there, he's nodding, one of our main authors.
There are a set of things that we can do, and this is what we're really trying to understand: what can government do to promote an ecosystem around accountability? It starts with more transparency for models, so people can really understand what they're doing. There's work to be done by the private sector. There have to be accountability consequences when a model doesn't do what it claims to do. One of the analogies we're using for this is financial auditing. We have a world today in financial accounting where we can tell if a company has actually made its numbers, because we have a very accepted set of practices about what it means to be in compliance with accounting rules, and we have a well-accepted ecosystem of auditors out there who audit companies. That is the world we need to move towards: a world where we've got real rules that people understand, and an ecosystem, an army of auditors out there, who know how to look at systems, and understand, and be able to prove that they're doing what they say they're doing, keeping people private, keeping people safe, being unbiased. That's the world we want to move towards.
So when you think about having a system of AI auditing from the government, what do you see as the biggest obstacles to that? Is it talent, compute power? What might hold that back?
It's all of the above. We are so early in this process still. When you look at the level of transformation, the impact on the economy, we're very early in this machine learning AI revolution. We have a lot of things we need to build. Part of our goal is to identify those building blocks, starting with the rules, and also giving the public more access to information and more access to compute power, so that researchers can understand and set up better practices. And, like I say, creating this private sector ecosystem, a world where we do have companies that are in the business of holding systems accountable.
To have that vision realized, what steps do you think Congress needs to take as they debate AI legislation right now?
There is a lot, I think, ultimately, for Congress to do in this space. Again, it's early days. We're very keenly interested in watching and trying to provide technical assistance, where we can, to Congress as it gets engaged in this. Ultimately, there will be a lot to do here: everything from how we make sure we've got the resources to stay in front, that's a key part. We think that innovation here in the US and in the West needs to happen, it's going to provide a lot of benefit to people, so there's a lot that Congress can do to promote that. We're also going to need some accountability on the back end, and Congress can have a role there.
On that point of accountability, we see steps being taken towards setting up the AI Safety Institute within NIST. How is the work you're doing at NTIA informing those efforts?
First of all, I'm very excited about the AI Safety Institute, a big announcement last week, new leadership there, so I just want to congratulate the new leaders who have been put in place. I think that is going to be an enduring institution for the US, and we see them coming up in other places around the world. We're working very closely with our colleagues at NIST; there's plenty of work to go around, it turns out. This is all a product of the AI Executive Order from the administration, which I think is a symbol of the strong sense of urgency we feel in the administration about the need to engage on AI. The second piece is that the executive order is really probably the most ambitious government effort to date, of any government, to address the broad range of issues around AI, and the Safety Institute is a big piece of it. We've got some homework assignments as well at NTIA that we're going to be working on with the Safety Institute ourselves.
That's an interesting claim. Do you think this is more ambitious than the EU AI Act?
I do. I do. I think that the AI Act is going to take years to come into effect. We've seen this with the GDPR, the privacy rules in the EU. If you look at that executive order, a lot of people say it's the longest executive order they've ever read. It has a lot of different pieces to it, and a lot of it is also about promoting US success in this market: investing in research, investing in compute power, making sure we've got cutting-edge technology, and also investing in dealing with safety and dealing with risks. It is, across the board, a very ambitious effort. We will, almost certainly in the long run, need legislation in this space too, but for something that we could do quickly, I'm very proud of the fact that we were able to move in this space very quickly. It was less than a year ago, this spring, even nine months ago, when we started an effort to get commitments from leading AI companies around AI and large language models. We got those commitments, and we've moved to an executive order. We're working with our international partners. Again, it just shows the sense of urgency in addressing AI right now, and it's going to be an enduring theme. Just look at today's program: there's a lot of conversation to be had about it, and appropriately so.
On the topic of the Executive Order: you've announced that you'll be doing a request for comment on the topic of open source. What do you hope to learn from that comment period?
One of the most interesting homework assignments, really the most interesting homework assignment, that we got at NTIA from the Executive Order was to deliver a report to the President by July, so we have a date, we have a shot clock, about widely available model weights for foundation models, for leading models. What does that mean? It means: what should we think about opening up, making widely available, some of the key parameters around models that let people use them and replicate them? It's a very sensitive topic, and I'm very glad that we're taking the time to think about it.
Because, it would have been easy, well, it might have been hard to implement, but there's an initial reaction that, gosh, if we open up these incredibly powerful models, how will we protect safety, how will we protect security, when they're open and anybody can use them for almost anything? There's a competing view, which is that opening up these models can be a way to promote competition, democratize access, and give more people the ability to use them in different ways and innovate. Figuring out that path through, and making sure that it's not just a few large companies that have access and control the most important models, that is a really big assignment for us. I'll say we had a terrific listening session in December at the Center for Democracy and Technology. There's more to come: a lot of outreach, and a request for comment that'll be coming out. For us, the key finding so far is that this is not a binary choice, that there's a gradient of openness, if you will, and there are a lot of choices that we can make about how open models will be. That will provide us a path forward, but it's early days to tell, and our report will be coming out in July.
This issue in particular is the one where I hear the most concern from startups and venture capitalists about the lobbying power of some of the largest tech companies in Washington. How are you weighing that as you handle this process?
It's a terrific question. I think it shows why it's so important to get a wide variety of inputs. That's why we actually very consciously did our first listening session at a public interest organization. We've tried to get input from a wide variety of stakeholders. We'll be doing a public request for comment coming out in the next few weeks, and we hope many of you will contribute to that. I think we need to have a public conversation about it, and we're working with some of our international counterparts too, who are thinking about this.
I want to take a moment. You talked about the importance for Congress to act when it comes to AI, but, over the last decade or so, we've seen Congress unable to act, it seems, on many tech issues. When you look at even areas where there's bipartisan interest, like privacy or children's safety, we don't have new laws in those areas. Why do you think AI might be different?
It is a great question. We have a lot of members of Congress coming today, maybe you all can take this, too. There are some key areas where we really would benefit from action. Put privacy at the top of the list. If you had asked many of us who were working in this space when this conference started, and said, 20 years from now, we'll be sitting here, and we still won't have a national comprehensive privacy law, I think most of us would have said, that couldn't happen, we know we're going to need one of those.
We would benefit from more of that action. The reason I'm cautiously optimistic about AI is, I think it has really captured the public's imagination, because it's immediately so obviously useful. Our kids are using it for their homework assignments, maybe they shouldn't be. People are using it in the workplace now. It is transforming our society. You see members of Congress engaging in a way that they haven't, I think, for a lot of tech issues. You see companies running to this, and you see government, even through executive action, moving, I would say, faster than on any other major tech issue I can think of. That's a reason to think there may well be action here, because of the urgency, because, really, people are reacting.
While we're on the topic of Congress, I wanted to ask you: there was recently a kids' safety hearing on Capitol Hill with the CEOs of major social media companies. That day, you had a session at the White House with the Kids Safety Task Force. Can you tell me a little bit about how that hearing might have impacted the discussions there?
Well, as a starting point, I'd just say these issues around the safety of our young people online, and the health of our young people online, are massive and need to be addressed. It's probably one of the things I hear about the most when I go around and talk, particularly about broadband and other things, with people around the country. Parents are really worried. While there's still a lot of questions and research to be done about exactly the impacts of the Internet and online services on young people, I think the strong feeling among many of us is that we know enough now to know that we need to act.
We did have this listening session with some of the leading mental health and kids' safety advocates, and we also had a couple of young people there, in the White House. It was really clear to everybody there that the time for hearings is over, and the time for action is now, and there are things that we can do. I co-chaired this task force that the President set up, a task force on kids' online health and safety, with some great colleagues at HHS, we've got the Surgeon General on board, and the FTC, and we've been tasked with a couple of things, including putting out a set of best practices for industry, recommendations for families, and policy recommendations. That first one especially, best practices for industry: we need to do more. We cannot say this is just on parents now to keep their kids safe, and we can't say that kids just need to stay offline longer, or not use these services. They're using them, they're active online, we need to help them thrive online, and industry needs to help, too.
Given the urgency on this issue, and what you just mentioned doing with AI, having an industry pledge, would you ever do something similar in the area of children's safety?
It's a great idea. We will see. I think we're very keenly interested right now in identifying those best practices as a start, and making sure that we're raising the floor. There are some very good innovations out there. There are a lot of companies who are working hard on how to make sure that kids are safe. I think we can help a lot by lifting up the good practices, and also asking people to make sure that we're raising the floor here.
While we're on the topic of social media, I wanted to ask you a bit about TikTok. Last night, we saw the Biden campaign join TikTok. Obviously, there have been a lot of concerns about the app and its ownership by the Chinese company ByteDance. Do you think the Biden campaign joining TikTok presents a security concern?
Well, I can't speak for the campaign. They will make their own choices and assessments about how to engage with the American public where it is, and we do know that TikTok is a very popular application right now. I can say, we have a process that's going on through CFIUS about foreign investment. The administration has endorsed the RESTRICT Act, which is a bipartisan effort to think about how we create better tools to address these issues. This administration, I will say, is focused on two important things in this area: one is making sure that Americans' data is private and secure wherever they operate online, and the second is, the President has said that he wants large platforms to be more accountable for the illegal or harmful content that's on their platforms. I think that will be true regardless of what the platform is, and we're going to continue to pursue those things.
Last year, when there was a debate about banning TikTok, the Commerce Secretary came out and said banning TikTok would be a sure way to lose voters under the age of 35. We've seen the Biden White House reach out to influencers. Are political calculations impacting how this administration thinks about its policy on TikTok?
We've had a very clear line on this all along, and I think the Secretary of Commerce was speaking in jest in some ways when she made those comments. But the point is, we've had a very clear line on this all along: TikTok is not a service that's available on any government devices in this administration. I can't use it. Our team doesn't use it. We can't install it on our devices. And that's for a reason. As I said, we've been laser-focused on these key goals, promoting the security and privacy of data, holding platforms accountable, and we will continue to focus on that.
I want to shift the conversation now to broadband, because I know you've been doing a lot of work on this front, especially since the infrastructure package passed. We recently saw that the Affordable Connectivity Program stopped enrolling new customers, and it's on track to run out of funding by May, with no action from Congress. Do you have a message for lawmakers on this front?
Affordability is a necessity, not a luxury, when it comes to high-speed Internet service now. We have 23 million households in America relying on the Affordable Connectivity Program in order to get online. It would be very damaging to our economy, and it will be harmful for these families, to no longer be able to get online. And I will say, from the point of view at NTIA, we're out in the world trying to promote broadband access and build out networks, and ACP, the Affordable Connectivity Program, has been a key part of making sure that we can deploy those networks well. Knowing that there's a customer base available, and that people are going to participate, is part of what makes network deployment more affordable and more attractive. Without ACP, we also have a real problem with our goals of reaching everybody and building out networks for everybody in America. So, it is an incredibly important program, and we're hopeful that Congress will be able to act. It's part of the President's supplemental request, it's a high priority for this administration, and we will see what happens.
Can you tell us a little bit more specifically about how the digital equity work that you're doing under the infrastructure package might be impacted if this program goes dark?
Well, as I said, affordability is a necessity, not a luxury; it's a key part of getting people online. We've been talking about the digital divide in this country for 25 years, and we now, thanks to the Bipartisan Infrastructure Law, actually have the ability to do something serious about connecting people. We are going to, in the next decade, connect everybody in America with a high-speed Internet connection. That is going to happen; we're on our way to doing it. But simply having a wire that goes past a family's home doesn't do that family any good if they can't afford the connection. So, affordability has to be a key part of this, as well as access to tools, devices, and skills. That's why our digital equity work looks beyond... our goal is not just to have a connection available, it's to help people thrive online. It's to get people to have meaningful adoption in this country, where people are actually online and using the tools. Affordability is a key component of it. Without ACP, we have a hard time reaching that goal. We have other tools: this year we've got a $3 billion digital equity program, and we're going to be putting out about a billion dollars' worth of state capacity grants this year. Every state is going to have a digital equity plan submitted to NTIA, I think by this spring. So, there's a lot of work happening in this space, but affordability is a key component of it.
I wanted to circle back to some news from today: you announced a new grant to open a testing center for Open RAN in Dallas. Do you have any response to some of the skepticism, especially in Europe, to this initiative?
I'll say, this is news this morning, broken by the Washington Post, thank you. We're putting out, and we'll have an event this afternoon, a $42 million grant for a testbed, a testing center, around open wireless technologies. This is what people popularly call Open RAN. The importance of this is, 5G is a very dynamic market, but the 5G equipment market, the cell towers, is a very consolidated and static market. There are really only four or five major vendors around the world who produce the equipment in these cell phone towers, and a couple of them are not trusted, Huawei and ZTE. So, there's been a multi-year push to try to figure out how we can promote more trusted networks, and particularly more open networks, because openness can help us build a much more resilient supply chain, instead of relying on two or three vendors.
The announcement today I'm really excited about, because it's exactly the kind of place where government can help. We have a $1.5 billion fund to help promote innovation and openness in this space. Supporting these kinds of testbeds is one of the most important things that we heard from industry about how we can help: by creating these testbeds so that people can prove that their equipment works. That's the answer to the skeptics. As you mentioned, there are people who are skeptical about this. By proving that equipment can interoperate, we bend the curve, and that is our goal here. More open networks are inevitable in this space. We want to make that happen faster.
We're just about out of time, so I want to ask you a big-picture question. The Biden administration came in with very high expectations on tech policy, on issues from competition and broadband to social media and privacy. How would you grade its progress so far?
I think we're doing well here. There's still a lot of work to do, obviously. There are things that we'd like to do more of. This is an important moment. There's a lot to feel proud about: our work on broadband and connecting everybody across America; our work on competition, which you'll hear about, and the work of the FTC and the Justice Department, our colleagues there. There is a lot more we'd like from Congress; we already spoke about that. Our work on AI is nothing like we've seen before, in terms of engagement on a major technology issue in this country, in recent memory. And so, I think, we're very proud of the work that we've done, but there is obviously a huge amount still to do, and it's going to be challenging. I think this is going to be a very challenging year, driven by tech: concerns about AI, concerns about mis- and disinformation, concerns about how AI is going to fuel those things, and in a charged political year.
If there's one thing that gives me hope, I would say, and I am an optimist, it is conversations like the ones we're having today, and the ones we're going to have today. It's the fact that we have a community of people who are engaged in this in a way that we never had before. In the previous conversations about the Internet, the Web, social media, we never had this level of engagement. I really think that we have tools at our disposal to deal with these really giant challenges in front of us. Connectivity, AI, they're hard problems for us to deal with, but they also give us the tools to deal with the big things that are facing our planet, to think about how we create a vaccine for a pandemic, or how we deal with climate change. We will figure those things out with these tools. So, we need to get them right. We are engaged far earlier in this conversation than we've ever been in the technology space, and that's what gives me hope.
So, I would say there's a tremendous program of conversation today. We can look at this agenda and say, these are exactly the conversations we need to have. It's hard work, but I know that, working together, we can really build technology that's going to make people's lives better, and that's the ultimate goal.
None of us know exactly what's going to happen in November, but, if you have less than a year left in office, what is the number one item on your to-do list? What has to get done?
I'm going to say, we are well on our way to connecting everybody in America. That is our all-hands-on-deck project. This is a once-in-a-generation moment. We are only going to get tens of billions of dollars from this Congress to connect everybody in America once, and we have to get it right. Generations before us did giant things: connecting everybody with electricity and water, building the Interstate Highway System. This is our big infrastructure moment. This is our generation's chance to connect everybody with the tools that they need to thrive in the modern digital economy. The one thing we are really focused on making sure we land this year is getting that program launched. It's well on its way, but we need everybody's help.
Well, I think we're just about out of time. Thank you so much, Secretary Davidson for your comments today.