SOTN2022 10 How Can Access to AI Resources Support Human Rights?
10:02AM Mar 3, 2022
Speakers: Ben Brody (Protocol, moderator), Assad Ramzanali, Devaki Raj, Tom Dawson, Austin Carson
Keywords:
ai
people
question
taskforce
talking
nairr
resources
thinking
technology
research
computer vision
build
communities
folks
companies
data
world
government
country
access
All right. I want to welcome everyone to "How Can Access to AI Resources Support Human Rights." I'm Ben Brody, a senior reporter focusing on policy at Protocol, and I did shine my head for this. A lot of you here probably believe that getting AI right is the great tech policy challenge of the next 10 years. We have some advantages in this project. We've gone through enough technological change to understand that the benefits of AI absolutely must be shared as widely as possible, and that the downsides for human rights can be real if we're not careful. In many ways, it's the perfect moment for this panel. The White House is leading a task force right now to determine the shape of the National AI Research Resource, which is designed to provide computing power, data, education and more to spur the development of AI in the US. The task force's recommendations for the shape of the NAIRR are due within months this year, and the administration is also putting together an AI Bill of Rights. Members of Congress from both sides of the aisle have urged these two initiatives to go hand in hand. The question now is: how should the US seek to build AI that lifts up people of all races and ethnicities, regions and ages, abilities and experiences? How can it help people live free, prosperous and self-guided lives? And how can people from all backgrounds get access to the potential benefits of AI? The future can perhaps be as exciting as revitalized communities that have suffered for decades from redlining or the loss of manufacturing jobs, but it can also be as dire as any repressive sci-fi oligarchy you can imagine. Perhaps, if I've made it sound enough like an MCU trailer, then I guess we're ready to start. We're gonna take audience questions at the end. But for now: I have Assad Ramzanali, legislative director for Congresswoman Anna Eshoo, who co-authored the NAIRR Task Force Act. Devaki Raj is founder and CEO of CrowdAI, which specializes in no-code computer vision. Tom Dawson is CEO of Proof Systems, which focuses on AI identity management. And Austin Carson is the head of the newly founded nonprofit SeedAI. Welcome, all. Assad, I wanted to start with you. This is a quote from a letter your boss recently wrote: "Smart individuals with good ideas should not need to work at a handful of large technology companies to have access to the computing power or other resources needed to research and deploy AI-based technologies." What were the concerns behind that letter? And what's the worry that this isn't being carried out?
Yeah, so thank you for doing this, and we're excited to be here. So the NAIRR, the National AI Research Resource, which is a mouthful: the idea is captured in that thesis. The idea is that there are going to be a lot of gains for society that come from advances in artificial intelligence in the coming years; we're already seeing that. But we need to democratize who has access to be able to do that. Right now, it's concentrated in a few companies that can afford the inputs, which I think of as data, compute power and expertise. There's really just a handful of companies in the world who can do that. We need universities, we need nonprofits, we need others at the table to also be contributing to that foundational thought and research. But also the gains, to your point.
I want to bring the rest of the panel in here. Can you guys talk about what small businesses and research institutions can bring to the table to make sure that those benefits go wide, and that maybe are things that those biggest companies can't bring?
Well, that's a tough one. I'm looking at my two colleagues, and they're looking at me here. So I can kick off, you can kick off, but I'll talk loudly
until my words just give out for good. So the funny thing is that the things I'm going to quote are actually things that I've heard from Devaki and Tom, which is: I think you see a lot of ambition and a lot of willingness on the part of dynamic startups, especially those that are focused on access and on making things more approachable, easier to work with and build upon. And they're working with community colleges, with not just research institutions but partnerships between research institutions and some up-and-coming institutions, all across the country. And I'd love to hear you talk about it more, but it was powerful.
Fair enough, thank you. So at CrowdAI, we built this no-code computer vision tool. Essentially, we enable anyone, regardless of technical background, to be able to build their own computer vision. As a small company, we started building best-in-class computer vision for a lot of the major flagship DoD AI projects out there. And what we realized was that a lot of this technology, again, was in the hands of very few, you know, either Silicon Valley startups, academic institutions, or very large, deeply technical companies. So over the last two years, we built this platform, and we started to think about how to get it in the hands of as many people as possible. So in parallel, what we did was we opened up the platform and created academic partnerships with universities like Gallaudet University, Howard University, James Madison, JHU, Texas A&M. And then, as part of something OSTP is doing around community college education programs, we've signed on to a pledge to allow our platform to be used by community colleges. But ultimately, what we are trying to say is that in order for AI to go beyond R&D and into the hands of as many people as possible, you need to arm the next generation of the workforce with an understanding of what AI is. They don't need to be AI experts, but they need to be able to be subject matter experts in their own industry, or their own workflow, to be able to then not be out of a job in a couple of years, but to essentially inform the way AI is improving their own kind of work, whether it be in industry or in other areas.
Tom?
Yeah, I'll pick it back up just a little bit on that, because I think what you said is true and quite impressive. I'm gonna look at this a little bit from the other side. You know, Proof Systems is focused primarily on visual systems and also audio systems in terms of AI. We do facial recognition, voice recognition, you name it, we do it, all together, in the form of video and whatnot. A lot of it's going on, a lot of it's interesting, a lot of it's important. But what we've seen is that there's not been real access for the communities that are out there in general, particularly communities of color, women and the like. And I always like to pull it back over to that question, because we are talking about workforce, but we're also talking about inequities in terms of access to these kinds of tools, or just kind of knowledge in general. One of the conversations my colleagues and I had a little while back was that, hey, look, you know, I've got friends whose kids, at six or seven, are already starting to think about AI; they're thinking about computers and thinking about how to think around computers. But we've got an entire world of individuals who don't even understand some of the basic concepts around it. And so you're talking about things like education and bringing dollars in that way, or you're talking about the workforce: how do you get them even to begin to think about those questions? And so I think part of what we're talking about, particularly in terms of human rights, is that you're talking about directing resources in a way that gets folks to think about these sorts of questions right now. So what does AI mean to all of us? You know, there are lots of different kinds of ways of thinking, there's machine learning, there's deep learning, you name it; folks don't know much about it. I happen to know a little bit about it, I've got an economics background and whatever, I've got friends, we've worked hard at it, we've developed algorithms, we've done some really interesting work. But how many of me, of you, of any of the folks on this panel actually exist out there thinking about these questions? And so you look at America right now, you look at AI: about 2% of folks in the world, they say, can really do this. Well, in order for us to really reach our full potential, at least in this country, we need to think about how to be successful reaching out to that broader population of individuals that often don't look like some of my colleagues who've gone to Stanford or MIT or the like. We need to be able to reach out. And so part of what I think the NAIRR is about is making sure those resources reach deep down into those communities: like, for example, Alabama State University, who we've reached out to and begun working with in this way, those HBCUs, for example. And I'll stop right there.
No, I think that's a perfect jumping-off point, because I want to kind of punt it back to you. A lot of the NAIRR funding is going to live at the National Science Foundation, which of course, you know, does tremendous work on funding the most cutting-edge scientific research at our best research universities. Is that sufficient to cover the breadth of community colleges, even high schools, HBCUs, worker retraining that the panel is talking about here?
Yeah. So, to kind of step back for a second, the timeline of things is: Congress passed a law that directed the NSF, the National Science Foundation, along with the OSTP, to establish
That's the White House Office of Science and Technology Policy.
Thank you for catching the acronyms that often lose people in these kinds of panels. They're supposed to put a task force together to figure out questions like: Where should it live? What should the funding model look like? What's the sustainable model? And so I think that's a fair question. The NSF has, since its founding around World War Two, been about basic research. I think what we're talking about is more than that. We're talking about applications. We're talking about translational research. We're talking about other pieces of the innovation lifecycle. The task force is supposed to figure out those types of questions. And I think, at least from what we can tell, you know, not having seen the recommendations yet, they're grappling with those kinds of questions.
Yeah. And part of what I've read of the NAIRR is that the education piece is going to be really important. But it's not just about what AI is; it's about educating any type of person that's going to be contributing to the workforce about how AI could potentially be part of the jobs they're going to be taking in the future. So what does that mean exactly? Right now, a lot of AI, there's a lot of training data work associated with it, for one type of deep learning, and a lot of those jobs are now being brought overseas. And these are kind of low-barrier-to-entry jobs. I think part of the NAIRR could be how you bring these jobs back over to the US to create training data for all these new types of industries that are going to be relying on AI. So I think there's a lot of work to be done, not just bringing in, you know, cutting-edge technology and funding cutting-edge technology, but also translating these resources to a lot of people who don't want to be left behind when AI takes a larger place in today's workforce and industry.
Yeah, and one thing I think about a lot, and one of the main reasons that I actually started SeedAI, is because you want to see people get the economic benefit, you want to see people's communities benefit from what they create. But I really want to see as much agency as possible coming from each community, especially any that is likely to be disproportionately impacted, throughout the entire process: through the process of creating very foundational technology that does require a ton of resources, through the process of creating applications on top of that. And on top of that, it's going to become component-based, like the software we have right now. And I don't want that xkcd comic of, like, the one piece from a guy in Nebraska in 2007, except that piece is super racist. You know what I mean? Like, that's the worst-case scenario.
I hope everybody in this room knows that comic; if not, we should find a way to put it in the resources, because that's a classic. Austin, talk about the geographic component of that, where it's not just, you know, communities that have historically been marginalized. We don't just want, you know, communities in Baltimore and Washington and New York, even if they have been historically marginalized, to get the benefits of that. So talk about the geographic component.
And I think there are two sides of it. The first is, it's not just the geographic distribution of the resources themselves, it's the geographic distribution of the users of those resources. Right? And with the NAIRR, you certainly have kind of socioeconomic, racial, societal factors that are at play and are part of the consideration of this in terms of access, and particularly in, like, proportionality and representation. But the same can be said if you're looking at the competitiveness of the United States writ large, for the different skills and the different challenges of every state in the nation, right? Like, there is this predominant idea that if you're going to bring in good AI talent, or good AI application, that's going to come from where the AI stuff is. What we need to move towards is: it's going to come from where the everything-else stuff is, right? We have all of this great technology being developed, but we're going to hit a wall as long as it's focused upon, again, the interests of a fairly small set, both economically and socially. And so the geographic distribution, it's going to follow what's called the EPSCoR program, which means that the states that get a very small percentage of NSF funding will get a prioritization of, like, 20% of the funding that's going to come out, assuming that that funding comes from, you know, some of this make-it-here legislation that's going to come down. But the NAIRR itself requires an across-the-country approach. Like I said, it's about strengths and it's about challenges, not about doing anybody a favor, right? It's about getting people to the point where that contribution and competitiveness flows across the country.
Can I add on to that for a second? So, the geographic component: my boss is Congresswoman Anna Eshoo. She represents much of Silicon Valley, where the concentration of AI resources we're trying to democratize exists, right? Our view has been generally that the democratization, even geographically outside of Silicon Valley, is a great thing for the country, but also for our district. If you want to see Silicon Valley startups thrive, you want them to partner with all of the resources and the humans and the intelligence that are available across the country, and across the world. And so that's how we think about that question as well; it's a kind of rising-tide situation.
You know, I do want to say just a little bit here, piggybacking on what you said, Carson. Assumptions, assumptions, assumptions. What are the assumptions? You talked about the bias assumptions. One of the things that we struggled with early on, I remember, early in our research, we published a couple of things on it, looking at the ability to recognize people of color, women in particular, and I think it was like 80%, it was horrible at the time, when we looked at some of the base programming that was going on. And again, it was about the assumptions, and the people who were making the assumptions, developing the AI at the time. You know, you've got teams like ours that are fairly diverse, made up of different kinds of people. But many of these teams aren't, and it's not that they're doing something necessarily on purpose; it's that that's who they went to school with, that's who's in their community. They happen to be in Boston; I met one of my best friends in the world in this space in Boston. But the reality is that the assumptions are implied in the product, in the end product. And we need to begin to think about that. People, what I like to say, my zero-one people, and I'm an economics kind of guy, our zero-one folks, they may not recognize it, they don't see it, they're not trying to. I remember the statement that came out, it was two and a half years ago, maybe three, before the sort of big fallout we saw in the newspapers when the... well, anyway, I don't want to get too political on this one. But the bottom line is this. Before the big fallout about identification of individuals online came, and we talked about sort of the gender and race bias, there really was no recognition of it; they just simply said, we can't do it. And small companies like ours that began to look at these different types of questions said, that's not true, we can do it. But the resources and the energy put toward it weren't there. The NAIRR, I think, is in part a response to it.
Let me give a specific example. The reason we think that workforce development is really important is: the more types of people you get building computer vision models, or natural language processing models, or tabular data models, the more diversity of thought you get. So let me give you two specific examples. The first is, you know, right now, if a lot of your data is footage of brown people carrying guns, then a model that suddenly sees, on a new data set, a person who's not brown carrying a gun is probably less likely to find it, right? But if a diverse workforce builds models, they understand the assumptions that, you know, some examples may not follow. So then you're thinking about what assumptions you're making when you create the training data necessary to build a model. Similar to this is a much more known use case, which is financial data. To build a model for understanding a loan application, they would take a lot of socioeconomic data, because historically, they would do it based on who got the loans. So if you build on historical data about who gets the loans, it's going to put a lot of people with certain backgrounds out of that process. And if you're building these machine learning models without examining that, then again, you're biasing who gets these loans. So part of the thesis of workforce development, diverse development, and part of putting NAIRR resources into educating a more diverse workforce, is that you're getting different, diverse people solving these problems, not just "this is how the data looks, this is where my data comes from," but: how do we make sure the data best reflects the population or the problems you're trying to solve?
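To make the two examples concrete, here is a minimal sketch, in Python, of the kind of training-data audit being described: before trusting a detector, check how the positive examples are distributed across subgroups, and whether the miss rate differs between them. The subgroup names, labels and counts are hypothetical illustrations, not CrowdAI's actual data or tooling.

```python
from collections import Counter, defaultdict

# Hypothetical annotated examples: (subgroup, ground_truth_positive, model_detected)
examples = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_a", True, True), ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_a", False, False),
]

# 1. How skewed is the training signal? Count positive examples per subgroup.
positives = Counter(group for group, label, _ in examples if label)
print("positive examples per subgroup:", dict(positives))

# 2. Per-subgroup miss rate (false negatives / ground-truth positives).
misses = defaultdict(lambda: [0, 0])  # subgroup -> [missed, total positives]
for group, label, detected in examples:
    if label:
        misses[group][1] += 1
        if not detected:
            misses[group][0] += 1

for group, (missed, total) in sorted(misses.items()):
    print(f"{group}: missed {missed}/{total} = {missed / total:.0%}")
```

The same two checks carry over unchanged to the loan example: replace "detected" with "approved," and the subgroups with the applicant populations the historical data over- or under-represents.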
What do you need from government? What does Assad need to go back to the Hill and tell them to do? So there's the education and the workforce component of ethical uses of AI, particularly in high-risk situations, let's say facial recognition. What do you need from the government to make that work better?
Well, I can talk about it specifically as a small business, right? If you actually look at the way funding is being spent on AI, it's primarily focused on a lot of the big prime contractors, right? So a small business, even though, you know, for example, it could be a woman-owned small business or something like that, you generally won't get access to those dollars and access to those contracts, because there are a lot of different types of regulations in terms of getting contracts in the first place. So what's really open to us is: either you've got people who are really excited about bringing in Silicon Valley technology and making it commercially licensable within the government, which is much more rare, or you have to go the route of an SBIR. And SBIRs have been known to not convert into programs of record; the conversion rate is vanishingly small, right? So part of what we need from the government is more resources around bringing in small businesses, rather than people being comfortable with incumbents, the very large prime contractors that they're used to, that they understand how to do business with, as part of opening up the resources for the NAIRR beyond very large businesses that obviously have a lot of lobbying power, etc.
Let's talk a little bit about that. It's all well and good to have a philosophy like, yeah, this really needs to be broad-based, we want this to be broad-based. What steps do you think can, should, will be taken to make sure that it actually is? That we don't just say, well, we opened it up, but, yeah, all those people with those big lobbyists, they just kind of happened to win. It's really a surprise.
Yeah, I think there are two parts to your question. And I think the first is actually the question you asked at the beginning that I never answered: why the letter came to be. And I wasn't trying to avoid it, I just went off on a different tangent.
I'm generous, but persistent.
I appreciate it. You're good at your job. So the reason the letter came to be is, and we were talking about this earlier, there's a bunch of efforts across government that are basically dealing with AI, right? There's the cloud of, like, something-something-AI. The NAIRR we worked on; we're very invested in it, we're following it closely. And that theme is: how do we make sure we're competitive as the economy starts to depend on AI? How do we make sure that we have a broader base of who can contribute to and benefit from that? Then there are other efforts to think about the downsides of AI: the bias issues, the labor force displacement issues, all those kinds of things. Those are other conversations happening. The letter came to be to say: we need to bring those together. If we're going to make AI research dollars available for other types of entities, we shouldn't then ask them to think about all those problems 10 years later; it should be baked into the first layer of it. When the task force is figuring out what the NAIRR is, how we even think about access to resources, that's where we should be thinking about bias. That's where we should be thinking. So in the statute that created the task force, there's a specific provision that says: you need to think about privacy, civil rights, civil liberties; you need to give us an assessment of that. How should we design that into the initial layers of this program? So that's kind of how we think about that part of it. The second thing I think you're getting at is the question of who's actually going to deploy it. If this is a research cloud, there will be entities that can actualize that, right, and make that available. There are cloud players out there. There are also many examples in government where government itself built the infrastructure. Our general view is that part of the thesis of this thing is that there is a concentration in the AI research space; this effort to lower the concentration shouldn't worsen the concentration. When you look at the efforts to build government infrastructure, in our view, that's a better long-term solution. Now, can you get that off the ground tomorrow? No. And I think you've got to grapple with those equities, of how you think about those two sides of the coin, or the task force does. And I'll be curious to see where they come out. Do I have my own biases, and do I think that there are trade-offs? Yeah. So we're thinking about that as well.
And I'm going to push back here, which is: the council that was put together to shape the NAIRR is primarily large businesses and academic institutions. I think there's only one small business represented there. So it's still important to continue to think about all the different players that will eventually be involved in the actual rollout of the NAIRR, outside of just kind of theory and letter writing.
Right. I think that's fair. I think that's totally fair. And I think that's where the pressure that comes from people calling out what happens at different steps, that's a good thing. We should be voicing: hey, is this right? Is this how this should look? And we should be gut-checking it at every level. I think that's right.
Yeah. And I'll give the NAIRR task force a little bit of a moderating comment on that, though I do understand and agree with the concern. I think, watching the task force meetings, they have been eminently serious, at least. And there was a letter earlier on in the administration with folks calling for a little bit more attention at OSTP to representation and to civil rights issues. And I think if you've watched the NAIRR task force meetings, there has been a very good-faith effort made on that front. And I will say, to the original question of how we're going to make sure this ends up going the right way, there are two things about it. First, there have been, again, pretty good, serious conversations about setting up, like, here are some governance bodies, here's who we need to make sure we have in the room. Second thing is, I mean, that's my job, right? That's your job. That's all of our jobs. That's Assad's job. And this is an incredible opportunity, and it would be a terrible, terrible shame if it got tanked because we couldn't really think about, okay, how can we agree that it should work the way that it should work? You know, and the second piece is, there's a lot of kind of public-private opportunity to be had here, between the entities that Tom and Devaki here are representing, in addition to the folks that build the hardware and do the training and all those things for local academic systems. Because what the NAIRR is ultimately going to be, you know, as drawn out in a really excellently complicated graph in the most recent task force meeting slides, which Chandler in particular should check out, I know you want to look at it, you're gonna love it, is connective tissue. It's going to be, again, ideally, as stated as intent by this task force, connective tissue between a bunch of different existing federal and otherwise-allocated resources, much like XSEDE is, but at a much larger scale with an easier access point. And so I think that opens you into, again: you're going to address the geographical component, you're going to address some of the socioeconomic components, you're going to have specific, intentional allocation, because as a public-private resource, it should be used for a public good. And we know for a fact that you don't have the time and you don't have the GPU hours to sit and, like, game out with somebody everything that could possibly happen. And that's part of the beauty of having a marketplace and opening things up. At the same time, it's all the more need for there to be a step built into everybody's marketplaces, right, that runs it through a check: let's make sure this isn't going to accidentally do anything terrible, especially as we get to the larger, more complex, more abstracted types of transformer networks that are going to open up that surface even further. So it's an opportunity, as foundational movement is happening, to set it on as straight a course as we can, without making too many, again, even personally biased decisions. Me trying to think about what I think is good for everybody else sitting here isn't going to work. In the early days of Clubhouse, and I'll move on because I can see you're about to hang me, I was on a chat with this guy. And he was going on, like I said, in many ways he was going on, saying that he didn't think it was necessarily important to have representation, because he can, like, put himself in people's shoes and think about it.
And I lost it a little bit. I wasn't proud of it. I used a lot of language I won't say here, but I was pretty upset. But I'm like: no, you can't. Anyways, that was my bad.
I want a recording of that.
I have to jump in here very quickly, because this is an important point that you've made. One of the things that we need to remember right now: we're talking about, in this world, 2% of people being able to do this stuff, period. And we're talking about a fraction of that being willing to focus on this question. Again, it's not just the workforce shortage issue, it's not just the community; it's a survival issue for this country, to be able to compete successfully with the rest of the world. We're not just talking about China, we're not just talking about India, we're not just talking. This is a competition question that we need to ask ourselves honestly, and make it agnostic both politically and racially. Why? Because it's the only way we succeed in this world and continue to be competitive. Think about this: everyone likes to use African Americans as the poster child of bad behavior when it comes to drugs, for example. That's the reality; we know that on any TV show you look at, those are the first people you see there, though maybe they're trying to do it a little bit different now. But let's think about this, and I've known a few folks who've come out of this space and then gone on to do other things in their lives and do better: these are really, really smart people playing with numbers and situations every day. Why are we not focused on how to pull them out of their settings and put them in a different direction, where their lifelong earning potential can not just be the same as, but be above, what it is they might make in those professions? Why is that not being championed? And it's because of this sort of disposition that, hey, I can sit myself in the shoes of this individual who happens to be in front of certain kinds of social challenges. Now look, I'm the wrong person to talk about this. My life is fairly damn privileged, I'll be honest; I have not had the same challenges that some folks have had. But I've had access to talk to individuals who've been in these situations. And I am saying we are somehow missing the point if we don't get right back to what you've been talking about. How do we encourage folks to get into this workforce? How do we expose them? If they don't know, they don't know. And if they don't know, they don't do. And guess what? If they don't do, we all lose. That's where we are in terms of competition in this world.
I just want to take this question sort of straight down the panel. I want to talk a little bit about vendor responsibility for these high-risk uses. How should we think about what the NAIRR should be doing about vendor responsibility, and the potential, you know, if you talk about facial recognition, the potential misuses not by the actual vendors, but by whoever they sell the tools to? And then, from the actual vendor point of view, what are the things you think through? I mean, you know, you're actually doing computer vision. So let's just kind of go down the panel that way.
I think, at the level of abstraction of, in general, how should you think about vendors and vendors' vendors, if you will, third and fourth parties: I do think that, especially when the government is involved, there's a level of diligence that's required. I think we do a reasonable job at questions like security, but I think we have to do a better job at questions like equity, questions of bias. And I say better because we're not starting from zero, right? The people who are making these decisions are thinking about these questions. But when you look at the recent IRS Clearview, the IRS ID.me situation, right, that's one of those things where we're like: what was the equity thinking ahead of time? And it wasn't none, right? I'm not trying to sit here and bash the IRS. But I think there was a "what could we have done a little bit better?" You can have my watch?
Yeah, I think this is a really interesting question. Part of it, as a vendor, is you think about what levels of kind of security the end customer, i.e. the government, has on your AI, right? So, a lot of computer vision: as a Silicon Valley company, you have to think about, not everybody wants to do AI for the US government, when you think about hiring. So what are the things you're willing to do and not willing to do? And what are the things you can translate back to the company that, for example, the DoD does to ensure that your AI isn't just going to potentially run amok? You know, part of it is just looking at a lot of the DoD's kind of core tenets, which is: they won't just automatically use an AI without having humans in the loop making the final decision, right? So, kind of making sure that the AI that you do isn't something that's just going to be immediately used for something without any kind of human interaction. In parallel, as a small startup, there are things that we are willing to do and things we're not willing to do. And that's something we put into some AI guidelines and ethics that we as a company came together to write, and that's something we also have to continue to grapple with, as technology changes, as use cases change, as world affairs change as well. So for us, it's a very complex problem that we actually have to grapple with every day.
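As a rough illustration of the human-in-the-loop tenet mentioned here, the sketch below gates every model output on a person; the threshold, the Detection type and the human_approves callback are all hypothetical, not the DoD's or CrowdAI's actual interface.

```python
from dataclasses import dataclass
from typing import Callable

REVIEW_THRESHOLD = 0.9  # hypothetical cutoff; tuned per deployment in practice

@dataclass
class Detection:
    label: str
    confidence: float

def decide(detection: Detection, human_approves: Callable[[Detection], bool]) -> str:
    """Route every consequential or low-confidence output through a person."""
    if detection.confidence >= REVIEW_THRESHOLD:
        # Even a high-confidence hit is only *flagged*; nothing fires automatically.
        return f"flagged '{detection.label}' for operator confirmation"
    # Low confidence: block until a human reviews it.
    return "kept by reviewer" if human_approves(detection) else "discarded by reviewer"

# Example reviewer callbacks; a real system would open a review task instead.
print(decide(Detection("vehicle", 0.97), human_approves=lambda d: True))
print(decide(Detection("vehicle", 0.42), human_approves=lambda d: False))
```

The design point is that the reviewer callback is a required argument: a code path that acts with no human simply doesn't exist.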
You know, our company is also a small business, like yours. As a founder, I, with my other founders, sat around and had lots and lots and lots of debates about how we're going to train our solution, how we're going to train this platform. And in terms of the AI, via visual and audio, but especially the visual solution, we realized we did not want to hold on to people's data. But this was an early-on decision, don't quote me on it, that me and others made: how do we want to train our solution? The notion, for example, and I think it's coming into popularity now, the idea of being able to develop, which we did, sort of a zero-knowledge proof, or zero-knowledge protocol, for taking in data and not holding the data, because, you know, even the visual, it's still data. How you use that data became really important to us. And it was a financial decision; it cost us money, it cost us time, but it was ethically important to us in terms of what we were going to do. I may not be saying all the things that my CTO and co-founder and partner might like me to say, in terms of the right way to say this. But the bottom line is, up front, it was part of our thinking. Now the question becomes, when dealing with vendors and others that we would like to use or involve in our work: do they share our philosophical bent? I think that if, for example, the question about ID.me and others hadn't become a big issue lately, I don't think people would have considered what we've been doing in terms of this approach to how you use the data, or whether you can hold on to the data; people wouldn't have been thinking about it. Now they are, but they weren't thinking about it before. And that tells you something: that somehow we're disconnected from the concerns that folks have about privacy, or the direction, morally or ethically, in this country. I'm not saying that this was a moral question, there's a difference; there's a disconnect. And we as creators, inventors, businesspeople, we need to take the lead. This is the generation of change, this generation of folks right here in front of you, the folks who are in this age bracket here. We know right and wrong, and we know the outcomes of bad decision-making. We can make that change, and we can stand on our principles about what's right. You've done it. I've done it. We're all thinking about it.
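The protocol itself isn't spelled out here, and an actual zero-knowledge proof is well beyond a sketch, but the simpler retention principle described, take the data in, keep only something irreversible, can be illustrated as below. The function names and parameters are hypothetical, and real biometric matching is fuzzy, so production systems use secure templates rather than exact hashes; this only shows the "never persist the raw data" pattern.

```python
import hashlib
import hmac
import os

def enroll(raw_sample: bytes) -> tuple[bytes, bytes]:
    """Persist only (salt, digest); the raw capture can then be discarded."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", raw_sample, salt, 100_000)
    return salt, digest

def verify(raw_sample: bytes, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from a fresh capture and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", raw_sample, salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll(b"example-capture")        # raw bytes never stored
print(verify(b"example-capture", salt, digest))  # True
print(verify(b"someone-else", salt, digest))     # False
```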
I'm going to open it up to audience questions after Austin talks, so if you have one, raise your hand. Austin, go.
All right, I've, like, completely lost my train of thought. No, I would say the first thing is what folks have talked about here, which is: think about it seriously, right? And from my experience, the vast majority of folks that I have talked to are thinking about it seriously. And if there is any impediment to them doing more work, it is, ironically, given the nature of this conversation, resource-based; that is, often headcount-based and time-based, and you have to struggle to make these things happen, right? But at least get to the point that, hey, if there were more resources, you'd be ready to dive in. I think the second thing is, if at all possible, to participate in things like the NIST risk management framework, the development of that. We've talked about a number of different kinds of federal and other collaborative processes to develop some best practices or give some advice, and the willingness of everyone here, the desire to participate. And the same thing, I think, with making relationships between yourselves and community colleges and institutes and institutions that are not, again, currently well represented, that are not currently given the opportunity to engage and apply this stuff. And then finally, just as the thing that we can do to support this in general: an AI assurance ecosystem, doing everything that we can to support an ecosystem that can help in a neutral, third-party way, like maybe being involved in some of these test beds that can work hand in hand with companies that are doing some of the more, you know, perhaps harm-exposed applications.
Alright, I'll just, I'll be the audience with the question. Talk about the hype around AI. I did this in my intro: I talked about the Terminator, and I also talked about paradise. How important is it to avoid the hype, particularly as you try to bring people into either working in the space or, you know, using AI in a way to actually make their lives better?
We primarily focus on AI that is low-hanging fruit, right? How can you automate rote tasks that happen a lot? Either on the manufacturing side, we work with a very large beer manufacturer and some of the largest retail manufacturers, or alongside the DoD, right? The sets of use cases are vastly, vastly different. But ultimately, it's thinking about: there are multiple ways one can be engaged with AI. You could take the very, you know, 10,000-feet approach and build something extraordinarily, deeply complicated from a research perspective, like OpenAI, which definitely has its place in this world, and fundamentally will change the way we do work much more drastically than low-hanging fruit. But I do think, over the next 10, 20 years, if you're going to get to that stage, you can't leave, again, the workforce behind. So how do you get them to be involved in the low-hanging fruit? How do you get them involved where automation can improve their day-to-day life? And then, eventually, as AI continues to grow, thanks in part to funding from the NAIRR to make sure that the US remains on top on the research side, people won't be left behind. So that's how we think about it. I think there's obviously a lot of hype, and rightly so; there's incredible research coming out of both academia and the private sector. And that research will continue to help the US stay on top, being, you know, research and science and technology leaders. But there are a lot of non-sexy things one can do with AI right now. And that's something where I think there's a lot of room for growth, both within the US government as well as in commercial. You have to think, when you think about Fortune 500 companies, the vast majority of them don't have access to the deeply technical talent of the FAANG companies out there, right? So you still can't leave industry behind as well, in the US, or it'll go overseas.
So I think there are a couple of levels, right? One is what you're saying: how do you actually apply this to things that are real-world issues, with real-world humans that should be involved? And I think that's the right place to be thinking, and that's where a lot of the discussion's been, especially when it's really applications of, to your point earlier, a couple of strands of machine learning and deep learning. I also try to think about it at a different level: how is AI itself evolving? Because we are right now caught... I started with: the NAIRR exists because you need a lot of data, a lot of compute, and a lot of expertise. What if you don't? What are the next steps? What is unsupervised learning? What are the next branches of AI that we're not developing yet? And so I think what we want to try to avoid, wherever the NAIRR goes, is path dependency. Right? We should invest in the applications of this technology, but we should also be thinking about what's next.
Yeah. You know, scary as it is, we really are thinking about what's next. That's the interesting thing. You know, these Terminators are a joke, but the reality is, what we're thinking about, what we're playing with right now... We need some responsible actors in the mix, because creators create. And that's how I think of the people on my team: my best friend, I think of him as an artist. And frankly, he is an artist, you know, he plays like a million instruments; we sit around and toss ideas back and forth. And, you know, my wife, I say, you know, she's an artist. These guys are artists. And so they just like to create. Now, getting them to focus in on something that's useful is one thing, but when you push them on a tangent, they can talk about visual systems out of their minds, or just: how do we do more? We have to be aware of that. And frankly, it is scary. What we'll see in the next 10 years is going to be frightening. And the fact that there are more honors students in China working on these issues than there are students, period, that we've got in this country... I'm not really being political here, I'm just throwing that out there.
Tom can't help it. But we still love him. I would say two things, one helpful, one unhelpful. So the first is: I normally tell people, you know, talking about the pace of AI acceleration, the pace of AI research, it's accelerating at, like, an unimaginable pace. It's completely bonkers. And I'll get into that after the helpful part. The helpful part is: even if AI technology froze today, and we didn't have any new research for just the next 10 years, we have 95% unexplored space in the, quote, boring stuff that Devaki is talking about, where you can go and apply what we already have, not the stuff we talk about all day, every day, but just the random stuff that makes life better for people across the country. So now to the unhelpful part: it is getting completely insane, and it's about to get way more completely insane. And I think that the lack of knowledge of that itself is somewhat dangerous, both to the point of anticipating the future needs of a system such as a national AI resource, and of explaining to policymakers, and any other person, exactly what's going on. Right? I mean, I remember, you know, there were a couple of really capable generative technologies that came out that sparked real concerns, about deepfakes for one thing, which we all probably remember. And then these systems called GPT-2 and GPT-3, which can generate text. And all of a sudden GPT-3 gets sophisticated and large enough that it's a general system, which we always talked about never having, that won't happen, it'll never happen, because we thought of it as, like, a thing with a soul and not a thing that was generally productive and useful. And as soon as that happened, there were no-code platforms everywhere, there was image generation, there was video generation. And since that happened, like a year or two ago, there have been six different, even bigger and crazier versions of this stuff. I'll say, hey, this is cool as hell; this is, like, one of the coolest things that has happened in technology ever. We made electricity that can, like, have some slight maneuverability, and thoughts and feelings about stuff, that is a representation of ourselves. Right? And yet, at the same time, we have to make sure about the next, like, two generations of this thing, when we're gonna have whatever it is that we kind of can't conceptualize yet as a truly independently productive artificially intelligent machine. And I don't mean like I, Robot, you know, walking around, AI robot, AI robot, and not I Am Legend, that's way darker, I'm just fucking around. But those systems are a very abstract representation of the humans involved in collecting, curating and creating the data inside of them. Because, again, unsupervised learning is on the rise, and it's going to continue to be reinforcement learning, where stuff happens with, again, the machine kind of learning in a simulated environment, or a real environment, or wherever, whatever. But we have to make sure those systems are built from the most broadly representative group of people possible, both so we have the best technology and so we don't have this massive internal social collapse.
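For readers who haven't tried the text generators described above, here is a minimal sketch using the openly released GPT-2 model via the Hugging Face transformers library; the library choice, prompt and parameters are our assumptions, not anything named on the panel, and running it requires the transformers package plus a backend such as PyTorch.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2 is the small, openly released ancestor of the larger systems discussed here.
generator = pipeline("text-generation", model="gpt2")

out = generator(
    "Broad access to AI research resources matters because",  # hypothetical prompt
    max_new_tokens=40,        # keep the continuation short
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```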
That seems like a pretty good place to leave it. We appreciate you all coming to this. Stay here for the keynote; it'll be coming to you in the next few minutes. Thank you so much, everyone.