Demo with Brian Brackeen (Kairos) | Disrupt SF (Day 3)
7:39PM Sep 7, 2018
Hello, my name is Brian Brackeen, CEO of Kairos. We're a facial recognition company based in Miami, Florida, and at Kairos we're focused on facial recognition in the enterprise space. So every time I go to a party, someone says, oh, facial recognition, you must do it for government, you must do it for security. And we basically don't do government or security.
We do facial recognition for enterprise companies like retailers, advertisers, banks, and blockchain companies, verifying that the person who's doing a transfer really is that person. And so we're the biggest company you may not have ever heard of. We've now processed over a billion faces.
And with that, we've learned a lot over our seven-year history. One of our big ideas that we thought was great: we also detect race, ethnicity, age, and gender, and we started to get results back from our AI that were sub-percentages, right. The AI essentially understood human genealogy, and we didn't train it to do that. So it's like, wow, this is cool. You know, we're going to create a web app, right? And we're going to let people explore their own kind of racial backgrounds, their own diversity. And it went completely viral. And this is the point where I'm going to be a little different from most speakers here at TechCrunch.
I'm not going to talk about our newest widget, our newest gadget, the thing you should buy, whiz bang boom. I'm going to talk about a product that we stopped making, a product that we think we should have thought through a little differently. And so with this product,
as you see, these are my actual DNA results, and much to my surprise I'm a little bit Welsh here, but also largely Nigerian, like the last panel. And from just a selfie, the AI got, you know, pretty close to that actual result. So we released an app to let people do this themselves. And what we learned was really two things. One, people are completely obsessed with it; it was run over 10 million times in just a period of weeks. In fact, our CTO joked that if we had put AdWords on it, we could have made a ton of money in that period of time. We didn't,
but two, we learned that some of the conversations people were having around race and around tone, and the things that they were learning, were actually quite negative. There were some positive conversations, and we loved those; that's what we were looking to foster, that knowledge that we're all actually very diverse. But there were also some negative feelings, and we thought, you know, we don't really want to do that anymore. The second thing we learned
is that the confidence the AI had in some of these values was different or stronger depending on where you are in the spectrum. So think of artificial intelligence as being like a golf course. For most AI, in fact for all AI, pale males are right down the fairway. You know, if you meet that criteria, any AI can get you right almost 100% of the time. And then further out on the fringes is everyone else. So if you're a woman, you're a little bit to the lower left; if you're dark-skinned, if you're Asian, you name it,
it may still be able to figure you out, but it's less and less confident about that information. And so this is one of the many things that came to our attention and we thought, wow, this is a real problem that we want to solve.
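Brackeen's "fairway" point, confidence dropping as subjects sit further from the demographic a model was trained on, is something any team can measure. Below is a minimal sketch of a per-group confidence audit; the group labels and scores are hypothetical illustrations, not Kairos data:

```python
from collections import defaultdict

def audit_confidence(records):
    """Group model confidence scores by demographic label and report
    the mean confidence and sample count per group."""
    groups = defaultdict(list)
    for group, confidence in records:
        groups[group].append(confidence)
    return {
        g: {"mean_confidence": sum(c) / len(c), "n": len(c)}
        for g, c in groups.items()
    }

# Hypothetical scores illustrating the disparity described above:
records = [
    ("pale_male", 0.99), ("pale_male", 0.98), ("pale_male", 0.99),
    ("dark_female", 0.81), ("dark_female", 0.74), ("dark_female", 0.69),
]
report = audit_confidence(records)
```

Running an audit like this across a labeled evaluation set is the usual first step before deciding where a model needs more (or better) training data.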
And then the next question was, okay, well, how do we solve that problem? What does it look like? What is the AI seeing? How do we better understand what it's doing? And so here's a bit of a demo.
So we took our AI, with a partner of ours called Untangle AI. You can see the areas here in red at the bottom; let's go full screen here for the people. You see, all three are employees of ours: on the left, someone Hispanic; in the center, obviously Black American, who happens to be Jamaican; and on the right, a European American, also known as white for regular folks. If you see the dots at the bottom, this is what we get when we pour our model into another AI to show us what it's seeing. Every red dot is where it thinks a piece of the face, or the face itself, may be located. And so what you see here on the left, for someone who's Hispanic, you've got some sense of what it is. For Anthony in the middle, he is just not there; he is invisible, right? And for Max here on the end, it's very sure; we get his beard pretty well. And so as we move around, Anthony in the center just simply isn't there; he's almost like a ghost, right? And so that raises these larger kind of societal questions about, does AI work for me, right?
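The talk doesn't say how Untangle AI produces those red-dot maps, but occlusion sensitivity is one common way to get a "where does the model look" picture: hide one patch of the image at a time and record how much the model's score drops. Here is a generic sketch of that idea; `toy_score` and the synthetic image are invented stand-ins for a real face model, not Untangle AI's actual method:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Slide a gray patch over the image and record how much the
    model's score drops at each position; large drops mark regions
    the model relies on."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": confidence is the brightness of the centre region,
# standing in for a face detector.
def toy_score(img):
    return float(img[12:20, 12:20].mean())

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0          # bright "face" in the centre
heat = occlusion_map(img, toy_score)
```

With a real model, low values across the whole heat map for one person (the "ghost" effect described above) would suggest the model never learned strong features for faces like theirs.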
Does AI even work for Brian, the CEO of an AI facial recognition company? Now, if anybody's AI should work, I would love mine to work on me pretty well, right? And so how do we solve this? How we solve it is what we want to share here with you; you're going to be the first people to ever hear about this, and it's something that we're really quite excited about. And so
there's another partner of ours called Neuromation, and together we're actually going to create synthetic training data. There are a lot of issues around privacy, with people going out scraping the web, finding pictures of people, and using those to build facial recognition algorithms. We think that's completely inappropriate, and privacy is a really, really important part of the culture at Kairos. So how do you create kind of world-changing AI and at the same time protect privacy? We think the answer is GANs, generative adversarial networks. What that means in layman's terms: this is AI that trains AI. We take a model and kind of pour skin over it, and as you see here in the center, we can give the person different skin tones, dark to light. That person in the center has never existed; it's not a real person. We can give them glasses, no glasses, a beard, you name it, say 20 different variations of this kind of synthetic person, and then permute that across all the variations of race. And so for every hundred real people that Kairos has on file, we can create 20,000 new people who have never lived on the earth, right? Completely private, because they're none of us. And so, as this work lands in the first quarter of next year, we're going to combine our billion-face data set and the synthetic data set, and we're going to have training data covering more people than are even on the planet. We're going to pour all that data into the facial recognition algorithm and essentially have the most accurate facial recognition algorithm certainly in the world, possibly ever created.
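To make "AI that trains AI" concrete: a GAN pits a generator, which fabricates samples, against a discriminator, which tries to tell real from fake, and each improves by pushing against the other. The sketch below is a deliberately tiny one-dimensional version, where the generator learns to mimic numbers drawn from N(4, 1); it illustrates only the adversarial training loop, and has nothing to do with Kairos's or Neuromation's actual face pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data: samples from N(4, 1).  Generator g(z) = w*z + b must
# learn to mimic them; discriminator D(x) = sigmoid(u*x + v) tries
# to tell real from fake.  Both trained by alternating SGD.
w, b = 1.0, 0.0            # generator parameters
u, v = 0.0, 0.0            # discriminator parameters
lr_d, lr_g, batch = 0.1, 0.05, 64
b_hist = []

def D(x):
    return 1.0 / (1.0 + np.exp(-(u * x + v)))

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    dr, df = D(real), D(fake)
    u += lr_d * np.mean((1 - dr) * real - df * fake)
    v += lr_d * np.mean((1 - dr) - df)

    # Generator: gradient ascent on log D(fake), i.e. make the
    # fakes look real to the current discriminator
    df = D(fake)
    w += lr_g * np.mean((1 - df) * u * z)
    b += lr_g * np.mean((1 - df) * u)
    b_hist.append(b)

# The generator's offset b should settle near the real mean of 4.
b_settled = float(np.mean(b_hist[-500:]))
```

Face-synthesis GANs replace these two linear models with deep convolutional networks, but the "argument" between the two players is exactly this loop.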
And so this is really, really exciting. To drive home a point here: getting rid of bias in AI, getting rid of bias in the work, for those of us doing work in computer vision, isn't just a nice-to-have. You know, I was talking to a senior person at Google on the engineering side just yesterday, and I asked him what they're doing to solve these problems. And he said, you know, we talk about it, we go to meetings. But no one's going to get a promotion for solving this problem. No one's going to get a raise for solving this problem, right? And so we never assign resources to it. Meaning, we know what we should do; we just don't, right?
And what I want to drive home here, for the folks that are watching at home or the folks here in the room: solving the bias problem isn't just good for society and all of us, it's actually great business. It allows our algorithm to be the most accurate, it will allow us to take a leadership position, and it will allow us to continue to grow very quickly. We're a small team, less than 30 people, and we've raised, you know, somewhere in the neighborhood of $20 million. And for the big titans, the Amazons of the world, the Microsofts, who have these problems in their AI and don't solve them with infinite resources, an essentially infinite number of engineers, we think this is something we should all expect of them. But we also know that somebody's got to go first. And so if the facial recognition company with the African American founder doesn't solve this problem first, I mean, who else is going to solve it? And by the way, if you meet another one, please tweet at me, @BrianBrackeen. I would love to meet this other guy who is also a facial recognition CEO and African American. But until that day, I feel like it falls upon me, falls upon us. And we're super excited to be leading the field, and we think it's going to lead to not just a better world, but also better business. So thank you.
That looks really interesting. I love the stuff about the GANs, and I think that your perspective is really interesting. Do you see more companies adopting this kind of technology to solve this problem?
We see a lot of user growth, so people want the problem of facial recognition solved. You know, we have over 70,000 accounts, and we're processing millions and millions of faces today. But I don't see some of the big guys solving the problem right now. You know, I think Amazon's been in a lot of trouble over this issue. Microsoft's been in trouble over this issue. So hopefully this will change things.
And it looks like this could dodge some data privacy issues, obviously, like you said, these are people who've never existed on the earth, which is cool to think about. But you don't have to worry about those people's privacy because they never existed.
Yeah, at least until we get to, like, a Terminator world where that is a real person or something. Exactly. We believe that privacy is important; it's seminal, right? And if we can protect that, that's good business.
I have a feeling I may know your answer to this question, given that you run a facial recognition company, but is facial recognition technology inherently evil?
Good question. I don't think it's inherently evil. As you would have guessed, I will say we're really, really strong about not working with government, because of the implications of facial recognition. When we get something wrong, you might be asked a second time to verify yourself for an Ethereum transfer, right? When the government's wrong, it could be your life or liberty at stake. And we don't feel like that's appropriate.
So you think that it's more ethical to just work with the private sector and not take these government contracts? Obviously, there have been so many conversations lately about employees at major tech companies coming out and voicing, you know, opposition to these major defense contracts in particular. How do you think that fits in with your company's mission?
Yes. So if you think about it, again, back to the use cases: the convention center here was one of our customers, and they wanted to have an easy pathway in the front. We all got searched when we came in today, right? But let's say they've done a background check and they want you to just come on in; you provide them with a picture, because they don't have a picture of all of us. And so there's a control there, right, a requirement that we give our information to get a service. With government, they have all of our passport photos, all of our driver's licenses. They could put a camera on the main street of any city and, you know, see everyone who's driving by. In fact, we recently turned down the government for that very use case. In the last couple of months they came to us, after we had said in TechCrunch that we're not working with the government. So they're not reading TechCrunch enough, clearly. And this is really, truly terrible, right? What they wanted to do was for Homeland Security, and they wanted to do facial recognition on moving cars. Right? Yikes. Yeah. And if you imagine, it's not the NSA or the CIA, or outside of the US; this is Homeland Security, so this is on Americans, right? Of course we turned them down. But these are the reasons we're also really big on legislation to control some of these government use cases.
Do you know whose door they ended up knocking on for that one?
I don't. We'll shame them later, though, if we find out.
Totally. I'd like to hear a little bit more, I guess, about what you were saying about prioritizing facial recognition data for non-pale-males. Obviously, you don't run a giant company; you have a fairly small team, and you all are doing this in an efficient way, which makes it truly look like an issue of prioritization.
Yeah, yeah. We find that for our customers, especially the biggest ones, they are really worried about bias. And so if they roll out facial recognition and then the world's like, oh, it didn't work for me, it didn't work for this person, like Anthony in the image, right, that's bad for their brand, bad for the company. And so we find people come to us because of our work in this space.
What's next for you all? Not that this isn't enough; this is cool too.
Right. It's for everyone globally. You know, one thing I'll say is we're going to share what we learn, right? So we're going to be blogging about it, tweeting about it, talking about it, sharing it on the Kairos blog, so people can learn what we did and recreate it. We don't want to solve this just for ourselves; we want to solve it for the entire world. We're going to keep on growing, keep on, you know, hopefully just growing our revenue. We will do one more round, but we want to have ethical AI all across the globe.
I'd love to talk a little bit more about GANs. I know we had a phone call before this and I was like, please explain that a little to me, but it's essentially, like, two AIs having an argument, which is really interesting. Do you think that this is the first application of GANs for this kind of thing?
It's like a constructive argument, exactly. I mean, even the first example, with the Untangle folks, is also AI-based. You can get so much further when you use AI on AI, right? But in both cases, do you have enough data to do so? Some of us in the world are starting to get to that point. You know, again, we're across hundreds of millions of faces, and then you can get really exponential results.
Definitely. Let's see, I guess I'd be curious to know what kind of facial recognition technology right now you're the most worried about, like, with misuse. Obviously your company is trying to do it the ethical way.
All the Chinese stuff. You see China making sure that certain people, ethnic minorities, are staying in certain parts of the country, not moving. You see them assigning a score to people. For us, I think that would be offensive to the kind of Western version of AI.
Alright, well looks like that's all the time we have. But thank you very much for the demo. Thank you.