It's a Monkey Podcast: Kevin Garber and Sam Liang of Otter.ai
1:07AM Jul 12, 2018
Welcome back to the It's a Monkey podcast, where we talk about everything related to technology, startups, and entrepreneurship. A few weeks ago, I was searching for an app that integrates with Slack to let me make audio recordings. We post a lot of text updates, we've got a distributed team, so Slack is one of our main tools, and each team member posts an update a couple of times a day. I wanted the option of creating an audio update that I could just pop into the channel, so I could every now and then break things up with an audio or video update. Searching around for apps, I stumbled upon one that was really interesting. It was called, or rather it is called, Otter. It's an audio recording app with an automatic transcription feature tied in: you record your audio and it transcribes on the fly. I thought, okay, that's interesting. I started using it; the UI was really good, and there was a lot of polish to the product. So I looked up where this product was from and who was behind it, tracked down the co-founder and CEO, and dragged him onto the podcast. I'm very happy to say I've got Sam Liang from Silicon Valley on the end of my Skype line. Sam, thanks so much for joining us on the podcast today.
Thank you, Kevin. Thank you for inviting me to your podcast. I love your name, It's a Monkey, because under the Chinese zodiac I'm actually a monkey myself. I've always loved monkeys.
Nice. Yeah, you know, the origin of the name of the podcast is lost in the history of the company; I can't quite remember why we named it that. I think it had something to do with the fact that we didn't want to use the word "tech" in the podcast, because it's just so overused, and we weren't quite sure what the podcast would land up being, so we just came up with a generic type of name that was a little bit different. As far as I can remember, it was quite a long time ago that we started the podcast. But let's talk about Otter. Firstly, the name: you obviously like animal names too, right? You named it Otter.
Oh, yeah. The otter is a very adorable animal. And most people don't know that the otter is actually one of the smartest animals in the world. Otters have very high intelligence; they can learn to use tools. There are videos on YouTube of otters playing basketball after people taught them to play. They have amazing memories as well. There are a lot of studies showing that otters can learn things you teach them, and many years later, when they're tested, they still know them.
That's one of the reasons we picked the name Otter for our product. We released the product just a few months ago; actually, only two months ago we released the general availability (GA) release, on May 9th. Since then, we've gotten a lot of traction, a lot of adoption of this app, and we're getting a lot of user feedback as well.
Just a couple of days ago, we learned that Mashable actually selected Otter as one of the seven best apps of 2018.
Fantastic, congratulations. Now, Sam, why did you choose voice recording and transcription as the problem that you wanted to solve?
It's a long story. Just a quick background, if you're interested: I used to work at Google. I was the lead of the Google Maps location service between 2006 and 2010. If you remember, when Steve Jobs launched the first iPhone in 2007, they actually came to us to ask for help, and we provided Google Maps for the iPhone 1. One important feature was showing the location of the user: that's the well-known blue dot on Google Maps that shows your current location. That's a feature our team created; I was the lead of the location platform at that time. I left Google in 2010 to start my first startup in Palo Alto, just three miles from where we are right now.
And that company did mobile user context. Basically, it tracked location and a lot of sensors on your smartphone to understand user behavior. That company was later successfully acquired by Alibaba, and I worked with them for a couple of years. Then I decided to leave and move on, to try to do something else.
But then I realized that I have this big pain point: I keep forgetting things. I have a lot of meetings every day as a founder, because I need to talk to VCs, talk to partners, talk to potential customers. I have all these meetings every day, but it's just so difficult to remember everything. Then I thought about it: it's probably not just me having this problem. There are billions of people in the world, and everybody is talking out loud every day; voice is one of the most important mediums people use to communicate. It's interesting that you can search your email from 10 years ago within seconds to find anything in your email history, but there's no way for me to search what I heard this morning. So that's why we started to work on this product. We said, hey, what if we were able to record everything and make everything searchable? That's the first thing I'd like to get. Then we realized that, once we have that information, an AI assistant can actually do a lot more than just transcribing and search. It can do a lot of analytics; it can understand what the conversation is talking about; it can understand my emotion as well. We realized there are so many more use cases once we provide this enabling platform.
Let's just talk quickly about the transcription side of things, because transcription, voice recognition, has been around for a long time. I mean, I remember first hearing about it, it must have been the 90s, with the Dragon product, right? Now, transcription is probably the type of problem, correct me if I'm wrong, where it's easy to get 60 to 70% right, but that final 30 to 40% is really, really hard, right?
Yes, that's totally true. You mentioned Dragon; it was created by a company called Nuance, which has been the leader in speech recognition for the past 15 to 20 years. However, you don't hear about Nuance very often these days. They did well before, but they didn't move to the AI era fast enough. In the last four to five years, the technology has completely changed in terms of how we do speech recognition. Now everything is based on deep learning; in the past, they used traditional methods based on GMMs (Gaussian mixture models) and other older approaches. That's why they sort of hit a ceiling in terms of accuracy: it's very hard for them to continue improving it. Just a few years ago, a product like Otter was actually not very practical. As you mentioned, you could get to 60%, 70%, even 80% accuracy, but it's extremely hard to get even higher.
So I'm watching a live transcript using the product now to record our chat, which you've screen-shared with me, and I've been watching the product create a transcript on the fly, dynamically. The accuracy is unbelievable. Now, I have to say that I've got a little bit of an accent, a hybrid South African-Australian accent, and you've got a little bit of an accent, and the product just seems to be ticking along in quite a remarkable way. I've seen a few little errors, but they're quite minor. It's quite fascinating. Am I correct in saying it actually corrects words? It pops in a word, and then somehow works out the context and changes it?
Yes, you're right
Behind it is actually some pretty sophisticated AI. We use a deep learning neural network, with multiple layers and thousands of neurons. It's composed of several separate models. One of the major ones is called the acoustic model; it handles transforming sounds into phonemes. It breaks a sound down into phonemes, the vowels and consonants, the phonetics, the very tiny chunks of the sound. Then it uses a language model to figure out which words make the most sense based on these phonemes, which one matches best. It's very challenging, because everyone's pronunciation is a little different. The volume is different, the frequency is different, the pace is different: somebody speaks really fast, somebody speaks a little slower. And, as you mentioned, I have my accent. This is why we need to collect hundreds of thousands of hours of training data and do a lot of deep learning training to build our acoustic model and language model. The language model pays more attention to common phrases, what we call n-grams, to understand which words are usually used together. When I say "How are you," that's a very common three-word phrase. There's a lot of mathematics behind it, to calculate the probability of each word and the probability of each combination in the right context of the sentence. That's why, while we're talking, it is actually constantly correcting itself based on what it hears later. It can correct something earlier based on what I'm saying right now.
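As a toy illustration of the n-gram idea Sam describes, here is a minimal bigram language model in Python. This is purely a sketch of the concept, not Otter's actual implementation; the corpus, function names, and the simple count-based probabilities are all invented for the example:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, curr in zip(tokens, tokens[1:]):
            counts[prev][curr] += 1
    return counts

def most_likely_next(counts, prev):
    """Pick the word w maximizing P(w | prev), estimated from counts."""
    followers = counts.get(prev)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Tiny invented corpus; a real model trains on huge amounts of text.
corpus = [
    "how are you",
    "how are they",
    "how are you today",
    "where are we",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "are"))  # "you": it follows "are" most often
```

A real speech engine combines scores like these with the acoustic model's phoneme hypotheses, which is what lets it revise an earlier word once later context arrives.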
And I think it's a fantastic example of artificial intelligence and machine learning in the sense that it's getting better the whole time, right? And this is the whole big deal of artificial intelligence that I think my friends get a little bit bored of me talking about, explaining to them why the world is going to be so different in 10 years' time. Technology is no longer static, iterated on only by humans pushing it and evolving and developing it; it's totally going to shift to systems iterating themselves, based on learning from experience, right?
Yeah, absolutely, you're totally right. The reason AI is so powerful is that it is actually constantly learning and improving itself. The more you use it, the more data it hears or acquires, and it takes advantage of all that new data. Based also on the feedback people give it, it is constantly learning and cumulatively improving its accuracy and its understanding of things. It's similar to the way a human being learns, right? When a baby starts to learn words, he or she just listens, and based on the response, based on the experience, slowly and incrementally learns new words, new sentences, and new meanings. For us, in the training phase we already injected, as I mentioned, hundreds of thousands of hours of data into the model, so it had a big foundation to start with. But then when thousands, or even millions, of users use it, they keep injecting new data into the system, and the model keeps crunching on that new data, learning from new accents and new combinations of words.

Which combinations have a higher probability is also context based. This is why one general model may not work for everybody: it depends on the domain of your expertise. For example, doctors use a lot of words that, in the US, normal people don't use every day, and computer scientists have their own high-probability words that a salesperson doesn't use. This is also why it's so complicated, and why it's so interesting for us to work on: there are just so many different problems we have to solve. I'm an engineer myself; I got my PhD from Stanford in 2003. I love hard problems, and these kinds of problems are very attractive. Because of AI, it's actually practical now to attack this problem. Ten years ago, this was just too hard to solve, but these days we're seeing the benefit from AI.
Sam, I've been watching as you're talking again, and I picked up a few little errors. Are you happy to say what percentage of accuracy you get? In the past, we've had humans transcribing a podcast, and depending on the service, it can be, I don't know, 50 bucks to 300 bucks depending on the length. It's pretty pricey, but obviously the accuracy is pretty high: you have a human listening to it and typing away. What would you say is currently the accuracy of a transcription of a discussion like yours and mine?
Yeah, between you and me right now, I would estimate around 90% accuracy, or even higher. I think most of the words are correct; I would say maybe even 93%.
Based on what I've been looking at, because I've been watching it the whole time, and this is totally non-scientific, I would probably say about 95 to 97%, which is incredibly high for such a complex task. One thing I would like is for it to identify the different speakers. It would be pretty cool if you could put in the names and it would just pop one in when it's my voice and pop one in when it's your voice, and actually label the transcription. That would be nice.
Yes, we actually have that technology. Right now we do that after the meeting; we're not able to do it in real time yet, although we're working on that. Currently, after the meeting is finished, a separate algorithm runs that separates the speakers. I'm showing you a past meeting we had with TechCrunch. This one is automatically labeled with the speaker names, you see: Sarah Perez is the reporter who interviewed us, and this is myself. So it's actually separated and labeled. Once you teach the system your voice with a couple of minutes of your speech, our system generates a voice profile, or voiceprint, for each speaker, then uses that voiceprint to match the rest of the conversation. Then it can identify, for each sentence, who the speaker is.
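The voiceprint matching described here can be sketched as a nearest-neighbor lookup over speaker embeddings. This is a hypothetical illustration, not Otter's actual system: real voiceprints are high-dimensional vectors produced by a neural network, whereas the three-dimensional vectors and names below are made up for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_speaker(segment_embedding, voiceprints):
    """Label a segment with the enrolled speaker whose voiceprint is closest."""
    return max(voiceprints,
               key=lambda name: cosine_similarity(segment_embedding,
                                                  voiceprints[name]))

# Hypothetical enrolled voiceprints, one per known speaker.
voiceprints = {
    "Sam": [0.9, 0.1, 0.2],
    "Kevin": [0.1, 0.8, 0.5],
}
segment = [0.85, 0.15, 0.25]  # embedding of one sentence of audio
print(identify_speaker(segment, voiceprints))  # closest voiceprint: "Sam"
```

Diarization after the meeting would run this labeling over every segment; doing it in real time is harder because segments arrive incrementally and may overlap.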
It's fantastic. Yeah, it does a little bit of processing and just tags them. We forget that doing these things on the fly is quite remarkable: an on-the-fly transcription service with 97% accuracy. For someone like me, who remembers the world of the first generation of automatic transcription services, it is quite remarkable. Sam, talk to us. A lot of people are fascinated by AI, machine learning, robotics, 3D printing. I think there's going to be a coming together of all these technologies, I don't know, sometime between one and 10 years from now, and it's going to be an absolutely seismic shift for society, like we've never seen before. You're obviously much deeper in this industry than I am, and you're actually creating tools upon all these frameworks and technologies. What are your thoughts about where society is heading, and what are probably some of the areas that are going to be the first to be disrupted, transformed, and significantly changed by AI?
Oh, you know, AI has obviously been in the news for a while, and there's a tremendously wide range of industries, domains, and products AI is going to disrupt. The self-driving car is a big sector: it's automating driving, right? It's not perfect yet, but I have friends who have worked at Google's Waymo for many, many years, and it's getting better and better. Tesla already has Autopilot. Again, it's not perfect, but I believe it's already better than a human driver: it doesn't fall asleep, it doesn't drink alcohol. For ourselves, right now we're focusing on the speech part, and there's still a lot more to do. You see the accuracy is already good, but understanding the conversation is still challenging: the machine can get the words, but it doesn't really understand what you're talking about. It's still a big question what understanding means, right? When you ask a question, does the machine know what you're asking? When you ask Google a question, Google can search for the answer for you, but it doesn't really understand what you're asking about. That's still an open question.

Medical is another huge domain that AI can help. How do you diagnose cancer, for example? We have seen a lot of studies showing that having an AI algorithm read radiology images can dramatically help doctors. There's a lot of deep learning training that trains AI algorithms to look at an X-ray, an MRI, or other radiology images and diagnose disease. By itself, the accuracy is not as good as a very experienced doctor's yet, but what they found is that when they combine the AI and a real human doctor, the accuracy is actually higher than the doctor's alone.
I once read a study a few years ago, which might yield different results these days, but it was some AI type of diagnosis-and-treatment-recommendation tool for cancer, and they compared the results from this tool to a human doctor's results. The identification was almost exactly the same, but where the machine really pulled ahead was in recommending treatments for rare and unusual cancers and situations. Of course, the AI system has access to, and can analyze en masse, the fringe or experimental treatments happening around the world, whereas the human doctor is obviously much more limited in their scope of knowledge and understanding. So that's really where the machine excelled: matching treatments to unique and specific situations. This research is a little bit old, from a few years ago, so it might have all shifted a little by now, but it was interesting nonetheless.
Yeah, that's totally true. For us, actually, one of the use cases for our application will be related to medical as well. One of our motivations is: what if I use a device that just listens to my voice all the time, constantly? Of course, we're very keen on security, confidentiality, and privacy; it doesn't mean you're sharing with people on social media or anything. But just by listening to your voice all the time, the AI will be able to detect a lot of things. Simple things, like how many words you say every day, can actually be a good indicator of a person's emotion. If you ask psychologists or psychiatrists, there's a good correlation between people's emotions and the number of words they say every day, and the pace at which they say them: are they happy, are they depressed, are they angry, and what percentage of your time are you angry? This can actually be detected by listening.
You brought up that word, which is definitely the buzzword for this year: privacy. You guys are obviously all cloud based; when you record something and it's transcribed, it's sitting in the cloud. If people are to use your product for board meetings, medical purposes, legal purposes, what type of privacy can you offer your customers in terms of the security and the integrity of their transcriptions?
Yeah, we take it very seriously. All this data is obviously owned by the user; we don't own it. The user has full control of the data: if they want to move it away or erase it, we will erase it completely. It is encrypted, and the user has the control. The general model is similar to other cloud-based services like Evernote or Slack or Zoom: the data is stored in their cloud, but these businesses are not trying to make money by selling your data, right? They make money by selling their services. So for their own sake, they have to make sure their reputation is not tarnished by leaking users' data. We are similarly incentivized: for our company to survive, we have to protect privacy, because if we leaked the data, nobody would trust us anymore.
And it's a subscription service, right?

Yeah, we make money by charging subscriptions.
And are you guys venture funded, Sam?
Oh yeah, we are venture funded. There are quite a few high-profile VCs behind us. One of them is Tim Draper.
Tim Draper, isn't he a big Bitcoin evangelist as well?
He is. I would say he's a visionary, very adventurous; he takes big risks. He was actually the first investor in Tesla, many years ago, betting on Elon Musk. He also invested in my first startup after I left Google, so I've got a pretty good relationship with him.
This is actually Tim Draper here, wearing the Otter t-shirt.
Nice. Yeah, he was in Sydney about a year ago, by the way, and I went to a talk of his. I think he's got some relationships with some investment companies here. Super smart guy. If you're listening to the show and you want to hear some very smart tech VC sorts, look up Tim Draper. And there are great photos: yes, I'm looking at Tim Draper looking at some otters.
Yeah, these are otters, because we had our launch party at the Aquarium of the Bay in San Francisco, where they have live otters swimming around. So we had our Otter launch party there two months ago, and Tim Draper came, gave a speech, and told his story of using Otter. He's actually a big fan of the Otter app; he uses it for all his founder pitches, because he's such a high-profile VC that everybody wants to pitch him and get his money. So he uses Otter to record all those pitches. And for his board meetings, he uses Otter too. So he's a big fan, and he really believes in this.
Fantastic. I mean, it's a huge vote of confidence to have such a high-profile investor in an app like yours. What's the next stage for you guys? Is it just iterating, iterating, iterating and building out the product?
Yeah, there are a couple of different things we're doing. We need to continue improving the accuracy; there are still a lot of challenges when there's background noise or when there are multiple people speaking. We want to further increase the accuracy and better match people's voices. When you have 10 people talking, it's actually not easy even for a human ear: if you don't see their faces, it's not easy to tell who is talking unless you're really familiar with their voices.
So all those things, and also better understanding of the conversation. We're building a new feature to summarize the conversation.

You already have some interesting features.

Yeah, if you look at this one, when we were talking to the TechCrunch reporter, on the top you see a list of keywords. These are automatically generated by our algorithm. These are the summarization keywords: without seeing the rest of the transcript, you can already guess what we were talking about. This is already available in Otter today. We're building more sophisticated summarization to recognize action items, for example.
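A crude stand-in for the keyword list shown in the demo is frequency ranking after stopword removal. This toy is an assumption on my part about the simplest possible approach; Otter's actual algorithm is presumably far more sophisticated (TF-IDF weighting or learned keyphrase extraction, for instance):

```python
from collections import Counter

# A tiny, hand-picked stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of",
             "in", "we", "you", "it", "that", "this", "for", "on", "so"}

def extract_keywords(transcript, top_n=3):
    """Return the top_n most frequent non-stopword terms."""
    words = [w.strip(".,?!").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

transcript = ("Otter records the meeting and the transcript of the meeting "
              "is searchable, so the meeting notes are easy to share.")
print(extract_keywords(transcript))  # "meeting" ranks first
```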
My request, which we touched on in our pre-interview chat: I would like a voice recorder that's optimized for Slack. We use Slack a lot, and I think Slack's great, but it is predominantly text and images and links. It would be fantastic if there were a voice recorder that integrated incredibly tightly with Slack. Now, I know there's the risk that Slack might be building out their own sort of feature, but I still think there's room for a third-party product that does that incredibly well, with lots of options, lots of integration. So that's my personal request: to integrate very well, very thoughtfully, very tightly with Slack.
Yeah, I totally agree with you. That's one thing we want to work on as well: to integrate better with Slack, so that the playback experience will be much better integrated inside Slack. You won't have to leave the app to listen and to see the transcript.
And also, I think, to be able to record and make it accessible very quickly, similar to what WhatsApp does: when you record a voice message on WhatsApp, the person gets it almost instantly. It would be good in Slack if you could hit record, or just type forward slash otter, hit a record button, record, boom, it's there ready to be listened to in line. But I'm sure you guys will get there; it sounds like you've got a strong sense of product. What's the size of your team currently, Sam?
We're still very small. We have 20 people here, but I would say everybody is a superstar. It's a very small team, and most people actually don't believe us when we tell them we have only 20 people. Actually, even fewer are full time; some of them are consultants. This product has very sophisticated algorithms and speech engines behind it, and it takes a lot of effort to build the iOS app, the Android app, and the web applications. I think I mentioned Zoom video conferencing: they actually licensed our technology, and Zoom provides this to their customers to generate a meeting transcript automatically.
So we are scaling up the business. Another interesting use case is universities: a few universities actually contacted us after they saw the Otter product. One immediate use is that every university has some students who cannot hear well, either they're completely deaf or they cannot hear very clearly. The university actually has the responsibility to provide note takers for those students.
That's expensive and difficult to organize, a logistical nightmare and a pain for them, but obviously an important problem to solve for those students.
Yes, so they found Otter would be really convenient for them. Those are good use cases. We also see a lot of enterprises that would like to use this, because they have so many meetings to track. And we provide a collaboration system that's similar to Slack channels. Inside Otter, you can actually create groups, very similar to channels. The product team has their Otter group, so all the meetings related to product are shared in that group. Suppose one day you cannot attend a meeting: your colleagues share the meeting recording inside the Otter group, and you can listen to it. Or, rather than spending an hour listening to everything, you can just look at the transcript, which you can read much faster, or you can search the transcript to find the only five minutes you're interested in. So you don't have to listen to the whole 60 minutes.
Sam, tell us briefly, just backtracking a little bit: you worked quite a while at Google. It was a little while back; I think Google's had a few iterations itself since then, and a few restructures. A lot of people are very intrigued by this fascinating company that drives the world and that's a little bit opaque to the outside world. Not as opaque as Apple; Apple is really opaque about what goes on inside. But Google's pretty opaque. What was it like working at Google?
Um, it was a little while ago; when I left Google it was 2010, eight years ago. But I still have a lot of friends there. I go back and get free lunch all the time, because we're only 10 minutes away. That's another reason we picked this location: I can get free lunch, and I can hire good engineers from Google really easily. Well, not very easily, because these people are paid really well inside Google. Google is a great company; it's the best company I've ever worked at. Although, you know, I think AISense will be even better.
It's the engineering-driven mindset; that's different from a lot of old-style companies. I did my PhD at Stanford, and Google's founders came from Stanford: Larry Page and Sergey were two PhD students in the Stanford computer science department, and later Eric Schmidt was the CEO of Google for a long time. I remember once we had an all-hands meeting at Google, when I was still there, and Eric Schmidt said they want to make sure Google continues to have this engineering-driven mindset. He said, the three of us, Larry Page, Sergey, and Eric Schmidt, are all computer scientists, and as long as we are at the top of the management team, this is the culture we want to maintain.

It's interesting; a lot of people have said the engineering culture has been to Google's detriment, in the sense that they failed to find social media success, right? They've struggled so hard there.

That's true, I think. Yeah, every company has its own DNA, its own strengths and its own weaknesses.
Google Plus almost made it, and I've got my own theories about what they could have done to make it a success. They really tried hard. But on the engineering side of things, the most recent CEO, since they split it out into Alphabet, has basically said that they are now an AI company, right?

Yeah, Google claims they're an AI company now. Remember, Marc Andreessen said a while ago that software is eating the world; now we're saying that AI is eating the world.
It's very interesting, actually, another piece of information: my PhD advisor, a professor at Stanford, was actually the first person to write a check to Larry Page and Sergey, back in 1997.

Who is that, Sam?

His name is David Cheriton. I can show you. He wrote a check of a hundred thousand dollars to Larry Page and Sergey to get started.
That, based on current valuations, would be worth a lot of money.
Yeah, that $100,000 became a few billion dollars when Google went IPO. So that's probably the best angel investment in history.
Well, he backed them, you know, and many of those kinds of bets land up being worth nothing, so good for him. That's an interesting bit of history there. And that's what's so interesting about the Bay Area: it's really quite a unique place. Even with distributed teams and, you know, different ecosystems evolving, there's still nothing quite like the Bay Area.
Oh, yeah, this guy actually created several very successful companies himself. That's why, as I learned, Page and Sergey went to him for advice. And he recognized the talent, he recognized the market, and, you know, he had his vision. So that's why he quickly wrote them a check for $100,000.
So it's not pure luck; there's a lot of reasoning behind it. And he's actually an investor in our company, an investor in Otter.
He's a strong supporter of ours.
It sounds like you've got a really high-profile list of investors there. And it's obviously not just profile; they have that profile because they're incredibly smart, successful people, right?
Yeah, absolutely. I mean, these people are extremely smart; they can predict the future. That's why they are so good, you know: they know what will work and what won't work.
Sam, quick final question; I could see on your screen that you're running out of battery, so we're running out of time here. And just to preface this, to say to our listeners: we don't give financial advice on the show, so this is just general discussion. Are there any listed companies that are, you know, quite well exposed to the AI revolution? Obviously there are the Googles and the Facebooks and, you know, the Microsofts, but is there any other company doing interesting things in AI that would be worth listeners doing some due diligence and research on, so that perhaps they can get a little bit of a finger in the pie financially?
I don't quite understand the question. You mean some large companies, or startups?
Yeah, anyone on NASDAQ or the New York Stock Exchange, that's currently listed, that people can buy some shares in, that's doing some interesting AI work. Do you see any interesting companies to follow? Because obviously people can't invest in your company unless they're in at the angel, seed, or Series A stage, but for the plain, simple person listening to this podcast who wants to buy some stocks in companies that are doing AI work.
Yeah, I wouldn't claim to be a financial advisor myself, but obviously I like Google, Amazon, Facebook. I mean, these are actually true AI companies. Because of the AI revolution, a lot of people claim they're doing it now, but not many are actually doing real AI. They just want to jump onto the AI bandwagon and brand themselves with the AI buzzword, right? Yeah, it's a buzzword, but if you look at their product, it's the same old product. But in this field, you know, Tesla is good. I've always been a big fan of Elon Musk. It's still risky, because there are a lot of challenges, but Tesla is great. Amazon is obvious. I mean, this is not news for your listeners; it's common sense now.
Yeah, I'm definitely going to do some research to see if there are any other companies on the fringes. But Sam, it's been a really fantastic chat. I've been chatting with Sam Liang, who's the CEO and co-founder of AISense, which creates Otter, a really fantastic app that's iterating fast. I encourage you to download it and have a play; it's on Google Play for Android and also on iOS. Sam, we wish you all the success with the product. I think it's going to be successful, it's quite obvious, and I look forward to staying in touch. I appreciate your time on the show today.
Thank you, Kevin. So, for the last words: I think we can provide the transcript of this podcast for your listeners, so that they can easily find the parts they're interested in. And for your future podcasts, we can also provide some services so that, you know, you can organize all your podcasts in the system.
Thanks, Sam. We'll definitely put your Otter-generated transcript up in the show's notes at itsamonkey.com. So thanks for your time again. Thank you, Kevin.