ACM Keynote: Using AI and mobile phones to overcome the pandemic

4:11PM Jul 9, 2020

Speakers:

Fred Werner

Keywords:

device

sensors

cough

developed

starting

healthcare

phone

acm

symptomatic

ai

bilirubin

blood

happening

data

pandemic

techniques

clinical

technology

resonances

microphone

Good morning, good afternoon, good evening, and welcome to the AI for Good Global Summit, always on, all year long. We hope that you, your family, your friends, and your colleagues are all keeping healthy and safe. My name is Fred Werner from the ITU, the International Telecommunication Union, and it's a privilege for me to be introducing today's webinar. Now, the ITU is the United Nations specialized agency for information and communication technologies. We're also the organizers of the AI for Good Global Summit, hand in hand with the XPRIZE Foundation, in partnership with 36 UN sister agencies and ACM, and co-convened with Switzerland. The goal of the AI for Good Summit is to identify practical applications of AI to advance the sustainable development goals for global impact. Like much of the world, the summit has gone digital, and we're moving forward with weekly programming, allowing us to reach more people than ever before. And we're very pleased to introduce the first keynote of the webinar series, presented by Dr. Shwetak Patel. He's the Director of Health Technologies at Google, he's a professor at the University of Washington, and he's a recipient of the very prestigious ACM Prize in Computing. I'd also like to take this opportunity to thank Vicki Hanson, the CEO of ACM, for her tireless and long-standing support of the AI for Good Summit, both as a partner and a gold sponsor, without which none of this would have been possible today. So on behalf of the AI for Good team, we'd like to say a big thank you. Thank you, Vicki. And I think I can even hear some virtual applause happening as we speak. Before I introduce today's moderator, I'd like to take care of a few housekeeping issues. First of all, your microphone has been disabled, so please use the chat and Q&A functions if you wish to communicate. It's the job of the moderator to identify and pose questions to the keynote speaker, and we're counting on your active participation to have a very interactive session today. And speaking of being interactive, I have a first challenge for you: can you please let us know where you're connecting from? Simply use the chat function and type in what country, or wherever, you're connecting from, and make sure you select "chat to everyone," and let's see who's on board. I'll start: I'm connecting from Geneva, Switzerland. We have Denmark, UK, Washington, India, Kenya, Ankara, Portugal, India, Italy, Algeria, San Francisco, Singapore, Toronto, Berlin. Wow, I can't even keep up with all the cities and countries. That's great; we truly have a global audience with us here today. So without further ado, I'd like to introduce today's moderator, Dr. Yannis Ioannidis. He's a professor at the University of Athens, and he also leads the Athena Research and Innovation Center. So Yannis, without further ado, the floor is all yours. Welcome.

Thank you, Fred. It's a great pleasure and honor to be moderating the first AI for Good keynote. It's a keynote sponsored by the Association for Computing Machinery, ACM, which is a global organization, the oldest and by far the largest organization of computing professionals, with more than 100,000 members, serving researchers, educators, and practitioners, and also helping policymakers with issues of computing. It's a great pleasure for ACM to have Shwetak Patel as the keynote speaker. As was said, he is a professor of computer science and engineering at the University of Washington, and also at Google. Besides the ACM Prize in Computing, Shwetak has received many awards and fellowships; his CV is long and it would take too long to cover, but let me just mention a MacArthur Fellowship, a Sloan Fellowship, a Microsoft Research Faculty Fellowship, a World Economic Forum Young Global Scientist award, and so on. He's also an ACM Fellow. And he has founded three startup companies, all of which exited very successfully. Today's presentation is about how ubiquitous computing and the mobile phone can improve our approach to healthcare, with special emphasis on the COVID-19 pandemic. So without further ado, we are looking forward to hearing from Shwetak for the next 40-45 minutes. The floor is yours.

Great. Thanks, Yannis. Let me bring up my presentation.

All right. Thank you again, Yannis. Thank you, ITU. Thank you, AI for Good team. This is a great honor and pleasure, and thanks to the ACM for nominating me to give this keynote. Good morning, good afternoon, everyone. It's amazing to see where everybody's from; this is just great, to see the entire world here.

I wanted to share some of the work that my lab has been doing for the last few years, in addition to some thoughts around how the mobile phone can actually impact healthcare. So what I want to talk about today is: how do we look at the mobile phone these days, and how does the mobile phone play a role in healthcare? Right now, obviously, we're in the midst of a pretty unprecedented pandemic, and I wanted to talk a little bit about how the work that we've done, and how the community, can start to look at how phones can be used for the current pandemic as well. Most of my talk will be focused on the intersection of computing and healthcare, but also on how AI and machine learning are playing a critical role in enabling some of these applications in healthcare, where you can take some of the sensors on the phone and take them to the next level in terms of the kind of utility they can have in the healthcare space. That's where I spend most of my time. But before I do that, I just wanted to give everyone an overview of my research in general. I am a computer scientist, but even as a computer scientist, I'm a very applied researcher, in the sense that I typically use computer science and machine learning and sensors as tools to solve socially meaningful problems. A lot of my earlier work, when I joined the faculty at the University of Washington, and even when I was a graduate student, was around activity recognition, energy monitoring, and sustainability: a lot of work on how to use machine learning and AI to help people better understand their energy and water usage. I've been doing healthcare research from the start of my PhD, where a lot of the work that I've done in machine learning and sensors has been applied to things like elder care and remote monitoring. Healthcare is just such a complicated space that it takes many years to have impact; you have to persevere through it. So I've been doing a lot of work in healthcare. And Yannis mentioned that one of my areas of focus is also ubiquitous computing. I've looked at different ways of building new sensors that are truly ubiquitous: sensors that can be deployed in an environment where you don't have to worry about retrieving them to replace their batteries, or having to connect them to a power source. How do you build sensors that are low power and wireless, that can give you longevity, but also breadth of deployment? So I've looked at a number of different solutions around low-power wireless sensors. And if you think about computing these days, like what I'm going to be talking about with the mobile phone for healthcare, the phone is one of the most ubiquitous computing platforms out there. With a small device like a phone, interacting with it gets very difficult compared to a traditional keyboard or mouse. So my group and I have also looked at a number of emerging interaction techniques to be able to interact with a device of that size, looking at different input modalities, gesture interaction, and other modalities of interaction: looking at how we expand the vocabulary of interacting with a computing platform.

So my research has been fairly broad, but a lot of the work that we've been doing is very applied, problem-driven research, where we look at particular areas of emphasis and then use computing as a tool to solve them. But today, what I really want to focus on is something that I've been doing for over a decade now, which is looking at how we build computing technologies to address the need for personal health monitoring. What I mean by personal health monitoring is that if you think about the healthcare industry right now, a lot of healthcare technology is really based around general population information. What I mean by that is: you have statistics on the trend of something happening; maybe there are regional statistics around the infection rate of influenza, or maybe coronavirus. But then there's also this personal component to it, where some people might be more at risk, some people have certain risk factors that others might not have, and some people might be exposed to things that others might not be. So the ability to monitor your personal physiological well-being is really important in the healthcare space, but that's very tough. We don't have a mobile hospital or clinic with us everywhere; right now the model is that you go to a clinic or a hospital. But what would it mean if you had the technology with you? My personal hypothesis, and I think an emerging hypothesis, is that given that mobile is becoming a more and more ubiquitous platform across a larger and larger portion of the world, how do we start to pull some of that capability into mobile phones? And so that's really the crux of this talk.

A lot of people ask: why is a computer scientist doing so much healthcare work? I've been honored to be the recipient of the ACM Prize in Computing, but the highest honor that ACM provides is the Turing Award, and it's just interesting to point out that, at a couple of points in the last few years, Alan Turing's most cited paper was actually a biology and health paper. This is something a lot of people don't realize, or often forget. They all remember the computer science impact that he's had, but this paper on morphogenesis is basically about the concept of how identical cells differentiate into different things; physiologically, into stripes and legs and tails. He had a theoretical underpinning of how that happens from a chemistry standpoint, and it turned out to be a very fundamental basis in biology and bioscience, and even in health. So it's just interesting that even for the top prize that bears the name Turing, one of the most cited papers is actually a health and biology paper. That's something to think about: computer science can have an impact anywhere. Computing is a powerful tool that, if harnessed effectively and in the right ways, can have a huge impact, and healthcare is an area that I encourage a lot of you to really consider as a place for computer science to have impact. The thing about healthcare that's really fascinating is that if you look at the paradigm shifts, they've all been enabled by very interesting phenomena in society. Right now we're in an unprecedented pandemic with the coronavirus, and some of the things that you see in healthcare were actually initiated because of major pandemics. When you have a pandemic, you have the entire community galvanizing around what's going on, to try to see what can be done to address those issues. It's unfortunate that it often takes a pandemic to do this, but pandemics have actually trailblazed a lot of innovative work in healthcare. If you look at vaccines, think about polio, or the vaccines that have been developed over the course of time: innovation around vaccine development has more recently been spurred by innovations in computing, and with the coronavirus, if you look at how the vaccine is being developed, these studies are being done in very unprecedented ways, with computing playing a role. Treatment: think about the HIV pandemic, where we went from a pandemic to now having treatments where people can benefit from these very complicated therapies. These treatments were advanced because of the original pandemics that hit, and unfortunately continue to hit, many parts of the world, but at least there are treatments that are starting to be developed there. Surgical robotics and manipulation: surgeries are now safer, you can do them in outpatient settings, in ways that are much less invasive. And medical imaging has been revolutionized because of AI and computer vision, being able to interpret, diagnose, and screen images, work that typically carries human overhead.

AI techniques can help identify things that a human may miss, or assist an individual in interpreting or over-reading an image. So these things have all been pushed by either major technological advances or paradigm shifts that happened because of pandemics or other major conditions arising in society. Another one I'd like to point out is point-of-care diagnostics, which has been a major paradigm shift in healthcare as well. What I mean by point-of-care diagnostics is the notion that when you go to a clinic or hospital, at that moment, you can get a diagnosis. Instead of doing an exam and then maybe having a chest X-ray or a blood draw sent off to a lab, with a result coming back a few days or a week later, the fact that there are machines at the time of the clinical encounter that can actually give you a diagnosis was revolutionary. The fact that you can go into a clinic and get a mammogram, or in this case an ultrasound, and see the wellness of a fetus.

You can get the fetal heart rate and see the wellness of a baby at the moment that somebody is getting care; that just changed how you diagnose disease and what you can do from that point onward. So that is one major area of innovation in healthcare that caused a major paradigm shift. Right now we're actually in the midst of another very unique paradigm shift, which is this whole area of mobile devices, sensors, and wearables. With the intersection of AI, machine learning, and signal processing, we're pushing the boundaries of what current computing platforms can do with low-cost, ubiquitous sensors: pushing the boundaries of what you can do with healthcare outside the four walls of the hospital. That is another major paradigm shift we're in the middle of right now, and the coronavirus has actually accelerated a lot of this work, which I'll talk about in a second. If you look at the example here, this is a pulse oximeter that's connected to the microphone jack of a smartphone, just to do remote monitoring. You have the computational power in the phone already, you potentially already have network capabilities, and you have a user interface. So the only thing really left is the external sensor that you connect to it, and now you have something that may be akin to a clinical device. This whole paradigm shift of using mobile as a way to compute, display, and transmit physiological sensor data changes the game in healthcare, especially how you administer it outside the four walls of the hospital or the clinic. On top of that, you've got this whole plethora of technology being developed on the wearable side: heart rate monitors, pulse oximeters, activity trackers, ECG/EKG, breathing monitors, gait analysis to prevent falls or detect the onset of a potential fall hazard. You're certainly seeing sensor technology starting to be embedded into one's daily life. This is all fairly new right now, but it starts to really change the way that we think about healthcare. So what does this really mean? Why bring healthcare to the mobile space? One of the things that we're already seeing in the pandemic right now is that telemedicine usage is expanding dramatically, which has enabled remote care and monitoring. The fact that you don't necessarily need to go to the clinic or the hospital, that you can do a lot of these care visits virtually and remotely, has really changed the way that you administer care: the timeliness, the convenience, and also the frequency at which you can interact with a professional. The other thing is that mobile devices are with you and they're so personal. If you forget the device at home, you're more than likely to go get it, because a lot of people need it just for their daily activities: to do their work, to keep up with family, to just operate that day. That's a device that people will likely have with them, and you can use this as an opportunity to drive screening. So for the coronavirus, if there are new guidelines that come out, or if there are things you want somebody to do, where you might have a checklist or might want to do a triage, you could push these to the phones, and the phone will be with them.

They may not have other devices, but at least the phone could be a broker, or at least a first source of information. Phones also push the boundary of how we develop new diagnostics: we're not beholden to the list of things you can do in the clinic; we can actually start to do things that are more longitudinal, looking at changes over time.

That lets us diagnose or treat disease, and also improve treatment. Now we have more real-time feedback from the individual, so we're not beholden to the maybe once-a-week, once-every-six-months, or once-a-year cadence at which one might see a medical professional. Here you have more continuous interaction, or access to one's physiological information, so that you can start to make more real-time assessments about changing or updating treatment. And I mentioned this before: continuous monitoring is this notion that you have more physiological data from the individual, to make a better judgment about how treatment is going, or about the onset of disease before you have the disease. One of the things that is top of mind for a lot of health researchers is: how do you get to a prediction of a diagnostic, or predict disease, before you're symptomatic, while you're still asymptomatic? What would it mean if you could diagnose or screen for coronavirus before you're symptomatic? Because a lot of times, before you're symptomatic, you're still a spreader; you can still infect others. Influenza is the same: before you're symptomatic you can still expose other people; your infectivity is fairly high even pre-symptoms. But also, personally, by the time you're symptomatic, for a lot of diseases the treatments are much more complicated. So by looking at longitudinal, continuous data, you have this possibility of pattern matching and saying: hey, there's something different going on in your body over the course of this year compared to normal. Now let's start to get at some of these tests that we can do before you're symptomatic and before conditions start to worsen.
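To make that longitudinal pattern-matching idea concrete, here is a minimal sketch of one simple way it could be framed: compare each day's value of some personal signal against a rolling personal baseline and flag large deviations. The window size, threshold, and function name are illustrative assumptions, not a clinically validated method.

```python
import numpy as np

def baseline_deviation(daily_values, window=60, z_thresh=3.0):
    """Flag days where a personal signal drifts from its own baseline.

    Minimal sketch of the longitudinal pattern-matching idea: compare each
    day's value (e.g., resting heart rate or nightly cough counts) to a
    rolling personal baseline, and flag large deviations as worth a
    follow-up test. Parameters here are illustrative, not validated.
    """
    x = np.asarray(daily_values, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        base = x[i - window:i]                      # personal baseline window
        z = (x[i] - base.mean()) / (base.std() + 1e-9)
        flags[i] = abs(z) > z_thresh                # unusually far from normal
    return flags
```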

The mobile phone has played an important role in the healthcare space in general. A lot of the tools we've seen out there have mainly been data collection apps. One of the canonical ones is food journaling, for weight loss or just general nutrition, where you enter information about what you've eaten and how much you've exercised, and you have a food journal or log. A lot of these things are manually inputted, and there are also different technologies out there that use the phone as a journal. But one of the things that's happening is that a modern smartphone, or even a smartphone from five years ago, or a phone from seven years ago, is very capable. If you think about mobile phone penetration worldwide, we're starting to get to penetration on the order of 3-4 billion phones in the world, even more than that depending on how you lump smartphones together with feature phones. And if you look at what's in a phone: you have a camera, a flash, a capacitive touchscreen; it's got accelerometers, gyros, multiple microphones, multiple speakers, and on some of the newer phones, IR projectors. Often we take these sensors for granted: we use them for telephony, maybe for gaming, often for photography. But the sensors are fairly capable, especially when you layer machine learning, AI, and signal processing on top to interpret what you can get from them. These are things that are already with many people, and how we leverage those sensors for healthcare has really been the crux of the work that I've been doing, and that the community has been doing. There's been a lot of work developed beyond just my lab now that is looking at how to use existing sensors on mobile phones for health monitoring. And what I mean by existing sensors is that, yes, you can attach a device to a phone, but what I mean here is using the sensors that are already on the phone to do health monitoring. This could be the speaker, the microphone, the accelerometers and gyros that are on pretty much every phone, and now other kinds of sensors, like NFC or other radios, that you're starting to see on there. So how do you start to leverage those for physiological monitoring, really pushing the envelope on what you can do with these sensors? And the goal is that you don't have to be perfect; you just have to have enough of an insight to know what to do next. You don't have to make these things perfectly accurate, or as good as a clinical device that you might have in a hospital; you just need enough confidence that this can give you information to triage an individual, a precursor to what you might actually do as a diagnostic, so you know what path an individual should take. So we've been working on a number of different technologies. I can't talk about all of them today, but I'll give you a little bit of a taste of the kinds of things we've done. We've done work in the pulmonary space, using the microphone for cough assessment and lung assessment; blood screening, using the camera and flash to assess how much hemoglobin is in the blood, and bilirubin in newborns and babies; and cardiovascular disease.

So: SpO2 monitoring using the camera and flash, blood pressure monitoring using pulse transit time with the sensors that are already on the phone, technology around sleep and osteoporosis. I welcome anybody to go to our lab website at the University of Washington to take a look at any of these papers. But I'll give you a bit of an overview of a variety of these different technologies, how well they work, the impact they can have on healthcare, and also some of the impact they're having on the current pandemic.

So the first technology I want to talk about is one of the first technologies that we built, for monitoring lung function, and this is very relevant right now if you think about the coronavirus, which is a respiratory ailment that you have to monitor very closely, especially for people that might be at risk, that might have asthma or chronic obstructive pulmonary disease. Monitoring your lungs is very important in that case, and measuring lung function is typically done by a technique called spirometry. If you have cardiovascular or heart disease, you might get an EKG or an echocardiogram; in the lung space, you get a spirometry measurement. Spirometry is basically a measurement of how much air you can push out over a certain period of time, to get an assessment of how well your lungs are doing: are there fluids in the lungs, are the lungs not as efficient, is something else happening? As lung function drops, pulmonologists typically use this as a screener or diagnostic to decide what to do next. On the bottom left is an example of a clinical spirometer. These are fairly big machines with a tube: you put your mouth over the tube, you blow as hard as possible, and the machine records the airflow. That's typically what's used in the clinic. The one in the middle is a digital version of that, a device that's a little smaller and has a computer inside of it. The one on the far left is roughly a 10,000-US-dollar device; the one in the middle is about a US$2,000 device. And on the right, we're starting to see more and more of these home spirometers, which might be $100 or $200, but that's still fairly expensive. The thing is, if you have asthma, you don't really know when you're going to have an asthma attack. You might learn the early signals of what might be happening, but you might not have the spirometer with you, you might not have the device with you. Or, in the case of coronavirus, if you want to monitor post-infection how your lungs are doing, there isn't a technology out there to do that. Or if you're in a remote place, where you might have telemedicine: how do I do a lung function test if I can't go to the clinic? So what we developed was a technology that could use the phone. But before I go into that, the thing to keep in mind is what pulmonologists care about: you really care about volume and flow.

Typically what they look at is the flow-volume curve, where you might have obstruction, where you see a scooping look to the curve, or you might have restriction, where it doesn't go up to the high peak expected for your particular age and weight. Pulmonologists need these curves. A lot of these home spirometers don't even have that: they might have one of the metrics that might be relevant, but the pulmonologist needs to have all the metrics available, so some of the existing devices didn't quite do that. What we wanted to do was develop a phone-based tool that just uses the microphone, with no additional hardware. Think about the barrier: if you had to plug another device in, you might as well have a dedicated device in a lot of those cases. So how do we use the microphone as a sensor to assess how much air one is blowing out, the same way you would with a clinical spirometer, but with a mobile phone? That's what we developed. This is an app where you hold the phone in front of you and blow at the face of the phone. Some of you may or may not be able to see this video, but you blow at the face of the phone, and there's a visualization that shows whether you're blowing hard enough, because this is very technique dependent: you've got to get all the air out. There's a visualization that shows, yep, there's still more air in the lungs, or no more air in the lungs. And then when you're done, it gives you a flow-volume curve. You can read this out to a clinician, you can send it to them, you can text message it to them, but the idea is that you can do a phone-based spirometry maneuver. So how does this work? Traditional spirometers use a flow sensor. One kind of flow sensor is just a turbine: you have a little orifice with a sensor in it, which might be a little turbine; as you blow into it, the thing spins faster and faster, and the faster it spins, the more air is going through, and you just integrate that over time. That's the example on the left. There are also more advanced flow sensors that don't have a turbine; they're called pneumotachs, and they have less obstruction. Or consider the intake flow sensor for the internal combustion engine in a car, which uses a hot-wire sensor: a little wire heats up, and as air moves across it, it cools down, and that tells you how much flow is going through. So there are lots of different ways to do flow sensing. But in our case, all we have is a microphone, and a microphone is technically an uncalibrated pressure sensor. So how do we turn this uncalibrated pressure sensor, which is basically designed to just pick up sound waves, into a flow meter? That's what we started to work on. And it turned out we stumbled across a technique that had been used in the speech recognition community for years. In speech recognition, the vocal tract resonances that come through, if you want to analyze or transcribe my speech, were actually causing noise problems for the speech recognition algorithms themselves, so there is a long history of developing filters that cancel out the resonances. But if you look at the physiology, when you're blowing air out, the obstructions are happening in the pulmonary system.

What happens is that when you have some type of obstruction, the vocal tract resonances actually change. In fact, the human body can be the sensor; you don't need an external sensor. The resonances turned out to be very much related to the pulmonary system itself. So we took a problem that was an issue for speech recognition, turned it on its head, and said: hey, that thing that was causing problems for speech recognition, we can actually use for health sensing, because it's telling you a lot about the physiology. That's how we developed some of these algorithms. We created a vocal tract model; we actually went back to the original literature, to the old Flanagan model from speech research, so a lot of the speech recognition folks out there may recognize some of this. It's a physical model of the glottis, the vocal tract, and the mouth as air comes out. The computer science community, the speech recognition community, and even the medical community have these models of how air flows and how the mouth creates the resonances that create sound. So what we did was model the vocal tract; this is actually a 3D-printed vocal tract, and we created 3D models of how air flows through it if there is a restriction. Think of a musical instrument: as you change its properties, adding or removing mass within the tract, it changes the sound. That's essentially what we did. We created a model where, as the vocal tract resonances change, and it turned out in our clinical data collection that the resonance change is proportional to the actual flow coming out, you can estimate flow. You have to normalize based on the height and weight of the individual, because you've got to get a sense of the size of the pulmonary system, but that was the model we used: basically a statistical model that predicted flow. Now we've actually moved beyond that, and we're using deep learning models; I'll talk about how we've been collecting the data for that in a second. One of the things we did early on was ask: not everybody has a smartphone, so how do we enable the 5 billion, 4 billion, however many phones are out there in the world, to do this? So we developed the technology in two ways. One was an app on a smartphone, where you get the best audio data possible. The other was for any ordinary phone in the world: a feature phone, not a smartphone, a regular phone that doesn't have many apps and where you can't easily download one, but where you can make a call. We created a version where you make a phone call to a toll-free number and then do the maneuver the same way. In that case, we had to analyze the data that goes from the speaker all the way through the telephony network, through the GSM tower, through the transcontinental fiber line or the oceanic line, or maybe through a satellite connection, really looking at how that data gets mangled on its way to the input.
It turns out that, from the mu-law coding and the way the encoders work, the network actually preserves just enough of the sound created by that blowing maneuver that we can still reconstruct some of this. It's not as accurate as the phone app, but it allows you to call any toll-free number, and it text messages you back the results. So you can use any ordinary phone in the world, literally turning any phone in the world into a pulmonary assessment tool. This is something we did early on, and on the bottom right is an example where we deployed this technology in Bangladesh to collect some of the data and do the validation, in clinics where you might have only one pulmonologist in a particular region and many, many patients that need to travel a very long way to see that pulmonologist. Now you could do this in the villages and have community health workers deploy some of these in their respective communities. So we did a very large clinical study, about 10,000 patients around the world, where we took a clinical-grade $10,000 spirometer and the app, to see how well this works. If you look here: PEF, FEV1, FVC. Those are the numbers that pulmonologists care about, and what's really relevant here is the percentage error relative to a clinical device. It turns out that on a lot of these measures, whether you use the phone app or even the call-in version, which is basically just the telephone call, you have errors of about five to 10%. To put that into context, the FDA, the Food and Drug Administration, clearance for a lot of these spirometers is within five or 10%: when you do a spirometry test and then do another one, you already have about 5% variation from test to test. So even if you had a 10 or 15% error, if you just think about healthy versus moderate lung function, or seeing a downward trajectory, that's already good enough for triage. And this is already very close to a clinical device; we're approaching clinical-device accuracy numbers. But for a triage tool, you don't need to show anybody a number. You just have to say: hey, if you're doing this test once a week or once a day, and you cross a threshold, or you have a downward trajectory, that's what elicits a secondary step; if things are looking normal, you keep doing the test. Now we can triage the people that are actually having a decline and push more of our resources to those individuals. So even as a screening tool, this would be really helpful. This gives you a sense that these things can actually work fairly well when you compare them to a clinical device, but it's really enabled by everything from the theoretical physics model, to the sensor innovation, to the machine learning work that was required.
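To make that pipeline concrete, here is a minimal sketch of the signal path: record the exhalation, extract its amplitude envelope, map the envelope to flow, and integrate flow to volume to get the numbers pulmonologists care about (FEV1, FVC). The envelope-to-flow gain and the height normalization are illustrative placeholders; the actual system uses the vocal-tract resonance model and clinically trained regressions described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 44_100  # assumed phone microphone sample rate

def estimate_flow_volume(audio, height_m, gain=0.05):
    """Very rough flow/volume estimate from one exhalation recording.

    Minimal sketch: maps the audio envelope to flow with a single
    hypothetical calibration gain, normalized by patient height as a
    stand-in for the per-patient normalization described in the talk.
    """
    # Band-pass to keep the broadband "whoosh" of the forced exhalation.
    b, a = butter(4, [100 / (FS / 2), 8000 / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b, a, audio)
    # Amplitude envelope via the Hilbert transform, lightly smoothed.
    env = np.abs(hilbert(filtered))
    env = np.convolve(env, np.ones(512) / 512, mode="same")
    flow = gain * env / height_m                 # liters/second (uncalibrated)
    volume = np.cumsum(flow) / FS                # integrate flow over time
    fev1 = volume[min(len(volume) - 1, FS)]      # volume exhaled in 1st second
    fvc = volume[-1]                             # total exhaled volume
    return flow, volume, fev1, fvc
```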
Some similar work that we did, related to this, was to study cough. Cough is a common symptom that people often talk about, but from a healthcare standpoint people often don't give it much thought, because coughing is a symptom of a lot of things, right? Pulmonologists and medical professionals are always like: cough, what do you do with a cough? Everybody's coughing. But one of the things we've been looking at is that the human ear can't really differentiate a lot of things about a cough. When somebody is coughing in a room, a lot of the time we don't even notice it. If you ask somebody with a pulmonary disease to self-report how often they cough, they grossly underestimate: typically, somebody that coughs maybe 100 times an hour may say they only cough two or three times, because you've learned to tune it out. But a machine learning algorithm with a microphone won't tune it out; there are things that human ears may not pick up that a machine learning algorithm can. So our hypothesis was that human ears miss some of these subtle characteristics. Compare this to vision: with an image, there's only so much a computer vision algorithm can do, given the resolution of the camera and what may or may not be in the scene, and the visual processing a human can do on an image greatly surpasses what you can do with a computer vision algorithm. But with hearing, it's a different situation.

There are subtle characteristics that don't even get registered, because you're tuning them out. So the idea was that there might be more entropy in cough than we think, so let's study that. We developed these techniques, which are actually starting to be used now with the coronavirus, where cough is a major symptom. First we said: let's use that same glottis model and create a cough model from it, and instead of doing spirometry, let's identify cough. What is a cough? A cough is this roughly 300-millisecond thing: you inhale, and then you have this really large, explosive event, the cough event, about 200 to 300 milliseconds long. You have this burst period and then you have this roll-off. As you see on the spectrogram, time is on the x-axis, frequency on the y-axis, and the intensity of the color shows how much energy is in each band, and you see this drop-off. It turns out there's actually a pretty unique signature for a cough. So if you just want to identify a cough, you can do that fairly accurately, compared to throat clearing, speech, background noise, cars backfiring, other car sounds, doors opening and closing. You can fairly accurately identify a cough, which could potentially be useful in the medical space; there wasn't really technology that could quantify cough, so we had to develop that first, before you could do anything deeper. But if you look at the signal, there's some entropy here, and if you look closely, when somebody coughs, that could tell you a lot about the physiology. Is it a dry cough, a wet cough? One of the things the computer science community is asking right now is: could you diagnose or screen for coronavirus from the cough? I think that's going to be very difficult, because a coronavirus cough and, say, an influenza cough or a pneumonia cough might sound the same. But the fact that something is happening might be a good signal. So could you identify wet versus dry, or a precursor to something like a coronavirus-type upper respiratory infection? You could, but the first step is identifying the cough accurately. One of the things we focused on early on was not just cough for cough's sake, but looking at it for a particular condition: tuberculosis. We worked very closely with the Bill and Melinda Gates Foundation to see: could we use our cough algorithms to help identify and study tuberculosis? It's a highly infectious lung disease, and in fact a lot of things in TB parallel the coronavirus. It's infectious; coughing is a major symptom; we're still trying to figure out how it spreads. Is it community transmission? It probably is. But do you have a super spreader? Is one individual spreading a lot, or is everybody spreading?

How do the particles stay suspended? How long do they stay in the air? What kind of humidity and temperature causes the particles to stay suspended? There are a lot of parallels there. So we looked at studying tuberculosis and cough as a way to ask: how can we do public health assessment? How do we know if there's a super spreader? How do we know if there's a lot of coughing that could be an indicator of tuberculosis? Or even personal health monitoring: if I had a way to monitor my cough in the evenings, or maybe throughout the day, am I doing better post-infection with TB? One of the things we had to do was collect data, so we worked with the Gates Foundation and the University of Cape Town to deploy some of our technology in a cough box they had developed, a little box that you go inside of and close up. These are people that already have tuberculosis; the box is obviously disinfected every time somebody goes in, but there's an impactor in there that can capture the particles. When the particles are analyzed, you know for a fact that there are TB particles that were captured from the cough. There's obviously a microphone in there for our own assessment, and a number of other devices, but you capture the cough, you capture the particles, and from the chest X-ray and the particles you know for sure whether they have TB or not. That's how we were able to collect a lot of the data early on. If you look at tuberculosis: a healthy lung, on the left here, doesn't have any of these holes, these granulomas. But when you have an infection like tuberculosis, the infection is attacked, you've got your white blood cells attacking it, and you get these granulomas that form, which are basically little holes that show up on a chest X-ray. The idea here is: if I'm coughing with a healthy lung versus a TB lung, my cough has to sound different. Even if I can't hear the difference, from an algorithmic, machine learning and AI standpoint, it has to look different, because you're completely changing the airflow: how the cough originates, how the flow comes out. Just from the physics, it has to change. But how do we identify the change? That's the hard part. So these are some of the things that we, and some of the community, have been looking at: in addition to identifying the cough, when you zoom into that waveform, there is a signature you can look at for whether it's wet or dry. Also, if you look at the last 50 milliseconds, that's when the voice comes through, so you can start to identify whether it is one cougher or another cougher. Some of the work we've done is not only identifying that it's a cough, but clustering: saying, hey, that cough we just heard is different from this other cough. So you can start to do these privacy-preserving techniques on-device: in an environment, you can see what the infectivity of that environment is without having to send any data to the cloud; you just process locally and cluster how many different individuals might have been infected. Or you can have a personalized device, something for you to monitor your own rates. And the algorithms that we've developed, especially around cough identification, are getting better and better over time.
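As a concrete illustration of the identification step, here is a minimal sketch that flags candidate cough events as short, broadband energy bursts in a spectrogram, the burst-and-roll-off signature described above. The sample rate, thresholds, and duration window are illustrative assumptions, not the lab's tuned values.

```python
import numpy as np
from scipy.signal import stft

FS = 16_000  # assumed sample rate

def detect_cough_bursts(audio, thresh_db=20.0):
    """Flag candidate cough events as ~200-300 ms broadband energy bursts.

    Minimal sketch: finds short stretches of frames whose wideband energy
    rises well above the noise floor, keeping only cough-length bursts.
    """
    f, t, Z = stft(audio, fs=FS, nperseg=512)
    energy_db = 10 * np.log10(np.abs(Z).mean(axis=0) + 1e-12)
    floor = np.median(energy_db)                  # rough noise floor
    above = energy_db > floor + thresh_db         # frames well above floor
    hop_s = (512 // 2) / FS                       # default hop = nperseg // 2
    events, start = [], None
    # Group consecutive loud frames; keep bursts of cough-like duration.
    for i, on in enumerate(np.append(above, False)):
        if on and start is None:
            start = i
        elif not on and start is not None:
            dur = (i - start) * hop_s
            if 0.1 <= dur <= 0.5:                 # ~cough-length burst
                events.append((t[start], dur))
            start = None
    return events
```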
So if you think about false positives: you don't want this technique to say that people are coughing when they're not coughing. You need a really high true positive rate; you've got to really know that when there's a cough, it's actually a cough, so you don't over-predict. And some of these convolutional neural nets, the CNNs, are actually starting to outperform the traditional SVMs, the support vector machines, that we had worked on in the past.
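For a sense of what that CNN stage might look like, here is a minimal sketch of a convolutional classifier over log-mel spectrogram patches. The architecture, patch size, and layer choices are illustrative assumptions, not the lab's actual model.

```python
import torch
import torch.nn as nn

class CoughCNN(nn.Module):
    """Tiny CNN over log-mel spectrogram patches (cough vs. not-cough).

    A sketch of the kind of convolutional classifier described in the talk;
    the specific architecture here is illustrative only.
    """
    def __init__(self, n_mels=64, n_frames=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (n_mels // 4) * (n_frames // 4), 2)

    def forward(self, x):                # x: (batch, 1, n_mels, n_frames)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: score a batch of ~0.3 s spectrogram patches.
logits = CoughCNN()(torch.randn(8, 1, 64, 32))
probs = logits.softmax(dim=1)            # P(not-cough), P(cough)
```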

And if you look at sensitivity and specificity, you can start to get to less than 10%, 5%, or even 1% false positive rates, with accuracies above 85%, which is far better than self-report. And these are just commodity microphones. So this gives you a sense of how the AI work has accelerated the capabilities of this kind of assessment. Some of the things that we're doing, and that the community is now starting to come around to with the coronavirus, is looking at post-infection. We've got so many people infected; post-infection, how are we going to triage their care? After you've been infected, you still have to assess lung function, especially if somebody was at risk: are you doing better or not? And how do we do that at large scale? We just can't have everybody come back to the hospital, because the clinics are overwhelmed with the diagnostic cases; what about the post-infection cases? So some of the tools we've been building are tools on the phone where you can put the phone at your bedside, because the evening is a good time to monitor how often you're coughing: pulmonary exacerbations sometimes arise at night, and during sleep you yourself might not know how things are going or how you're breathing. So you can do this assessment and say: hey, this is getting worse; now you can call a nurse case manager or community health worker to see what to do next. And a community health worker can monitor all their patients and say: these people are doing okay; oh, I'm seeing a bit more of a trend here, let me check in on them. Instead of calling everybody, you check in on the emergent cases, because those are the ones you can now automatically triage; you can use machine learning here to triage automatically as well. This is an example where we did it for TB, but all of that is being applied to the coronavirus and COVID right now. So here is another tool that we built, not as relevant to coronavirus, but I just wanted to talk about a couple of examples of non-invasive blood screening. We've talked about microphones, and there's a lot more you can do with microphones; I just talked about two examples. But what can you do with the camera? Cameras are getting better and better: you can do amazing photography with these cameras, and the technology in the camera keeps improving. But what can you do from a health standpoint? We've been looking at how to use the camera and the flash, which are also getting better, for non-invasive blood screening. One of the first areas we looked at was newborn jaundice. When a baby is born, one of the things that you monitor is the amount of bilirubin in the blood. This is pretty important. Bilirubin typically increases right after birth, while the baby's body is still adapting: you have red blood cells that die, and they get garbage collected, so to speak.
And that garbage collection process is still developing in a newborn's body, and sometimes it doesn't develop fast enough. So you have these dead blood cells that start to back up in the body, and what happens is the baby turns yellow. That's one of the symptomatic signs, but it can actually have a debilitating impact; it can have neurological impact.

It is a cause of newborn mortality, unfortunately, in many parts of the world, and in developing countries in particular. In developed countries, this is tested for the moment the baby is born: it's monitored at the moment of birth, before the baby is discharged from the hospital. In many parts of the world where birth doesn't happen in a hospital, where you might have a midwife, this might not even be tested for. So we were looking at how to develop a technology that uses the phone, where you can take a picture of a baby to see how much bilirubin is in the blood. Because it physically manifests itself in the skin, you can optically analyze the amount of bilirubin; you can see the yellowness. The problem is that when you ask a parent "is your baby yellow or not?", it's really hard to tell for darker-pigmented babies: they don't look yellow, or they don't look yellower. Even with a baby that does look yellow, it's hard to notice whether they look more yellow than yesterday. It's just very hard to do visual assessment, so we wanted to create a tool that could do it automatically. Right now, the way bilirubin is assessed is a blood draw; it's called total serum bilirubin: you take a little bit of blood and you analyze it. They make non-invasive devices too: there's a device that you put on the forehead of a baby that uses an optical technique. It shines a couple of different frequencies of light and looks at the absorption and reflection of certain frequencies to see how much bilirubin is at the subcutaneous level of the skin; basically, in the capillaries and vessels right below the skin, it looks at the color being absorbed and reflected to infer bilirubin. But these devices are fairly expensive, about five-plus thousand dollars, and they're not ubiquitous at all: a very specialized device that's not deployed massively. And because we knew there was a device that could do this non-invasively, we basically said: could we just do this on a phone? It's got a flash and it's got a camera; could we use machine learning with those sensors to do this? And the reason bilirubin is an interesting one to look at is that bilirubin typically starts to peak after babies have been discharged from the hospital, or after a midwife has delivered the baby. So you're typically outside the care of a professional when bilirubin is at its highest, right when you need to monitor it. In the United States, after a baby is discharged from the clinic or hospital after a day or two, the parent has to monitor and ask: is the kid's bilirubin going up? I don't know; how would I know? I'd have to just look at them. So we wanted a tool that you could use at home, and especially for developing countries, where you don't have a blood draw, a nurse case manager or a community health worker could do that assessment. The way this works is that it turns out bilirubin absorbs blue light. If you look at the absorption spectrum of bilirubin, you've got this peak around 460-470 nanometers, where light actually gets absorbed. So what we said was: hey, a modern phone has a white flash, a pretty broadband flash.
If we know the properties of the flash, and we take a picture of the baby with the flash, we can look at how much ambient light we had before we took the picture. We can hold the phone close enough that you blanch the skin with the flash, you kind of wash the skin out, and then we can look at how much light is absorbed and reflected compared to a healthy baby with low bilirubin. You can start to see how much blue light is absorbed. It's very similar to that non-invasive device, but here we're doing it in a non-contact way. In this example, on the far top left, we have a calibration card. The idea is to calibrate the camera, because every camera has different properties in terms of its white balance, and it might have different correction factors built into the hardware; the card is just a reference to calibrate the camera, and maybe the ambient light. But most of the work is done by the phone here. And what we did was create an algorithm that essentially identified where this absorption peak happened.
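In rough outline, the optical idea might look like the following sketch: subtract the ambient-light frame, white-balance against the calibration card, then compare blue-channel reflectance (bilirubin absorbs around 460-470 nm) to the other channels. The feature and function name are illustrative assumptions; the real system feeds calibrated color features into a learned regression against blood-draw ground truth.

```python
import numpy as np

def bilirubin_feature(flash_img, ambient_img, card_rgb):
    """Toy yellowness feature from a flash / no-flash skin image pair.

    Minimal sketch: isolate the flash-only light, normalize each channel
    by the calibration card, then compare blue reflectance to red/green.
    Illustrative only; not the actual BiliCam algorithm.
    """
    skin = flash_img.astype(float) - ambient_img.astype(float)  # flash-only light
    skin = np.clip(skin, 1e-3, None)
    balanced = skin / card_rgb             # per-channel white balance vs. card
    r, g, b = balanced.reshape(-1, 3).mean(axis=0)
    # More absorbed blue (relative to red/green) -> yellower skin.
    return (r + g) / (2 * b)
```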

So with that algorithm, we did a clinical trial. We've actually done more babies than this, but in our initial trial of about 500-plus babies, we took a blood draw from each newborn and used the phone app to collect the data. Compared to the blood draw, BiliCam had a 0.91 correlation to the actual blood draw. The TcB device, the non-invasive device I showed earlier, is at 0.92, so this tool is almost as good as a regulated clinical non-invasive device. And if you look at the Bland-Altman plot at the bottom, at the spread, we're within the spread of what the transcutaneous, non-invasive device can do. So it's roughly in the same ballpark as where the clinical device operates when you compare it to a blood draw. And as I said earlier, you don't have to be perfect: even if you could just provide a warning signal saying, hey, we're seeing high amounts of bilirubin and we're not seeing a reduction, that's already an important indicator that doesn't even exist right now. So this is an example of how to use the camera and flash to potentially get at some of that. I won't go into detail here, but after we published this paper, a number of folks reached out to us and said: hey, there's this thing about bilirubin in adults. You don't see the yellowing happening in adults, because the bilirubin changes are very minute; they change very little, so they don't manifest as a color change. But in pancreatic cancer, bilirubin does increase, and pancreatic cancer has a five-year survival rate of only about 6%. That's because people don't know they have it until they're symptomatic, and by the time you're symptomatic, it's often too late; you're in advanced stages. If you could screen somebody much earlier, the prognosis could be very different: you could do things like a Whipple procedure, or other medical procedures that could maybe help somebody sooner. But you don't know until you're symptomatic. One of the things we were looking at is that in the eyes of an adult with pancreatic cancer, the white parts of the eye, the sclera, actually get just a little bit yellow. So we created a protocol where you could take a selfie-type picture, in a study where we wanted to see: could we identify bilirubin not in kids but in adults, and see if jaundice is increasing? These are some preliminary results, but we were able to get some fairly accurate results in terms of seeing a trend: your bilirubin is okay, and now it's starting to go up. If you are at risk for pancreatic cancer, you could potentially use this as a home screening tool to see what might be happening. This could be a selfie you take maybe once a month. It's still very early work; right now we do it with a controlled box, just so we can control the light. But this is an area where you could get a potential diagnostic or screen that you otherwise couldn't have gotten without being symptomatic, or without doing a blood draw at home. So these are some other areas the community has been looking at. Another technology that we've been developing is for hemoglobin, hemoglobin in the blood. This is relevant for pretty much anybody: people with sickle cell anemia,
Another technology we've been developing is for hemoglobin, so hemoglobin in the blood. This is relevant for pretty much anybody: people with sickle cell anemia, people who are anemic, pregnant women, anyone who wants to monitor their hemoglobin levels. Right now it's done with a blood draw. Similarly to bilirubin, there's a spectral property that can be leveraged to start doing blood assessment. The way this works is you put your finger over the camera and flash; the flash turns on, and your finger covers both. There are a lot of apps you can download now that give you your heart rate by looking at cardiac volume changes. What this does instead is look at the different frequencies of light in the camera's RGB channels to figure out hemoglobin levels, not just heart rate. It can capture your heart rate too, obviously, and seeing the heartbeat tells us you actually have a finger there, but then it gives you your hemoglobin level in grams per deciliter through an optical assessment. Similarly to the bilirubin device, there's actually a device called the Pronto from Masimo, a clip-on device that can do this non-invasively. So we knew the hypothesis had some legs, because there is a non-invasive, customized optical technique that can do this, and there are devices out there that do it.

In the blood, you basically have hemoglobin and plasma, which is mostly water, so what you really care about is the percentage of hemoglobin relative to plasma: how much hemoglobin is in there. It turns out color is a good proxy for this too; the more red it is, the more hemoglobin you have relative to plasma. If you look at the absorption-versus-wavelength curve, just like with bilirubin, plasma, being mostly water, has very high absorption around a particular wavelength near 500 nanometers, while hemoglobin's curve sits a little lower and starts to dip a little faster than plasma's. If you identify two different wavelengths on that curve, you can start to compute the ratio between plasma and hemoglobin, and that ratio of hemoglobin to plasma in the blood is what we care about. You can essentially look at the color, a kind of chromatic analysis: you shine light at the finger and look at how much is reflected and absorbed, just like in the bilirubin technique. In this case, we put the finger up against the camera, because it's a lot harder to see these subtle changes, so you really have to push it up against the lens. If you know the light source and the wavelengths, and you know how much is absorbed with and without a finger, you can start to do that. But the challenge is: how do you know you're measuring the blood rather than the skin tone? Some of you may be able to see this: when you put your finger over the camera and record a video, you will slowly see your heartbeat; you'll actually see the throbbing from the cardiac volume change as your heart beats. As that volume of blood gets to your fingertip, you see the signal increase and decrease as the blood flows through, and that's basically your heartbeat. So because we can see the heartbeat through the finger, we know when a change is due to the blood flowing versus the skin color, which is not going to change that rapidly. We can take the differential between the blood-contributed color change and the natural color of the finger. We look at the waveform before, during, and after the heart beats, and then we can cancel out the tissue contribution from the blood contribution. Again, we did a clinical evaluation against the Pronto, which is a clinical non-invasive device. This one is a little harder to do, so the correlation is a little lower than BiliCam's, but it's still really relevant and helpful as a screening tool if you're anemic, if you have a blood disorder, if you need an out-of-clinic assessment, or for a community health worker to do a quick assessment.
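The pulsatile-differential idea just described can be sketched compactly: the pulsatile (AC) part of each color channel comes from blood, while the steady (DC) level includes skin tone, so an AC/DC ratio-of-ratios isolates blood color. The channel choice and the calibration constants are illustrative assumptions, not the published model.

```python
# Minimal sketch: ratio-of-ratios over a finger-over-camera video so that
# skin tone, which scales each channel's DC level, cancels out.
import numpy as np

def ac_dc(channel: np.ndarray) -> float:
    """Pulsatile amplitude over steady level for one color channel
    (one mean-intensity sample per video frame)."""
    dc = channel.mean()
    ac = channel.max() - channel.min()   # peak-to-peak over a few heartbeats
    return ac / dc

def hemoglobin_feature(red: np.ndarray, green: np.ndarray) -> float:
    """Ratio of ratios: skin tone scales both channels' DC and cancels."""
    return ac_dc(red) / ac_dc(green)

# Hypothetical linear calibration to g/dL, which in practice would be fit
# against blood-draw ground truth.
A, B = 9.0, 4.0  # assumed coefficients

t = np.linspace(0, 5, 150)                       # 5 s of video at 30 fps
red   = 150 + 4.0 * np.sin(2 * np.pi * 1.2 * t)  # simulated 72 bpm pulse
green = 90  + 1.5 * np.sin(2 * np.pi * 1.2 * t)
print(f"{A * hemoglobin_feature(red, green) + B:.1f} g/dL (simulated)")
```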
One of the things we did here: the Ministry of Health in Peru reached out to us because they had a pretty big problem with a large number of kids in Peru who were anemic, much of it attributed to malnutrition and access to nutrition. They needed a massive deployment to screen for who would need a blood transfusion or who needed care, but if you think about the blood draws you'd have to do, and how you would get them into some of the jungles of Peru, it would be very difficult. So on the left is one of my graduate students, Edwin, who is now a professor at the University of California, San Diego. When he was a grad student, he was doing this deployment and data collection in Peru, basically screening using a phone as the tool, so that a community health worker can quickly screen hundreds of patients. Typically it would take much longer than that if you had to do a traditional blood draw, take it to the lab, and wait for the result to come back; this is a way to triage and then cull it down to the smaller number of individuals who actually have to get the blood draw. This is just in the clinic, where you can do this mass screening much more effectively, but in fact you could deploy the app to all the community health workers and do a country-wide assessment very rapidly, and that has always been the goal of this project.

Some of the things I want to leave you with are things we're working on now; they're relevant but still early. Obviously, fever is a big one for coronavirus, and a lot of people say, why not just use a thermometer, but when you need to get triaged, you may not have a thermometer handy. It turns out that when you unpack a phone, it actually has a lot of temperature sensors in it: a set of thermistors for the CPU and the phone itself, for the touchscreen, and thermistors for battery overcharge protection. In fact, some of the temperature sensors in a phone are the same temperature sensors you might find in a forehead thermometer; obviously, they're designed for different uses. One of the things we've been looking at is the thermal mass of the body. If you know what's operating on the phone, if you basically have only one app running or you control what's running on the phone, you can get the temperature at a stable state. And if you put the face of the phone on your forehead, as in the example here with Joe, one of my grad students, the heat from the body actually skews those temperature sensors. The body is at a temperature close to the phone's, if not a little different, but order-of-magnitude-wise you change the thermal mass of the phone enough that all those thermistors pick up the differential. So if you have a fever of 100, or 105, or 106, versus a mild-grade fever or no fever, you can see those changes. You don't actually need the exact reading, 101 versus 102; even a clinical thermometer isn't that precise in practice, and whether you're at 101.0 or 101.5 degrees Fahrenheit doesn't matter. What matters is whether you're above a particular number. If you look at the battery temperature over time, it follows an exponential curve. If you know the model of the battery temperature curve with different processes running in the background, and then you quickly change the thermal mass by introducing the body, in this case the forehead, you can see the change off that baseline. If that baseline change is much higher than just the natural temperature fluctuation, then you know you're in a particular grade of fever, because natural fluctuation alone wouldn't cause that change. This is something we're working on now; we're doing studies and clinical studies so it can be used very rapidly in the field if you don't happen to have a thermometer. This is important for gig workers and workers out in the field: we don't have to monitor them, we just want them to be able to self-monitor. Do I have a fever or not? I feel feverish; give me a quick way of seeing whether I'm above a threshold. This could be deployed on billions of phones if we can get them all to work well, so we're in the midst of doing this work. Again, we're using a statistical model of the battery temperature curve, and also looking at how it differs when you have somebody with and without a fever.
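The baseline-deviation idea can be sketched in a few lines. Everything here, the warm-up constants, the noise level, the multiplier k, is an illustrative assumption; the actual work models the thermistor behavior statistically against clinical ground truth.

```python
# Minimal sketch: model the battery thermistor's natural warm-up as an
# exponential curve, then flag a fever-grade deviation when forehead contact
# pushes readings well beyond natural fluctuation.
import numpy as np

def baseline_model(t: np.ndarray, t_ambient=25.0, t_steady=33.0, tau=300.0):
    """Expected battery temperature (deg C) at time t (s) under a known,
    controlled workload: exponential approach to a steady operating point."""
    return t_steady - (t_steady - t_ambient) * np.exp(-t / tau)

def fever_flag(t: np.ndarray, observed: np.ndarray,
               noise_c: float = 0.3, k: float = 5.0) -> bool:
    """True if the observed trace deviates from the baseline by much more
    than natural fluctuation (k * noise), suggesting contact with a body
    warmer than a no-fever or low-grade-fever forehead would produce."""
    residual = observed - baseline_model(t)
    return residual.max() > k * noise_c

t = np.arange(0, 60, 1.0)
febrile = baseline_model(t) + np.where(t > 30, 4.0, 0.0)  # phone meets hot forehead
print(fever_flag(t, febrile))  # -> True
```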
The other area we're looking at is a number of deep-learning techniques for interpreting rapid diagnostic tests. This is not directly doing a screen, but this is where you might have a lateral flow assay, say a malaria test; the ones on the right are some of the COVID rapid diagnostic tests being developed. You take a sputum sample or a nasal swab, put the specimen on this little strip, and the strip changes color, kind of like a pregnancy test, and then you have to assess it. The problem is these lines can be faint, and sometimes you can't really interpret them. So we've been building open-source tools where you can take a picture of a malaria strip, or even a coronavirus strip, and it basically interprets it for you, so you have an objective measure: is that line too faint, is it dark enough, is this diagnostic or not? There's a surprising amount of human error in interpretation. You'd think, oh, it's just a color change, a line or no line, but it turns out to be very difficult, because the gray area is the important area. If it's very obvious, you know whether you're infected or not, but in the middle, am I or am I not? That's where you have to be very objective. So we have a number of mobile-health NGOs using some of our tools now to deploy in their regions for malaria, and now coronavirus strips are being developed for very rapid diagnostic testing, where you go into a community, run the diagnostic tests, quickly take a picture of those tests to get them into the database, and also over-read them very rapidly.
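In the spirit of the objective read described above, here is a minimal sketch that collapses a grayscale strip image into a 1-D intensity profile and scores the test-line dip against the background. The zone locations and the faint/positive thresholds are illustrative assumptions, not the project's actual tooling.

```python
# Minimal sketch of an objective lateral-flow-strip read.
import numpy as np

def line_score(strip_gray: np.ndarray, line_cols: slice) -> float:
    """How much darker the test-line zone is than the strip background,
    normalized so the score is comparable across exposures."""
    profile = strip_gray.mean(axis=0)        # average down each pixel column
    background = np.median(profile)          # typical line-free brightness
    line = profile[line_cols].min()          # darkest point in the line zone
    return (background - line) / background  # 0 = no line, larger = darker

def interpret(strip_gray, line_cols, faint=0.05, positive=0.15) -> str:
    s = line_score(strip_gray, line_cols)
    if s >= positive:
        return f"positive (score {s:.2f})"
    if s >= faint:
        return f"indeterminate, retest (score {s:.2f})"
    return f"negative (score {s:.2f})"

strip = np.full((40, 200), 220.0)            # simulated bright strip image
strip[:, 95:105] -= 60.0                     # simulate a visible test line
print(interpret(strip, slice(90, 110)))      # -> positive
```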

I'll leave you with this project that's been going on for a while. This is another sensor: instead of the camera or the microphone, this is the accelerometer and gyro, and you might think, what can you do with an accelerometer and gyro? In this case, we're trying to measure bone density. Osteoporosis is basically a reduction in bone density over time. If you look at the bottom left, healthy bone is dense; think of a fairly dense piece of wood. Osteoporotic bone is like a piece of wood that has hollowed out, with holes in it, and the bone gets weak. Our hypothesis came from a paper from the 70s, where a very clever clinician designed a tuning-fork approach: you hold the tuning fork in your hand and you tap your elbow, and if the tuning fork makes a sound, that means there is some hollowing happening in your bones. If you think about a solid structure versus a structure with some hollowing in it, the resonance that reaches the tuning fork changes. So the idea was, if I tap my elbow and create this impulse response, and this thing makes a sound, something is happening with my bones. We thought, oh, interesting, we could probably do this with a phone. In a modern phone, the accelerometer and gyro can be sampled at over 1,000 hertz, a kilohertz, so you can capture these small vibrations across a broad band, and we asked: could we replicate the tuning-fork experiment? The way this works is you take the phone and squeeze it in your hand, or you can put it up against, say, the tibia on your leg, and then you create a little impulse on the tibia: you either tap your elbow or use a little medical hammer. Then you look at what gets to the phone. You've got the natural shaking that might happen, but you can also pick up these small resonances that occur. If you look at the spectrograms of a healthy bone and an osteoporotic bone: in a healthy bone, you get sub-400 hertz resonances; you've got a solid structure, so those resonances occur and keep ringing. In an osteoporotic bone, those low frequencies go away and higher frequencies come through instead, because of the hollowing effect. So algorithmically, you can either use a very simple threshold, or you can use a conventional CNN to basically do image recognition on the spectrogram, to identify: hey, this one has the low-frequency components, this one doesn't. And again, just like a lot of the previous techniques, you don't have to get an exact number. Osteoporosis doesn't happen instantly; it happens over many months or years, so if you sampled once a month, you could actually see when something is happening, and you don't have to worry about matching conventional techniques that require X-rays.
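The simple-threshold version of that analysis can be sketched directly: FFT an impulse response captured by a roughly 1 kHz accelerometer stream and compare energy below about 400 Hz to energy above it. The 400 Hz split comes from the talk; the decision ratio and the simulated signal are illustrative assumptions.

```python
# Minimal sketch: band-energy ratio on an impulse response, where strong
# sub-400 Hz resonances suggest dense bone and their absence suggests
# a hollowed, osteoporotic structure.
import numpy as np

FS = 1000.0  # accelerometer sampling rate in Hz

def band_energy_ratio(signal: np.ndarray, split_hz: float = 400.0) -> float:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    low = spectrum[freqs < split_hz].sum()
    high = spectrum[freqs >= split_hz].sum()
    return low / (high + 1e-12)

def screen(signal: np.ndarray, threshold: float = 1.0) -> str:
    """More low-band than high-band energy suggests dense bone; the reverse
    suggests the low resonances dropped out, worth a clinical follow-up."""
    r = band_energy_ratio(signal)
    return "dense-bone-like" if r > threshold else "follow up (hollow-like)"

t = np.arange(0, 0.5, 1.0 / FS)
ring = np.exp(-8 * t)                         # decaying impulse response
healthy = ring * np.sin(2 * np.pi * 250 * t)  # sub-400 Hz resonance rings on
print(screen(healthy))                         # -> dense-bone-like
```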

A lot of this is done today using X-rays, and you're not going to get an X-ray every week or every month, so here you have a non-invasive way of potentially tracking bone density. This is just a quick overview: it's not only the microphone and speaker, or the camera and flash; you can also use the accelerometer and gyro in unique ways, and others have used accelerometers and gyros for other things as well, like measuring heartbeats. But this is just one example, bone density. One thing I want to leave you with: what has this space opened up? One of the really interesting areas now is regulation. Regulating medical devices is fairly straightforward: you have a device, you control how it's designed and developed, you run a clinical study, you validate it, and then you have a regulated device. But how do you regulate an app? That's very different, because there's no dedicated device; the device is a ubiquitous thing like a phone. So regulating an app, software as a medical device, is an area that's emerging in the United States and even worldwide, along with how to regulate an AI algorithm. I've been in the midst of a lot of that work, and it's an interesting one. Techniques like explainable AI, explainability, are going to be very important, because from a regulatory standpoint you have to have a sense of how the thing you're developing works; you can have some physiological basis. I think the world may be heading in a direction where you won't always need that, but I think it's really needed for the safety and trust of these things. The other area around safety and trust is the reaction you often get for apps: oh, apps are cheap, they're designed to be free or a dollar, so there's this perception that they're inaccurate. That's a really tough one, because apps aren't inherently low quality; in fact, apps can be pretty high quality, because you get more continuous data and apps can be updated more frequently. So there's this perception issue. The other one is around trust in general. A lot of the technology we've developed allows you to do monitoring at home, but what happens if the data isn't secure, or if the data doesn't reside in places where the user has control? In all the work we've done, the user has complete control: the data is local, it's on their device, they can delete it, and they decide when to send it to a physician or clinician. It's all on-device, being done locally.

And sure, you're consenting to it, but given the passive nature of some of these things, you've got to really consider what could be done with that data. Take the cough work, for example: we're looking at technology to identify when somebody might have pulmonary issues, but somebody could take that same technology and potentially identify from a cough whether you had coronavirus or not. We can't quite do that yet; that technology doesn't exist. But just think about the things that could be done with it. So we have to think very creatively and carefully about the unintended consequences of some of these things. It's making it easier to deliver healthcare for everybody in the world; at the same time, it makes some things easier to identify in ways we may not have intended. The patient-provider relationship also changes completely: now you're empowering individuals, or community health workers, with tools that typically only clinicians had access to, so the conversation between the clinician and the patient changes a little. The individual now has more of the data and more access to it, and so that relationship changes. The other thing we're starting to see more and more is that, as we eke as much as we can out of these sensors, it's actually helping inform new medical devices. Instead of thinking in the traditional way about developing a heart rate monitor or a blood pressure monitor, now you have other ways of developing maybe even lower-cost devices. If you've made innovations and breakthroughs with these phone-based solutions, that can drive innovation in medical devices, developed at even lower cost, that could be used in the clinical setting. And finally, while this talk was about diagnostics and screening and tools for collecting data, I want to point out that that's just a small sliver of what we can do and what we need to do. Many of the problems that are pervasive worldwide are social determinants of health, and by social determinants I mean where you live, access to safe water, nutrition; there's a lot we can do there that can have a major impact on healthcare. In the United States, the zip code is typically one of the bigger predictors of mortality, or just of health and wellness: simply your zip code, where you live. The environment around you, the support structures you have, and the infrastructure have a big impact on your health as well. So diagnostics and screening alone aren't going to solve the problem; I just want to call out that social determinants are just as important, if not more important, and mobile can play a role there as well. This talk has been focused on screening, but mobile can have a lot more impact on social determinants. So I'd like to end by thanking all my current and past students, who have gone off to bigger and better things as professors and thought leaders in industry. All of this work was done by my students; I just have the honor of passing along this information.
These are the folks who did all the hard work and are actually starting to revolutionize healthcare through some of these techniques, so credit goes to them, not me. As I said, I'm just a conduit for the work they're doing. With that, I'm happy to take questions. Thank you for your attention; hopefully this was helpful and inspirational. What I'd like to leave you with is that computer science can have an incredible impact on society. Healthcare is one such place: with the current pandemic, with coronavirus, unprecedented things are happening, computing is playing a critical role across the board, and that's only going to increase. I think there is an amazing opportunity for computing to really change the world; it is changing the world right now. So thank you. Happy to answer questions.

Yeah, so please submit your questions. Yanis will help triage, try to consolidate the questions, and then pass them along to me.

Great.

There are a bunch of questions that have come in, and I'll start with one that appears in various versions: people have asked about privacy. How do we deal with privacy in this new paradigm of essentially personal healthcare? Many of the things that were discussed are done server-based; are we going to see a shift to all of this happening on the phone itself?

What do you say about that?

Yeah, that's a great question. One of the things I pointed out is that a lot of the work we've done can primarily be done on-device. One of the shifts we're seeing, with things like federated learning, on-device compute, and the binarized neural nets being developed, is that most of the work I showed you can actually be done on-device, and a push toward on-device is one answer: you actually don't need to do a lot of this on a server. That's one area. The other thing is that we need to fundamentally think about privacy in general, and something we often do in our group is look at the unintended consequences. As I mentioned, the cough was an interesting example of that. The goal is to have an impact in healthcare, to identify cases where there's an exacerbation or to help you with your own healthcare, but that data can be used in a couple of different ways. Think about health insurance: if somebody has access to that data, they could probably use the cough to identify that you might have a condition that wasn't self-reported, and health insurance companies could use that. So one of the things we have to do, since this area is fairly early and nascent, is ask those questions and build the protections into the technology. One, as I mentioned, is focusing on-device. The other, which we've worked on, is that there are ways to build these networks and algorithms such that you can't reconstruct the original source data. For example, you can compute just the statistics: the cough audio doesn't go anywhere, the sound stays on your phone, and what it generates are statistics on cough frequency, cough intensity, those kinds of things. The original audio can't be analyzed later because it isn't even available. And if you took the statistics, or some of the features that came out of it, maybe the embeddings, and tried to reconstruct the audio, you can't get back to the original data. So there are ways to get the things you need from a statistical, physiological standpoint that can never be reconstructed. These are all technological solutions, obviously, but I think there's a fundamental set of privacy and data-governance considerations we need to take into account, because these are emerging technologies that we don't often think about.
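As a rough illustration of that on-device pattern, here is a minimal sketch that derives only summary statistics from an audio buffer and discards the raw waveform, so nothing reconstructable leaves the phone. The energy-threshold "detector" is a deliberately crude stand-in for a real cough model.

```python
# Minimal sketch of privacy-preserving, on-device cough aggregation.
import numpy as np

def cough_summary(audio: np.ndarray, fs: int = 16000,
                  frame_s: float = 0.25, thresh: float = 0.1) -> dict:
    """Return only aggregates; the waveform itself is never retained."""
    frame = int(fs * frame_s)
    n = len(audio) // frame
    energies = (audio[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    events = energies > thresh                 # crude loud-burst detector
    return {
        "cough_count": int(events.sum()),
        "mean_intensity": float(energies[events].mean()) if events.any() else 0.0,
        "minutes": len(audio) / fs / 60.0,
    }

fs = 16000
audio = 0.01 * np.random.randn(fs * 10)        # 10 s of quiet background
audio[3 * fs : int(3.2 * fs)] += 0.8           # one simulated cough burst
stats = cough_summary(audio, fs)
del audio                                      # raw audio is discarded
print(stats)                                   # only the aggregates remain
```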

Excellent. Another question, actually a set of questions that deal with the same thing: what about accuracy? You mentioned in your talk that you don't have to be as accurate as the devices in hospitals and clinics. Still, you need some accuracy, some level of certification, some level of confidence. How do you address this?

Yeah, luckily, with some of the work we've been doing, the accuracy is actually pretty good. In fact, some of it is getting very close to the clinical devices, and we may even get some of those things regulated as clinical devices, so accuracy is not the problem right now. As I said, you don't have to be perfect, because even clinical devices aren't perfect. If you go into a clinic and use a spirometer, you can have a major error there. Same with blood pressure: if I go in and happen to cross my legs, or I'm nervous, you're going to get an inaccurate result; that's why you take multiple measurements, right? Now, if you have multiple measurements at home with a device that's not perfectly accurate, you can still use the longitudinal trending information. What you can do is take a different lens on accuracy. When we look at sensitivity and specificity curves, you don't just look at the correlation and how accurate it is; you look at where you set your limits of agreement, or your cutoffs, the threshold that says, hey, I need to take the next step. With those kinds of metrics, you can get to very high performance; in other words, you don't miss anybody who might have a condition that's arising. You can have very high sensitivity and specificity with a threshold that says: for everybody who does this maneuver and crosses this threshold, we've captured that they have some pulmonary ailment happening, and you don't miss a lot of people. Because of that, you don't need to get to clinical grade there; you just need to get to the point where you know whom to focus on and whom to triage. As I mentioned earlier, some of the inaccuracy isn't for technical reasons, it's for technique reasons. For spirometry, some people might hold the phone wrong, might not blow hard enough, might not keep their mouth open; someone might hold their finger wrong. And in a telemedicine visit, luckily, the triage nurse on the other end of the call can say, hey, hold it this way, we're getting a better signal. So many of the things we found actually become human-computer interaction problems, or opportunities, where helping people do the technique better increases the accuracy too; and compliance is the other one, actually doing it often enough. But yes, I think we're in the midst of a paradigm shift in the regulatory space, with the FDA in the United States and other regulatory bodies around the world looking at how to regulate some of this. You're already starting to see AI models being FDA-cleared, so we're slowly moving into that space. But my worry has been that if everybody tries to get to perfect clinical accuracy, we're shooting ourselves in the foot, because we'd be chasing a goal we might not always need to reach, and people might not innovate in a space because it's impossible to get to that sensitivity and specificity, whereas something short of that could be good enough if you think about it.
So that's the thing: have a broad perspective on what accuracies you actually need to go after. You don't have to use clinical standards or clinical devices as your gold standard all the time.
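The operating-point idea above can be made concrete: instead of chasing raw accuracy, pick the screening threshold that guarantees a target sensitivity, so almost no one is missed, and then report the specificity you get at that point. The scores, labels, and target below are made-up stand-ins for a real validation set.

```python
# Minimal sketch of choosing a triage threshold for a sensitivity target.
import numpy as np

def triage_threshold(scores, labels, target_sensitivity=0.95):
    """Highest threshold whose sensitivity still meets the target."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    prev = scores.min()                        # flag everyone by default
    for thresh in np.sort(scores):             # sweep candidate cutoffs
        flagged = scores >= thresh
        sens = (flagged & labels).sum() / labels.sum()
        if sens < target_sensitivity:          # raised the bar too far
            break
        prev = thresh
    return prev

scores = np.array([0.1, 0.3, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9])  # app risk scores
labels = np.array([0,   0,   0,   1,    0,   1,   1,   1])    # condition present
t = triage_threshold(scores, labels)
flagged = scores >= t
spec = ((~flagged) & (labels == 0)).sum() / (labels == 0).sum()
print(f"threshold {t:.2f}, specificity {spec:.2f}")  # -> 0.55, 0.75
```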

Privacy, accuracy, and now another concept: equity. Not everyone has a smartphone, and even if you have a smartphone, you may not have the latest model that can capture all this good data. How do we serve the entire world?

Yeah, that's a great question, and it's something we confronted early on in the SpiroCall work. We often use this rule: if it can't work on a smartphone, it's not going to work on an older-generation phone. So we do our prototyping early on to see, is it even feasible? If we can't even get good data with a modern phone, it's going to be really tough with some of the older phones. With SpiroSmart we were getting good data, so we said, hey, let's look at feature phones. That was a phone that didn't even have apps on it, right? You could basically call a 1-800 number and use text messaging, which a lot of those phones had. So one of the things we've been pushing on is that, as you develop these models, don't lock yourself into the computational capabilities of the newest phones; take a broader perspective. You have to look at a broad set of phones; the smartphones we develop on right now are just a way to prototype and validate that this is even possible, and then we develop the techniques so that, again, they don't have to be perfect, as long as you can use them as a triage tool. The SpiroCall work is an example of that: we said, hey, we're going to take on the overhead of analyzing audio that has gone through all these different networking and telephony connections, and at the end of it, it's not going to sound like the thing I would have recorded locally on the phone. But what can you do with that? Going in with that as a first principle, adding that constraint from the get-go, makes a big difference, because the moment you don't make that fundamental shift and you allow yourself the luxury of a smartphone, it becomes very difficult to unwind later. If you add those constraints in early, that's really helpful. The other thing I've been pushing on: if you can demonstrate on a modern smartphone what it can and can't do, then as lower-cost phones are developed globally, you know what to focus on, right? New low-cost smartphones make trade-offs to get to their price point: reduce the camera resolution, put in only one microphone, drop this component or that one. Now what we can do is inform handset manufacturers: focus on these things and cut those things, so that the phone can be useful for telephony, communication, micro-payments, and healthcare. Right now a handset manufacturer doesn't have a clue which microphone to pick for health, but we can at least help narrow that down: if you only have this much budget from a bill-of-materials standpoint to build a $5, $10, $20 phone, here's what we can do to help inform the design. That's the other thing we can do going forward as these low-cost phones are developed: make sure the right things get put in there.

Great. Another set of questions moves away from the technology itself: how do you communicate all these techniques, all these tools, to the healthcare providers? How do you make them believe in them, use them, and engage them in the healthcare system?

Yeah, it's a really tough one, a very tough one. Luckily, a lot of my collaborators have been fairly open to technology and change, but the healthcare industry, unfortunately, is very slow. It's very entrenched, and a lot of things that are common knowledge or dogma now may not even be accurate, but they're still the going standard. There have been a couple of ways we've been trying to prove this out. One is that we do clinical studies, as I showed. As a computer scientist, my group does a lot of clinical studies; if you look at the kinds of things we do, you might ask: is he a computer scientist, is he a clinician, is he a bioengineer? For me, it doesn't matter. I'm a scientist trying to help the world, and computing is the core of what I do in a lot of these areas. So one is being able to do rigorous clinical studies that healthcare professionals understand. The other is that we take a lot of this to the next level and do outcome studies, and by outcome studies I mean deploying the technology in an intervention or treatment to show how well it works. In the United States, it's really tough to do this, because our healthcare system is entrenched and sometimes very slow to change, so we've been partnering with the Gates Foundation for almost a decade now, deploying a lot of our technologies in developing countries and emerging markets. They often don't have an entrenched system; they have a lot more flexibility to innovate and maybe even leapfrog North America or Europe in the kinds of technologies they can adopt. It's just like landlines: landlines didn't exist in many of these regions, they went straight to mobile and cellular, completely leapfrogging the whole wired telephony network, and in the same way they can leapfrog these kinds of systems that don't work well right now. So the way we can prove a lot of this out is to go to regions where you might have extreme phenotypes, extreme cases: a lot of tuberculosis, a lot of pulmonary issues from wood-burning stoves and those kinds of things. Very quickly, we can validate: here's the efficacy of this, and then bring that back to clinicians and say, look, here's the kind of impact we're having. So you've got to be really opportunistic about where and how you deploy it. But you also have to show efficacy, which is really tough as a computer scientist, and that's why not a lot of computer science researchers go down this path: you have to be in it for the long haul, and that's one of the challenges here.

One other very interesting question: many things, even in current healthcare, are about correcting disease. But what about preventing disease? Could this technology, which you have with you all the time, which you even sleep with, increase the level of early detection, where something shows up on its own, so to speak, without you going after it and looking for it?

Yeah, and that's really the crux of this whole notion of continuous monitoring: pre-symptomatic, so before you're symptomatic, could you identify something that's happening? A concrete example is coronavirus: if we knew you had coronavirus before you were symptomatic, you could obviously protect yourself by taking the right measures, but you'd also protect everybody else, because even pre-symptomatic you're a spreader; you're infecting others. Influenza, the flu, is the same thing: by the time you're symptomatic, your viral load is actually going down. Your viral load was going up before you had the fever and the aches; in fact, you were more infectious then. So there's a lot of value in that, and pancreatic cancer, as I showed, is another case. The idea is that with wearables, mobile, and these kinds of devices, you can see the changes in your physiology, and that's an indicator. This is where computer science has a unique role to play: that pattern-matching work, that AI work, is absolutely critical, because in clinical science we don't know what those precursor signals are. But if you think about the ubiquity of the phone, once you have the clinical outcome and the sensor data from the phone, you have this massive data set where you can start to do predictive modeling a priori, before you're symptomatic. So there's a huge possibility there, and that's one of the goals. The other thing I want to point out: one role is managing your disease. If you've been infected or have a disease, mobile health can help you manage your treatment, for example whether your tuberculosis is getting worse or not. Another is that some things are societally driven: just because you have a certain condition doesn't mean there's something wrong with you; it may just mean society isn't adapted to your needs, and technology can play a role in adapting to your needs so you can interface with society too. So I think there are three roles it plays: one is triage and screening in the healthcare space; one is that you might have a chronic condition you can't cure, and you use the mobile tool as a way to interface with a society that isn't built around individuals with certain conditions; and the third is treatment, basically being able to monitor how well your treatment is going. Those are the three areas.

This is great. Unfortunately, time flies, and we have tons of questions that we won't be able to ask and get answered. But let me finish with one of the last comments that came into the chat, and I'm completely in sync with it: real scientists are altruistic. Thank you very much, Dr. Patel. On behalf of everyone, thanks a lot. This was amazing.

Great. Thank you very much, everyone. Thank you, everybody.

Thank you. Bye bye.

Of course, there could be more written answers to your questions offline, and you will be able to see those, but we have a time limit. So thank you again, and thank you all for participating.

Goodbye.

Thank you, Yanis, very much for the excellent job moderating and handling all those questions, and thank you to Shwetak as well for an amazing talk. I don't think I'll ever look at this device as just a phone again; it's something new to me. That was truly amazing. It's a pity we ran out of time, but I think the number of questions is a testament to the interest in your topic. So again, thank you to Yanis as moderator, to Shwetak for an amazing keynote, and to the participants for all of your questions. Following that, I'd like to make a few announcements for next week, where we're looking forward to a very action-packed week with three webinars. On Tuesday, the 13th of July, we'll be having what is considered part two of the breakthrough session on the future of food, so if you're interested in that topic, please tune in. On Wednesday, the 14th of July, we'll be having a session curated by AI for Africa, which promises to be very interesting as well. And on the 15th of July, we'll be having the launch of the global data pledge, with a special guest on board: Nobel Peace Prize winner Muhammad Yunus will be speaking during that session. So I'd like to encourage you to attend all of them, or one or many, depending on your topics of interest. My colleagues have been pasting all of the links to these sessions in the chat, so you can go and register directly and not miss out on anything. Last but not least, I'd like to highlight the fact that two years ago, because of the AI for Good Global Summit, we launched the Focus Group on AI for Health with the World Health Organization. The goal of the group is to develop a benchmarking framework for the testing, evaluation, and efficacy of AI-for-health algorithms and applications. If you're here today, you're probably quite interested in that topic. They'll be having their 10th meeting in mid-September, online; it's an open meeting, and anyone can join. So I'd encourage you to go to the chat and click on the link if you want to join that focus group, and meet with like-minded folks in mid-September to continue this important work. With that, we've reached the end; all good things must come to an end. I'd like to once again thank the speakers, the panelists, the moderator, the participants, and of course ACM, the gold sponsor of the summit for many years in a row now, and the sponsor of this keynote. Thanks again to ACM, Vicki, and Shwetak, much appreciated, and also to our co-convener, Switzerland. And with that, I say goodbye and hope to see many of you next week. Thank you.