Yeah, if you take a step back for a second, I think there are three hard problems underpinning your question. You said data, and data is one of the tough things we have to deal with: there's a massive chunk of data at the end of the pipeline. There's also a huge amount of information that needs to be gathered by instrumentation, which has never been built to gather this much information. And then on the biochemistry side, there's a huge amount of complexity in trying to analyze the molecules we're going after. It might be interesting to double-click quickly on each of those areas, and then you guys can delve in and tell me which ones you think are most interesting.

On the biochemistry side, one of the things that made genomic sequencing possible is that nature already has mechanisms to copy and read DNA. It has to copy it for cell division, so that each daughter cell gets a copy of the DNA, and it has to read it so that DNA can be transcribed into RNA, which then becomes protein. To make a genomic sequencer, Illumina, which is by far the world leader in genomic sequencing, built a system that took a DNA fragment and amplified it, meaning copied it many, many times. That gave them signal amplification, which made the process of measuring it much easier. For us, once something becomes a protein, nature has no mechanism to read it back or to copy it. And there's no way to just look at these molecules optically, because they're minuscule, well below the optical resolution of any microscope, orders of magnitude smaller than the wavelength of light. Without a way to borrow from nature in order to measure these molecules, it's really, really difficult to develop a method. That's one of the ahas Parag had, and it's the crux of this method: instead of trying to make a single, specific identification of what a molecule is, you borrow a technique from computer science and probe the molecule many, many, many times, each probe leaking slightly different information about it, and then you computationally combine all of that to arrive at a very specific identity of what that molecule is.

Then you have to figure out how to do that in parallel for a large number of molecules; that's the second part of the problem. To make a dent in the world we're going after, you have to measure millions and billions of molecules in a single run of an instrument, and that run can't take more than a few days. The reason you have to measure that much is scale: each of your cells has roughly a million protein molecules in it, and in a typical pharmaceutical drug-development application you might be dealing with 96-well plates, each well with a thousand cells in it. So if you can't churn through billions of molecules, you're not going to make a dent, even if you could do better in terms of protein identification. We've had to spend an enormous amount of effort figuring out how to get enough density on our biochip so that when you're scanning it, we're able to get a lot of information quickly, and we've had to make a lot of innovations in microscopy to figure out how we can image these molecules at the single-molecule level, at speed.
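To make the "probe many times and combine" idea concrete, here's a minimal sketch of the general principle: each probe on its own is weak and ambiguous, but multiplying the evidence together sharpens into a confident identification. The proteins, probes, and binding probabilities below are entirely made up for illustration, and this is a textbook Bayesian version of the idea, not the actual algorithm described in the conversation.

```python
import numpy as np

# Candidate proteins and the (hypothetical) probability that each of
# several weak, non-specific probes binds them. None of these numbers
# come from a real assay; they're invented for the sketch.
proteins = ["P1", "P2", "P3"]
bind_prob = np.array([
    # probe A, probe B, probe C
    [0.9, 0.2, 0.7],   # P1
    [0.3, 0.8, 0.6],   # P2
    [0.5, 0.5, 0.1],   # P3
])

def identify(observations, prior=None):
    """Combine noisy probe outcomes (1 = bound, 0 = not bound) into a
    posterior over candidate identities via repeated Bayesian updates."""
    posterior = np.full(len(proteins), 1.0 / len(proteins)) if prior is None else prior
    for probe_idx, outcome in observations:
        p = bind_prob[:, probe_idx]
        likelihood = p if outcome else (1.0 - p)
        posterior = posterior * likelihood
        posterior /= posterior.sum()   # renormalize after each probe
    return posterior

# One molecule probed repeatedly: probe A bound twice, probe B didn't bind, probe C bound.
obs = [(0, 1), (1, 0), (2, 1), (0, 1)]
print(dict(zip(proteins, identify(obs).round(3))))   # P1 comes out near 0.95
```

Note that no single probe distinguishes the candidates on its own; it's the accumulation of many weak, partially overlapping observations that does the work, which is exactly the crux described above.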
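The throughput requirement also follows directly from the numbers quoted above; a quick back-of-envelope check:

```python
# Back-of-envelope scale check using the figures from the answer above.
proteins_per_cell = 1_000_000   # "roughly a million protein molecules in every cell"
cells_per_well = 1_000          # "each well with a thousand cells in it"
wells = 96                      # a standard 96-well plate

molecules = proteins_per_cell * cells_per_well * wells
print(f"{molecules:.1e} molecules")   # ~9.6e+10, i.e. on the order of 1e11
```

Roughly 10^11 molecules in a single plate is why churning through billions of molecules per run is table stakes rather than a stretch goal.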
We've also had to make a lot of innovations in the microfluidic system and all the other things we need to probe these molecules over and over again, so there's a huge body of work there. Then once that's done, it goes into a computer, and that computer is dealing with 10 or 20 terabytes of raw information coming off the instrument every day, which has to get reduced in real time. These are images, so they first have to get deskewed, they have to be corrected for pincushion distortion, and they have to deal with all the fuzziness of imaging at very low light levels, which is the regime we operate in. From there we reduce the dataset and send it to the cloud, where computation using this algorithm that Parag conceived of four years ago requires hundreds of cores and many hours of compute power. And all of that is just to get to quantification: what's in my sample, what's in that drop of blood. Then from there, the question is, how do I use that information to build better drugs, to build diagnostics, to personalize medicine? That's a whole area of exploration that we will be a big part of, but our customers will also be a really integral piece of it.
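For a sense of what that real-time reduction involves: 20 terabytes a day works out to roughly 230 MB/s of sustained processing. Below is a hedged sketch of the kinds of corrections mentioned, a simple radial (pincushion-style) remap plus low-light spot extraction, built from generic scipy operations. The distortion coefficient, filter sizes, and threshold are illustrative placeholders, not calibrated values from any real instrument, and the deskew step (an affine remap) is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def correct_radial(img, k=1e-7):
    """Undo a simple single-term radial (pincushion/barrel) distortion by
    remapping each output pixel back to its distorted source location.
    k is an illustrative coefficient, not a calibrated value."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    scale = 1.0 + k * (dx**2 + dy**2)     # radial scaling grows with distance from center
    return ndimage.map_coordinates(
        img, [cy + dy * scale, cx + dx * scale], order=1, mode="nearest"
    )

def reduce_field(img, sigma=1.0, thresh_sigmas=5.0):
    """Low-light cleanup and reduction: smooth shot noise, subtract a
    coarse background, then keep only bright candidate spot pixels."""
    smoothed = ndimage.gaussian_filter(img, sigma)
    background = ndimage.median_filter(smoothed, size=15)
    signal = smoothed - background
    cutoff = thresh_sigmas * signal.std()  # crude global noise estimate
    peaks = np.argwhere(signal > cutoff)   # coordinates of candidate spots
    return peaks, signal

# Synthetic low-light frame: Poisson-like noise plus one dim spot.
rng = np.random.default_rng(0)
frame = rng.poisson(lam=3.0, size=(512, 512)).astype(float)
frame[200:203, 300:303] += 40.0
peaks, _ = reduce_field(correct_radial(frame))
print(f"kept {len(peaks)} candidate pixels out of {frame.size}")
```

The point of the sketch is the shape of the pipeline, not the specific filters: heavy per-pixel geometric and noise corrections happen close to the instrument so that only a small, reduced dataset has to travel to the cloud for the identification computation.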