All right, thank you everybody for joining today. Appreciate your attendance in person and online. This is the Ditch the Glitch panel, and we're going to talk to you about the ever-exciting topic of latency, which many of you will not know about, and if you do, you probably misunderstand it. So we're going to go into that. Everybody loves to talk about broadband speed, speed, speed, you know, gigabit-per-second symmetric, and that, as it turns out, isn't necessarily the most important thing. So we've got a great panel for you today. I am Jason Livingood from Comcast; I'm the Vice President of Technology Policy, Product and Standards. And the panelists that we have today: first, we have Dominique Lazanski. She works on international governance and cybersecurity policy, and has become an expert on things like mobile device security and standards development. She works on a variety of projects with her company, Last Press Label, and just finished her PhD, so congratulations on that. She's also an amateur biathlete and a rollerski and Nordic skiing coach. So as you'll find out, everybody on this panel is athletic in some way, shape, or form, including myself.
Next up we have Nick Feamster, who's the Neubauer Professor of Computer Science and the Director of Research at the Data Science Institute at the University of Chicago. That is a long title, quite a mouthful. He focuses his research on many aspects of computer networking and networked systems, particularly network operations, security, and censorship-resistant communication systems. He also wanted me to add that he's a distance runner and has completed more than 20 marathons, which is quite an achievement. So kudos.
Next we have Ermin. Ermin Sakic is from Nvidia; he's a senior software engineer there. It's surprising to see Nvidia not in the AI presentation right now, which is good, they're here. So take that for a sign of what things are meaningful, I think, in terms of networks and technology at Nvidia. He works on QoE improvements and network routing for their growing cloud gaming service called GeForce Now, and on their AR/VR streaming stack, which is going to be the next thing that really grows rapidly for them. He's also been at Siemens, where he worked on real-time, time-sensitive networks and real-time applications, and has a PhD from the Technical University of Munich. In his spare time, he runs half-marathons, so he's not quite up to full marathons yet.
He also reads science fiction. And last but not least, we have Mike Conlow. He's the Director of Network Strategy at Cloudflare. He helps measure Internet performance and build their infrastructure around the world, advocating for things to help grow the Internet, which is always good. Before that he focused on political technology, both at the Obama campaign in 2012 and as its Deputy CTO. In his spare time, he plays ice hockey and works on many hobby projects, including tending his community garden, which is great.
So we've got an ice hockey player, a couple of runners, a biathlete, and a cyclist. Lots of fun stuff there.
So let's dive into the slides. So this is Ditch the Glitch. I think you recognize things like that, but you might not recognize my dog; that's my dog [name]. Got to know her even better during the pandemic. And the thing is, even if you had great Internet service at home during the pandemic, you probably still had Zoom problems, you probably had little glitches. And the reason was not a lack of bandwidth. That's a clue to our panel.
So let's talk through exactly what latency is. Latency is an important aspect of Internet performance, and essentially it is very different from what people understand it to be, and I'll explain that in another slide. But really, the point is that everything we know about Internet performance is mostly wrong. We used to think that more speed, more bandwidth, solves all ills on the Internet. And it's true, speed is very important, and it's always been a good proxy for quality of experience for users. But it is not the only key performance factor. As we enter what I sort of call the era of bandwidth abundance, or the post-gigabit era, it is no longer the primary factor in user quality; that factor is latency. Almost every problem that people experience these days is latency-related, but people don't really understand what that is. Latency is sort of a nerdy kind of word; it may be better, I think, to just think delay. No one likes to wait. No one wants to wait for content, no one wants to wait for something to load on your screen, and if you're doing something interactive in real time, you don't want to wait for those things. Delay is really the most important thing. Today, when policymakers in particular talk about latency, they talk about what's really idle latency. That's when you run a round trip, you know, like a ping packet, from point A to point B, and you see how long that takes. To use an example, fiber or DOCSIS networks might have idle latency on the order of 10 to 15 milliseconds, and we really think, wow, one type of network is 10 milliseconds of idle latency and another is 15, that's a huge difference, right? It doesn't matter, really, because working latency, which is the next one on the slide, is on the order of hundreds of milliseconds of delay, even thousands. So it could be half a second or a second of delay.
And if you think about a lot of web pages and applications doing multiple round trips, you then have all of that delay stacking up and causing real delay, to the point that it's a several-second delay in many cases. And that's why you notice occasional glitches in games or video conferences or what have you. So the key thing is sort of reframing what we think of as latency. And, you know, this is hopefully about as nerdy as it's going to get: this is a distribution of delay, and what the sources of the delay are. That idle latency, that basic ping that someone runs, is really just testing the first two parts. The propagation delay, that's the speed of light on a fiber, and you can't go faster, unless you're maybe in outer space, which is one of those cool kinds of things, but you can't go faster than that. The next is the access protocol, like PON or DOCSIS and so on; there's a certain amount of built-in delay on those protocols, but those are what they are. Clearly, the highest that you would see is maybe geostationary, high-orbit satellite, where you might see 100 milliseconds or more. But for most technologies, those are a small, fixed part of the budget, on the left side of the chart. Really, the big one is the queuing delay: what happens at the Internet protocol layer in a computer that's trying to connect to the Internet and send packets, or in your home router, or your Wi-Fi network. That is where, from a return-on-investment standpoint, the network community has been focused, and that is where all of the problems from a user standpoint occur today. So that's important. At the IETF, which is the Internet Engineering Task Force, they've been focused on this new notion of dual-queue networking, and that's a new thing.
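To make the scale of that gap concrete, here is a back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions, not figures from the slide: a 500 km fiber path, a 5 ms access-protocol overhead, and 250 ms of queuing under load. The point it shows is that propagation and protocol delay are a small, fixed budget, while queuing can dwarf them.

```python
# Back-of-the-envelope latency budget (illustrative numbers, not from the slide).
C_FIBER_KM_PER_MS = 200  # light in fiber travels at roughly 2/3 of c, ~200 km per ms

def propagation_ms(path_km):
    """One-way propagation delay over fiber."""
    return path_km / C_FIBER_KM_PER_MS

access_protocol_ms = 5  # assumed built-in delay of PON/DOCSIS framing
idle_rtt_ms = 2 * propagation_ms(500) + access_protocol_ms  # assumed 500 km path, both ways
queuing_ms = 250  # bufferbloat under load can add hundreds of milliseconds

working_rtt_ms = idle_rtt_ms + queuing_ms
print(idle_rtt_ms, working_rtt_ms)  # 10.0 260.0 -- idle looks great, working does not
```

The 10 ms idle number matches the fiber/DOCSIS example above; the working number is what users actually experience under load.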
And the first, I think, important aspect of this to understand is that with any end-to-end path, from, say, your house out to content at Google or Cloudflare or Nvidia or wherever, somewhere on that path there's a bottleneck link, the link that is the most constrained. That link is where this queue is going to build up, where the delay will become a factor. And so that is where the IETF and other networking organizations have focused their research and development, to try to fix that. The great part about the standards, including active queue management, which is something networks can deploy today, is that they can be incrementally deployed; it's not like you have to wait for everybody to support it before you can deploy, which is always good. And there's also very loose coupling: a network that deploys dual-queue networking or AQM can do so without having to do any coordination in particular with application vendors. Very different, I think, in contrast with sort of 5G network slicing, where there's a high degree of coordination, or other things where that happens. And I think in general, the way that the Internet architecture succeeds is by having loose coupling.
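To illustrate why a queue at the bottleneck link is the problem and why AQM helps, here is a toy simulation. This is a minimal sketch under invented parameters, not real CoDel or the IETF dual-queue standard: a link that can send 100 packets per tick receives 120 per tick, and a plain FIFO lets queuing delay grow without bound, while a simple AQM that drops packets whose queuing delay exceeds a target keeps delay bounded.

```python
from collections import deque

# Toy bottleneck-queue simulation (invented numbers; not real CoDel or dual-queue).
LINK_RATE = 100    # packets the bottleneck link can transmit per tick
ARRIVALS = 120     # offered load per tick: 20% over capacity
TARGET_DELAY = 5   # queuing delay (in ticks) the AQM tolerates before dropping

def run(aqm, ticks=200):
    """Return the worst queuing delay seen by any delivered packet."""
    q, delays = deque(), []
    for now in range(ticks):
        q.extend([now] * ARRIVALS)            # enqueue packets stamped with arrival time
        sent = 0
        while q and sent < LINK_RATE:
            delay = now - q.popleft()
            if aqm and delay > TARGET_DELAY:  # AQM: drop stale packets instead of sending
                continue
            delays.append(delay)
            sent += 1
    return max(delays)

print("tail-drop worst queuing delay:", run(aqm=False))
print("AQM worst queuing delay:", run(aqm=True))
```

Run it and the FIFO's worst-case delay keeps climbing with the length of the run, while the AQM variant never delivers a packet older than the target: the same link, the same load, very different latency.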
And then from a net neutrality standpoint, since a lot of you are policy wonks, when you think about the policy aspects: it is the applications themselves that are marking their traffic, their packet headers; it's not the network doing the marking. That's an important distinction from a net neutrality standpoint. At the same time, any app provider that wants to mark their traffic can do so; it's not that they have to get permission, or have to agree to some API terms of service, and so on. So that's another big aspect there, and a little bit of a difference from some existing arrangements. As I mentioned, very quickly: Comcast has a field trial of low latency networking out in practice right now, and since the late summer we've seen basically a 50% reduction, in some cases more, in working latency, which is amazing. It really is transformative from a cloud gaming standpoint, and we think it's an important enabler for AR/VR experiences, and really any application you can think of that involves interaction between a user and a screen, or a user and an AI device, or something along those lines. So with that, we'll go to our second slide deck.
Question.
Yes, quick question.
Is this an open standard, or...?
It's a totally open standard, yes. Anybody can embrace it. The first thing that many people have deployed over the last few years, and are deploying now, is active queue management; anybody can do that. And then the IETF's new form of dual-queue networking is a totally open standard: the first three RFCs were published last year, and there are a few more being published this year. Totally open, anybody can adopt it, which is great. And it's starting to get built into open source.
I noticed you didn't address asymmetric systems, older 2.0 and 3.0 systems, where you can't have as much upload. So what happens when they reach that upload limit is the latency dramatically increases, to like a hundred or a thousand milliseconds. I think 4.0 is going to be symmetrical; is that going to resolve that issue?
Even with symmetric gigabit service, you can still have high working latency, because there's a bottleneck somewhere. Folks like Nick might mention the home network being one of those places, but even the gateway outbound on a symmetric service can in many cases be the source of that latency, because you've got all these different types of cross traffic competing for network queuing. So next up, we've got Dominique.
Right. Thank you, Jason. And thank you to all of you for coming here instead of the AI panel. I'm going to take you on a quick tour of net neutrality regulation. Despite what Jason just said about the new IETF standard being net-neutrality friendly, it's important to understand that there are very few technologists in policymaking who understand how things work and understand emerging standards. And so a lot of what we're talking about is still going to be regulated, in the EU and the UK in particular, and also in the US, as you're probably more familiar with, under general net neutrality rules. I'm from the UK, so I'm going to spend a little more time on Europe and the UK. But it's a quick tour, hopefully in five minutes. So I just wanted to point out that in the European Union, the net neutrality regulation is still in force in all of the European countries, except for the UK, which had Brexit, which is why it has its own net neutrality regulation. I just wanted to remind you about this, because basically this is from 2015, and this particular paragraph talks about traffic management from a technical point of view, as distinct from preventing content or other kinds of information from getting through, in the service of net neutrality. It's Article 3, if you're interested, for the policy wonks. And then of course the European Commission last year had a consultation; I've just got the header of the consultation on this slide. There was a "fair share" campaign by various telcos in Europe, basically another version of sender pays, to ask platforms to pay for build-out and content. Which is really interesting, because I think that's a direct result, as probably many of you know, of heavy-handed regulation; freeing up regulation would let us create more innovation around contracting and working with different parties.
But the fact of the matter is that this is what happened last year: the European Commission had three takeaways, which you see on this slide: we need more innovation and efficient investment, leveraging the single market, and securing our networks. And the Commission, for those of you who may not be watching this, has one more year left of a five-year term. There will be European elections next year, and this will probably uproot and change all the work that's been done, but also change the direction, because, as the world is going, it looks like most of the people elected next year will come from centre-right or right parties. So we'll have a completely different approach; the Commission is kind of slowing down its work at the moment. In the UK, this is where it gets interesting. In the UK there was a consultation off the back of Brexit, and after multiple consultations the results in October basically turned over to industry the ability to offer premium services, specialized services, and different types of traffic management, both technical and for managing different users, like gamers versus other things, where they would be able to offer different rates depending on different people who want different things, and zero rating. I think the important thing here is what Ofcom, the regulator in the UK, has said: we are unlikely to have concerns where ISPs are taking reasonable approaches. So they're going to take their hands off and look for innovation in the market. So, one last slide, trying to be quick. Okay, where does that leave us with latency?
Well, I think, just hearing from Jason, and we'll hear from our other more technical panelists too, there's a clear misunderstanding of what latency is. Policymaking really doesn't understand how traffic management, latency, and the IETF standards and other standards can combine to actually provide more efficiency; I think all of you, and me, are just learning about it for this particular panel. My proposal, obviously, would be to simplify regulation so that innovation can occur, and so there can be different partnerships, both with academics and with other companies as well. Obviously, in the US, all of you are awaiting more discussion on net neutrality coming up very soon, right? And again, in Europe this will change significantly in the next year. But in the UK it's going to be interesting to see how the recommendations that came out a few months ago will play out. So that's it.
Thank you. Next we have Mike, all the way at the end.
I'm going to do a little bit of an overview like Jason, but I think I will cover different ground. So back in the early part of the Internet, if you were accessing the website of a company like mine, your traffic might literally go to the headquarters of the company, to a server in the basement, and then come back to you. A lot of things have changed, but at its core, a lot of what we do on the Internet is requests: your browser issues a request, it's literally called a request, for the website, and that's going to go out from your router, it's going to go through a couple of routers at your ISP, and then eventually it's going to, hopefully, find the content or service that you requested at a data center. There are some places in the US, in the Midwest and the South primarily, where it could actually go across a couple of states before it reached a kind of regional data center that had a copy of the content you were looking for, and there are a bunch of places in the world where it could travel across countries or across regions before it found the data you were looking for. So that's the starting point here. And then what has happened over time is that CDNs have stood up and cached copies of that content closer to the end user. So now your request for the content goes out from your home, it hits the same router or two at your ISP, but hopefully, and we're getting better at this, it can find the content in your local metro area. And that's the idea: just like going to the supermarket, you want it to be convenient. You want to be able to find as much content and as many services as possible, as close to the user as possible, literally, so that the data doesn't have to travel as far and doesn't take as long. That's the role that CDNs play.
I mean, it's not just about performance; there are security and reliability aspects to it too. But performance is a big one, and it's all in service of reducing that latency. Sometimes we fall into this trap of: AR and VR need really good latency, gamers need really good latency, and operating system uploads or downloads need really high bandwidth. I want to try to make the case that almost everything we do on the Internet is latency sensitive.
So this is a load of cnn.com. I could have done weather.com, which is possibly the worst-loading website.
[laughter]
But cnn.com will do the job. So if you see in here, the first request that it makes: I clicked on cnn.com, but the web page is hosted at www.cnn.com, and so it was 300 milliseconds before that 301 redirect, the redirect from cnn.com to www.cnn.com, got back to my browser. So now I know that to load www.cnn.com, I need to get the HTML file. So I go back out to a server and get that file, and it comes back to me. Now I learn that I need to download a bunch of JavaScript files, and so it goes back out, gets the JavaScript files, brings them back in. Somewhere in there it has enough data to load the web page. But there's a whole bunch of blocking round-trip requests in there, and that really adds up; the more we can compress that latency, the faster we can get this web page to load.
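The stacking described above is easy to put numbers on. Here is a sketch using the 300 ms round trip from the trace; the three-step list is my simplification of the actual waterfall, which has many more requests.

```python
RTT_MS = 300  # round trip observed to the origin in the cnn.com trace

# Each step blocks the next one, so their round trips stack up serially.
blocking_steps = [
    "301 redirect: cnn.com -> www.cnn.com",
    "fetch the HTML document",
    "fetch the blocking JavaScript files",
]

wait_ms = len(blocking_steps) * RTT_MS
print(wait_ms)  # 900 -- nearly a second of pure waiting before the page can render
```

With a real page the number of blocking round trips is often much larger, which is why cutting the round-trip time speeds up the whole chain.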
So let's see what this looks like with data. This is data from a great paper by some MIT researchers using the FCC's Measuring Broadband America program, which is itself a great data source. It shows that above about 20 megabits per second, the web page does not load any faster, no matter how much more throughput, more bandwidth, you apply to it. I'm not going to use the term speed; I have an allergic reaction to using the word speed to describe bandwidth. But this is a real data chart that shows that your gigabit connection is not helping you load web pages any faster. Compare that to the chart of what happens when you reduce latency, which is the same effect I was showing for cnn.com. It is a linear relationship: the more we can reduce the latency, the faster the web page loads, for the exact reason we looked at with cnn.com. And that relationship will hold true down to almost zero latency, and speed up the web.
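A toy page-load model shows both curves at once. This is a sketch with made-up parameters (ten blocking round trips, a 2 MB page), not the MIT paper's model: load time is serial round trips plus transfer time, so extra bandwidth has diminishing returns while latency improvements pay off linearly.

```python
def page_load_ms(bandwidth_mbps, rtt_ms, round_trips=10, page_mb=2.0):
    """Toy model: serial blocking round trips plus raw transfer time."""
    transfer_ms = page_mb * 8 / bandwidth_mbps * 1000
    return round_trips * rtt_ms + transfer_ms

# Going from 100 Mbps to 1 Gbps barely moves the needle...
print(page_load_ms(100, 50), page_load_ms(1000, 50))  # 660.0 516.0
# ...while halving the round-trip time cuts load time far more.
print(page_load_ms(100, 50), page_load_ms(100, 25))   # 660.0 410.0
```

In this toy, a 10x bandwidth increase saves 144 ms, while halving the RTT saves 250 ms, which is the shape of the two charts.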
So that's it. I'd like to close by saying that any technology we have that can reduce latency and improve Internet performance is good to have. I think we're at the point in the United States where there's about 10% of Americans who don't have access to a broadband connection, and about 20% of Americans who aren't using the Internet; it's probably a little bit more than that whose Internet connection is kind of not good enough. As I start to think about how we apply policy to this, how we encourage better latency, I'm thinking about how we can make it so that, as all Americans get enough throughput, they also have a really fast Internet connection in terms of latency. I'll leave it there.
Thank you. Next up, we have Ermin.
Thanks, Jason, and thanks for inviting me to the panel. So my name is Ermin Sakic, and I am with Nvidia, where I work on the cloud streaming service, specifically gaming and AR/VR streaming applications. I will give a perspective on this ever-growing service, how latency affects us, and how other aspects can impact the streaming experience and adoption, and I'll show some data on that. So, for those who are unaware of what cloud streaming is, the basic idea is that instead of having your game or some heavy workload execute locally, perhaps on your MacBook Air, some lightweight device, maybe your Android or iOS phone, you're basically offloading the computation, such as rendering of game frames at perhaps 4K 120fps if you want to do so, away from your local device, putting that workload in a server farm out there, and basically just letting those frames be transmitted over to your local home network. You are doing this over the Internet, which means it's an unreliable link: it might work well in some conditions, sometimes it does not, and we have to adapt to that. But the basic premise here is that you don't need to continuously upgrade your client hardware in order to render the latest content that might be interesting, and you can also save on battery life, for example, because you don't need to execute complex computations locally anymore. That, again, provides for thin clients, and for deployment of complex applications on thinner clients than we might have with today's AR devices, for example; you don't need to carry a tethered battery with you. So we support different apps, with a large library of games. And there's also CloudXR, a library which basically allows you to do the same for stereo streaming for XR, so AR and VR. And GeForce Now is the cloud gaming service that I'm talking about here.
It's already supported in over 100 countries, and we have over 25 million users so far, and we host our own data centers to minimize the latency aspect of it. If you think about an interactive service, any kind of input that you expect a response to, or motion, latency will affect your experience, and so you want to put those data centers as close to the user as possible. So we are distributed and have over 30 physical data center deployments around the world, and each of these will have at least two uplinks, optimized for peering with the ISPs. We also have points of presence at different Internet exchange points to minimize latency. And I will show here how latency can impact user experience. What we do after every single session, whenever a user ends a game, is they get offered an optional rating step where they can rate the service. And we do see a significant inverse correlation here between increasing latency and a drop in the ratings. I'm showing here, on the x axis, the round-trip delay, which is basically our metric for measuring end-to-end latency, specifically the 99th percentile. As it rolls over to 400 milliseconds, the expected ratings drop by 30%, which basically indicates that users don't like it, they rate it badly, and might even jump away from the service if we end up in that region of very high latency. Fortunately, the majority of our sessions still stream at less than 100 milliseconds of 99th-percentile latency; otherwise we wouldn't be in business with this service. But there's still a really long tail here on the right: the distribution shows that there's still a bunch of sessions that end up with high latency numbers, and this usually comes from poor networks and bufferbloat scenarios that hit us and result in high latency.
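The inverse relationship described above can be sanity-checked with a quick correlation. The numbers below are synthetic, shaped only to match the stated trend (ratings roughly 30% lower by 400 ms p99 latency); they are not Nvidia's data.

```python
# Synthetic latency buckets and average session ratings matching the stated trend.
buckets_ms = [25, 50, 100, 200, 400]
avg_rating = [4.6, 4.5, 4.3, 3.8, 3.2]  # ~30% drop by 400 ms, as described

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(buckets_ms, avg_rating))  # strongly negative: higher latency, lower ratings
```

On this shaped data the coefficient comes out close to -1, which is what a chart like the one described would show.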
For us, latency is also not just about networking; there's system latency involved. I won't go into each of these boxes, but basically each of them adds latency to your end-to-end flow. On the server, for example, you need to capture the frame from a GPU, you need to encode it, packetize it, and send it over the Internet. On the client side, you have to actually reassemble that frame, decode it, maybe upscale it, and then present it on your display, which works with a variable refresh rate, and that refresh rate adds some latency. And in between all of that you have the network, where we transmit packets in both directions, because we also care about the inputs to the server; it's interactive. So all of this adds up to latency. What we can optimize well are the server and the client parts, the edge parts, but the network in between is something that is out of our control. That's something we can adapt to reactively when we observe network impairments, but we basically rely on infrastructure providers to offer a decent or usable service here. On the server and client side, we do things such as 240 FPS streaming, to minimize the wait and show the latest available frames, or things like frame pacing, instrumenting the game engine so we can synchronize the monitor refresh timing with the frame render timing on the server side, and things like that. But the network is what adds up while we're waiting. On this next, last slide here, I've basically shown a simplified example of a connection. So we might have a streaming server, like Nvidia, on the left, and some other service that's sending data into your home network over the Internet, and on the very right you see your home network, which could be connected via Wi-Fi or 5G, for example. And once we start sending traffic, we have multiple points in the network that impact us.
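As a rough illustration of how those pipeline stages stack up, here is a hypothetical per-frame budget. Every number is invented for illustration; a real breakdown depends on the codec, the display, and the network, and is not from the slide.

```python
# Hypothetical per-frame latency budget for cloud game streaming (invented numbers).
budget_ms = {
    "capture + encode (server GPU)": 8,
    "packetize + send": 2,
    "network transit + queuing": 25,   # the part the provider cannot control
    "jitter buffer + decode (client)": 8,
    "upscale": 2,
    "wait for display refresh": 8,     # worst case around a ~120 Hz refresh
}

total_ms = sum(budget_ms.values())
print(total_ms)  # 53 -- and queuing alone can balloon when bufferbloat kicks in
```

The takeaway is the one from the slide: the edge stages can be engineered down, but the network term in the middle is the one that blows up under load.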
So one is the packet losses that may come from transit peering. If there's a bottleneck, say the Super Bowl is being streamed during peak hours and a lot of users on your ISP are watching it, there's a high chance you might encounter worse performance than usual if there's a bottleneck link there in peering. And on the right side, as we approach the edge, where we stream data into the user's network, we start seeing issues such as the bottlenecks that Jason mentioned before, which may be related to your plan: you're trying to download more data into your home network than your plan actually provides, so you basically start buffering data up in a queue; this adds latency and is what we usually call bufferbloat. In order to get around that and minimize latency, there are mechanisms such as AQM that can be deployed. And on the very right side, we can also encounter latency and packet loss because of scheduling delays, where your devices connect to your gateway over Wi-Fi, for example, or to a gNodeB in a 5G context; any interference or scheduling delays can also impact your performance. We need to address all of these. Now, if we cannot, and we lose some packets and cannot assemble the frames, then we might need to do retransmissions, which are super costly, because you need to again talk to your server and get a new frame to reference for the future frames, and this is very latency demanding. We can also do error concealment, but that results in picture distortion, and it's not an ideal case. And this is the last slide I will show: this is just an example of a game called Fortnite streamed under the exact same bandwidth conditions, on the exact same network.
The only difference between the screen on the left and the right is that on the left we don't have an AQM that's latency aware; we don't have an active queue management mechanism that ensures that our latency-sensitive service gets low latency. Whereas on the right we have that enabled, with L4S. So what you see on the left is a lot of stutter and jumps and frame skips and decoder skips, making for a very unplayable or unenjoyable experience, whereas on the right you see a smooth stream, and the latency number on the top right: you see the spikes in red on the left screen and no spikes on the right. This is basically what we show to the users as an indicator that they have latency issues, among other indicators. Yep, and with that, I will close out; I do have a couple of backup slides in case we get to them later.
Next up is Nick.
All right, great. So thanks. As Jason introduced, I'm a professor of computer science at the University of Chicago, and I work in network measurement. I've actually been working on designing, building, and deploying measurement systems since about 2011 or 2012, since the early days of the Measuring Broadband America program. We had this system called the Broadband Internet Service Benchmark, or BISmark, that we built to measure throughput way back when speeds were single-digit megabits per second downstream a lot of the time, and DSL was still fairly prevalent as an access mode. And that project has actually continued; we're still working in the same area today. But I'll tell you about the arc of that project and how, sorry to use the word speed, it's really evolved beyond just throughput, speed in the traditional sense. I should note, I know this panel is on latency, but I think even in the 2012 days it became quickly apparent to me that even measuring throughput is not a straightforward exercise. Okay, enough said on that, I only have five minutes, but that certainly was an interesting journey in and of itself. I think it was around 2014: some of the folks who were heavily involved in the Measuring Broadband America project at that time, Walt Johnson being one of them, came to me and said, hey, I think in your work you might consider doing a web test, because web page load time may actually stop correlating with throughput at some point; there's a lot of discussion about the role of latency, et cetera. And Mike, I don't know, that might have been my paper you cited, actually. Around 2014 or 2015 we had a paper; we actually helped the SamKnows folks develop their web speed test, and deployed one of our own, and saw that essentially web page load time stopped improving around 20 megabits per second downstream.
So that kind of confirmed the hypothesis of folks like Walt. That was our first exposure to latency as sort of a primary performance metric, and that was almost eight or nine years ago now. And I think the story has really continued from there and continued to evolve. Around that time, maybe a year or so later, we were approached by some folks at the Wall Street Journal who said, hey, you know, we're getting a lot of people calling us, complaining about their video performance, and they're getting advice to upgrade their speeds, their connections, and we're not so sure that speed correlates with video streaming quality, or video-on-demand quality, I should say, after a certain point. So it's kind of the Walt Johnson story all over again, except, I guess, from the journalist and consumer side, and a different application. And that was an interesting journey, because we took on the question of: how does streaming video-on-demand performance correlate with speed, and when does that correlation sort of cease to exist? And that turned out to be a very technically interesting question, because if you're not the video service provider, figuring out things like frame rate, resolution, startup delay, et cetera, is actually a little bit challenging to do from network traffic. We thought, oh, this won't be too bad, we'll just look at the DASH protocol, this should be straightforward, except, thanks to folks like Jason and others, a lot of network traffic at the time started to become encrypted. And so that turned out to be a really interesting technical problem: how you infer things like video-on-demand performance from encrypted traffic.
We did manage to crack that problem and do another deployment. Since you can't see inside the packets, you bring in some AI: we deployed some machine learning models to look at encrypted traffic in about 60 homes, mostly journalists in the New York City area but across a wide variety of ISPs, looking at how the throughput we could measure from those homes correlated with video quality of experience for video on demand. And lo and behold, we saw the same kind of trend, except the plateau wasn't at 20 this time; at about 100 or 200 megabits per second you could really start to see a plateau in the things users cared about: startup delay, resolution, frame rate, et cetera. And that brings me to chapter three, which is right now. With the deployment of fixed 5G, the question has certainly come to the fore again: how does the performance of these kinds of networks compare or relate to other modes of access? As part of that, all the things I've talked about led up to not only our continuing research in this area, but a commercial venture we've started, where we're also exploring what the market looks like across different modes of access, and there are a lot of interesting results there related to latency under load and responsiveness in the fixed 5G space. For this audience in particular, I'll say that it would certainly appear that some of those fixed 5G providers are showing significantly higher latencies and latency under load, and lower responsiveness, than the wireline providers. So it's really incumbent on us to be looking at the metrics we're hearing about in this panel as part of the whole user experience package. Okay, just a couple of closing notes; I'm at time, I think.
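The "responsiveness" metric mentioned here has a concrete formulation in the IETF's responsiveness-under-working-conditions draft (the basis of Apple's networkQuality tool): round-trips per minute, or RPM, computed from round-trip times measured while the connection is saturated. A minimal sketch, with made-up RTT samples purely for illustration:

```python
def rpm(working_rtts_ms):
    """Responsiveness in round-trips per minute (RPM): 60 seconds
    divided by the mean round-trip time measured while the link is
    saturated (the "working latency")."""
    avg_s = sum(working_rtts_ms) / len(working_rtts_ms) / 1000.0
    return 60.0 / avg_s

# Hypothetical numbers: a wireline link whose RTT stays near 20 ms
# under load versus a fixed 5G link that climbs to ~200 ms.
wireline = rpm([18, 20, 22])      # -> 3000 RPM
fixed_5g = rpm([150, 200, 250])   # -> 300 RPM
```

The point of dividing into a fixed time budget is that a tenfold increase in working latency reads directly as a tenfold drop in responsiveness.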
One of the things I've learned from my journey in this space over the last 10 or 12 years is that applications evolve and network infrastructure evolves, and certainly we're seeing right now that measuring performance along multiple dimensions is extremely important. The needs of consumers continue to evolve with applications, you know, VR, the Vision Pro, and other things. These are things that maybe we could have predicted, but we might not have been able to predict the specific ways to measure the network to assess what user quality of experience really is. So I think it's incumbent on our community to design measurement platforms and suites that are extensible and open source, and that also give us access to public data so that we can really see how these networks are performing.
Great, thank you. So we've got a little bit less than 20 minutes left. I've got some directed questions I'm going to ask our panelists, and then we'll take questions from the audience. The first one is about measurement of this problem, and I think Mike and Nick are really the authorities here; they've done a lot of measurement over the years. I know Cloudflare has a good speed test, and that data set is now, I think, being integrated into the M-Lab data store, which is interesting. Cloudflare also has something called Radar, a website where you can look up different networks and their quality and so on. And Nick, I know you've looked a lot, particularly lately, at Wi-Fi networks being the bottleneck in certain cases. So a question for both of you: what interesting things have you seen? Nick, you mentioned one of them being the difference between wireless and wireline networks. But what would be interesting from a measurement standpoint, since these days mostly everybody is just aware of the speed test and really nothing else?
You mentioned the Wi-Fi bottleneck issue; thanks for the opportunity to talk about that. Another thing we've done over the past four or five years is develop tests that concurrently measure the user's Wi-Fi alongside the speed test that's running, essentially off the home router. Obviously the situation is going to depend on the home, but I think that's actually kind of the point: a user's home network setting can have significant effects on their experience. I don't have the specific statistic at hand right now, but it's roughly what you would expect as far as Wi-Fi bottlenecks. I will say that as part of our work we've done a bunch of deployments in hundreds of homes across the United States in the last five years, and once speeds get above about a gigabit per second, it's almost exclusively the case in our measurements that the user's performance bottleneck is actually inside the home, for throughput I'm speaking about. Even at slightly lower speed tiers it's still, by and large, the Wi-Fi that's the limiting factor. And I think we could certainly talk a lot more about that.
I guess I'll just briefly say, since we're measuring the kind of application latency of somebody requesting our content or services and the round-trip time to get back to them: there are places in the world where a new PoP, a new data center that attracts that traffic, is just totally transformative to the end-user experience, because the traffic no longer has to leave the country or continent. In the US we're in a better position on that, yet still, here's an example. A couple of years ago the city of Montgomery, Alabama launched an Internet exchange, and the city put a whole bunch of time and effort, and a little money, not too much, into that Internet exchange. If it attracts enough networks with content and services on one side, and Internet service providers on the other side, it can make it such that somebody in Montgomery, Alabama can find their streaming, gaming, or anything else served in their local market, and their traffic isn't backhauled to Atlanta or Dallas. That happens in the South, and it also happens quite a bit in the Midwest, in Iowa and the Dakotas and some of those states where the traffic has to go a great distance. So I'd say one of the things we can measure is the latency by ISP, and then by state, to make sure that that's one of the things taken into consideration as we deploy new networks.
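The backhaul penalty in the Montgomery example can be roughly estimated from fiber distance: light in fiber travels at about 200,000 km/s (roughly two-thirds of c), so every kilometer of one-way fiber adds about 10 microseconds of round-trip propagation delay. The distances below are my own rough straight-line assumptions, not figures from the panel, and real fiber paths are longer:

```python
def backhaul_rtt_ms(km_one_way):
    """Extra round-trip propagation delay from backhauling traffic,
    assuming light in fiber travels ~200,000 km/s and the fiber path
    equals the straight-line distance (real paths are longer)."""
    return 2 * km_one_way / 200_000 * 1000  # out and back, in ms

print(round(backhaul_rtt_ms(260), 1))   # Montgomery -> Atlanta: ~2.6 ms
print(round(backhaul_rtt_ms(1000), 1))  # Dakotas -> a distant hub: ~10.0 ms
```

A few milliseconds sounds small, but it stacks with queuing and last-mile latency on every round trip, which is why a local IXP measurably improves interactive applications.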
Yeah, that's a really good point, especially given the focus on middle-mile grants, you know, through the BEAD program, and this need for content to get sort of hyper-local so that it's the shortest round-trip time. So moving on: Ermin, I wonder what you think about not just the customer impact when you're, say, gaming and you encounter high latency, but also what other kinds of applications become possible from your standpoint. We've got someone here from Apple; they have the Vision Pro headset, which is taking the world by storm. And with older generations from other companies, you know, people describe VR motion sickness and those kinds of things. So what do you think in terms of applications that become really possible now and in the future with ultra-low latency?
So, I mentioned before the AR/VR use case, where we're basically streaming the app itself, including some production tools. We have a product there called Omniverse, basically for 3D modeling, CAD-type applications, and streaming that with low latency, high throughput, and high fidelity, possibly also to an AR/VR-enabled device, an HMD like the Vision Pro, becomes possible once you can guarantee round-trip delays below 20 milliseconds, and ideally sub-10-millisecond latencies. Another interesting application is tele-driving. For example, there's a startup called Vay, which has recently demonstrated, I believe with Ericsson and Deutsche Telekom, that it's possible to have a kind of semi-autonomous solution that works around today's issues with autonomous vehicles. Basically their use case is: they deliver a rental car to your place and you pick it up, with a remote driver also in the loop who's capable of reacting, and who takes some responsibility, if the network supports it. I don't think the networks are quite there yet network-wide, but they have already demonstrated it on real streets in Europe, for example with the help of 5G gNodeBs. And as I said, cloud gaming is growing for us. One important observation there that we often make is that you don't always need the lowest latency at all times. What very much matters for a good experience is consistent latency. Jitter and latency variation make for a very unpredictable service and performance, like the motion-to-photon latency I mentioned before: when a user clicks or makes a keyboard input, they expect the feedback loop to kick in at a specific time in the future. That doesn't work if you constantly have spikes in your latency.
So even though your average latency might be low, any spikes will make for intermittently poor performance and a poor experience. For us, a consistent experience is very important. And again, it depends on the content: if you're playing, you know, a strategy game, some slower content, or even some kind of visualization application, versus streaming a first-person shooter, you might be perfectly fine having 60 or 70 milliseconds of fixed latency, versus having low latency most of the time but spiking up to 200 or 300 milliseconds in between.
So predictably low.
At least predictable, moderate latency is what makes for a decent experience; ideally we'd always have low latency and have it be consistent. Upstream latency matters too. This is in contrast to something like Netflix, which might be serving content that's slightly buffered but that you want to deliver as fast as possible, so it benefits from throughput. For us, upstream bottlenecks are also very important. We do see a lot of asymmetric deployments in terms of bandwidth in the US, which is a technology limitation to some extent, but having AQMs and mechanisms that enable low latency on the upstream is very critical for these kinds of applications. Because there's an inevitable...
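The consistency point Ermin makes has a standard quantification: RTP's interarrival jitter estimator from RFC 3550, a running average of the change in transit delay between consecutive packets, updated as J += (|D| - J)/16. A small sketch with invented delay traces showing why a steady 70 ms stream can beat a mostly 30 ms stream that spikes:

```python
def interarrival_jitter(delays_ms):
    """RFC 3550 interarrival jitter: a running estimate updated as
    J += (|D| - J) / 16, where D is the difference in transit delay
    between consecutive packets."""
    j = 0.0
    for prev, cur in zip(delays_ms, delays_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

steady = interarrival_jitter([70] * 50)            # fixed latency -> 0.0
spiky = interarrival_jitter([30, 30, 300, 30] * 12)  # low average, big spikes
```

The steady stream has zero jitter despite its higher average, while the spiky one accumulates a large jitter estimate, which is exactly the case where a fixed feedback loop, as in cloud gaming, breaks down.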
Yep, that's a good point to mention AQMs, which for those of you who don't know stands for active queue management, a queue management technique that works hand in hand with TCP congestion control, and network equipment can embrace it and deploy it now. Comcast has it deployed in our gateways and cable modems; other people can do it too. There's a lot of variety in AQMs: PIE is one of them, DOCSIS-PIE is an implementation of that, and there's CoDel and others out there. So these are things people can do in particular to try to improve their networks now. I'd like to go to one last question before we open up to the audience, and this one is really for Dominique, about the policy implications you talked about a little bit ago. I can tell you, as somebody who's been at Comcast for a while, my sort of baptism by fire in the tech world was the BitTorrent incident, back when we had a little bit of a problem with peer-to-peer traffic. As it turns out, as we understand it now, that was a working-latency problem that was fixable, just completely misunderstood and solved in a bizarre way, which wasn't great. Lots of lessons learned, in a difficult way, but as a result now we have net neutrality, so I guess you can really blame latency for that. But what are your thoughts about this, especially in the European and UK context? Is low latency a specialized service? Or, because it's applications embracing it and marking their own traffic, does it become just sort of an okay thing? Since we know that Europe and the UK are always sort of further along on the spectrum of regulation.
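The PIE algorithm Jason names is specified in RFC 8033 (with a DOCSIS profile in RFC 8034); its core is a periodic control-law update that raises the drop or mark probability when queuing delay exceeds a target or is growing. A simplified one-step sketch, with illustrative constants rather than the tuned values a real cable modem uses:

```python
def pie_update(p, qdelay_ms, qdelay_old_ms, target_ms=15.0,
               alpha=0.125, beta=1.25):
    """One step of the PIE control law (simplified from RFC 8033):
    drop probability rises when queuing delay exceeds the target or
    is trending upward, and falls otherwise."""
    delta = (alpha * (qdelay_ms - target_ms)
             + beta * (qdelay_ms - qdelay_old_ms)) / 1000.0
    return min(max(p + delta, 0.0), 1.0)  # probability stays in [0, 1]

# A queue whose delay keeps growing past the target drives the
# drop (or ECN-mark) probability steadily upward.
p, prev = 0.0, 0.0
for delay in [5, 10, 20, 40, 80]:
    p = pie_update(p, delay, prev)
    prev = delay
print(round(p, 6))  # -> 0.11
```

By pushing back on senders early, before the buffer fills, an AQM like this keeps working latency low without sacrificing throughput, which is the whole point of deploying it in gateways and modems.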
Yes, that's true, thanks. Just one comment I want to make on IXPs: I've been able to see in Africa, in some of the sub-Saharan countries, just going from zero to one IXP, how much of a difference that makes. We're lucky here, but it's unbelievable, especially around just getting content, and I wanted to highlight that because it's been incredible to see. But thanks. On the policy and regulation side, there are a couple of things I have to say, and I sort of hinted at it: the regulation is old, right? That's why there have been consultations in Europe and the UK recently. In Europe the regulation is old; this wasn't really even an issue back then, and now we have more VR coming on board. There was a working group on autonomous vehicles in Europe most recently; all of this is post-2015, after the directive. And again, each country within Europe puts the directive into its law in a different way. There are quite strict net neutrality laws in the Netherlands, for example, and there are more technologists there, I would say, who have worked on the law. So it's really quite a different approach depending on the country. However, I think most of this is seen as active technical traffic management. And as all of you work on developing more and different ways to manage traffic, engaging with policymakers is going to become ever more important, throughout global rollout but also throughout 5G and 6G rollout, which is coming; the 6G standards work is starting. And then again there's the impact and effect on not just Starlink but other satellite providers. I mean, I spent most of the summer using Starlink, right? So how is that going to impact me, or how am I going to be able to use that? There are multiple technology questions that have multiple policy implications.
But I think with a review like the one the UK undertook, and the approach it's taken, Ofcom has some pretty solid, quite good engineers internally, and they also liaise quite closely with their industry, because it's just a smaller country more than anything else. You're starting to see an understanding of flexible regulation that has, I don't like to say a level playing field, but a pretty light level of regulation where you really can do innovation. What comes to mind here is a content provider having CDNs, or working within a local community, and how they're able to contract financially and set up their terms; being able to provide that flexibility for each of those companies as the technology changes is really, really important. So the bottom-line answer is: I think net neutrality is what's doing the regulating, but I don't think it's necessarily appropriate.
So, questions from the audience. What do you want to know?
Oh, hi, thanks. I have two questions; tell me if I can ask both or just one. The first relates to what Ermin shared about AQM in the context of a home network, so a private network. Jason, you just explained that ISPs can deploy AQM in order to improve traffic management. How can the orchestration of different requests for different AQM classes, I guess, work in a way that keeps this loose coupling you describe between applications and networks? And how is that consistent, or not consistent, with different views of net neutrality? That's the first question. And if I may, the second question is more for Nick, around the comment you made about 5G fixed wireless networks potentially exhibiting higher latency under load. This morning Commissioner Carr said, whatever we do, we need to be tech-neutral. And obviously one of the questions is: how do you specify latency and load thresholds in order to preserve tech neutrality and yet get the performance that we want to get from gigabit networks? Thanks.
Got it. So I'll tackle the first one, about the marking and the way it works. There are two different IETF standards for how you mark: one is the ECN part of the packet header, and the other is the DSCP part of the packet header. Any application can mark; it's documented for the ECN part in three RFCs, and the other part, for DSCP, is coming soon; I think the Internet draft is probably on its 18th iteration. So really, any application can just apply those marks in the client, and then any network hop that implements the standard will put it into the separate queue. So it's totally open from that standpoint. And then the second question was around 5G fixed wireless and tech neutrality: you've noticed in your measurements that even though the bandwidth is good, the latency is substantially different and highly variable.
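Client-side marking of the kind Jason describes lives in the IP TOS/Traffic Class byte: DSCP is the upper six bits and ECN the lower two. A minimal sketch is below; note that DSCP 45 is the codepoint proposed for Non-Queue-Building traffic in the tsvwg Internet draft he alludes to, ECT(1) (0b01) is the L4S ECN codepoint, and whether the OS honors ECN bits set via IP_TOS varies (UDP on Linux typically does; TCP stacks manage ECN themselves):

```python
import socket

DSCP_NQB = 45   # proposed Non-Queue-Building DSCP (tsvwg draft)
ECT_1 = 0b01    # L4S ECN codepoint, ECT(1)

# DSCP occupies bits 7..2 of the TOS byte, ECN bits 1..0.
tos = (DSCP_NQB << 2) | ECT_1   # (45 << 2) | 1 == 181

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 181 on Linux/UDP
```

Every packet this socket sends then carries the marks, and any hop implementing the corresponding standard can steer it into the low-latency queue, with no coordination between the application and the network beyond the marks themselves.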
Yeah, I don't know, maybe I'm going to get ejected from the stage for saying this, but I think that phrase is ridiculous. That's like saying, oh, let's be tech-neutral, we should consider a carrier pigeon, and old-school modems, and DSL as well. If the performance metrics don't line up with things that deliver a good user quality of experience, then that needs to be front and center. I think there's a difference between saying let's not put our thumb on the scale for one technology versus another as far as regulation is concerned, but it's a completely different thing to pretend that these are one and the same type of technology delivering the same experience for users, because at the moment they're not. And sure, I'm sure there are a lot of smart people working on improving latency for fixed wireless technology, but I can tell you right now, from what I see, it's not there yet.
Next question. We had Harold in the queue.
So I'm going to ignore net neutrality entirely and focus on my other obsession, which is Wi-Fi and unlicensed spectrum. I was very interested to hear about something we all sort of generically know, which is the problem inside the home network, where increasingly it is, number one, impossible to find devices that don't have Wi-Fi capability, where the default is that it's on so it can report back to various motherships about how many times I run my washing machine or whatever; and then there's just the number of devices, like these pads and phones, that won't connect any other way. I'm curious: is there, number one, a way within the Wi-Fi protocols themselves, is it something where you go to WifiForward and say, hey, for Wi-Fi 8, can there be some kind of traffic load management that actually works to help reduce the latency as we move across the networks here? Or is this a case of, if people are going to insist on doing this, we're going to need bigger channel sizes, which has been the solution in Wi-Fi 6 and Wi-Fi 7, which means finding more unlicensed spectrum that can be used for Wi-Fi routers?
I think there are a few answers there. One: a lot of the Wi-Fi standards to date have been heavily optimized toward maximizing throughput, like many things, right? The current versions of Wi-Fi do have a notion of different priorities; there's best effort, video, voice, and background. But that's really just relative priority, quality of service, not really low delay. It is something that's in discussion for future versions of the standard, and I hope we'll see more alignment between the sort of marking that the IEEE uses and the IETF uses; that's sort of one of those challenges. But to your first question, probably the most important part is more Wi-Fi spectrum, more unlicensed spectrum. You could have a huge impact on everyday Americans' Internet performance by providing more access to unlicensed Wi-Fi spectrum. That's the primary way that every single device connects. Even if you think about a mobile phone, on our network with Xfinity Mobile, something like 80 or 90 percent of the time it's connecting over a Wi-Fi network, not over a 5G network. So I think all of that argues for a lot more unlicensed spectrum in the home as a really important aspect. Any other comments on Wi-Fi?
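The IEEE/IETF marking alignment Jason mentions is concrete enough to sketch: RFC 8325 recommends how DiffServ codepoints should map onto 802.11 User Priorities (0 to 7), which Wi-Fi Multimedia (WMM) then folds into its four access categories, Background, Best Effort, Video, and Voice. The entries below are a small subset I've chosen for illustration:

```python
# A few RFC 8325 recommended DSCP -> 802.11 User Priority mappings.
DSCP_TO_UP = {
    46: 6,  # EF (telephony)          -> UP 6, Voice
    34: 4,  # AF41 (multimedia)       -> UP 4, Video
    0:  0,  # Default / best effort   -> UP 0, Best Effort
    8:  1,  # CS1 (low priority)      -> UP 1, Background
}

def access_category(up):
    """WMM access category for an 802.11 User Priority (per 802.11:
    UP 1-2 -> Background, 0 and 3 -> Best Effort, 4-5 -> Video,
    6-7 -> Voice)."""
    return {0: "BE", 1: "BK", 2: "BK", 3: "BE",
            4: "VI", 5: "VI", 6: "VO", 7: "VO"}[up]

print(access_category(DSCP_TO_UP[46]))  # EF-marked traffic -> VO
```

As the panel notes, these categories only give relative priority over the air; they are not a bounded-delay service, which is why the low-latency work has to happen in the standard itself.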
In Europe we don't even have the C-band problem that you have here, and just releasing some of this spectrum would actually free up a lot more innovation, I think. It's a little bit of a shocker to see that that's still a discussion when I come over.
So we've got a couple more questions over here, but I think we only have time for one more. All right, we've got one final question.
You hit my button. I've been running a fixed wireless network for 25 years, so thanks for that. And the reason we haven't been scaling is because the FCC has been allowing all the spectrum to go to the licensed carriers; in fact, after the record $3 billion sale they made to Verizon, three months later Ajit Pai, their lawyer, who was the chairman, gave them a $5 billion grant. So you can't tell me that it's fair; they've been making sure that fixed wireless couldn't grow, because we already have the customers and corporate is doing its best to take it over. But you can't always blame the ISP either. Much of modern latency is old computer operating systems, 100-megabit Ethernet cards, old computers, improper placement of home routers. And of course we play the game with Ookla, where we do bursting so they get their speeds. And of course that doesn't really work, because it only uses available bandwidth, and the only time people run available-bandwidth tests is when they're gaming or streaming, so it shows extremely low, which is a way to show that it doesn't work. Really, industry programmers and regulatory agencies need to stop the pendulum swing of excessive control. Net neutrality is throwing 1934 laws onto Internet providers who were lightly touched before. There's something the FCC called for that simply makes sure you're not slowing down people between technologies, like Apple does with their texting to Android and things like that; that's the kind of stuff we need to work toward. And the people in the technical industry need to be making true assessments of speed, and looking at the home user's capabilities at all, because now the FCC wants to call a customer's inability to get their speed "digital discrimination," and they can sue us, so I could get a SLAPP suit that wipes out a community ISP, and that's one less competitor against the corporate giants. All right.
Any reactions or comments in our one minute?
Yeah, I'll just respond to that; it wouldn't be a panel without a little controversy. I actually enjoyed your comments, because as someone who does performance measurement, I see the outcomes and outputs, but your comments were really about the shape of the industry, the context and the inputs, what results in what we see in the outcomes, which is something I don't spend a lot of time thinking about. And, just to get back on my own soapbox, one of the things I hope is that measurements we can all agree are a good, representative set of what users are seeing from different providers can actually help everybody tell their story. Everybody seems to have a different story to tell, a different agenda, but I think maybe we can all agree that improving the user experience is a goal we'd all like to achieve.
Well, thank you to all of my panelists; I really appreciate your time and your travel today. Thank you also to our audience. I was very impressed: no one fell asleep, and that's a really big deal. Lunch is available outside, and we're happy to take any questions you may have afterwards. Oh, and I'm told there are salons later, and you can sign up outside of the North Room. See, I got that right. If you're interested, that's where you sign up. I don't know what they're discussing, but you can find out; I'm sure they're on the agenda. Thank you.