What should happen in case of a large number of added routes? #1746
Comments
We've discussed this topic in WICG/service-worker-static-routing-api#5 and have not concluded yet. I agree that we should mention the length/depth limit in the spec. Can we just state specific numbers in the spec, or should we avoid mentioning specific numbers and give some flexibility to implementers? Yes, Chrome sets those numbers as the length/depth limits, but I'm a bit unsure whether those numbers are really reasonable.
That seems reasonable to me.
Note that #1714 is trying to cover this case. It suggests the existence of a rule size limit and a depth limit, but it does not give details. There were WPTs checking rule size and depth in the public WPT repo, but I moved them to our internal tests because I believe the limit is Chromium-specific. There may be no WPTs ensuring the lower limit yet.
Specifying and testing a lower limit seems like a nice approach since there is indeed a webcompat concern.
@sisidovski asked me for my perspective on how this works in the web platform in general. The main guidance is given in https://infra.spec.whatwg.org/#algorithm-limits , which advises that by default there should be no limits. But since we have a good reason to have limits here, the ideal is to make them interoperable.

So the discussion of leaving flexibility to implementations for the upper limit is a bit scary from that perspective, as it could cause interop issues. If Chrome supports 256 and Safari/Firefox support 512, then it is pretty easy to make sites that work only in Safari/Firefox and not Chrome. So I think the ideal would be if all implementations were willing to pick a specific limit, and support everything in the range [0, thatLimit], for both the number of rules and the depth of rules.

If implementers have strongly different ideas on what the upper limits should be, and don't want to converge, then the second-best option would be a defined lower limit, leaving the upper limit implementation-defined. This doesn't solve the interop concern in general, as interop issues could still arise between [specifiedLowerLimit, maxUpperLimitFromAllBrowserEngines]. But it at least gives web developers who read the docs a target: don't exceed specifiedLowerLimit, and your code will work in all browsers.
Concrete numbers can be discussed later, but I believe the specification should mention that such limits exist.
Thanks @domenic for sharing the guidance and suggestions about limits.
Considering the current phase and usage, we'd like to pursue this option. As Domenic mentioned, only having a lower limit doesn't fully solve the interop concern.
Agreed. That said, we need concrete numbers as long as the spec mentions limits, right? I wonder if the numbers that Chrome is currently using (256 for the max number of rules, 10 for the max depth) are a good starting point to write into the spec. Chrome doesn't see any usage exceeding those limits so far.
For Firefox/Gecko a specific limit is fine and even desirable given the inherent resource utilization and latency trade-offs for ServiceWorker registrations, since Handle Fetch is on the critical path for page loads. That said, it would probably be good for any WPTs validating the limit to be structured to cover both "impl works with values at the limit" and "impl throws for values exceeding the limit". (The Chromium test currently only does the latter.)
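For illustration only, the two cases could be sketched roughly as follows. This is a hypothetical sketch, not an existing WPT: `LIMIT` is a placeholder for whatever number the spec ends up choosing, and a real test would register a separate service worker per case and assert on the install / `addRoutes()` outcome from the page.

```js
// Hypothetical sketch of the two cases such WPTs could cover.
const LIMIT = 256; // placeholder value, not a spec'd number
const rule = { condition: { urlPattern: "/foo/*" }, source: "network" };

addEventListener("install", (event) => {
  // Case 1 (worker A): exactly at the limit -- should succeed in every engine.
  event.waitUntil(event.addRoutes(Array.from({ length: LIMIT }, () => rule)));

  // Case 2 (worker B): one past the limit -- should fail, and the test would
  // assert the rejection/exception type the spec requires.
  // event.addRoutes(Array.from({ length: LIMIT + 1 }, () => rule));
});
```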
Created #1752 to address this issue. Earlier I suggested 256 for the max number of rules and 10 for the max depth, but I realized this can grow exponentially (256 router conditions to the 10th power), which is obviously bad. Instead, I'd propose two types of limits: one for the total count of registered router conditions, counted across all nesting levels, and one for the max depth of nesting. I propose 1024 for the former and 10 for the latter. We need to dig in more, but those are reasonable limits based on the current usage in Chrome so far.
I guess the way to count conditions may need some discussion. @sisidovski and I agreed on the following approach, though. We count a rule like the one sketched below as 1 condition and 1 depth, because there is only one condition object and no nesting.
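A hypothetical illustration (the original comment's example is not preserved here), using the static routing API's `addRoutes()` syntax with a made-up `urlPattern`:

```js
// A single, flat condition: counts as 1 condition with a max depth of 1.
addEventListener("install", (event) => {
  event.addRoutes([
    {
      condition: { urlPattern: "/articles/*" },
      source: "network",
    },
  ]);
});
```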
However, we count a rule like the one sketched below as 5 conditions with a max depth of 2, because 4 conditions are nested inside an `or` condition.
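Again a hypothetical illustration with made-up condition values, just to show the counting:

```js
// An `or` condition with 4 nested conditions: counting the `or` itself plus
// its 4 children gives 5 conditions, and the nesting gives a max depth of 2.
addEventListener("install", (event) => {
  event.addRoutes([
    {
      condition: {
        or: [
          { urlPattern: "/articles/*" },
          { urlPattern: "/news/*" },
          { requestMode: "navigate" },
          { requestDestination: "image" },
        ],
      },
      source: "cache",
    },
  ]);
});
```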
Please see the description of #1752 for details. Yes, I understand the 1024 max-conditions limit is based on our intuition, and it might be good to revisit it later.
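As a rough sketch of how such counting could work (my own illustration, not the algorithm from #1752): walk the condition tree, counting every condition object and tracking the nesting depth introduced by composite members such as `or` and `not`.

```js
// Illustrative sketch only: count condition objects and nesting depth for a
// router rule's condition. `or` and `not` are treated as the composite
// members that introduce nesting; everything else is part of one condition.
function measureCondition(condition, depth = 1) {
  let count = 1; // this condition object itself
  let maxDepth = depth;
  const children = [];
  if (Array.isArray(condition.or)) children.push(...condition.or);
  if (condition.not) children.push(condition.not);
  for (const child of children) {
    const m = measureCondition(child, depth + 1);
    count += m.count;
    maxDepth = Math.max(maxDepth, m.maxDepth);
  }
  return { count, maxDepth };
}

// With the limits proposed above (still under discussion), a conforming set
// of rules would need total count <= 1024 and max depth <= 10.
```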
If the service worker is adding too many routes, two things may happen:
I guess that setting a limit on the number of routes might be nice here.
For 1, maybe a TypeError or an InvalidStateError would be good.
For 2, maybe a QuotaError would be good.
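For illustration, exceeding a limit could surface to the service worker roughly as sketched below. This is a hypothetical sketch with made-up values; which error type is used is exactly the open question above.

```js
// Hypothetical sketch: what hitting a route-count limit might look like in a
// service worker's install handler. The error type (TypeError,
// InvalidStateError, a quota-style error, ...) is what's being discussed.
addEventListener("install", (event) => {
  const tooMany = Array.from({ length: 100000 }, () => ({
    condition: { urlPattern: "/assets/*" },
    source: "cache",
  }));
  event.waitUntil(
    event.addRoutes(tooMany).catch((err) => {
      // An implementation enforcing a limit would reject here.
      console.error("addRoutes rejected:", err.name);
    })
  );
});
```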
Looking at Chrome's implementation, https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/public/common/service_worker/service_worker_router_rule.h;l=33;drc=27d34700b83f381c62e3a348de2e6dfdc08364b8;bpv=1;bpt=1, it seems that there is a maximum of 256 rules.
Chrome also imposes a max depth of 10 for conditions, which is aligned with the goal of limiting the time spent evaluating the routes.
The spec should probably give guidelines for implementers, if not add explicit checks to the algorithms.