What should happen in case of large number of added routes? #1746

Open
youennf opened this issue Jan 22, 2025 · 10 comments

Comments

@youennf

youennf commented Jan 22, 2025

If the service worker is adding too many routes, two things may happen:

  1. testing the routes will take a long time, delaying loads
  2. storing the routes may fill up the user's storage

I guess that setting a limit on the number of routes might be nice here.
For 1, maybe a TypeError or InvalidStateError would be good.
For 2, maybe a QuotaError would be good.

Looking at Chrome's implementation, https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/public/common/service_worker/service_worker_router_rule.h;l=33;drc=27d34700b83f381c62e3a348de2e6dfdc08364b8;bpv=1;bpt=1, it seems that the maximum number of rules is 256.

Chrome also imposes a maximum condition depth of 10, which is aligned with the goal of limiting the time needed to test the routes.

The spec should probably give guidelines for implementors, if not add explicit checks to the algorithms.
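
For illustration, here is a minimal sketch (not spec'd behavior; the rule count and the fallback are made up) of the scenario: a service worker registering far more routes than an implementation is likely to accept, and handling a possible rejection whose exact error type is what this issue asks about.

  addEventListener("install", (event) => {
    // Hypothetical: far more routes than an implementation is likely to accept.
    const rules = Array.from({ length: 10000 }, (_, i) => ({
      condition: { urlPattern: { pathname: `/static/${i}/*` } },
      source: "network"
    }));
    event.waitUntil(
      event.addRoutes(rules).catch((e) => {
        // If the user agent enforces a limit, fall back to the fetch handler.
        console.warn("addRoutes() was rejected:", e);
      })
    );
  });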

@sisidovski
Contributor

We've discussed this topic in WICG/service-worker-static-routing-api#5 but haven't concluded yet. I agree that we should mention the length/depth limits in the spec. Can we just state specific numbers in the spec? Or should we avoid mentioning specific numbers and give some flexibility to implementers?

Yes, Chrome uses those numbers as its length/depth limits, but I'm a bit unsure whether they are really reasonable.

@youennf
Author

youennf commented Jan 22, 2025

Or should we avoid mentioning specific numbers and give some flexibility to implementers?

That seems reasonable to me.

@yoshisatoyanagisawa
Collaborator

Note that #1714 is trying to cover this case. It suggests the existence of a rule size limit and a depth limit, but it does not spell out the details.
In the review, @mkruisselbrink suggested having a lower limit, with the upper limit left up to the browser vendors.

There were WPTs checking rule size and depth in the public WPT repo, but I moved them to the internal directory because I believe the limit is Chromium-specific.
https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/web_tests/wpt_internal/service-worker/static-router/static-router-invalid-depth-width.https.html

There are probably no WPTs ensuring a lower limit yet.

@asutherland

Specifying and testing a lower limit seems like a nice approach since there is indeed a webcompat concern.

@domenic
Contributor

domenic commented Jan 23, 2025

@sisidovski asked me for my perspective on how this works in the web platform in general. The main guidance is given in https://infra.spec.whatwg.org/#algorithm-limits, which advises that by default there should be no limits.

But since we have a good reason to have limits here, the ideal is to make them interoperable. So the discussion of leaving flexibility to implementations for the upper limit is a bit scary from that perspective, as it could cause interop issues. If Chrome supports 256 and Safari/Firefox support 512, then it is pretty easy to make sites that work only in Safari/Firefox and not Chrome.

So I think the ideal would be if all implementations were willing to pick a specific limit, and support everything in the range [0, thatLimit]. For both number of rules and depth of rules.

If implementers have strongly different ideas on what the upper limits should be, and don't want to converge, then the second-best option would be a defined lower limit, and leave the upper limit implementation-defined. This doesn't solve the interop concern in general, as it would still be possible for interop issues to arise between [specifiedLowerLimit, maxUpperLimitFromAllBrowserEngines]. But it at least gives web developers who read the docs a target: don't exceed specifiedLowerLimit, and your code will work in all browsers.

@yoshisatoyanagisawa
Collaborator

Concrete numbers can be discussed later, but I believe the specification should mention that addRoutes() may throw if the number of rules or the depth of rules exceeds the limit.

@sisidovski
Contributor

sisidovski commented Jan 23, 2025

Thanks @domenic for sharing the guidance on limits and the suggestions.

So I think the ideal would be if all implementations were willing to pick a specific limit, and support everything in the range [0, thatLimit]. For both number of rules and depth of rules.

Considering the current phase and usage, we'd like to pursue this option. As Domenic mentioned, only having a lower limit doesn't fully solve the interop concern.

Concrete numbers can be discussed later, but I believe the specification should mention that addRoutes() may throw if the number of rules or the depth of rules exceeds the limit.

Agreed. That said, we need concrete numbers as long as the spec mentions limits, right? I wonder if the numbers Chrome is currently using (256 for the max number of rules, 10 for the max depth) are a good starting point to write into the spec. Chrome doesn't see any usage exceeding those limits so far.

@asutherland

For Firefox/Gecko a specific limit is fine and even desirable, given the inherent resource-utilization and latency trade-offs for ServiceWorker registrations, since handling fetch is on the critical path for page loads.

That said, it would probably be good for any WPTs validating the limit to be structured to cover both "impl works with values at the limit" and "impl throws for values exceeding the limit". (The Chromium test only does the latter right now.)
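
As a rough sketch (assuming a hypothetical limit; LIMIT is a placeholder, not a spec'd value), the two shapes could share one service worker script that reads the desired rule count from its script URL, registered once with count=LIMIT (expected to install) and once with count=LIMIT+1 (expected to reject):

  // e.g. registered as sw.js?count=<LIMIT or LIMIT + 1>
  const count = Number(new URL(self.location.href).searchParams.get("count"));

  addEventListener("install", (event) => {
    const rules = Array.from({ length: count }, (_, i) => ({
      condition: { urlPattern: { pathname: `/r${i}/*` } },
      source: "network"
    }));
    // The test page would assert that this resolves at the limit
    // and rejects just above it.
    event.waitUntil(event.addRoutes(rules));
  });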

@sisidovski
Contributor

sisidovski commented Jan 31, 2025

Created #1752 to address this issue.

Earlier I suggested 256 for the max number of rules and 10 for the max depth, but I realized this allows the number of conditions to grow exponentially, up to 256 router conditions to the 10th power, which is obviously bad.

Instead, I'd propose having two types of limits: one for the total count of registered router conditions, counted across all nesting levels, and the other for the max depth of nesting. I propose 1024 for the former and 10 for the latter. We need to dig into this more, but those look like reasonable limits based on the current usage in Chrome so far.

@yoshisatoyanagisawa
Collaborator

I guess the way to count conditions may need some discussion. @sisidovski and I agreed on the following approach, though.

We count the following as 1 condition with a depth of 1, because there is only one RouterCondition even though several of its properties are set.

  event.addRoutes({
    condition: {
      urlPattern: { pathname: "/form/*" },
      requestMethod: "post",
      requestMode: "navigate",
      runningStatus: "running"
    },
    source: "network"
  });

However, we count the following as 5 conditions with a max depth of 2, because there are 4 RouterConditions nested inside the top-level RouterCondition.

  event.addRoutes({
    condition: {
      or: [
        { urlPattern: { pathname: "/form/*" } },
        { requestMethod: "post" },
        { requestMode: "navigate" },
        { runningStatus: "running" }
      ]
    },
    source: "network"
  });

Please see the description of #1752 for details. Yes, I understand the 1024 max-conditions limit is based on our intuition, and it might be good to revisit it later.
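
For what it's worth, here is a minimal sketch (my reading of the counting above, not normative text) of how an implementation might measure a single rule's condition; it yields 1 condition / depth 1 for the first example and 5 conditions / depth 2 for the second. Enforcing the proposed limits would then amount to summing the counts across all registered rules and checking each depth against the maximum.

  function measureCondition(condition) {
    // A RouterCondition counts as one condition at one depth level,
    // no matter how many of its own properties are set.
    let count = 1;
    let depth = 1;
    // Only "or" is handled here; any other combining condition
    // would be treated the same way.
    for (const child of condition.or ?? []) {
      const m = measureCondition(child);
      count += m.count;
      depth = Math.max(depth, m.depth + 1);
    }
    return { count, depth };
  }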
