
Case study: Offscreen Canvas #7

Open
ddbeck opened this issue Dec 14, 2022 · 3 comments

@ddbeck (Owner) commented Dec 14, 2022

Last week, @foolip nudged me with this idea:

How about https://caniuse.com/offscreencanvas, can you try to group together BCD features that make up that feature and see if you get the same answer?

[Screenshot: caniuse.com support table for OffscreenCanvas, captured 2022-12-14]

I set about doing just that, and I'm opening this issue to record what I did and some consequent discoveries.

What I did

To see if I could generate something that might be readily consumed by Caniuse, I tried to duplicate something it can already do: represent Offscreen Canvas support.

  1. I created a JSON representation of a feature group consisting of a flat list of 71 mdn/browser-compat-data (BCD) features, drawn from:

    • OffscreenCanvas
    • OffscreenCanvasRenderingContext2D
    • HTMLCanvasElement

    and their various methods, properties, and other descendant features. (A sketch of what such a file might look like appears after this list.)

  2. I ran the ./src/cli.js script against that feature group. It produced support results that closely matched caniuse, summarized here:

    | Browser | Since version | Since date |
    | ------- | ------------- | ---------- |
    | Chrome  | 69            | 2018-09-04 |
    | Firefox | 105           | 2022-09-20 |
    | Safari  | N/A           | N/A        |
  3. I shared the results with Philip, who asked, "Did you have to make a decision to exclude the "contextlost" event?"
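
For reference, here's a minimal sketch of what such a feature-group file might look like. The schema (the `name` and `features` fields) is my assumption for illustration, not the mockup's actual format, though the BCD identifiers are real:

```json
{
  "name": "Offscreen Canvas",
  "features": [
    "api.OffscreenCanvas",
    "api.OffscreenCanvas.getContext",
    "api.OffscreenCanvasRenderingContext2D",
    "api.HTMLCanvasElement.transferControlToOffscreen"
  ]
}
```

A file like this would then be fed to the CLI, presumably with something like `node ./src/cli.js offscreen-canvas.json` (the argument convention is likewise assumed).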

Philip discovered a notable omission: I didn't include the more recent additions to the Offscreen Canvas API for handling context loss and restoration.

This led to some more investigation and a number of interesting lessons learned.

Conclusions

Invite domain experts into the group authoring process and seek to detect unincorporated features

I constructed the feature list for Offscreen Canvas on my own. To make the list, I skimmed the MDN docs and the relevant bits of the HTML spec that I could find (for example). I completely missed the context loss and restoration features because they're not in BCD or the resulting compat tables on MDN. This made those features somewhat less visible to me; it's likely that a domain expert would've known about this part of Offscreen Canvas.

What I learned:

  • While I can produce "good enough" feature groups myself (which isn't nothing—I think a lot of folks would be stumped by such a project), I think we'll soon need a review process (especially one that invites domain experts), or we'll need to explain that the initial groups we put forward are provisional.

  • Since context loss and restoration was added to the API somewhat later, it's not surprising that it hadn't yet made its way fully into the documentation either. It'd be interesting to explore generating presumed BCD feature identifiers from HTML spec IDL to discover features which have not yet made it into BCD (and, eventually, feature groups).
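
To make that concrete, here's a rough sketch of the idea, assuming the webidl2 and @mdn/browser-compat-data npm packages. The mapping from IDL members to BCD keys is deliberately naive; events, constructors, and mixins would all need special-casing:

```js
const { parse } = require("webidl2");
const bcd = require("@mdn/browser-compat-data");

// A fragment of spec IDL; in practice this would be extracted from the
// HTML spec itself.
const idlText = `
  interface OffscreenCanvas : EventTarget {
    attribute EventHandler oncontextlost;
    attribute EventHandler oncontextrestored;
  };
`;

for (const definition of parse(idlText)) {
  if (definition.type !== "interface") continue;
  for (const member of definition.members) {
    // Presume a BCD identifier like "api.OffscreenCanvas.oncontextlost".
    const key = `api.${definition.name}.${member.name}`;
    if (!bcd.api[definition.name]?.[member.name]) {
      console.log(`Presumed feature not (yet) in BCD: ${key}`);
    }
  }
}
```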

Avoid splitting groups before they achieve consensus (a.k.a "baseline") implementation status

After figuring out the details above, I started, but have not finished, experimenting with splitting Offscreen Canvas into three groups:

  1. A group for the Offscreen Canvas API as it was before the introduction of the context-loss-and-restoration API (i.e., a group of Offscreen Canvas features as they existed at the time of Chrome 69's release).
  2. A group for Offscreen Canvas context loss and restoration (i.e., a group consisting of OffscreenCanvasRenderingContext2D.isContextLost() and the contextlost and contextrestored events for offscreen canvases).
  3. An omnibus group consisting of groups 1 and 2.

This work continues because the mockup doesn't yet handle processing groups of groups. But I didn't need to finish the implementation to learn some useful things.
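
For what it's worth, here's a minimal sketch of how a group-of-groups might be resolved into a flat feature list before computing support. The object shape is assumed for illustration, not the mockup's actual data model:

```js
// Recursively flatten a group whose entries are either BCD feature
// identifiers (strings) or nested groups (objects with a `features` array).
function flattenGroup(group, seen = new Set()) {
  for (const entry of group.features) {
    if (typeof entry === "string") {
      seen.add(entry); // a leaf: a single BCD feature identifier
    } else {
      flattenGroup(entry, seen); // a nested group: recurse into it
    }
  }
  return [...seen];
}

const contextLoss = {
  name: "Context loss and restoration",
  features: ["api.OffscreenCanvas.contextlost_event"],
};
const omnibus = {
  name: "Offscreen Canvas (omnibus)",
  features: [
    { name: "Pre-context-loss Offscreen Canvas", features: ["api.OffscreenCanvas"] },
    contextLoss,
  ],
};

console.log(flattenGroup(omnibus));
// → ["api.OffscreenCanvas", "api.OffscreenCanvas.contextlost_event"]
```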

What I learned:

  • From a web developer's perspective, there's probably little point in splitting Offscreen Canvas into pieces like this. Support for such feature groups would look like this:

    Pre-context-loss-and-restoration Offscreen Canvas

    | Browser | Since version | Since date |
    | ------- | ------------- | ---------- |
    | Chrome  | 69            | 2018-09-04 |
    | Firefox | 105           | 2022-09-20 |
    | Safari  | N/A           | N/A        |

    Context loss and restoration API

    | Browser | Since version | Since date |
    | ------- | ------------- | ---------- |
    | Chrome  | 99            | 2022-03-01 |
    | Firefox | 105           | 2022-09-20 |
    | Safari  | N/A           | N/A        |

    Omnibus

    | Browser | Since version | Since date |
    | ------- | ------------- | ---------- |
    | Chrome  | 99            | 2022-03-01 |
    | Firefox | 105           | 2022-09-20 |
    | Safari  | N/A           | N/A        |

    That is to say, web developers probably won't face a material difference in the way this API "works" across devices and browsers until Safari ships Offscreen Canvas.

    This suggests an initial rule of thumb for authoring groups: don't split or version groups until they've achieved some threshold of mainstream support or usage. A simple guideline might be:

    Strive to avoid splitting groups unless and until there are three or more implementations of a subset of a group's constituent features.

Bug (now fixed): incorrect summarization of group support

In the course of all the preceding work, I did discover that my summarization of support across many features picked the wrong version and date from the pool of versions and dates that a group's support was calculated from.

For example, given a group of features that became supported in versions 50, 60, and 90, the group as a whole should be regarded as supported from version 90 (when the last of the requisite features was introduced). Instead, the script erroneously picked the earliest version (50), because I unthinkingly reused code that picks the earliest (which made sense in another context).
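
In code, the fix amounts to taking the maximum rather than the minimum. A minimal sketch (the function name and plain-number versions are mine, not the actual cli.js code; real BCD version strings would need proper comparison):

```js
// A group is only fully supported once ALL of its features are supported,
// so its "since" version is the latest (maximum) of the per-feature
// versions. The buggy code took the earliest (minimum) instead.
function groupSupportedSince(versions) {
  return versions.reduce((latest, v) => Math.max(latest, v));
}

groupSupportedSince([50, 60, 90]); // → 90, not 50
```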

What I learned:

  • I probably would've caught this with a test case or two. Tests are good—I should write more of them.
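
For example, even a couple of assertions with Node's built-in node:assert, exercising the hypothetical groupSupportedSince from the sketch above, would have caught it:

```js
const assert = require("node:assert");

// The group is only fully supported once its last feature lands.
assert.strictEqual(groupSupportedSince([50, 60, 90]), 90);
// A single-feature group is supported when that one feature is.
assert.strictEqual(groupSupportedSince([69]), 69);
```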
@foolip commented Dec 14, 2022

Thanks for the write-up; that's very helpful.

The discussion on avoiding splitting is especially interesting. I agree that it doesn't seem useful to distinguish between two versions of offscreen canvas, but I wonder how to think about this while a feature is supported only in a single engine. Do we treat that as an experimental and moving target, or are there cases where splitting makes sense even for single-engine features? I think trying to group more features is the best way to learn.

Regarding the bug, in addition to tests, code review is also a way to catch bugs. However, I think that if we'll be comparing the output to caniuse, then we'll eventually spot almost all errors in the data+code through that process, so it's not necessary to go overboard with testing + review.

@ddbeck (Owner, Author) commented Dec 15, 2022

are there cases where splitting makes sense even for single-engine features? I think trying to group more features is the best way to learn

My hunch is that some single-engine features might have meaningful groups anyway (e.g., what if web developers don't think of a group as being a single thing? We might need to split it anyway), but I completely agree that grouping more features is the best way to learn.

it's not necessary to go overboard with testing + review

👍

@atopal commented Jan 4, 2023

Very interesting read. Thanks Daniel!
