# Why???
There are bazillions of helpers that precompile artifacts and install them later on the user's computer. Unfortunately, they didn't fit my needs:
- Frequently, they are huge projects with a lot of dependencies that take care of many things automatically.
  - I want to avoid dependency hell.
- Usually, they require setting up some file hosting with secure uploads and downloads.
  - This is an extra dependency, which I would have to maintain and pay for.
- I want to be sure that my users are secure.
  - I cannot review mounds of third-party code on every release.
  - The inherent complexity (dependencies, a file service) makes the whole system too fragile and prone to security glitches.
That's why I arrogantly decided that I could do better.
- The project is essentially two one-file utilities.
  - `install-from-cache` is less than 250 lines.
  - `save-to-github-cache` is less than 150 lines.
  - Both files can be easily inspected to make sure there is no funny business or bugs.
- Both utilities have no dependencies.
  - You can literally copy them into your project if you want.
- It uses GitHub releases to save and serve files.
  - It is free (at least for now).
  - It uses a geographically distributed CDN.
  - It is easy to mirror a release (it is just some files!) and serve it locally.
- Unobtrusive security is implemented by GitHub Actions.
  - `save-to-github-cache` is automatically supplied with a secure token, so there is no need to mess with any settings. It just works.
  - For public repositories, actions are free (as of now). For private ones, the free tier includes 2,000 free minutes per month, which is enough for many projects.
- Nowadays, when brotli compression is widely supported and comes included with Node (since version 10), we can use it to reduce the size of stored artifacts.
  - It irks me that almost nobody does it! Smaller size ⇒ faster downloads! It is especially important for popular packages.
  - `save-to-github-cache` uploads two versions of an artifact: `br` and `gz`. Both use the best compression parameters (see the first sketch after this list).
  - If the underlying Node does not support brotli, `br` is skipped:
    - `install-from-cache` skips its download.
    - `save-to-github-cache` neither produces nor uploads it.
  - `install-from-cache` falls back to the `gz` version if a `br` version was not available (see the second sketch after this list).
My workflow is usually like this:

- Development:
  - Modify code ⇒ commit ⇒ repeat the cycle.
  - In the background, I have automatic tests running on every commit.
- Release:
  - I make sure that the version is properly bumped and the README is updated.
  - I tag the release:

    ```
    git tag 2.0.1
    git push --tags
    ```

  - I publish it:

    ```
    npm publish
    ```
Now I can sit back and relax, watching irate users complain about bugs in the new release. ;-)
With this project, I have a GitHub Action for creating a release, which runs when the release is tagged. It was a minimally invasive procedure.
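For illustration only, such a tag-triggered workflow could look roughly like the sketch below; the matrix, script names, and flags are assumptions, not this project's actual workflow. Note that the `GITHUB_TOKEN` is supplied by GitHub Actions itself:

```yaml
name: release
on:
  push:
    tags: ['*']
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci
      - run: npm run build # hypothetical build script
      # upload the compiled artifact to the tagged release;
      # flags omitted, see the project README for the real invocation
      - run: npx save-to-github-cache
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```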
- What if something bad happens? The network is down, or you generated a wrong, unusable artifact linked against the wrong libraries?
  - No problem. We can afford a false positive.
  - The downloaded artifact is checked, and if it doesn't work properly, it is discarded, and a new one is built locally from sources (see the sketch after this list).
- What if my platform is a relatively exotic one? What if you don't have an artifact for my specific platform, architecture, or Node ABI?
  - In this case, it will be built from scratch.
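A sketch of that check-and-fallback step, assuming a native addon and a `node-gyp`-based build; a real implementation might run the check in a child process so a crashing binary cannot take down the installer:

```js
const fs = require('fs');
const path = require('path');
const {execSync} = require('child_process');

function ensureWorkingArtifact(artifactPath) {
  try {
    // loading a corrupt or mislinked .node binary throws right here
    require(path.resolve(artifactPath));
    return true; // the cached artifact works
  } catch (error) {
    // a false positive: discard the download and build from sources
    fs.unlinkSync(artifactPath);
    execSync('npx node-gyp rebuild', {stdio: 'inherit'});
    return false;
  }
}
```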
So in the worst case, we have 1-2 extra HTTP requests, downloading an artifact, unpacking, and checking. Usually, that is still much faster than building. The upside is that in most cases we save users' resources, including time.