get-index: Enable deflate and gzip compression #3677

Merged: 4 commits merged into TiddlyWiki:master on Jan 27, 2019

Conversation

the-kenny
Contributor

Big wikis can take some time to load over slower connections when using the Node server. This pull request enables deflate and gzip compression on the get-index route, which massively reduces the amount of transferred data.

It uses Node's zlib module and, for simplicity, the synchronous versions of its functions. It's also possible to use the async versions without much hassle.

I plan on abstracting this logic a bit and adding it to other routes which serve non-trivial amounts of data.
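
Roughly, the handler does something like this (a simplified sketch, not the exact diff; the `sendCompressedText` helper name is mine). It picks gzip or deflate based on the request's Accept-Encoding header and uses zlib's synchronous helpers:

```js
// Minimal sketch: compress a text response with gzip or deflate when the
// client advertises support; otherwise send the uncompressed body.
var zlib = require("zlib");

function sendCompressedText(request, response, statusCode, contentType, text) {
	var acceptEncoding = request.headers["accept-encoding"] || "",
		body = Buffer.from(text, "utf8");
	response.setHeader("Content-Type", contentType);
	if(/\bgzip\b/.test(acceptEncoding)) {
		response.setHeader("Content-Encoding", "gzip");
		body = zlib.gzipSync(body);
	} else if(/\bdeflate\b/.test(acceptEncoding)) {
		response.setHeader("Content-Encoding", "deflate");
		body = zlib.deflateSync(body);
	}
	response.writeHead(statusCode);
	response.end(body);
}
```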

@the-kenny force-pushed the feature/compression branch from d555d9e to 2738585 on January 3, 2019 at 12:49
@the-kenny mentioned this pull request on Jan 3, 2019
@Jermolene
Member

Thank you @the-kenny. I'd held off adding gzip compression because I was worried that, without caching, the load on the server could become unmanageable. Having said that, I haven't done any testing on the CPU load of compression. I don't think it's a problem to ignore caching if we add a switch to make compression optional, but I'd like to implement it in a way that makes it easy to add to other routes.

@the-kenny
Contributor Author

@Jermolene thanks for the response! While I don't have any benchmarks, I don't think enabling gzip compression like this adds noticeable CPU load. I don't feel any difference on my old and rusty Raspberry Pi, but that's completely subjective.

As I wrote earlier, I like the idea of making this generic, but that would need some work. We can either refactor the response handling by introducing a middleware layer, or we could wrap the response object and transparently add gzip compression for every Content-Type starting with text/.
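
To sketch the second idea (purely illustrative; none of these names exist in the codebase yet):

```js
// Hypothetical sketch: patch response.end so that text/* responses are
// gzipped transparently when the client supports it (assumes headers have
// not been flushed before end is called).
var zlib = require("zlib");

function wrapResponseWithGzip(request, response) {
	var acceptsGzip = /\bgzip\b/.test(request.headers["accept-encoding"] || ""),
		originalEnd = response.end.bind(response);
	response.end = function(data, encoding) {
		var contentType = response.getHeader("Content-Type") || "";
		if(acceptsGzip && data && contentType.indexOf("text/") === 0) {
			response.setHeader("Content-Encoding", "gzip");
			data = zlib.gzipSync(Buffer.from(data, encoding || "utf8"));
		}
		return originalEnd(data);
	};
}
```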

Would it be enough (for now) to add a new CLI-option to the listen command to get this merged? If so, is there a nice way of introducing global state like this from a command?

@Jermolene
Member

Hi @the-kenny

> Would it be enough (for now) to add a new CLI-option to the listen command to get this merged? If so, is there a nice way of introducing global state like this from a command?

Yes, I think adding a `gzip=yes|no` option is the way to go. Within the server subsystem we have the concept of a variable that can be set via the CLI, or fall back to a default in the code; for instance, there is a port variable that corresponds to setting the port on the CLI with a parameter like `port=8090`. The `debug-level` variable should be a simple example to follow.
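
For instance, assuming `gzip=yes` ends up being the accepted spelling, the invocation would follow the same name=value pattern as the existing parameters:

```
tiddlywiki mywiki --listen port=8090 gzip=yes
```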

In terms of the implementation I'm not sure which approach would be best. Making the middleware handling more generally flexible is desirable, but I'd be open to other approaches.

If you're willing to have a go at this I'll be happy to help where I can.

@the-kenny
Contributor Author

@Jermolene Added the option. Thanks for the hints! :-)

Note that I added `server.enableGzip` to the server object as a single source of truth, so other endpoints can check whether they should gzip their responses (until we implement a more generic approach).
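
For other endpoints, the check could look roughly like this (an illustrative sketch, not the exact code in the diff):

```js
// Illustrative only: an endpoint consulting the shared flag before compressing
var zlib = require("zlib");

function endMaybeGzipped(server, request, response, text) {
	var body = Buffer.from(text, "utf8");
	if(server.enableGzip && /\bgzip\b/.test(request.headers["accept-encoding"] || "")) {
		response.setHeader("Content-Encoding", "gzip");
		body = zlib.gzipSync(body);
	}
	response.end(body);
}
```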

@Jermolene
Member

Many thanks @the-kenny, looks good.

@Jermolene merged commit 049244e into TiddlyWiki:master on Jan 27, 2019
jho1965us pushed a commit to jho1965us/TiddlyWiki5 that referenced this pull request on Apr 3, 2019
* get-index: Enable deflate and gzip compression

* Spaces -> Tabs

* listen: Add optional `gzip=yes` parameter (defaults to "no")

* get-index: Add comment explaining the usage of `zlib.*Sync` instead of async.