Memory issue when linking to a large file #1439
Shlink should not try to download the whole body, especially if you disable validation of the URL. A possible mitigation for now is to disable both validation and title resolution; if either of those is enabled, Shlink will probably try to download the URL. One easy way to disable title resolution is to actively provide a title. I will look into ways to mitigate this. I like the idea of allowing a custom memory limit to be provided via env vars, though not as a solution for this, but as an independent nice-to-have.
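For illustration, creating the short URL with an explicit title (so nothing has to be downloaded to resolve one) could look like this via the REST API; the domain, API key, and payload values are placeholders, and the endpoint version depends on your Shlink version:

```sh
curl -X POST https://shlink.example.com/rest/v3/short-urls \
  -H 'X-Api-Key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"longUrl": "https://example.com/big-file.bin", "title": "Big file", "validateUrl": false}'
```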
I worked around it by tweaking the entrypoint to use sed to increase the memory limit before running the normal entrypoint.
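Something along these lines, presumably; the `php.ini` path and the location of the stock entrypoint below are guesses about the image layout, not verified:

```sh
#!/bin/sh
# Hypothetical wrapper entrypoint: raise memory_limit in place, then hand
# off to the image's original entrypoint. Both paths are assumptions.
sed -i 's/^memory_limit.*/memory_limit = 512M/' /usr/local/etc/php/php.ini
exec /usr/local/bin/docker-entrypoint.sh "$@"
```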
Maybe doing an initial HEAD request for the URL and inspecting the Content-Type header to see whether it's HTML or application/octet-stream would work, although I'm not sure, as some sites probably block HEAD requests.
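A minimal sketch of that idea using Guzzle (Shlink's HTTP client); the URL and the fallback behavior are illustrative only:

```php
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Exception\ClientException;

// Sketch: probe with HEAD and only proceed to a full download for HTML.
// Servers that reject HEAD typically answer 405, which Guzzle surfaces
// as a ClientException, so a GET fallback would still be needed.
$client = new Client();

try {
    $response = $client->head('https://example.com/some-page');
    $isHtml = stripos($response->getHeaderLine('Content-Type'), 'text/html') !== false;
} catch (ClientException $e) {
    $isHtml = true; // HEAD blocked: assume we must fall back to GET.
}
```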
Could you elaborate on this? Why do you say some sites block HEAD requests?
This is what I have for now: https://github.com/shlinkio/shlink/pull/1440/files
The pull request above changes the logic a bit so that it makes a `HEAD` request first. Now, the problem is when the title has to be resolved (with validation enabled or not; that should not affect the approach). These are the options I have come up with so far:
Related to this issue, I have opened a poll to better understand how users configure their Shlink instances: #1442
Ok, the option I wanted to implement is possible. The HTTP client supports it, and it does not depend on the server. By enabling the streaming option I can download just the headers, and if the content type is not HTML, I don't even try to continue downloading. And even if it is HTML, I can download chunks of the body until the tag I need is present, and then stop. That should solve the problem.
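A rough illustration of that streaming approach with Guzzle; this is not Shlink's actual code, the function name and limits are made up, and it stops once the `</title>` tag has been seen (the real cut-off point may differ):

```php
<?php

use GuzzleHttp\Client;

// With the 'stream' request option, Guzzle returns as soon as the response
// headers arrive; the body is only fetched lazily as it is read.
function resolveTitle(Client $client, string $url): ?string
{
    $response = $client->get($url, ['stream' => true]);

    // Headers alone tell us whether it is worth reading any body at all.
    if (stripos($response->getHeaderLine('Content-Type'), 'text/html') === false) {
        return null; // Not an HTML document: nothing to resolve.
    }

    // Read small chunks and stop as soon as the tag we need has arrived,
    // with a hard cap so a huge HTML page cannot exhaust memory either.
    $body = $response->getBody();
    $html = '';
    while (!$body->eof() && strlen($html) < 512 * 1024) {
        $html .= $body->read(1024);
        if (stripos($html, '</title>') !== false) {
            break;
        }
    }
    $body->close();

    return preg_match('/<title[^>]*>(.*?)<\/title>/is', $html, $m) ? trim($m[1]) : null;
}
```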
This has been merged and will be part of v3.1.1.
I think it's probably rare, but I've run into a handful of sites that block HEAD requests. Regardless, I think your approach is probably the best solution.
Summary
When trying to create a link to a large file, Shlink says:
An error occurred while creating the URL :(
And the following error appears in the log:
The file in question is ~349 MB.
I have tried it with "Validate URL" checked and unchecked and it doesn't change anything.
Linking to smaller files (~100 MB) and other random URLs works without a problem.
The fix appears to be to increase the memory limits in `php.ini`, but modifying the official Docker image is a bit of a pain. I basically have to create my own Dockerfile to generate an image based on `shlink:stable`, upload it somewhere, and then pull it down into the cluster. I see three potential solutions:
- Set `memory_limit` (maybe use `sed` to update the value in `php.ini`) from an environment variable, so users can configure it themselves
- Increase `memory_limit` by statically assigning a larger value in `php.ini`
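For what it's worth, the custom-image workaround described above can be fairly small; this sketch assumes the base image reads extra `.ini` files from PHP's usual `conf.d` directory, which is not verified:

```Dockerfile
# Hypothetical derived image; the tag and the ini path are assumptions.
FROM shlink:stable
RUN echo 'memory_limit = 512M' > /usr/local/etc/php/conf.d/zz-memory-limit.ini
```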