Support blocklisting long URL domains to prevent malicious redirects #1305
Since I am the one who brought this issue up in the first place, I would like to suggest some potential ways of handling a feature like this, all with different pros and cons:

1. Internal database

Having an internal database of potentially harmful domains and URLs would be an extremely convenient and simple way of letting users configure URLs that they do not wish to redirect to. Shlink would then need to expose an API endpoint to submit URLs to the blocklist, preventing visitors from being redirected to those URIs. This would also require establishing permissions within Shlink to determine which users/applications are allowed to interact with the blocklist and which aren't. Finally, importing lists of malicious domains would require the user to write a script that automatically consumes the API and updates all of Shlink's blocked domains and URLs.

2. Heuristics

Alternatively, a mechanism like the one many spam filters use could be applied to block bad redirections. Filters like SpamAssassin use rulesets that allow them to determine whether an email is spam. Similarly, Shlink could use a series of rules to determine whether a URL is bad. This mechanism would probably only appeal to advanced users, and novices would probably avoid it completely. In return, it allows combining lookups on public databases with private databases, and even checking for links that are potentially harmful or look 'misleading' based on the contents of the URL. For a system like this, Shlink would just have to expose an interface that could look somewhat like this (a sketch implementation follows at the end of this comment):

```php
<?php

interface RedirectionFilterInterface
{
    public function block(string $url): bool;
}
```

While unappealing for novices to configure, Shlink could ship with a few base rules that provide a reasonable baseline for novices and granular controls for experienced administrators. Since this mechanism isn't exposed through the API itself, no changes to the API would be required.

3. Shim the URL using an external service

Probably the easiest of the three to implement is to expose a configuration option that lets the administrator shim all of the redirections through a third-party application, so that the user just provides a shimming URL for the application to redirect to. This is similar to how some major networks do it, where the …
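For illustration, here is a minimal sketch of a rule implementing the interface above. The `DomainBlocklistFilter` class and its constructor argument are hypothetical, not part of Shlink; they only show how a concrete heuristic could plug into `RedirectionFilterInterface`.

```php
<?php

// Hypothetical example rule: block URLs whose host appears in a
// configured list of domains. Assumes the RedirectionFilterInterface
// declared above is loaded. Not part of Shlink's actual codebase.
final class DomainBlocklistFilter implements RedirectionFilterInterface
{
    /** @param string[] $blockedDomains Exact host names to block */
    public function __construct(private array $blockedDomains)
    {
    }

    public function block(string $url): bool
    {
        $host = parse_url($url, PHP_URL_HOST);

        return is_string($host) && in_array($host, $this->blockedDomains, true);
    }
}

// Several such rules could be chained, blocking when any of them matches:
$filter = new DomainBlocklistFilter(['evil.example.com']);
var_dump($filter->block('https://evil.example.com/login')); // bool(true)
```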
Thanks for the detailed suggestion 🙂
@acelaya This would be a great addition. Since I turned on the service last week, I have received 2500+ "orphaned visits", which are pretty much all attempts by machines to find vulnerabilities, like phpmyadmin, wordpress and mysqladmin. For a more immediate fix, I thought I could add rules to block certain folders and files from being accessed, so I added them to the `.htaccess` file. In fact, and correct me if I'm wrong, I was thinking of blocking access to any subfolders, which are not supposed to be accessed anyway since the URLs are always domain.name/short-code. Or is there something I don't know about?
That's not the purpose of what's described here. Shlink will never be able to prevent requests from happening in the first place. The purpose here is to protect actual visitors from being redirected to malicious sites. I have updated the title to make that clear.
There are no subfolders to block. Your document root should be the `public` folder. You can find more info here: https://shlink.io/documentation/classic-web-server/
What I meant by subfolders is that those machines are trying to reach domain.apex/wp/wp-config.php, for example. Since there is no wp subfolder, why not block access to it in the first place, so it doesn't spam the orphaned visits statistics? Like I said, I tried changing the .htaccess inside the public folder, but that's not enough. Now, should I play with this file, or is it better to leave it untouched? I'll put my blocks in the Apache virtual host instead.
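For reference, such blocks could look roughly like this in an Apache virtual host. This is only a sketch; the server name, document root, and the list of denied paths are placeholders to adapt, not something Shlink itself prescribes:

```apache
<VirtualHost *:80>
    ServerName domain.apex
    DocumentRoot /path/to/shlink/public

    # Deny typical scanner targets (e.g. /wp/wp-config.php) with a 403
    # before the request ever reaches Shlink. Everything else falls
    # through to the normal Shlink setup.
    <LocationMatch "^/(wp|wp-admin|phpmyadmin|mysqladmin)(/|$)">
        Require all denied
    </LocationMatch>

    # ... the rest of the usual Shlink configuration goes here
</VirtualHost>
```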
You can just disable orphan visits.
But then the attacks and scans would continue; I would just be unaware of them. I'll find a solution. Tx
Closing due to the complexity of the feature, lack of general interest from others and not enough time to work on it. |
Summary
Allow checking long URLs for malicious/dangerous/unwanted domains.
It could be via an external service, a manually managed blocklist, or some combination of both.
The feature should be optional, and handled via config options or env vars.
When enabled, it should check both when creating a short URL and when redirecting for an existing one (or maybe periodically, allowing existing URLs to be marked as blocked).
It is still to be decided how to proceed when visiting an existing URL that has since become blocked. Options are:
Refs #1296
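To make the intended behaviour concrete, here is a minimal sketch of the checks described above. Neither the `UrlBlocklist` class nor the `BLOCKLISTED_DOMAINS` env var exist in Shlink; they are hypothetical names illustrating "optional, driven by env vars, checked both on creation and on redirect":

```php
<?php

// Hypothetical sketch only; not Shlink's actual implementation.
final class UrlBlocklist
{
    /** @var string[] */
    private array $blockedDomains;

    public function __construct()
    {
        // Optional feature: leaving the env var unset disables it entirely.
        $raw = getenv('BLOCKLISTED_DOMAINS') ?: '';
        $this->blockedDomains = array_filter(array_map('trim', explode(',', $raw)));
    }

    public function isBlocked(string $longUrl): bool
    {
        $host = parse_url($longUrl, PHP_URL_HOST);

        return is_string($host) && in_array($host, $this->blockedDomains, true);
    }
}

// Check 1: reject blocked long URLs when creating a short URL.
// Check 2: refuse the redirect when resolving a short URL whose target
// has become blocked since it was created (exact behaviour still TBD).
```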