Poc #2
base: master
Conversation
I've not tried it out and I'm not super familiar with Python, but hopefully the feedback is somewhat helpful :)
Looks good overall, good job!
# Caddy installation
RUN apt-get install -y debian-keyring debian-archive-keyring apt-transport-https
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | tee /etc/apt/trusted.gpg.d/caddy-stable.asc
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
RUN apt-get update --allow-releaseinfo-change
RUN apt-get install caddy
COPY Caddyfile /etc/caddy/Caddyfile
Note that this apparently sets up a systemd service? Not sure how that works in the container, or if it tries to enable it by default.
I like to use a separate container myself, since it gives users more flexibility in what they go with (eg: Caddy, nginx-proxy, Tyk) and delegates configuration responsibility away. It's also useful if the user already has a reverse proxy set up.
I can understand wanting to try to consolidate it all though, especially if it's something users aren't familiar with setting up. I guess if port 8000 is exposed (not published), a user can just proxy to the container's port 8000 :)
About the systemd service, that was indeed my expectation; however, systemctl isn't even recognized as a command on docker-mailserver machines, and since the image relies on supervisor, that's actually better I think. Note that I am in no way a sysadmin, so any advice is helpful there :)
When it comes to the flexibility of only exposing the API and not using Caddy, I can see that becoming an ENV var in the future that simply disables supervisor's caddy unit.
When it comes to the flexibility of only exposing the API and not using Caddy, I can see that becoming an ENV var in the future that simply disables supervisor's caddy unit.
Well, by coupling Caddy you take on responsibility for its deployment too.
You need to make sure users are aware that HTTPS will be set up by default: if they configure a site address that isn't an IP or localhost, Caddy will try to register a certificate via Let's Encrypt, so they'll need to make sure those ports are open and that nothing goes wrong. Otherwise they'll run into rate limits, which can prevent them from provisioning the certificate for a while (I think it's a week?) if they exceed them.
I suppose you could default to the staging ACME endpoint to avoid that. And then for the API part, CORS might be relevant. You can handle CORS within your Python code or at the reverse proxy level with Caddy (or others), along with setting up any other features that may be of interest, such as rate limiting your own API.
Not major issues, and documentation or examples can cover that. Just be prepared that users might send such questions/issues your way.
The default ENV for the site address could be 127.0.0.1 or localhost perhaps, if you want the Caddy service to work without requiring the user to modify config to test the API?
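As a concrete illustration of the CORS-in-Python option: FastAPI ships a CORS middleware that can be wired up in a few lines. This is a minimal sketch only; it assumes the project is using FastAPI (the @router.post / async def style elsewhere in the diff suggests something FastAPI-like, but that's an assumption), and the app object and allowed origin are placeholders rather than anything from this PR.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Placeholder origin for whatever frontend would call the admin API;
# in practice this would probably come from an ENV var.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://admin.example.com"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```

Handling CORS at the Caddy level instead keeps the Python code simpler, but ties that behaviour to the bundled reverse proxy.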
@@ -0,0 +1,38 @@
FROM mailserver/docker-mailserver:10.2.0
In the future, you might want to use an ARG for providing the version via the CI build.
True, this is just for the PoC; keeping that in my notes though.
<api-subdomain>.<domain>

reverse_proxy 127.0.0.1:8000
You could leverage environment variables here, along with a site block (just preference):
{$API_FQDN} {
    reverse_proxy 127.0.0.1:8000
}
Then they can just set that ENV, or an entrypoint script could try to derive it from other docker-mailserver config/ENV. You can have more than one site address for a block as well, assuming $HOSTNAME is configured correctly (it should match hostname -f, which looks up /proc/sys/kernel/hostname against /etc/hosts; that in turn should match the container's --hostname, which also concatenates --domainname if specified, although we don't encourage specifying --domainname anymore):
{$API_FQDN}, {$HOSTNAME} {
    reverse_proxy 127.0.0.1:8000
}
Not quite sure how that functions if an ENV is empty though 😅
Very good, thanks. I planned (as mentioned in the linked issue) to use env for this, but didn't do it for the PoC. If we decide to use the work done so far, I'll definitely use your suggestion there; less documentation to read for me ;)
@@ -0,0 +1,141 @@
# Byte-compiled / optimized / DLL files
This file is generated/copied from a template I take it?
Might be worth trimming out the stuff that isn't relevant to your project.
Not exactly generated, it's taken from https://github.com/github/gitignore/blob/master/Python.gitignore. I do that for all my projects because it ensures I won't be bothered, whatever I throw into the mix. It's definitely trimmable, but it has no impact on prod, so it's up to the team to decide what they prefer. In the current state it could be something like 10 lines max, but if other tools are added later on then devs need to be careful :)
That's fine, I'm a little OCD about such things myself but pragmatically it doesn't really matter as you say 😅
@router.post(
    "/api/admin/v1/users/{email_address}/changePassword",
"/api/admin/v1/users/{email_address}/changePassword", | |
"/api/admin/v1/users/{email_address}/change-password", |
Probably don't want to have a URI with mixed casing?
I used camel case because it's very common in APIs, but it's not actually a mix, because email_address isn't exposed to users; it's just a variable in the URL, only seen internally as a view parameter name. In the end, I'd go with whatever people prefer, be it your kebab-case suggestion or otherwise!
I used camel case because it's very common in APIs
Do you have some examples of that being the case for well known services/platforms? I think camel case is fine elsewhere but for URIs I strongly prefer all lowercase, usually kebab case.
I'm a little biased coming from sysadmin experience with Linux where filesystems are case sensitive and it's nice just not having to think/check about any upper case letters in the mix 😅
I don't care about the variable {email_address} there being snake case (I was more concerned about the mix of upper/lower case, not the styles), just the static part of the route/endpoint.
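For illustration, a hedged sketch of what the kebab-case variant of the route could look like, assuming FastAPI and reusing the AdminChangePasswordPayload name from the diff (only its new_password field is visible there, so the model below is a guess at its shape):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class AdminChangePasswordPayload(BaseModel):
    new_password: str

# Static path segments are lowercase/kebab-case; {email_address} is only the
# view parameter name and never appears literally in the URL a client calls.
@router.post(
    "/api/admin/v1/users/{email_address}/change-password",
    status_code=204,
)
async def change_password(email_address: str, body: AdminChangePasswordPayload) -> None:
    ...
```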
assert response.status_code == 204

used_args = mocked_run.call_args_list[0].args
assert used_args[0] == ['./setup.sh', 'email', 'update', user_to_modify, new_password]
Is this calling from within the container? If so, you can use setup instead of setup.sh now AFAIK. We moved the bulk of it into the container with v10.2.
Yeah, my bad, this was a test written when I was using v10.1.3 or so. I changed the actual code to use plain setup but didn't update the test. I'll do it!
Thanks for checking tests as well ;)
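(For reference: since the endpoint now calls plain setup, the updated assertion would presumably just drop the ./setup.sh script path; user_to_modify and new_password are the test's existing variables.)

```python
assert used_args[0] == ["setup", "email", "update", user_to_modify, new_password]
```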
This scenario came to mind as something to be aware of.
There should be a relevant issue somewhere in the docker-mailserver issue history IIRC. Pretty sure it was regarding check-for-changes.sh / changedetector heavily slowing down a user who was mass-creating accounts to seed a new container.
async def change_password(email_address: str, body: AdminChangePasswordPayload):
    result = subprocess.run(
        ["setup", "email", "update", email_address, body.new_password]
    )
    result.check_returncode()  # NOTE: safeguard this
Question. Since this is async, what happens if this is hit at a high frequency?
I recall an issue of someone trying to set up a container and automate creating many users through setup.sh, but each call would block for about 1 second due to check-for-changes.sh reacting to a change.
I'm not sure if that command is async friendly though, so calling it multiple times in parallel while the changedetector service is doing its thing (or even if we ignore that entirely, since we're writing/modifying a file rather than interacting with a DB), how is this endpoint going to behave?
Might not be worth worrying about, as such an issue (race condition?) isn't likely to be encountered by most users in practice I think? A task queue might help, and while that kinda seems more the responsibility of the command being called, I don't think anyone is going to attempt to implement that in bash 😅
You may want to consider implementing this on your end for the time being if it's a problem worth addressing. That wouldn't apply to this code here, but rather some method that exclusively manages the task queue and calling the setup commands supported by this API, with each endpoint adding a job to the task queue instead?
That would also mean that if the task queue grew too large (eg... adding 100-1000 tasks, if each one takes 1 second or longer due to check-for-changes.sh), the requests made might not be able to respond in a timely manner about success/error, either timing out or letting the client know after X time via a response with a "pending" or similar status and some ID that could be polled for updates.
If the client is a server backend (which for the current auth approach seems likely), subscribing to a websocket endpoint could also provide events regarding completion?
Up to you regarding the scope / support you want to implement. Just noting that this may be a niche concern to take into account, but it adds a lot more effort to support I think 😬
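To make that idea a bit more concrete, here is a rough sketch of the single-worker task queue under the assumption that the API stays on asyncio; the names (task_queue, setup_worker, enqueue_setup) are hypothetical, not part of the PR. It also swaps the blocking subprocess.run for asyncio.create_subprocess_exec so the event loop stays responsive while the setup calls themselves run strictly one at a time:

```python
import asyncio

# Hypothetical single-worker queue: setup commands run one at a time, so
# concurrent API requests can't pile onto changedetector reacting to each change.
task_queue: asyncio.Queue = asyncio.Queue()

async def setup_worker() -> None:
    """Drain the queue, running each queued setup command sequentially."""
    while True:
        args, done = await task_queue.get()
        # Non-blocking subprocess call; the event loop keeps serving requests.
        proc = await asyncio.create_subprocess_exec("setup", *args)
        done.set_result(await proc.wait())
        task_queue.task_done()

async def enqueue_setup(*args: str) -> asyncio.Future:
    """Queue a setup command and return a future resolving to its exit code."""
    done: asyncio.Future = asyncio.get_running_loop().create_future()
    await task_queue.put((args, done))
    return done
```

An endpoint could then either await the returned future (same response semantics as now, just serialized) or respond immediately with a job ID and a "pending" status for the client to poll, which is the long-queue case described above. The worker would be started once at application startup, e.g. asyncio.create_task(setup_worker()).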