docs: have a "running hydra in production" section #354
I'm hoping this may be helpful for others, along with the stuff I posted in #374. I run Hydra and Hydra-idp in Kubernetes. Certain things in the configs are probably a bit sketchy, like the dangerous auto logon, but it's easier for me to set up if the root credentials are printed to the log. If necessary, I could also reset Hydra: the configuration is stored in the db long term, so a redeploy leaves no trace of the root credentials in the logs. Also, only our team has access to the logs, which definitely helps. One of the pain points IMO is that it's difficult to automate deployment, and I realize that part of this is by design, particularly the client setup. Our other components use ConfigMaps (YAML configs) or env vars in the Pod specs and usually just need to be deployed to work out of the box, so I've tried to replicate most of that in the configs below and automate most of the setup. Though maybe you know a better way to accomplish this :). Here are my configs:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hydra
  namespace: user
  labels:
    name: hydra
spec:
  ports:
    - targetPort: 4444
      port: 4444
  selector:
    app: hydra
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra
  namespace: user
data:
  hydra.url: https://oauth.$cluster_domain
  consent.url: https://idp.$cluster_domain
  .hydra.yml: |
    cluster_url: https://oauth.$cluster_domain
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hydra
  namespace: user
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hydra
    spec:
      containers:
        - name: hydra
          image: oryam/hydra:latest
          env:
            - name: SYSTEM_SECRET
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: system.secret
            - name: CONSENT_URL
              valueFrom:
                configMapKeyRef:
                  name: hydra
                  key: consent.url
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: database.url
            - name: HTTPS_ALLOW_TERMINATION_FROM
              value: 0.0.0.0/0
          command:
            - hydra
            - host
            - --dangerous-auto-logon
          ports:
            - name: default
              containerPort: 4444
            - name: other
              containerPort: 4445
          volumeMounts:
            - name: hydra-volume
              mountPath: /root
      volumes:
        - name: hydra-volume
          configMap:
            name: hydra
        - name: client-data
          secret:
            secretName: hydra
---
apiVersion: v1
kind: Service
metadata:
  name: consent
  namespace: user
  labels:
    name: consent
spec:
  ports:
    - targetPort: 3000
      port: 3000
  selector:
    app: consent
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consent
  namespace: user
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consent
    spec:
      containers:
        - name: consent
          image: $docker_repo/hydra-idp-react:latest
          env:
            - name: HYDRA_URL
              valueFrom:
                configMapKeyRef:
                  name: hydra
                  key: hydra.url
            - name: HYDRA_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: client.id
            - name: HYDRA_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: client.secret
            - name: NODE_TLS_REJECT_UNAUTHORIZED
              value: "0"
          ports:
            - containerPort: 3000
      volumes:
        - name: hydra-volume
          configMap:
            name: hydra
        - name: client-data
          secret:
            secretName: hydra
---
apiVersion: v1
kind: Secret
metadata:
  name: consent
  namespace: user
data:
  tls.crt: $tls_crt
  tls.key: $tls_key
---
apiVersion: v1
kind: Secret
metadata:
  name: hydra
  namespace: user
type: Opaque
data:
  tls.crt: $tls_crt_for_hydra
  tls.key: $tls_key_for_hydra
  # Root Credentials
  # client_id:
  # client_secret:
  # hydra clients create -a [hydra.keys.get,openid,offline,hydra] -c [https://$redirect_url] -g [authorization_code,implicit,refresh_token,client_credentials] -r [code,token] -n $APP_NAME
  # Response:
  # Client ID:
  # Client Secret:
  # Navigate to:
  # https://oauth.$cluster_domain/oauth2/auth?client_id=$response_client_id&redirect_uri=https://$redirect_url&response_type=code&scope=&state=${...}&nonce=${...}
  system.secret: $secret
  # Root Credentials, base64 encoded
  client.id: $root_client_id # base64_root_client_id
  client.secret: $root_client_secret # base64_root_client_secret
  database.url: $database_host_connection
```
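One note on the Secrets: all the `data` values have to be base64 encoded. A minimal sketch of how the placeholders could be filled in, assuming `openssl` is available (the variable names just mirror the placeholders in the manifest):

```sh
# Generate a 32-char system secret and base64 encode it for system.secret
openssl rand -hex 16 | tr -d '\n' | base64

# Encode the root credentials reported by --dangerous-auto-logon
# (or by `hydra clients create`) before pasting them into the manifest
echo -n "$root_client_id" | base64      # -> client.id
echo -n "$root_client_secret" | base64  # -> client.secret

# The database connection string, e.g. a Postgres DSN
echo -n "postgres://hydra:password@postgres.user.svc:5432/hydra?sslmode=disable" | base64  # -> database.url
```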
So in order to deploy Hydra and the IdP, I apply the manifests above with kubectl.
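Roughly this, assuming the manifests are saved to files (the filenames here are made up):

```sh
# Secrets first, so the Deployments can resolve their secretKeyRefs on start
kubectl apply -f secrets.yml
kubectl apply -f hydra.yml     # Service, ConfigMap and Deployment for Hydra
kubectl apply -f consent.yml   # Service and Deployment for the IdP
kubectl apply -f ingress.yml   # the Ingress routes below

# Watch the pods come up, then grab the root credentials from the log
kubectl -n user get pods -w
kubectl -n user logs deployment/hydra
```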
I also realize that in some places I refer to the IdP as "consent". That's what I call it in my configs; I tried to change it to "idp" for the purposes of posting, so sorry for the inconsistencies :/. One other issue that I don't think can really be solved is that Hydra and the IdP can't be served on the same domain name through Ingress. The Ingress setup uses either path-based or domain-based routing, and with path-based routing the IdP won't find its resources (css, js, ...) since it assumes the base URL is the root. So I had no choice but to serve it on its own domain name. The only drawback is that the situation gets a bit more complicated because HTTPS then requires more SSL certs. If you're in an environment where certs are not easy to order, or wildcard certs aren't possible, this will be a bit more work. Ingress routes:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: hydra
  namespace: user
spec:
  tls:
    - hosts:
        - oauth.$cluster_domain
      secretName: hydra
  rules:
    - host: oauth.$cluster_domain
      http:
        paths:
          - backend:
              serviceName: hydra
              servicePort: 4444
            path: /oauth2
    - host: $host
      http:
        paths:
          - backend:
              serviceName: gatekeeper
              servicePort: 8080
            path: /api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: consent
  namespace: user
spec:
  tls:
    - hosts:
        - idp.$cluster_domain
      secretName: consent
  rules:
    - host: idp.$cluster_domain
      http:
        paths:
          - backend:
              serviceName: consent
              servicePort: 3000
            path: /
```

Then gateway authentication is possible as follows:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/auth-url: "https://oauth.$cluster_domain/api/proxy" # Custom introspect endpoint on Gatekeeper
    ingress.kubernetes.io/enable-cors: "true"
  name: $ingress_name
  namespace: default
spec:
  # SSL Settings
  # tls:
  #   - hosts:
  #       - $host
  #     secretName: $some-secret-ssl-cert-for-main-app
  rules:
    - http:
        paths:
          - backend:
              serviceName: $protected-service
              servicePort: 80
            path: /$protected-endpoint
```
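Once that's wired up, a quick way to sanity-check the gateway from outside (the token here is hypothetical; the `auth-url` annotation rejects the request whenever the introspect endpoint returns a non-2xx status):

```sh
# Without credentials the ingress controller should turn the request away
curl -i https://$host/$protected-endpoint
# expect 401/403 without a valid token

# With a valid access token from Hydra, the auth-url check passes and the
# request is forwarded to the protected service
curl -i -H "Authorization: Bearer $ACCESS_TOKEN" https://$host/$protected-endpoint
```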
I'll try to write up something a bit more procedural and easier to follow, as this is pretty messy. :)
Thanks for the work! The ultimate goal is to have a section in the docs on running Hydra in production, so we could probably add it there (or in the FAQ).
@michael-golfi, hey, just wanted to thank you for contributing your Kubernetes use case; it has helped us immensely. I was just curious: what are you doing for client management in production deployments? Are you having to manually set up critical clients each time you deploy?
Hi @dkushner, I'm really happy I was able to help others! I think an important aspect of the deployment is that Hydra connects to a db (Postgres for us), so whenever we redeployed, most of the credentials were still available as long as the db was preserved. I can't remember where exactly I saw this in the docs, but I believe @arekkas recommended deploying twice: once to get the root credentials and persist the setup in the db, and a second time to erase any trace of the root credentials from the log files (such as our Docker logs). As for first-time deploys and storing credentials in a running cluster in general, we kept most of our secret info in Kubernetes secrets. They are only base64 encoded, so normally this might have been an issue, but our environment is only accessible by my team, so we didn't consider it a huge roadblock.
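Worth spelling out for anyone copying this: base64 is an encoding, not encryption, so anyone with read access to the namespace can recover the plaintext, e.g.:

```sh
# Decode the stored root client secret straight out of the cluster
kubectl -n user get secret hydra -o jsonpath='{.data.client\.secret}' | base64 -d
```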
We now have a section on the deployment environment and some examples, but it is still up to each environment how the deployment actually works. Thus, I'm closing this issue.