
docs: have a "running hydra in production" section #354

Closed

aeneasr opened this issue Jan 6, 2017 · 7 comments

@aeneasr
Member

aeneasr commented Jan 6, 2017

No description provided.

@aeneasr aeneasr added this to the 1.0.0: stable release milestone Jan 6, 2017
@michael-golfi

michael-golfi commented Feb 22, 2017

I'm hoping this may be helpful for others, along with the stuff I posted in #374.

I run Hydra and the Hydra IdP in Kubernetes. Some parts of the configs are probably a bit sketchy, like the dangerous auto-logon, but setup is easier for me when the root credentials are printed to the log. If necessary I can also reset Hydra: the configuration is stored in the DB long-term, so the logs won't carry the root credentials forever. Also, only our team has access to the logs, which definitely helps.
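For reference, a minimal way to pull those printed credentials back out of the pod logs, assuming the hydra Deployment in the user namespace shown below (the grep pattern is an assumption about the log format):

# Dump the Hydra pod logs and look for the printed root credentials.
kubectl logs deployment/hydra --namespace user | grep -i client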

One of the pain points, IMO, is that it's difficult to automate the deployment, and I realize that part of this is by design, particularly the client setup. Our other components use ConfigMaps (YAML configs) or env vars in the Pod specs and usually just need to be deployed to work out of the box. So I've tried to replicate most of this in the configs below and to automate most of the setup.

Though maybe you know a better way to accomplish this :).

Here are my configs.

  • For Hydra:
apiVersion: v1
kind: Service
metadata:
  name: hydra
  namespace: user
  labels:
    name: hydra
spec:
  ports:
    - targetPort: 4444
      port: 4444
  selector:
    app: hydra
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra
  namespace: user
data:
  hydra.url: https://oauth.$cluster_domain
  consent.url: https://idp.$cluster_domain
  .hydra.yml: |
    cluster_url: https://oauth.$cluster_domain
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
   name: hydra
   namespace: user
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hydra
    spec:
      containers:
      - name: hydra
        image: oryam/hydra:latest
        env:
        - name: SYSTEM_SECRET
          valueFrom:
            secretKeyRef:
              name: hydra
              key: system.secret
        - name: CONSENT_URL
          valueFrom:
            configMapKeyRef:
              name: hydra
              key: consent.url
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: hydra
              key: database.url
        - name: HTTPS_ALLOW_TERMINATION_FROM
          value: 0.0.0.0/0  # allow TLS termination from any source CIDR (TLS ends at the Ingress)
        command:
        - hydra
        - host
        - --dangerous-auto-logon  # dangerous, but convenient here (see note above)
        ports:
        - name: default
          containerPort: 4444
        - name: other
          containerPort: 4445
        volumeMounts:
        - name: hydra-volume
          mountPath: /root
      volumes:
      - name: hydra-volume
        configMap:
          name: hydra
      - name: client-data
        secret:
          secretName: hydra
  • For idp:
apiVersion: v1
kind: Service
metadata:
  name: consent
  namespace: user
  labels:
    name: consent
spec:
  ports:
    - targetPort: 3000
      port: 3000
  selector:
    app: consent
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
   name: consent
   namespace: user
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consent
    spec:
      containers:
      - name: consent
        image: $docker_repo/hydra-idp-react:latest
        env:
        - name: HYDRA_URL
          valueFrom:
            configMapKeyRef:
              name: hydra
              key: hydra.url
        - name: HYDRA_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: hydra
              key: client.id
        - name: HYDRA_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: hydra
              key: client.secret
        - name: NODE_TLS_REJECT_UNAUTHORIZED
          value: "0"
        ports:
        - containerPort: 3000

      volumes:
      - name: hydra-volume
        configMap:
          name: hydra
      - name: client-data
        secret:
          secretName: hydra
  • For Ingress:
apiVersion: v1
kind: Secret
metadata:
  name: consent
  namespace: user
data:
  tls.crt: $tls_crt
  tls.key: $tls_key
---
apiVersion: v1
kind: Secret
metadata:
  name: hydra
  namespace: user
type: Opaque
data:
  tls.crt: $tls_crt_for_hydra
  tls.key: $tls_key_for_hydra
  system.secret: $secret
  # Root credentials, base64 encoded
  client.id: $root_client_id #base64_root_client_id
  client.secret: $root_client_secret #base64_root_client_secret
  database.url: $database_host_connection

# Root Credentials
#client_id:
#client_secret:

#hydra clients create -a [hydra.keys.get,openid,offline,hydra] -c [https://$redirect_url] -g [authorization_code,implicit,refresh_token,client_credentials] -r [code,token] -n $APP_NAME
# Response
#Client ID:
#Client Secret:

# Navigate to:
# https://oauth.$cluster_domain/oauth2/auth?client_id=$response_client_id&redirect_uri=https://$redirect_url&response_type=code&scope=&state=${...}&nonce=${...}
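Note that the values under data: in a Secret must be base64-encoded; Kubernetes decodes them when injecting them as env vars. To produce a value by hand (the input string is a placeholder):

# Encode a plain-text value for a Secret's data: section ("-n" avoids a trailing newline).
echo -n "some-client-id" | base64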

@michael-golfi
Copy link

michael-golfi commented Feb 22, 2017

So, in order to deploy Hydra and the IdP, I need to run:

kubectl create -f hydra.yaml
kubectl create -f idp.yaml
kubectl create -f ingress.yaml
kubectl create -f hydra-secret.yaml # For hydra ingress and client ids/secrets
kubectl create -f idp-secret.yaml # For idp ingress route
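After that, a quick way to confirm everything came up (namespace as in the manifests above):

# Check that the hydra and consent pods are running.
kubectl get pods --namespace user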

I also realize that in some places I refer to the IdP as "consent". That's the name I use in my configs (I don't use "idp" much), but I tried to change it for the purposes of posting... Sorry for the inconsistencies :/.

One other issue that I don't think can really be solved is that Hydra and the IdP can't be served on the same domain name through Ingress. The Ingress setup uses either path-based routing or domain-based routing.

With path-based routing, the IdP won't find its resources (CSS, JS, ...) since it assumes the base URL is the root, so I had no choice but to serve it on its own domain name. See the sketch below.
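For illustration, a (hypothetical) path-based rule like the one below would route https://$host/idp to the consent Service, but the app then requests its assets from paths like /css/... at the domain root, which no rule matches:

# Hypothetical path-based routing - does NOT work with the consent app:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consent-path-based
  namespace: user
spec:
  rules:
  - host: $host
    http:
      paths:
      - backend:
          serviceName: consent
          servicePort: 3000
        path: /idp  # assets are requested from /css/... and /js/..., which 404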

The only drawback is that this makes the situation a bit more complicated, because serving each component on its own domain requires more SSL certs for HTTPS. If you're in an environment where certs are not easy to order, or wildcard certs aren't possible, this will be a bit more work.
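For reference, a TLS secret like the consent one can also be created directly from cert/key files (file names are placeholders); the hydra secret mixes TLS data with other keys, so it's easier to keep that one in a manifest:

# Create the TLS secret consumed by the consent Ingress route below.
kubectl create secret tls consent --cert=idp.crt --key=idp.key --namespace user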

Ingress routes:

  • For hydra:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: hydra
  namespace: user
spec:
  tls:
  - hosts:
    - oauth.$cluster_domain
    secretName: hydra
  rules:
  - host: oauth.$cluster_domain
    http:
      paths:
      - backend:
          serviceName: hydra
          servicePort: 4444
        path: /oauth2
  - host: $host
    http:
      paths:
      - backend:
          serviceName: gatekeeper
          servicePort: 8080
        path: /api
  • For idp:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: consent
  namespace: user
spec:
  tls:
  - hosts:
    - idp.$cluster_domain
    secretName: consent
  rules:
  - host: idp.$cluster_domain
    http:
      paths:
      - backend:
          serviceName: consent
          servicePort: 3000
        path: /

Then gateway authentication is possible as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/auth-url: "https://oauth.$cluster_domain/api/proxy" # Custom introspect endpoint on Gatekeeper
    ingress.kubernetes.io/enable-cors: "true"
  name: $ingress_name
  namespace: default
spec:
# SSL Settings
#  tls:
#  - hosts:
#     - $host
#       secretName: $some-secret-ssl-cert-for-main-app
  rules:
  - http:
      paths:
      - backend:
          serviceName: $protected-service
          servicePort: 80
        path: /$protected-endpoint
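With the nginx ingress controller, the ingress.kubernetes.io/auth-url annotation issues an authentication subrequest for every incoming request: a 2xx response lets the request through, while 401/403 rejects it. A sketch of the kind of check the custom /api/proxy endpoint on Gatekeeper might perform, assuming it forwards the bearer token to Hydra's RFC 7662 token introspection endpoint (the client credentials and token variable are placeholders):

# Sketch (assumption): validate the access token via Hydra's introspection endpoint.
# "active": true in the response means the token is valid.
curl --fail -X POST https://oauth.$cluster_domain/oauth2/introspect \
  -u "$client_id:$client_secret" \
  -d "token=$access_token"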

@michael-golfi

I'll try to write up something a bit more procedural and easier to follow, as this is pretty messy. :)

@aeneasr
Member Author

aeneasr commented Feb 22, 2017

Thanks for the work! The ultimate goal is to have a section in the docs on running hydra in production - so we could probably add it there (or in the FAQ).

@dkushner
Contributor

@michael-golfi, hey, just wanted to thank you for contributing your Kubernetes use case; it has helped us immensely. I was just curious: what are you doing for client management in production deployments? Are you having to manually set up critical clients each time you deploy?

@michael-golfi

michael-golfi commented May 24, 2017

Hi @dkushner, I'm really happy I was able to help others! I think an important aspect of the deployment is that Hydra connects to a DB (Postgres for us), so whenever we redeploy, most of the credentials are still available as long as the DB is preserved.

I can't remember where exactly I saw this in the docs, but I believe @arekkas recommended deploying twice: once to get the root credentials and persist the setup in the DB, and a second time to erase any trace of the root credentials from any log files (such as our Docker logs).

As for first-time deploys and storing credentials in a running cluster in general, we keep most of our secret info in Kubernetes Secrets. They are only base64-encoded, so normally this might be an issue, but our environment is only accessible by my team, so we didn't consider this a huge roadblock.
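For example, an Opaque Secret like the hydra one above can be created imperatively, letting kubectl handle the base64 encoding (all values are placeholders):

# Create the hydra secret without writing base64 values by hand.
kubectl create secret generic hydra --namespace user \
  --from-literal=system.secret="$secret" \
  --from-literal=client.id="$root_client_id" \
  --from-literal=client.secret="$root_client_secret" \
  --from-literal=database.url="$database_host_connection"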

@aeneasr aeneasr modified the milestones: unplanned, 1.0.0: stable release Jun 5, 2017
@aeneasr
Member Author

aeneasr commented May 25, 2018

We now have a section on the deployment environment and some examples - but it is still up to each environment how the deployment actually works. Thus, I'm closing this issue.

@aeneasr aeneasr closed this as completed May 25, 2018