bug: openid-connect plugin causes nginx to crash on ARM64 #6666

Closed
dkrantsberg opened this issue Mar 20, 2022 · 11 comments
@dkrantsberg

dkrantsberg commented Mar 20, 2022

Current Behavior

I've enabled the openid-connect plugin on a route, but making a request to that route causes the nginx worker process to crash (see the debug-level logs below). I'm running on a Mac M1 (arm64), so this issue could be specific to arm64.

It looks like execution succeeds up to this point:
https://github.com/zmartzone/lua-resty-openidc/blob/master/lib/resty/openidc.lua#L567

Then client.lua and resolver.lua get involved and nginx crashes:

worker process 48 exited on signal 11

This could be specific to arm64. I haven't tried it with x64.

Expected Behavior

Expected the normal OIDC flow.

Error Logs

2022/03/20 16:18:19 [info] 48#48: *1893 [lua] radixtree.lua:346: pre_insert_route(): path: /get operator: =, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] init.lua:398: http_access_phase(): matched route: {"orig_modifiedIndex":779,"createdIndex":178,"clean_handlers":"table: 0x1cadece16508","value":{"create_time":1647628837,"id":"5","plugins":{"openid-connect":{"bearer_only":true,"discovery":"https:\/\/auth.my-auth-server.org\/_api\/auth\/mytenant\/.well-known\/openid-configuration","timeout":3,"access_token_in_authorization_header":true,"ssl_verify":false,"set_userinfo_header":true,"set_access_token_header":true,"realm":"apisix","client_id":"mytenant-ui","scope":"openid","client_secret":"9180987f-bc65-6482-9300-812d3719faa6","logout_path":"\/logout","set_id_token_header":true,"introspection_endpoint_auth_method":"client_secret_basic"}},"update_time":1647654404,"uri":"\/get","status":1,"priority":0,"upstream":{"hash_on":"vars","type":"roundrobin","parent":{"orig_modifiedIndex":779,"createdIndex":178,"clean_handlers":{},"value":"table: 0x1cadecfab320","key":"\/apisix\/routes\/5","update_count":0,"modifiedIndex":779,"has_domain":true},"scheme":"http","pass_host":"pass","nodes":[{"weight":1,"host":"httpbin.org","port":80}]}},"key":"\/apisix\/routes\/5","update_count":0,"modifiedIndex":779,"has_domain":true}, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [debug] 48#48: *1893 [lua] openidc.lua:565: openidc_discover(): openidc_discover: URL is: https://auth.my-auth-server.org/_api/auth/mytenant/.well-known/openid-configuration
2022/03/20 16:18:19 [debug] 48#48: *1893 [lua] openidc.lua:571: openidc_discover(): discovery data not in cache, making call to discovery endpoint
2022/03/20 16:18:19 [debug] 48#48: *1893 [lua] openidc.lua:408: openidc_configure_proxy(): openidc_configure_proxy : don't use http proxy
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] client.lua:126: dns_parse(): dns resolve auth.my-auth-server.org, result: {"name":"auth.my-auth-server.org","class":1,"address":"38.134.56.123","ttl":4502,"section":1,"type":1}, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] resolver.lua:39: parse_domain(): parse addr: {"name":"auth.my-auth-server.org","class":1,"type":1,"section":1,"address":"38.134.56.123","ttl":4502}, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] resolver.lua:40: parse_domain(): resolver: ["127.0.0.11"], client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] resolver.lua:41: parse_domain(): host: auth.my-auth-server.org, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 48#48: *1893 [lua] resolver.lua:43: parse_domain(): dns resolver domain: auth.my-auth-server.org to 38.134.56.123, client: 172.26.0.1, server: _, request: "GET /get HTTP/1.1", host: "localhost:9080"
2022/03/20 16:18:19 [info] 52#52: *1916 [lua] timers.lua:39: run timer[plugin#server-info], context: ngx.timer
2022/03/20 16:18:19 [notice] 1#1: signal 17 (SIGCHLD) received from 48
2022/03/20 16:18:19 [alert] 1#1: worker process 48 exited on signal 11
2022/03/20 16:18:19 [notice] 1#1: start worker process 59
2022/03/20 16:18:19 [notice] 1#1: signal 29 (SIGIO) received
2022/03/20 16:18:19 [notice] 59#59: sched_setaffinity(): using cpu #3

Steps to Reproduce

  1. Run docker-compose with this file: https://github.com/apache/apisix-docker/blob/master/example/docker-compose-arm64.yml
  2. Enable the openid-connect plugin:
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "uri": "/get",
  "plugins": {
    "openid-connect": {
      "client_id": "my-client",
      "client_secret": "XXXX-XXXX-XXX",
      "discovery": "https://my-auth-server/myapi/.well-known/openid-configuration",
      "access_token_in_authorization_header": true,
      "bearer_only": true
    }    
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'
  3. Make a request to the route:
curl -i -X GET http://127.0.0.1:9080/get -H "Authorization: Bearer #####token####"   

curl: (52) Empty reply from server
  4. The request gets no reply because nginx crashes (the worker exit can be confirmed in the error log, as shown below).
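
To confirm that the worker really crashed (rather than the upstream failing), check the error log for the "exited on signal 11" line shown above. A minimal check; the host log directory and the container name are assumptions, and the in-container path is the default APISIX log location, so adjust both to your setup:

# on the host, if the logs directory is mounted out of the container (as in the example compose files)
grep "exited on signal 11" ./apisix_log/error.log

# or inside the container, using the default APISIX error log path (assumption)
docker exec <apisix-container> tail -n 50 /usr/local/apisix/logs/error.log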

Environment

  • Host OS: macOS 12.2.1
  • Docker version: 20.10.13
  • APISIX version: 2.12.1
  • Operating system: Linux dc3c486f70f2 5.10.104-linuxkit #1 SMP PREEMPT Wed Mar 9 19:01:25 UTC 2022 aarch64 Linux
  • OpenResty / Nginx version: openresty/1.19.9.1
  • etcd version: 3.4.16
  • Plugin runner version: not sure how to get it
  • LuaRocks version: 3.8.0
@dkrantsberg
Author

Update: it looks like the issue is specific to Alpine Linux on ARM64. I've switched to the CentOS-based Docker image (apache/apisix:2.12.1-centos) and the error is gone.

@tzssangglass
Member

> Update: it looks like the issue is specific to Alpine Linux on ARM64. I've switched to the CentOS-based Docker image (apache/apisix:2.12.1-centos) and the error is gone.

Yes, support on M1 is not perfect at the moment. It would be nice if you could provide a core dump file for this alert.

@membphis
Member

@soulbird @shuaijinchao please take a look at this issue

@soulbird
Contributor

I'll take a look

@moonming
Member

I think Apache APISIX should add ARM64 to CI

@soulbird
Contributor

I used the same test environment as you, but I couldn't reproduce it. One difference is the discovery endpoint, so if you can expose the service at auth.my-auth-server.org (38.134.56.123), I will try again.
It would be nice if you could provide a core dump file for this alert.

@moonming
Member

> I used the same test environment as you, but I couldn't reproduce it. One difference is the discovery endpoint, so if you can expose the service at auth.my-auth-server.org (38.134.56.123), I will try again. It would be nice if you could provide a core dump file for this alert.

@soulbird Did you use the arm64 local environment?

@soulbird
Contributor

> I used the same test environment as you, but I couldn't reproduce it. One difference is the discovery endpoint, so if you can expose the service at auth.my-auth-server.org (38.134.56.123), I will try again. It would be nice if you could provide a core dump file for this alert.
>
> @soulbird Did you use the arm64 local environment?

I used macOS on an M1 with Docker to try to reproduce it, but could not trigger the crash.

@soulbird
Contributor

soulbird commented Mar 23, 2022

When the process exits unexpectedly, you can use the following method to get a core dump file.

Step 1: adjust the apisix image's startup parameters in docker-compose

  apisix:
    image: apache/apisix:2.12.1-alpine
    restart: always
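    # privileged mode is presumably what allows the ulimit / kernel.core_pattern
    # changes in the later steps to take effect inside the container (assumption)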
    privileged: true
    volumes:
      - ./apisix_log:/usr/local/apisix/logs
      - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
    depends_on:
      - etcd
    ports:
      - "9080:9080/tcp"
      - "9091:9091/tcp"
      - "9443:9443/tcp"
      - "9092:9092/tcp"
    networks:
      apisix:

Step 2: adjust the core file size limit

$ ulimit -c
unlimited
# If the output is 0, use the following command to lift the limit (or set a custom size).
$ ulimit -c unlimited

Step 3: change the default core file location

$ sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%t
$ sysctl -p /etc/sysctl.conf

When a worker process exits unexpectedly, you will see the core dump file in the /tmp directory.

$ ll /tmp
 rw------- 1  root  root  280 MiB  Mon Mar 21 14:11:22 2022  core-openresty.29770.1647843081
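
Once a core file appears, a backtrace is usually the most useful thing to attach to the issue. A minimal sketch of inspecting it inside the Alpine-based container; the gdb install step, the nginx binary path, and the core file name below are assumptions, adjust them to your image:

# install gdb in the container (assumes apk is available in the Alpine image)
$ apk add gdb

# load the core dump against the nginx binary that produced it
# (binary path is an assumption; `which openresty` or `nginx -V` in your image will confirm it)
$ gdb /usr/local/openresty/nginx/sbin/nginx /tmp/core-openresty.29770.1647843081

# inside gdb, print the full backtrace of the crashed worker
(gdb) bt full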

@github-actions

This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions.

@github-actions github-actions bot added the stale label Mar 15, 2023
@github-actions

This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.

@github-actions github-actions bot closed this as not planned Mar 29, 2023