fix for mutex key normalization #699

Closed · wants to merge 2 commits
Conversation

phi1ipp (Contributor) commented Oct 8, 2021

Fixed `normalizeKey` for the `/api/v1/apps/xxxxxxx/[users|groups|sso]` cases.

Commits:

1. fixed normalizeKey for /api/v1/apps/xxxxxxx/[users|groups|sso] cases
2. `regexp` was missed in the previous commit

```diff
 case endPoint == "/api/v1/apps":
 	result = APPS_KEY
-case strings.HasPrefix(endPoint, "/api/v1/apps/"):
+case regexp.MustCompile("/api/v1/apps/[^/]+$").MatchString(endPoint):
```
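To illustrate the effect of the change, here is a minimal, self-contained sketch of how the corrected switch segregates the paths. The key constants other than `APPS_KEY` (`APP_ID_KEY`, `OTHER_KEY`) and the function shape are hypothetical stand-ins for this example, not the actual names in `okta/internal/apimutex`:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Key names below are assumptions for illustration; only APPS_KEY
// appears in the diff above.
const (
	APPS_KEY   = "apps"
	APP_ID_KEY = "app-id" // hypothetical key for /api/v1/apps/{appId}
	OTHER_KEY  = "other"  // hypothetical catch-all key
)

// appIDRe matches /api/v1/apps/{appId} but NOT deeper paths such as
// /api/v1/apps/{appId}/groups, because [^/]+$ forbids a further slash.
var appIDRe = regexp.MustCompile("/api/v1/apps/[^/]+$")

func normalizeKey(endPoint string) string {
	switch {
	case endPoint == "/api/v1/apps":
		return APPS_KEY
	case appIDRe.MatchString(endPoint):
		return APP_ID_KEY
	case strings.HasPrefix(endPoint, "/api/v1/apps/"):
		// /api/v1/apps/{appId}/users, /groups, /sso land here instead of
		// being lumped together with /api/v1/apps/{appId}
		return OTHER_KEY
	default:
		return OTHER_KEY
	}
}

func main() {
	fmt.Println(normalizeKey("/api/v1/apps/0oa123"))        // app-id
	fmt.Println(normalizeKey("/api/v1/apps/0oa123/groups")) // other
}
```

With the old `strings.HasPrefix` check, both paths would have fallen into the same bucket and shared one rate-limit entry.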
monde (Collaborator) commented Oct 8, 2021

@phi1ipp some things come to mind:

phi1ipp (Contributor, author) replied:

@monde, the problem is with parallelism, so it's not easily reproducible.

Let me explain what I discovered while running the code with a lot of debugging statements injected here and there.

When I saw that it was waiting for a reset because 95/1800 was left in the mutex for app_id, I started looking for which API call could write such values into apiMutex. Of course there was none for that combination: 95 would be the remaining limit for a /api/v1/xxxx/groups?limit=xx call, while 1800 comes from /api/v1/apps/xxxxxxx.

So I started checking this, and the suggested fix takes care of it by segregating /api/v1/apps/{appId} calls, whose default limit is 600, from /api/v1/apps/{appId}/groups, whose limit is 100 (see https://developer.okta.com/docs/reference/rl-global-mgmt/).

monde (Collaborator) replied:

There is a mutex on the struct that holds the path accounting / rate limits. I think the bug is that it was not taking into account paths for the /api/v1/apps/xxxxxxx/[users|groups|sso] cases. This made it look like a parallelism bug, but I don't think it is that. I added more information to the sleeping log line in my PR #700.

phi1ipp (Contributor, author) replied:

@monde,
I know there is a mutex, but it only acts during Update, so concurrent updates are not allowed. Other goroutines can still read apiMutex while an update is happening, and you can't guarantee that one won't read it in the middle of the update.

I think my observations prove it; how else would it get a status of 95/1800? It should be either 95/100 or 1795/1800.

Just thinking out loud.
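The read-during-update hazard described above is commonly addressed by guarding reads as well as writes. A minimal sketch, assuming a hypothetical per-key state (`rateInfo`, `apiMutex`, and the method names here are illustrative, not the provider's actual types), using `sync.RWMutex` so a reader can never observe a half-applied update:

```go
package main

import (
	"fmt"
	"sync"
)

// rateInfo is a hypothetical stand-in for per-key rate-limit state.
type rateInfo struct {
	remaining int
	limit     int
}

// apiMutex is an illustrative container, not the provider's real struct.
type apiMutex struct {
	mu    sync.RWMutex
	rates map[string]rateInfo
}

// Update replaces both fields while holding the write lock, so a reader
// can never see remaining from one response paired with limit from another
// (e.g. the 95/1800 mix observed above).
func (m *apiMutex) Update(key string, remaining, limit int) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.rates[key] = rateInfo{remaining: remaining, limit: limit}
}

// Status takes the read lock; concurrent readers proceed in parallel,
// but all of them block while an Update holds the write lock.
func (m *apiMutex) Status(key string) rateInfo {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.rates[key]
}

func main() {
	m := &apiMutex{rates: map[string]rateInfo{}}
	m.Update("app", 1795, 1800)
	fmt.Println(m.Status("app")) // {1795 1800}
}
```

Note this only guarantees consistent snapshots; if two goroutines write different limits under the same key (the normalization bug this PR fixes), an RWMutex cannot help, which is why the key segregation matters independently.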

A review thread on okta/internal/apimutex/apimutex.go was resolved.
monde (Collaborator) commented Oct 8, 2021

I think I see the logical flaw after re-reading https://developer.okta.com/docs/reference/rl-global-mgmt/ and @phi1ipp's regexp `regexp.MustCompile("/api/v1/apps/[^/]+$")`. I'll work on a PR.

monde (Collaborator) commented Oct 8, 2021

@phi1ipp @bogdanprodan-okta, this is also an opportunity for me to double-check another optimization I've been thinking about for rate limiting.

@bogdanprodan-okta bogdanprodan-okta linked an issue Oct 8, 2021 that may be closed by this pull request
Successfully merging this pull request may close these issues.

max_api_capacity not working properly