Issue #5505 was fixed for a single node in RC2 by PR #5520, but it is still broken for clusters. The fix for a single node was to:

1. update the user's password
2. on a successful update, have `meta.Client` delete that user from `meta.Client.authCache`

The problem in a cluster is that only the `meta.Client` on the node that executed the update knows to update its `authCache`. The other nodes still have an out-of-date `authCache` containing the hash for the old password.
A quick hack would be to delete the entire `authCache` every time the loop in `meta.Client.retryUntilSnapshot` gets a new snapshot. A better way would be to individually update each user in the `authCache` from the user data in the new snapshot.
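The per-user reconciliation could be sketched roughly as below. The types and field names (`UserInfo`, `Client.authCache`, `updateAuthCache`) are simplified stand-ins for illustration, not the actual influxdb structures:

```go
package main

import "fmt"

// UserInfo is a hypothetical, simplified stand-in for a user entry in a
// meta snapshot.
type UserInfo struct {
	Name string
	Hash string // password hash stored in the meta snapshot
}

// Client is a stand-in for meta.Client; authCache maps user name to the
// hash that last authenticated successfully on this node.
type Client struct {
	authCache map[string]string
}

// updateAuthCache reconciles the cache against a freshly received snapshot:
// entries whose stored hash changed (password updated on another node) or
// whose user no longer exists are evicted, instead of dropping the whole cache.
func (c *Client) updateAuthCache(snapshot []UserInfo) {
	current := make(map[string]string, len(snapshot))
	for _, u := range snapshot {
		current[u.Name] = u.Hash
	}
	for name, cachedHash := range c.authCache {
		if newHash, ok := current[name]; !ok || newHash != cachedHash {
			delete(c.authCache, name)
		}
	}
}

func main() {
	c := &Client{authCache: map[string]string{"alice": "hash-v1", "bob": "hash-v1"}}
	// New snapshot arrives: alice's password changed on another node, bob's did not.
	c.updateAuthCache([]UserInfo{{"alice", "hash-v2"}, {"bob", "hash-v1"}})
	fmt.Println(len(c.authCache)) // alice evicted, bob retained
}
```

This keeps valid cache entries warm across snapshots while still evicting any user whose password changed elsewhere in the cluster.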
I think we're going about authentication the wrong way. bcrypt is intended to slow down brute-force attempts at guessing passwords, and the authCache is just a way to make checking passwords faster (and weaker). Once a user has successfully authenticated to a node, future attempts for that user will return near-instantly, just about eliminating the initial perceived benefit of using bcrypt.
What this means is that for a long-running node, authentication attempts for any active user can be done in bulk in a short amount of time.
If we really want bcrypt, and are willing to take the timing hit, we should just drop the cost factor down to an acceptable value. If speed is critical (and I would think it is), and we want to continue with our current authentication scheme of sending passwords over the wire, then I would suggest that we just store the passwords as salted hashes. It would be no less secure than what we currently have, other than the on-disk storage. This would require a slightly larger change, and code to gracefully transition user data in the meta store.
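The salted-hash scheme suggested above could look something like this. This is a sketch only: SHA-256 is used because it is in the Go standard library, and the helper names are hypothetical; a real change would choose the primitive and storage format deliberately:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// hashPassword generates a random salt and returns it alongside the salted
// digest. Both would be stored in the meta store in place of a bcrypt hash.
func hashPassword(password string) (salt, digest []byte, err error) {
	salt = make([]byte, 16)
	if _, err = rand.Read(salt); err != nil {
		return nil, nil, err
	}
	h := sha256.Sum256(append(salt, []byte(password)...))
	return salt, h[:], nil
}

// checkPassword recomputes the salted digest and compares in constant time
// to avoid leaking the match position via timing.
func checkPassword(password string, salt, digest []byte) bool {
	h := sha256.Sum256(append(salt, []byte(password)...))
	return subtle.ConstantTimeCompare(h[:], digest) == 1
}

func main() {
	salt, digest, _ := hashPassword("s3cret")
	fmt.Println(checkPassword("s3cret", salt, digest)) // true
	fmt.Println(checkPassword("wrong", salt, digest))  // false
}
```

Verification here is a single fast hash, so no authCache would be needed at all, which is what removes the bulk-guessing window described above.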
I'd be okay with fixing this quickly so that it works, and revisiting our authentication scheme afterward. The fix would be to fall-through the cache on an invalid password, instead of treating the cache as authoritative. If we're going to keep bcrypt, then it will still penalize invalid passwords while allowing the correct password through quickly (after the first attempt). This also means we can revert the change in #5520 that treats the local authCache differently than remote nodes.
On second thought, the old passwords would still be valid, so we would have to also store the bcrypt hash and compare them when a new snapshot arrives.