
syncs: add LockFunc, LockValue, LockValues, and Mutex #12802

Open · wants to merge 1 commit into main from dsnet/syncs-lock
Conversation

Member
@dsnet dsnet commented Jul 13, 2024

The first 3 functions are helpers for running functions under the protection of a lock.

The Mutex type is a wrapper over sync.Mutex with a Do method that runs a function under the protection of a lock.

Updates golang/go#63941
Updates tailscale/corp#11038
Updates #cleanup

@dsnet dsnet requested review from raggi, bradfitz and maisem July 13, 2024 21:35
Signed-off-by: Joe Tsai <[email protected]>
@dsnet dsnet force-pushed the dsnet/syncs-lock branch from a0b9cea to 2057205 Compare July 13, 2024 21:40
```go
}

// LockValue runs f while holding the lock and returns f's result.
func LockValue[T any](lock sync.Locker, f func() T) T {
```
Contributor
what about LockValue[L sync.Locker, T any](lock L, f func() T) T?

Contributor

primarily to avoid the interface wrapper in hot code (not sure if that holds with generics)

```go
}

// LockValue runs f while holding the lock and returns f's result.
func LockValue[T any](lock sync.Locker, f func() T) T {
```
Contributor

LockValue is kinda weird, maybe GetWithLock and DoWithLock or just Get and Do like the must package?

Contributor
@maisem maisem Jul 13, 2024

or maybe a locks.Get/locks.Do?

nvm, syncs is fine

Comment on lines +334 to +342
```go
// Mutex is identical to [sync.Mutex], but with additional methods.
type Mutex struct{ sync.Mutex }

// Do runs f while holding the lock.
func (m *Mutex) Do(f func()) {
	m.Lock()
	defer m.Unlock()
	f()
}
```
Member

@raggi has aspirations of getting static analysis doing lock checking in the tree. I wonder what this would do both to that and to the readability/consistency of mutex usage in the tree.

I don't want to end up in a place where we have a bunch of different locking styles.

Member

I'd started with gVisor's static lock checks; sadly, they don't yet support various patterns we use a lot. This pattern is helpful there, as it disallows one of the harder cases.

Member Author
@dsnet dsnet Jul 16, 2024

I'm not exactly sure what @raggi had in mind, but I would argue that this would assist future static analysis.

One challenge in static analysis is correlating the set of variables that are actually guarded by a lock to a particular lock. Today, the typical pattern is this:

```go
mu.Lock()
defer mu.Unlock()

... // critical section

... // uncritical section accidentally covered by mu, since defers execute late
```

However, this may keep the lock held over more code, and thus guard more variables, than is necessary.

The initial motivation for this was to make critical sections clearer in code, since it would now be:

```go
syncs.LockFunc(&mu, func() {
	... // critical section
})
... // uncritical section
```

Member

https://github.com/tailscale/tailscale/actions/workflows/checklocks.yml was running for a while in a non-blocking mode. Unfortunately it has problems with bounds as you describe, but most particularly with patterns like:

```go
m.Lock()
defer m.Unlock()

go func() { m.Lock(); defer m.Unlock(); ... }()
```

This pattern shows up in a good number of places in our code, in either local or non-local form.

We could do with cleaning those cases up too, for other reasons, but as it stands this blocks using the tool.

```go
var nodesMu sync.Mutex
var nodes []string
n := syncs.LockValue(&nodesMu, func() int { return len(nodes) })
log.Printf("there are %d nodes", n)
```
Member

prefer fmt.Printf/println; then you can add // Output: lines, which will be used to assert that the output is still stable, making these functional tests

Member

raggi commented Jul 16, 2024

Did you consider an actual locked value? For example, a number of modern languages now put values behind mutex APIs, e.g.:

```go
type Mutex[T any] struct {
	m     sync.Mutex
	value T
}

func (m *Mutex[T]) Do(f func(T)) {
	m.m.Lock()
	defer m.m.Unlock()
	f(m.value)
}

// Get returns the stored value. If T is not a copy type or contains
// embedded references, the returned value may not be safe for use.
func (m *Mutex[T]) Get() T {
	m.m.Lock()
	defer m.m.Unlock()
	return m.value
}

func (m *Mutex[T]) Set(t T) {
	m.m.Lock()
	defer m.m.Unlock()
	m.value = t
}
```
