
tenantcostserver: use the tenant_usage system table #68115

Merged: 3 commits merged into cockroachdb:master on Jul 31, 2021

Conversation

@RaduBerinde (Member)

This PR prototypes a new tenant_usage schema based on @ajwerner's suggestion in the RFC (#66436). I will update the RFC after getting some initial feedback here.

sql: add tenant_usage system table

Add the system table described in the RFC (#66436).

The table is only created for the system tenant.

Release note: None

tenantcostserver: use the tenant_usage system table

This change implements most of the interaction with the tenant_usage
system table, with the exception of dead instances detection and
clean-up.

We currently tolerate an empty table, but it would be cleaner to
initialize the tenant state (instance_id=0 row) at tenant creation
time (+ implement a migration). I will explore this in a future
change, when we add some configurable defaults for the refill rate
etc.

Release note: None

@RaduBerinde RaduBerinde requested review from ajwerner and a team July 27, 2021 16:39
@RaduBerinde RaduBerinde requested review from a team as code owners July 27, 2021 16:39
@RaduBerinde RaduBerinde requested review from adityamaru and removed request for a team July 27, 2021 16:39
@cockroach-teamcity (Member)

This change is Reviewable

@RaduBerinde RaduBerinde force-pushed the mt-systable-new branch 2 times, most recently from 867e4a8 to f4f0be3 Compare July 27, 2021 21:43
@ajwerner (Contributor) left a comment

I like it. How do you feel about it?

Reviewed 32 of 32 files at r1, 6 of 11 files at r2.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 371 at r2 (raw file):

// checkInvariants reads all rows in the system table for the given tenant and
// checks that the state is consistent.
func (h *sysTableHelper) checkInvariants() error {

I find it surprising that this method returns an error but also panics. Did you intend to convert those to errors?


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 381 at r2 (raw file):

			next_instance_id,          /* 1 */
			ru_burst_limit,            /* 2 */
			ru_refill_rate,            /* 3 */        

trailing spaces


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 35 at r2 (raw file):

	}

	result := &roachpb.TokenBucketResponse{}

consider resetting the result at the top of the closure. If you make sure to update every field on all call paths, it's fine, but I've been bitten by early returns and populated results in the past.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 74 at r2 (raw file):

Quoted 7 lines of code…
		m := s.metrics.getTenantMetrics(tenantID)
		m.totalRU.Update(tenant.Consumption.RU)
		m.totalReadRequests.Update(int64(tenant.Consumption.ReadRequests))
		m.totalReadBytes.Update(int64(tenant.Consumption.ReadBytes))
		m.totalWriteRequests.Update(int64(tenant.Consumption.WriteRequests))
		m.totalWriteBytes.Update(int64(tenant.Consumption.WriteBytes))
		m.totalSQLPodsCPUSeconds.Update(tenant.Consumption.SQLPodCPUSeconds)

Should this happen outside of the closure?


pkg/sql/catalog/systemschema/system.go, line 515 at r1 (raw file):

Quoted 50 lines of code…
  tenant_id INT NOT NULL,
	-- For each tenant, there is a special row with instance_id = 0 which contains
	-- per-tenant stat. Each SQL instance (pod) also has its own row with
	-- per-instance state.
	instance_id INT NOT NULL,
	-- next_instance_id identifies the next live instance ID, with the smallest ID
	-- larger than this instance_id (or 0 if there is no such ID).
	-- We are overlaying a circular linked list of all live instances, with
	-- instance 0 acting as a sentinel (always the head of the list).
	next_instance_id INT NOT NULL,
	-- -------------------------------------------------------------------
	--  The following fields are used only for the per-tenant state, when
	--  instance_id = 0.
	-- -------------------------------------------------------------------
  -- Bucket configuration.
  ru_burst_limit FLOAT,
  ru_refill_rate FLOAT,
  -- Current amount of RUs in the bucket.
  ru_current FLOAT,
  -- Current sum of the shares values for all instances.
  current_share_sum FLOAT,
  -- Cumulative usage statistics.
  total_ru_usage            FLOAT,
  total_read_requests       INT,
  total_read_bytes          INT,
  total_write_requests      INT,
  total_write_bytes         INT,
  total_sql_pod_cpu_seconds FLOAT, -- TODO: Maybe milliseconds and INT8?
	-- -------------------------------------------------------------
	--  The following fields are used for per-instance state, when
	--  instance_id != 0.
	-- --------------------------------------------------------------
	-- The lease is a unique identifier for this instance, necessary because
	-- instance IDs can be reused.
	instance_lease BYTES,
	-- Last request sequence number. These numbers are provided by the
	-- instance and are monotonically increasing; used to detect duplicate
	-- requests and provide idempotency.
	instance_seq INT,
	-- Current shares value for this instance.
  instance_shares FLOAT,
	-- Time when we last heard from this instance.
	instance_last_update TIMESTAMP,
	FAMILY "primary" (
	  tenant_id, instance_id, next_instance_id,
		ru_burst_limit, ru_refill_rate, ru_current, current_share_sum,
		total_ru_usage, total_read_requests, total_read_bytes, total_write_requests,
		total_write_bytes, total_sql_pod_cpu_seconds,
		instance_lease, instance_seq, instance_shares, instance_last_update
	),

nit: the indentation is a bit off


pkg/sql/tests/system_table_test.go, line 215 at r1 (raw file):

			t.Errorf("\n%#v\n", test.pkg.TableDesc())
			t.Errorf("\n%#v\n", gen.TableDesc())

nit: either remove or move into the below error call

@RaduBerinde (Member, Author) left a comment

I like it too. Having the per-instance state makes things less fragile and easier to debug/understand, and it is more extensible - I can imagine various features that would need per-instance state.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru and @ajwerner)


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 371 at r2 (raw file):

Previously, ajwerner wrote…

I find it surprising that this method returns an error but also panics. Did you intend to convert those to errors?

Right, fixed.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 381 at r2 (raw file):

Previously, ajwerner wrote…

trailing spaces

Fixed.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 35 at r2 (raw file):

Previously, ajwerner wrote…

consider resetting the result at the top of the closure. If you make sure to update every field on all call paths, it's fine, but I've been bitten by early returns and populated results in the past.

Agreed, done.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 74 at r2 (raw file):

Previously, ajwerner wrote…
		m := s.metrics.getTenantMetrics(tenantID)
		m.totalRU.Update(tenant.Consumption.RU)
		m.totalReadRequests.Update(int64(tenant.Consumption.ReadRequests))
		m.totalReadBytes.Update(int64(tenant.Consumption.ReadBytes))
		m.totalWriteRequests.Update(int64(tenant.Consumption.WriteRequests))
		m.totalWriteBytes.Update(int64(tenant.Consumption.WriteBytes))
		m.totalSQLPodsCPUSeconds.Update(tenant.Consumption.SQLPodCPUSeconds)

Should this happen outside of the closure?

I was concerned about a race between two different requests updating the metrics in opposite order. But this approach has a different problem: we may update the metrics even if our transaction ultimately fails to commit.

I moved them out and added a TODO. This will go away if we implement the rangefeed-based solution.

@RaduBerinde RaduBerinde requested a review from andy-kimball July 29, 2021 20:05
@RaduBerinde RaduBerinde requested a review from a team July 29, 2021 22:10
@andy-kimball (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 31 at r6 (raw file):

// pod for this tenant has a different instance ID (but note that IDs can be
// reused).
type InstanceID int32

Are you avoiding importing base.SQLInstanceID?


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 67 at r6 (raw file):

	// incarnations of the same ID.
	Lease tree.DBytes
	// Seq is a sequence number used to

Comment cut off.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 282 at r6 (raw file):

// next_instance_id in the table (or updates tenant.FirstInstance).
//
// Note that this should only happen when a SQL pod starts up (it is not in the

We do consider startup of a SQL pod as a "hot path" and we carefully measure/monitor this time. This is because the latency of resuming from suspension is so important. If the tenant system table is in another region from the SQL pod, we'll want to make sure that creation of the pod isn't delayed by a roundtrip call to the table. From what I can see, we're only calling this lazily when we discover an instance does not yet exist, as part of reporting tenant usage to the server. That's good, since that will definitely keep this out of the pod startup hot path. But you may want to tweak this comment to make it more clear this won't delay pod startup.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 286 at r6 (raw file):

func (h *sysTableHelper) accomodateNewInstance(tenant *tenantState, instance *instanceState) error {
	if tenant.FirstInstance == 0 || tenant.FirstInstance > instance.ID {
		// Thew new instance has the lowest ID.

SP: The


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 328 at r6 (raw file):

		h.ctx, "update-next-id", h.txn,
		sessiondata.NodeUserSessionDataOverride,
		// Update the previous instance's next_instnace_id.

SP: next_instance_id


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

	// Report current consumption.
	// TODO(radu): there is a possible race here, where two different requests

@ajwerner, how come these metrics are gauges and not counters? They can only go up, right?

To fix the race, can we add a CompareAndSwap primitive to the gauge struct, or maybe a SetIfHigher primitive to the counter struct? In both cases, we'd ensure that these metrics can only go up.

@ajwerner (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

@ajwerner, how come these metrics are gauges and not counters? They can only go up, right?

To fix the race, can we add a CompareAndSwap primitive to the gauge struct, or maybe a SetIfHigher primitive to the counter struct? In both cases, we'd ensure that these metrics can only go up.

I don't think they can be counters in that we want them to reflect the aggregate value across all instances as observed from any kv node and the nodes communicate with each other through the table. I'm not clear on how we could use a counter.

A SetIfHigher primitive sounds reasonable and straightforward.

@ajwerner (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, ajwerner wrote…

I don't think they can be counters in that we want them to reflect the aggregate value across all instances as observed from any kv node and the nodes communicate with each other through the table. I'm not clear on how we could use a counter.

A SetIfHigher primitive sounds reasonable and straightforward.

Speaking of the race, should we have a mutex or something to only allow one request for a tenant to proceed on a given gateway at a time? It feels like we can avoid extra contention here and also stop thinking about this race.

@andy-kimball (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, ajwerner wrote…

Speaking of the race, should we have a mutex or something to only allow one request for a tenant to proceed on a given gateway at a time? It feels like we can avoid extra contention here and also stop thinking about this race.

From the Prometheus documentation:

Counter vs. gauge, summary vs. histogram
It is important to know which of the four main metric types to use for a given metric.

To pick between counter and gauge, there is a simple rule of thumb: if the value can go down, it is a gauge.

Counters can only go up (and reset, such as when a process restarts). They are useful for accumulating the number of events, or the amount of something at each event. For example, the total number of HTTP requests, or the total number of bytes sent in HTTP requests. Raw counters are rarely useful. Use the rate() function to get the per-second rate at which they are increasing.

Gauges can be set, go up, and go down. They are useful for snapshots of state, such as in-progress requests, free/total memory, or temperature. You should never take a rate() of a gauge.

If Prometheus knows that a value can only go up (like these ones), it can do a better job handling gaps in the data as well as resets to zero. I realize that we currently only have an Inc method on the counter class, but I don't see a reason why a UpdateIfHigher method wouldn't also fit the intent and constraints of a counter. As long as the value always goes up, it doesn't seem important how it gets there - through delta increment or absolute update.

@ajwerner (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

From the Prometheus documentation:

Counter vs. gauge, summary vs. histogram
It is important to know which of the four main metric types to use for a given metric.

To pick between counter and gauge, there is a simple rule of thumb: if the value can go down, it is a gauge.

Counters can only go up (and reset, such as when a process restarts). They are useful for accumulating the number of events, or the amount of something at each event. For example, the total number of HTTP requests, or the total number of bytes sent in HTTP requests. Raw counters are rarely useful. Use the rate() function to get the per-second rate at which they are increasing.

Gauges can be set, go up, and go down. They are useful for snapshots of state, such as in-progress requests, free/total memory, or temperature. You should never take a rate() of a gauge.

If Prometheus knows that a value can only go up (like these ones), it can do a better job handling gaps in the data as well as resets to zero. I realize that we currently only have an Inc method on the counter class, but I don't see a reason why a UpdateIfHigher method wouldn't also fit the intent and constraints of a counter. As long as the value always goes up, it doesn't seem important how it gets there - through delta increment or absolute update.

Makes sense. We can add an UpdateIfHigher method to the counter type with equal ease as to the gauge.

@RaduBerinde (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, ajwerner wrote…

Makes sense. We can add an UpdateIfHigher method to the counter type with equal ease as to the gauge.

Part of the problem is that a couple of these are floats (including the most important one, the RUs). We don't have a float counter. We could add one, but I don't know to what extent a float counter would make sense to Prometheus.

A mutex SGTM.

@andy-kimball (Contributor)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, RaduBerinde wrote…

Part of the problem is that a couple of these are floats (including the most important one, the RUs). We don't have a float counter. We could add it but I don't know to what extent a float counter would make sense to Prometheus?

A mutex SGTM.

The Prometheus Go library returns all samples as float64:

// A SampleValue is a representation of a value for a given sample at a given
// time.
type SampleValue float64

@RaduBerinde (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

The Prometheus Go library returns all samples as float64:

// A SampleValue is a representation of a value for a given sample at a given
// time.
type SampleValue float64

I see. I will investigate switching to counters in a separate change.

@andy-kimball (Contributor)


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, RaduBerinde wrote…

I see. I will investigate switching to counters in a separate change.

Maybe open a task with this info. This could be a task

@RaduBerinde (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, and @andy-kimball)


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 31 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

Are you avoiding importing base.SQLInstanceID?

Fixed.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 67 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

Comment cut off.

Done.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 282 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

We do consider startup of a SQL pod as a "hot path" and we carefully measure/monitor this time. This is because the latency of resuming from suspension is so important. If the tenant system table is in another region from the SQL pod, we'll want to make sure that creation of the pod isn't delayed by a roundtrip call to the table. From what I can see, we're only calling this lazily when we discover an instance does not yet exist, as part of reporting tenant usage to the server. That's good, since that will definitely keep this out of the pod startup hot path. But you may want to tweak this comment to make it more clear this won't delay pod startup.

Yes, we are on the same page, improved the comment.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 286 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

SP: The

Done.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 328 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

SP: next_instance_id

Done.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

Maybe open a task with this info. This could be a task

Filed #68291

Added the mutex in separate commit.

Add the system table described in the RFC (cockroachdb#66436).

The table is only created for the system tenant.

Release note: None
@RaduBerinde RaduBerinde force-pushed the mt-systable-new branch 2 times, most recently from 42aac1b to 0f92416 Compare July 30, 2021 21:27
@andy-kimball (Contributor) left a comment

:lgtm:, with a few nits.

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/metrics.go, line 112 at r9 (raw file):

	// Mutex is used to atomically update metrics together with a corresponding
	// change to the system table.
	mutex *syncutil.Mutex

I don't think this needs to be a pointer, does it? Why allocate an extra object?


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 33 at r9 (raw file):

	Present bool

	// Smallest active instance ID.

SUPER NIT: These comments should start with field name to be idiomatic go; similar issues in instanceState.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, RaduBerinde wrote…

Filed #68291

Added the mutex in separate commit.

Whoops, my comment got cut off. I meant to say, "This could be a task for someone else". Also, you could add the work to fix the TODO in that task by adding an UpdateIfHigher or RatchetUp method. How come you removed the TODO comment from the metrics update code below?


pkg/roachpb/api.proto, line 2276 at r9 (raw file):

  uint64 tenant_id = 2 [(gogoproto.customname) = "TenantID"];

  // ID of the SQL pod instance from where the request originates.

NIT: Would be nice to start with field name, using Go formatting, i.e. InstanceID.

This change implements most of the interaction with the tenant_usage
system table, with the exception of dead instances detection and
clean-up.

We currently tolerate an empty table, but it would be cleaner to
initialize the tenant state (instance_id=0 row) at tenant creation
time (+ implement a migration). I will explore this in a future
change, when we add some configurable defaults for the refill rate
etc.

Release note: None
Use a per-tenant mutex to avoid unnecessary transaction restart and
remove the possibility of out-of-order metric updates.

Release note: None
@RaduBerinde (Member, Author) left a comment

TFTR!

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, and @andy-kimball)


pkg/ccl/multitenantccl/tenantcostserver/metrics.go, line 112 at r9 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

I don't think this needs to be a pointer, does it? Why allocate an extra object?

We store this struct in the map and pass it around by value.


pkg/ccl/multitenantccl/tenantcostserver/system_table.go, line 33 at r9 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

SUPER NIT: These comments should start with field name to be idiomatic go; similar issues in instanceState.

Done.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

Whoops, my comment got cut off. I meant to say, "This could be task for someone else". Also, you could add the work to fix the TODO in that task by adding an UpdateIfHigher or RatchetUp method. How come you removed the TODO comment from the metrics update code below?

The mutex fixes that, the metric update is now atomic with the transaction.


pkg/roachpb/api.proto, line 2276 at r9 (raw file):

Previously, andy-kimball (Andy Kimball) wrote…

NIT: Would be nice to start with field name, using Go formatting, i.e. InstanceID.

Done. I usually avoid that in .proto files because the name in the proto is not the same as the Go name, but looking at the existing file, there is a mix of both practices.

@andy-kimball (Contributor) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @adityamaru, @ajwerner, @andy-kimball, and @RaduBerinde)


pkg/ccl/multitenantccl/tenantcostserver/metrics.go, line 112 at r9 (raw file):

Previously, RaduBerinde wrote…

We store this struct in the map and pass it around by value

I see, OK.


pkg/ccl/multitenantccl/tenantcostserver/token_bucket.go, line 82 at r6 (raw file):

Previously, RaduBerinde wrote…

The mutex fixes that, the metric update is now atomic with the transaction.

Ah, got it.

@RaduBerinde (Member, Author)

bors r+


craig bot commented Jul 31, 2021

Build succeeded:

@craig craig bot merged commit 8e15f99 into cockroachdb:master Jul 31, 2021
@RaduBerinde RaduBerinde deleted the mt-systable-new branch August 2, 2021 01:32