
Ensure workers can send data popped off the queue at shutdown #239

Closed

Conversation

@onlynone (Contributor) commented Feb 26, 2020

Previously, any data still in the queue would be handled by the main
thread, but any data that a worker had already popped off the queue
and not yet sent on the wire would be lost.

This change pushes the stopping logic down into the writer and worker
classes. The worker tells the threads to exit and waits for them to do
so.
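The shutdown sequence described above can be sketched as follows. This is an illustrative model, not the gem's actual code: the class and method names (`Worker`, `push`, `stop!`) are hypothetical, and a sentinel object stands in for whatever exit signal the real worker uses.

```ruby
require "thread"

# Minimal sketch of the approach described above (names are illustrative,
# not influxdb-ruby's API). Each worker thread pops points off a shared
# queue and "sends" them; on stop, one sentinel per thread is pushed and
# every thread is joined, so a point that has already been popped is
# still flushed before the process exits.
class Worker
  STOP = Object.new # sentinel telling a thread to exit

  def initialize(num_threads: 2)
    @queue   = Queue.new
    @sent    = Queue.new # stands in for the wire; collects "sent" points
    @threads = Array.new(num_threads) do
      Thread.new do
        loop do
          point = @queue.pop
          break if point.equal?(STOP)
          @sent << point # send on the wire
        end
      end
    end
  end

  def push(point)
    @queue << point
  end

  def stop!
    @threads.size.times { @queue << STOP } # tell each thread to exit
    @threads.each(&:join)                  # wait for in-flight sends
  end

  def sent
    Array.new(@sent.size) { @sent.pop }
  end
end

w = Worker.new
100.times { |i| w.push(i) }
w.stop!
p w.sent.size # => 100 (every pushed point was flushed)
```

Because the queue is FIFO, every data point pushed before `stop!` is consumed before any thread sees its sentinel, and `join` guarantees that points already in a worker's hands are sent before shutdown completes.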
@onlynone (Contributor, Author)

My last attempt to address this issue, #199, only got partway there. It solved the problem of data left on the queue never being written, but not the problem of data that workers had already popped off the queue and not yet sent at the time the main thread shuts down. That data was still being dropped on the floor.
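The remaining gap can be shown deterministically. This is an illustration, not the gem's code: handshake queues stand in for thread timing, and the "send" is a blocking `gate.pop` simulating a slow network write.

```ruby
require "thread"

# Illustration (not influxdb-ruby's code) of the gap left after #199:
# a worker can hold a point it has popped off the queue but not yet sent.
queue  = Queue.new
popped = Queue.new # handshake: worker reports it has taken a point
gate   = Queue.new # worker blocks here, simulating a slow network send
sent   = []

worker = Thread.new do
  point = queue.pop   # the point leaves the queue here...
  popped << true
  gate.pop            # ...and sits in the worker, waiting to be sent
  sent << point
end

queue << "cpu,host=a value=1"
popped.pop            # wait until the worker holds the point

# "#199-style" shutdown: flush whatever is still *on the queue*...
sent << queue.pop until queue.empty?
worker.kill           # ...then stop the thread without joining it
worker.join           # reap the killed thread

p sent # => [] -- the popped point was dropped on the floor
```

Joining the worker threads instead of killing them, as this PR does, lets the in-flight point reach `sent` before shutdown.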

@onlynone (Contributor, Author)

The failed builds failed with:

InfluxDB::Client
  is expected to be a kind of InfluxDB::Writer::Async
  #write_point
    sends writes to client
    when precision, retention_policy and database are given
      writes aggregate payload to the client (FAILED - 1)
      when different precision, retention_policy and database are given
        writes separated payloads for each {precision, retention_policy, database} set

It seems to me that, with an async client, two different workers can each pick up one of the two datapoints written to the queue, in which case the points can't be aggregated. Should this really be a test? Or maybe we could keep the test but set the number of workers to 1?

The tests succeed on my laptop.
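If the spec is kept, the nondeterminism could be removed by pinning the async writer to a single worker thread. A hedged configuration sketch: the `num_worker_threads` and `max_post_points` option names are taken from influxdb-ruby's README and should be verified against the gem version under test.

```ruby
require "influxdb"

# Sketch: force a single worker thread so that both queued points are
# picked up, and therefore aggregated, by the same thread. Option names
# follow influxdb-ruby's README; verify against the version under test.
client = InfluxDB::Client.new(
  "mydb",
  async: { num_worker_threads: 1, max_post_points: 100 }
)
```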

@onlynone (Contributor, Author)

@dmke Have you had a chance to look at this?

@Esity Esity requested review from Esity and dmke April 29, 2020 14:00
@Esity Esity self-assigned this Apr 29, 2020
@Esity (Collaborator) commented Apr 29, 2020

@onlynone sorry for the delay on this one. Will take a look at this when I get a chance

@dmke (Contributor) left a review:
LGTM!

@dmke (Contributor) commented Apr 29, 2020

Merged in 8df479a. Thank you!

@dmke dmke closed this Apr 29, 2020