[Feature] [Zeta] Optimize CoordinatorService ThreadPool Configuration to Prevent Potential OOM #8241

Merged: 13 commits into apache:dev on Dec 19, 2024

Conversation

xiaochen-zhou (Contributor)

Purpose of this pull request

Optimize CoordinatorService ThreadPool Configuration to Enhance System Performance and Stability

Does this PR introduce any user-facing change?

Yes.

How was this patch tested?

Added new tests.

Check list

@hailin0 (Member) left a comment:


Please update the docs

@@ -25,6 +25,9 @@ seatunnel:
     print-job-metrics-info-interval: 60
     slot-service:
       dynamic-slot: true
+    coordinator-service:
+      core-thread-num: 30
+      max-thread-num: 1000
Member:

Adding this parameter will limit the thread pool size. The current default is Integer.MAX_VALUE; if we ship a small value as the default, it may cause exceptions when users upgrade. So I suggest keeping Integer.MAX_VALUE as the default and describing this feature in the documentation so users can tune it themselves.
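
(Context for the suggestion above: a minimal, hypothetical sketch, assuming the coordinator's job executor is a plain java.util.concurrent cached-style ThreadPoolExecutor; the class and parameter names below are illustrative stand-ins for core-thread-num and max-thread-num, not SeaTunnel's actual code.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoordinatorPoolSketch {

    // Builds a cached-style pool whose size is bounded by maxThreadNum.
    // With maxThreadNum = Integer.MAX_VALUE the pool is effectively unbounded,
    // like Executors.newCachedThreadPool(), which is why keeping that default
    // avoids rejected tasks for users who upgrade without touching the config.
    static ExecutorService buildCoordinatorPool(int coreThreadNum, int maxThreadNum) {
        return new ThreadPoolExecutor(
                coreThreadNum,              // core-thread-num: threads kept alive even when idle
                maxThreadNum,               // max-thread-num: hard upper bound on pool size
                60L, TimeUnit.SECONDS,      // idle threads above the core size are reclaimed
                new SynchronousQueue<>());  // direct hand-off, as in a cached thread pool
    }

    public static void main(String[] args) {
        ExecutorService pool = buildCoordinatorPool(30, Integer.MAX_VALUE);
        pool.submit(() -> System.out.println("task handled by the coordinator pool"));
        pool.shutdown();
    }
}
```

Bounding the maximum pool size caps how many coordinator threads can pile up, which is the potential OOM mentioned in the PR title, at the cost of rejecting submissions once the bound is reached.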

Member:

I see the default value is Integer.MAX_VALUE, so we can remove the max-thread-num config from the YAML. If users need to change it, they can set it themselves.

Member:

> I see the default value is Integer.MAX_VALUE, so we can remove the max-thread-num config from the YAML. If users need to change it, they can set it themselves.

+1

xiaochen-zhou (Contributor, Author):

> I see the default value is Integer.MAX_VALUE, so we can remove the max-thread-num config from the YAML. If users need to change it, they can set it themselves.

done.

@xiaochen-zhou (Contributor, Author):

> Please update the docs

done.

# Conflicts:
#	seatunnel-e2e/seatunnel-connector-v2-e2e/connector-hive-e2e/src/test/java/org/apache/seatunnel/e2e/connector/hive/HiveIT.java
The github-actions bot removed the e2e label on Dec 12, 2024.

**max-thread-num**

The maximumPoolSize of seatunnel coordinator job's executor cached thread pool
Member:

Maybe describing it as the maximum number of jobs that can be executed at the same time would explain this more clearly.

Users don't need to know what the coordinator is.

xiaochen-zhou (Contributor, Author):

> Maybe describing it as the maximum number of jobs that can be executed at the same time would explain this more clearly.
>
> Users don't need to know what the coordinator is.

ok
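
(To make the suggested wording concrete: a small, hypothetical demo, again assuming a SynchronousQueue-backed ThreadPoolExecutor rather than SeaTunnel's actual code. With a direct hand-off queue, max-thread-num is effectively the number of jobs that can be executing at the same time; one more submission is rejected.)

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MaxThreadNumDemo {
    public static void main(String[] args) {
        // Hypothetical tiny limits: core-thread-num = 2, max-thread-num = 2.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        CountDownLatch running = new CountDownLatch(1);
        for (int i = 0; i < 2; i++) {
            // Two long-running "jobs" occupy both worker threads.
            pool.submit(() -> {
                try {
                    running.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        try {
            // A third job exceeds max-thread-num while the first two are still running,
            // so the direct hand-off fails and the task is rejected: the setting bounds
            // how many jobs execute at the same time.
            pool.submit(() -> {});
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: max-thread-num jobs are already running");
        } finally {
            running.countDown();
            pool.shutdown();
        }
    }
}
```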

@Hisoka-X (Member) left a comment:


LGTM, waiting for CI to pass.

@xiaochen-zhou (Contributor, Author):

> LGTM, waiting for CI to pass.

done.

@Hisoka-X (Member) left a comment.

@Hisoka-X (Member):

Waiting for CI to pass.

@xiaochen-zhou (Contributor, Author):

> Waiting for CI to pass.

done.

@hailin0 merged commit 775dbea into apache:dev on Dec 19, 2024, with 5 checks passed.
@xiaochen-zhou deleted the pool_optim branch on February 22, 2025, 16:53.