Add ai_elastic_scheduling in tidb-operator #2235

Closed
wants to merge 3 commits

Conversation

Sullivan12138

What problem does this PR solve?

Add a new ai_elastic_scheduling feature to tidb-operator.

What is changed and how does it work?

Add an AI URL in pkg/autoscaler/autoscaler/autoscaler_manager.go. From this URL we fetch prediction data generated by another Python process. I define two structures because the Python process sends data in that format. I temporarily comment out the syncTiDB function because we cannot predict the TiDB replicas yet. The syncTiKV function now directly uses the predicted replicas from the AI URL instead of querying Prometheus and calculating them. I also change scaleInSeconds and scaleOutSeconds for later testing.
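As a rough illustration of the idea (the struct fields, function name, and JSON layout below are assumptions based on the description above, not the actual code in this PR), the autoscaler would fetch the predicted replica count over HTTP and use it directly:

```go
// Hypothetical sketch: fetch predicted TiKV replicas from an external AI service
// instead of computing them from Prometheus metrics. Names and payload shape are
// assumptions, not the PR's actual definitions.
package autoscaler

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// PredictResult mirrors the JSON payload the Python predictor is assumed to send.
type PredictResult struct {
	Component string `json:"component"` // e.g. "tikv"
	Replicas  int32  `json:"replicas"`  // predicted replica count
}

// queryPredictedReplicas asks the AI service for the predicted replica count.
func queryPredictedReplicas(aiURL string) (*PredictResult, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(aiURL)
	if err != nil {
		return nil, fmt.Errorf("query ai service: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("ai service returned status %d", resp.StatusCode)
	}
	var result PredictResult
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, fmt.Errorf("decode predict result: %w", err)
	}
	return &result, nil
}
```

The returned replica count would then be applied in syncTiKV in place of the value currently derived from Prometheus metrics.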

Check List

Tests

  • No code

Code changes

  • Has Go code change

Side effects

None

Related changes

None

Does this PR introduce a user-facing change?:

NONE


@Yisaer
Contributor

Yisaer commented Apr 20, 2020

Hi @Sullivan12138, I think your change only works in our internal test environment, so the community can't use it. However, instead of implementing the specific server, I think it would be quite useful to give the tidb-operator auto-scaler an extension point that queries an external service to decide whether the target TidbCluster should auto-scale.

I think this would be much more useful to the community, and it would still work in your case.
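A minimal sketch of what such an extension point might look like (the interface and type names here are illustrative, not existing tidb-operator APIs):

```go
// Illustrative sketch of an external-strategy hook for the auto-scaler.
// The interface and types below are assumptions, not tidb-operator code.
package autoscaler

import "context"

// ScaleRecommendation is what an external service would return for one component.
type ScaleRecommendation struct {
	Component string // e.g. "tikv" or "tidb"
	Replicas  int32  // recommended replica count
}

// ExternalStrategy lets the auto-scaler delegate the scaling decision
// to a user-provided service instead of its built-in Prometheus rules.
type ExternalStrategy interface {
	// Recommend returns the desired replicas for the given TidbCluster,
	// or an error if the external service is unavailable.
	Recommend(ctx context.Context, namespace, clusterName string) ([]ScaleRecommendation, error)
}
```

With an interface like this, the AI-based predictor from this PR could live outside tidb-operator as one implementation, while other users could plug in their own services.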
