Search before asking
I had searched in the issues and found no similar issues.
Java Version
1.8
Scala Version
2.11.x
StreamPark Version
2.1.4
Flink Version
1.14
Deploy mode
kubernetes-application
What happened
Environment: Flink 1.14 on Kubernetes (kubernetes-application mode).

Reproduction steps:
1. Configure a FlinkSQL job with an incorrect SQL statement; the statement nevertheless passes verification.
2. Release and start the job; the Flink JobManager on Kubernetes fails to start.
3. Update the SQL, then release and start the job again; it fails in the same way. The log shows it is still running the old SQL, not the updated version.
4. The JobManager pod on Kubernetes restarts repeatedly.
5. Delete the job: StreamPark removes the job configuration, but in the Kubernetes cluster the broken JobManager deployment keeps restarting. It silently consumes cluster resources, and nobody notices.

I think that when a job is deleted, StreamPark should also delete the Kubernetes deployment (the equivalent of `kubectl delete deployment <deployment-name> -n <namespace>`), because otherwise the failing JobManager will keep restarting.
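For illustration, here is a minimal sketch of what that cleanup step could look like, assuming the fabric8 kubernetes-client (6.x API), which Flink's own Kubernetes integration builds on. The class and method names here are hypothetical, not StreamPark's actual API:

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

// Hypothetical cleanup helper; not part of the StreamPark codebase.
public class JobManagerCleanup {

    /**
     * Deletes the JobManager deployment that Flink creates for a
     * kubernetes-application job. Equivalent to:
     *   kubectl delete deployment <clusterId> -n <namespace>
     * In application mode, Flink names the deployment after the cluster id.
     */
    public static void deleteJobManagerDeployment(String clusterId, String namespace) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.apps()
                  .deployments()
                  .inNamespace(namespace)
                  .withName(clusterId)
                  .delete();
        }
    }
}
```

Deleting the Deployment also removes its ReplicaSet and pods, so the crash-looping JobManager cannot be recreated.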