Hi Team, we are using UCX to assess a workspace. The job runs for 6 hrs and then fails in the crawl_tables step, and since the subsequent steps depend on those tables, the whole job fails. #1397
Asked by karthik-apisero in Q&A
-
Please find the error below: "The Spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached." We have increased the number of workers, but still hit the same issue.
Answered by nfx on Apr 15, 2024
-
Increase the memory size by editing the cluster policy. An external metastore might be another slowdown. This task usually runs in a few minutes for 100k tables. Soon we'll use beefier memory nodes by default.
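As a minimal sketch of the policy edit, assuming the UCX-installed cluster policy can be found by name and using the Databricks Python SDK: the `"ucx"` name match and the `Standard_E8s_v5` node type are assumptions, so substitute the actual policy name in your workspace and a memory-optimized instance type available in your cloud/region.

```python
import json

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Locate the UCX-installed cluster policy.
# NOTE: matching "ucx" in the name is an assumption -- verify it in your workspace.
policy = next(p for p in w.cluster_policies.list() if "ucx" in (p.name or "").lower())

definition = json.loads(policy.definition)

# Pin a memory-optimized node type so the assessment cluster gets more RAM.
# "Standard_E8s_v5" is an Azure example -- pick an equivalent for your cloud.
definition["node_type_id"] = {"type": "fixed", "value": "Standard_E8s_v5"}

w.cluster_policies.edit(
    policy_id=policy.policy_id,
    name=policy.name,
    definition=json.dumps(definition),
)
```

After the policy is updated, re-run the assessment job so the new job cluster is created with the larger nodes.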