[SPARK-35083][CORE] Support remote scheduler pool files #32184
Diff in `docs/job-scheduling.md`:

```
@@ -252,10 +252,11 @@ properties:
```
The pool properties can be set by creating an XML file, similar to `conf/fairscheduler.xml.template`,
and either putting a file named `fairscheduler.xml` on the classpath, or setting `spark.scheduler.allocation.file` property in your
[SparkConf](configuration.html#spark-properties). The file path can either be a local file path or HDFS file path.

> **@HyukjinKwon:** Actually, this line isn't completely true. It can be a local file path only when … So, if users from old Spark versions use a path like …, can we at least update the migration guide? We should also mention that it respects Hadoop properties now.

> **Reviewer:** +1 for @HyukjinKwon 's advice.

> **PR author:** @HyukjinKwon sorry, not sure I get your point. Why do we need to write files into HDFS? This PR is to support reading a remote file as the scheduler pool file. I think there is no behavior change, just a new feature.

> **Reviewer:** I see, if a user specifies a path …

> **PR author:** thanks man!
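The behavior-change concern above hinges on how the configured path's URI scheme is interpreted: a path with no scheme can be treated as local, while a path with a scheme such as `hdfs://` goes through the Hadoop `FileSystem` API. A minimal sketch of such a scheme check — the helper name `isLocalPoolFile` and the exact rule are illustrative assumptions, not Spark's actual implementation:

```scala
import java.net.URI

// Hypothetical helper: treat a path with no scheme, or an explicit
// "file" scheme, as local; anything else (hdfs, s3a, ...) would be
// handed to the Hadoop FileSystem API instead.
def isLocalPoolFile(path: String): Boolean = {
  val scheme = new URI(path).getScheme
  scheme == null || scheme == "file"
}

println(isLocalPoolFile("/path/to/fairscheduler.xml"))        // true
println(isLocalPoolFile("file:///etc/fairscheduler.xml"))     // true
println(isLocalPoolFile("hdfs:///path/to/fairscheduler.xml")) // false
```

Under this rule, existing configurations that use a bare local path keep working, which is why the author argues there is no behavior change for old users.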
{% highlight scala %}
-conf.set("spark.scheduler.allocation.file", "/path/to/file")
+conf.set("spark.scheduler.allocation.file", "hdfs:///path/to/file")
{% endhighlight %}

> **Reviewer:** I think HDFS also supports reading the local file. Could you mention it in the above document? e.g., "The file path can either be a local file path or HDFS file path."
The format of the XML file is simply a `<pool>` element for each pool, with different elements within it for the various settings.
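For reference, `conf/fairscheduler.xml.template` illustrates this format; a minimal example with a single pool (the pool name and values here are illustrative):

{% highlight xml %}
<?xml version="1.0"?>
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
{% endhighlight %}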
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This code can do many things. Please see my above comment. (#32184 (comment))