replace train-gbdt-guide anchors with full paths
Signed-off-by: Ricardo Decal <[email protected]>
crypdick committed Feb 11, 2025
1 parent 43b088b commit 93c7d27
Showing 2 changed files with 4 additions and 4 deletions.
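
The commit swaps a named Sphinx label for a full document path as the `{ref}` target in both files. A minimal before/after sketch of the pattern, taken directly from the diff below (the link text stays the same; only the target in angle brackets changes from the `train-gbdt-guide` label to the document's full path):

```md
<!-- Before: target is the named label `train-gbdt-guide` -->
{ref}`the XGBoostTrainer documentation <train-gbdt-guide>`

<!-- After: target is the full path to the referenced document -->
{ref}`the XGBoostTrainer documentation </train/examples/xgboost/distributed-xgboost-lightgbm>`
```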
4 changes: 2 additions & 2 deletions doc/source/cluster/kubernetes/examples/ml-example.md
@@ -12,7 +12,7 @@ In this guide, we show you how to run a sample Ray machine learning
 workload on Kubernetes infrastructure.
 
 We will run Ray's {ref}`XGBoost training benchmark <xgboost-benchmark>` with a 100 gigabyte training set.
-To learn more about using Ray's XGBoostTrainer, check out {ref}`the XGBoostTrainer documentation <train-gbdt-guide>`.
+To learn more about using Ray's XGBoostTrainer, check out {ref}`the XGBoostTrainer documentation </train/examples/xgboost/distributed-xgboost-lightgbm>`.
 
 ## Kubernetes infrastructure setup on GCP
 
@@ -179,7 +179,7 @@ you might not match {ref}`the numbers quoted in the benchmark docs <xgboost-benc
 #### Model parameters
 The file `model.json` in the Ray head pod contains the parameters for the trained model.
 Other result data will be available in the directory `ray_results` in the head pod.
-Refer to the {ref}`the XGBoostTrainer documentation <train-gbdt-guide>` for details.
+Refer to the {ref}`the XGBoostTrainer documentation </train/examples/xgboost/distributed-xgboost-lightgbm>` for details.
 
 ```{admonition} Scale-down
 If autoscaling is enabled, Ray worker pods will scale down after 60 seconds.
4 changes: 2 additions & 2 deletions doc/source/cluster/vms/examples/ml-example.md
@@ -12,7 +12,7 @@ In this guide, we show you how to run a sample Ray machine learning
 workload on AWS. The similar steps can be used to deploy on GCP or Azure as well.
 
 We will run Ray's {ref}`XGBoost training benchmark <xgboost-benchmark>` with a 100 gigabyte training set.
-To learn more about using Ray's XGBoostTrainer, check out {ref}`the XGBoostTrainer documentation <train-gbdt-guide>`.
+To learn more about using Ray's XGBoostTrainer, check out {ref}`the XGBoostTrainer documentation </train/examples/xgboost/distributed-xgboost-lightgbm>`.
 
 ## VM cluster setup
 
@@ -119,7 +119,7 @@ you might not match {ref}`the numbers quoted in the benchmark docs <xgboost-benc
 #### Model parameters
 The file `model.json` in the Ray head node contains the parameters for the trained model.
 Other result data will be available in the directory `ray_results` in the head node.
-Refer to the {ref}`XGBoostTrainer documentation <train-gbdt-guide>` for details.
+Refer to the {ref}`XGBoostTrainer documentation </train/examples/xgboost/distributed-xgboost-lightgbm>` for details.
 
 ```{admonition} Scale-down
 If autoscaling is enabled, Ray worker nodes will scale down after the specified idle timeout.
