[DOC] add custom 404 page and fix some document issue [skip ci] (#10309)
* add custom 404 page

Signed-off-by: liyuan <[email protected]>

* Update .github/404.html

Co-authored-by: Gera Shegalov <[email protected]>

* test markdownlink checker

Signed-off-by: liyuan <[email protected]>

* check all links

Signed-off-by: liyuan <[email protected]>

* fix all dead links

Signed-off-by: liyuan <[email protected]>

* revert markdown link checker config files after fix all dead links

Signed-off-by: liyuan <[email protected]>

* enable all links checking as discussed in #10322

Signed-off-by: liyuan <[email protected]>

* we already support zstd orc and parquet write, fix #10365

Signed-off-by: liyuan <[email protected]>

---------

Signed-off-by: liyuan <[email protected]>
Co-authored-by: Gera Shegalov <[email protected]>
nvliyuan and gerashegalov authored Feb 6, 2024
1 parent b75b161 commit 6c01e1b
Showing 5 changed files with 36 additions and 8 deletions.
26 changes: 26 additions & 0 deletions .github/404.html
@@ -0,0 +1,26 @@
---
layout: default
permalink: /404.html
---

<style type="text/css" media="screen">
  .container {
    margin: 10px auto;
    max-width: 600px;
    text-align: center;
  }
  h1 {
    margin: 30px 0;
    font-size: 4em;
    line-height: 1;
    letter-spacing: -1px;
  }
</style>

<div class="container">
  <h1>404</h1>

  <p><strong>Page not found :(</strong></p>
  <p>The requested page could not be found.
  If you are looking for documentation please navigate to the <a href="https://docs.nvidia.com/spark-rapids/user-guide/latest/index.html">User Guide</a> for more information</p>
</div>
1 change: 0 additions & 1 deletion .github/workflows/markdown-links-check.yml
@@ -30,7 +30,6 @@ jobs:
        with:
          max-depth: -1
          use-verbose-mode: 'yes'
          check-modified-files-only: 'yes'
          config-file: '.github/workflows/markdown-links-check/markdown-links-check-config.json'
          base-branch: 'gh-pages'

8 changes: 4 additions & 4 deletions docs/additional-functionality/rapids-udfs.md
@@ -11,8 +11,8 @@ implementation alongside the CPU implementation, enabling the
RAPIDS Accelerator to perform the user-defined operation on the GPU.

Note that there are other potential solutions to performing user-defined
operations on the GPU. See the
[Frequently Asked Questions entry](../FAQ.md#how-can-i-run-custom-expressionsudfs-on-the-gpu)
operations on the GPU. See the
[Frequently Asked Questions entry](https://docs.nvidia.com/spark-rapids/user-guide/latest/faq.html#how-can-i-run-custom-expressions-udfs-on-the-gpu)
on UDFs for more details.

## UDF Obstacles To Query Acceleration
@@ -52,7 +52,7 @@ Other forms of Spark UDFs are not supported, such as:

For supported UDFs, the RAPIDS Accelerator will detect a GPU implementation
if the UDF class implements the
[RapidsUDF](../../sql-plugin/src/main/java/com/nvidia/spark/RapidsUDF.java)
[RapidsUDF](../../sql-plugin-api/src/main/java/com/nvidia/spark/RapidsUDF.java)
interface. Unlike the CPU UDF which processes data one row at a time, the
GPU version processes a columnar batch of rows. This reduces invocation
overhead and enables parallel processing of the data by the GPU.
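For illustration, a minimal Scala sketch of the kind of UDF this describes, assuming the `evaluateColumnar` signature published in the linked RapidsUDF.java (a row count followed by cudf `ColumnVector` arguments); `PlusOne` is a hypothetical example and the exact interface should be verified against the plugin version in use:

```scala
import ai.rapids.cudf.{ColumnVector, Scalar}
import com.nvidia.spark.RapidsUDF

// Hypothetical UDF that adds 1 to an integer column.
// apply() is the row-at-a-time CPU implementation Spark falls back to;
// evaluateColumnar() is the batch implementation the RAPIDS Accelerator
// invokes on the GPU, one cudf column per argument.
class PlusOne extends Function1[Int, Int] with RapidsUDF with Serializable {
  override def apply(x: Int): Int = x + 1

  // Signature assumed from RapidsUDF.java; check it against your plugin version.
  override def evaluateColumnar(numRows: Int, args: ColumnVector*): ColumnVector = {
    require(args.length == 1, s"expected one argument, got ${args.length}")
    val one = Scalar.fromInt(1)
    try {
      args(0).add(one) // column-wise add executed by cudf on the GPU
    } finally {
      one.close()
    }
  }
}

// Registration looks the same as for any Scala UDF, e.g.:
// spark.udf.register("plus_one", new PlusOne())
```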
@@ -219,7 +219,7 @@ The following configuration settings are also relevant for GPU scheduling for Pa
--conf spark.rapids.python.memory.gpu.allocFraction=0.1 \
--conf spark.rapids.python.memory.gpu.maxAllocFraction= 0.2 \
```
Similar to the [RMM pooling for JVM](../tuning-guide.md#pooled-memory) settings like
Similar to the [RMM pooling for JVM](https://docs.nvidia.com/spark-rapids/user-guide/latest/tuning-guide.html#pinned-memory) settings like
`spark.rapids.memory.gpu.allocFraction` and `spark.rapids.memory.gpu.maxAllocFraction` except
these specify the GPU pool size for the _Python processes_. Half of the GPU _available_ memory
will be used by default if it is not specified.
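As a point of comparison with the JVM-side settings mentioned above, here is a sketch of supplying both sets of pool fractions when building a session; the configuration keys are the ones named in the documentation, the fraction values are purely illustrative, and whether session-time configuration reaches the Python workers in a given deployment should be confirmed against the spark-submit form shown above:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values only; the keys are taken from the documentation text.
val spark = SparkSession.builder()
  .appName("rapids-pandas-udf-pools")
  // RMM pool for the JVM executor process
  .config("spark.rapids.memory.gpu.allocFraction", "0.5")
  .config("spark.rapids.memory.gpu.maxAllocFraction", "0.6")
  // Separate pool carved out for the Python worker processes
  .config("spark.rapids.python.memory.gpu.allocFraction", "0.1")
  .config("spark.rapids.python.memory.gpu.maxAllocFraction", "0.2")
  .getOrCreate()
```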
7 changes: 5 additions & 2 deletions docs/compatibility.md
@@ -245,7 +245,9 @@ to work for dates after the epoch as described
[here](https://github.com/NVIDIA/spark-rapids/issues/140).

The plugin supports reading `uncompressed`, `snappy`, `zlib` and `zstd` ORC files and writing
`uncompressed` and `snappy` ORC files. At this point, the plugin does not have the ability to fall
`uncompressed`, `snappy` and `zstd` ORC files. At this point, the plugin does not have the
ability to
fall
back to the CPU when reading an unsupported compression format, and will error out in that case.

### Push Down Aggregates for ORC
@@ -307,7 +309,8 @@ When writing `spark.sql.legacy.parquet.datetimeRebaseModeInWrite` is currently i
[here](https://github.com/NVIDIA/spark-rapids/issues/144).

The plugin supports reading `uncompressed`, `snappy`, `gzip` and `zstd` Parquet files and writing
`uncompressed` and `snappy` Parquet files. At this point, the plugin does not have the ability to
`uncompressed`, `snappy` and `zstd` Parquet files. At this point, the plugin does not have the
ability to
fall back to the CPU when reading an unsupported compression format, and will error out in that
case.

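Both compatibility updates above note that `zstd` output is now supported for ORC and Parquet (per #10365). A small sketch of requesting it through the standard Spark writer options follows; the DataFrame and paths are placeholders, and these are stock Spark APIs rather than plugin-specific ones:

```scala
import org.apache.spark.sql.SparkSession

object ZstdWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("zstd-write-sketch").getOrCreate()

    // Placeholder data; any DataFrame is written the same way.
    val df = spark.range(1000).toDF("id")

    // Per-write compression option; with the RAPIDS Accelerator enabled,
    // these writes are eligible to stay on the GPU.
    df.write.option("compression", "zstd").parquet("/tmp/ids_zstd_parquet")
    df.write.option("compression", "zstd").orc("/tmp/ids_zstd_orc")

    // Session-wide equivalents:
    // spark.conf.set("spark.sql.parquet.compression.codec", "zstd")
    // spark.conf.set("spark.sql.orc.compression.codec", "zstd")

    spark.stop()
  }
}
```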
2 changes: 1 addition & 1 deletion docs/dev/microk8s.md
@@ -91,7 +91,7 @@ This token can then be specified in the `spark-submit` command with

## Building and exporting Docker images

Follow the instructions in [Getting Started with RAPIDS and Kubernetes](../get-started/getting-started-kubernetes.md)
Follow the instructions in [Getting Started with RAPIDS and Kubernetes](https://docs.nvidia.com/spark-rapids/user-guide/latest/getting-started/kubernetes.html)
to create Docker images containing Spark and the RAPIDS Accelerator for Apache Spark.

Note that an additional step is required to export the Docker images from the host and import them into the Microk8s
