Fallback to CPU when aggregate push down used for parquet #4623
Conversation
Signed-off-by: remzi <[email protected]>
build
sql-plugin/src/main/330+/scala/com/nvidia/spark/rapids/shims/v2/Spark33XShims.scala
sql-plugin/src/main/330+/scala/com/nvidia/spark/rapids/shims/v2/Spark33XShims.scala
Signed-off-by: remzi <[email protected]>
build
if (a.isInstanceOf[SupportsRuntimeFiltering]) {
  willNotWorkOnGpu("Parquet does not support Runtime filtering (DPP)" +
    " on datasource V2 yet.")
} else if (a.pushedAggregate.nonEmpty) {
The basic principle of tagging is to record as much information as possible about why something cannot run on the GPU. So here, let's move the pushedAggregate check out of the else if.
Thank you, I will change it.
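The principle in the review comment above can be sketched generically: instead of an else-if chain that stops at the first reason, each check runs independently so every fallback reason is recorded. This is a minimal Python model of that idea; TagMeta, will_not_work_on_gpu, and tag_scan here are illustrative names, not the plugin's actual API.

```python
# Toy model of the tagging principle: accumulate every reason an operator
# cannot run on the GPU rather than short-circuiting on the first one.
class TagMeta:
    def __init__(self):
        self.reasons = []

    def will_not_work_on_gpu(self, reason):
        # Record the reason; do not stop tagging.
        self.reasons.append(reason)


def tag_scan(meta, supports_runtime_filtering, pushed_aggregate):
    # Independent `if` checks, mirroring moving the pushedAggregate
    # check out of the `else if` in the Scala code under review.
    if supports_runtime_filtering:
        meta.will_not_work_on_gpu(
            "Parquet does not support Runtime filtering (DPP) on datasource V2 yet.")
    if pushed_aggregate:
        meta.will_not_work_on_gpu(
            "Aggregate push down on Parquet is not supported on the GPU.")


meta = TagMeta()
tag_scan(meta, supports_runtime_filtering=True, pushed_aggregate=True)
# Both fallback reasons are now recorded, not just the first.
```

With the else-if version, a scan that both uses runtime filtering and has a pushed aggregate would only report the first reason, which hides information from the user.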
Signed-off-by: remzi <[email protected]>
build
@pytest.mark.skipif(is_before_spark_330(), reason='Aggregate push down on Parquet is a new feature of Spark 330')
@allow_non_gpu(any = True)
If this test is not supposed to fall back, why are we allowing non-GPU nodes in the plan?
assert_cpu_and_gpu_are_equal_collect_with_capture(
    do_parquet_scan,
    exist_classes="GpuBatchScanExec",
    non_exist_classes="BatchScanExec",
    conf=conf_for_parquet_aggregate_pushdown)
This should just be assert_cpu_and_gpu_are_equal_collect. We're honestly not that interested in exactly which GPU nodes are involved here, rather that the plan is all on the GPU. That's what assert_cpu_and_gpu_are_equal_collect already checks, once we remove the @allow_non_gpu decorator.
Thank you! I have updated it.
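The distinction the reviewer draws can be illustrated with a toy helper (hypothetical, not the real test framework): without the @allow_non_gpu decorator, assert_cpu_and_gpu_are_equal_collect effectively requires that every node in the executed plan is a GPU node, so pinning specific classes with the _with_capture variant is redundant here.

```python
# Toy sketch of the "whole plan on the GPU" check the simpler assertion
# implies. Plans are modeled as lists of node class names; in the plugin,
# GPU-replaced nodes are conventionally prefixed with "Gpu".
def plan_is_all_on_gpu(plan_nodes):
    # Every node must be a Gpu* node for the check to pass.
    return all(node.startswith("Gpu") for node in plan_nodes)


# A fully GPU plan passes; a plan with a CPU BatchScanExec does not.
all_gpu = plan_is_all_on_gpu(["GpuBatchScanExec", "GpuProjectExec"])
mixed = plan_is_all_on_gpu(["BatchScanExec", "GpuProjectExec"])
```

The _with_capture variant is still useful when a test must prove a particular node was (or was not) chosen; here the interesting property is simply that nothing fell back.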
Signed-off-by: remzi <[email protected]>
Signed-off-by: remzi <[email protected]>
build
Signed-off-by: remzi [email protected]
Closes #3951
Spark 3.3.0 added support for aggregate push down for Parquet.
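For context, the feature is gated behind a Spark SQL configuration. Below is a sketch of the kind of conf dict the integration test would pass; the name conf_for_parquet_aggregate_pushdown comes from the test code quoted above, but these exact key/value contents are an assumption, not copied from the PR.

```python
# Hypothetical sketch of the test configuration enabling the Spark 3.3.0
# Parquet aggregate push down feature.
conf_for_parquet_aggregate_pushdown = {
    # Spark 3.3.0 config that enables aggregate push down into the
    # Parquet DataSource V2 reader.
    "spark.sql.parquet.aggregatePushDown": "true",
    # Assumption: force the V2 source path (push down does not apply to
    # the V1 Parquet reader) by clearing the V1 source list.
    "spark.sql.sources.useV1SourceList": "",
}
```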
In this PR, we: