[Bug] kyuubi failed to read the iceberg of another cluster across domains #6890
Comments
Hello @zxl-333,

The issue is irrelevant to Kyuubi; you are expected to see the same behavior with plain Spark. The root cause is that Iceberg does not implement the delegation token support required for a secondary HMS. To work around this issue, put https://mvnrepository.com/artifact/org.apache.kyuubi/kyuubi-spark-connector-hive in your Spark classpath.

See more technical details at #4560
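For illustration, a minimal sketch of that workaround, assuming the KSHC jar is already on the engine classpath; the catalog name hive_remote and the URI thrift://remote-hms:9083 are placeholders, not values taken from this issue:

# spark-defaults.conf (sketch; names and URIs are placeholders)
# register an extra catalog backed by the Kyuubi Spark Hive connector (KSHC)
spark.sql.catalog.hive_remote=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog
# point that catalog at the secondary (remote) Hive Metastore
spark.sql.catalog.hive_remote.hive.metastore.uris=thrift://remote-hms:9083

Tables on the remote cluster can then be addressed as hive_remote.<db>.<table> from the same session.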
I have tested this by adding it. Once I add this configuration, Kyuubi cannot query the Iceberg table of the cluster.
When you say something does not work, provide the concrete configuration and stacktrace; otherwise, you should not expect active responses to your question.
When I add the above configuration and use Kyuubi to query Iceberg across domains, the Kyuubi server throws the following exception:
Using the following configuration, I can read the Iceberg data across domains, but I cannot read the data of the local Iceberg table, even though the query executes normally.

spark-defaults.conf
kyuubi-defaults.conf
Logs from reading the local Iceberg table:
I just noticed you are using client mode; the KSHC workaround only takes effect in cluster mode. The key points: turn on Spark's debug logs, and monitor your metastore connections.
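For context, a sketch of what running the engine in cluster mode can look like, using Spark's standard deploy-mode property (values are illustrative):

# spark-defaults.conf or kyuubi-defaults.conf (sketch; values are illustrative)
# launch the Spark engine in cluster mode so the KSHC workaround can take effect
spark.master=yarn
spark.submit.deployMode=cluster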
When the local query returns no data, the Spark debug log shows the following error; this error may be the cause of the problem.
You may also need to configure the same token signature for the Hive/Iceberg catalogs that use the secondary HMS, like:
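The concrete example was not preserved in this thread; a hedged sketch of what such a configuration might look like, assuming hive.metastore.token.signature is the relevant Hive property. The catalog names, URIs, and signature value are placeholders, and the catalog-prefixed property names are assumptions, not confirmed by this issue:

# sketch only; the catalog-prefixed property names below are assumptions
# Iceberg catalog on the secondary HMS: pass the Hive conf via Iceberg's per-catalog hadoop.* override
spark.sql.catalog.iceberg_remote.hadoop.hive.metastore.token.signature=remote-hms
# KSHC catalog on the same secondary HMS: keep the signature value identical
spark.sql.catalog.hive_remote.hive.metastore.token.signature=remote-hms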
I already configured it; it does not help.
When I use the following configuration, the local Iceberg table can be queried, and for the remote Iceberg table I can see the metadata but cannot query the data; the logs look normal.

-------------------begin-----
# local cluster metastore
# another cluster metastore (remote metastore)
spark.sql.catalog.spark_catalog_ky=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog
------------------end-----------------

I see the following information in Kyuubi's engine log:

25/01/15 19:03:49 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.privilege.synchronizer does not exist
When I remove spark.sql.catalog.spark_catalog_ky=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog, an exception about failing to connect to the metastore is thrown.

first method
-----------------------
# local cluster metastore
# another cluster metastore
#spark.sql.catalog.spark_catalog_ky=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog

exception:

second method
--------------
# local cluster metastore
# another cluster metastore
spark.sql.catalog.spark_catalog_ky=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog

sql: select count(*),k from spark_catalog_ky.default.test_iceberg_1_backup group by k
sql: select count(*),k from spark_catalog.default.test_iceberg group by k;
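To make the "second method" above easier to follow, here is a hedged reconstruction of the two-catalog layout being described; the original configuration lines were lost, so the Iceberg session-catalog part and all URIs are placeholders, not the reporter's actual settings:

# sketch of the described layout; values are placeholders
# local cluster: spark_catalog as an Iceberg session catalog over the local HMS (assumed)
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=hive
spark.hadoop.hive.metastore.uris=thrift://local-hms:9083
# remote cluster: a second catalog backed by KSHC pointing at the remote HMS
spark.sql.catalog.spark_catalog_ky=org.apache.kyuubi.spark.connector.hive.HiveTableCatalog
spark.sql.catalog.spark_catalog_ky.hive.metastore.uris=thrift://remote-hms:9083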
Code of Conduct
Search before asking
Describe the bug
I configured spark-sql to read the Iceberg table across domains, and it works normally. However, when I read the Iceberg table across domains using Kyuubi, the connection to the metastore fails.
Affects Version(s)
1.8.2
Kyuubi Server Log Output
The kyuubi server logs are normal
Kyuubi Engine Log Output
Kyuubi Server Configurations
Kyuubi Engine Configurations
Additional context
No response
Are you willing to submit PR?