hudi0.13.1 #13021
base: master
Conversation
This commit adds the missing Apache License to some source files.
This commit fixes `scripts/release/validate_staged_release.sh` to skip checking `release/release_guide*` for "Binary Files Check" and "Licensing Check".
Recently we have seen more flakiness in our CI runs, so this takes a stab at fixing some of the most frequently failing tests. Tests fixed: TestHoodieClientOnMergeOnReadStorage (testReadingMORTableWithoutBaseFile, testCompactionOnMORTable, testLogCompactionOnMORTable, testLogCompactionOnMORTableWithoutBaseFile). Reason for flakiness: the tests generate only 10 inserts, which does not guarantee records for all 3 partitions (HoodieTestDataGenerator). Fixes: HoodieTestDataGenerator was choosing a random partition from the list of partitions while generating insert records; it now does round robin (see the sketch below). Also bumped the number of records inserted in some of the flaky tests from 10 to 100, and fixed the respective MOR tests to disable small file handling.
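A minimal sketch of the round-robin idea described above (class and method names are illustrative, not the actual HoodieTestDataGenerator code):

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of the change: pick insert partitions round-robin
// instead of at random, so a small batch of inserts still covers every partition.
class PartitionPicker {
  private final List<String> partitions;
  private final AtomicInteger next = new AtomicInteger(0);
  private final Random random = new Random();

  PartitionPicker(List<String> partitions) {
    this.partitions = partitions;
  }

  // Old behavior: 10 inserts across 3 partitions may leave a partition empty.
  String randomPartition() {
    return partitions.get(random.nextInt(partitions.size()));
  }

  // New behavior: round-robin guarantees every partition receives records.
  String roundRobinPartition() {
    return partitions.get(next.getAndIncrement() % partitions.size());
  }
}
```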
…om Metadata Table (apache#7642) Most recently, trying to use the Metadata Table in the Bloom Index resulted in failures due to exhaustion of the S3 connection pool, no matter how (reasonably) big we set the pool size (we tested up to 3k connections). This PR focuses on optimizing the Bloom Index lookup sequence when it leverages the Bloom Filter partition in the Metadata Table. The premise of this change is based on the following observations:
- Increasing the size of a batch of requests to the MT amortizes its processing cost (the bigger the batch, the lower the per-record cost). Having too few partitions in the Bloom Index path, however, hurts parallelism when we actually probe individual files for the target keys. The solution is to split these into two stages with drastically different parallelism levels: constrain parallelism when reading from the MT (tens of tasks) and keep the current level for probing individual files (hundreds of tasks).
- The current way of partitioning records (relying on Spark's default partitioner) meant that, with high likelihood, every Spark executor would open (and process) every file group of the MT Bloom Filter partition. To alleviate that, the same hashing algorithm used by the MT should be used to partition records into Spark partitions, so that every task opens no more than one file group of the MT's Bloom Filter partition.
To achieve that, the following changes are implemented in the Bloom Index sequence (leveraging the MT):
- Bloom Filter probing and actual file probing are split into two separate operations, so that the parallelism of each can be controlled individually.
- Requests to the MT are replaced with batch API calls.
- A custom partitioner, AffineBloomIndexFileGroupPartitioner, is introduced, repartitioning the dataset of filenames with corresponding record keys in a way that is affine with the MT Bloom Filters' partitioning, allowing us to open no more than a single file group per Spark task (see the sketch below).
Additionally, this PR addresses some low-hanging performance optimizations that considerably improve the Bloom Index lookup sequence, such as mapping file-comparison pairs to a PairRDD (where the key is the file name and the value is the record key) instead of an RDD, so that we can:
- do in-partition sorting by filename (to make sure we check all records within a file at once) within a single Spark partition instead of globally (reducing shuffling as well), and
- avoid the re-shuffling that re-mapping from an RDD to a PairRDD later would otherwise entail.
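A hedged sketch of the affine partitioning idea (this is not the actual AffineBloomIndexFileGroupPartitioner; the hashing helper below is a stand-in for whatever key-to-file-group hashing the Metadata Table uses):

```java
import org.apache.spark.Partitioner;

// Sketch: partition record keys with the same hashing scheme the Metadata Table
// uses to assign bloom-filter entries to file groups, so each Spark task touches
// at most one MT file group.
class AffineFileGroupPartitionerSketch extends Partitioner {
  private final int numFileGroups;

  AffineFileGroupPartitionerSketch(int numFileGroups) {
    this.numFileGroups = numFileGroups;
  }

  @Override
  public int numPartitions() {
    return numFileGroups;
  }

  @Override
  public int getPartition(Object key) {
    // Assumption: hashKeyToFileGroup mirrors the MT's key-to-file-group hashing;
    // in Hudi this would be the metadata table's own hashing utility.
    return hashKeyToFileGroup((String) key, numFileGroups);
  }

  private static int hashKeyToFileGroup(String recordKey, int numFileGroups) {
    return Math.floorMod(recordKey.hashCode(), numFileGroups);
  }
}
```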
…#7476) This change switches the default Write Executor to SIMPLE, i.e. one bypassing reliance on any kind of queue (either BoundedInMemory or Disruptor's). This should considerably trim down both runtime (compared to BIMQ) and wasted compute (compared to BIMQ and Disruptor), since it eliminates the unnecessary intermediary "staging" of records in a queue (for example, in Spark such in-memory enqueueing already occurs at the ingress points, i.e. shuffling) and allows records to be written in one pass (even avoiding making copies of the records in the future).
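A minimal sketch of the "simple" executor idea, assuming a generic record iterator and write handle (names are hypothetical, not Hudi's actual executor API):

```java
import java.util.Iterator;
import java.util.function.Consumer;

// Illustrative only: hand records straight from the input iterator to the write
// handle in one pass, instead of staging them in an intermediate bounded queue
// with separate producer/consumer threads.
class SimpleExecutorSketch<R> {
  void execute(Iterator<R> records, Consumer<R> writeHandle) {
    while (records.hasNext()) {
      // No enqueue/dequeue and no extra record copies: write directly as we iterate.
      writeHandle.accept(records.next());
    }
  }
}
```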
Fixing flaky parquet projection tests. Added a 10% margin for expected bytes from column projection.
Change logging mode names for the CDC feature to: op_key_only, data_before, data_before_after.
…er-bundle` to root pom (apache#7774)" (apache#7782) This reverts commit 7352661.
…che#7759) Updates the HoodieAvroRecordMerger to use the new precombine API instead of the deprecated one. This fixes issues with backwards compatibility with certain payloads.
We introduced a new way to scan log blocks in the LogRecordReader and named it "hoodie.log.record.reader.use.scanV2". This renames the config to the more descriptive "hoodie.optimized.log.blocks.scan.enable" and fixes the corresponding Metadata config as well.
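For illustration, assuming the flag is passed as a plain writer/reader property (the key comes from the commit above; how it is wired up depends on your integration):

```java
import java.util.Properties;

// Hedged illustration: enabling the renamed optimized log-blocks scan via properties.
public class OptimizedLogScanConfigExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Replaces the old, deprecated name "hoodie.log.record.reader.use.scanV2".
    props.setProperty("hoodie.optimized.log.blocks.scan.enable", "true");
    System.out.println(props);
  }
}
```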
Fix tests and artifact deployment for metaserver.
…7784) Fixes deploy_staging_jars.sh to generate all hudi-utilities-slim-bundle.
Co-authored-by: hbg <[email protected]>
Cleaning up some of the recently introduced configs: shortening the file-listing mode override for Spark's FileIndex; fixing Disruptor's write buffer limit config; scoping the CANONICALIZE_NULLABLE config to HoodieSparkSqlWriter.
…ache#7790) - Ensures that Hudi CLI commands which require launching Spark can be executed with hudi-cli-bundle
Fix typos and format text-blocks properly.
…g in duplicate data (apache#8503)
…out ACTION_STATE field (apache#8607)
apache#8631) Use correct zone id while calculating earliestTimeToRetain; use metaClient table config.
…ition field (apache#7355) Partition query returns null for Hive 3.x.
Disable vectorized reader for Spark 3.3.2 only; keep compile version at Spark 3.3.1. Co-authored-by: Rahil Chertara <[email protected]>
This commit adds the bundle validation on Spark 3.3.2 in GitHub Java CI to ensure compatibility after we fixed the compatibility issue in apache#8082.
…E_UPSERT is disabled (apache#7998)
There was a bug where delete records were assumed to be marked by "_hoodie_is_deleted"; however, custom CDC payloads use the "op" field to mark deletes, so the AWS DMS payload and the Debezium payload failed on deletes. This commit fixes the issue by adding a new API, isDeleteRecord(GenericRecord genericRecord), in BaseAvroPayload to allow a payload to implement custom logic for indicating whether a record is a delete record (see the sketch below). Co-authored-by: Raymond Xu <[email protected]>
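A hedged sketch of what a payload-level override could look like (illustrative class only; real payloads extend BaseAvroPayload and carry additional constructor and merge logic):

```java
import org.apache.avro.generic.GenericRecord;

// Sketch: a payload that marks deletes via an "op" field (as DMS/Debezium-style
// payloads do) could override the new hook rather than relying on "_hoodie_is_deleted".
class OpFieldDeleteAwarePayloadSketch {
  // Mirrors the shape of the new BaseAvroPayload#isDeleteRecord(GenericRecord) hook.
  protected boolean isDeleteRecord(GenericRecord record) {
    // Guard against schemas that do not carry the "op" field at all.
    if (record.getSchema().getField("op") == null) {
      return false;
    }
    Object op = record.get("op");
    // "d" is the conventional delete marker in CDC payloads such as Debezium/DMS.
    return op != null && "d".equals(op.toString());
  }
}
```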
Co-authored-by: hbg <[email protected]>
…ype as not nullable (apache#8728)
java.util.NoSuchElementException: No value present in Option
This occasionally occurs: java.util.NoSuchElementException: No value present in Option
@hanleiit Did you apply the new fix, https://github.com/apache/hudi/pull/8935/files? Can you share with us the full error stacktrace?
The new fix https://github.com/apache/hudi/pull/8935/files should be the same, but why is it not merged into Hudi version 0.13.1? The source code of that version does not include the modification.
Because the bug was reported after 0.13.1 was released, we put the fix in the 0.14.0 release.
Thank you.
Change Logs
Describe context and summary for this change. Highlight if any code was copied.
Flink SQL: `select count(*) from table;` throws "No value present in Option" (#13019)
Impact
Hudi version 0.13.1
Update MergeOnReadInputFormat.
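For context, a minimal sketch of the failure pattern and the defensive check (this is not the actual MergeOnReadInputFormat fix; it only illustrates where "No value present in Option" comes from, assuming hudi-common's Option on the classpath):

```java
import org.apache.hudi.common.util.Option;

// "No value present in Option" is thrown by calling get() on an empty Hudi Option,
// so the defensive pattern is to check isPresent() (or supply a fallback) first.
public class OptionGuardExample {
  static String instantTimeOrDefault(Option<String> latestInstant) {
    // Unsafe: latestInstant.get() throws NoSuchElementException when empty.
    // Safe: fall back to a default when no value is present.
    return latestInstant.isPresent() ? latestInstant.get() : "";
  }

  public static void main(String[] args) {
    System.out.println(instantTimeOrDefault(Option.empty()));
  }
}
```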
Documentation Update
Describe any necessary documentation update if there is any new feature, config, or user-facing change. If not, put "none".
If there is a user-facing change, put the documentation update ticket number here and follow the instruction to make changes to the website.
Contributor's checklist