fix(storage): do not fetch all sst meta when create iterator #9517

Merged: 18 commits, May 5, 2023
1 change: 1 addition & 0 deletions proto/hummock.proto
@@ -21,6 +21,7 @@ message SstableInfo {
   uint64 min_epoch = 9;
   uint64 max_epoch = 10;
   uint64 uncompressed_file_size = 11;
+  uint64 range_tombstone_count = 12;
 }

 enum LevelType {
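The new `range_tombstone_count` field is what lets the read path decide, from `SstableInfo` alone, whether an SST's meta must be fetched at all. A minimal sketch of the idea, using a trimmed stand-in struct rather than the generated protobuf type (`ssts_needing_meta` is a hypothetical helper, not part of the PR):

// Sketch only: `SstableInfo` here is a trimmed stand-in for the proto type.
struct SstableInfo {
    object_id: u64,
    range_tombstone_count: u64,
}

// Only SSTs that actually carry range tombstones need their meta (and thus
// their delete-range index) fetched when building a delete-range iterator.
fn ssts_needing_meta(ssts: &[SstableInfo]) -> Vec<u64> {
    ssts.iter()
        .filter(|sst| sst.range_tombstone_count > 0)
        .map(|sst| sst.object_id)
        .collect()
}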
4 changes: 1 addition & 3 deletions src/meta/src/hummock/compaction/level_selector.rs
@@ -625,12 +625,10 @@ pub mod tests {
             }),
             file_size: (right - left + 1) as u64,
             table_ids,
-            meta_offset: 0,
-            stale_key_count: 0,
-            total_key_count: 0,
             uncompressed_file_size: (right - left + 1) as u64,
             min_epoch,
             max_epoch,
+            ..Default::default()
         }
     }
6 changes: 1 addition & 5 deletions src/meta/src/hummock/test_utils.rs
@@ -159,12 +159,8 @@ pub fn generate_test_tables(epoch: u64, sst_ids: Vec<HummockSstableObjectId>) ->
             }),
             file_size: 2,
             table_ids: vec![sst_id as u32, sst_id as u32 * 10000],
-            meta_offset: 0,
-            stale_key_count: 0,
-            total_key_count: 0,
             uncompressed_file_size: 2,
-            min_epoch: 0,
-            max_epoch: 0,
+            ..Default::default()
         });
     }
     sst_info
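The fixtures switch to Rust's struct-update syntax: every field not listed falls back to its `Default`, so adding a new proto field such as `range_tombstone_count` does not require touching each test. A small self-contained illustration (the `Fixture` struct is hypothetical):

#[derive(Default)]
struct Fixture {
    file_size: u64,
    total_key_count: u64,
    range_tombstone_count: u64,
}

fn make_fixture() -> Fixture {
    Fixture {
        file_size: 2,
        // total_key_count and range_tombstone_count fall back to 0.
        ..Default::default()
    }
}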
6 changes: 2 additions & 4 deletions src/storage/hummock_test/src/hummock_read_version_tests.rs
@@ -161,8 +161,7 @@ async fn test_read_version_basic() {
                 stale_key_count: 1,
                 total_key_count: 1,
                 uncompressed_file_size: 1,
-                min_epoch: 0,
-                max_epoch: 0,
+                ..Default::default()
             }),
             LocalSstableInfo::for_test(SstableInfo {
                 object_id: 2,
@@ -178,8 +177,7 @@
                 stale_key_count: 1,
                 total_key_count: 1,
                 uncompressed_file_size: 1,
-                min_epoch: 0,
-                max_epoch: 0,
+                ..Default::default()
             }),
         ],
         epoch_id_vec_for_clear,
8 changes: 1 addition & 7 deletions src/storage/src/hummock/event_handler/uploader.rs
@@ -1081,14 +1081,8 @@ mod tests {
             right: end_full_key.encode(),
             right_exclusive: true,
         }),
-        file_size: 0,
         table_ids: vec![TEST_TABLE_ID.table_id],
-        meta_offset: 0,
-        stale_key_count: 0,
-        total_key_count: 0,
-        uncompressed_file_size: 0,
-        min_epoch: 0,
-        max_epoch: 0,
+        ..Default::default()
     })]
 }
147 changes: 147 additions & 0 deletions src/storage/src/hummock/iterator/concat_delete_range_iterator.rs
@@ -0,0 +1,147 @@
// Copyright 2023 RisingWave Labs
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::future::Future;

use risingwave_hummock_sdk::key::{FullKey, PointRange, UserKey};
use risingwave_hummock_sdk::HummockEpoch;
use risingwave_pb::hummock::SstableInfo;

use crate::hummock::iterator::DeleteRangeIterator;
use crate::hummock::sstable_store::SstableStoreRef;
use crate::hummock::{HummockResult, SstableDeleteRangeIterator};
use crate::monitor::StoreLocalStatistic;

pub struct ConcatDeleteRangeIterator {
    sstables: Vec<SstableInfo>,
    current: Option<SstableDeleteRangeIterator>,
    idx: usize,
    sstable_store: SstableStoreRef,
    stats: StoreLocalStatistic,
}

impl ConcatDeleteRangeIterator {
    pub fn new(sstables: Vec<SstableInfo>, sstable_store: SstableStoreRef) -> Self {
        Self {
            sstables,
            sstable_store,
            stats: StoreLocalStatistic::default(),
            idx: 0,
            current: None,
        }
    }

    /// Seeks to a table, and then seeks to the key if `seek_key` is given.
    async fn seek_idx(
        &mut self,
        idx: usize,
        seek_key: Option<UserKey<&[u8]>>,
    ) -> HummockResult<()> {
        self.current.take();
        if idx < self.sstables.len() {
            // The core optimization of this PR: an SST without range tombstones
            // contributes nothing to this iterator, so skip fetching its meta.
            if self.sstables[idx].range_tombstone_count == 0 {
                return Ok(());
            }
            let table = self
                .sstable_store
                .sstable(&self.sstables[idx], &mut self.stats)
                .await?;
            let mut sstable_iter = SstableDeleteRangeIterator::new(table);

            if let Some(key) = seek_key {
                sstable_iter.seek(key).await?;
            } else {
                sstable_iter.rewind().await?;
            }

            self.current = Some(sstable_iter);
            self.idx = idx;
        }
        Ok(())
    }
}

impl DeleteRangeIterator for ConcatDeleteRangeIterator {
    type NextFuture<'a> = impl Future<Output = HummockResult<()>> + 'a;
    type RewindFuture<'a> = impl Future<Output = HummockResult<()>> + 'a;
    type SeekFuture<'a> = impl Future<Output = HummockResult<()>> + 'a;

    fn next_extended_user_key(&self) -> PointRange<&[u8]> {
        self.current.as_ref().unwrap().next_extended_user_key()
    }

    fn current_epoch(&self) -> HummockEpoch {
        self.current.as_ref().unwrap().current_epoch()
    }

    fn next(&mut self) -> Self::NextFuture<'_> {
    Review discussion:

    Contributor: In the current implementation, `next_extended_user_key` may stay
    invariant after a `next` call. Is this bad taste?

    Contributor Author: But everywhere we call `next_extended_user_key`, we always
    collect all keys whose user key is equal.

    Contributor (@soundOfDestiny, May 4, 2023): In `MergeIterator`, everywhere we
    call `next_extended_user_key`, we collect all keys whose user key is equal IN
    THE OTHER ITERATORS, but not in the iterator itself.
        async {
            if let Some(iter) = self.current.as_mut() {
                if iter.is_valid() {
                    iter.next().await?;
                } else {
                    // The current sub-iterator is exhausted: advance to the
                    // next SST that yields a valid delete-range iterator.
                    let mut idx = self.idx;
                    while idx + 1 < self.sstables.len() && !self.is_valid() {
                        self.seek_idx(idx + 1, None).await?;
                        idx += 1;
                    }
                }
            }
            Ok(())
        }
    }

    fn rewind(&mut self) -> Self::RewindFuture<'_> {
        async move {
            // Find the first SST that actually contains range tombstones;
            // SSTs with `range_tombstone_count == 0` are never fetched.
            let mut idx = 0;
            while idx < self.sstables.len() && self.sstables[idx].range_tombstone_count == 0 {
                idx += 1;
            }
            self.current.take();
            self.seek_idx(idx, None).await?;
            while idx + 1 < self.sstables.len() && !self.is_valid() {
                self.seek_idx(idx + 1, None).await?;
                idx += 1;
            }
            Ok(())
        }
    }

    fn seek<'a>(&'a mut self, target_user_key: UserKey<&'a [u8]>) -> Self::SeekFuture<'_> {
        async move {
            // Binary-search for the last SST whose smallest user key is not
            // greater than the target.
            let mut idx = self
                .sstables
                .partition_point(|sst| {
                    FullKey::decode(&sst.key_range.as_ref().unwrap().left)
                        .user_key
                        .le(&target_user_key)
                })
                .saturating_sub(1); // considering the boundary of 0
            self.current.take();
            self.seek_idx(idx, Some(target_user_key)).await?;
            while idx + 1 < self.sstables.len() && !self.is_valid() {
                self.seek_idx(idx + 1, None).await?;
                idx += 1;
            }
            Ok(())
        }
    }

    fn is_valid(&self) -> bool {
        self.current
            .as_ref()
            .map(|iter| iter.is_valid())
            .unwrap_or(false)
    }
}
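For orientation, a hedged usage sketch (not part of the PR) of how this iterator is typically driven, assuming the imports at the top of the file are in scope and that `ssts` and `sstable_store` come from the surrounding Hummock read path:

// Drains a ConcatDeleteRangeIterator and collects the epoch of each range
// tombstone it yields; SSTs with no range tombstones are skipped without any
// meta fetch, which is the point of this PR.
async fn collect_tombstone_epochs(
    ssts: Vec<SstableInfo>,
    sstable_store: SstableStoreRef,
) -> HummockResult<Vec<HummockEpoch>> {
    let mut iter = ConcatDeleteRangeIterator::new(ssts, sstable_store);
    iter.rewind().await?;
    let mut epochs = Vec::new();
    while iter.is_valid() {
        epochs.push(iter.current_epoch());
        iter.next().await?;
    }
    Ok(epochs)
}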
78 changes: 13 additions & 65 deletions src/storage/src/hummock/iterator/concat_inner.rs
@@ -16,49 +16,21 @@ use std::cmp::Ordering::{Equal, Greater, Less};
 use std::future::Future;
 use std::sync::Arc;

-use itertools::Itertools;
-use risingwave_common::must_match;
 use risingwave_hummock_sdk::key::FullKey;
 use risingwave_pb::hummock::SstableInfo;

 use crate::hummock::iterator::{DirectionEnum, HummockIterator, HummockIteratorDirection};
 use crate::hummock::sstable::SstableIteratorReadOptions;
-use crate::hummock::sstable_store::TableHolder;
 use crate::hummock::value::HummockValue;
-use crate::hummock::{HummockResult, SstableIteratorType, SstableStore, SstableStoreRef};
+use crate::hummock::{HummockResult, SstableIteratorType, SstableStoreRef};
 use crate::monitor::StoreLocalStatistic;

-enum ConcatItem {
-    Unfetched(SstableInfo),
-    Prefetched(TableHolder),
-}
-
-impl ConcatItem {
-    async fn prefetch(
-        &mut self,
-        sstable_store: &SstableStore,
-        stats: &mut StoreLocalStatistic,
-    ) -> HummockResult<TableHolder> {
-        if let ConcatItem::Unfetched(sstable_info) = self {
-            let table = sstable_store.sstable(sstable_info, stats).await?;
-            *self = ConcatItem::Prefetched(table);
-        }
-        Ok(must_match!(self, ConcatItem::Prefetched(table) => table.clone()))
-    }
-
-    fn smallest_key(&self) -> &[u8] {
-        match self {
-            ConcatItem::Unfetched(sstable_info) => &sstable_info.key_range.as_ref().unwrap().left,
-            ConcatItem::Prefetched(table_holder) => &table_holder.value().meta.smallest_key,
-        }
-    }
-
-    fn largest_key(&self) -> &[u8] {
-        match self {
-            ConcatItem::Unfetched(sstable_info) => &sstable_info.key_range.as_ref().unwrap().right,
-            ConcatItem::Prefetched(table_holder) => &table_holder.value().meta.largest_key,
-        }
-    }
-}
+fn smallest_key(sstable_info: &SstableInfo) -> &[u8] {
+    &sstable_info.key_range.as_ref().unwrap().left
+}
+
+fn largest_key(sstable_info: &SstableInfo) -> &[u8] {
+    &sstable_info.key_range.as_ref().unwrap().right
+}

 /// Served as the concrete implementation of `ConcatIterator` and `BackwardConcatIterator`.
@@ -70,7 +42,7 @@ pub struct ConcatIteratorInner<TI: SstableIteratorType> {
     cur_idx: usize,

     /// All non-overlapping tables.
-    tables: Vec<ConcatItem>,
+    tables: Vec<SstableInfo>,

     sstable_store: SstableStoreRef,

@@ -82,8 +54,8 @@ impl<TI: SstableIteratorType> ConcatIteratorInner<TI> {
     /// Caller should make sure that `tables` are non-overlapping,
     /// arranged in ascending order when it serves as a forward iterator,
     /// and arranged in descending order when it serves as a backward iterator.
-    fn new_inner(
-        tables: Vec<ConcatItem>,
+    pub fn new(
+        tables: Vec<SstableInfo>,
         sstable_store: SstableStoreRef,
         read_options: Arc<SstableIteratorReadOptions>,
     ) -> Self {
@@ -97,30 +69,6 @@ impl<TI: SstableIteratorType> ConcatIteratorInner<TI> {
         }
     }

-    /// Caller should make sure that `tables` are non-overlapping,
-    /// arranged in ascending order when it serves as a forward iterator,
-    /// and arranged in descending order when it serves as a backward iterator.
-    pub fn new(
-        tables: Vec<SstableInfo>,
-        sstable_store: SstableStoreRef,
-        read_options: Arc<SstableIteratorReadOptions>,
-    ) -> Self {
-        let tables = tables.into_iter().map(ConcatItem::Unfetched).collect_vec();
-        Self::new_inner(tables, sstable_store, read_options)
-    }
-
-    /// Caller should make sure that `tables` are non-overlapping,
-    /// arranged in ascending order when it serves as a forward iterator,
-    /// and arranged in descending order when it serves as a backward iterator.
-    pub fn new_with_prefetch(
-        tables: Vec<TableHolder>,
-        sstable_store: SstableStoreRef,
-        read_options: Arc<SstableIteratorReadOptions>,
-    ) -> Self {
-        let tables = tables.into_iter().map(ConcatItem::Prefetched).collect_vec();
-        Self::new_inner(tables, sstable_store, read_options)
-    }

     /// Seeks to a table, and then seeks to the key if `seek_key` is given.
     async fn seek_idx(
         &mut self,
@@ -132,8 +80,9 @@ impl<TI: SstableIteratorType> ConcatIteratorInner<TI> {
                 old_iter.collect_local_statistic(&mut self.stats);
             }
         } else {
-            let table = self.tables[idx]
-                .prefetch(&self.sstable_store, &mut self.stats)
+            let table = self
+                .sstable_store
+                .sstable(&self.tables[idx], &mut self.stats)
    Review discussion:

    Collaborator: Random thought: we could prefetch the next SST's meta, similar
    to the block prefetch in the sstable iterator. Not sure how much improvement
    we would get, though.

    Contributor: We would get more memory usage :rolling_on_the_floor_laughing:

    Collaborator: True. But it looks acceptable, because the memory usage is
    proportional to the number of levels, not the number of SSTs, and we have
    very few levels.

    Contributor Author: But once we support meta-cache refill after compaction,
    the meta-cache miss rate is very low, so it is not necessary to prefetch the
    next `TableHolder` for delete ranges.
                .await?;
            let mut sstable_iter =
                TI::create(table, self.sstable_store.clone(), self.read_options.clone());
@@ -198,13 +147,12 @@ impl<TI: SstableIteratorType> HummockIterator for ConcatIteratorInner<TI> {
             .tables
             .partition_point(|table| match Self::Direction::direction() {
                 DirectionEnum::Forward => {
-                    let ord = FullKey::decode(table.smallest_key()).cmp(&key);
+                    let ord = FullKey::decode(smallest_key(table)).cmp(&key);

                     ord == Less || ord == Equal
                 }
                 DirectionEnum::Backward => {
-                    let ord = FullKey::decode(table.largest_key()).cmp(&key);
-
+                    let ord = FullKey::decode(largest_key(table)).cmp(&key);
    Review comment (Contributor): If the right end of the key range is
    exclusive, we will unexpectedly skip the key itself.

                     ord == Greater || ord == Equal
                 }
             })
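The forward-direction seek counts the tables whose smallest key is not past the target and starts from the last of them (the same `.saturating_sub(1)` guard appears in `ConcatDeleteRangeIterator::seek` above). A self-contained illustration of that arithmetic, with plain byte slices standing in for `FullKey` (illustrative names only):

fn main() {
    let smallest_keys: Vec<&[u8]> = vec![b"b", b"f", b"k"];
    let target: &[u8] = b"g";
    // Count the tables whose smallest key is <= target, then step back one so
    // the seek starts inside the last such table; saturating_sub(1) covers
    // targets that sort before the first table.
    let idx = smallest_keys
        .partition_point(|k| *k <= target)
        .saturating_sub(1);
    assert_eq!(idx, 1); // b"f" <= b"g" < b"k": the seek starts in table 1
}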