
Add a hard limit for index.number_of_shards #20682

Merged
s1monw merged 6 commits into elastic:master from hard_check_for_num_of_shards on Sep 29, 2016

Conversation

@s1monw (Contributor) commented Sep 28, 2016

This change adds a hard limit to `index.number_of_shards` that prevents indices from being created with more than 1024 shards. This is still a huge limit and can only be changed by setting a system property (see the sketch below).

* 99% of users have fewer than 1024 shards per index. We also make it a hard check that requires a restart of the nodes if a cluster should be allowed to create more than 1024 shards per index.
* NOTE: this does not limit the number of shards per cluster.
* This also prevents accidentally creating something like a new index with millions of shards, which would essentially kill the entire cluster with an OOM on the spot.
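For orientation, here is a minimal, self-contained sketch of the mechanism the description refers to. It is not the actual `IndexMetaData` implementation; the class name and `main` method are only for illustration, and the `es.`-prefixed property name follows the naming discussed later in this thread.

```java
// Simplified sketch of the hard limit described above; not the real Elasticsearch code.
// The ceiling is read once at class load time, so raising it requires restarting the
// node with a different -D value.
public final class ShardLimitSketch {

    static final int MAX_NUMBER_OF_SHARDS =
            Integer.parseInt(System.getProperty("es.index.max_number_of_shards", "1024"));

    static int validateNumberOfShards(int requested) {
        if (requested < 1 || requested > MAX_NUMBER_OF_SHARDS) {
            // Message modeled on the one asserted in the test further down this thread.
            throw new IllegalArgumentException("Failed to parse value [" + requested
                    + "] for setting [index.number_of_shards] must be <= " + MAX_NUMBER_OF_SHARDS);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(validateNumberOfShards(5));    // fine
        System.out.println(validateNumberOfShards(2048)); // throws IllegalArgumentException
    }
}
```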
Member (inline review comment):
I feel like this should be reported in the documentation too, as a WARNING.

Contributor Author (@s1monw) replied:

done

@pickypg (Member) commented Sep 28, 2016

Should this be marked as breaking? 1024 is a ridiculous number, so maybe not.

@s1monw (Contributor, Author) commented Sep 28, 2016

> Should this be marked as breaking? 1024 is a ridiculous number, so maybe not.

Not sure. I mean, you can still set it, but it's harder :)

@pickypg (Member) commented Sep 28, 2016

I could go either way, honestly. It's configurable, but a system property kind of leaves it between a hard and a soft limit. With the warning and the sheer size of the number, it should be okay not to.

@jasontedor (Member) left a review comment:
What happens if the user creates a template with a value that exceeds the hard limit? Does the template get rejected, or do we fail only at index creation time?

@@ -157,10 +157,24 @@ public static State fromString(String state) {
}
}

static {
final int maxNumShards = Integer.parseInt(System.getProperty("index.max_number_of_shards", "1024"));
Member (inline review comment):

I've been trying to prefix the few system properties we have with `es.`; I wonder if we should do this here?

@dakrone (Member) commented Sep 28, 2016

> What happens if the user creates a template with a value that exceeds the hard limit? Does the template get rejected, or do we fail only at index creation time?

Settings within templates are validated at template creation time, so they will get an error and the template will be rejected. I also double-checked this locally and it prevents it.

changed on a closed index. Note: the number of shards is limited to `1024` per
index. This is a safety limit to prevent accidental creation of indices
that can destabilize a cluster due to resource allocation. The limit can be modified
by specifying the `-Dindex.max_number_of_shards=128` system property on every node that is
Member (inline review comment):
maybe it is just me getting confused, but should this be a `-D` or `-E` type of parameter?

Member (inline review comment):
Since it's an actual system property, it requires `-D`. I think the confusion is coming from the fact that it's not prefixed with `es.`, so +1 to that.
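As a small standalone illustration of why `-D` is the right flag here (not Elasticsearch code): a `-D` argument defines a JVM system property, which is exactly what `System.getProperty` reads, whereas `-E` is handled as an Elasticsearch setting through a different mechanism.

```java
public class SysPropDemo {
    public static void main(String[] args) {
        // Run with: java -Des.index.max_number_of_shards=128 SysPropDemo
        // The -D flag defines a JVM system property; without it the default applies.
        int maxNumShards = Integer.parseInt(
                System.getProperty("es.index.max_number_of_shards", "1024"));
        System.out.println("max shards per index: " + maxNumShards);
    }
}
```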

@s1monw (Contributor, Author) commented Sep 29, 2016

@jpountz @jasontedor @dakrone @javanna @pickypg I wonder if we should get this into 5.1 or rather push it to 5.0 since it's kinda breaking?

IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING
.get(Settings.builder().put("index.number_of_shards", 11).build()));
assertEquals("Failed to parse value [11] for setting [index.number_of_shards] must be <= " + limit, exception.getMessage());
System.clearProperty("es.index.max_number_of_shards");
Contributor (inline review comment):
Do we need a try/finally so that a failure in this test cannot have an impact on other tests?

Contributor Author (@s1monw) replied:
done
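For context, a minimal sketch of the try/finally pattern being asked for; the class and test names are made up for illustration and this is not the actual Elasticsearch test.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class SystemPropertyCleanupExampleTest {

    @Test
    public void propertyIsClearedEvenIfAssertionsFail() {
        System.setProperty("es.index.max_number_of_shards", "10");
        try {
            int limit = Integer.parseInt(System.getProperty("es.index.max_number_of_shards"));
            assertEquals(10, limit);
            // ... further assertions that could throw ...
        } finally {
            // Runs even when an assertion above fails, so other tests never see
            // a leftover es.index.max_number_of_shards system property.
            System.clearProperty("es.index.max_number_of_shards");
        }
    }
}
```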

@jpountz (Contributor) commented Sep 29, 2016

I don't mind getting it into 5.0, the risk looks low to me.

@jasontedor (Member) commented:
I think it's safe to backport this.

@jasontedor (Member) left a review:
LGTM.

@jpountz (Contributor) commented Sep 29, 2016

> What happens if the user creates a template with a value that exceeds the hard limit? Does the template get rejected, or do we fail only at index creation time?

I think #8802 will make the template PUT operation fail. If the template is already there (e.g. imported from 2.x), then we will fail at index creation time.

@s1monw s1monw added the v5.0.0 label Sep 29, 2016
@s1monw s1monw merged commit f2e6862 into elastic:master Sep 29, 2016
@s1monw s1monw deleted the hard_check_for_num_of_shards branch September 29, 2016 09:03
s1monw added a commit that referenced this pull request Sep 29, 2016
this change adds a hard limit to `index.number_of_shards` that prevents
indices from being created that have more than 1024 shards. This is still
a huge limit and can only be changed via setting a system property.
s1monw added a commit that referenced this pull request Sep 29, 2016
this change adds a hard limit to `index.number_of_shards` that prevents
indices from being created that have more than 1024 shards. This is still
a huge limit and can only be changed via setting a system property.