
Add replication strategy support (#1) #9

Merged
3 commits merged into Galeria-Kaufhof:master on Jul 22, 2016

Conversation


@markglh markglh commented Jul 14, 2016

  • Added support for a configurable replication strategy when creating the keyspace (a sketch of the CQL this maps to is shown below). A valid config will look like:

Simple Strategy

pillar.faker {
    development {
        cassandra-seed-address: "127.0.0.1"
        cassandra-keyspace-name: "pillar_development"
        replicationStrategy: "SimpleStrategy"
        replicationFactor: 0
    }
}

Network Topology Strategy

pillar.faker {
    development {
        cassandra-seed-address: "127.0.0.1"
        cassandra-keyspace-name: "pillar_development"
        replicationStrategy: "NetworkTopologyStrategy"
        replicationFactor: [
            {dc1: 2},
            {dc2: 3}
        ]
    }
}
  • Added CassandraUnit to enable testing & CI without a local Cassandra:
    https://github.com/jsevellec/cassandra-unit
  • Tweaked README
  • Set the default consistencyLevel to QUORUM. This ensures that the migrations are immediately consistent (both reads and writes will use QUORUM). High consistency here is essential to ensure that we always read the latest migrations and don't try to re-apply them. Additionally, performance is not a concern as these are executed as a one-off.

We deliberately didn't make the consistency level configurable as we don't believe it adds value; there isn't a valid reason to use a lower consistency level than QUORUM here.
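For illustration only, here is a minimal Scala sketch, not the PR's actual implementation, of how a config block like the ones above could be turned into the replication clause of the resulting CREATE KEYSPACE statement. The ReplicationClause and Example names are made up for this sketch:

import com.typesafe.config.{Config, ConfigFactory}
import scala.collection.JavaConverters._

object ReplicationClause {
  // Builds the CQL replication map from a pillar-style environment config block.
  def fromConfig(config: Config): String =
    config.getString("replicationStrategy") match {
      case "SimpleStrategy" =>
        s"{'class': 'SimpleStrategy', 'replication_factor': ${config.getInt("replicationFactor")}}"
      case "NetworkTopologyStrategy" =>
        // replicationFactor is a list of single-entry objects, e.g. [{dc1: 2}, {dc2: 3}]
        val perDc = config.getConfigList("replicationFactor").asScala.flatMap { dc =>
          dc.root().entrySet().asScala.map(e => s"'${e.getKey}': ${e.getValue.unwrapped()}")
        }
        s"{'class': 'NetworkTopologyStrategy', ${perDc.mkString(", ")}}"
      case other =>
        throw new IllegalArgumentException(s"Unsupported replication strategy: $other")
    }
}

object Example extends App {
  val dev = ConfigFactory.parseString(
    """cassandra-keyspace-name: "pillar_development"
      |replicationStrategy: "NetworkTopologyStrategy"
      |replicationFactor: [ {dc1: 2}, {dc2: 3} ]""".stripMargin)

  // Prints:
  // CREATE KEYSPACE pillar_development WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 3}
  println(
    s"CREATE KEYSPACE ${dev.getString("cassandra-keyspace-name")} " +
      s"WITH replication = ${ReplicationClause.fromConfig(dev)}")
}

Against the Simple Strategy example above, the same function would produce {'class': 'SimpleStrategy', 'replication_factor': 0}.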

* Basic setup for replication configuration support for Pillar.

* Removed ReplicationOptions as it was redundant.

* Error handling for reading ReplicationStrategy from our config.

* Tweaked the error handling.

* Modified the CommandExecutorTest and added a new test.

* Add qualifications for config params.

* added cassandra unit and tweaked app default params

* Improved README

markglh commented Jul 14, 2016

Credit for the initial work goes to @j-potts

@@ -4,5 +4,3 @@ jdk:
- oraclejdk8
scala:
- 2.11.8
services:

@markglh markglh Jul 14, 2016


Not required now that we're using cassandra-unit.

import org.scalatest.{BeforeAndAfter, FeatureSpec, GivenWhenThen, Matchers}

class PillarCommandLineAcceptanceSpec extends FeatureSpec
with CassandraSpec
@markglh markglh

This gives us an embedded Cassandra.
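For reference, a hypothetical sketch of what a CassandraSpec-style trait could look like when backed by cassandra-unit; the trait in this PR may be structured differently, and the names below are assumptions:

import org.cassandraunit.utils.EmbeddedCassandraServerHelper
import org.scalatest.{BeforeAndAfterAll, Suite}

trait CassandraSpec extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    super.beforeAll()
    // Starts an in-process Cassandra on localhost, so tests and CI runs don't
    // need a locally installed Cassandra service (hence the Travis change above).
    EmbeddedCassandraServerHelper.startEmbeddedCassandra()
  }

  override def afterAll(): Unit = {
    // Drops the keyspaces created during the test run.
    EmbeddedCassandraServerHelper.cleanEmbeddedCassandra()
    super.afterAll()
  }
}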

@lichtsprung lichtsprung self-assigned this Jul 15, 2016

markglh commented Jul 15, 2016

@lichtsprung we also have another PR almost ready to go which builds on this one, adding QUORUM consistency for the migrations. This is essential when running against several nodes.

@lichtsprung
Member

@markglh Awesome! I will try to review both PRs this Friday :)


markglh commented Jul 20, 2016

@lichtsprung that's great, thanks! We'll raise the second PR later today then so it's ready for you. It builds upon this one, so it's probably best to review and merge this one before reviewing the second.

* Set default consistency level to QUORUM

markglh commented Jul 21, 2016

I've merged the consistency level changes into this PR as they go hand in hand and are fairly trivial.
See here: 2a95614

@lichtsprung
Member

@markglh I agree that you don't want to use a lower consistency level than QUORUM, but what if you need LOCAL_QUORUM consistency?


markglh commented Jul 22, 2016

If it's a single DC then we'd effectively have LOCAL_QUORUM anyway; however, with multiple DCs my thinking was that you would always want these schema changes replicated across all of them, as they're obviously pretty important. I was actually toying with EACH_QUORUM but I'm not sure if that's necessary.

Are you thinking we may want LOCAL_QUORUM under some circumstances? @lichtsprung


markglh commented Jul 22, 2016

To elaborate on the multiple-DC point, my thinking was that having inconsistent keyspaces across DCs could potentially be a major issue if applications start running before they're consistent.


lichtsprung commented Jul 22, 2016

@markglh Not sure. Was just wondering if there might be a (good) reason for using some variant of QUORUM.

I think I'd prefer EACH_QUORUM in a multi DC setting though.
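For context, a minimal sketch, assuming the DataStax Java driver 3.x API and pillar's applied_migrations table name, of where the consistency level under discussion is applied; the object and statements here are illustrative, not pillar's actual code:

import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}

object ConsistencyLevelSketch extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect("pillar_development")

  // QUORUM: a majority of replicas must acknowledge the read, so a migration
  // recorded by a previous run is always visible and never re-applied.
  val readAppliedMigrations = new SimpleStatement("SELECT * FROM applied_migrations")
    .setConsistencyLevel(ConsistencyLevel.QUORUM)
  session.execute(readAppliedMigrations)

  // EACH_QUORUM (on writes) would additionally require a quorum in every data
  // centre, the stricter multi-DC behaviour discussed in this thread:
  // someStatement.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM)

  cluster.close()
}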

@lichtsprung
Member

...Not sure if this is really an issue or necessary for multiple DCs.

@markglh We can merge this version and see if anyone complains that they need some higher consistency level for whatever reason.


markglh commented Jul 22, 2016

@lichtsprung works for me. If the feedback is that we want this configurable, we can make the change. Would you like it changed to EACH_QUORUM before the merge?

@lichtsprung
Member

@markglh Let me ask our ops team. If they prefer something other than quorum, we need to make it configurable.

@lichtsprung lichtsprung merged commit e446bbd into Galeria-Kaufhof:master Jul 22, 2016
@lichtsprung
Member

@markglh I'll publish a new version on Monday.


markglh commented Jul 22, 2016

@lichtsprung that's great, thanks!!
