
RFC: streaming results through pgwire #16626

Merged 1 commit into cockroachdb:master on Aug 10, 2017
Conversation

tso
Contributor

@tso tso commented Jun 20, 2017

No description provided.

@cockroach-teamcity
Member

This change is Reviewable

@tso tso assigned tso and unassigned tso Jun 20, 2017
@RaduBerinde
Member

Review status: 0 of 1 files reviewed at latest revision, 3 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 43 at r1 (raw file):

to be pushed into `v3.go`.

Adding a buffer which holds a configurable amount of rows before sending

Great, it's a good idea to have a threshold so we don't affect most queries.


docs/RFCS/streaming_results.md, line 52 at r1 (raw file):

which is not ideal.

The second downside is a side-effect of being synchronous. We are unable to

Note that this is less of a concern with distsql, which would have processors still doing work in parallel.


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

- How buffer size should be configured: it might be nice to have a buffer whose
  size can be configured per query?

How would that work? Some new syntax that provides a hint?


Comments from Reviewable

@jordanlewis
Member

Review status: 0 of 1 files reviewed at latest revision, 6 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 12 at r1 (raw file):

At the time of writing the results of a SQL query are buffered into memory
prior to being sent back to the client through pgwire. This RFC outlines a
plan for streaming results through pgwire.

This would be a good place to mention that streamed results will be optional and configurable by the user in some fashion.


docs/RFCS/streaming_results.md, line 28 at r1 (raw file):

the client we are unable to automatically retry a query.

# Detailed design

Can you please add a section on how streaming results interacts with the Postgres wire protocol? Nothing too big - just something to explain why it's okay to start streaming results and send an error half way through if we encounter one. Include a citation from the Postgres wire protocol spec.


docs/RFCS/streaming_results.md, line 41 at r1 (raw file):

The largest change that will have to happen for `v3.go` comes from automatic
retry handling. Since we wish to keep automatic retries this logic will need
to be pushed into `v3.go`.

We discussed disabling automatic retries for queries with streaming results - can you add that idea to this section?


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, RaduBerinde wrote…

How would that work? Some new syntax that provides a hint?

On the subject of syntax, we had discussed making streaming results optional - can you mention that discussion here? That might belong at the top of the document.



@andreimatei
Contributor

Review status: 0 of 1 files reviewed at latest revision, 10 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 28 at r1 (raw file):

Previously, jordanlewis (Jordan Lewis) wrote…

Can you please add a section on how streaming results interacts with the Postgres wire protocol? Nothing too big - just something to explain why it's okay to start streaming results and send an error half way through if we encounter one. Include a citation from the Postgres wire protocol spec.

+1


docs/RFCS/streaming_results.md, line 40 at r1 (raw file):

The largest change that will have to happen for `v3.go` comes from automatic
retry handling. Since we wish to keep automatic retries this logic will need

I'm worried about what this refactoring would entail. The automatic retries are driven by a Txn.Exec() call that takes a closure. This call is inside the Executor somewhere; I don't know how easy it'd be to extract it (or to replace it with retries happening at the higher level).
As you've discovered, significant refactorings of the Executor are not easy to do, and I'm worried this might turn out to be a large one. Unless you've thought about it and have an actual plan, in which case I could be wrong.

That's why I generally favor the "alternative" you've listed at the end; without having thought about this project in any detail, that always seemed to me like the natural way to do it without big code movements. I'll give a concrete suggestion on that section.


docs/RFCS/streaming_results.md, line 52 at r1 (raw file):

Previously, RaduBerinde wrote…

Note that this is less of a concern with distsql, which would have processors still doing work in parallel.

Could the pgwire connection run async wrt the Executor and the planNodes? They could communicate through a channel or another sort of buffer.
Otherwise I wonder what the effect of sending the rows on the connection synchronously would be. Sockets are buffered, and as noted DistSQL is less affected, but it still seems to me that it'd be a good idea to decouple this communication with the client from the query processing (if feasible).
Consider writing a benchmark with a client on a network with some latency and see if this way of streaming results makes the query slower than buffering all of them in memory.


docs/RFCS/streaming_results.md, line 59 at r1 (raw file):

## Give executor an interface to write to

A naive approach for demonstration is to simply pass a channel from pgwire into

One option is to modify the current struct that the Executor returns to the connection - the ResultList - to act as this buffered channel (or a set of channels).


docs/RFCS/streaming_results.md, line 65 at r1 (raw file):

The first downside to this approach is that executor may do unnecessary work.
In the case where the connection closes pgwire is unable to inform executor

That need not be the case - the communication interface between the connection and the executor can be bi-directional (e.g. have a channel that the Executor listens on and the connection closes to signal cancellation).
In fact, with @itsbilal's work on query cancellation, we're bound to get a way to cancel a query's ctx; the connection could use that.


docs/RFCS/streaming_results.md, line 68 at r1 (raw file):

until executor attempts to send data through pgwire.

The second downside is that it will introduce asynchronicity to the pgwire

Again, I for one see that as a plus :)



@irfansharif irfansharif changed the title Add RFC for streaming results (docs/RFCS/streaming_results.md). RFC: streaming results Jun 21, 2017
@irfansharif irfansharif changed the title RFC: streaming results RFC: streaming results through pgwire Jun 21, 2017
@petermattis
Collaborator

Review status: 0 of 1 files reviewed at latest revision, 11 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 18 at r1 (raw file):

It is currently impossible to do queries with large result sets, since we will
exceed our memory budget. A nice side effect of changing to a streaming
interface is lowering the time to first byte in some cases.

There is another motivation to this work: performance for simple queries. Buffering the result set in memory so it can be passed to pgwire and then encoded into the output format is an unnecessary step. It will be faster to encode results directly to the pgwire output format. Additionally, after the result has been generated, we perform other work such as recording statement statistics which should ideally take place concurrently with sending the response.


docs/RFCS/streaming_results.md, line 43 at r1 (raw file):

Previously, RaduBerinde wrote…

Great, it's a good idea to have a threshold so we don't affect most queries.

Agreed, though see my comment above about the performance motivation of this work. Can we encode the rows into the pgwire output format for this buffering?



@bdarnell
Contributor

The cockroach dump command currently paginates its results by hand. It would be nice if it could use the streaming protocol instead. Check with the backup team to make sure that's going to be feasible. (The sticking point that I see is that after an error, you want to be able to restart from somewhere near where the error occurred instead of going back to the beginning).


Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 12 at r1 (raw file):

Previously, jordanlewis (Jordan Lewis) wrote…

This would be a good place to mention that streamed results will be optional and configurable by the user in some fashion.

And why a user might want control over this. Is the loss of automatic retries the only downside to the streaming mode? (Maybe this configurability should be its own section in the detailed design section instead of adding it to the summary)


docs/RFCS/streaming_results.md, line 18 at r1 (raw file):

Previously, petermattis (Peter Mattis) wrote…

There is another motivation to this work: performance for simple queries. Buffering the result set in memory so it can be passed to pgwire and then encoded into the output format is an unnecessary step. It will be faster to encode results directly to the pgwire output format. Additionally, after the result has been generated, we perform other work such as recording statement statistics which should ideally take place concurrently with sending the response.

@petermattis I see how streaming can improve performance (in terms of time-to-first-result), but I don't see much wasted work in the buffering per se. Isn't it just storing pointers to (unencoded) result data we already have? My gut says that the extra coordination costs of the proposed design will make this change a slight net negative for throughput (still worthwhile, though, IMHO).

In any case, accepting this as a goal seems to complicate the design somewhat. If we just want to control memory usage on the server, we don't have to deal with letting the user opt in or out of streaming. Just switch to streaming mode automatically when the result buffer fills up (even if we have to fail non-retriably later, that's no worse than failing immediately due to insufficient memory).


docs/RFCS/streaming_results.md, line 26 at r1 (raw file):

The main concern for streaming results is handling error cases – especially
retryable errors. It's important to note that once we've sent any results to
the client we are unable to automatically retry a query.

More precisely, we can't retry a query that would contradict the results we've already sent back. It would be tricky to take advantage of this, but there are some types of queries/errors that would be possible to retry. I was thinking of WriteIntentErrors here but they're already retried at the Store level, so I'm not sure if there are other cases where it might be worth looking into this any more.


docs/RFCS/streaming_results.md, line 43 at r1 (raw file):

Previously, petermattis (Peter Mattis) wrote…

Agreed, though see my comment above about the performance motivation of this work. Can we encode the rows into the pgwire output format for this buffering?

@petermattis Why does it matter whether the pgwire encoding happens before or after this buffering?


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, jordanlewis (Jordan Lewis) wrote…

On the subject of syntax, we had discussed making streaming results optional - can you mention that discussion here? That might belong at the top of the document.

Why would a user want control over the buffer size? Once you're using streaming mode (and lose the server-side retries), the size of the buffer shouldn't make much difference. Better to keep this entirely under server control.

As for opting in or out of streaming mode, I suggest that transactions that use the savepoint retry protocol should always use streaming mode, since they're promising to handle client-side retries. Not sure about how to specify this for standalone queries (as one data point, mysql has a SQL_BUFFER_RESULT modifier in its SELECT statement).



@petermattis
Collaborator

Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 18 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

@petermattis I see how streaming can improve performance (in terms of time-to-first-result), but I don't see much wasted work in the buffering per se. Isn't it just storing pointers to (unencoded) result data we already have? My gut says that the extra coordination costs of the proposed design will make this change a slight net negative for throughput (still worthwhile, though, IMHO).

In any case, accepting this as a goal seems to complicate the design somewhat. If we just want to control memory usage on the server, we don't have to deal with letting the user opt in or out of streaming. Just switch to streaming mode automatically when the result buffer fills up (even if we have to fail non-retriably later, that's no worse than failing immediately due to insufficient memory).

The current code in sql.Executor builds up a StatementResults struct which is internally composed of a []Result. Each Result has a []Row and each Row is a []Datum. Buffering involves copying datums. There is quite a bit of attention paid to doing this quickly (see RowContainer.AddRow), but my contention is that directly encoding the results into pgwire format will avoid a lot of allocations and provide a modest performance boost.


docs/RFCS/streaming_results.md, line 43 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

@petermattis Why does it matter whether the pgwire encoding happens before or after this buffering?

See my response to your earlier comment. I believe buffering into pgwire encoding will be faster and use less memory. Is there a reason not to do it?

Btw, this RFC doesn't touch on the interface that will be plumbed into sql.Executor. I'm imagining something like:

type ResultWriter interface {
  BeginResult(pgTag string)
  SetRowsAffected(rowsAffected int)
  SetColumns(columns sqlbase.ResultColumns)
  AddRow(row parser.Datums)
  EndResult()
  Error(err error)
}

This is just a sketch, but the idea is that the buffering would be completely hidden by this interface and controlled by sql/pgwire. The interface would likely need to be more complicated to allow retries when possible, so don't read too much into this.



@tso
Contributor Author

tso commented Jun 27, 2017

Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 12 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

And why a user might want control over this. Is the loss of automatic retries the only downside to the streaming mode? (Maybe this configurability should be its own section in the detailed design section instead of adding it to the summary)

Jordan and I had talked about this offline, but I don't think that having optional streaming results makes sense. I believe it would complicate the code a lot to support both behaviors and I'm unsure the benefits are worth that.


docs/RFCS/streaming_results.md, line 40 at r1 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

I'm worried about what this refactoring would entail. The automatic retries are driven by a Txn.Exec() call that takes a closure. This call is inside the Executor somewhere; I don't know how easy it'd be to extract it (or to replace it with retries happening at the higher level).
As you've discovered, significant refactorings of the Executor are not easy to do, and I'm worried this might turn out to be a large one. Unless you've thought about it and have an actual plan, in which case I could be wrong.

That's why I generally favor the "alternative" you've listed at the end; without having thought about this project in any detail, that always seemed to me like the natural way to do it without big code movements. I'll give a concrete suggestion on that section.

Yeah I'd like to go with the "alternative" approach unless others have strong arguments for the currently "proposed" approach.


docs/RFCS/streaming_results.md, line 41 at r1 (raw file):

Previously, jordanlewis (Jordan Lewis) wrote…

We discussed disabling automatic retries for queries with streaming results - can you add that idea to this section?

I think that if there's a buffer there's no reason to do this.


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

Why would a user want control over the buffer size? Once you're using streaming mode (and lose the server-side retries), the size of the buffer shouldn't make much difference. Better to keep this entirely under server control.

As for opting in or out of streaming mode, I suggest that transactions that use the savepoint retry protocol should always use streaming mode, since they're promising to handle client-side retries. Not sure about how to specify this for standalone queries (as one data point, mysql has a SQL_BUFFER_RESULT modifier in its SELECT statement).

As I wrote in another comment: unsure if streaming results should be optional. I lean towards no at the moment.

Unsure how buffer size should be configured, or if it should be. To be honest, I'm unsure what options there are for configuration.

I believe @dt mentioned something along the lines of backup/restore having a use case for measuring progress. I forget the specifics, but something along the lines of there being 10 results indicating progress and wanting to get each result as soon as it's available, so having a configurable buffer size would allow this. Unsure if there's any use cases outside of that.



@bdarnell
Contributor

Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, tristan-ohlson (Tristan Ohlson) wrote…

As I wrote in another comment: unsure if streaming results should be optional. I lean towards no at the moment.

Unsure how buffer size should be configured, or if it should be. To be honest, I'm unsure what options there are for configuration.

I believe @dt mentioned something along the lines of backup/restore having a use case for measuring progress. I forget the specifics, but something along the lines of there being 10 results indicating progress and wanting to get each result as soon as it's available, so having a configurable buffer size would allow this. Unsure if there's any use cases outside of that.

Maybe there should be a time-based component too: start streaming results when the buffer fills up or after the buffer has been non-empty for N seconds.



@andreimatei
Contributor

Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 40 at r1 (raw file):

Previously, tristan-ohlson (Tristan Ohlson) wrote…

Yeah I'd like to go with the "alternative" approach unless others have strong arguments for the currently "proposed" approach.

Did you think about it any more?


docs/RFCS/streaming_results.md, line 41 at r1 (raw file):

Previously, tristan-ohlson (Tristan Ohlson) wrote…

I think that if there's a buffer there's no reason to do this.

btw there's now a way to tell the Executor that it can no longer do automatic retries -> you can advance the txn state from FirstBatch to Open. If the connection does this, there might be some synchronization with the Executor required to prevent races, previously impossible, around accessing that state.


Comments from Reviewable

@CLAassistant

CLAassistant commented Jul 13, 2017

CLA assistant check
All committers have signed the CLA.

@tso
Contributor Author

tso commented Jul 13, 2017

Review status: 0 of 1 files reviewed at latest revision, 12 unresolved discussions.


docs/RFCS/streaming_results.md, line 28 at r1 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

+1

Done.


docs/RFCS/streaming_results.md, line 40 at r1 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

Did you think about it any more?

Yes, I agree with you. I have swapped the main and alternative approaches.


docs/RFCS/streaming_results.md, line 41 at r1 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

btw there's now a way to tell the Executor that it can no longer do automatic retries -> you can advance the txn state from FirstBatch to Open. If the connection does this, there might be some synchronization with the Executor required to prevent races, previously impossible, around accessing that state.

This is awesome! Should™ be easy enough.


docs/RFCS/streaming_results.md, line 43 at r1 (raw file):

Previously, petermattis (Peter Mattis) wrote…

See my response to your earlier comment. I believe buffering into pgwire encoding will be faster and use less memory. Is there a reason not to do it?

Btw, this RFC doesn't touch on the interface that will be plumbed into sql.Executor. I'm imagining something like:

type ResultWriter interface {
  BeginResult(pgTag string)
  SetRowsAffected(rowsAffected int)
  SetColumns(columns sqlbase.ResultColumns)
  AddRow(row parser.Datums)
  EndResult()
  Error(err error)
}

This is just a sketch, but the idea is that the buffering would be completely hidden by this interface and controlled by sql/pgwire. The interface would likely need to be more complicated to allow retries when possible, so don't read too much into this.

Added my best guess for the result writer interface (guaranteed to change upon implementation honestly).


docs/RFCS/streaming_results.md, line 52 at r1 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

Could the pgwire connection run async wrt the Executor and the planNodes? They could communicate through a channel or another sort of buffer.
Otherwise I wonder what the effect of sending the rows on the connection synchronously would be. Sockets are buffered, and as noted DistSQL is less affected, but it still seems to me that it'd be a good idea to decouple this communication with the client from the query processing (if feasible).
Consider writing a benchmark with a client on a network with some latency and see if this way of streaming results makes the query slower than buffering all of them in memory.

I added a note about this: For the initial implementation I would like to avoid dealing with this, but I'll have to decide based on benchmarks. Regardless I'm happy to add this in a follow up, since I believe it will increase performance.



@andreimatei
Contributor

Review status: 0 of 1 files reviewed at latest revision, 13 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 14 at r2 (raw file):

plan for streaming results through pgwire.

# Motivation and background

The Spanner SQL paper mentions another benefit of their result streaming: clients can do "pagination" without sorting by using streaming of results. So, instead of having to issue queries for every page and use sorting to get deterministic order of results (only used for keeping different pages coherent), a client can stream results and consume "a page" at a time. Spanner seems significantly more apt at error recovery within a query than we are, so they encourage long-running queries, but anyway something to keep in mind.



@knz
Contributor

knz commented Jul 15, 2017

:lgtm:

Tristan: you can process my last comments below, then announce the RFC "final comment period".


Review status: 0 of 1 files reviewed at latest revision, 13 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 12 at r1 (raw file):

Previously, tso (Tristan Ohlson) wrote…

Jordan and I had talked about this offline, but I don't think that having optional streaming results makes sense. I believe it would complicate the code a lot to support both behaviors and I'm unsure the benefits are worth that.

Agreed. Add a note here explaining that a user can disable streaming in practice by setting a pre-stream buffer of size MaxInt64.


docs/RFCS/streaming_results.md, line 18 at r1 (raw file):

Previously, petermattis (Peter Mattis) wrote…

The current code in sql.Executor builds up a StatementResults struct which is internally composed of a []Result. Each Result has a []Row and each Row is a []Datum. Buffering involves copying datums. There is quite a bit of attention paid to doing this quickly (see RowContainer.AddRow), but my contention is that directly encoding the results into pgwire format will avoid a lot of allocations and provide a modest performance boost.

Tristan and I discussed this offline. The main thrust of the work here is to refactor the code to make streaming possible; this would bring the space complexity of returning N rows from O(N) to O(1). The kind of optimization discussed here can be added as a next step, as it would merely shave a constant after we get into the O(1) terrain.

Tristan: add a cursory note in a section "Possible future steps" below to mention that the in-memory buffering can later be further pre-encoded in pgwire format.


docs/RFCS/streaming_results.md, line 26 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

More precisely, we can't retry a query that would contradict the results we've already sent back. It would be tricky to take advantage of this, but there are some types of queries/errors that would be possible to retry. I was thinking of WriteIntentErrors here but they're already retried at the Store level, so I'm not sure if there are other cases where it might be worth looking into this any more.

We discussed this offline. The way to do this is to simply disable automatic retry as soon as anything leaves the in-memory buffer towards the client, no questions asked.

Tristan: add a reminder of this somewhere in the section "detailed design".


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, bdarnell (Ben Darnell) wrote…

Maybe there should be a time-based component too: start streaming results when the buffer fills up or after the buffer has been non-empty for N seconds.

Tristan: I encourage you to make the buffer size configurable per-session, with a default configured using a cluster setting (much like the distsql setting already).

Add a note to this effect here, together with the reminder from above that buffer size of MaxInt64 effectively disables streaming.


docs/RFCS/streaming_results.md, line 14 at r2 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

The Spanner SQL paper mentions another benefit of their result streaming: clients can do "pagination" without sorting by using streaming of results. So, instead of having to issue queries for every page and use sorting to get deterministic order of results (only used for keeping different pages coherent), a client can stream results and consume "a page" at a time. Spanner seems significantly more apt at error recovery within a query than we are, so they encourage long-running queries, but anyway something to keep in mind.

Tristan: take that comment from Andrei and add it below as "potential additional benefit". No need for further action on this at this time.



@tso
Contributor Author

tso commented Jul 17, 2017

Processed. Final comment period begins!


Review status: 0 of 1 files reviewed at latest revision, 13 unresolved discussions, some commit checks failed.


docs/RFCS/streaming_results.md, line 12 at r1 (raw file):

Previously, knz (kena) wrote…

Agreed. Add a note here explaining that a user can disable streaming in practice by setting a pre-stream buffer of size MaxInt64.

Done.


docs/RFCS/streaming_results.md, line 18 at r1 (raw file):

Previously, knz (kena) wrote…

Tristan and I discussed this offline. The main thrust of the work here is to refactor the code to make streaming possible; this would bring the space complexity of returning N rows from O(N) to O(1). The kind of optimization discussed here can be added as a next step, as it would merely shave a constant after we get into the O(1) terrain.

Tristan: add a cursory note in a section "Possible future steps" below to mention that the in-memory buffering can later be further pre-encoded in pgwire format.

Done.


docs/RFCS/streaming_results.md, line 74 at r1 (raw file):

Previously, knz (kena) wrote…

Tristan: I encourage you to make the buffer size configurable per-session, with a default configured using a cluster setting (much like the distsql setting already).

Add a note to this effect here, together with the reminder from above that buffer size of MaxInt64 effectively disables streaming.

Done.


docs/RFCS/streaming_results.md, line 14 at r2 (raw file):

Previously, knz (kena) wrote…

Tristan: take that comment from Andrei and add it below as "potential additional benefit". No need for further action on this at this time.

Done.



@tso tso requested a review from a team as a code owner August 10, 2017 23:12
@tso tso merged commit 0367fc5 into cockroachdb:master Aug 10, 2017
@tso tso deleted the tohlson.2017.06.20-streaming-results branch August 10, 2017 23:42