
(perf) consume streams in parallel and flush them in series #256

Merged: 5 commits merged into master from consume-parallel on Sep 10, 2018

Conversation

@vigneshshanmugam (Collaborator) commented Sep 7, 2018

Previously we consumed and processed the fragment streams in the order they were declared in the template. This PR does the following:

  • Consume the fragment streams into per-fragment buffers in parallel, and process them in series while flushing them to the client, to guarantee the order in which fragments are declared on the page. A minimal sketch of the pattern follows this list.

  • This should improve the time to last byte and overall latency. Check the benchmarks below.
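To make the idea concrete, here is a minimal sketch of the pattern in plain Node.js. It is not Tailor's actual stringifier-stream implementation; fragmentStreams and output are hypothetical names standing in for the fragment responses and the client response stream.

// Sketch: consume all fragment streams in parallel, flush in series
function pipeFragments(fragmentStreams, output) {
    // Attaching 'data' listeners starts consuming every stream
    // immediately (in parallel), buffering the chunks per fragment.
    const buffered = fragmentStreams.map(stream =>
        new Promise((resolve, reject) => {
            const chunks = [];
            stream.on('data', chunk => chunks.push(chunk));
            stream.on('end', () => resolve(Buffer.concat(chunks)));
            stream.on('error', reject);
        })
    );

    // Flush in series: wait for fragment 0, write it, then fragment 1,
    // and so on, so the client receives fragments in declaration order.
    return buffered.reduce(
        (previous, bufferPromise) =>
            previous
                .then(() => bufferPromise)
                .then(buf => {
                    output.write(buf); // the buffer can be dropped after this write
                }),
        Promise.resolve()
    ).then(() => output.end());
}

The memory caveat in the note below falls out of this shape: a fragment that finishes early sits fully buffered until every fragment declared before it has been flushed.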

Note: One caveat might be an increase in the application's memory usage, since we are buffering the responses. We clear each buffer once its stream has been written to the output stream, but it might still be an issue if fragments send very large responses. We can detect this and warn by listening on the fragment:response event, which carries the size of the response; a sketch of such a check follows.
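A hypothetical monitoring hook along those lines is sketched below. The handler arguments and the MAX_FRAGMENT_BYTES threshold are assumptions made for illustration, not taken from this PR; check Tailor's event documentation for the exact signature.

// Hypothetical sketch: warn when a fragment response is large enough
// that buffering it could noticeably increase memory usage.
const MAX_FRAGMENT_BYTES = 1024 * 1024; // assumed threshold: 1 MiB

tailor.on('fragment:response', (request, fragment, statusCode, headers) => {
    // Assumed handler arguments; the response size is read from the
    // Content-Length header purely for illustration.
    const size = Number(headers['content-length'] || 0);
    if (size > MAX_FRAGMENT_BYTES) {
        console.warn(`Fragment ${fragment.url} sent ${size} bytes; buffering it may be costly`);
    }
});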

Benchmarks

In order to show a clear picture of the results, I created a dummy fragment server that introduces known delays in producing the response, so it is easy to compare the numbers before and after the changes.

// fragment server code
const http = require('http');

// Fixed list of response delays in milliseconds, cycled through in order
let index = 0;
const delays = [100, 20, 100, 200, 10, 50, 400, 230, 40, 330, 130, 390, 250, 5];
function getDelay() {
    const currDelay = delays[index];
    index++;
    if (index >= delays.length) {
        index = 0;
    }
    return currDelay;
}

http
    .createServer((request, response) => {
        response.writeHead(200, { 'Content-Type': 'text/html' });
        // Delay the body to simulate a fragment with known latency
        setTimeout(() => {
            response.end('blah');
        }, getDelay());
    })
    .listen(8081);
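With this server on port 8081, every fragment src can point at the same endpoint; each incoming request gets the next delay in the list, so a page with several fragments exercises a repeatable mix of fast and slow responses.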

The following changes were made to reduce the influence of other code on the benchmarks:

  • Templates are cached (parsing cost is eliminated).
  • Delays from the fragment servers are identical for both runs (consuming in parallel vs consuming in series).
  • 10 fragments were on the page, 7 sync and 3 async, to reflect a realistic use case. A hypothetical template for this setup is sketched after this list.
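For illustration only, a Tailor template for that setup might look roughly like this; the exact template is not part of the PR, and the fragment sources are assumed to all point at the dummy server above.

<!-- hypothetical benchmark template -->
<html>
<body>
    <!-- sync fragments (7 on the benchmark page), flushed in declaration order -->
    <fragment src="http://localhost:8081" />
    <fragment src="http://localhost:8081" />
    <!-- async fragments (3 on the benchmark page), whose content is streamed later -->
    <fragment src="http://localhost:8081" async />
</body>
</html>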

Ran a load test at 100 req/sec for 30 seconds.

Before

Latencies     [mean, 50, 95, 99, max]  403.800887ms, 404.042432ms, 425.609804ms, 469.933934ms, 558.989092ms

Latencies     [mean, 50, 95, 99, max]  401.960285ms, 403.285618ms, 409.419305ms, 465.813201ms, 669.399815ms

Latencies     [mean, 50, 95, 99, max]  404.582374ms, 403.701531ms, 429.116024ms, 467.322299ms, 587.042424ms

After

Latencies     [mean, 50, 95, 99, max]  399.522531ms, 402.666553ms, 406.765565ms, 410.959117ms, 451.260693ms

Latencies     [mean, 50, 95, 99, max]  399.411687ms, 402.191078ms, 406.360312ms, 409.321567ms, 426.356372ms

Latencies     [mean, 50, 95, 99, max]  399.693539ms, 402.11449ms, 407.070213ms, 410.855198ms, 428.636678ms

Huge thanks to @watson for the idea 👍

@codecov (bot) commented Sep 7, 2018

Codecov Report

❗ No coverage uploaded for pull request base (master@887621a).
The diff coverage is 100%.

@@            Coverage Diff            @@
##             master     #256   +/-   ##
=========================================
  Coverage          ?   99.54%           
=========================================
  Files             ?       17           
  Lines             ?      665           
  Branches          ?      126           
=========================================
  Hits              ?      662           
  Misses            ?        3           
  Partials          ?        0
Impacted Files Coverage Δ
lib/streams/stringifier-stream.js 100% <100%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@vigneshshanmugam (Collaborator, Author)

@ruiaraujo It's ready for review.

@ruiaraujo (Contributor)

👍

1 similar comment

@vigneshshanmugam (Collaborator, Author)

👍

@vigneshshanmugam vigneshshanmugam merged commit c67615d into master Sep 10, 2018
@vigneshshanmugam vigneshshanmugam deleted the consume-parallel branch September 10, 2018 12:16