Flatten with user-defined concurrency: flattenParallel #161
This is the #1 feature keeping me off stream.
Apart from "it would be nice", have any of you encountered a real need for this operator when writing real code?
Can't it be used in any situation where you want to be able to specify concurrency? For example: if you want to read 10,000 files, and you want to open them in parallel, say 20 at a time. I wrote it to handle bulk API requests: I needed to make thousands of API calls, with some concurrency, but without hitting my rate limit.
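The rate-limited bulk-request scenario above can be sketched without any stream library at all, using a plain promise pool. This is only an illustration of the idea, not xstream code; `mapWithConcurrency` is a hypothetical helper name:

```javascript
// A minimal sketch of concurrency-limited bulk work using plain Promises.
// `mapWithConcurrency` is a hypothetical helper, not an xstream API: it runs
// `task` over `items` with at most `limit` tasks in flight at once.
async function mapWithConcurrency(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;                       // index of the next unclaimed item
  async function worker() {
    while (next < items.length) {
      const i = next++;               // claim an index synchronously
      results[i] = await task(items[i], i);
    }
  }
  // Spawn at most `limit` workers; they cooperatively drain the queue.
  const count = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: count }, worker));
  return results;
}
```

With something like this, the bulk-request scenario becomes `mapWithConcurrency(urls, 20, fetchOne)`: thousands of calls, but never more than 20 in flight at once.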
I'd say that's a use case for Node Streams.
From a client? Sounds like the job of an intermediate backend. I'm getting to a point here, which is: xstream is meant for frontend programming, and specifically to support Cycle.js. I find it hard to see a use for flattenParallel in that context.
I didn't realize the project had such a narrow focus. Is there any reason xstream cannot or should not be used in a backend? |
@staltz Thank you for the replies. Here are my use cases, which are all from the browser client.
Because there are better options. xstream, with its very small size, is clearly meant for browsers, because kB size doesn't matter much in Node.js backends. most.js is almost always a better choice in that case, with very high performance in Node.js.
Really good use cases to report. I wouldn't have been able to predict these types of use on my own. Thanks. I'll consider adding this feature, then. It would probably be named
@staltz, were you thinking of adding this as an extra, or to core?
@xtianjohns as an extra, named
I've got this implemented, but the tests are pretty thin; I've just copied the existing tests. As I was working on this I realized that it was kinda funky to think about. PR incoming. Also, this is a heck of an operator to write docs for, haha. I'll welcome any feedback about how to make it clearer.
Add flattenSequentiallyAtMost(n) extra. flattenSequentiallyAtMost is designed to provide consumer-configurable concurrency to flattening operations. Two flattening extras exist which allow consumers to flatten a meta stream with maximum concurrency, or with no concurrency. This new operator supports a concurrency limit, representing the maximum amount of _additional_ streams to connect to during flattening. Resolve staltz#161.
Add flattenConcurrentlyAtMost(n) extra. flattenConcurrentlyAtMost is designed to provide consumer-configurable concurrency to flattening operations. Two flattening extras exist which allow consumers to flatten a meta stream with maximum concurrency, or with no concurrency. This new operator supports a concurrency limit, representing the maximum amount of _additional_ streams to connect to during flattening. Resolve staltz#161.
There currently is no middle ground between `flattenConcurrently` and `flattenSequentially`. It would be nice to be able to specify the amount of concurrency desired.

Here's a half-baked snippet that I call `flattenParallel`, which handles a variable number (`n`) of streams concurrently. This provides a flexible abstraction of `flattenSequentially` and `flattenConcurrently`:

- `flattenSequentially` is `flattenParallel(0)`
- `flattenConcurrently` is `flattenParallel(Infinity)`
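The snippet itself didn't survive in this transcript, but the two equivalences can be illustrated with a self-contained sketch over promise-returning thunks. The names `flattenParallel` and `delayed` here are illustrative, not xstream's implementation, and in this sketch `n` counts the *total* number of concurrent sources (so the sequential case is a limit of 1, whereas the issue counts only additional streams):

```javascript
// Illustrative sketch only: inner "streams" are modeled as thunks that
// return promises, and `flattenParallel` caps how many run at once.
function delayed(value, ms) {
  return () => new Promise((res) => setTimeout(() => res(value), ms));
}

async function flattenParallel(thunks, limit) {
  const out = [];                  // values in the order they complete
  let next = 0;
  async function worker() {
    while (next < thunks.length) {
      const i = next++;
      out.push(await thunks[i]());
    }
  }
  const count = Math.min(limit, thunks.length);
  await Promise.all(Array.from({ length: count }, worker));
  return out;
}

// With a limit of 1, output follows source order (sequential) even though the
// first source is slowest; with Infinity, faster sources finish first.
const sources = [delayed('a', 60), delayed('b', 40), delayed('c', 20)];
```

Running `flattenParallel(sources, 1)` yields `['a', 'b', 'c']`, while `flattenParallel(sources, Infinity)` yields `['c', 'b', 'a']`, mirroring the `flattenSequentially`/`flattenConcurrently` ends of the spectrum.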