
mongodump-with-stream

A boilerplate example showing a good way to read, fetch, and process a million or more records using Node.js streams instead of loading everything into memory.

Benchmarks

  • Read all the documents into memory and dump the clone (bad)

    • Number of records: ~1 million
    • Memory consumption: ~1.5GB
    • Time taken: ~35secs
  • Read the documents with a stream and dump them in batches with a threshold of ~8k records (good); a sketch follows this list

    • Number of records: ~1 million
    • Memory consumption: ~400MB
    • Time taken: ~18secs
    • Note: Memory consumption is still fairly large because the per-interval batch data has to be kept in memory. Piping to a writable stream instead, e.g. writing the data to a file with stream.pipe(writable) (the more practical use case), would consume a lot less.
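A minimal sketch of the stream-based batch dump, assuming the official mongodb Node.js driver; the connection URI, the database/collection names (test, source, source_clone) and the 8k threshold are placeholders for illustration, not values taken from this repo:

```js
const { MongoClient } = require("mongodb");

const BATCH_SIZE = 8000; // flush threshold (~8k records), illustrative

async function dumpWithStream(uri) {
  const client = await MongoClient.connect(uri);
  const db = client.db("test");
  const source = db.collection("source");
  const target = db.collection("source_clone");

  // Stream the cursor instead of calling toArray(), so only about one
  // batch worth of documents is held in memory at a time.
  const stream = source.find({}).stream();
  let batch = [];

  await new Promise((resolve, reject) => {
    stream.on("data", (doc) => {
      batch.push(doc);
      if (batch.length >= BATCH_SIZE) {
        stream.pause(); // back-pressure: stop reading while we flush
        target
          .insertMany(batch)
          .then(() => {
            batch = [];
            stream.resume();
          })
          .catch(reject);
      }
    });

    stream.on("end", () => {
      // flush whatever is left under the threshold
      const tail = batch.length ? target.insertMany(batch) : Promise.resolve();
      tail.then(resolve).catch(reject);
    });

    stream.on("error", reject);
  });

  await client.close();
}

dumpWithStream("mongodb://localhost:27017").catch(console.error);
```

For the file-dump case mentioned in the note, the cursor stream can be piped through a small stringifying Transform into a file write stream, so almost nothing beyond the driver's own buffers stays in memory (again a hedged sketch, reusing the source collection from above and a hypothetical output file name):

```js
const fs = require("fs");
const { Transform } = require("stream");

// Convert object-mode documents to newline-delimited JSON text.
const toNDJSON = new Transform({
  writableObjectMode: true,
  transform(doc, _enc, cb) {
    cb(null, JSON.stringify(doc) + "\n");
  },
});

source.find({}).stream().pipe(toNDJSON).pipe(fs.createWriteStream("dump.ndjson"));
```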
