New Allocator Fails On >=4GB Requests #7120
TBH I didn't imagine anybody would use allocations bigger than 2GB. ;-)
3 years ago I had access to a box with 1TB of RAM and another 1TB of swap space: room for literally hundreds of seqs that might exceed 4GB, all at the same time. ;-) Anyway, https://github.com/mattconte/tlsf/ makes it seem like the changes to support 64 bits are pretty simple (just search for the code conditioned on TLSF_64BIT). The original TLSF paper was written in 2004, when 64-bit pointers were much rarer. For Nim it might be best to just use int64/uint64 unconditionally.
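For reference, the two-level index computation TLSF performs looks roughly like this. This is only a sketch, not the actual code in tlsf.c or alloc.nim; `SLI` and the proc name are made up for illustration. Using Nim's platform-sized `int` (64-bit on 64-bit targets) keeps the shifts well-defined past 4GB:

```nim
import std/bitops

const SLI = 5   # log2 of second-level subdivisions; TLSF commonly uses 4 or 5

# Sketch of a TLSF-style two-level mapping: `fl` is floor(log2(size)),
# `sl` picks one of 2^SLI linear subdivisions within that power-of-two
# range. With 64-bit `int`, sizes >= 2^32 still yield sane indices
# instead of wrapping.
proc mapping(size: int): tuple[fl, sl: int] =
  assert size >= (1 shl SLI)   # real TLSF handles small sizes separately
  let fl = fastLog2(size)
  let sl = (size shr (fl - SLI)) and ((1 shl SLI) - 1)
  (fl, sl)

when isMainModule:
  echo mapping(5_000_000_000)  # a >4GB request: (fl: 32, sl: 5)
```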
Want to give it a shot?
Sure. I should be able to work on this sometime over the next week. Will send a PR when I have something.
FYI:
Personally, I allocate >4GB single seqs all the time. Even my phone has that much RAM.
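For what it's worth, a minimal reproducer along these lines should hit it (the exact size is arbitrary; anything at or above 4GB should do):

```nim
# Any single allocation of >= 4GB should exercise the failing path;
# the exact size here is arbitrary.
var s = newSeq[byte](4_400_000_000)
s[^1] = 1        # touch the last element so the memory is actually used
echo s.len
```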
I believe I tracked down the error to 32-bit arithmetic in alloc.nim:mappingSearch. (I did a system call trace and noticed that the roughly 4GB request translated to an mmap of 2**58 bytes. I then set a breakpoint at that crazy mmap and saw that getBigChunk was the last call with a reasonable size request; reading the code for getBigChunk then implicated mappingSearch.)
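To make the failure mode concrete (this is not the actual alloc.nim arithmetic, just a sketch of what a 32-bit intermediate does to such a request):

```nim
let request = 4_400_000_000'i64            # a bit over 4GB
let low32   = request and 0xFFFF_FFFF'i64  # what a 32-bit intermediate keeps
echo low32                                 # 105032704: the real size is gone
```

Once the size has been mangled like this, the downstream index math can easily turn into a nonsense chunk request, such as the 2**58-byte mmap observed above.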
It looks like this is recent (Dec 7) code and that some 32-bit limitations are documented, so this code failing on large request sizes may be something you are already vaguely aware of, but it seemed worth opening this issue.