update_database fails for very large object ids #465
The deeper problem is that it leads to a very large .map.idx file. This is a serious issue because the .idx files must be loaded into memory at every startup. I will try to figure out a redesign, but for the moment I can only offer a meaningful error message instead of a crash. The message was added in 7dd05ef.
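To put the size concern in concrete terms, here is a back-of-envelope sketch. It assumes, purely for illustration, that a Random File reserves a fixed-size slot for every possible id and that the .idx file holds one entry per fixed block of ids; the slot and block sizes below are made up and do not reflect the actual Overpass on-disk layout.

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // Illustrative assumptions only -- not the real Overpass block layout.
  const std::uint64_t bytes_per_slot = 8;         // assumed slot size in the .map file
  const std::uint64_t ids_per_idx_entry = 65536;  // assumed ids covered by one .idx entry
  const std::uint64_t bytes_per_idx_entry = 16;   // assumed size of one .idx entry

  const std::uint64_t max_ids[] = {
    5500000000ULL,     // roughly the current OSM node id range
    4398046511103ULL,  // 2^42 - 1, the documented limit
  };

  for (std::uint64_t max_id : max_ids) {
    const double gib = double(1ULL << 30);
    double map_gib = double(max_id) * bytes_per_slot / gib;
    double idx_gib = double(max_id) / ids_per_idx_entry * bytes_per_idx_entry / gib;
    std::cout << "max id " << max_id << ": ~" << map_gib << " GiB .map, ~"
              << idx_gib << " GiB .map.idx (kept in memory at startup)\n";
  }
}
```

Even under these rough assumptions, the files scale linearly with the highest id used, which is why ids far beyond the OSM range blow up memory at startup.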
Can you please also document in this issue what the new limits are? Update: Max id for nodes is 4398046511103 (2^42 - 1).
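For reference, a minimal sketch of such a bounds check (not the actual code from 7dd05ef) could look like this, using the documented limit of 2^42 - 1:

```cpp
#include <cstdint>
#include <iostream>
#include <sstream>
#include <stdexcept>

// Documented limit from this issue: node ids above 2^42 - 1 are rejected.
constexpr std::uint64_t MAX_NODE_ID = (1ULL << 42) - 1;  // 4398046511103

// Hypothetical helper: reject oversized ids with a meaningful message
// instead of letting the database update crash later on.
void check_node_id(std::uint64_t id) {
  if (id > MAX_NODE_ID) {
    std::ostringstream msg;
    msg << "Node id " << id << " exceeds the maximum supported id "
        << MAX_NODE_ID << " (2^42 - 1)";
    throw std::runtime_error(msg.str());
  }
}

int main() {
  try {
    check_node_id(4398046511103ULL);    // exactly the limit: accepted
    check_node_id(500000000000000ULL);  // far above the limit: rejected
  } catch (const std::exception& e) {
    std::cerr << e.what() << '\n';
  }
}
```

A check like this, run early in the update, turns an otherwise opaque crash into an actionable message.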
I am also doing the same thing as @mmd-osm for the same purpose. I want to point out that an alternative to supporting huge integers as IDs is to make IDs signed integers rather than unsigned in Overpass. This would also help with the goal of loading custom datasets: custom datasets could use negative ids rather than extremely large ones. Not sure how difficult a change this would be. Supporting huge signed integers would be the best of course 😄
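To illustrate that suggestion (this is a sketch of the idea, not existing Overpass code): switch the id type from unsigned to signed and reserve negative values for custom data, so locally injected objects can never collide with the upward-growing OSM id space.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the proposal: use a signed id type instead of an unsigned one,
// and reserve negative ids for locally injected custom data so they can
// never collide with regular (positive) OSM ids.
using Node_Id = std::int64_t;  // hypothetical type name, not from Overpass

inline bool is_custom_object(Node_Id id) { return id < 0; }

int main() {
  Node_Id osm_node = 5500000000;  // an ordinary OSM-range id
  Node_Id custom_node = -42;      // a custom-dataset id under the proposal
  assert(!is_custom_object(osm_node));
  assert(is_custom_object(custom_node));
}
```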
To be honest I don’t see much reason to go to a 500+ trillion range for custom data, let alone negative ids. We have about 5.5 billion nodes in OSM right now, and anything in the 8-10+ billion range would already suffice for today’s and tomorrow’s data.
As reported here: https://gis.stackexchange.com/questions/273204/what-is-the-max-value-for-node-ids-in-overpass, someone tried to load custom data into Overpass using very large node ids. This currently fails due to large memory consumption for Random Files.
Example: