Hootenanny does not currently attempt to conflate data at the global level. An earlier implementation of Hootenanny supported a map-reduce architecture capable of global conflation for some data types, but it was shelved due to general lack of interest and the maintenance cost of supporting a seldom-used capability. As a result, some of the conflation algorithms are still capable of supporting distributed computing if you want to go that route and revive the capability. However, those capabilities are likely out of date with the rest of the codebase and may be limited in the feature types they can conflate.
On a single machine, Hootenanny generally scales well from the level of a larger city up to that of a smaller country, depending on the density of the data being conflated and the RAM available on the machine. Beyond that, new conflation algorithms and/or parallelization of the existing algorithms would need to be developed to handle very large quantities of map data.
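In the meantime, one coarse-grained workaround for data that exceeds a single run's memory budget is to partition both inputs into tiles and conflate each tile pair as an independent job, accepting that features spanning tile boundaries may conflate imperfectly. The sketch below is a minimal illustration of that pattern, not a supported workflow: it assumes the tiles have already been cropped into per-tile files (the `ref/`, `sec/`, and `out/` directory layout and the tile count are hypothetical) and relies only on the basic `hoot conflate (input1) (input2) (output)` command-line invocation.

```python
# Hedged sketch: conflate pre-cropped tile pairs in parallel on one machine.
# The tile file layout is hypothetical; only the basic `hoot conflate` CLI
# invocation (reference input, secondary input, output) is assumed.
import subprocess
from concurrent.futures import ProcessPoolExecutor

TILES = [f"tile_{i:03d}" for i in range(16)]  # hypothetical tile ids

def conflate_tile(tile: str) -> int:
    """Run one independent conflation job for a single tile pair."""
    cmd = [
        "hoot", "conflate",
        f"ref/{tile}.osm",  # reference data cropped to this tile (assumed layout)
        f"sec/{tile}.osm",  # secondary data cropped to this tile (assumed layout)
        f"out/{tile}.osm",  # conflated output for this tile
    ]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    # Each tile runs as a separate process, so peak memory stays bounded
    # per job rather than growing with the total size of the dataset.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = dict(zip(TILES, pool.map(conflate_tile, TILES)))
    failed = [t for t, rc in results.items() if rc != 0]
    if failed:
        print("failed tiles:", ", ".join(failed))
```

The per-tile outputs would still need to be merged, and any resolution of duplicate or split features along tile edges is left out of this sketch.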