That's not the same, John.
What you're seeing happens because the map ceases to exist (i.e. its destructor is invoked). Before that point, "map.clear()" does not guarantee that memory is released - it only destroys the elements.
It is possible to get close by calling something like this after the map.clear() calls:
Code:
void cleanup(std::unordered_map<int, int> &map)
{
std::unordered_map<int, int> temp(map.begin(), map.end(), 0); // 0 is the requested minimum bucket count (a lower bound)
std::swap(temp, map);
// temp now holds the original contents of map (and its bucket array), which are released by temp's destructor as the function returns
}
The catch is that the value for the bucket count (the third argument for the constructor when creating temp) is a lower bound - an implementation is still permitted to reserve more.
The other trade-off is that, if the map is not empty when this is called, it creates a complete copy of the existing elements before the old storage is released.
As I said previously, if there is actually a requirement to clear and refill the map repeatedly while keeping memory use down, different strategies (or different container types) are needed. However, I doubt this is a genuine requirement - more likely it is just something golfinguy has decided is a "nice to have", i.e. another form of premature optimisation.