I don't like the notion of a global collection of objects at all.

I remember working on a smallish project in which a team leader created a global object named (unimaginatively) Globals, which held pointers to some 500 other objects of about 40 distinct class types. That program became a maintenance nightmare for a couple of reasons.

First, because (he insisted) everything that needed to be shared belonged in "Globals", anything anyone needed to share was placed in it. It therefore became quite challenging to understand what many class functions did, since they worked by manipulating several global objects. Eventually it reached the point where a small change to one function could break other, seemingly unrelated, functions because of a chain of interactions through functions that accessed the same global objects.
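To make the hidden coupling concrete, here is a minimal sketch of the pattern (the class names are my invention, not from the actual project): a function whose signature takes nothing, yet which reads and writes several global objects behind the caller's back.

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-ins for the real classes; the names are invented.
struct Config { bool use_cache = true; };
struct Cache  { void invalidate(const std::string& key) { std::cout << "invalidate " << key << '\n'; } };
struct Logger { void info(const std::string& msg) { std::cout << msg << '\n'; } };

// The "Globals" grab-bag: pointers to everything, reachable from anywhere.
struct Globals {
    Config* config = nullptr;
    Cache*  cache  = nullptr;
    Logger* logger = nullptr;
    // ...in the real project, roughly 500 pointers of about 40 types
};
Globals globals;

// The empty parameter list hides the fact that this function reads one
// global object and mutates two others.
void refresh_prices() {
    if (globals.config->use_cache) globals.cache->invalidate("prices");
    globals.logger->info("prices refreshed");
}

int main() {
    Config cfg; Cache cache; Logger log;
    globals.config = &cfg; globals.cache = &cache; globals.logger = &log;
    refresh_prices();   // touches three globals the call site can't see
}
```

Change what `refresh_prices` does to the cache and every other function that reads that cache is affected, with nothing at any call site to warn you.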

Second, even though this was a small project, the use of "Globals" meant there was one header file that #include'd a considerable number of others. That header was touched almost every time a new object was added to the "Globals" pool, and it was itself #include'd by most source files. As a result, even small changes in functionality could trigger very long rebuilds - it is a pretty mean feat for a small project of about 200K lines to require most of a working day to compile.
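The build problem follows directly from that structure. The header looked something like this (a hypothetical reconstruction, not the real file):

```cpp
// globals.h : one header that drags in a header for every pooled type.
// Adding a type to the pool edits this file, which dirties every
// translation unit that includes it.
#include "config.h"
#include "logger.h"
#include "database.h"
#include "cache.h"
// ...dozens more

struct Globals {
    Config*   config;
    Logger*   logger;
    Database* db;
    Cache*    cache;
    // ...
};
extern Globals globals;
```

Since the struct only holds pointers, forward declarations (`struct Config;` and so on) would have removed most of those #includes and broken the rebuild cascade, but that is exactly the kind of discipline this design discourages.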

In practice, I suggest minimising the use of global objects (ideally, eliminating them completely). It is better to do a bit more work to ensure that every function receives the information it needs as arguments (extra pointer or reference parameters). That makes the functions more controllable, and also more flexible, since they can act on any object of a given type rather than only the global one. If it becomes necessary to pass several objects together, it is not difficult to create a class/struct to bundle them and manage their interactions, as in the sketch below.
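As a sketch of what I mean (again with invented names), here is the earlier function rewritten so that its dependencies are explicit arguments, bundled into a small struct:

```cpp
#include <iostream>
#include <string>

struct Config { bool use_cache = true; };
struct Cache  { void invalidate(const std::string& key) { std::cout << "invalidate " << key << '\n'; } };
struct Logger { void info(const std::string& msg) { std::cout << msg << '\n'; } };

// One small struct documents exactly which objects the operation couples.
struct PriceContext {
    Config& config;
    Cache&  cache;
    Logger& logger;
};

// The signature now states every dependency, and the function can act on
// any Config/Cache/Logger the caller supplies, so it is easy to test in
// isolation.
void refresh_prices(PriceContext ctx) {
    if (ctx.config.use_cache) ctx.cache.invalidate("prices");
    ctx.logger.info("prices refreshed");
}

int main() {
    Config cfg; Cache cache; Logger log;
    refresh_prices({cfg, cache, log});
}
```

Nothing here requires a global: a test can hand `refresh_prices` a stub `Logger` and a scratch `Cache`, and if the function's needs ever change, the compiler points at every affected caller instead of leaving you to find them at run time.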

The trade-off is a bit more work to design functions that are self-contained, but less risk of code breaking because of interactions that aren't obvious.