I programmed a simulation model that uses lots (~1e8) of small objects stored in std::map. The number of objects increases as the simulation progresses in time. The model naturally uses a lot of RAM, but according to my calculations it uses much more than expected, at least double. I tried to profile the program both with massif and by overloading the allocation functions. Both profiles give me the expected memory usage, but as said, resident memory usage (Linux) is much higher than the total allocated memory reported by the profiles.
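
For reference, the allocation-function overloading is essentially along these lines; this is a simplified sketch (scalar new/delete only, with an illustrative counter `g_allocated`), not my exact instrumentation code:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

// Running total of bytes currently allocated through operator new
// (scalar form only; the real instrumentation also covers new[]).
static std::atomic<std::size_t> g_allocated{0};

// Reserve a maximally aligned header in front of each block to remember its size.
constexpr std::size_t kHeader = alignof(std::max_align_t);

void* operator new(std::size_t size)
{
    void* p = std::malloc(size + kHeader);
    if (!p) throw std::bad_alloc();
    *static_cast<std::size_t*>(p) = size;
    g_allocated.fetch_add(size, std::memory_order_relaxed);
    return static_cast<char*>(p) + kHeader;
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    char* base = static_cast<char*>(p) - kHeader;
    g_allocated.fetch_sub(*reinterpret_cast<std::size_t*>(base),
                          std::memory_order_relaxed);
    std::free(base);
}
```

The total reported by this counter agrees with massif, which is why I trust the "allocated" figure.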

- I checked for memory leaks, but according to the profiles and memcheck there are none.
- I considered memory fragmentation and tried boost's pool allocator (a sketch of how it is plugged into the maps is below this list), which helps somewhat, but not a lot (about 10%). I also tried large pools (next_size equivalent to ~10 MB), which increases initial memory usage a lot, but final memory usage is somewhat lower.
- I built an 'early garbage' collector which periodically erases from the maps a lot of objects that are, strictly speaking, no longer needed (see the pruning sketch after this list). Although this should reduce the number of objects by a factor of 5 or so, it reduces resident memory usage only by about 15%, and it increases computation time tremendously (10 times or so without the pool allocator; with the pool allocator it seems to be better).
- I tried doubling the size of the small objects by adding dummy variables, and that does increase memory usage, but only by about 20%.
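
For clarity, this is roughly how the pool allocator is attached to the maps. The `Key`/`State` types here are illustrative placeholders, not my real model types:

```cpp
#include <map>
#include <boost/pool/pool_alloc.hpp>

// Hypothetical stand-ins for the real key/value types in my model.
using Key = long;
struct State { double a, b; int flag; };

// fast_pool_allocator is the boost pool allocator intended for node-based
// containers such as std::map; its NextSize template parameter controls how
// many chunks the underlying singleton pool requests per growth step.
using NodeAlloc = boost::fast_pool_allocator<std::pair<const Key, State>>;
using StateMap  = std::map<Key, State, std::less<Key>, NodeAlloc>;

int main()
{
    StateMap m;
    for (long i = 0; i < 1000; ++i)
        m.emplace(i, State{0.0, 0.0, 0});
    // Note: memory held by the singleton pool is not handed back to the OS
    // when elements are erased; boost::singleton_pool::release_memory() /
    // purge_memory() would be needed for that.
    return 0;
}
```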
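
And this is the shape of the 'early garbage' pass, again with a placeholder `still_needed` criterion instead of the model's real condition:

```cpp
#include <map>

struct State { double a, b; bool still_needed; };

// Periodically drop entries that are no longer needed. Advancing with the
// iterator returned by erase() keeps the loop valid; std::map does not
// invalidate iterators to the remaining elements.
void prune(std::map<long, State>& m)
{
    for (auto it = m.begin(); it != m.end(); )
    {
        if (!it->second.still_needed)
            it = m.erase(it);
        else
            ++it;
    }
}
```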

Right now I am a bit puzzled about where the large overhead comes from. Any suggestions as to what it could be would be welcome. In particular, I would like to know how I could make it visible in the memory profiles.