Re: memory usage profiling
Hello jouke.postma
It's not surprising that your program takes up more memory than just your small objects. A map is typically implemented as a tree structure, so there will be overhead for the key and pointers to nodes. If your objects are really small, the overhead may be more than the objects themselves.
Storing 1 million structs of 2 integers in a map uses 126,000K on my system. Storing the same in a vector uses only 60,000K.
If you overloaded the allocation functions for your small objects, the map overhead would not show up there. Nor would it show as a memory leak.
Re: memory usage profiling
bertandernie,
Thanks for the reply, I considered that, but I think there is more happening. Let me give an example of a short run which I profiled. (My final runs run out of memory on a system with 64 GB ram)
Resident memory use 340 MB
Estimated number of objects 0.8e6
Average size of small objects is 32 bytes (about 3 doubles and a 64-bit pointer).
Map overhead is 3 integers plus a double used for sorting, making 64 bytes per stored small object.
Estimated memory usage: 64 × 0.8e6 ≈ 51 MB.
Sum of memory allocated with new[] and new ≈ 35 MB, not far from the estimated 51 MB, but what is the other ~300 MB used for?
Similarly, a longer run which uses ~800 MB resident memory shows only 150 MB total in the profiling graph.
Anyway, my biggest question is why, when I erase a large part of the objects in the maps, memory usage only goes down by about 10%. That made me think that the data in the maps is only a small fraction of the memory usage, but I can't work out what else is using the memory.
Re: memory usage profiling
Well, I wouldn't expect resident memory usage to go down quickly just because you stopped using some of it. Just because the program isn't using the memory right now doesn't mean the OS knows it won't need it again, so it may keep a fair amount reserved for the program.
Re: memory usage profiling
Sorry I cannot help you much more.
On Linux, with a simple test program, the VM size and RSS do not go down after the map is populated and then cleared. But on subsequent population/clearing, the memory usage does not go up: the program keeps the memory reserved and reuses it (as Lindley suggested). On Linux, the memory usage is about what you'd expect from your calculations of object size and map overhead, so without seeing your source code I don't know what the extra 300 MB your program is using is for.
On Windows, the memory is released immediately.
Re: memory usage profiling
Thank you both for your answers.
Lindley, I was aware of that, but when I delete a bunch of objects, subsequent generation of new objects shouldn't push resident memory up, or at least not by as much. But it does. As far as I know, only severe memory fragmentation could cause that, or else what I deleted was really small compared to what was allocated, meaning that something other than the data is using a lot of RAM.
I did a bit more testing. If I add a large array of zeros to one of the objects, resident memory use goes up by about 17 MB, and the memory profiler shows the same 17 MB increase. When I implement this array as a std::map, memory usage goes up by 137 MB but the profiler shows an increase of only 103 MB. So there seems to be a 'hidden' cost to using map, though percentage-wise it is much smaller than the 'hidden' cost in the model. I did vary the map size, and the hidden cost is about 30% of the total size.
I guess I am still puzzled.
Re: memory usage profiling
I found an article that explains at least some of the extra resident memory use: it is about how malloc and free work, and it is a very good read. It has two parts:
http://www.cs.utk.edu/~plank/plank/c...1/lecture.html
http://www.cs.utk.edu/~plank/plank/c...2/lecture.html