  1. #1
    Join Date
    Mar 2006
    Posts
    151

    How best to prevent memory exhaustion in time-critical dynamic programming?

    Hello:

    I am trying to solve a knapsack problem in a time-critical application. I've found I can shave a couple dozen seconds off the runtime for some inputs with dynamic programming. The problem is, the amount of memory consumed by the "memoized" (<http://en.wikipedia.org/wiki/Memoization>) data is so large in many cases that the performance begins to drop on some inputs and the program crashes due to memory exhaustion (or maybe fragmentation???) on others. I'm quite sure I'm not leaking memory.

    I have modified my code so that I can put a hard limit on the amount of data the code tries to memoize, but that doesn't work consistently: a hard-coded ceiling is too large for some inputs, while for others it could safely be larger (due to several other factors which affect memory consumption and involve the way the data is processed). Plus, I want to take advantage of the additional RAM on computers that have it.

    In effect, I need a function which can tell me the amount of readily available memory. I've searched the web and have found little other than multitudinous cautions that such an idea is a bad one under modern operating systems. So my first question is, if that's true, then

    1.) what should I be doing instead?

    Assuming this is the least of the evils for my situation, I'm trying to use GlobalMemoryStatusEx. I take the smaller of the ullAvailPhys and ullAvailVirtual members, remove about 1GB for use by other processes, and then multiply the result by a factor less than one for a safety margin. My second question is,

    2.) am I even looking at the correct fields in the MEMORYSTATUSEX structure? (The last time I deeply understood how OSes and processors handled memory was on a computer with a Motorola 68000 processor, so I'm having to learn how modern OSes assign/page/virtualize/commit/whatever memory.)
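
    For concreteness, here is a minimal sketch of the calculation I described above ( the 1 GB reserve and the 0.8 safety factor are just my current guesses, not recommendations ):

    Code:
    #include <windows.h>
    
    // rough budget, in bytes, for the memoized data
    DWORDLONG EstimateMemoBudget()
    {
    	MEMORYSTATUSEX ms = { 0 };
    	ms.dwLength = sizeof(ms);
    	if( !GlobalMemoryStatusEx( &ms ) )
    		return 0;  // call failed: fall back to a small default elsewhere
    
    	// take the tighter of free physical RAM and free address space
    	DWORDLONG avail = ms.ullAvailPhys < ms.ullAvailVirtual
    	                ? ms.ullAvailPhys : ms.ullAvailVirtual;
    
    	const DWORDLONG reserve = 1024ull * 1024 * 1024;  // leave ~1 GB for other processes
    	if( avail <= reserve )
    		return 0;
    
    	return (DWORDLONG)( ( avail - reserve ) * 0.8 );  // safety margin
    }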

    Thank you,
    GeoRanger

  2. #2
    Join Date
    Oct 2008
    Posts
    1,456

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by GeoRanger View Post
    the performance begins to drop on some inputs and the program crashes due to memory exhaustion (or maybe fragmentation???) on others.
    Can't you catch the memory allocation error and the performance decrease at runtime, and then shrink the memory consumed by the algorithm accordingly? I ask because you alluded to time-critical code that nonetheless runs on a timescale of seconds, so such auto-profiling/error-handling code should not impact the running time significantly anyway ...
    Last edited by superbonzo; February 13th, 2013 at 04:33 AM.

  3. #3
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    A tricky question actually.

    The simple solution is... "use as little memory as you can while still getting a reasonable performance gain."

    Allocating more than a couple hundred MB on Win32 for a long period of time is asking for all kinds of performance problems, not only for your app itself but for the entire system. Win64 gives you some more leeway, but only if the system actually has the extra RAM and no other app is hogging large amounts of it.



    If you really want to gain extra performance by potentially using all available physical RAM, then what you really want is to match your memory consumption to the system so that you make maximum use of memory without the important parts of your application being forced out of RAM and into the swap file.

    This means your app's memory use will need to adjust dynamically, both up and down, depending on the situation. If you only ever acquire more and never release any, your app will still end up swapping excessively as soon as you run extra programs or other programs start using more memory.

    It is possible, but it's not easy... I forget off the top of my head exactly how this works in Windows (which API functions are involved). But the basic idea is that Windows will "randomly" notify you when extra memory becomes available that you can make use of, and when your app should really free up some memory to give other applications (and the system) the memory they need for the system to multitask all those apps reasonably. This requires your app to respond 'rapidly' to both memory-increase and memory-decrease requests to make things run smoothly, which typically involves quite a bit of extra work to handle these "on the fly" changes in memory.

    If you think your app can handle it, I can see if I can find the info on how those APIs work.

    Just don't get deluded into thinking you can start up, determine a "fixed" amount of memory to allocate for the entire run of your app, and have it always, guaranteed, be in RAM. You can do this by VirtualLock()'ing memory, but if you do that for excessively large amounts, you're effectively stifling every other application for the duration of the lock. VirtualLock for anything more than a few hundred KB to a couple of MB is asking for trouble.

  4. #4
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by superbonzo View Post
    Can't you catch the memory allocation error and the performance decrease at runtime, and then shrink the memory consumed by the algorithm accordingly? I ask because you alluded to time-critical code that nonetheless runs on a timescale of seconds, so such auto-profiling/error-handling code should not impact the running time significantly anyway ...
    This doesn't actually work: because you can allocate considerably more than the system has RAM for, all the extra memory gets moved into the swap file, and it isn't until the swap file is full that memory allocations are actually denied. By that time, however, the system has already slowed to a crawl from excessive paging in and out of the paging file.

    You really want to allocate memory only up to the point where paging does NOT end up slowing you down more than the extra memory gains you.
    But that amount of memory changes while Windows is running, as other apps run, use, and release memory as well.

  5. #5
    Join Date
    Oct 2008
    Posts
    1,456

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Still, it would solve half of the problem ( the crash, provided it's caused by a bad allocation, of course ). Yes, a slowdown clogging the entire system could be perceived by the user as a "crash" as well; nonetheless, it's not infrequent to see memory-intensive programs hang the system for seconds/minutes and successfully recover from a low-memory condition, so I'd say it also depends on the typical user's expectations ...

    Quote Originally Posted by OReubens View Post
    You really want to allocate memory only up to the point where paging does NOT end up slowing you down more than the extra memory gains you.
    Yes, and for this reason I also suggested some sort of self-profiling code capable of estimating the slowdown and adapting dynamically. Of course, nothing 100% ( or even 90% ) effective, given the nature of these OSes ... moreover, I had the impression that the program slowdown was also due to other factors ( like non-local memory access due to excessive memoization ), so some deeper memory/timing analysis would be needed anyway. Conceptually, something like the sketch below.
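
    Just to fix ideas, here is a self-contained toy of that self-profiling loop ( the pool is a stand-in for the memoized data; the batch size, the 3x threshold and the growth step are arbitrary placeholders ):

    Code:
    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <vector>
    
    int main()
    {
    	std::srand( static_cast<unsigned>( std::time(0) ) );
    
    	std::vector<int> pool( 1u << 20 );  // stand-in for the memoized data
    	double avg = -1.0;
    
    	for( int round = 0; round < 100; ++round )
    	{
    		pool.resize( pool.size() + ( 1u << 18 ) );  // "memoize" more
    
    		std::clock_t t0 = std::clock();
    		long sum = 0;
    		for( int i = 0; i < 1000000; ++i )  // the timed batch of accesses
    		{
    			// combine two rand() calls so the index can exceed RAND_MAX
    			std::size_t idx = ( ( std::size_t( std::rand() ) << 15 ) ^ std::rand() ) % pool.size();
    			sum += pool[ idx ];
    		}
    		double secs = double( std::clock() - t0 ) / CLOCKS_PER_SEC;
    
    		if( avg < 0.0 )
    			avg = secs;                      // first measurement
    		else if( secs > 3.0 * avg )
    			pool.resize( pool.size() / 2 );  // slowdown: paging suspected, shrink
    		else
    			avg = 0.9 * avg + 0.1 * secs;    // rolling average
    
    		// print sum too, so the access loop isn't optimized away
    		std::cout << round << ": " << secs << "s, size " << pool.size() << " ( " << sum << " )" << std::endl;
    	}
    }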

    Quote Originally Posted by OReubens View Post
    this doesn't actually work.
    well, I'd say it depends ... for example, if the memory exhaustion is due to an unfavorable space complexity of the algorithm ( say, an algorithm requesting exponentially increasing allocation sizes for some special input scenarios ), then the last successful allocation could be much smaller than the failing allocation, giving a bad_alloc exception before the OS can try to do anything about it. I had the impression this fits the OP's case ( memoization + dynamic programming -> NP space complexity ? ).

    Moreover, I'm not so convinced that an OS cannot gracefully detect a memory-abusing process ... consider this simple code

    Code:
    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    #include <vector>
    
    int main()
    {
    	std::vector<int> v;
    
    	try
    	{
    		std::srand( static_cast<unsigned>( std::time(0) ) );
    
    		// grow the vector by a random amount until allocation fails
    		while(true)
    		{
    			v.resize( v.size() + std::rand() );
    
    			std::cout << "ok: " << v.size() << std::endl;
    		}
    	}
    	catch( std::bad_alloc const& )
    	{
    		std::cout << "doh: " << v.size() << std::endl;
    	}
    }
    On my system ( optimized build, Vista 32-bit ) the process always fails gracefully at a size consistent with the available physical RAM limit. I don't have access to a 64-bit system at the moment. So whether bad_alloc detection works as a memory-abuse diagnostic still depends on the specific allocation scenario ...

  6. #6
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by superbonzo View Post
    Still, it would solve half of the problem ( the crash, provided it's caused by a bad allocation, of course ).
    Probably, but it's the wrong approach to the problem.

    Yes, a slowdown clogging the entire system could be perceived by the user as a "crash" as well; nonetheless, it's not infrequent to see memory-intensive programs hang the system for seconds/minutes and successfully recover from a low-memory condition, so I'd say it also depends on the typical user's expectations ...
    No, this would be a failure of the programmer of the memory-intensive app to properly tailor it for use under circumstances where memory is low.

    By hogging memory and accessing it in a "random access" pattern, you cause excessive paging. This slows down the system, so not only does your own app run more slowly, but every other app does too; it can get so bad that the entire system becomes unresponsive and unusable for minutes/hours (I've seen this happen to servers that couldn't afford to be rebooted, effectively leaving several dozen employees gnashing their teeth, unproductive, for the duration).


    Moreover, I'm not so convinced that an OS cannot gracefully detect a memory-abusing process ... consider this simple code
    That bit of code proves little. You're only allocating memory, and being Win32 it's automatically limited to a maximum of 2 GB of address space at best; due to fragmentation you're lucky if you can even allocate a single contiguous vector of 1 GB.

    The memory is being allocated... but it's not being accessed, so there's no paging taking place. All this proves is that you cannot allocate a bigger chunk of memory than the address space layout allows. Which is... well... DUH.

    It's not uncommon to see people (even programmers, and even well seasoned programmers) totally confused/deluded about the subtle differences between address space, physical ram, virtual memory, paging. And... This doesn't even have to be a bad thing, for 99% of people and programmers the distinction isn't of much concern, until you make a program that actually needs very large amounts of memory. Then suddenly all sorts of nasties turn up.
    Last edited by OReubens; February 14th, 2013 at 06:45 AM.

  7. #7
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by OReubens View Post
    If you think your app can handle it, I can see if I can find the info on how those APIs work.
    And following up on my own post...
    CreateMemoryResourceNotification
    QueryMemoryResourceNotification
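
    A minimal sketch of how these can be used ( polling shown for brevity; the handles are also waitable via WaitForSingleObject, and the loop bodies are placeholders for your own grow/shrink logic ):

    Code:
    #include <windows.h>
    
    int main()
    {
    	// ask the system for "low" and "high" memory-resource notifications
    	HANDLE low  = CreateMemoryResourceNotification( LowMemoryResourceNotification );
    	HANDLE high = CreateMemoryResourceNotification( HighMemoryResourceNotification );
    	if( !low || !high )
    		return 1;
    
    	for( ;; )
    	{
    		BOOL state = FALSE;
    		if( QueryMemoryResourceNotification( low, &state ) && state )
    		{
    			// system is low on memory: release part of the memoized data here
    		}
    		else if( QueryMemoryResourceNotification( high, &state ) && state )
    		{
    			// plenty of free memory: the memoized data may grow
    		}
    		Sleep( 1000 );
    	}
    	// unreachable in this sketch: CloseHandle( low ); CloseHandle( high );
    }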

  8. #8
    Join Date
    Oct 2008
    Posts
    1,456

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by OReubens View Post
    you're only allocating memory [...] but it's not being accessed
    False: std::vector::resize value-initializes the new elements, which means it sets those ints to zero; moreover, elements are copied on reallocation. So it's not true that only allocations are performed.

    Anyway, just to avoid any special treatment of zero-initializing/copying a block of integers, here is the same code with non-trivial default and copy constructors and a "random" access pattern over the vector data:

    Code:
    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    #include <vector>
    
    // non-trivial default and copy constructors, so elements really are
    // constructed and copied on every reallocation
    struct A
    {
    	A() : a( std::rand() ) {}
    	A( A const& other ) : a( other.a + std::rand() ) {}
    	int a;
    };
    
    // pick a pseudo-random index into v ( assumes v is non-empty )
    std::size_t sampleid( std::vector<A> const& v )
    {
    	return std::size_t( ( double( std::rand() ) / RAND_MAX ) * v.size() ) % v.size();
    }
    
    int main()
    {
    	int accesses = 100;
    	std::vector<A> v;
    
    	try
    	{
    		std::srand( static_cast<unsigned>( std::time(0) ) );
    
    		while(true)
    		{
    			v.resize( v.size() + std::rand() + 1 );  // +1 guards against rand() == 0
    
    			for( int c = 0; c < accesses; ++c )
    			{
    				v[ sampleid(v) ].a = v[ sampleid(v) ].a + std::rand();
    			}
    
    			std::cout << "ok: " << v.size() << "[" << v.capacity() << "]" << std::endl;
    		}
    	}
    	catch( std::bad_alloc const& )
    	{
    		std::cout << "doh: " << v.size() << "[" << v.capacity() << "]" << std::endl;
    	}
    }
    The results are qualitatively the same as before: the process fails gracefully and the system remains responsive for different <accesses> settings. Moreover, in the following

    Code:
    int main()
    {
    	int accesses = 100;
    	std::vector<A> v;
    	bool adapt = false;
    
    	std::srand( static_cast<unsigned>( std::time(0) ) );
    
    	while(true)
    	{
    		try
    		{
    			if( adapt )
    			{
    				// after a failed allocation, halve the vector and
    				// return the spare capacity to the allocator
    				adapt = false;
    				v.resize( v.size() / 2 );
    				v.shrink_to_fit();  // C++11
    
    				std::cout << "shrinking: " << v.size() << "[" << v.capacity() << "]" << std::endl;
    			}
    
    			while(true)
    			{
    				v.resize( v.size() + std::rand() + 1 );  // +1 guards against rand() == 0
    
    				for( int c = 0; c < accesses; ++c )
    				{
    					v[ sampleid(v) ].a = v[ sampleid(v) ].a + std::rand();
    				}
    
    				std::cout << "ok: " << v.size() << "[" << v.capacity() << "]" << std::endl;
    			}
    		}
    		catch( std::bad_alloc const& )
    		{
    			std::cout << "doh: " << v.size() << "[" << v.capacity() << "]" << std::endl;
    
    			adapt = true;
    		}
    	}
    }
    the vector is continuously resized, accessed, and shrunk on failure; again my system maintains reasonable responsiveness ( I'm writing this in Firefox right now while the code above is still running ... ).

    Whether this is due to the address space or whatever is not relevant; this is sufficient to prove that your statement that such code doesn't work in general is false ( well, I haven't tested the code for correctness, so ... ). I'm not saying that this is a universal solution, and I do understand why it cannot be. Nonetheless, saying that this cannot be a solution for some specific problem is just wrong.

    Quote Originally Posted by OReubens View Post
    this would be a failure of the programmer of the memory-intensive app to properly tailor it for use under circumstances where memory is low.
    I disagree, in general ...

    Quote Originally Posted by OReubens View Post
    I've seen this happen to servers that couldn't afford to be rebooted, effectively leaving several dozen employees gnashing their teeth, unproductive, for the duration
    which exactly confirms my claim that it depends on user expectations; it's not an absolute truth. When I run a numerically intensive calculation in, say, Mathematica, I expect it could hang my system for some reasonable time ( for example, if I ask it to compute a possibly combinatorially exploding problem ). So if the Mathematica programmers employ some OS-specific trick that makes me wait seconds/minutes before stopping the computation, I'm totally happy with that. If the system blocks for a longer time once in a while, I can accept that too. I asked for a solution to a possibly intractable computation; the Mathematica programmers are aware of this possibility, and their duty is just to ensure that the program will stop the computation in a reasonable time with reasonable reliability, and that's it.

    It's one thing to claim that you cannot control a resource reliably ( like time or memory on a general-purpose OS ), but it's another thing to claim that you cannot control such a resource in a way that makes your typical user happy.
    Last edited by superbonzo; February 14th, 2013 at 09:17 AM.

  9. #9
    Join Date
    May 2009
    Posts
    2,413

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by GeoRanger View Post
    1.) what should I be doing instead?
    Rather than reserving as much memory as possible for memoization, you could pre-allocate a reasonable amount (possibly determined by the user) and then make memoization dynamic: when the ceiling is reached, you remove some old memoized value before you add a new one. Some memoized values may be accessed only rarely, or may even have become stale and no longer be in use at all.

    One could think of many removal strategies, but I'm very fond of random mechanisms because they're simple and often work well in practice. When the memoization pool is full, you simply replace an old value with a new value at random. If the removed value is in frequent use, it will quickly re-enter the memoization pool, and it's unlikely to be removed again anytime soon. If it was an infrequent value, it's good that it's gone. In this way the memoization pool tends to refresh itself with the most frequently accessed values; the most frequent values will be present with high probability even though the pool is fairly small. Size isn't everything. (See the sketch below.)
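
    For illustration, a minimal sketch of such a pool ( the Key/Value types, the capacity, and the member names are made up for the example; it assumes std::hash<Key> is available and that capacity is at least 1 ):

    Code:
    #include <cstdlib>
    #include <unordered_map>
    #include <vector>
    
    // fixed-capacity memoization pool with random eviction
    template< typename Key, typename Value >
    class MemoPool
    {
    public:
    	explicit MemoPool( std::size_t capacity ) : capacity_( capacity ) {}
    
    	// returns 0 if the value was never memoized ( or has been evicted )
    	const Value* find( const Key& k ) const
    	{
    		typename Map::const_iterator it = map_.find( k );
    		return it == map_.end() ? 0 : &it->second;
    	}
    
    	void insert( const Key& k, const Value& v )
    	{
    		if( map_.count( k ) )
    			return;  // already memoized
    
    		if( map_.size() >= capacity_ )
    		{
    			// pool is full: evict a victim chosen uniformly at random
    			std::size_t idx = std::rand() % keys_.size();
    			map_.erase( keys_[idx] );
    			keys_[idx] = k;  // reuse the victim's slot for the new key
    		}
    		else
    		{
    			keys_.push_back( k );
    		}
    		map_[k] = v;
    	}
    
    private:
    	typedef std::unordered_map< Key, Value > Map;
    	std::size_t capacity_;
    	Map map_;
    	std::vector< Key > keys_;  // parallel key list for O(1) random choice
    };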
    Last edited by nuzzle; February 15th, 2013 at 03:35 AM.

  10. #10
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by superbonzo View Post
    the vector is continuously resized, accessed, and shrunk on failure; again my system maintains reasonable responsiveness ( I'm writing this in Firefox right now while the code above is still running ... ).
    And your response and changed code pretty much prove my point at the bottom of #6.

    You're missing the whole point: you're only looking at address space within your own app and failing to grasp where the problem lies.

    I never claimed you couldn't make an app that allocates X MB of RAM and then makes use of it with little noticeable effect. You're testing this app under the wrong conditions, and thus aren't noticing a lot of the problem.

    Let me guess: you either have a Win64 machine with 8+ GB of RAM, or a Win32 machine with 4 GB of RAM on which you were only running that app, possibly VS and Firefox. How does any of that even remotely get you into a "low memory situation"? You're exhausting your app's address space way before you run out of virtual memory.
    You're also catching the allocation exception, which you should, but which a lot of apps don't. And that includes "big name" applications: they simply assume you run the program on a system that isn't low on memory, and if it is, well then, it'll just "crash". Whether that's "graceful" or "less than graceful", it's because the allocation failed; it doesn't give you the out-of-virtual-memory condition. And even in that case you should get a "decent" message from the system, assuming you have the patience to wait for it. Chances are, a lot of apps will be doing all sorts of weird things well before you actually run out of virtual memory.



    Whether this is due to the address space or whatever is not relevant; this is sufficient to prove that your statement that such code doesn't work in general is false ( well, I haven't tested the code for correctness, so ... ). I'm not saying that this is a universal solution, and I do understand why it cannot be. Nonetheless, saying that this cannot be a solution for some specific problem is just wrong.
    And I never claimed that.
    I said "doing something like this for prolonged periods of time will cause all kinds of performance issues".

    If you can afford to have your PC running for this duration with ONLY this one app, then sure, fine.
    But if you do it in programs people use on a daily basis and expect to run nicely alongside a handful of other programs, then it's generally a bad idea to just allocate that amount of memory.

    The OP's post seems to indicate the need for a general solution. If he just needed something that worked on his own PC, it shouldn't have been that hard to find a reasonable amount of memory he can allocate without problems. If you need something that works well on a Windows machine with, say, 1.5 GB of RAM and works better on a machine with 4 GB, then your memory usage will need to be tuned; but just checking total memory doesn't help, because what's running on the PC has a big impact on what you can allocate and use before your "increase performance by memoizing more" plan falls to pieces because of excessive paging.

    It's one thing to claim that you cannot control a resource reliably ( like time or memory on a general-purpose OS ), but it's another thing to claim that you cannot control such a resource in a way that makes your typical user happy.
    And I never claimed you can't control it reliably; I even pointed to the API functions that allow you to do so.

    And FYI: if some app does a heavy computation, I fully accept that the app itself will need some time, and possibly be unresponsive. I don't accept it when the computation needs of one app impact the entire system and prevent me from doing even basic OS tasks. The whole purpose of multitasking OSes is that one app should never hog system resources to such a degree that it stifles every other app (and even the OS itself) in working properly. If you make that kind of app... you're doing it wrong. Acceptable for something you write for yourself, maybe, but not really acceptable for an app you release upon thousands of users out there.
    Last edited by OReubens; February 15th, 2013 at 09:53 AM.

  11. #11
    Join Date
    Mar 2006
    Posts
    151

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    I didn't mean to start a war!

    Actually, I'm finding the posts from all three of you potentially useful. Unfortunately I've not had the opportunity to try any of these ideas yet, for a couple of reasons, one of which is that I found a bug in the way I've been reusing the memoized data. The bug causes some of the data to be reused for parts of the knapsack where it should theoretically violate physical constraints by causing the boxes within the knapsack to overlap each other. I think that might be why I was getting the significant performance increase.

    The bug slipped in because I thought I had found a clever way to code the idea that if a box fits in the knapsack within some region, then surely that same box will also fit within a superset of the original region and within certain subsets of it. I can fix the bug, but it seems the additional computation needed to make sure the original region is really a subset of the new one is just as expensive as simply recomputing whether the box can fit in the new region.

    The odd thing, however, is that the buggy version doesn't seem to produce any erroneous outputs, even on a good-sized set of regression test data. I might actually be able to use it as a heuristic, as long as I add a check later in the process to filter out any buggy outputs.

    From OReubens:
    It's not uncommon to see people (even programmers, and even well seasoned programmers) totally confused/deluded about the subtle differences between address space, physical ram, virtual memory, paging. And... This doesn't even have to be a bad thing, for 99% of people and programmers the distinction isn't of much concern, until you make a program that actually needs very large amounts of memory. Then suddenly all sorts of nasties turn up.
    That seems an accurate description of my situation!

    At any rate, the thing is back in the experimental realm. It'll probably be a few weeks before I'll be able to get back to this particular part of it.

    superbonzo - I can't rely on catching the exception from a bad allocation, for a couple of reasons, one of them being, as OReubens stated, that the system seems to slow to a crawl before the exception appears. Nonetheless, I'm intrigued by your idea of keeping a rolling average and switching between methods accordingly. That may help make the code even faster with some of the more trivial cases of the variable-sized computation for determining whether a box fits.

    OReubens - Thanks for the links to the API calls. It'll take a little work to be able to deallocate before the computation ends, but it's certainly workable and I think that's what I'm ultimately looking for.

    nuzzle - Thanks for the random-deletion idea.

    Sincerely,
    GeoRanger

  12. #12
    Join Date
    May 2009
    Posts
    2,413

    Re: How best to prevent memory exhaustion in time-critical dynamic programming?

    Quote Originally Posted by GeoRanger View Post
    nuzzle - Thanks for the random-deletion idea.
    Well, the important idea is to make memoization dynamic. Random removal was just an example of how that can be accomplished.

    Regardless of how big you make the memoization pool, it will still be too small for many cases. And when the limit is reached, simply refusing to add new values may be detrimental to efficiency, because the existing values may become less and less asked for. In short, the pool starts rusting the moment it stops growing. It's much better to have a dynamic pool that keeps adding new values after the ceiling is reached. And as a positive side effect, it doesn't have to be that big to be efficient.

    Apart from that, I would also check the implementation to make sure calculations are made in the proper order. Order can have a huge impact on efficiency (especially if recursion is employed): depth-first, breadth-first, top-down or bottom-up matter a lot, and picking the right one may reduce the need for memoization in the first place, as the sketch below illustrates. I would also check out alternative solution strategies and algorithms one more time. Good luck.
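
    For instance, in the classic 0/1 formulation a bottom-up pass needs only a single O(capacity) row and no memoization pool at all. The OP's box-packing variant is more involved, but the principle carries over ( the weights, values and capacity below are made up for the example ):

    Code:
    #include <algorithm>
    #include <iostream>
    #include <vector>
    
    // bottom-up 0/1 knapsack: best[c] = best value achievable with capacity c
    int knapsack( const std::vector<int>& w, const std::vector<int>& val, int cap )
    {
    	std::vector<int> best( cap + 1, 0 );
    	for( std::size_t i = 0; i < w.size(); ++i )
    		for( int c = cap; c >= w[i]; --c )  // descending: each item used at most once
    			best[c] = std::max( best[c], best[c - w[i]] + val[i] );
    	return best[cap];
    }
    
    int main()
    {
    	int warr[] = { 3, 4, 5 };
    	int varr[] = { 4, 5, 6 };
    	std::vector<int> w( warr, warr + 3 ), val( varr, varr + 3 );
    	std::cout << knapsack( w, val, 8 ) << std::endl;  // prints 10 ( items 0 and 2 )
    }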
    Last edited by nuzzle; February 19th, 2013 at 02:59 AM.
