
Thread: Horrible performance when destroying pointer arrays

  1. #1
    Join Date
    Aug 2008
    Posts
    902

    Horrible performance when destroying pointer arrays

    I'm getting horrible performance when trying to destroy large arrays of pointers, and I'm not sure if this is normal or not.

    simple example:

    Code:
typedef std::shared_ptr<int> IntPtr;

std::vector<IntPtr> vec;
vec.resize(500000);
int i = 0;
std::generate(vec.begin(), vec.end(), [&]() { return IntPtr(new int(i++)); });
vec.clear();
    This takes about half a minute or so to destroy all the objects.

    Code:
int **points = new int*[500000];

for (int i = 0; i < 500000; i++)
{
    points[i] = new int(i);
}

for (int i = 0; i < 500000; i++)
{
    delete points[i];
}
delete[] points;
This is about twice as fast, but still slow considering it's only dealing with 500k ints.

    My compiler is MSVC 2010, release mode, iterator debugging is disabled.

  2. #2
    Join Date
    Jun 2009
    Location
    France
    Posts
    2,513

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by Chris_F View Post
This is about twice as fast, but still slow considering it's only dealing with 500k ints.
It's not the ints that are slow; it's the act of making 500k individual dynamic allocations (and 500k individual releases).

I don't know your design, but I would recommend either using a custom allocator, pre-allocating all your ints in a pool, or simply not using dynamically allocated ints.

EDIT: shared_ptr is typically twice as slow on construction (and on the "last" destruction), because it is typically implemented as a pointer to a shared control block, which itself holds a pointer to the int. This means creating a new shared_ptr needs 2 allocations. Copy construction of a shared_ptr is barely slower than a normal pointer, and dereferencing a shared_ptr is exactly the same speed.
    Last edited by monarch_dodra; December 2nd, 2010 at 03:51 AM.
    Is your question related to IO?
    Read this C++ FAQ article at parashift by Marshall Cline. In particular points 1-6.
    It will explain how to correctly deal with IO, how to validate input, and why you shouldn't count on "while(!in.eof())". And it always makes for excellent reading.

  3. #3
    Join Date
    Oct 2008
    Posts
    1,456

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by monarch_dodra View Post
    EDIT: shared_ptr is typically twice as slow on construction ... .
Note that you can avoid the double allocation by using Boost's make_shared function, which allocates enough contiguous space for both the object and the counter at once ... (and there's also allocate_shared to further customize the process ...)
    Last edited by superbonzo; December 2nd, 2010 at 03:59 AM.

  4. #4
    Join Date
    Jun 2009
    Location
    France
    Posts
    2,513

    Re: Horrible performance when destroying pointer arrays

EDIT: Never mind; on my machine, with MinGW, it takes about 2 seconds for 5,000,000 allocations + deletions. My guess is some obscure MSVC switch that does behind-the-scenes checking.

  5. #5
    Join Date
    Jul 2005
    Location
    Netherlands
    Posts
    2,042

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by Chris_F View Post
    My compiler is MSVC 2010, release mode, iterator debugging is disabled.
    I'm not sure about VS10, but previous versions did not enable all optimizations in the release build of a newly created project. You'll have to go through the project settings (just the C/C++ part, and maybe the Linker part) and turn on all optimizations. If you're unsure, post the .vcproj file.
    Cheers, D Drmmr

    Please put [code][/code] tags around your code to preserve indentation and make it more readable.

As long as man ascribes to himself what is merely a possibility, he will not work for the attainment of it. - P. D. Ouspensky

  6. #6
    Join Date
    Jul 2002
    Location
    Portsmouth. United Kingdom
    Posts
    2,727

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by D_Drmmr View Post
    I'm not sure about VS10, but previous versions did not enable all optimizations in the release build of a newly created project.
I'm pretty sure they changed the defaults in VS10 after so many people pointed out how dumb it was to cripple release builds.
    "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
    Richard P. Feynman

  7. #7
    Join Date
    Aug 2008
    Posts
    902

    Re: Horrible performance when destroying pointer arrays

The problem wasn't the allocation. >95% of the CPU time was spent destroying the pointers. Once it hits vec.clear() or the delete loop in the 2nd example, I can watch the memory usage slowly decrease in Task Manager, and it is extremely slow releasing the memory.

    I've disabled everything I can think of that might hurt it and turned on as many optimizations as I can find, and it hasn't changed in the slightest bit.
    Last edited by Chris_F; December 2nd, 2010 at 04:33 AM.

  8. #8
    Join Date
    Aug 2008
    Posts
    902

    Re: Horrible performance when destroying pointer arrays

    OK, maybe I'm just stupid, but I always test my applications inside the MSVC IDE by hitting run. I've always assumed that if it was set to Release it would run without debugging, just as fast as the native executable, but it doesn't. If I run this code in the IDE, it takes over 30 seconds when set to release, and even longer when set to debug.

If I run the native debug exe, it takes maybe half a second tops, and if I run the native release exe, it's virtually instantaneous. What gives?

  9. #9
    Join Date
    Oct 2008
    Posts
    1,456

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by Chris_F View Post
    OK, maybe I'm just stupid, but I always test my applications inside the MSVC IDE by hitting run. I've always assumed that if it was set to Release it would run without debugging, just as fast as the native executable, but it doesn't. If I run this code in the IDE, it takes over 30 seconds when set to release, and even longer when set to debug.

If I run the native debug exe, it takes maybe half a second tops, and if I run the native release exe, it's virtually instantaneous. What gives?
If I remember correctly, when you run the app (even in release) with the debugger attached, the CRT debug heap manager is used. Setting _NO_DEBUG_HEAP to 1 should solve the problem (in the project property sheet it should be somewhere under Debugging > Environment ...)

  10. #10
    Join Date
    Aug 2008
    Posts
    902

    Re: Horrible performance when destroying pointer arrays

    Sorry, #define _NO_DEBUG_HEAP 1 had no effect.

  11. #11
    Join Date
    Oct 2008
    Posts
    1,456

    Re: Horrible performance when destroying pointer arrays

    Quote Originally Posted by Chris_F View Post
    Sorry, #define _NO_DEBUG_HEAP 1 had no effect.
_NO_DEBUG_HEAP is an environment variable, not a preprocessor macro. You can set it by editing the debugger environment in the corresponding project property sheet.
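For the record, here is how that setting is applied (a sketch of the standard VS locations, not taken from the thread):

```shell
:: Windows cmd: disable the CRT debug heap for any process started
:: under the Visual Studio debugger from this console session.
set _NO_DEBUG_HEAP=1

:: Or, inside the IDE, go to:
::   Project Properties > Debugging > Environment
:: and add the line:
::   _NO_DEBUG_HEAP=1
```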
