  1. #241
    Join Date
    Apr 2001
    Location
    San Diego CA
    Posts
    378
    Originally posted by Adam Gritt
    Try putting this in the watch window:
    iter->second

    or if you were using VS6 you would put:

    iter._Ptr->_Value
    or
    iter._Ptr->_Value.second to see the actual value.
    Thank you very much ..

    it._Ptr->_Myval.nRow did the trick.
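    For readers who hit the same thing, here is a minimal, hypothetical sketch of where those watch expressions come from. The Cell/nRow names are illustrative, and the _Ptr/_Value/_Myval spellings are debugger-visible implementation internals taken from the posts above; they differ between STL versions and are not part of the standard interface.
    Code:
    #include <map>
    
    struct Cell { int nRow; int nCol; };  // illustrative value type
    
    int main()
    {
      std::map<int, Cell> cells;
      cells[1].nRow = 42;
    
      std::map<int, Cell>::iterator iter = cells.begin();
    
      // Break here and try these in the watch window:
      //   iter->second              (newer debuggers)
      //   iter._Ptr->_Value.second  (VC6-era Dinkumware internals)
      //   iter._Ptr->_Myval.second  (later Dinkumware internals)
      // For a container of plain structs, name the member directly
      // instead of .second (hence the .nRow above).
      return 0;
    }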

  2. #242
    Join Date
    Aug 2002
    Location
    Madrid
    Posts
    4,588
    Ok, here are the results for VC.NET 2003 with STLPort 4.6 (the current version)
    Code:
    std::vector with reserve         : 0.016376
    std::vector without reserve      : 0.0394023
    Custom allocator with std::list  : 0.0794902
    Standard allocator with std::list: 0.152983
    C-style array                    : 0.0124926
    On the same machine, also with VC.NET 2003, the Dinkumware STL gives me:
    Code:
    std::vector with reserve         : 0.0172944
    std::vector without reserve      : 0.0637003
    Custom allocator with std::list  : 0.0883701
    Standard allocator with std::list: 52.548
    C-style array                    : 0.0123338
    Get this small utility to do basic syntax highlighting in vBulletin forums (like Codeguru) easily.
    Supports C++ and VB out of the box, but can be configured for other languages.

  3. #243
    Join Date
    Apr 1999
    Posts
    27,449
    Originally posted by Yves M
    Ok, here are the results for VC.NET 2003 with STLPort 4.6 (the current version)
    Code:
    std::vector with reserve         : 0.016376
    std::vector without reserve      : 0.0394023
    Custom allocator with std::list  : 0.0794902
    Standard allocator with std::list: 0.152983
    C-style array                    : 0.0124926
    On the same machine, also with VC.NET 2003, the Dinkumware STL gives me:
    Code:
    std::vector with reserve         : 0.0172944
    std::vector without reserve      : 0.0637003
    Custom allocator with std::list  : 0.0883701
    Standard allocator with std::list: 52.548
    C-style array                    : 0.0123338
    Wow. Maybe you should drop P.J. Plauger at Dinkumware an e-mail with your findings.

    Regards,

    Paul McKenzie

  4. #244
    Join Date
    Aug 2002
    Location
    Madrid
    Posts
    4,588
    I just did; let's see what he replies. I guess it's a matter of allocators. Cf. the notes on allocators on SGI's pages:
    • alloc
      The default allocator. It is thread-safe, and usually has the best performance characteristics.
    • pthread_alloc
      A thread-safe allocator that uses a different memory pool for each thread; you can only use pthread_alloc if your operating system provides pthreads. Pthread_alloc is usually faster than alloc, especially on multiprocessor systems. It can, however, cause resource fragmentation: memory deallocated in one thread is not available for use by other threads.
    • single_client_alloc
      A fast but thread-unsafe allocator. In programs that only have one thread, this allocator might be faster than alloc.
    • malloc_alloc
      An allocator that simply uses the standard library function malloc. It is thread-safe but slow; the main reason why you might sometimes want to use it is to get more useful information from bounds-checking or leak-detection tools while you are debugging.


    And yes, when I change STLPort's default allocator to use malloc then I notice things slowing down as well:
    Code:
    Default STLPort alloc	malloc_alloc	Dinkumware
    0.0027			0.1522		0.1410
    0.0054			0.1884		0.2193
    0.0085			0.5601		0.5145
    0.0123			1.1903		1.1231
    0.0165			1.9820		1.9543
    0.0208			3.0880		3.1050
    0.0254			4.4912		4.6766
    0.0331			8.4089		8.7176
    0.0368			16.9519		16.5752
    For this program:
    Code:
    // CPrecisionTimer is the poster's own high-resolution timer class
    // (its definition is not included in the post).
    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <set>
    #include <vector>
    using namespace std;
    
    std::vector<int>  g_data;
    
    void set_test(size_t max)
    {
      std::set<int> s;
      for (size_t i = 0; i < max; ++i) {
        s.insert(g_data[i]);
      }
    }
    
    int main()
    {
      for (int i = 0; i < 100*1000; ++i) {
        g_data.push_back(i);
      }
      srand(171);
      random_shuffle(g_data.begin(), g_data.end());
    
      CPrecisionTimer ct;
      for (int i = 1; i < 10; ++i) {
        ct.Start();
        for (int o = 0; o < 10; ++o) {
          set_test(i * 1000);
        }
        cout << ct.Stop() << endl;
      }
      return 0;
    }
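    Since the gap above appears to come down to the allocator, here is a minimal sketch of how an allocation strategy is swapped in as the container's last template parameter. MallocAllocator is a made-up name that simply forwards to malloc/free, roughly in the spirit of SGI's malloc_alloc, and it is written against the later, simpler C++11 allocator requirements rather than the full C++03 boilerplate.
    Code:
    #include <cstdlib>
    #include <functional>
    #include <new>
    #include <set>
    
    // Hypothetical malloc-backed allocator for illustration only.
    template <typename T>
    struct MallocAllocator
    {
      typedef T value_type;
    
      MallocAllocator() {}
      template <typename U> MallocAllocator(const MallocAllocator<U>&) {}
    
      T* allocate(std::size_t n)
      {
        if (void* p = std::malloc(n * sizeof(T)))
          return static_cast<T*>(p);
        throw std::bad_alloc();
      }
      void deallocate(T* p, std::size_t) { std::free(p); }
    };
    
    template <typename T, typename U>
    bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }
    
    int main()
    {
      // Same container, different allocation strategy -- only the template
      // argument changes, not the benchmark code itself.
      std::set<int, std::less<int>, MallocAllocator<int> > s;
      for (int i = 0; i < 1000; ++i)
        s.insert(i);
      return 0;
    }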
    Last edited by Yves M; November 26th, 2003 at 08:56 PM.
    Get this small utility to do basic syntax highlighting in vBulletin forums (like Codeguru) easily.
    Supports C++ and VB out of the box, but can be configured for other languages.

  5. #245
    Join Date
    Sep 2002
    Posts
    1,747

    the industrial revolution -- part i

    I think it would be very enlightening for students and professionals just starting out and needing to make decisions such as whether or not to use the standard containers in their code to read this thread all the way through. I've done it at points where I've decided to post to ensure I'm keeping up with both the current and past spirit of this thread, and I just did it again and was struck by some observations that I feel the targets I mentioned should also be aware of. It's been pointed out several times by several posters, but...

    ...particular points of those who advocate not using the standard containers... keep shifting.

    There is this shell game being played. Somehow, the topic of particular bugs gets dropped, and yet somehow the debuggability is still called into question. Design issues are mentioned at several points, yet particulars get clouded over by a fog of "ugly" and "difficult". When particulars do come up, they are responded to quickly, and despite the fact that so far most of the issues have been ones of understanding how to use the standard containers (not deriving but extending orthogonally), what their guarantees are (algorithmic complexity issues), and the language features they use (templates and namespaces), well...

    ...there is never an acknowledgement of those issues being resolved.

    I don't like flames (much), but souldog's earlier comment about obsolescence keeps bouncing around in my head. I'm not sure how to word this without sounding like I'm attacking someone, but let me try. I have seen similar opposition to using the standard library containers in person. And I have come to believe that much of the opposition is a psychology, a tiredness, a "I don't want to have to learn yet another thing." When I got out of college I was so exhausted with learning that I didn't pick up a technical book for close to 3 months. I know that feeling innately, it is a constant struggle for obsessive people like myself, and I see the exact same symptoms in responses in this thread.

    Specific cases against the standard containers are avoided or incorrect (the algorithmic assumption as a case in point) because they are not yet known.

    And the whole point is to keep the task of learning away.

    And sometimes a defensiveness creeps in, and there is this explosion of sound and fury, which is supposed to be one of those societal signals to back off and let the issue die out.

    And I don't want it to die like that.

    I believe that a lot of those who come to codeguru to read the forums want solid answers to the questions they have in their professional life.

    I believe that there are actual, scientific metrics on which programming decisions should be made. We don't suggest that a person requesting information on how to make a driver use Visual Basic. We don't suggest to a project which carries a strong requirement for stability and correctness (life support systems, for example) that the team use a language or language features they are not already intimate with. Sure, some of the metrics can have poor precision, and possibly there might be a problem choosing between several courses of action due to different suggestions by different metrics. But even that type of case needs to be established, not passed over as an initial assumption.

    Some very good metrics I have seen in this thread included component construction and time. Those are great metrics because they are the basis for the industrial revolution. When John Hall built the first rifle factory to use interchangeable parts, the entire point was a realisation of the dreams of Thomas Jefferson and Eli Whitney to get some degree of automation possible in construction to increase production capabilities by decreasing the time for construction of the various phases.

    Software design has been undergoing its own industrial revolution. And just as in the physical construction sector, software domains have many concepts of components and various dynamics for connecting and interchanging them.

    Look at the evolution of MFC's containers. Originally, they were built in the zeitgeist of object-oriented design. So there were some basic object containers for holding pointers to any class derived from CObject. This allowed for a useful type dynamic where objects of various different concrete types could coexist inside a container and only the algorithmic specification of the container type was exposed. This was the polymorphic method of generic programming. However, it had to be stated outside of the code that such containers were most useful when the actual classes were all derived from a base further down the hierarchy, since one often uses such containers in code to loop through a more refined interface, and code could easily become littered with casts as one tries to expose the interfaces. Some of that gets cleaned up if you derive from the containers, and MFC even provided some utility derivations for things like their string utility class, but even at this stage you begin to see the coupling involved in acquiring a solution, and the inability of the framework to provide a solution for those who do not desire the CObject interface with its emphasis on application core features like serialisation.

    Then we get language support for templates and eventually the MFC engineers start building in support for newer containers that take advantage of the template mechanism. However, to maintain backward support, their core framework is stuck with object relationships built in the object-oriented days. They desire these new containers to have support for the framework built in, so they are placed into the framework hierarchy. And they do not demonstrate some of the newer componentisation techniques made available by templates (I suspect because the engineers were unfamiliar with the techniques), and they also fail to even expose older componentisation techniques like iterators. So they restrict their use cases.

    But then you have David Musser and Alex Stepanov working over in Ada on this idea for a generics library. With the help of people like Meng Lee, there was all of this intellectual discussion about library design and abstraction in c++ going on over at HP. With heavy use of the iterator abstraction, factoring out allocation policies, and generic type access, the STL was built concentrating solely on the benefits of abstracting out the ideas of object relationships and algorithms applied to them. These are the fields that complexity theory and combinatorics had already well established a mathematical framework for, and there were widely known efficient algorithms that were useful all over the place. Using templates also had the benefit of allowing the compiler to use the static instantiation to make type-specific optimisations and much of the code necessarily inlined itself due to template visibility issues. The components were clear and customisable. One could add algorithms for entire classes of iterator abstractions. One could root a hierarchy on a container of polymorphic pointers however one desired.

    Even though MFC's newer template containers shared some of the benefits that the STL provided, they were still monoliths rooted in a framework hierarchy without an iterator abstraction for generic algorithmics. STL's containers were free objects and algorithms, customisable and transportable. And they're faster, about as fast as a container can get, since the modern optimisers use all of the type information available to make specific access calls almost exactly the same as if there were no objects and only naked calls to a free structure (for instance, vector read access is very often found in assembly as a simple pointer add offset and dereference load). In fact, the only suggestions I have ever seen for improving the efficiency of the standard containers and algorithms (which I reiterate are built on mathematical notions well studied for efficiency) have been the proposal for new language semantics for move constructors to keep out the unnecessary copy or two that goes on in the language construction mechanism.
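    To make the componentisation point concrete, here is a small sketch of the decoupling being described: a standard algorithm written against the iterator abstraction works unchanged on a plain C array and on a standard container, since ordinary array pointers qualify as random access iterators.
    Code:
    #include <algorithm>
    #include <iostream>
    #include <vector>
    
    int main()
    {
      int raw[] = { 3, 1, 4, 1, 5 };
      std::vector<int> v(raw, raw + 5);
    
      // std::sort is a free algorithm parameterised on iterators, so the
      // same component serves both the array and the vector.
      std::sort(raw, raw + 5);
      std::sort(v.begin(), v.end());
    
      // vector read access compiles down to base + offset, as noted above.
      std::cout << raw[2] << " " << v[2] << std::endl;
      return 0;
    }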

    On these metrics, STL happens to have the edge by quite a bit. It is much more useful as transposable components, it is faster both in milliseconds and actual labor-hours for immediate algorithm reuse. And now, these containers are standardised and an integrated part of the legal language we program in. These are all of the metrics that point to the STL being the current leader in this particular niche of the industrial revolution. Other competitors in this field of components, like the Booch c++ Components, also fail the challenge for similar reasons.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  6. #246
    Join Date
    Sep 2002
    Posts
    1,747

    the industrial revolution -- part ii

    I'm not saying that something better can't come along. There are certainly abstractions on object relationships inside a container and some other mechanisms of adaptability which we have discussed in this thread could lead to even better abstraction and reuse potential in the standard containers. Hand rolling such a library is not worth it when you are an actual part of a work force with job task lists, iteration time schedules, and the need to get things done now so your company can grow. It is the type of task that one takes on when that is the goal, as a library designer or secret dabbler in the black arts. It's exactly like elsewhere in today's modern, industrialised society. There are widget manufacturers and factories which consume widgets to build whatchamacallits. If you are on the assembly line of the whatchamacallit factory, pointing out that you can make a better widget will not get you noticed by the foreman unless you can actually produce designs and plans which you can show bring benefit to The Vendor. Consider me, now, a foreman (as well as a worker on the assembly line). I'm not saying the better widget is not possible. I'm only saying that you better have designs and plans available to point out where and how the widget can be improved before you run around telling all the workers on the assembly line to stop what they are doing and waste time building their own better widgets first. If everyone did that, we'd be back to pre-industrialised times with one-pass assembly.

    I'm also not saying that building your own containers is not a great learning experience. Obviously it is, and most data structures classes emphasise hands-on learning (take a look at some of the questions asked on these boards). Skills need to improve with time. But there also needs to be a point where some of that skill learning gets focussed on learning the useful existing tools for our times...

    Now, I just wanted to finish up by pointing to a problem I see in something said earlier. Just to make it perfectly clear and explicitly stated in this thread, the discussion about system level memory being done in terms of list linking was pure smokescreen (another of the symptoms I mentioned earlier). As someone who has actually programmed a few drivers, I understand quite clearly how the system makes the transformation between virtual and physical addresses. It is all done through the descriptor tables the processor already is being pointed to internally, they are page lists (so we are looking at access only in increments of the page size), they are quickly indexable, and the transformation occurs for all virtual addresses mentioned in a machine instruction. What all this means is that the process of transformation is a processor-level process that influences not a single bit the order of complexity calculations for an algorithm. Pointer arithmetic is still a simple addition to the processor; it only occurs prior to transformation if an access is required. There is nothing at all about the process of address transformation that suddenly makes the vector guarantees the same as a linked list, or anything freaky like that. I know some of that was already answered, but I wanted to emphasise it for any of the future readers of this thread.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  7. #247
    Join Date
    Apr 2003
    Posts
    893

    Re: the industrial revolution -- part ii

    First, thanks so much for your words, they are useful to me. I have learnt a lot from them. True...


    Originally posted by galathaea
    .......
    Now, I just wanted to finish up by pointing to a problem I see in something said earlier. Just to make it perfectly clear and explicitly stated in this thread, the discussion about system level memory being done in terms of list linking was pure smokescreen (another of the symptoms I mentioned earlier). As someone who has actually programmed a few drivers, I understand quite clearly how the system makes the transformation between virtual and physical addresses. It is all done through the descriptor tables the processor already is being pointed to internally, they are page lists (so we are looking at access only in increments of the page size), they are quickly indexable, and the transformation occurs for all virtual addresses mentioned in a machine instruction. What all this means is that the process of transformation is a processor-level process that influences not a single bit the order of complexity calculations for an algorithm. Pointer arithmetic is still a simple addition to the processor; it only occurs prior to transformation if an access is required. There is nothing at all about the process of address transformation that suddenly makes the vector guarantees the same as a linked list, or anything freaky like that. I know some of that was already answered, but I wanted to emphasise it for any of the future readers of this thread.
    The underlined sentences in the quote above are what I like most. That's the way it is...I guess.

  8. #248
    Join Date
    Nov 2002
    Location
    Los Angeles, California
    Posts
    3,863
    Originally posted by mclark
    Yes, I know about the reserve and swap trick but this doesn't help. (I can provide an example if you really want) but the main point is it is impossible (as far as I can tell) in STL to use std::vector<T> in such a way that memory use is linear to the number of elements (capacity is never greater than high watermark size).
    This is just a question stated as an answer.

    int some_stuff[] = {1,2,3,4,5};
    std::vector<int> vect_of_stuff(some_stuff, some_stuff + sizeof(some_stuff)/sizeof(some_stuff[0]));

    The capacity of a vector initialized in this way is equal to its size.

    Is there a reason why this does not refute the above claim?
    Wakeup in the morning and kick the day in the teeth!! Or something like that.

    "i don't want to write leak free code or most efficient code, like others traditional (so called expert) coders do."

  9. #249
    Join Date
    Jul 2002
    Posts
    107
    Hi galathaea,
    Just wanted to comment on a few points.

    And I have come to believe that much of the opposition is a psychology, a tiredness, a "I don't want to have to learn yet another thing."
    I have to disagree with this. Many of the people who have held opposition to STL have shown they understand it and have chosen to not use it. I have seen many propose that Darwin (who started this thread) simply doesn't understand STL and that's why he doesn't like it, even though he has shown several times that he does understand it and has rejected it, with several valid points. There is nothing complex about using STL. If you understand templates you can understand it. I feel that Darwin understands templates, and him not liking STL does not mean he just "doesn't get it" as suggested.

    Design issues are mentioned at several points, yet particulars get clouded over by a fog of "ugly" and "difficult". When particulars do come up, they are responded to quickly, and despite the fact that so far most of the issues have been ones of understanding how to use the standard containers (not deriving but extending orthogonally), what their guarantees are (algorithmic complexity issues), and the language features they use (templates and namespaces), well...
    "ugly" and "difficult" are two major negative design flaws in any program. These are the "particulars". So I'm not too sure what you mean by "clouded over by a fog" since these are two of the major points in that reference. And again, I believe the main players in this thread understand STL, its purpose and its implementation.

    On these metrics, STL happens to have the edge by quite a bit. It is much more useful as transposable components, it is faster both in milliseconds and actual labor-hours for immediate algorithm reuse
    One of the points proposed in this thread has been that, simply, anything that STL can do, you can write yourself. Meaning that STL is just code. It is not built into the construct of the compiler, so there is nothing that it can do that anything else can't do. So STL doesn't have an edge over anything else, since with a little effort, anything else can do the exact same thing. And since STL runs exception handling (in .NET, not VC6), which has an overhead, if you reproduce the STL code yourself, and omit the exception handling, you already have a performance boost over it. You can then systematically remove unnecessary code or optimize the code, and then you have another performance boost over it. I know the first thing said will be that STL exception handling is the best way to handle errors, as it's already been proposed. Unfortunately, it's an overhead and doesn't provide efficient enough handling of errors. Even MS's GetLastError() is more efficient by simply returning a code to deal with the error without the overhead. A simple "Only do work when there's a problem" approach that the MS API has keeps code running fast when there isn't a problem. I personally think that if there is no error, then I don't want anything to take a hit.
    Besides that, however, the design flaws (which I'll agree are much better in .NET now that they have dropped the R&D code), such as being hard to read and difficult to integrate into a complex environment, make STL's "edge" minimal.
    As for the amount of "man-hours" saved, they are only saved if there isn't a problem and only lost if you don't already have your own libraries to use. If you have to debug STL, which in a production environment is done by "debuggers", which usually consist of fresh college grads who start there to gain experience, then the "man-hours" are lost to inexperience in the cycle, which is a negative to STL in production.

    the discussion about system level memory being done in terms of list linking was pure smokescreen (another of the symptoms I mentioned earlier)
    Since you are referring to what I said, I think you misunderstood. But since you feel that I don't understand, let's analyze what you said. Please understand that I wish to only discuss what you said, not "flame" you. I know I come off a little abrupt to some, but understand that it's not my intent.

    the discussion about system level memory being done in terms of list linking was pure smokescreen
    It is all done through the descriptor tables the processor already is being pointed to internally, they are page lists (so we are looking at access only in increments of the page size), they are quickly indexable, and the transformation occurs for all virtual addresses mentioned in a machine instruction.
    It does bother me a little that you call the claim that the system is a linked-list configuration a smokescreen, then use the term page lists to describe it. So let's take a look at the structure that the system uses. Forgive any missing variables, but I'm running off my own memory since I don't have any documentation with me and it's been a while since I've had to deal with it. MSDN has some articles on VM and how the system handles it.

    I will provide this link, which I hope will help illustrate the point; I just remembered it is out there, and I'm sure there are better:
    http://www.windowsitlibrary.com/Content/356/04/1.html
    This talks about how memory works in NT systems; the theory has evolved since NT in 2000 and XP, but the principle is more or less the same.

    Here is a very basic structure.

    struct memoryPage {
        void* data;
        UINT size;
        memoryPage* next;
    };

    Since the "next" points to the next page, this is a linked-list obviously. This is why windows has a linked-list implemented MMU or memory management unit. This is what I was talking about. I understand I confused several people with what I said, but I rush a little too much when I write and that's why I provided some links to help illustrate the point to clarify. But as you mentioned, people start getting defensive, and as a result information gets twisted.

    What this boiled down to as a point is that the way the system handles your memory does not mean that "contiguous" is any faster than a user-managed linked list. It depends on the page size and the efficiency of the code to manage it, which again is demonstrated in my original articles. This is why I ask you to try both. Create a pool of memory and manage it yourself, and then try the system and benchmark the results. Granted, I still need to do the work I promised and post it, but Thanksgiving is in the way; when I've got access to my comp, I will do as promised.

    it only occurs prior to transformation if an access is required. There is nothing at all about the process of address transformation that suddenly makes the vector guarantees the same as a linked list, or anything freaky like that.
    Again, I think we are talking about different things. The system attempts to store data in physical memory in such a way that the memory can be easily swapped between disk and physical, yes. Data is stored in several complex tables which do several things, two of which would be to protect system and application integrity, and provide efficient retrieval. However, you can't have super fast retrieval and safety. Safety will have overhead as there are system memory tables and application memory tables to ensure that you can't interfere or crash anything but your own program. The system must traverse these tables anytime you wish to reference your memory. So if you do the work of the system in your own program, then you could remove the hit the system would normally generate. Since you have programmed drivers, you can see how this is very possible.

    So to sum up, a question of system performance came into play and an explanation of how the system worked was necessary to quell the belief that you are guaranteed that any memory you get is in any way "contiguous". I hope this clarifies that point.

  10. #250
    Join Date
    Nov 2003
    Location
    Pasadena, CA
    Posts
    48
    Originally posted by souldog
    This is just a question stated as an answer.

    int some_stuff[] = {1,2,3,4,5};
    std::vector<int> vect_of_stuff(some_stuff, some_stuff + sizeof(some_stuff)/sizeof(some_stuff[0]));

    The capacity of a vector initialized in this way is equal to its size.

    Is there a reason why this does not refute the above claim?

    I do not think this refutes my claim.

    The standard guarantees that the invariant of your construct (calling a vector's range constructor) is that the size is equal to the number of elements within the range (provided the range is valid).

    so size = 5 is guaranteed but capacity = 5 is not.

    This holds for the key-value constructor, copy constructor, resize member function, erase member functions and clear member function. None have a guarantee for capacity.

    std::vector<int> Example(0);

    Example.empty() == true; Example.size() == 0; Example.capacity() == ?

    The only vector member function that I think might have a hard guarantee on reducing capacity (reserve obviously makes one on increasing it) is swap. But swap cannot be performed by iterator - and hence not with a normal array (where you control the size).

    I thankfully concede that this is a 'standards' question. I know of no STL implementation so brain-dead that the constructors discussed can't be used to control size (but I bet they exist out there somewhere right now - and they're probably still 'standard' conforming)!
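    For readers wondering what the "reserve and swap trick" mentioned earlier actually looks like, here is a minimal sketch; note that, as argued above, the trimmed capacity is an implementation tendency, not a standard guarantee.
    Code:
    #include <iostream>
    #include <vector>
    
    int main()
    {
      std::vector<int> v;
      v.reserve(1000);            // capacity is now at least 1000
      for (int i = 0; i < 5; ++i)
        v.push_back(i);
    
      // The "swap trick": copy-construct a temporary from v (which typically
      // allocates close to size() elements) and swap the internals.
      // size() == 5 is guaranteed afterwards; the reduced capacity is merely
      // what most implementations happen to do.
      std::vector<int>(v).swap(v);
    
      std::cout << v.size() << " " << v.capacity() << std::endl;
      return 0;
    }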
    The views expressed are those of the author and do not reflect any position taken by the Government of the United States of America, National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL), or California Institute of Technology (CalTech).

  11. #251
    Join Date
    Nov 2002
    Location
    Los Angeles, California
    Posts
    3,863
    Well I just broke down and bought a copy of the standard.

    What about this part of the standard. I think it is saying that there
    are no reallocations in this case:

    The constructor template <class InputIterator> vector(InputIterator
    first, InputIterator last) makes only N calls to the copy constructor of T (where N is the
    distance between first and last) and no reallocations if iterators first and last are of forward, bidirectional,
    or random access categories. It does at most 2N calls to the copy constructor of T and logN
    reallocations if they are just input iterators, since it is impossible to determine the distance between
    first and last and then do copying.
    Ordinary array pointers provide random access iterators.

    Do I misunderstand this?

    [EDIT]
    Ah, I see, there are no guarantees about the initial allocation.

    OK
    Last edited by souldog; November 30th, 2003 at 05:27 AM.
    Wakeup in the morning and kick the day in the teeth!! Or something like that.

    "i don't want to write leak free code or most efficient code, like others traditional (so called expert) coders do."

  12. #252
    Join Date
    Nov 2003
    Location
    Pasadena, CA
    Posts
    48
    Originally posted by souldog
    Well I just broke down and bought a copy of the standard.

    What about this part of the standard. I think it is saying that there
    are no reallocations in this case:


    Ordinary array pointers provide random access iterators.

    Do I misunderstand this?

    [EDIT]
    Ah, I see, there are no guarantees about the initial allocation.

    OK

    Yeah. The standard makes guarantees only about the number of actual elements in the vector (calls to T's constructor) but says nothing about the vector's capacity. When a vector reserves capacity (storage space), the space is simply blank (no T's are constructed). When a new T is needed, the placement new operator is called to construct a new T in already-allocated space. Hence, guarantees about how many Ts are constructed do not restrict the space allocated for future Ts.
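    A rough sketch of that separation, with raw allocation and placement new kept as distinct steps (the names here are illustrative, not taken from any particular vector implementation):
    Code:
    #include <new>
    
    struct Widget
    {
      int x;
      explicit Widget(int v) : x(v) {}
    };
    
    int main()
    {
      // Step 1: reserve raw storage for 10 Widgets -- no constructors run,
      // so this corresponds to capacity, not size.
      void* raw = operator new(10 * sizeof(Widget));
      Widget* storage = static_cast<Widget*>(raw);
    
      // Step 2: a "push_back" constructs one Widget in the already-allocated
      // space via placement new -- this is what the element-count guarantees
      // actually count.
      Widget* first = new (storage) Widget(42);
    
      // Step 3: destruction and deallocation are likewise separate steps.
      first->~Widget();
      operator delete(raw);
      return 0;
    }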
    The views expressed are those of the author and do not reflect any position taken by the Government of the United States of America, National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL), or California Institute of Technology (CalTech).

  13. #253
    Join Date
    Nov 2002
    Location
    Los Angeles, California
    Posts
    3,863
    THIS IS A CALL FOR SOME CLEAR STATEMENTS!!!

    I will attempt to make some, but I am no expert. I am just tired
    of hand waving. All of the following are questions:

    1. When the standard guarantees that the contents of
    a vector are stored in a single chunk of contiguous memory
    this refers to a single chunk of contiguous virtual memory?

    2. There is no guarantee made that if the contents of the vector
    require more than one page of memory then when and if these
    pages are brought into physical memory they will be arranged
    contiguously?

    3. The Intel x86 processors use base-offset memory lookup on
    top of the VMM?

    4. The contents of a list are stored in several chunks of memory
    each chunk containing a single element of the list? There are no
    guarantees that these chunks are arranged contiguously in VM?

    5. In order to maintain contiguous layout in virtual memory, the
    entire contents of a vector may be "moved" when elements are
    added or removed, even at the end?

    6. Since there is no guarantee for contiguous memory for a list,
    there is no reallocation needed when elements are added or
    removed. It is only necessary to update the pointers?

    7. Having the vector laid out in contiguous memory has
    advantages with respect to locality of reference?
    In particular if a TLB is used (and all major modern platforms
    use one) then iterating through a vector will not result in
    page faults occurring until a page boundary is crossed? If spatial
    locality of reference has been taken into account then this next
    page may already be cached?

    8. The considerations of question 7 are in no way guaranteed
    to be applied to a list since spatial locality of reference is not
    guaranteed on an element-by-element basis? (A small probe sketch
    for questions 7 and 8 follows this list.)

    9. Since the Intel x86 processor uses a base-offset mechanism
    within a given page frame, it is not the case that iterating
    through a vector is the same as walking along the pointers
    in a linked list?

    10. The standard guarantees that random access into a vector
    is of O(1) complexity? This would not be possible if in fact the
    memory for the vector was implemented as a linked list?

    11. If random access is requested for a vector that is not paged
    into physical memory, then it must be the case that a base offset
    mechanism is being applied to VM to fetch the page? This is
    possible since the vector has a guarantee that it is contiguous in
    virtual memory? So it is not the case that a linked list of pointers
    is walked even in VM?

    12. Obviously custom allocation schemes will affect performance?
    For that matter one could just lock the entire vector or list into
    contiguous physical memory and bypass all page faults?


    MOST IMPORTANT

    13. One could hand roll containers, the language, or even the OS
    for that matter. Is this a revelation?

    So how about some answers to these questions?
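    For questions 7 and 8, here is a small probe one could run; it is written against the modern <chrono> facilities for brevity, the sizes are arbitrary, and the results are machine- and implementation-dependent rather than anything guaranteed.
    Code:
    #include <chrono>
    #include <iostream>
    #include <list>
    #include <numeric>
    #include <vector>
    
    // Time one full sequential traversal of a container.
    template <typename Container>
    double traversal_seconds(const Container& c)
    {
      std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
      volatile long long sum = std::accumulate(c.begin(), c.end(), 0LL);
      (void)sum;  // keep the traversal from being optimised away
      std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
      return std::chrono::duration<double>(t1 - t0).count();
    }
    
    int main()
    {
      const int N = 1000000;
      std::vector<int> v(N, 1);              // contiguous storage
      std::list<int> l(v.begin(), v.end());  // one node per element
    
      std::cout << "vector: " << traversal_seconds(v) << " s\n"
                << "list:   " << traversal_seconds(l) << " s\n";
      return 0;
    }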
    Last edited by souldog; November 30th, 2003 at 11:59 PM.
    Wakeup in the morning and kick the day in the teeth!! Or something like that.

    "i don't want to write leak free code or most efficient code, like others traditional (so called expert) coders do."

  14. #254
    Join Date
    Sep 2002
    Posts
    1,747
    Originally posted by oktronic
    And I have come to believe that much of the opposition is a psychology, a tiredness, a "I don't want to have to learn yet another thing."
    I have to disagree with this. Many of the people who have held opposition to STL have shown they understand it and have chosen to not use it. I have seen many propose that Darwin (who started this thread) simply doesn't understand STL and that's why he doesn't like it, even though he has shown several times that he does understand it and has rejected it, with several valid points. There is nothing complex about using STL. If you understand templates you can understand it. I feel that Darwin understands templates, and him not liking STL does not mean he just "doesn't get it" as suggested.
    Design issues are mentioned at several points, yet particulars get clouded over by a fog of "ugly" and "difficult". When particulars do come up, they are responded to quickly, and despite the fact that so far most of the issues have been ones of understanding how to use the standard containers (not deriving but extending orthogonally), what their guarantees are (algorithmic complexity issues), and the language features they use (templates and namespaces), well...
    "ugly" and "difficult" are two major negative design flaws in any program. These are the "particulars". So I'm not too sure what you mean by "clouded over by a fog" since these are two of the major points in that reference. And again, I believe the main players in this thread understand STL, its purpose and its implementation.
    Actually, I really don't want to bring darwen into this issue, because from the general quality of his posts, I have no doubts that he will eventually have to confront the standard containers head-on for one reason or another, and I think that after the first few test programs, he will finally have those epiphanies on why their structure is the way it is and not think twice about them. Although I am trying my best to keep focused on a general attitude, understand that darwen probably is not the major focus of my words and I don't feel it is appropriate to make that link. There are others displaying the attitude I am describing whom I might focus on more if I were inclined to point fingers...

    Actually, "ugly" is not a particular at all, and "difficult" equates to me to "i have yet to learn this" which falls straight inline with my earlier analysis. Which particular is ugly? I saw a code snippet inserted into this thread. It showed many ways to go about similar tasks, with much overlap in goals, to illustrate that there are several pathways one might go about acheiving the goals depending on what information one had handy. It focused on the standard's dictionary (std::map) and the types associated with it's use (iterators, pairs, ...). But I never saw "see this line.... this is ugly because XYZ". Just as I never saw "see this code... this produces bug XYZ".

    It is important for code to be easily understandable within a development team so that code ownership and information lockout issues don't bite a team as people come and go. But that is not an argument to produce grade school code, either, and the choice is not between "simple" and "ugly". The choice is between clear code that closely follows the design and convoluted code with no apparent design. And the standard containers and algorithms have a very clear design. It's just that they use a language feature that allows for a whole new class of type dynamics, one which allows for a completely different dimension of abstraction and connection outside the "contain or derive" hegemony, and they use the iterator abstraction (I'll leave out the allocator abstraction, because I don't think that that is usually what is referred to by the use of "ugly" or "difficult", since the default usually is enough to get one familiar with the class and allocators only start popping up when particular sources of serialisation are needed).

    Concerning the iterator abstraction, it's something I've had a long relationship with. I remember one of the first intermediate-level books I read was Jeff Alger's c++ for Real Programmers. And although it was written in a confusing time when c++ was in a state of major language flux and the author had obviously not achieved that solid a grasp of many of the newer issues on the horizon, it tackled containers with a passion, and showed a lot of nice little type dynamics with the development of smart pointers into iterators, cursors, etc. And it was quite clear how this abstraction improves algorithmics. But template use was stunted at best, and the separation was not as complete as it could be. When I started to learn about the standard containers (which was afterwards, because I too have had to take that strange path from c to c++ over time instead of properly using the language from the start -- to be fair, though, my early c++ use started prior to the 98 ISO standard), their intent and usage were quite clear to me, and I was quite impressed at their ability to abstract out all of the little pieces.

    Now take a look at the MFC containers. I claim they are "ugly". Why? Because they do not provide an interface for algorithmic abstraction. One could build classes to do that, CList has its POSITION structure, and maybe wrap all of the abstraction up with static concept checks and there you go, an iterator abstraction over all of the MFC containers. Except that I also claim that MFC's containers have bad design and give yet more specific answers to a why by pointing to their rooting in the hierarchy, which requires one to bring inside any translation unit which wants to use it the entire CObject baggage. I personally don't use any GUI frameworks outside the presentation layer because they don't provide any tools which I can't find more useful variants elsewhere. But the use of a library component should not require the insertion of other library components. That is coupling. That is a poorer metric of reuse. That is bad design.

    You have to see that that is my point. Where are the specifics in the other direction? Where is the library that has better-designed containers, with specific points illustrating the metrics used? Because what I am saying is:

    Listen. As a programmer, you will need to get a lot of things done, and the faster and more robustly you can get them done, the more valuable you are. So one of the things that can get you there is picking the right toolkit of libraries. The standard library currently has the best containers available according to my metrics, and might be a good place to start. If other libraries can provide better containers, show me the metrics used and the specific points where they are applied. Don't avoid the standard containers until then, though.
    Also posted by oktronic
    One of the points proposed in this thread has been that, simply, anything that STL can do, you can write yourself. Meaning that STL is just code. It is not built into the construct of the compiler, so there is nothing that it can do that anything else can't do. So STL doesn't have an edge over anything else, since with a little effort, anything else can do the exact same thing. And since STL runs exception handling (in .NET, not VC6), which has an overhead, if you reproduce the STL code yourself, and omit the exception handling, you already have a performance boost over it. You can then systematically remove unnecessary code or optimize the code, and then you have another performance boost over it. I know the first thing said will be that STL exception handling is the best way to handle errors, as it's already been proposed. Unfortunately, it's an overhead and doesn't provide efficient enough handling of errors. Even MS's GetLastError() is more efficient by simply returning a code to deal with the error without the overhead. A simple "Only do work when there's a problem" approach that the MS API has keeps code running fast when there isn't a problem. I personally think that if there is no error, then I don't want anything to take a hit.
    Besides that, however, the design flaws (which I'll agree are much better in .NET now that they have dropped the R&D code), such as being hard to read and difficult to integrate into a complex environment, make STL's "edge" minimal.
    As for the amount of "man-hours" saved, they are only saved if there isn't a problem and only lost if you don't already have your own libraries to use. If you have to debug STL, which in a production environment is done by "debuggers", which usually consist of fresh college grads who start there to gain experience, then the "man-hours" are lost to inexperience in the cycle, which is a negative to STL in production.
    Anything that your computer can do can be done with pen, a paper, a competent mathematician, and enough labor-hours. The problem comes with the fact that there is more room for error and time was unnecessarily wasted when a tool already existed that could accomplish the same task.

    Sure one can rewrite the standard containers. The problem comes with the fact that there is more room for error and time was unnecessarily wasted when a tool already existed that could accomplish the same task.

    I am quite amazed that although I have repeatedly stressed the fact that no bugs in the standard containers have yet been introduced into this discussion, debuggability continues to be used as an excuse to add more risk to a project, increase the likelihood of bugs, and waste labor-hours that translate directly to cold, hard cash.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  15. #255
    Join Date
    Sep 2002
    Posts
    1,747
    And now we get added this idea that exceptions generate slower code, whereas the correct statement would be that the concept of exceptions does not require any machine instruction to be executed along the normal execution path. Only during a throw does stack unwinding need to add any code, and many compilers do things exactly this way. There have been a few compilers in the past, particularly early on after exceptions were introduced, that added overhead to function calls to set up a more organised stack frame or make calls to OS services to help out, but much of that is old news as more flexible stack unwinding algorithms were developed. And even though that more qualified statement could have been stated, it still would be a moot point since the interfaces provide very few functions that actually throw anything other than a bad_alloc which is a possibility during any dynamic allocation in your program that isn't placement or std::nothrow qualified. I suggest turning off exceptions and building a program that tests every allocation for NULL return errors and see what runs faster. Error return testing is often much slower than exceptions on most compilers these days precisely because the error test necessarily introduces code where the exception path need not except in those exceptional circumstances.
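    To spell out that comparison, here is a minimal sketch of the two styles; it is illustrative only, and the actual costs depend on the compiler's unwinding machinery.
    Code:
    #include <cstddef>
    #include <new>
    
    // Error-return style: a branch executes on every successful allocation.
    int* make_buffer_checked(std::size_t n)
    {
      int* p = new (std::nothrow) int[n];
      if (p == 0)          // this test sits on the hot path every time
        return 0;
      return p;
    }
    
    // Exception style: no test on the success path; with table-driven
    // unwinding the cost is paid only if bad_alloc is actually thrown.
    int* make_buffer_throwing(std::size_t n)
    {
      return new int[n];   // throws std::bad_alloc on failure
    }
    
    int main()
    {
      int* a = make_buffer_checked(16);
      int* b = make_buffer_throwing(16);
      delete[] a;
      delete[] b;
      return 0;
    }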
    Also posted by oktronic
    the discussion about system level memory being done in terms of list linking was pure smokescreen (another of the symptoms I mentioned earlier)
    It is all done through the descriptor tables the processor already is being pointed to internally, they are page lists (so we are looking at access only in increments of the page size), they are quickly indexable, and the transformation occurs for all virtual addresses mentioned in a machine instruction.
    It does bother me a little that you call the claim that the system is a linked-list configuration a smokescreen, then use the term page lists to describe it. So let's take a look at the structure that the system uses. Forgive any missing variables, but I'm running off my own memory since I don't have any documentation with me and it's been a while since I've had to deal with it. MSDN has some articles on VM and how the system handles it.

    I will provide this link, which I hope will help illustrate the point; I just remembered it is out there, and I'm sure there are better:
    http://www.windowsitlibrary.com/Content/356/04/1.html
    This talks about how memory works in NT systems; the theory has evolved since NT in 2000 and XP, but the principle is more or less the same.

    Here is a very basic structure.

    struct memoryPage {
        void* data;
        UINT size;
        memoryPage* next;
    };

    Since the "next" points to the next page, this is a linked-list obviously. This is why windows has a linked-list implemented MMU or memory management unit. This is what I was talking about. I understand I confused several people with what I said, but I rush a little too much when I write and that's why I provided some links to help illustrate the point to clarify. But as you mentioned, people start getting defensive, and as a result information gets twisted.

    What this boiled down to as a point is that the way the system handles your memory does not mean that "contiguous" is any faster than a user-managed linked list. It depends on the page size and the efficiency of the code to manage it, which again is demonstrated in my original articles. This is why I ask you to try both. Create a pool of memory and manage it yourself, and then try the system and benchmark the results. Granted, I still need to do the work I promised and post it, but Thanksgiving is in the way; when I've got access to my comp, I will do as promised.
    it only occurs prior to transformation if an access is required. There is nothing at all about the process of address transformation that suddenly makes the vector guarantees the same as a linked list, or anything freaky like that.
    Again, I think we are talking about different things. The system attempts to store data in physical memory in such a way that the memory can be easily swapped between disk and physical, yes. Data is stored in several complex tables which do several things, two of which would be to protect system and application integrity, and provide efficient retrieval. However, you can't have super fast retrieval and safety. Safety will have overhead as there are system memory tables and application memory tables to ensure that you can't interfere or crash anything but your own program. The system must traverse these tables anytime you wish to reference your memory. So if you do the work of the system in your own program, then you could remove the hit the system would normally generate. Since you have programmed drivers, you can see how this is very possible.

    So to sum up, a question of system performance came into play and an explanation of how the system worked was necessary to quell the belief that you are guaranteed that any memory you get is in any way "contiguous". I hope this clarifies that point.
    And I am not trying to flame you either.

    But.

    This is exactly the misunderstanding about the iterator abstraction and guarantees that I keep referring to when I suggest that the attacks are coming from people who do not understand the standard containers. I tried to make it clear in my first post, but let me amplify a little...

    The same mistake was made with the entire misunderstanding between vectors and lists in this thread that prompted this entire diversion. You see, contiguous addressable memory refers to the fact that locations in the container can be referred to with pointer arithmetic without traversal. This allows the container to expose an iterator of the random access variety. This allows for the iterator to use this feature to optimise the mathematical order of complexity of a move operation by jumping around freely in the container. A list iterator, however, is only of the variety of a bidirectional iterator. It can move only in single steps, since the operation of a move is a dereference of a node pointer. Moving a particular distance requires the number of steps to be of the order of the size of the move. Each of these operations occurs at the level of algorithmic complexity of the actual addressable space. Pointer addition instructions are performed by the processor without regard to dereference. On 32-bit x86, we are talking only a simple DWORD + DWORD instruction. It is the dereference that is a list lookup (like a simple pointer move to register), but the order of that lookup is not dependent upon any algorithmic environment variables. A full detailing of the algorithmic complexity would detail that only as part of the orders of operations not part of the algorithm's variability space, and gets lopped off as some constant order of magnitude. An iterator into a vector of 30 elements will have no problem jumping from its first element to its 29th element with one addition and one dereference operation. A list would need 28 dereference operations just to move to the 29th node (and probably a node offset addition and further dereference to expose the data). And that is the data that complexity information will be calculated on, since all that is assumed by a dereference operation is that it takes some amount of time, which everyone agrees with. And noting the system's actual implementation of dereference does not change the fact that the vector has a random access iterator which alters its complexity guarantees from that of a list.
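    The 30-element example above, in code: std::advance dispatches on the iterator category, so the same call costs one pointer addition for the vector and 28 node dereferences for the list.
    Code:
    #include <iterator>
    #include <list>
    #include <vector>
    
    int main()
    {
      std::vector<int> v(30);
      std::list<int> l(30);
    
      // Random access iterator: effectively vi += 28, one addition.
      std::vector<int>::iterator vi = v.begin();
      std::advance(vi, 28);
    
      // Bidirectional iterator: 28 successive node dereferences.
      std::list<int>::iterator li = l.begin();
      std::advance(li, 28);
    
      return 0;
    }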

    In other words, no matter what occurs during the dereference operation, the vector has the advantage in the move. And that is all that was being discussed. I know that when one feels they are being attacked, it is quite natural to jump back with proof that they really are special, even if it means changing the topic to something they know a bit better, but that is all that it was (topic changing). That is why I called it a smokescreen. It did not address the current topic of discussion.

    And just to reiterate what that topic was: it was your misunderstanding, oktronic, of the algorithmic discussion between vectors and lists in the hand-roll example. The example was to illustrate that intelligent people could hand-roll their own vector in no time at all with more flexibility, fewer bugs, etc. What I think that discussion has shown, however, is that intelligent people can mistake the concepts embodied by the standard containers, attempt to hand-roll one type and mistakenly produce a different one with different complexity guarantees, without the abstractions built in, and that this example pretty much underlines the point I and others have been making all along.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar
