  1. #31
    Join Date
    Sep 2002
    Posts
    1,747
    I didn't respond directly to this, because I thought my other comments had underlined the points, but maybe I should mention briefly...
    Originally posted by etaoin
    The point here is not the actual number of code lines (we all know that an entire program can be written as one single physical line), but rather the complexity of a statement. So in that sense, a line like
    Code:
    std::transform(str.begin(), str.end(), myUpper.begin(), toupper);
    is no better (or maybe worse) than
    Code:
    for( i = 0; i < strlen(str); i++) str[i] = toupper(str[i]);
    Well, one of the consequential things is optimisation and a reduced chance of making errors in intent. For example, although likely optimisable away, you write i++ when you likely intend only ++i (you don't use the temporary for anything). strlen may be tucked away in a different translation unit and not inlineable (and really, why call a function of O(N) complexity when you have an O(1) member that returns the same information -- plus don't forget to at least expose c_str to get the thing to compile). Really that's incidental, but it underlines quite clearly that it's much more straightforward to implement optimised algorithms in a library to prevent programmer error.
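    As a minimal sketch of both forms (assuming str is a std::string, and using the global ::toupper to keep the example short):
    Code:
    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string str = "hello, world";

        // Hand-written loop: uses the O(1) size() member rather than strlen(),
        // and ++i since the post-increment temporary is never used.
        for (std::string::size_type i = 0; i < str.size(); ++i)
            str[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(str[i])));

        // The same transformation via the library algorithm, written in place
        // rather than into a second, pre-sized string (fine for plain ASCII data;
        // a cast or small wrapper is needed for full generality).
        std::transform(str.begin(), str.end(), str.begin(), ::toupper);

        std::cout << str << '\n';
        return 0;
    }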

    However, I really do not think that was the main intent of the library architects. Instead, the main benefit is in providing a construction kit of actual function blocks or functors which can be combined, arguments bound, call interfaces adapted, so that one can build larger, more complex, optimum algorithms. You can't pass the for loop around. It cannot be reused except through cut and paste (so it cannot be used in template type deduction optimisations, command implementations, ...), etc.
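    To make the "construction kit" point concrete, here is a small hedged example (std::plus and std::bind2nd are the standard C++98 functor and binder; the rest is illustrative) of adapting a ready-made functor and handing the result to an algorithm:
    Code:
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v;
        for (int i = 1; i <= 5; ++i)
            v.push_back(i);

        // A ready-made functor (std::plus) with one argument bound (std::bind2nd)
        // becomes a new unary operation that any algorithm can consume.
        std::transform(v.begin(), v.end(), v.begin(),
                       std::bind2nd(std::plus<int>(), 10));

        // The adapted functor is a value: it can be stored, passed on, or reused,
        // which a hand-written for loop cannot be.
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
            std::cout << *it << ' ';
        std::cout << '\n';
        return 0;
    }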

    Again the standard library designers have provided a more usable solution here.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  2. #32
    Join Date
    Jan 2002
    Location
    Scaro, UK
    Posts
    5,940
    I have always thought that the STL library architects had superiority as one of their goals.

    Programming isn't difficult - design definitely is, because it is an art form.

    No matter whether you use C++, C# or any other object oriented programming language out there, design of a system is at the core of whether it will be useful in the future.

    It is, to put a fine point on it, the difference between needing 50 programmers or 2 to work on the same project in the same timescale.

    Let me reiterate: for all my usage of templates, they could have been avoided if a general base class for all objects (int, double, etc.) had been implemented. Then you could write virtual functions to cope with everything.
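    A minimal sketch of the kind of uber-object design meant here (the Object, Integer and Double names are purely illustrative, not from any real library):
    Code:
    #include <iostream>
    #include <vector>

    // Hypothetical common base class for "everything".
    class Object
    {
    public:
        virtual ~Object() {}
        virtual void Print(std::ostream& os) const = 0;
    };

    class Integer : public Object
    {
    public:
        explicit Integer(int v) : value(v) {}
        virtual void Print(std::ostream& os) const { os << value; }
    private:
        int value;
    };

    class Double : public Object
    {
    public:
        explicit Double(double v) : value(v) {}
        virtual void Print(std::ostream& os) const { os << value; }
    private:
        double value;
    };

    int main()
    {
        // One heterogeneous collection of Object pointers: virtual dispatch
        // handles each element, no templates needed for the element types.
        std::vector<Object*> items;
        items.push_back(new Integer(42));
        items.push_back(new Double(3.14));

        for (std::vector<Object*>::const_iterator it = items.begin();
             it != items.end(); ++it)
        {
            (*it)->Print(std::cout);
            std::cout << '\n';
            delete *it;
        }
        return 0;
    }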

    At any rate, we're always going to disagree about the right way of doing things.

    MFC may have copied code (or at least algorithms), but for its time it was really good, and it still is because of the readability of its code.

    I'm one of those people who thought 'ah, .NET, hah! What a load of rubbish from Microsoft trying to dominate the market again'.

    However, and it's taken me a year to really discover this, it is very nice.

    Everything IS derived from a single object.

    Some may say it's Microsoft's answer to Java. Fine. But it does give you access to the underlying system if you wish (for instance window handles and HDCs) which to me is a really nice touch.

    I like simplicity of code, and especially how close to English code looks. Otherwise why was SQL (a FOURTH GENERATION LANGUAGE) developed? Again, this is an international standard even though IT IS ALL IN ENGLISH (to answer a few points).

    In the beginning we had people (and I was among them) who could look at raw binary data and read it as machine language.

    We are so far ahead of that, and we still invent APIs which are really hard to read.

    Ah well, again we're getting into the ins and outs of my previous discussion, although with a separate twist on it.

    Anything that makes things easy for me, and is reliable, I really don't have a problem with.

    STL just makes code harder to read and maintain (are you seriously telling me that a function called c_str() is readable?).

    Darwen.

  3. #33
    Join Date
    Jan 2002
    Location
    Scaro, UK
    Posts
    5,940
    This is going to explode again (unfortunately).

    However, I'm going to ignore all additional posts to this thread. I'm getting rather fed up of trying to put my thoughts across just to have them shot down.

    I have better things to do in my spare time (like writing software in .NET - yes I really do like it that much).

    Darwen.

  4. #34
    Join Date
    Sep 2002
    Posts
    1,747
    Originally posted by darwen
    This is going to explode again (unfortunately).

    However, I'm going to ignore all additional posts to this thread. I'm getting rather fed up of trying to put my thoughts across just to have them shot down.
    Well, then it's kinda silly for me to respond, but I'm still left without that reasoning I'm looking for. I don't look to "shoot down". I just want to "talk things out". Try to figure out the reasons behind a particular preference. The choice to use dynamic dispatch on static choices and introduce an uber object would bring us back to a lot of the discussion in the previous thread on efficiency, vtable indirection, etc., and I'd probably want to explore the question of management of the inheritance graph, diverging interfaces and the use of multiple inheritance, etc. That's probably where I'd take things with you, darwen, if the conversation continued, and I don't really see those topics clouded by much anger.

    I mean, to me these are things I seriously think about. A lot. I want to discuss these things with people. That's why I come here. I don't understand the desire to avoid debate topics, it's never been a trait of mine, but I'll respect the decision. It might be a little more difficult to get the last word in that way, though.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  5. #35
    Join Date
    Dec 2003
    Posts
    23
    I notice that this thread has evolved in the meantime, but I would like to reply first to an earlier post:
    Originally posted by galathaea
    c++ is a language of many paradigms.
    Yes, I agree. It has evolved that way, and maybe that's even good news. Maybe I'm just speaking for "my" generation, who came from the procedural world using languages like FORTRAN, Pascal, Modula and C, and then, during the early 1980s, when the OO hype started to creep into the mainstream, slowly converted to an object oriented view of the world. We had to learn that we no longer had to write "algorithms" which operate on "data", but that everything was made up of "objects" which understood "methods" and cared for their internal states themselves. C++ was "sold" as an object-oriented superset of C, and we gladly saw that many of the techniques we were already using in C (data abstraction, encapsulation, keeping data and related methods together and so on) were explicitly supported by that language. It became natural (or chic) that instead of applying an algorithmical function like strupr() to some data, it was better to have a string object and just tell it to MakeUpper(). That view of the world is OK, and it is reflected in "real" OO libraries like MFC (from a "modern" point of view, MFC uses a quite naive approach, but from the original point of view, MFC is "pure" OO. The best thing is the uber-class design we find with CObject).

    Now with the advent of templates, the STL, and the current ANSI C++ standard library, things got reversed (as I said in an earlier post, we have made a step backwards): We are getting back to algorithms. That's fine, as the algorithmical, functional way of looking at the world is just one valid approach, just like the object-oriented way is another. But please don't tell me that we have finally found the holy grail here: Using algorithms is not something fundamentally new, but just the way we used to do things in C and other procedural languages before.

    So when you tell me that I have to use std::transform(<somestuff>, <someotherstuff>, <yetsomeotherstuff>, <someold_c_library_function>) in order to convert a string to upper case, I find myself back in the days of procedural programming when I had to write the same amount of code to describe the same algorithm. std::transform is a trivial skeleton, like an empty for-loop. The flesh comes in with the iterator expressions and the functor I have to provide. So "thinking algorithmically" is just old wine in new bottles.

    Just another thought about what's going on here: Generations of C programmers have used (or misused, as it is now often seen) the C preprocessor to add an additional level of indirection to their coding. They wrote macros which called macros which called macros... and created for themselves their own universe of (questionable) concise metaprogramming techniques to efficiently generate efficient C code and have the preprocessor do a lot of work before the actual C compiler even had a chance to see the resulting code. Naturally, that approach, like anything which is overused, led to several problems. In particular, it was extremely hard to debug code which used macros extensively, as the errors the compiler flagged never corresponded to the line where the error actually occurred: It always pointed to the line where the macro was called, while the actual coding error was some level deeper in some nested macro definition.

    C++ to the rescue: With the advent of the const keyword, so the language gurus now taught, preprocessor constants are no longer needed. Using "const", we can define constants in a type-safe manner. And likewise, preprocessor macros could be replaced by inline functions: Real C++ syntax, type-safety, but the same efficiency as macros. Fine. The new dogma was: The preprocessor is bad, ugly and harmful, everything can be done using const and inline. So far so good.

    Until templates arrived. I don't understand why so many fail to realize it, but the entire template stuff is little more than a fancy reincarnation of the C preprocessor: Templates are expanded before the compiler gets to see the code, x is substituted for y, and an uncontrolled bunch of "real" C++ code is generated behind the scenes. And if I make a typing error, let's say in the code which uses a std::map, the compiler won't point me to that line, but tell me that there is a bug in the standard library. I don't know why I never called Microsoft telling them "Hey, I just tried to compile my project under VC++ 6.0, and the compiler complains about an error in line 25 of your MAP implementation. Please fix it." Maybe I should.
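    A small illustration of that complaint (the Point type here is made up for the example): with its operator< removed, the diagnostics land deep inside the library's <map> / <functional> headers rather than at the offending line.
    Code:
    #include <map>
    #include <string>

    struct Point
    {
        int x, y;
    };

    // std::map<Point, ...> needs a way to order its keys. Comment this
    // operator out and the error messages point into the library's
    // std::less / tree machinery, not at the line that uses the map.
    bool operator<(const Point& a, const Point& b)
    {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    }

    int main()
    {
        std::map<Point, std::string> names;
        Point p = { 1, 2 };
        names[p] = "origin";
        return 0;
    }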
    I've always seen C++ templates, like C preprocessor macros, as friendly enemies. They're very helpful, but will both bite you when you're not extremely careful. In that sense, templates, too, are just old wine in new bottles: The old idea of a second level, a second language above the actual language has come back in new, fancy clothes.

    Again, don't get me wrong here: I'm not saying that templates are evil. That position would be hard to defend these days, just as it would have been hard to state two decades ago that preprocessor macros are bad. I see the power (and the drawbacks) of templates, as I saw the power (and the risks) of preprocessor macros. I just don't understand those people who told us a few years ago that preprocessor macros were a no-no, and now eagerly advocate the use of templates. Likewise (and that gets us back to our original subject), intelligent persons who might have stated a few years ago that we should use "objects" and send them "messages" instead of calling "algorithms" which operate on "data" now say the opposite: The C++ standard library wants us to think "algorithmically" again.

    But yeah, I agree that "a typical c++ programmer" does not have that much flexibility. I think, just statistically, you'd probably have more people giving up before they got that flexible.
    Be careful here, that sounds like a rather elitist view: A "real" programmer (like all of the tough guys we are, aren't we?) doesn't care about such superficial things as the main paradigm of a language. We understand C, C++, LISP, Prolog, FORTH, APL and all the rest, as everything boils down to machine language in the end. High-level languages are for wimps...

    Look, the main idea behind every programming language ever created has been to make the programming of a computer easy for humans. So programming languages always try to allow humans to express ideas the way they are used to thinking about them, and to transform that into something the machine can deal with. FORTH was an excellent language that produced highly efficient machine code, but it required the human programmer to think in terms of a stack with pushing and popping values, instead of using the much more common infix notation for mathematical expressions. Languages like FORTRAN and BASIC were invented because they better allowed an unprepared human to write down algorithmical ideas the way they think.
    I always have to smile when I think back at some of my fellow students who had those expensive HP pocket calculators using RPN. Sure, "real" engineers and mathematicians had no problem in converting every expression to postfix notation, and could even prove that you could save a few keystrokes when entering a formula using postfix instead of the usual algebraic notation. But eventually, pocket calculators with parentheses and algebraic notation got much more popular, because they adapted to the way humans are used to expressing mathematical problems. There is still a small, hard core of RPN supporters, but IMO, they just fail to notice that they form an elitist little club of individuals who still think that it is better to show off with their ability to bend to the way a machine works than to use a much more sophisticated machine that follows the way humans are used to thinking.

  6. #36
    Join Date
    Dec 2003
    Posts
    23
    <sorry, had to split it up>

    And I was thinking particularly of the one line of code
    ...
    As in all functional languages, you can always name a function to collapse a parse tree
    so that one can write
    ...
    (which even saves you a "." from the CString version!!! )


    OK, but as already said earlier: I don't understand why I have to write it myself, why it is not there already. The exact makeUpper function you posted could be in the standard library, along with trimLeft, trimRight and the others mentioned, and this discussion would have never been started.
    As Paul has stated, the standard had to pick somewhere to stop.
    And that's really what my complaint is all about: It stopped either too early or too late. When a class which deserves to be called "string" is in the library, then those basic string manipulation functions should also be there. It's as if they had provided a "complex" class with a "+" operator, but without "-", because they had to stop somewhere. It's just so nonsensically incomplete.
    Because I am very curious about what is so strange about the syntax. I am curious about how you, etaoin, having the experience in many different paradigms, would code an algorithmics library.
    That depends entirely on the language used. In LISP, that would be straightforward. In C, Pascal or MODULA, it would be a set of procedurally coded functions which operate on data, much like the C standard library. In C++, Java or Eiffel, it would be a class library with some kind of uber-class design (maybe something similar to Smalltalk), collections which hold elements of that abstract type, and methods which operate on them (much like MFC). And yes, if someone asked me to design an algorithmics library based solely on templates and making heavy use of template metaprogramming, that new paradigm that templates have introduced into C++, I would probably end up with something like the C++ standard library. But the choice of the pre-packaged methods would be different: According to the real needs of real developers, there would be ready-made methods to convert strings to upper case, for trimming, and so on. The more exotic things, like string::reverse(), would probably be missing (when was the last time you had to reverse a string?), as they can be tossed together using some simple, generic algorithms (actually, the reverse algorithm would operate on any type of controlled sequence, and need not be there specifically for the string data type, as it is now).
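    For illustration, a hedged sketch of what such ready-made "bricks" might look like as free functions over std::string (the makeUpper, trimLeft and trimRight names mirror the MFC-style ones but are purely illustrative, not from any library):
    Code:
    #include <algorithm>
    #include <cctype>
    #include <string>

    // Illustrative helpers only, built on the algorithms and members the
    // standard library already supplies.
    inline std::string makeUpper(std::string s)
    {
        std::transform(s.begin(), s.end(), s.begin(), ::toupper);
        return s;
    }

    inline std::string trimLeft(const std::string& s)
    {
        std::string::size_type pos = s.find_first_not_of(" \t\r\n");
        return pos == std::string::npos ? std::string() : s.substr(pos);
    }

    inline std::string trimRight(const std::string& s)
    {
        std::string::size_type pos = s.find_last_not_of(" \t\r\n");
        return pos == std::string::npos ? std::string() : s.substr(0, pos + 1);
    }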
    The natural language of algorithmics in mathematics is the lambda calculus because it offers a natural metric of complexity in the number of primitive morphisms evaluated. Most current theoretical study in optimisation theory occurs first as theorems in the lambda calculus and then are scope restructured to fit imperative descriptions.
    Exactly. But the original problem was not about mathematics, optimisation theory or language design, but about the result of it: The way the standard library presents itself to the user. The user I mean here is not doing algorithmics research using the lambda calculus, he wants to convert a string to uppercase.
    I am sincerely curious about what granularity a standard library accompanying a language specification should take in its algorithmics. Because the above line is quite an accomplishment for a computer language to make such a transformation so easy.
    It always depends on what you expect. When you buy a kitchen mixer, a digital camera or a car, you expect a functioning device, ready to use, and not a kit with separate parts you have to assemble first. Of course, there are contraptions which are also sold as kits (model airplanes, computers, radios), but that's done mainly in order to allow for cutting down on the production cost and hence the final price, and of course to satisfy the needs of hobbyists who like to put the pieces together. OTOH, when you buy a LEGO set, you expect a collection of basic building blocks which can be put together in many ways, and not a ready-built model. But you want at least the ready-made bricks and other pieces to be there, and not bags of ABS granulate and moulding stamps which require you to mould the bricks first. Or, to go a step further, perhaps a chemistry lab where you have to develop an appropriate synthetic material first...

    You see, it's a question of granularity, as you mentioned. A library for a computer language is much like a LEGO set - you expect a reasonable choice of universal, basic building blocks which can be used to put together a reasonable category of applications in a consistent way. So when I buy a motorized LEGO set, it will certainly contain some of the usual building bricks, and one of those bricks will be a motor brick, ready to be connected to another brick which holds the batteries. I can be sure not to find a reel of wire, some metal parts and a soldering iron (it's a kit, after all...), because the expected granularity of a LEGO set and the level at which you usually put the individual pieces together just makes it seem a natural choice to provide the motor as a ready-made brick.

    In much the same way (here we are back to the theme again), the expected level of granularity for a library which has "string" as a type of its own also demands, IMO, ready-made bricks like "MakeUpper" and "TrimLeft". They are on the same level. Had the library stopped at, let's say, the "container" level, leaving it up to the user to implement a text string as a vector of chars (or maybe a linked list?), then it would be OK to leave the implementation of basic string manipulation methods, using basic algorithms, also up to the user - the library would just be, altogether and consistently, on a lower level of abstraction, building on a finer granularity. But as soon as they put in "string" as a data type, they should also provide the typical string-related algorithms as ready-made bricks.
    If the only concern is the amount of writing, then I have given you the solution and I hope you copy and paste it into your work so that you will even be free of the oppressor dot. I can do similar one-liners for all of the string manipulators you mention later, because they are merely applications of algorithm primitives that the standard has graciously supplied.
    Copy and paste? I thought we agreed that this is not the kind of code reuse we favour. The other possible way is to write those functions myself and copy them to every project in which I intend to use strings - just a higher level of copy-and-paste reuse. Or I could, of course, build a complete string library once and use it in my further projects. Build a string library? Oops, I always thought strings were already part of the C++ standard library...?
    What exactly is so clumsy and difficult about it? Honestly, I really want to understand.
    Look, it seems as if we're somewhat running on the spot. I'm not trying to prove you wrong, galathaea. On the contrary, I share your points of view and highly appreciate nearly everything you said so far. There's nothing wrong about functional decomposition and templates (although, as said above, they're nothing really new), and the general patterns used in modern language design. What you stated so far is perfectly valid when talking about the theory and the implementation of languages and related standard libraries. Writing a makeUpper() the way you showed (using std::transform) is an elegant way of doing it, consistent with the way the library is designed.

    But, and here comes the only point where we seem to disagree, and which happens to be the initial point of my initial post in this (my initial) thread: The actual implementation of it should be part of the library, and not part of my code. Or, in more general words: In the special case of the actual C++ standard library, I'm convinced (referring to everything I said above) that the boundary line between "what's in" and "what's out" has been drawn in an inconsistent manner. You (and Paul McKenzie) have kept explaining how I can convert a std::string to upper case by writing a compound statement or by packing it into a function of my own (all of which I already knew), but always avoided getting to my main point: why you believe it makes sense to keep those very common, often-used, basic, fundamental, everyday string manipulation functions out of the library. Maybe we should ask the other way round: Why are things like string::length(), string::empty() and string::reverse() there? They can also be tossed together using algorithms. And please don't bring up the locale argument again: As said earlier, CString::MakeUpper() manages to deal with locales without any problem.
    Last edited by etaoin; December 21st, 2003 at 10:35 PM.

  7. #37
    Join Date
    Aug 2000
    Location
    West Virginia
    Posts
    7,721
    I can't believe this is such a big deal. At any rate, my 2 cents:

    1) There are a number of ways of designing containers. Each has
    its own advantages and disadvantages. A good summary can
    be found in "The C++ Programming Language" by Stroustrup.
    In my edition it is under Section 16.2, "Container Design".

    2) An argument for minimal class member functions by Scott Meyers:

    http://www.cuj.com/documents/s=8042/cuj0002meyers/

    3) "Why are things like string::length(), string::empty() and string::reverse() there?"

    3a) length() ... not needed since size() is there and they do the same
    thing. size() is a member because you cannot implement it as EFFICIENTLY
    with a non-member NON-FRIEND function.

    3b) no need for empty() ...
    Code:
    template<class charT, class traits, class Allocator>
    bool empty( const basic_string<charT, traits, Allocator>& s )
    {
      return s.size() == 0;
    }
    [ok, the wording in the standard for size() leaves some wiggle
    room on its complexity, whereas there is no such wiggle
    room on empty(), so maybe empty() does need to be
    a member ... but you get the idea]

    3c) reverse() ??? As far as I know there is no string::reverse(), and
    there does not need to be (see the sketch after this list).

    3d) there are 103 member functions in std::string. Herb Sutter has suggested
    that the number could be reduced to somewhere in the 30's using non-member
    non-friend functions. The problem with std::string is there are too many
    member functions.

    4) "MFC is "pure" OO."

    How to explain the cut-and-paste classes :
    Code:
    	// Arrays
    	class CByteArray;           // array of BYTE
    	class CWordArray;           // array of WORD
    	class CDWordArray;          // array of DWORD
    	class CUIntArray;           // array of UINT
    	class CPtrArray;            // array of void*
    	class CObArray;             // array of CObject*
    
    	// Lists
    	class CPtrList;             // list of void*
    	class CObList;              // list of CObject*
    
    	// Maps (aka Dictionaries)
    	class CMapWordToOb;         // map from WORD to CObject*
    	class CMapWordToPtr;        // map from WORD to void*
    	class CMapPtrToWord;        // map from void* to WORD
    	class CMapPtrToPtr;         // map from void* to void*
    
    	// Special String variants
    	class CStringArray;         // array of CStrings
    	class CStringList;          // list of CStrings
    	class CMapStringToPtr;      // map from CString to void*
    	class CMapStringToOb;       // map from CString to CObject*
    	class CMapStringToString;   // map from CString to CString

    5) "are you seriously telling me that a function c_str() is readable ?"

    5a) what would you suggest? Some type of cast? Maybe c_style_string()??
    whatever ... I honestly don't know of any programmer that has trouble
    with c_str().

    5b) is this example from CMapStringToPtr readable ???

    Code:
    BOOL found = my_map.Lookup(str1,(void*&)pA);
    6) Did you ever write your code using CMapStringToPtr?
    I'm still willing to write the corresponding std::map
    version (a sketch follows after this list). I would be
    surprised if the performance was not good enough,
    especially if you don't know the size of the container
    to start with.

    7) Are the MFC list classes bad because they don't supply
    a sort() function?

    8) Microsoft has stated that in general the STL containers
    are more efficient than the MFC containers, and that they
    use them in their own development.

    9) STL has no hash container (although every major
    implementation, including Dinkumware, has one). MFC
    has no binary tree container (as far as I know). Both
    ARE serious omissions (in my opinion).

    10) I have used BOTH the MFC and STL containers
    extensively. It has been my experience that I can
    write code faster, bug free, and more efficiently with
    the STL containers than MFC. I can write the exact
    same code to loop over a container in STL no matter
    what the container (vector,deque,set,multiset,map,
    multimap, and string). In my opinion they have more
    functionality (both in member functions and in the
    algorithms) than the MFC counterparts. This is just
    my experience. I welcome hearing an opposite opinion
    from someone who also has extensive experience in both
    and why.
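    Regarding 3c), a minimal sketch of how the generic algorithm already covers strings:
    Code:
    #include <algorithm>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string s = "stressed";
        // std::reverse works on any sequence with bidirectional iterators,
        // std::string included, so no member function is needed.
        std::reverse(s.begin(), s.end());
        std::cout << s << '\n';   // prints "desserts"
        return 0;
    }
    And regarding 6), a hedged sketch of what the std::map counterpart of the CMapStringToPtr lookup in 5b) might look like (the A type and the lookup name are illustrative); note that no cast through void*& is needed because the mapped type is A* directly:
    Code:
    #include <map>
    #include <string>

    class A { };   // stand-in for whatever pA points at in the MFC example

    bool lookup(const std::map<std::string, A*>& my_map,
                const std::string& str1, A*& pA)
    {
        std::map<std::string, A*>::const_iterator it = my_map.find(str1);
        if (it == my_map.end())
            return false;
        pA = it->second;
        return true;
    }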
    Last edited by Philip Nicoletti; December 28th, 2003 at 07:44 PM.

  8. #38
    Join Date
    Sep 2002
    Posts
    1,747
    etaoin, first I want to thank you for your detailed response. It was exactly what I was looking for, and unlike some of the mistruths that have been floated around (like memory leaks), actually focused honestly on the issue, and I really appreciate that.
    Originally posted by etaoin
    Yes, I agree. It has evolved that way, and maybe that's even good news. Maybe I'm just speaking for "my" generation, who came from the procedural world using languages like FORTRAN, Pascal, Modula and C, and then, during the early 1980ies, when the OO hype started to creep into the mainstream, slowly converted to an object oriented view of the world. We had to learn that we no longer had to write "algorithms" which operate on "data", but that everything was made up of "objects" which understood "methods" and cared for their internal states themselves. C++ was "sold" as an object-oriented superset of C, and we gladly saw that many of the techniques we were already using in C (data abstraction, encapsulation, keeping data and related methods together and so on) were explicitly supported by that language. It became natural (or chic) that instead of applying an algorithmical function like strupr() to some data, it was better to have a string object and just tell it to MakeUpper(). That view of the world is OK, and it is reflected in "real" OO libraries like MFC (from a "modern" point of view, MFC uses a quite naive approach, but from the original point of view, MFC is "pure" OO. The best thing is the uber-class-design we find with CObject). Now with the advent of templates, the STL, and the current C++ standard ANSI library, things got reversed (as I said in an earlier post, we have made a step backwards): We are getting back to algorithms. That's fine, as the algorithmical, functional way of looking at the world is just one valid approach, just like the object-oriented way is another. But please don't tell me that we have finally found the holy grail here: Using algorithms is not something fundamentally new, but just the way we used to do things in C and other procedural languages before. So when you tell me that I have to use std::transform(<somestuff>, <someotherstuff>, <yetsomeotherstuff>, <someold_c_library_function> ) in order to convert a string to upper case, I find me back to the days of procedural programming when I had to write the same amount of code to describe the same algorithm. std::transform is a trivial skeleton, like an empty for-loop. The flesh comes in with the iterator expressions and the functor I have to provide. So "thinking algorithmically" is just old wine in new bottles.
    I agree with a lot that you say here, but I see it more as the focus has sharpened. I too come from a long procedural background, and I think most programmers still do. The old ways of teaching c++ as c plus classes are still quite prevalent even today, and teachers in high schools and colleges of object oriented languages (including the popular Java) often do not explain the reasons behind an OO approach to their students. And really, as with all epiphanies, the OO paradigm shift was accompanied by a lot of hype that made it sound like it might one day cure cancer or at least keep the programmer from having to input anything. In the process, a lot of people have not kept their focus on the whole purpose of the shift.

    I love OO techniques. I'm "all about" the object. But I've never been about weaving almost everything into one hierarchy (it was the first bad taste I experienced with both MFC and Java). And the reason was very vague to me at first. There was this "stickiness" to such a design. I couldn't "massage" the code easily. Eventually, reading others' theory and doing my own experiments, I began to understand that

    like writing a book
    or scientific theorising
    abstracting to identities
    or objects and their transformations
    its all
    just
    the study of model theory and process science

    I mean there's a whole lot of foundations in these sciences that is already laid. Theorems have been proved. And one of the problems known in these communities for some time is that with a huge hierarchical type relationship, with inheritance being the only method used in absorbing structure specifications (functions and state) of other object types, the abstraction process is constrained in object space to grow types at higher order (in relation to the complexity of variation) than when containment or other type relationships occur (by growing types here I mean that classes must be created more often at intermediate stages in the hierarchy to target the points of variation in concept). And there's a partial ordering imposed on the structure of this constrained object space that is not always respected in concept space (there's the classic square - rectangle inversion type concept relationship, as one instance). Often an execution block will only use an object in specific ways that do not cover the concept space developed.

    This leads to huge, bloated objects with huge, bloated interfaces, often found in "pure" OO to create flexibility (MFC's CWnd is a classic example). Just changing the "pure" OO to look at containment as more than a by-value state mechanism and introducing polymorphic type containment opens up a ton of patterns for decoupling a hierarchy and changing multiplicative type growth to linear as feature sets are constructed. It's an interesting type dynamic calculation that is pure combinatorics (you look at object types as a string of objects (the state) and functions and calculate the minimal inheritance graph necessary to support immediate construction of an object type subcollection to represent the domain of variability at interest).

    This is why early OO theorists stressed the distinction between IS-A and USES-A relationships and emphasised the importance of containment as the choice to use when the type extension mechanism may extend a variable type. Inheritance itself is not polymorphic. It's a solitary form of extension that describes polymorphic type relationships. Containment uses polymorphism (or can). So you have to understand that an uber object is actually a sign of bad design, not something to seek out. It's mathematically bad. It constrains expression space needlessly. It's going in the opposite direction in variability, constricting instead of adding feature points.

    I understand the desire for an uber object, however. It's nice to look at your object hierarchy as levels of abstraction, and where you want the most abstraction (like containers and your application core serialisation) and your only type relationships are inherit or contain, they fit a nice conceptual model. It's just that there are more appropriate, more flexible (both in extension and safety) techniques available, both theoretically and in actual languages.
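    As a small, hedged sketch of "containment using polymorphism" (all names here are illustrative): the Report below holds a polymorphic Formatter instead of deriving from one fixed base, so adding a new formatting behaviour does not require a new Report subclass.
    Code:
    #include <iostream>

    class Formatter
    {
    public:
        virtual ~Formatter() {}
        virtual void write(const char* text) const = 0;
    };

    class PlainFormatter : public Formatter
    {
    public:
        virtual void write(const char* text) const { std::cout << text << '\n'; }
    };

    class ShoutingFormatter : public Formatter
    {
    public:
        virtual void write(const char* text) const { std::cout << text << "!!!\n"; }
    };

    class Report
    {
    public:
        explicit Report(const Formatter& f) : formatter(f) {}   // USES-A, not IS-A
        void print() const { formatter.write("quarterly numbers"); }
    private:
        const Formatter& formatter;
    };

    int main()
    {
        PlainFormatter plain;
        ShoutingFormatter loud;
        // The behaviour varies per object without growing the Report hierarchy.
        Report plainReport(plain);
        Report loudReport(loud);
        plainReport.print();
        loudReport.print();
        return 0;
    }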
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  9. #39
    Join Date
    Sep 2002
    Posts
    1,747
    Also posted by etaoin
    Just another thought about what's going on here: Generations of C programmers have used (or misused, as it is now often seen) the C preprocessor to add an additional level of indirection to their coding. They wrote macros which called macros which called macros... and created themselves an own universe of (questionable) concise metaprogramming techniques to efficiently generate efficient C code and have the preprocessor do a lot of work before the actual C compiler even had a chance to see the resulting code. Naturally, that approach, as anything which is overused, led to several problems. In particular, it was extremely hard to debug code which used macros extensively, as the errors the compiler flagged never corresponded to the line where the error actually occured: It always pointed to the line where the macro was called, while the actual coding error was some level deeper in some nested macro definition. C++ to the rescue: With the advent of the const keyword, so the language gurus now taught, preprocessor constants are no longer needed. Using "const", we can define constants in a type-safe manner. And likewise, preprocessor macros could be replaced by inline functions: Real C++ syntax, type-safety, but the same efficiency as macros. Fine. The new dogma was: The preprocessor is bad, ugly and harmful, everything can be done using const and inline. So far so good. Until templates arrived. I don't understand why so many fail to realize it, but the entire template stuff is little more than a fancy reincarnation of the C preprocessor: Templates are expanded before the compiler gets to see the code, x is substituted for y, and an uncontrolled bunch of "real" C++ code is generated behind the scenes. And if I make a typing error, let's say in the code which uses a a std::map, the compiler won't point me to that line, but tell me that there is a bug in the standard library. I don't know why I never called Microsoft telling them "Hey, I just tried to compile my project under VC++ 6.0, and the compiler complains about an error in line 25 of your MAP implementation. Please fix it." Maybe I should.
    I've always seen C++ templates, like C preprocessor macros, as friendly enemies. They're very helpful, but will both bite you when you're not extremely careful. In that sense, templates, too, are just old wine in new bottles: The old idea of a second level, a second language above the actual language has come back in new, fancy clothes.

    Again, don't get me wrong here: I'm not saying that templates are evil. That position would be hard to defend these days, just as it would have been hard to state two decades ago that preprocessor macros are bad. I see the power (and the drawbacks) of templates, as I saw the power (and the risks) of preprocessor macros. I just don't understand those people who told us a few years ago that preprocessor macros were a no-no, and now eagerly advocate the use of templates. Likewise (and that gets us back to our original subject), intelligent persons who might have stated a few years ago that we should use "objects" and send them "messages" instead of calling "algorithms" which operate on "data" now say the opposite: The C++ standard library wants us to think "algorithmically" again.
    The book C++ for Real Programmers (with an arrogant title that mirrors what you mention later) actually introduces templates by illustrating the preprocessor substitution they can be used to replace. I think that's how a lot of people see templates, as replacing the metaprogramming previously done by the preprocessor, with min/max as the standard illustrators.
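    The classic illustration, as a hedged sketch (MIN_MACRO and min_value are made-up names): the macro re-evaluates its arguments, while the function template evaluates each exactly once and participates in type checking.
    Code:
    #include <iostream>

    #define MIN_MACRO(a, b) ((a) < (b) ? (a) : (b))

    template <typename T>
    const T& min_value(const T& a, const T& b)
    {
        return b < a ? b : a;
    }

    int main()
    {
        int i = 2, j = 5;
        int m1 = MIN_MACRO(i++, j);   // i is incremented twice through the macro
        i = 2;                        // reset for a fair comparison
        int m2 = min_value(i++, j);   // i is incremented exactly once
        std::cout << m1 << ' ' << m2 << '\n';   // prints "3 2"
        return 0;
    }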

    But templates offer a skill set quite different from the preprocessor's. They actually create type relationships, and template lookup actually offers pattern matching algorithms over type space. Sure some of the first uses of templates focused on text substitution of typenames, but modern template mechanics do much more. Function types (return types and parameter type lists) can be parsed, object interfaces and state can be constructed orthogonal to inheritance type relationships, optimisations can be applied, complex calculations can be performed, and type relationships can be automated in a highly generative way. With concept checking and static assertions (things unfortunately not well developed prior to the 98 standardisation and not found in the current library -- though likely to be added and found in some of the new addition proposals), the error message location problem can be improved tremendously. Even though the standard response to templates over preprocessor macros focuses on type safety and expression evaluation concerns, really that only scratches the surface of the benefits templates bring to metaprogramming.
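    A hedged sketch of both ideas in 1998-vintage C++ (is_pointer and compile_time_check are illustrative names here, not the standard's): the partial specialisation performs a pattern match over type space, and instantiating the undefined false case acts as a static assertion.
    Code:
    // Primary template says "no"; the partial specialisation says "yes"
    // for any pointer type -- a compile-time pattern match over types.
    template <typename T>
    struct is_pointer
    {
        enum { value = 0 };
    };

    template <typename T>
    struct is_pointer<T*>
    {
        enum { value = 1 };
    };

    // Declared for all bools, defined only for true: instantiating the
    // false case is a compile-time error at the point of use.
    template <bool> struct compile_time_check;
    template <>     struct compile_time_check<true> {};

    int main()
    {
        compile_time_check<is_pointer<int*>::value != 0>();     // fine
        // compile_time_check<is_pointer<int>::value != 0>();   // would not compile
        return 0;
    }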

    Which is very relevant here. Maybe, however, the word "multiparadigm" confuses things. Kuhn attached so many connotations to the term paradigm that it is now popularly used in a mutually exclusive kind of way. When paradigms shift, foundations change. Everything is new. That's not what is happening. Computational linguistics is evolving. Objects are very useful in organising state and procedures, but you had the state and procedures prior. Templates bring a lot of power and safety to metaprogramming, but you had metaprogramming prior. And I've been trying to underline that the measure of fitness in this evolutionary landscape is reuse.

    Of course, mistakes were made and some people were misled by the wrong goals. MFC is a wonderful example which has some measure of reuse as a library / framework used by many, but it also fails to be easily flexible in some of its core operations, and code is introduced by the framework in many places that cannot be reused in different contexts. The mathematicians never proved a theorem one day about how great this type of design was and then at some later point flipped sides; it's just that some of the practitioners removed from the theory lost focus and thought everything was about the objects, when really everything is about reuse and the benjamins. And I see modern design as sharpening the focus back upon the true target.
    Also posted by etaoin
    Be careful here, that sounds like a rather elitary view: A "real" programmer (like all of the tough guys we are, aren't we?) doesn't care about such superficial things as the main paradigm of a language. We understand C, C++ LISP, Prolog, FORTH, APL and all the rest, as everything boils down to machine language in the end. High-level languages are for wimps...

    Look, the main idea about every programming language ever created has been to make the programming of a computer easy for humans. So programming languages always try to allow humans to express ideas the way they are used to thinking about them, and transform that into the way the machine can deal with it. FORTH was an excellent language, it produced highly efficient machine code, but required the human programmer to think in terms of a stack with pushing and popping values, instead of using the much more common infix notation for mathematical expressions. Languages like FORTRAN and BASIC have been invented because they better allowed an unprepared human to write down algorithmical ideas the way they think.
    I always have to smile when I think back at some of my fellow students who had those expensive HP pocket calculators using RPN. Sure, "real" engineers and mathematicians had no problem in coverting every expression to postfix notation, and could even prove that you could save a few keystrokes when entering a formula using postfix instead of the usual algebraic notation. But eventually, pocket calculators with parentheses and algebraic notation got much more popular, because they adapted to the way humans use to express mathematical problems. There is still a small, hard core of RPN supporters, but IMO, they just fail to notice that they form an elitary little club of individuals who still think that it is better to show off with their abilitiy to bend to the way a machine works than to use a much more sophisticated machine that follows the way humans use to think.
    Nouns and verbs.
    Objects and their transformations.
    Category theory.
    Concepts as attractors in neural net dynamics.
    Modern computer languages do model their domain in ways that approach models of human languages, and that is certainly where a lot of the theory is headed. The postfix / infix / other debate illustrates this quite nicely. People can train themselves to recognise the correct parsing whether the operator symbol is placed to the left, in the middle, to the right, or even above (as in common parse tree forms) the objects it operates on. All that is needed is a visual distinction that identifies objects and operations and a rule to topologically associate the order of operation evaluations. Linear writing is an important simplification over more general graph relationships (as one may find, for example, in Mayan hieroglyphs), but much abstract math (algebraics, topology, foundations) is done today in the language of commutative diagrams because of their expressiveness. The only real requirement seems to be the presentation of objects and their transformations, as that appears to be the natural language of conceptualisation. The field of child psychology (and particularly mathematical learning) following Piaget's and similar work underlines the fact that most of a preference is due to an arbitrary choice of standard chosen early on in education / training, and this is a position similar to (but not exactly) the idea of a universal grammar found in the works of Chomsky.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  10. #40
    Join Date
    Sep 2002
    Posts
    1,747
    But I use flexibility in the same context here as you seem to, etaoin, so I'm not sure why there'd be a caution. My point is that suboptimal design principles were learned by large portions of the community who never went on to learn the better techniques as they became better understood. A lot of this I blame first on education (the "those who can, do; those who can't, teach" principle coupled with the fact that academic positions don't place the reuse demands that industry positions do, so there was a definite lag behind theory, with older professors comfortable with older paradigms slow to understand the reasons for the new language features and to convey them to their students), and one of the gaps I still see in modern curricula is the teaching of research techniques and continued study management. When I look through old SIGCSE Bulletins (Special Interest Group on Computer Science Education), I see tons and tons of proposals on creating cooperative networks with industry to try to bring educators up to speed on the latest developments, but only rarely is the root of the problem addressed, and never very constructively.

    And I am in no way suggesting a definition here that mimics your parody on machine and other languages. I am suggesting flexible knowledge of the relationships between type dynamics to assist in the abstraction process. The idea is to move towards better use of abstraction mechanisms for the particular, stated reasons. It's not an empty show-off contest to see who has bigger body parts. And contrary to what has been suggested both by yourself, etaoin, and others, design does not always fall under aesthetic metrics. Structured programming founded the scientific metrics in a procedural language, and modern design theory refines these quite objective measures of good design.

    If I had taken an extreme position, making absolute distinctions like homo neophobus / neophilus or Nietzsche's master / slave, that would be getting elitist. That's saying there is a permanent distinction between classes. I only stated the much less controversial position that there is a gradation in capabilities around one particular skill set. There are people that don't know how to program at all. And there are people that can code in miraculous ways. And I only stated the much less controversial position that how far one goes with a particular skill is often a matter of how much effort they put into it. By "giving up" I don't in any way suggest that they are inferior or didn't go elsewhere and pursue one of many many other interests.
    Also posted by etaoin
    OK, but as already said earlier: I don't understand why I have to write it myself, why it is not there already. The exact makeUpper function you posted could be in the standard library, along with trimLeft, trimRight and the others mentioned, and this discussion would have never been started.

    ...

    And that's really what my complaint is all about : It stopped either too early or too late. When a class which deserves to be called "string" is in the library, than those basic string manipulation functions should also be there. It's as if they had provided a "complex" class with a "+" operator, but without "-", because they had to stop somewhere. It's just so nonsensically incomplete.

    ...

    Exactly. But the original problem was not about mathematics, optimisation theory or language design, but about the result of it: The way the standard library presents itself to the user. The user I mean here is not doing algorithmics research using the lambda calculus, he wants to convert a string to uppercase.

    ...

    Copy and paste? I thought we agreed that this is not the kind of code reuse we favour. The other possible way is to write those functions myself and copy them to every project in which I intend to use strings - just a higher level of copy-and-paste reuse. Or I could, of course, build a complete string library once and use it in my further projects. Build a string library? Oops, I always thought strings were already part of the C++ standard library...?
    Which is why I stated quite early on that such were features of MFC. I don't claim other libraries have nothing to offer. That would be a ludicrous position and dishonest as to my actual use. I use the standard library along with things like ATL, Qt, boost, Loki (still only cautiously) and others in various domains. It's all about gathering the right tool set. Sometimes, the tool sets I gather are procedural in nature or have fallen prey to the misunderstandings of good OO design, so I adapt them, refactor them (when open sourced and legally possible), or in some other way manipulate them to obey me better. But I acknowledge features.

    The standard picked a point to stop at that is consistent. They didn't write wrapper functions specific to any of their container interfaces. The above wrapper could easily be templated, since begin and end are standard interfaces across the container domain; that would certainly be more flexible. But still you are losing flexibility to use the components with other libraries that expose iterator concepts but may not respect the standard's choice of interface. It's a consistent point to stop at, even though perhaps a wrapper or two around some of the common uses would have been useful.
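    A hedged sketch of that templated form (make_upper is an illustrative name): anything exposing begin() and end() over characters can be upper-cased in place.
    Code:
    #include <algorithm>
    #include <cctype>
    #include <string>
    #include <vector>

    // Works for any sequence of char exposing the standard begin()/end() interface.
    template <typename Sequence>
    void make_upper(Sequence& seq)
    {
        std::transform(seq.begin(), seq.end(), seq.begin(), ::toupper);
    }

    int main()
    {
        std::string s = "hello";
        std::vector<char> v(s.begin(), s.end());
        make_upper(s);
        make_upper(v);   // the same wrapper works unchanged on a different container
        return 0;
    }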

    But we are still looking at one line here. I mean, I don't think it's that horrible a place for the standard to stop when such a difficult task can be done in one line. Sure, it can be shortened (at the loss of flexibility). And I agree completely that such a feature is useful. I just don't agree that behind the surface lies any point against the design of the library, since it's certainly understandable that the library should focus on flexibility instead of tying functionality only to itself.

    The difference in complexity here is minuscule in comparison to full application domains.

    And the copy and paste referred to the movement of the code from the codeguru forums to your work's library code (wherever appropriate). I provided the feature so desperately desired, through my own blood, sweat, and tears, offering this feature that the standard library didn't have. It's a tool to add to your tool kit. Just as if you were to download a library from the internet (codeguru, anyone?). Except no download. No install. Just a cut and paste, once. For use, not reuse. But this seemed more like a defensive argument, like someone was reaching for an argument because the position they had taken was falling. You obviously knew what I was talking about, etaoin, so I won't dwell too much on it.
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  11. #41
    Join Date
    Sep 2002
    Posts
    1,747
    Also posted by etaoin
    That depends entirely on the language used. In LISP, that would be straightforward. In C, it would be a set of procedurally coded functions which operate on data, much like the C standard library. In C++, it would be a class library with some kind of uber-class design (maybe something similar to Smalltalk), collections which hold elements of that abstract type, and methods which operate on them (much like MFC). And yes, if someone asked me to design an algorithmics library based soleley on templates and making heavy use of template metaprogramming, that new paradigm that templates have introduced into C++, I would probably end up with something like the C++ standard library. But the choice of the pre-packaged methods would be different: According to the realneeds of real developers, there would be ready-made methods to convert strings to upper case, for trimming, and so on. The more exotic things, like string::reverse(), would probably be missing (when was the last time you had to reverse a string?), as they can be tossed together using some simple, generic algorithms (actually, the reverse algorithm would operate on any type of controlled sequence, and need not be there specifically for the string data type, as it is now).
    Again I want to point out that uber object design has never been the sound direction. Never. Most object evangelists focusing on the theory of design cautioned about this very early on. It is quite unfortunate that many of the actual products released using object technology (MFC) were built unsoundly, and Java was built around the idea. Some of the uglier early patterns for serialisation focused on the uber object, and the uber object was the standard method for libraries to provide generic capabilities across all of object space, but more decoupled patterns were talked about quite openly around 96 / 97, where you would just mix in the base type conversion functionality through multiple inheritance or single inheritance with composition as an adapter. Even in Smalltalk, the reason for Object is not that of MFC or Java, but to help support the metaobject model and the inversion of type safety (which must be requested in that language through interface verification instead of being statically checked at compile time). Smalltalk is a language that supported unbounded polymorphism and instance-specific methods and state early on, so the mathematical complexity issues do not arise in its use of Object. In that kind of language, of course an uber object is okay, because it does not enforce poset structure down the hierarchy anyway, so there are no worries.
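    A small, hedged sketch of the mixin idea mentioned above (Serialisable, Shape and Circle are illustrative names): the capability is mixed in only where it is wanted, rather than hanging off one uber base class.
    Code:
    #include <iostream>

    // The capability, kept out of any root object.
    class Serialisable
    {
    public:
        virtual ~Serialisable() {}
        virtual void serialise(std::ostream& os) const = 0;
    };

    // The domain hierarchy stays independent of the capability.
    class Shape
    {
    public:
        virtual ~Shape() {}
        virtual double area() const = 0;
    };

    // Mixed in via multiple inheritance only for the types that need it.
    class Circle : public Shape, public Serialisable
    {
    public:
        explicit Circle(double r) : radius(r) {}
        virtual double area() const { return 3.14159265358979 * radius * radius; }
        virtual void serialise(std::ostream& os) const { os << "circle " << radius; }
    private:
        double radius;
    };

    int main()
    {
        Circle c(2.0);
        c.serialise(std::cout);
        std::cout << " area=" << c.area() << '\n';
        return 0;
    }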
    Also posted by etaoin
    It always depends on what you expect. When you buy a kitchen mixer, a digital camera or a car, you expect a functioning device, ready to use, and not a kit with separate parts you have to assemble first. Of course, there are contraptions which are also sold as kits (model airplanes, computers, radios), but that's done mainly in order to allow for cutting down on the production cost and hence the final price, and of course to satisfy the needs of hobbyists who like to put the pieces together. OTOH, when you buy a LEGO set, you expect a collection of basic building blocks which can be put together in many ways, and not a ready-built model. But you want at least the ready-made bricks and other pieces to be there, and not bags of ABS granulate and moulding stamps which require you to mould the bricks first. Or, to go a step further, perhaps a chemistry lab where you have to develop an appropriate synthetic material first... You see, it's a question of granularity, as you mentioned. A library for a computer language is much like a LEGO set - you expect a reasonal choice of universal, basic building blocks which can be used to put together a reasonal category of applications in a consistent way. So when I buy a motorized LEGO set, it will certainly contain some of the usual building bricks, and one of those bricks will be a motor brick, ready to be connected to another brick which holds the batteries. I can be sure not to find a reel of wire, some metal parts and a soldering iron (it's a kit, after all...), because the expected granularity of a LEGO set and the level at which you usually put the individual pieces together just makes it seem a natural choice to provide the motor as a ready-made brick. In much the same way (here we are back to the theme again), the expected level of granularity for a library which has "string" as an own type also demands, IMO, ready-made bricks like "MakeUpper" and "TrimLeft". They are on the same level. Would the library have stopped at, let's say, the "container" level, leaving it up to the user to implement a text string as a vector of chars (or maybe a linked list?), then it would be OK to leave the implementation of basic string manipulation methods, using basic algorithms, also up to the user - the library would just be, altogether and consistently, on a lower level of abstraction, building on a finer granularity. But as soon as they put in "string" as a data type, they should also provide the typical string-related algorithms as ready-made bricks.

    ...

    Look, it seems as if we're somewhat running on the spot. I'm not trying to prove you wrong, galathaea. On the contrary, I share your points of view and highly appreciate nearly everything you have said so far. There's nothing wrong with functional decomposition and templates (although, as said above, they're nothing really new), or with the general patterns used in modern language design. What you have stated so far is perfectly valid when talking about the theory and the implementation of languages and their standard libraries. Writing a makeUpper() the way you showed (using std::transform) is an elegant way of doing it, consistent with the way the library is designed. But, and here comes the only point where we seem to disagree, and which happens to be the initial point of my initial post in this (my initial) thread: the actual implementation of it should be part of the library, and not part of my code. Or, in more general words: in the specific case of the actual C++ standard library, I'm convinced (referring to everything I said above) that the boundary line between "what's in" and "what's out" has been drawn inconsistently. You (and Paul McKenzie) have kept explaining how I can convert a std::string to upper case by writing a compound statement or by packing it into a function of my own (all of which I already knew), but you have always avoided my main point: why do you believe it makes sense to keep those very common, often used, basic, fundamental, everyday string manipulation functions out of the library? Maybe we should ask the other way round: why are things like string::length(), string::empty() and string::find() there? They can also be tossed together using algorithms. And please don't bring up the locale argument again: as said earlier, CString::MakeUpper() manages to deal with locales without any problem.
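    For concreteness, the "compound statement or a function of my own" route amounts to something like the following. This is only a minimal sketch; makeUpper and toUpperChar are hypothetical helper names, not anything the standard library provides:
    Code:
    #include <algorithm>
    #include <cctype>
    #include <string>

    namespace {
        // Casting through unsigned char avoids undefined behaviour when
        // plain char is signed and holds a negative value.
        char toUpperChar(char c)
        {
            return static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        }
    }

    // Hypothetical helper: packages the std::transform idiom once, so
    // call sites become one-liners much like CString::MakeUpper().
    std::string makeUpper(std::string s)
    {
        std::transform(s.begin(), s.end(), s.begin(), toUpperChar);
        return s;
    }

    int main()
    {
        std::string greeting = makeUpper("hello, world");   // "HELLO, WORLD"
        return greeting.empty() ? 1 : 0;
    }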
    I don't want to harp on the "one line" response, but I just wanted to point out something that I think you might find relevant. My intention in all of this is to try to show how the standard library was quite a prescient step, several years back, into what is becoming more and more the driving goal of the language design community: generative programming. The ability to use the language in a way that automates the construction of applications. To provide more and more immediate transformations from intent specifications to final product.

    And what I have been trying to argue is that there are indeed objectively classifiable, quantifiable and measurable design criteria that assist in this science. And really, very little of it has to do with the difference between one call with four arguments and one call with one argument when the four can be collapsed. Instead, much of the theory concerns inter- and intra-object relationships, system and subsystem composition, and architecture automation. The library provides function compositors, adapters, etc. so that one may construct more and more complex morphisms of object domains. It provides the most common utility containment objects and the algorithms on them. One of those is a string, and the library has focused on making the use of strings extremely flexible. Locale facets may be provided for many different regions. These can encode date/time stringisation, numeric stringisation, or any of the other localisation concerns. The library even provides defaults, though it is always under pressure not to invalidate the business concerns of other library vendors.
    These are all very much the concerns of the entire application core architecture that permeates many business layer concerns (and it's probably a good idea that much of the presentation layer was not included in standardisation, because the GUI concerns of 1998 are nothing like the 3D graphics / video / audio concerns of today, with the cool new 3D shells coming out from Sun and others). And the code generation mechanisms being constructed make heavy use of aspect oriented programming, code compositors and generators, and pattern generators. The C++ standard library is in the logical form necessary to interact with these new technologies, whereas MFC is hopelessly lacking. And the reason is the one I have been stressing: hierarchy structure freeze.
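    To make the facet point concrete, here is a minimal sketch (the makeUpper name and the way the facet is used here are mine, not a quotation of any library's interface) of letting a std::locale's ctype facet decide what "upper case" means:
    Code:
    #include <locale>
    #include <string>

    // The ctype<char> facet of the supplied locale performs the
    // conversion, so the same helper serves different regions simply by
    // being handed a different std::locale object.
    std::string makeUpper(const std::string& s, const std::locale& loc)
    {
        std::string result(s);
        if (!result.empty())
        {
            const std::ctype<char>& ct = std::use_facet< std::ctype<char> >(loc);
            ct.toupper(&result[0], &result[0] + result.size());
        }
        return result;
    }

    int main()
    {
        // The classic "C" locale is always available; named locales such
        // as "de_DE" depend on what the platform provides.
        std::string shout = makeUpper("widget", std::locale::classic());
        return shout == "WIDGET" ? 0 : 1;
    }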
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  12. #42
    Join Date
    Sep 2002
    Posts
    1,747
    There has been an evolution. Here is an allegory:

    First, the post-procedural world had B inherit from A to get A's functionality and add its own. But then when a C came along that wanted some functionality from B but didn't need any of A, it was forced to bloat and to introduce type relationships it didn't want. Then someone realised that refactoring through multiple inheritance could solve some of this with a little restructuring. But when such decisions were encoded in a hierarchy graph, with descendants coupled to their ancestors' particular inheritance relationships, this was not possible. The entire graph would need to be deconstructed; lots of changes were needed and propagated across the whole graph. Generative programming is not possible in such an environment. Flexible reconstruction eventually collapses into a terminal object that has consumed all coded functionality: one application object that inherits from every otherwise-leaf in the graph. You have lost all conception of the abstract data type domain and reverted to a procedural style with even less type safety (because of convertibility down the graph). Encapsulation was lost by encapsulating everything.
    Looking to containment and multiple inheritance still led to a lot of coupling between types. Flexibility points still required a standardised interface assumption to be made. By looking to templates, however, most generative needs could be solved. Simple type relationships like genericity and traits were made available. And combining inheritance and templates through mechanisms like parametrised inheritance, the curiously recurring template pattern, and other type relationships suddenly opened up new vistas of decoupling. Expression templates were possible. Type classification optimisations were possible. And with the use of the algorithmic decomposition that had always been the proposed style in the more mathematical and theoretical communities, generators on large scales were possible. Inheritance is still quite important; it's just that much of its use is now in more finely tuned relationships and in the ever-present dynamic binding (which will always be extremely useful), and inheritance graphs are much flatter and fragmented into unrelated components.
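    A minimal sketch of the curiously recurring template pattern mentioned above (the Shape, Square and Circle names are just illustrative): the base class is parametrised on its own derived class, so generic code binds the call at compile time without a shared dynamic root:
    Code:
    #include <iostream>

    // The base is parametrised on its own derived class, so the
    // "virtual-like" call resolves at compile time.
    template <typename Derived>
    class Shape
    {
    public:
        double area() const
        {
            return static_cast<const Derived*>(this)->areaImpl();
        }
    };

    class Square : public Shape<Square>
    {
    public:
        explicit Square(double side) : side_(side) {}
        double areaImpl() const { return side_ * side_; }
    private:
        double side_;
    };

    class Circle : public Shape<Circle>
    {
    public:
        explicit Circle(double r) : r_(r) {}
        double areaImpl() const { return 3.14159265358979 * r_ * r_; }
    private:
        double r_;
    };

    // Generic code works with any Shape<D> without a common dynamic base.
    template <typename D>
    void printArea(const Shape<D>& s)
    {
        std::cout << s.area() << "\n";
    }

    int main()
    {
        printArea(Square(2.0));
        printArea(Circle(1.0));
    }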

    For example, look at DEMRAL (not that one, vicodin451!). The Domain Engineering Method for Algorithmic Libraries focuses directly on modular decomposition (i.e. it is quite object oriented) and on aspectual decomposition mechanisms, but like many other methodologies it has learned the power of templates in flexible type dynamics. In fact, its entire layer decomposition is orthogonally extensible and adaptable with very little coupling. Other similar methodologies are everywhere these days. Template metaprogramming has spread like a firestorm ever since Unruh's presentation at the standards committee. It's Turing complete. It's type safe, and it can run pattern matching algorithms over type space.
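    For readers who have not seen template metaprogramming before, a tiny compile-time computation in that spirit looks like this (the textbook factorial metafunction, not Unruh's actual prime-number program):
    Code:
    // The compiler, not the running program, evaluates Factorial<5>::value
    // by recursive template instantiation.
    template <unsigned N>
    struct Factorial
    {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0>
    {
        static const unsigned long value = 1;
    };

    // A crude compile-time check with no library support: an array type of
    // negative size is ill-formed, so a wrong value refuses to compile.
    typedef char factorial_check[Factorial<5>::value == 120 ? 1 : -1];

    int main()
    {
        return 0;
    }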

    The standard library advanced to this stage years ago. Traits are everywhere. There are some policies as well, though in some places they are not yet documented as such. Inheritance is used where dynamic polymorphism would be useful. The algorithmic abstraction is in place.
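    As one small illustration of the traits idea (middle is a made-up helper here, not a standard algorithm), std::iterator_traits lets a single template adapt to whatever iterator it is handed:
    Code:
    #include <iterator>
    #include <list>
    #include <vector>

    // iterator_traits exposes the value type and difference type of any
    // iterator, so the same source works for vectors, lists, or raw
    // pointers without a common base class.
    template <typename ForwardIterator>
    typename std::iterator_traits<ForwardIterator>::value_type
    middle(ForwardIterator first, ForwardIterator last)
    {
        typename std::iterator_traits<ForwardIterator>::difference_type
            half = std::distance(first, last) / 2;
        std::advance(first, half);   // constant time for random access iterators
        return *first;
    }

    int main()
    {
        std::vector<int> v(7, 1);
        std::list<double> l(5, 2.5);
        return (middle(v.begin(), v.end()) == 1
             && middle(l.begin(), l.end()) == 2.5) ? 0 : 1;
    }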

    The only unfortunate thing is that the community has not always followed very quickly. Mistakes in design are being made that have real effects on the money companies can earn, because of lost flexibility. The response should be to educate the community about such theory, and to focus on working students through the difficult parts until the appropriate epiphanies are made. What I don't see as an acceptable solution, however, is to avoid the issue because some people might be sensitive about their lack of knowledge in a particular area, or to present handwaving about how all things are relative without substantiation. Nobody knows everything (yet). Everyone goes down mistaken paths. You don't let the child walk into the haunted forest (you don't slap them or yell at them either). You take their hand and try to lead them down the glorious path of knowledge (with the angelic choir accompaniment and weird, golden ambient light).

    You have been very useful to me toward that end, etaoin. I think you have explained fairly clearly some of the psychological resistance to the standard library. There really were a lot of people led astray by some poorly informed zealots of the object revolution who tried to sway people into monolithic hierarchies. That's quite obvious (** cough -- MFC! **). Really, the pattern and design community theorists never promoted that idea, but somehow it popped up. Refactoring and reuse still seem to be issues affecting mainly startup companies and large corporations, much more so than intermediate but established corporations with stable product lines or in-house corporate work. So I can definitely see a large community of programmers who may view the new technologies with skepticism ("first they taught me this, then they taught me that, and now they're telling me XYZ is gonna solve all my problems???"). And yes, some of the techniques are reminiscent of older styles that were abandoned because of the misinformation. So it seems to me that in order to coordinate a population-wide education on these topics, it will be necessary first to cut through some of the bs that has polluted the hype around the various paradigms, and to stress the importance of not looking at paradigms as mutually exclusive entities. That means demonstrating the challenges of iterative production and reuse through examples of the actual locking dynamics which occur in monolithic approaches, showing the evolution of the theory in the design community, and explaining the places where practice has diverged from theory. I probably shouldn't make blatant comments like "you're still not thinking algorithmically" either, since that probably pushes defensive buttons (even though you handled your responses quite rationally, etaoin). I also think there needs to be a lot more discussion of templates along the way, since some of the responses negative to the standard library have focused on misunderstandings of the template mechanism. I definitely think, though, that it is becoming clearer how to approach the education in the future, and that is certainly a happy thing to me!
    */*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/

    "It's hard to believe in something you don't understand." -- the sidhi X-files episode

    galathaea: prankster, fablist, magician, liar

  13. #43
    Join Date
    Aug 1999
    Posts
    586

    Re: std::string (and other stl) problems...

    Originally posted by etaoin

    With CString, I simply use Format(). With std::string, it appears that I have to first create an ostringstream object, stream the values into it using the '<<' operator, and when I'm done retrieve the string using ostringstream::str(). Isn't there an easier, more intuitive way? The same holds true for the STL container classes:
    STL is a deep subject and something I won't jump into here. As for formatting strings, have a look at the (trivial) "FormatStr" class seen under my name at http://www.codeguru.com/forum/showth...ght=formatstr. I'd be surprised if you want to go back to "CString::Format()" again.
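    The FormatStr class behind that link isn't reproduced here; for reference, the plain ostringstream route described in the quote above looks like this small, self-contained sketch:
    Code:
    #include <iomanip>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main()
    {
        // Stream the pieces in with operator<<, then pull the finished
        // std::string back out with str().
        std::ostringstream os;
        os << "order " << 42 << ", total "
           << std::fixed << std::setprecision(2) << 19.5;

        std::string line = os.str();   // "order 42, total 19.50"
        std::cout << line << std::endl;
        return 0;
    }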

  14. #44
    Join Date
    Dec 2003
    Posts
    23
    galathaea, thank you very much for your many valuable, interesting and insightful contributions to this thread. I really appreciate and share them (on a theoretical level), although I'm still stuck (on a practical level) with the same problem: Using the C++ standard library made my code more complicated and less readable. But maybe it's just me.

    In any case, I wish you all a Happy New Year!
