  1. #61
    Join Date
    Jun 2015
    Posts
    208

    Re: Abstraction concept problem?

    Quote Originally Posted by OReubens View Post
I do maintenance/revision a lot, and you often see devs essentially doing little more than writing 'raw pointer' interfaces, wrapping the pointers in a smart object, and then patting themselves on the back for a job well done and a 'clean design'. With that in mind, "pointers are evil" is shorthand for getting 'bad developers' to think harder when they do introduce pointers.
    So it's really about personal frustration over a bad job situation right? No up-front designs. No communication. Haphazard introduction of public types and interfaces. Inexperienced programmers wreaking havoc with a power-tool like C++.

    Looks more like a management problem than anything else to me. Maybe it's time to change jobs.
    Last edited by tiliavirga; July 28th, 2015 at 03:37 AM.

  2. #62
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: Abstraction concept problem?

    Quote Originally Posted by tiliavirga View Post
    So it's really about personal frustration over a bad job situation right? No up-front designs. No communication. Haphazard introduction of public types and interfaces. Inexperienced programmers wreaking havoc with a power-tool like C++.

    Looks more like a management problem than anything else to me. Maybe it's time to change jobs.
    No... it's not personal frustration.

    It's just that pointers add complexity and confusion that can often be avoided by using better alternatives.

There's nothing wrong with raw pointers; sometimes (even in the world of proper C++) they're the right tool for the job, and there are equally cases where smart pointers are the wrong tool. Just because you changed a raw pointer into a smart pointer doesn't mean you did the right thing.

Like I said, "pointers are evil" is a joking, though semi-serious, 'rule' intended to get developers to think harder about using pointers. If you can avoid them (even with some extra work) by using better types or other design approaches, that tends to be the preferred way out, because long term it will make maintenance of the code easier. And if you are making classes to be used by consumers outside your own company (you're making class libraries, or tooling), then keeping pointers out of the public interfaces will make your classes easier to use by outsiders and generate fewer support issues.
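As a rough illustration of that last point (all names here are hypothetical, not from the thread), a public interface can often hand out values and take references so callers never see a pointer at all:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical pointer-free public interface: instead of returning
// Item* or const char*, return values and take const references, so
// callers never face ownership or lifetime questions.
struct Item {
    std::string name;
};

class Inventory {
public:
    void add(Item item) { items_.push_back(std::move(item)); }

    // Return by value: no pointer, no lifetime questions for the caller.
    std::vector<std::string> names() const {
        std::vector<std::string> out;
        for (const Item& i : items_) out.push_back(i.name);
        return out;
    }

    // Take by const reference: caller keeps ownership, no pointer exposed.
    bool contains(const std::string& name) const {
        for (const Item& i : items_)
            if (i.name == name) return true;
        return false;
    }

private:
    std::vector<Item> items_;
};
```

This is only a sketch of the idea, but it shows how the extra work of copying or referencing keeps pointers entirely out of the class's public surface.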

  3. #63
    Join Date
    Jun 2015
    Posts
    208

    Re: Abstraction concept problem?

    Quote Originally Posted by superbonzo View Post
    unfortunately, I don't think so. The original issue was not just about the dangers of (raw) pointers, but rather their role in interface designs.

    I ( and probably OReubens ) claimed that values and then references should be preferred over pointers ( dumb and ( to a lesser extent ) smart ) whenever possible/reasonable.

    Whereas, tiliavirga ( as far as I can tell ) claims that smart pointers should be *the* preferred ( and possibly only ) way of dealing with polymorphic objects; IOW, c++11 smart pointers ( referring to some object ) are to c++ as variables ( referring to some object ) are to Java.

    As a consequence of this claim it follows that the primary way of dealing with polymorphic objects ( always according to tiliavirga's reasoning ) is via reference semantics.

    Now, both tiliavirga and TheGreatCthulhu claim that value and reference semantics are equally safe at a design level, and that any observed issue with the latter ( in c++ ) should be imputed to developer ignorance concerning OOP and/or to the use of unsafe constructs like raw pointers and such.
I go even further and claim that it's the general consensus that shared-ownership polymorphic objects (ubiquitous in every OO program) are best treated as reference types with automatic memory management support. This has been the established view since the introduction of Java some 20 years ago, and it's now fully adopted by C++ as of C++11.

    Conversely, I claimed that reference semantics is intrinsically more complex than value semantics ( given two otherwise identical interfaces ) and that if we want to pass an object of type T,

---> responsibilities of the "handle" --->
T, T const&, T&, unique_ptr<T>, weak_ptr<T>, shared_ptr<T>, T const*, T*

we should always choose the type with the smallest responsibility compatible with the interface semantics.
The semantics of a T object is an intrinsic property of T. Whether T is to be a value type or a reference type is determined by design. No handling of a T object can change its semantics; this includes, for example, how it's passed or stored.

    So your "principle" is moot since all "handles" are open for use with T regardless of whether it has been given value semantics or reference semantics.

    In practice a reference type T most likely will be a (reference counting) smart pointer, like

    Code:
    class T_Base {};
    typedef std::shared_ptr<T_Base> T;
    And a T parameter in an interface function will typically look like this,

    Code:
    void fun(T t) {}
    or possibly a const-ref version for efficiency reasons like this

    Code:
    void fun(const T& t) {}
    How evil is that?
    Last edited by tiliavirga; July 28th, 2015 at 09:00 AM.

  4. #64
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: Abstraction concept problem?

    Quote Originally Posted by tiliavirga View Post
I go even further and claim that it's the general consensus that shared-ownership polymorphic objects (ubiquitous in every OO program) are best treated as reference types with automatic memory management support. This has been the established view since the introduction of Java some 20 years ago, and it's now fully adopted by C++ as of C++11.
Going out on a limb here, and I may be totally misinterpreting this... but...

Java has automatic objects and a garbage collector.
C++ does not. There is nothing like a "shared ownership consensus" AT ALL in C++. In fact, C++ does not have ownership, period. If you desire ownership of any kind, it is up to the programmer to explicitly program it that way. There are a couple of classes in the standard library that help you do this, but they are by no means the best or only way to achieve it.


    In practice a reference type T most likely will be a (reference counting) smart pointer, like
    How evil is that?
    The above is VERY evil. Probably even more evil than the simple 'in jest' uttered "pointers are evil" rule.

Saying that a reference type will always (or mostly) be a reference-counted smart pointer is dangerous.
It leads to people oversimplifying by just making everything reference counted, and while that may mostly work most of the time, it can have serious consequences.

The preferred type to use is unique_ptr<> by default; only use shared_ptr if you know you need shared ownership, or if you can't tell from the start whether you are going to need it (though you really should do more analysis beforehand).
The main reasons to prefer unique_ptr are that:
- it's simpler, and you always want to use the simplest semantics that are sufficient. This isn't any different from choosing a type of iterator or type of container for a particular problem. Always prefer the simplest semantics that are sufficient; doing otherwise will increase code size, add extra restrictions, and lower performance.
- unique_ptr is more efficient, since it doesn't need a control block, management of weak_ptrs, or management of a reference count (which typically is synchronised, so we're talking several hundred clock cycles).
- unique_ptr is more flexible and keeps your options open. You can easily convert to a shared_ptr later; the reverse might require lots of minor code changes. unique_ptr also allows you to return to a raw pointer easily if you need this.
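That one-way flexibility can be sketched in a few lines (Widget and the helper functions here are made up for illustration):

```cpp
#include <memory>
#include <utility>

struct Widget { int value = 42; };

// Start with the simplest sufficient semantics: unique ownership.
std::unique_ptr<Widget> make_widget() {
    return std::make_unique<Widget>();
}

// If shared ownership turns out to be needed later, the upgrade is a
// single move: std::shared_ptr has a constructor taking unique_ptr&&.
std::shared_ptr<Widget> promote(std::unique_ptr<Widget> up) {
    return std::shared_ptr<Widget>(std::move(up));
}
```

Going the other way (shared_ptr back to unique_ptr) has no such one-liner, which is exactly why starting with unique_ptr keeps your options open.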


If you want to do things right:
You use shared_ptr when you want to clearly indicate shared ownership of the pointer.
You use unique_ptr when you want to clearly indicate unique ownership (when used as a member) and transfer of ownership (when used as a (value) parameter).
You use a raw pointer (only as a parameter) when you want to indicate that no ownership can be assumed: the caller maintains ownership, and the lifetime of the pointer cannot be assumed to exceed the lifetime of the function scope.
You use a raw pointer as a member only when you're managing ownership yourself and, for whatever reason, none of the STL smart pointers are a suitable enough fit for your needs.
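As a sketch of those conventions side by side in one class (Renderer, Texture, Logger, and Window are hypothetical names, not from the thread):

```cpp
#include <memory>
#include <utility>

struct Texture {};
struct Logger {};

class Renderer {
    // unique_ptr member: Renderer is the sole owner of its texture.
    std::unique_ptr<Texture> texture_;

    // shared_ptr member: ownership of the logger is explicitly shared
    // with whoever else holds a shared_ptr to the same Logger.
    std::shared_ptr<Logger> logger_;

    // Raw pointer member: no ownership implied; someone else guarantees
    // the window outlives this Renderer (only when no smart pointer fits).
    struct Window* window_ = nullptr;

public:
    // Taking the smart pointers by value and moving makes the transfer
    // (unique) and the sharing (shared) explicit at the call site.
    Renderer(std::unique_ptr<Texture> t, std::shared_ptr<Logger> log)
        : texture_(std::move(t)), logger_(std::move(log)) {}
};
```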

With the above in mind, for function parameters:
Use T& or T* as function parameters; only use smart pointers (see below) when you want to indicate some change in ownership is involved.

void foo( unique_ptr<T> p) transfers ownership of the pointer to the function; the object will either be destroyed, or have its ownership transferred further by this function. This is also called a "sink". After calling this foo(), the caller no longer has a (guaranteed) valid pointer to T.

    void foo (unique_ptr<T>& p) implies that this function is going to (or may) transfer ownership of a new T INTO p.

void foo (const unique_ptr<T>& p) is semantic nonsense. It's const, so it's only an input; why use unique_ptr when no ownership can ever be transferred in this function? This function should have been prototyped as void foo(T* p) (if null is required), indicating no ownership is assumed, or foo(T&).

void foo( shared_ptr<T> p) implies that shared ownership will be MOVED (std::move). For anything other than that, it's a load of performance lost (the ref count is incremented and decremented) with nothing gained. There is a small/tiny use case where you are given a temporary shared pointer and then need to temporarily take ownership, but this is more indicative of bad design decisions elsewhere. With reference counting, you want to avoid doing any actual reference counting unless there is an explicit and actual intent to create a new reference to T.
What people often get wrong (even experts) is that the above is NOT NEEDED to imply lifetime of T for the duration of the function. You already get that for free even if it is passed as a plain T*.

void foo (shared_ptr<T>& p) is OK; it implies that this function is going to (or may) change p into having ownership of another T.

void foo (const shared_ptr<T>& p) is probably wrong, and you wanted foo(T*) (if nullable) or foo(T&). I know from a previous discussion that there seems to be some confusion about a somewhat obscure usage case, but I can't say I've encountered a realistic need for it.
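Taken together, the signatures above might be sketched like this (the function names and bodies are hypothetical placeholders, only the parameter types carry the meaning):

```cpp
#include <memory>
#include <utility>

struct T { int id = 0; };

// Sink: takes ownership; the caller must std::move its unique_ptr in
// and no longer has a (guaranteed) valid pointer afterwards.
int consume(std::unique_ptr<T> p) { return p->id; }

// May hand the caller ownership of a *new* T via the reference.
void produce(std::unique_ptr<T>& p) { p = std::make_unique<T>(); }

// Takes a share of ownership: pass by value and std::move at the call
// site when the caller is done with its own copy, so no reference
// count is touched needlessly.
long keep(std::shared_ptr<T> p) { return p.use_count(); }

// No ownership change at all: plain reference (or T* if null is allowed).
int inspect(const T& t) { return t.id; }
```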


The problem with the above is that it isn't immediately obvious when looking at code whether these rules are being followed. Is that foo(T*) obeying the "rules", or did whoever wrote this just use a sloppy/oldschool raw pointer? I have suggested to some people in this predicament to add an unowned_ptr<T> class, which is a simple wrapper over a T* implying no kind of ownership. That way you can change T* parameters to unowned_ptr<T> and distinguish "old/bad" code from "modern" code.
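A minimal sketch of such a wrapper (unowned_ptr is the name suggested above, not a standard library class) might look like:

```cpp
#include <cstddef>

// Minimal non-owning pointer wrapper: behaves like a T*, but documents
// in the type that no ownership is implied. It never deletes anything.
template <typename T>
class unowned_ptr {
public:
    unowned_ptr(T* p = nullptr) : p_(p) {}
    T* get() const { return p_; }
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
    explicit operator bool() const { return p_ != nullptr; }
private:
    T* p_;  // observed, never owned, never deleted
};

// A parameter declared this way reads as "modern, deliberately unowned":
struct Config { int retries = 3; };
int retries_of(unowned_ptr<Config> cfg) { return cfg ? cfg->retries : 0; }
```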


Still not convinced pointers are complex and 'evil'?
    Last edited by OReubens; July 29th, 2015 at 08:34 AM.
