
View Poll Results: Do you design for Parallel Processing?

Voters: 14
  • Yes, and I have for more than 2 years

    4 28.57%
  • Yes, and I have for more than 1 year

    1 7.14%
  • Yes, and I have for more than 6 months

    1 7.14%
  • No, but I plan to start

    4 28.57%
  • No, I don't know how

    4 28.57%
  • What is Parallel Processing

    0 0%
Page 1 of 3 123 LastLast
Results 1 to 15 of 39
  1. #1
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Today is the day most software turned to "garbage"...

It is here... 8 logical processors on a reasonably priced desktop machine.

    Unfortunately, it is estimated that 95% of all existing software is unable to make use of more than one processor.
    TheCPUWizard is a registered trademark, all rights reserved. (If this post was helpful, please RATE it!)
    2008, 2009,2010
    In theory, there is no difference between theory and practice; in practice there is.

    * Join the fight, refuse to respond to posts that contain code outside of [code] ... [/code] tags. See here for instructions
    * How NOT to post a question here
    * Of course you read this carefully before you posted
    * Need homework help? Read this first

  2. #2
    Join Date
    Dec 2004
    Location
    Poland
    Posts
    1,165

    Re: Today is the day most software turned to "garbage"...

On topic: Generally I do not program for parallel processing, because I do not need to. I have no experience in it either. But I am aware of it, and if I were to create or work on an OS, or on software performing extensive, "parallelizable" calculations, I would give it a shot.


My OT rant: It's not only that, man... I think that we need to introduce a new measurement of software quality: it would be expressed as the percentage of cases in which it works correctly. Like, you know: "This Windows XP is a whole heavenly 98% good! Not that [bad word] 50% Windows 98 was!" or "This new Java Enterprise Server is just a lousy 70%'er, I had problems configuring it on 2 machines, and I can't get some stuff to work on the remaining ones". I think you get the idea. It's just that nowadays, when silicon is cheap, mass storage is really massive, and so on, the only things that grow are the system requirements and the advertisement folders for new software. Quality drops dead; the "brand-new, improved, easy to configure and deploy software which keeps your industry running at top levels" is usually slow, unconfigurable, undeployable, expensive. Or is that just me?
    And tell me, what is the difference in FUNCTIONALITY, ERGONOMICS and USABILITY between Word '97 and Word TheNewestOne that justifies minimum requirements tens to hundreds of times bigger?

    Doh, I could go with this for hours...

    EDIT: feel free to discuss my points in IT Grumble thread! http://www.codeguru.com/forum/showth...82#post1783182
    Let's keep this thread and survey on topic
    Last edited by Hobson; November 17th, 2008 at 09:45 AM.
    B+!
    'There is no cat' - A. Einstein

    Use [code] [/code] tags!

    Did YOU share your photo with us at CG Members photo gallery ?

  3. #3
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

    Quote Originally Posted by Hobson View Post
On topic: Generally I do not program for parallel processing, because I do not need to. I have no experience in it either. But I am aware of it, and if I were to create or work on an OS, or on software performing extensive, "parallelizable" calculations, I would give it a shot.
That is the general attitude I expect. But I have to ask: would the people who are using your programs

    a) be just as happy running them on a 333MHz computer as on a 3.2GHz computer?

    b) turn down an upgrade to a 25GHz computer??


My OT rant: It's not only that, man... I think that we need to introduce a new measurement of software quality: it would be expressed as the percentage of cases in which it works correctly. Like, you know: "This Windows XP is a whole heavenly 98% good! Not that [bad word] 50% Windows 98 was!" or "This new Java Enterprise Server is just a lousy 70%'er, I had problems configuring it on 2 machines, and I can't get some stuff to work on the remaining ones". I think you get the idea. It's just that nowadays, when silicon is cheap, mass storage is really massive, and so on, the only things that grow are the system requirements and the advertisement folders for new software. Quality drops dead; the "brand-new, improved, easy to configure and deploy software which keeps your industry running at top levels" is usually slow, unconfigurable, undeployable, expensive. Or is that just me?
    And tell me, what is the difference in FUNCTIONALITY, ERGONOMICS and USABILITY between Word '97 and Word TheNewestOne that justifies minimum requirements tens to hundreds of times bigger?

    Doh, I could go with this for hours...
THAT would be an excellent thread, and I have distinct opinions.....

  4. #4
    Join Date
    Dec 2004
    Location
    Poland
    Posts
    1,165

    Re: Today is the day most software turned to "garbage"...

Back on topic: does anyone here know whether Windows, or any part of it, helps me utilize CPU cores in any way that is helpful but invisible to me? Like, when I create a new thread, does the OS pick the core on which it runs? When I write a native Win32 app, or an MFC app, anything like that, are mouse messages processed on one core, painting messages on another, and other ones somewhere else?
    Does Windows (or MS libraries, like MFC and others) help me use the computational power of my CPU in a single native application, without the need to use multi-CPU / multithreading APIs myself?
    I would be really grateful for any explanations, readings, and info on this topic.

    Cheers

  5. #5
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

The Parallel Extensions library [which will be baked into .NET 4.0] allows you to effectively use multiple execution paths; the library, in conjunction with the OS, will very nicely make use of a significant number of cores (if used properly).
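For readers on the native side, the data-parallel loop idea behind a library like Parallel Extensions can be sketched by hand. This is a minimal illustration using C++11 std::thread; the function name and the simple chunk-per-worker partitioning are my own, not part of any Microsoft library:

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Hand-rolled analogue of a data-parallel loop: split the range into one
// chunk per worker, let each thread reduce its own chunk, then combine the
// partial results after all workers have joined.
long parallel_sum(const std::vector<long>& data, unsigned workers) {
    std::vector<long> partial(workers, 0);
    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        pool.emplace_back([&data, &partial, w, begin, end] {
            // Each worker writes only its own slot: no locking needed.
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0L);
        });
    }
    for (auto& t : pool) t.join();   // wait for every chunk to finish
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```

A library such as Parallel Extensions wraps exactly this kind of partition/join plumbing (plus smarter scheduling) so that application code stays close to a one-liner.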

  6. #6
    Join Date
    Dec 2004
    Location
    Poland
    Posts
    1,165

    Re: Today is the day most software turned to "garbage"...

    Thank you for your answer.

But as I understand it, the Parallel Extensions library will be available for the .NET platform. What about native applications?

    My main point is that I think there should be transparent, implicit support for parallel processing provided by the environment the developer uses. I think it would be a good thing if, e.g., when I make a call to some asynchronous WinAPI function, it were automagically executed on the least utilized core. Or when I use some GUI library (be it MFC, wxWidgets, anything), it processed messages, called handlers and painted after picking the best (least used?) CPU. I think it is the library providers who should think about how to make their code cope well with multicore systems, not me.

    You ask in your survey whether I design for parallel processing. My answer is: should I? Of course, sometimes I should, when performing parallelizable calculations etc. But in the general case, shouldn't it be the components that are designed for parallel processing? Then every desktop application I create would efficiently use multiple CPUs without any effort on my side.


    I hope you understand my point; I feel that I did not express myself clearly enough, but I can't do better...

  7. #7
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

For native applications I am not aware of any Microsoft solutions at the higher level, but the information in the "slow chat" covers a bunch of lower-level tools (e.g. Futures/Promises) and their status in Dev10 (aka VS-2010).

    There are also the Intel libraries, which I have been using in production code for a few years.

    The biggest issue is that the entire philosophy of a massively multi-path system is radically different in design.

    For example, using immutable objects (i.e. EVERYTHING is done in the constructor and ALL methods are const) rather than locking/synchronization is a powerful tool that usually causes people's heads to explode the first time they encounter it (imagine NOT being able to add or remove items from a collection!!)
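To make the immutable-object idea concrete, here is a minimal C++ sketch (the class and its members are illustrative, not from any library). All state is fixed in the constructor, every method is const, and "modification" produces a new object, so instances can be freely shared across threads with no locks:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// An immutable integer list: everything happens in the constructor, all
// methods are const, so concurrent readers need no synchronization at all.
class ImmutableList {
public:
    // All state is established here and never changes afterwards.
    explicit ImmutableList(std::vector<int> items) : items_(std::move(items)) {}

    std::size_t size() const { return items_.size(); }
    int at(std::size_t i) const { return items_.at(i); }

    // "Adding" does not mutate this list; it returns a brand-new one.
    ImmutableList with(int value) const {
        std::vector<int> copy = items_;
        copy.push_back(value);
        return ImmutableList(std::move(copy));
    }

private:
    const std::vector<int> items_;   // const member: frozen after construction
};
```

The copying cost is the price paid for never needing a lock; persistent data structures reduce that cost by sharing unchanged pieces between versions.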

  8. #8

    Re: Today is the day most software turned to "garbage"...

    My vote was, No, but I plan to.

    I haven't the most prolific of software portfolios at the moment, most of the stuff I am doing is for work.

    My current home project won't be designed for parallel programming, but I know its successor will be.

    The biggest joke is my workplace where we have code that would run best on P4's, but we produce the servers to run it on and are now onto quad core processors. Three out of those four cores are left practically untouched.

    The response from the company is that there are no plans to do anything about it.

    Along with other issues, it can be like banging your head against a concrete wall to get them to move on anything progressive.

  9. #9
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

    Quote Originally Posted by goatslayer View Post
    ....The biggest joke is my workplace where we have code that would run best on P4's, but we produce the servers to run it on and are now onto quad core processors. Three out of those four cores are left practically untouched.

    The response from the company is that there are no plans to do anything about it.

    Along with other issues, it can be like banging your head against a concrete wall to get them to move on anything progressive.
This scenario illustrates my point quite well. IMPO: the real problem is that the architect for the project did not design for parallelization 3-6 years ago.

    "Migrating" an application from a single execution unit to multiple ones typically involves a scrap and re-write. But designing for it from the beginning is a fairly small incremental cost (relative to a scrap and re-write).

    Imagine how "delighted" everyone would have been if, the first time it was run on a quad-core, it immediately showed a 3x performance improvement (you never get 100% scalability due to overhead).

  10. #10
    Join Date
    Mar 2004
    Posts
    235

    Re: Today is the day most software turned to "garbage"...

I would like to learn how to code for multiple cores. I see some people have managed to optimize this way, but I can't find resources on how to do it. Do I simply create threads and the OS will put the processes on different cores? Are there specific OS constructs? I couldn't find online sources on how to code for multicore processes, only concepts.

    The best I have found is Taking Advantage of Multiple CPUs for Games, which has a few concepts.
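On the narrow mechanical question, the short answer is: yes, threads are the basic construct, and the OS scheduler decides which core each one runs on. A minimal sketch with C++11 std::thread (the function here is illustrative; real tasks would do actual work):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// You create threads; the OS places them on cores. Standard C++ has no
// portable "run this on core 3" call -- pinning, if you ever need it, is an
// OS-specific API (e.g. SetThreadAffinityMask on Windows).
unsigned run_independent_tasks(unsigned task_count) {
    std::atomic<unsigned> done(0);
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < task_count; ++i)
        pool.emplace_back([&done] { ++done; });  // stand-in for real work
    for (auto& t : pool) t.join();               // wait for all tasks
    return done.load();
}
```

std::thread::hardware_concurrency() reports how many logical processors are available, which is a sensible default for the task count. The hard part, as the rest of this thread argues, is not spawning threads but decomposing the problem into independent pieces.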
    01101000011001010110110001101100011011110010000001110011011001010111100001111001

  11. #11
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

    Quote Originally Posted by answer View Post
    Do I simply create threads and the OS will put the processes on different cores?
    If only it was that simple.

    The truth is that most people are taught from a very young age (3-5 years old) to think in sequential steps, to address each aspect of a "problem" individually and one after the other.

Writing parallel-executing programs requires that a person break this lifetime habit and develop a thought process of "how many things can I do at the same time?" and "how can I address the problem so that the parts are independent?".

This is why the "thought" provided by Hobson a while back (intrinsic support) is only a very small part of the overall issue.

One great "real-world" example (which has nothing directly to do with programming) is the TV show "Extreme Makeover: Home Edition". For those not familiar with it, it involves the demolition of a home, then the building and furnishing of a new one, in under 7 days, for a deserving family. I can attest that they actually do it in that time frame, because one of the projects was in a town about 7 miles from where I lived.

    They approach it by having a few hundred contractors (and often upwards of a thousand volunteers) all doing whatever they can, as early in the project as they can, with minimal dependencies on other workers completing their tasks. [It is really amazing to see.] There is no "specialized" technology involved.

    Compare this to the average home which takes 2-6 months to build....

If someone were to approach a "conventional" builder, hand them a blank check, and tell them that they had 168 hours to COMPLETE (including furnishing and decorating) a project, 99.999% of them would be completely incapable of performing the task.

    It goes directly back to what I said at the beginning of the post....their mindset and approach is simply not suited to the task at hand.

This is why I am disappointed (but not at all surprised) that the MAJORITY of people who have responded to the poll answered "I don't know how". I really wish I could sample THAT category and ask if they had completed an education in programming at a school or other institution.....

  12. #12
    Join Date
    Jun 2008
    Posts
    592

    Re: Today is the day most software turned to "garbage"...

I think the OS should handle all the splitting of processes and threads onto the appropriate CPU.

    The OS has way more leverage than the coders of individual processes do. It has many processes it can offload, not just little-to-big threads, so it should manage the CPU assignments for all CPUs.

    If you have 60 active processes, why shouldn't the OS split up the processes and redirect their execution onto separate CPUs, while trying not to max out one CPU while another one doesn't get any use? I find it hard to believe that every coder out there will have to manage this issue. The number of cores on a single CPU isn't going to stay the same. The only good idea is to let something have full control over all processes and let it deal with what gets to which CPU.
    0100 0111 0110 1111 0110 0100 0010 0000 0110 1001 0111 0011 0010 0000 0110 0110 0110 1111 0111 0010
    0110 0101 0111 0110 0110 0101 0111 0010 0010 0001 0010 0001 0000 0000 0000 0000
    0000 0000 0000 0000

  13. #13
    Join Date
    Mar 2001
    Posts
    2,529

    Re: Today is the day most software turned to "garbage"...

    Like, when I create new thread, does OS pick a core on which it is run?
Hyperthreading is a big help here.

    In the past, people had to use an API to start a thread on a processor other than 0.

    With Hyperthreading, threads are started on whichever core/processor the system deems best.

    What I am saying is: if your app is currently multi-threaded and Hyperthreading is turned on in your BIOS, you are taking advantage of multiple processors/cores.
    Last edited by ahoodin; January 9th, 2009 at 12:02 PM. Reason: grammer
    ahoodin
    To keep the plot moving, that's why.

  14. #14
    Join Date
    Jun 2008
    Posts
    592

    Re: Today is the day most software turned to "garbage"...

So I vote not to worry about threading unless you do lots of loops that can be broken up to run out of sync. I will wait until the C++0x threading library comes out to consider it further.

  15. #15
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: Today is the day most software turned to "garbage"...

    Quote Originally Posted by Joeman View Post
So I vote not to worry about threading unless you do lots of loops that can be broken up to run out of sync. I will wait until the C++0x threading library comes out to consider it further.
That (along with ahoodin's post) is the part that gets me "upset" (when seen in professional developers).

First, there is no reason to wait for the ISO standard implementation of tools that help (but do not magically accomplish) parallelization; there have been very good libraries available for YEARS (consider Intel's offerings as an example).

Second, this does NOT address the fact that nearly every "real-world" application can significantly benefit from taking advantage of all of the processors (hyper-threads, cores, CPUs) that are available.

    Third, the entire philosophy of architecture and implementation is radically different. Unless this is taken into account now (or, ideally, 3-5 years ago), it will basically mean an entire scrap and re-write of the application (which most companies will not be willing to do, thus perpetuating the under-utilization of the hardware).

    As a trivial example, consider the following sequence:
    Code:
1) Perform I/O to acquire data
    2) Do calculations that must be sequential on the data in the order it is received
    3) Perform calculations on the results that "could" be done in parallel.
First, let's assume that #2 is "fast" compared to #1... this leads to the first improvement (which is already fairly common):
    Code:
    1) Initiate I/O as asynchronous blocks
    2) As blocks become available perform sequential calculations
3) Wait for I/O (and calculations) to complete
    4) Perform calculations on the results that "could" be done in parallel.
    Simple "overlapped I/O" (although it may be implemented using other techniques).
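Under stated assumptions, the overlapped sequence above can be sketched with C++11 std::async. Here read_block is a hypothetical stand-in for real asynchronous I/O; the point is that the read of block N+1 is in flight while the sequential calculation runs on block N:

```cpp
#include <future>
#include <numeric>
#include <vector>

// Simulated I/O: "reads" a block of 100 values, each equal to index + 1.
std::vector<int> read_block(int index) {
    return std::vector<int>(100, index + 1);
}

// Overlapped pattern: initiate the next read asynchronously, then do the
// sequential calculation on the block just received while that read runs.
long overlapped_sum(int num_blocks) {
    long total = 0;
    std::future<std::vector<int>> next =
        std::async(std::launch::async, read_block, 0);  // initiate first read
    for (int i = 0; i < num_blocks; ++i) {
        std::vector<int> block = next.get();            // wait for this block
        if (i + 1 < num_blocks)                         // start next read NOW
            next = std::async(std::launch::async, read_block, i + 1);
        // Sequential calculation, overlapped with the in-flight read above.
        total += std::accumulate(block.begin(), block.end(), 0L);
    }
    return total;
}
```

Real code would use the platform's overlapped/asynchronous I/O facilities rather than a worker thread, but the shape of the pipeline is the same.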
Now let's work on the last step... The way most people (and every library and language implementation, present and near future, that I know of) will take this is:
    Code:
    1) Initiate I/O as asynchronous blocks
    2) As blocks become available perform sequential calculations
    3) Wait for I/O (and calculations) to complete
    4) Spin up "processing paths" (threads, or fibers, or other constructs) to handle distributed work.
    5) Perform independent calculations on the individual paths
    6) Wait for all paths to complete.
Better (probably), but still not taking the "philosophy" into account.

a) During steps 1-3 above, our premise (which would of course be supported by measurements in any actual development) is that the CPU(s) will NOT be at 100% capacity; I/O will still be the controlling factor.

    b) Any method of spinning up "processing paths" will have overhead (e.g. creation of threads, initialization of state).

    So the application would be better designed as...

Code:
    1) Spin up "processing paths" (threads, or fibers, or other constructs) to handle distributed work at a lower priority, having them block when fully initialized.
    2) Initiate I/O as asynchronous blocks
    3) As blocks become available perform sequential calculations
    4) Wait for I/O (and calculations) to complete
    5) Wait (if necessary) for all "processing paths" to become ready
    6) Perform independent calculations on the individual paths
    7) Wait for all paths to complete.
This type of design requires a much more "global" view of the application as a whole, and it is very unlikely that we will see programming environments which can automatically adopt a "holistic" approach.
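The pre-started-paths step can be sketched as follows (a minimal illustration with C++11 threads and a condition variable; the class and member names are mine). Workers are created up front, while the I/O would still be running, and block until the data is ready:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Workers are spun up early and block on a condition variable once fully
// initialized; release() unblocks them all when the data is ready.
class PrestartedWorkers {
public:
    PrestartedWorkers(unsigned n, std::vector<long>& results)
        : results_(results) {
        results_.assign(n, 0);
        for (unsigned i = 0; i < n; ++i)
            threads_.emplace_back([this, i] { run(i); });
    }
    void release() {                       // data ready: unblock everyone
        { std::lock_guard<std::mutex> lk(m_); go_ = true; }
        cv_.notify_all();
    }
    void join() { for (auto& t : threads_) t.join(); }

private:
    void run(unsigned i) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return go_; });  // fully initialized: park here
        lk.unlock();
        results_[i] = i * 10L;                 // the "independent calculation"
    }
    std::mutex m_;
    std::condition_variable cv_;
    bool go_ = false;
    std::vector<long>& results_;
    std::vector<std::thread> threads_;
};
```

By the time release() is called, the thread-creation and initialization overhead has already been paid, during the I/O-bound phase when CPU capacity was idle anyway.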


If there is this much of a difference in structuring a simple path, imagine scenarios such as...

a) A data-intensive application where the user will typically perform some operations and then generate a report... if processing power is available, why not generate the report dynamically in the background, so that from the user's perception the process becomes "instant"?

b) A word processor application that does spell checking on an existing document... why not split the document up and do the checking on multiple threads? Sure, it will involve duplicate checks and more "total" processing, but if tuned to use available resources it will result in shorter times.

c) An "Explorer" (not IE, but the desktop one) that pre-fetches the adjacent directories (i.e. those most likely to be navigated to) and puts a "watcher" on them?

Compared to the list of applications that could not benefit from multiple execution paths, the list of those that could is virtually infinite.
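Scenario (a) fits in a few lines of C++11. Here build_report is a hypothetical stand-in for the real report generator; the cache kicks it off on a spare core as soon as the data changes, so the user's later request completes (near-)instantly:

```cpp
#include <future>
#include <string>

// Hypothetical report builder: in a real application this would be the
// expensive aggregation/formatting work.
std::string build_report(int records) {
    return "report over " + std::to_string(records) + " records";
}

// Start the report in the background when the data changes; fetch() simply
// collects the already-running (or finished) result.
class ReportCache {
public:
    void data_changed(int records) {       // spare cores do the work early
        pending_ = std::async(std::launch::async, build_report, records);
    }
    std::string fetch() { return pending_.get(); }  // "instant" to the user
private:
    std::future<std::string> pending_;
};
```

A production version would also invalidate or restart the background job when the data changes again mid-build; that bookkeeping is omitted here.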

