
Thread: newmat vs boost

  1. #16
    Join Date
    Mar 2002
    Location
    St. Petersburg, Florida, USA
    Posts
    12,125

    Re: newmat vs boost

    Quote Originally Posted by cj-wijtmans View Post
    First of all, if the GPU is busy doing 3-D rendering, the CPU will most likely be at 100%; if there is a multicore CPU, then the cores will be busy with other threads, even if they would otherwise be idle.
    Depends a lot on the motherboard... consider:
    Compared to a quad-socket Xeon X7460 (24 cores) at 2.66 GHz, the dual-socket X5570 at 2.93 GHz with HT enabled (two fewer physical CPUs, but 16 virtual cores and 8 physical cores) came in just 3.2% behind at 25,000 (compared to the X7460's 25,830). With HT disabled (comparing 8 physical cores to 24 cores) it came in slightly lower at 23,650, about 8.4% behind the X7460.
    If the machine is going to be dedicated to one application, then that is a lot of serious processing power.
    Quote Originally Posted by cj-wijtmans View Post
    and its not reasonable to assume that there is a 3D app running while doing complex heavy processing.
    The exact point is that without knowing more about the specific use case, either end of the spectrum (or somewhere in the middle) is possible.
    Quote Originally Posted by cj-wijtmans View Post
    multicore CPUs will be a lot worse at data processing than a single core.

    GPUs are really good at processing large amounts of data; it will be a rare case in this situation that a new GPU loses against a new CPU.
    Much depends on the data coupling and on how much of it can be broken into independent chunks... again, we don't know...

    Quote Originally Posted by cj-wijtmans View Post
    There are also cards available that are just for physics processing. In the end the best solution would be a new card just for doing data processing, but I don't see that happening.
    There is little doubt that a dedicated processor would be a good choice, but once again we don't know enough about the actual situation to say whether it would be appropriate.

    Every point you have raised could easily be true, but there is definitely not enough known to make an across-the-board declaration as to any of the approaches (including the one I introduced) being the "best" choice for a specific (but unknown) condition.
    TheCPUWizard is a registered trademark, all rights reserved. (If this post was helpful, please RATE it!)
    2008, 2009, 2010
    In theory, there is no difference between theory and practice; in practice there is.

    * Join the fight, refuse to respond to posts that contain code outside of [code] ... [/code] tags. See here for instructions
    * How NOT to post a question here
    * Of course you read this carefully before you posted
    * Need homework help? Read this first

  2. #17
    Join Date
    Nov 2008
    Location
    Netherlands
    Posts
    77

    Re: newmat vs boost

    I agree it's all "if"s.

    But the fact remains that the multitasking system will "poison" the core, and that GPUs can process larger chunks at a higher rate. I guess there is a way to "dedicate" a CPU core to one thread (a sketch follows below), but I still don't see how that can beat a GPU.
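    A minimal sketch of what I mean by "dedicating" a core, assuming Windows and the Win32 API. Note that pinning only stops the scheduler from migrating the thread; it does not reserve the core exclusively, so other threads can still be scheduled on it:

    [code]
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Pin the current thread to core 0 (bit 0 of the affinity mask).
        // The scheduler will no longer migrate this thread between cores.
        DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
        if (previous == 0)
        {
            printf("SetThreadAffinityMask failed, error %lu\n", GetLastError());
            return 1;
        }

        // ... run the heavy data-processing loop here; it now stays on core 0 ...

        return 0;
    }
    [/code]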

    Hmm, I was kind of hoping that physics processors would become standard in computers...

  3. #18
    Join Date
    Nov 2003
    Posts
    1,405

    Re: newmat vs boost

    Quote Originally Posted by Lindley View Post
    I know about CUDA and CTM and whatnot, I was just surprised to learn that Microsoft is throwing their own entry into the mix as well. It's not like they make graphics cards....
    Well, not owning hardware hasn't stopped Microsoft before. They already have their own low-level graphics API in DirectX, and now they apparently have their own language for controlling graphics cards for general computing.

  4. #19
    Join Date
    Nov 2003
    Posts
    1,405

    Re: newmat vs boost

    Quote Originally Posted by cj-wijtmans View Post
    I agree it's all "if"s.

    But the fact remains that the multitasking system will "poison" the core, and that GPUs can process larger chunks at a higher rate. I guess there is a way to "dedicate" a CPU core to one thread, but I still don't see how that can beat a GPU.

    Hmm, I was kind of hoping that physics processors would become standard in computers...
    CPUs will get more and more cores, but it will take some time until they reach the hundreds that are available on a GPU today.

    Physics processors are kind of standard already because most new computers have a programmable GPU.

    http://www.ddj.com/hpc-high-performa...ting/207200659

    Both the CPU and the GPU can be programmed in the same way using fine-grained parallelism. For the CPU, the Intel TBB library can be used, for example (a minimal sketch follows the link below).

    http://www.threadingbuildingblocks.org/
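    As a minimal sketch (assuming a TBB release of this era, which needs an explicit scheduler init; the Saxpy functor is just an illustrative example), a loop split across cores looks like this:

    [code]
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <tbb/task_scheduler_init.h>
    #include <vector>

    // Body object: TBB hands each worker thread a sub-range of indices.
    struct Saxpy
    {
        float a;
        const std::vector<float>& x;
        std::vector<float>& y;
        Saxpy(float a_, const std::vector<float>& x_, std::vector<float>& y_)
            : a(a_), x(x_), y(y_) {}
        void operator()(const tbb::blocked_range<size_t>& r) const
        {
            for (size_t i = r.begin(); i != r.end(); ++i)
                y[i] = a * x[i] + y[i];  // y = a*x + y on this sub-range
        }
    };

    int main()
    {
        tbb::task_scheduler_init init;  // explicit init, required by older TBB
        std::vector<float> x(1000000, 1.0f);
        std::vector<float> y(1000000, 2.0f);
        tbb::parallel_for(tbb::blocked_range<size_t>(0, x.size()),
                          Saxpy(3.0f, x, y));
        return 0;
    }
    [/code]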
    Last edited by _uj; January 13th, 2009 at 12:43 AM.

  5. #20
    Lindley (Elite Member, Power Poster)
    Join Date
    Oct 2007
    Location
    Seattle, WA
    Posts
    10,895

    Re: newmat vs boost

    I know they've got DirectX; I was just kind of hoping that for once everyone could get behind one standard (OpenCL) rather than throwing multiple competing solutions at the problem yet again. Probably a good thing in the long run, but still slightly vexing.

    Anyway, the big bottleneck on GPUs for now is memory bandwidth, not core count. At least that was the case a year ago... I mean, everyone knows moving data between main memory and GPU memory is slow, but even on-card data transfers were a bottleneck in the last GPU program I worked on.

  6. #21
    Join Date
    Nov 2003
    Posts
    1,405

    Re: newmat vs boost

    Quote Originally Posted by Lindley View Post
    I know they've got DirectX; I was just kind of hoping that for once everyone could get behind one standard (OpenCL) rather than throwing multiple competing solutions at the problem yet again. Probably a good thing in the long run, but still slightly vexing.

    Anyway, the big bottleneck on GPUs for now is memory bandwidth, not core count. At least that was the case a year ago... I mean, everyone knows moving data between main memory and GPU memory is slow, but even on-card data transfers were a bottleneck in the last GPU program I worked on.
    It's a complex world, and big companies want to control it. There will never be just one standard. Although one standard would be a relief for programmers, I don't think it would be any good in the long run; standards need competition to stay fit. OpenGL, for example, had really become old and tired.

    The memory bandwidth problem isn't that much of a problem in practice. You just make sure to pass as little as possible, as seldom as possible; most applications can be decomposed to achieve this. A sketch of that pattern follows below.
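
    For example, here is a minimal host-side sketch of that pattern with the CUDA runtime API (step_on_gpu is a hypothetical wrapper, assumed to launch a kernel compiled elsewhere in a .cu file): upload the data once, run many steps on the card, and download the result once.

    [code]
    #include <cuda_runtime.h>
    #include <vector>
    #include <cstdio>

    // Hypothetical wrapper, assumed to launch a kernel (compiled elsewhere)
    // that advances the data one step entirely on the device.
    void step_on_gpu(float* d_data, size_t n);

    int main()
    {
        const size_t n = 1 << 20;
        std::vector<float> host(n, 0.0f);

        float* d_data = 0;
        cudaMalloc(reinterpret_cast<void**>(&d_data), n * sizeof(float));

        // One upload...
        cudaMemcpy(d_data, &host[0], n * sizeof(float), cudaMemcpyHostToDevice);

        // ...many iterations whose data never leaves the card...
        for (int i = 0; i < 1000; ++i)
            step_on_gpu(d_data, n);

        // ...and one download at the end.
        cudaMemcpy(&host[0], d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_data);

        printf("first element: %f\n", host[0]);
        return 0;
    }
    [/code]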
