  1. #1
    Join Date
    Jan 2005
    Posts
    111

    rebasing and memory allocation

    Hi,

    Is there any relation between rebasing and memory allocation?

  2. #2
    Join Date
    Aug 2001
    Location
    Germany
    Posts
    1,384

    Re: rebasing and memory allocation

    With rebasing, you specify where the module should be loaded when the application starts. This saves time (startup is a little faster, because the loader can place the module at the specified address instead of having to come up with one by itself). I think that if the rebasing is done incorrectly, the loader will still be able to load the module, just at a newly computed address.
    Having said all that, I am not sure it would affect memory allocation; one byte of memory would still equate to one byte.
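    As a rough sketch of what rebasing actually affects (plain Win32, the DLL name is hypothetical), you can compare a module's preferred base address from its PE header with where the loader actually placed it; if the two differ, the module was relocated at load time:
    Code:
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Hypothetical DLL name - substitute one of your own modules.
        HMODULE mod = LoadLibrary(TEXT("SomeModule.dll"));
        if (!mod)
            return 1;

        // For a normally loaded module, the HMODULE is its actual load address.
        const IMAGE_DOS_HEADER* dos = (const IMAGE_DOS_HEADER*)mod;
        const IMAGE_NT_HEADERS*  nt =
            (const IMAGE_NT_HEADERS*)((const BYTE*)mod + dos->e_lfanew);

        printf("preferred base: %p\n", (void*)(ULONG_PTR)nt->OptionalHeader.ImageBase);
        printf("actual base:    %p\n", (void*)mod);

        FreeLibrary(mod);
        return 0;
    }
    If the two addresses match, the loader honored the preferred base; if not, it applied relocations, which costs a little startup time and keeps the patched pages from being shared between processes.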

    Hope this helps,
    Regards,
    Usman.

  3. #3
    Join Date
    Jan 2005
    Posts
    111

    Re: rebasing and memory allocation


  4. #4
    Join Date
    Nov 2006
    Posts
    1,611

    Re: rebasing and memory allocation

    Well, there's some relationship, but you're not going to work magic.


    In theory you might do as well either by switching to a non-MFC framework, linking statically, or moving to a 64bit version of the OS.

    However, if you ever encounter a situation where you have allocated near the 2 Gbyte per-process limit of the operating system, such a change is likely only going to open up just enough room to be annoying. It may give you a slightly larger margin, but whatever underlying space limitation you face isn't alleviated much. By the time you reach this limit, you're riding so close to the edge that you need another solution, not this one.

    Are you looking for a specific solution to a problem, or just conducting research into the things pros do to handle large-memory application needs?

  5. #5
    Join Date
    Jan 2005
    Posts
    111

    Re: rebasing and memory allocation

    Hey JVene,

    Thanks for the reply.
    Yes, I am researching the things pros do to handle large-memory application needs.

    In http://www.ddj.com/windows/184416272?pgno=1, I read that

    When selecting base addresses for DLLs, Microsoft suggests that you select them from the top of the allowed address range downwards, in order to avoid conflicts with memory allocated dynamically by the application (which is allocated from the bottom up).


    So I think huge allocations can be a problem in real world systems.
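    A small sketch (plain Win32; most telling in a 32bit process) makes that layout visible: walk the address space with VirtualQuery and note that private heap allocations generally sit at low addresses while loaded EXE/DLL images sit higher up.
    Code:
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        MEMORY_BASIC_INFORMATION mbi;
        const BYTE* addr = 0;

        // Walk every region of the process address space from the bottom up.
        while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi))
        {
            if (mbi.State != MEM_FREE)
            {
                const char* kind =
                    (mbi.Type == MEM_IMAGE)  ? "image (EXE/DLL)" :
                    (mbi.Type == MEM_MAPPED) ? "mapped file"     :
                                               "private (heap/stack)";
                printf("%p - %p  %s\n",
                       mbi.BaseAddress,
                       (void*)((const BYTE*)mbi.BaseAddress + mbi.RegionSize),
                       kind);
            }
            addr = (const BYTE*)mbi.BaseAddress + mbi.RegionSize;
        }
        return 0;
    }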


    -Ajay

  6. #6
    Join Date
    Nov 2006
    Posts
    1,611

    Re: rebasing and memory allocation

    So I think huge allocations can be a problem in real world systems.

    ...and this has been a relative judgment, or a moving target, in all the years I've been doing this.

    Back when the PC was new, and the 'basic model' RAM size was 64K (with 256K being among the most common sizes PCs were sold with), a quote was attributed to Bill Gates (I paraphrase):

    "You can rule the world in 64K."

    I don't really know if he was the origin of that remark, but it was part of the general discussion of the PC's upper memory limit of 640K in a machine that 'could' address as much as 1 Mbyte. Even before those days, when 64K was all you could get an 8-bit CPU to address, we were trying to look beyond the limits imposed on us by whatever architecture defined the leading edge.

    It is actually instructive to recall the act of fitting larger-than-reasonable applications into that 640K limitation (that is, before the advent of extended or expanded RAM). Any boundary you mark off, and then want to exceed, leads to a study of methods and approaches that, despite the advance from 640K limits to whatever the current 64-bit leading edge is now, are almost identical. It's easier to practice the techniques against a limit of 1 Mbyte than a limit of 1 Gbyte, but there are dynamics involved that alter the concepts slightly.

    For example, the 486 CPU could, in theory, offer you 2 Gbytes of RAM under, say, Windows 98 (I think the 486 was still supported at that time). However, the throughput the 486, its associated RAM, and its chipset could provide made that all but impracticable. If memory serves, even the Pentium 90 could offer little more than 300 Mbytes per second or so, such that if you DID have 2 Gbytes of RAM, it would take several seconds just to DO something with all of it.

    That translates into a bit of wisdom. Beyond some practical limit of processing cycle/application speed/CPU performance (etc.), there exists some maximum PRACTICAL amount of RAM you can manage. Not address - manage. For the old P90, I think that was somewhere in the region of 256 Mbytes.

    You could GET it to run more than that, but the overall speed reduction wasn't worth the money - you were better off moving to the next gen platform (the Pentium 2 or better). If you had more than 256Mbytes on a P90, it would best be used as a disk cache rather than an OS/application processing area.

    Now, 2 Gbytes is a strange place. You have 1 GHz machines, especially the older AMD chips, that could provide somewhere around 1 Gbyte per second of throughput (or so), matching, roughly, the 'balance' between speed and size that was the P90's 300 Mbytes per second against, say, 256 or 384 Mbytes of RAM. Some of the 1.2 and 1.4 GHz chips had faster buses, giving perhaps 1.5 Gbytes per second or so, bringing that 2 Gbyte 'maximum' that was always part of the 32bit revolution (well, for Windows tasks, that is) into full use for the first time.

    Now it's all but common to have 1 or 2 Gbytes of RAM, but the recent chips sport roughly 4 to 6 times the RAM throughput of the 32bit AMD 1.4 GHz chips, and again the practical limit is well beyond 2 Gbytes, yet nowhere near the upper theoretical limit of the chip itself. Here again, you have the potential of stuffing so much RAM into the machine that it would take several seconds to 'do' much of anything with it. I'm guessing, but that pushes the practical limit to somewhere around 6, 8 or - if you're tolerant - 16 Gbytes of RAM.

    There's more to the puzzle now than before. Up till now, there's been only one channel between a single CPU and its RAM. That's no longer true, so this 'limit' of - let's pick 8 Gbytes to be really well balanced - isn't the overall system's limit, but that of a single core/RAM connection. So, the usable RAM of a system will now depend on the overall CPU/bus/RAM complement, and I think we have (expensive) examples that can reach the 32 and 64 Gbyte range. I think it no coincidence that these also represent, roughly, the available motherboard features.

    I take the view that a good C/C++ programmer should have some notion of the assembler implications of their work, but not so much as to become an assembler developer. Likewise, I think it important to understand the underlying hardware so you understand why there are limits, what you can expect as you try to go beyond them, and thereby decide if you actually need to procure new hardware for the target you have in mind, relative to the expense of developing solutions.

    Some of us can't help but push the limit beyond reason, like those of us cramming 128Mbytes of RAM onto a 486 motherboard using soldering irons, schematics and firmware hacks. It wasn't pretty, but, in a way, amazing to even try. (In case you're too young to remember, 486 motherboards were limited to about 4Mbytes, sometimes 16Mbytes of RAM).


    One other reason I say that 2 Gbytes is an odd size is that it's also the limit of the 'automatic' virtual memory the OS gives you in a 32bit OS. That didn't exist, practicably, before about 2002, I think. In 2000, you were maxed out at 384 Mbytes of RAM on most machines, so VM could carry you all the way to 2 Gbytes, slowly.

    Now you have 2 Gbytes of RAM (or more) on a 32bit OS, and the VM can't take you any farther without compiler switches and OS cooperation (to get past the 2 Gbyte per-process limit).
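    For reference, the switches alluded to here are the documented Win32 mechanism (nothing exotic): mark the executable as large-address-aware at link time, and on 32bit Windows also boot the OS with the /3GB option before such a process can actually use more than 2 Gbytes of address space.
    Code:
    link /LARGEADDRESSAWARE ...      (linker switch on the EXE)
    /3GB                             (boot.ini option, 32bit Windows only)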

    Memory mapped files can take you farther, but it's a squeeze.

    You can open a file and map it into a 'view' - which means you've mated your memory address space to a virtual memory subsystem that uses the disk as it sees fit to back up the RAM's contents. If you 'move' the 'view' up and down that file, well beyond the 2 Gbyte storage limit, you can 'window' that block of memory onto as much storage as you like. It operates like RAM, but as you move the 'view' up and down the file, the OS automatically stores any updated contents to the file (in VM style), and automatically loads from the new window (also in VM style) to accommodate.
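    A minimal sketch of that sliding view, assuming plain Win32 (CreateFileMapping/MapViewOfFile) and a hypothetical file name; each pass maps only a window of the file, so the process never needs address space for the whole thing:
    Code:
    #include <windows.h>

    int main()
    {
        HANDLE file = CreateFile(TEXT("huge.dat"), GENERIC_READ, FILE_SHARE_READ,
                                 NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        HANDLE mapping = CreateFileMapping(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping)
            return 1;

        LARGE_INTEGER size;
        GetFileSizeEx(file, &size);

        // 64 Mbyte window; file offsets must be a multiple of the allocation granularity (64K).
        const ULONGLONG window = 64ULL * 1024 * 1024;

        for (ULONGLONG offset = 0; offset < (ULONGLONG)size.QuadPart; offset += window)
        {
            ULONGLONG remaining = (ULONGLONG)size.QuadPart - offset;
            SIZE_T bytes = (SIZE_T)(remaining < window ? remaining : window);

            // Map only the current window; the OS pages it in and out as needed.
            const BYTE* view = (const BYTE*)MapViewOfFile(mapping, FILE_MAP_READ,
                                                          (DWORD)(offset >> 32),
                                                          (DWORD)(offset & 0xFFFFFFFF),
                                                          bytes);
            if (!view)
                break;

            // ... process 'bytes' bytes of the file through 'view' here ...

            UnmapViewOfFile(view);
        }

        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }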

    It's not unlike the old-style windowed memory of the 386 under DOS. There we had a 64K window we could 'swap' or 'page' into view from the RAM beyond the 1 Mbyte boundary of the 16bit OS. The difference, though, is that THAT was RAM; memory-mapped files are disk - much larger 'windows' on a much slower medium.


    RAM problems come in two general flavors: fragmentation that leads to RAM exhaustion, or plain RAM exhaustion. If fragmentation is the problem, you can generally use object techniques to solve it (custom allocators, adjusting the kind of containers you use; various solutions exist). Of course, there's an entire crowd of Java, C# and otherwise .NET developers that will tell you to just use GC. That's a change of channel, so I'll leave that choice to you. It still won't erase the upper limits of the process space according to whatever hardware/OS you're using, but it will help with the fragmentation problem.
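    As a sketch of one of those object techniques (a hypothetical fixed-size block pool, not any particular library): every block of a given size comes out of a few large slabs, so repeatedly freeing and reallocating small objects can't pepper the general heap with holes.
    Code:
    #include <cstddef>
    #include <vector>

    class FixedPool
    {
    public:
        explicit FixedPool(std::size_t blockSize, std::size_t blocksPerSlab = 1024)
            : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize),
              blocksPerSlab_(blocksPerSlab),
              freeList_(0) {}

        ~FixedPool()
        {
            for (std::size_t i = 0; i < slabs_.size(); ++i)
                delete[] slabs_[i];
        }

        void* allocate()
        {
            if (!freeList_)
                grow();                                      // carve out another slab
            void* block = freeList_;
            freeList_ = *static_cast<void**>(freeList_);     // pop the free list
            return block;
        }

        void deallocate(void* block)
        {
            // Push the block back onto the free list; the slab itself is never split up.
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }

    private:
        void grow()
        {
            char* slab = new char[blockSize_ * blocksPerSlab_];
            slabs_.push_back(slab);
            for (std::size_t i = 0; i < blocksPerSlab_; ++i)
                deallocate(slab + i * blockSize_);           // thread the new blocks onto the free list
        }

        std::size_t blockSize_;
        std::size_t blocksPerSlab_;
        void*       freeList_;
        std::vector<char*> slabs_;
    };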

    The general theme of stuffing more into a box than it was designed to handle is - bucket brigade. You break the problem into smaller, more manageable units, and swap up and down some larger container of those units. That's, summarily, what VM does - though fluidly (if slowly), and 'behind the scenes'. You have to get involved if that doesn't quite fit with your performance goals. That is, if whatever segments your job into small bits doesn't respect the boundaries of your processing (which VM doesn't), then you're going to 'thrash' back and forth, wasting time. You have to coordinate, possibly summarize and 'composite' the results of several smaller units into 'the big job' as required.
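    As a trivial sketch of that bucket brigade under the simplest possible assumptions (a hypothetical flat file, and a running checksum standing in for the 'composited' result): the working set is one fixed buffer, no matter how large the data set is.
    Code:
    #include <cstdio>
    #include <vector>

    int main()
    {
        const std::size_t chunk = 16 * 1024 * 1024;      // one work unit, sized to fit comfortably in RAM
        std::vector<unsigned char> buffer(chunk);

        FILE* f = fopen("huge_dataset.bin", "rb");       // hypothetical file name
        if (!f)
            return 1;

        unsigned long long sum = 0;                      // the composited result
        std::size_t got;
        while ((got = fread(&buffer[0], 1, chunk, f)) > 0)
        {
            for (std::size_t i = 0; i < got; ++i)        // process one unit at a time
                sum += buffer[i];
        }

        fclose(f);
        printf("checksum: %llu\n", sum);
        return 0;
    }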

    Objects can give you considerable leverage in implementing such solutions. If you're careful, you can even design such notions in a way that are applicable to larger/future OS/hardware advancements AND networked machines that work in parallel (like a rendering farm, or the folding at home project).

    The one OTHER thing that comes up occasionally is new to the multi-core hardware becoming common: the heap is serialized. I have a few threads around here on a custom multiplexed allocator that helps with that, but the problem comes up if you spin off threads that allocate lots of objects rapidly. They 'could' operate in parallel, if it weren't for the fact that 'new' or 'malloc' must lock against a critical section, serializing all allocations. This will become more obvious in the 4- and 8-core machines of the near future. My own solution is to multiplex the allocations per thread, reducing the potential for collision during allocation, which ramps up the performance of this task dramatically.
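    As a minimal sketch of the general idea (this is just the standard Win32 private-heap approach, not the multiplexed allocator mentioned above): give each worker thread its own heap created with HEAP_NO_SERIALIZE, so heavy allocation in one thread never waits on another thread's lock.
    Code:
    #include <windows.h>

    struct PerThreadHeap
    {
        HANDLE heap;

        PerThreadHeap()
        {
            // No serialization needed: only the owning thread touches this heap.
            heap = HeapCreate(HEAP_NO_SERIALIZE, 0, 0);
        }
        ~PerThreadHeap()
        {
            if (heap)
                HeapDestroy(heap);          // releases everything allocated from it at once
        }

        void* alloc(SIZE_T bytes) { return HeapAlloc(heap, 0, bytes); }
        void  release(void* p)    { HeapFree(heap, 0, p); }
    };

    DWORD WINAPI Worker(LPVOID)
    {
        PerThreadHeap heap;                 // one private heap per worker thread

        for (int i = 0; i < 100000; ++i)
        {
            void* p = heap.alloc(64);       // no contention with the other threads
            // ... use p; never hand it to a thread using a different heap ...
            heap.release(p);
        }
        return 0;
    }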

    Certain tasks require rather specific solutions, like editing a huge video file (and, really, you must have reasonable hardware). Others just take the machine some time: a large bitmap in 16bit-per-channel color that occupies 2.5 Gbytes of RAM - say a high-resolution/high-def color photo from a U2 spy plane, 32K lines or more. You may have to specify a 64bit OS to even do it practicably, but then, you're not going to 'see' the whole thing at once - which opens up optimization possibilities that only cause performance issues when panning/zooming.
