  1. #16
    Join Date
    Jan 2014
    Posts
    8

    Re: C++ Memory Mapped Files Trouble

    Quote Originally Posted by 2kaud View Post
    Why don't you write some simple programs with files on this hardware configuration and find out? Why have all the data packages in one file? Why not a file per data package with the name as the time stamp?
    Hmm, that's actually an interesting idea: one folder containing thousands of files...

    But won't creating one file for each data item, writing the data into it and saving it be too costly from a performance point of view? I mean, is it possible to meet the performance requirement of, let's say, 15 packages per second with each package 2 MB in size?

    And what about the reader applications searching for the files by name (timestamp)?

    I don't have this hardware setup now; it will only be available in the real system, so I have to develop it first on my personal computer.

    Do you have any estimate of the performance?

    Thanks.

    Edit:

    Furthermore, if writing and reading a file from the hard drive doesn't cost too much and I decide to go that way, I can also create a circular array in memory (shared memory perhaps). While writing the data to a file, I can add a new element to my "metadata array" with timestamp and unique-id attributes, and use this unique id in the filename. The reader applications can then search this metadata array first (without going to the hard drive), and if the entry is found they can fetch the file from the hard drive by its unique id. (How much time does it take to find a file by id (name) in a repository of, for example, 100,000 files?)
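
    A minimal sketch of that metadata-array idea (the type and member names are mine, not from this thread; it assumes the readers run as threads inside one process, while separate reader applications would need the index kept in shared memory, e.g. via Boost.Interprocess):

    Code:
    #include <cstdint>
    #include <mutex>
    #include <optional>
    #include <string>
    #include <unordered_map>

    // One entry per data package written to disk.
    struct PacketMeta {
        std::uint64_t id;        // unique id, also used as the file name
        std::int64_t  timestamp; // e.g. microseconds since epoch
    };

    // In-memory index: id -> file name. Lookups never touch the disk.
    class PacketIndex {
    public:
        void add(const PacketMeta& meta, const std::string& fileName) {
            std::lock_guard<std::mutex> lock(mutex_);
            files_[meta.id] = fileName;
        }

        // Returns the file name if the packet has already been written.
        std::optional<std::string> find(std::uint64_t id) const {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = files_.find(id);
            if (it == files_.end()) return std::nullopt;
            return it->second;
        }

    private:
        mutable std::mutex mutex_;
        std::unordered_map<std::uint64_t, std::string> files_;
    };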

    Yeah, it seems really promising; the only question is whether I can match that writing speed. I'll have to try it and see, I think...

    Thank you very much again.
    Last edited by vecihi; January 28th, 2014 at 04:22 AM.

  2. #17
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: C++ Memory Mapped Files Trouble

    Quote Originally Posted by vecihi View Post
    Thank you. Yes, as you mentioned, I know the limitations of 32-bit systems, but the project will definitely be 64-bit, so at least for the addressing there won't be any problems.
    It won't be a problem to allocate the memory mapping, but that doesn't mean it is entirely free of problems either. Memory mapping does not come for free. Don't use memory mapping unless you have a specific need for it. From your description, you don't really "need" the memory mapping, and in effect it may make your overall problem worse: you'd be hogging virtual memory, which can tie up physical RAM and end up causing excessive paging, which will worsen your already tight throughput situation.

    As I said before, what I'm trying to implement is, instead of mapping the whole file, to map just a portion of it (let's say 1 GB). While data is being written to that portion, the reader applications can theoretically reach the written data immediately (since the data is available in RAM).
    There is no immediate benefit to mapping portions. Setting up a memory mapping takes some OS interaction, and again, if you don't really need the mapping's nature, it's a bad idea.

    Don't use memory mapping assuming it will work better than straightforward linear streaming (reading/writing) of the entire packet.

    Memory mapping doesn't pay off unless you have recurring I/O operations in a random-access pattern. If the access pattern is always linear/sequential, then memory mapping is not the way to go.
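
    For comparison, plain sequential streaming of one packet into its own file is just this (a sketch; the function and file names are made up, not from this thread):

    Code:
    #include <fstream>
    #include <string>
    #include <vector>

    // Write one packet to its own file with plain buffered, sequential I/O.
    bool writePacket(const std::string& fileName, const std::vector<char>& packet) {
        std::ofstream out(fileName, std::ios::binary);
        if (!out) return false;
        out.write(packet.data(), static_cast<std::streamsize>(packet.size()));
        return static_cast<bool>(out);
    }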

    Even with memory mapping, you WILL need to synchronize the readers and writers. It's not plain "RAM" as you seem to think. The reads/writes are not synchronized by the OS, and the reader and writer can "catch up" to each other (in fact they often will, due to caching). You need to fully protect/synchronize the reads/writes over the entire mapped file.
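
    A minimal sketch of that kind of protection using Boost.Interprocess (the file name, mutex name and offset handling are made up; the real layout of the mapped file is up to you):

    Code:
    #include <boost/interprocess/file_mapping.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <boost/interprocess/sync/named_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>
    #include <cstddef>
    #include <cstring>

    namespace bip = boost::interprocess;

    // Writer side: copy one packet into the mapped file under a named mutex,
    // so a reader in another process never sees a half-written packet.
    // Assumes "packets.bin" already exists and is at least offset + size bytes.
    void writePacketMapped(const char* data, std::size_t size, std::size_t offset) {
        bip::file_mapping  file("packets.bin", bip::read_write);
        bip::mapped_region region(file, bip::read_write, offset, size);

        bip::named_mutex mutex(bip::open_or_create, "packet_file_mutex");
        bip::scoped_lock<bip::named_mutex> lock(mutex);

        std::memcpy(region.get_address(), data, size);
        region.flush(); // ask the OS to write the dirty pages back
    }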

    Reading the data can wait a little bit; I mean, a small delay is acceptable. But writing is critical because we cannot miss any data package.
    This strengthens my observation that memory mapping is NOT the proper way to solve your problem. You need reader/writer logic that prioritizes the writer over the readers.
    The need to synchronize the entire mapped file will block the writer, which you're claiming you can't afford.

    As far as I know, operations like flushing and mapping other portions of the file are handled by the kernel, so I don't need to deal with them.
    Correct. But synchronization of simultaneous access from multiple threads is NOT provided by the OS. It is possible for one thread to read "half-written" data.

    Also, Boost and Microsoft offer this method for sharing data between separate processes, especially for extremely large files. That's why I'm leaning this way.
    It is a good way to share randomly accessed data. It is not so good for sequential patterns (it works, but there are better alternatives). You still need to synchronize access.


    Every piece of documentation says "it can be done very easily", but there is no example or code showing it. That's the reason I wrote here.
    "easy" is the claim for just about every new technology I've seen emerge over the last few decades.
    Nobody likes to claim that their solution is "difficult" (nomatter how powerful it is).

    Memory mapping as a whole is easy to get into (for a single thread).

    Multithreading adds complexity to just about everything. Memory mapping is a common pitfall where proper synchronization is hard (or even forgotten entirely), because devs tend to forget that the OS manages the memory map at the virtual-page level while your app accesses it at the byte level (or word, dword, qword, depending on the data type). I've seen many failures caused by forgetting that little detail.

  3. #18
    Join Date
    Jan 2014
    Posts
    8

    Re: C++ Memory Mapped Files Trouble

    OReubens, thank you very much for your detailed answer. Yes, I think you're right; since I don't have full command of this language and the memory-mapping concept, it is a risk to go that way.

    What do you think about the other idea that I wrote just above your last post? Does it make sense to you?

    Thanks again.

  4. #19
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: C++ Memory Mapped Files Trouble

    I already suggested a single file per packet in an earlier post.
    Just make a file per packet, and maintain an array/list/some container in memory with references (filenames) to those files. The readers/writers then only need synchronized access to that array/list/container to fetch a packet (file) to process or to add a newly written packet (file).
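
    A minimal sketch of such a synchronized container (the names are mine; it assumes the readers are threads in the same process, so separate reader applications would need an interprocess equivalent, e.g. a queue kept in shared memory):

    Code:
    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <string>

    // Synchronized container of filenames of packets waiting to be processed.
    class PacketQueue {
    public:
        // Writer: called right after a packet file has been written to disk.
        void push(std::string fileName) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                pending_.push_back(std::move(fileName));
            }
            cv_.notify_one();
        }

        // Reader: blocks until a packet file is available, then returns its name.
        std::string pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !pending_.empty(); });
            std::string fileName = std::move(pending_.front());
            pending_.pop_front();
            return fileName;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::deque<std::string> pending_;
    };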

  5. #20
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: C++ Memory Mapped Files Trouble

    Side note: the main issue here is the size of the data packets. If the packets were only a few KB in size, then a solution where you keep everything in memory would be more obvious / make more sense.

    But the sheer volume (50-150 GB) makes a memory-based solution problematic.

