  1. #1

    Real-time compression of image data

    Hello,

    I've run into the problem that JPEG compression with JpegLib takes too long, so real-time compression is not possible.
    My task is to capture video frames from a USB camera, compress the incoming frames on the fly, and pass the compressed frames up to the higher layers of my application.
    I tried to do this with the open-source library JpegLib. However, with a frame size of 756x480 pixels (about 1 MB of raw data) and a capture rate of 30 fps (a new frame arrives every 33 ms), the compression time must stay below 33 ms. With the default compression settings and a quality factor of 100, compression takes about 40 ms per frame.
    This means that the latency will grow over time, or some frames will even be dropped.

    Do you know how to increase the compression speed in JpegLib, perhaps by tuning some parameters? If it is not possible with JpegLib, could you please suggest a faster library?
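
    For reference, here is a minimal sketch of the compression sequence I am timing (assuming 24-bit RGB input; jpeg_mem_dest requires libjpeg 8 or later, and dct_method is one documented speed/accuracy trade-off):

    Code:
        #include <stdio.h>    /* jpeglib.h expects stdio.h to be included first */
        #include <jpeglib.h>

        /* Compress one 24-bit RGB frame to an in-memory JPEG buffer.
           Minimal sketch: error handling is left to libjpeg's default handler. */
        unsigned long compress_frame(const unsigned char *rgb, int width, int height,
                                     int quality, unsigned char **out)
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            unsigned long out_size = 0;

            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_compress(&cinfo);
            jpeg_mem_dest(&cinfo, out, &out_size);  /* in-memory destination, libjpeg 8+ */

            cinfo.image_width      = width;
            cinfo.image_height     = height;
            cinfo.input_components = 3;
            cinfo.in_color_space   = JCS_RGB;
            jpeg_set_defaults(&cinfo);
            jpeg_set_quality(&cinfo, quality, TRUE);
            cinfo.dct_method = JDCT_IFAST;          /* faster, slightly less accurate DCT */

            jpeg_start_compress(&cinfo, TRUE);
            while (cinfo.next_scanline < cinfo.image_height) {
                JSAMPROW row = (JSAMPROW)(rgb + cinfo.next_scanline * width * 3);
                jpeg_write_scanlines(&cinfo, &row, 1);
            }
            jpeg_finish_compress(&cinfo);
            jpeg_destroy_compress(&cinfo);
            return out_size;  /* caller frees *out with free() */
        }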

    Thank you in advance for your responses.

    Best regards,

    vanselm

  2. #2

  3. #3

    Re: Real-time compression of image data

    I'm currently using the latest version of JpegLib (release 8c of 16-Jan-2011). Is libjpeg-turbo really faster than release 8c?

    At the moment I'm evaluating the Intel IPP library to see how fast it can compress image data. However, if libjpeg-turbo is as fast as IPP, then I will use it instead.
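
    If it is, switching should be simple: the TurboJPEG API of libjpeg-turbo compresses a whole frame in one call. A minimal benchmarking sketch (assuming 24-bit RGB input; the wrapper function name is mine):

    Code:
        #include <turbojpeg.h>
        #include <stdio.h>

        /* Compress one 24-bit RGB frame with the TurboJPEG API (libjpeg-turbo). */
        int compress_turbo(const unsigned char *rgb, int width, int height, int quality)
        {
            tjhandle h = tjInitCompress();
            unsigned char *jpegBuf = NULL;   /* allocated by tjCompress2 */
            unsigned long jpegSize = 0;

            int rc = tjCompress2(h, rgb, width, 0 /* pitch: 0 means width*3 */, height,
                                 TJPF_RGB, &jpegBuf, &jpegSize,
                                 TJSAMP_420, quality, TJFLAG_FASTDCT);
            if (rc != 0)
                printf("tjCompress2 failed: %s\n", tjGetErrorStr());
            else
                printf("compressed to %lu bytes\n", jpegSize);

            tjFree(jpegBuf);
            tjDestroy(h);
            return rc;
        }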

  4. #4

    Re: Real-time compression of image data

    Quote Originally Posted by vanselm View Post
    With the default compression settings and a quality factor of 100, compression takes about 40 ms per frame.
    With a quality factor of 100, you get virtually no compression, which means that your approx 1 meg of raw image data takes up around 1 meg of compressed image data.

    So why bother compressing in the first place? Why not simply pass the raw image data up to your upper layers?

    If, on the other hand, your ultimate goal is a much higher compression ratio (for example, compressing the 1 meg of raw image data down to 100k at a much lower quality factor than 100), then because of the very high compression ratio, the decompressed result would be a very blocky-looking image. So under this circumstance, again, why compress at all?

    Why not simply perform a simple image decimation, which takes virtually no time, and pass the decimated image data (without JPEG compression) up to your upper layers? For example, you could replace each 3x3 block of pixels in the original image with a single pixel whose value is set to the value of a selected one of the pixels (e.g., the upper right-hand corner of the 3x3 block), or to the average of all 9 pixels in the block. This would quickly "compress" your 1 meg of raw image data down to around 100k, and would result in an image no worse in quality than if you had subjected the 1 meg image to harsh JPEG compression and then decompressed it.
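
    Here is a minimal sketch of the averaging variant (assuming 24-bit RGB input; conveniently, 756 and 480 are both multiples of 3):

    Code:
        /* Replace each 3x3 block of a 24-bit RGB image with the average of its
           9 pixels. Assumes width and height are multiples of 3. The output is
           (width/3) x (height/3) x 3 bytes, i.e. about 1/9 of the input. */
        void decimate3x3(const unsigned char *src, int width, int height,
                         unsigned char *dst)
        {
            int ow = width / 3, oh = height / 3;
            for (int by = 0; by < oh; ++by)
                for (int bx = 0; bx < ow; ++bx)
                    for (int c = 0; c < 3; ++c) {
                        int sum = 0;
                        for (int dy = 0; dy < 3; ++dy)
                            for (int dx = 0; dx < 3; ++dx)
                                sum += src[((by * 3 + dy) * width + (bx * 3 + dx)) * 3 + c];
                        dst[(by * ow + bx) * 3 + c] = (unsigned char)(sum / 9);
                    }
        }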

    Mike

  5. #5

    Re: Real-time compression of image data

    Quote Originally Posted by MikeAThon View Post
    With a quality factor of 100, you get virtually no compression, which means that your approx 1 meg of raw image data takes up around 1 meg of compressed image data. [...]

    Hello Mike,

    The quality factor of 100 does not mean there is no compression at all. The size of the compressed image at factor 100 is about 200 KB, while the original raw image is about 1 MB. Yesterday I read an article about JPEG compression which said that going down even to a quality factor of 99 reduces the compressed size considerably. Indeed, if I set the factor to 95, for example, the size is about 60 KB instead of 200 KB.
    I cannot imagine that your suggestion would be faster than JPEG compression, and I suspect it would lose more image information than JPEG compression does.

    My next step is to try out libjpeg-turbo and see whether it is faster than the original JpegLib.
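
    To make the comparison fair, I will time a single compression call with the Win32 high-resolution counter, roughly like this (compress_frame stands for whichever library wrapper is being measured, e.g. the hypothetical libjpeg helper sketched in post #1):

    Code:
        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Whatever wrapper is under test; here, the libjpeg-based sketch from
           post #1, which returns the JPEG size and allocates *out with malloc(). */
        unsigned long compress_frame(const unsigned char *rgb, int width, int height,
                                     int quality, unsigned char **out);

        /* Return the wall-clock duration of one compression call in milliseconds. */
        double time_compress_ms(const unsigned char *rgb, int w, int h, int quality)
        {
            LARGE_INTEGER freq, t0, t1;
            unsigned char *jpeg = NULL;

            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&t0);
            unsigned long size = compress_frame(rgb, w, h, quality, &jpeg);
            QueryPerformanceCounter(&t1);

            printf("compressed size: %lu bytes\n", size);
            free(jpeg);
            return 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
        }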
