
  1. #1
    Join Date
    Jan 2013
    Location
    Lausanne, Switzerland
    Posts
    35

    How to visualize 16bit-value 2-dimensional matrix in grayscale

    Hello,

    In my project I need to visualize data on a dialog form (MFC project).
    The data is a 2-dimensional array of 16-bit values, and I want the image to be displayed in grayscale.

    For displaying the picture I use a CStatic control variable m_Picture2, which is linked to a Picture Control on the dialog form.
    From my experiments it appears that pixel values must be 4 bytes, otherwise nothing is displayed.
    The problem is how to map my 16-bit values so that the image comes out in grayscale.
    Here is my code, which doesn't work properly. To make experimenting easier, I form pixel values using bitset containers. The original CString cs contains underscores (to make the value easier to read); CleanString removes them, the result is converted to a std::string, and that string is passed to the bitset constructor. The same value then initializes every element of the bitmap array, which is passed to CreateBitmap; finally the resulting HBITMAP is attached via SetBitmap to the CStatic variable m_Picture2, which is linked to the Picture Control.
    I've never succeeded in obtaining a grayscale image.

    Thank you in advance.

    Pavel.

    Code:
    	unsigned long bitmap_data[512 * 476];
    	CString cs = _T("1111_1111__0001_1111__0001_1111__0001_1111");
    	CString cs1 = CleanString(cs);              // strip the underscores
    	CT2CA pszConvertedAnsiString(cs1);
    	std::string str(pszConvertedAnsiString);
    	std::bitset<32> offset(str);

    	for (int i = 0; i < sizeof(bitmap_data) / 4; i++)
    	{
    		unsigned long aa = offset.to_ulong();
    		bitmap_data[i] = aa;                    // same test value for every pixel
    	}
    	HBITMAP hb = CreateBitmap(512, 476, 1, 32, bitmap_data);

    	m_Picture2.SetBitmap(hb);
    	m_Picture2.Invalidate();
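    The string-cleaning and bitset steps above can be sketched portably (this CleanString is a hypothetical std::string stand-in for the MFC helper, not the original):

    ```cpp
    #include <algorithm>
    #include <bitset>
    #include <cassert>
    #include <iterator>
    #include <string>

    // Hypothetical stand-in for the CleanString helper: strips the
    // underscores that were inserted only to make the literal readable.
    std::string CleanString(const std::string& s)
    {
        std::string out;
        std::copy_if(s.begin(), s.end(), std::back_inserter(out),
                     [](char c) { return c != '_'; });
        return out;
    }

    int main()
    {
        std::string cs = "1111_1111__0001_1111__0001_1111__0001_1111";
        std::string cleaned = CleanString(cs);   // 32 binary digits
        std::bitset<32> offset(cleaned);         // leftmost char = bit 31
        unsigned long aa = offset.to_ulong();
        assert(cleaned.size() == 32);
        assert(aa == 0xFF1F1F1FUL);
        return 0;
    }
    ```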

  2. #2
    Join Date
    Apr 2000
    Location
    Belgium (Europe)
    Posts
    4,626

    Re: How to visualize 16bit-value 2-dimensional matrix in grayscale

    You are creating a 32-bit RGBA image with each of the 4 color components being 8 bits.

    For a grayscale image you need to set the R, G and B components to the same value.

    Or in code
    Code:
    // bitmap_data[i]=aa; remove this
    //replace with
    unsigned char intensity = aa >> 8; // use only 8 highest bits
    long rgb = intensity | ((long)intensity << 8) | ((long)intensity << 16); // set R, G, B to intensity
    bitmap_data[i] = rgb;

    The above assumes an actual 16 bits of luminance on a linear scale. If you're not using all 16 bits, or the scaling isn't linear, you'll need a more complex calculation of the color component intensity value.
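    As a standalone sketch of the mapping described above (portable C++, no MFC; the function names scale16to8 and packGray are mine):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Keep the 8 most significant bits of a linear 16-bit sample
    // as the display intensity, dropping the low 8 bits.
    std::uint8_t scale16to8(std::uint16_t sample)
    {
        return static_cast<std::uint8_t>(sample >> 8);
    }

    // Pack one intensity into a 32-bit pixel with R = G = B.
    std::uint32_t packGray(std::uint8_t intensity)
    {
        return intensity
             | (static_cast<std::uint32_t>(intensity) << 8)
             | (static_cast<std::uint32_t>(intensity) << 16);
    }

    int main()
    {
        assert(scale16to8(0xFFFF) == 0xFF); // full scale stays full scale
        assert(scale16to8(0x1234) == 0x12); // low byte discarded
        assert(packGray(0x12) == 0x121212); // equal R, G, B channels
        return 0;
    }
    ```

    If the data occupies fewer than 16 bits (say, a 12-bit sensor), shift by correspondingly less, or rescale by the actual maximum instead of blindly shifting by 8.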

  3. #3
    Join Date
    Jan 2013
    Location
    Lausanne, Switzerland
    Posts
    35

    Re: How to visualize 16bit-value 2-dimensional matrix in grayscale

    Thanks OReubens,

    Indeed, this way I can get grayscale. But I didn't understand what you meant by
    The above assumes an actual 16 bits of luminance on a linear scale
    If all the R, G and B components have char-type values, is the grayscale depth 8 bits?

    Regards,

    Pavel.

  4. #4
    Join Date
    Jan 2013
    Location
    Lausanne, Switzerland
    Posts
    35

    Re: How to visualize 16bit-value 2-dimensional matrix in grayscale

    I've succeeded in displaying a 576x476 16-bit grayscale .bmp image on the dialog form (unfortunately, due to its size, I can't attach it).

    Here is code:

    Code:
    	CImage lImage;
    	lImage.Load(_T("Buste-Louvre.bmp"));
    	m_Picture.SetBitmap((HBITMAP)lImage);
    	m_Picture.Invalidate();
    When I analyze the bitmap of the lImage object using the code below, I see that the double words making up the bitmap always follow the same pattern xyxyxyFF, where xy is some hexadecimal value. Does that mean the initial 16-bit depth was reduced to 8 bits?

    Regards,

    Pavel.

    Code:
    	int size = GetBitmapBits((HBITMAP)lImage, 0, NULL); // query size in bytes
    	BYTE* pBuffer = new BYTE[size];
    	GetBitmapBits((HBITMAP)lImage, size, pBuffer);      // copy the pixel data
    	for (int i = 0; i < lImage.GetHeight(); i++)
    	{
    		for (int j = 0; j < lImage.GetWidth(); j++)
    		{
    			CString cs, cs1;
    			// 4 bytes per pixel, so the row stride is width * 4
    			BYTE* val_addr = pBuffer + (i * lImage.GetWidth() + j) * 4;
    			for (int k = 0; k < 4; k++)
    			{
    				cs.Format(_T("%02x"), *(val_addr++)); // two digits per byte
    				cs1.Append(cs);
    			}
    			cs1.AppendChar(' ');
    			OutputDebugString(cs1);
    		}
    		OutputDebugString(_T("\n"));
    	}
    	delete [] pBuffer;
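    The xyxyxyFF observation can be checked in isolation: when the three color bytes of a 32-bpp pixel are identical, only 256 distinct gray levels remain, i.e. the 16-bit source has indeed been reduced to 8 bits of gray per pixel. A minimal sketch (the function name is mine):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // A 32-bpp pixel carries only an 8-bit gray level iff its three
    // color bytes are identical; the fourth byte is alpha/padding.
    bool isGrayPixel(std::uint32_t px)
    {
        std::uint8_t b0 = px & 0xFF;
        std::uint8_t b1 = (px >> 8) & 0xFF;
        std::uint8_t b2 = (px >> 16) & 0xFF;
        return b0 == b1 && b1 == b2;
    }

    int main()
    {
        // Byte sequence 1F 1F 1F FF from the debug dump, read as a
        // little-endian 32-bit value.
        assert(isGrayPixel(0xFF1F1F1F));
        assert(!isGrayPixel(0xFF201F1F)); // unequal channels: not gray
        return 0;
    }
    ```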
