How to visualize 16bit-value 2-dimensional matrix in grayscale
In my project (MFC) I need to visualize data on a dialog form.
The data is a 2-dimensional array of 16-bit values, and I want the image to be displayed in grayscale.
To display the picture I use a CStatic control variable m_Picture2, which is linked to a Picture Control on the dialog form.
From my experiments it appears that pixel values must be 4 bytes, otherwise nothing is displayed.
The problem is how to map my 16-bit values in order to get a grayscale image.
Here is my code, which doesn't work properly. To make my experiments easier I form the pixel values using bitset containers. The original CString cs contains underscores (to make the value easier to read); CleanString removes the underscores and the result is converted to a string object, which is passed to the bitset constructor. The same value is then used to fill the bitmap array. Finally the bitmap array is passed to CreateBitmap, and the resulting HBITMAP is used to attach the bitmap to the CStatic variable m_Picture2, which is linked to the Picture Control.
I have never succeeded in obtaining a grayscale image.
Thank you in advance.
unsigned long bitmap_data[512*476];
CString cs = _T("1111_1111__0001_1111__0001_1111__0001_1111");
CString cs1 = CleanString(cs);                // strip the underscores
CT2CA pszConvertedAnsiString(cs1);
std::string str(pszConvertedAnsiString);      // needs <string>; ANSI copy of the cleaned bit pattern
std::bitset<32> offset(str);                  // needs <bitset>; 32-bit pixel value built from the bit string
for (int i = 0; i < sizeof(bitmap_data)/4; i++)
{
    unsigned long aa = offset.to_ulong();
    bitmap_data[i] = aa;                      // every pixel gets the same 32-bit value
}
HBITMAP hb = CreateBitmap(512, 476, 1, 32, bitmap_data);
m_Picture2.SetBitmap(hb);                     // attach the bitmap to the Picture Control
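CleanString is used above but not shown; a minimal sketch of what it is assumed to do (simply strip the underscores from the CString) could look like this:
CString CleanString(const CString& input)
{
    CString result(input);
    result.Remove(_T('_'));   // remove every underscore that was added only for readability
    return result;
}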
Re: How to visualize 16bit-value 2-dimensional matrix in grayscale
You are creating a 32-bit RGBA image, with each of the four color components being 8 bits.
For a grayscale image you need to set the R, G and B values of each pixel to the same value.
Or, in code:
// bitmap_data[i]=aa; remove this
unsigned char intensity = aa >> 8; // use only 8 highest bits
long rgb = intensity | ((long)intensity << 8) | ((long)intensity << 16); // set R, G, B to intensity
bitmap_data[i] = rgb;
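Put into the original loop, a sketch of the whole mapping might look like this, assuming the 16-bit samples sit in a hypothetical array named source (not part of the original code):
for (int i = 0; i < 512 * 476; i++)
{
    unsigned short aa = source[i];               // hypothetical 16-bit sample from the matrix
    unsigned char intensity = aa >> 8;           // keep the 8 highest bits
    bitmap_data[i] = intensity | ((long)intensity << 8) | ((long)intensity << 16);
}
HBITMAP hb = CreateBitmap(512, 476, 1, 32, bitmap_data);
m_Picture2.SetBitmap(hb);                        // attach the grayscale bitmap to the Picture Control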
The above assumes an actual 16 bits of luminance on a linear scale. If you're not using all 16 bits, or the scaling isn't linear, you'll need a more complex calculation of the color component intensity value.
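For example, if the samples only occupy part of the 16-bit range, a minimal sketch (with a hypothetical helper FillGrayscale, not part of the original code) would be to stretch the observed minimum and maximum to 0..255 before replicating the value into R, G and B:
#include <algorithm>

void FillGrayscale(const unsigned short* source, unsigned long* bitmap_data, int count)
{
    unsigned short lo = *std::min_element(source, source + count);
    unsigned short hi = *std::max_element(source, source + count);
    unsigned long range = (hi > lo) ? (unsigned long)(hi - lo) : 1;   // avoid division by zero
    for (int i = 0; i < count; i++)
    {
        // linear stretch of the used range to 0..255
        unsigned char intensity = (unsigned char)(((source[i] - lo) * 255UL) / range);
        bitmap_data[i] = intensity | ((unsigned long)intensity << 8) | ((unsigned long)intensity << 16);
    }
}
A non-linear sensor response would need a lookup table instead of this linear stretch.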
When I analyze the bitmap of the lImage object using the code below, I see that the double words the bitmap consists of always have the same pattern xyxyxyFF, where xy is some hexadecimal value. Does that mean the initial depth of 16 bits was reduced to 8 bits?
int size = GetBitmapBits((HBITMAP)lImage, 0, NULL);   // query the required buffer size
BYTE* pBuffer = new BYTE[size];
GetBitmapBits((HBITMAP)lImage, size, pBuffer);        // copy the pixel bytes into the buffer
for (int i = 0; i < lImage.GetHeight(); i++)
    for (int j = 0; j < lImage.GetWidth(); j++)
    {
        CString cs, cs1;
        BYTE* val_addr = pBuffer + (i * lImage.GetWidth() + j) * 4;  // 4 bytes per 32-bpp pixel
        for (int k = 0; k < 4; k++)
        {
            cs1.Format(_T("%02X"), val_addr[k]);      // dump each byte of the pixel
            cs += cs1;
        }
    }
delete[] pBuffer;                                     // delete[] matches new[]