How to convert bytes to ASCII?
Hi
How can I convert bytes to ASCII? I read the Wikipedia article on UTF-8 and understood a little about how bytes are combined or split to change the value.
Now I have these bytes:
0xC7 0xE6 0xC2 0x91 0x93 0x7B 0xCE 0x01
I found a program (DCode) that interprets them as a 64-bit little-endian value; supposedly those bytes come out as this:
Mon, 08 July 2013 04:28:17 UTC <---
But I don't understand how it works. I want to do the same thing in C, but I have no idea how.
Can anyone help me with an example, please?
regards
Re: How to convert bytes to ASCII?
First, you have posted to the wrong forum. Your question has nothing to do with Visual C++; it is just plain C. The thread will now be moved to the correct forum.
Second, the DCode program you refer to is described as follows:
Quote:
This utility was designed to decode the various date/time values found embedded within binary and other file types.
Note that it decodes date/time values, not arbitrary bytes!
Third, you have to learn the C/C++ language before writing any conversion programs.
Re: How to convert bytes to ASCII?
Sorry, posting in the wrong forum was my mistake; I apologize for that.
I am using Visual Studio 2012, but I'm very accustomed to using C, and I'm trying to write software to recover deleted files.
I finally found the $MFT and can now read its attributes, but I need to decode those hexadecimal bytes into UTF-8/ASCII to see names, dates, etc.
I am self-taught in programming, and this is my first time learning about this. Before posting I read the Wikipedia article on UTF-8 and looked for more information, but unfortunately I don't get it.
That's why I need a good explanation from professionals to understand these things.
I hope somebody can help with an explanation or an example, please.
Sorry for my English, BTW.
regards
Re: How to convert bytes to ASCII?
Hi proxy,
I think that you are getting confused.
The utility you've mentioned was designed to decode the various date/time values found embedded within binary and other file types.
That is why your output is formatted as a date/time result with a time-zone abbreviation.
Converting UTF-8 to ASCII is straightforward, as UTF-8 was designed for backward compatibility with ASCII.
For that reason, UTF-8 uses single-byte codes for the ASCII values 0 through 127, which allows you to display the same character as its ASCII equivalent (and vice versa) within this range.
For example:
UTF-8 Encoded Byte 0x41 = The capital letter 'A'
=
Unicode code point U+0041 = The capital letter 'A'
=
ASCII 0x41 = The capital letter 'A'
Any code point larger than 127 is represented in UTF-8 by a multi-byte sequence and will be lost when you attempt to convert it to ASCII, as there is no equivalent character to display. Such characters are usually replaced with a question mark.
Best regards,
Doron Moraz