
November 5th, 2004, 08:20 AM
#1
Binary to ASCII number conversion algorithms
I'm curious to know what algorithms are known for converting binary numbers to ASCII decimal.
When I first got started in assembly (6502), I came across someone's code that had a 24-bit conversion routine. Since the 6502 does not have a mul or div opcode, nor floating point instructions, all computation was done with addition/subtraction and bit shifting. What was most interesting is that the numbers were converted to ASCII starting with the one's place and worked up. I was easily able to adapt it to n-bit number conversion.
Unfortunately, the code to this routine got lost somewhere and I never understood it enough to know how it worked.
So I was curious to know if others know this algorithm or others like it. I'm looking for efficiency.

November 18th, 2004, 03:15 PM
#2
Re: Binary to ASCII number conversion algorithms
One more shot... Can anyone provide some input?

November 18th, 2004, 03:44 PM
#3
Re: Binary to ASCII number conversion algorithms
Couldn't get much simpler than this...
#define __toascii(_c) ( (_c) & 0x7f )

November 18th, 2004, 08:08 PM
#4
Re: Binary to ASCII number conversion algorithms
Originally Posted by Mutilated1
Couldn't get much simpler than this...
#define __toascii(_c) ( (_c) & 0x7f )
Well... all I can say is that doesn't filter out non-ASCII control codes. Nor does it convert binary numbers to ASCII decimal strings.
Can anyone else spare their knowledge? I can't seem to find anything on this subject anywhere online.

November 19th, 2004, 12:47 PM
#5
Re: Binary to ASCII number conversion algorithms
oh so you want to take a binary 0000000000000010 and convert it to "2"? Is that what you want? Your first post didn't say anything about non-ASCII control codes either.

November 21st, 2004, 07:50 AM
#6
Re: Binary to ASCII number conversion algorithms
Originally Posted by Mutilated1
oh so you want to take a binary 0000000000000010 and convert it to "2"? Is that what you want? Your first post didn't say anything about non-ASCII control codes either.
Something...
I'm curious to know what algorithms are known for converting binary numbers to ASCII decimal.

November 21st, 2004, 10:17 AM
#7
Re: Binary to ASCII number conversion algorithms
It sounds like your major problem is dividing by 10 without using a DIV instruction. I came across the following algorithm by R.J. Mitchell when I was looking into a similar problem. It may help you:
Code:
unsigned int dividend = 1234;   // The number to be converted
unsigned int divisor  = 10;
unsigned int answer   = 0;
unsigned int count;
unsigned int MSB = 0x80000000;  // The most significant bit of a 32-bit unsigned number

// Normalize the number 10 for the size of an integer.
// This is a constant, of course, but I've included its calculation
// here to show how it's arrived at.
count = 1;
while( (divisor & MSB) == 0 )
{
    divisor <<= 1;
    ++count;
}

answer = 0;
while( count > 0 )
{
    answer <<= 1;
    if( divisor <= dividend )
    {
        dividend -= divisor;    // subtract the shifted divisor
        ++answer;
    }
    divisor >>= 1;
    --count;
}
// Quotient is in "answer", remainder is in "dividend"
The result of the division is in "answer", and the remainder is in "dividend".
To convert a binary number to ASCII, you would repeatedly apply the above algorithm to each successive quotient and add hex 30 to each remainder.
After looking at this, and other associated overhead, I came to the conclusion that the hardware DIV instruction is a lot more efficient....
Regards
Robert Thompson

November 21st, 2004, 04:13 PM
#8
Re: Binary to ASCII number conversion algorithms
Interesting bit of code. However, you are all still misunderstanding the requirements. I merely stated I knew of a code snippet that did it without div/mul simply because the processor didn't support them. Obviously, it's fine to use mul/div or even fmul/fdiv on today's Intel processors. I just want to see what other algorithms are known for binary to ASCII number conversion, meaning convert binary 1234 to the string "1234".
So we know one obvious method of converting, based on the above code. Divide by powers of 10, starting in the highest place, to determine each decimal place in the ASCII number string, and subtract as you go. e.g. with 1234, divide by 1000, then 100, then 10, and the final remainder is the one's place.
The method in the lost code snippet I mentioned went the opposite direction, by determining the number starting with the one's place and working upwards. I thought this was pretty interesting. I'd like to find the algorithm to do it, since it appeared to be a very efficient method.

November 21st, 2004, 04:36 PM
#9
Re: Binary to ASCII number conversion algorithms
The code actually works the way you initially described. Dividing 1234 by 10 gives a quotient of 123 and a remainder of 4 (the low-order digit). Dividing the quotient by 10 gives a new quotient of 12 and a remainder of 3 (the next low-order digit), etc.
I suspect your 6502 code used the fact that some shift operations work with two registers "end-to-end", while other operations work with a single register. The division by 10, which is the real crux here, then becomes easier. There are similar operations in the Intel world, too, but my assembler is far too rusty to attempt programming such an algorithm.
As you note, R.J. Mitchell's algorithm is interesting, but hardly of any practical use (other than as an assignment to some poor unsuspecting Computer Science class).
Regards
Robert Thompson

November 21st, 2004, 07:13 PM
#10
Re: Binary to ASCII number conversion algorithms
I hadn't thought of it that way. That's very straightforward. So then the only real optimization point is to divide by 10 to get a quotient and remainder.

June 2nd, 2007, 04:59 PM
#11
Re: Binary to ASCII number conversion algorithms
#include <stdio.h>

int main(void)
{
    char buffer[8] = "0010000";   /* binary digits as text */
    int iVal = 0;
    for (int i = 0; i < 7; i++)
    {
        iVal = (iVal << 1) + (buffer[i] & 1);
    }
    printf("%d\n", iVal);         /* prints 16 */
    return 0;
}