-
December 29th, 2010, 07:15 PM
#1
Decimal to hex conversion - problems in code
Hi!
I'm trying to write a program that converts an inputted decimal number into its hex equivalent. Here is the source code along with the .h files:
Code:
.286
.model small
datas SEGMENT byte PUBLIC 'data'
newl DB 10,13,'$'
msg_hex DB "Hexa: $"
msg_zec DB "Dec : $"
tabela DB '0123456789ABCDEF'
bufin DB 6, 7 DUP (?)
zece Dw 10
datas ENDS
stacks SEGMENT PARA STACK 'stack'
db 256 dup(?)
stacks ENDS
codes SEGMENT BYTE PUBLIC 'code'
assume cs:codes,ss:stacks,ds:datas
mov ax,datas
mov ds,ax
jmp start
include cio.h
include hexa.h
include citcuv.h
include asciitab.h
start:
citzec:
outstr msg_zec
call citcuv
outstr newl
mov cx, 5 ;5 decimal digits to process
xor ax, ax
mov si, offset bufin + 2 ;skip the buffer's max-length and count bytes
trans:
mul zece ;total so far * 10 (the high word in DX is discarded)
mov dx,ax ;park the scaled total in DX
xor ax,ax
lods bufin ;fetch the next ASCII digit into AL
cmp al, '0'
jl citzec ;not a decimal digit - start over
cmp al, '9'
jg citzec
sub al, '0' ;ASCII -> binary
add dx,ax ;add the new digit to the running total
mov ax,dx
loop trans
afisare:
outstr msg_hex
mov dx,ax
xor ax,ax
mov al, dh
call ascii
co ah
co al
xor ax,ax
mov al, dl
call ascii
co ah
co al
outstr newl
final:
mov ax,4c00h
int 21h
codes ENDS
end
cio.h:
Code:
;reading an input character (returned in AL)
ci macro
mov ah,1
int 21h
endm
;reading an input string into the buffer whose offset is passed as the parameter (e.g. bufin)
in_str macro even
pusha
mov ah,0ah
lea dx,even
int 21h
popa
endm
;outputting a character
co macro even
pusha
mov ah,2
mov dl,even
int 21h
popa
endm
;outputting a $-terminated string starting at the offset passed as the parameter
outstr macro even
pusha
mov ah,9
lea dx,even
int 21h
popa
endm
hexa.h:
Code:
;the routine converts an ASCII character ('0'...'9', 'A'...'F', 'a'...'f') into a hex digit (0...0Fh)
;in: AL = the ASCII character to be converted
;out: CF = 1: invalid character
; CF = 0: CX (CL) = the corresponding hex digit
tabconvinv db 'fedcbaFEDCBA9876543210'
hexa PROC
cld
mov cx,22
push cs
pop ES
lea DI,tabconvinv
REPNE scasb
JZ cont
STC
RET
cont:
CMP CX,0FH ;after REPNE SCASB, CX = 21 - table index of the match
JBE OK ;0...0Fh is already the digit value
SUB CX,6 ;16...21 were lowercase hits - fold them onto 0Ah...0Fh
ok:
CLC
RET
hexa ENDP
citcuv.h:
Code:
;the routine transforms an ASCII string into a 16-bit word
;the routine requires a bufin buffer declared in the data segment:
;BUFIN DB 6, 7 DUP (?)
;out: CF = 1: invalid character
; CF = 0: DX = word
citcuv proc
in_str bufin
mov cl,bufin+1
xor ch,ch
xor dx,dx
cld
lea si,bufin+2
altcar:
push cx
lodsb
call hexa ;CX = 000h
jnc oka
ret
oka:
shl dx,4 ;DX = xxx0
or dx,cx ;DX = xxxh
pop cx
loop altcar
clc
ret
citcuv endp
asciitab.h:
Code:
;the routine converts a hex digit (0...0Fh) into the corresponding ASCII character ('0'...'9' = 30h...39h; 'A'...'F' = 41h...46h)
;the routine uses a conversion table like the one declared in the data segment:
;tabela db '0123456789ABCDEF'
;in: AL = x:h (only the low nibble is used)
;out: AL = the ASCII character
ascii proc
and al,0fh
lea bx,tabela
xlat
ret
ascii endp
The decimal numbers are inputted as 5 digits: 10 = 00010; 142 = 00142; 67528 = 67528
The problems I've encountered are:
1. The conversion seems to be going well, but only the second and fourth hex characters are outputted correctly
2. How do I treat numbers greater than 65535? This code outputs 4 hex digits, whereas numbers greater than 65535 need 5 hex digits to be outputted
I appreciate any type of help.
-
December 29th, 2010, 08:15 PM
#2
Re: Decimal to hex conversion - problems in code
Sorry, I've noticed a few mistakes I made while writing the post.
First of all, in the source code, I forgot to include bascii.h. Also, this is the correct code for 'afisare':
Code:
afisare:
outstr msg_hex
mov dx,ax
xor ax,ax
mov al, dh
call bascii
co ah
co al
xor ax,ax
mov al, dl
call bascii
co ah
co al
outstr newl
bascii.h:
Code:
;the routine converts a byte (two hex digits) into the two corresponding ASCII characters
;in: AL = h:l
;out:AX = AH:AL - 2 ASCII characters
bascii proc
mov ah,al
shr al,4
call ascii
xchg ah,al
call ascii
ret
bascii endp
Again, I'm sorry. And please help me.
-
December 30th, 2010, 10:23 AM
#3
Re: Decimal to hex conversion - problems in code
Originally Posted by kamina
1. The conversion seems to be going well, but only the second and fourth hex characters are outputted correctly
This one is easy. You are outputting your hex digits by invoking the macro co like this:
Code:
xor ax,ax
mov al, dh
call bascii
co ah
co al
The first one of these macro invocations gets expanded to this:
Code:
;outputting a character
co macro even ; even is ah here
pusha
mov ah,2
mov dl,ah
int 21h
popa
endm
It should be obvious that AH does not hold the hex digit anymore at the point you access it: the mov ah,2 that selects the DOS output function overwrites it before mov dl,ah is reached, so what gets printed is the character with code 2. The second invocation, co al, is unaffected because AL is never touched inside the macro, and that is exactly why only every second hex character comes out right.
2. How do I treat numbers greater than 65535? This code outputs 4 hex digits, whereas numbers greater than 65535 need 5 hex digits to be outputted
This one is not that easy. As this is 16-bit code, the biggest integer you can comfortably handle in one go is 16 bits wide, i.e. four hex digits (65536 already is 10000h, five digits). To handle more, 32-bit arithmetic would be the natural choice, and your code would need substantial changes for that. (But it's not really horribly complicated either...)
HTH
Ah, and... Welcome to CodeGuru!
-
December 30th, 2010, 12:11 PM
#4
Re: Decimal to hex conversion - problems in code
You're right, I missed that. Thanks a lot. (Thank you for the welcome as well.) I made sure to save AH's content in DH and invoke the macro for DH.
Now, what changes do I have to make in the program to convert numbers greater than 65535?
-
December 30th, 2010, 03:47 PM
#5
Re: Decimal to hex conversion - problems in code
Originally Posted by kamina
Now, what changes do I have to make in the program to convert numbers greater than 65535?
Simplest part first: the output. Outputting a 32-bit hex number is about as simple as outputting a 16-bit one. Simply output the high-order 16 bits first, then the low-order 16 bits. If you don't leave any space between them, you have a 32-bit hex number. Et voilà!
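Sketched in C++ for illustration (out_hex16 and out_hex32 are made-up names standing in for your bascii/co output path, not anything in your program), the output order looks like this:
Code:
#include <cstdint>
#include <cstdio>

// Stand-in for the existing 16-bit output path (bascii + co):
// prints a word as 4 hex digits, most significant first.
void out_hex16(uint16_t w)
{
    const char tabela[] = "0123456789ABCDEF"; // same table as in your data segment
    for (int shift = 12; shift >= 0; shift -= 4)
        std::putchar(tabela[(w >> shift) & 0xF]);
}

// 32-bit output = the high-order word first, then the low-order word,
// with nothing in between.
void out_hex32(uint32_t n)
{
    out_hex16((uint16_t)(n >> 16));
    out_hex16((uint16_t)(n & 0xFFFF));
}

int main()
{
    out_hex32(67528); // prints 000107C8
    return 0;
}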
Converting the input to a 32-bit binary number is a bit trickier: the MUL instruction you use already gives you a 32-bit result in DX:AX, but that's not enough yet, because it takes two 16-bit numbers as input, while you need an operation that multiplies a 32-bit number by a 16-bit number, giving a 32-bit result. Unfortunately, there is no such instruction in 16-bit x86 assembly. Therefore you need to multiply the high-order and low-order 16 bits of the 32-bit input value separately and add the partial results together. As I think it is about the same effort as explaining it, I modeled such a multiplication for you that you can use as a template. I did that in C++ inline assembler, though, to avoid the effort of using the actual assembler.
Code:
#define HIWORD(n) ((unsigned __int16)((n) >> 16))
#define LOWORD(n) ((unsigned __int16)((n) & 0xFFFF))
unsigned __int32 mul_32bit_by_16bit(unsigned __int32 a, unsigned __int16 b)
{
unsigned __int16 a_hi = HIWORD(a), a_lo = LOWORD(a);
unsigned __int32 result;
__asm {
xor dx,dx
mov ax,a_hi
mul b // Multiply high-order word
mov bx,ax // Put aside partial result in BX
xor dx,dx // Should be 0 anyway unless we hit an overflow, but to be sure...
mov ax,a_lo
mul b // Multiply low-order word
add dx,bx // Join the two partial results
// 32-bit result now is in DX:AX
shl edx,16 // Join result into 32-bit register for C++ function return
and eax,0FFFFh
or eax,edx
mov result,eax
}
return result;
}
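If you want to cross-check the template, the same computation restated in portable C++ (just a reference sketch, separate from the inline-assembler version above) would be:
Code:
#include <cstdint>

// Split a into 16-bit halves, multiply each by b, and join the
// partial results, keeping only the low 32 bits - exactly what the
// DX:AX code above computes.
uint32_t mul_32bit_by_16bit_c(uint32_t a, uint16_t b)
{
    uint16_t a_hi = (uint16_t)(a >> 16);
    uint16_t a_lo = (uint16_t)(a & 0xFFFF);
    uint32_t lo = (uint32_t)a_lo * b; // low partial product (the DX:AX result)
    uint32_t hi = (uint32_t)a_hi * b; // high partial product; only its low word survives
    return lo + (hi << 16);           // join, discarding anything past bit 31
}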
Of course, all that would be much simpler if you could use the 32-bit registers, which are seamlessly available in 16-bit code as well, as long as the CPU supports them; but the .286 directive in your code seems to imply that this is not an option.
Keeping track of a 32-bit total instead of a 16-bit one occupies more registers of course, so you may need to either use the index registers SI and DI in addition to the GP registers or hold some temporary values in RAM storage.
And of course you need to increase the maximum number of decimal input digits that can be processed to 10 if you want to cover the entire 32-bit range.
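In C++ terms, the accumulation step with a 32-bit total would amount to something like this sketch (dec10_to_bin is a made-up name, and it assumes the digits were already validated the way your trans loop validates them). The DOS 0Ah input buffer would have to grow accordingly, e.g. something like BUFIN DB 11, 12 DUP (?) instead of the current 6-character one.
Code:
#include <cstdint>

// 32-bit version of the mul zece / add-digit step, for up to 10 decimal digits.
// Assumes s points at validated ASCII digits ('0'...'9'), NUL-terminated.
uint32_t dec10_to_bin(const char* s)
{
    uint32_t total = 0;
    for (int i = 0; i < 10 && s[i] != '\0'; ++i)
        total = total * 10 + (uint32_t)(s[i] - '0'); // same multiply-and-add, now 32 bits wide
    return total;
}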
-
January 2nd, 2011, 06:22 PM
#6
Re: Decimal to hex conversion - problems in code
I decided to work with the 32-bit registers and, with the help of a friend, I managed to solve the problem (I even made it work for negative numbers as well). I have, however, one (stupid) question. As you can see, the impart sequence and the preafisare sequence are identical. My question is: what does the hex sequence do to the initial number? I tried removing it, or merging the code between the two identical sequences, but that was useless: the program simply outputted the number in its decimal form (no hex conversion).
Here is the source code:
Code:
start:
MOV AX, datas
MOV DS, AX
zechex:
outstr msg_zec
mov ah, 1
ci
mov dx, '1'
mov semn, dx
cmp al, '-'
je citire
mov dx, '0'
mov semn, dx
cmp al, '+'
jne zechex
citire:
call citcuvd
mov eax, edx
impart:
xor edx, edx
div saisprezece
inc nr_elem
push edx
cmp eax, 0
je beginHex
jmp impart
beginHex:
xor eax, eax
hex:
mul zece
pop edx
add eax, edx
dec nr_elem
cmp nr_elem, 0
je beginAfisare
jmp hex
beginAfisare:
outstr newl
outstr msg_hex
preafisare:
xor edx, edx
div saisprezece
inc nr_elem
push edx
cmp eax, 0
je afisare
jmp preafisare
afisare:
pop edx
mov eax, edx
cmp semn, '1'
jne nr_pozitiv
not al
nr_pozitiv:
call ascii
cmp semn, '1'
jne continue
cmp nr_elem, 1
jne continue
add al, 1
continue:
co al
dec nr_elem
cmp nr_elem, 0
je sfarsit
jmp afisare
sfarsit:
mov ah, 4ch
int 21h
-
January 2nd, 2011, 07:41 PM
#7
Re: Decimal to hex conversion - problems in code
Unfortunately, your code is incomplete, entirely lacks comments (which can be particularly helpful in assembly language programs), and I don't understand the meaning of many of your labels (but that may be my fault).
Originally Posted by kamina
My question is: what does the hex sequence do to the initial number?
Apparently it pops nr_elem "elements" off the stack. These "elements" seem to be decimal digits that have previously been converted from ASCII to binary (based on the assumption that zece still holds the value 10, like it did in your initial post). The number those digits represent is then accumulated into a single binary value. Converting that binary value to hex afterwards is almost trivial, because the numerical bases of both systems are powers of two.
Maybe the code is a bit more complicated than it'd need to be because it is intended to be able to convert numbers with an arbitrary base to hex (or even another arbitrary base)?
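Modeled in C++ (just a sketch of what the impart and hex loops appear to compute; rebase_16_to_10 is a made-up name, and it rests on the assumption that citcuvd accumulated the decimal string with base 16 the way citcuv did in the first post):
Code:
#include <cstdint>

// impart: divide by 16 and push the remainders -> recovers the original digits;
// hex:    pop them again and accumulate with base 10 -> the true binary value.
uint32_t rebase_16_to_10(uint32_t x)
{
    uint32_t digits[16];  // more than enough for a 32-bit value
    int n = 0;
    do {                  // the impart loop (div saisprezece / push edx)
        digits[n++] = x % 16;
        x /= 16;
    } while (x != 0);

    uint32_t total = 0;
    while (n > 0)         // the hex loop (mul zece / pop edx / add)
        total = total * 10 + digits[--n];
    return total;
}
// Example: rebase_16_to_10(0x142) == 142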
Essentially, if this really is your code, then you should be the first one to at least know what it should do...