Ideally, the following code would take a float and print its IEEE 754 representation in hexadecimal:
void convert() //gets the float input from user and turns it into hexadecimal
{
float f;
printf("Enter float: ");
scanf("%f", &f);
printf("hex is %x", f);
}
I'm not too sure what's going wrong. It's converting the number into a hexadecimal number, but a very wrong one.
123.1443 gives 40000000
43.3 gives 60000000
8 gives 0
So it's doing something; I'm just not sure what.
Help would be appreciated.
When you pass a float as an argument to a variadic function (like printf()), it is promoted to a double, which is twice as large as a float (at least on most platforms).
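You can see the size mismatch for yourself; on a typical platform the following prints 4, 8 and 4, though none of these sizes are guaranteed by the standard:
#include <stdio.h>

int main(void)
{
    /* A float passed through "..." is promoted to double, so printf()
       receives a double-sized argument while "%x" expects an unsigned int. */
    printf("sizeof(float)        = %zu\n", sizeof(float));
    printf("sizeof(double)       = %zu\n", sizeof(double));
    printf("sizeof(unsigned int) = %zu\n", sizeof(unsigned int));
    return 0;
}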
One way to get around this would be to reinterpret the float's bytes as an unsigned int (by casting its address) when passing it to printf():
printf("hex is %x", *(unsigned int*)&f);
This is also more correct, since printf() uses the format specifiers to determine how large each argument is.
Technically, this solution violates the strict aliasing rule. You can get around this by copying the bytes of the float into an unsigned int and then passing that to printf():
unsigned int ui;
memcpy(&ui, &f, sizeof (ui));
printf("hex is %x", ui);
Both of these solutions are based on the assumption that sizeof(int) == sizeof(float), which is the case on many 32-bit systems but isn't guaranteed.
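If you'd rather not rely on that assumption, you can copy into a fixed-width uint32_t and check the sizes at compile time instead. A sketch, assuming C11 (for static_assert) and <inttypes.h> (for the PRIx32 format macro); it drops in as a replacement for the convert() shown above:
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

void convert(void)
{
    float f;
    uint32_t bits;

    /* refuse to compile on a platform where float is not 32 bits wide */
    static_assert(sizeof f == sizeof bits, "float is not 32 bits");

    printf("Enter float: ");
    if (scanf("%f", &f) != 1)
        return; /* bail out on invalid input */

    memcpy(&bits, &f, sizeof bits);       /* copy the raw bit pattern */
    printf("hex is %" PRIx32 "\n", bits); /* PRIx32 matches uint32_t exactly */
}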