Since SE 7, Java allows values to be specified as binary literals. The documentation tells me byte is a type that can hold 8 bits of information, the values -128 to 127.
Now, I don't know why, but I can only define 7 bits rather than 8 when I try to assign a binary literal to a byte in Java, as follows:
byte b = 0b000_0000;  // evaluates to the value 0
byte b1 = 0b000_0001; // evaluates to the value 1
byte b3 = 0b000_0010; // evaluates to the value 2
byte b4 = 0b000_0011; // evaluates to the value 3
And so on, until we get to the last few possibilities using those 7 bits:
byte b5 = 0b011_1111; // evaluates to the value 63
byte b6 = 0b111_1111; // evaluates to the value 127
If I want negative numbers, I have to add a leading - in front, like this:
byte b7 = -0b111_1111; // evaluates to the value -127
Now, half of my problem is that I can use just 7 bits to describe what I'm told is an 8-bit data type. The second half is that the literals don't seem to be treated as two's complement unless I use the 32-bit int type, where I can define all 32 bits (the "sign indicator bit" included).
Now, when I searched for how to write the in-range number -128, I was told to do it this way, without any further explanation:
byte b8 = 0b1111_1111_1111_1111_1111_1111_1000_0000;
I can clearly see that the last 8 bits (1000 0000) do represent -128 in two's complement using 8 bits, but I've never been more confused, so I'll ask my question in general form: why do I have to assign it this way?
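For reference, a quick standalone check I put together (the class name BitCheck is just mine) confirms that this 32-bit pattern really is the int value -128:

```java
public class BitCheck {
    static int fullPattern() {
        // All 32 bits written out; the leading 1 makes the int negative.
        return 0b1111_1111_1111_1111_1111_1111_1000_0000;
    }

    public static void main(String[] args) {
        System.out.println(fullPattern()); // prints -128
    }
}
```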
Any links or information about this would be great! Thank you for taking the time to read this, and thanks in advance for any further information.
Regards, Jan
According to the Java Specification,
http://docs.oracle.com/javase/specs/jls/se7/html/jls-3.html#jls-3.10.1
all your declarations (b, b1, ..., and b8) use int literals, even when the values would fit in a byte. There is no byte literal in Java; you can only use an int to initialize a byte.
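To illustrate that point (the class name ByteInit and the variable names are mine): a constant int expression whose value fits in -128..127 is narrowed to byte implicitly, while an out-of-range constant is a compile-time error:

```java
public class ByteInit {
    static byte maxByte() {
        byte ok = 0b0111_1111; // the int literal 127 fits, so it narrows implicitly
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(maxByte()); // prints 127
        // byte tooBig = 0b1000_0000;  // would not compile: the int value is +128
    }
}
```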
I did some tests, and byte neg128 = -0b1000_0000; works fine. 0b1000_0000 is 128, so you just need to put a - sign before it. Notice that that 1 is not a sign bit at all (don't think about 8-bit bytes; think about 32-bit ints converted to bytes). So if you want to specify the sign bit yourself, you need to write all 32 bits, as you have demonstrated.
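A small sketch of that point (the class name SignDemo is mine): the leading 1 of 0b1000_0000 is just bit 7 of a 32-bit int, so the literal is +128, and negating it yields a constant that fits in a byte:

```java
public class SignDemo {
    static int asInt() {
        return 0b1000_0000;    // bit 7 set in a 32-bit int: value +128, not negative
    }

    static byte negated() {
        byte b = -0b1000_0000; // the constant -128 is in range, so it narrows implicitly
        return b;
    }

    public static void main(String[] args) {
        System.out.println(asInt());   // prints 128
        System.out.println(negated()); // prints -128
    }
}
```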
So byte b8 = 0b1000_0000; is an error, just like byte b8 = 128; is an error (+128 does not fit in a byte). You can also force the conversion with a cast:
byte b = (byte) 0b1000_0000;
or
byte b = (byte) 128;
The cast tells the compiler that you know 128 does not fit in a byte and the bit-pattern will be reinterpreted as -128.
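A sketch of what the cast does (the class name CastDemo and the & 0xFF check are mine): the cast keeps only the low 8 bits of the int, and that bit pattern read as a signed byte is -128:

```java
public class CastDemo {
    static byte forced() {
        return (byte) 0b1000_0000; // int 128 truncated to its low 8 bits: 1000_0000
    }

    public static void main(String[] args) {
        byte b = forced();
        System.out.println(b);        // prints -128 (two's-complement reading)
        System.out.println(b & 0xFF); // prints 128 (the same bits read as unsigned)
    }
}
```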