In Java, is it more efficient to use byte or short instead of int, and float instead of double?

DisibioAaron · Jan 25, 2013 · Viewed 41k times

I've noticed I've always used ints and doubles no matter how small or big the number needs to be. So in Java, is it more efficient to use byte or short instead of int, and float instead of double?

So assume I have a program with plenty of ints and doubles. Would it be worth going through and changing my ints to bytes or shorts if I knew the number would fit?

I know Java doesn't have unsigned types, but is there anything extra I could do if I knew the number would be positive only?

By efficient I mostly mean processing. I'd assume the garbage collector would be a lot faster if all the variables were half the size, and that calculations would probably be somewhat faster too. (I guess since I am working on Android I need to worry somewhat about RAM too.)

(I'd assume the garbage collector only deals with objects and not primitives, but it still deletes all the primitives in abandoned objects, right?)

I tried it with a small Android app I have but didn't really notice a difference at all. (Though I didn't "scientifically" measure anything.)

Am I wrong in assuming it should be faster and more efficient? I'd hate to go through and change everything in a massive program to find out I wasted my time.

Would it be worth doing from the beginning when I start a new project? (I mean, I think every little bit would help, but then again, if so, why doesn't it seem like anyone does it?)

Answer

Stephen C · Jan 26, 2013

Am I wrong in assuming it should be faster and more efficient? I'd hate to go through and change everything in a massive program to find out I wasted my time.

Short answer

Yes, you are wrong. In most cases, it makes little difference in terms of space used.

It is not worth trying to optimize this ... unless you have clear evidence that optimization is needed. And if you do need to optimize memory usage of object fields in particular, you will probably need to take other (more effective) measures.

Longer answer

The Java Virtual Machine models stacks and object fields using offsets that are (in effect) multiples of a 32-bit primitive cell size. So when you declare a local variable or object field as (say) a byte, the variable/field will be stored in a 32-bit cell, just like an int.
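As a rough illustration (the class and method names below are mine, purely hypothetical), compiling this and disassembling it with `javap -c` shows the byte version using the same int-width load and add opcodes as the int version, plus one extra narrowing instruction:

```java
// Sketch only: compile with javac and inspect with `javap -c`.
class CellDemo {

    // Bytecode: iload_0, iload_1, iadd, ireturn
    static int addInts(int a, int b) {
        return a + b;
    }

    // Bytecode: iload_0, iload_1, iadd, i2b, ireturn
    // The byte operands are loaded and added as 32-bit ints;
    // only the final i2b narrows the result back to a byte.
    static byte addBytes(byte a, byte b) {
        return (byte) (a + b);
    }
}
```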

There are two exceptions to this:

  • long and double values require 2 primitive 32-bit cells
  • arrays of primitive types are represented in packed form, so that (for example) an array of bytes holds 4 bytes per 32-bit word.

So it might be worth optimizing use of long and double ... and large arrays of primitives. But in general no.
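For the array case, a crude way to see the packing effect is to compare heap usage before and after allocating large primitive arrays. This is my own sketch, not a rigorous measurement, and the figures in the comments are only what a typical JVM would be expected to report:

```java
// Sketch only: a crude heap-usage comparison of byte[] vs int[].
// Results vary by JVM and GC behavior; this is not a proper benchmark.
public class ArrayFootprint {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        long before = usedMemory(rt);
        byte[] bytes = new byte[10_000_000];   // packed: roughly 10 MB
        long afterBytes = usedMemory(rt);

        int[] ints = new int[10_000_000];      // roughly 40 MB
        long afterInts = usedMemory(rt);

        System.out.println("byte[] ~ " + (afterBytes - before) / 1_000_000 + " MB");
        System.out.println("int[]  ~ " + (afterInts - afterBytes) / 1_000_000 + " MB");

        // Keep the arrays reachable so they aren't collected before measuring.
        System.out.println(bytes.length + ", " + ints.length);
    }

    private static long usedMemory(Runtime rt) {
        System.gc();  // a hint only; makes the numbers slightly less noisy
        return rt.totalMemory() - rt.freeMemory();
    }
}
```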

In theory, a JIT might be able to optimize this, but in practice I've never heard of a JIT that does. One impediment is that the JIT typically cannot run until after instances of the class being compiled have been created. If the JIT optimized the memory layout, you could have two (or more) "flavors" of object of the same class ... and that would present huge difficulties.


Revisitation

Looking at the benchmark results in @meriton's answer, it appears that using short and byte instead of int incurs a performance penalty for multiplication. Indeed, if you consider the operations in isolation, the penalty is significant. (You shouldn't consider them in isolation ... but that's another topic.)

I think the explanation is that the JIT is probably doing the multiplications using 32-bit multiply instructions in each case. But in the byte and short cases, it executes extra instructions to convert the intermediate 32-bit value to a byte or short on each loop iteration. (In theory, that conversion could be done once at the end of the loop ... but I doubt that the optimizer would be able to figure that out.)
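A minimal sketch of the kind of loop being discussed (my own code, not @meriton's actual benchmark); note that the short version even requires an explicit cast in the source, because the result of the `*` operator is an int:

```java
// Sketch only: illustrates the narrowing conversion in short/byte arithmetic.
public class NarrowingExample {

    static int multiplyInts(int iterations) {
        int acc = 1;
        for (int i = 0; i < iterations; i++) {
            acc = acc * 31;              // plain 32-bit multiply
        }
        return acc;
    }

    static short multiplyShorts(int iterations) {
        short acc = 1;
        for (int i = 0; i < iterations; i++) {
            acc = (short) (acc * 31);    // 32-bit multiply + narrowing (i2s) every iteration
        }
        return acc;
    }
}
```

The cast is what compiles to the extra narrowing instruction (i2s here, or i2b for byte) that runs on every pass through the loop.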

Anyway, this does point to another problem with switching to short and byte as an optimization. It could make performance worse ... in an algorithm that is arithmetic- and compute-intensive.