JavaScript Typed Arrays and Endianness

Bob · Oct 24, 2011

I'm using WebGL to render a binary-encoded mesh file. The binary file is written out in big-endian format (I can verify this by opening the file in a hex editor or by viewing the network traffic in Fiddler). When I try to read the binary response using a Float32Array or Int32Array, the bytes are interpreted as little-endian and my values are wrong:

// Interpret the first 32 bits of the buffer as an int
var wrongValue = new Int32Array(binaryArrayBuffer)[0];
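
To make the mismatch concrete, here is a small made-up example (the bytes below are not from my actual mesh file): the same four bytes decode to very different integers depending on the order in which they are read.

// Four bytes 00 00 00 01, which a big-endian writer uses to encode the integer 1
var bytes = new Uint8Array([0x00, 0x00, 0x00, 0x01]);

// On little-endian hardware the bytes are taken in reverse order,
// so this yields 16777216 (0x01000000) instead of 1
var misreadValue = new Int32Array(bytes.buffer)[0];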

I can't find any reference to the default endianness of typed arrays in http://www.khronos.org/registry/typedarray/specs/latest/, so I'm wondering what the deal is. Should I assume that all binary data should be little-endian when read through typed arrays?

To get around the problem I can use a DataView object (discussed in the previous link) and call:

// Interpret the first 32 bits of the buffer as an int
var correctValue = new DataView(binaryArrayBuffer).getInt32(0);

DataView accessor functions such as getInt32 read big-endian values by default.
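
For completeness, getInt32 and the other getters also take an optional littleEndian flag, so the byte order can be stated explicitly. A rough sketch of reading a run of big-endian floats this way (vertexCount and the zero offset are placeholders, not my real file layout):

var view = new DataView(binaryArrayBuffer);

// Explicit big-endian read (false is the default)
var bigEndianInt = view.getInt32(0, false);

// The same bytes read as little-endian instead
var littleEndianInt = view.getInt32(0, true);

// Copy a block of big-endian floats (e.g. vertex positions) into a typed array
var floats = new Float32Array(vertexCount * 3); // vertexCount is hypothetical
for (var i = 0; i < floats.length; i++) {
    floats[i] = view.getFloat32(i * 4, false);
}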

(Note: I've tested this in Google Chrome 15 and Firefox 8, and they both behave the same way.)

Answer

gsnedders · Oct 24, 2011

The current behaviour is determined by the endianness of the underlying hardware. As almost all desktop computers are x86, this means little-endian. Most ARM OSes also use little-endian mode (ARM processors are bi-endian and can operate in either).
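
If you need to know at runtime which byte order typed arrays will use on the current machine, one common sketch is to write a known 32-bit value and look at its first byte:

// Probe the host byte order by viewing the same buffer two ways
var probe = new Uint32Array([0x12345678]);
var firstByte = new Uint8Array(probe.buffer)[0];

// 0x78 first means the low byte comes first (little-endian);
// 0x12 first would mean big-endian
var isLittleEndian = (firstByte === 0x78);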

The reason this is somewhat sad is twofold: it means almost nobody will test whether their code works on big-endian hardware, which hurts the code that does run there, and the entire web platform was designed around code working uniformly across implementations and platforms, which this behaviour breaks.