Why are TypedArrays not faster than regular arrays? I want to use precalculated values for CLZ (count leading zeros), and I don't want them to be treated as ordinary objects.
http://jsperf.com/array-access-speed-2/2
Preparation code:
Benchmark.prototype.setup = function() {
    var buffer = new ArrayBuffer(0x10000);
    var Uint32 = new Uint32Array(buffer);
    var arr = [];
    for (var i = 0; i < 0x10000; ++i) {
        Uint32[i] = (Math.random() * 0x100000000) | 0;
        arr[i] = Uint32[i];
    }
    var sum = 0;
};
Test 1:
sum = arr[(Math.random() * 0x10000) | 0];
Test 2:
sum = Uint32[(Math.random() * 0x10000) | 0];
PS: Maybe my perf tests are invalid; feel free to correct me.
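For reference, the precalculated CLZ table I have in mind is something like the following (an illustrative sketch only; the actual table and lookup function aren't shown here):

var clzTable = new Uint8Array(0x10000); // CLZ for every 16-bit value
for (var i = 1; i < 0x10000; ++i) {
    var n = 0, v = i;
    while (!(v & 0x8000)) { v <<= 1; ++n; } // count leading zeros within 16 bits
    clzTable[i] = n;
}
clzTable[0] = 16; // by convention, clz of 0 is the full width

function clz32(x) {
    x = x >>> 0;
    return (x >>> 16) ? clzTable[x >>> 16] : 16 + clzTable[x & 0xFFFF];
}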
Modern engines will use true arrays behind the scenes even if you use Array, so long as they think they can, falling back on property map "arrays" if you do something that makes them think they can't use a true array.
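The exact heuristics vary by engine and version, but roughly, the distinction looks like this (an illustrative sketch, not a guarantee of any particular engine's behavior):

var fast = [];
for (var i = 0; i < 1000; ++i) fast[i] = i; // dense, in-order integer indices:
                                            // typically keeps the true-array backing

var slow = [];
slow[0] = 1;
slow[1000000] = 2; // a huge gap like this typically pushes the engine toward
                   // a sparse, property-map style representation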
Also note that, as radsoc points out, var buffer = new ArrayBuffer(0x10000) followed by var Uint32 = new Uint32Array(buffer) produces a Uint32Array whose length is 0x4000 (0x10000 / 4), not 0x10000, because the value you give ArrayBuffer is in bytes, and there are four bytes per Uint32Array entry. Everything below uses new Uint32Array(0x10000) instead (and always did, even prior to this edit) to compare apples with apples.
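To see that size difference concretely:

var fromBuffer = new Uint32Array(new ArrayBuffer(0x10000));
console.log(fromBuffer.length);  // 0x4000 (0x10000 bytes / 4 bytes per element)
var direct = new Uint32Array(0x10000);
console.log(direct.length);      // 0x10000 elements
console.log(direct.byteLength);  // 0x40000 bytes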
So let's start there, with new Uint32Array(0x10000)
: http://jsperf.com/array-access-speed-2/11 (sadly, JSPerf has lost this test and its results, and is now offline entirely)
That suggests that because you're filling the array in a simple, predictable way, a modern engine continues to use a true array (with the performance benefits thereof) under the covers rather than shifting over. We see basically the same performance for both. The small difference in speed could relate to the type conversion involved in taking the Uint32 value and assigning it to sum as a number (though I'd be surprised if that conversion isn't deferred...).
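Since that jsperf page is gone, here's roughly what the comparison looked like, reconstructed from the description above (not the exact test code):

// Setup: the question's predictable fill, but with a correctly sized typed array
var Uint32 = new Uint32Array(0x10000);
var arr = [];
for (var i = 0; i < 0x10000; ++i) {
    Uint32[i] = (Math.random() * 0x100000000) | 0;
    arr[i] = Uint32[i];
}
var sum = 0;

// Test 1 (standard array):
sum = arr[(Math.random() * 0x10000) | 0];

// Test 2 (typed array):
sum = Uint32[(Math.random() * 0x10000) | 0];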
Add some chaos, though:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
for (var i = 0x10000 - 1; i >= 0; --i) {
    Uint32[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
    arr[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
...so that the engine has to fall back on old-fashioned property map "arrays," and you see that typed arrays markedly outperform the old-fashioned kind: http://jsperf.com/array-access-speed-2/3 (sadly, JSPerf has lost this test and its results)
Clever, these JavaScript engine engineers...
Exactly what you do that relies on the non-array nature of the Array matters, though; consider:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
arr.foo = "bar"; // <== Non-element property
for (var i = 0; i < 0x10000; ++i) {
    Uint32[i] = (Math.random() * 0x100000000) | 0;
    arr[i] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
That's still filling the array predictably, but we add a non-element property (foo) to it. http://jsperf.com/array-access-speed-2/4 (sadly, JSPerf has lost this test and its results) Apparently, engines are quite clever, and keep that non-element property off to the side while continuing to use a true array for the element properties.
I'm at a bit of a loss to explain why standard arrays should get faster there compared to our first test above. Measurement error? Vagaries in Math.random? But we can be fairly sure the array-specific data in the Array is still backed by a true array.
Whereas if we do the same thing but fill in reverse order:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
arr.foo = "bar"; // <== Non-element property
for (var i = 0x10000 - 1; i >= 0; --i) { // <== Reverse order
    Uint32[i] = (Math.random() * 0x100000000) | 0;
    arr[i] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
...we get back to typed arrays winning out — except on IE11: http://jsperf.com/array-access-speed-2/9 (sadly, JSPerf has lost this test and its results)