What will the policy on memory allocation limits be for WebAssembly programs?
Will the current (hard) JavaScript engine memory limits be inherited? E.g. will it be possible to write real applications that need more than a few hundred megabytes of memory?
Current browser policies on memory allocation in JavaScript pose hard constraints on what is actually doable in a browser. Speed is no longer a problem thanks to Emscripten/asm.js and JIT compiling, but the memory constraints make it hard or impossible to build any serious application in a browser.
See for example http://www.meshlabjs.net, the run-in-browser version of the MeshLab mesh processing system. Compared to the desktop application, the main limit is that, in the JavaScript-based version, large 3D models cannot be loaded because of the intrinsic limits on allocation imposed by the browsers' JS engines.
WebAssembly has a `WebAssembly.Memory` object and the binary has a memory section. Through these, a developer provides educated guesses about minimum and maximum memory usage; the VM then allocates at least the minimum (or fails). A developer can then, at runtime, ask for more through `grow_memory`, which tools like Emscripten use under the hood to implement `malloc` (it's somewhat similar to `sbrk`).
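As a concrete illustration, here's a minimal sketch using the standard JavaScript API (`WebAssembly.Memory`, its `grow` method, and the 64KiB page size are from the spec; the specific sizes are just illustrative guesses):

```typescript
// Minimal sketch of the JS-side API described above. Sizes are in
// 64KiB wasm pages; the numbers are illustrative guesses.
const memory = new WebAssembly.Memory({
  initial: 256,  // minimum: 16MiB, allocated right away (or the constructor throws)
  maximum: 4096, // maximum: 256MiB, reserved but not yet committed
});

console.log(memory.buffer.byteLength); // 16777216: the minimum was allocated

// The runtime analogue of grow_memory: ask for 256 more pages (16MiB).
// grow() returns the previous size in pages and throws if it fails; it
// also detaches the old buffer, so re-read memory.buffer afterwards.
const previousPages = memory.grow(256);
console.log(previousPages, memory.buffer.byteLength); // 256, 33554432
```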
For asm.js it was difficult to know how the `ArrayBuffer` was going to be used, and on some 32-bit platforms you often ran into process fragmentation, which made it hard to allocate enough contiguous space in the process' virtual memory (the `ArrayBuffer` must be contiguous in the browser process' virtual address space, otherwise you'd take a huge performance hit). You'd try to allocate 256MiB and sometimes hard-fail. This got extremely difficult if the browser wasn't multi-process, because all the other tabs were competing for the same 32 bits of virtual address space. Browsers were a bit silly a few years ago and have since gotten better, but 32 bits isn't much to go around.
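To make that failure mode concrete, here's a hedged sketch of the asm.js-era pattern (the fallback strategy is illustrative, not Emscripten's actual code):

```typescript
// Sketch of the asm.js-era pattern: the whole heap is one contiguous
// ArrayBuffer, sized up front. The fallback logic is illustrative only.
function tryAllocateHeap(bytes: number): ArrayBuffer | null {
  try {
    // On a fragmented 32-bit process this can throw even with plenty of
    // free RAM: no contiguous hole is left in virtual address space.
    return new ArrayBuffer(bytes);
  } catch {
    return null;
  }
}

let heap = tryAllocateHeap(256 * 1024 * 1024); // the 256MiB case above
if (heap === null) {
  heap = tryAllocateHeap(128 * 1024 * 1024); // settle for less, or give up
}
```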
WebAssembly is backed by `WebAssembly.Memory`, which is a special type of `ArrayBuffer`. This means that a WebAssembly implementation can be clever about that `ArrayBuffer`. On 32-bit platforms there's not much to do: if you run out of contiguous address space then the VM can't help. But on 64-bit platforms there's plenty of address space. The browser implementation can choose to prevent you from creating too many `WebAssembly.Memory` instances (allocating virtual memory is almost free, but not quite), but you should be able to get a few 4GiB allocations. Note that the browser will only allocate that space virtually, and will commit physical pages only for the minimum number of pages you said you need. Afterwards it'll only allocate physically when you use `grow_memory`. That could fail (physical memory is about as abundant as the amount of RAM, give or take swap space), but it's much more predictable.
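In other words, on a 64-bit browser you can reserve a lot up front and pay physically only as you grow. A sketch of that pattern, assuming the implementation lets you reserve the full wasm32 range:

```typescript
// Reserve a lot virtually (maximum), commit little physically (initial),
// then commit on demand. 65536 pages * 64KiB = 4GiB, the wasm32 ceiling.
const MAX_PAGES = 65536;
const mem = new WebAssembly.Memory({ initial: 16, maximum: MAX_PAGES });

let pages = 16;
try {
  while (pages < MAX_PAGES) {
    const step = Math.min(1024, MAX_PAGES - pages); // ~64MiB at a time
    mem.grow(step); // commits physical pages; throws when that fails
    pages += step;
  }
} catch {
  // A grow failure is a physical-memory failure at the point of use,
  // not an up-front contiguous-reservation hard fail.
}
console.log(`Committed ${(pages * 64) / 1024} MiB of ${(MAX_PAGES * 64) / 1024} MiB reserved`);
```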
An implementation can pull a similar trick on 32-bit platforms (over-commit virtual memory but keep it `PROT_NONE` and not physically allocated), assuming fragmentation allows, but that's up to the implementation and how it thinks this affects ASLR. Realistically, it's hard to find memory when there's not much to go around, both virtually and physically.
WebAssembly is currently specified as an ILP32 process: pointers are 32 bits. You're therefore hard-limited to 4GiB. We may add wasm64 in the future.
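The 4GiB figure falls straight out of the pointer width and the page size:

```typescript
// wasm32 address-space arithmetic: 32-bit pointers address 2^32 bytes.
const PAGE_SIZE = 64 * 1024;           // wasm pages are 64KiB
const MAX_PAGES = 2 ** 32 / PAGE_SIZE; // 65536 pages
console.log(MAX_PAGES * PAGE_SIZE / 2 ** 30); // 4 (GiB): the hard limit
```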