Is it worth the effort to try to reduce JSON size?

Attila O. · Jun 22, 2012 · Viewed 27.8k times

I am submitting a relatively large amount of data from a mobile application (up to 1000 JSON objects), which I would normally encode like this:

[{
    "id": 12,
    "score": 34,
    "interval": 5678,
    "sub": 9012
}, {
    "id": ...
}, ...]

I could make the payload smaller by submitting an array of arrays instead:

[[12, 34, 5678, 9012], [...], ...]

to save some space on the property names, and recreate the objects on the server (as the schema is fixed, or at least it is a contract between the server and the client).
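A minimal sketch of that conversion, assuming the field list below is the shared contract between client and server (the names are taken from the example above):

```javascript
// The fixed schema both sides agree on (an assumption for this sketch).
const FIELDS = ["id", "score", "interval", "sub"];

// Client side: objects -> positional arrays
function toRows(objects) {
  return objects.map((obj) => FIELDS.map((field) => obj[field]));
}

// Server side: positional arrays -> objects
function fromRows(rows) {
  return rows.map((row) =>
    Object.fromEntries(FIELDS.map((field, i) => [field, row[i]]))
  );
}

const original = [{ id: 12, score: 34, interval: 5678, sub: 9012 }];
const packed = toRows(original);   // [[12, 34, 5678, 9012]]
const restored = fromRows(packed); // deep-equal to `original`
```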

The payload is then submitted in a POST request, most likely over a 3G connection (or could be wifi).

It looks like I am saving some bandwidth by using nested arrays, but I'm not sure the difference is still noticeable once gzip is applied, and I'm not sure how to measure it precisely and objectively.

On the other hand, the nested arrays don't feel like a good idea: they are less readable, which makes errors harder to spot while debugging. Also, since we're flushing readability down the toilet anyway, we could just flatten the array: each child array has a fixed number of elements, so the server could slice it up and reconstruct the objects again.
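A sketch of that flattened variant, where WIDTH is assumed to be the agreed record size:

```javascript
// Number of fields per record, fixed by the schema (assumed here).
const WIDTH = 4;

// Client side: [[12,34,5678,9012],[98,...]] -> [12,34,5678,9012,98,...]
function flatten(rows) {
  return rows.flat();
}

// Server side: slice the flat array back into fixed-width records.
function unflatten(flat) {
  const rows = [];
  for (let i = 0; i < flat.length; i += WIDTH) {
    rows.push(flat.slice(i, i + WIDTH));
  }
  return rows;
}

const flat = flatten([[12, 34, 5678, 9012], [98, 76, 5432, 1098]]);
const back = unflatten(flat);
```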

Any further reading material on this topic is much appreciated.

Answer

John Gietzen · Jan 3, 2013

JSONH, aka hpack (https://github.com/WebReflection/JSONH), does something very similar to your example:

[{
    "id": 12,
    "score": 34,
    "interval": 5678,
    "sub": 9012
}, {
    "id": 98,
    "score": 76,
    "interval": 5432,
    "sub": 1098
}, ...]

Would turn into:

[["id","score","interval","sub"],12,34,5678,9012,98,76,5432,1098,...]
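A hand-rolled sketch of that pack/unpack idea; this mimics the transformation shown above, not the actual JSONH API, and assumes every object in the list has the same keys:

```javascript
// Pack: emit a header row of shared keys, then all values in order.
function pack(objects) {
  const keys = Object.keys(objects[0]);
  const out = [keys];
  for (const obj of objects) {
    for (const key of keys) out.push(obj[key]);
  }
  return out;
}

// Unpack: walk the values in key-sized strides and rebuild each object.
function unpack(packed) {
  const keys = packed[0];
  const objects = [];
  for (let i = 1; i < packed.length; i += keys.length) {
    const obj = {};
    keys.forEach((key, j) => { obj[key] = packed[i + j]; });
    objects.push(obj);
  }
  return objects;
}

const packed = pack([
  { id: 12, score: 34, interval: 5678, sub: 9012 },
  { id: 98, score: 76, interval: 5432, sub: 1098 },
]);
// packed: [["id","score","interval","sub"],12,34,5678,9012,98,76,5432,1098]
const roundTripped = unpack(packed);
```

The round trip restores the original objects, so the server can apply the same reconstruction after parsing the payload.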