I'm trying to store binary data in a QR code. Apparently QR codes do support storing raw binary data (or ISO-8859-1 / Latin1). Here is what I want to encode (hex):
d1 50 01 00 00 00 f6 5f 05 2d 8f 0b 40 e2 01
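(For reference, here's a minimal sketch of encoding these exact bytes with ZXing's Java API; the zxing core and javase artifacts are assumed. ZXing's encoder takes a String, so the bytes are wrapped in ISO-8859-1, which maps every byte value 0x00-0xff to exactly one character, losslessly:)

```java
import com.google.zxing.BarcodeFormat;
import com.google.zxing.EncodeHintType;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.util.EnumMap;
import java.util.Map;

public class BinaryQrEncode {
    public static void main(String[] args) throws Exception {
        byte[] payload = {
            (byte) 0xd1, 0x50, 0x01, 0x00, 0x00, 0x00,
            (byte) 0xf6, 0x5f, 0x05, 0x2d, (byte) 0x8f, 0x0b,
            0x40, (byte) 0xe2, 0x01
        };

        // ZXing's encoder takes a String, so wrap the bytes in ISO-8859-1:
        // every byte value 0x00-0xff maps to exactly one char, losslessly.
        String contents = new String(payload, StandardCharsets.ISO_8859_1);

        Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
        // Depending on the ZXing version, an explicit charset hint may or
        // may not emit an ECI header; ISO-8859-1 is the standard default.
        hints.put(EncodeHintType.CHARACTER_SET, "ISO-8859-1");

        BitMatrix matrix = new QRCodeWriter()
                .encode(contents, BarcodeFormat.QR_CODE, 200, 200, hints);
        MatrixToImageWriter.writeToPath(matrix, "PNG", Paths.get("binary.png"));
    }
}
```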
I've tried a few encoders: two JavaScript ones and the Google Charts API.
Decoding the results with zxing.org produces various incorrect outputs. The two JavaScript encoders produce this (it's wrong; the first text character should be Ñ), whereas Google Charts produces this...
What is going on? Are any of these correct? What's really weird is that if I encode this sequence instead (with the JS ones at least), it works fine:
d1 50 01 00 00 00 01 02 03 04 05 06 40 e2 01
I would have thought the issue was non-ASCII characters, but Ñ (0xd1) is non-ASCII and it comes through fine.
Does anyone know what is going on?
Update
It occurred to me to try scanning them with a ZBar-based scanner app I found. It scans both JS versions OK (at least the output starts with ÑP). The Google Charts one is just wrong. So it seems like the issue is with ZXing (which is surprisingly unreliable - I wouldn't recommend it to anyone).
Update 2
ZBar can't handle null bytes. :-(
"What is going on? Are any of these correct?"
Except for the Google Charts one (which is just empty), your QR codes are correct.
You can see the binary data from zxing is what you would expect:
4: byte mode indicator
0f: length field (15 bytes)
d15001...: your 15 bytes of data
ec11: padding
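To make that layout concrete, here's a small sketch (not from the original post) that parses those fields back out of the raw codewords, e.g. the array returned by ZXing's Result.getRawBytes(). It assumes a single byte-mode segment in a version 1-9 symbol, where the character count field is 8 bits:

```java
public class ByteModeSegment {
    // Returns the data bytes of a leading byte-mode segment.
    public static byte[] parse(byte[] codewords) {
        BitReader bits = new BitReader(codewords);
        int mode = bits.read(4);            // 0x4: byte mode indicator
        if (mode != 0x4) {
            throw new IllegalArgumentException("not byte mode: " + mode);
        }
        int count = bits.read(8);           // 0x0f: 15 data bytes
        byte[] data = new byte[count];
        for (int i = 0; i < count; i++) {
            data[i] = (byte) bits.read(8);  // d1 50 01 ... (ec 11 padding follows)
        }
        return data;
    }

    // Reads big-endian bit fields from a byte array.
    static final class BitReader {
        private final byte[] bytes;
        private int pos; // absolute bit position

        BitReader(byte[] bytes) { this.bytes = bytes; }

        int read(int n) {
            int value = 0;
            for (int i = 0; i < n; i++, pos++) {
                int bit = (bytes[pos / 8] >> (7 - pos % 8)) & 1;
                value = (value << 1) | bit;
            }
            return value;
        }
    }
}
```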
The problem comes from the decoding: most decoders will try to interpret the payload as text. But since it's binary data, you should not handle it as text. Even if you think you can convert the text back to binary, as you saw, this can fail for byte values that are not valid text.
So the solution is to use a decoder that gives you the binary data directly, not a text rendering of it.
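With ZXing's Java API, for example, one way to do that (a sketch; the javase module is assumed for image loading) is to skip Result.getText() entirely and read the BYTE_SEGMENTS result metadata, which contains the undecoded bytes of each byte-mode segment:

```java
import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.ResultMetadataType;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.List;

public class BinaryQrDecode {
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("binary.png"));
        BinaryBitmap bitmap = new BinaryBitmap(
                new HybridBinarizer(new BufferedImageLuminanceSource(image)));

        Result result = new MultiFormatReader().decode(bitmap);

        // Don't touch result.getText(); take the raw byte segments instead.
        @SuppressWarnings("unchecked")
        List<byte[]> segments = (List<byte[]>)
                result.getResultMetadata().get(ResultMetadataType.BYTE_SEGMENTS);
        for (byte[] segment : segments) {
            for (byte b : segment) {
                System.out.printf("%02x ", b);  // d1 50 01 00 00 00 f6 ...
            }
            System.out.println();
        }
    }
}
```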
Now, about interpreting the QR code's binary data as text: you said the first character should be 'Ñ', which is true if it is interpreted as ISO-8859-1 - and according to the QR code standard, that is what should be done when no ECI mode is specified.
But in practice, most smartphone QR code readers will interpret it as UTF-8 in this case (or at least try to auto-detect the encoding).
Even though this is not the standard, it has become common practice: byte mode with no ECI, UTF-8 encoded text.
Maybe the reason is that no one wants to waste precious bytes on an ECI header specifying UTF-8. And in fact, not all decoders support ECI.
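The difference is easy to demonstrate. In this sketch, the same leading bytes round-trip losslessly through ISO-8859-1, but a decoder that guesses UTF-8 has already destroyed them before you ever see the string, because 0xd1 0x50 is not a valid UTF-8 sequence:

```java
import java.nio.charset.StandardCharsets;

public class RoundTrip {
    public static void main(String[] args) {
        byte[] original = { (byte) 0xd1, 0x50, 0x01 };

        // Interpreted as ISO-8859-1, the bytes survive a text round trip:
        String latin1 = new String(original, StandardCharsets.ISO_8859_1); // "ÑP\u0001"
        byte[] back = latin1.getBytes(StandardCharsets.ISO_8859_1);        // d1 50 01 again

        // Interpreted as UTF-8, 0xd1 starts a two-byte sequence that 0x50
        // cannot continue, so it is replaced with U+FFFD and the original
        // byte value is unrecoverable.
        String utf8 = new String(original, StandardCharsets.UTF_8);
        System.out.println(utf8.codePointAt(0) == 0xFFFD);   // true
        System.out.println(back.length == original.length);  // true
    }
}
```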