I'm using the Kafka Schema Registry for producing/consuming Kafka messages. For example, I have two fields, both of string type; the pseudo-schema is as follows:
{"name": "test1", "type": "string"}
{"name": "test2", "type": "string"}
but after producing and consuming for a while, I need to modify the schema to change the second field to long type. Registering the new schema then throws the following exception:
Schema being registered is incompatible with an earlier schema; error code: 409
I'm confused: if the Schema Registry cannot handle a schema upgrade/change like this, why should I use the Schema Registry at all, or for that matter, why use Avro?
Changing a field's type (here, from string to long) is not a backward-compatible change, and BACKWARD is the registry's default compatibility mode. As a workaround you can change the compatibility rules for the schema registry.
According to the docs:
The schema registry server can enforce certain compatibility rules when new schemas are registered in a subject. Currently, we support the following compatibility rules.
Backward compatibility (default): A new schema is backward compatible if it can be used to read the data written in all previous schemas. Backward compatibility is useful for loading data into systems like Hadoop since one can always query data of all versions using the latest schema.
Forward compatibility: A new schema is forward compatible if all previous schemas can read data written in this schema. Forward compatibility is useful for consumer applications that can only deal with data in a particular version that may not always be the latest version.
Full compatibility: A new schema is fully compatible if it’s both backward and forward compatible.
No compatibility: A new schema can be any schema as long as it’s a valid Avro.
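Before relaxing anything, you can ask the registry whether a candidate schema would pass the current compatibility check. This is a sketch against the standard compatibility endpoint; the subject name `test-value` and the record name `test` are placeholders for your actual subject and schema:

```shell
# Test a candidate schema (test2 changed to long) against the latest
# registered version of the subject, without registering it.
# "test-value" and the record name "test" are assumed placeholders.
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\":\"record\",\"name\":\"test\",\"fields\":[{\"name\":\"test1\",\"type\":\"string\"},{\"name\":\"test2\",\"type\":\"long\"}]}"}' \
  http://localhost:8081/compatibility/subjects/test-value/versions/latest
```

The response reports whether the candidate is compatible; for the string-to-long change above it should come back negative under BACKWARD mode.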
Setting compatibility to NONE should do the trick.
# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"compatibility": "NONE"}' \
http://localhost:8081/config
And the response should be
{"compatibility":"NONE"}
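If you only need to relax the rule for this one subject rather than globally, the registry also accepts a per-subject config. The subject name `test-value` below is an assumed placeholder for your actual subject:

```shell
# Relax compatibility for a single subject instead of the whole registry.
# "test-value" is a placeholder; substitute your real subject name.
curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "NONE"}' \
  http://localhost:8081/config/test-value
```

Per-subject config overrides the global setting, so the rest of your subjects keep their existing compatibility checks.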
I generally discourage setting compatibility to NONE on a subject unless absolutely necessary.
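If you'd rather stay in BACKWARD mode, a common alternative is to leave the existing string field untouched and add a new long field with a default value, which is a backward-compatible change. This is only a sketch; the record name `test` and the new field name `test2_long` are assumptions, not names from your schema:

```json
{
  "type": "record",
  "name": "test",
  "fields": [
    {"name": "test1", "type": "string"},
    {"name": "test2", "type": "string"},
    {"name": "test2_long", "type": "long", "default": 0}
  ]
}
```

Consumers using the old schema keep reading `test2` as before, while new producers populate `test2_long`; you can deprecate and remove the old field in a later version once all readers have migrated.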