Serving Keras Models With TensorFlow Serving

Sumit Ghosh · Apr 27, 2017

Right now we are successfully able to serve models using TensorFlow Serving. We have used the following method to export the model and host it with TensorFlow Serving.

For exporting
import tensorflow as tf
import keras.backend as K
from tensorflow.contrib.session_bundle import exporter

K.set_learning_phase(0)
sess = K.get_session()  # the TensorFlow session that Keras is using

export_path = ... # where to save the exported graph
export_version = ... # version number (integer)

# 'model' is the trained Keras model
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)

For hosting

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=default --model_base_path=/serving/models

However, our issue is that we want Keras to be integrated with TensorFlow Serving; we would like to serve the model through TensorFlow Serving using Keras. The reason we want this is that, in our architecture, we follow a couple of different ways to train our models, such as deeplearning4j + Keras and TensorFlow + Keras, but for serving we would like to use only one serving engine, namely TensorFlow Serving. We don't see any straightforward way to achieve that. Any comments?

Thank you.

Answer

Thomas Paula · May 2, 2017

Very recently TensorFlow changed the way it exports models, so the majority of the tutorials available on the web are outdated. I honestly don't know how deeplearning4j works, but I use Keras quite often. I managed to create a simple example that I already posted in this issue on the TensorFlow Serving GitHub.

I'm not sure whether this will help you, but I'd like to share how I did it; maybe it will give you some insights. My first trial, prior to creating my custom model, was to use a trained model available in Keras, such as VGG19. I did this as follows.

Model creation

import keras.backend as K
from keras.applications import VGG19
from keras.models import Model

# very important to do this as a first thing
K.set_learning_phase(0)
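# (setting the learning phase to 0 puts layers such as Dropout and
# BatchNormalization in inference mode before the graph is built)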

model = VGG19(include_top=True, weights='imagenet')

# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)

Exporting the model

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

export_path = 'folder_to_export'
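# note: TensorFlow Serving looks for numeric version subdirectories under
# --model_base_path, so in practice export to something like 'folder_to_export/1'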
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
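
If you want to double-check what was exported, newer TensorFlow versions ship a saved_model_cli tool that prints the tags and signatures of a SavedModel:

saved_model_cli show --dir folder_to_export --all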

Some side notes

  • The exact code can vary depending on the Keras, TensorFlow, and TensorFlow Serving versions; I used the latest ones available at the time of writing.
  • Beware of the signature names, since they have to match in the client as well.
  • When creating the client, all preprocessing steps that the model needs (preprocess_input() for example) must be executed there. I didn't try to add such a step to the graph itself, the way the Inception client example does. A minimal client sketch follows this list.
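
Here is a rough sketch of what a matching gRPC client could look like. The host, port, and model name are assumptions that must match your server flags (or the config file below); the 'predict' signature name and the 'images'/'scores' keys come from the export code above:

import numpy as np
from grpc.beta import implementations
from keras.applications.vgg19 import preprocess_input
from tensorflow.contrib.util import make_tensor_proto
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# connect to the model server (host and port are assumptions)
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# preprocessing happens client-side, since it is not part of the exported
# graph; a random image stands in for real input here
image = preprocess_input(np.random.rand(1, 224, 224, 3).astype(np.float32))

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model_1'         # must match --model_name or the config file
request.model_spec.signature_name = 'predict'  # the key used in signature_def_map
request.inputs['images'].CopyFrom(make_tensor_proto(image, shape=image.shape))

result = stub.Predict(request, 10.0)  # 10 seconds timeout
print(result.outputs['scores'])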

With respect to serving different models within the same server, I think that something similar to the creation of a model_config_file might help you. To do so, you can create a config file similar to this:

model_config_list: {
  config: {
    name: "my_model_1",
    base_path: "/tmp/model_1",
    model_platform: "tensorflow"
  },
  config: {
     name: "my_model_2",
     base_path: "/tmp/model_2",
     model_platform: "tensorflow"
  }
}

Finally, you can run the server with the config file like this:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_config_file=model_config.conf