Tensorflow classifier.export_savedmodel (Beginner)

Jacob · Aug 11, 2017 · Viewed 7.7k times

I know about the "Serving a Tensorflow Model" page

https://www.tensorflow.org/serving/serving_basic

but those functions assume you're using tf.Session(), which the DNNClassifier tutorial does not. I then looked at the API doc for DNNClassifier; it has an export_savedmodel function (the export function is deprecated), and it seems simple enough, but I am getting a "'NoneType' object is not iterable" error, which is supposed to mean I'm passing in an empty variable, but I'm unsure what I need to change. I've essentially copied and pasted the code from the get_started/tflearn page on tensorflow.org but then added

  directoryName = "temp"

  def serving_input_fn():
    print("asdf")

  classifier.export_savedmodel(
    directoryName,
    serving_input_fn
  )

just after the classifier.fit function call. The other parameters for export_savedmodel are optional, I believe. Any ideas?

Tutorial with Code: https://www.tensorflow.org/get_started/tflearn#construct_a_deep_neural_network_classifier

API Doc for export_savedmodel: https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNClassifier#export_savedmodel

Answer

David Valenzuela Urrutia · Jan 18, 2018

There are two kinds of TensorFlow applications:

  • The functions that assume you are using tf.Session() come from "low level" TensorFlow examples, and
  • the DNNClassifier tutorial is a "high level" TensorFlow application.

I'm going to explain how to export "high level" TensorFlow models (using export_savedmodel).

The function export_savedmodel requires the argument serving_input_receiver_fn, a function without arguments that defines the input for the model and for the predictor. Therefore, you must create your own serving_input_receiver_fn, so that the model input type matches the model input in the training script, and the predictor input type matches the predictor input in the testing script.
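
A side note on the question's case: for a canned estimator such as DNNClassifier you often don't have to write this function by hand, because TensorFlow can build a parsing serving_input_receiver_fn from the same feature columns used for training. A minimal sketch (not from the original answer; it assumes the four-feature Iris setup of the linked tutorial and the tf.estimator version of DNNClassifier rather than the tf.contrib.learn one):

import tensorflow as tf

# Hypothetical feature columns matching the Iris tutorial (4 float features).
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                        hidden_units=[10, 20, 10],
                                        n_classes=3)

# ... classifier.train(...) goes here ...

# Derive the parsing spec from the training feature columns, then let
# TensorFlow build a serving_input_receiver_fn that parses serialized
# tf.Example protos at serving time.
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

export_dir = classifier.export_savedmodel("temp", serving_input_receiver_fn)

Either way, the function passed to export_savedmodel has to return a receiver object, which is why the question's serving_input_fn, which only prints and implicitly returns None, fails with "'NoneType' object is not iterable".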

On the other hand, if you create a custom model, you must also define export_outputs, built with the function tf.estimator.export.PredictOutput, whose input is a dictionary whose keys have to match the names of the predictor outputs used in the testing script.

For example:

TRAINING SCRIPT

import tensorflow as tf

def serving_input_receiver_fn():
    # Placeholder that receives serialized tf.Example protos at serving time.
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
    # "predictor_inputs" is the key the testing script feeds below.
    receiver_tensors      = {"predictor_inputs": serialized_tf_example}
    # Must match the features the model is trained on: a length-25 int64 vector.
    feature_spec          = {"words": tf.FixedLenFeature([25], tf.int64)}
    features              = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

def estimator_spec_for_softmax_classification(logits, labels, mode):
    predicted_classes = tf.argmax(logits, 1)
    if (mode == tf.estimator.ModeKeys.PREDICT):
        # IMPORTANT: export_outputs is what makes the exported SavedModel
        # expose "pred_output_classes" and "probabilities" to the predictor.
        export_outputs = {'predict_output': tf.estimator.export.PredictOutput({
            "pred_output_classes": predicted_classes,
            'probabilities': tf.nn.softmax(logits)})}
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions={'class': predicted_classes, 'prob': tf.nn.softmax(logits)},
            export_outputs=export_outputs)
    onehot_labels = tf.one_hot(labels, 31, 1, 0)
    loss          = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=logits)
    if (mode == tf.estimator.ModeKeys.TRAIN):
        optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
        train_op  = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    eval_metric_ops = {'accuracy': tf.metrics.accuracy(labels=labels, predictions=predicted_classes)}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def model_custom(features, labels, mode):
    bow_column           = tf.feature_column.categorical_column_with_identity("words", num_buckets=1000)
    bow_embedding_column = tf.feature_column.embedding_column(bow_column, dimension=50)   
    bow                  = tf.feature_column.input_layer(features, feature_columns=[bow_embedding_column])
    logits               = tf.layers.dense(bow, 31, activation=None)
    return estimator_spec_for_softmax_classification(logits=logits, labels=labels, mode=mode)

def main():
    # ...
    # preprocess -> features_train_set and labels_train_set
    # ...
    classifier     = tf.estimator.Estimator(model_fn=model_custom)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"words": features_train_set},
        y=labels_train_set,
        batch_size=batch_size_param,
        num_epochs=None,
        shuffle=True)
    classifier.train(input_fn=train_input_fn, steps=100)
    # export_savedmodel returns the path of a timestamped subdirectory
    # created under export_dir_base; the testing script loads it from there.
    full_model_dir = classifier.export_savedmodel(
        export_dir_base="C:/models/directory_base",
        serving_input_receiver_fn=serving_input_receiver_fn)

TESTING SCRIPT

import tensorflow as tf

def main():
    # ...
    # preprocess -> features_test_set
    # ...
    with tf.Session() as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], full_model_dir)
        predictor   = tf.contrib.predictor.from_saved_model(full_model_dir)
        # Serialize the test features into a tf.Example proto; the "words"
        # feature and its 25 int64 values must match feature_spec in
        # serving_input_receiver_fn.
        model_input = tf.train.Example(features=tf.train.Features(
            feature={"words": tf.train.Feature(int64_list=tf.train.Int64List(value=features_test_set))}))
        model_input = model_input.SerializeToString()
        # "predictor_inputs" must match the receiver_tensors key in the training script.
        output_dict = predictor({"predictor_inputs": [model_input]})
        # "pred_output_classes" must match the PredictOutput dictionary key.
        y_predicted = output_dict["pred_output_classes"][0]

(Code tested in Python 3.6.3, Tensorflow 1.4.0)
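
If the key names don't line up between the two scripts, a quick way to debug is to print the signatures that export_savedmodel actually wrote into the SavedModel. A minimal sketch (not from the original answer; it assumes full_model_dir is the path returned by export_savedmodel above):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef; its signature_def map lists every
    # exported signature together with its input and output tensor names.
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], full_model_dir)
    for signature_name, signature in meta_graph_def.signature_def.items():
        print(signature_name, list(signature.inputs.keys()), list(signature.outputs.keys()))

The "predictor_inputs" key should show up among the inputs and "pred_output_classes" / "probabilities" among the outputs.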