I deployed TensorFlow Serving and ran a test with Inception-V3. It works fine.
Now I would like to batch requests when serving Inception-V3, e.g. send 10 images for prediction instead of one.
How do I do that? Which files need updating (inception_saved_model.py or inception_client.py), and what do those updates look like? Also, how are the images passed to the server: as a folder containing images, or some other way?
I would appreciate some insight into this issue. Any related code snippet would be extremely helpful.
=================================
Updated inception_client.py
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#!/usr/bin/env python2.7
"""Send JPEG image to tensorflow_model_server loaded with inception model.
"""
from __future__ import print_function
"""Send JPEG image to tensorflow_model_server loaded with inception model.
"""
from __future__ import print_function
# This is a placeholder for a Google-internal import.
from grpc.beta import implementations
import tensorflow as tf
from tensorflow.python.platform import flags
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
tf.app.flags.DEFINE_string('server', 'localhost:9000',
                           'PredictionService host:port')
tf.app.flags.DEFINE_string('image', '',
                           'comma-separated paths to images in JPEG format')
FLAGS = tf.app.flags.FLAGS
def main(_):
  host, port = FLAGS.server.split(':')
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  # Original single-image request, kept for reference.
  # See prediction_service.proto for gRPC request/response details.
  # with open(FLAGS.image, 'rb') as f:
  #   data = f.read()
  #   request = predict_pb2.PredictRequest()
  #   request.model_spec.name = 'inception'
  #   request.model_spec.signature_name = 'predict_images'
  #   request.inputs['images'].CopyFrom(
  #       tf.contrib.util.make_tensor_proto(data, shape=[1]))
  #   result = stub.Predict(request, 10.0)  # 10 secs timeout
  #   print(result)

  # Build a batch of images.
  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'inception'
  request.model_spec.signature_name = 'predict_images'

  image_data = []
  for image in FLAGS.image.split(','):
    with open(image, 'rb') as f:
      image_data.append(f.read())

  request.inputs['images'].CopyFrom(
      tf.contrib.util.make_tensor_proto(image_data, shape=[len(image_data)]))
  result = stub.Predict(request, 10.0)  # 10 secs timeout
  print(result)


if __name__ == '__main__':
  tf.app.run()
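With this update, the batch is passed to the client as a comma-separated list of JPEG file paths on the --image flag, not as a folder. Assuming the client was built with Bazel as in the TensorFlow Serving Inception tutorial (the image paths below are hypothetical examples), the invocation might look like: bazel-bin/tensorflow_serving/example/inception_client --server=localhost:9000 --image=/tmp/dog.jpg,/tmp/cat.jpg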
You should be able to compute predictions for a batch of images with a small change to the request construction code in inception_client.py. The following lines in that file create a request with a "batch" containing a single image (note shape=[1], which means "a vector of length 1"):
with open(FLAGS.image, 'rb') as f:
  # See prediction_service.proto for gRPC request/response details.
  data = f.read()
  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'inception'
  request.model_spec.signature_name = 'predict_images'
  request.inputs['images'].CopyFrom(
      tf.contrib.util.make_tensor_proto(data, shape=[1]))
  result = stub.Predict(request, 10.0)  # 10 secs timeout
  print(result)
You can pass more images in the same vector to run predictions on a batch of data. For example, if FLAGS.image were a comma-separated list of filenames:
request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'
request.model_spec.signature_name = 'predict_images'

# Build a batch of images.
image_data = []
for image in FLAGS.image.split(','):
  with open(image, 'rb') as f:
    image_data.append(f.read())

request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_data, shape=[len(image_data)]))
result = stub.Predict(request, 10.0)  # 10 secs timeout
print(result)

if __name__ == '__main__':
  tf.app.run()
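The response for a batched request covers all of the images at once, so each output tensor has a leading batch dimension. The sketch below is not from the original answer; it assumes the stock inception_saved_model.py export, where the predict_images signature returns per-image 'classes' (human-readable label strings) and 'scores' outputs, and it uses tf.contrib.util.make_ndarray to convert the returned TensorProtos to NumPy arrays:

# Hedged sketch: unpack per-image results from the batched PredictResponse.
# Assumes the 'predict_images' signature exported by inception_saved_model.py,
# whose 'classes' and 'scores' outputs have the batch size as their first
# dimension.
result = stub.Predict(request, 10.0)  # 10 secs timeout
classes = tf.contrib.util.make_ndarray(result.outputs['classes'])
scores = tf.contrib.util.make_ndarray(result.outputs['scores'])
for i, image_path in enumerate(FLAGS.image.split(',')):
  # classes[i] / scores[i] hold the top predictions for the i-th image.
  print('%s: %s (score: %f)' % (image_path, classes[i][0], scores[i][0]))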