Using deep learning models from TensorFlow in other language environments

Alex Alifimoff · Jun 21, 2016 · Viewed 12.3k times

I have a decent amount of experience with TensorFlow, and I am about to embark on a project that will ultimately culminate in using a TensorFlow-trained model in a C# production environment. Essentially, live data will come into the C# environment, and I will need to output decisions / take certain actions based on the output of my model in TensorFlow. This is basically just a constraint of the existing infrastructure.

I can think of a couple of potentially bad ways to implement this, such as writing the data to disk, invoking the Python part of the application, and finally reading back the result it produces and taking some action based on it. This is slow, however.
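A minimal sketch of that disk-based round trip, to make the overhead concrete (the worker script and file names here are hypothetical stand-ins for the TensorFlow side):

```python
import json
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Hypothetical worker: stands in for the Python/TensorFlow process, which
# reads its input from disk, "runs the model", and writes the result back.
WORKER = textwrap.dedent("""
    import json, sys
    inp = json.loads(open(sys.argv[1]).read())
    # A real worker would run TensorFlow inference here.
    result = {"decision": sum(inp["features"]) > 0}
    open(sys.argv[2], "w").write(json.dumps(result))
""")

def predict_via_disk(features):
    """One request/response cycle: write input, spawn Python, read output."""
    with tempfile.TemporaryDirectory() as d:
        in_path, out_path = Path(d, "in.json"), Path(d, "out.json")
        script = Path(d, "worker.py")
        script.write_text(WORKER)
        in_path.write_text(json.dumps({"features": features}))
        # Every call pays interpreter startup plus two disk round trips --
        # the slowness described above.
        subprocess.run(
            [sys.executable, str(script), str(in_path), str(out_path)],
            check=True,
        )
        return json.loads(out_path.read_text())

print(predict_via_disk([1.0, -0.5, 2.0]))  # {'decision': True}
```

Each request pays full process startup and two file round trips, which is why this approach does not scale for live data.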

Are there faster ways to accomplish this kind of integration between C# and Python-based TensorFlow? I see that there appear to be some ways to do this with C++ and TensorFlow, but what about C#?

Answer

mrry · Jun 22, 2016

This is a prime use case for TensorFlow Serving, which lets you run a C++ server process that performs inference on a trained TensorFlow model and serves inference requests over gRPC. You can write client code in any language that gRPC supports, including C#. Take a look at the MNIST tutorial, which covers the C++ server and Python client components.