Predictions
Explore how to restore saved TensorFlow inference models and retrieve essential tensors to implement prediction functions. This lesson helps you understand the process of loading a model, accessing input and output tensors, and making predictions in practice using real-world classification examples.
Chapter Goals:
- Learn how to restore an inference model and retrieve specific tensors from the computation graph
- Implement a function that makes predictions using a saved inference model
A. Restoring the model
To restore an inference model, we use the tf.compat.v1.saved_model.loader.load function. This function restores both the inference graph and the inference model's parameters.
Since the function’s first argument is a tf.compat.v1.Session object, it’s a good idea to restore the inference model within the scope of a particular tf.compat.v1.Session.
The second argument for tf.compat.v1.saved_model.loader.load is a list of tag constants. For inference, we use the SERVING tag. The function’s third argument is ...
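As a rough sketch of how these pieces fit together, the snippet below restores a saved model inside a tf.compat.v1.Session, retrieves the input and output tensors, and runs a prediction. The export directory (inference_model), the tensor names (inputs:0 and predictions:0), and the input shape are placeholder assumptions for illustration; they must match whatever was used when the model was saved.

```python
import numpy as np
import tensorflow as tf

# Run in graph mode so the tf.compat.v1 Session API behaves as in TF 1.x.
tf.compat.v1.disable_eager_execution()

# Assumed values for illustration only.
export_dir = 'inference_model'                                   # assumed export directory
new_data = np.random.uniform(size=(1, 10)).astype(np.float32)    # placeholder input batch

with tf.compat.v1.Session() as sess:
    # Restore the inference graph and its trained parameters into this session,
    # using the SERVING tag for inference.
    tf.compat.v1.saved_model.loader.load(
        sess, [tf.saved_model.SERVING], export_dir)

    # Retrieve the input and output tensors by name (names assumed here;
    # they must match the names used when the model was exported).
    graph = tf.compat.v1.get_default_graph()
    inputs = graph.get_tensor_by_name('inputs:0')
    predictions = graph.get_tensor_by_name('predictions:0')

    # Make predictions on the new data batch.
    output = sess.run(predictions, feed_dict={inputs: new_data})
    print(output)
```

Note that the prediction itself is just a sess.run call on the restored output tensor, with the new data fed into the restored input tensor.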