
TensorFlow: Layer Size Dependent On Batch Size?

I am currently trying to get familiar with the TensorFlow library, and I have a rather fundamental question that bugs me. While building a convolutional neural network for MNIST classification …

Solution 1:

In the typical scenario, the rank of features['x'] is already going to be 4, with the outer dimension being the actual batch size, so there's no need to resize it.

Let me try to explain.

You haven't shown your serving_input_receiver_fn, and there are several ways to write one, although the principle is the same across all of them. If you're using TensorFlow Serving, then you're probably using build_parsing_serving_input_receiver_fn. It's informative to look at the source code:

def build_parsing_serving_input_receiver_fn(feature_spec,
                                            default_batch_size=None):
  # The placeholder holds a batch of serialized tf.Example protos. With
  # default_batch_size=None, the batch dimension is left unspecified and is
  # filled in at run time by however many examples the client sends.
  serialized_tf_example = array_ops.placeholder(
      dtype=dtypes.string,
      shape=[default_batch_size],
      name='input_example_tensor')
  receiver_tensors = {'examples': serialized_tf_example}
  # parse_example turns the serialized strings into one Tensor per feature,
  # each with the batch size as its outer dimension.
  features = parsing_ops.parse_example(serialized_tf_example, feature_spec)
  return ServingInputReceiver(features, receiver_tensors)
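
For concreteness, here is a minimal sketch of how this function is typically wired up at export time. The feature spec is illustrative, chosen to match the [30, 30, 1] input discussed below:

import tensorflow as tf

# Illustrative feature spec: one float feature "x" holding 30*30*1 values
# per Example; parse_example reshapes the flat values to [30, 30, 1].
feature_spec = {
    "x": tf.FixedLenFeature(shape=[30, 30, 1], dtype=tf.float32),
}

serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)

# estimator.export_savedmodel(export_dir_base, serving_input_fn)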

So in your client, you prepare a request containing one or more serialized Examples (say the list has length N). The server treats the serialized examples as a list of strings, which gets fed into the input_example_tensor placeholder; its unspecified shape (None) is dynamically filled in with the size of the list (N).

Then the parse_example op parses each item in the placeholder and out pops a Tensor for each feature whose outer dimension is N. In your case, you'll have x with shape=[N, 30, 30, 1].
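
To make the client side concrete, here is an illustrative sketch of serializing N Examples; the helper name and the all-zeros pixel data are made up. Each serialized string becomes one row of the batch once parse_example runs on the server:

import tensorflow as tf

def make_serialized_example(pixels):
    """Wraps a flat list of 900 floats in a tf.Example and serializes it."""
    example = tf.train.Example(features=tf.train.Features(feature={
        "x": tf.train.Feature(float_list=tf.train.FloatList(value=pixels)),
    }))
    return example.SerializeToString()

# Three placeholder examples (N = 3); with the feature spec above, the
# server-side parse_example would yield x with shape [3, 30, 30, 1].
serialized = [make_serialized_example([0.0] * 900) for _ in range(3)]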

(Note that other serving systems, such as Cloud ML Engine, do not operate on Example objects, but the principles are the same.)


Solution 2:

I just want to briefly share the solution I found. I did not want to build a scalable, production-grade model, but rather a simple model runner in Python to execute my CNN locally.

To export the model, I used:

import tensorflow as tf

input_size = 900

def serving_input_receiver_fn():
    # The same dict serves both as the features passed to the model and as
    # the receiver tensors fed at prediction time; the batch dimension is
    # left as None so any batch size is accepted.
    inputs = {"x": tf.placeholder(shape=[None, input_size], dtype=tf.float32)}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

model.export_savedmodel(
    export_dir_base=model_dir,
    serving_input_receiver_fn=serving_input_receiver_fn)

To load and run the exported model (without needing the model definition again), I used the TensorFlow predictor class. Note that export_savedmodel writes the model into a timestamped subdirectory of export_dir_base; that subdirectory is what gets passed to predictor.from_saved_model.

import numpy as np
from tensorflow.contrib import predictor

class TFRunner:
    """Runs a frozen CNN graph exported via export_savedmodel."""
    def __init__(self, model_dir):
        self.predictor = predictor.from_saved_model(model_dir)

    def run(self, input_list):
        """Runs the input list through the graph and returns the output."""
        if len(input_list) > 1:
            # Stack the individual inputs into a single [N, input_size] batch.
            inputs = np.vstack(input_list)
            predictions = self.predictor({"x": inputs})
        elif len(input_list) == 1:
            predictions = self.predictor({"x": input_list[0]})
        else:
            predictions = []
        return predictions
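
Hypothetical usage of the runner, assuming each input is already shaped [1, 900] so that np.vstack produces an [N, 900] batch (the model path is a placeholder):

import numpy as np

runner = TFRunner("/path/to/exported/savedmodel")  # illustrative path

single = np.random.rand(1, 900).astype(np.float32)
batch = [np.random.rand(1, 900).astype(np.float32) for _ in range(4)]

print(runner.run([single]))  # one example, fed directly
print(runner.run(batch))     # four examples, vstacked into shape [4, 900]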
