TensorFlow: feed_dict error: You must feed a value for placeholder tensor

I have a bug whose cause I cannot find. Here is the code:

import time
import tensorflow as tf

# inference, loss, train, SRCNN_inputs, FLAGS, np_data and np_label
# are defined elsewhere in the project.

with tf.Graph().as_default():
    global_step = tf.Variable(0, trainable=False)

    # Placeholders for 33x33 input patches and 21x21 label patches
    images = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 33, 33, 1])
    labels = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 21, 21, 1])

    logits = inference(images)
    losses = loss(logits, labels)
    train_op = train(losses, global_step)
    saver = tf.train.Saver(tf.all_variables())
    summary_op = tf.merge_all_summaries()
    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)

    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)

    for step in xrange(FLAGS.max_steps):
        start_time = time.time()

        data_batch, label_batch = SRCNN_inputs.next_batch(np_data, np_label,
                                                          FLAGS.batch_size)

        # Both placeholders are fed here, so this call succeeds
        _, loss_value = sess.run([train_op, losses],
                                 feed_dict={images: data_batch, labels: label_batch})

        duration = time.time() - start_time

import numpy as np

def next_batch(np_data, np_label, batchsize,
               training_number=NUM_EXAMPLES_PER_EPOCH_TRAIN):
    # Shuffle the whole training set, then take the first `batchsize` samples
    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    data_batch = data[0:batchsize, :]
    label_batch = label[0:batchsize, :]

    return data_batch, label_batch

np_data is the whole set of training samples read from an HDF5 file, and the same goes for np_label.
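
For reference, reading such arrays with h5py could look like the sketch below; the file name and the dataset keys 'data' and 'label' are assumptions, since the loading code is not shown:

import h5py

# Hypothetical loading step: the file name and dataset keys are assumptions
with h5py.File('train.h5', 'r') as f:
    np_data = f['data'][:]    # e.g. shape (N, 33, 33, 1), float32
    np_label = f['label'][:]  # e.g. shape (N, 21, 21, 1), float32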

After I run the code, I get the following error:

2016-07-07 11:16:36.900831: step 0, loss = 55.22 (218.9 examples/sec; 0.585 sec/batch)
Traceback (most recent call last):
  File "<ipython-input-1-19672e1f8f12>", line 1, in <module>
    runfile('/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py', wdir='/home/kang/Documents/work_code_PC1/tf_SRCNN')
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 685, in runfile
    execfile(filename, namespace)
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 85, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 155, in <module>
    train_test()
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 146, in train_test
    summary_str = sess.run(summary_op)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [128,33,33,1]
    [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[128,33,33,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
    [[Node: truediv/_74 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_56_truediv", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'Placeholder', defined at:

So the output for step 0 is printed, which means that the data was fed into the placeholders successfully.

But why does feeding data into the placeholder raise this error the next time around?

When I comment out the line summary_op = tf.merge_all_summaries(), the code runs fine. Why is that?
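
For context, the traceback points at line 146 of SRCNN_train.py, which is not part of the code shown above. Reconstructed from the stack trace, the failing call and a plausible fix would look like this; it is a sketch, assuming the merged summaries depend on the placeholders:

summary_str = sess.run(summary_op)  # from the traceback: no feed_dict is passed

# Feeding the same batch would satisfy the placeholders (assumed fix, not the original code):
summary_str = sess.run(summary_op,
                       feed_dict={images: data_batch, labels: label_batch})
summary_writer.add_summary(summary_str, step)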

Original author: karl_TUM | 2016-07-07