How To Handle Different Queue Batch Size And Feed Value Batch Size In Tensorflow?
My code used to work on TensorFlow 0.6, but it no longer works on the latest TensorFlow. I would like to perform inference every few training iterations. My training data is pulled from a queue.
Solution 1:
We added this check recently to prevent errors (and to enable a few optimization opportunities). You can make your program work again by changing the declaration of x to use the new tf.placeholder_with_default() op:
x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])
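A minimal sketch of the pattern: the queue names (q), shapes, and batch sizes below are assumptions for illustration. x defaults to a dequeued training batch, but a feed_dict can override it with a batch of any size because the leading dimension is None. This uses the TF 1.x graph API (via tf.compat.v1 on TF 2.x).

```python
import numpy as np
import tensorflow.compat.v1 as tf  # on TF 1.x, use `import tensorflow as tf` instead

tf.disable_eager_execution()

# Hypothetical queue yielding training batches of 32 examples x 100 features.
q = tf.FIFOQueue(capacity=10, dtypes=tf.float32, shapes=[[32, 100]])
enqueue = q.enqueue(np.ones([32, 100], dtype=np.float32))

# Default value: a dequeued training batch (shape [32, 100]).
# Declared shape [None, 100] accepts any batch size when fed explicitly.
x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    sess.run(enqueue)
    train_out = sess.run(y)  # no feed: pulls the 32-example batch from the queue
    infer_out = sess.run(y, feed_dict={x: np.ones([5, 100], np.float32)})  # 5-example batch
    print(train_out.shape, infer_out.shape)
```

When x is not fed, running y dequeues from the queue as before; when it is fed, the queue is bypassed entirely, so training and inference batch sizes can differ.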