You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,10]
1. First, check that the data your feed supplies for the tensor has the same dtype and shape as the tensor created by the placeholder. If they differ, make them match. See below.
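As a minimal sketch of that check, outside TensorFlow: the placeholder spec below (float32, shape [?,10], matching the error message) and the matches() helper are illustrative names, not TensorFlow API.

```python
import numpy as np

# Hypothetical placeholder spec taken from the error message:
# dtype float32, shape [?, 10] (None = any batch size).
EXPECTED_DTYPE = np.float32
EXPECTED_SHAPE = (None, 10)

def matches(feed, dtype=EXPECTED_DTYPE, shape=EXPECTED_SHAPE):
    """Return True if `feed` is compatible with the placeholder spec."""
    if feed.dtype != dtype:
        return False
    if feed.ndim != len(shape):
        return False
    # A dimension of None accepts any size; fixed dimensions must agree.
    return all(s is None or s == d for s, d in zip(shape, feed.shape))

good = np.zeros((32, 10), dtype=np.float32)
bad = np.zeros((32, 10), dtype=np.float64)   # wrong dtype -> feed would fail
print(matches(good))  # True
print(matches(bad))   # False
```

If this check fails, cast with `arr.astype(np.float32)` or reshape the batch before feeding.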
2. Link to the original answer: http://www.itkeyword.com/doc/7197603979214654287/tensorflow-issue-with-placeholder-and-summaries
The problem here is that some of the summaries in your graph, collected by tf.merge_all_summaries(), depend on your placeholders. For example, the code in cifar10.py creates summaries for the various activations at each step, and these depend on the training batch that was fed. The solution is to feed the same training batch when you evaluate summary_op:
if step % 100 == 0:
    summary_str = sess.run(summary_op, feed_dict={
        images: image[offset:(offset + batch_size)],
        images2: image_p[offset:(offset + batch_size)],
        labels: 1.0 * label[offset:(offset + batch_size)]})
While this gives the smallest modification to your original code, it is slightly inefficient, because it will re-execute the training step every 100 steps. The best way to address this (although it will require some restructuring of your training loop) is to fetch the summaries in the same call to sess.run() that performs a training step:

if step % 100 == 0:
    _, loss_value, summary_str = sess.run([train_op, loss, summary_op], feed_dict={
        images: image[offset:(offset + batch_size)],
        images2: image_p[offset:(offset + batch_size)],
        labels: 1.0 * label[offset:(offset + batch_size)]})