TensorFlow convolutional network — SoftmaxCrossEntropyWithLogits: logits and labels must be same size: logits_size=[640,2] labels_size=[10,2]
I seem to have a problem with the matrix sizes in the convolutional layers. With the code below I am getting:
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[640,2] labels_size=[10,2]
Can someone point me to a beginner-friendly resource with a detailed explanation? The code is based on: https://github.com/martin-gorner/tensorflow-mnist-tutorial/blob/master/mnist_3.0_convolutional.py
import os
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import numpy as np
import glob
import fnmatch
import matplotlib.pyplot as plt
from PIL import Image
import random
import threading
import math
tf.set_random_seed(0)
def convertToOneHot(vector, num_classes=None):
    assert isinstance(vector, np.ndarray)
    assert len(vector) > 0
    if num_classes is None:
        num_classes = np.max(vector) + 1
    else:
        assert num_classes > 0
        assert num_classes >= np.max(vector)
    result = np.zeros(shape=(len(vector), num_classes))
    result[np.arange(len(vector)), vector] = 1
    return result.astype(np.float32)
def make_labels(filenames):
    n = len(filenames)
    label_y = np.zeros((n, 2), dtype=np.float32)
    dog = 0
    cat = 0
    for i in range(n):
        # If the 'dog' string is in the file name, set the first one-hot column
        if fnmatch.fnmatch(filenames[i], '*dog*'):
            label_y[i, 0] = 1
            dog += 1
        else:
            label_y[i, 1] = 1
            cat += 1
    print("Dog: ", dog, " Cat: ", cat)
    return label_y
def make_test_labels(filenames):
    n = len(filenames)
    test_label_y = np.zeros([n], dtype=np.int32)
    for i in range(n):
        test_label_y[i] = random.randrange(0, 2)
    one_hot = convertToOneHot(test_label_y)
    return one_hot
train_path = "./data/train/*.jpg"
test_path = "./data/test1/*.jpg"
#Training Dataset
train_files = tf.gfile.Glob(train_path)
train_image_labels = make_labels(train_files)
train_filename_queue = tf.train.string_input_producer(train_files, shuffle=False)
train_image_reader = tf.WholeFileReader()
train_image_filename, train_image_file = train_image_reader.read(train_filename_queue)
train_image_file = tf.image.decode_jpeg(train_image_file, 1)
train_image_file = tf.image.resize_images(train_image_file, [224, 224])
train_image_file.set_shape((224, 224, 1))
train_image_file = tf.squeeze(train_image_file)
#Test or Eval Dataset
test_files = tf.gfile.Glob(test_path)
test_image_labels = make_test_labels(test_files)
test_filename_queue = tf.train.string_input_producer(test_files, shuffle=False)
test_image_reader = tf.WholeFileReader()
test_image_filename, test_image_file = test_image_reader.read(test_filename_queue)
test_image_file = tf.image.decode_jpeg(test_image_file, 1)
test_image_file = tf.image.resize_images(test_image_file, [224, 224])
test_image_file.set_shape((224, 224, 1))
test_image_file = tf.squeeze(test_image_file)
train_batch_size = 10
test_batch_size = 2
num_preprocess_threads = 1
min_queue_examples = 256
X = tf.placeholder(tf.float32, [None, 224, 224, 1])
Y_ = tf.placeholder(tf.float32, [None, 2])
lr = tf.placeholder(tf.float32)
pkeep = tf.placeholder(tf.float32)
# three convolutional layers with their channel counts, and a
# fully connected layer (the last layer has 2 softmax neurons)
K = 4 # first convolutional layer output depth
L = 8 # second convolutional layer output depth
M = 12 # third convolutional layer
N = 200 # fully connected layer
W1 = tf.Variable(tf.truncated_normal([5, 5, 1, K], stddev=0.1)) # 5x5 patch, 1 input channel, K output channels
print "W1: " , W1.get_shape()
B1 = tf.Variable(tf.ones([K])/10)
print "B1: " , B1.get_shape()
W2 = tf.Variable(tf.truncated_normal([5, 5, K, L], stddev=0.1))
print "W2: " , W2.get_shape()
B2 = tf.Variable(tf.ones([L])/10)
print "B2: " , B2.get_shape()
W3 = tf.Variable(tf.truncated_normal([4, 4, L, M], stddev=0.1))
print "W3: " , W3.get_shape()
B3 = tf.Variable(tf.ones([M])/10)
print "B3: " , B3.get_shape()
W4 = tf.Variable(tf.truncated_normal([7 * 7 * M, N], stddev=0.1))
print "W4: " , W4.get_shape()
B4 = tf.Variable(tf.ones([N])/10)
print "B4: " , B4.get_shape()
W5 = tf.Variable(tf.truncated_normal([N, 2], stddev=0.1))
print "W5: " , W5.get_shape()
B5 = tf.Variable(tf.ones([2])/10)
print "B5: " , B5.get_shape()
# The model
stride = 1 # output is 224x224
Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, stride, stride, 1], padding='SAME') + B1)
print "Y1: " , Y1.get_shape()
stride = 2 # output is 112x112
Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1, stride, stride, 1], padding='SAME') + B2)
print "Y2: " , Y2.get_shape()
stride = 2 # output is 56x56
Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1, stride, stride, 1], padding='SAME') + B3)
print "Y3: " , Y3.get_shape()
# reshape the output from the third convolution for the fully connected layer
YY = tf.reshape(Y3, shape=[-1, 7 * 7 * M])
print "YY: " , YY.get_shape()
Y4 = tf.nn.relu(tf.matmul(YY, W4) + B4)
print "Y4: " , Y4.get_shape()
Ylogits = tf.matmul(Y4, W5) + B5
print "Ylogits: " , Ylogits.get_shape()
Y = tf.nn.softmax(Ylogits)
# cross-entropy loss function (= -sum(Y_i * log(Yi))), normalised for batches of 10 images
# TensorFlow provides softmax_cross_entropy_with_logits to avoid numerical
# instability from log(0), which is NaN
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
cross_entropy = tf.reduce_mean(cross_entropy) * 10
# accuracy of the trained model, between 0 (worst) and 1 (best)
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
allweights = tf.concat(0, [tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])])
allbiases = tf.concat(0, [tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])])
# training step, the learning rate is a placeholder
train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
train_images = tf.train.batch([train_image_file], batch_size=train_batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * train_batch_size, allow_smaller_final_batch=True)
test_images = tf.train.batch([test_image_file], batch_size=test_batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * test_batch_size, allow_smaller_final_batch=True)
train_labels = tf.train.batch([train_image_labels], batch_size=train_batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * train_batch_size, enqueue_many=True, allow_smaller_final_batch=True)
test_labels = tf.train.batch([test_image_labels], batch_size=test_batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * test_batch_size, enqueue_many=True, allow_smaller_final_batch=True)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
def training_step(i, update_test_data, update_train_data):
    train_images_batch = train_images.eval(session=sess)
    train_images_batch = np.expand_dims(train_images_batch, axis=3)
    train_labels_batch = train_labels.eval(session=sess)
    test_images_batch = test_images.eval(session=sess)
    test_images_batch = np.expand_dims(test_images_batch, axis=3)
    test_labels_batch = test_labels.eval(session=sess)
    # learning rate decay
    max_learning_rate = 0.003
    min_learning_rate = 0.0001
    decay_speed = 2000.0
    learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
    if update_train_data:
        a, c, w, b = sess.run([accuracy, cross_entropy, allweights, allbiases], {X: train_images_batch, Y_: train_labels_batch})
        print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
    if update_test_data:
        a, c = sess.run([accuracy, cross_entropy], {X: test_images_batch, Y_: test_labels_batch})
        print(str(i) + ": ********* epoch " + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
    # the backpropagation training step
    sess.run(train_step, {X: train_images_batch, Y_: train_labels_batch, lr: learning_rate})
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord, sess=sess)
for i in range(10000 + 1):
    training_step(i, i % 100 == 0, i % 20 == 0)
coord.request_stop()
coord.join(threads)
Thanks.

Output:
('Dog: ', 12500, ' Cat: ', 12500)
W1: (5, 5, 1, 4)
B1: (4,)
W2: (5, 5, 4, 8)
B2: (8,)
W3: (4, 4, 8, 12)
B3: (12,)
W4: (588, 200)
B4: (200,)
W5: (200, 2)
B5: (2,)
Y1: (?, 224, 224, 4)
Y2: (?, 112, 112, 8)
Y3: (?, 56, 56, 12)
YY: (?, 588)
Y4: (?, 200)
Ylogits: (?, 2)
Traceback (most recent call last):
File "convolutional.py", line 306, in <module>
training_step(i, i % 100 == 0, i % 20 == 0)
File "convolutional.py", line 288, in training_step
a, c, w, b = sess.run([accuracy, cross_entropy, allweights, allbiases], {X: train_images_batch, Y_: train_labels_batch})
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 766, in run
run_metadata_ptr)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[640,2] labels_size=[10,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]
Caused by op u'SoftmaxCrossEntropyWithLogits', defined at:
File "convolutional.py", line 229, in <module>
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1449, in softmax_cross_entropy_with_logits
precise_logits, labels, name=name)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2265, in _softmax_cross_entropy_with_logits
features=features, labels=labels, name=name)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
op_def=op_def)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/dragon/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[640,2] labels_size=[10,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]
If the accuracy is not good, you can add pooling layers or improve the architecture (you can take inspiration/ideas from classification networks like AlexNet, trained on ImageNet, etc.). You will need to find the right combination of hyperparameters to get really good accuracy. –
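For what it's worth, the numbers in the error already show where the mismatch comes from: with 224x224 inputs and two stride-2 convolutions, Y3 has shape (batch, 56, 56, 12), but the reshape uses 7 * 7 * M = 588, which was written for the 28x28 MNIST images of the original tutorial. A batch of 10 images therefore gets split into 640 rows of 588 values. A minimal NumPy sketch of just this reshape arithmetic (layer sizes taken from the printed shapes above; this is not a full fix of the script):

```python
import numpy as np

# Shapes as printed above: batch of 10, third conv layer has M = 12 channels,
# and 224 -> 112 -> 56 after the two stride-2 convolutions.
batch, M = 10, 12
y3 = np.zeros((batch, 56, 56, M))

# The tutorial's flatten, written for 28x28 MNIST inputs (7x7 after striding):
yy_wrong = y3.reshape(-1, 7 * 7 * M)
print(yy_wrong.shape)   # (640, 588): 10*56*56*12 elements split into 588-wide rows

# Flatten that matches the actual conv output for 224x224 inputs:
yy_right = y3.reshape(-1, 56 * 56 * M)
print(yy_right.shape)   # (10, 37632)
```

With the second reshape, W4 would correspondingly need shape [56 * 56 * M, N] instead of [7 * 7 * M, N]; alternatively, pooling layers (as suggested above) could shrink the 224x224 feature maps down toward 7x7 so the tutorial's sizes apply again.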