
load_weights Keras model error

I'm fairly new to CNNs; this is my first time working with Keras, TensorFlow, and the like. I'm having a problem with the load_weights function: I trained a CNN on CIFAR-100, and now I want to test it by loading the saved weights and evaluating the model.

Here is the stack trace of the error I get:

Traceback (most recent call last): 

    File "<ipython-input-17-247d6312ea1b>", line 1, in <module> 
    runfile('/home/nikola/Desktop/cifar100-Version2.py', wdir='/home/nikola/Desktop') 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile 
    execfile(filename, namespace) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile 
    builtins.execfile(filename, *where) 

    File "/home/nikola/Desktop/cifar100-Version2.py", line 80, in <module> 
    model.load_weights('cifar100_best_accuracy.hdf5') 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2520, in load_weights 
    self.load_weights_from_hdf5_group(f) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2605, in load_weights_from_hdf5_group 
    K.batch_set_value(weight_value_tuples) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1045, in batch_set_value 
    assign_op = x.assign(assign_placeholder) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 575, in assign 
    return state_ops.assign(self._variable, value, use_locking=use_locking) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign 
    use_locking=use_locking, name=name) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op 
    op_def=op_def) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2242, in create_op 
    set_shapes_for_outputs(ret) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs 
    shapes = shape_func(op) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring 
    return call_cpp_shape_fn(op, require_shape_fn=True) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn 
    debug_python_shape_fn, require_shape_fn) 

    File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl 
    raise ValueError(err.message) 

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 32 for 'Assign_11' (op: 'Assign') with input shapes: [3,3,3,32], [32,3,3,3]. 

I'm trying to extend the Keras CIFAR-10 example code to CIFAR-100. I was able to train the model, but now I want to evaluate it so I can tell whether it is any good and what score it gets.

Here is my code:

from __future__ import print_function 
from keras.datasets import cifar100 
from keras.preprocessing.image import ImageDataGenerator 
from keras.callbacks import ModelCheckpoint 
from keras.models import Sequential 
from keras.layers import Dense, Dropout, Activation, Flatten 
from keras.layers import Convolution2D, MaxPooling2D 
from keras.utils import np_utils, generic_utils 
from six.moves import range 

#import numpy as np 
#import matplotlib.pyplot as plt 

batch_size = 32 
nb_classes = 100 

classes = [...100 classes...] 

test_only = True 
save_weights = True 

nb_epoch = 200 
data_augmentation = True 

# input image dimensions 
img_rows, img_cols = 32, 32 
# The CIFAR100 images are RGB. 
img_channels = 3 

# The data, shuffled and split between train and test sets: 
(X_train, y_train), (X_test, y_test) = cifar100.load_data() 
print('X_train shape:', X_train.shape) 
print(X_train.shape[0], 'train samples') 
print(X_test.shape[0], 'test samples') 

# Convert class vectors to binary class matrices. 
Y_train = np_utils.to_categorical(y_train, nb_classes) 
Y_test = np_utils.to_categorical(y_test, nb_classes) 

model = Sequential() 

model.add(Convolution2D(32, 3, 3, border_mode='same', 
         input_shape=X_train.shape[1:])) 
model.add(Activation('relu')) 
model.add(Convolution2D(32, 3, 3)) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Dropout(0.25)) 

model.add(Convolution2D(64, 3, 3, border_mode='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(64, 3, 3)) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Dropout(0.25)) 

model.add(Flatten()) 
model.add(Dense(512)) 
model.add(Activation('relu')) 
model.add(Dropout(0.5)) 
model.add(Dense(nb_classes)) 
model.add(Activation('softmax')) 

# Let's train the model using RMSprop 
model.compile(loss='categorical_crossentropy', 
       optimizer='rmsprop', 
       metrics=['accuracy']) 




if test_only: 
    model.load_weights('cifar100_best_accuracy.hdf5') 

X_train = X_train.astype('float32') 
X_test = X_test.astype('float32') 
X_train /= 255 
X_test /= 255 

if not data_augmentation: 
    print('Not using data augmentation.') 
    model.fit(X_train, Y_train, 
       batch_size=batch_size, 
       nb_epoch=nb_epoch, 
       validation_data=(X_test, Y_test), 
       shuffle=True) 
    score = model.evaluate(X_test, Y_test, batch_size = batch_size) 
    print('Test score:', score) 
else: 
    print('Using real-time data augmentation.') 
    # This will do preprocessing and realtime data augmentation: 
    datagen = ImageDataGenerator(
     featurewise_center=False, # set input mean to 0 over the dataset 
     samplewise_center=False, # set each sample mean to 0 
     featurewise_std_normalization=False, # divide inputs by std of the dataset 
     samplewise_std_normalization=False, # divide each input by its std 
     zca_whitening=False, # apply ZCA whitening 
     rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180) 
     width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) 
     height_shift_range=0.1, # randomly shift images vertically (fraction of total height) 
     horizontal_flip=True, # randomly flip images 
     vertical_flip=False) # randomly flip images 

    # Compute quantities required for featurewise normalization 
    # (std, mean, and principal components if ZCA whitening is applied). 
    datagen.fit(X_train) 


    model_check_point = ModelCheckpoint('cifar100_best_accuracy.hdf5', monitor='acc', verbose=0, save_best_only=True, save_weights_only=False, mode='auto') 


    # Fit the model on the batches generated by datagen.flow(). 
    model.fit_generator(datagen.flow(X_train, Y_train, 
         batch_size=batch_size), 
         samples_per_epoch=X_train.shape[0], 
         nb_epoch=nb_epoch, 
         callbacks=[model_check_point], 
         validation_data=(X_test, Y_test)) 
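
The ValueError above says that 'Assign_11' received kernel shapes [3,3,3,32] and [32,3,3,3], i.e. the same 3x3 convolution kernel with 3 input channels and 32 filters, laid out in two different dimension orderings. One way to see which layout the checkpoint actually contains is to open the HDF5 file directly and print the stored shapes. This is only a sketch, assuming the h5py package is installed and the file was written by the ModelCheckpoint above:

import h5py

# Hypothetical inspection script: print the shape of every weight tensor in
# the checkpoint so it can be compared with [w.shape for w in model.get_weights()].
with h5py.File('cifar100_best_accuracy.hdf5', 'r') as f:
    # ModelCheckpoint with save_weights_only=False saves the full model, which
    # keeps the weights under a 'model_weights' group; a weights-only file does not.
    g = f['model_weights'] if 'model_weights' in f else f
    for layer_name in g.attrs['layer_names']:
        layer = g[layer_name]
        for weight_name in layer.attrs['weight_names']:
            print(layer_name, weight_name, layer[weight_name].shape)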

It looks like there is a shape mismatch somewhere. I think the trick is to get the order of batch_size, image height, image width, and number of channels right. You will need to do a bit of debugging to find where the problem is. If I were you, I would reduce the model to a one-layer model to simplify debugging. –
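
Following up on that suggestion, a single-layer debug model could look like the sketch below (hypothetical code, not the asker's; the input_shape assumes the 'tf' dimension ordering, with channels last):

from keras.models import Sequential
from keras.layers import Convolution2D

# Minimal one-layer model used only to debug saving and loading weights.
debug_model = Sequential()
debug_model.add(Convolution2D(32, 3, 3, border_mode='same',
                              input_shape=(32, 32, 3)))  # (3, 32, 32) under 'th' ordering

# Shapes the current backend expects, to compare against the checkpoint.
print([w.shape for w in debug_model.get_weights()])

debug_model.save_weights('debug_conv.hdf5', overwrite=True)
debug_model.load_weights('debug_conv.hdf5')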


I trained the CNN on one PC (Windows 10), but I'm trying to run load_weights on a different PC running Ubuntu. Could a mismatch between these two be the problem? –


I can't think of anything. Can you load it on the same PC (Windows 10)? –
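
Since the weights were saved on Windows and loaded on Ubuntu, one setting worth comparing on the two machines is the Keras configuration in ~/.keras/keras.json: with image_dim_ordering 'th' a Convolution2D kernel is stored as (nb_filter, channels, rows, cols) = (32, 3, 3, 3), while 'tf' stores it as (rows, cols, channels, nb_filter) = (3, 3, 3, 32), which matches the two shapes in the error. A quick check to run on both machines (a sketch, not a confirmed diagnosis):

from keras import backend as K

# These should print the same values on the machine that trained the model
# and on the machine that loads the weights.
print(K.backend())              # 'tensorflow' or 'theano'
print(K.image_dim_ordering())   # 'tf' or 'th'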

Answer


You are saving the whole model, and then, after training, loading it back as weights.

First, fix your script so that it saves the weights and loads them back, and see whether that solves the problem.
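
A minimal round trip along those lines, assuming the model defined in the question and a hypothetical file name:

# Save only the weights, then immediately load them back into the same model.
model.save_weights('cifar100_weights_check.hdf5', overwrite=True)
model.load_weights('cifar100_weights_check.hdf5')

# If the round trip works on this machine, evaluate as planned.
score = model.evaluate(X_test, Y_test, batch_size=batch_size)
print('Test score:', score)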