How can a Keras model predict just one sample?

I mean that I do the training offline, processing samples in batches (I set batch_size=100 in model.fit()), but then I want to predict a single sample in real time. So I call

model.predict(x_real_time, batch_size=1)

but it raises an error:

`ValueError: Cannot feed value of shape (1, 3) for Tensor 'input_11:0', which has shape '(165047, 3)'`

Could someone tell me how to solve this? Thanks.

Here is the whole code:
from keras.layers import Input, Dense, Dropout, Lambda
from keras.models import Model
from keras import backend as K
from keras import objectives
from keras.optimizers import Adam
import matplotlib.pyplot as plt

batch_size = int(data_num_.shape[0]/10)
original_dim = data_num_.shape[1]
latent_dim = data_num_.shape[1]*2
intermediate_dim = data_num_.shape[1]*10
nb_epoch = 10
epsilon_std = 0.001
data_untrain = data_scale.transform(df[(df['label']==cluster_num)&(df['prob']<threshold)].iloc[:,:data_num.shape[1]].values)
data_untrain_num = (int(data_untrain.shape[0]/batch_size)-1)*batch_size
data_untrain = data_untrain[:data_untrain_num,:]
x = Input(batch_shape=(batch_size, original_dim))
init_drop = Dropout(0.2, input_shape=(original_dim,))(x)
h = Dense(intermediate_dim, activation='relu')(init_drop)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
                              std=epsilon_std)
    return z_mean + K.exp(z_log_var/2) * epsilon
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='linear')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.mae(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss
vae = Model(x, x_decoded_mean)
vae.compile(optimizer=Adam(lr=0.01), loss=vae_loss)
train_ratio = 0.9
train_num = int(data_num_.shape[0]*train_ratio/batch_size)*batch_size
test_num = int(data_num_.shape[0]*(1-train_ratio)/batch_size)*batch_size
x_train = data_num_[:train_num,:]
x_test = data_num_[-test_num:,:]
vae.fit(x_train, x_train,
        shuffle=True,
        nb_epoch=nb_epoch,
        batch_size=batch_size,
        validation_data=(x_test, x_test))
# build a model to project inputs on the latent space
encoder = Model(x, z_mean)
x_test_predict = data_scale_.inverse_transform(vae.predict(x_test, batch_size=1))
x_test = data_scale_.inverse_transform(x_test)
for idx in range(x_test.shape[1]):
    plt.plot(x_test[:,idx], alpha=0.3, color='red')
    plt.plot(x_test_predict[:,idx], alpha=0.3, color='blue')
plt.show()
plt.close()
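For context on the error: `Input(batch_shape=(batch_size, original_dim))` bakes a fixed batch size into the graph, so the model will only accept inputs with exactly that first dimension. One common workaround is to build a second, structurally identical model whose Input uses `shape=` (flexible batch dimension) and copy the trained weights into it, since layer weights do not depend on batch size. The sketch below illustrates the pattern with `tensorflow.keras` and an illustrative hidden size; it is not the question's VAE, just a minimal demonstration of the fixed-vs-flexible batch distinction:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

original_dim = 3  # matches the (*, 3) shapes in the error message

def build_model(fixed_batch=None):
    # batch_shape pins the batch size into the graph; shape leaves the
    # batch dimension flexible (it becomes None)
    if fixed_batch is not None:
        x = Input(batch_shape=(fixed_batch, original_dim))
    else:
        x = Input(shape=(original_dim,))
    h = Dense(8, activation='relu')(x)  # illustrative hidden size
    out = Dense(original_dim, activation='linear')(h)
    return Model(x, out)

# Train-time model with a fixed batch size, as in the question ...
train_model = build_model(fixed_batch=100)
train_model.compile(optimizer='adam', loss='mse')

# ... and a structurally identical inference model with a flexible batch,
# initialized from the trained weights.
infer_model = build_model()
infer_model.set_weights(train_model.get_weights())

# A single sample now predicts without a shape error:
single = infer_model.predict(np.zeros((1, original_dim)))
print(single.shape)  # (1, 3)
```

Note that for the VAE in the question, the `sampling` function would also need a batch-independent noise tensor, e.g. drawing epsilon with `shape=K.shape(z_mean)` rather than the fixed `(batch_size, latent_dim)`, before single-sample prediction can work end to end.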
Would you please put your whole code here? – Arman
The model-instantiation code and the shape of x_real_time would be very helpful for solving your problem. –
Thanks, I have attached the code above. – zb1872