I am trying to implement the architecture from "Livelinet: A Multimodal Deep Recurrent Neural Network to Predict Liveliness in Educational Videos", and I am running into a torch.nn.LSTM runtime error.
Briefly, I split a 10-second audio clip into ten 1-second audio clips and compute a spectrogram (an image) from each 1-second clip. I then use a CNN to extract a representation vector from each spectrogram image, so I end up with 10 vectors, one per 1-second clip.
Next I feed these 10 vectors into an LSTM, and that is where I hit an error. My code and the error traceback are as follows:
import torch
import torch.nn as nn
from torch.autograd import Variable

# alexnet and classifier are defined earlier in my code (not shown here):
# alexnet is an AlexNet model and classifier is a list of layers.
class AudioCNN(nn.Module):
    def __init__(self):
        super(AudioCNN, self).__init__()
        self.features = alexnet.features
        self.features2 = nn.Sequential(*classifier)
        self.lstm = nn.LSTM(512, 256, 2)
        self.classifier = nn.Linear(2*256, 2)

    def forward(self, x):
        x = self.features(x)                  # 10 x 256 x 6 x 6
        print x.size()
        x = x.view(x.size(0), 256*6*6)
        x = self.features2(x)                 # expected: 10 x 512
        x = x.view(10, 1, 512)                # seq_len x batch x input_size
        h_0, c_0 = self.init_hidden()
        _, (_, _) = self.lstm(x, (h_0, c_0))  # h_0, c_0: 2 x 1 x 256
        assert False                          # debugging stop; the rest is not reached yet
        x = x.view(1, 1, 2*256)
        x = self.classifier(x)
        return x

    def init_hidden(self):
        h_0 = torch.randn(2, 1, 256)          # num_layers x batch x hidden_size
        c_0 = torch.randn(2, 1, 256)
        return h_0, c_0
audiocnn = AudioCNN()
input = torch.randn(10, 3, 223, 223)   # 10 spectrogram images, one per 1-second clip
input = Variable(input)
audiocnn(input)
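For reference, nn.LSTM(512, 256, 2) with the default batch_first=False expects an input of shape (seq_len, batch, 512) and initial hidden/cell states of shape (num_layers, batch, 256), so the sizes above look consistent. A minimal standalone check of just the LSTM call, using the same sizes and the same pre-0.4 Variable style, would be:

import torch
import torch.nn as nn
from torch.autograd import Variable

# Standalone check of the LSTM call with the same sizes as in AudioCNN
# (everything wrapped in Variable, as required before PyTorch 0.4).
lstm = nn.LSTM(512, 256, 2)              # input_size, hidden_size, num_layers
x = Variable(torch.randn(10, 1, 512))    # seq_len x batch x input_size
h_0 = Variable(torch.randn(2, 1, 256))   # num_layers x batch x hidden_size
c_0 = Variable(torch.randn(2, 1, 256))
output, (h_n, c_n) = lstm(x, (h_0, c_0))
print(output.size())                     # (10, 1, 256)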
Error:
RuntimeErrorTraceback (most recent call last)
<ipython-input-64-2913316dbb34> in <module>()
----> 1 audiocnn(input)
/home//local/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)
<ipython-input-60-31881982cca9> in forward(self, x)
15 x = x.view(10,1,512)
16 h_0,c_0 = self.init_hidden()
---> 17 _, (_, _) = self.lstm(x,(h_0,c_0)) # x dim : 2 x 1 x 256
18 assert False
19 x = x.view(1,1,2*256)
/home/local/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)
/home//local/lib/python2.7/site-packages/torch/nn/modules/rnn.pyc in forward(self, input, hx)
160 flat_weight=flat_weight
161 )
--> 162 output, hidden = func(input, self.all_weights, hx)
163 if is_packed:
164 output = PackedSequence(output, batch_sizes)
/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, *fargs, **fkwargs)
349 else:
350 func = AutogradRNN(*args, **kwargs)
--> 351 return func(input, *fargs, **fkwargs)
352
353 return forward
/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, weight, hidden)
242 input = input.transpose(0, 1)
243
--> 244 nexth, output = func(input, hidden, weight)
245
246 if batch_first and batch_sizes is None:
/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, hidden, weight)
82 l = i * num_directions + j
83
---> 84 hy, output = inner(input, hidden[l], weight[l])
85 next_hidden.append(hy)
86 all_output.append(output)
/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, hidden, weight)
111 steps = range(input.size(0) - 1, -1, -1) if reverse else range(input.size(0))
112 for i in steps:
--> 113 hidden = inner(input[i], hidden, *weight)
114 # hack to handle LSTM
115 output.append(hidden[0] if isinstance(hidden, tuple) else hidden)
/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in LSTMCell(input, hidden, w_ih, w_hh, b_ih, b_hh)
29
30 hx, cx = hidden
---> 31 gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
32
33 ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
/home//local/lib/python2.7/site-packages/torch/nn/functional.pyc in linear(input, weight, bias)
551 if input.dim() == 2 and bias is not None:
552 # fused op is marginally faster
--> 553 return torch.addmm(bias, input, weight.t())
554
555 output = input.matmul(weight.t())
/home//local/lib/python2.7/site-packages/torch/autograd/variable.pyc in addmm(cls, *args)
922 @classmethod
923 def addmm(cls, *args):
--> 924 return cls._blas(Addmm, args, False)
925
926 @classmethod
/home//local/lib/python2.7/site-packages/torch/autograd/variable.pyc in _blas(cls, args, inplace)
918 else:
919 tensors = args
--> 920 return cls.apply(*(tensors + (alpha, beta, inplace)))
921
922 @classmethod
RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition
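I suspect the issue is that, in these older PyTorch versions (the python2.7 paths suggest pre-0.4), this save_for_backward error means a plain Tensor was passed where a Variable was expected: the LSTM input x is a Variable, but init_hidden returns raw tensors from torch.randn. A minimal sketch of the fix, assuming that is indeed the cause, would be to wrap the initial states in Variable (torch.autograd.Variable, already imported above):

def init_hidden(self):
    # Wrap the initial hidden and cell states in Variable so they match the
    # Variable input passed to forward() (num_layers x batch x hidden_size).
    h_0 = Variable(torch.randn(2, 1, 256))
    c_0 = Variable(torch.randn(2, 1, 256))
    return h_0, c_0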