I would like to design a deep network with one or more convolutional layers (CNN) and one or more fully connected hidden layers on top.
For deep networks with fully connected layers, there are methods in Theano for unsupervised pre-training, e.g., using denoising auto-encoders or RBMs.
My question is: how can I implement (in Theano) an unsupervised pre-training stage for convolutional layers?
I do not expect a full implementation as an answer, but I would appreciate a link to a good tutorial or a reliable reference.
This paper describes an approach for building a stacked convolutional autoencoder. Based on that paper and some Google searches, I was able to implement the described network. Basically, everything you need is described in the Theano convolutional network and denoising autoencoder tutorials, with one crucial exception: how to reverse the max-pooling step in the convolutional network. I was able to work that out using a method from this discussion. The trickiest part is figuring out the right dimensions for W_prime, as these depend on the feed-forward filter sizes and the pooling ratio. Here is my inverting function:
import numpy as np
import theano.tensor as T
from theano.tensor.nnet import conv
from theano.sandbox.neighbours import neibs2images

def get_reconstructed_input(self, hidden):
    """Computes the reconstructed input given the values of the hidden layer."""
    # 'full' convolution with the reconstruction filters W_prime
    repeated_conv = conv.conv2d(input=hidden, filters=self.W_prime, border_mode='full')
    # replicate every value poolsize times and rearrange the neighbourhoods
    # back into an image of the input's shape, undoing the max-pooling step
    multiple_conv_out = [repeated_conv.flatten()] * np.prod(self.poolsize)
    stacked_conv_neibs = T.stack(*multiple_conv_out).T
    stretch_unpooling_out = neibs2images(stacked_conv_neibs, self.pl, self.x.shape)
    rectified_linear_activation = lambda x: T.maximum(0.0, x)
    return rectified_linear_activation(stretch_unpooling_out + self.b_prime.dimshuffle('x', 0, 'x', 'x'))
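To make the dimension bookkeeping concrete: the feed-forward pass (as in the Theano convolutional network tutorial) is a 'valid' convolution followed by max-pooling, so for the 'full' convolution above to produce maps of exactly input_size / poolsize per dimension (which the poolsize-fold replication then stretches back to self.x.shape), the W_prime filter sizes are forced by the feed-forward filter sizes and the pooling ratio. The following small helper captures that arithmetic; it is my own sketch, not code from the paper, and the name w_prime_shape is hypothetical:

def w_prime_shape(n_channels, n_hidden_maps, filter_shape, poolsize):
    """Hypothetical helper: the shape required for the reconstruction
    filters W_prime, given the feed-forward filter and pooling sizes."""
    fh, fw = filter_shape
    ph, pw = poolsize
    # (fh - 1) must be divisible by ph, and (fw - 1) by pw, for the
    # unpooled reconstruction to line up with the original input
    assert (fh - 1) % ph == 0 and (fw - 1) % pw == 0
    return (n_channels, n_hidden_maps, (fh - 1) // ph + 1, (fw - 1) // pw + 1)

For example, with 28x28 single-channel inputs, 20 hidden feature maps, 5x5 feed-forward filters, and 2x2 pooling (assuming the pooling divides the post-convolution size evenly), the hidden maps are 12x12; w_prime_shape(1, 20, (5, 5), (2, 2)) gives (1, 20, 3, 3), so the 'full' convolution yields 14x14 maps and the unpooling stretches them back to 28x28.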