Removing layers from a pretrained keras model gives the same output as original model

Koul · Dec 29, 2017 · Viewed 7k times

During some feature extraction experiments, I noticed that the 'model.pop()' functionality is not working as expected. For a pretrained model like VGG16, after using 'model.pop()', model.summary() shows that the last layer has been removed (so I expected 4096 features from the remaining fc2 layer), however passing an image through the new model still results in the same number of features (1000) as the original model. No matter how many layers are removed, including a completely empty model, it generates the same output. Looking for your guidance on what might be the issue.

#Passing an image through the full vgg16 model
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

model = VGG16(weights = 'imagenet', include_top = True, input_shape = (224,224,3))
img = image.load_img( 'cat.jpg', target_size=(224,224) )
img = image.img_to_array( img )
img = np.expand_dims( img, axis=0 )
img = preprocess_input( img )
features = model.predict( img )
features = features.flatten()
print(len(features)) #Expected 1000 features corresponding to 1000 imagenet classes

1000

model.layers.pop()
img = image.load_img( 'cat.jpg', target_size=(224,224) )
img = image.img_to_array( img )
img = np.expand_dims( img, axis=0 )
img = preprocess_input( img )
features2 = model.predict( img )
features2 = features2.flatten()
print(len(features2)) #Expected 4096 features, but still getting 1000. Why?
#No matter how many layers are removed, the output is still 1000

1000

Thank you!

See full code here: https://github.com/keras-team/keras/files/1592641/bug-feature-extraction.pdf

Answer

user3731622 · Oct 24, 2019

Working off @Koul's answer.

I believe you don't need to use the pop method. Instead, just pass the output of the layer before the last one as the outputs argument when constructing a new Model:

from keras.models import Model

#Build a new model that reuses the original weights but whose output is the
#second-to-last layer (fc2 in VGG16, which produces 4096 features)
model2 = Model(model.input, model.layers[-2].output)
model2.summary()
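
A quick check (a minimal sketch, assuming 'img' still holds the preprocessed image from the question) confirms the new model now returns the 4096-dimensional fc2 features:

features_fc2 = model2.predict( img )
features_fc2 = features_fc2.flatten()
print(len(features_fc2)) #4096 features from the fc2 layer

This works because Model(model.input, model.layers[-2].output) rebuilds a model whose output tensor is the fc2 activation, whereas model.layers.pop() only edits the Python list of layers and leaves the original model's output tensor (the 1000-way softmax) untouched.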