I am using a multi-output model in Keras:

model1 = Model(input=x, output=[y2, y3])
model1.compile(optimizer='sgd', loss=custom_loss_function)
My custom_loss function is:
def custom_loss(y_true, y_pred):
    y2_pred = y_pred[0]
    y2_true = y_true[0]
    loss = K.mean(K.square(y2_true - y2_pred), axis=-1)
    return loss
I only want to train the network on output y2.
What is the shape/structure of the y_pred and y_true arguments in the loss function when multiple outputs are used? Can I access them as above? Is it y_pred[0] or y_pred[:,0]?
I only want to train the network on output y2.
Based on the Keras functional API guide, you can achieve that with:

model1 = Model(input=x, output=[y2, y3])
model1.compile(optimizer='sgd', loss=custom_loss_function,
               loss_weights=[1., 0.0])
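
Putting that together, here is a minimal runnable sketch; the layer sizes, output names and dummy data are placeholders of my own, and I use the Keras 2-style inputs=/outputs= argument names:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras import backend as K

def custom_loss(y_true, y_pred):
    return K.mean(K.square(y_true - y_pred), axis=-1)

x = Input(shape=(16,))
h = Dense(32, activation='relu')(x)
y2 = Dense(1, name='y2')(h)
y3 = Dense(1, name='y3')(h)

model1 = Model(inputs=x, outputs=[y2, y3])
# weight 0.0 on y3 means its loss never contributes to the gradients
model1.compile(optimizer='sgd',
               loss=custom_loss,
               loss_weights=[1., 0.0])

# dummy data, one target array per output
X = np.random.rand(8, 16)
model1.fit(X, [np.random.rand(8, 1), np.random.rand(8, 1)], epochs=1, verbose=0)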
What is the shape/structure of the y_pred and y_true arguments in the loss function when multiple outputs are used? Can I access them as above? Is it y_pred[0] or y_pred[:,0]?
In Keras multi-output models, the loss function is applied to each output separately. In pseudo-code:
loss = sum([loss_function(output_true, output_pred)
            for (output_true, output_pred) in zip(outputs_data, outputs_model)])
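
So inside custom_loss each call only ever sees the tensors of a single output, shaped (batch_size, output_dim): y_pred[0] would be the first sample in the batch and y_pred[:,0] the first feature column, but no indexing by output is needed at all. A sketch:

def custom_loss(y_true, y_pred):
    # y_true and y_pred already belong to one output (here y2 or y3),
    # so the plain MSE over that output is enough
    return K.mean(K.square(y_true - y_pred), axis=-1)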
As far as I can tell, there is no built-in way to compute a single loss over several outputs at once. You could probably achieve that by incorporating the loss computation as a layer of the network, as sketched below.
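
A rough sketch of that idea, assuming tf.keras: a custom layer receives the ground truth as an extra model input, computes one loss that can see several outputs at once, and registers it with add_loss. The layer name JointLoss, the 0.5 mixing factor and the layer sizes are my own illustrative choices:

import tensorflow as tf
from tensorflow.keras import layers, Model

class JointLoss(layers.Layer):
    # computes a single loss from several outputs and registers it via add_loss
    def call(self, inputs):
        y2_true, y2_pred, y3_pred = inputs
        # illustrative joint term: MSE on y2 plus a penalty that also looks at y3
        loss = (tf.reduce_mean(tf.square(y2_true - y2_pred))
                + 0.5 * tf.reduce_mean(tf.square(y3_pred - y2_pred)))
        self.add_loss(loss)
        return y2_pred

x_in = layers.Input(shape=(16,))
y2_true_in = layers.Input(shape=(1,))      # ground truth passed in as an extra input
h = layers.Dense(32, activation='relu')(x_in)
y2 = layers.Dense(1, name='y2')(h)
y3 = layers.Dense(1, name='y3')(h)

out = JointLoss()([y2_true_in, y2, y3])
train_model = Model([x_in, y2_true_in], out)
train_model.compile(optimizer='sgd')       # no loss argument: it comes from add_loss

With this pattern you train by feeding the targets as inputs, e.g. train_model.fit([X, Y2], epochs=1), since the loss is already attached to the graph.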