I am looking through the Caffe prototxt for deep residual networks and have noticed the appearance of a "Scale" layer.
layer {
  bottom: "res2b_branch2b"
  top: "res2b_branch2b"
  name: "scale2b_branch2b"
  type: "Scale"
  scale_param {
    bias_term: true
  }
}
However, this layer is not available in the Caffe layer catalogue. Can someone explain the functionality of this layer and the meaning of its parameters, or point to up-to-date documentation for Caffe?
You can find detailed documentation on Caffe here.
Specifically, for the "Scale" layer the doc reads:
Computes a product of two input Blobs, with the shape of the latter Blob "broadcast" to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise product.
The second input may be omitted, in which case it's learned as a parameter of the layer.
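To make that broadcasting concrete, here is a minimal numpy sketch (not Caffe code) of the two-bottom case; the shapes and the channel axis (Caffe's default axis: 1) are illustrative assumptions:

import numpy as np

# Sketch of the two-bottom "Scale" layer.
# bottom0 has shape (N, C, H, W); bottom1 has shape (C,) and is
# broadcast along the channel axis (assuming Caffe's default axis: 1).
bottom0 = np.random.randn(2, 3, 4, 4)   # e.g. a feature-map blob
bottom1 = np.array([0.5, 1.0, 2.0])     # one scale value per channel

# Reshape bottom1 so it tiles across N, H and W, then multiply
# elementwise -- equivalent to tiling bottom1 to bottom0's shape.
top = bottom0 * bottom1.reshape(1, -1, 1, 1)
print(top.shape)                        # (2, 3, 4, 4)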
It seems like, in your case (single "bottom"), this layer learns a scale factor to multiply "res2b_branch2b". Moreover, scale_param { bias_term: true } means the layer learns not only a multiplicative scaling factor but also an additive bias term. So the forward pass computes:
res2b_branch2b <- res2b_branch2b * \alpha + \beta
During training the net tries to learn the values of \alpha and \beta.
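As a rough numpy sketch of that forward pass (assuming the default per-channel parameters, i.e. axis: 1 and num_axes: 1, so there is one \alpha and one \beta per channel):

import numpy as np

# Sketch of the single-bottom Scale layer with bias_term: true.
x = np.random.randn(2, 3, 4, 4)     # the "res2b_branch2b" blob, (N, C, H, W)
alpha = np.ones(3)                  # learned scale, one value per channel
beta = np.zeros(3)                  # learned bias, one value per channel

# Forward pass: x <- x * alpha + beta, broadcast over N, H and W.
out = x * alpha.reshape(1, -1, 1, 1) + beta.reshape(1, -1, 1, 1)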