I'm implementing the Histogram of Oriented Gradient features from "Histograms of oriented gradients for human detection" and I'd like to visualise the result. All papers on these features use a standard visualisation, but I can't find any description of how these are generated. I'd be grateful for an explanation or helpful link.
The visualizations you see in papers can be interpreted as follows:
The descriptor is made up of M*N cells covering the image window in a grid. Each cell is represented by a histogram of edge orientations, where the number of discretized edge orientations is a parameter (usually 9). The cell histogram is visualized by a 'star' showing the strength of the edge orientations in the histogram: the stronger a specific orientation, the longer the corresponding line segment is relative to the others.
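If it helps, here is a rough sketch (my own, not code from any of the papers) of how such a star can be drawn for a single cell in Matlab. The histogram values are just made up for illustration, and I draw each segment along the edge direction (gradient direction plus 90 degrees), which is the convention most visualizations use:

```matlab
% Draw one cell's histogram as a 'star' of line segments through the
% cell centre, each segment's length proportional to its bin weight.
h = [0.1 0.8 0.3 0.2 0.9 0.1 0.4 0.2 0.6];   % example 9-bin cell histogram
K = numel(h);
figure; hold on; axis equal; axis off;
for k = 1:K
    theta = (k-1) * pi / K + pi/2;   % edge direction = gradient dir + 90 deg
    r = h(k) / max(h);               % length relative to the strongest bin
    plot([-r r] * cos(theta), [-r r] * sin(theta), 'k', 'LineWidth', 1.5);
end
```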
Note that there are various normalization schemes: local schemes, in which the cell is normalized with respect to neighboring cells only (as in the original paper by Dalal and Triggs), or global schemes, in which the orientation length is normalized over all the cells. Also note that some authors use multiple local normalizations per cell (e.g. the one I am referring to below), but the visualization only shows one (or an average of them). A minimal sketch of the local scheme is given after this paragraph.
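For concreteness, here is a minimal sketch of the local scheme (again my own code, not taken from the paper): each 2x2 block of cells is L2-normalized, so every cell ends up contributing to several differently normalized blocks. The array sizes are just placeholders:

```matlab
% Local (block-wise) L2 normalization of cell histograms.
cellHists = rand(7, 11, 9);          % stand-in for M-by-N cells, 9 bins each
[M, N, K] = size(cellHists);
blocks = zeros(M-1, N-1, 4*K);       % one normalized vector per 2x2 block
for i = 1:M-1
    for j = 1:N-1
        b = cellHists(i:i+1, j:j+1, :);          % 2x2 neighbourhood of cells
        v = b(:);                                 % concatenate the 4 histograms
        v = v / sqrt(sum(v.^2) + 1e-6);           % L2 block normalization
        blocks(i, j, :) = reshape(v, 1, 1, []);
    end
end
```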
The Matlab code accompanying the seminal work by Felzenszwalb et al. visualizes the cells by painting them over an image, where the strength of each orientation is encoded by the intensity of the drawn edge rather than by its length. You can find it in the package they provide here (DPM); look for a function named HOGpicture.m.
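If you don't want to pull in the whole DPM package, the following is a simplified sketch in the spirit of HOGpicture.m (not a verbatim copy; the function name and parameters are my own): each orientation bin is rendered as an oriented bar, and the weighted bars are summed, so intensity rather than length encodes the bin strength. It needs the Image Processing Toolbox for imrotate:

```matlab
function im = hogPictureSketch(w, bs)
% w  : M-by-N-by-K array of cell histograms
% bs : rendered size of one cell in pixels, e.g. 20
[M, N, K] = size(w);
barIm = zeros(bs, bs);
barIm(:, round(bs/2):round(bs/2)+1) = 1;        % vertical bar = 0-degree edge
bars = zeros(bs, bs, K);
for k = 1:K
    bars(:, :, k) = imrotate(barIm, -(k-1) * 180 / K, 'crop');
end
im = zeros(bs * M, bs * N);
for i = 1:M
    for j = 1:N
        rows = (i-1)*bs+1 : i*bs;
        cols = (j-1)*bs+1 : j*bs;
        for k = 1:K
            % intensity (not length) encodes the bin strength
            im(rows, cols) = im(rows, cols) + bars(:, :, k) * max(w(i, j, k), 0);
        end
    end
end
end
```

Save it as hogPictureSketch.m and try something like imagesc(hogPictureSketch(rand(7, 11, 8), 20)); colormap gray; axis image off; to see the effect.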
The example below shows a model of a bike (from Felzenszwalb et al.) whose HOG descriptor consists of 7*11 cells, each with 8 orientations.