My lecture notes on computer vision mention that the performance of the k-means clustering algorithm can be improved if we know the standard deviation of the clusters. How so?
My thinking is that we could use the standard deviations to come up with a better initial estimate of the cluster centres, e.g. through histogram-based segmentation first. What do you think? Thanks for any help!
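To make the idea concrete, here is a rough 1-D sketch of what I have in mind (the bin count and the peak-prominence threshold are just values I picked, nothing principled):

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def histogram_init_1d(x, n_bins=50):
    """Pick initial k-means centres from the peaks of a 1-D histogram."""
    counts, edges = np.histogram(x, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])               # bin midpoints
    peaks, _ = find_peaks(counts, prominence=counts.max() * 0.1)
    return centres[peaks]

# toy data: two well-separated 1-D clusters
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 300)])

init = histogram_init_1d(x).reshape(-1, 1)                 # one centre per peak
km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(x.reshape(-1, 1))
print(km.cluster_centers_.ravel())
```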
Your lecturer might have the 2002 paper by Veenman et al. ("A Maximum Variance Cluster Algorithm") in mind. The basic idea is that you set the maximum variance you allow in each cluster. You start with as many clusters as data points and then you "evolve" clusters by

- isolating the point furthest from the centroid of any cluster whose variance exceeds the bound,
- merging neighbouring clusters whenever the joint variance stays below the bound,
- moving individual points between neighbouring clusters when that lowers the overall squared-error criterion.
(This evolution acts as a global optimization procedure and avoids the bad consequences of the initial assignment of cluster means that you get in plain k-means.)
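This is nothing like a faithful implementation of Veenman et al.'s algorithm (which also uses a local neighbourhood structure and a perturbation step), just a crude sketch of the merge step to show how a known variance bound can drive the clustering on its own; `sigma_max` and the toy data are made up:

```python
import numpy as np

def max_variance_merge(X, sigma_max):
    """Greedy sketch: start with singleton clusters and keep merging the pair
    whose joint variance is smallest, as long as it stays below sigma_max**2."""
    clusters = [[i] for i in range(len(X))]      # one cluster per data point
    var_bound = sigma_max ** 2

    def joint_variance(idx):
        # average squared distance to the cluster centroid
        pts = X[idx]
        return np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))

    merged = True
    while merged:
        merged = False
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                v = joint_variance(clusters[a] + clusters[b])
                if v <= var_bound and (best is None or v < best[0]):
                    best = (v, a, b)
        if best is not None:
            _, a, b = best
            clusters[a] = clusters[a] + clusters[b]
            del clusters[b]                      # b > a, so index a is unaffected
            merged = True
    return clusters

# toy example: three blobs with known within-cluster sigma of about 0.5
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, size=(15, 2)) for c in ([0, 0], [4, 0], [0, 4])])
print([len(c) for c in max_variance_merge(X, sigma_max=1.0)])   # three clusters of 15
```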
To sum up, if you know the variance, you know how spread out each cluster should be, so it becomes easier to, e.g., detect outliers (which usually should be put into their own clusters).
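As a quick illustration of the outlier point (the 3-sigma cutoff below is just a rule of thumb, not something from the paper): run plain k-means, then flag points that sit much farther from their centroid than the known spread allows.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),     # inliers, known sigma = 0.5
    rng.normal([5, 5], 0.5, size=(100, 2)),
    [[2.5, 2.5]],                               # a stray point between the clusters
])

sigma = 0.5                                     # assumed known per-axis std deviation
km = KMeans(n_clusters=2, n_init=10).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# flag points far beyond the expected within-cluster spread
outliers = np.where(dist > 3 * sigma * np.sqrt(X.shape[1]))[0]
print(outliers)                                 # should report the stray point (index 200)
```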