I hear from some sources that generative adversarial networks (GANs) are unsupervised ML, but I don't get it. Are GANs not in fact supervised?
1) Two-class case: real vs. fake
Indeed, one has to supply training data to the discriminator, and this has to be "real" data, i.e. data that I would label with e.g. 1. Even though one doesn't label the data explicitly, one does so implicitly by presenting the discriminator in the first steps with training data that it is told is authentic. In that way one effectively gives the discriminator a labeling of the training data, and, conversely, a labeling of the noise data produced by the generator in its first steps, which is known to be unauthentic.
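To make what I mean by "implicit labeling" concrete, here is a minimal sketch of one discriminator update (PyTorch, with made-up toy architectures for `G` and `D`); the "labels" are just the constants 1 for real samples and 0 for generated samples, implied by which batch a sample came from:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator (any architectures with matching shapes would do).
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

real_batch = torch.randn(64, 2)      # stand-in for a batch of real (unlabeled) data
noise = torch.randn(64, 16)
fake_batch = G(noise).detach()       # generated samples; no gradient through G here

# The "labels" are not part of the dataset -- they come from which batch
# the samples belong to: real -> 1, generated -> 0.
real_labels = torch.ones(64, 1)
fake_labels = torch.zeros(64, 1)

loss = loss_fn(D(real_batch), real_labels) + loss_fn(D(fake_batch), fake_labels)
opt_D.zero_grad()
loss.backward()
opt_D.step()
```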
2) Multi-class case
But it gets really strange in the multi-class case. One has to supply class descriptions with the training data. The obvious contradiction is that one supplies a response variable to a supposedly unsupervised ML algorithm.
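For illustration, this is roughly what I mean on the discriminator side in the multi-class (conditional) setup; a minimal sketch, assuming the class description is simply a one-hot label concatenated to the input (the shapes and model are made up):

```python
import torch
import torch.nn as nn

n_classes = 10
# Hypothetical conditional discriminator: it scores a sample as real/fake
# given the class description, which is appended to the input.
D_cond = nn.Sequential(nn.Linear(2 + n_classes, 32), nn.ReLU(),
                       nn.Linear(32, 1), nn.Sigmoid())

x = torch.randn(64, 2)                            # batch of samples
y = torch.randint(0, n_classes, (64,))            # class "descriptions"
y_onehot = nn.functional.one_hot(y, n_classes).float()

score = D_cond(torch.cat([x, y_onehot], dim=1))   # real/fake score, conditioned on the class
```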
GANs are unsupervised learning algorithms that use a supervised loss as part of the training. The latter appears to be where you are getting hung up.
When we talk about supervised learning, we are usually talking about learning to predict a label associated with the data. The goal is for the model to generalize to new data.
In the GAN case, you don't have either of these components. The data comes in with no labels, and we are not trying to generalize any kind of prediction to new data. The goal is for the GAN to model what the data looks like (i.e., density estimation), and be able to generate new examples of what it has learned.
The GAN sets up a supervised learning problem in order to do unsupervised learning: it generates fake/random-looking data and tries to determine whether a sample is generated fake data or real data. This is a supervised component, yes. But it is not the goal of the GAN, and the labels are trivial.
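Concretely, the supervised part is just the binary real-vs-fake loss inside the original GAN objective (Goodfellow et al., 2014); the labels are implied by which distribution a sample came from, never by annotations in the dataset:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$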
The idea of using a supervised component for an unsupervised task is not particularly new. Random Forests have done this for a long time for outlier detection (also trained on random data vs. real data), and the One-Class SVM is technically trained in a supervised fashion, with the original data as the real class and a single point at the origin of the space (i.e., the zero vector) treated as the outlier class.
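As a sketch of that same real-vs-random trick outside of GANs (using scikit-learn and synthetic data made up for the example, not any particular reference implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_real = rng.normal(size=(500, 5))   # stand-in for real, unlabeled data

# Synthetic "fake" class: sample each feature uniformly over the real data's range.
X_fake = rng.uniform(X_real.min(axis=0), X_real.max(axis=0), size=X_real.shape)

X = np.vstack([X_real, X_fake])
y = np.concatenate([np.ones(len(X_real)), np.zeros(len(X_fake))])  # trivial labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Real points that the forest cannot distinguish from random noise
# (low predicted probability of being "real") are candidate outliers.
real_scores = clf.predict_proba(X_real)[:, 1]
outliers = X_real[real_scores < 0.5]
```

The labels here play the same role as in the GAN: they encode "came from the data" vs. "was made up", not anything annotated by a human.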