In some libraries (e.g. machine learning frameworks such as PyTorch), we can find a log_prob
function. What does it do, and how is it different from just taking the regular log?
For example, what is the purpose of this code:
from torch.distributions import Normal

dist = Normal(mean, std)          # mean and std are tensors (or floats)
sample = dist.sample()
logprob = dist.log_prob(sample)
And, as a follow-up, why would we first compute the log-probability and then exponentiate the result instead of evaluating the probability directly:
prob = torch.exp(dist.log_prob(sample))
Part of the answer is that log_prob
returns the log of the probability density/mass function evaluated at the given sample value.
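To make that concrete, here is a minimal sketch (assuming PyTorch's torch.distributions.Normal, with a standard normal of mean 0 and std 1 chosen purely for illustration) showing that log_prob(x) agrees with the log of the Gaussian density computed by hand, and that exponentiating it recovers the density value:

import math
import torch
from torch.distributions import Normal

mean, std = torch.tensor(0.0), torch.tensor(1.0)  # illustrative values
dist = Normal(mean, std)

x = dist.sample()

# log_prob(x) is the log of the Gaussian density evaluated at x
log_p = dist.log_prob(x)

# The same density computed by hand from the normal pdf formula
manual_density = torch.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

print(torch.allclose(log_p, torch.log(manual_density)))   # True (up to float error)

# Exponentiating the log-probability recovers the density itself
print(torch.allclose(torch.exp(log_p), manual_density))   # True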