Interpreting Scipy.stats.entropy Values
Solution 1:
numpy.random.normal(-2.5, 0.1, 1000) is a sample from a normal distribution. It's just 1000 numbers in a random order. The documentation for entropy says:

pk[i] is the (possibly unnormalized) probability of event i.
So to get a meaningful result, you need the numbers to be "aligned" so that the same indices correspond to the same positions in the distribution. In your example, t1[0] has no relationship to t2[0]. Your sample doesn't provide any direct information about how probable each value is, which is what you need for the KL divergence; it just gives you some actual values drawn from the distribution.
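If you did want to start from the samples themselves, one rough sketch (not part of the original answer) is to bin both samples on the same edges, so that index i refers to the same interval in each histogram, and pass the aligned counts to stats.entropy. The variable names, bin range, bin count, and add-one smoothing below are illustrative choices of mine:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s1 = rng.normal(-2.5, 0.1, 100_000)   # samples, analogous to your t1
s2 = rng.normal(-2.4, 0.1, 100_000)   # samples, analogous to your t3

# Shared bin edges: index i now means the same interval in both histograms.
edges = np.linspace(-3.0, -2.0, 51)
p, _ = np.histogram(s1, bins=edges)
q, _ = np.histogram(s2, bins=edges)

# stats.entropy normalizes the counts; add-one smoothing avoids inf when a
# q-bin is empty at an index where p is not.
print(stats.entropy(p + 1, q + 1))    # rough KL estimate, near 0.5 for these parameters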
The most straightforward way to get aligned values is to evaluate each distribution's probability density function at a fixed set of points. To do this, use scipy.stats.norm (which returns a distribution object that can be manipulated in various ways) instead of np.random.normal (which only returns sampled values). Here's an example:
import numpy as np
from scipy import stats

t1 = stats.norm(-2.5, 0.1)
t2 = stats.norm(-2.5, 0.1)
t3 = stats.norm(-2.4, 0.1)
t4 = stats.norm(-2.3, 0.1)

# domain to evaluate the PDFs on
x = np.linspace(-5, 5, 100)
Then:
>>> stats.entropy(t1.pdf(x), t2.pdf(x))
-0.0
>>> stats.entropy(t1.pdf(x), t3.pdf(x))
0.49999995020647586
>>> stats.entropy(t1.pdf(x), t4.pdf(x))
1.999999900414918
You can see that as the distributions move further apart, their KL divergence increases. (In fact, your second example will give a KL divergence of inf, because the distributions overlap so little that one PDF is effectively zero at points where the other is not.)
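As a sanity check (this closed-form result isn't in the original answer, but it is a standard formula, and the helper name below is mine): for two normal distributions with the same standard deviation sigma, the KL divergence in nats is (mu1 - mu2)**2 / (2 * sigma**2), which gives 0.5 for t1 vs t3 and 2.0 for t1 vs t4, matching the values above up to discretization error.

# Closed-form KL divergence for two normals with equal standard deviation
# (illustrative helper, not a scipy function).
def kl_normal_equal_sigma(mu1, mu2, sigma):
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2)

print(kl_normal_equal_sigma(-2.5, -2.4, 0.1))  # 0.5, matches stats.entropy(t1.pdf(x), t3.pdf(x))
print(kl_normal_equal_sigma(-2.5, -2.3, 0.1))  # 2.0, matches stats.entropy(t1.pdf(x), t4.pdf(x))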