Fisher Information / Expected Information: Definition

Posted by 李锐博恩 on 2021/07/15

What is Fisher Information?

Fisher information tells us how much information about an unknown parameter we can get from a sample. In other words, it tells us how well we can measure a parameter, given a certain amount of data. More formally, it measures the expected amount of information that a random variable X carries about a parameter of interest, Θ. The concept is related to the law of entropy, as both are ways to measure disorder in a system (Friedan, 1998).

Uses include:

  • Describing the asymptotic behavior of maximum likelihood estimates.
  • Bounding the variance of an estimator (via the Cramér–Rao lower bound).
  • Finding default priors in Bayesian inference and measuring model complexity in MDL (see Other Uses below).

Finding the Fisher Information

Finding the expected amount of information requires calculus. Specifically, a good grasp of differentiation and integration is required if you want to derive the information for a distribution yourself.

The amount of information contained in a random variable X can be calculated in three equivalent ways:

  1. As the expected square of the score (the derivative of the log-likelihood):
     I(Θ) = E[ (∂/∂Θ ln f(X; Θ))² ]

  2. This can be rewritten (if you change the order of integration and differentiation, which shows the score has mean zero) as:
     I(Θ) = Var( ∂/∂Θ ln f(X; Θ) )

  3. Or, put another way:
     I(Θ) = −E[ ∂²/∂Θ² ln f(X; Θ) ]


The bottom equation is usually the most practical. However, you may not have to use calculus at all, because the expected information has already been calculated for a wide range of distributions. For example:

  • Ly et al. (and many others) state that the expected amount of information in a Bernoulli distribution is:
    I(Θ) = 1 / [Θ(1 – Θ)].
  • For mixture distributions, trying to find information can “become quite difficult” (Wallis, 2005). If you have a mixture model, Wallis’s book Statistical and Inductive Inference by Minimum Message Length gives an excellent rundown on the problems you might expect.
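The three expressions above (and the Bernoulli closed form) are easy to verify numerically. The sketch below, my own illustration rather than anything from the article, computes all three for a Bernoulli(Θ) distribution, where each expectation reduces to a two-term sum over x ∈ {0, 1}:

```python
# Numerical check that the three Fisher information formulas agree
# for a Bernoulli(theta) distribution.

def bernoulli_fisher_three_ways(theta):
    # pmf: f(x; theta) = theta^x * (1 - theta)^(1 - x), for x in {0, 1}
    f = {0: 1 - theta, 1: theta}
    # score: d/dtheta ln f(x; theta)
    score = {0: -1 / (1 - theta), 1: 1 / theta}
    # second derivative: d^2/dtheta^2 ln f(x; theta)
    d2 = {0: -1 / (1 - theta) ** 2, 1: -1 / theta ** 2}

    e_score_sq = sum(f[x] * score[x] ** 2 for x in (0, 1))   # E[(score)^2]
    mean = sum(f[x] * score[x] for x in (0, 1))              # E[score], equals 0
    var_score = sum(f[x] * (score[x] - mean) ** 2 for x in (0, 1))
    neg_e_d2 = -sum(f[x] * d2[x] for x in (0, 1))            # -E[d^2 ln f]
    return e_score_sq, var_score, neg_e_d2

theta = 0.3
a, b, c = bernoulli_fisher_three_ways(theta)
print(a, b, c, 1 / (theta * (1 - theta)))  # all four ≈ 4.7619
```

All three computed values match the closed form 1/[Θ(1 – Θ)], which is exactly why you can often skip the calculus and look the answer up.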

If you’re trying to find expected information, try an Internet or scholarly database search first: the solution for many common distributions (and many uncommon ones) is probably out there.

Example

Find the Fisher information for X ~ N(μ, σ²). The parameter, μ, is unknown.
Solution:
For −∞ < x < ∞, the density and log-density are:
f(x; μ) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)), so ln f(x; μ) = −ln(σ√(2π)) − (x − μ)² / (2σ²).


First and second derivatives with respect to μ are:
∂/∂μ ln f(x; μ) = (x − μ) / σ²  and  ∂²/∂μ² ln f(x; μ) = −1/σ².


So the Fisher information is:
I(μ) = −E[ ∂²/∂μ² ln f(X; μ) ] = −E[ −1/σ² ] = 1/σ².
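As a quick Monte Carlo sanity check (my own sketch, not part of the original solution), the first formula, E[(∂/∂μ ln f)²], estimated from simulated draws, should land near the analytic answer 1/σ²:

```python
import random

random.seed(42)
mu, sigma = 2.0, 1.5
n = 100_000

# Draw from N(mu, sigma^2) and average the squared score,
# where the score is d/dmu ln f(x; mu) = (x - mu) / sigma**2.
samples = [random.gauss(mu, sigma) for _ in range(n)]
est = sum(((x - mu) / sigma**2) ** 2 for x in samples) / n

# The analytic Fisher information is 1 / sigma**2 ≈ 0.444.
print(f"Monte Carlo: {est:.4f}, analytic: {1 / sigma**2:.4f}")
```

The second-derivative form is even simpler here: ∂²/∂μ² ln f = −1/σ² is a constant, so its negative expectation is exactly 1/σ² with no simulation needed, which is why the bottom equation is usually the most practical.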

Other Uses

Fisher information is used for slightly different purposes in Bayesian statistics and Minimum Description Length (MDL):

  1. Bayesian statistics: provides a default (Jeffreys) prior for a parameter.
  2. Minimum description length (MDL): measures the complexity of different models.
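For instance, the Jeffreys prior built from the Bernoulli information above is π(Θ) ∝ √I(Θ) = 1/√(Θ(1 − Θ)), which is the Beta(1/2, 1/2) distribution. The sketch below (my own illustration, not from the article) checks that its normalizing constant is B(1/2, 1/2) = π, once via the gamma function and once by direct numerical integration:

```python
import math

# Jeffreys prior for Bernoulli(theta): pi(theta) ∝ sqrt(I(theta))
#   = 1 / sqrt(theta * (1 - theta)),  i.e. a Beta(1/2, 1/2) density.
# Its normalizing constant is B(1/2, 1/2) = Gamma(1/2)^2 / Gamma(1) = pi.
beta_const = math.gamma(0.5) ** 2 / math.gamma(1.0)

# Midpoint-rule integral of sqrt(I(theta)) over (0, 1); the endpoint
# singularities are integrable, so the rule converges.
n = 200_000
h = 1.0 / n
integral = sum(h / math.sqrt(t * (1 - t))
               for t in (h * (i + 0.5) for i in range(n)))

print(beta_const, integral)  # both ≈ pi
```

This is the sense in which Fisher information "finds a default prior": the square root of I(Θ) defines a prior that is invariant under reparameterization.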

Source: reborn.blog.csdn.net, author: 李锐博恩.

Original link: reborn.blog.csdn.net/article/details/83863337
