Fano's inequality
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook. It is used to find a lower bound on the error probability of any decoder, as well as lower bounds for minimax risks in density estimation. Let the random variables X and Y represent input and output messages with a joint probability P(x, y), and let X̂ = f(Y) be an estimate of X computed from Y. Fano's inequality states that

H(X | Y) ≤ H_b(P_e) + P_e log(|𝒳| − 1),

where 𝒳 denotes the support of X, H(X | Y) is the conditional entropy of X given Y, P_e = P(X ≠ X̂) is the probability of the communication error, and H_b(P_e) = −P_e log P_e − (1 − P_e) log(1 − P_e) is the corresponding binary entropy.
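The bound can be checked numerically. The following is a minimal sketch in Python (the function names are illustrative, not from any library); it evaluates the right-hand side of Fano's inequality in bits, and the weakened lower bound P_e ≥ (H(X|Y) − 1) / log₂(|𝒳| − 1) obtained by using H_b(P_e) ≤ 1:

```python
import math

def fano_rhs(p_e, support_size):
    """Right-hand side of Fano's inequality in bits:
    H_b(P_e) + P_e * log2(|X| - 1)."""
    h_b = 0.0
    if 0 < p_e < 1:
        # Binary entropy of the error event.
        h_b = -p_e * math.log2(p_e) - (1 - p_e) * math.log2(1 - p_e)
    return h_b + p_e * math.log2(support_size - 1)

def fano_lower_bound(h_cond, support_size):
    """Weakened Fano bound on the error probability:
    P_e >= (H(X|Y) - 1) / log2(|X| - 1), clipped at 0."""
    return max(0.0, (h_cond - 1) / math.log2(support_size - 1))

# Example: X uniform over 8 symbols and Y independent of X,
# so H(X|Y) = 3 bits; any decoder must then err with
# probability at least (3 - 1) / log2(7) ≈ 0.712.
print(fano_lower_bound(3.0, 8))
```

For a binary alphabet the log term vanishes and the inequality reduces to H(X | Y) ≤ H_b(P_e), which is why the weakened bound above is only stated for |𝒳| > 2.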