Frequentist probability

From Wikipedia, the free encyclopedia


[Image: John Venn]

The problems and paradoxes of the classical interpretation of probability motivated the development of the relative frequency concept of probability.

Most of the mathematics commonly used for statistical estimation and testing is developed by statisticians who use this concept exclusively. They are usually called frequentists, and their position is called frequentism. A statistician who uses traditional methods of inference is therefore referred to as a frequentist statistician. Frequentism is, by far, the most commonly held view among working statisticians, probability theorists, and physicists.

Frequentists talk about probabilities only when dealing with well-defined random experiments. The relative frequency with which an outcome occurs, as the experiment is repeated, is a measure of the probability of that random event.
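In symbols, the frequentist assigns to an event A the limiting relative frequency P(A) = n_A / n as the number of repetitions n grows without bound, where n_A counts the occurrences of A. The following sketch (not part of the original article; the function name and parameters are illustrative) simulates repeated tosses of a fair coin and shows the relative frequency of heads settling near 1/2:

    import random

    def relative_frequency(trials, p=0.5, seed=1):
        # Simulate `trials` independent Bernoulli(p) experiments and
        # return the fraction of successes -- the relative frequency.
        rng = random.Random(seed)
        successes = sum(rng.random() < p for _ in range(trials))
        return successes / trials

    # As the number of repetitions grows, the observed relative
    # frequency approaches the underlying probability p = 0.5.
    for n in (10, 100, 10_000, 1_000_000):
        print(n, relative_frequency(n))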

This school is often associated with the names of Jerzy Neyman and Egon Pearson who described the logic of statistical hypothesis testing. Other influential figures of the frequentist school include John Venn, R.A. Fisher, and Richard von Mises.


References

  • P. W. Bridgman, The Logic of Modern Physics, 1927
  • Alonzo Church, On the Concept of a Random Sequence, 1940
  • Harald Cramér, Mathematical Methods of Statistics, 1946
  • Per Martin-Löf, The Definition of Random Sequences, 1966
  • Richard von Mises, Probability, Statistics, and Truth, 1939 (German original 1928)
  • Jerzy Neyman, First Course in Probability and Statistics, 1950
  • Hans Reichenbach, The Theory of Probability, 1949 (German original 1935)
  • Bertrand Russell, Human Knowledge, 1948
  • John Venn, The Logic of Chance, 1866