In this chapter, we introduce the elements of probability theory. In the spirit of the book, we confine ourselves to the discrete case, that is, probabilities on finite domains, leaving aside the infinite case.
We begin by defining probability functions on a finite sample space and identifying some of their basic properties. So much is simple mathematics. This is followed by some words on different philosophies of probability, and warnings of traps that arise in applications. Then back to the mathematical work, introducing the concept of conditional probability and setting out its properties, its connections with independence and its use in Bayes’ theorem. An interlude presents the curious configurations known as Simpson’s paradox. In the final section, we explain the notions of a payoff function and expected value.
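The chapter's own worked examples are not reproduced here, but the core notions just listed can be sketched in a few lines. The following is a minimal illustration, not taken from the book: it assumes a fair die as the finite sample space and treats events as subsets, showing conditional probability, Bayes' theorem, and the expected value of a payoff function.

```python
from fractions import Fraction

# Finite sample space: the six faces of a fair die, probabilities summing to 1.
P = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

def prob(event):
    """Probability of an event (a subset of the sample space)."""
    return sum(P[w] for w in event)

A = {2, 4, 6}  # "even"
B = {4, 5, 6}  # "at least 4"

def cond(X, Y):
    """Conditional probability P(X | Y) = P(X ∩ Y) / P(Y)."""
    return prob(X & Y) / prob(Y)

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B).
lhs = cond(A, B)
rhs = cond(B, A) * prob(A) / prob(B)
# Both sides equal 2/3 for this choice of A and B.

# Expected value of a payoff function f: sum of P(w) * f(w).
f = {w: w for w in P}  # hypothetical payoff: the face value itself
E = sum(P[w] * f[w] for w in P)  # 7/2
```

Using `Fraction` keeps the arithmetic exact, which matches the discrete, finite-domain spirit of the chapter.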
Introductory texts on discrete mathematics tend to say rather little about probability. However, the following two cover more or less the same ground as this chapter:
Lipschutz S, Lipson M (1997) Discrete mathematics, 2nd edn, Schaum’s outline series. McGraw Hill, New York, chapter 7
Rosen K (2007) Discrete mathematics and its applications, 6th edn. McGraw Hill, London, chapter 6
For those wishing to go further and include the infinite case, the authors of the first text have also written the following:
Lipschutz S, Lipson M (2000) Probability, 2nd edn, Schaum’s outline series. McGraw Hill, New York
For all of the above, you may find it useful to consult Table 6.1 to keep track of the more traditional terminology and notation that is often used.
Finally, if you had fun with the interlude on Simpson’s paradox, you will enjoy the Monty Hall problem. See the Wikipedia entry on it, and if you can’t stop thinking about it, continue with, say:
Rosenhouse J (2009) The Monty Hall problem. Oxford University Press, Oxford
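If the verbal arguments about Monty Hall leave you unconvinced, a quick simulation can settle the matter empirically. This sketch (my own, not from any of the texts above) assumes the standard rules: the host always opens a door that hides a goat and is not the contestant's pick.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the car is won."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
stay_rate = sum(monty_hall_trial(False) for _ in range(n)) / n
switch_rate = sum(monty_hall_trial(True) for _ in range(n)) / n
# stay_rate is close to 1/3, switch_rate close to 2/3.
```

The simulated frequencies agree with the conditional-probability analysis: switching wins about twice as often as staying.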
Weighing the Odds: Probability
Springer London