Submitted by Reason-Local t3_11de5ag in explainlikeimfive
breckenridgeback t1_ja8ol32 wrote
Reply to comment by FellowConspirator in ELI5: why does/doesn’t probability increase when done multiple times? by Reason-Local
> There's nothing connecting one roll to the next; they are completely independent.
...under the assumption of a completely fair die. (An assumption you are usually making in a statistics class.)
In practice, though, the fairness of the die may be in doubt in many real-world scenarios.
EspritFort t1_ja9if1k wrote
>>There's nothing connecting one roll to the next; they are completely independent.
> ...under the assumption of a completely fair die. (An assumption you are usually making in a statistics class.)
>
> In practice, though, the fairness of the die may be in doubt in many real-world scenarios.
You may have quoted the wrong passage there, because even without a fair die it still holds true. Whether the die is weighted towards a 6 or not, the individual rolls are still independent from each other, merely the probabilities of the outcomes are different.
breckenridgeback t1_ja9jh3u wrote
> Whether the die is weighted towards a 6 or not, the individual rolls are still independent from each other, merely the probabilities of the outcomes are different.
The rolls are, but provided you have any uncertainty about the underlying probabilities, your beliefs about those rolls (and your expectations about future rolls, which are the same thing) should update with each roll.
For a simple example, imagine I have two coins. One is loaded to always land heads, the other is fair. I pull one of the two from a box at random, and I do not know which I pulled. I want to estimate the probability of my next flip being heads. It's 75% in this case (50% to be loaded * 100% if it's loaded + 50% to be fair * 50% if it's fair).
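That 75% figure is easy to check by simulation. Here's a minimal Monte Carlo sketch of the scenario described above (my own illustration, not part of the original comment): draw one of the two coins at random, flip it once, and repeat many times.

```python
import random

random.seed(0)

# Simulate: pick the loaded or fair coin with equal probability, flip once.
trials = 1_000_000
heads = 0
for _ in range(trials):
    loaded = random.random() < 0.5       # 50% chance we grabbed the loaded coin
    p_heads = 1.0 if loaded else 0.5     # loaded coin always lands heads
    if random.random() < p_heads:
        heads += 1

print(heads / trials)  # should be close to 0.75
```

The empirical frequency converges on 0.5 * 1 + 0.5 * 0.5 = 0.75, matching the calculation above.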
I flip the coin, and it lands heads. This is evidence in favor of me having the biased coin. Specifically, I should update my probability that the coin is biased (using Bayes' rule) to:
P(loaded | heads) = P(loaded and heads) / P(heads) = 0.5 / 0.75 = 2/3.
Now I want to estimate the probability of the next flip. There is now a 2-in-3 chance (or more properly, that is my correct Bayesian estimate of that probability) that I am holding the loaded coin, so the probability of the next flip being heads is 5/6 (it's 2/3 * 1 + 1/3 * 1/2 = 2/3 + 1/6 = 5/6). This is not equal to my original 3/4, even though the flips themselves are IID given which coin I'm holding, because their underlying distribution depends on an unknown parameter about which I am gaining information.
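The whole chain of arithmetic above can be reproduced exactly with rational numbers. A short sketch of that calculation (using Python's standard `fractions` module; the variable names are my own):

```python
from fractions import Fraction

p_loaded = Fraction(1, 2)            # prior: which coin did I grab?
p_heads_if_loaded = Fraction(1)      # loaded coin always lands heads
p_heads_if_fair = Fraction(1, 2)     # fair coin lands heads half the time

# Prior predictive probability of heads on the first flip.
p_heads = p_loaded * p_heads_if_loaded + (1 - p_loaded) * p_heads_if_fair
print(p_heads)        # 3/4

# Observe heads; update the posterior via Bayes' rule.
p_loaded_post = p_loaded * p_heads_if_loaded / p_heads
print(p_loaded_post)  # 2/3

# Posterior predictive probability of heads on the next flip.
p_next = p_loaded_post * p_heads_if_loaded + (1 - p_loaded_post) * p_heads_if_fair
print(p_next)         # 5/6
```

Each printed value matches the corresponding step in the comment: 3/4 before the flip, 2/3 posterior on holding the loaded coin, 5/6 for the next flip.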