
caritatsjq

2022-10-13

Many people one try vs one person many tries - is there a name for this?
Suppose we have two situations:
- (A) 500 people each throw a six-sided die once; each person's points are the result of their own throw.
- (B) One person throws a six-sided die 500 times; that person's points are the average of the 500 throws.
The expected value in A (over all 500 people) is the same as in B (3.5), but in my opinion these are obviously very different situations from the statistical perspective of the single person.
Is there a name to differentiate between those scenarios?
For background:
I had a heated dispute over whether it's better (as in aiming for higher points) for a single person to throw the die once or 500 times (no choice in between, though that would be interesting, too).
I was of the opinion that the 500 throws as a single person are quantitatively beneficial, as you are all but guaranteed close to 3.5 points, while a single throw depends strongly on luck.
I guess this touches on the topic of max-min and min-max. In any case, I noticed that I lack a formal description/understanding of this particular situation, and would love some help on that front!
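
For concreteness, here is a minimal Python sketch of the two scenarios (the random seed is just an illustrative choice):

```python
import random

random.seed(0)  # illustrative seed, for reproducibility

# Scenario A: 500 people each throw the die once; each keeps their own result.
scores_a = [random.randint(1, 6) for _ in range(500)]

# Scenario B: one person throws the die 500 times; the points are the average.
score_b = sum(random.randint(1, 6) for _ in range(500)) / 500

print(min(scores_a), max(scores_a))  # individual results still span 1..6
print(score_b)                       # the averaged result lands very close to 3.5
```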

Answer & Explanation

hanfydded1c

2022-10-14

Step 1
To make things simpler, consider instead a coin toss, with 0 points for heads and 1 point for tails. Suppose that you have an increasing utility function u over points.
Then if you toss the coin once, your expected utility is
$$U_1 = \tfrac{1}{2}\,u(0) + \tfrac{1}{2}\,u(1).$$
If you toss the coin twice and are awarded the average points over the two coin tosses, then your expected utility is
$$U_2 = \tfrac{1}{4}\,u(0) + \tfrac{1}{2}\,u(1/2) + \tfrac{1}{4}\,u(1).$$
Step 2
Your expected utility is higher from two coin tosses if
$$U_2 - U_1 = \tfrac{1}{2}\,u(1/2) - \tfrac{1}{4}\left[u(0) + u(1)\right] > 0.$$
That is, if
$$u(1/2) > \tfrac{1}{2}\left[u(0) + u(1)\right].$$
This will hold if u is strictly concave. In economics, a person with a strictly concave utility function is said to be risk averse. In this example the option of two coin tosses is less risky because the distribution of points from two coin tosses second-order stochastically dominates the distribution of points from one coin toss.
[Note that the preference for one coin toss or two in this example boils down to whether someone prefers to get the expected value of 1/2 for sure or to take the points from a single coin toss. Risk averse people always prefer a sure thing to a risky alternative with the same expected value, whereas risk neutral people are indifferent.]
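
As a quick numerical illustration of the inequality above, here is a small Python sketch; u(x) = √x is just one example of a strictly concave, increasing utility, not a choice fixed by the argument:

```python
import math

# u(x) = sqrt(x) is an arbitrary example of a strictly concave,
# increasing utility function (my choice, not from the answer above).
def u(x):
    return math.sqrt(x)

# One toss: 0 or 1 point, each with probability 1/2.
U1 = 0.5 * u(0) + 0.5 * u(1)

# Two tosses, points averaged: 0, 1/2, or 1 with probabilities 1/4, 1/2, 1/4.
U2 = 0.25 * u(0) + 0.5 * u(0.5) + 0.25 * u(1)

print(U1)  # 0.5
print(U2)  # ~0.604: U2 > U1, so the risk-averse agent prefers two tosses
```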
Kayla Mcdowell

2022-10-15

Step 1
While I don't know that I can give you a name that is sufficient for this purpose, if you're willing to dive a bit deeper into probability theory, I can think of a mathematical concept which should work: the standard deviation of the distribution.
The point is that the standard deviation of one die roll is rather high, whereas the standard deviation of the average of 500 die rolls is really tiny, very close to zero.
The value of the standard deviation is a number ≥ 0. To calculate it for one roll of a fair die, you take each possible outcome of that one die roll (the 6 values from 1 to 6), subtract 3.5, square the result, take the average, and then take the square root, to get
$$\sqrt{\frac{(1-3.5)^2 + (2-3.5)^2 + (3-3.5)^2 + (4-3.5)^2 + (5-3.5)^2 + (6-3.5)^2}{6}} \approx 1.71$$
(which is what I mean by "rather high" in this scenario).
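
Here is a short Python sketch of that recipe, using only the standard library:

```python
import math

# The recipe above, computed directly: subtract the mean from each outcome,
# square, average the squares, then take the square root.
outcomes = range(1, 7)
mean = sum(outcomes) / 6  # 3.5
sd_one_roll = math.sqrt(sum((x - mean) ** 2 for x in outcomes) / 6)
print(sd_one_roll)  # ~1.7078
```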
Step 2
To calculate the standard deviation of the average of 500 independent rolls of a fair die, one could follow the same principles, although that would be very tedious; there are some very nice rules for doing the calculation much more quickly. In particular, the standard deviation of the average of n independent rolls is the single-roll standard deviation divided by √n, so for 500 rolls it is about 1.71/√500 ≈ 0.076. This is what one learns in probability theory, and I'll leave it at that, other than to repeat what I said before: for this particular example, the standard deviation for 500 rolls is very close to zero.
In the two sides of the argument that you describe in your post, the mean is the same in either scenario, namely 3.5. However, it would be a mistake to say that the two scenarios are statistically identical: the standard deviation is an important statistic that matters in applications. For example, you can think of the standard deviation as a quantification of "risk", so given two probability distributions with the same mean, a risk-averse person will choose the one with the smaller standard deviation.
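
To make the "very close to zero" claim concrete, here is a small Python sketch of the shortcut rule (the standard deviation of the average of n independent rolls is σ/√n, a standard probability fact) together with a simulation sanity check; the repetition count and seed are illustrative choices:

```python
import math
import random

# Single-roll standard deviation, as computed in Step 1.
sigma = math.sqrt(sum((x - 3.5) ** 2 for x in range(1, 7)) / 6)  # ~1.7078

# The shortcut rule: the standard deviation of the average of n
# independent rolls is sigma / sqrt(n).
print(sigma / math.sqrt(500))  # ~0.0764

# Sanity check by simulation: 10_000 repetitions of a 500-roll average.
random.seed(0)  # illustrative seed
avgs = [sum(random.randint(1, 6) for _ in range(500)) / 500
        for _ in range(10_000)]
m = sum(avgs) / len(avgs)
print(math.sqrt(sum((a - m) ** 2 for a in avgs) / len(avgs)))  # close to 0.076
```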
