Just a thought experiment. I’ve got a thousand people and I ask them all the following question:

“Choose a number between 0 and 100. The aim is to choose a number which is two-thirds of the average choice.”

A reasonable place to start would be to assume that everybody else randomly chooses a number anywhere between 0 and 100. Assuming a uniform distribution, the average of all the numbers chosen would be 50. Therefore you should choose two-thirds of that, which is 33.
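As a minimal sanity check of that first step (assuming everyone really does pick uniformly at random), a quick simulation with a thousand players should give an average near 50 and a two-thirds target near 33:

```python
import random

# Assumed model: 1000 players, each picking uniformly at random from 0-100.
random.seed(1)  # fixed seed so the run is repeatable
choices = [random.uniform(0, 100) for _ in range(1000)]

average = sum(choices) / len(choices)
target = 2 / 3 * average
print(round(average), round(target))  # close to 50 and 33
```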

What’s the possible flaw?

You’d want to choose 2/3 of 33, because everyone else will reason the same way and aim for 33 rather than 50. But then they’ll realise that too, so you’d have to choose 2/3 of 2/3 of 33, and so on. So the best answer would probably be 0.

I wouldn’t expect the average to be anything like 50. Last time I read anything about that, in the case of people choosing from 1..10, most people settled on 7, so there’s a fair chance of scaling-up or related phenomena. Goodness knows what effect talking about taking 2/3rds of something will have on the results.

And of course, the question whether 1000 people is enough for a selection out of 100 to be valid is beyond my statistics…

What else could be wrong? 🙂

You have a thousand people, so some numbers will carry more weight than others (as in, 2 people selecting “25” and 1 person selecting “24” makes 25 the more popular number). A simple arithmetic mean blends these counts together rather than telling you which number was most popular.
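The distinction the comment is drawing is between the mean and the mode. A tiny illustration of the “two people pick 25, one picks 24” example:

```python
from statistics import mean, mode

# Toy example from the comment: 25 is the most popular choice,
# but the mean sits between the two values.
choices = [25, 25, 24]
print(mean(choices))  # about 24.67
print(mode(choices))  # 25
```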

However… if you included these weights in the “uniformly distributed” statement… Then I’d have to go with shaun. Although, I would think that *most* people would select 50 as the average, and thus answer 33. Thus, the *true* answer would be 22 (not zero — because we’re talking about an average here, not an epsilon neighbourhood).
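Following that comment’s assumption through: if essentially everyone stops at one level of reasoning and answers 33, the average is 33 and the winning answer is two-thirds of that:

```python
# Assumed scenario: all 1000 players stop at one level of
# second-guessing and answer 33.
choices = [33] * 1000
average = sum(choices) / len(choices)
print(round(2 * average / 3))  # 22
```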