Correcting for Majority Voting without Using Confidence
Here is a unique voting mechanism (let’s call it “the prediction-normalized vote”) published in 2017 in the journal Nature, in the paper “A solution to the single-question crowd wisdom problem.”
Majority decisions are often considered “plausible,” but of course they can contain errors. In the paper, American students were asked, “Is Philadelphia the capital of Pennsylvania?” The majority answered “yes.”
The correct answer is no (the actual state capital is Harrisburg), so a simple majority vote failed to answer the question.
One way to adjust the majority vote is to ask participants an additional question: “How confident are you in your answer?”
If we assume that the more confident someone is in an answer, the more likely that answer is to be correct, this modification sounds reasonable.
However, according to the paper, those who answered “yes” and those who answered “no” were equally and fully confident in their answers; there was no difference in confidence levels. People who are wrong do not know they are wrong, so they are confident and wrong.
The authors added a twist: “What percentage of the other participants do you estimate will answer yes to this question?” This question asks you to compare your own opinion with what you believe others think. In other words, it adds a question about the expected degree of agreement between your view and everyone else’s.
Take the Philadelphia question again. A person who answers “yes” will naturally assume that others will also say “yes” (because he or she is confidently wrong), and will therefore give a high percentage on the agreement question. But those who correctly answer “no” know that the question is trickier than it sounds, so they give a lower percentage (because they expect most others to get it wrong).
According to the authors, combining the actual votes with the predicted votes yields an inequality: the average predicted vote share for the true answer will always be lower than its actual vote share, and this holds even when no one knows which answer is true.
They also note that this conclusion cannot be reached by majority voting or by voting with added confidence, but only by asking people to predict the votes of others.
Using this fact, we can derive a strategy: choose the answer whose actual vote share is higher than its predicted share, that is, the answer that turns out to be “surprisingly popular” (the surprisingly popular algorithm). (Alternatively, a prediction-normalized vote with such a correction is also possible.)
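As a sketch, the selection rule for the two-answer case can be implemented in a few lines. The function name, the yes/no setup, and all of the numbers below are my own hypothetical illustration, not data from the paper:

```python
def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual vote share exceeds its predicted share.

    votes: list of "yes"/"no" answers, one per participant.
    predictions: each participant's predicted fraction of "yes" votes.
    """
    actual_yes = votes.count("yes") / len(votes)
    predicted_yes = sum(predictions) / len(predictions)
    # "Surprise" of an option = actual vote share minus predicted share.
    surprise_yes = actual_yes - predicted_yes
    surprise_no = (1 - actual_yes) - (1 - predicted_yes)
    return "yes" if surprise_yes > surprise_no else "no"

# Hypothetical Philadelphia-style data: 65% vote "yes", but both camps
# predict that most people will say "yes", so the average predicted
# "yes" share (about 75%) exceeds the actual 65%. "no" is therefore
# the surprisingly popular answer.
votes = ["yes"] * 65 + ["no"] * 35
predictions = [0.80] * 65 + [0.65] * 35
print(surprisingly_popular(votes, predictions))  # prints "no"
```

The same idea extends to more than two options: compute the surprise for each option and pick the one with the largest gap between actual and predicted share.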
In the Philadelphia case, this strategy selects the knowledge held by the minority. And when no one has any special knowledge about the answer, the predictions simply track the votes and the method replicates plain majority voting, so its accuracy consistently matches or exceeds that of majority voting or confidence-weighted voting.
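The fallback behavior can be checked with an equally hypothetical sketch: when the average prediction roughly tracks the actual vote (nobody holds surprising knowledge), the surprisingly popular answer coincides with the majority answer. All numbers here are made up for illustration:

```python
# Hypothetical "no one knows" case: predictions sit close to the
# actual vote, so neither answer is much more popular than predicted.
actual_yes = 0.60        # 60% actually vote "yes"
predicted_yes = 0.58     # average predicted "yes" share, nearly the same
surprise_yes = actual_yes - predicted_yes             # +0.02
surprise_no = (1 - actual_yes) - (1 - predicted_yes)  # -0.02
winner = "yes" if surprise_yes > surprise_no else "no"
print(winner)  # prints "yes", the same as the plain majority vote
```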
In short, the prediction-normalized vote is a way of asking, “How much do you think others agree with you?” and thereby surfacing what is correct but not widely known. This updated form of collective intelligence produces decisions that are less likely to be wrong, even in cases where the majority vote errs.