What are we weighting for? A mechanistic model for probability weighting

It’s a common belief in areas such as psychology, economics or finance that people suffer from cognitive biases — systematic deviations from correct or rational behaviour when they make decisions. This idea traces back to the work of Daniel Kahneman and Amos Tversky, beginning in the late 1970s, who noted that subjects in experiments often made apparently poor statistical judgments. When engaged in wagers with monetary prizes, individuals seemed systematically to overestimate the likelihood of rare outcomes and to underestimate the probability of more common outcomes. People, it seemed, weight probabilities incorrectly.
Yet this conclusion has been challenged by many researchers, who question whether the behaviour of participants in such experiments really is less rational than the experimenters suppose. An experimenter may choose outcomes from some definite probability distribution, yet a subject doesn’t know this distribution and has to estimate it from a small sample of outcomes. Moreover, the subject may not fully understand or believe the experimenter’s explanation of how the experiment is being run. These and other influences create more uncertainty for the subject than for the experimenter, raising an important question: do people really tend to overestimate the likelihood of rare outcomes, or might their behaviour simply reflect the different perspectives of individuals with distinct knowledge?
In a recent paper, London Mathematical Laboratory researchers Ole Peters, Alexander Adamou, Mark Kirstein and Yonatan Berman examine this question, and suggest that there’s little reason to take the claims of cognitive bias at face value. Rather, they find that the classic results usually presented as evidence of overestimates of rare outcomes should arise generically even from small differences in uncertainty between an experimenter and subjects. The authors also demonstrate that this apparent cognitive bias vanishes when considered from the perspective of ergodicity economics, a recent effort to place decision theory and other aspects of economic analysis on a more natural foundation than that provided by expected utility theory.
A simple curve expresses the core empirical regularity behind claims that individuals systematically overestimate the likelihood of rare events. Suppose w(x) is the empirical weighting subjects give to a set of possible experimental outcomes x, and p(x) is the actual probability distribution from which the experimenter draws outcomes. One way to compare w(x) and p(x) is to consider the cumulative distribution functions for these two quantities, Fw(x) and Fp(x), giving the total probability (or weight) assigned to outcomes less than or equal to x. By eliminating x, one can plot Fw as a function of Fp to see how the subject’s empirical weightings compare with the experimenter’s assumed distribution. As presented in an early paper of Kahneman and Tversky, this curve has the distinctive shape of an inverted S (see figure below).
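As a toy sketch of this construction (the outcomes and weights below are invented for illustration, not Kahneman and Tversky’s data), one can compute the two cumulative functions and pair them outcome by outcome:

```python
# Toy illustration of the Fw-vs-Fp construction (numbers are invented,
# not data from Kahneman and Tversky's experiments).
import numpy as np

x = np.array([0, 10, 50, 100, 500])              # possible outcomes
p = np.array([0.05, 0.25, 0.40, 0.25, 0.05])     # experimenter's probabilities
w = np.array([0.09, 0.24, 0.34, 0.24, 0.09])     # subject's weights: rare extremes overweighted

F_p = np.cumsum(p)    # cumulative probability up to each outcome
F_w = np.cumsum(w)    # cumulative weight up to each outcome

# "Eliminating x": pair the cumulative values outcome by outcome.
for fp, fw in zip(F_p, F_w):
    print(f"F_p = {fp:.2f}  ->  F_w = {fw:.2f}")   # F_w sits above F_p at first, below it later
```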

In one region, Fw > Fp, the subjects overestimate probabilities, and in another region, Fw < Fp, they underestimate them. For Kahneman and Tversky, and many other researchers since, a curve with this qualitative shape is the signature of a behavioural tendency to overweight the probabilities of rare events and to underweight those of more common events.
Yet Peters and colleagues examine whether similar curves might arise quite generically, for reasons having nothing to do with irrational misperception of probabilities. For example, they consider what happens if the experimenter chooses p(x) to be an ordinary Gaussian distribution, centred on some most likely value and with some variance. Suppose the participant, for whatever reason, perceives some extra uncertainty in outcomes and weights them with a Gaussian centred on the same value but with a slightly higher variance. As shown below, this is enough to produce a curve of the very same generic inverted-S shape. The curves on the left depict the two cumulative distribution functions (red and blue for the narrower and wider distributions). The figure on the right shows the resulting inverted S-curve for the subject’s cumulative function Fw plotted as a function of the experimenter’s, Fp. Clearly, it takes only a little extra uncertainty in the subject’s distribution to generate a curve of this qualitative shape.
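For readers who want to reproduce this comparison, here is a minimal sketch; the particular widths (1.0 for the experimenter, 1.3 for the subject) are illustrative choices, not values taken from the paper.

```python
# Minimal sketch: a slightly wider Gaussian on the subject's side yields the inverted S.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 1001)            # common grid of outcomes
F_p = norm.cdf(x, loc=0, scale=1.0)     # experimenter's CDF (narrower Gaussian)
F_w = norm.cdf(x, loc=0, scale=1.3)     # subject's CDF (same centre, extra uncertainty)

# Eliminating x: plot the subject's cumulative weight against the experimenter's.
plt.plot(F_p, F_w, label="subject's Fw vs experimenter's Fp")
plt.plot([0, 1], [0, 1], "k--", label="no distortion (Fw = Fp)")
plt.xlabel("Fp")
plt.ylabel("Fw")
plt.legend()
plt.show()
```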
The authors also argue that some extra uncertainty on the part of an experimental subject is by no means unlikely, and can arise in many ways. The experimenter has full control over the experiment, chooses the decision problem and sets the probabilities. The subject, in contrast, knows less, has little or no control, may wonder about his or her full understanding of how things work, and may even doubt the experimenter’s true intentions. As a result, a subject may intuitively assume some extra uncertainty beyond that described by an experimenter.
Another influence is possibly even more important. As the authors point out, a key difference between the experimenter and subject is that the latter has no access to the “true” probability distribution p(x) from which the experimenter chooses outcomes. Instead, the subject has to guess about this distribution by relying on a finite set of outcomes generated in the experiment. Rare outcomes, of course, occur infrequently in a small sample of outcomes. Hence, if the subject wants to estimate p(x), he or she might sensibly try to correct for the limited data by supposing rare events to be more likely than the outcomes observed so far would imply. Yet an experimenter might misinterpret this as an irrational overweighting of rare outcomes.
Indeed, Peters and colleagues consider how a sophisticated subject ought to estimate the errors in his or her probability estimates due to limited data. An estimate of p(x) comes from counting the number of instances n(x) of outcomes falling in some small range around x. If this range has width ∆x, then from T samples an estimate of p(x) is Q = n(x)/(∆xT). But one also expects fluctuations: if the average count is n(x), then the fluctuations – assuming the counts are Poisson distributed – should be proportional to n(x)^(1/2). If the best estimate of p(x) is Q, the authors find the uncertainty in this estimate to be roughly (Q/(∆xT))^(1/2). This gets small as p(x) gets small, yet the error relative to p(x), which goes as (Q∆xT)^(-1/2), becomes very large. The biggest mistakes in estimating p(x) therefore come from those parts of the distribution associated with the rarest outcomes.
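To see the scaling concretely, here is a small numerical check under assumed parameter values (the bin width, sample size and densities below are invented for illustration, and this is not the paper’s code): the relative error in the estimate of p(x) tracks (Q∆xT)^(-1/2), and so blows up precisely where outcomes are rarest.

```python
# Numerical check of the scaling argument: relative error in estimating p(x)
# grows like (p * dx * T) ** -0.5, i.e. it is largest for the rarest outcomes.
import numpy as np

rng = np.random.default_rng(0)
T, dx = 10_000, 0.1                            # number of samples and bin width

for p in [0.4, 0.04, 0.004]:                   # true density at some outcome x
    lam = p * dx * T                           # expected count n(x) in the bin
    counts = rng.poisson(lam, size=100_000)    # many replications of the experiment
    Q = counts / (dx * T)                      # estimates of p(x), one per replication
    rel_err = Q.std() / p                      # observed relative error
    print(f"p = {p:.3f}   observed = {rel_err:.3f}   predicted = {(p * dx * T) ** -0.5:.3f}")
```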
Hence, a sophisticated subject, on this analysis, should expect the most costly errors to come from low probability events. Increasing his or her estimate of the probability of these events might then be a wise and cautious thing to do, understanding the problems of estimating probabilities from finite data. All this suggests that the naïve interpretation of Kahneman and Tversky’s findings as reflecting cognitive bias may indeed be just that – naïve. The apparent bias may only reflect a natural and sensible response to the information-scarce situation subjects find themselves in relative to an experimenter.
Peters and colleagues finish by tracing much of the confusion over this supposed cognitive bias to the broad influence of expected utility theory – the traditional starting point for many economic analyses of optimal decision making under uncertainty. This approach assumes that the optimal behaviour of an experimental subject can be calculated by averaging over a statistical ensemble of outcomes. This simplification is of course appealing, yet it only makes sense if it gives the same result as an average over outcomes in time. Unfortunately, we know this is often not the case, especially in situations where subjects make choices affecting their wealth, which are usually modelled by non-ergodic multiplicative random processes. This is precisely the situation relevant to the experiments said to reveal overestimation of the probabilities of rare events.
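The distinction is easiest to see in a simple multiplicative gamble; the specific +50%/-40% coin-flip example below is a standard illustration of non-ergodicity and is not taken from the paper.

```python
# Standard illustration of non-ergodicity: a multiplicative gamble whose ensemble
# average grows each round while the typical individual trajectory shrinks.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_rounds = 100_000, 50
factors = rng.choice([1.5, 0.6], size=(n_agents, n_rounds))   # fair coin: +50% or -40% per round
wealth = factors.prod(axis=1)                                 # each agent starts at wealth 1

print("ensemble growth factor per round:", 0.5 * 1.5 + 0.5 * 0.6)   # 1.05 > 1
print("time-average growth factor:      ", (1.5 * 0.6) ** 0.5)      # ~0.95 < 1
print("mean final wealth (sample):      ", wealth.mean())           # pulled up by rare lucky runs
print("median final wealth:             ", np.median(wealth))       # what a typical agent experiences
```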
This cognitive bias does not appear to be a bias at all when considered from an appropriate analytical perspective.
The paper is available at https://www.researchers.one/article/2020-04-14
