When using a model to make estimates, we will often be uncertain about what values the model’s numerical parameters should have.
For example, if we decide to use 80,000 Hours’ three-factor framework for selecting cause areas, we may be unsure what value to assign to a given cause area’s tractability; or, if we are attempting to estimate the value of a donation to a bednet charity, we may be unsure how many cases of malaria are prevented per bednet distributed.
It is important to make such uncertainty clear, both so that our views can be more easily challenged and improved by others and so that we can derive more nuanced conclusions from the models we use.
By plugging in probability distributions or confidence intervals, rather than individual point estimates, for the values of the parameters in a given model, we can calculate an output for the model that also reflects our uncertainty. However, it is important to be careful when performing such calculations, since small mathematical or conceptual errors can easily lead to incorrect or misleading results. A good tool for avoiding these sorts of errors is Guesstimate.
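The same kind of calculation can also be sketched directly with a Monte Carlo simulation: sample each uncertain parameter from a distribution, push the samples through the model, and summarize the resulting output distribution. The sketch below uses the bednet example, with entirely hypothetical distributions and numbers chosen for illustration only, not taken from any charity evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo samples

# Hypothetical parameters, expressed as distributions rather than point estimates.
# Cost per bednet distributed (USD): fairly well known, so a narrow normal.
cost_per_net = rng.normal(5.0, 0.5, n)
# Malaria cases prevented per net: highly uncertain, so a wide lognormal.
cases_prevented_per_net = rng.lognormal(np.log(0.3), 0.5, n)

donation = 1000.0  # USD

# Push each sampled parameter combination through the model.
nets = donation / cost_per_net
cases_prevented = nets * cases_prevented_per_net

# Summarize the output as a distribution, not a single number.
lo, mid, hi = np.percentile(cases_prevented, [5, 50, 95])
print(f"Cases prevented per ${donation:.0f}: "
      f"median {mid:.0f}, 90% interval [{lo:.0f}, {hi:.0f}]")
```

Reporting an interval rather than a single figure makes the uncertainty explicit: a reader who disagrees with one of the input distributions can see exactly which assumption to challenge.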
It has also been argued, e.g. by Holden Karnofsky, that in cases with high uncertainty, estimates that assign an intervention a very high expected value are likely to reflect some unseen bias in the calculation, and should therefore be treated with skepticism.
Karnofsky, Holden. 2016. Why we can’t take expected value estimates literally (even when they’re unbiased).
Approach to evaluating uncertain interventions.
A tool for carrying out calculations under uncertainty.