People often have to choose between options whose outcomes are uncertain. For instance, a drug might succeed in 60% of cases (probability 0.6), adding an extra year of fulfilled life when it succeeds and having no impact when it fails.
One way to think about the value of this drug to a new patient is in terms of its expected value. The expected value is the sum of the value of each potential outcome multiplied by the probability of that outcome occurring.
In the case of the drug, there are only two outcomes: success and failure. So the expected value equals (0.6 x one year of fulfilled life) + (0.4 x 0) = 0.6 years of fulfilled life. Here we assume that each additional year of fulfilled life is equally valuable to us.
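The calculation above can be sketched as a short function (an illustrative sketch; the `expected_value` helper and the numbers are taken from the example, not from any particular library):

```python
# Expected value: the sum over outcomes of probability * value.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# The drug: success with probability 0.6 gives one extra year of
# fulfilled life; failure with probability 0.4 gives nothing.
drug = [(0.6, 1.0), (0.4, 0.0)]
print(expected_value(drug))  # 0.6 years of fulfilled life
```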
Expected value is useful for choosing between projects. Suppose another drug succeeds with probability 0.4, and gives two years of fulfilled life when it succeeds, but causes harm equal to half a year of fulfilled life lost when it fails. Then the expected benefit of this drug is (0.4 x 2 years of fulfilled life) + (0.6 x -0.5 years of fulfilled life) = 0.8 - 0.3 = 0.5 years of fulfilled life. Over many cases, the first drug will likely provide more years of fulfilled life than the second. So if the two drugs cost the same, funding the first would add more years of fulfilled life on average.
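The comparison between the two drugs can likewise be worked through numerically (a self-contained sketch using the illustrative figures from the example; the outcome lists are assumptions mirroring the text):

```python
def expected_value(outcomes):
    """Sum of probability * value over an iterable of (p, v) pairs."""
    return sum(p * v for p, v in outcomes)

# Drug 1: 0.6 chance of one extra fulfilled life year, 0.4 chance of no effect.
# Drug 2: 0.4 chance of two extra years, 0.6 chance of losing half a year.
drug_1 = [(0.6, 1.0), (0.4, 0.0)]
drug_2 = [(0.4, 2.0), (0.6, -0.5)]

ev_1 = expected_value(drug_1)  # 0.6 years
ev_2 = expected_value(drug_2)  # roughly 0.5 years (0.8 - 0.3)

# At equal cost, the drug with the higher expected value adds more
# fulfilled life years on average.
better = "drug 1" if ev_1 > ev_2 else "drug 2"
print(better)  # drug 1
```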
Since probabilities and values are difficult to estimate, however, some have argued that people should in practice be cautious about taking expected value estimates literally (Karnofsky 2016).
Expected value can also be used in a slightly different sense: under some assumptions about rational decision making, people should always pick the project with the highest expected value (Wikipedia 2016a). Note, however, that this result implies only risk neutrality in a "pure" sense, not risk neutrality in the economic sense, so the theorem does not imply that altruists should, for example, be neutral with respect to monetary risk (for more information see risk aversion).
GiveWell. 2016. Deworming might have huge impact, but might have close to zero impact.
An example of research using expected value thinking.
Karnofsky, Holden. 2016. Why we can’t take expected value estimates literally (even when they’re unbiased).
A caution about taking applied expected value estimates literally.
Wikipedia. 2016a. Von Neumann-Morgenstern utility theorem.
For proofs that rational agents should select projects with the highest expected value (note that this does not imply economic risk neutrality).
Wikipedia. 2016b. Expected value.
A more technical discussion, including a more general definition.
Wikipedia. 2016c. Subjective expected utility.