You believe a proposition like “it will rain tomorrow” if you think that it is true. Beliefs are all-or-nothing attitudes: either you believe that it will rain or you don't.
This feature of beliefs can make it difficult to use them in decision-making contexts. For example, suppose a person believes that action A will have a very large positive impact, and that action B will have an impact that is almost as large as that of A. Should the person undertake action A or action B?
There is no problem here if the person is certain that the actions will produce these outcomes: if so, then action A is clearly better than action B. But it seems entirely possible for a person to believe things even if, when pressed, they would admit that there's at least some chance that those things are not true. For example, the individual in the example might admit that they think that action A only has a 95% chance of having a very large positive impact, whereas action B has a 99% chance of having an impact that's almost as large. Given this, it could be better to perform action B rather than action A.
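The comparison above can be made concrete by weighting each action's impact by its probability of success. The probabilities (95% and 99%) come from the example; the numeric impact figures (100 and 98) are illustrative assumptions standing in for "very large" and "almost as large".

```python
# Probability-weighted comparison of two uncertain actions.
# Impact figures are illustrative assumptions, not values from the text.

def expected_value(probability: float, impact: float) -> float:
    """Return the probability-weighted impact of an action."""
    return probability * impact

ev_a = expected_value(0.95, 100)  # action A: larger impact, less certain
ev_b = expected_value(0.99, 98)   # action B: slightly smaller impact, more certain

print(f"EV(A) = {ev_a:.2f}")  # 95.00
print(f"EV(B) = {ev_b:.2f}")  # 97.02
```

On these assumed numbers, action B comes out ahead despite its smaller impact, because its higher probability of success more than compensates.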
For this reason, when we're trying to make ethical decisions, it's often better to express our attitudes in terms of credences, which reflect these likelihoods, rather than in terms of all-or-nothing beliefs.