A being is a moral patient if they are included in a theory of the good. While it is normally agreed that typical humans are moral patients in this sense, there is debate about the patienthood of human embryos, non-human animals, future people, and non-biological sentients.
Moral patienthood should not be confused with moral agency. For example, we might think that a baby lacks moral agency - it lacks the ability to judge right from wrong and to act on the basis of reasons - but that it is still a moral patient, in the sense that those with moral agency should care about its well-being.
If we assume a welfarist theory of the good, the question of patienthood can be divided into two sub-questions: Which entities can have well-being? and Whose well-being is morally relevant?
First, which entities can have well-being? A majority of scientists now agree that many non-human animals, including mammals, birds, and fish, are conscious and capable of feeling pain (Francis Crick Memorial Conference 2012), though this claim is more contentious in philosophy (Allen & Trestman 2016). This question is vital for assessing the value of interventions aimed at improving animal welfare. A smaller but growing field of study considers whether artificial intelligences might be conscious in morally relevant ways (Tomasik 2014; Wikipedia 2016).
Second, whose well-being do we care about? Some have argued that future beings have less value, even though they will be just as conscious then as present beings are now. This reduction could take the form of a discount rate on future value, so that experiences occurring one year from now are worth, say, 3% less than the same experiences occurring today. Alternatively, it could take the form of valuing individuals who do not yet exist less than individuals who already do, for reasons related to the non-identity problem (Roberts 2015). It is contentious whether either of these approaches is correct. Moreover, given the astronomical number of individuals who could potentially exist in the future, assigning even some value to future people implies that virtually all value - at least on welfarist theories - resides in the far future (Beckstead 2013; Bostrom 2009).
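To make the discount-rate example concrete, here is a minimal sketch (not from the source; the function name and the 3% figure are illustrative) of how a constant annual discount rate compounds over time:

```python
def discounted_value(value: float, rate: float, years: float) -> float:
    """Present value of `value` units of future well-being, discounted
    at a constant annual `rate` over `years` years."""
    return value / (1 + rate) ** years

# At the 3% rate used in the text, 100 units of well-being one year
# from now are worth about 97.1 units today, while the same 100 units
# a century from now are worth only about 5.2 units.
print(discounted_value(100, 0.03, 1))
print(discounted_value(100, 0.03, 100))
```

On this picture, even a modest annual rate nearly extinguishes the value of experiences a few centuries out, which is why the choice between discounting and valuing all times equally matters so much for the claim that most value may reside in the far future.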
Allen, Colin & Michael Trestman. 2016. Animal consciousness. In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
Surveys the question of animal consciousness from a philosophical perspective.
Beckstead, Nick. 2013. On the overwhelming importance of the far future. PhD dissertation, Rutgers University.
Argues for the overwhelming moral importance of how the far future goes.
Bostrom, Nick. 2009. Astronomical waste: the opportunity cost of delayed technological development. Utilitas 15(3): 308-314.
Francis Crick Memorial Conference. 2012. The Cambridge declaration on consciousness.
A declaration, from a group of leading scientists, that animals are capable of consciousness.
Roberts, M. A. 2015. The nonidentity problem. In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
Tomasik, Brian. 2014. Do artificial reinforcement-learning agents matter morally?
Wikipedia. 2016. Artificial consciousness.