Elsewhere, I have argued that all questions relating to facts about the real world can be addressed by scientific method, and that if we are truly interested in approaching the correct answers to such questions, then we are foolish to rely on any other method. One important class of questions most people, including, it seems, many scientists, think can’t be answered scientifically concerns how we should determine our moral goals. It seems to me that only a modest amount of reflection is required to dispel this pervasive myth.
The purpose of science is to provide ever more precise plausibility estimates for propositions concerning the state of reality. Rightness and wrongness, though, are not among the objective properties of matter. To attribute ideas such as good and evil to external nature is to commit the mind projection fallacy – to assume that properties of our internal model of reality correspond to properties of reality itself.
Consider that classic image of natural selection in action, a lion chasing a zebra with the intention of devouring it. There is no sense in which we can apply concepts of right and wrong, or good and evil, to this situation. It is not evil that the zebra suffers agony at the teeth and claws of the lion, any more than it is evil if the lion suffers misery and a protracted death from starvation. What we have here is simply chemistry unfolding in a mildly interesting manner – genes either preparing to replicate or losing the opportunity to do so. Furthermore, to assert that humanity is something more than this mildly interesting chemistry is certainly not founded on any rational procedure.
That we experience pain, misery, and torment is a mind-bending, heart-wrenching fact about life, for us. But the universe does not care. There is no sense in which these values can be attributed as properties of the universe. To the universe, we are just yet more chemistry, a collection of slightly unusual aggregates of matter that dance about on the surface of an otherwise insignificant chunk of rock, one of an estimated 10²⁰ planets in the known universe.
It is not that natural selection, or the universe, wants us to maximize our happiness. It is simply that a genome capable of generating an algorithm for producing the sensation of such a value apparently gains an additional advantage in the struggle to preserve its information. In fact, it is possible that our genes would prefer us to be unhappy – it’s that ‘Oh shit, I’d better not do that again’ effect that seems to give the mechanism its selective advantage. If you are really happy, then you will stop striving to do better, and your cousin, who has the ‘try harder’ mutation, will walk all over you.
This might seem an odd line of reasoning, given the agenda I defined in the opening paragraph, but we have already established enough to demonstrate undeniably that morality concerns matters of fact with objective truth, and that there is a clear scientific route to take in pursuit of those truths. In fact, it was while formulating an argument along the above lines, a couple of years ago, trying to refute the conclusions of this infamous TED talk by Sam Harris, that I came to my current understanding of the topic. I had formerly assumed the standard position, that science cannot specify moral goals. (Harris’s ideas were shortly afterward expanded to book length, in ‘The Moral Landscape.’)
The above paragraphs can be condensed into one very simple statement, one of only two simple principles (the other being a trivial tautology) needed to establish the validity of a potential science-based morality:
‘Good’ and ‘evil’ are concepts with no reality outside minds.
To obtain principle (2), we simply must remind ourselves what the word ‘morality’ means:
Morality is doing what is good.
Principle (1) states that good and evil do not exist outside minds. This does not, however, mean that they have no objective existence. Since they are words used to describe our mental reactions to different situations, it is clear that they are values with some physical representation inside our minds. We forget this too easily, partly, perhaps, because of the scientific doctrine of objectivity, which typically means that our mental state should be kept as separate as possible from the system under study and the process of gathering evidence. This is normally a good principle – we have dozens of documented cognitive biases that make it essential to eliminate the influence of our preconceptions and emotional responses when conducting science. But what happens when our minds are the system under study? Crude application of the doctrine of objectivity can cause confusion. A mental state is not something lacking objective reality, as we might erroneously infer from this doctrine – we are, after all, physical entities. Our brains are made of atoms, and our thoughts and emotions necessarily correspond to specific configurations of matter and energy in our neurological hardware.
Because the mental states pertaining to right and wrong have no existence outside minds, we can be confident that anything there is to be known about them can only be discovered by looking inside minds.
We already have instrumentation capable of measuring important data pertaining to mental states (functional magnetic resonance imaging being currently very popular), and since enormous improvements in this technology are quite likely, it is in principle possible that we will at some time be capable of generating extremely detailed scientific information about which stimuli correspond to value judgements of right and wrong. I believe we already know enough to make a highly functional first approximation.
We can correlate measured mental states with people’s self-reported happiness, and other surrogate measures, and we can stochastically map those mental states to the stimuli that caused them. And since morality is simply doing good, we can scientifically optimize our behaviour to maximize the occurrence of the relevant mental states (experienced good), and scientific morality has in principle entirely achieved its objectives.
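As a toy illustration of this optimization loop (the policy names and scores here are entirely hypothetical, standing in for whatever surrogate measures of wellbeing we can actually record):

```python
from statistics import mean

# Hypothetical data: wellbeing scores (self-report, imaging proxies, etc.)
# observed after repeated trials of each candidate behavioural policy.
observed = {
    "policy_a": [6.1, 5.8, 6.4, 6.0],
    "policy_b": [7.2, 6.9, 7.5, 7.1],
    "policy_c": [5.0, 5.3, 4.8, 5.1],
}

def best_policy(scores):
    """Adopt the policy whose trials show the highest mean wellbeing."""
    return max(scores, key=lambda p: mean(scores[p]))

print(best_policy(observed))  # policy_b
```

The point is not the arithmetic, which is trivial, but that nothing in the loop is subjective once the measurements are in hand.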
1. The value problem
The objection goes something like this: ‘How can we know that we should value wellbeing? Surely science can’t tell us what to value, can it?’ These questions are confused. It is not the job of science to tell us what to value (not at the highest level, at least). We want science to tell us what we actually do value. Note that value and perceived good are completely synonymous, and there is no good other than the perceived kind. Similarly, the declaration that we value wellbeing is another unavoidable tautology. We are talking about measuring the perception of value in people’s brains, and trying to bring about the circumstances that enhance the frequency of the observed mental states. There is no ambiguity about whether or not we should do this: to say you don’t value wellbeing, or happiness, or whatever is good would be self-contradictory. Goodness is the extent to which something is appropriate, so there is no sense in asking why we should do good: you might as well ask for proof that we should do what we should do.
2. The measurement problem
Another commonly voiced objection is: ‘How do we know we are measuring the right thing?’ We can’t, but this does not nullify the existence of objective answers to questions about morality. While being one of the commonest complaints against objective morality, it is also one of the most unfair and idiotic. The same objection applies to all of science. Science systematically evaluates knowledge, in the form of direct sensory experience, in order to ascribe probabilities to propositions about phenomena in nature. There is no direct logical link between these sensory experiences and the phenomena we wish to learn about – the actual truth values of these scientific propositions remain forever obscured. This is exactly why probabilities constitute the ultimate expression of our knowledge.
Consider an apparently simple physical concept, temperature, and its measurement. The original concept of temperature related entirely to a subjective experience – if I put my hand on something hot, it feels hot, and this is how we originally knew that something possessed a high temperature. (There are occasionally other indicators as well, such as combustion, but all of these come down to subjective experience in the end.) At some point, a clever person realized that they could correlate these subjective symptoms of high temperature with the expansion of a fluid in a narrow glass capillary. The first accurate thermometer was born. How did they know that the rising fluid in the thermometer corresponded to the same phenomenon they were experiencing when they felt an object’s high temperature? They did carefully controlled experiments. But ultimately, they had no way of knowing for certain.
As science advanced, people realized that the assumed linear thermal expansion upon which calibration of these simple thermometers is based breaks down at extreme temperatures, and new technologies were developed to overcome the difficulty, giving progressively more accurate measurements, and greater ranges of validity.
Now note a curious thing. Even with the crude glass capillary style of thermometer, subjective experience is no longer the ultimate arbiter of temperature. A concept originally devised to account for subjective experience was found (with very high probability) to have an underlying physical mechanism that could be characterized accurately with external instruments. Given two objects of similar temperature, even a practiced human can easily be mistaken as to which one is hotter. A precise thermometer, however, will not be mistaken, and will give a result in direct conflict with the human. There has never been a serious suggestion, however, that such occurrences invalidate the use of the thermometer – instead, it is obvious that subjective experience is not perfectly correlated with the physics that it tries to characterize. So it is with feelings of wellbeing, and we must expect that neuroscience will quickly advance to a point where many kinds of mental states can be determined far more accurately than using a person’s self-reported state of mind (I would think this condition already holds in several cases). There is no contradiction here, as these mental states are real states of matter inside the substrates of minds (usually brains). People can be mistaken about what they want.
3. The persuasion problem
You cannot persuade somebody who is committed to the contrary view that wellbeing should be valued. Does this mean that value is not an objective property of reality, or that wellbeing can’t be the basic guide of morality? Of course not. That wellbeing is valued is a tautology, as pointed out already. Being well, happy, and satisfied is just the fortunate condition of possessing much of what you value. That some people will not accept this analytical truth is a consequence of the failure of their minds to operate efficiently. The often-quoted fact that nearly half of all Americans do not accept the theory of evolution (link) has absolutely no impact on the validity of this model of how life reaches its various states of diversity.
4. What about psychopaths?
Is one morality less true than another? Since morality is rationally figuring out how to improve your state of happiness, no – symmetry demands that there is no privileged authority on ethical behaviour.
But this implies that psychopaths are also moral, right? Wrong, actually. It doesn’t imply that anybody’s actions are moral: the requirement for rationality still needs to be met.
Ultimately, though, it is possible to imagine that there might be psychopaths who are perfectly rational, and highly successful at maximizing their own fulfillment by acting abhorrently to others. This still poses no real philosophical problem. Psychopaths are outnumbered by an estimated 99 to 1 (in the US), so our morality trumps theirs (actually, by those odds there seems a reasonable chance that some psychopath will read this). Our (non-psychopathic) morality dictates that we do what we can to limit their harmful tendencies. Even the psychopaths, by the way, presumably do not want to be the victims of other psychopaths.
Additionally, a person can be mistaken about what their highest-level desires are. A person can be mistaken about the relationship between their lower-level desires and their ultimate goals, and, most obviously, a person can be mistaken about the likely outcomes of their actions and what the results will be for their happiness. This means that even the ethics of a person acting rationally can be improved by furnishing them with more accurate facts.
All rational, equally well-informed behaviour is equally valid. Pretending that there is something wrong with this principle, because it fails to unambiguously condemn the foul actions of sadists is to ignore the obvious truth that the appropriateness of one’s conduct is tautologically determined by its capacity to generate mental states corresponding to appropriateness. Furthermore, it is to insist that room be left open for some principle that, besides being wrong, will, almost by definition, not have the slightest impact on the behaviour of psychopaths, who don’t seem to care what is expected of them by moral philosophers.
5. Lack of uniqueness
It is not clear that there is a unique solution to the mathematical problem of maximizing wellbeing. There may very well be some upper limit to how happy I can be, and there may be many ways to arrive at that upper limit (but I’m not holding my breath). This does nothing to negate the arguments laid out. Whatever class of conditions enables that upper limit to be attained, it is determined by objective facts about nature.
6. Lack of universality
Different people have different values, but as discussed at number 4, this poses no philosophical threat to my thesis. Human beings, however, as members of a single species, are more similar than we are different, so substantial overlap between the ambitions of different individuals is to be highly expected. Furthermore, direct commonality of goals can also be confidently predicted – what’s good for you is good for me. The fact that our morality is ultimately selfish does not need to stand in the way of a high degree of cooperation.
A society that protects all people indiscriminately is very likely to protect me. A global economy that flourishes has a good chance to be one in which I am not poor. A society where learning and technology thrive is a good choice if I hope to have a comfortable life, and complex technology is very difficult to attain without close, structured cooperation between thousands of people.
In ‘The Selfish Gene,’ Richard Dawkins has written much on how cooperative behaviour emerges naturally among selfish agents, by mechanisms such as kin selection and reciprocal altruism. He has also made a nice TV documentary, ‘Nice Guys Finish First,’ on the topic, specifically to answer critics who unintelligently insisted that selfish genes imply selfish people. Further, by introducing the concept of memes, he has made it clear how human behaviour has evolved far beyond the consequences of mere population genetics. We are no longer bound by genetic evolution, and cultural innovations capable of enhancing mutually beneficial technological and social developments are to be positively expected.
7. Which utility function?
There are different ways to model value, e.g. a rational utility function, or prospect theory (which more accurately models how humans naturally assess utility), but which one should we use? The answer is simple: the one that works. Briefly (a bit late for that, I think), the point is that one day we might be smart enough to measure utility directly, by studying the physical states of brains (and potentially other substrates supporting minds).
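To make the contrast concrete, here is a minimal sketch of the two models. The linear function is the risk-neutral rational baseline; the prospect-theory value function uses roughly the parameters Tversky and Kahneman estimated in 1992 (exponent α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25), under which losses loom larger than equivalent gains:

```python
def linear_utility(x):
    """Risk-neutral utility: value proportional to the outcome."""
    return x

def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman, 1992):
    concave for gains, steeper (loss-averse) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A loss of 100 hurts considerably more than a gain of 100 pleases:
print(prospect_value(100))   # ≈ 57.5
print(prospect_value(-100))  # ≈ -129.5
```

Which of these better predicts the mental states we end up measuring is itself an empirical question, which is exactly the point.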
8. Kahneman’s alternate selves
Daniel Kahneman and coworkers have repeatedly shown that there are different measures of happiness within a single human mind (see this TED talk, for example): the experiencing self, and the remembering self. Which one is correct for our purposes? Again (and again very briefly), the one that works. Measure the success of basing your behaviour on one choice, compare with the outcomes of the other choice, then adopt the policy that produces the optimal result.
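The best-known divergence between the two selves is the peak-end rule: the remembering self scores an episode roughly by the average of its most intense moment and its final moment, largely ignoring duration, while the experiencing self is better approximated by the moment-by-moment average. A toy sketch (the scoring rules are the standard approximations; the episode data are invented):

```python
def experienced(episode):
    """Experiencing self: mean moment-by-moment wellbeing."""
    return sum(episode) / len(episode)

def remembered(episode):
    """Remembering self (peak-end rule): average of the most
    intense moment and the final moment."""
    return (max(episode, key=abs) + episode[-1]) / 2

# A long, mostly pleasant episode that ends badly:
episode = [7, 7, 7, 7, 7, 7, 2]
print(experienced(episode))  # ≈ 6.29
print(remembered(episode))   # 4.5
```

The two selves genuinely disagree about the same episode, which is why ‘the one that works’ has to be settled by measuring outcomes rather than by fiat.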
9. How does wellbeing aggregate?
Assuming that my happiness is affected by some measure of global happiness (see number 6), how should we combine happiness measures for different people? Should they be added, multiplied, or what? Isn’t the final choice ultimately arbitrary?
No, it’s not. Once more, measure the outcome of some likely aggregation function and compare it with other candidates. Did we need some a priori basis for accurately combining temperatures in order to accept its validity as a physical concept? Did the caveman need to know that two objects placed in contact would equilibrate to some intermediate temperature, rather than their temperatures adding algebraically, in order to know that the embers in his fire were hot?
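Treating the aggregation rule as a hypothesis to be tested rather than an axiom might look like this in code (the candidate functions and the toy populations are illustrative assumptions, not a proposal):

```python
from statistics import mean, geometric_mean

# Candidate ways of combining individual wellbeing into a group score.
candidates = {
    "total": sum,
    "average": mean,
    "worst_off": min,             # maximin, Rawls-flavoured
    "geometric": geometric_mean,  # penalizes very unequal distributions
}

# Two populations with the same total wellbeing, distributed differently.
populations = {
    "equal": [5, 5, 5, 5],
    "unequal": [1, 1, 1, 17],
}

for name, f in candidates.items():
    scores = {pop: round(f(values), 2) for pop, values in populations.items()}
    print(f"{name:>10}: {scores}")
```

Note how ‘total’ and ‘average’ cannot distinguish the two populations at all, while ‘worst_off’ and ‘geometric’ disagree sharply with them – which candidate tracks the mental states we actually measure is, once again, an empirical matter.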