Follow up to: Of Oughts and Is, Part III
Author’s Note: Honestly, this essay will likely make very little sense to you unless you start from the very beginning. At best you will want to backtrack all the “follow up to”s, but I think you can get by fine with reading just “Don’t Smuggle Your Connotations”, “The Folly of Debating Definitions”, “The Map and The Territory”, “The Meaning of Morality”, and “Of Ought and Is, Part I” “…Part II” and “…Part III”. (If you feel like this is too much prerequisite reading, read “The Sad Truth of Inferential Distance” to understand why I would do something like this.)
This is a recanted essay!: From the comments I’ve been getting, I now know that I definitely haven’t been as clear in communicating my views as I thought I was. Thus, I’m going to be scrapping this essay as a draft and starting over. Please note that this is an old and outdated draft of this essay. See “I’m Never As Clear As I Think I Am” for more information.
In “The Meaning of Morality”, I decided to crack open our notions of “morality” and “good” to see what was inside. This turned out to start an adventure through the tunnels of normativity, and will keep us on a train that will go for quite a while longer, because there is a lot to say.
The first big idea was breaking apart the claim that an action fits the label “good” and the claim that we have some sort of intrinsic motivation (reason for action) to perform that action. This was the idea of pluralistic moral reductionism — that the word “good” was just that, a word, and like other words it could have multiple, even mutually exclusive definitions. Thus, as long as we did not smuggle in the connotation of intrinsic motivation, we could avoid the massive folly of endless squabbling over what exactly is “good” or not.
That being said, we do still face a nearly endless squabble over what exactly gives us reasons for performing or refraining from certain actions. In “Of Ought and Is, Part I”, I started outlining the Is-Ought Gap, which broadly put means that we can’t conclude what ought to be the case just from saying that something is “good” alone. More specifically and simply put, the Is-Ought Gap presents us with a “Why Challenge”: Why ought I follow your moral theory, anyway? Where are the reasons for action coming from?
It’s not enough to just huff and puff about what ought to be the case, and “Of Ought and Is, Part II” and “Of Ought and Is, Part III” carry this challenge all the way through, showing that it is not well met by any current theory of normativity.
Now we begin the process of picking up the pieces of normativity that were left lying on the ground. The good news is that I didn’t bring you through a whirlwind tour of seven different theories about the motivational force behind moral statements just to tear them down; each theory also contains key pieces needed to bring us back. I now want to construct a philosophical notion of reasons for action, then dive into the psychology that actually underlies our motivation, and see how these things together help us understand what it is we ought to do… if anything in particular.
For the first step, I turn our attention back to Immanuel Kant. He had outlined a categorical imperative that said we ought to follow certain moral commands regardless of whether we personally desire to. I found such categorical imperatives to be problematic, but Kant had something else to offer: the hypothetical imperative.
Categorical imperatives state: You ought to do X, regardless of anything else. Hypothetical imperatives, however, incorporate an if-then element: If you desire Y, then you ought to do X, where X is the action that most likely gets you Y. Here, desire can be taken more-or-less as it is used in common parlance. More precisely, a “desire for Y” is philosophical shorthand for a mental state that motivates you to act so as to bring about Y. If you desire to eat a cookie, you will go to the store and buy some, since that action brings about you eating a cookie.
Notice that here we’re keeping it simple. It’s clear that you might desire to get a cookie, but then have some other desire that would prevent you from going to the store and buying some, like a desire to not spend money, or a desire to stay on the couch and watch TV. It’s also clear that even though the notion of desire is pretty useful for explaining and predicting human behavior, it is not a concept that holds up well to current psychological research. All we have to recognize at this point is that our internal mental states can give us reasons for action.
Not Quite The Gap Anymore
This brings us back to the “Is-Ought Gap” I mentioned earlier — the idea that, for any moral system, it makes sense to ask why we have reasons for action to follow that system. A theory like utilitarianism might say that we ought to do whatever maximizes the happiness of conscious creatures, but we might then ask why that is the case — why must we do whatever maximizes the happiness of conscious creatures? Essentially, something is missing when we go from “Action X, if performed by you, would maximize the happiness of conscious creatures” to “Therefore, you ought to perform action X”.
And here I lay out the simple answer for what is missing; for what actually would give us reasons for action to follow a system: our desires. Here’s how that works from a simplified philosophical standpoint:
- It’s true that we value certain states of affairs (ways the world is or could be) more than other states of affairs, and thus want the actual state of affairs to be one which we value.
- Secondly, certain facts about how the world works mean that certain actions (including refraining from action altogether) will bring about a certain state of affairs, whereas other actions will bring about other states of affairs.
- Therefore, valuing certain states of affairs will cause us to be motivated to act so as to bring about those states of affairs.
- Therefore, we have reasons for action — namely, it becomes tautological to say we ought to act so as to bring about the states of affairs we value.
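The steps above can be condensed into a toy sketch, assuming we represent values as numeric scores over states of affairs and the causal facts as a mapping from actions to resulting states (all names here are hypothetical, chosen only for illustration):

```python
# Toy sketch: reasons for action fall out of (1) valuing some states of
# affairs over others and (2) causal facts about which actions bring
# about which states. Illustrative only, not a real decision theory.

# "Is" fact 1: the agent's values over states of affairs.
values = {"ate_cookie": 10, "no_cookie": 0}

# "Is" fact 2: causal facts mapping actions to resulting states.
outcomes = {"go_to_store": "ate_cookie", "stay_on_couch": "no_cookie"}

def ought(values, outcomes):
    """The derived 'ought': the action whose outcome the agent values most."""
    return max(outcomes, key=lambda action: values[outcomes[action]])

print(ought(values, outcomes))  # -> go_to_store
```

Nothing in the sketch appeals to anything beyond the two “is” facts; the recommended action is simply deduced from them.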
All this adds up to the bridge we need to get from the “is” side (we have specific mental states) to the “ought” side (we ought to perform certain actions). And we do indeed answer this Is-Ought Problem and solve the “Why Challenge”. Why ought you eat a cookie? Well, because you hypothetically value eating cookies — and if you don’t actually have this value, then our claim that you ought to eat a cookie is false.
Instrumental Means and Terminal Ends
It’s important to think about desires a bit differently than the way we currently may be thinking about them. For example: why do we study, if we don’t actually enjoy studying? Why exercise, if we know that it only hurts us and makes us tired? The answer here is the difference between two types of desire, which are unfortunately blurred together (at least in English) because both use the same word.
The difference is between what is called an instrumental value (or desire-as-means) and a terminal value (or intrinsic value, or desire-as-ends). An instrumental value is something you come to value only because it helps you achieve your terminal values (you desire it as a means to get what you want), whereas a terminal value is desired as an end in itself.
For example again, while we may enjoy studying a bit for its own sake, we mostly do it as a means to get good grades, which itself might be valued as a means to graduate from college, which might be a means to get a good job, which might be a means to having a successful life. What exactly we desire-as-means versus what exactly we desire-as-ends is something to sort out when we look in depth at the actual underlying psychology later on.
Bringing in Beliefs
Here, it may seem that “ought” is tautological: we ought to do what we want to do, and thus it’s really meaningless to tell someone they ought to do something different than what they’re already doing, because they’re already either doing what they terminally value, or something that they desire-as-means to achieve their terminal values. But once we bring in beliefs, which may be in error, we can see that there is actually room for a true recommendation to be made as to what you ought to be doing, but aren’t currently doing.
For example, let’s keep it simple by going back to a hypothetical agent (an “agent” is an entity capable of deliberate action) that has only one terminal value: to consume a cookie. This agent, call him the Cookie Monster, will thus be motivated to perform whatever action is most likely to lead to a state of affairs where he is consuming a cookie. Now let’s say that there is a red box, a green box, and a blue box, and the blue box contains a cookie. The Cookie Monster is offered only one of these boxes — he gets to make the choice, and then the other boxes are destroyed, along with their contents.
Now, the Cookie Monster has no information about the boxes, and thus can only choose at random, and selects the red box. To his dismay, he sees the blue box get destroyed with his cookie inside. Yet, there is indeed a sense in which we can say the Cookie Monster’s action was mistaken — he ought to have chosen the blue box (and thus also ought to have instrumentally valued choosing the blue box).
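The gap between the uninformed choice and the actual ought can be made concrete in a minimal sketch (the box names and helper functions here are hypothetical, invented for illustration):

```python
import random

# The true causal facts of the scenario: only the blue box has the cookie.
true_contents = {"red": None, "green": None, "blue": "cookie"}

def actual_ought(contents, desired="cookie"):
    """What the agent actually ought to choose, given the true facts."""
    return next(box for box, item in contents.items() if item == desired)

def uninformed_choice(contents):
    """With no information about the boxes, the agent can only pick at random."""
    return random.choice(list(contents))

print(actual_ought(true_contents))       # -> blue
print(uninformed_choice(true_contents))  # red, green, or blue -- possibly mistaken
```

The “actual ought” is fixed by the true facts regardless of what the agent believes; the random pick is the best the agent can do given his impoverished beliefs, and it can still be mistaken.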
The Logical Underpinnings
These facts about what the Cookie Monster ought to do are not opinions, but rather actual facts about the hypothetical, derived solely from two statements about how the hypothetical “is”: (1) the terminal value of the Cookie Monster and (2) the outcomes to which each hypothetical choice would lead. These facts are no different than deriving “the shape is a square” from knowing that the shape is (1) a polygon with (2) only four ninety-degree angles. We also need not have anyone personally recommend the action to the Cookie Monster for it to be true that the Cookie Monster indeed ought to pick the blue box.
Indeed, we can construct the hypothetical imperative that “If the Cookie Monster desired to eat a cookie and if choosing the blue box was the action most likely to bring about the cookie, the Cookie Monster ought to pick the blue box”, and it would be a logically true statement in any hypothetical scenario. The normativity comes from the scenario as a whole, but reduces to individual facts about the overall scenario — the desires for certain states of affairs and the causal relations of how actions bring about states of affairs.
The Two Ingredients
And it’s important we have both parts in order to construct normativity. Imagine that instead of the Cookie Monster, we had the Anti-Cookie Monster, who terminally desired to avoid cookies at all costs. While none of the causal relations of how actions bring about states of affairs in the hypothetical changed, changing the terminal value under consideration changes the result: the Anti-Cookie Monster ought to choose the green or red box.
Likewise, if the cookie were moved to the green box, the regular Cookie Monster with the terminal value to eat a cookie now ought to do something different — choose the green box instead of the blue box. Here, a change in the causal relationships of actions and states of affairs resulted in a different normative action, even though the terminal value wasn’t changed.
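Both ingredients can be varied in a small sketch (again, all names are purely illustrative), showing that the derived ought changes when either the terminal value or the causal facts change:

```python
# The derived "ought" depends on BOTH ingredients: change either the
# terminal value or the causal facts, and a different action follows.
# Hypothetical sketch, not a general decision procedure.

def ought(boxes, wants_cookie):
    """Cookie-seekers ought to pick the cookie box; avoiders, any other box."""
    if wants_cookie:
        return [b for b, item in boxes.items() if item == "cookie"][0]
    return [b for b, item in boxes.items() if item != "cookie"]

scenario = {"red": None, "green": None, "blue": "cookie"}

print(ought(scenario, wants_cookie=True))   # Cookie Monster -> blue
print(ought(scenario, wants_cookie=False))  # Anti-Cookie Monster -> red, green
moved = {"red": None, "green": "cookie", "blue": None}
print(ought(moved, wants_cookie=True))      # same value, new facts -> green
```

Holding the boxes fixed and flipping the value changes the answer; holding the value fixed and moving the cookie changes it too.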
So in one sense this normativity is subjective — it comes from a desire that exists within the brain of the agent (or whatever stuff hypothetical agents are made out of). But in another sense this normativity is completely objective — it comes from the objective fact that the desire exists and indeed has motivational force, and from the causal relationships of how actions bring about states of affairs relevant to that desire. Carefully note here what precisely is in the map, and what is in the territory.
Once you bring in beliefs that are capable of being wrong, you also get a difference between what the agent believes he or she ought to do, and what the agent actually ought to do, given the true facts about the world in which the agent lives. It will be important for later discussion to note that the first type — what the agent believes he or she ought to do — is called descriptive normativity (or descriptive ethics) and the second type — what the agent actually ought to do — is labeled prescriptive normativity (or prescriptive ethics).
Normativity Through Standards
The fact that we can ground normativity in hypotheticals about agents and causal relationships completely different than those found on Earth means that normativity need not take place in any actual desire, but rather can exist in strictly hypothetical relationships. What I mean by this is that all the hypothetical imperatives for the Cookie Monster and Anti-Cookie Monster are true statements, even though the Cookie Monster, Anti-Cookie Monster, and the three boxes all don’t exist here on Earth.
Thus demonstrating the truth of normative statements becomes something more akin to math, like adding one and one and getting two. It’s a simple matter of logical deduction. The problem on Earth, however, is that the causal relationships between actions and states of affairs that actually exist cannot be deduced solely by logic, but actually require observing the world empirically. Likewise, the terminal values of the agents that exist on Earth, humans, are so complex as to also require specific empirical study before a logical analysis can be applied. (Technically, given that nonhuman animals also have desires, they too can be agents to which true hypothetical imperatives apply, though this probably won’t be much use for them.)
The punchline that I’m getting at is that we can propose a certain standard, say “maximize the well-being of conscious creatures” and then construct a hypothetical imperative about that standard, and have that statement be true, even if no agent actually cares at all about maximizing the well-being of conscious creatures.
We can then say that “In order that you match this standard, you ought to X”, where X is whatever action actually does maximize the well-being of conscious creatures. These standards exist externally to any sort of desiring that is actually taking place, and are true by being a relationship between the causal relationships of the world and the standard itself.
To see why this matters, I turn you back to Searle’s idea of institutions that I discussed earlier. He tried to bridge the Is-Ought gap by saying that if you made a promise to pay someone $5, you entered into an institution where you ought to pay that person $5. Now we have the tools to see what is really going on: the promise to pay someone $5 means that “you ought to pay that person $5, in order that you match the standard of promise-keeping”.
Of course, this hypothetical imperative may not be at all relevant to you if you don’t personally care about the standard of promise-keeping, but it remains an unimpressively true statement nonetheless. You can jump the is-ought gap with just a factual “is” statement about what a standard entails and a factual “is” statement about whether a certain action meets that standard.
A Taxonomy of Oughts
Thus it becomes much simpler, from a pragmatic linguistic standpoint, to merge the concept of standards and the concept of terminal values by taking ought statements to just be about an end or a goal without any further elaboration. We might then say that “In order to maximize his terminal value (of eating a cookie), the Cookie Monster ought to choose the blue box (given that the blue box is the only box that contains the cookie)”.
However, we could just as well say “In order to maximize the well-being of conscious creatures, the Cookie Monster ought to not choose the blue box (given that the blue box all along secretly contained a bomb that would destroy Earth)”. While this second hypothetical imperative statement might not be relevant to the Cookie Monster, it is nonetheless true, and both are types of goals that a hypothetical imperative may have.
We thus end up with the following types of ought statements:
- A true ought connects an end with the action that most effectively accomplishes that end.
- A false ought connects an end with an action that does not most effectively accomplish that end.
- A motivating ought is either a true or false ought that has as an end the terminal value or values of the agent in question. Some oughts will be motivating oughts for some agents and not for others.
Notice here that we’ve shifted the definition of “ought” away from “motivating force” and to just any connection between an action and an end that the action may bring about. This allows us to talk about more claims than we could otherwise. Just don’t get confused.
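As a rough sketch, the taxonomy above amounts to two independent checks — one against the causal facts, one against the agent’s terminal values. The lookup table and value set below are hypothetical stand-ins:

```python
# An "ought" connects an end to an action. Whether it is TRUE depends on
# the causal facts; whether it is MOTIVATING depends on the agent's
# terminal values. The two checks are independent. Illustrative only.

def is_true_ought(action, end, best_action_for):
    """A true ought names the action that most effectively achieves its end."""
    return best_action_for.get(end) == action

def is_motivating_ought(end, terminal_values):
    """A motivating ought has one of the agent's terminal values as its end."""
    return end in terminal_values

best_action_for = {"eat_cookie": "choose_blue", "avoid_cookie": "choose_red"}
cookie_monster_values = {"eat_cookie"}

# "To eat a cookie, choose the blue box": true, and motivating for him.
print(is_true_ought("choose_blue", "eat_cookie", best_action_for))  # -> True
print(is_motivating_ought("eat_cookie", cookie_monster_values))     # -> True
# "To avoid cookies, choose the red box": true, but not motivating for him.
print(is_motivating_ought("avoid_cookie", cookie_monster_values))   # -> False
```

An ought can be true without motivating any particular agent, which is exactly the decoupling of truth from motivational force described above.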
Where We Go From Here
So now I’ve traced normativity through a long path — jumping over the is-ought gap with ends that may include either generic standards or the specific values of agents. This ends up with the construction of hypothetical imperatives that, because an agent may not have all the relevant information or may have false beliefs, can be truly prescriptive in the sense that an agent can act incorrectly.
All of this can take place and be deduced by logic alone within a hypothetical, mathematically-defined world, but if it were to take place on our real Earth, the sheer complexity of the world would prevent us from using only logic to deduce oughts relevant to us or the standards we care about. Thus, we must turn to an empirical study of both sides of the equation — what it is we value and how to get it. Such is what I will consider the general study of normativity.
Here’s a tentative roadmap subject to update: In the next essay, I will write more about how this picture of normativity fits within the philosophy of ethics as a whole, with a specific focus on recommending that we reconsider how we use words like “good”, “bad”, “right”, “wrong”, “moral”, and “ethical”. I will then write an essay to stem the likely tide of fatalism that this may bring, showing that I do not intend to bring society crashing down with me just because there is no almighty moral law.
Then, I will delve deep within the relevant science and spend many essays to determine what we value, why, and a bit about how to get it. While I’d like to focus mostly on the psychology of motivation, I’d also like to focus on some of the underlying psychology and philosophy that explains why we make such large mistakes about ethics, and why ethics is such a confusing field.
After that, I will return to our philosophical analysis here and update it. Lastly, I will proceed to applied ethics, and talk about the states of affairs I value, and maybe why you should consider valuing them too.
Followed up in: I’m Never As Clear As I Think I Am
I now blog at EverydayUtilitarian.com. I hope you'll join me at my new blog! This page has been left as an archive.