Philosophy, utility functions, rationality, heartless economists

(This was a daily beemail from 2018 and I keep finding occasion to refer to it – most recently in the context of a new forum thread on satisficing vs maximizing – so I’m finally putting it in the forum!)

In case anyone else is into this, I was debating the concept of rationality today with one of you. There’s a Beeminder tie-in, of course, if a bit tangential. The thesis is that irrationality is really bad. I might be preaching to the choir here so let me know if any of this sounds remotely persuasive or misguided or just platitudinous!

(I’ve fictionalized the debate a bit, btw.)

“It feels wrong to prioritize money over other values.”

Leaving aside all the philosophy of what money is, it’s possible to hate some kinds of work enough that it’s rational to choose not to do it, no matter how lucrative. But when one puts off certain simple adulting tasks and costs oneself thousands of dollars, for example, there’s no way to rationalize that. We seem to both be guilty of that kind of irrationality.

“I don’t have a requirement that my actions be rational.”

Opinion: It’s a moral imperative to always reject irrationality. You can fall short of rationality 6 ways from Tuesday but you can never endorse or embrace irrationality. You can acknowledge your shortcomings and work around them, find ways to cope with them, etc. But throwing up your hands and saying that irrationality is ok is shameful.

Because irrationality is by its very definition wrongness, not-ok-ness.

When you’re arguing against rationality what you mean is that you disagree about what actions are most rational. You can’t argue against rationality itself. That’s like arguing against logic. No, worse. It can be rational to argue for intuition and emotion over logic, in some senses. Arguing against rationality is like arguing against Rightness.

To be clear, I’m also extremely irrational. I’m so deeply irrational that I created Beeminder. The use of Beeminder is incontrovertible proof of one’s irrationality. No rational person would ever purposefully limit their future options and set up a penalty to pay for no reason other than changing their mind about their priorities. It’s insane. But it’s also the epitome of not accepting irrationality – of doing whatever it takes to fix it.

“I do not have to prove my rationality to you, only to myself!”

Yes, you can have absolutely any utility function under the sun. No preference is verboten. (In the mathematical sense this even allows for preferences that really are bad, like pedophilia or thinking that the earth would be better off without humans.) But suppose you find an inconsistency in your utility function. (Hyperbolic discounting is Beeminder’s focus; others include the sunk cost fallacy, uncalibrated predictions, and scope insensitivity. There’s an enormous list that all humans are susceptible to.) Granted, there’s still such a thing as bounded rationality, where the cognitive cost of being more rational exceeds the benefit. But it’s just super icky to me to say “oh well, I’m irrational but I like it that way”. Maybe it’s mainly a shameful lack of intellectual curiosity. Don’t you want to understand the different factors that make your choices rational despite seeming irrational? (And they must ultimately be rational for you to endorse them; as you say, you’re at least proving your rationality to yourself.)

In conclusion, always be fighting the good fight against irrationality, I guess. Like by beeminding!

Bringing it back to the debate that started this debate, about money, I view that as a question of quantifying preferences, another pet topic of Bee’s and mine. So I’ll end with a cute and potentially relevant vignette from Eliezer Yudkowsky. I’m so deep in this way of thinking that I honestly can’t tell if any of this sounds idiotic, heretical, immoral, or obvious to normal people. I would not be surprised by any of those reactions so I’m curious what yours is!

Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.

There doesn’t have to be a financial price you’d accept to kill every sentient being on Earth except you. There doesn’t even have to be a price you’d accept to kill your spouse. It’s allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won’t exchange it for a trillion dollars.

Now, it does have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have – or any possible event you can bring about – at some probability for that sum of money. So it is true that as a rational agent, there is some probability of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.

I hope that clears up exactly what sort of heartless creatures economically rational agents are.
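
A minimal sketch of the expected-utility arithmetic behind that “at some probability” claim, with made-up numbers; the variable names and magnitudes here are mine, not Yudkowsky’s:

```python
# Sketch of the von Neumann-Morgenstern point above, with hypothetical
# numbers. Say the terrible outcome costs you L utils and the $50 is worth
# e > 0 utils. Taking the gamble (probability p of the terrible outcome,
# otherwise you pocket the $50) beats the status quo (utility 0) when
#   (1 - p) * e - p * L > 0   =>   p < e / (e + L)
# and since e > 0, that threshold is positive no matter how huge L is.

L = 1e15  # hypothetical disutility of the terrible outcome (any huge number)
e = 1.0   # hypothetical utility of $50 (any positive number)

p_threshold = e / (e + L)
print(f"take the gamble for any p below {p_threshold:.0e}")  # ~1e-15
```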

1 Like

PS: i got some predictable pushback on the money part of this at the end, and the following has seemed to convince people:

i think the way to make it intuitive is to consider how much you’d pay for a fancy safety feature on your car. the answer is… less than a million dollars. but that means you’re literally saying that it’s not worth a million dollars to reduce the probability of you and your whole family dying in a car crash. which is true, you’re happy to accept that small-but-nonzero chance of killing them in exchange for a million dollars. it makes it sound icky when quantified with money but the non-icky sounding claim (“my family has infinite value and i will not kill them with any nonzero probability in exchange for any amount of money ever”) is unambiguously contradicted by your behavior every day. such as ever leaving the house.

i think typical behavior (how much people pay for safety features, how much more you have to pay someone for a dangerous job, etc) implies that people will accept a 1/1,000,000 chance of death (known as a micromort) in exchange for something like $50.
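
if you want the division spelled out, here’s a minimal sketch with hypothetical numbers for the safety feature (both the risk reduction and the price are made up):

```python
# Back-of-the-envelope version of the safety-feature argument, with
# hypothetical numbers. If a feature cuts your chance of dying by delta_p
# and the most you'd pay for it is max_price dollars, your revealed price
# per micromort (a 1-in-a-million chance of death) is:

delta_p = 5e-6     # assumed risk reduction from the feature
max_price = 250.0  # assumed most you'd pay for it, in dollars

price_per_micromort = max_price / (delta_p / 1e-6)
print(f"~${price_per_micromort:.0f} per micromort")  # ~$50 with these numbers
```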

2 Likes

I think the difference here is that when you leave the house, take a trip in your car, buy a slightly less safe car, etc., the purpose is not directly and specifically to gain money by increasing the chance of your family’s death, whereas if someone directly offers you $50 to play Russian Roulette with your family, there’s no other purpose.

Compare these two hypos:

Hypo 1: After doing research to purchase a safe car for your family, you have narrowed the choice down to 2 cars, both expensive due to being rated A+ for advanced safety features. One costs $50 more and contains a slightly newer model of airbag shown to decrease the chance of death or serious injury by 1/1,000,000. Since they are both very safe and you are already spending a lot of money on the car in order to provide your family with a safe and reliable means of transportation, you go for the car that costs $50 less.

Hypo 2: A completely reliable, honest, and trustworthy madman kidnaps your family. He offers to let them go, if you’d like, but suggests you play a little game first. If you want, you can roll six 10-sided dice, guaranteed to be fair. You can roll them as many times as you want, and he’ll pay you $50 each time. But if they all come up ‘0’, the madman kills them all. You roll the dice once, then let your family go.

I’ve of course worded the hypos to make Hypo 1 seem reasonable and Hypo 2 seem psychotic. Mathematically, they’re the same, but the fact that in Hypo 2 your family would die immediately and unnecessarily because you played games with a madman, whereas in Hypo 1 you researched safety and chose the second-safest car, triggers our ethical feelings. Which makes sense, because in the real world people have to make safety trade-offs all the time, whereas someone who took money from a madman who might kill their family is someone you might not want to trust.
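
As a quick check that they really are mathematically the same, here’s a minimal sketch (the $50-per-micromort price is carried over from earlier in the thread):

```python
# Six fair 10-sided dice all landing on '0' has probability (1/10)**6.
from fractions import Fraction

p_all_zeros = Fraction(1, 10) ** 6
print(p_all_zeros)  # 1/1000000 -- exactly one micromort
# So one $50 roll in Hypo 2 is the same trade as Hypo 1: fifty dollars
# for a 1/1,000,000 chance of your family dying.
```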

This is sort of like the trolley problem in philosophy, where people have no problem switching the track that the train is on to kill 1 person instead of 5, but see it as unethical to personally shove someone onto the tracks to accomplish the same thing, or to kill one person to harvest their organs to save 5 people’s lives. A pure utilitarian/economic perspective is missing deeper factors like intent and purpose, even if the pure math and the consequences work out the same way.

I think the key word here is “value.” What is it that you value in life, and how do you make choices to promote those values? That’s what rationality is: understanding what it is that you truly value, and setting up systems (like Beeminder) that help you live a life in line with those values. All the math and discussion of utility functions is just there to serve your values.

You can’t choose your values; they’re just there. They often conflict, since there isn’t just one unified self. Rather, as Marvin Minsky put it, there’s a whole society of mind: many different selves with different preferences, desires, and agendas.

Hofstadter wrote this amazing story in 1983 about rationality and existential risk, and it seems more timely now than ever:

Related: The Tale of Happiton, by Douglas Hofstadter

3 Likes

I remember that email from 2018! I think I disagreed with your thesis because it turns out that our irrationalities are often beneficial when taken as a set. For example, we have things like the sunk cost fallacy (which is irrational) to help us overcome inertia-in-the-moment (which is also irrational) – so I buy tickets for a show on Tuesday, but when Saturday comes around I think “I don’t WANT to put on pants and leave the house today, but I already spent my $40 and I don’t want that to go to waste, so I’ll go find my pants I guess” and have a great time. If you “rationally” cured your sunk cost fallacy, but did not cure your inertia-in-the-moment (I’m sure there’s a fancy name for this problem that everyone I know has), you’d end up missing out on a great show that you would have really enjoyed.

Which is I guess a lot of words to say that you should Chesterton’s Fence all your irrationalities before you work too hard on getting rid of them. To make it Beeminder-relevant: what is your akrasia saving you from? In my case, it was saving me from doing too much; the thing I struggle with now that I have Beeminder to force me to live up to all my hopes is chronic, persistent sleep deprivation (and beeminding bedtimes just means I pay a lot of money to Beeminder!), especially now that I have a kid. In the long term, it’s certainly useful to realize that there is (far) more I want to do than I can do, but in the short term (by which I mean years!) it’s been a serious problem to untangle.

2 Likes

Ooh, check out my old blog post about the sunk cost fallacy and hyper-correcting for it, which I think your pants-finding example also highlights:

http://messymatters.com/sunk/

Great point about Chesterton’s-fencing seeming irrationalities in general, too!

1 Like

Good post! I think my feeling about what counts as “hyper-correcting” is even more broad than yours though; the examples I’m thinking of are definitely cases where I wouldn’t have chosen to go to the thing if it was free, and in fact I was ugh and dreading the thing, but then when I went anyway I had a good time. This happens often enough that I now assume that my in-the-moment feelings about going to An Event are significantly less accurate than my long-term future feelings, so I try to pre-commit such that not going will feel dumb (either telling a friend I’ll be there, or paying money). Fighting dumb brain with dumb brain!

1 Like