PS: I think there are multiple relevant xkcds on this…
+1 dreev, although I don’t want to lose the usefulness of the words satisficer and maximizer.
Career-wise, I have been surrounded by clients (and paid to advise them) who spend a lot of time and money making sure something is very, very good, and they usually don’t realize that caring about maximizing that one thing both comes at the expense of other things and rests on an intrinsic assumption that the thing happens at all.
Opportunity cost rules everything around me.
Behavior that looks like maximizing but is problematic usually comes from not including enough things in the system that’s being maximized. When a classic maximizer looks at a classic satisficer and envies them, that maximizer is probably not thinking “big picture” enough.
That sounds plausible. I recall there is scientific evidence for satisficers being happier. Is there also that sort of evidence for the project success thing?
A hypothesis I don’t know the answer to: I wonder if maximisers are more successful in the projects with the biggest impact? (Steve Jobs, yada yada)
So did you vote “Maximizer”? Or do you not believe that by nature you are one?
I’m a satisficer in that I will start using a thing that is not MAXIMALLY optimal… but I do that for the reasons mentioned above: it’s a matter of it also being optimal to just make a decision now, with the understanding that I’m willing to move on later to something more optimal, so… yeah. I’m It’s Complicated, surprising probably no one.
Disagree. Bounded rationality is neither maximizing nor satisficing - it’s sort of a middle ground. Yes, the point is to maximize utility in a “big picture” sense, as @adamwolf points out, by including consideration of resources, but that’s not what maximizing is. It’s also not what satisficing is.
Specifically, we’re talking about procedures for choosing one option of a set of multiple options.
I’ll illustrate with @mkalbert’s restaurant example:
Maximizer: there are 40 things on the menu. I’m going to read over each one and choose the best.
Satisficer: I’m going to look through the menu. Once I find one thing that’s good enough, I’ll order it.
Bounded rationality: in this context it doesn’t make sense to spend more than a few minutes choosing an option. So I’ll use those few minutes to first come up with a way of deciding and then decide. I know this restaurant is known for X, I’m kinda craving Y, and I’ve heard good things about Z, so I’ll restrict my choices to the 10 things that are close to X, Y, or Z. I don’t have time to choose the absolute best, so I’ll narrow further to a few that seem like they’d be good enough, and then I’ll pick the best of those few.
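For concreteness, here is a toy sketch of the three procedures in code. The menu, scores, and thresholds are all invented for illustration - nothing here comes from the thread:

```python
# Hypothetical menu: dish -> how much I'd actually enjoy it (0-10).
MENU = {"pasta": 6.0, "pizza": 8.0, "salad": 5.5, "steak": 9.5, "soup": 7.2}

def maximize(menu):
    """Maximizer: evaluate every option, return the single best."""
    return max(menu, key=menu.get)

def satisfice(menu, good_enough=7.0):
    """Satisficer: scan in order, stop at the first option that clears the bar."""
    for dish, score in menu.items():
        if score >= good_enough:
            return dish
    return next(iter(menu))  # nothing clears the bar: just take something

def bounded(menu, budget=3):
    """Bounded rationality: only examine `budget` options (limited time),
    then pick the best of those - trading optimality for effort."""
    examined = dict(list(menu.items())[:budget])
    return max(examined, key=examined.get)
```

With these made-up scores, `maximize` returns "steak", while `satisfice` and `bounded(budget=3)` both settle on "pizza": good enough, found faster, and possibly missing the best option - exactly the trade-off described above.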
Or, put another way:
Maximizing means your goal is to choose the best option at all costs.
BR means your goal is to choose the best option given limited resources, and you will therefore use heuristics as shortcuts at the cost of possibly missing the best option.
Satisficing means you don’t care about choosing the best option. That’s not even a consideration. You just want to choose an option that’s good enough. It’s a completely different way of thinking.
I see satisficing as lazy and myopic. It’s being complacent. A satisficer isn’t willing to spend another minute looking at multiple “good enough” options. That might save time now, but could cost you later on if you end up needing to switch, or if another minute of looking would have revealed an option that would have been significantly better.
Sure, you could think of satisficing as a specific extreme BR strategy to be used in unusual situations with very limited resources (I hold a gun to your head - choose one NOW).
The interesting thing is that you might not be able to pay all costs. And since we can’t escape the myriad contexts in which we act, literally everything depends on everything else, which IMHO makes the distinction moot again when talking about ourselves. It might make sense - theoretically - to label a subset of decisions in this or that way, but the rest of your life cannot be boxed out like this.
Sooner or later, not budgeting your time and attention will come and bite you in the ass. And not only that: it will prevent a maximizing strategy altogether. Not taking this into account in every decision you make will lead to less and less actually maximized decision making.
I see maximizing as lazy and myopic. It’s almost tragic, really.
Yeah, you can rarely ever actually maximize 100% due to real-life constraints on time and information. You’re right, you always have to budget your time and attention. But you can sometimes come close to maximizing.
In real life you’re almost always engaging in some form of bounded rationality. It’s just a question of how bounded to make it.
Because discussions with purported “maximizers” tend to develop as this one does. First “maximizer all the way, others are dumb for not doing it”, then “yes, maximizing is not really possible, you can’t escape your reality…”. Meanwhile, purported “satisficers” tend to know all this too, and in comparison have learned to cope with the scarcity that is called life. Maximizers can’t understand this, or rather: their memories of this fact seem to be rather short. It has to be pointed out in every context, again and again and again… that’s why it’s so tragic, lazy and myopic: they have “outsourced” the labor needed to stay mindful of our scarce time and attention (they force the environment to point it out), because they can’t or don’t want to see this. And this behavior gets repeated over and over again. Lazy, myopic, tragic.
EDIT: Reading this again made me realize that it sounds much more aggressive than how I’d like to come off. It almost sounds like I have some personal agenda against maximizers or something like that. That is not my intention. I used hyperbole to tie the question, in the context of humans with lives, to the problem of scarcity in the most obvious way. But what I said about the distinction losing its meaning still holds true, too: when we have to pick and choose what to maximize, how far, and what we are willing to sacrifice for it, then the categorization becomes situational - a decision might be a maximized one, but not the human making it. So I am not against maximizers, because no human is a maximizer in this sense. I do tend to view maximized decisions with skepticism, though.
I can see how mine could come off as more aggressive than I would have liked as well. Sorry.
So it’s not actually maximizing that you’re criticizing, it’s arguing for maximizing?
As I see it there are three options when making a decision:
- Satisfice by default. Just choose something good enough.
- Engage in some type of bounded rationality. Look at your resources, decide how much time to put into the decision, come up with a procedure, and follow it.
- Try to maximize as best you can in the real world.
3 is my tendency, and it obviously causes a lot of problems. I don’t think I said others were dumb for not doing it - in fact I said it was my own anxiety triggering it. If anything I am the “dumb” one for over-maximizing lol.
With choice 2 there are people who tend to err on the side of perfectionism or analysis paralysis, spending too much time choosing, and people who tend to err on the side of impulsivity, just going with something without spending enough time choosing. I think this is typically what people mean when they talk about satisficers vs maximizers.
It’s choice 1 - generally satisficing by default, without thinking about it or determining whether you could possibly benefit from putting some time into decision-making - that I think is myopic because it might save energy now but cause problems later.
So - extremes are more likely to cause problems. Bounded rationality is helpful. The take-away for me here is to remember to bound my rationality and stick to the bound!
I still can’t decide on what web clipper to use and what note-taking app to use
(responding to Adam and Daniel)
I think you both already see this distinction, but: I wonder if you might both turn out to be talking about “System 2”, while the satisficer/maximiser distinction that’s been identified out there in the world in scientific work is about “System 1”? I think you are discussing ideas you have adopted rather than your personality types (isn’t it great to be human and able to do that?). In my poll I was referring to personality types rather than adopted behaviour when I wrote “by nature”.
Adam’s thought about maximisers not including enough things in the optimisation is, I guess, true in the world of critical thought that sits on top of System 2, but it could also be reinterpreted as a testable hypothesis about System 1 psychology (and/or machine learning), which I guess is more about things like how people create ideas, make choices with imperfect information, respond quickly, etc.: are maximisers that way because they tend to focus on one thing rather than many, or for some other reason? I’m hopeful that’s the sort of question we might find a decent explanatory answer to in my lifetime!
Daniel’s mention of AI reminds me: whenever I hear the word “optimize” in recent years, the dichotomy that comes to mind is not maximising vs. satisficing, but optimising vs., in machine learning terms, regularisation (“stability of generalisation”). Optimisers (I’m talking about people) often seem to me to neglect the other side of that dichotomy. I suppose I’m thinking of optimisers of a more everyday variety than Daniel, though. “Regularised” is a fifty-dollar word, and you might say I just mean “safe” – but the concept does show that there’s more depth to the concept of stability than mere safety or stasis. In social / legal / political problems, for example – which do connect with everyday life – “stability” doesn’t seem far from “regularisation”. I guess that’s also true of things like getting through life, self-improvement, etc.
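To make the analogy concrete: in machine learning, regularisation subtracts a penalty from the raw objective so the chosen solution is stable, not just maximal. A minimal sketch of the same idea applied to everyday choices, with entirely invented plans and numbers:

```python
# Hypothetical options: name -> (raw payoff, fragility under unusual events).
PLANS = {
    "aggressive": (10.0, 6.0),  # highest payoff, falls apart when surprised
    "balanced":   (7.0, 2.0),
    "cautious":   (5.0, 0.5),
}

def pick(plans, lam=0.0):
    """Choose by regularised score: payoff minus lam * fragility.
    lam = 0 is the pure optimiser; larger lam weights stability more."""
    return max(plans, key=lambda name: plans[name][0] - lam * plans[name][1])
```

With these made-up numbers, `pick(PLANS)` chooses "aggressive" (pure optimising), while `pick(PLANS, lam=1.0)` chooses "balanced" - the regularised objective gives up some raw payoff in exchange for holding up better in unusual situations.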
Unfortunately it seems difficult to test the real importance of this prejudice of mine, because this kind of stability only shows itself in unusual situations and events.
Bringing my self indulgent ramble just about back on topic: I suspect this optimise/regularise axis has more to do with ideas than with personality, since I’ve never noticed psychologists talk about personality factors that resemble it. But perhaps it is connected in some way with the well-known personality factors that psychologists do talk about such as openness, and also with getting things done?
So… are you a maximiser? (genuine question – see first half of my previous post)
On the other side of the same coin: maybe the maximizer personality trait makes it even just a little bit easier to find some kinds of new knowledge (in other words: progress)? Unreasonable man and all that? It doesn’t have to be all knowledge, or even most: if any knowledge is easier to come by that way, having a few people for whom it does work that way is likely an enormous benefit to humankind. If so, if a maximiser wastes, in their own judgement, most of their life on what turn out to be unimportant obsessions, perhaps that’s not so important overall.
Progress is certainly none of lazy, myopic, or tragic.
I suppose this argument could apply (perhaps correctly) to almost any personality characteristic – “it takes all sorts” – but for what it’s worth, I have a hunch it might apply more so to this one.
(Answering all things backwards, because why not? It’s good enough for me.)
Let’s say this is the case: since nobody lives in a vacuum, we have to take into account the burden such obsessive individuals place on others who may be trying to find new knowledge, too. So it might just be that maximizers are better at finding some kinds of knowledge to the detriment of the efforts of non-maximizers, who, without the outsourced coping labor put on them, would be even more prolific at finding new knowledge in that specific domain. In short: we can make a lot of if-statements.
Let me also say that it’ll take empirical studies to confirm any of those if-cascades. Let me further say that it’ll take more than empirical studies, too (meaning the theoretical framework seems to oversimplify without much benefit). There is a lot of work to do.
Which brings me to your question:
System 1 and System 2 refer to Daniel Kahneman, I presume. So the next question is: does this distinction actually relate to System 1? I mean, it seems plausible, but how do we test for this? And furthermore: how do we self-assess it? And is it really helpful to relate it to only one system? Maybe there is a System 1 version and a System 2 version of satisficing and maximizing? And if so, is there only one distinction at play here?
But as I’ve tried to point out before: satisficing and maximizing are strategies that are applied situationally. People might tend to use one over the other, and we might then say that they “are” a satisficer. But then again, in our self-assessments we need to be mindful that maximizing is satisficing - somewhere else. Which means we can transpose any maximizer into a satisficer and vice versa if we pick the right context. In this way we can accumulate a lot of complexity and uncertainty. Which brings me to my next point: when it comes to self-assessment, the distinction seems to me a not-so-worthwhile oversimplification. It might work in psychological studies where there is an external observer looking at a defined situation, though the interesting question is how to carry the findings over into a more complex real-world context (for all the reasons stated above).
In short: I reject the premise that I am either/or.
No worries! I have enjoyed thinking about this.
Yeah. I regret stating it so strongly. Hyperbolic statements help me sometimes to inject some energy into the discussion, but I hope you won’t take it too seriously.
That’s a good, clear way of putting it. Funnily enough Option 1 seems the most intriguing to me.
That’s (trigger warning for sexual violence) the Dice Man approach! If you haven’t read it:
The Dice Man, a 1971 novel by career English professor George Cockcroft (writing under the pen name “Luke Rhinehart”), tells the story of a psychiatrist who makes daily decisions based on the casting of a die.
But anyways: I said my piece about the distinction not being super useful for real life humans three times now (as per the DRY-Principle I should give my criticism a name… but I can’t think of one ). I won’t repeat it here again.
I switched back to Evernote Premium last year (after 10 years of absence) and REALLY love it (again).
(Disclaimer: I have not read this entire thread. Also, I don’t know much about this topic.)
This is a super good point. But could the distinction be made based on what the person is intentionally focused on? The stereotypical perfectionist maximizes at the expense of other considerations by failing to take anything except the current obsession into account. (I know from personal experience.) So, yes, something is being satisficed, but not consciously.
My knowledge on this is limited to the following from the wikipedia article I linked, but apparently there have been twin studies (there’s a bit more there than I’m quoting):
I certainly regard myself as, though having particular tendencies one way, capable of changing them through rational thought – with as much help as I can get from tricks like beeminder of course.
I was trying to nudge people to notice that my poll was referring to the “System 1” idea that the wikipedia quote above points to, but I think I lost that tiny battle already.
I’m sure I might be wrong, but here’s an unapologetically psychology-free explanation of why your idea seems implausible to me. Fundamentally – on various different levels (evolution, learning, science) – the only way we know to find new knowledge is to somehow make guesses and then correct them (I don’t know who it was in cinema who said “nobody knows anything”: same idea). Because that is how new things get found out: if all people were clones, we might tend to make the same guesses, and get stuck. Something like that does seem to happen in our most well-regarded fields of study: mathematical conjectures stick around for centuries until a bunch of unusual people have quirky new ideas, and fundamental physics is thought by many to be pretty stuck in a “string theory groupthink” rut as we discuss this.
So, if progress is held back in the way you describe, what will the effect be? We should expect progress to be slower by some constant factor. On the other hand, if the problem is too much homogeneity? Then things can get stuck for as long as you like.
So though I may have put it kind of in these terms the first time around, perhaps even optimisation / pessimisation is not really the main problem here, but rather continuing to make steady progress at all – and in as many directions as possible at once. It’s not that I expect it to stop – just that diversity of thought is very important and I have merely a hunch that the maximizer personality trait might be good at providing that.
Do you believe your “outsourced coping labour” theory, or is that just an example of a possible way it doesn’t work out the way I’m guessing? If you do, according to your own standards, I guess you have some empirical study to back that up?
I wonder what you mean by “empirical” and “confirm”? It’s a trap, don’t answer that – you just triggered me a little bit with those words, what I’m getting at is: scientific truth doesn’t come from empirical tests, rather they (imperfectly) test explanatory theories. If our best theories – which aren’t limited to specifics like “how do maximizers affect other people’s lives”, or we’d know nothing about anything in science really – point one way, we should take them seriously, regardless of whether any empirical test focused on this particular subject ever gets done. In principle, anyway. In practice of course, our best theories of psychology are not super-powerful (yet), but I suppose I do think our best theories of epistemology are better than that in some ways, and that seems an important part of this, for the reason I tried to point at above.
Let me start with this: if I understand System 1 thinking at all, it refers to a kind of automatic/unconscious thinking. On this level, tendencies become high probabilities, I’d guess. So yeah. “Naturally” you might be a maximizer or satisficer. But: if we can change the strategy by employing triggers and retraining habits, then we have to be careful, because this kind of retraining might also happen unintentionally, through a suitable environment. Can I become a natural maximizer from a natural satisficer? Maybe. But then: why call it natural? Why not call it temporary?
Great stuff. Really enjoyed thinking about this. And I agree: we guess, we correct. We acquire knowledge. We learn what works, etc. But it’s all guessing. And it’s good to diversify the strategies. I agree with that too. And I furthermore agree that if there were only those two options, then it’s inescapable that you’d want both approaches available to you.
(Also we could go down the rabbit hole of what if satisficers would differentiate themselves in much more productive ways after maximizers are gone, which in turn would produce much more progress than the current dichotomy could, etc. Not that interesting, I think, but still another way to continue this thread of the discussion…)
What I have a hard time reconciling is that even a tendency to do things a certain way doesn’t mean that maximizing and satisficing are more than strategies. And if they are strategies, then any person can use either of them depending on what makes the most sense (or nonsense, which in the context of progress might be a good thing). But this means that a tendency to do one thing over the other isn’t really the last word on the problem. It’s just that: a tendency, which we can choose to give in to or not. Technology and policy can help to control this (at least somewhat, maybe even enough). I hope I could show that we can distinguish the question of what the “best” strategy for acquiring knowledge is from personality traits, at least a little bit.
About that study:
(Disclaimer: I have read the abstract. And what is said in the wikipedia article.)
If I understand this at all correctly, this study talks about self-reported cases and about personality traits. I have voiced my concerns regarding self-reporting these things (since maximizing is satisficing somewhere else). And at least to me, ending with personality traits seems incomplete (or an oversimplification, as I tried to say earlier), albeit an interesting starting point for a lot more work/discussion.
About the ‘work’ to be done (“empirical studies” and all that):
Yeah. It was a truly clumsy way to put this. It was probably even wrong. As somebody who has studied the history of science and technology, I should know better. I just tried to underline that we can accumulate a lot of complexity and uncertainty about our conjectures regarding this. Which is, generally speaking, a good thing! In other words: there is potential here! But then the question is: is it worth it? I hope I communicated that it seems to me too much of an oversimplification, since - say it with me - maximizing is satisficing somewhere else. Put differently: it’s not worth the trouble to make the distinction work. But what is worth the trouble: to think about what this simplified view articulates about a possible more complex view, and to work on that.
One might say that trying to make the distinction work is already the work I’m looking for. But then rejecting the distinction is part of this work in this view as well. So: yeah.
I didn’t even read the abstract, but: I’m not sure what alternative theory you’re discussing here that explains the data?
Are you saying you don’t believe certain people are prone by nature to spending hours finding a slightly cheaper gadget, and maybe in doing so neglecting to do their taxes? You’re an odd sort of beeminder if so, perhaps. To my understanding, that’s the sort of phenomenon that’s being referred to in this research, however you prefer to label it or try to explain it.
Nothing you can hang your hat on. Just my own little thoughts, strung together. Do you think it has merit enough to be talked about as a thing or are the missing credentials of these ideas an actual problem? I hope not. But I could see that this might be important to you.
No, I’m not saying that at all. At least I hope not! What I’m saying is that being prone to overuse one strategy is not the most interesting thing (I don’t even know if I needed confirmation that these personality traits exist per se). What I’m saying is that it gets interesting when we dissociate the strategies from our predetermined tendencies and ask ourselves: how could we work productively with these tools (we can choose strategies) we have found? What is needed to do so? So maybe what I’m suggesting is: “Alright, we might be prone to do certain things. But we might be able to change the impact of that. We might be able to employ these strategies consciously. We might need to observe our current situation and come up with some small private theory, a concept here and there, and we might find that employing these strategies is also a question of scarcity of resources, etc.” And of course we can turn this line of thinking inside out, in the sense that we can take for granted (at least as a thought experiment) that it’s not only subconscious traits at play here, and speculate that this mix of intentional and unintentional decision making is probably a more real-world version of what we can observe in our own lives.
This seems worthwhile. But collecting, or rather deciding between, personality traits? Not sure. Does this mean I reject that personality traits exist? No (really: me rejecting the premise of being either/or comes from not being willing to see my personality as determined by my traits, without any possible interference by my intentionality. It’s an ever-changing mix of things that makes me, me). That people can self-report traits under certain circumstances? No. That genetics or science or any of that is humbug? No. How does this follow from what I wrote? I can’t see it.
(Let me also say: I enjoy(ed) the discussion very much. I just find only talking about the distinction, and only as it pertains to predetermined traits, to be unfulfilling. Which is why I tried to open it up with some thoughts of my own. But maybe that’s just annoying and/or easily misunderstood. I am not sure if my being a non-native speaker has anything to do with this. Maybe I should have used many more disclaimers? And/or maybe there is some culture divide at work here: maybe I’m just more used to a free-floating, ad-hoc theory-building kind of discussion style, wherein credentials do not matter as much - which doesn’t mean they don’t matter at all either? Hm. This is what confuses me the most.)
I am tentatively on the side of @matti here (I think… the conversation honestly could do with a flowchart to keep track of who is proposing what!), based on my own experiences. I am both a hardcore maximizer (I have done multiple hours of research to choose which toothpaste I should use, which cell phone I’ll buy, and oh my god the hours I spent picking out my refrigerator) and a die-hard satisficer (not only do I pick the first good-looking item on a menu in a new restaurant, if that item does in fact prove tasty, I will order it every subsequent time I visit that restaurant). For every case of maximization (to an absurd degree) I’ve dedicated myself to, I can give an example of a non-trivial decision that I completely made off the top of my head/my gut feeling.
So I honestly don’t even know what I’d say I “naturally” leaned towards. I maximize the decisions that are important and interesting to me, and satisfice the ones that I don’t care about or that bore me. I’d say “isn’t this what everyone does?” except the preceding conversation makes it clear that no, this isn’t what everyone does. But it does mean my status re: M vs S is strongly environmentally determined, and any personality test that attempted to determine my M/S ranking would swing wildly based on exactly what questions they asked or what context I was thinking of myself in.
So based on myself, I would be skeptical of an academic attempt to discuss M vs S as “personality attributes” unless they made some very specific definitions of what M and S mean, and when.