I may be a bit maximize-y by nature, but since the functions I want to maximize tend to include factors like time spent and anxiety, being a satisficer is commonly what maximizes those functions!
It’s like Bounded Rationality in AI/philosophy. Satisficing is the real maximizing.
Put another way: I’m a die-hard maximizer, no compromises ever! Optimize literally everything! But taking too much time to decide or suffering anxiety is not optimal, so I commonly appear to be satisficing. It’s all still maximizing, though, and if I find a way to improve I would never dream of saying “nah, I don’t care about improvements, this is good enough.” Instead I’d say “that’s an improvement in one dimension but it has this cost that may not be worth it.”
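Here’s a minimal toy sketch of that idea, with entirely made-up numbers and functions (nothing here is anyone’s actual decision procedure): if the quantity being maximized is option quality minus the cost of deliberating, then the maximizing amount of search is modest, which from the outside looks like satisficing.

```python
import random

# Toy model: each extra minute of deliberation might turn up a better option,
# but deliberation itself costs something (time spent, anxiety).
# All names and numbers are invented for illustration.

def best_option_found(minutes_spent, rng):
    # Diminishing returns: more search tends to find better options,
    # but each extra minute helps less than the last.
    return max(rng.random() for _ in range(minutes_spent + 1))

def net_utility(minutes_spent, rng, cost_per_minute=0.03):
    # The thing actually being maximized: option quality minus decision costs.
    return best_option_found(minutes_spent, rng) - cost_per_minute * minutes_spent

if __name__ == "__main__":
    # Averaged over many runs, net utility peaks at a modest amount of search:
    # the "satisficing" stopping point is what maximizes the full function.
    for minutes in (0, 1, 3, 5, 10, 20):
        avg = sum(net_utility(minutes, random.Random(seed)) for seed in range(2000)) / 2000
        print(f"{minutes:>2} min of deliberation -> avg net utility {avg:.3f}")
```

In this toy setup the average net utility rises for the first few minutes and then falls, so stopping early isn’t giving up on optimality; it *is* the optimum once the costs are in the function.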
Related: Philosophy, utility functions, rationality, heartless economists