Maybe the key to seeing Beeminder’s incentives as wholly non-perverse is to remember that derailing does not equal failing. Derailing is a lot of things — a kick in the pants, paying for an immediate break because the goal has been really hard, valuable information about how realistic the goal is — none of which have anything to do with failing. The only ways to fail are to quit the goal, to set the yellow brick road to something stupidly easy, or to cheat.
That’s also the crux of the article: it presupposes a certain definition of “failing” and then goes on to say that derailing is no such thing. In my opinion, what counts as failing is a very subjective question that can’t really be dictated by Beeminder, because it’s an opinion- (and emotion-) based matter and can change not only from individual to individual but also from goal to goal, day to day, etc.
I know Beeminder is a company and needs, or at least wants, to have a set of applicable principles (a (predictive) theory of Beeminder) to gauge things like revenue and user engagement. But I think this is also a prime case of removing unknowables by presupposing certain definitions and then working under the assumption that the unknowables have now become knowable. I would even suggest that many users couldn’t articulate why something feels like a failure, which makes following @dreev’s definition even harder.
To put it in an image: the article claims to be a map of a territory, but it’s actually a blueprint for a terraforming project, since it doesn’t describe the landscape, which remains unexplored. Instead it draws up a class of use cases and tries to rework the discursive landscape in which it operates (that is: convince other people that what they see is indeed what @dreev describes, if we only put a fresh coat of paint on it. And remove this detail. And add this extra thing over here. And so on.).
Then again, that’s basically how you get to a predictive theory of anything: you control the discourse through standardization (of definitions) and so on… So I don’t even know if it’s a bad thing in that sense, but we also lose complexity (and in this case a lot of it) by clearing things up that way…