The extreme part of this is saying “you can keep your money but we’ll delete your whole account”. But it’s ultimately the only leverage we have if we want to offer maximum possible weaselproofing. Perhaps that means we shouldn’t offer that. But here’s the full idea:
Instead of a single “weaselproof me” checkbox, you choose your level of weaselproofing:
1. None (default)
2. Make me jump through hoops like providing screenshots or having friends vouch for me
3. Make me offer serious evidence like doctor’s notes
4. All derailments are final [1]
[1] We can’t and won’t actually ever keep money you don’t ultimately choose to give us. But to uphold our sworn duty to weaselproof you at the level you’ve chosen, we’ll also delete your account (!). So insisting on your money back is tantamount to admitting that you don’t have sufficient integrity for Beeminder to help you and so we simply break up.
As if the long footnote weren’t already enough webcopy, we might also want a weaselproofing FAQ like so:
What if a derailment is unambiguously not legit but I can’t prove it at the level I weaselproofed myself at?
That can feel frustrating but there’s no way to solve that without defeating the point of weaselproofing. Hopefully Beeminder is providing so much motivational value that getting charged once in a while is more than worth it!
What if I fabricate evidence?
This is us rolling our eyes at you. But the time and emotional cost of that should itself be a deterrent. More to the point, if you can imagine doing that, it sounds like you want the next higher level of weaselproofing!
Can I turn weaselproofing off, or drop to a lower level?
As usual in Beeminderland, the answer is that you can after the akrasia horizon, i.e., starting next week. And the interface itself doesn’t allow it [nor does it exist at all yet; I’m just trying to keep the bar as low as possible for shipping this if we decide it’s worth doing!] so you’ll have to ask us nicely. We’re also eager for feedback on why you didn’t like the level of weaselproofing you picked, so we’ll grill you on that before dropping it.
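(To make the mechanics concrete, here’s a minimal sketch of how such a deferred settings change might work, assuming a seven-day horizon. The class and field names are hypothetical, not Beeminder’s actual implementation.)

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

AKRASIA_HORIZON = timedelta(days=7)  # loosening a commitment takes a week to kick in

@dataclass
class Weaselproofing:
    level: int                             # 1 = none (default), ..., 4 = all derailments final
    pending_level: Optional[int] = None    # requested lower level, if any
    pending_at: Optional[datetime] = None  # when that lower level takes effect

    def request_change(self, new_level: int, now: datetime) -> None:
        """Tightening applies immediately; loosening waits out the horizon."""
        if new_level >= self.level:
            self.level = new_level
            self.pending_level = None
        else:
            self.pending_level = new_level
            self.pending_at = now + AKRASIA_HORIZON

    def effective_level(self, now: datetime) -> int:
        """The level that actually governs a derailment dispute right now."""
        if self.pending_level is not None and now >= self.pending_at:
            self.level, self.pending_level = self.pending_level, None
        return self.level
```

Under this sketch, someone who panics mid-derailment and begs to drop from level 4 to level 1 would still be judged at level 4 for the next seven days.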
Finally, here’s an anonymous poll to give a sense of where people fall on this weaselproofing continuum:
I’d never opt in to any weaselproofing
Existing binary weaselproofing is plenty for me
I’d value stricter weaselproofing than Beeminder now offers
I didn’t realize I could just “insist” on getting my money back and you’d cave, and it’s probably bad to let on that that’s true
I’d value the most extreme weaselproofing where I can only weasel by deleting my whole account
Don’t offer the account-deletion weaselproofing – I might actually do it!
I love the extreme weaselproofing idea. Just to clarify, the process here is:
1. The weasel calls a derailment not legit.
2. We tell them: hey, you told us to delete your account entirely if you tried to wriggle out of this commitment!
3. They, hopefully chastened, accept our wisdom and we keep their money. OR
4. Their whole account is deleted.

But what if they just make another account?
Give them the benefit of the doubt? Treat them as a new non-weaselly user? I’m assuming yes, since we currently let people who ask nicely change weaselproofing a week out.
Good question. We could hash their email address (or even payment info). [1]
[1] For non-nerds, hashing something is a way to completely delete it and yet, mathemagically, have a way to recognize it in the future. For example, if we delete alice@example.com but save a hash of “alice@example.com” then when alice tries to sign up again we could say “sorry, that email address is perma-banned”. And yet there’s literally no way we can ever recover alice’s email address. Maybe that’s overkill but just pointing out that it’s technically possible to completely delete someone’s account and still prevent them from re-registering.
(Tangent off that tangent: I’m now genuinely unsure if the above sounds magical and amazing to non-nerds or if it seems like it’s obvious that that would be possible. After all, it’s common for a human to have no way to recall a piece of information and yet still unerringly recognize it if they see it again.)
That is literally not true. You just have to guess right, which is actually way easier for email addresses than it is for passwords.
(To be clear, keeping a hash of a deleted account’s email is a great idea and would work well in practice. But you said “literally no way” which awoke the Nitpicking Detail Gnome who lives in the back of my brain.)
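(For concreteness, here’s a minimal sketch of the hash-and-ban idea in Python. The function names are made up and SHA-256 is just one reasonable choice of hash; and note, per the nitpick above, that the “no way to recover it” property is only as strong as the difficulty of guessing the input.)

```python
import hashlib

banned_hashes: set[str] = set()  # the only thing we keep after deleting an account

def hash_email(email: str) -> str:
    # One-way: we can recognize the address later but never read it back.
    # (Modulo the nitpick above: anyone holding the hash can still
    # guess-and-check candidate addresses.)
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

def delete_account(email: str) -> None:
    banned_hashes.add(hash_email(email))
    # ...then actually delete every record containing the address itself

def allowed_to_register(email: str) -> bool:
    return hash_email(email) not in banned_hashes

delete_account("alice@example.com")
assert not allowed_to_register("alice@example.com")  # "sorry, perma-banned"
assert allowed_to_register("bob@example.com")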
If you send any requests to Beeminder to undo a derailment, we’ll hire an assassin to kill you (please deposit $X now so we can pay the assassin)
You know what, even under 5 I’m definitely going to weasel, so just kill me now
Please destroy the entire universe if I ever try to weasel, thus making the probability that I’ve ever tried to weasel, conditional on the universe still existing, ~0
So I would be super interested in trying out EXTREME BEEMINDING, aka “all derailments are final.” This actually seems far better to me than just “weaselproofing” or the current default “give us a good excuse” setup.
As an example, I’ve had multiple times where I’ve planned a vacation but forgotten to take time off in Beeminder. Then the vacation rolls around and I send you guys an email just before (or just after) I derail, asking for mercy. Which you give me! Because you’re awesome! But part of me thinks this is suboptimal. Like, one of the whole points of Beeminder is that I look ahead and make good long-term decisions instead of short-term decisions that–ooh, shiny!–are maybe less optimal.
I’m considering implementing this myself (in a way that creates no additional overhead for you) by just promising to start prefixing any emails to support with something like “ALL DERAILMENTS ARE FINAL - PLEASE CHARGE ME REGARDLESS OF EXCUSES.” Which I guess functionally would be the same as promising to never ask you not to charge me.
I would never use it, but I’m tremendously amused at the idea of XTREME BEEMINDING, and I would love to hear more stories of XTREME BEEMINDERS (like jds above).
I guess it depends on whether we want to give weasels a chance to turn over a new leaf, with the punishment of losing all their data and ongoing goals. Is that sufficient penance for them to start a new account? Or is this a tier of XTREME BEEMINDING somewhere between “all derailments are final” and “permabanned if you weasel out”?
When it comes to failing at your goals without paying, this discussion is only about the lie-to-support method of weaseling. But there is also the lie-to-the-computer method: fake data entry. Aren’t the two linked directly? The more weaselproofed the support route is, the more attractive fake data entry becomes, no?
It seems to me that the only difference between the two methods is that fake data ruins the quantified-self aspect of Beeminder. But I wonder, are support weaselers actually thinking “I’ll let this goal derail and reply to the legit check because I want my data to stay pristine”? Or am I missing something?
In any case, it seems like a stricter support-weaseling policy would just make data weaseling more attractive?
@drtall I think another difference would be that the support-weasel slope is less steep and slippery than the data-weasel / lying slope.
Confession: I’m a reformed data-weaseler. The first time I started using Beeminder, I loved it. A few months later, I fudged. Just a little, like, I couldn’t make it to the gym that day so I made, uh, a “preliminary” data point. MAN was that a slippery slope. Within a few weeks, actually probably more like days, I was just saying “oh, an eep day? HAVE ANOTHER DATA, BEEMINDER!” Literally I was only using Beeminder to lie to it. (… I’m sorry @dreev forgive me!)
Obviously at this point Beeminder was now providing zero or negative utility, so I just archived all my goals and stopped using it.
Afterwards though I would think back and remember how Beeminder had helped me, before The Weaseling. After maybe a year I decided to give things another try. These days I’m never even tempted to lie–it would literally instantly destroy all the considerable and obvious value that I get from Beeminding.
Sorry for the novel–all this is just to say that data-weaseling for me was ridiculously slippery. Less of a slope really than a sheer cliff with some giant jagged rocks at the bottom.
And I don’t know for sure, but I think support-weaseling would be less slippery. Lying to a computer is just too easy. I lied to Beeminder’s servers for weeks with no “external” guilt (like–I felt crappy because I was disappointing myself, but I didn’t feel bad about the lying-to-someone aspect at the time). Lying to a human though? I think most people would have a much harder time keeping that up.
And that less-slippery slope is nasty, because the whole point of Beeminder is to turn nice slow downward gradients that we happily descend into giant steep cliffs that we carefully avoid.
That makes sense. I suppose I was thinking about this as the one-time weasel question but you’re right this is actually an iterated game.
Confession: I have data weaseled periodically, and I have experienced the slippery slope you describe exactly. Thankfully it has only ruined my goals on an individual basis and didn’t ruin Beeminder entirely for me.
Dang, that’s brilliantly put! Can we embroider that somewhere?
And really good discussion about the difference between data weaseling and support weaseling. I hadn’t thought it through before but you’re both right – @drtall and @jds02006 – that tightening down support weaseling could push weasels towards data weaseling, except that that slope is so much slipperier that maybe the temptation to put a paw on it is diminished.
Slightly offtopic: is there a pair of terms that distinguishes “interpreting things in the most generous, perhaps even silly, but not technically undeniably untrue way” from “straight up lying to Beeminder”? Because if not, I think there should be. I’ve done the former many times, but can’t recall ever doing the latter.
I think they’re both data weaseling but of different severities.
Leaving the precise definition of a goal unspecified leads to wiggle room. Ideally every goal would have a fully specified, precise definition of what counts and what doesn’t. Then, changing that precise definition should be subject to the akrasia horizon just like any other adjustment to your commitment. If you’re editing the definition of your goal inside the akrasia horizon, you’re data weaseling.
In reality, a lot of goals have corner cases that are hard to think of in advance. Consider the discussion in the “Advice for goals which have external dependencies” thread. It’s impossible to pre-enumerate all possible issues. But I wouldn’t call that “weaseling” unless the definition shifts around a lot or is applied inconsistently.
Yeah, it’s interesting which weaseling “counts” to us and which doesn’t. For instance, if I weigh in over my limit, I have no personal qualms about deleting the datapoint, because I have a separate goal that requires me to weigh in every 3 days or so. This helps me get around instances where a salty meal increases my weight 8+ pounds, but only for a day.
On the other hand, I would never modify a weight, because that’s way too much cheating for me.
That’s funny: on our ancient Withings scale (now technically Nokia; we ordered it from Europe in 2008 or 2009, before wifi scales were a thing in the US!) I often peek at my weight by stepping on and stepping back off before it registers. I also have a weighins goal to ensure I can’t do that indefinitely. So it’s functionally exactly the same as deleting the datapoint if it’s too high, but touching the data at all is the bright line I’m afraid to cross.
I don’t think that I agree. I mean, yes, there’s a way I could interpret it to make it true, but I think it’s much simpler and equally accurate to say that at least sometimes, I intentionally leave (limited) wiggle room in the definition of the goal as part of what the goal even is. Making use of that wiggle room is IMO completely, utterly different from lying to Beeminder.