Beeminder Forum

Future Discounting


#1

In the Commits.to spec I propose a 6% or 36% per year discount rate: https://github.com/commitsto/commits.to/wiki#for-later-future-discounting

But then @byorgey said the following and everyone (who I asked in Slack) seemed to think he’s right:

I feel like a year is a really long time in commitment-land. That is, how reliable I was on my commitments a year ago has very little to do with how reliable I am now. In a year my life situation could have totally changed, I could have gotten a lot better (or a lot worse!) at picking good deadlines, etc. So to me a “reasonable-seeming” discount rate might be more like 90% a year (compounded continuously of course).

To translate from one brand of nerdery to another, in case that’s helpful, a discount rate is like a half-life. Your overall reliability is a weighted average of your promises’ scores, and a promise loses half its weight every so many years. A 6% discount rate means the half-life is about 12 years; for 90% the half-life is about 9 months. (The formula is log(2)/r.)
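To make that concrete, here’s a minimal sketch in Python of the half-life conversion and the discounted weighted average. The promise history and scores are made up for illustration; nothing here is the actual commits.to implementation:

```python
import math

def half_life(rate):
    """Convert a continuously compounded discount rate (per year) to a half-life in years."""
    return math.log(2) / rate

def discounted_reliability(promises, rate):
    """Weighted average of promise scores, where a promise aged `age` years
    carries weight exp(-rate * age). With rate=0 all promises count equally."""
    total_weight = sum(math.exp(-rate * age) for _, age in promises)
    return sum(score * math.exp(-rate * age) for score, age in promises) / total_weight

# Hypothetical history: (score, age in years)
history = [(1.0, 0.1), (0.0, 0.5), (1.0, 2.0)]

print(half_life(0.06))                        # ~11.6 years for a 6% rate
print(half_life(0.90))                        # ~0.77 years (~9 months) for 90%
print(discounted_reliability(history, 0.90))  # recent promises dominate
```

Note that with a higher rate the score leans more on recent promises, so in this toy history the 90% rate gives a lower score than a 0% rate would, because the recent broken promise weighs relatively more than the old kept one.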

So, straw poll, what do you think Commits.to’s discount rate should be? (It’s approval voting, so check all the ones that seem reasonable to you.)

  • 0% (half-life infinity – all promises weighted equally)
  • 6% (half-life ~12 years)
  • 36% (half-life ~2 years)
  • 50% (half-life 1.4 years)
  • 69% (half-life 1 year)
  • 90% (half-life ~9 months)
  • 100% (half-life ~8 months)
  • 139% (half-life 6 months)
  • 832% (half-life 1 month)
  • 3604% (half-life 1 week)



#2

You can check all the options you want*!

* you are only allowed to want at most 3


#3

oops, try now!


#4

I had a very brief email exchange with @dreev where I wondered about using some kind of Bayesian updating for one’s commitment reliability, rather than the proposed discounting model. That would give it some theoretical underpinning, I thought. Thinking about it some more, though, I fear it may be hard: if what we are estimating is the probability distribution of some series of events, then the usual assumptions include that:

  1. The events are independent - so the probability of this event isn’t dependent on what happened before, and
  2. The probability of the event is stationary - that is, it doesn’t move around.

Neither of these, I think, holds here. If I’ve failed on my previous commitment, then that might well make me more likely to try to achieve this one (or perhaps more likely to think “well, I’m failing already, so who cares!”). And the probability of meeting a commitment is likely to drift over time, too - that’s why we might need discounting in the first place.

I will continue to investigate these kinds of statistical models, though! There are certainly some models where the probability isn’t stationary, such as with time series (e.g. it’s more likely to rain at particular times of year). So they may hold something. Anyone else have any thoughts on this? Either appropriate models, or perhaps that it’s a daft idea and, really, what am I thinking with all this? :wink:
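One way to split the difference between the two approaches - purely an illustrative sketch, not anything from the commits.to spec - is a Beta-Bernoulli model with an exponential “forgetting factor”: old promises still count as evidence, but their pseudo-counts decay, which sidesteps the stationarity assumption in much the same way the discounting proposal does. All names and numbers below are hypothetical:

```python
import math

def beta_reliability(promises, rate, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Beta-Bernoulli reliability model where each
    observation's pseudo-count decays by exp(-rate * age_in_years).
    rate=0 gives ordinary Bayesian updating; higher rates forget faster."""
    a, b = prior_a, prior_b
    for kept, age in promises:  # kept is 1 (fulfilled) or 0 (broken)
        w = math.exp(-rate * age)
        a += kept * w
        b += (1 - kept) * w
    return a / (a + b)

# Hypothetical history: (kept?, age in years)
history = [(1, 0.1), (0, 0.5), (1, 2.0)]

print(beta_reliability([], rate=0.9))       # no data: prior mean 0.5
print(beta_reliability(history, rate=0.0))  # undiscounted posterior mean
print(beta_reliability(history, rate=0.9))  # recent evidence dominates
```

A nice side effect of the prior is that someone with one kept promise isn’t immediately scored as 100% reliable, which plain weighted averaging would do.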


#5

I suppose we could also look empirically at how predictive past commitment reliability is of current commitment reliability.


#6

Late to the party, but all of a sudden pondering this one… I like to think I’m consistent in delivering on things promised and wouldn’t need things to age out at all to accurately reflect my rate of follow-through. But I think that’s probably a rosy view of myself as I’d like to be and not as I am. I think a year would be fair in terms of allowing people to get an accurate idea of someone’s follow-through rate; six months might be even better. I guess it depends on whether it needs to be useful for judging a person as a whole or whether circumstances should be taken into account. (The best of both worlds would be being able to selectively filter, with the statistics shown relating to things in the current filter, maybe.)

(Absent a filter for UX reasons, I think a year would average out the two needs best.)


#7

This is basically a copy-paste of my conversation with Daniel Reeves in Slack.

I voted 0% - here’s my reasoning.

It definitely adds some reconciliation to a promise that hasn’t been fulfilled, such as one that is well overdue, as opposed to just having it drag down your score. My question would be: once that’s implemented (if it is), would there be a way to keep it in check so that people don’t abuse it?

Granted, I know that the reconciliation process takes place over a year and involves (in general) a very low count. However, are there enough counterbalances for when people make, keep, check, and don’t keep promises to ensure that the Future Discounting function doesn’t become something that gets abused and then not noticed until well in the future? (I.e., like noticing you have 13,000 unread emails.)

Here’s a sketch of a possible situation of abuse of the Future Discounting function: I make three half-assed “I-Will” statements with due dates that I don’t really care about or don’t pay attention to. I let them lapse, and they drag down my score. No big deal, because there’s Future Discounting which will mean that later on, my score will go up if I just make some other I-Will statements that I keep or mark as kept.

Then promises, I-Will statements, the entirety of what commits.to is based upon, mean less. With Future Discounting, broken promises today matter less because you can make up for them tomorrow (or next year, depending on the structure).