Future Discounting


In the Commits.to spec I propose a 6% or 36% per year discount rate: https://github.com/commitsto/commits.to/wiki#for-later-future-discounting

But then @byorgey said the following and everyone (who I asked in Slack) seemed to think he’s right:

I feel like a year is a really long time in commitment-land. That is, how reliable I was on my commitments a year ago has very little to do with how reliable I am now. In a year my life situation could have totally changed, I could have gotten a lot better (or a lot worse!) at picking good deadlines, etc. So to me a “reasonable-seeming” discount rate might be more like 90% a year (compounded continuously of course).

To translate from one brand of nerdery to another, in case that’s helpful, a discount rate is like a half-life. Your overall reliability is a weighted average of your promises’ scores, and a promise loses half its weight every so many years. A 6% discount rate means the half-life is about 12 years; for 90% the half-life is about 9 months. The formula is half-life = ln(2)/r.
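To make the half-life correspondence and the weighted average concrete, here’s a small Python sketch. The promise scores and ages below are made-up numbers for illustration, not anything from the spec:

```python
import math

def half_life(rate):
    """Half-life (in years) for a continuously compounded discount rate."""
    return math.log(2) / rate

# Sanity-check the numbers above: ~11.6 years at 6%, ~9.2 months at 90%.
print(round(half_life(0.06), 1))
print(round(half_life(0.90) * 12, 1))

def reliability(promises, rate):
    """Discounted weighted average of promise scores.

    promises: list of (score, age_in_years) pairs, newest or oldest first --
    order doesn't matter, only each promise's age does.
    """
    weights = [math.exp(-rate * age) for _, age in promises]
    scores = [score for score, _ in promises]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Hypothetical history: kept a promise 0.1 years ago, broke one 0.5 years
# ago, kept one 2 years ago. With a 90% rate the old success counts little.
history = [(1.0, 0.1), (0.0, 0.5), (1.0, 2.0)]
print(round(reliability(history, rate=0.90), 3))
```

With a 0% rate the same history would just be the plain mean of the scores, which is the “all promises weighted equally” option in the poll.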

So, straw poll, what do you think Commits.to’s discount rate should be? (It’s approval voting, so check all the ones that seem reasonable to you.)

  • 0% (half-life infinity – all promises weighted equally)
  • 6% (half-life ~12 years)
  • 36% (half-life ~2 years)
  • 50% (half-life 1.4 years)
  • 69% (half-life 1 year)
  • 90% (half-life ~9 months)
  • 100% (half-life ~8 months)
  • 139% (half-life 6 months)
  • 832% (half-life 1 month)
  • 3604% (half-life 1 week)



You can check all the options you want*!

* you are only allowed to want at most 3


Oops, try now!


I had a very brief email exchange with @dreev where I wondered about using some kind of Bayesian updating for one’s commitment reliability, rather than the proposed discounting model. That would give it some theoretical underpinning, I thought. Thinking about it some more, though, I fear it may be hard: if what we are estimating is the probability distribution of some series of events, then the usual assumptions include that:

  1. The events are independent - so the probability of this event isn’t dependent on what happened before, and
  2. The probability of the event is stationary - that is, it doesn’t move around.

Both of these, I think, aren’t the case here. If I’ve failed on my previous commitment, that might well make me more likely to try to achieve this one (or perhaps more likely to think “well, I’m failing already, so who cares!”). And the probability of meeting a commitment is likely to drift over time, too – that’s why we might need discounting in the first place.

I will continue to investigate these kinds of statistical models, though! There are certainly models where the probability isn’t stationary, such as time-series models (e.g. it’s more likely to rain at particular times of year), so they may hold something. Anyone else have any thoughts on this? Either appropriate models, or perhaps that it’s a daft idea and, really, what am I thinking with all this? :wink:
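One possible middle ground between the two approaches – just a sketch of mine, not anything proposed in the spec – is a Beta–Bernoulli update with exponential forgetting: standard Bayesian updating of a reliability estimate, but with older evidence down-weighted at the same continuously compounded rate as the discounting model, which relaxes the stationarity assumption. All names and numbers here are hypothetical:

```python
import math

def updated_beta(promises, rate, prior=(1.0, 1.0)):
    """Beta-Bernoulli posterior over reliability, with exponential forgetting.

    promises: list of (kept: bool, age_in_years) pairs.
    rate: continuously compounded discount rate; older evidence counts less,
          using the same exp(-rate * age) decay as the discounting model.
    prior: (alpha, beta) pseudo-counts for the Beta prior (default: uniform).
    """
    alpha, beta = prior
    for kept, age in promises:
        weight = math.exp(-rate * age)
        if kept:
            alpha += weight  # discounted "success" pseudo-count
        else:
            beta += weight   # discounted "failure" pseudo-count
    return alpha, beta

# Hypothetical history: kept two recent promises, broke one two years ago.
alpha, beta = updated_beta([(True, 0.1), (True, 0.5), (False, 2.0)], rate=0.90)
mean_reliability = alpha / (alpha + beta)  # posterior mean estimate
print(round(mean_reliability, 3))
```

With rate = 0 this reduces to plain Beta–Bernoulli updating (stationary probability, independent events), and as the rate grows it approaches “only recent promises matter” – so the discount rate plays the same role in both framings.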


I suppose we could also look empirically at how predictive past commitment capability is of current commitment capability.