Daerdemandt's Beeminder Journal (here be integrations!)

Hello!

I’ve been using Beeminder for several years now, with a history of both successes and failures.

So far, my two major takeaways are: things change, and autodata rules.
Manually inputting data eventually fails on me.
Autodata breaks eventually too, but that’s usually due to lifestyle changes, and it generally lasts much longer (months or years instead of weeks).

So there’s a steady drain on my pool of data sources. To compensate, I have to maintain an inflow of new ones.

There’s a failure mode here: if I don’t have automated data input, I procrastinate on setting up the goal. Or I don’t procrastinate, set up the goal with manual data entry instead, and that lowers the chances I’ll ever set up automated input. When the manual entry fails, which doesn’t take long, the whole goal fails with it.

Ideally, I want a data source operational for a month or so before I set up a goal on it, just to see whether the data is accurate and reliable. I’ve done something similar by setting an almost-horizontal slope for a “trial period”. Without a designated cue to switch from trial period to actual usage, though, I forget about it and leave things in the trial period indefinitely.

A non-committal way of setting up a data source for a trial period helps against that procrastination, but data sources stuck in non-committal limbo indefinitely, without good reason, are a problem in themselves.

I’m setting up this topic mainly to publicly keep track of my data sources, with some (hopefully) relevant thoughts. Any input welcome!


The system is supposed to work the following way:

Each week, I commit to add 1 data source.
Each month, I commit to add 1 Beeminder goal if there are appropriate data sources.
Each week that sees a round number of data sources, I commit to review this commitment - no sooner, no later.
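
To make the schedule concrete, here’s a minimal sketch of the weekly check as I’d script it (Python; the function names are made up for this post, and “round number” is my own reading of “a multiple of 10”):

```python
from datetime import date, timedelta

def contains_month_end(week_start: date) -> bool:
    """True if the 7-day week starting at week_start includes the last day of a month."""
    return (week_start + timedelta(days=7)).month != week_start.month

def weekly_tasks(week_start: date, source_count: int, have_goal_ready_source: bool) -> list[str]:
    """What this week's review should remind me to do, per the commitments above."""
    tasks = ["add 1 data source"]

    # I'm mapping "each month" onto the week that contains the end of the month,
    # the same timing as the monthly review described below.
    if contains_month_end(week_start) and have_goal_ready_source:
        tasks.append("add 1 Beeminder goal")

    # "Round number" read as a multiple of 10 data sources (my assumption).
    if source_count > 0 and source_count % 10 == 0:
        tasks.append("review this commitment scheme")

    return tasks
```

For example, `weekly_tasks(date(2016, 1, 25), 10, True)` would list all three tasks, since that week contains January 31st and the source count sits at a round 10.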

The source starts in trial mode :hatching_chick:. For each week of successful performance, I’ll give it a golden star emoji :star:.
Instead of receiving a 4th star, it graduates from trial mode. Yeah, I could just keep track of dates, but this is more visual and is supposed to make weekly reviews feel more rewarding.

If the source breaks and then gets fixed, I’ll give it a scar ҂. Instead of getting a 4th scar, the data source is retired, unless it’s hooked up to something (in which case I call it an issue :eye:). In a monthly review (done during the week that contains the end of the month), each scarred source has 1 scar healed.

By “breaks” I mean: it is not accessible, it loses or corrupts data beyond the previous month’s 2 sigma, or it requires manual intervention to prevent either of the former. System failures that affect several unrelated, non-confounded sources don’t count here.
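
In case it helps to see the bookkeeping spelled out, here’s a rough Python sketch of the star/scar lifecycle and the 2-sigma check; the `Source` class, the function names, and the exact break heuristic are my own simplifications, not anything Beeminder provides:

```python
import statistics
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    hooked_up: bool = False   # feeds a Beeminder goal or similar (:fast_forward: / :eye:)
    stars: int = 0            # trial-mode progress
    scars: int = 0
    graduated: bool = False
    retired: bool = False

def weekly_star(src: Source) -> None:
    """Another week of successful performance during the trial period."""
    if src.graduated or src.retired:
        return
    if src.stars == 3:
        src.graduated = True   # instead of the 4th star, it graduates from trial mode
    else:
        src.stars += 1

def record_break_fixed(src: Source) -> None:
    """The source broke and got fixed: a scar, or retirement on what would be the 4th.
    (System failures hitting several unrelated sources don't count as breaks.)"""
    if src.scars == 3 and not src.hooked_up:
        src.retired = True     # hooked-up sources become an issue instead of retiring
    else:
        src.scars += 1

def monthly_heal(src: Source) -> None:
    """During the monthly review, each scarred source has one scar healed."""
    if src.scars > 0:
        src.scars -= 1

def beyond_two_sigma(last_month: list[float], value: float) -> bool:
    """Crude check: is a new value outside the previous month's 2-sigma band?"""
    if len(last_month) < 2:
        return False
    mean = statistics.mean(last_month)
    sigma = statistics.stdev(last_month)
    return abs(value - mean) > 2 * sigma
```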

Metrics that are hooked up to Beeminder goals, or to something else that’s supposed to improve them, get the symbol :fast_forward:.
Metrics without this symbol are to be scored (during monthly reviews, before removing scars) via the following checklist:

Am I ready to pay $100 to have it magically improved for a year? (2)
Is it an actionable metric (1) or a result metric (0)?
Does it have less than 2 scars? (2)

If the result is 5 and there are no relevant issues, it’s appropriate for a Beeminder goal.
If the result is less than 2, it’s not pulling its weight. Toss a coin and only heal its scars if it comes up heads.
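
Spelled out as a small scoring function, the checklist looks roughly like this (another Python sketch; the middle band between 2 and 5 isn’t defined above, so it just leaves those sources as they are):

```python
def checklist_score(pay_100_to_improve: bool, is_actionable: bool, scars: int) -> int:
    """Monthly-review score for a metric that isn't hooked up to anything yet."""
    score = 0
    if pay_100_to_improve:   # ready to pay $100 to have it magically improved for a year
        score += 2
    if is_actionable:        # actionable metric: 1 point; result metric: 0
        score += 1
    if scars < 2:            # fewer than 2 scars
        score += 2
    return score

def verdict(score: int, has_relevant_issues: bool) -> str:
    if score == 5 and not has_relevant_issues:
        return "appropriate for a Beeminder goal"
    if score < 2:
        return "not pulling its weight: coin toss, heal scars only on heads"
    return "keep as is for now"
```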

Each source can have issues associated with it to reflect relevant details that don’t fit into this system.

My current state is:

2 integrations

  1. Runkeeper integration :fast_forward:
  2. Misfit integration :eye:
    issue: data submitted weekly, not daily