Beeminder Mathematica Package?

I just talked myself into upgrading my Mathematica license this week, and one of the things I’m inclined to do is tinker with alternative visualizations and metrics for my Beeminder goals.

Is there any prior art for a Mathematica package that wraps the Beeminder API? Looking at you, @dreev , since I know you are a fan. If not, I will probably start working on one.


Two things:

  1. I was thinking of a Pandas DataFrame library, myself. I don’t know enough about Mathematica to know if those are similar enough targets, but I suspect they might be quite close.

  2. If we were to augment the API to make this an easier task, what would be necessary?


Most of my Mathematica experience dates to 2007-2010, so this project is largely an excuse to learn what has changed when it comes to API interaction and primitives for representing datasets. I don’t really know the answer to either of your questions (yet), but I’ll report back as I get into it and learn more.


Try – can say more when at keyboard!


Putting aside discussions of technology, the thing that would make this so much easier is if there were an API for the aggregated datapoints—i.e. with aggday, kyoom, odom, etc. already applied.

Yes, it’s true that I could re-implement this myself; strictly speaking, there doesn’t need to be an API for it. But this stuff isn’t trivial. Given that Beeminder has already dealt with all the edge cases and complexities here, I’d rather Beeminder do the calculation for me, instead of everyone who wants to try their hand at their own visualizations having to implement it themselves.


I got something very basic whipped up here:

No install instructions, and don’t assume any stability to this interface. One of my goals here is to make use of the shiny new goodies in Mathematica 12, so I intend to do more transformation of the API payloads using things like the Dataset and TimeSeries types.

Also, I agree with @zzq on wanting the option of having more of the Beebrain transformations applied server-side by the API. For example, one of the things I’ve considered doing with Mathematica is computing a probability of derailing for a goal. I imagined something like:

Given the empirical distribution of my behavior over the last N days and the upcoming road for the next N days, I will derail X percent of the time.

Can’t really do this without duplicating the aggday functions today.
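For what it’s worth, a naive version of that estimate could be sketched like this once aggregated daily values are available. This is a hypothetical Monte Carlo bootstrap, not anything Beeminder actually computes; the function name, the constant-rate road, and the no-safety-buffer assumption are all mine:

```python
import random

def derail_probability(daily_history, daily_rate, horizon, trials=10000, seed=0):
    """Estimate the chance of derailing within `horizon` days by resampling
    past daily amounts (a naive bootstrap). Assumes a do-more goal with a
    constant road rate and no initial safety buffer."""
    rng = random.Random(seed)
    derails = 0
    for _ in range(trials):
        balance = 0.0  # cumulative progress minus cumulative road requirement
        for _ in range(horizon):
            balance += rng.choice(daily_history) - daily_rate
            if balance < 0:  # fell below the road => derailed this trial
                derails += 1
                break
    return derails / trials
```

A real version would also need the actual upcoming road segments and the goal’s current safety buffer, which is exactly why having the aggregated data served by the API matters.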


I totally agree, zzq. I have “get the aggregated daily values” already requested, but another voice is definitely helpful!


This is extremely correct, that the Beeminder API shouldn’t make clients reimplement aggday!

In the case of Mathematica, that’s what all this was originally implemented in so… I can just show you the original code for that (note that it’s out of date and doesn’t have the current slate of aggday functions):

(* Returns a pure function that aggregates a list of values in the way indicated
   by the string s, which is what was passed in as the aggday param. *)
aggregator["all"]      = Identity;   (* all & last are equivalent except that *)
aggregator["last"]     = {Last@#}&;  (* aggday all means all datapoints are   *)
aggregator["first"]    = {First@#}&; (* plotted; the last datapoint is still  *)
aggregator["min"]      = {Min@#}&;   (* the official one when aggday==all.    *)
aggregator["max"]      = {Max@#}&;                    (* WAIT: aggday=last    *)
aggregator["truemean"] = {Mean@#}&;                   (*   doesn't work for   *)
aggregator["uniqmean"] = {Mean@DeleteDuplicates@#}&;  (*   kyoom graphs!      *)
aggregator["mean"]     = {Mean@DeleteDuplicates@#}&;
aggregator["median"]   = {Median@#}&;
aggregator["mode"]     = {Median@Commonest@#}&;
aggregator["trimmean"] = {TrimmedMean[#, .1]}&;
aggregator["sum"]      = {Total@#}&;

(* Aggregate datapoints on the same day per the aggday parameter. *)
agg0[a_][d_]:= With[{x = d[[1,1]], y = aggregator[a][d[[All,2]]]}, {x,#}& /@ y]
aggregate[data_, a_:Null] :=
  Flatten[agg0[If[a===Null, aggday, a]] /@ SplitBy[data, First], 1]
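Since Python versions come up later in this thread, here is a rough Python translation of the `aggregator`/`aggregate` pair above — my own sketch, not the official Beeminder code, and covering only a subset of the aggday functions shown:

```python
from itertools import groupby
from statistics import mean, median

# Each aggday setting maps a day's list of values to the list of values kept.
AGGREGATORS = {
    "all":      lambda vs: vs,          # keep every datapoint
    "last":     lambda vs: [vs[-1]],
    "first":    lambda vs: [vs[0]],
    "min":      lambda vs: [min(vs)],
    "max":      lambda vs: [max(vs)],
    "truemean": lambda vs: [mean(vs)],
    "uniqmean": lambda vs: [mean(set(vs))],
    "median":   lambda vs: [median(vs)],
    "sum":      lambda vs: [sum(vs)],
}

def aggregate(data, aggday):
    """data: list of (t, v) pairs sorted by t, with t already bucketed by day.
    Returns the aggregated (t, v) pairs per the aggday setting."""
    out = []
    for t, group in groupby(data, key=lambda p: p[0]):
        vals = [v for _, v in group]
        out.extend((t, v) for v in AGGREGATORS[aggday](vals))
    return out
```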

Oh, right, and there’s kyoomify (applying auto-summing) and the odometer reset feature:

(* For every day with no datapoint, add one with the same value as the previous
   datapoint. Do this after kyoomify, etc.
   Implementation note:
   This has a side effect of throwing away all but the last datapoint on each
   day, which currently doesn't matter since we've ensured that there is only
   one datapoint per day when we call this. *)
fillZeros[{}] = {};
fillZeros[data_] := Module[{val, a,b, x},
  each[{t_,v_}, data,  val[t] = v];
  {a,b} = Extract[data, {{1,1}, {-1,1}}];
  (If[NumericQ[val[#]], x = val[#]]; {#,x})& /@ Range[a,b, SID]]
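The same forward-fill idea in Python, under the same assumption as the Mathematica code (at most one datapoint per time step, so run it after aggregation); this is my own sketch, with integer time steps standing in for the `SID` seconds-in-day constant:

```python
def fill_gaps(data, step=1):
    """For every time step between the first and last datapoint that has no
    datapoint, repeat the previous value. data must be sorted by t."""
    if not data:
        return []
    vals = dict(data)  # assumes at most one datapoint per step
    out, last = [], None
    t0, t1 = data[0][0], data[-1][0]
    for t in range(t0, t1 + step, step):
        if t in vals:
            last = vals[t]
        out.append((t, last))
    return out
```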

(* Transform data so, eg, values {1,2,1,1} become {1,3,4,5} *)
kyoomify[{}] = {};
kyoomify[data_] := Transpose @ {#1, Accumulate@#2}& @@ Transpose@data
(* The inverse of kyoomify -- gives differences from previous datapoint. *)
unkyoomify[{}] = {};
unkyoomify[data_] := Transpose @ {#1, #2-Prepend[Most@#2,0]}& @@ Transpose@data
         (* Or: Transpose@{#1, Differences[Prepend[#2,0]]}& @@ Transpose@data *)
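The kyoomify pair is just a cumulative sum and its inverse; a Python sketch of the same two functions (again mine, not the official Python version mentioned below):

```python
from itertools import accumulate

def kyoomify(data):
    """Cumulative sum of values: values [1,2,1,1] become [1,3,4,5]."""
    ts = [t for t, _ in data]
    return list(zip(ts, accumulate(v for _, v in data)))

def unkyoomify(data):
    """Inverse of kyoomify: differences from the previous value,
    with the first value left unchanged."""
    out, prev = [], 0
    for t, v in data:
        out.append((t, v - prev))
        prev = v
    return out
```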

(* Transform data as follows: every time there's a decrease in value from one
   datapoint to the next where the second value is zero, say {t1,V} followed by
   {t2,0}, add V to the value of every datapoint on or after t2. This is what
   you want if you're reporting odometer readings (eg, your page number in a
   book can be thought of that way) and the odometer gets accidentally reset (or
   you start a new book but want to track total pages read over a set of books).
   This should be done before kyoomify and will have no effect on data that has
   actually been kyoomified since kyoomification leaves no nonmonotonicities. *)
odo00[{prev_,offset_}, next_] := {next, offset + If[prev>next==0, prev, 0]}
odomify0[list_] := list + Rest[FoldList[odo00, {-Infinity,0}, list]][[All,2]]
odomify[{}] = {}
odomify[data_] := Transpose@{#1, odomify0[#2]}& @@ Transpose@data
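And a Python sketch of odomify, following the same rule as the Mathematica version (my translation, not the official one): whenever the value drops to zero, treat it as an odometer reset and carry the previous value forward as an offset.

```python
def odomify(data):
    """Whenever a datapoint's value drops to zero from a larger previous
    value (an odometer reset), add that previous value as an offset to
    every datapoint from the reset onward."""
    out, offset, prev = [], 0, float("-inf")
    for t, v in data:
        if v == 0 and prev > v:  # a decrease to exactly zero
            offset += prev
        prev = v
        out.append((t, v + offset))
    return out
```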

Should I get all this somewhere publicly viewable in GitHub?

PS: We also have Python and Javascript versions of all this.

That would be great! I think I remember seeing the Python version of these functions somewhere in the forums in the past, but I couldn’t find them when I went looking this morning.

Would you consider these functions part of the formal specification of Beeminder goals? My guess is that you couldn’t change any of these definitions without massively breaking existing goals, anyway. Maybe there would be new aggday functions in the future, but the ones that exist will stay the same?




Oh, awesome — thank you for posting all of this!

At least until something like this becomes part of the official Beeminder API, I’ve added an API route to Altbee that will do the aggregation. It hopefully will save us all from having to each reimplement it independently.

This API will enrich any given Beeminder goal’s data with an additional agg_data field containing the aggregated datapoints, taking into account odom, kyoom, and aggday.

That is: if you POST the JSON that Beeminder’s goal API returns (with datapoints=true), then you’ll get a response containing all the fields you sent plus an agg_data field containing the aggregated datapoints.

For example, try something like this:

curl -s "$GOAL.json?auth_token=$AUTH_TOKEN&datapoints=true" |
  curl \
    -H 'Content-Type: application/json' --data @-

Dang, this is brilliant, and a really excellent nudge for us to hurry up and make this not be necessary!

Thank you! (Have we offered you stickers lately? You’ve earned an insane amount of them!)