# My take on the urgency metric

This is my take on implementing an urgency load metric as discussed here. In short, it should be a single metric which reflects how close my goals are to derailing overall. In my experience, edge-skating on goals produces more stress than the task itself does, and this stress grows exponentially with the number of goals I am edge-skating on. It would therefore be sensible to encourage keeping a buffer. My hope is that encouraging higher buffers will eliminate this additional source of stress, and perhaps even reduce the amount I end up paying to Beeminder (sorry).

I use a normalized score which is always between 0 and 1 for a given day, so the number of goals does not affect the scale of the urgency.

Basically the process is as follows:

• For each goal, calculate an urgency: u_i = (1 − b/7)², where b is the safety buffer in days, capped between 0 and 7. This results in the following values:

| Buffer (days) | Urgency (u_i) | Difference |
|---------------|---------------|------------|
| 0             | 1.00          |            |
| 1             | 0.73          | 0.27       |
| 2             | 0.51          | 0.22       |
| 3             | 0.33          | 0.18       |
| 4             | 0.18          | 0.14       |
| 5             | 0.08          | 0.10       |
| 6             | 0.02          | 0.06       |
| 7+            | 0.00          | 0.02       |
• The overall urgency is a weighted average of the maximum urgency and the mean urgency (I currently weight the maximum urgency at 10%).
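The two steps above can be sketched in a few lines of standalone Python (constant names match the configuration section of the full script further down; the example buffers are made up):

```python
from statistics import mean

MAX_BUFFER = 7      # days of buffer at which urgency reaches 0
URGENCY_POWER = 2   # exponent; higher punishes low buffers more
WEIGHT_MAX = 0.1    # weight given to the single most urgent goal


def goal_urgency(buffer_days: int) -> float:
    """Per-goal urgency: (1 - b/7)^2, with b capped to [0, 7]."""
    b = min(max(buffer_days, 0), MAX_BUFFER)
    return (1 - b / MAX_BUFFER) ** URGENCY_POWER


def overall_urgency(buffers: list) -> float:
    """Weighted average of the max and the mean per-goal urgency."""
    urgencies = [goal_urgency(b) for b in buffers]
    return WEIGHT_MAX * max(urgencies) + (1 - WEIGHT_MAX) * mean(urgencies)


# e.g. three goals with 0, 3, and 7+ days of buffer
print(round(overall_urgency([0, 3, 7]), 2))  # → 0.5
```

Because the most urgent goal appears both as the max term and inside the mean, any WEIGHT_MAX > 0 gives it a disproportionate pull on the score.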

My motivation for squaring the urgency is that it punishes goals with low safety buffers disproportionately: I would have to bring 13 goals from a 6-day buffer to a 7-day buffer to have the same effect on urgency as bringing a single goal from 0 days to 1 day (ignoring the effect of the weighted max urgency).
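A quick sanity check of that ratio, using the per-goal formula directly (in exact arithmetic the two reductions are 13/49 and 1/49, so the ratio is exactly 13):

```python
def u(b: float) -> float:
    # Per-goal urgency: (1 - b/7)^2, with b capped to [0, 7]
    b = min(max(b, 0), 7)
    return (1 - b / 7) ** 2


big_step = u(0) - u(1)    # improving one goal from 0 to 1 day of buffer: 13/49
small_step = u(6) - u(7)  # improving one goal from 6 to 7 days of buffer: 1/49
print(round(big_step / small_step))  # → 13
```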

This does detract from the intuitiveness of the metric: it does not map to a concrete, real-world quantity (“emergency days”, “total buffer”, etc.). I try to counteract this by highlighting the goal with the highest individual urgency in the comment of the datapoint. That way it should always be easy to see which goal can be worked on to reduce urgency.

I have created a goal where I track the metric. The slope is set to 1, so a derailment is currently impossible. Once I have tested it for a while and developed more of a feel for how the score behaves, I’ll set a reasonable slope and ratchet it.

The code I use is below; it requires Pyminder. It can safely be run multiple times in a day and will result in a single datapoint per day (keeping the last value sent). It probably contains bugs and/or doesn’t handle edge cases well.

```python
from time import time
from statistics import mean
from pyminder.pyminder import Pyminder
from pyminder.goal import Goal
from typing import Union
from requests import put
from datetime import datetime

# Configuration
## The goal name to which to beemind the urgency score.
GOAL_NAME = "urgency"

## The maximum buffer in days that will affect the urgency. In other words,
## this is the number of days of safety buffer that represents an urgency of
## 0.
MAX_BUFFER = 7

## The weight of the overall urgency that is given to the goal with the highest
## urgency. The remainder is taken from the mean of the per-goal urgency values.
## As the goal with the highest urgency is also included in the mean, any value
## > 0 will result in a disproportionate contribution.
WEIGHT_MAX = 0.1

## Each per-goal urgency is raised to this power. A higher value punishes goals
## with lower buffers more. Set to 1 for a linear relationship.
URGENCY_POWER = 2

USER = "[USERNAME]"
TOKEN = "[AUTH_TOKEN]"


# Cap a number between a minimum and a maximum, inclusive.
def cap(
    min_: Union[float, int], max_: Union[float, int], value: Union[float, int]
) -> Union[float, int]:
    if value < min_:
        return min_
    if value > max_:
        return max_

    return value


def is_finished(g: Goal) -> bool:
    curval = g.curval
    goalval = g.mathishard[1]

    if g.yaw == 1:
        implied_finished = curval >= goalval
    else:
        implied_finished = False

    return g.won or implied_finished


pyminder = Pyminder(user=USER, token=TOKEN)

results = []
max_goal = None
for x in pyminder.get_goals():
    if not x.won:
        safebuf = cap(0, MAX_BUFFER, x.safebuf)

        if is_finished(x):
            safebuf = MAX_BUFFER

        proportion = 1 - (safebuf / MAX_BUFFER)

        # Calculate per-goal "urgency".
        goal_urgency = proportion**URGENCY_POWER

        print(
            x.slug.ljust(20),
            x.safebuf,
            round(goal_urgency, 2),
        )
        results.append(goal_urgency)

        if goal_urgency == max(results):
            max_goal = x.slug

# Get max and mean urgency.
urgency_max = max(results)
urgency_mean = mean(results)

overall_urgency = WEIGHT_MAX * urgency_max + (1 - WEIGHT_MAX) * urgency_mean

print("Overall urgency:      ", round(overall_urgency, 2))

# Get the goal to which to beemind urgency.
g = pyminder.get_goal(GOAL_NAME)

# Feedback string reporting which goal contributes the most to overall urgency.
feedback_string = (
    "Maximum contribution from "
    + max_goal
    + ", with an urgency of "
    + str(round(urgency_max, 2))
    + ". Last updated: "
    + datetime.now().strftime("%H:%M")
)

if g.todayta:
    # A datapoint for today already exists; update it in place.
    id = g.last_datapoint["id"]
    put(
        f"https://www.beeminder.com/api/v1/users/{USER}/goals/{GOAL_NAME}/datapoints/{id}.json",
        data={
            "auth_token": TOKEN,
            "value": round(overall_urgency, 2),
            "time": int(time()),
            "comment": feedback_string,
        },
    )
else:
    # No datapoint for today yet; create one.
    g.stage_datapoint(
        value=round(overall_urgency, 2), time=int(time()), comment=feedback_string
    )
    g.commit_datapoints()
```

Thanks for building and sharing!

The Autodialer has been working for me when I’ve got a goal whose ideal slope is unclear. Useful features:

• it only sets a slope after any configured breaks, so I can ensure that I get a month or more of testing (and buffer) before it inflicts a better slope on me
• min/max slopes so that it varies within a range
• it has a ‘strict’ mode so that the slope only becomes harder rather than bouncing around

Nice work! For a little while I’ve been tracking the standard load metric that the Beeminder API reports. But I’ve been somewhat unsure how to map that to subjective feelings of overwhelm.


Autodialer looks useful. For now I still want to observe urgency, but I turned it on for a different goal where I want to reduce the rate over time, in strict mode for now. We’ll see how it goes. I’m slightly concerned that it will encourage edge-skating to keep the rate from going down, but we will see.

Yeah, I expect the correlation between the urgency metric and real overwhelm will be low if it exists at all (if nothing else, then due to time lag), but I do think that the behavior that leads to a reduction in urgency (increasing buffer, prioritizing goals with low buffer) will reduce some of the factors that lead to overwhelm. I’ll have to report back on whether this actually works.
