I’d like to share with y’all my exploration into TagTime and get feedback on how to do it right.
I’ve been thinking about how to implement TagTime in my life for a few weeks. I plotted with ChatGPT about how to accomplish it with iOS Shortcuts and/or an Apple Watch complication (neither of which I know much about), but both seem clunky and too complected with other things. In particular, I want to be around my phone less.
But I got two insights (both from Beeminder):
- @april is doing cool stuff with a voice recorder and automatic transcription/tagging
- There’s a thing called https://pavlok.com, which I discovered while indulging myself in scrolling through all the Beeminder Consider.it posts.
Pavlok is a programmable bracelet that can provide vibrations, sounds, and even shocks! There are lots of interesting things to do with that, but at a high level it seems like a very nice interface for programmable attention - I can, via easy NodeJS scripting, send myself inputs remotely, without a phone (and the risk of distraction phones bring).
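To make the “send myself inputs remotely” idea concrete, here’s a minimal Python sketch of building a Pavlok vibration request. The endpoint pattern and `access_token` query parameter are assumptions modeled on older public Pavlok API examples, not verified against current docs - check Pavlok’s API documentation before relying on them. Building the URL in a pure function keeps the daemon testable without actually buzzing anyone.

```python
def vibration_url(access_token: str, strength: int) -> str:
    """Build a (hypothetical) Pavlok vibration-stimulus URL.

    ASSUMPTION: endpoint shape is a guess from older Pavlok API examples;
    verify against the current API docs. `strength` is the vibration
    intensity the device should play.
    """
    return (
        "https://app.pavlok.com/api/v1/stimuli/vibration/"
        f"{strength}?access_token={access_token}"
    )

# To actually fire it, the daemon would POST this URL, e.g.:
#   requests.post(vibration_url(MY_TOKEN, 100))
```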
So with Pavlok as input and the voice recorder as output, I’d like to try implementing TagTime. Specifically, I’d like to store the tags in my Roam Research graph, allowing me to link tag events to the rest of what I’m working on.
The workflow I imagine is this:
- My notification daemon sends me a buzz every ~45 minutes, and creates an entry `#tagtime HH:MM #untagged` in Roam.
- When I perceive it, I take out my voice recorder, and say what I’m doing.
- Once every N days:
  - I run a script to grab the audio files off the recorder, transcribe them with openai-whisper, and reconcile them with the `#untagged` tags. The audio files have a timestamp, so it might be possible to automatically reconcile which files belong to which tags, if I was prompt in recording it.
  - I look at the text and manually add tags. Because it’s Roam, I can re-use tags from my notes and projects.
    - Maybe I can get fancy and have GPT automatically tag them. But I want to do it manually for a bit and see if that would even be worthwhile.
  - The reconciliation process might be a good thing to beemind.
- Once a week during my weekly self-review, I’ll query my Roam graph and generate some report to see where all my time went. Not sure how to visualize this yet - I’m sure you guys have cool solutions there I can re-use or take inspiration from.
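A note on the buzz schedule in the first step: TagTime’s pings are a Poisson process, meaning the gaps between pings are exponentially distributed around the desired mean, so a ping is equally likely at any instant and the sampled tags give an unbiased picture of your time. A minimal sketch of how the daemon could draw its gaps (the 45-minute mean is from the plan above; everything else here is my own illustration, not TagTime’s actual universal-schedule code):

```python
import random

def ping_gaps(mean_minutes: float = 45.0, n: int = 5, seed: int = 1234):
    """Exponentially distributed inter-ping gaps, TagTime-style.

    The memoryless exponential distribution means the next ping is
    equally likely at any moment, which is what makes the resulting
    tag counts an unbiased sample of where time actually goes.
    """
    rng = random.Random(seed)  # seeded so a restarted daemon can replay its schedule
    return [rng.expovariate(1.0 / mean_minutes) for _ in range(n)]
```

The daemon would sleep for each gap in turn, then buzz the Pavlok and append the `#tagtime HH:MM #untagged` line to the Roam daily page.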
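For the reconciliation step, since both the `#untagged` entries and the audio files carry timestamps, matching could be a simple nearest-preceding-ping search. A sketch, assuming recordings happen within a few minutes of the buzz (the function name, parameter names, and 10-minute window are mine, chosen for illustration):

```python
from datetime import datetime, timedelta

def match_recordings(pings, recordings, max_delay=timedelta(minutes=10)):
    """Map each recording timestamp to the latest ping at or before it.

    Recordings more than `max_delay` after every ping stay unmatched
    and would need manual reconciliation (e.g. if I wasn't prompt in
    recording after the buzz).
    """
    matches = {}
    for rec in recordings:
        # Pings that happened at or before this recording, within the window.
        candidates = [p for p in pings if timedelta(0) <= rec - p <= max_delay]
        if candidates:
            matches[rec] = max(candidates)  # the most recent qualifying ping
    return matches
```

Anything left unmatched would surface during the manual tagging pass, which conveniently doubles as the beemindable part of the process.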
I’d be grateful for feedback on this plan from people who’ve tried TagTime before.
Additionally, I’d like some feedback on how to tag various situations. I want the data to be honest and meaningful, but answering the question “What are you doing right now?” isn’t always straightforward. Let’s try some examples. I get a buzz when:
- I’m writing this post => “I’m writing a beeminder post on tagtime”
- I’m idly browsing a GitHub repository that popped up in my feed while in a meeting for subproject A of big project X => “I’m in a meeting for subproject A”
- I’m pacing around the house jumping back and forth between thoughts on how to implement a TagTime system without a phone and the cryptoeconomic consequences of shipping Subproject A without Feature Z => ?
- I’m in the zone programming for subproject A, a Discord notification from a coworker comes in, and BAM, I get the buzz => “programming for subproject A”? Or “answering Discord messages”?
- I’m browsing Twitter while heating my lunch in the microwave => “Browsing twitter”? Or “eating lunch”?
Lastly (for now), could some of you guys share your set of tags (or a public subset)? I’d like to develop a useful ontology by which to tag statements like the above, but I’m not sure how to break things down.