How to dump all your GitHub issues into Todournament

I’m a fan of Todournament by @alltom. It’s a simple to-do list that elegantly sorts your tasks by having you make binary comparisons. Do you want to Take Out Garbage first or File TPS Report first? Click whichever you’re more inspired by, repeat until Todournament has enough data to put something at the top of your list, et voilà. It’s a bit like the Mark Forster method. Here’s what the top of my current stack looks like:

(putting this in the forum is today’s mustdo so that’s part of “Saturday beemergencies” at the top of my list!)
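(For the algorithmically curious: Todournament’s internals aren’t published, but the core idea — ordering tasks from nothing but pairwise “which first?” answers — can be sketched in a few lines of Python. This is purely my own illustration, with `prefer` standing in for your clicks, not Todournament’s actual code:)

```python
from functools import cmp_to_key

def tournament_sort(tasks, prefer):
    """Order tasks using only pairwise judgments.

    `prefer(a, b)` returns whichever of the two tasks you'd rather
    do first -- the programmatic stand-in for clicking one of them.
    """
    def cmp(a, b):
        # If you'd do `a` before `b`, rank `a` earlier
        return -1 if prefer(a, b) == a else 1
    return sorted(tasks, key=cmp_to_key(cmp))

# Example: "prefer" whichever task sorts first alphabetically
todo = ["Take Out Garbage", "File TPS Report", "Saturday beemergencies"]
ranked = tournament_sort(todo, lambda a, b: min(a, b))
```

(The real thing presumably does something smarter about asking the fewest comparisons, but the sorted-by-pairwise-preference idea is the heart of it.)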

Another thing I’ve been liking a lot lately is my personal private GitHub repo I call omnitask where I drop in as a GitHub issue (gissue) any crazy ideas I have. I have a freshening goal to keep it from being a black hole where ideas go to die. It’s great.

But I really wanted to combine it with Todournament to help things from omnitask bubble up to the top of my list. So here’s a hacky way to do it that I thought I’d share:

# Fetch the titles of all open gissues and put them in macOS's clipboard

TOKE="YOUR_GITHUB_TOKEN"  # fill in your GitHub API key
PURL="https://api.github.com/repos/USER/REPO/issues?per_page=100"  # your repo here
HEAD=$(mktemp)  # response headers land here (for pagination)
FILE=$(mktemp)  # titles accumulate here
TITC=0          # to count how many titles we fetch

while [ -n "$PURL" ]; do
  echo "Fetching $PURL"
  resp=$(curl -L -s -D "$HEAD" \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $TOKE" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    "$PURL")
  titles=$(echo "$resp" | jq -r '.[] | .title')
  if [ $? -ne 0 ]; then
    echo "Error processing JSON"
    exit 1
  fi
  echo "$titles" >> "$FILE"
  if [ -n "$titles" ]; then
    x=$(echo "$titles" | wc -l)
    TITC=$((TITC + x))
  fi
  # Extract next page URL from the Link header (empty when there's no next page)
  PURL=$(sed -n -e 's/.*<\(.*\)>; rel="next".*/\1/p' "$HEAD")
done

cat "$FILE" | pbcopy
rm "$FILE" "$HEAD"
echo "Voila, extracted $TITC gissue titles to your copy/paste buffer!"
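(That sed incantation is the pagination trick: GitHub’s API sends back a Link header, and the script fishes out the rel="next" URL until there isn’t one. If the sed is opaque, here’s the same extraction sketched in Python — my own illustration of the header format GitHub documents:)

```python
import re

def next_page_url(link_header):
    """Return the rel="next" URL from a GitHub Link header, or None.

    The header looks something like:
      <https://api.github.com/...?page=2>; rel="next", <...>; rel="last"
    """
    match = re.search(r'<([^>]*)>;\s*rel="next"', link_header or "")
    return match.group(1) if match else None
```

On the last page GitHub omits the rel="next" entry, so this returns None and the loop ends — exactly what the bash version’s empty $PURL check does.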

(I’m not the most fluent in bash scripting and the above took me a couple hours going back and forth with GPT4 but it seems solid now. In retrospect maybe bash was a poor language choice but I was fixated on the part where it sends the tasks straight to your copy/paste buffer which seemed like a job for bash.)

Just put that in a file, fill in your GitHub API key, and run it. Then go to Todournament and hit paste. You now have all your gissues in your task list. :tada:


This has inspired me to pull together a less-hacky Python version. Here’s a script to fetch all your GitHub issues in a particular repo:

#!/usr/bin/env python

import argparse
import requests
import json
import os

def fetch_issues(repo, token):
    github_api_url = f"https://api.github.com/repos/{repo}/issues"
    params = {'per_page': 100, 'page': 1}
    headers = {
        'Accept': 'application/vnd.github+json',
        'Authorization': f'Bearer {token}',
        'X-GitHub-Api-Version': '2022-11-28',
    }

    while True:
        response = requests.get(github_api_url, headers=headers, params=params)
        response.raise_for_status()

        issues = response.json()

        yield from issues

        # Follow GitHub's pagination via the Link header
        if 'next' in response.links:
            github_api_url = response.links['next']['url']
            params = {}  # the "next" URL already carries the query string
        else:
            break
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Fetch all open GitHub issues for a specified repository and output them as NDJSON.")
    parser.add_argument('repo', help="The repository to query, formatted as 'username/repository'.")
    args = parser.parse_args()

    token = os.getenv('GITHUB_ACCESS_TOKEN')
    if not token:
        raise EnvironmentError("GITHUB_ACCESS_TOKEN is not set.")

    for issue in fetch_issues(args.repo, token):
        print(json.dumps(issue))

Feel free to edit it to hardcode your token if you’d like, or set the env variable. You can then run it in one line of bash as `./your-script.py USER/REPO | jq -r .title | pbcopy`.

Or edit the script to use the pyperclip library and have it copy the titles to the clipboard itself. But I actually like that this script follows the unix philosophy of composability.
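(If you do go the pyperclip route, the edit is small: keep `fetch_issues` as-is and have the `__main__` block copy to the clipboard instead of emitting NDJSON. A sketch — `pyperclip` is a third-party library, `pip install pyperclip`, and the helper name is mine:)

```python
def titles_as_text(issues):
    """Join issue titles one per line, ready to paste into Todournament."""
    return "\n".join(issue["title"] for issue in issues)

# In the script's __main__ block, instead of the NDJSON loop:
#     import pyperclip
#     pyperclip.copy(titles_as_text(fetch_issues(args.repo, token)))
```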


Ah, a million times better; thank you so much!
