Anyone writing Claude Code skills who'd like to try my tool for managing them?

Hi all,

I’m scratching my own itch and writing a tool to help with managing Claude Code skills. It’s currently at a very early stage, but it’s already functional and, dare I say, even useful.

Is there anyone here who’d like to try it out and give me some feedback?

I would definitely be interested in doing so, but it might take me a bit to get going because I’m still only gradually taking Claude Code seriously enough to figure out how to integrate it into my workflow. (And programming is just part of my job, not my job.) But over the coming weeks or months, certainly.

I also have Claude Code and I could maybe try it out, though I haven’t really played around with skills much.

If you’re writing a skill to interact with Beeminder directly, @narthur’s Buzz CLI might be useful?

I use Claude Code skills all the time. They’re super powerful.

And, yup, Claude Code is great at using Buzz! (And CLI’s in general.)

Oh, no, never! I would never let Claude anywhere close to my private data like Beeminder! Very, very, very bad idea!

My tool is not Beeminder-related (so I posted it in the Tech section of the forum, not Akrasia). It’s a tool to manage skills (go see the README, which starts by explaining the problems it solves/will solve).
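For anyone here who hasn’t played with skills yet: as I understand the format, a skill is basically a directory (e.g. under `.claude/skills/`) containing a `SKILL.md` file whose YAML frontmatter tells Claude what the skill is for and when to use it. A minimal, entirely made-up example (the name and contents are just illustrative):

```markdown
---
name: changelog-writer
description: Draft a CHANGELOG entry from recent git commits. Use when the user asks to update the changelog.
---

# Changelog writer

1. Run `git log --oneline` since the last tag.
2. Group the commits into Added/Changed/Fixed.
3. Append the new entry to CHANGELOG.md.
```

Files like this are exactly what my tool is meant to help keep track of.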

I’ll take a look over the next few weeks. We use Claude Code quite a bit at work, so I have some pretty basic familiarity.

But yeah, I echo the concerns about letting it near your private data. Simplify processes, but don’t let it into your private details. Anthropic is not your friend.

I’d suggest structuring your goals such that they don’t contain explicitly private data in the first place. E.g. if a goal is tracking something of a private nature, name it something ambiguous, and don’t put anything you wouldn’t want to become public at some point in your data point comments.

Reasons being:

  • Beeminder goals are public by default (excluding data point comments).
  • When Beeminder support helps you with a goal they can see everything.
  • If you’ve handled your goals this way, you can use things like third-party Beeminder integrations and AI tools without worrying what you’re giving somebody access to.
  • It’s just sound advice for using the internet generally. Data breaches happen to the best of companies.

Hi @mbork, why is it a very, very bad idea?

I just let Claude in Cowork help me gather docs for my taxes due to my ADHD & some of them are health-related.

There are the usual reasons for not putting private data on the internet, and it may be worse with LLMs.

The one-on-one chat interface may lull a person into a false sense of security if it feels like they’re talking to another person and so they may reveal more than they normally would when putting their data online.

There are risks from a purely technical perspective too. The LLM vendor may say that any information the user enters is kept private, but there’s no guarantee the company can be trusted; they may be using the user’s data for training. Even if they don’t do that deliberately, bugs in the LLM software may inadvertently cause the user’s data to be shared inappropriately, potentially publishing it on any website to which the user/LLM has write access (e.g. a GitHub Gist). The Claude source code from its recent accidental release shows just how horrifically bad LLM software can be, so bugs are likely.

In addition, if the LLM does leak a user’s data, it may leak an incorrect or exaggerated version, leading to worse outcomes than if the true data were leaked. As an anecdote (which I realise is not statistically valid but which illustrates the point), a friend of mine was talking to their doctor about recreational drug use from many years in the past. The doctor was using an LLM to write up patient notes. The LLM’s output stated incorrectly that the person was a current illegal drug user.

Let me add that “using your data for training” can mean that things you have typed into an LLM (including private data like names etc.) might be spit out to someone else later. No idea how common it is, but it’s definitely possible.

Also, technically Beeminder is also a privacy nightmare due to the nature of the data it keeps (I’ve restricted the privacy settings of some of my goals, but still), but:

  • it’s a small company and not a huge corpo (which means much more trust and fewer incentives for bad actors to target its database)
  • it doesn’t have any competitor anyway (so basically no choice)
  • if I really wanted a private goal, I have some code I wrote for my Beeminder client that does not require renaming goals to vague codenames – it just encrypts the goal name and datapoint comments. No reason for me to use that – I generally don’t beemind things I’m not comfortable making (potentially) public anyway.
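The encryption idea in that last point can be sketched roughly like this: encrypt the goal name and datapoint comments client-side, so only ciphertext ever reaches the API. This is a toy illustration using only the Python standard library (a SHA-256 counter-mode keystream); it is not what my client actually does, and real code should use a vetted library such as `cryptography`’s Fernet instead of hand-rolled crypto:

```python
import base64
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: str) -> str:
    # A fresh random nonce makes equal plaintexts encrypt differently.
    nonce = os.urandom(16)
    data = plaintext.encode("utf-8")
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return base64.urlsafe_b64encode(nonce + cipher).decode("ascii")

def decrypt(key: bytes, token: str) -> str:
    raw = base64.urlsafe_b64decode(token.encode("ascii"))
    nonce, cipher = raw[:16], raw[16:]
    data = bytes(a ^ b for a, b in zip(cipher, _keystream(key, nonce, len(cipher))))
    return data.decode("utf-8")
```

With this, the stored goal name is an opaque base64 token, and only someone holding the key can recover the real comment text.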