As a stopgap measure, one suggestion is to stagger correlated goals' deadlines. Forex, if you have 5 exercise goals that should each be done twice a week, set them up so they have different initial safety buffers: the first two goals will derail in 3 days, the next two in 5 days, and the final one in 7 days.
In practice, this is a highly flawed strategy: if you ever derail on one of them, its schedule will be pushed out two days, making it align with other goals. It would take perfect commitment or frequent – and artificial – tweaking to keep the goals in their relative derail alignments. (It also does nothing to help @drmaciver with his current, goals-already-exist, situation; he's already in the artificial-tweaking phase of this suggestion.)
Regarding an actual solution to the issue: if/when Beeminder gets some kind of goal grouping, possibly one feature of that could be "mutual repulsion." My (thinking-out-loud) idea is something like this:
– you create a goal group called (say) Exercise. Then you create a goal in this group – say Pushups – that repeats 2 times a week, with a safety buffer of 3 days.
– you create a second goal in the Exercise group: say Running. When you select its frequency and initial safety buffer, Beeminder checks what you've already got planned and says, "You already have 1 goal in this group with similar parameters. Would you like to choose a different frequency or initial buffer to keep them from always coming due on the same day?"
– as you add more goals to the Exercise group, Beeminder attempts to spread them out across the week in a similar fashion. (Once you have a few goals in there, Beeminder should probably suggest the two or three best options to you, rather than making you figure them out.)
– Beeminder would use the actual current data at the time you create the new goal. So if you've derailed on any of your existing goals, changed their frequency, put them on pause, or whatever, Beeminder will look at when they're actually going to come due and optimize the new goal's settings for that current state.
– This does nothing to prevent goals glomming together once real life happens and you derail or go on vacay or whatever.
– This will probably break and/or be exceedingly difficult for goals that aren't simple do-more x-times-a-week goals.
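For the simple do-more case, the suggestion step above could be sketched as something like the following. This is just a thinking-out-loud sketch, not a real Beeminder API: the function and parameter names are all hypothetical, and it only handles the "pick an initial buffer" part.

```python
from datetime import date, timedelta

def suggest_buffers(existing_due_dates, candidate_buffers, today=None, n_suggestions=3):
    """Rank candidate initial safety buffers (in days) by how far the
    resulting due date lands from every existing goal's due date.

    Hypothetical sketch of the "mutual repulsion" idea; not a real
    Beeminder API.
    """
    today = today or date.today()

    def min_gap(buffer_days):
        # Distance (in days) from this buffer's due date to the
        # nearest existing due date in the group.
        due = today + timedelta(days=buffer_days)
        return min(abs((due - d).days) for d in existing_due_dates)

    # Prefer buffers whose due date is farthest from any existing one;
    # return the best few rather than making the user figure it out.
    ranked = sorted(candidate_buffers, key=min_gap, reverse=True)
    return ranked[:n_suggestions]

# Example: Pushups comes due in 3 days; which buffer should Running get?
today = date(2024, 1, 1)
pushups_due = [today + timedelta(days=3)]
print(suggest_buffers(pushups_due, [1, 2, 3, 4, 5, 6, 7], today=today))
# → [7, 6, 1]
```

With only one existing goal this is trivial; with several, the same min-gap ranking spreads new goals into the largest empty stretch of the week.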
So there'd need to be a second part to this, where Beeminder automatically adjusts other goals' current buffers to optimally distribute them whenever the group's state changes. Forex, if you derail on your Pushups goal, it might now come due on the same day as your Running goal. So when your derail on Pushups becomes a fact, Beeminder would automatically change Running's safety buffer so the two goals don't coincide.
The rule could be that Beeminder always shortens the safety buffer, never lengthens it, in keeping with the BM philosophy. That means the new distribution won't always be optimal, but will be as close as BM can get by adjusting other goals' buffers to be shorter-or-same.
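A shorten-only redistribution could look something like this rough sketch (again, hypothetical names, not a real Beeminder feature): resolve each same-day collision by walking the less-urgent goal's due date earlier until it finds a free day, never later.

```python
def redistribute(due_in_days):
    """Resolve same-day collisions by only shortening safety buffers
    (moving due dates earlier), never lengthening them.

    Input/output: {goal_name: days_until_due}. Hypothetical sketch.
    """
    taken = set()
    result = {}
    # Handle the most urgent goals first, so they keep their dates.
    for goal, days in sorted(due_in_days.items(), key=lambda kv: kv[1]):
        d = days
        # Walk earlier until we find a free day (but never past today).
        while d in taken and d > 0:
            d -= 1
        if d in taken:
            d = days  # no earlier slot free; leave the collision as-is
        taken.add(d)
        result[goal] = d
    return result

# Pushups derailed and now collides with Running on day 5;
# Running gets its buffer shortened so the two don't coincide:
print(redistribute({"pushups": 5, "running": 5}))
# → {'pushups': 5, 'running': 4}
```

As the text says, the result won't always be the optimal spread – when every earlier day is occupied, the collision just stays – but it never gives anyone extra safety buffer out of thin air.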
The calculation effort might well get out of hand pretty quickly as the number of goals in a group grows (I'll leave the math as an exercise for someone who's currently more math-motivated than I). Two corollaries:
– goal grouping might need an upper limit: no more than X goals can be grouped.
– goals might need to be restricted to membership in a single group. 
I'm sure this is fraught with problems, but it's a start on an idea for a solution.
This is essentially the first corollary, said another way. It's just easier for users to use and for Beeminder to implement than saying "a goal can only be a member of multiple groups when the total number of goals in those groups is less than or equal to X," and it eliminates the need to do a lot of fancy checking when a user adds a new goal to a group or an existing goal to an additional group.
Essentially, but not exactly, because the extra fancy checking means more computation, so the X for the multiple-membership scenario would probably be less than the X for the single-membership scenario. Or, actually, probably not, because the fancy checking would be O(n) whereas the optimization would be O(n!).
Probably not O(n!). I just pulled that out of a hat. I still haven't done the math. My point is just that the optimization effort will grow far faster than the fancy checking effort, making the fancy checking effort essentially negligible pretty fast.