Subtle change to self-destructing datapoints

Since you asked, here’s the note from the internal gissue:

See the forum thread for the embarrassing way this played out. Hint: we should not have ripped out that Chesterton’s fence! Who would’ve thought?? I still think the original special case is dumb: the one where self-destructing datapoints fail to self-destruct because the replacement is also self-destructing. That still makes no sense. But we do need a special case, sort of. Namely, when a new datapoint comes in and we’re iterating through the existing datapoints looking for any that should self-destruct, DO NOT INCLUDE THE NEW REPLACEMENT DATAPOINT IN THAT LIST. We only want to delete already-existing datapoints marked as self-destructing.

When we naively removed that Chesterton-fencey special case, we started doing this:

  1. Add a new self-destructing datapoint, D
  2. For each datapoint X with the same date as D, delete X if X is self-destructing
  3. Oops: D itself was one of those X’s, so we immediately deleted D no matter what

In conclusion, let’s nix the dumb special case but still exclude the replacement datapoint itself when doing the self-destruct pass.

So that is now done, thanks to @bee, who scoped the query properly so we don’t include the newly added datapoint in the candidates for self-destruction.
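
For the curious, here’s roughly what that looks like. To be clear, this is a minimal Python sketch with made-up names (`Datapoint`, `add_datapoint`), not our actual code; the key is the `d.id != new.id` check that scopes the new datapoint out of the deletion candidates:

```python
from dataclasses import dataclass, field
from itertools import count

_next_id = count(1)

@dataclass
class Datapoint:
    date: str
    value: float
    self_destruct: bool = False
    id: int = field(default_factory=lambda: next(_next_id))

def add_datapoint(datapoints: list, new: Datapoint) -> list:
    """Add `new`, then delete any self-destructing datapoints on the
    same date -- excluding `new` itself. (The naive version we briefly
    shipped omitted the `d.id != new.id` check, so a new self-destructing
    datapoint instantly deleted itself.)"""
    datapoints.append(new)
    datapoints[:] = [
        d for d in datapoints
        if not (d.self_destruct and d.date == new.date and d.id != new.id)
    ]
    return datapoints
```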

The only possible weirdness now is that if for some reason you try to add multiple self-destructing datapoints at once, only the last one in the list actually gets added. All the others self-destruct. Which makes sense when you think about it, so I’m inclined to call that as-designed. It also makes for an easy answer to @byorgey’s question: there can never be more than one self-destructing datapoint at a time.
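
To illustrate that edge case with the same hypothetical sketch from above (assuming a batch is just processed in order):

```python
pts: list = []
for v in [1, 2, 3]:
    add_datapoint(pts, Datapoint("2024-01-01", v, self_destruct=True))

# Each add wipes out the earlier self-destructing datapoints on that
# date, so only the last one survives:
assert [d.value for d in pts] == [3]
```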
