A Stupidly Hard Dog Logic Puzzle

(aside: how did you manage to quote-reply things in spoiler tags? i’m kind of hating the spoiler tags a bit, though maybe it’s worth it.)

to continue in spoiler tags for now:

I think you’ve got a good objection: there’s no logical guarantee that “until your state matched your current state” is ever satisfied. I have an idea for fixing it!

But first, here’s why I don’t buy your trick:

Asking a dog a question is like stating a proposition that must be either true or false and letting the dog say “true” or “false”. I think the constraint that’s in the spirit of the puzzle is that the dogs are like computer programs with access to every possible state of the universe, but the only inputs they accept are propositions like that. You can use any hypothetical you want, like “if arf meant yes”, and they’ll entertain it, but they won’t alter how they use their own language.
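Here’s a minimal sketch of that model, just to make it concrete (entirely my own toy code, not part of the puzzle; the world state, the proposition shape, and which word means yes are all made-up assumptions):

```python
# Toy model of a dog as a program: it knows every fact about the world,
# accepts only a proposition (something that is either true or false),
# and answers in its own language.

WORLD = {"sky_is_blue": True}  # stand-in for "every possible state of the universe"

def truthful_dog(proposition, word_for_yes="arf", word_for_no="ruf"):
    """Evaluate the proposition literally and answer in the dog's own words."""
    return word_for_yes if proposition(WORLD) else word_for_no

# Asking "is the sky blue?" as a proposition:
print(truthful_dog(lambda world: world["sky_is_blue"]))  # -> "arf", if arf means yes
```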

For example, if you say “if arf meant yes, then the answer to whether the sky was blue would be arf”, then the dog (let’s say it’s the truthful one) doesn’t care that you’re trying to force a reinterpretation of “arf”. It just takes the proposition totally literally. Either it goes “if arf meant yes, which it does, …” and answers “arf” for yes, or it goes “if arf counterfactually meant yes, then the answer to whether the sky was blue would indeed be ‘arf’” and truthfully reports that that conditional holds, which it speaks as “ruf”. See what I mean?
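To spell out those two readings, here’s the same toy model extended to the “if arf meant yes” hypothetical (again just my own illustration; which word actually means yes is the assumption being varied):

```python
# The proposition embeds the hypothetical, but the dog still answers in
# whatever its actual language is.

def counterfactual_proposition(world):
    # "If arf meant yes, then the answer to whether the sky was blue would be arf."
    # Under that hypothetical, the answer would be spoken as "arf" exactly when
    # the sky is blue, regardless of what arf actually means right now.
    answer_under_hypothetical = "arf" if world["sky_is_blue"] else "ruf"
    return answer_under_hypothetical == "arf"

def truthful_dog(proposition, world, word_for_yes, word_for_no):
    # Takes the proposition totally literally, then reports its truth value
    # in the dog's OWN language.
    return word_for_yes if proposition(world) else word_for_no

world = {"sky_is_blue": True}
print(truthful_dog(counterfactual_proposition, world, "arf", "ruf"))  # arf really means yes -> "arf"
print(truthful_dog(counterfactual_proposition, world, "ruf", "arf"))  # ruf really means yes -> "ruf"
```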

You’ve got to turn every question into an unambiguous proposition that is either true or false, and the (truthful) dog will return that truth value (in its own language).

Now to address the problem in my solution that, as you pointed out, violates these rules…

I think the problem with it is similar to posing to a truthful dog the proposition “If I asked you again and again until you became a liar, you would say that 2+2=5”. I think that statement is just vacuously true, as is the version with 2+2=4, because the antecedent is false: a dog that stays truthful never becomes a liar. So maybe I’m not technically violating the rules, but my solution does fail in the case of a Random dog who, with infinitesimal probability, never again changes state.
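(To double-check my own reasoning: that’s just material implication with a false antecedent, which comes out true no matter the consequent. A quick toy check, with a made-up flag standing in for “you ever become a liar”:)

```python
# "If P then Q" read as material implication is (not P) or Q, so a false
# antecedent makes the whole statement true no matter what Q says.
def implies(p, q):
    return (not p) or q

ever_becomes_a_liar = False  # hypothetical: this dog never changes state
print(implies(ever_becomes_a_liar, 2 + 2 == 5))  # True, vacuously
print(implies(ever_becomes_a_liar, 2 + 2 == 4))  # True, vacuously
```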

My idea for a fix would be to add a disjunct like “or, if you’ll never again change back to this state, then…”

…but I haven’t stepped through it to make sure that works. I can’t decide if it’s ok that my solution has this failure case. Taking “random” literally, my solution fails with probability zero. But taking “random” to mean arbitrary/byzantine, it would be nice not to fail in the case of a Random dog who’s truthful now but never will or would be in any other state of the universe. Or vice versa.