A Stupidly Hard Dog Logic Puzzle

You encounter 3 magical dogs. One always tells the truth, one always lies, and one is Random. Whenever Random is asked a question, it flips a coin in its head to decide whether to be a liar or a truth-teller. (This is different from ignoring your question completely and just saying yes or no at random, though for normal, non-self-referential questions it’s probably equivalent.) The other complication is that yes and no in dog language are “arf” and “ruf” and those are perfectly consistent but you don’t happen to know which is which.

You can ask a total of 3 yes/no questions, each addressed to a single dog of your choosing. Your goal is to learn the identities – truth-teller, liar, random – of all 3 dogs.

What do you ask?

I am pretty sure this is completely impossible unless we are guaranteed that the dogs know something about each other. For example, do the dogs know the setup and their identities? If they don’t know anything or at least we have no guarantees that they know anything in particular then I don’t see what the puzzle actually is.

Oh yes, the dogs are omniscient! PS: I also just edited it to clarify that it’s 3 yes/no questions.

…I know this one without the problem of arf v ruf, I think, but with that as well… :scream:

I think this is another rules question for @dreev, but if you’re submitting the questions in a language of your choice (e.g. English) then it must be that the dogs speak both your language and theirs. So it seems like you could lead the questions with “Assuming ‘arf’ means ‘yes’, …” and then do the usual business of asking them hypotheticals about what the other dogs would say. (I think we also have to assume that “liar” means “always answers the opposite of what the truth-teller would say” as opposed to being Byzantine.)
I think that works because the most basic question of that form would be “Assuming ‘arf’ means ‘yes’, does ‘arf’ mean ‘yes’?”, the answer to which should be ‘arf’ if you’re telling the truth and ‘ruf’ otherwise. So then just replace “does ‘arf’ mean ‘yes’?” with the actual question you want.

Are all 3 questions asked to all 3 dogs, resulting in 9 individual inquiries made?

Smart! But I’m going to rule that out on the following grounds: Consider “assuming arf means yes, is the pope protestant?” and suppose arf doesn’t mean yes. Then the question is equivalent to “if 1=2, is the pope protestant?” and that’s vacuously true. So even if the dog plays along and switches the meanings of arf and ruf for you, you don’t end up learning the answer to the question!

But yes, the lying dog is not Byzantine. If you knew you were talking to the lying dog it’d be just as omniscient a source of knowledge as the truth-telling one. Just flip its answers!

Negative, 3 questions total, each addressed to a single dog! (PS: I edited the question to make this clearer – thanks!)

This is an invalid analogy. I’m not saying “suppose arf and yes are equivalent concepts” (i.e. suppose 1=2 in the formal sense, which is a contradiction) I’m saying “suppose the utterance ‘arf’ refers to the same concept underlying the utterance ‘yes’” (i.e. suppose ‘1’ refers to the quantity 2, e.g. 1+1=4 in the formal system (which still functions normally) and the only way to refer to the quantity of a single thing is indirectly, e.g. (4-3) yields the quantity 1 but you can’t write it ‘1’ anymore).

Does that make sense?

That does make sense and that’s a super clever hack! I think the problem is still fairly stupidly hard even if that’s allowed, i.e., if the dogs just speak English. But I still want to disallow it. My new rationale is that the dogs simply don’t accept the macro substitutions. You’re saying “I’m going to ask a question and I’d like you to answer as if ‘arf’ meant yes” and the dog’s like “sorry, I’m going to answer it how I answer it”.

PS: In a couple hours (24 hours after posting this) I shall describe a trick I knew about that gives a huge head start on these kinds of problems and made it fun for me to work through the monkey wrenches in this one (the random dog, and arf vs ruf).

First, a non-helpful thing I figured out:

If you find a way to get true answers, you can solve the arf-vs-ruf dilemma with “is the pope catholic?” because whatever the answer to that is – arf or ruf – it means yes. But it turns out we can’t afford to waste a question on that. In fact, I believe it’s not even possible to learn the 3 identities and what arf and ruf mean with only 3 bits of information: there are 6 orderings of the dogs times 2 possible meanings of “arf”, i.e., 12 hypotheses, but 3 yes/no answers can only distinguish 2^3 = 8 of them. So we have to somehow take these 3 arfs/rufs and infer the identities of the dogs without ever learning what arf and ruf actually mean!
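
That counting argument can be sanity-checked by brute force. A minimal sketch (the hypothesis space here – orderings of the dogs crossed with the two possible meanings of “arf” – is my framing, not anything the puzzle states directly):

```python
from itertools import permutations

# Each hypothesis is an ordering of the three dogs plus a meaning for "arf".
orderings = list(permutations(["truther", "liar", "random"]))
hypotheses = [(o, arf) for o in orderings for arf in ("yes", "no")]

# Three questions, each answered arf-or-ruf, give at most 2^3 outcomes.
answer_sequences = 2 ** 3

print(len(hypotheses), answer_sequences)  # 12 hypotheses vs 8 sequences
```

By pigeonhole, 12 hypotheses can’t all map to distinct 3-answer sequences, so something has to stay unlearned – and conveniently it can be the meanings of arf and ruf.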

And now a kind of hint, or, basically the solution to all such logic puzzles with truthers and liars or knights and knaves or whatever the puzzle calls them. Including a version so classic it’s in Dr Who. But not this one with the Random and arf-vs-ruf monkey wrenches.

I call this the Double Negative Indirection (DNI) trick. (It has a standard name in the literature which I won’t mention yet to keep all this as ungooglable as possible.) If you know that someone either always tells the truth or always lies you can ask them “what would you say if I asked you proposition P?” Truth-tellers obviously tell you whether P is true and liars lie to you about what they would say, which means they also tell you the truth. Cuz they lie about their lying! Pretty guileless for an inveterate liar but we’ll let that slide.
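
For the consistent dogs, the double negative can be checked mechanically. A minimal sketch (True stands for “yes”; the function names are mine):

```python
def answer(lies, proposition):
    # A consistent dog's yes/no answer (True = yes) to a proposition.
    return (not proposition) if lies else proposition

def dni(lies, p):
    # "What would you say if I asked you P?" -- the dog applies its
    # policy to the proposition "my answer to P would be yes".
    hypothetical = answer(lies, p)      # what it would say to P
    return answer(lies, hypothetical)   # its (possibly lying) report of that

# Truth-teller truths about truthing; liar lies about lying.
for lies in (False, True):
    for p in (False, True):
        assert dni(lies, p) == p  # the truth comes out either way
```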

But that doesn’t work with Random. If Random is truthful then it’ll peek into the random number generator of the hypothetical world where you ask it the embedded question and might truthfully tell you that it’ll lie in that case. Or vice versa. Point is, we lose the nice “either truthing about truthing or lying about lying, yielding the truth either way” property.

But we can fix that! Stay tuned…

Consider this: all you need to do is to reduce it to the problem you already know how to solve, and you’re done!

Perhaps the hint that doing so is even possible is enough, but here’s a guide on how to do it:

Whenever you’d normally ask a question X, instead ask “Would your answer to X be ‘arf’?” Then treat a response of “arf” to that question like a response of “yes” to X, and a response of “ruf” like a response of “no”.

Why does it work? Well, consider if “arf” really means “yes”. Then, the question is “Would your answer to X be ‘yes’?” This is equivalent to just asking X directly. If the correct (truthful) answer to X would be “yes”, then the correct answer to “Would your answer to X be ‘yes’?” is also “yes” (that is, “arf”). If the correct answer to X would be “no”, then the correct answer to “Would your answer to X be ‘yes’?” is “no” (that is, “ruf”). So regardless of whether the dog you’re talking to gives the correct answer or inverts it (either because it’s the lying dog or the random dog who happens to be lying this time), that dog would give the same answer (truthfully or not) that they would give to X itself.

Consider on the other hand what if “arf” really means “no”. Then, the question means “Would your answer to X be ‘no’?” This is equivalent to asking the opposite of X (that is, the question whose answer is the inverse of X’s answer) for reasons much like the above. So, if the dog’s answer to X would be “yes” (that is, “ruf”), then their answer to the opposite question from X would be “no”/“arf”. (Again, regardless of the dog’s truthfulness or lack thereof.) Likewise, if the dog’s answer to X would be “no”/“arf”, then their answer to the opposite of X (and thus also to your question) would be “yes”/“ruf”. So as you can see, again the dog is answering “arf” if and only if their answer to X would be “yes”.

You don’t know which is the case – whether you’re asking the dog to invert the answer or not – but either way you can now treat “arf” as meaning “yes”, because an “arf” in answer to your question is equivalent to a “yes” to question X. This works for all X, so you can take the three questions you’d ask and put them into this template, and proceed exactly as you would if the dogs spoke English.
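
A brute-force check of that claim, under the reading used in the argument above – that the proposition being evaluated is whether the correct answer to X, spoken in dog language, is “arf” (all names here are illustrative):

```python
def utter(yes, arf_means_yes):
    # Speak a yes/no answer (True = yes) in dog language.
    return "arf" if yes == arf_means_yes else "ruf"

def respond(dog_lies, proposition):
    # The dog's yes/no answer to a proposition, per its policy.
    return (not proposition) if dog_lies else proposition

def template_response(x_true, dog_lies, arf_means_yes):
    # "Would the answer to X be 'arf'?" -- reading "the answer" as the
    # correct answer to X spoken in dog language.
    inner = utter(x_true, arf_means_yes) == "arf"
    return utter(respond(dog_lies, inner), arf_means_yes)

# Claim: the dog says "arf" exactly when its own answer to X would be yes,
# for every combination of X's truth, lying-or-not, and what "arf" means.
for x_true in (False, True):
    for lies in (False, True):
        for amy in (False, True):
            said_arf = template_response(x_true, lies, amy) == "arf"
            assert said_arf == respond(lies, x_true)
```

So an “arf” response to the template can always be decoded as “this dog answers yes to X”, without ever learning what “arf” means.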

I think @zzq got a key piece of this – how to convert any question A into a question A’ where hearing “arf” to question A’ means the answer to question A is yes and hearing “ruf” means the answer to question A is no. You don’t know what “arf” and “ruf” mean but you know the answer to the question you care about.

Except that doesn’t work with Random. So, since it’s been a week, I’m giving my solution (including my own explanation of the piece @zzq already explained, as well as repeating the bit about the Double Negative Indirection trick so this is self-contained). Here it is:

[UPDATE: I’m removing the spoiler tags because it messes up some of the formatting. Look away!]

We’ll start with what I call the Double Negative Indirection (DNI) trick. The standard name in the literature is the embedded question lemma. (And, by the way, this problem is known as The Hardest Logic Puzzle Ever so… I guess I should’ve started with some easier ones!) The DNI trick works if you know that the creature you’re talking to either always lies or always tells the truth. You just ask them “what would you say if I asked you proposition P?” Truth-tellers obviously tell you whether P is true and liars lie to you about what they would say, which means they also tell you the truth. Cuz they lie about their lying! Pretty guileless for an inveterate liar but we’ll let that slide.

But that doesn’t work with Random. If Random is truthful then it’ll peek into the random number generator of the hypothetical world where you ask it the embedded question and might truthfully tell you that it’ll lie in that case. Or vice versa. Point is, we lose the “either truthing about truthing or lying about lying, yielding the truth either way” property.

Here’s how we can fix that:

“If I asked you again and again until your state matched your current state, would you, at that point, agree that snibgiblets are slippery?” (Let’s say the answer to the embedded question – snibgiblets being slippery – is “arf”. Not that we know what “arf” means, but we’ll come back to that.)

  • For truthers and liars, the “until your state matched” doesn’t change anything – we’re in the usual situation of truthers saying “arf” cuz that’s the answer and liars lying about their lying and saying “arf” too.
  • A random truther would say “arf” because “arf”, being the truth, is truly what it’d say eventually, next time it was in a truthful state.
  • For a random liar, next time it’s in its lying state again it’d falsely say “ruf”. But it’s a liar right now and will lie about that lying and say the opposite of “ruf”, i.e., “arf”!
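
The three bullets reduce to the same double negative with the coin state pinned down. A sketch where Random’s current state is modeled as a “lying right now” flag (my modeling choice, not part of the puzzle statement):

```python
def respond(lying, proposition):
    # Yes/no answer (True = yes) given by a dog in a given state.
    return (not proposition) if lying else proposition

def state_matched(lying_now, p):
    # "If I asked you again and again until your state matched your current
    # state, would you, at that point, say yes to p?"
    eventual = respond(lying_now, p)      # its answer in the matched state
    return respond(lying_now, eventual)   # reported from the current state

# Truther (lying_now=False), liar (True), and Random in either coin state
# all end up telling the truth about p.
for lying_now in (False, True):
    for p in (False, True):
        assert state_matched(lying_now, p) == p
```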

So we’re almost there and just need to pull in @zzq’s solution to the arf-vs-ruf problem! Here it is:

“If I asked you again and again until your state matched your current state, would you, at that point, agree that THE ANSWER TO WHETHER SNIBGIBLETS ARE SLIPPERY IS ARF?”

By the argument above, we’re always hearing the truth about the all-caps part. So consider the 4 possible worlds:

  1. Snibgiblets are slippery and arf means yes => the answer is yes and we hear “arf” (i.e., yes).
  2. Snibgiblets are slippery and arf means no => the answer is no and we hear “arf” (i.e., no).
  3. Snibgiblets are grippy and arf means yes => the answer is no and we hear “ruf” (i.e., no).
  4. Snibgiblets are grippy and arf means no => the answer is yes and we hear “ruf” (i.e., yes).

Voila. We still don’t know what “arf” and “ruf” mean but we can treat “arf” as yes because that’s what we hear if and only if the answer to the super-embedded question, the slipperiness of snibgiblets in this case, is yes.
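
Those 4 worlds can be enumerated exhaustively. A minimal sketch, assuming (per the state-matching argument above) that the spoken response is the truthful answer to the all-caps proposition:

```python
def utter(yes, arf_means_yes):
    # Speak a yes/no answer (True = yes) in dog language.
    return "arf" if yes == arf_means_yes else "ruf"

def heard(slippery, arf_means_yes):
    # Truthful spoken answer to: "the answer to whether snibgiblets
    # are slippery is 'arf'".
    proposition = utter(slippery, arf_means_yes) == "arf"
    return utter(proposition, arf_means_yes)

# In all 4 worlds we hear "arf" exactly when snibgiblets are slippery.
for slippery in (True, False):
    for amy in (True, False):
        assert (heard(slippery, amy) == "arf") == slippery
```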

So that was the interesting part and we can kinda brute-force it from here and use our 3 questions to do like a binary search through the 6 possible permutations of the 3 dogs. (Or gods – I changed it to dogs to make this less googlable.)

  1. TLR: Truth-teller, Liar, Random
  2. TRL: Truth-teller, Random, Liar
  3. LTR: Liar, Truth-teller, Random
  4. LRT: Liar, Random, Truth-teller
  5. RTL: Random, Truth-teller, Liar
  6. RLT: Random, Liar, Truth-teller

So the first question can be:

If I asked you again and again until your state matched your current state, would you, at that point, agree that the answer to whether you three are lined up in one of the first three orderings in this list is arf?

That question eliminates half the list and you can just repeat with the remaining list at most 2 more times!
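
The endgame can be simulated. A sketch, assuming each state-matched question reliably yields the truth about “is the lineup in this subset?” with “arf” treated as yes (function names are mine):

```python
from itertools import permutations

# The 6 orderings, generated in the same order as the numbered list above.
ORDERINGS = ["".join(p) for p in permutations("TLR")]

def identify(true_order):
    """Binary-search the candidate list; return (answer, questions asked)."""
    remaining = list(ORDERINGS)
    asked = 0
    while len(remaining) > 1:
        first_half = remaining[: (len(remaining) + 1) // 2]
        asked += 1
        # "...would you agree that the answer to whether you three are
        # lined up in one of these orderings is arf?" -- yields the truth.
        if true_order in first_half:
            remaining = first_half
        else:
            remaining = remaining[len(first_half):]
    return remaining[0], asked

# Every lineup is pinned down in at most 3 questions.
for order in ORDERINGS:
    guess, n = identify(order)
    assert guess == order and n <= 3
```

The worst case is 6 → 3 → 2 → 1, i.e., exactly the 3 questions we’re allowed.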

You’re asking a question here that includes a hypothetical about the state of the world that may or may not ever be true. Can you explain why this is permissible but my above strategy of asking a hypothetical regarding language was invalid?

In both cases we’re saying to the dogs “Imagine a hypothetical world where in the future a specific event has happened” (i.e. “you flipped the coin a certain way” or “the dog language evolved to treat certain words differently[1]”). These future events aren’t impossible but there’s also no guarantee that they will ever happen. There’s no finite amount of time or flips that will ensure you flip heads or change your language.

Alternatively (if you don’t buy the above), the suggestion “If I asked you again and again until…” is describing a sequence of events that might never stop. So how do you ask a question about something that comes after a potentially infinite sequence? It’s like asking someone what the unreachable code under an infinite loop will do when it’s run. (And if you say that you’ll amend the hypothetical to be “Suppose you flipped the coin the same way” without reference to attempting in a sequence, then I think you are making my argument above stronger.)

[1] This isn’t the original sense in which I meant my suggestion. I did mean macro substitution at that time, but I’m trying to cast that in a new light here.

(aside: how did you manage to quote-reply things in spoiler tags? i’m kind of hating the spoiler tags a bit, though maybe it’s worth it.)

to continue in spoiler tags for now:

I think you have a good objection about not having a logical guarantee that “until your state matched your current state” is ever satisfied. I have an idea for fixing it!

But first, here’s why I don’t buy your trick:

Asking a dog a question is like stating a proposition that must be either true or false and letting the dog say “true” or “false”. I think the right constraint that’s in the spirit of the puzzle is that the dogs are like computer programs with access to every possible state of the universe but the only inputs they accept are propositions like that. You can use any hypothetical you want, like “if arf meant yes”, and they will entertain it, but they won’t alter how they use their own language.

For example, if you say “if arf meant yes, then the answer to whether the sky was blue would be arf” then the dog (let’s say it’s the truthful one) doesn’t care that you’re trying to force a reinterpretation of “arf”. It just takes the proposition totally literally. It will either be like “if arf meant yes, which it does, …” and answer “arf” for yes, or it will be like “if arf counterfactually meant yes, then the answer to whether the sky was blue would indeed be ‘arf’” and it will truthfully report that the answer to that is yes, which it speaks as “ruf”. See what I mean?

You’ve got to turn every question into an unambiguous proposition that is either true or false and the (truthful) dog will return that true/false value (in its own language).

Now to address the problem in my solution that you pointed out violates these rules…

I think the problem with it is similar to posing to a truthful dog the proposition: “If I asked you again and again until you became a liar, you would say that 2+2=5”. I think that statement is just vacuously true, as is the version with 2+2=4, because the conditional is false. So maybe I’m not technically violating the rules but my solution does fail in the case of a Random dog who, with infinitesimal probability, never again changes state.

My fix idea would be a disjunction like “or, if you’ll never again change back to this state then…”

…but I haven’t stepped through it to make sure that works. I can’t decide if it’s ok that my solution has this failure case. Taking “random” literally, my solution has a literally 0% chance of failing. But taking “random” to mean arbitrary/byzantine, it would be nice to not fail in the case of a Random dog who’s truthful now but never will or would be in any other state of the universe. Or vice versa.

You mean how did I get the text to copy-paste? I copied it out of the dom after right-clicking and choosing “Inspect”. Horrible user experience lol

Ok, I see what you’re saying and I concede that this is a different line to draw than the thing I’m complaining about. So I surrender. :slight_smile:

Anyway…

Unfortunately I may have gotten myself into trouble while trying to research something to disprove your statement “my solution has a literally 0% chance of failing”.

It seems intuitive to me that you’re mistaken when you say that, because it’s never impossible for a sequence of coin flips to come up all heads.

But here (Almost surely - Wikipedia) it says:

In my case, I’m trying to argue that the infinite sequence of all heads has a non-zero probability which wrecks your solution. But the quote seems to say that once I’ve specified the infinite sequence I care about, it has probability 0 of happening even though we already agreed it is possible. I don’t understand how this is so.

Yeah, it sounds contradictory but is pretty standard! It’s “possible” to flip a fair coin heads an infinite number of times in a row in the sense that that is one of the possible outcomes of an infinite list of coin flips. To get the probability we have to take the limit of 1/2^n as n goes to infinity and the answer to that is zero.
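
The bound is easy to see numerically (a tiny sketch; each extra required head halves the probability):

```python
# "All flips are heads" implies "the first n flips are heads", whose
# probability is (1/2)**n -- so P(all heads) <= (1/2)**n for every n.
for n in (1, 10, 50):
    print(n, 0.5 ** n)
# The only nonnegative number below (1/2)**n for every n is 0.
```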

But what does the phrase “possible” mean if not “probability > 0”? It seems like the logical deduction ought to be:

  1. We agree this sequence is “possible” which means by definition it has probability > 0.
  2. We have a mathematical formula which calculates the probability (the limit business). It says the probability is 0.
  3. We’ve reached a contradiction, so we know #2 is wrong (because #1 is a definition).

That’s not the definition of “possible” though! When infinity isn’t involved, the two are equivalent, but the definition is like I said: the possible outcomes are the ones in the list of things that could happen.

Damn and blast!
