Having a graphical representation of the state register and the action table doesn't suddenly make it not a Turing machine. It's completely straightforward to translate these doodles into the usual technical description of Turing machines. These *are* Turing machines. (What's not at all clear is whether an arbitrary Turing machine can be given in this notation, because the 2-by-n grid may be too restrictive.)
To head off the obvious objection, the point isn't just that you *can* translate this notation into the standard one for state registers and action tables, as then any computer program would be a Turing machine. Instead, the point is that you can do this translation without any cleverness at all. Similarly, just because you use trinary instead of binary wouldn't suddenly make it not a Turing machine, because the translation is completely straightforward without any important ideas.
(Well, I guess sorting out how to translate the "loop only 3 times" doohickey takes a very small amount of cleverness.)
I don't see how cleverness enters into it, or where trinary/binary enters into anything. (The tape can have a 1, a 0, or a blank on it, and the 1s and 0s are the only thing that suggests anything to do with binary to me—but those are just arbitrary symbols.)
Part of the point being made in 1-3, I take it, is that the thing with control flow is a way of graphically representing a state register and an action table. Control flow going all the way to the right is just a way of graphically representing the end of the finite set of instructions.
(Oh, actually it's the first thing said in 1! Right, that's where I got it.)
To take some more straightforward examples, it'd still be a Turing machine if you used "one" and "zero" instead of "1" and "0." It'd also still be a Turing machine if the tape were infinite in only one direction. It'd still be a Turing machine if the action table was written in Spanish.
Going all the way to the right should be a way of graphically representing being put into an accepting state. There's a finite number of states and a finite alphabet, but if there's a finite number of symbols on the tape, that's a different matter—you could come to the end of the symbols on the tape without being in an accepting state. Anyway, I still want to know why cleverness of translation is an issue here. I mean, Brainfuck is Turing complete, and kind of similar to this graphical language, but I think it would be curious to maintain that it just is a Turing machine. (I may be in a minority there.)
So the solution to the puzzle for "e" on one of the iterations (must be the first) is like this:
1    2    3    4    5    6    7
<-   <-   0v   0    ->   bv   g3
          1         1^
where "g3" means go to pc 3 on this same row, "0v" means "go down if item under read head is 1", "bv" means "go down if item under read head is blank", "1^" is "go up if item under read head is 1", "1" and "0" mean to write a 1 or 0, and "<-" and "->" mean to go left or right.
You can follow this through and come up with the following states and transitions (I'm pretty sure about this, also NB that I don't really know what the conventional way of representing this stuff is):
     1    0    b
a    1Lb  0Lb  bLb
b    1Lc  0Lc  bLc
c    0Rd  1Rd  0Rd
d    0Rd  1Rd  bRx
x    []   []   []
"[]" denoting "do nothing, you're done" and "smS" meaning "write the symbol s (0, 1, or blank), move in the direction m, and enter the state S". The start state is "a", and "x" is the only accepting state.
I feel like it took some cleverness for me to do that, though (I got it wrong the first time, and maybe it's still wrong!). I'm sure it could have been done mechanically (and perhaps had it been done mechanically there would have been a different result), but of course there's no question that there are mechanical translations of things that Upetgi wouldn't consider Turing machines.
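(For concreteness, here's the ordinary kind of machine on the other end of the translation, i.e. a state register, an action table, a tape, and a head, as a Python sketch. The delta is just a dict version of the table above, with "smS" unpacked into its three parts:)

BLANK = 'b'

def simulate(delta, tape, state='a', accept=('x',), fuel=10_000):
    # delta maps (state, symbol read) to (symbol to write, move, next state);
    # moves are 'L', 'R', or 'S' for stay.
    tape = dict(enumerate(tape))
    head = 0
    while state not in accept and fuel > 0:
        fuel -= 1
        write, move, state = delta[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += {'L': -1, 'R': 1, 'S': 0}[move]
    return state, tape

delta = {}
table = {'a': ('1Lb', '0Lb', 'bLb'),
         'b': ('1Lc', '0Lc', 'bLc'),
         'c': ('0Rd', '1Rd', '0Rd'),
         'd': ('0Rd', '1Rd', 'bRx')}
for state, entries in table.items():
    for symbol, smS in zip('10b', entries):   # columns are 1, 0, b
        delta[(state, symbol)] = (smS[0], smS[1], smS[2])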
it'd still be a Turing machine if you used "one" and "zero" instead of "1" and "0."
… duh?
There's still no binary-ness here. The alphabet contains three symbols.
Oh, I see the point of 7. This is supposed to simply be an alternative notation, one in which the number and identity of the states is suppressed for some reason.
The trinary example is just that you could have a tape with 4 symbols (0, 1, 2, blank) without changing the definition in other ways. You'd still call it a Turing machine, because you haven't changed anything in an interesting way and because it's *clearly* isomorphic (you just use pairs of adjacent spots on the binary tape to notate trinary digits and do the straightforward translation). The "clearly" part is why I was talking about cleverness. If the translation isn't clear, then I can see the argument for not using the name "Turing machine."
Using "trinary" to mean "four symbols" strikes me as potentially confusing.
How is clarity of translation established?
You're the philosopher; defining things like that is your problem, not mine.
Basically the point is that if you ask 10 mathematicians how to do it, most of them would do so quickly, and anyone who got stuck would agree when the explanation was given that "oh, yeah, that is obvious."
I mean, something can be clear to you but not to me—right?—but you know more and fancier math than I do; on the other hand, someone else might know even more and fancier math than you, and have a really intuitive feel for how something that you think requires too much cleverness to count as already a Turing machine can be translated straightforwardly into the usual technical definition of a Turing machine.
Anyway, the whole "straightforward translation" thing seems like an odd thing to invoke. There's a straightforward, mechanical procedure for converting an NFA into a DFA, but it would be odd to say that NFAs are DFAs.
I have to say my intuition is that NFAs and DFAs (just read the definition) are really the same things. But there's a good argument to be made there that they're not really the same because the translation is exponential. That is, although they're the same if you're only thinking about one of them at a time, they're subtly different when you're thinking about infinite families of them.
As long as there are no "for loops" in the doodle, the translation has no such complexity-theoretic issues at all. That is, the number of states just is the number of circles in the doodle. Once you put in the "for loops," things are subtler, because you're putting in a multiplicative factor in the translation. But that's not really a big deal.
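(For anyone following along at home, the NFA-to-DFA procedure mentioned above is the subset construction, sketched here without epsilon-moves for brevity. Each DFA state is a frozenset of NFA states, which is exactly where the exponential comes from:)

def subset_construction(nfa_delta, start, accepting, alphabet):
    # nfa_delta maps (state, symbol) to a set of states. An n-state NFA
    # can blow up to as many as 2^n DFA states.
    dfa_start = frozenset([start])
    dfa_delta, seen, todo = {}, {dfa_start}, [dfa_start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(r for q in S for r in nfa_delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, dfa_start, dfa_accepting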
You're the philosopher; defining things like that is your problem, not mine.
You're invoking it, not me!
I half suspected the answer would be "how to perform the translation is well understood", but, again, that seems very odd to me. Your second comment concedes that some things don't count as Turing machines despite the availability of translations. When sufficiently many mathematicians learn how those translations are performed, will it become legit to refer to those things as Turing machines? Why wait?
Regardless of the straightforwardness of the translation, one of the reasons I object to calling these graphical dealies Turing machines is that the model you use to think about things matters, and the most straightforward way to think about these guys isn't as Turing machines (as classically described), even if you can do the translation easily; the most straightforward way is to use the graphical metaphors with which it presents you just as they are—first you go left, then you go left again, then you branch on this condition, then you write a 1, then you loop back, or whatever. And when you intervene in the graphical thing, how much more so! You change this branching condition here, or you change this loop's target, or whatever—that's a local change to the graphical model, but not necessarily a local change to the resulting Turing machine representation.
Google no longer shows me the doodle!
Lots of arguments are well-known but still clever! I wouldn't say that the existence of non-computable functions is clear, even though basically all mathematicians know the proof.
Also, I never got to any for loops, only unconditional jumps. There's only twelve of them, right?
But it is a local change to the resulting Turing machine representation! That's the point! Changing the branching condition or changing the loop's target is just changing one line in the action table and nothing else.
(Again with the caveat that the "for loop" doohickeys make things slightly annoying. Really they shouldn't have used those ones.)
Ok, sufficiently many mathematicians both know the translation method and, like, feel it in their bones or whatever. Know and understand it.
The translation in 9 must be wrong, then, because changing the first condition to branch down on 1 rather than 0 would require changing the lines for both c and d.
Boy will my face be red if Upetgi succeeds in getting me to see this straightforward translation and I end up convinced that actually the doodles should be called Turing machines.
There's some distinction I'm fumbling towards between having learned something and being able to sort it out quickly even if you never learned it. It's the same kind of distinction that's supposed to be made in patent law, where it's not an invention if any other expert would have thought of it.
Your translation is too clever. The really dumb translation has one state for each location on the grid of states.
oic. So sometimes the head doesn't move.
That is, we translate:
1    2    3    4    5    6    7
<-   <-   0v   0    ->   bv   g3
          1         1^
     0    1    b
a    0Lb  1Lb  bLb
b    0Lc  1Lc  bLc
c    0SA  1Sd  bSd
d    0Se  0Se  0Se
e    0Rf  1Rf  bRf
f    0Sg  1Sg  bSB
g    0Sd  1Sd  bSd
A    1SB  1SB  1SB
B    0Sx  1Se  bSx
Right, exactly. The states really are just the states. What's happening is just that they artificially limit the structure of lines in the lookup table. Thus it's clear that these are all Turing machines; what's not *clear* (even though it's gotta be true) is that you can write all Turing machines in this form while only increasing the number of states by a fixed multiplicative factor.
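(In code, the really dumb translation is just one loop: one TM state per grid cell, a constant number of action-table lines per state. A sketch, assuming each cell comes pre-parsed as ('left',), ('right',), ('write', symbol), ('branch', symbol, target), or ('goto', target), with targets given as cell indices:)

BLANK = 'b'
SYMBOLS = ('0', '1', BLANK)

def translate(cells):
    # One state per grid cell; running off the end goes to the accepting
    # state 'x'. Entries are (symbol to write, move, next state), with
    # 'S' meaning the head stays put.
    delta = {}
    for i, cell in enumerate(cells):
        nxt = i + 1 if i + 1 < len(cells) else 'x'
        for s in SYMBOLS:
            if cell[0] == 'left':
                delta[(i, s)] = (s, 'L', nxt)
            elif cell[0] == 'right':
                delta[(i, s)] = (s, 'R', nxt)
            elif cell[0] == 'write':
                delta[(i, s)] = (cell[1], 'S', nxt)
            elif cell[0] == 'branch':
                delta[(i, s)] = (s, 'S', cell[2] if s == cell[1] else nxt)
            elif cell[0] == 'goto':
                delta[(i, s)] = (s, 'S', cell[1])
    return delta

(One pass, three lines of the table per cell: that's the linear-time, no-cleverness-required sense in which the doodle just is its action table.)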
(I reformatted 30, I hope you don't mind.)
(Hrm, that brings up another point. Even if all mathematicians could do it, something's not clear if it requires a long calculation or a long fiddly construction. Getting what "clear" means in this context right would involve a lot of subtleties that I haven't thought through.)
32: Thanks! That looks better.
Well, I think we all learned something today.
Comity!
Instead, the point is that you can do this translation without any cleverness at all.
I'm not clever enough to follow this thread, but even I can still see that it's straightforward to translate it into the timeless "Ben, stop being a little bitch."
I don't think this post headline is referencing Missy Elliott and "Gossip Folks," but it got it stuck in my head anyway.
I can't believe you don't get the reference, Heebie.
I'd set aside "cleverness" and "clarity" and go with something closer to reducibility: the algorithm for converting one representation to another is poly-time, and the size and running time of the target representation differ by at most a constant factor. This is stronger than Turing-completeness. E.g., simulation of a TM by a Universal Turing Machine incurs a logarithmic-factor overhead.
I never understood why people got so hung up on the difference between "is" and "is isomorphic to". One of many reasons I never entirely grokked category theory, I guess.
It depends on what the meaning of the phrase 'is isomorphic to' is isomorphic to.
The problem with 40 is that it requires you to say that Brainfuck *is* a Turing machine. That feels wrong to me. Turing machines should have a tape and a head (though you don't have to call them a tape or a head).
Brainfuck is underspecified, but it's possible to construe it as having a tape and a head (the head is what's moved around by < and >, and the tape is the memory array).
However, since it's underspecified, it's also possible to construe it as having an infinitely large symbol set.
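(Here's a minimal interpreter sketched along those lines, with the tape-and-head construal made explicit; unbounded integer cells are one way of resolving the underspecification:)

from collections import defaultdict

def bf(program, stdin=''):
    tape = defaultdict(int)   # the tape: unbounded, unwritten cells read as 0
    head = 0                  # the head, moved around by < and >
    jump, stack = {}, []      # matching-bracket table for [ and ]
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    pc, inp, out = 0, iter(stdin), []
    while pc < len(program):
        c = program[pc]
        if c == '>':   head += 1
        elif c == '<': head -= 1
        elif c == '+': tape[head] += 1
        elif c == '-': tape[head] -= 1
        elif c == '.': out.append(chr(tape[head] % 256))
        elif c == ',': tape[head] = ord(next(inp, '\0'))
        elif c == '[' and tape[head] == 0:
            pc = jump[pc]
        elif c == ']' and tape[head] != 0:
            pc = jump[pc]
        pc += 1
    return ''.join(out)

print(bf('++++++++[>++++++++<-]>+.'))   # prints "A": 8 * 8 + 1 = 65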
I couldn't figure out the user interface. I think this Turing guy needs to step up his game.
Where can this doodle be found now that it's the next day?
One doesn't appear to be able to interact with it there, though.
The way to think about the similarity between two notations is not to ask whether the similarity is "clear" (whatever that means) but to look at the computational complexity of transforming one notation into the other.
Google's notation can be transformed into Turing's notation in linear time and logarithmic space, which is good enough for me to say that a Google machine is a representation of a Turing machine.
The transformation the other way is not so easy because of the layout problem. It seems doubtful that layout can be done in log space (or, depending on how you specify a Google machine, at all), so I wouldn't be happy to say that a Turing machine is a representation of a Google machine.
I see Yawnoc made the same point in #40, but I think polynomial time is conceding too much. The idea of restricting equivalence to log space transformations is that the latter have to operate piecewise on the representation because log space isn't big enough to remember the whole thing.
The point of view in 49 and 50, though not the same as the one I was using, seems quite good to me. In particular, I totally agree that "polynomial" is way too loose for the concept that we're going for. "Linear time" is a nice formalization of "not doing anything clever" and "log space" is a nice formalization of "and you don't have to do a serious calculation."
I suppose this is the time to link to Scott Aaronson's fascinating "Why Philosophers Should Care About Computational Complexity"
Hm, I'm really unsure how the argument in §6 against the waterfall argument is supposed to work. Doesn't he give the game away by starting it "suppose we actually wanted to use a waterfall to help us calculate chess moves" (emphasis added)? If I thought that meaning was a property not relative to an observer, why would it matter whether some observers need to go through a lot of bother—enough bother that they could, by going through an equivalent amount of bother, get their answer in a different way—in order to determine the meaning of some particular computation?
Scott Aaronson was born a year before me and received his Ph.D. the year I received my A.B.
Anyway it looks interesting so now I'll actually read it from the beginning.
Scott Aaronson was born a year before me and received his Ph.D. the year I received my A.B.
I know, right? That bastard.
He does lots of very interesting work on the relationship between CS and physics. I can't really speak to that philosophy paper, I skimmed it quickly and it looked interesting, but I don't really know enough about the relevant issues to know how compelling it is. But the general point that complexity theory should be of interest to philosophy seems pretty clear to me.
He does lots of very interesting work on the relationship between CS and physics.
Yeah, I've read a bit of it on his blog. I keep feeling like there's a disconnect between the fundamentally continuous way I think of physics working and the discrete way things seem to work in computational complexity theory. I suppose I should try to meet him at some point.
I keep feeling like there's a disconnect between the fundamentally continuous way I think of physics working and the discrete way things seem to work in computational complexity theory.
Just keep integrating. You'll get there.
Sorry, but the most recent thread is where we talk about things that happen to us that are amusing?
Anyways, I was just at a bar that had a banjo player who would take requests, and so he played Prince's "Purple Rain" and the Flaming Lips' "Yoshimi vs. the Pink Robots" on banjo, and it was amazing.
42 made me guffaw in the public library.