I'm reading this as an exercise in self-referential performance art.
Yeah. From the student's point of view, the problem with most learning is that it takes place before you encounter the problem. It's hard to be engaged in, say, eigenvectors before you've felt the pressing need to invent PageRank.
This is why Coursera etc. are such a great thing -- now you can pick up a University education at the point in time when the learning is useful.
(Note I'm not advocating skipping University. Learning how to get drunk and/or laid are important life skills.)
4.3: It's cheaper and easier to learn those working the midnight shift at a gas station.
4.last: Right. "We learned to tap a keg," declared Representative Steven Palazzo, a Mississippi Republican and Sigma Chi brother, who then yelled a cheer as hundreds of FratPAC donors applauded.
4. Answer given before the question is definitely a pedagogical problem.
Emphatically disagree about eigenvectors, they're fundamental. Not being curious about them is like not being curious about prime numbers.
Yeah, I never had this problem with math, including eigenvectors. It's obvious why you need to prove stuff.
5. Details or it didn't happen.
6. Definitely. Without the need to study students can concentrate on perfecting their golf game and side parting. Replacing knowing anything with knowing everyone is a pathway to many successful careers.
FWIW, my introduction to eigenvectors involved boring mech. engineering problems. (I was young, callow, and not a mech. engineer.) If I'd been told I could solve cool pattern recognition problems with them, and make a few billion, I'd have paid more attention the first time around.
re: 4
Yeah, one of the hardest things about teaching philosophy is explaining to people why a particular approach is interesting, or radical, or important. It requires a broad overview of a particular family of problems and a decent amount of historical context before it even begins to make any kind of coherent sense. Good teachers can provide a bit of that, but there is never time in 'Introduction to $foo' to also provide 'The complete history of everything, with particular emphasis on everything's relevance to $foo.'
Eigenvectors/eigenvalues/inner product spaces & etc. were one of those subjects that "clicked" for me all at once.
I learned the material, and for a while was in the stage where I could correctly follow the rules for using them but without really getting it in an intuitive way. Then all of a sudden there was an "aha!" moment when the whole structure of the subject made sense.
I'm not sure what the trigger was.
Eigenvectors/eigenvalues
TOTALLY NOT A THING, GUYS. GIVE IT UP ALREADY.
I still don't find eigenvectors intuitive, and my first introduction to them, in a high school linear algebra class, was too abstracted from any interesting problem solving for me to get interested in them at the time.
Give me 15 minutes, and you'd find them intuitive.
In fact, I could probably write it out here.
Think of a matrix as being a map, which sends vectors to vectors. The vectors get distorted in various ways - stretched, rotated, etc. A good question to ask is: which vectors get scaled, but not rotated? Those vectors are the eigenvectors, and the corresponding scaling factor is the eigenvalue.
The familiar formula is Av=bv, where A is a matrix, v is a vector, and b is a scalar. All this formula is saying is: on the left, "map the vector v by the matrix", and on the right "scale the vector v by a scalar". So solutions to that equation are exactly those vectors which scale, but don't rotate.
The really familiar formula that people vaguely remember is (A-bI)v=0, but that's just the manipulated version of the intuitive equation above.
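If you'd rather poke at that than stare at the formula, here's a minimal numpy sketch -- the matrix A is just something I made up for illustration, and numpy calls the scalar b an eigenvalue:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])            # an arbitrary 2x2 matrix, purely for illustration

    eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the v's

    for b, v in zip(eigenvalues, eigenvectors.T):
        # each v satisfies Av = bv: it comes back scaled by b, not rotated
        print(b, v, np.allclose(A @ v, b * v))

Every line prints True at the end: each of those vectors maps to a scaled copy of itself.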
What a boring entry. Sorry all, it's kind of my THING.
16: If you were really hardcore you'd use the "comments on" thread to explain eigenvectors in the sidebar.
Actually, I'd be interested to see the writeup. I tend to think that there's a big difference between how physics types understand maths vs how mathematicians understand them, but that might just be an impression left over from the way I learned.
Well, potentially you're not interested anymore now that you've seen it.
17 is great, although it does rely on an abstract understanding of "map"s.
23: Serious question - do you know how to multiply matrices?
No problem. I'll just walk ten feet behind you in public.
What I meant to say is: you can multiply a matrix by a vector, and the answer is another vector. Which might be written Ax=y, where A is a matrix, and x and y are vectors. So one way to think about that is that A is a function, you plug in x, and get out y. So A maps x to y.
I cannot help but answer math questions.
This thread is starting to make me want to just give up and say, "Math class is tough!"
No! It's not! You're just not applying yourself.
I know what a matrix is, and what a vector is, but have no idea what "map" means.
What I meant to say is: you can multiply a matrix by a vector, and the answer is another vector.
Why? How? I just remember vectors as strings of numbers and matrices as grids of numbers, and I remember vectors can represent directional lines on graphs but I have no idea what matrices are supposed to represent.
17: So, I get that in the abstract, but it's not intuitive to me when such vectors should exist. I mean, if I sit down and work it out I can find the igons of a rotation matrix, but I would've naively guessed that none exist.
Matrices really don't have a physical meaning that you can draw, the way vectors do. Sometimes a matrix is just a group of vectors, but generally it's useful to think of a matrix as a function.
34: So why do we use them? It starts to sound like a math-inspired board game.
33: Oh. Well, there's a bunch more theory that could help you with that. Are you free from 1-2:15 on Tuesdays and Thursdays this fall?
35: One major reason is that they're linear functions. Basically the game of Calculus is "approximate a curvy function with a tangent line". It's nice to play that game with higher dimensions, and matrices are the higher dimension analogue of a line.
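For what that "tangent line in higher dimensions" game looks like concretely, here's a sketch with a curvy function I made up on the spot; the matrix of partial derivatives (the Jacobian) plays the role of the slope:

    import numpy as np

    def f(v):                                  # some made-up curvy function from the plane to the plane
        x, y = v
        return np.array([x**2 * y, np.sin(x) + y])

    def jacobian(v):                           # its matrix of partial derivatives
        x, y = v
        return np.array([[2 * x * y, x**2],
                         [np.cos(x), 1.0]])

    p = np.array([1.0, 2.0])
    h = np.array([0.01, -0.02])
    print(f(p + h))                            # the real thing
    print(f(p) + jacobian(p) @ h)              # linear approximation: matrix times the small step

The two printed vectors agree to a few decimal places, which is the whole point of the approximation.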
"Map" is just another term for "function."
Matrices encode linear transformations of vectors. If you have an input vector (a_1, ..., a_n) and an output vector (b_1, ..., b_m) --in general the lengths can be different--in a linear transformation each of those b's is a linear function of the a's. That means that there exist some constant numbers c_ij's such that
b_1 = a_1*c_11 + a_2*c_21 + ... + a_n*c_n1
b_2 = a_1*c_12 + a_2*c_22 + ... + a_n*c_n2
...
b_m = a_1*c_1m + a_2*c_2m + ... + a_n*c_nm
So you can see that there are n*m c_ij's (with i going from 1 to n and j from 1 to m). You can arrange those c_ij's into a rectangle and call it a matrix.
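To make the c_ij bookkeeping concrete, here's a throwaway numerical example with n = 3 and m = 2 (the numbers mean nothing, they're just there to multiply):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])        # the input vector (a_1, a_2, a_3)
    C = np.array([[1.0, 4.0],            # the c_ij's, with i indexing rows and j indexing columns
                  [2.0, 5.0],
                  [3.0, 6.0]])

    b = a @ C                            # b_j = a_1*c_1j + a_2*c_2j + a_3*c_3j
    print(b)                             # [14. 32.]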
Let's just think about simpler cases like the line and the plane.
A linear map of the line is a way of assigning to any point x a new point ax for some constant a. Geometrically these all just look like stretching the line. (Well, if a is negative it's a bit of a stretch to call it stretching since it reflects too, but we'll still call that stretching.)
A linear map of the plane is a way of assigning to every point (x,y) a new point (ax+by, cx+dy) for some constants a,b,c,d. Geometrically there are many linear maps of this form, for example you could be a reflection (x,y) -> (-x,y) or a 90 degree rotation (x,y) -> (y,-x) or a stretching map like (x,y) -> (2x,2y).
Given a linear map of the plane it makes sense to ask whether there are any "nice" lines which are sent to themselves by the linear map. For example, the reflection (x,y) -> (-x,y) sends the x axis to itself (by rescaling by -1) and the y axis to itself (by rescaling by 1), while rotation has no lines preserved by the map.
The preserved lines are called eigenspaces, the points on those lines are called eigenvectors, and the scalar that you rescale by on that line is called the eigenvalue.
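If you want to see the reflection-versus-rotation point come out of a computation, here's a quick numpy check; the rotation case is exactly where the complex numbers mentioned just below come in:

    import numpy as np

    reflection = np.array([[-1.0, 0.0],    # (x, y) -> (-x, y)
                           [ 0.0, 1.0]])
    rotation = np.array([[ 0.0, 1.0],      # (x, y) -> (y, -x), a 90 degree rotation
                         [-1.0, 0.0]])

    vals_refl, _ = np.linalg.eig(reflection)
    vals_rot, _ = np.linalg.eig(rotation)
    print(vals_refl)   # [-1.  1.]: the x axis is rescaled by -1, the y axis by 1
    print(vals_rot)    # [0.+1.j 0.-1.j]: no real eigenvalues, so no preserved lines in the real plane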
33: Working over the real numbers rotation doesn't have eigenvectors, but if you work over the complex numbers, then it does. Since two complex dimensions is 4 real dimensions (i.e. 1-dimensional complex space is the "complex plane", so two of them give you a space that's 4-dimensional over the reals) it becomes very hard to visualize the eigenvectors.
The Matrix is everywhere. It is all around us. Even now, in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work... when you go to church... when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth. ... That you are a slave...Like everyone else you were born into bondage. Born into a prison that you cannot smell or taste or touch. A prison for your mind.
Straight lines are easier to work with than not-straight lines. Algebraically, linear things have two convenient properties: If you have a number N and two vectors x and y and a linear function f, you know that:
f(x) + f(y) = f(x+y)
N*f(x) = f(N*x)
Those facts are really nice because you can take things you've found out about your input, about x and y or the space they live in, and make corresponding claims about the output or the space that f(x) and f(y) live in.
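Those two properties are easy to check numerically for any matrix you like; a throwaway sanity check, with numbers picked at random:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])      # any matrix will do
    x = np.array([1.0, -1.0])
    y = np.array([0.5, 2.0])
    N = 7.0

    print(np.allclose(A @ x + A @ y, A @ (x + y)))  # True: f(x) + f(y) = f(x+y)
    print(np.allclose(N * (A @ x), A @ (N * x)))    # True: N*f(x) = f(N*x)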
36: I'll be here, if that's what you mean.
40: Sure. It gets mysterious pretty quickly. You start with lines in a plane, and all of a sudden you need Euler's formula.
I had linear algebra a few years before The Matrix came out, so I didn't quite get to cash in on those jokes. But I got HUGE, MAJOR HUGE mileage from Cypress Hill's A to the K? A to the motherfucking K, homeboy lyrics.
I refuse to believe that 17 is that confusing, aside from the word "map", helpfully clarified in 27.
Skewing! You forgot skewing!
Going back to 17:
A good question to ask is: which vectors get scaled, but not rotated?
Why is this a property of the vector? Surely what determines whether the vector gets scaled and/or rotated is the matrix it's being multiplied by, IIUC? That is, you could multiply vector A by matrix B and rotate it, but multiply vector A by matrix C and merely scale it.
ZOMG, Cypress Hill were rapping about the power method for calculating eigenvectors!! They invented page rank!
Why is this a property of the vector? Surely what determines whether the vector gets scaled and/or rotated is the matrix it's being multiplied by, IIUC?
Of course, it depends on both. An eigenvector is an eigenvector of a particular matrix (although not necessarily just one).
49: You *fix* the matrix. For a given fixed matrix that you're trying to understand you ask this question. Eigenvalues and eigenvectors come attached to a particular matrix.
49: Good question. The eigenvectors are indeed specific to the matrix in question. Each matrix has its own eigenvectors.
Or what everyone else said, but less supportively. You're really getting it!
It does seem intuitively weird that it depends on the vector, but if you play with the math it tells you something: that there's a whole lot of vectors that this applies to, and they're all in a line. If
A*v = b*v
Then
A*2v = b*2v
And ditto for every other scalar, not just 2. So there's this entire "linear subspace" of vectors that are eigenvectors of A. If you use the other form that heebie mentioned before, (A-bI)*v = 0, the matrix (A-bI) maps every scaled version of v to zero. You can find out a bunch of things about the structure of the space you're mapping to (especially relative to the space you're mapping from) from that fact.
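Same thing numerically, with a made-up matrix whose eigenvalue 3 and eigenvector (1, 1) I worked out ahead of time:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    b = 3.0                          # one eigenvalue of this particular A
    v = np.array([1.0, 1.0])         # a matching eigenvector: A @ v equals 3 * v

    for c in (1.0, 2.0, -5.0, 0.25):
        w = c * v                    # any rescaling of v
        print(np.allclose(A @ w, b * w),                   # still an eigenvector with the same eigenvalue
              np.allclose((A - b * np.eye(2)) @ w, 0.0))   # and (A - bI) sends the whole line to zero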
Are you calling me massive?
That's covered in the physics class, three threads over.
Emphatically disagree about eigenvectors, they're fundamental. Not being curious about them is like not being curious about prime numbers.
I think eigenvectors are way more useful and important than prime numbers.
Essear totally just, like, threw down.
My first class with matrices also left me with the "matrices are grids of numbers" understanding. What could be more exciting?!
34 Matrices really don't have a physical meaning that you can draw, the way vectors do.
Sure they do. You can draw a rotation, or a shear transformation, or whatever.
You can draw the before and after, and use that to guide your intuition, but is it really helpful to say you've drawn the matrix itself?
Eh, I like number theory a lot, but I don't think I can really disagree with 59. Linear algebra really is humankind's greatest accomplishment.
Okay, fine, if you want to be all technical about it.
Obviously I talk about drawing functions all the time, and it's useful to know what a parabola looks like, but a matrix isn't a thing floating in 3D or something, which is how I took the question to be asking.
God made the real numbers, humans just got weirdly obsessed with the ones they could count on their fingers.
Oh wait I mean God made the complex numbers, of course.
Of course if you're doing graph theory/network analysis then the matrix represents the graph or network that you're analyzing.
The eigenvalues and eigenvectors can then tell you a lot about how the network is structured.
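One tiny example of the sort of thing that means in practice (this is just one use, eigenvector centrality, on a three-node graph I made up):

    import numpy as np

    # Adjacency matrix of the path graph 0 -- 1 -- 2
    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])

    vals, vecs = np.linalg.eigh(A)        # eigh, because the adjacency matrix is symmetric
    centrality = np.abs(vecs[:, -1])      # eigenvector for the largest eigenvalue, taken up to sign
    print(vals[-1], centrality)           # the middle node gets the biggest score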
67 without seeing how essear totally caved in 65.
Oh wait I mean God made the complex numbers, of course.
God made the quaternions.
70: The utility of the adjacency matrix was one of my "woah" moments in undergrad. Simple and awesome but it didn't feel like it should work.
Each matrix has its own eigenvectors.
That's why they're EIGENvectors, DUH.
This is really maddening. I liked linear algebra (although not to have any particular sense of anything useful to do with it) and did well in the class. And haven't had occasion to think about it for about twenty years. This conversation almost makes sense, and probably would if I spent a couple of hours looking at a textbook, but I can't quite remember anything well enough to follow what you're all talking about. (I'm going to be a little sad when the kids get past the math I can remember well enough to be helpful with.)
I'm pretty skeptical that there are really complex numbers in reality. I know quantum mechanics makes it look that way right now, but it's just implausible to me that uncomputable numbers could be relevant to anything. Where did they come from in the first place?
But linear algebra is still important even if you're working over the field of computable complex numbers.
77: Sure, of course not. But it seemed like we were going up the list of division algebras in the Frobenius theorem.
78 agrees with my intuition, but I don't have the physics to back it up.
It's funny how pretty much everyone outside of a certain subset of theoretical physicists seems to agree with 78. It's inconceivable to me that anything fundamentally discrete could be consistent with empirical evidence about how our universe works, but maybe I'm just not clever enough.
But hey, at least Stephen Wolfram is on your side, and he's the greatest scientist since Newton. (Or is he greater than Newton? I forget.)
Where did they come from in the first place?
Sacramento.
When I hear the word "octonion" I reach for my gun.
82: Newton hardly counts as a scientist, what with his slavish insistence on doing things the old way.
Should I weigh in with "math is nothing but a cognitive construct" just to really flesh out the debate?
Newton was an alchemist, right?
Matrix algebra was my math Waterloo.
I don't trust anything but natural numbers
Useful is IMO a crappy way to choose what's worth paying attention to.
Prime numbers lead to the Riemann zeta function and a bunch of stuff that I don't understand but consistently enjoy learning more about.
Linear algebra is OK for geometry and computer stuff, but really kicks in with PDEs-- orthogonal polynomials, Jacobians. Adjacency matrices and graph theory are also pleasant.
Matrix algebra was my math Waterloo.
I thought all the folks in Waterloo were into quantum computing.
I'm pretty skeptical that there are really complex numbers in reality.
I really have no idea what to make of that claim.
Non-discreteness, and the corresponding uncomputable numbers, seems like it could violate the Church-Turing thesis. So if the universe is fundamentally continuous, either the thesis is wrong or there must be some process that makes those numbers unavailable. My intuition is that the former is wrong (as we've looked very hard for counterexamples) but I'm too ignorant to make any claims about the latter. Hence, barring any ignorance-quashing, the feeling that discreteness is right.
86: You're nothing but a cognitive construct.
Prime numbers lead to the Riemann zeta function and a bunch of stuff that I don't understand but consistently enjoy learning more about.
And why are the Riemann zeta zeroes on the critical line? Because their imaginary parts are given by the eigenvalues of some nice Hermitian operator, of course.
(Err, probably. One hopes.)
91: We'll need to first determine if there are real natural numbers before we get to real imaginary numbers.
I mean I don't know what you'd be denying if you said "there are no complex numbers in reality" or what you'd be affirming if you said "there are complex numbers in reality".
I wasn't saying discrete, just computable.
93: right?
I once convinced the sysop of a local BBS that I couldn't hew to the "real names only" rule because I was an AI construct and therefore didn't have a real name in any meaningful sense.
I invite all the 78-believers to show me a discrete theory that doesn't flagrantly violate bounds on Lorentz invariance by many orders of magnitude. (Not even getting into how quantum mechanics could be made fundamentally discrete.)
I don't really see what the conflict with the Church-Turing thesis is, but I like how this is a case where people with different well-informed backgrounds have violently opposed intuitions about what's right.
Oh, and 78 has to be trollery. Fundamental theorem of algebra, anyone? Contour integration.
97: Doesn't that imply discrete? Aren't there lots of uncomputable numbers in any interval between two computable numbers?
97: You're right, I was muddying the waters by incorrectly conflating the concepts.
96: The question (at least for me) is something like "which of these two mathematical constructs more closely resembles/models reality at a low level"; it doesn't have anything to do with the reality of numbers in a Platonic sense.
Oh man. Googling terms relevant to the current discussion led far too quickly to Roger Penrose.
Thank you people so much for posting stuff that makes bar study more appealing.
I think part of why the thing about numbers not being computable doesn't bother me is that we're fundamentally unable to ever measure anything with more than finite precision.
Is the idea that at the moment the universe came into being it started with infinitely precise non-computable numbers and ever since then there's been no new non-computable numbers appearing? Or is there some ongoing process which produces non-computable numbers?
(And don't try looking at the state at an uncomputable time, because any experiment you can ever build is going to be looking at a computable time.)
it doesn't have anything to do with the reality of numbers in a Platonic sense.
Which is handy, because there isn't one! (Just doing my part, just doing my part.)
105: But then how could you possibly be getting evidence of noncomputable numbers in real physics?
Are there any good discussions about the reality of mathematics, accessible by a layman with a short attention span?
California civil procedure? Yes please! Just keep those fucking matrices away.
(And don't try looking at the state at an uncomputable time, because any experiment you can ever build is going to be looking at a computable time.)
Dude, what does that even mean? How does anything happen "at" a time? Everything I ever "look at" is smeared over pretty long intervals in time.
You can't draw what a matrix is, just what it fucks like.
essear is slowly coming around to my side of things.
108: We have good models that work with the complex numbers that fit what we observe. I can't imagine how to rearrange them to only involve computable numbers in a way that wouldn't do horrible violence to the structure of the theory.
110: Never been tested. You know they are going to f'ing start this year though....
What's the key property that distinguishes the complex numbers from the computable complex numbers in terms of the role that they play in those theories?
109: I found this book fairly influential, if probably extremely wrong in key ways, but the last time I brought it up the consensus seemed to be that the commentariat hated it a lot on principle.
Certainly any calculation you do in the theory is going to be computable.
In what sense are complex numbers uncomputable?
101: The set of computable numbers (for a model of computation, etc.) is countable but not necessarily discrete. The rationals are dense in the reals and even though they have the property you described, they're not discrete.
As for what it'd do to Church-Turing, if you have "real" real numbers you can potentially get real computers.
109. I liked Quine's short essay "Two Dogmas of Empiricism" a lot. There are or used to be others here who do not think much of it. Aside from math, biology is the other place where the way to connect abstractions with reality is none too clear.
Write down all Turing machines in some order. Consider the real number whose decimal expansion is .011101..., where the nth digit is 0 if the nth Turing machine halts and 1 if it never halts. That's a real number, but it's impossible to compute it.
116: Continuity.
120: Wait, the rationals are not discrete? What does "discrete" mean? I thought it meant "not continuous."
120.last: per that wikipedia page it sort of doesn't seem like you could? Doesn't essear's point about the impossibility of infinite precision sort of eliminate that particular problem?
Feel free to replace classical Turing machines with any reasonable quantum analogue (without using infinite precision). Although that gives some time speedups, it doesn't change the underlying notion of computable.
Surely you don't actually use continuity, what with your delta functions.
I really, really need exact Lorentz invariance or something super-close to it.
Plus it's easy to find computable analogues of continuity. Change the definition of limit to "for any computable epsilon there exists a computable delta."
Wait, what does "continuous set" mean? An open set?
To be safe, I was using Wikipedia's definition: a discrete set is a set that has (or consists entirely of, but for our purposes I think "has" is sufficient) isolated points. An isolated point is a point that has a neighborhood that contains no other points in the set.
Honestly, 105 is probably sufficient to explain to me how the space actually could consist of all the reals but we'd never see it. But if that's the case, why would we need all the reals in the first place?
Just replace the Lorentz group with the computable Lorentz group.
Is the idea that at the moment the universe came into being it started with infinitely precise non-computable numbers and ever since then there's been no new non-computable numbers appearing?
WTF does this mean? What would constitute evidence of a noncomputable number, or any number?
Any calculation you've actually ever done would work just as well in the computable setting, because well it's a calculation so it was computable.
What would constitute evidence of a noncomputable number, or any number?
Burning letters in the sand, a thousand feet across.
How can you define differentiability without real numbers, especially differentiability of a complex-valued function?
Green's functions don't work without the complex plane. Time evolution in QM is intrinsically complex-valued.
125: Yes, essear's point about finite-precision measurements is sufficient to quash that concern. As I said in 92, I'm very weak in physics and I figured there'd probably be something I don't know about (or in this case know about but very, err, imprecisely) to handle that case.
132: Basically I'm arguing you can't get evidence of non-computable numbers.
However, I do think that you can get evidence of specific numbers really appearing in physics. There are quantum mechanical systems that are theoretically predicted to involve the square root of two for good mathematical reasons, and that can be calculated approximately and keep giving you the square root of 2. To me that's pretty good evidence that they're really the square root of 2.
The problem with using a strict, dense subset of the reals is that sometimes, limits that should exist mysteriously don't, right? I wonder how much calculus you could develop with just that if you were sufficiently tricksy.
117: Math is banned.
121: Is there maybe a "Cartoon Guide" to that?
110 -- Check 83 and 78. Why do you think pass rates for the Calif bar are so low? Those damn matrices.
Derivatives really aren't a problem. First, you could just use the usual definition of derivative but restricting your attention to computable numbers in the definition of limit (and actually probably you should require that the function that inputs epsilon and returns delta is also computable). Second, probably all the differentiable functions used in physics belong to some nicer class than merely differentiable.
Unfoggetarian, since you seem to know more about this than I do: are there any proofs/counterexamples about whether the computable numbers are closed under limits?
(I'm not arguing against i here, I'm arguing against the real numbers involved being uncomputable. I'm perfectly happy to believe that physics really involves imaginary numbers.)
129. Seems like a word game. Why stop with continuity, maybe claim that there's no such thing as symmetry either.
Is this an unfair reduction: There's no such thing as (symmetry, continuity), only an observable arbitrarily precise approximation.
But symmetry and continuity are the foundations of how to think, of knowing where to look, and they pass all possible tests.
There are quantum mechanical systems that are theoretically predicted to involve the square root of two for good mathematical reasons,
As well as the diagonals of squares of unit length.
Actually, I don't know why I'm surprised. The rationals are dense in the reals and some reals are uncomputable, so some uncomputable number must be the limit of a sequence of rational numbers (which are computable). Bummer. If the computables were closed under limits, we could all walk away happy.
But you only run into that problem if you take an uncomputable sequence. So in practice you never have that problem.
I'd never heard of computable numbers before this thread. It took me this long to decide that I wasn't inadvertently saying something humiliating.
This thread makes me feel like a real genius, let me tell you.
(I may be confused about the difference between computable and computable in the limit. Still I think there are some numbers not accessible to physics.)
The only non-computable number I can recall hearing about is Gregory Chaitin's omega that's related to the Halting Problem.
I usually feel extremely stupid and humbled on Unfogged threads, so it's nice to have something to contribute for once.
I'm undecided about 148. We all think there are restrictions to what functions actually occur in physics, right? Like, even if you believe in the reals that doesn't mean you think the Cantor set is going to meaningfully occur in a physical process. (I would love to be contradicted on that.) So I guess assuming there are no such sequences isn't unreasonable, but it feels wrong when you're already assuming computableness.
152. Sure. Maybe this is a misunderstanding? Express the size of the universe as measured in Planck scales. Raise the integer 10 to this power.
That number and its reciprocal in SI units are both inaccessible to physics. But when we measure to very fine precision (with interferometry, say), there is no evidence that the continuously-valued quantities which are being combined have a discrete rather than continuous foundation.
155: Computable numbers aren't discrete. I shouldn't have used that word back in 92.
This seems relevant to the discussion; it's an intro text on constructive calculus/analysis and seems to deal entirely with the computable numbers. The marketing's bizarre, though, trying to sell it as calculus for a computer age. I'm interested in this, but I'm not sure if I'm $85-interested in it.
Still I think there are some numbers not accessible to physics.
Your comments from 78 on seem to rest on the assumption that "numbers" exist in the physical world as solid entities, or that numbers only "exist" to the extent that they can be represented by or modeled on physical properties of real world objects.
I've got two pennies in my pocket, but that doesn't mean the number "2" exists in my pocket.
Or am I just not well-versed enough in this subject to grasp what you're arguing? (See comment 28.)
So the hypothesis is that non-computable values of some physical process are just missing?
Or are they present, but can't be distinguished from their neighbors and also there's no physical process to describe the algorithm to identify them?
If the latter, sure, but so what? That seems like a statement about halting and algorithms rather than about the physical world.
That seems like a statement about halting and algorithms rather than about the physical world.
A distinction without a difference!
I've got two pennies in my pocket, but that doesn't mean the number "2" exists in my pocket.
2 is the loneliest adjective.
From my perspective, the question is: "What is the most parsimonious numerical model of the real world consistent with experiment"? The rationals are insufficient since the sqrt(2) shows up, but do we need all of the reals/complex numbers?
This does have a degree of angels-dancing-on-pins to it, but I think it is a statement about both algorithms and the real world. If you can recover certain kinds of information from the real world, that means that we need to look at algorithms in a new way. Because computer scientists think the Church-Turing thesis is true, the set of algorithms and functions that they investigate is much smaller than it would be otherwise.
Alas, it looks like such information isn't recoverable and wouldn't be even if the reals are the most parsimonious numeric model. But the question is still interesting: if you can show that the parts of calculus that are actually used by physics (or physicists) are computable, and you can prove interesting theorems about the computables that you can't prove about all of the reals, that's a win for everyone.
I think I follow 17, and to the extent I remember correctly it matches the understanding I had coming out of my linear algebra class, but if 17 was supposed to be a response to 4 et al., making them "intuitive" rather than "abstract", then it's a complete failure. I can understand them on their own terms as the vectors that are scaled but not rotated, etc., but I still have no idea what any of that has to do with PageRank, or how to use eigenvectors to make billions of dollars.
(I'm looping back to 17 because I'm not even pretending to follow the last half of this thread.)
(Also, someone should bitch at someone for hijacking prior to comment 20, because not one goddamn bit of this has anything to do with public speaking.)
Also, someone should bitch at someone for hijacking prior to comment 20
Agreed. These number theory & computation theory types hijacked a perfectly good linear algebra thread.
but I still have no idea what any of that has to do with PageRank, or how to use eigenvectors to make billions of dollars.
Oh, me neither. Intuitive has to be practical?
[I'm gonna buy a bunch of math books, hole myself up in a cabin in the woods for a few years, and come back having invented a genuinely novel human sex act that proves the existence of non-computable numbers, just you wait and see!]
(To phrase 162 a bit more seriously, I understand what eigenvectors/eigenvalues are, but have never understood why it would be useful to know that information. What problem does this allow you to solve? Intuitively, when we learn which vectors are scaled but not rotated, what are we learning?)
162: PageRank uses the principal eigenvector of the adjacency matrix to determine how important a given page is. I'm going to read up on it to see if I can say anything intuitive about it, but Kieran Healy's recent post on PRISM explains some of the basics, in a delightfully olden-timey way. This goes back to what AcademicLurker said in 70 about network analysis.
Link fail, I should have previewed. I meant this.
167 is the sense in which I feel like I fundamentally "don't get it". I can solve the problem, but I have no idea why that's an important thing to be able to do.
Oh, thanks--I'd missed 70. But, ok--"The eigenvalues and eigenvectors can then tell you a lot about how the network is structured." That sounds interesting/useful, but I guess I don't get what exactly "a lot" is.
I'm curious how the Lakoff book Tweety linked handles the unreasonable effectiveness of mathematics.
More specifically than the reality of mathematics, I guess I'm interested in discussions of the mind as an instantiation of an algorithm. Discussions, that is, that are beyond what I could get by showing up at a dorm room with a loaded bong.
167: As far as what problems it's useful for solving, Principal Component Analysis is a pretty ubiquitous technique for spotting patterns in data.
Unfortunately the wikipedia page is surprisingly unhelpful in providing any illustrative examples that show why it's useful.
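For lack of a better illustrative example, here's about the smallest possible one, with fake two-column data I'm inventing on the spot: the eigenvectors of the covariance matrix pick out the direction the data actually varies along.

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.normal(size=500)
    data = np.column_stack([t, 2 * t + 0.3 * rng.normal(size=500)])  # two strongly correlated columns

    cov = np.cov(data, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)      # eigh: covariance matrices are symmetric
    print(vals / vals.sum())              # roughly [0.00x, 0.99x]: one direction carries nearly all the variance
    print(vecs[:, -1])                    # that direction (the first principal component), about (1, 2)/sqrt(5) up to sign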
The original PageRank algorithm is delightfully simple, but I think I would fail at explaining it. Diffusion maps in general are fascinating.
171: Just to mention a few, things like finding bottlenecks and clusters. For communication networks, it's good for finding "critical" sets of nodes and edges where, if you remove them, huge sections of the network can no longer communicate with each other.
If it's a network of states in some dynamic process where the dynamics consists of moving between the states (a discrete Markov process, for example), then the set of eigenvectors and eigenvalues tell you things like what the steady state (if there is one) is, how quickly you reach the steady state, what the transients on the way there look like & etc.
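A sketch of the steady-state point for a made-up two-state chain (the numbers are arbitrary): the steady state is the eigenvector with eigenvalue 1, and the other eigenvalues govern how fast the transients die off.

    import numpy as np

    # Transition matrix: P[i, j] = probability of moving from state i to state j
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    vals, vecs = np.linalg.eig(P.T)            # left eigenvectors of P = right eigenvectors of P.T
    k = np.argmin(np.abs(vals - 1.0))          # the eigenvalue 1 picks out the steady state
    pi = np.real(vecs[:, k])
    pi = pi / pi.sum()                         # normalize so it's a probability distribution
    print(pi)                                  # about [0.833, 0.167]
    print(sorted(np.abs(vals)))                # the second-largest |eigenvalue| sets how fast you get there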
The claim that noncomputable numbers have to exist because they are used in QM seems to rest on the idea that the physical universe is actually embodying all the math we use to describe it. But I don't think that is right.
Compare the situation to the three body problem. Sometimes people are tempted to think that the three body problem has a solution because, after all, the planets themselves know where to go. You want to say that every time the moon orbits the earth while the earth orbits the sun, the three body problem gets solved to infinite precision.
But it isn't really like that. The universe just does what it does. The problems and computations that come up in our description of it are products of our minds collectively trying to represent the universe while it is just doing what it is doing.
I think 176 makes sense. I gave up trying to professionally understand these things fifteen years ago.
While writing my dissertation I started using the slogan: "Math is good for doing science with the way liquids are good for swimming in." I meant this to convey the idea that there is nothing unreasonable about the effectiveness of mathematics, the same way there is nothing unreasonable about the fact that you can swim in a liquid. Unfortunately I'm not sure this clarifies anything.
Math is hard and I'm doing math right now. Laydeez.
162, 165 Aha! Let me be (but a pale imitation of) Heebie.
The basic insight behind PageRank is this: the links between web sites give us information about how important a web site is. It's not the number of links a site has coming into it ('cause you could just create little link farms to fake this) but the number of links weighted by the importance of the linking site. How important is a site? See above.
To make it more concrete, here's the model. Assume you're an average Internet surfer. Being largely devoid of higher cognitive processes you click randomly on the links presented to you in the current page you're viewing, except, with small probability, you sometimes choose a page from the ENTIRE INTERWEBZ at random [*]. From this link graph you can create a matrix. Rows and columns are web pages. Elements of the matrix are the probability of going from one website to another.
Now you take a random walk through this matrix, choosing links as described above. Note down which pages you visit. You'll find over time the probability you're at any page converges to what's known as the stationary distribution. This stationary distribution is the Page Rank. Technically, we've constructed a Markov chain and Page Rank is the stationary distribution of the Markov chain.
Now how can we characterise the Page Rank (stationary distribution)? Create a vector that is all (1 / number of pages in the WWW). This is our starting distribution (uniform probability of being in any one page). Multiply it by our transition matrix and we get the distribution over pages after one step. Multiply it again, and again, and again, and ... it converges to the stationary distribution.
Now what kind of vector stays the same (except for scale) when multiplied by a matrix? OMG RAGING MATHS BONER it's an eigenvector of that matrix!
So the stationary distribution is an eigenvector of the transition matrix. In fact it's the principal eigenvector -- the one that is the most powerful and handsome, and also has the largest eigenvalue. The algorithm I've described above is known as power iterations.
Now get in your time machine, go back to 1998, found Google, and collect your billion. Remember to say thanks.
[*] This bit is required to fulfil a technical condition on the Markov chain.
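A minimal sketch of the whole thing on a fake four-page web, in numpy; the 0.85/0.15 split for the random-jump step is the conventional choice, not something forced on you:

    import numpy as np

    # A toy web of 4 pages; links[i] lists the pages that page i links to.
    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    n, damping = 4, 0.85

    # Transition matrix as described above: follow a random link with probability 0.85,
    # jump to a uniformly random page with probability 0.15 (the [*] condition).
    P = np.full((n, n), (1 - damping) / n)
    for i, outs in links.items():
        for j in outs:
            P[i, j] += damping / len(outs)

    # Power iteration: start uniform, keep multiplying by the transition matrix.
    rank = np.full(n, 1.0 / n)
    for _ in range(100):
        rank = rank @ P
    print(rank)   # converges to the principal eigenvector of P, i.e. the stationary distribution

After a few dozen multiplications the vector stops changing; that fixed vector is the Page Rank (page 2, which everyone links to, comes out on top).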
173-175 go too far. I feel like I'm missing the intermediate intuitive understanding. It's as if I'm asking "why would anyone want to know what the slope of a line tangent to a curve at a particular point is?", and am getting answers like "If we know how much distance a car has traveled over a certain period of time, you can use derivatives to determine its velocity and acceleration at any point in time." Whereas what I need is "it tells you the slope of the curve at that point! Which represents the rate of change--how quickly the function is increasing or decreasing!" That's the intuitive part about eigenvectors/eigenvalues that I don't get at all.
Physicists need mathematics like swimmers need water.
PageRank no longer finds the principal eigenvector of the actual adjacency matrix, but rather that of a stochastic approximation to the adjacency matrix. A pretty good rather than guaranteed best solution.
180: The eigenvectors are your building blocks for the map. If you know the eigenvectors for a map, you can easily compute where all other vectors get sent.
Thanks! I think Page Rank is the most awesome application of basic undergrad maths (Markov chains are usually second year, eigenvectors first) and I wish someone had used it as an example when I was an undergrad (which would have been hard, due to technical conditions related to the consistency (but not necessarily continuity or computability) of time.)
I was going to try to describe something like 179, but I'm glad I refreshed when I was halfway through. Much better said than I could have.
180: my intuition for the way it's often used is "if you have a thing (a graph, a set of data points, an image) that you're interested in, and you can represent that thing as a matrix, the set of eigenvectors will tell you useful things about the shape of that thing, in whatever basically arbitrary space that thing shares with other things like it"
On the other hand, mathematics needs physics like a fish needs a bicycle.
Linear algebra Obamacare really is humankind's greatest accomplishment.
I need physics like I need my men.
the set of eigenvectors will tell you useful things about the shape of that thing
What useful things?
I'm a very poor public speaker. I just realized the problem isn't that the audience makes me nervous. The problem is that I have to say something.
188 is true nowadays, but it hadn't been formerly. All the math we're discussing here was developed by people conversant in both math and also physics. Through the early 20th century, it was usual for people to know both fields very well.
192: I'm also pretty good at public standing still silently. I wonder if I could get endorsed for that on LinkedIn.
194: They need people like us to fill in the background during ceremonial occasions. I wonder if it pays well.
188 is an eternal truth, and doesn't have anything to do with actual mathematicians or physicists.
Physics is to mathematics like sex is to masturbation.
197: He's right:
"Grossman and Ion (1995) report that the average number of authors on papers in mathematics has increased steadily over the last sixty years, from a little over 1 to its current value of about 1.5. As Table 1 shows, still higher numbers seem to apply to current studies in the sciences. Purely theoretical papers appear to be typically the work of two scientists, with high-energy theory and computer science showing averages of 1.99 and 2.22 in our calculations."
This "computable" thing has me a little off-kilter, I think. Typically when people talk about something that gives up on continuous real numbers in physics, they're imagining some kind of really crude discretization, and the Lorentz invariance argument kills it. But the computable numbers aren't a simpleminded discretization like that. If you tell me, as Upetgi suggested, that physics transforms in a reasonable way under the computable Lorentz group, that's actually going to be fine, relative to experimental tests. But that's basically because, assuming continuity, it would imply the stronger statement of invariance under the usual Lorentz group. It kind of feels to me like this "only computable numbers" exist thing doesn't actually change anything. And it leaves me fuzzy on what "exist" means.
Usually, though, when people start invoking Church-Turing or whatever they have in mind some much harsher discretization. Turing machines don't exist in continuous time, or even at all times corresponding to computable instants, and it seems like that would lead to different complexity classes?
Still, it seems super-weird to me that you could imagine that, say, scattering amplitudes are not arbitrary unitary matrices but only computable ones. Why would your intuition lead you there? The universe isn't a bunch of gears or whatever. And assuming that noncomputable numbers "exist" in this sense---that is, assuming standard physics---doesn't imply that we have the ability to compute things in a way that would violate the Church-Turing thesis.
I should shut up and mull this over more, because now I've just made myself totes confused about whether this computability hypothesis actually does predict any deviations in physical results or not.
I didn't understand any of this math stuff either, so I read this article on strippers in North Dakota instead.
I agree with 201.last--it's sufficient that there are physical processes that keep those numbers/functions inaccessible to us.
Why does your intuition lead you to believe that the real numbers are the right metaphor and not, say, the larger set of surreal numbers? Because you don't need them, right?
That's what brings me back to parsimoniousness. We know we can access all computable numbers because we can provide a process for getting them in finite time--any theory of physics needs to have at least them. But assuming that calculus works over the computables (which I think is right but am admittedly not entirely sure), what does using the entire reals get you?
But it's not parsimonious at all, to me. You have to assume the Hamiltonian at a given time knows the next computable number to skip to? That particles will somehow scatter into a dense but not continuous subset of all the momenta available to them? It's just super-weird.
I think I'm having a basic definitional misunderstanding. What do you mean by "continuous set"?
And it leaves me fuzzy on what "exist" means
Yesssssss.
Do you mean a complete metric space?
Suppose I have a theory that takes as input the initial and final momenta of a bunch of particles and gives me, as output, an S-matrix that characterizes how likely the initial specified momenta are to scatter into the final specified momenta. What would Upetgi's computability-modified version of this theory look like? It would restrict to computable momenta, first of all. But then the S-matrix is no longer a function defined on all possible real outgoing momenta. This means I'm also not so sure anymore what it means for it to be unitary; ordinarily I would have been doing an integral over the possible final states, but now no integral is available because my space is some weird subset of the reals. Can I still integrate over it in the usual way?
Hrm. I need to think if the computable numbers are connected. I suspect they aren't.
205 is weird to me. Those seem like way lower level assumptions about the nature of reality than we can reasonably make. I guess my view of "less numbers == more parsimonious" can be viewed as naive, though.
I liked linear algebra (although not to have any particular sense of anything useful to do with it) and did well in the class. And haven't had occasion to think about it for about twenty years.
This is almost precisely my reaction, except in my case it's only been 17 years, and it's slightly more embarrassing because I was a math major.
I feel like I'm following most of the thread, but only in the sense that I can nod in understanding, but not anything more than that.
Those seem like way lower level assumptions about the nature of reality than we can reasonably make.
But that's what a physical theory is-- you make those assumptions and see if the result matches data. If you change the assumptions, I need to know if it changes the result. If it doesn't, I don't consider it a different theory; if it does, it runs a strong risk of being ruled out by empirical evidence.
and then I just imagine all the comments are about strippers in North Dakota.
That gets us back to the basic question of whether (or how much of) calculus works in this space. And I don't know the answer to that; that all boils down to which limits don't exist. I suspect it mostly works since somebody wrote what appears to be a book about it. A word of warning, though: the author might be a Dakotan stripper.
213: Sure--I was thrown off by the "how does it know" wording. 210 feels like a more relevant objection than 205.
214: strippers in North Dakota run a strong risk of being ruled out by empirical evidence?
Anyhow all of this is solved if you remember that math is an arbitrary -- if impressively internally consistent -- cognitive framework applied to perceptual evidence of regularities in the world.
215 I suspect it mostly works
In that case, the question remains: does it predict literally the same results for any experiment? If so, I don't care about it, it's just layering meaningless words on top of things. If not, what's the new prediction?
I'm still confused about the nature of existence. Someone please clear that up for me.
My claim (really conjecture as I've yet to do the math) is that it does predict literally the same values for any experiment you could actually do. I don't see why you wouldn't care that you're assuming uncountably many points if a countable solution exists that gives you the same value, but I guess that's just part of the difference between physicists and mathematicians.
I don't see why you wouldn't care that you're assuming uncountably many points if a countable solution exists that gives you the same value
I must admit I don't see why you would care.
I really doubt integration is going to be a problem. For say Riemann integration, just restrict to ways of breaking up into rectangles which are computable, and take a limit over those.
Anyway, it's usually a pain-in-the-ass to make the modification necessary to work computably or constructively, and not really worth it for anyone to bother doing. But the real numbers really are a strange beast when you think about what they really are, and so I'm generally skeptical of anything that says you need them rather than just that they're useful.
One can imagine a universe in which there are genuinely non-computable numbers appearing. Say the ultimate theory of everything has several undetermined dimensionless constants. One could imagine that although there are experiments which would let you approximate these numbers there's no possible algorithm or computation which would let you approximate them without just doing the experiment.
If it's so hard for anyone to get a handle on an uncomputable number, it doesn't seem strange that we don't have a physics example on hand yet - more just a reflection of the constraints of our wee human brains. (Treat this comment as though I am not a math person.)
more just a reflection of the constraints of our wee human brains
You know, like all of math.
224 would work well as a movie trailer. IN A WORLD where there are genuinely non computable numbers . . .
1 - Math is a language. (Heebie's Axiom.)
2 - Language is a virus. (Burroughs' Axiom.)
3 - Math is a virus. (Transitive Property.)
4 - Viruses are threats to public health. (Given.)
5 - Math is a threat to public health. (Transitive property.)
Q.E.D.
It just seems like it adds a whole unnecessary layer of technical baggage to think that physics only works with computable numbers, when you're saying that doing so won't actually change any of the predictions. The real numbers are much more intuitive than the computable numbers, at least for ordinary humans. Still, thinking that we should restrict to computable numbers seems much better than imagining that space and time are fundamentally a discrete grid or something, which changes the predictions to be horribly wrong.
229: Let me recommend...
234: Not to mention!
Upetgi should read some van Fraassen.
Say the ultimate theory of everything has several undetermined dimensionless constants.
String theory doesn't have any of those, for what seem like deep reasons that might well be forced on you by generic properties of quantum gravity even if string theory turns out not to be right.
The real numbers are much more intuitive than the computable numbers, at least for ordinary humans.
Yep. And continuousness is much more intuitive than discreteness, at least when it comes to things that boil down to contours and surfaces. Even though the actual mechanism of our perception is discrete! Sort of weirdly relevant.
Probably strike that "at least..." clause in 238. It's true for everything, because everything pretty much boils down to contours and surfaces, perceptually.
One can imagine a universe in which there are genuinely non-computable numbers appearing.
I still don't know what "appearing" means here.
232: I think the intuitiveness of real numbers is a little over-rated. They're really weird things. Though the alternatives aren't exactly more intuitive.
237: I was just trying to illustrate how the claim that physics only requires computable or constructible mathematics could in principle be false, not making an actual claim about our universe.
I hear you can do physics without any numbers.
That was me of course.
236: I'm pretty skeptical of anything that gives imprecise measuring devices like our sense organs a special place among all measurements.
But, you know, that said, even the strongest believer in the platonic reality of math would have to concede the primacy of our sense organs when it comes to the development of intuition about the natural world, no?
I mean, insofar as the comprehension of math (if not, granting for the moment, the reality of math) is a process intrinsic to the individual, it seems necessarily to be the case that at least a big chunk of any individual's early intuition about the physical world is going to be driven by perceptual input, and that later intuitions, though they may wander far afield, are going to be rooted in those early comprehensions.
Of course we're better at understanding phenomena that happen at the speeds and sizes we're evolved to deal with, but just because we're better at understanding it doesn't make models for it any more real than any other model.
I was only talking about intuitiveness, not reality. (The usefulness of that word as a way of describing scientific models over and above their tractability and ability to describe the data is a whole other can of worms, which I definitely don't need to open.)
In terms of making eigenvectors and eigenvalues intuitive, I always thought it was helpful to think of the difference between correlated and uncorrelated variables.
For example, consider two linear maps: one that sends the point (x, y) to (4x + y, -2x + y), and one that sends (x, y) to (3x, 2y). The second one is much easier to work with. If you want to know what point gets mapped to (12, 4), this can be done in your head in 3 seconds with the second map, but the first map requires actual computation.
The thing is, if you're working with the first map, you can change it into the second map by changing your coordinate system. Your new x-axis and y-axis are the eigenvectors of the map, and the coefficients 3 and 2 in the second map are the eigenvalues.
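If you don't feel like doing the change of coordinates by hand, numpy will happily confirm it; the diagonal entries that come out are exactly the 3 and the 2:

    import numpy as np

    M = np.array([[ 4.0, 1.0],     # the map (x, y) -> (4x + y, -2x + y)
                  [-2.0, 1.0]])

    vals, V = np.linalg.eig(M)     # columns of V are the eigenvectors
    print(vals)                    # 3 and 2 (possibly in the other order)

    # In the coordinate system given by the eigenvectors, the map is just diagonal scaling:
    D = np.linalg.inv(V) @ M @ V
    print(np.round(D, 10))         # diagonal matrix with 3 and 2 on the diagonal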
Likewise, say you're working in a linear regression context. You want to write some output y as a combination of inputs -- for example, income = b1 * age + b2 * education level + b3 * occupation + b4 * mother's education level + b5 * race + 100 other variables. If your input variables are independent of each other, this is mathematically pretty clean -- it's easy to see which inputs are most important, and it's easy to see what the impact of changing the value of an input is. But if your input variables are correlated with each other, this becomes messy and confusing. Differences in education level are likely to be related to differences in mother's education level and in occupation. Things move together at the same time.
Once again, you can generally create a new universe of input variables that are statistically independent of each other, and therefore much cleaner to work with, by using a different coordinate system. Instead of using your original variables, you use the eigenvectors of a matrix that summarizes the correlations between your original variables. These new input variables may look like 3 * age + 2 * mother's education level - 1.5 * occupation or whatever, but they are independent of each other and so you can cleanly pick out the most important ones and so on.
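Here's what that looks like mechanically, with two fake correlated inputs ("age" and "mother's education", invented out of thin air): rotate into the eigenvector coordinates of the covariance matrix and the cross-correlation disappears.

    import numpy as np

    rng = np.random.default_rng(1)
    age = rng.normal(40, 10, size=1000)
    mothers_ed = 0.5 * age + rng.normal(0, 3, size=1000)   # deliberately correlated with age
    X = np.column_stack([age, mothers_ed])

    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    Z = (X - X.mean(axis=0)) @ vecs       # new inputs: combinations of the old ones along the eigenvectors

    print(np.round(np.cov(Z, rowvar=False), 6))   # off-diagonal entries ~0: the new variables are uncorrelated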
So anyway, how about that speaking in public - pretty crazy, huh?
I still feel like nose-flow's basic question hasn't been answered. What do we mean by existence, if we say that the computable numbers exist and the uncomputable numbers don't? If you are a full-blown Platonist, you have a robust sense of what it means for numbers to exist, but you also don't have a reason to distinguish the computables from the uncomputables. Sure, maybe you could redo physics do only use computable numbers. But that's just the shadows on the wall.
For a Platonist, 122 is enough to prove the existence of computables.
So what if we aren't Platonists? Well now there are all sorts of different ways we can state our physical theories, and some will involve quantifying over different elements than others. This paper does a good job of bringing out how mathematically equivalent theories actually have radically different ontologies if you read them at face value. So what if we can construct one that includes the objects that we like and excludes the objects that we don't? The Hartry Field book neb links to ($158 used!) says that we can rewrite physics without having to quantify over any numbers at all. But what does this prove? Why is this the really real way to state the theory?
For a Platonist, 122 is enough to prove the existence of non-computables.
224 is an important comment, but I'm not sure what to do with it.
Are there realistic circumstances in which we could imagine determining that a fundamental physical constant is an uncomputable number? (Honest question.) We aren't going to suddenly find that the gravitational constant is actually determined by a complete ordering of Turing machines, some of which halt and some of which don't.
252: Yes, sorry for the typo.
I am, as they used to say, becks-style, but not quite btocked.
253: The best "realistic" scenario I could imagine was the following. Somehow you have a "God-given" ordering on all statements (I dunno, maybe literally some inspired person wrote a book with the ordering), and this constant has digits which are 1 or 0 based on whether the statement is true. Every time some mathematician proves a new statement, it turns out that it agrees at that digit with the answer of the experiments. After a while that's pretty solid evidence that the number is non-computable (and not even limit computable!).
That's not quite as realistic as I was hoping.
Well... Perhaps more realistically, you just try to come up with algorithms to compute it for a few hundred years and keep failing? And it keeps looking like a totally random number no matter what statistics you compute about it.
236: I'm pretty skeptical of anything that gives imprecise measuring devices like our sense organs a special place among all measurements.
It would be pretty hard to make use of any other measuring device without our sense organs (and I think it's pretty obtuse to refer to sense organs as measuring devices, but I guess you're in august company in making that bizarre claim), so, that makes them pretty special, to me. Unless you've got some other way of finding out about the world in mind.
Well... Perhaps more realistically, you just try to come up with algorithms to compute it for a few hundred years and keep failing?
How would you know you were failing?
You come up with some guess which agrees to 10 decimal places, but then the astronomers build a better telescope and you learn the next 2 digits and your guess was wrong. Wash, rinse, repeat.
Well, this thread sure didn't go the way I expected at all.
I was hoping Upetgi would supply me a theory that's almost but not quite identical to the theory involving the reals. Looks like not. Oh well.
I have zero interest in the whole Platonic whatsis and trying to define "reality" discussion, I'm afraid. Give me an operational distinction or give me... well, not death, I dunno, a sandwich or something?
If we're done discoursing on the mathematical nature of reality or whatever, I can report that Pendleton Canadian Whisky (which is apparently distilled in Canada but bottled in Oregon) is quite good, much as the liquor store clerk said it would be.
I guess I could have reported that if the rest of you were still talking about math, but it would have been more awkward.
You come up with some guess which agrees to 10 decimal places, but then the astronomers build a better telescope and you learn the next 2 digits and your guess was wrong. Wash, rinse, repeat.
Isn't this how it goes in general, though? Or have we calculated, say, the gravitational constant exactly?
(I'm not going to ask how you know that everything else that's gone into the astronomers' calculation—they're making one too, you know!—is correct to those further decimal places, given that they'll be using inexactly known figures themselves.)
I guess there's nothing stopping us from just defining the gravitational constant to be 0.00000000001 m3kg-1s-2, and letting everything else shift for itself in the fallout.
There's that whole universe is a computer simulation, or simulation of simulations, thing. If a physical constant showed evidence of being noncomputable, that would rather count against it, I think?
166: I can't believe I'm writing a letter to Physical Review Letters B! A month ago I was in a club in North Dakota and a girl asked me whether the google search on her phone for public speaking tips was ever going to return ...
109: The short story Luminous by Greg Egan.
I don't see how you could ever tell that a physical constant is uncomputable as opposed to simply hard to measure with real finite tools. Any time you find yourself revising a number, the most natural explanation is going to be a failure in your instruments, or an ordinary programming error in your calculating device. The idea that the number is actually uncomputable, so that the appearance of the next digit is going to appear random to us, is the last thing you'd check.
Essear's 105 is the real kicker here: "we're fundamentally unable to ever measure anything with more than finite precision."
273 is confusing irrational numbers with uncomputable numbers.
I see that confusion all the time. Also, sometimes people think I'm Mel Gibson.
Yes, but that's pretty close to the problem of scientific induction, isn't it? You can never get enough evidence to prove scientific theories in the same way you prove mathematical theorems. You just get enough data points in a row to say it looks like a good fit.
So, if you have some uncomputable function, but you know a lot of actual data points in it, can that be used? The halting problem function has plenty of known data points, for instance. And then you can wave your hands and say freaky, eh? - probably we don't live in a simulation.
Oh, I think 274 might be the shorter more correct version of my 276.
This is all very confusing. Not the linear algebra eigensystem stuff. I loved that stuff and got it pretty well first time through.
The Church/Turing stuff is freaking me out, though.
LOOK, GUYS, YOU OBVIOUSLY DIDN'T GET THE HINT FROM MY FIRST COMMENT... TAKE IT FROM A GUY WHO HAS... WELL... LET'S SAY, A REPUTATION FOR BEING ABLE TO SEE THE BIG PICTURE. IT'S IGON VALUE.
Don't you mean Egon value?
Enough with your Qui Gon values!
268: We don't currently have a theory of everything. So it's no surprise that there's constants we can't compute yet. Also general relativity has only been around less than a hundred years, not hundreds. I was imagining a situation where there's a theory that's nailed everything for hundreds of years but has one totally mysterious undetermined parameter. That's a very different world from the current one.
It's nailed everything down to all the decimal places. Got it.
Also I guess I just don't know what you mean by "constant we can't compute yet". We learn about the constants from experiments (and we use our eyes, too!). You mean, I guess, a constant we can't compute accurately (or with ever-greater accuracy) in the absence of further experiments. It seems very strange to me to think that we'll ever reach that point, with a supposedly physical theory. (Among other things, I get tripped up by the fact that the constants come in units.)
Computing constants does sound weird even with a theory of everything. I was thinking you would still need empirical input and the constant would coincide with the output of some known noncomputable function. Not that the laws of physics themselves would include some function which was noncomputable. Like calculating whether the function that describes the decay of a black hole will ever halt or something?
That's why I said unitless constant.
For comparison, there's the fine structure constant. It's unitless, and its value is not known exactly (and is likely irrational and indeed transcendental). However, we have a well-developed theory (QED) which (oversimplifying a bit) allows us to compute it. So that's a unitless fundamental constant, but it's computable (again oversimplifying a lot).
273 is confusing irrational numbers with uncomputable numbers.
No, no, I understand the difference. With an irrational number you always have a way of computing the next digit in the infinite sequence.
It is true, though, that irrational numbers are already beyond our finite measuring systems in some important sense.
Ok, now I've read the thread more closely (although I haven't looked up Church-Turing), and I don't see your argument, utpetgi. What's your argument against uncomputable numbers? Essentially, the first sentence of 273?
290 does not convince me that you do, HC.
I'm not arguing against uncomputable numbers, I'm arguing against the claim that uncomputable real numbers are needed for physics.
Why doesn't the intermediate value theorem guarantee the existence of uncomputable numbers?
I mean, guarantee their existence in physics. I used to be two feet tall, but now I'm 5'4".
π isn't experimental.
π isn't a physical constant.
While in the shower I realized that I wasn't sure why I was hung up on units. The issue, as conflated said, is that you're attempting to be true to the world, but your experiments have limited precision.
However, we have a well-developed theory (QED) which (oversimplifying a bit) allows us to compute it.
In terms, as far as a quick googling tells me, of other constants. Not useful! Even with natural units—a shell game for present purposes, since changes to the constants either are covert changes to the units or invalidate the equations—you still need e. If there's a definition entirely in terms of non-physical functions, that would be extremely surprising (and interesting!), but I'm not hopeful; as long as there's at least one physical constant in the definition, I don't see why you're allowed to say "allows us to compute it".
"The currently accepted value of α is 7.2973525698(24)×10−3", says wikipedia. Let's simplify that to .0073(0), since it's easier to write and it doesn't matter anyway. We can represent that easily as a natural number and it plainly makes claims about what further experimentation will reveal (namely it will confirm those repeating zeroes). Suppose we had a theory of everything, incorporating that and other constants. The values of the constants will, of course, be made to fit our observations, to the precision with which we can make the observations.
But our confidence that we've got it right with .0073(0), especially as to the "(0)" part, depends on the experimental data, and the experimental data certainly can't reach all the way out into the nether regions of the "(0)" part. Who's to say that it isn't actually .0073000[billions of billions of zeros]0001(0)? A stray 1 out there far beyond where it could possibly affect any measurement we could make. (If I were god, you can be damn sure I'd do stuff like that.)
Anyway, this is all related to my dissatisfaction with "You come up with some guess which agrees to 10 decimal places, but then the astronomers build a better telescope and you learn the next 2 digits and your guess was wrong. Wash, rinse, repeat."
So here's my current guess:
10.123456789(0)
The astronomers come along with a new telescope and we learn the next two digits; it's actually
10.123456789012[insignificant blather]
Oh shit! How am I going to come up with an algorithm to compute that?! Well, how about this one: x = 10.123456789012. Just add the error term in. Boom.
Oh, right, essear, finite precision.
292: Well, I guess the only thing I can do then is admit I am in over my head and ask for an explanation.
π isn't a physical constant.
It's not? Circles aren't physical?
Circles aren't physical?
How wide is a line?
If circles were physical, then we'd have to learn about the ratios of their diameters to their circumferences from said physical circles. IOW, π would be experimental.
294, 295: one presumes it would be possible to redefine an intermediate value theorem that operates over the computable numbers, if all of this other "do as before, but just over the computable numbers" stuff works, yeah?
303: Sure, you could do all the same computations over the computable numbers, and you can restrict IVT to computable numbers. That doesn't mean that IVT doesn't hold over real numbers too, though.
I think U:"pe,tgi" wants to tack on "except for a handful of real numbers that you'll hardly miss" to all theorems.
Let's change their name to unpopular numbers. Who cares about them.
Oh for chrissakes. That's the computable version of "TOO".
294: Great question!
304: right, but if you are generating your physical theories in terms of computable numbers then the fact that IVT holds over the reals will never be relevant for you, since they play no role in your calculations. So whatever physical quantities add up, eventually, to produce 2' heebie and 5'4" heebie, the intermediate values that you will attain will be intermediate values in the space of the computables, rather than the reals.
Too much ambient noise for me to wade into 309 and actually get anything; I'll have to wait until I've got a quiet moment.
Well, right, part of the oversimplifying is that QED doesn't tell you what e is (electron charge, not Euler's constant). But just imagine a version of QED where e happens to turn out to be a nice round number exactly.
The whole point is that waiting for the experiment is not an algorithm, because it can't be simulated without having access to the actual constant in question.
311: I think Reid's answer is the easiest one to follow.
Or at least it was the only one of the answers that I could follow.
(I actually looked up that MO question yesterday because I was worried about exactly this point. I was trying to think of something non-constructive that could come up in physics, and IVT seemed like the best possibility.)
The link shows that IVT holds for the computable numbers. What I'm not clear on is: why does that answer the question in 294?
IOW, am I missing something at the link? It seems like it just shows that I'm passing through all computable heights between 2' and 5'4". But it doesn't prove that I'm not also passing through all uncomputable heights.
Heebie is almost always uncomputably tall.
But just imagine a version of QED where e happens to turn out to be a nice round number exactly.
Sorry, that's the sticking point. How does it turn out to be that, or how do we know that it turns out to be that, except by experiment, which is only finitely precise? "Exactly" is infinite.
317:
Both versions of the computable IVT discussed in the answers to that MO question are concerned with computable functions of computable reals; such a function sends computable reals to computable reals, so the answer to your question is immediate.
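For the spectators, the approximate version has roughly this flavor (a sketch of the standard bisection idea, not a transcription of the MO answers):

from fractions import Fraction

def approx_root(f, a, b, eps):
    # Given f(a) < 0 < f(b), bisect until |f(x)| < eps; everything stays rational.
    a, b = Fraction(a), Fraction(b)
    while True:
        m = (a + b) / 2
        fm = f(m)
        if abs(fm) < eps:
            return m
        if fm < 0:
            a = m
        else:
            b = m

# Example: x^2 - 2 changes sign on [1, 2], so this approximates sqrt(2).
print(float(approx_root(lambda x: x * x - 2, 1, 2, Fraction(1, 10**6))))

The approximate statement is the easy part; the care in the MO answers goes into the exact statements.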
289 However, we have a well-developed theory (QED) which (oversimplifying a bit) allows us to compute it.
You've "oversimplified" to a point where I don't even have a clue what you're talking about, here.
313 I think Reid's answer is the easiest one to follow.
I vaguely remember that guy from middle-school Mathcounts! (Where I didn't do very well and he did really, really well.) Interesting that he's still in math. IIRC the guy who won first place the year I was there is in finance now, and the guy who won the previous year is some kind of engineer.
321: The simplification is that if we assume that the "e" in
\alpha = \frac{e^2}{4 \pi}
is the base of natural logarithms and not elementary charge, and also ignore the shenanigans that enable us to eliminate reference to ε0, c, and ħ, then there's an algorithm for effectively computing α.
It's worth pointing out that thinking it's possible to compute a physical constant like this means that dedicating extra resources to the pure computation, giving greater accuracy to the physical constant, is a means of finding out about physical nature, and could be used to demonstrate that an experiment is flawed. IOW, it's bats.
My old Mathcounts teammate (where I didn't do very well and he did really really well) has remained in the field, or at least nearby.
From their website, I'm guessing that our old sponsor is not doing quite so well.
the shenanigans that enable us to eliminate reference to ε0, c, and ħ
Those aren't shenanigans, that's just a choice of units. α is a good dimensionless number, it's just that it isn't determined by the theory (and it depends on the energy scale of the process).
And in 312 Upetgi said "electron charge, not Euler's constant," so I didn't think he was mistaking it for the base of natural logarithms.
Sorry, I got confused between e and g.
A priori there's no relationship between the fine structure constant and the dimensionless magnetic moment of the electron. However, QED says if you know the latter you can compute the former.
However, QED says if you know the latter you can compute the former.
Err, well, kinda. You can compute the infrared contributions, but there are all sorts of corrections that depend on the masses and couplings of heavier particles.
The point is that (oversimplifying) you can try to get the fine structure constant in two ways: have a computer calculate using QED, or run a direct experiment. In principle, it could be that this constant was not computable even in principle and the only way to get at it was to use an experiment.
(The oversimplifying is that this is an example of going from two constants down to one, instead of from one down to zero. But it still gets across the same point.)
327: Ok, so maybe it's not from two down to one, but QED calculations still decrease your number of unknown constants by one, right?
What does "have a computer calculate" mean? You can't calculate the fine structure constant. Do you mean calculate the magnetic moment? In that case, you can, modulo the fact that you're summing a divergent series and there are contributions from other particles and higher-dimension operators you don't know and all sorts of other things.
I can't really figure out what you're talking about, except that we are capable of making predictions from theory, which is true, but always only at finite precision.
322: I think he's now out of math and working for startups. We see each other roughly once a year, so I might be out of date on that.
still decreases your number of unknown constants by one, right?
I mean, yes, the whole point of having a theory is to relate different observables to each other in a way that's overconstrained so you can test it. I wouldn't usually think of all the observables as "constants," but I guess you can.
Everything is always making a big deal about QED predicting the fine structure constant. Have I totally misunderstood what that means?
And in 312 Upetgi said "electron charge, not Euler's constant," so I didn't think he was mistaking it for the base of natural logarithms.
Oh, he wasn't making a mistake, he was just treating electron charge as if it were a mathematical rather than a physical constant, since he has to take it to be computable or known exactly itself. (As acknowledged in 312—I'm not sure why he even bothered to mention "electron charge, not Euler's constant", since the whole point of my previous remark that "you still need to know what e is" is that e is a physical constant—and repeated in 326.) Perhaps I was being rhetorically out of hand, but the thrust of 323 still seems right to me. α can be computed if e can be—and e could be if it denoted Euler's constant but not (or we have no reason to believe!) if it's electron charge.
Those aren't shenanigans, that's just a choice of units. α is a good dimensionless number
The reason it strikes me as shenanigans is, what happens when you get new data regarding the value of (say) ħ? Are ħ, ε0, and c all interdefinable exclusively in terms of each other, so we just move things around in the other two and it all comes out in the wash? If not, and ħ remains stipulated to be 1, what does happen? Adjustment has to happen somewhere, no?
I'm not trying to make a deep point here. All I'm saying is you could in principle have a theory that predicts the values of certain parameters which prior to that theory were unknown. You could also have constants which even in principle could never be predicted by any computational theory.
Everything is always making a big deal about QED predicting the fine structure constant.
Where are these things making a big deal about that? QED predicts how the fine structure constant changes with energy, or how the magnetic moment of the muon or electron relates to the fine structure constant (again, ignoring lots of other corrections that go beyond QED), or what the cross section is for electron-positron annihilation or electron-electron scattering or zillions of other things. But the fine structure constant itself is an input.
335: All I'm saying is you could in principle have a theory that predicts the values of certain parameters which prior to that theory were unknown.
Yes, but I don't see what this has to do with computability.
The point is that (oversimplifying) you can try to get the fine structure constant in two ways: have a computer calculate using QED, or run a direct experiment.
Of course, your calculation of α is going to be constrained by your confidence in the value of e, which you don't have, and can't calculate, to arbitrary precision, as you can for π. If you pretended otherwise and published a more-precise value for α based solely on having calculated a more-precise estimate for π, you don't have one of two ways to calculate the constant, you have a guess at what the constant might be that will be supported or shown up by the experiment.
Suppose that the value of electron charge is known to one place: 6. And suppose that we also only know the value of pi to one place, so we have (using the equation from wikipedia) a value for α of (6**2)/(4*3) = 3. Then, using the increased computing power available to me, I get a new value for pi, 3.1415926535, and recompute α using it, assuming I know e to arbitrary precision, and get 2.86478897.
Do I now know better what the value of α is? Surely not; I didn't know e to enough places to justify the new figure, even though I have a better value for the part of the equation that is computable, namely the value of pi.
If we knew the value of e exactly (or could compute it as we can compute pi), then we could get better values for α just by running the algorithm longer. But you've given no reason for thinking the antecedent will ever be fulfilled. You've just repeated the condition. It's also true that if the antecedent were fulfilled (we know e to be exactly 6), and I ran my calculation and got 2.86478897, and you ran your experiment and got 2.9something, we'd probably want to throw out your experiment, as the chancier of the two methods for finding out what the value of α is. But, you know, the antecedent isn't fulfilled, and in the event, we'd probably want to conclude that the value of e I was using was wrong.
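Spelled out as arithmetic, for anyone playing along at home (the 6 is of course made up):

e = 6                                     # pretend electron charge, known to one place
alpha_rough = e**2 / (4 * 3)              # pi to one place
alpha_better = e**2 / (4 * 3.1415926535)  # pi to ten places
print(alpha_rough, alpha_better)          # 3.0  2.8647889756...
# The better pi moved the answer around in the first decimal place, but none of
# that precision is real, because e was only ever good to one place.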
what happens when you get new data regarding the value of (say) ħ?
No, see, ħ is 1. It would be new data about how to calibrate the value of "1 Joule," relative to the standard definition of a second, or some annoying human-derived unit like that.
No, see, ħ is 1. It would be new data about how to calibrate the value of "1 Joule," relative to the standard definition of a second, or some annoying human-derived unit like that.
ah, right, of course. You couldn't get new data about ħ.
I mean, you could express it as a new measurement of ħ, but it's better to think of it as a new measurement of a Joule (or an electron-volt, or some other unit of energy). You can fix a second by something directly physical (I think the standard is transitions between certain levels of a cesium atom).
I remember that, already a long time ago, when I would hear someone talking or arguing about some kind of nonsense, I would say, "Eh, you really want to go into that nonsense?" And people would be amazed at that and say, "In what way is this nonsense anyway? If this is nonsense, then what isn't?" And I'd say, "Oh, I don't know, I don't know. But some things aren't."
Imagine a universe consisting of a bunch of rectangular billiard tables each with one point moving with no friction and colliding with the walls elastically. For each table you only have one constant: the ratio of the two side lengths.
Now suppose someone claims all the tables are squares. In this new theory there are no constants left! Even if you can only do measurements with errors (but you can do better measurements with more time) you can still get a lot of evidence that they really are all squares.
By contrast, you could have a world where some side lengths are noncomputable.
(The case I'm confused about is what if they appear to be chosen randomly from some distribution. Does that mean you should expect almost all of them to be noncomputable? Or can you not distinguish between choosing computable numbers randomly and choosing actually random real numbers?)
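A toy version of the square-table story, if it helps (pure simulation, nothing physical): each round the measurement error shrinks by a factor of ten, and "all tables are squares" keeps surviving, but only ever at the current precision.

import random

true_ratio = 1.0   # square-table theory: the ratio really is exactly 1
for k in range(1, 11):
    sigma = 10.0 ** -k                        # this round's measurement error
    measured = random.gauss(true_ratio, sigma)
    print(f"error ~1e-{k:02d}: measured ratio = {measured:.12f} +/- {sigma:.0e}")

Run it again with true_ratio = 0.99999999997 and the early rounds look essentially identical, which I take to be the point about only ever learning finitely many digits.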
Eh, it feels like you are hiding constants inside the framing of the model now, by ignoring that it's a model that maps to a physical world. You still have to measure table lengths to say how well square theory fits your universe.
Plus what about the width of universe in tables constant, probability of angels having to do a pants run around the table constant, etc?
Stray thought ... I guess a hardcore anti Platonist might consider running a computer program in the physical world a type of experiment.
I had a coffee nightmare tonight.
You still have to measure table lengths to say how well square theory fits your universe.
Yeah, the shapes will be made out of things, won't they? And the measurers probably won't also be rectangles.
In the spherical cow universe, optimal packing studies have taken the place of particle physics.
My best public speaking has occurred at a two-thirds level of full optimal giving a shit. My best writing has probably also occurred under those constraints.
Put another way, I think that optimal performance conditions exist when the importance of the task overshadows the investment of the ego.
I have nothing to say on square theory, except that nonsense is fun to the extent that it can be read with enjoyment.
Not connected to the philosophical concerns*, but with regard to physics as it is practiced, are there areas where just doing the computations with double precision does not suffice? I know there are ways you can eff yourself with bad algorithmic choices, but was wondering if anything currently being done intrinsically needed greater precision.
*Although the pattern of how floating-point representations fall on the number line is what raised the question for me.
And clearly things like the calculation of trillions of digits of π use other representations, but I was wondering about directly physical things. (When I was a boy, double was luxury so my intuitions are geared more to single precision.)
are there areas where just doing the computations with double precision does not suffice?
Probably, but there are ways to get around that.
(First hit for "exact real library".)
Interesting, but "exact real" for all reals? Doubtful (but maybe just due to implementation constraints?).
353: And yes, I assumed there were. Was wondering about the actual practice.
Not all reals, that package only works for (some explicit version of) computable real numbers. In other words, to specify a real number you have to actually give a formula for computing more digits.
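A toy version of that interface, just to make it concrete (emphatically not the library at the link): a computable real is literally a program that takes n and hands back a rational within 10^-n.

from fractions import Fraction

def sqrt2(n):
    # Return a rational within 10**-n of sqrt(2), by bisection in exact arithmetic.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 10**n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

print(float(sqrt2(10)))   # 1.4142135623...

An uncomputable real is exactly a number for which no function like that can exist.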
346: What do you mean "probably"?
Do I understand right that you're arguing it's in principle impossible that any dimensionless constant could actually be an integer? That is, it's in principle impossible that the electron and the proton actually have exactly the same charge?
Ratio of proton and electron charges is an example I should have hit on earlier. That's a dimensionless constant, which really does seem to be exactly the integer 1.
346: What do you mean "probably"?
I mean that entities capable of measuring things are unlikely to be rectangles. I take this to be uncontroversial.
That is, it's in principle impossible that the electron and the proton actually have exactly the same charge?
I wouldn't claim something like that. But I will claim that our knowledge of the ratio of proton and electron charges depends on our knowledge of the charges.* And we don't know those exactly. I'm perfectly willing to say—and this is something even bloody-minded instrumentalists are willing to say, though they don't mean by it what bloody-minded realists mean by it—that the ratio might actually be exactly 1, 1.(0) all the way out. But that isn't knowledge we have.
* I'm assuming we don't have reason to claim, in advance of measuring the charge on protons and electrons, that whatever those are, their ratio will be such-and-such.
Looking at a paper on this question, I'm pretty sure we have much better knowledge of the ratio than we do of either value.
So they measured the difference, and found it to be at most a very small number.
I was envisioning a theoretical reason for expecting them to be the same in advance, but w/e; I don't think the fact that the difference can be measured, rather than calculated from individually measured components, is relevant.
It's really cool how many different ways there are to estimate this number. Page 3 of this fact sheet on the proton gives three wildly different techniques all giving great bounds.
I no longer have any idea what this thread is about. I like it! Also, earlier I pulled the Lakatos book I bought after some other unfogged thread about math down off the shelf. Great success!
a black hole made from the wee charges must be able to naturally decay to wee charged things, and barring a conspiratorial spectrum of charges and masses, this strongly suggests that the mass of the wee charges must be smaller than the charge
Oddly, the stackexchange thread linked in 370 features an appearance by the late, lamented J.J. Cale.
If I can't understand the Germans, do you think I can trick them into thinking I'm speaking German?
Right, so, now that I have slightly more time to write a comment: empirically it's pretty clear that you're going to get a really strong bound on possible deviations of the electron/proton charge ratio from -1, because matter has to be neutral. If matter wasn't neutral, Coulomb forces would blow it apart. The gravitational force between a proton and an electron is about 21 orders of magnitude weaker than the Coulomb force (give or take an order of magnitude, I'm not plugging the numbers in), so the bounds should be at least that good, sort of trivially. I suspect one could place much stronger bounds with some thought. I'm surprised the bounds in the PDG don't beat that naive back-of-the-envelope bound by many orders of magnitude.
Once you're comparing numbers that you know are 1 and -1 +/- 10^-21, it's sort of hard to believe they aren't exactly 1 and -1. And the theory says they have to be, if the U(1) gauge group of electromagnetism is compact. Then there's anomaly cancellation, which pretty much nails the exact charges of all the quarks and leptons in the Standard Model once you specify their weak and strong force interactions, for reasons of basic theoretical consistency. So, yes, the ratio of the electron and proton charge being exactly -1 is one of a relatively small number of statements about physics that I really believe is truly exact, and won't be revised by any future theory.
Oh, wait, I'm being kinda dumb. The gravitational force is actually more like 40-odd orders of magnitude weaker, which leaves me really confused about why the bounds aren't way stronger than they are.
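For the record, the back-of-the-envelope number (standard textbook values, nothing exotic):

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg
q = 1.602e-19     # elementary charge, C

print((G * m_p * m_e) / (k_e * q**2))   # ~4.4e-40: gravity loses by about 40 orders of magnitude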
Oh, I guess if you were, like, hyper-perverse, you could imagine that there was a tiny excess of the number of electrons in the universe that perfectly compensates for the tiny excess charge on a proton. Or something like that. So the direct bounds are a lot weaker than the dead-obvious "neutrality of matter" argument. But unless baryogenesis just works to give this perfect excess for reasons that I haven't thought through yet, this means that to buy that the right bound is only 10^-20 you also need a 10^-20 fine-tuning somewhere else. I'd have to think through the baryogenesis thing in this weird new hypothetical theory where neutrinos have 10^-20 hypercharge or whatever, but I suspect that the right bound is more like 10^-40. (And it occurs to me that if we get empirical evidence that the neutrino mass really is Majorana, that totally wipes out any possibility of an electron/proton charge ratio different from -1 unless you also believe that the photon has a mass, which is another of those hyper-unlikely and highly constrained things.)
370: Yeah, Ron's argument is exactly right. He's a very clever guy and it's kind of a shame that he couldn't tolerate academia, or it couldn't tolerate him, or whatever happened. (I know him a bit from my Ith/a/ca days; no idea what he's up to now.)
No one else wants to chat about physics at 2 AM on a Saturday? Sigh. What about bananas and dugout canoes or something?
I'm around and willing to chat about whatever. I don't know much about physics, though.
I did buy some bananas today, though.
Speaking of public speaking, my understanding is that listening to this interview will immunize against cholera if not actually cure it. (Although I was disappointed it lacked an extended back and forth during which the interviewer probed why a big doofus Jew would write a book about Native Americans, which protocol for interviewing historians was established in this FoxNews interview.)
Bananas in Pajamas Alaskas, our species is the greatest ever.
Hyper-perverse should be a technical term in maths, if it isn't already.
374-378: I'm curious about his use of the term "conspiratorial"; does that have some specific meaning or is it just "so unlikely somebody would have to be doing it on purpose to fuck with you"?
I still don't get 320 but at least I know now that it's obvious.
The intuition behind the IVT stuff seems straightforward enough to me that now I'm worried I'm missing something non-obvious if heebie doesn't get it. Pay confusion forward!
I think the problem is this: Utpegi phrased the problem "Suppose there are no uncomputable numbers out there. We'd never know the difference!" and I rephrased it to myself as "Assume there are uncomputable numbers out there, but we just can't detect them."
So my basic question was "Why do you prefer the former framework over the latter?" Which is not actually a math question. Everything about restricting domains, and hence ranges, lives within the framework of the former statement, but doesn't get at why one prefers the former statement.
390.2: ah, yeah. As far as I can tell it's a matter of taste, or (non-pejoratively) aesthetic judgment, or metaphysical intuition or something, but of course I stand ever correctable.
I don't know squat about physics, but I can think of several implausible scenarios in which uncomputability plays a role. if they're totally gibberish see the physics disclaimer.
Easier than uncomputability of numbers is uncomputability of functions: Say we find an extremely powerful new force. In the few instances we find of its manifestation, we find that it grows exactly like the Busy Beaver function (for those first few numbers of it which we can compute). We have no data regarding anything larger. We could use a really weird polynomial to explain the few data points we have, but we could also hypothesize that the force does indeed behave like BB.
Using Chaitin's constant: physicists come to the conclusion that this universe is actually a randomly sampled Turing machine. Commence frenzy as humanity demands to know the probability that it will halt (of course humanity cares! Just like we care about global warming).
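To put numbers on the Busy Beaver example (values from memory, so treat this as a sketch): the first few values of the max-steps version are 1, 6, 21, 107, and a cubic fits them perfectly while saying nothing useful about what comes next.

import numpy as np

n = np.array([1, 2, 3, 4])
bb = np.array([1, 6, 21, 107])              # first few Busy Beaver (max-steps) values

coeffs = np.polyfit(n, bb, 3)               # a cubic through four points: exact fit
print(np.round(np.polyval(coeffs, n)))      # reproduces [1, 6, 21, 107]
print(round(float(np.polyval(coeffs, 5))))  # the polynomial's guess for n = 5: ~325
# The true value at n = 5 is already known to be at least in the tens of millions,
# and the function eventually outgrows every computable function.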
Say we find an extremely powerful new force. In the few instances we find of its manifestation, we find that it grows exactly like the Busy Beaver function (for those first few numbers of it which we can compute). We have no data regarding anything larger. We could use a really weird polynomial to explain the few data points we have, but we could also hypothesize that the force does indeed behave like BB.
Or it could be midichlorians.
Point taken, I guess. I was trying to think of an example where the uncomputability doesn't kick in at some point of very exact precision which is mostly unnecessary, so we would be satisfied with the approximation our data gives, but rather the data and theory suggest some function or constant which is known to be uncomputable from elsewhere. It's true that that's not very likely to happen.
How do you compute the odds of whether that's a moon or a space station?
Point taken, I guess.
I didn't actually have a point, but your elaboration of my nonexistent point was quite thoughtful.
I'm actually sympathetic to the "some numbers exist and some don't" idea, but I still don't see why Upetgi is drawing the line where he is. Kronecker's comment "God made natural numbers; all else is the work of man" makes far more sense to me.
I'm further tempted to secularize it this way: we have perceptual knowledge of the natural numbers (as properties of small collections of discrete objects). But this perceptual knowledge underdetermines a whole lot of other ideas, including negative and irrational numbers. Once we get beyond this level, numbers are more like what I gather Lakoff says they are. The mathematical anti-realists I'm more familiar with, like Wittgenstein, include more of a social element, though.
But this perceptual knowledge underdetermines a whole lot of other ideas, including negative and irrational numbers. Once we get beyond this level, numbers are more like what I gather Lakoff says they are.
I mean, what Lakoff would say, suuuuper approximately, is that we combine perceptual knowledge about smallish collections of discrete items with perceptual knowledge about, say, straight lines, and that gets us to negative numbers.
388, 390: Sorry; I misunderstood you as taking it to be the burden of the computable IVT to not only prove the existence of computable intermediate values, but the non-existence of non-computable intermediate values.
As to why one would prefer the computable framework, 391 is right. There are philosophical arguments in favor of the position, but they basically reduce to statements of metaphysical prejudice.
But that's not necessarily to damn the subject. Constructive (read: intuitionistic) logic was similarly motivated, and it turned out to involve interesting mathematics that also found application in other domains. But as far as I know, neither computable analysis nor constructive analysis has been similarly fruitful.
398: I should read his book. That is definitely where I was heading.
The thing is that our perceptual knowledge of collections and our perceptual knowledge of magnitudes don't fit well together, as Euclid and the Pythagoreans discovered. This is a big place where the empirical underdetermination comes in.
The Lakoff book came out at about the time I gave up on the philosophy of mathematics. Just thinking about that time in my professional life fills me with self doubt again.
The thing is that our perceptual knowledge of collections and our perceptual knowledge of magnitudes don't fit well together, as Euclid and the Pythagoreans discovered.
Not completely, no. But, of course, the whole of mathematics doesn't exactly fit together, either.
(Incidentally, Lakoff wouldn't talk about this, but there's reasonably good developmental evidence that (small) collections of objects and (analogue) magnitudes are subserved by separate, possibly innate brain regions.)
The brain uses exponential notation?
Or maybe exponential notation uses the brain.
(I've decided that an even better interpretative aid for this thread than that article about North Dakota strippers [actually a Texan living in Portland who does gigs in North Dakota] is the Stevie Ray Vaughan cover of Little Wing.)
The problem with the "natural numbers exist but everything else is made up" perspective is that in a lot of physics real or complex numbers really do seem to be more fundamental than natural numbers. Discreteness emerges from continuousness via eigenvalues of certain operators being discrete, not from an underlying notion of indivisibility. See essear's first few comments on this thread.
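The discreteness-from-continuousness point in toy form (assuming numpy; this is just the textbook particle in a box, not anything essear literally said): discretize the continuous operator -d^2/dx^2 on [0, 1] and the low eigenvalues come out as the discrete ladder (n*pi)^2.

import numpy as np

N = 500
h = 1.0 / (N + 1)
# -d^2/dx^2 on [0, 1], with the wavefunction forced to vanish at the endpoints.
H = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / h**2

eigenvalues = np.linalg.eigvalsh(H)
print(np.round(eigenvalues[:4], 2))               # ~ 9.87, 39.48, 88.83, 157.91, up to small discretization error
print(np.round((np.arange(1, 5) * np.pi)**2, 2))  # the exact (n*pi)^2 ladder for comparison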
An interesting contrasting example to 1 plus the ratio of charges of electron and proton is the ratio of masses of neutrino and electron. This is also unitless and unbelievably small, but we know it is non-zero by neutrino oscillation. I think (essear correct me if I'm wrong) that we knew it was nonzero before we had an effective lower bound.
The busy beaver question is fascinating. One could imagine something like instead of 3 generations of particles there are infinitely many, but their masses grow faster than any computable function.
What age were you when you realized that knowing words and putting them together into sentences was not always the same as thinking about concepts?
Note that once you have the integers, you get the rationals and many irrational numbers for free. For example, you can represent rational numbers as pairs of integers. Irrational numbers that are the roots of polynomials (with integer coefficients) are more work, but can be represented exactly by a finite number of integers.
This was essentially Kronecker's research program for the foundations of mathematics -- a concept is properly grounded if it can be represented exactly as a finite number of integers.
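A two-line version of the sentiment (just a sketch): rationals are pairs of integers, and something like sqrt(2) can be handled exactly by storing numbers of the form a + b*sqrt(2) as pairs of rationals.

from fractions import Fraction

def mul(x, y):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

sqrt2 = (Fraction(0), Fraction(1))
print(mul(sqrt2, sqrt2))   # (Fraction(2, 1), Fraction(0, 1)): sqrt(2) squared is exactly 2
# Finitely many integers the whole way down; no infinite decimal ever appears.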
Rationals may be free, but electrons still charge.
The nice thing about the way I was trying to think about mathematics is that you don't have to be revisionist about current mathematical practice at all. The higher levels of math are largely social constructions, but that doesn't undermine their utility or make them less real than other social constructions.
I would like to hear more about the separate mental machinery in 401. When I was reading about this, there was interesting work by Gelman, Gallistel, Wynn and a couple other names positing an unconscious counting mechanism that we share with many animals that gives us an imprecise preverbal intuition about the behavior of collections of objects.
412: Butterflies are free, but Hawn still charges.
That's pretty anti-semitic, Heebie.
I'm allowed to be, I'm an assimilationist.
415.2: you might check out Feigenson, Dehaene & Spelke (2008) for a review of some of the empirical evidence. But yeah, Gelman and Gallistel have some doubters but are still in the mix a/f/a proposing overall mechanisms.
417: And George Nelson Onan withdraws.
I looked at that out of context and thought "Weiner's out of the mayoral race?"
If he'd stuck to onanism, there wouldn't be a problem, no?
422: I had the *exact* same thought.
I thought onanism included pretty much any sex act that didn't involve internal ejaculation, no?
I can hardly deny that that interpretation makes sense.
425: There goes my novel act of "cry, cry, cry, cry".
425: Only literally--if you want to be a bit more accurate, it would be in any location that could never result in offspring.
I am reminded of a lawyer joke.
Petitioner prays to hear the lawyer joke.
Good grammar, finely drawn distinctions and not much else?
Good grammar, finely drawn distinctions and not much else?
What else is there?
430: "I can't be pregnant, we only had anal sex!"
"The ultrasound shows you're having a lawyer."
One of the things I found surprising about Springer's Progress (published in 1977), which I don't know if I even picked up on the first time I read it, is that Springer and Cornford attempt anal sex on their first tryst, without either of them talking about it explicitly (or really at all) or, apparently, thinking it's anything out of the ordinary.
That's weird, right? I mean, my generation was the first to engage in oral sex, as everyone knows, and here these characters, who would be in their 60s and 80s today, are going at it like it's ancient Greece.
433: Now figure the profession to make the joke for intercrural sex.
433: "How do you grow a lawyer?"
"With Viagra."
Onan's actual sin was refusing to impregnate his brother's widow, right?
Well, that was what he *wouldn't* do. What he *did* do was spill the seed on the ground.
I didn't know that parrots were expected to take their brothers' widows to wife.
440: That's just brilliant. I'd never heard that one before.
I was disappointed it lacked an extended back and forth during which the interviewer probed why a...Jew would write a book about Native Americans
Because it's all about me, this was one of my first thoughts when I saw the Aslan interview.
Alas, it seems vanishingly unlikely that JPS is around now, so 443 is both narcissistic and a waste of pixels. Just go ahead and call me Anthony Weiner.
Don't worry, I'm always here. Not creepy one little bit.
Speaking of not-creepy, do you want me to text you a cock pic?
I assure you, it will be tasteful.
Beverly Hills Chihuahua II is somehow worse than you'd think it could be.
450: I have a possibly irrational hatred of preservatives for this reason.
Designing an experiment to determine whether it's rational or not is a trivial problem.
I think (essear correct me if I'm wrong) that we knew it was nonzero before we had an effective lower bound.
I'm not sure I know what that means. Any distribution I can imagine that shows it's above zero with some confidence would also let me put a nonzero lower bound on it, no?
The busy beaver example seems like a stretch to me. Can you really make reasonable guesses about asymptotic growth based on a handful of points?
I looked into it a little more, and it seems that neutrino oscillation experiments give a lower bound for the differences between neutrino masses and hence give a lower bound for all but the smallest one. So there is a concrete lower bound from the original experiments (though one that's way way too small).
Too small for what? There's an upper bound from cosmology that's no more than a couple orders of magnitude bigger than the lower bound from neutrino oscillations, so they're pinned down in a pretty small range.
"Your pulse rate is from ten to one thousand. You should maybe see a doctor who can narrow the range to a single order of magnitude."
My impression was that the bounds you get from the original experiments (oscillation of solar neutrinos) were really bad. (Now the bounds from other experiments are better.)
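Rough numbers, from memory, so season to taste: the atmospheric mass-squared splitting is around 2.5e-3 eV^2, which forces at least one neutrino mass above roughly 0.05 eV, and the cosmology bound on the sum of the masses is a few tenths of an eV.

import math

dm2_atm = 2.5e-3   # eV^2, atmospheric mass-squared splitting (value from memory)
m_e = 511e3        # eV, electron mass

m_nu_min = math.sqrt(dm2_atm)     # ~0.05 eV: lower bound on the heaviest neutrino mass
print(m_nu_min, m_nu_min / m_e)   # the mass in eV, and the ratio to the electron (~1e-7)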
Just got off the phone with my mother. She believes that the policeman who called me is a participant in the conspiracy against her. She knows this partially because in her interaction with him, she complained that her keys had been stolen, and the next day a different set of keys, that had been stolen several years ago, reappeared on her couch. Her conclusion is that the policeman contacted the people who are stealing her things, and miscommunicated with them about which set of keys should be returned.
I told her I was concerned about her, and asked her to talk to a doctor about this. She refused: she believes she isn't crazy, she's being systematically harassed. I tried talking her through why what she was saying didn't make any sense, and got nowhere. I don't know why I thought that would have worked.
She very firmly told me she didn't need any help, and that her problems weren't any of my affair. I don't have any immediate ideas about what to do, practically, now that I know she's completely uncooperative. I suppose I need to find some kind of social service provider/agency that can give me advice for this situation.
Sorry about the threadjacking. The math and physics part of the thread is over my head, but clearly a lively discussion.
Sorry. I got nothing useful to add. I'm dealing wth a related issue but not quite same because the paranoia isn't there.
However, I am only two beers away from being able to explain what a hadron is.
And six from being able to see them.
If you like piña colliders, random walks on the brane...
465: Most men have trouble getting a hadron after enough beers.
Sorry, Franklin. I have no answers. In the personals column, I wish I'd given more people my number since I'm weepy and chatty, but our WiFi is out until Wednesday and so we can't watch Orange etc, and so Lee gets to go out and so whatever happensin more open terms because I've got both girls to sleep nd just wosh some adult would talk to me. (and Nia can sort of sibg along to Blurred Lines, which is more about Y camp than about me but still not reassuring.)
468: I wouldn't know anything about that.
Or after too many Rusty Nails. Same principle.
Deepest sympathies FDR. My dad went through something very similar with his mom. She wasn't in the US, so no useful advice for you on whether to approach social services people or MDs first. There, starting with MDs was the best way forward.
If you have family that knows the medical profession, local referrals for good geriatric psychiatry might be a start.
Franklin, so sorry. It will make you crazy convincing someone of objective truth that they can't see. (Broken logic is so frustrating!) I've had an unfortunate amount of experience with dementia and aging relatives, but not with the added degree of difficulty you are going through - mine have all been reasonably cooperative (for demented people), plus it's never been all on me to make the tough calls. My suggestion to you is to clarify to yourself what your goals are. You can't solve everything, but you can decide to try to keep your mother as happy as possible or as safe as possible (or some other goal, like not making yourself crazy). I think it helps a bit in making all the horrible decisions. A social worker (specializing in the elderly) might be the right place to start. You might also consider broaching the subject of having your name added to her bank accounts if your mother would agree. This isn't a big step and fairly common for older folks in my family - it means that in case of emergency, finances aren't inaccessible, but also it means that you can check in for truly troubling changes without having to rely on direct reports. One final suggestion is to call outside of your normal time to see whether she has better or worse hours to talk, since my experience has been that my relatives were more or less agreeable at different times in a pretty regular pattern (evenings bad, mornings after 10 good). Good luck.
Oh, fuck, my spelling! I hate everything and know better than to type one-handed while doing bedtime stuff, but I did because i'm the worst. Argh.
Ugh, that's awful, Franklin. Wish I had helpful advice. (And my sympathies, Thorn!)
(I don't know that I deserve sympathies. Now I feel bad for asking.)
476, of course you deserve nice wishes/sympathy? Sounds lonely. Hope you manage to entertain yourself. Or that you get a night out soon because it must be your turn next . . .
Hope you manage to entertain yourself.
Well, she was apparently typing one-handed "while doing bedtime stuff"...
If you can still type, it's probably not entertaining enough.
teo's ideas about the home lives of people with kids are an inspiration.
Ouch, FDR, that sounds tough. Do you think it's dementia, or maybe just a combination of personality quirks and anxieties? (My parents definitely have the latter. I'm no good at dealing with it, alas, because I'm so hesitant about interfering with their lives.)
Also, about the earlier discussion of whether "reality" is continuous or discrete: I think a related question is whether it is possible to measure any physical quantity with arbitrarily good precision. That is, although you cannot measure any quantity exactly, you can make the error bars as small as you like.
If you can do arbitrarily precise measurements, then you can distinguish any discrete theory (no matter how fine-grained it is) from a continuous theory.
But, if there is some fixed lower-bound on the error in your measurements (for instance, if the universe has finite size, finite energy and finite lifetime), then there are discrete theories that you cannot hope to distinguish from a continuous theory.
Time of day and actual local weather are great ideas; midmorning recommendation seconded.
482 then there are discrete theories that you cannot hope to distinguish from a continuous theory.
No, not really, at least until someone shows me a novel sort of discrete theory that doesn't break relativity. The trouble is that even deciding that space is a lattice at really, really tiny distances (distances much smaller than we can directly probe) generically predicts order-one discrepancies in observables that test relativity. You can read what some moderately well-informed person wrote on StackExchange when the faster-than-light neutrino thing was in the news; it applies to discrete-spacetime theories as well.
479: Good point. She hasn't commented in a while now, so hopefully she stepped up the entertainment level.
I went to sleep, which is indeed pretty fascinating and was definitely the right choice. Presumably the people Lee was out with will have convinced her that quitting her job to become a dog-walker is not the right choice. And now it's time to wake kids up and get them into bathing suits and restart the circle of life.
484: that person's comment is a nice summing up, although not relevant to the computable/noncomputable distinction without easy-to-come-across difference, one supposes. Also probably not likely to bring people who haven't been following thus far on board.
And the painted horses, they go up and down.
481: It's definitely in accordance with pre-existing quirks and anxieties, but she never previously believed nutty stuff powerfully enough to call the police over it, or to conclude that the police were participating in a conspiracy against her. Something's been changing over the past few years, and I don't know what to call it other than dementia.
490: My observation is that many people tend to become caricatures of their earlier selves as they age (through whatever mechanism). But even without something like clinical dementia causing it*, it can certainly trip otherwise "OK" people into a place where they need intervention (take my father-in-law ... please**). But in that case it was in some sense easier to arrange because he was already under regular medical treatment for physical problems.
*In the folks where I have witnessed it, it seems to manifest as a gradual (or rapid) erosion in some of their higher-level "social control" functions.
**In poor taste since he died last year, but pretend it was written by 2-years-ago past-me ... which is of course only marginally in better taste.
488: Right. It applies mostly to "discrete" in the sense that dalriata was clarifying in 130.2, I think.
492: I'm still confused as to how the computable reals can be countable (they have to be, right?) and yet have all these things that rely on infinitesimability work with them, but maybe that's a question for UPETGI.
484 has indeed taught me to be more cautious when slicing things into tiny cubes.
493: Well, the rationals are countable, and yet they're dense in the reals, so there are rational numbers arbitrarily close to whatever number you want. Computable numbers are a superset of rational numbers, right?
I mean, I think I share your intuition, but I think it's a bad intuition from having integers as our prototypical example of a countably infinite set of numbers.
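Concretely (a throwaway sketch): for any target and any tolerance you can write down a rational, hence computable, number that close, just by truncating a decimal expansion.

from fractions import Fraction
import math

def rational_within(x, eps):
    # Return a Fraction within eps of x (x given as a float here, purely for the demo).
    k = 0
    while Fraction(1, 10**k) > eps:
        k += 1
    return Fraction(math.floor(x * 10**k), 10**k)

print(rational_within(math.pi, 1e-9))   # 3141592653/1000000000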
That is probably right. Countable carries in my head an implication of sparse at some level, which doesn't need to be true. The idea that you can have something dense in the reals that's on some level computable is sort of brain-breaking, though.
I have a physics question... I was talking to a physicist a couple of years ago and understood (or perhaps misunderstood) from that conversation that the Higgs field only contributes some of the mass of elementary particles, with the remaining mass being contributed by other fields. So, for example, much of the mass of the down quark is actually coming not from the Higgs field but from the nuclear fields, whereas for a bottom quark the other fields contribute the same mass as they did for down but the Higgs field contributes a whole bunch of new mass. This explanation made a lot of sense to me.
However, when I look at the explanations at "Of Particular Significance" he seems to be saying that this point of view is totally wrong and that all the mass of elementary particles comes from the Higgs. For example on this post he has an exchange in comments where he explicitly says the above picture is totally wrong and 50 years out of date.
However, I heard this explanation in the past year or so, so either I totally misunderstood or there's something subtler going on.
Here's my guess... It's something like if you ignored the Higgs field you'd still get mass contributions from the other fields as in the first explanation, however ignoring the Higgs field while leaving everything else the same doesn't actually give a consistent theory and if you turn off the Higgs field it actually changes everything else in such a way that the other interactions no longer contribute mass to elementary particles.
For example on this post he has an exchange in comments where he explicitly says the above picture is totally wrong and 50 years out of date.
Where does he say this? His exchange with "James" early in the thread seems to be saying that he was oversimplifying and omitting the thing you're talking about.
Your first paragraph sounds like a reasonably good summary to me.
The exchange with Maury.
Oh, that's specifically about the electron. The electron doesn't interact with the strong force. Quarks would (sort of) have a mass if you turned off the Higgs, the proton would have pretty much exactly the same mass it has in our world, and the W and Z bosons would have a much smaller mass. Leptons would be massless.
I guess that makes sense to me, eg the difference in mass of the proton and neutron would be zero in a Higgs-less world, so if you want neutrons to still decay the electron would need to be massless.
But why wouldn't the electric field give it a tiny mass? (I guess I can already see part of the problem, if it did give mass to the electron it would also make the proton heavier than the neutron, but I'd prefer some explanation saying in what way electromagnetism is different from the strong force in terms of giving mass.)
496: What I find particularly amazing about it is that the computable numbers are apparently a real closed field; most amazingly, that means all first-order statements about reals are true about the computables. Although this is supposedly true about the algebraic numbers, too.
501 But why wouldn't the electric field give it a tiny mass?
Self-energy corrections to the electron mass are proportional to the mass itself because of something called "chiral symmetry," so if you start with a massless electron you don't generate a mass. (This is a general fact about particles with spin: the massless and massive particles live in different representations of the Poincaré group, so massless particles never pick up a mass from quantum corrections.)