141

Deterministic models. Clarification of the question:

The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics.

My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows:

Did any of these people actually read the work and can anyone tell me where a mistake was made?

Now the details. I can't help being disgusted by the "many worlds" interpretation, or the Bohm-de Broglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to get some ideas, I construct models with various degrees of sophistication. These models are of course "wrong" in the sense that they do not describe the real world and do not generate the Standard Model, but one can imagine starting from such simple models and, in stages, adding more and more complicated details to make them look more realistic.

Of course I know what the difficulties are when one tries to underpin QM with determinism. Simple probabilistic theories fail in an essential way. One or several of the usual assumptions made in such a deterministic theory will probably have to be abandoned; I am fully aware of that. On the other hand, our world seems to be extremely logical and natural.
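To make explicit the kind of obstruction meant here (standard material, stated in notation chosen for this summary rather than quoted from the papers): a local hidden-variable model is assumed to factorize as
$$
P(a,b\,|\,x,y)\;=\;\int d\lambda\,\rho(\lambda)\,P(a\,|\,x,\lambda)\,P(b\,|\,y,\lambda),
$$
which implies the CHSH bound
$$
\big|\,E(x_1,y_1)+E(x_1,y_2)+E(x_2,y_1)-E(x_2,y_2)\,\big|\;\le\;2,
$$
while quantum mechanics reaches $2\sqrt{2}$. Superdeterminism, discussed below, questions the tacit assumption that the settings $x,y$ can be varied independently of $\lambda$.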

Therefore, I decided to start my investigation at the other end. Make assumptions that later surely will have to be amended; make some simple models, compare these with what we know about the real world, and then modify the assumptions any way we like.

The no-go theorems tell us that a simple cellular automaton model is not likely to work. One way I tried to "amend" them, was to introduce information loss. At first sight this would carry me even further away from QM, but if you look a little more closely, you find that one still can introduce a Hilbert space, but it becomes much smaller and it may become holographic, which is something we may actually want. If you then realize that information loss makes any mapping from the deterministic model to QM states fundamentally non-local—while the physics itself stays local—then maybe the idea becomes more attractive.
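As a toy numerical illustration of how information loss shrinks the state space (the six-state rule below is invented purely for illustration and is not taken from the papers): with a non-invertible update rule, only equivalence classes of states with identical futures can serve as basis vectors of the Hilbert space, so the effective dimension drops.

```python
# Toy sketch (hypothetical rule, for illustration only): a non-invertible
# "automaton" on 6 states. States with identical futures are merged into
# equivalence classes; only the classes can carry quantum amplitudes.

def update(s):
    # non-invertible: 3, 4 and 5 all map to 0, so their futures coincide
    return {0: 1, 1: 2, 2: 3, 3: 0, 4: 0, 5: 0}[s]

def future(s, horizon=12):
    """Trajectory of s over a fixed horizon; equal futures => same class."""
    traj = []
    for _ in range(horizon):
        s = update(s)
        traj.append(s)
    return tuple(traj)

states = list(range(6))
classes = {}
for s in states:
    classes.setdefault(future(s), []).append(s)

print("naive number of ontic states :", len(states))   # 6
print("effective Hilbert-space dim  :", len(classes))   # 4 equivalence classes
for f, members in classes.items():
    print("class", members, "shares future", f[:4], "...")
```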

Now the problem with this is that again one makes too big assumptions, and the math is quite complicated and unattractive. So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong? With the idea in mind that we will alter the assumptions, maybe add information loss, put in an expanding universe, but all that comes later; first I want to know what goes wrong.

And here is the surprise: In a sense, nothing goes wrong. All you have to assume is that we use quantum states, even if the evolution laws themselves are deterministic. So the probability distributions are given by quantum amplitudes. The point is that, when describing the mapping between the deterministic system and the quantum system, there is a lot of freedom. If you look at any one periodic mode of the deterministic system, you can define a common contribution to the energy for all states in this mode, and this introduces a large number of arbitrary constants, so we are given much freedom.
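For concreteness, here is a minimal sketch of this kind of mapping for a single periodic mode (notation chosen here, not quoted from the references): an automaton mode with $N$ states $|k\rangle$, updated by $k\to k+1 \pmod N$ every time step $\delta t$, has the one-step evolution operator
$$
U(\delta t)\,|k\rangle = |k+1 \bmod N\rangle,\qquad
|n\rangle_E = \frac{1}{\sqrt N}\sum_{k=0}^{N-1} e^{2\pi i\,nk/N}\,|k\rangle,\qquad
U(\delta t)\,|n\rangle_E = e^{-2\pi i\,n/N}\,|n\rangle_E .
$$
Writing $U(\delta t)=e^{-iH\,\delta t}$ fixes the eigenvalues of $H$ only modulo $2\pi/\delta t$,
$$
H\,|n\rangle_E = E_n\,|n\rangle_E,\qquad
E_n = \frac{2\pi n}{N\,\delta t} + \frac{2\pi}{\delta t}\,m_n,\qquad m_n\in\mathbb{Z},
$$
and the choice of the integers $m_n$ (for instance, a common offset for every periodic mode) is part of the freedom referred to above.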

Using this freedom I end up with quite a few models that I happen to find interesting. Starting with deterministic systems I end up with quantum systems. I mean real quantum systems, not any of those ugly concoctions. On the other hand, they are still a long way off from the Standard Model, or even anything else that shows decent, interacting particles.

Except string theory. Is the model I constructed a counterexample, showing that what everyone tells me about fundamental QM being incompatible with determinism, is wrong? No, I don't believe that. The idea was that, somewhere, I will have to modify my assumptions, but maybe the usual assumptions made in the no-go theorems will have to be looked at as well.

I personally think people are too quick in rejecting "superdeterminism". I do reject "conspiracy", but that might not be the same thing. Superdeterminism simply states that you can't "change your mind" (about which component of a spin to measure), by "free will", without also having a modification of the deterministic modes of your world in the distant past. It's obviously true in a deterministic world, and maybe this is an essential fact that has to be taken into account. It does not imply "conspiracy".

Does someone have a good, or better, idea about this approach, without name-calling? Why are some of you so strongly opinionated that it is "wrong"? Am I stepping on someone's religious feelings? I hope not.

References:

"Relating the quantum mechanics of discrete systems to standard canonical quantum mechanics", arXiv:1204.4926 [quant-ph];

"Duality between a deterministic cellular automaton and a bosonic quantum field theory in $1+1$ dimensions", arXiv:1205.4107 [quant-ph];

"Discreteness and Determinism in Superstrings", arXiv:1207.3612 [hep-th].


Further reactions on the answers given. (Writing this as "comment" failed, then writing this as "answer" generated objections. I'll try to erase the "answer" that I should not have put there...)

First: thank you for the elaborate answers.

I realise that my question raises philosophical issues; these are interesting and important, but not my main concern. I want to know why I find no technical problem while constructing my model. I am flattered by the impression that my theories were so "easy" to construct. Indeed, I made my presentation as transparent as possible, but it wasn't easy. There are many blind alleys, and not all models work equally well. For instance, the harmonic oscillator can be mapped onto a simple periodic automaton, but then one does hit upon technicalities: the Hamiltonian of a periodic system seems to be unbounded above and below, while the harmonic oscillator has a ground state. The time-reversible cellular automaton (CA) that consists of two steps $A$ and $B$, where both $A$ and $B$ can be written as the exponential of a physically reasonable Hamiltonian, is itself much more difficult to express as a Hamiltonian theory, because the Baker-Campbell-Hausdorff (BCH) series does not converge. Also, explicit $3+1$ dimensional QFT models resisted my attempts to rewrite them as cellular automata. This is why I was surprised that the superstring works so nicely, it seems, but even here, to achieve this, quite a few tricks had to be invented.
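To spell out the BCH obstruction (a standard Baker-Campbell-Hausdorff sketch, with the time step set to one and notation chosen here): the full time step of the two-step automaton is
$$
U = e^{-iA}\,e^{-iB} \equiv e^{-iH},\qquad
H = A + B - \tfrac{i}{2}\,[A,B] - \tfrac{1}{12}\Big([A,[A,B]] - [B,[A,B]]\Big) + \cdots ,
$$
and the series is only guaranteed to converge when the relevant eigenvalues of $A$ and $B$ are small compared with $2\pi$; for a generic automaton they are not, so this route does not yield a convergent expression for $H$.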

@RonMaimon. I here repeat what I said in a comment, just because there the 600 character limit distorted my text too much. You gave a good exposition of the problem in earlier contributions: in a CA the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ ...$ that have the property $\langle\psi_i\,|\,\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe, allows for many more states that are not at all orthonormal to each other. How could these states ever arise? I summarise, with apologies for the repetition:

  • We usually think that Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is a Hilbert space, and the entire Hilbert space is the product of all these.
  • Normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe.
  • I think this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$; in terms of CA states, they must form an orthonormal set. In terms of "Standard Model" (SM) states, this orthonormal set is not separable, and this is why, locally, we think we have not only the basis elements but also all superpositions. The orthonormal set is then easy to map back onto the CA states.

I don't think we have to talk about a non-denumerable number of states, but the number of CA states is extremely large. In short: the mathematical system allows us to choose: take all CA states, then the orthonormal set is large enough to describe all possible universes, or choose the much smaller set of SM states, then you also need many superimposed states to describe the universe. The transition from one description to the other is natural and smooth in the mathematical sense.

I suspect that, this way, one can see how a description that is not quantum mechanical at the CA level (admitting only "classical" probabilities) can "gradually" force us into accepting quantum amplitudes when turning to larger distance scales and limiting ourselves to much lower energy levels. You see, in words, all of this might sound shaky and vague, but in my models I think I am forced to think this way, simply by looking at the expressions: in terms of the SM states, I could easily decide to accept all quantum amplitudes, but when turning to the CA basis, I discover that superpositions are superfluous; they can be replaced by classical probabilities without changing any of the physics, because in the CA, the phase factors in the superpositions will never become observable.

@Ron I understand that what you are trying to do is something else. It is not clear to me whether you want to interpret $\delta\rho$ as a wave function. (I am not worried about the absence of $\mathrm{i}$, as long as the minus sign is allowed.) My theory is much more direct; I use the original "quantum" description with only conventional wave functions and conventional probabilities.


(New since Sunday Aug. 20, 2012)

There is a problem with my argument. (I correct some statements I had put here earlier.) I have to work with two kinds of states: 1: the template states, used whenever you do quantum mechanics, which allow for any kind of superposition; and 2: the ontic states, the set of states that form the basis of the CA. The ontic states $|n\rangle$ are all orthonormal: $\langle n|m\rangle=\delta_{nm}$, so no superpositions are allowed for them (unless you want to construct a template state, of course). One can then ask the question: How can it be that we (think we) see superimposed states in experiments? Aren't experiments only seeing ontic states?

My answer has always been: Who cares about that problem? Just use the rules of QM. Use the templates to do any calculation you like, compute your state $|\psi\rangle$, and then note that the CA probabilities, $\rho_n=|\langle n|\psi\rangle|^2$, evolve exactly as probabilities are supposed to do.
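A minimal numerical sketch of this point (the five-state automaton and the random template state below are invented purely for illustration): when the ontic evolution is a permutation of the CA basis, the probabilities $\rho_n=|\langle n|\psi\rangle|^2$ are simply carried along by that permutation, so the phases of $|\psi\rangle$ never become observable in this basis.

```python
import numpy as np

# Hypothetical 5-state automaton: one time step is the cyclic shift n -> n+1 (mod 5).
N = 5
U = np.zeros((N, N))
for n in range(N):
    U[(n + 1) % N, n] = 1.0          # permutation matrix = the CA's unitary evolution

# An arbitrary "template" state with nontrivial phases.
rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

rho = np.abs(psi) ** 2               # CA-basis (Born) probabilities

# Evolve the amplitudes quantum mechanically, and the probabilities classically.
psi_next = U @ psi
rho_next_classical = np.array([rho[(n - 1) % N] for n in range(N)])  # rho_n(t+1) = rho_{n-1}(t)

print(np.allclose(np.abs(psi_next) ** 2, rho_next_classical))        # True: phases drop out
```

In a generically rotated ("SM-like") basis the same unitary does mix amplitudes, so probabilities alone no longer close under the evolution; that is where superpositions become indispensable.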

That works, but it leaves the question unanswered, and for some reason, my friends on this discussion page get upset by that.

So I started thinking about it. I concluded that the template states can be used to describe the ontic states, but this means that, somewhere along the line, they have to be reduced to an orthonormal set. How does this happen? In particular, how can it be that experiments strongly suggest that superpositions play extremely important roles, while according to my theory, somehow, these are plutoed by saying that they aren't ontic?

Looking at the math expressions, I now tend to think that orthonormality is restored by "superdeterminism", combined with vacuum fluctuations. The thing we call vacuum state, $|\emptyset\rangle$, is not an ontological state, but a superposition of many, perhaps all, CA states. The phases can be chosen to be anything, but it makes sense to choose them to be $+1$ for the vacuum. This is actually a nice way to define phases: all other phases you might introduce for non-vacuum states now have a definite meaning.

The states we normally consider in an experiment are usually orthogonal to the vacuum. If we say that we can do experiments with two states, $A$ and $B$, that are not orthonormal to each other, this means that these are template states; it is easy to construct such states and to calculate how they evolve. However, it is safe to assume that, actually, the ontological states $|n\rangle$ with non-vanishing inner product with $A$, must be different from the states $|m\rangle$ that occur in $B$, so that, in spite of the template, $\langle A|B\rangle=0$. This is because the universe never repeats itself exactly. My physical interpretation of this is "superdeterminism": If, in an EPR or Bell experiment, Alice (or Bob) changes her (his) mind about what to measure, she (he) works with states $m$ which all differ from all states $n$ used previously. In the template states, all one has to do is assume at least one change in one of the physical states somewhere else in the universe. The contradiction then disappears.
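In formulas, the claim can be sketched as follows (notation chosen here for illustration, not taken from the papers): with the vacuum phases fixed to $+1$,
$$
|\emptyset\rangle \;=\; \frac{1}{\sqrt{\mathcal N}}\sum_n |n\rangle ,
$$
and two template states prepared in the two runs of the experiment,
$$
|A\rangle=\sum_{n\in S_A}\alpha_n|n\rangle,\qquad |B\rangle=\sum_{m\in S_B}\beta_m|m\rangle ,
$$
the templates may overlap, but if the ontic states actually realized in the two runs never coincide, $S_A\cap S_B=\emptyset$, then
$$
\langle A|B\rangle=\sum_{n\in S_A\cap S_B}\alpha_n^*\,\beta_n = 0 ,
$$
which is the sense in which the universe "never repeating itself exactly" restores orthogonality.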

The role of vacuum fluctuations is also unavoidable when considering the decay of an unstable particle.

I think there's no problem with the above arguments, but some people find it difficult to accept that the working of their minds may have any effect at all on vacuum fluctuations, or the converse, that vacuum fluctuations might affect their minds. The "free will" of an observer is at risk; people won't like that.

But most disturbingly, this argument would imply that what my friends have been teaching at Harvard and other places, for many decades as we are told, is actually incorrect. I want to stay modest; I find this disturbing.

A revised version of my latest paper was now sent to the arXiv (will probably be available from Monday or Tuesday). Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them.

G. 't Hooft
  • 5,341
  • related http://physics.stackexchange.com/q/34165/3229 – Curious George Aug 15 '12 at 10:33
  • 1
    I think the issue is that the superdeterminism in your model isn't explained intuitively. At least that seems to be the recurrent objection.

    Also, as I've told you before, a lot of people are content with Many Worlds because then they get a "clear" mental image of what is going on. Plus it is seemingly local and deterministic. So when someone comes along with a "deeper" theory, that obviously causes a lot of headaches.

    Because as you say, right now your model doesn't predict or give you any of the known physics. So it doesn't work as a good mental image to conduct quantum mechanics in.

    – QuestionAnswers Aug 15 '12 at 11:00
  • 3
    As an experimental physicist I keep an open mind on theoretical possibilities, being quite aware of the history of physics, and the reversals therein, some of which I have lived through first hand. I find myself biased against a reality based on states being zero or one (or any integer number at that). It is probably because back in 1967 when we got at our institute our first computer and started playing with monte carlo events I had serious disagreements with computer scientists, who claimed : we can describe everything with computers, no need for CERN experiments now!! People who disagree – anna v Aug 15 '12 at 14:04
  • continued: on general principles each must have some similar background. It is only those versed in the specific mathematics who can really contribute to the discussion, and I am reading the controversy with interest. A second bias on my part is that I have seen discreteness arise from continuum, but have no intuitive feeling on how continuum can arise from discreteness. – anna v Aug 15 '12 at 14:09
  • I'm not a physicist, but I am curious if there is a relationship between your theory and Belavkin's approach (eventum mechanics) for example: http://arxiv.org/abs/quant-ph/0512187 http://arxiv.org/abs/quant-ph/0512188 http://arxiv.org/abs/math-ph/0702079 –  Aug 15 '12 at 13:27
  • 2
    I don't think anybody was yelling at you or calling you names; certainly physics people have the highest respect for everything you write, which includes breaking one's head over several years trying to internalize every single one of your ideas, even the ones that aren't 100% right. People just said you weren't 100% right, because the no-gos aren't circumvented. This is a little blunt, but not really rude. I think I figured out a slightly different way of doing what you want; I'll post it as an answer. – Ron Maimon Aug 16 '12 at 04:06
  • 8
    Dear @QuestionAnswers, you're misinterpreting what I say: I don't think that these questions are illegal to ask. They're legal to be asked, they were asked about 90 years ago and they were answered 85 years ago. It's silly, and not illegal, to ask them again in 2012 because physics has known the answer for quite some time. It's a pretty long time. 85 years after physicists determined that heliocentrism was more right than geocentrism, it was generally viewed as silly to question heliocentrism again. Learning in the modern age should be faster but it's apparently not. – Luboš Motl Aug 16 '12 at 06:10
  • 3
    @LubošMotl: This is a mischaracterization. You can consider it a mathematical exercise--- I want to approximately simulate QM, but my computer is too small to store the state. Can I do it using a computer of size growing roughly (large) constant times the number of particles? Is it possible? This is essentially what t'Hooft is asking. It is not ruled out by the no-go theorems, if it is sufficiently nonlocal (the example I give is horrendously nonlocal in M-copies all interacting together). The statement that it is impossible to reproduce QM from hidden variables is explicitly refuted by Bohm. – Ron Maimon Aug 16 '12 at 06:57
  • 1
    Dear @Ron, you obviously can't simulate quantum phenomena by a classical computer whose size scales as the number of particles. If this were possible, one could write down simple classical algorithms to imitate the fast algorithms that only work at quantum PCs. You surely know these things, don't you? So why are you trying to sell this question as a good one? Nonlocality isn't a recipe to emulate quantum mechanics; realism not locality is the wrong assumption here. Nonl. realist theories may also be falsified, see e.g. http://motls.blogspot.cz/2007/04/falsifying-quantum-realism-again.html?m=1 – Luboš Motl Aug 16 '12 at 07:39
  • 1
    To simulate quantum degrees of freedom, you clearly need to remember the whole wave function and treat it as a "classical observable that collapses", so the complexity grows exponentially with the number of the quantum degrees of freedom. Despite this exponential investment, you don't get the right theory of the physical phenomena. You only get a simulation, something that belongs to the computer game industry, not science. Science is about the real phenomena, not best ways to fake them. The computer imitation would have to be infinitely fine-tuned to fake basic features such as Lorentz sym. – Luboš Motl Aug 16 '12 at 07:44
  • 4
    @LubošMotl: Yes, of course you can't simulate quantum computation. The thing is, these types of quantum computing states are incredibly entangled, and very hard to realize without decoherence spoiling them, so much so that we haven't realized any such states experimentally. The question is whether you can simulate workaday QM, lots of decoherence, no quantum computer around, by a linearly scaling computer. You could say "do collapse", but that's harder than it looks computationally. In a discrete QM analog, you get automatic collapse, and you can always do monte-carlo. – Ron Maimon Aug 16 '12 at 08:58
  • 1
    @QuestionAnswers: Although I see your point, it's not good to be biased in philosophy. Lubos's "acausal events" interpretation is not so different from many-worlds, they only differ by mumbo-jumbo, and many-worlds is superficially realistic (although consider 't Hooft's model in many worlds: the universal wavefunction is never superposed, yet one sees that the projections to relative states definitely make relative states superposed--- how does this happen, really? There is no projection, it is more difficult to see than in usual QM models). – Ron Maimon Aug 16 '12 at 09:40
  • 3
    @LubošMotl: The theorem you used to argue against nonlocal realism is no good--- it is secretly using variations on locality to do the argument, namely by making assumptions on the type of statistical mixtures a realistic theory is allowed to have. In the description I gave, you can see how horrendous nonlocality can spoil these types of assumptions in a natural way. Further, you claim in your blog that field theory and relativity is local and therefore physics is local, but you very well know that string theory is nonlocal, and holography has cast locality out the window. – Ron Maimon Aug 16 '12 at 10:11
  • Dear @Ron, I don't think your question about the simulation "without lots of entanglement" is well-defined in any sense. The more classical a situation is, the less important the quantum phenomena become. But there's nothing wrong with entanglement. Entanglement is the norm in quantum mechanics. When one deals with characteristic phenomena of quantum mechanics, near-maximal entanglement is the rule, not an exception, and that's where the inadequacy of any "classical model" becomes most obvious. One may at most try to hide his head into sand and overlook the falsification. – Luboš Motl Aug 16 '12 at 15:36
  • Dear @QuestionAnswers, you wrote: "The thing is, for YOU the answer was given 80 years ago. But for most realists, it was not, hence Many Worlds interpretation, de-Broglie Bohm interpretation, Zig Zag in time interpretations and so on." Nope, that's not how science works. Evidence in science isn't subjective. The evidence was found 85 - not just 80 - years ago and it was for everyone. The evidence irreversibly showed that any "realistic description" is incompatible with observations. Everyone who declares himself to be a "realist regardless of evidence" is guaranteed to be wrong forever. – Luboš Motl Aug 16 '12 at 15:39
  • 2
    Dear @Ron, the 2007 Zeilinger et al. paper explicitly discusses classes of models that are non-local, and they may still be shown to be incompatible with the observations, thus proving that nonlocality isn't the "cure" here. You're just trying to throw fog on these indisputable and established facts. At any rate, this discussion is academic because it's been established since 1905 that the laws of Nature in the flat spacetime are exactly local; the locality follows from the Lorentz symmetry. They're just not classical (i.e. "realist"). Nature follows quantum i.e. unrealist, probabilistic laws. – Luboš Motl Aug 16 '12 at 15:41
  • 21
    @LubošMotl: You are repeating ridiculous propaganda as if it were fact, this is just deplorable fear-mongering. Zeilinger's "classes of models" is a very myopic class that doesn't include mine or any other reasonable nonlocal model. Locality and Lorentz invariance are unrelated, the naive arguments notwithstanding. Here is a Lorentz invariant nonlocal action: $\int {\phi(x)\phi(y) \over ((x-y)^2 + 1)^{.73}} d^4x\, d^4y$; there are lots of others. Locality is absent in string theory, there are no local fields, and it is completely absent in the AdS/CFT bulk, where the whole spacetime is emergent. – Ron Maimon Aug 16 '12 at 17:09
  • 5
    ... regarding "realism", what is disproved is local realism, with Bell's inequality, and "small realism" that reproduces exact QM, using Shor's algorithm. All other constraints are weaker. Bohm's theory shows you can't disprove (exponentially large) realism, because it works and it is realistic, so there is no general no-go. But a modern "realist" is not looking for Bohm's theory, but a theory that fails to reproduce QM in the highly entangled domain, and therefore gives actual different predictions. Such a theory is what t'Hooft is after, and it makes sense to look for this. – Ron Maimon Aug 16 '12 at 17:15
  • 5
    ... even if it is physically wrong, if mathematically right, it is a prescription for reducing highly entangled states. Of course entanglement is the norm! But we usually call it "collapse", not entanglement, and it usually only goes one-way--- reducing a quantum system in complexity. The delicate cases of quantum computation require fine-tuning to make the entanglement go back and forth, not getting collapse, but nontrivial computation. Most cases, you can approximate entanglement as collapse. A "realist approximation" to quantum mechanics is useful for automatic collapse simulation. – Ron Maimon Aug 16 '12 at 17:19
  • Dear @Ron, your action is Lorentz invariant but it is acausal: an effect may influence its past. So unless similar field-theory actions may be shown to be equivalent to a local action, they produce an inconsistent theory. It is not really true that one gets similar "nonlocalities" in string theory, either. When you express the degrees of freedom correctly as functions of center-of-mass degrees of freedom, dynamics is exactly local, see e.g. http://arxiv.org/abs/hep-th/0406199 - What I wrote above is not "propaganda" but basics of quantum mechanics. – Luboš Motl Aug 16 '12 at 18:55
  • Again, it is not true that just local realism is disproved as a basis for a theory of the phenomena in the microscopic world. Realism as such is ruled out and the founding fathers of quantum mechanics have known this fact since the mid 1920s. There is no collapse in the real world and even if you imitate the real world by a classical model - and again, physics is about Nature, not about imitations, and one may use standard physics arguments to show that the imitation isn't the real deal - the collapse isn't the same thing as entanglement. Entanglement has to be "generic" even in an imitation. – Luboš Motl Aug 16 '12 at 18:59
  • @QuestionAnswers: "Are realists i.?" - Yes, I would use a different terminology on this website but they are. Yes, I let you be an i. but it is still my duty on this forum to point out that the "realists" are i's. If you want to prevent people from saying that physics has known the fundamental laws to be quantum i.e. non-realist for 85 years, you will have to try to contact a modern counterpart of the inquisition but be ready that such institutions are less powerful than they used to be during Galileo's times. – Luboš Motl Aug 16 '12 at 19:02
  • 2
    As @RonMaimon has explained, realism has obviously not been disproved. I don't even understand how you can claim to do science when you try to get rid of reality itself. However I don't care either. I am as sure you are wrong as you are sure you are right. – QuestionAnswers Aug 16 '12 at 19:26
  • 8
    The real issue I think has been exposed most clearly by Ron Maimon and some others in their earlier contributions: the problem is that in a CA the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ \ldots$ that have the property $\langle\psi_i|\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe allows for many more states that are not at all orthonormal to each other. How could these states ever arise? – G. 't Hooft Aug 16 '12 at 19:42
  • 2
    I am tempted to think that the answer to this question is a radical one, that may well upset many of you: -| to describe our world, we have invented Hilbert space, which contains not only basis elements but also all superpositions. -| we have learned to think that this Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is such a Hilbert space, and the entire Hilbert space is the product of all these. -| normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe. – G. 't Hooft Aug 16 '12 at 19:43
  • 2
    -| the upsetting thing is that this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$. -| all we need to assume is that all ontic states of the universe form an orthonormal set. This orthonormal set is NOT separable, and this is why, locally, we think we have not only the basis elements but also all superpositions. Observe that it is easy to imagine such sets.

    This orthonormal set is then easy to map onto an automaton. There is no need to think that this automaton cannot be a local one.

    – G. 't Hooft Aug 16 '12 at 19:44
  • 2
    To be precise: the ontic states are only separable if presented with the CA states as basis. The states are non-separable if described with Standard Model(SM) states as a basis. The SM states obey local diff equs when expressed in terms of CA states, but the solutions of these equations are non-local. – G. 't Hooft Aug 16 '12 at 20:14
  • Do you have a simple CA model that exhibits superselection? – Columbia Aug 17 '12 at 03:54
  • 4
    @LubošMotl: First, the Gross/Erler paper doesn't mean string theory is "local", it is "lightcone-local" which is not exactly the same thing. Light-cone locality of string interaction was first shown by Mandelstam in the 1970s, essentially for the purposes of giving a causal initial value formulation, and this was the reason how string field theory was formulated in the first place. When the light cone coordinates are obscure, like when you have a gravitational horizon, the arguments don't translate, which is why string field theory is not so fundamental, it is not fully non-perturbative. – Ron Maimon Aug 17 '12 at 03:56
  • 2
    @LubošMotl: Secondly, Lorentz invariance and locality are separate concepts, and this is true even with theories which admit an initial value formulation. You can use an action $\int \phi(x)\phi(y) G(x-y)$ where G(x-y) is only nonzero in the forward light cone, and make a phase-space that includes all the past history of the classical field $\phi$. These stupid trick actions are not good actions, but they are ruled out because they aren't correctly quantum, not because they are nonlocal and Lorentz invariant. AdS/CFT construction in string theory (or Matrix theory) is clearly bulk nonlocal. – Ron Maimon Aug 17 '12 at 04:00
  • 3
    @LubošMotl: Regarding reality, I tend to think quantum mechanics is exact, because I accept no-collapse interpretations as consistent philosophically within themselves. But I do not accept that realism is dead, because Bohm is real and Bohm is causal, and Bohm is equivalent to QM. It's not good as a theory, but it's good as a counterexample to overly strong claims. – Ron Maimon Aug 17 '12 at 04:01
  • 2
    @G.'tHooft: I am confused--- did you find something wrong with the construction I suggested in the answer? It's not what you are doing, but it is so natural to me for this purpose that I thought for a long time it was what you were doing, and I got confused when I couldn't map what you were doing to this thing. I am pretty sure that it really does work to embed an orthogonal Hilbert space in a probability space, and restricting to unitarity is rather easy afterward. Regarding non-separability, do you mean a nondenumerable infinity of automaton states? This is an unnecessary retrenchment. – Ron Maimon Aug 17 '12 at 04:07
  • 1
    As an experimentalist I am lost with the universe arguments. It seems to me to be all about mathematical intuition and proofs or no-go theorems. Why cannot this famous CA start with a proton, let's say, or even simpler, an electron? The mind boggles if one needs the whole universe to study an electron scattering on a proton. – anna v Aug 17 '12 at 07:11
  • 3
    @annav: One needs to study a whole classical universe (a lot more data than you would normally associate to two particles) to describe even one quantum proton scattering off one quantum electron, because you can't make quantum mechanics emerge in a small way--- the computation in multi particle QM is just too big. The different parts of the classical universe are sniffing different quantum options, to reproduce QM in the small. This is so nonlocal it is only barely conceivable, and only because of the holographic principle-- we know the proton and the electron are smeared out anyway. – Ron Maimon Aug 17 '12 at 09:18
  • Sorry for the lag, but I didn't answer the question about $\delta\rho$ because I hadn't constructed any nontrivial quantum systems (I only fixed the infinitesimal problem on wednesday). I think I can reproduce Bohm. The intended interpretation is that $\delta\rho$ is the wavefunction. When I reproduce Bohm from a limit of this, I'll know everything. It looks like a closure of Bohm, so that the wavefunction is a function of the positions of the instantiations (there are many). I will fill in the details when I finish working them out (I did some this weekend, but I never have enough time). – Ron Maimon Aug 20 '12 at 08:18
  • It's not stuff that "everyone already knows". As evidenced by all these threads, you or Gerard 't Hooft, aside from about 95% of users who visit these threads, obviously do not know these things. Maybe the two of you really don't realize that you are denying QM but that's because you completely misunderstand it. In reality, you are denying every single postulate of quantum mechanics. That's true about the assumptions, intermediate results, as well as applications of QM. For example, it is logically impossible for a "theory reproducing QM" to imply that quantum computers don't work. – Luboš Motl Aug 20 '12 at 09:37
  • 4
    @ Motl: Apparently you axiomatize QM by basing it on "postulates". Clearly you won't understand my theory if you are not prepared to make any amendments, since your postulates are imprecise. You said that "experiment has shown that one can superimpose quantum states". Not true, you can only do this with the templates you are using, but not in the real world. When you consider superposition of two states, you ignore the environment of these two states, which are never the same, hence always orthogonal. – G. 't Hooft Aug 20 '12 at 15:30
  • 3
    @ Motl: In ordinary applications of QM you can ignore this, since the templates are good enough, but not in questions of the interpretation of QM. – G. 't Hooft Aug 20 '12 at 15:32
  • @'t Hooft, I am very curious why it is that you abhor Bohmian mechanics. It seems to me that it has already done what you are trying to do -- provide a perfectly consistent realist alternative to quantum mechanics. – user7348 Aug 28 '12 at 16:16
  • 1
    @user7348, Do you mean those pilot waves? I think these are ugly concoctions, but I do agree that they show that there is a possibility in principle. I think that elegance and plausibility will be important assets of a healthy theory. I can't make working field theories using Bohm. I am talking about much more fundamental principles. And most importantly: my theory IS quantum mechanics, not an "alternative". – G. 't Hooft Aug 29 '12 at 13:20
  • @ 't Hooft: Why can't you make a working field theory with Bohm? It is because it is very difficult to make the theory compatible with relativity. If you want to talk about fundamental principles, I think you have missed the whole fundamental principle told by Bell's Theorem -- that nature is non-local. Not sure if you've ever read EPR, but Einstein was arguing that quantum mechanics had to be incomplete to avoid what he famously called "spooky action at a distance". Bell showed that Einstein's idea didn't work, he said, "it's a shame it doesn't work". What does that leave us with? Spooky! – user7348 Aug 29 '12 at 14:04
  • @user7348 while I never appeal to authority, I'd just like to point out that G. 't Hooft is a Nobel Prize winner. If you want to check his credentials, just go to Wikipedia. Of course he is aware of Bell's theorem, EPR and what the theorem shows UNDER CERTAIN ASSUMPTIONS.

    't Hooft is challenging the assumption that you are "free" to measure X instead of Y given the exact same initial conditions. A well known loophole that Bell himself told people about quite a few times. He termed it "superdeterminism".

    – QuestionAnswers Aug 29 '12 at 15:50
  • @QuestionAnswers: And I am very well aware of the superdeterminism loophole; Bell proposed that as a possibility, but he did that as an intellectual curiosity exploring all possible avenues. He didn't really believe it. I understand 't Hooft is one of the most important physicists of the last half century, but I do feel he is "burying his head in the sand", as Bell said of all but Einstein. Bell and his work are probably the most misunderstood in the history of physics. Nature is non-local, that's Bell's theorem; Einstein's program failed. – user7348 Aug 29 '12 at 17:01
  • 1
    @QuestionAnswers/'t Hooft: I think it is time to start speculating about the real problem: how can non-locality be made compatible with relativity? – user7348 Aug 29 '12 at 17:05
  • @user7348 prove it. How does the many-worlds interpretation incorporate non-locality? How exactly do 't Hooft's models break down when he does not accept the assumption made to construct Bell's proof? – QuestionAnswers Aug 29 '12 at 18:19
  • @QuestionAnswers: I can't prove it. But, many-worlds belongs in the crank file, and superdeterminism is a conspiracy. I would do anything to hear Einstein's reaction to Bell's theorem. – user7348 Aug 29 '12 at 18:25
  • 1
    @user7348 Explain how MWI is crank and why superdeterminism as proposed by 't Hooft is wrong; if not, please stop participating in discussions surrounding 't Hooft's theories. All you are really doing is saying "Bell's theorem is correct". – QuestionAnswers Aug 29 '12 at 19:46
  • @QuestionAnswers: Fair enough. I was never insulting 't Hooft. I just wanted to understand why he doesn't like non-locality. To me, the natural thing is to embrace it, not deny it. But, fair enough. – user7348 Aug 29 '12 at 19:59
  • Superdeterminism is obviously there, if you care to give it some thought. Now, when people talk of "conspiracy" they really mean that they don't understand the result. But you can understand it this way: the CA can be treated as a fully grown quantum system, as a QFT. This QFT leads to correlations that look like conspiracy. Hence this apparent conspiracy is there. Don't be afraid of spooks. Mathematically, there's nothing wrong with them, just a bit difficult. – G. 't Hooft Aug 30 '12 at 22:44
  • They even don't violate locality, since the QFT has commutators that always vanish outside the light cone. It could be that there is some rudimentary non-locality in the mapping, but I have not really encountered that yet. – G. 't Hooft Aug 30 '12 at 22:45
  • OK let me correct that last statement. Some pseudo-non-locality can enter in two ways: 1: the description of the vacuum state as a superposition of CA states. The vacuum makes discussion of Bell's inequalities very difficult in CA. 2: There are good reasons for imagining information loss to occur in the CA. You can still map that onto a quantum system (with full CPT invariance), but that mapping leads to holography and apparent non-locality. – G. 't Hooft Aug 30 '12 at 22:55
  • @ 't Hooft: I would really be interested to hear what you have to say on the following question, since I have the highest respect for your achievements. For just a moment, let's place your recent work aside, and discuss Bell's theorem as it could be discussed in the 1960s when Bell first made his discovery. If one takes quantum mechanics as it is, with no additional hidden parameters, no modifications at all, can we say that quantum mechanics ITSELF is non-local? I ask because that seems to be THE critique Einstein made of quantum mechanics. – user7348 Sep 01 '12 at 16:08
  • 14
    @'t Hooft: I would also like to thank you for participating in physics.stackexchange.com It has been fascinating having discussions with a true giant of physics. – user7348 Sep 01 '12 at 16:11
  • 2
    @user 7348: 1: if you don't care about hidden variables, quantum mechanics as it is, or more precisely quantum field theory, is entirely local. Locality means that if we have, in the Heisenberg notation, two field operators depending on space-time: $Op_1(x_1,t_1)$ and $Op_2(x_2,t_2)$, then they must commute if $(x_1,t_1)$ and $(x_2,t_2)$ are completely space-like separated. This holds for QFT and even (in spite of claims to the contrary) for string theory (if two points are spacelike separated in target space, they also are so in the world sheet - assuming we may ignore certain projections). – G. 't Hooft Sep 05 '12 at 09:37
  • For many physicists, this is all that matters, also in the 1960s. I presume Einstein did think of something like hidden variables. 2: WITH Hidden variables, Bell claims that the hidden variables are non-local, but what he really means is that the Ansatz equation that he starts with cannot be satisfied. My claim is that in a superdeterministic scenario that equation cannot hold, even if the evolution laws of a CA are local. – G. 't Hooft Sep 05 '12 at 09:42
  • 't Hooft, I don't want to ask a juvenile question, but in the absence of hidden variables, in an entanglement-type experiment, how can you explain how the spins are always correlated? There must be some communication going on; I think that was the point of EPR -- to show that quantum mechanics is incomplete by a reductio ad absurdum argument. But, if you accept Bell's theorem, it shows Einstein was wrong. So, I disagree that "if we don't care about hidden variables, quantum mechanics as it is is entirely local." – user7348 Sep 05 '12 at 12:53
  • 1
    @user7348 See my answer in http://physics.stackexchange.com/questions/34650/definitions-locality-vs-causality/34675#34675 . There is not some communication going on in entangled states, there are just previous correlations. – Diego Mazón Sep 06 '12 at 05:11
  • @user7348 What one has to require for a quantum theory to be local or causal is that observables commute at space-like distance, instead of fields. There are theories, or formulations of theories, in which the fields do not commute at space-like distances but they are perfectly local or causal since the observables do. I guess 't Hooft would agree. – Diego Mazón Sep 06 '12 at 05:17
  • @ drake: I don't know what theories you talk about. In my theory, obtained from a local mapping from a local CA, the only non-locality is over a small number of lattice sites. Further away, all commutators outside the light cone do vanish. – G. 't Hooft Sep 06 '12 at 16:38
  • @ user7348: I do not accept Bell's theorem that easily. It is difficult to see exactly what happens, but it is crucial that all CA observables at all times commute. After my unitary mapping, therefore, only observables that are orthogonal to each other are uniquely defined in terms of CA variables. Now, since the mapping is complicated, these observables are different every time you do an experiment. Therefore we can have counterfactual observables that do not commute. – G. 't Hooft Sep 06 '12 at 16:51
  • If you repeat an experiment, or do it many times, you therefore can't modify one observable without affecting another one, somewhere, somehow. It is difficult to understand how this happens; you have to remember that the vacuum surrounding us is a very complicated entangled state. All this is the real reason why Bell may be violated in the CA. So I ignore Bell, and in that case qm (rather: qft) is local. – G. 't Hooft Sep 06 '12 at 16:51
  • @'t Hooft, I'm sorry but your answer seems off topic. I was not asking about your CA theory. I said that I have a hard time understanding how correlations can be explained in standard quantum mechanics without some "spooky" communication going on. I am in the situation where even though I am willing to give up realism, I still think there is a problem with relativity. So my question to you is: Suppose you are like me and you can give up realism, and accept quantum mechanics completely. Do we have spooky actions at a distance? Call this a silly question, but it is Einstein's EPR question. – user7348 Sep 07 '12 at 20:03
  • 1
    @user7348: No, quantum field theory as it stands has causality built in; for that, it is sufficient to demand that all commutators vanish outside the light cone. This guarantees that no signal will ever go faster than light. So qft obeys relativity and has ordinary qm particles in its non-relativistic limit. Everything is fine, no problem with relativity, until you try to understand what the ontology is. You have to remember that spacelike correlations are fine if you can explain them in terms of initial states in the past. – G. 't Hooft Sep 07 '12 at 23:35
  • 1
    Think of qft as a large set of quantum harmonic oscillators, each oscillating at isolated points in space. Then assume that each oscillator shows interactions only with its direct neighbors. In qft, these are quantum interactions. To most theorists, this looks sufficiently local, no spooky signals. – G. 't Hooft Sep 07 '12 at 23:36
  • 1
    @ 't Hooft: Thanks 't Hooft! Thanks for your answers, and thanks for participating in physics.stackexchange! – user7348 Sep 07 '12 at 23:49
  • @ 't Hooft: I actually have one other question, one very suited for a master of QFT. You mentioned that Bohmian Mechanics can't be used to construct field theories. I've always found this surprising given that it reproduces quantum mechanics. So, why can't a Bohmian field theory work? – user7348 Sep 10 '12 at 18:45
  • 3
    In principle, yes, one should be able to construct a Bohmian field theory, but I think it would be inelegant. To my taste, Bohmian mechanics adds far too many "unobservable observables" in the form of pilot waves. This would be awful for field theories, where the pilot wave would be a field functional, or a function of infinitely many particle positions. – G. 't Hooft Sep 12 '12 at 07:55
  • "On the other hand, our world seems to be extremely logical and natural." Well, there's the mistake you're making. Nature has no obligation to conform to human logic. There is no reason to expect that the most accurate models should be the most "logical" models. Our human logic is conditioned from particular scales of size, energy, speed, mass and so on. When we venture outside the realm where our minds developed, of course we will find things not to make intuitive sense to us. – electronpusher Apr 19 '22 at 12:41
  • @electronpusher It isn't apparent to me that G. 't Hooft is using a premise that Nature has an obligation to conform to human logic. His wording is consistent with having made an empirical induction, which of course is subject to the problem of induction. I don't know what he meant by "natural". – Galen Apr 19 '22 at 15:44

6 Answers

87

I can tell you why I don't believe in it. I think my reasons are different from most physicists' reasons, however.

Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following.

  • There is a classical polynomial-time algorithm for factoring and other problems which can be solved on a quantum computer.
  • The deterministic underpinnings of quantum mechanics require $2^n$ resources for a system of size $O(n)$.
  • Quantum computation doesn't actually work in practice.

None of these seem at all likely to me. For the first, it is quite conceivable that there is a polynomial-time algorithm for factoring, but quantum computation can solve lots of similar periodicity problems, and you can argue that there can't be a single algorithm that solves all of them on a classical computer, so you would have to have different classical algorithms for each classical problem that a quantum computer can solve by period finding.

For the second, deterministic underpinnings of quantum mechanics that require $2^n$ resources for a system of size $O(n)$ are really unsatisfactory (but maybe quite possible ... after all, the theory that the universe is a simulation on a classical computer falls in this class of theories, and while truly unsatisfactory, can't be ruled out by this argument).
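A back-of-the-envelope illustration of the second option (numbers purely illustrative, counting only 16-byte complex amplitudes with no overhead): storing the full state vector of an $n$-qubit system classically takes $2^n$ amplitudes, which outruns any plausible classical resource very quickly.

```python
# Memory needed to store a full n-qubit state vector as complex doubles
# (16 bytes per amplitude); purely illustrative of the 2^n scaling.
for n in (10, 30, 50, 100):
    amplitudes = 2.0 ** n
    gigabytes = 16 * amplitudes / 1e9
    print(f"n = {n:3d}:  2^n = {amplitudes:.3e} amplitudes  ~ {gigabytes:.3e} GB")
```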

For the third, I haven't seen any reasonable way in which you could make quantum computation impossible while still maintaining consistency with current experimental results.

Peter Shor
  • 11,274
  • 1
    I admit my strength is computer science and not physics these days, but wouldn't any deterministic QM theory imply that the running time of an algorithm on a quantum computer is a fiction too? Remember that the QC had the entire history of the universe to prepare its internal state so that it could solve programmed problems in sub-classical time. The QC is basically running a classical, partially evaluated version of your computation where much of the work was already done before you pressed "run". – naasking Jan 15 '17 at 06:27
  • 7
    @naasking: If you need to run a $2^n$ algorithm for moderately large $n$, having the entire history of the universe isn't that much help. – Peter Shor Jan 15 '17 at 15:18
  • Can you explain why determinism => quantum computing doesn't work? Bohmian Mechanics is experimentally equivalent to traditional Quantum Mechanics, so how can it be that a deterministic theory rules out quantum computing (except for minor caveats which do not affect the ability to make N entangled 2-level systems or logic gates)? – doublefelix Jan 02 '20 at 03:02
  • @doublefelix; Bohmian Mechanics is non-local, meaning that as far as I can tell, a system of size $O(n)$ doesn't have any fixed size description. This is the second possibility I list in my answer. So my answer allows for Bohmian mechanics. And if you look at the question, 't Hooft says "So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong?" So the question excludes Bohmian Mechanics because Bohmian Mechanics is non-local. – Peter Shor Jan 02 '20 at 03:41
  • "then a deterministic underpinning for quantum mechanics would seem to imply one of the following." One way to get out of the argument made here is stating that quantum computers don't give a definite answer to a question. The implications are not right. A quantum algorithm only gives a random answer that is very probably the correct answer but it doesn't need to be (repeating the algorithm might give a different answer). So you can not equate quantum algorithms to being equivalent to classical algorithms. – Sextus Empiricus Oct 10 '22 at 12:34
  • @SextusEmpiricus: That doesn't help. The best algorithms that we have for factoring on classical computers are probabilistic, and they still can't factor in polynomial time, the way quantum computers can in theory. – Peter Shor Oct 10 '22 at 16:19
  • @PeterShor but the probabilistic character means that one cannot just compare the different algorithms one-to-one. How do we compare a quantum algorithm with a classical algorithm? The quantum algorithm makes errors; it is never going to give a hundred percent certain answer. That is a pressure relief vent that allows a quantum algorithm to perform so well (based on some sort of cost function it performs well, but that doesn't make it perfect) and makes it incomparable to deterministic classical algorithms with definite answers. Also, why should classical algorithms perform exactly the same? – Sextus Empiricus Oct 10 '22 at 19:11
41

This could have been a comment, but as it actually answers the question asked in the title, I'll post it as such:

As far as I can tell there's no rational reason to dismiss these models out of hand - it's just that quantum mechanics (QM) has set the bar awfully high: So far, there's no experimental evidence that QM is wrong, and no one has come up with a viable alternative.

Ultimately, your theory needs to reproduce all experimentally verified predictions of QM (or rather, your theory may only deviate within the experimental precision). However, there is of course no need to reproduce arbitrary predictions - in fact, if you did, you'd end up with a re-formulation - i.e. a new interpretation - of ordinary QM. If your model tells us large-scale quantum computation is impossible, then it's up to the experimentalists to prove you wrong.

Any objections beyond that are just psychology at work: It takes quite some effort for most people to convince themselves that QM is a valid description of the world we live in, and once such a belief is ingrained, it easily becomes dogma.

Christoph
  • 13,545
9

Foundational discussions are indeed somewhat like discussions about religious convictions, as one cannot prove or disprove assumptions and approaches at the foundational level.

Moreover, it is in the nature of discussions on the internet that one is likely to get responses mainly from those who either disagree strongly (the case here) or who can add something constructive (difficult to do in very recent research). I think this fully explains the responses that you get.

I myself read superficially through one of your articles on this and found it not promising enough to spend more time on the technical issues.

However, I agree that both many-worlds and pilot waves are unacceptable physical explanations of quantum physics, and I am working on an alternative interpretation.

In my view, particle nonlocality is explained by denying particles any ontological existence. Existent are quantum fields, and on the quantum field level, everything is local. Nonlocal features appear only when one imposes a particle interpretation on the fields, which, while valid under the usual assumptions of geometric optics, fails drastically at higher resolution. Thus nothing needs to be explained in the region of failure. Just as the local Maxwell equations for a classical electromagnetic field explain single-photon nonlocality (double-slit experiments), and the stochastic Maxwell equations explain everything about single photons (see http://arnold-neumaier.at/ms/optslides.pdf), so local QFT explains general particle nonlocality.

My thermal interpretation of quantum physics (see http://arnold-neumaier.at/physfaq/therm) gives a view of physics consistent with actual experimental practice and without any of the strangeness introduced by the usual interpretations. I believe this interpretation to be satisfactory in all respects, though it requires more time and effort to analyse the standard conundrums along these lines, with a clear statistical mechanics derivation to support my so far mainly qualitative arguments.

In presenting my foundational views in online discussions, I had similar difficulties as you; see, e.g., the PhysicsForums thread ''What does the probabilistic interpretation of QM claim?'' http://www.physicsforums.com/showthread.php?t=480072

  • 3
    The problem is that your interpretation is standard QM, and whether you take fields or particles as the basic variables, you still have the exponential explosion. The field wavefunction is just as horrendous as particle wavefunction. The model t'Hooft would like (and I would like too), would not require exponential resources for a large grid. You can't just rejigger the philosophy to make this happen, by changing the "ontic" status of objects in the theory (honestly, a positivist doesn't even care about the ontic status). – Ron Maimon Aug 18 '12 at 02:32
  • @RonMaimon: Interpretations of quantum physics have nothing per se to do with computational complexity, but with making common sense out of the standard quantum phenomena. – Arnold Neumaier Aug 26 '12 at 09:54
  • 4
    Yes, ordinarily this is true, and this is why this question is not ordinary. The point of 't Hooft's line of investigation is to formulate a new theory (not quantum mechanics) which is nevertheless approximately quantum, but which does not suffer from the exponential explosion of computational complexity that plagues ordinary QM, and therefore is experimentally different from ordinary QM for the case of a quantum computer. The goal of such a theory is to not be experimentally different in cases where you would disagree with present experiments. This is a tough thing to do. – Ron Maimon Aug 26 '12 at 10:00
  • 1
    @RonMaimon: While this may be the case in his model, it is not his expressed goal. Instead, he wrote above: ''I can't help being disgusted by the "many world" interpretation, or the Bohm - deBroglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to try to get some ideas, I construct some models with various degrees of sophistication.'' – Arnold Neumaier Aug 26 '12 at 10:04
  • 1
    Yes, I know, the stated motivations invite a lot of philosophical answers, but this is really not the motivation. I know this from understanding the holographic principle, reading the papers, and from talking to him briefly once ten years ago. The motivation is to make small hidden variables, meaning a number of bits which makes classical computation of reasonable size given holographic constraints, and which reproduces QM approximately. It is not to reproduce QM exactly. But he makes a mistake in his papers, and gets too good a QM, so he thinks it reproduces it locally and almost exactly. – Ron Maimon Aug 26 '12 at 10:09
  • 1
    Saying that fields are the fundamental concept, and that fields behave locally, doesn't get around the fact that you need either nonlocality or exponential resources to allow for quantum computation. So your program to find a local field-based explanation of QFT is probably destined to fail. – Peter Shor Jan 17 '20 at 11:32
  • @PeterShor: The thermal interpretation of quantum physics does indeed have a partially nonlocal character, in spite of the locality of the operator field equations, since the (potentially measurable) dynamical entities that satisfy a closed dynamics are the n-point correlation functions, which are nonlocal for $n>1$. – Arnold Neumaier Jan 17 '20 at 14:45
6

There are two questions here: Why criticize your models? And are there better ideas? I will try to answer the second question in a separate answer. Here I only give some comments of a general nature to address the first question.

I personally agree with you, and I think most people who care about this stuff do too, that it is disconcerting to have a theory in which the information produced by observations is not contained in the theory itself, but is produced out of thin air by an act of measurement. The natural idea is that when we see a bit of information produced through an act of observation, then the value of this bit was somehow contained in the complete description of nature independent of the act of observation. This was Einstein's reality principle, and I agree that it is preferable for a theory to obey it.

When a theory doesn't obey the reality principle, one has to note that macroscopic reality does obey it, and find the bits in the macroscopic world by a philosophically contorted roundabout exercise in mysticism. But since physics is empirical, and positivism is fruitful, I take the point of view that any framework that explains the results of observations must ultimately be philosophically ok, even if it requires contortions, and even if it is not correct! So Newton's mechanics, even though it is wrong, is not empirically disproved by observations of human beings alone, so it cannot be philosophically incompatible with free will. Similarly, quantum mechanics might be wrong, but we have no empirical data that shows it is wrong, so it should be philosophically consistent to say QM is all there is. This means QM should describe observers too, and if there is no mathematical contradiction with this view, there should not be a philosophical contradiction either, even if there is a contradiction with experiment. This is the many-worlds philosophy, and it's the self-consistent answer if quantum mechanics is correct. It might be annoying, but I don't think it is too annoying--- one should just learn to live with many-worlds as a fine philosophical position.

But it is wrong to just say "many-worlds" at this point, because the quantum description has not been tested in the realm where the many worlds have a real logical-positivist manifestation--- most obviously when factoring enormously large numbers using a quantum computer. Until we do this, it is definitely conceivable that nature is only very nearly quantum for small systems of a few particles, in the cases where we have already tested the theory, and is just not quantum for highly entangled many-particle systems.

Even if the world turns out to be really quantum, and a quantum computer factors numbers all the time, finding a deterministic substructure is useful for giving a computationally tractable small truncation of quantum mechanics in cases which are not a quantum computer, and it is possible that this truncation can be useful for quantum simulations. This is so needed that I personally think finding a substructure to quantum mechanics is a centrally important problem, regardless of whether it turns out to be right. For this reason, I devoted much time to understanding your approach.

The problem with your construction is that it works too well: it is too easy to transform a quantum system into a beable basis, so that the global wavefunction evolves deterministically under the global Hamiltonian. Since you introduce the Hilbert space early and use it to do the transformation of basis to the internal states of the automaton, there is no obvious barrier to transforming a quantum computer into a beable basis, nor is there any barrier to violating Bell's inequality locally. These do not suggest that the no-go theorems are flawed; rather, they suggest that the transformation to a beable basis with a permutation Hamiltonian does not produce a true classical system.
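To make the phrase "a beable basis with a permutation Hamiltonian" concrete, here is a minimal sketch (my own illustration using numpy/scipy, not taken from the papers): a deterministic automaton that cycles through a few ontic states is represented by a permutation matrix, which, being unitary, can always be written as exp(-iH) for some Hermitian H, so its Schroedinger evolution is exactly deterministic on the basis states.

```python
import numpy as np
from scipy.linalg import logm, expm

# Sketch: a deterministic automaton cycling through N ontic ("beable") states.
N = 5
P = np.roll(np.eye(N), 1, axis=0)      # one time step: state k -> k+1 (mod N)

# P is unitary, so P = exp(-iH) for some Hermitian H (one branch of the matrix
# logarithm; the freedom in the branch is the freedom in assigning energies
# to the periodic mode).
H = 1j * logm(P)
assert np.allclose(H, H.conj().T)      # H is Hermitian
assert np.allclose(expm(-1j * H), P)   # one Schroedinger step reproduces the permutation

# A beable (basis) state is mapped to another beable state, with no spreading:
psi = np.zeros(N); psi[0] = 1.0
print(np.round(expm(-1j * H) @ psi, 10))
```

Any superposition of beable states also evolves unitarily under the same H, which is why nothing in this construction by itself forbids preparing the highly entangled states a quantum computer needs.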

The precise way in which I believe this system fails to be classical is in state preparation on the interior. The process of state preparation involves a measurement, entangling some interior subsystem with a macroscopic subsystem, and then a reduction of the macroscopic system according to Born's rule, leaving a pure quantum state of the interior subsystem. In your paper on Born's rule, you suggested how reduction should happen in a CA system, but your precise models don't really respect this intuition, in that the measurement of intermediate states always produces one of the eigenstates of the observable in the interior, no matter how complicated the observable and how entangled its eigenstates. This is what allows you to reproduce quantum mechanics on the interior subsystems, but I am fairly certain it does not keep the state unsuperposed in the beable basis. Because these internal reductions don't respect the probability structure, you are really doing quantum mechanics, not a CA, and this is the only reason you have such an easy time sidestepping the no-gos.

The fact that you sidestep the no-gos with no difficulty suggests strongly that your construction is leaving the space of allowed classical probability distributions on the CA somehow. The only place where this can happen is during internal state-preparation, during measurements of internal operators. This is how you prepare Bell states or quantum computers, after all. These interior operations must be producing states (after projection) which cannot be interpreted as classical probability states of the automaton, although the Hamiltonian evolution never does this. This is not a proof, but I would bet lots of money (if I had any). I asked for a proof here: In 't Hooft beable models, do measurements keep states classical?

This is part I of the answer, I post it separately, so that people who agree with this part don't have to upvote the second part, which is devoted to a different approach to getting quantum mechanics out of automata, to answer the second question.

4

This answer tries to reproduce quantum mechanics from classical automata with a probabilistically unknown state.

Probability distributions on automaton states

Start with a classical CA and a probability distribution on the CA. To keep things general, I allow the CA to have some non-deterministic evolution, but only classical stochastic probability, no quantum evolution. This is not necessary: you can always put all the probability in the initial conditions, with no stochasticity at intermediate times; it is just an option.

The first point about these stochastic systems is detailed here: Consequences of the new theorem in QM? (in the section on duck feet). If the flow of probability is always between states whose probability is only infinitesimally different from the stationary distribution, then the classical flow conserves entropy and is reversible, even if it is probabilistic and diffusive. This is the central motivation for the construction, and one should review how the particle diffusing in the heat-exchanger diffuser bounces back and forth reversibly from room to room, in a linear way described by an operator with mostly complex eigenvalues, even though it is at all times only diffusing between different allowed regions.

Consider a classical probability distribution on a CA, $\rho(B)$, where $B$ is the state of all the bits comprising the automaton. Then

$$ I = - \sum_B \rho(B) \log(\rho(B)) $$

is the information contained in fully knowing the automaton state, above that provided by the distribution. If you make a small perturbation, changing $\rho$ to $\rho+\delta\rho$, you find

$$ I = - \sum_B (\rho(B)+\delta\rho(B)) \log(\rho(B) + \delta\rho(B)) $$

When $\rho$ is uniform, the first order correction vanishes since the sum of $\delta\rho$ is zero, and the second order correction gives a quadratic metric structure on $\delta\rho$.

$$ \delta I = -\frac{\Omega}{2} \sum_B \delta\rho(B)^2 + O(\delta\rho^3), $$

where $\Omega$ is the number of automaton configurations.

This is what I identify as a pre-quantum structure on the space of perturbations to the uniform distribution. The reason it is so symmetrical (like a sphere, not like a simplex) is because the perturbation is small. The reversibility is required by conservation of entropy, and conservation of entropy requires that all the transformations on $\delta\rho$ are orthogonal.
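A quick numerical check of this quadratic structure (my own sketch, assuming numpy; here $\Omega$ is just the number of configurations of a toy automaton):

```python
import numpy as np

# Entropy of a slightly perturbed uniform distribution on Omega configurations:
# the first-order term vanishes, and the deficit below log(Omega) is quadratic
# in the perturbation, with coefficient Omega/2.
rng = np.random.default_rng(0)
Omega = 64
rho0 = np.full(Omega, 1.0 / Omega)

drho = rng.normal(size=Omega)
drho -= drho.mean()                 # enforce sum(drho) = 0 (normalization)
drho *= 1e-4                        # keep the perturbation small

def entropy(p):
    return -np.sum(p * np.log(p))

exact_deficit = entropy(rho0) - entropy(rho0 + drho)
quadratic = 0.5 * Omega * np.sum(drho**2)
print(exact_deficit, quadratic)     # agree to leading order in drho
```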

The picture to zeroth order is that nearly every state is equally likely, but some states are slightly more likely than others, and the information revealed by experiments only produces a slight bias for some states rather than others. These slight biases are then more symmetric than the underlying space of probabilities on automaton states, because these distributions never deviate enough from uniformity to see the corners of the probability-space simplex. The corners are the states where the automaton bits are known for certain, and if you are always far from these, you can find a symmetric and reversible probabilistic dynamics.

Here is the central problem with this approach--- it is impossible for an information-carrying perturbation $\delta \rho$ to be everywhere small. The reason is that an everywhere-small $\delta \rho$ necessarily produces a state which is almost indistinguishable from the uniform state, and which therefore corresponds to your having learned much less than even one bit of information. For example, if you have an N-bit automaton and you make a distribution where the probability of every bit value is between ${1\over 2} - \epsilon$ and ${1\over 2} + \epsilon$, you get an information content bounded above by a small multiple of $\epsilon$ bits.

The reason is that learning even one bit of information about an automaton state roughly cuts down the number of states you can occupy by a factor of 2. This means that the true probability distribution must be significantly small on at least half the configurations, and it cannot be a small perturbation. This means that the information expansion breaks down, and this is where I was stuck for a long time.
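To make the contrast concrete, here is a small numerical illustration (my own sketch, assuming numpy): a distribution encoding one full bit about the automaton state deviates from uniform by as much as the uniform weight itself, while an everywhere-small perturbation carries far less than one bit.

```python
import numpy as np

Omega = 64
uniform = np.full(Omega, 1.0 / Omega)

def info_bits(p):                       # information relative to the uniform distribution
    nz = p > 0
    return (np.log(Omega) + np.sum(p[nz] * np.log(p[nz]))) / np.log(2)

# (a) "one bit learned": the support is cut down to half the configurations.
half = np.zeros(Omega)
half[: Omega // 2] = 2.0 / Omega
print(info_bits(half))                  # exactly 1.0 bit, but |drho| is as big as rho itself

# (b) an everywhere-small perturbation, |drho| = eps * rho on every configuration.
eps = 0.01
drho = eps * uniform * np.where(np.arange(Omega) < Omega // 2, 1.0, -1.0)
print(info_bits(uniform + drho))        # roughly eps^2 / (2 ln 2) bits, far below one bit
```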

Locally small perturbations

The reason the notion of a "small perturbation" fails is that a small perturbation, as in the duck-feet example, is not globally small; it only has the property that the ratio of the probabilities of two nearby states is close to one. If the states are made by independently varying lots of bits, there are many states with the same ratio of probability.

The fix might as well be the following easy trick: just raise everything to the M-th power. If you have a system with states indexed by $i$, an integer in the range $1,2,\ldots,N$, and a perturbation

$$(\rho_i + \delta\rho_i) $$

you can take the M-th tensor power of $\rho + \delta\rho$ to produce a product distribution on the tensor space with M indices $i_1,i_2,\ldots,i_M$. This product distribution is defined by the condition that changing any one index from one value to another produces the same ratio change in probability.

Now it is allowed for $\delta\rho$ to be small even when the information in $\delta\rho$ is not, because the M-th power isn't small at all. In fact, in this system, because it's a tensor product, if you know that the information content of $\rho+\delta\rho$ is I bits overall, then you learn that

$$ M \sum_B \delta\rho^2 = 1 $$

In other words, the finite-information perturbations to the stationary distribution on a system of M copies form a (real, not complex) Hilbert space, ever more perfectly as M goes to infinity. If the dynamics is duck-feet, meaning that the entropy is conserved for the small perturbation, then the time evolution of $\delta\rho$ is necessarily an orthogonal transformation, no matter what the underlying stochastic or deterministic evolution law is.
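A rough numerical illustration of the scaling (my own sketch, assuming numpy): the information in the M-fold product distribution is M times the information per copy, so a fixed total information can be carried by a per-copy perturbation that shrinks like $1/\sqrt{M}$.

```python
import numpy as np

N = 8
rho = np.full(N, 1.0 / N)

def info_nats(p):                        # information relative to uniform, in nats
    return np.log(len(p)) + np.sum(p * np.log(p))

for M in [1, 10, 100, 1000]:
    drho = np.zeros(N)
    drho[0], drho[1] = 0.5, -0.5
    drho /= N * np.sqrt(M)               # per-copy perturbation ~ 1/sqrt(M)
    per_copy = info_nats(rho + drho)
    total = M * per_copy                  # information adds over independent copies
    print(M, drho[0], total)              # drho -> 0 while the total stays roughly constant
```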

The basic idea is that you can make a quantum mechanics emerge from stochastic evolution of systems with many identical copies, under the condition that the copies interact symmetrically with each other, so that you don't know which copy is which.

To see how the inner product comes out, consider the mutual information, which tells you how independent two different distributions are. To lowest order, this is found by taking the information in the combined perturbation $\delta\rho_1 + \delta\rho_2$ and subtracting the information in $\delta\rho_1$ and $\delta\rho_2$ separately. Since the information is a squared norm, you find

$$ I_{12} \propto ||\delta\rho_1 + \delta\rho_2 ||^2 - || \delta\rho_1 ||^2 - ||\delta\rho_2||^2 = 2\langle\delta\rho_1 , \delta\rho_2 \rangle $$

So if you have two distributions, they share states to the degree that their inner product is nonzero.
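A numerical check of this overlap formula (my own sketch, assuming numpy; the factor $\Omega$ below comes from the $\Omega/2$ in the quadratic metric used earlier):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega = 64
rho0 = np.full(Omega, 1.0 / Omega)

def deficit(drho):                      # information relative to uniform, in nats
    p = rho0 + drho
    return np.log(Omega) + np.sum(p * np.log(p))

d1 = rng.normal(size=Omega); d1 -= d1.mean(); d1 *= 1e-4
d2 = rng.normal(size=Omega); d2 -= d2.mean(); d2 *= 1e-4

# Cross term of the information = (up to the metric factor) the inner product.
cross = deficit(d1 + d2) - deficit(d1) - deficit(d2)
print(cross, Omega * np.dot(d1, d2))    # agree to leading order
```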

  • Are you familiar with this paper: http://arxiv.org/pdf/1111.6597.pdf and if so, is it applicable to your kind of construction? – user1247 Aug 19 '12 at 19:43
  • @user1247: That constraint is relatively weak, in that it assumes distant systems are described by independent ensembles (this is again locality creeping in). It was discussed here: http://physics.stackexchange.com/questions/17170/consequences-of-the-new-theorem-in-qm . I read it and understood the argument, and found it interesting, but it doesn't apply to this type of thing, as you can explicitly see by constructing entangled states in the above: they always share the statistical ensemble, no matter how far apart they are; their description is not the concatenation of each individual one. – Ron Maimon Aug 19 '12 at 20:30
-1

There's no doubt it's possible to reproduce quantum integrable models rather efficiently and simply using classical systems. And of all integrable systems, harmonic oscillators are among the simplest. The real challenge is to reproduce quantum nonintegrable systems. Can you reproduce quantum chaos? Can you reproduce quantum nonintegrable spin models on a 1d spatial lattice? Trying perturbation theory around an integrable model runs into the problem that the number of Feynman diagrams grows exponentially with the number of loops.

  • It's just the start; in the future he may be able to. – Asphir Dom Aug 18 '12 at 10:35
  • There is doubt! It is not trivial to reproduce simple quantum mechanics from cellular automata, and I don't think 't Hooft does it (although I think he came very close to doing so, and intuitively he is spot on). – Ron Maimon Aug 19 '12 at 21:35
  • 3
    @Scary Monster: The claim is that any CA can be cast in the language of QM, although in most cases the QM models you get will be uninteresting; there will be states, and they will obey Schroedinger equations. Now many CA models are computationally universal, so certainly not integrable, and therefore the associated QM theory is also expected to be non-trivial. But of course the math is much harder; it's much more instructive to search for cases where you can do (perturbative) calculations. – G. 't Hooft Aug 20 '12 at 15:57