• bunchberry@lemmy.world
    1 day ago

    In any statistical theory, the statistical distribution, which is typically represented by a vector that is a superposition of basis states, evolves deterministically. That is just a feature of statistics generally. But no one in their right mind would interpret the deterministic evolution of the statistical state as a physical object deterministically evolving in the real world. Yet when it comes to QM, people insist we must change how we interpret statistics, and nobody can give a good argument as to why.
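
    To put that in symbols (a generic sketch of any Markov-style statistical theory, nothing quantum-specific): the distribution is literally a superposition of basis states, and it updates by a fixed deterministic rule, even though any individual sample jumps around randomly.

    ```latex
    p_t = \sum_i p_i(t)\, e_i , \qquad p_{t+1} = S\, p_t , \qquad S_{ij} \ge 0 , \quad \sum_i S_{ij} = 1
    ```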

    We only “don’t fully understand where the probabilistic measurement happens” if you deny it is probabilistic to begin with. If you just start with the assumption that it is a statistical theory, then there is no issue. You just interpret it like you interpret any old statistical theory. There are no invisible “probability waves.” The quantum state is an epistemic state, based on the observer’s knowledge, their “best guess,” about a system that is in a definite state in the real world, one they cannot know because it evolves randomly. Their measurement of that state just reveals what was already there. No “collapse” happens.
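
    Here is a toy illustration of that reading (my own analogy, not a derivation): on this view, “collapse” is just ordinary updating of an epistemic state.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    p = np.array([0.5, 0.5])          # my best guess about a hidden bit
    actual = rng.choice([0, 1], p=p)  # the bit has a definite value all along

    # "Measurement": I look, and my epistemic state jumps to certainty.
    p = np.eye(2)[actual]
    print(actual, p)  # nothing physical collapsed; only my knowledge changed
    ```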

    The paradox where we “don’t know” what happens at measurement only arises if you deny this, i.e. if you insist that the probability distribution is somehow a physical object. If you do, then, yes, we “don’t know” how this infinite-dimensional physical object, which doesn’t even exist anywhere in physical space, can possibly translate itself into the definite values we observe when we look. Neither Copenhagen nor Many Worlds has a coherent and logically consistent answer to that question.

    But there is no good reason to believe the claim to begin with that the statistical distribution is a physical feature of the world. The fact that the statistical distribution evolves deterministically is, again, a feature of statistics generally. This is also true of classical statistical models. The probability vector for a classical probabilistic computer is mathematically described as evolving deterministically throughout an algorithm, but no sane person takes that to mean that the bits in the computer’s memory don’t exist when you aren’t looking at them, or that an infinite-dimensional object that doesn’t exist anywhere in physical space is somehow evolving through the computer.
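
    A minimal sketch of that classical case (my own toy numbers): a probabilistic bit that flips with probability 0.3 per step. The bit itself behaves randomly, but its probability vector evolves by a perfectly deterministic rule.

    ```python
    import numpy as np

    # Column-stochastic transition matrix: flip the bit with probability 0.3.
    S = np.array([[0.7, 0.3],
                  [0.3, 0.7]])

    p = np.array([1.0, 0.0])  # the bit definitely starts at 0
    for step in range(3):
        p = S @ p             # the *distribution* updates deterministically
        print(step + 1, p)    # 1 [0.7 0.3], 2 [0.58 0.42], 3 [0.532 0.468]

    # No one concludes from this determinism that the bit is "smeared out"
    # inside the computer's memory when we aren't looking.
    ```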

    Indeed, the quantum state is entirely decomposable into a probability distribution. Complex numbers aren’t magic; they always just represent something with two degrees of freedom, so we can always decompose them into two real-valued terms and ask what those two degrees of freedom represent. If you decompose the quantum state into polar form, you find that one of the degrees of freedom is just a probability vector, the same as you’d see in classical statistics. The other is a phase vector.
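
    For example (the numbers are mine, but the decomposition is just the standard polar form):

    ```python
    import numpy as np

    psi = np.array([(1 + 1j) / 2, (1 - 1j) / 2])  # a normalized qubit state

    probs = np.abs(psi) ** 2  # degree of freedom 1: an ordinary probability vector
    phases = np.angle(psi)    # degree of freedom 2: a phase vector

    print(probs)   # [0.5 0.5] -- nonnegative, sums to 1, like any distribution
    print(phases)  # [ 0.785... -0.785...]

    # Reassemble psi_i = sqrt(p_i) * exp(1j * theta_i): recovers the state exactly.
    print(np.allclose(np.sqrt(probs) * np.exp(1j * phases), psi))  # True
    ```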

    The phase vector seems mysterious until you write down the time-evolution rules for the probability vector in quantum systems as well as for the phase vector. The rules, of course, take into account the previous values and the definition of the operator being applied to them. You then just have to recursively substitute the phase vector’s evolution rule into the probability vector’s. You then find that the phase vector disappears, because it decomposes into a function over the system’s history, i.e. a function of all the operators and probability vectors at all previous time steps going back to a division event. The phase is therefore just a sufficient statistic over the system’s history and not a physical object, as it can be defined entirely in terms of the system’s statistical history.
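
    You can see the role the phases play with a two-line check (my own construction, far weaker than the general argument): two systems with the *same* probability vector but different phases respond differently to the same operation. So the current distribution alone cannot predict the next one; the missing information is carried by the phases, i.e. by the history.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    psi_a = np.array([1,  1]) / np.sqrt(2)  # probabilities [0.5, 0.5]
    psi_b = np.array([1, -1]) / np.sqrt(2)  # same probabilities, different phases

    print(np.abs(H @ psi_a) ** 2)  # [1. 0.]
    print(np.abs(H @ psi_b) ** 2)  # [0. 1.]
    ```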

    That is to say, without modifying it in any way, quantum mechanics is mathematically equivalent to a statistical theory with history dependence. The Harvard physicist Jacob Barandes also wrote a proof of this fact that you can read here. The history dependence does make it behave in ways that are a bit counterintuitive, as it inherently implies a non-spatiotemporal aspect to how the statistics evolve, as well as interference effects due to interference in its history, but it is still just statistics all the same. You don’t need anything but the definition of the operators and the probability distributions to compute the evolution of a quantum circuit. A quantum state is not even necessary; it is just convenient.

    If you just accept that it is statistics and move on, there is no “measurement problem.” There would be no claim that the particles do not have definite states in the real world, only that we cannot know them, because our model is not a deterministic model but a statistical one. If we go measure a particle’s position and find it at a particular location, the explanation for why we find it there is just that that’s where it was before we went to measure it. There is only a “measurement problem” if you claim the particle was not there before you looked; then you have difficulty explaining how it got there when you looked.

    But no one has presented a compelling argument in the scientific literature that we should deny that it is there before we look. We cannot know what its value is before we look as its dynamics are (as far as we know) random, but that is a very different claim than saying it really isn’t there until we look. This idea that the particles aren’t there until we look has, in my view, been largely ruled out in the academic literature, and should be treated as an outdated view like believing in the Rutherford model of the atom. Yet, people still insist on clinging to it.

    They pretend that Copenhagen and Many Worlds are logically consistent by writing an enormous sea of papers upon papers upon papers, where it only seems “consistent” because it becomes so complicated that hardly anyone bothers to follow along anymore; but if you actually go through the arguments with a fine-tooth comb, you can always show them to be inconsistent and circular. There is only a vague aura of logical and mathematical consistency on the surface. The more you actually engage with the mathematics and read the academic literature on quantum foundations, the clearer it becomes how incoherent and contrived the attempts to make Copenhagen and Many Worlds consistent actually are, and how no one in the literature has actually achieved it, even though many falsely pretend they have.

    • ඞmir@lemmy.ml
      1 day ago

      I’m pretty sure this goes against the proven properties of entanglement (Bell tests) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.

      However, I don’t currently see how this explains computing with superpositions at all: if it’s just statistics, a superposition can never exist, so entanglement doesn’t exist, so quantum algorithms wouldn’t be possible; but we know they are.

      • bunchberry@lemmy.world
        1 day ago

        > I’m pretty sure this goes against the proven properties of entanglement (Bell tests) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.

        If you don’t know anything about the topic, then maybe you shouldn’t speak on it, especially when claiming you have debunked peer-reviewed papers from Harvard physicists like Jacob Barandes.

        > However, I don’t currently see how this explains computing with superpositions at all: if it’s just statistics, a superposition can never exist

        Superposition is a property of statistics. Even classical statistics commonly represents the system’s statistical state as a linear combination of basis states. That’s just what a probability distribution is. If you take any courses in statistics, you will superimpose things all the time. This is a mathematical property.
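
        A trivial example of what I mean (my own): any probability vector is already a linear combination of basis states.

        ```python
        import numpy as np

        e0 = np.array([1.0, 0.0])  # basis state "bit = 0"
        e1 = np.array([0.0, 1.0])  # basis state "bit = 1"

        p = 0.25 * e0 + 0.75 * e1  # a superposition in the linear-algebra sense
        print(p)                   # [0.25 0.75] -- just a probability distribution
        ```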

        > so entanglement doesn’t exist, so quantum algorithms wouldn’t be possible; but we know they are.

        Quantum advantage obviously comes from the phase of the quantum state. If you remove the phase from the quantum state, then all you are left with is a probability distribution, and there would be nothing to distinguish it from a classical statistical theory. But the phase is, again, a sufficient statistic over the system’s history. The quantum advantage comes from the fact that you are ultimately operating within a much larger information space, since each instruction in the computer is a function over the whole algorithm’s history back to the start of the quantum circuit, rather than just the current state of the computer’s memory at that moment.
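
        A rough way to see it (my own sketch): strip the phases out of a Hadamard circuit and you are left with an ordinary stochastic process, and the interference-driven advantage disappears.

        ```python
        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        M = np.abs(H) ** 2      # phases removed: a doubly stochastic matrix

        psi = np.array([1, 0])  # start in |0>
        p = np.array([1.0, 0.0])

        print(np.abs(H @ H @ psi) ** 2)  # [1. 0.] -- interference, deterministic
        print(M @ M @ p)                 # [0.5 0.5] -- plain coin flipping
        ```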

    • MOCVD@mander.xyz
      1 day ago

      I kinda boil it down to discrete energy packets distributed over an area as field values, and the collapse occurs when two discrete packets interact

      • bunchberry@lemmy.world
        1 day ago

        What if two packets interact with each other? If you claim a collapse occurs, then entanglement could never happen, and so such a viewpoint is logically ruled out. If you say a collapse does not occur between two packets but only occurs when you introduce a measurement device, then this is vague without rigorously defining what a measurement device is; yet providing any additional physical definition will then introduce something into the dynamics that is not there in orthodox quantum mechanics, so you’ve moved into a new theory and are no longer talking about textbook QM.