• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 7th, 2024

  • I’m pretty sure this goes against the properties proven of entanglement (Bell test) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.

    If you don’t know anything about the topic then maybe you shouldn’t speak on it. Especially when claiming you have debunked peer reviewed papers from Harvard physicists like Jacob Barandes.

    However, I don’t currently see how this at all explains computing with superpositions; if it’s just statistics a superposition can never exist

    Superposition is a property of statistics. Even classical statistics commonly represents the system’s statistical state as a linear combination of basis states. That’s just what a probability distribution is. If you take any courses in statistics, you will superimpose things all the time. This is a mathematical property.
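    As a minimal illustration of that point (my own sketch, not from the original comment): a probability distribution over one classical bit is already a linear combination of basis states.

    ```python
    import numpy as np

    # Basis states for a single classical bit, written as vectors.
    zero = np.array([1.0, 0.0])  # the state "definitely 0"
    one = np.array([0.0, 1.0])   # the state "definitely 1"

    # A classical statistical state is a linear combination (superposition)
    # of those basis states, weighted by probabilities.
    p = 0.3 * zero + 0.7 * one
    print(p)  # [0.3 0.7] -- an ordinary probability distribution
    ```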

    so entanglement doesn’t exist; so quantum algorithms wouldn’t be possible, but we know they are.

    Quantum advantage obviously comes from the phase of the quantum state. If you remove the phase from the quantum state then all you are left with is a probability distribution, and so there would be nothing to distinguish it from a classical statistical theory. But the phase is, again, a sufficient statistic over the system’s history. The quantum advantage comes from the fact that you are ultimately operating with a much larger information space, since each instruction in the computer is a function over the whole algorithm’s history back to the start of the quantum circuit, rather than just the current state of the computer’s memory at that present moment.


  • What if two packets interact with each other? If you claim a collapse occurs, then entanglement could never happen, and so such a viewpoint is logically ruled out. If you say a collapse does not occur but only occurs if you introduce a measurement device, then this is vague without rigorously defining what a measurement device is, but providing any additional physical definition will then introduce something into the dynamics which is not there in orthodox quantum mechanics, so you’ve now moved into a new theory and are no longer talking about textbook QM.


  • In any statistical theory, the statistical distribution, which is typically represented by a vector that is a superposition of basis states, evolves deterministically. That is just a feature of statistics generally. But no one in their right mind would interpret the deterministic evolution of the statistical state as a physical object deterministically evolving in the real world. Yet, when it comes to QM, people insist we must change how we interpret statistics, and nobody can give a good argument as to why.

    We only “don’t fully understand where the probabilistic measurement happens” if you deny it is probabilistic to begin with. If you just start with the assumption that it is a statistical theory then there is no issue. You just interpret it like you interpret any old statistical theory. There are no invisible “probability waves.” The quantum state is an epistemic state, based on the observer’s knowledge, their “best guess,” of a system that is in a definite state in the real world, but they cannot know it because it evolves randomly. Their measurement of that state just reveals what was already there. No “collapse” happens.

    The paradox where we “don’t know” what happens at measurement only arises if you deny this. If you insist that the probability distribution is somehow a physical object. If you do so, then, yes, we “don’t know” how this infinite-dimensional physical object which doesn’t even exist anywhere in physical space can possibly translate itself to the definite values that we observe when we look. Neither Copenhagen nor Many Worlds have a coherent and logically consistent answer to the question.

    But there is no good reason to believe the claim to begin with that the statistical distribution is a physical feature of the world. The fact that the statistical distribution evolves deterministically is, again, a feature of statistics generally. This is also true of classical statistical models. The probability vector for a classical probabilistic computer is mathematically described as evolving deterministically throughout an algorithm, but no sane person takes that to mean that the bits in the computer’s memory don’t exist when you aren’t looking at them, or that an infinite-dimensional object that doesn’t exist anywhere in physical space is somehow evolving through the computer.

    Indeed, the quantum state is entirely decomposable into a probability distribution. Complex numbers aren’t magic; they always just represent something with two degrees of freedom, so we can always decompose them into two real-valued terms and ask what those two degrees of freedom represent. If you decompose the quantum state into polar form, you find that one of the degrees of freedom is just a probability vector, the same as you’d see in classical statistics. The other is a phase vector.
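    A minimal sketch of that decomposition (my own illustration, with an arbitrary example state): write each amplitude in polar form and you get a probability vector and a phase vector.

    ```python
    import numpy as np

    # An arbitrary normalized quantum state (complex amplitudes).
    psi = np.array([0.6, 0.8j])

    # Polar form: each amplitude is r * exp(i * theta).
    r = np.abs(psi)        # magnitudes
    theta = np.angle(psi)  # phases

    probabilities = r**2   # [0.36, 0.64] -- an ordinary probability vector
    phases = theta         # [0.0, pi/2]  -- the phase vector

    print(probabilities, phases)
    ```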

    The phase vector seems mysterious until you write down time evolution rules for the probability vector in quantum systems as well as the phase vector. The rules, of course, take into account the previous values and the definition of the operator that is being applied to them. You then just have to recursively substitute in the phase vector’s evolution rule into the probability vector’s. You then find that the phase vector disappears, because it decomposes into a function over the system’s history, i.e. a function over all operators and probability vectors at all previous time intervals going back to a division event. The phase therefore is just a sufficient statistic over the system’s history and is not a physical object, as it can be defined in terms of the system’s statistical history.

    That is to say, without modifying it in any way, quantum mechanics is mathematically equivalent to a statistical theory with history dependence. The Harvard physicist Jacob Barandes also wrote a proof of this fact that you can read here. The history dependence does make it behave in ways that are a bit counterintuitive, as it inherently implies a non-spatiotemporal aspect to how the statistics evolve, as well as interference effects due to interference in its history, but they are still just statistics all the same. You don’t need anything but the definition of the operators and the probability distributions to compute the evolution of a quantum circuit. A quantum state is not even necessary; it is just convenient.

    If you just accept that it is statistics and move on, there is no “measurement problem.” There would be no claim that the particles do not have definite states in the real world, only that we cannot know them because our model is not a deterministic model but a statistical model. If we go measure a particle’s position and find it to be at a particular location, the explanation for why we find it at that location is just that that’s where it was before we went to measure it. There is only a “measurement problem” if you claim the particle was not there before you looked; then you have difficulty explaining how it got there when you looked.

    But no one has presented a compelling argument in the scientific literature that we should deny that it is there before we look. We cannot know what its value is before we look as its dynamics are (as far as we know) random, but that is a very different claim than saying it really isn’t there until we look. This idea that the particles aren’t there until we look has, in my view, been largely ruled out in the academic literature, and should be treated as an outdated view like believing in the Rutherford model of the atom. Yet, people still insist on clinging to it.

    They pretend like Copenhagen and Many Worlds are logically consistent by writing an enormous sea of papers upon papers upon papers, where it only seems “consistent” because it becomes so complicated that hardly anyone even bothers to follow along with it anymore, but if you actually go through the arguments with a fine-tooth comb, you can always show them to be inconsistent and circular. There is only a vague aura of logical and mathematical consistency on the surface. The more you actually engage with the mathematics and read the academic literature on quantum foundations, the clearer it becomes how incoherent and contrived the attempts to make Copenhagen and Many Worlds consistent actually are, and how no one in the literature has actually achieved it, even though many falsely pretend they have done so.


  • It’s amazing how nonsensical the actual foundational axioms of modern day economics are.

    Classical economics tried to tie economics to functions of physical things we can measure. Adam Smith, for example, proposed that because you can recursively decompose every product into the physical units of time it takes to produce it all the way down the supply chain, any stable economy should, on average (not in the individual case), roughly buy and sell in a way that reflects that time, or else there would necessarily have to be physical time shortages or waste, which would lead to economic problems. We thus may be able to use this time parameter to make quantifiable predictions about the economy.

    Many people had philosophical objections to this because it violates free will. If you can predict roughly what society will do based on physical factors, then you are implying that people’s decisions are determined by physical parameters. Humans have the “free will” to just choose to buy and sell at whatever price they want, and so the economy cannot be reduced beyond the decisions of the human spirit. There was thus a second school of economics which tried to argue that maybe you could derive prices from measuring how much people subjectively desire things, measured in “utils.”

    “Utils” are of course such ambiguous nonsense that eventually these economists realized that this cannot work, so they proposed a different idea instead, which is to focus on marginal rates of substitution. Rather than saying there is some quantifiable parameter of “utils,” you say that every person would be willing to trade some quantity of object X for some quantity of object Y, and then you try to define the whole economy in terms of these substitutions.

    However, there are two obvious problems with this.

    The first problem is that to know how people would be willing to substitute things rigorously, you would need an incredibly deep and complex understanding of human psychology, which the founders of neoclassical economics did not have. Without a rigorous definition, you could not fit it to mathematical equations. It would just be vague philosophy.

    How did they solve this? They… made it up. I am not kidding you. Look up the axioms of consumer preference theory whenever you have the chance. It is a bunch of made-up axioms about human psychology, many of which are quite obviously not even correct (for example, you have to assume that each person has evaluated and ranked every product in the entire economy, and that every person would always be more satisfied with having more of any given object, etc.), but you have to adopt those axioms in order to derive any of the mathematics at all.

    The second problem is one first pointed out, to my knowledge, by the economist Nikolai Bukharin: an economic model based around human psychology cannot possibly even be predictive, because there is no logical reason to believe that the behavior of everything in the economy, including all social structures, is purely derivative of human psychology. That is, nothing rules out a back-reaction whereby the preexisting social structures and environmental factors people are born into shape their psychology, and he gives a good proof by contradiction that the back-reaction must exist.

    The idea that you can derive everything from some arbitrary set of immutable mathematical laws, made up in someone’s armchair one day, that supposedly rigorously detail human behavior and are irreducible to anything else, is just nonsense. No one has ever even tested any of these laws that supposedly govern human psychology.


  • Surprisingly that is a controversial view. Most physicists insist QM has nothing to do with probability! But then why does it only give you probabilistic predictions? Ye old measurement problem, an entirely fabricated problem because physicists cannot accept that a theory that gives you probabilities is obviously a probabilistic theory.


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem · 1 month ago

    They do have values. Their position is just a superposition, rather than one discrete one, which can be described as a wave. Their value is effectively a wave until it’s needed to be discrete.

    To quote Dmitry Blokhintsev: “This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.”

    When I say “real values” I do not mean pure abstract mathematics. We do not live in a Platonic realm. The mathematics are just a tool for predicting what we observe in the real world. Don’t confuse the map for the territory. The abstract wave has no observable properties, it is pure mathematics. If the whole world was just one giant wave in Hilbert space, then this would be equivalent to claiming that the entire world is just one big mathematical function without any observable properties at all, which obviously makes no sense as we can clearly observe the world.

    To quote Rovelli: “The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world.”

    Again, as I said in my first comment, any mathematical theory that describes the world needs to, at some point, include symbols which directly refer to something we can observe. An abstract mathematical function contains no such symbols. If you really believe that particles transform into purely mathematical waves, then you need some process to transform them back, or else you cannot explain what we observe at all, and so far the only process you have put forward is “it happens at every interaction” which is just objectively and empirically wrong because then entanglement would be impossible.

    This is why you run into contradictions like the “Wigner’s friend” paradox where Wigner would describe his friend in a superposition of states, and if you believe that this literally means that all that exists inside the room is an abstract function, then you cannot explain how the observer in the room can perceive anything that they later claim they do, because there would be no observables inside of the room.

    You cannot get around criticisms of solipsism by just promoting purely abstract mathematical entities to being “objective reality” as if objects transform into purely Platonic mathematical functions. At least, if you are going to claim this, then you need some rigorous process to transform them back into something that is described with mathematical language where some of the symbols refer to something we can actually observe such that we can then explain how it is that we can observe it to have the properties that it does when we look at it.

    Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur.

    Please scroll up and read my actual comment. You seem to have skipped all the important technical bits, because you are claiming something which is mathematically incompatible with the predictions of quantum mechanics. The personal theory you are inventing here would literally render entanglement impossible.

    The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.

    Decoherence is not relevant here. Decoherence theory works like this:

    1. Assume that the system+environment become entangled.
    2. Assume that the observer loses track of the environment.
    3. Trace out the environment.
    4. This leaves you with a reduced density matrix for the system where the coherence terms have dropped to 0.

    Notice that step #2 is entirely subjective. We are just assuming that the observer has lost track of the environment in terms of their subjective epistemic access, and step #3 is then akin to statistically marginalizing over the environment in order to then remove it from consideration.

    This isn’t an actual physical transition but an epistemic one. The system+environment are still in a coherent superposition of states, and decoherence theory merely shows that it looks like it has decohered if you only have subjective knowledge on a small portion of the much larger coherent superposition of states.

    If you believe that a superposition of states means it has no observable properties and is just purely a mathematical function, then decoherence does not solve your problem at all, because it is ultimately a subjective process and not a physical process. If you spent time studying the environment enough before running the experiment such that you could include the environment in your model then decoherence would not occur.
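    To make those four steps concrete, here is a minimal sketch (my own illustration, using a Bell state as the system+environment pair) of step #3: tracing out the environment and getting a reduced density matrix whose coherence terms are zero, even though the joint state is still a coherent superposition.

    ```python
    import numpy as np

    # System + environment in the entangled Bell state (|00> + |11>)/sqrt(2).
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

    # Full density matrix of system+environment: still a pure, coherent state.
    rho = np.outer(psi, psi.conj())

    # Step 3: trace out the environment (the second qubit).
    rho = rho.reshape(2, 2, 2, 2)          # indices: (sys, env, sys', env')
    rho_sys = np.einsum('aibi->ab', rho)   # partial trace over the environment

    print(rho_sys)
    # [[0.5 0. ]
    #  [0.  0.5]] -- the off-diagonal (coherence) terms are zero: it "looks"
    # decohered, even though the joint state is still a coherent superposition.
    ```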

    I’m literally not. My entire point is that it isn’t a solipsism. Any interaction causes the waveform to collapse.

    Which, again, renders entanglement impossible, since objects must interact to become entangled.

    If we accepted your personal theory, then quantum computers would be impossible, because the qubits all need to interact many, many times over as the algorithm progresses for them to all become entangled and to create a superposition of states of the whole computer’s memory.
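    For reference, a minimal sketch (my own, just the standard textbook construction) of how an interaction between two qubits produces entanglement rather than a collapse: a Hadamard on the first qubit followed by a CNOT between the two yields a Bell state.

    ```python
    import numpy as np

    # Single-qubit Hadamard and two-qubit CNOT gates.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # Start both qubits in |0>, put the first in superposition, then let them interact.
    psi = np.kron(np.array([1, 0]), np.array([1, 0]))
    psi = np.kron(H, I) @ psi     # first qubit in superposition
    psi = CNOT @ psi              # interaction between the qubits

    print(psi)  # ~[0.707 0 0 0.707] -- an entangled Bell state, not a collapsed one
    ```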

    You are not listening, and you are advocating things that are trivially wrong.

    yet you give no explanation of an alternative. Something is happening. How do you explain it?

    I just don’t deny value definiteness. That’s it. There is nothing beyond this.

    Consider a perfectly classical world, but one that is still fundamentally random. The randomness of interactions would prevent us from tracking the definite values of particles at a given moment in time, so we could only track them with an evolving probability distribution. We can represent this probability distribution with a vector and represent interactions with stochastic matrices. Given that the model does not include observable definite values, would it then be rational to claim that particles suddenly transform into an infinite-dimensional vector in configuration space when you’re not looking at them and lose all their observable properties? No, of course not. The particles still have real observable properties in the real world, but you just lose track of them in the model due to their random evolution.
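    A minimal sketch of that kind of model (my own illustration): the statistical state is a probability vector, a random interaction is a stochastic matrix, and the vector evolves deterministically even though each individual system hops around randomly.

    ```python
    import numpy as np

    # Statistical state of one classical bit: P(0) = 1 at the start.
    p = np.array([1.0, 0.0])

    # A stochastic matrix describing a random interaction:
    # columns sum to 1, entry [i, j] is P(next state = i | current state = j).
    T = np.array([[0.9, 0.2],
                  [0.1, 0.8]])

    # The probability vector evolves deterministically under T,
    # even though any individual system hops between 0 and 1 randomly.
    for _ in range(3):
        p = T @ p
    print(p)  # the distribution after three interactions
    ```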

    You could create a simulation where you assign definite values and permute them stochastically at each interaction, and this would produce the same statistical results if you make a measurement at any given step. It is the same with quantum mechanics. It is just a form of non-classical statistical mechanics. There is no empirical, mathematical, or philosophical reason to claim that particles stop possessing real values when you are not looking at them. It is not hard to put together a simulation where the qubits are assigned definite bit values at all times and each logic gate just stochastically permutes those bit values. I even created one myself here. John Bell also showed you can do this with quantum field theory in his paper “Beables for Quantum Field Theory.”
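    Something in the spirit of that kind of simulation, as a minimal toy sketch (my own illustration, not the simulation linked above): the bit has a definite value at every step, each “gate” just stochastically permutes it, and the sampled frequencies agree with the probability vector evolved by the stochastic matrix from the sketch above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    T = np.array([[0.9, 0.2],    # the same stochastic matrix as above
                  [0.1, 0.8]])

    def run_once(steps=3):
        bit = 0                               # the bit has a definite value at all times
        for _ in range(steps):
            bit = rng.choice(2, p=T[:, bit])  # each gate stochastically permutes it
        return bit

    samples = [run_once() for _ in range(100_000)]
    print(np.bincount(samples) / len(samples))  # matches T @ T @ T @ [1, 0]
    ```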


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem · edited · 1 month ago

    Value indefiniteness is just solipsism. If particles do not have values when you are not looking, then any object made of particles also does not have values when you are not looking. This was the point of Schrodinger’s “cat” thought experiment. Your beliefs about the microworld inherently have implications for the macroworld. If particles don’t exist when you’re not looking at them, then neither do cats, or other people. This view of “value indefiniteness” you are trying to defend is indefensible, because it is literally solipsism and any attempt to promote it above solipsism will just become incoherent.

    You say:

    it’s when position is needed to be known that causes it. Until then, the position is in a superstate of all possible positions, but for an interaction to occur it needs to be in one position.

    This is trivially false, because then it would not be possible for two particles to become entangled on the position basis, which requires them to interact in such a way that depends upon their position values. The other particle would thus need to “know” its position value to become entangled with it, and if this leads to a “collapse,” then such entanglement could not occur. Yet we know it can occur in experiments.

    If by “know” you mean humans knowing and not other particles, yeah, okay, but that’s obviously solipsism.

    Any attempt to defend value indefiniteness will always either amount to:

    1. Solipsism
    2. Something that is trivially wrong
    3. A theory which is not quantum mechanics (makes different predictions)

    This (at least your wording) implies that physics cares about our mathematical models. It doesn’t. Quantum mechanics and “classical” physics are just ways we organize things for education.

    I don’t blame them, it is literally the textbook Dirac-von Neumann axioms. That is how it is taught in schools, even though it is obviously incoherent. You are taught that there is a “Heisenberg cut” between the quantum and classical world, with no explanation of how this occurs.

    Though we don’t have a model for it, the universe is not using two separate models of physics. There is no “quantum mechanics” and “classical physics”. There is only physics.

    The problem is that the orthodox interpretation of quantum mechanics does not even allow you to derive classical physics minus gravity in a limiting case from quantum mechanics. It is not even a physical theory of nature at all.

    We know from the macroscopic world that particles have real observable properties, yet value indefiniteness denies that they have real observable properties, and it provides no method of telling you when those real, observable properties are added back to the world. It thus cannot make a single empirical prediction at all without this sleight-of-hand where they just say, as a matter of axiom in the Dirac-von Neumann textbook axioms of quantum mechanics that it happens “at measurement.”

    If measurement is taken to be a subjective observation, then it is just solipsism. If measurement is taken to be a physical process, then it cannot reproduce the mathematical predictions of quantum mechanics, because this “Heisenberg cut” would be a non-reversible process, yet all unitary evolution operators are reversible. Hence, any model which includes a rigorous definition of “measurement” (like Ghirardi–Rimini–Weber theory) would include an additional non-reversible process. You could then just imagine setting up an experiment where this process would occur and then try to reverse it. The mathematics of quantum mechanics and your theory would inevitably lead to different predictions in such a process.
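    As a minimal sketch of the reversibility point (my own illustration): a unitary gate can always be undone by applying its conjugate transpose, whereas a projective “collapse” discards information and cannot be undone.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a unitary gate
    psi = np.array([0.6, 0.8])

    # Unitary evolution is reversible: applying the inverse recovers the state.
    recovered = H.conj().T @ (H @ psi)
    print(np.allclose(recovered, psi))  # True

    # A projective "collapse" onto |0> is not reversible: the |1> component is gone,
    # and no operator can reconstruct the original state from the result.
    P0 = np.array([[1, 0], [0, 0]])
    collapsed = P0 @ psi
    print(collapsed)  # [0.6 0.] -- the information in the 0.8 component is lost
    ```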

    Therefore, again, if you believe in value indefiniteness, then you either (1) are a solipsist, (2) don’t believe in quantum mechanics but think it will be replaced by a physical collapse model, or (3) are confused.

    The only way for quantum mechanics to be self-consistent is to reject value indefiniteness, at least as a metaphysical point of view. This does not require actually modifying the mathematics. If nature is random, then of course the definite values will evolve statistically such that they could not be tracked and included in the model. All you would need to then demonstrate is that quantum statistics converges to classical statistics in a limiting case on macroscopic scales, which is achieved by the theory of decoherence.

    But the theory of decoherence achieves nothing if you believe in value indefiniteness, because if you believe quantum mechanics has nothing to do with statistics at all, then there is no reason to conclude that what you get in the reduced density matrices after you trace out the environment has anything to do with classical statistics, either.

    There is no good argument in the academic literature for value indefiniteness. It is an incoherent worldview based on no empirical evidence at all. People who believe it often just mindlessly regurgitate statements like “Bell’s theorem proves it!” yet cannot articulate what Bell’s theorem even is or how on earth it proves that, especially since Bell himself was the biggest critic of value indefiniteness yet wrote the damned theorem!