The Principle of the Non-Realizability of Quantum Computers

Version: 1.1   29.10.2024

Version history:
  1.0   15.02.2024
  1.1   29.10.2024

The principle of the non-realizability of quantum computing, abbreviated NONQC, is a postulate. It arises from two conceptual errors (or inaccuracies) in the very idea of a quantum computer. In the following, I will explain why these mistakes seem crucial to the construction of Quantum Computers and may determine their unfeasibility.

The term " Quantum Computer " is a shorthand for a quantum computing device. For the purposes of this article, I mean by this a machine that performs quantum data processing:

Definition (quantum data processing): Quantum data processing is the transformation of sequences of meaningful terms in which the basic units of information, the so-called q-bits, are assigned continuous state spectra obeying the principle of Quantum Superposition (1).
  (1) The term "quantum data processing" is often understood to mean processing that uses quantum phenomena (see Wikipedia). Such a definition is inadequate, because it would already be satisfied by a simple current flow in a copper conductor, which is fully explained only by quantum mechanics. Hence the need for the above clarification, in which the quantum principle of superposition is applied and, as a result, potentially infinite distributions of states (so-called state spectra) are assigned to the basic units of data processing.
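
As a brief illustration of what the definition assigns to a q-bit (standard textbook notation, my addition rather than part of the original definition): a single q-bit carries a continuous spectrum of superposition states, and an n-q-bit register spans exponentially many basis terms:

\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]
\[ |\Psi\rangle = \sum_{k=0}^{2^n - 1} c_k\,|k\rangle, \qquad \sum_k |c_k|^2 = 1. \]

The amplitudes \(\alpha, \beta\) and \(c_k\) are continuous complex parameters; these are the "continuous state spectra" the definition refers to.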



The Principle of the Non-Realizability of Quantum Computing
Technical limitations of a statistical nature grow with the number of units used in a quantum computing device in a way that effectively excludes the possibility of quantum processing of data larger than what is theoretically possible without the use of quantum computing.
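
A toy numerical sketch of this statistical scaling (my own illustration; the per-step disturbance probability eps is an assumed parameter, not a measured hardware figure): if each q-bit must independently remain undisturbed, the survival probability of the whole register decays exponentially with its size.

    # Toy model (illustrative assumption, not a hardware benchmark):
    # each of n q-bits independently stays coherent with probability
    # 1 - eps per step, so a register survives s steps undisturbed
    # with probability (1 - eps)**(n * s).
    def register_survival(n_qubits: int, eps: float = 1e-4, steps: int = 1000) -> float:
        return (1.0 - eps) ** (n_qubits * steps)

    for n in (10, 100, 1000, 10000):
        # for very large n the value underflows to 0.0, which only
        # underlines the exponential character of the decay
        print(f"{n:>6} q-bits: survival ~ {register_survival(n):.3e}")

On this reading, such decay is not merely an engineering obstacle to be polished away; the principle claims it is a manifestation of the statistical terms used to interpret the device's outcomes.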

There are two inconsistencies in quantum mechanics and in physics in general that lead to the well-known conceptual difficulties, often referred to in the literature as the incompatibility between quantum mechanics and general relativity (the theory of gravity). Here they are:

1. An incorrect and incomplete understanding of the notion of the wave function in quantum mechanics, in which statistical functions are assigned to individual cases.

2. The hypothesis that the concepts of time and space familiar to us from the macroscopic world may be applied at ever smaller scales, ad infinitum.

Ad 1. The currently common understanding of the quantum microworld is based on the concept of a wave function that describes particle states as distributions of measurement probabilities, stating, for example, the position of an electron at a certain place and time. Such an understanding is, however, incorrect and can only play the role of a shortcut, one that in the situation considered here ignores the essence of the phenomenon. The results of position measurements appear in the laws of physics and statistics only when billions of billions of individual events occur, which together make up a macroscopic measurement. These statistics apply to this huge number of cases, while individual measurements remain unpredictable, i.e. they lie outside the scope of the theory (2). It is conceptually absurd to assign statistics to a single case.

  (2) I omit here the fact, otherwise important for a full understanding, that the inability to predict a single measurement, e.g. the position of a particle, means that, assuming the theory is based only on real, measurable facts and not on a priori assumptions, it is de facto impossible to define the concept of position for a single microscopic event.
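
For reference, the standard statistical reading mentioned above can be written in textbook notation (my addition): the Born rule assigns only a probability density to position, verifiable solely over a large ensemble of repeated measurements:

\[ p(x) = |\psi(x)|^2, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = 1. \]

Nothing in this formula refers to the outcome of any single event; it constrains only the frequencies that emerge over many events.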

Ad 2. The above-mentioned assumption of an arbitrary reducibility of physical space and time to ever smaller scales leads to the well-known paradoxes and computational problems, whose handling requires postulating that the components of the results which grow to infinity can be removed. The reducibility assumed in this way also seems to make it possible to reproduce the description of physical reality in the form of a copy of itself on a smaller scale. This in turn leads to collapse: our physical reality would collapse in on itself.

However, if one assumes that the concepts of time and space are only justified on a macroscopic scale, then such reducibility can no longer be carried out, which allows the collapse of the description of reality to be avoided.

Therefore, the hypothesis that allows us to recreate the model of our physical reality in a subatomic quantum subsystem and calculate its possible effects using q-bits seems to have its limits.

The second limit we encounter concerns the concept of number itself. We assume that natural numbers are mental structures created for dealing with macroscopic objects. However, the notion of independently existing individual objects breaks down, e.g. in quantum field theory. The hypothesis that real numbers are merely infinite sequences of symbols built on the natural numbers also reaches its end here.

In short, the digital (in the sense of a small, finite number of symbols) processing of models of our macroscopic world of objects on a subatomic scale by means of "quantum computing" would be nothing other than a physical realisation of the Hidden Parameters Theory, which, as we know, has been excluded by the experimental verification of the violation of Bell's inequalities. Such a Hidden Parameters Theory cannot exist, and continuous spectra subject to the quantum principle of superposition are not macroscopically operated abacuses (3).

  (3) Indeed, in Shor's algorithm the key element is to read the complete distribution of a wave function after superposition out into macroscopic results. If this could be done analytically and accurately, we would in fact gain knowledge of the "hidden parameters" that determine the result of the superposition. This suggests that the practical impossibility of factorising large numbers could be proven via the violation of Bell's inequalities.
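
For reference, the standard CHSH form of Bell's inequalities (textbook material, my addition): for measurement settings a, a' and b, b', any local hidden-parameter theory bounds the combination of correlations

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, \]

whereas quantum mechanics predicts, and experiment confirms, values up to \( |S| = 2\sqrt{2} \).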

Summary:

The impossibility of building a Quantum Computer thus follows from the fact that the technical troubles which appear, such as the difficulty of keeping ideally constant temperature conditions, are not only of a technical nature, something one could overcome one day with a more precise device; they are the fundamentally statistical manifestation of the very terms one applies to interpret the outcomes of such a device.

It is worth mentioning that the history of the very concept begins (according to Wikipedia) with the No-cloning Theorem, developed in the early 1970s, which seems to be extendable to the non-realizability of quantum computing.

Consequences:

The above principle