Theoretical Background
December 14, 2016 12:13 pm
Heisenberg’s Uncertainty Principle
The uncertainty principle represents, without any doubt, one of the most important cornerstones of the Copenhagen interpretation of quantum theory. In his celebrated paper of 1927 [1], Werner Heisenberg gives at least two distinct statements about the limitations on preparation and measurement of physical systems: (i) incompatible observables cannot be measured with arbitrary accuracy: a measurement of one of these observables disturbs the other one accordingly, and vice versa; (ii) it is impossible to prepare a system such that a pair of noncommuting (incompatible) observables are arbitrarily well defined. Here the observables are represented by position and momentum.

In his paper, Heisenberg proposed a reciprocal relation for measurement error and disturbance by means of the famous γ-ray microscope thought experiment: “At the instant when the position is determined—therefore, at the moment when the photon is scattered by the electron—the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed—that is, the more exact the determination of the position…” [1]. Heisenberg follows Einstein’s realistic view, that is, to base a new physical theory only on observable quantities (elements of reality), arguing that terms like velocity or position make no sense without defining an appropriate apparatus for a measurement. By solely considering the Compton effect, named after the American physicist Arthur Holly Compton, Heisenberg gives a rather heuristic estimate for the product of the inaccuracy (error) of a position measurement and the disturbance induced on the particle’s momentum, denoted by $\epsilon(q)\,\eta(p)\sim h$. This relation can be referred to as a measurement uncertainty relation (i) or as an error-disturbance uncertainty relation (EDR).
Heisenberg’s original formulation can be read in modern treatment as $\epsilon(Q)\,\eta(P)\ge \hbar/2$, for the error $\epsilon(Q)$ of a measurement of the position observable $Q$ and the disturbance $\eta(P)$ of the momentum observable $P$ induced by the position measurement.
However, most modern textbooks introduce the uncertainty relation in terms of a preparation uncertainty relation (ii), denoted by $\sigma(Q)\,\sigma(P)\ge\hbar/2$, originally proved by Earle Hesse Kennard [2] for the standard deviations $\sigma(Q)$ and $\sigma(P)$ of the position observable $Q$ and the momentum observable $P$ in an arbitrary state $\vert\psi\rangle$, where the standard deviation is defined by $\sigma(A)=\sqrt{\langle\psi\vert A^2\vert\psi\rangle-\langle\psi\vert A\vert\psi\rangle^2}$. This formulation of the uncertainty principle is uncontroversial and has been tested with several quantum systems, including neutrons. However, it does not capture Heisenberg’s initial intentions: Kennard’s formulation is an inequality for the statistical distributions not of a joint but rather of single measurements of either $Q$ or $P$. It is an intrinsic uncertainty inherent to any quantum system, independent of whether it is measured or not. The unavoidable recoil caused by the measuring device is ignored here. The reciprocal behavior of the distributions of $Q$ and $P$ is illustrated on the right side. Heisenberg actually derived Kennard’s relation from above for Gaussian wave functions, applied this relation to the state just after the measurement with error $\epsilon(Q)$ and disturbance $\eta(P)$, and concluded his relation from the additional, implicit assumptions $\sigma(Q)\le\epsilon(Q)$ and $\sigma(P)\le\eta(P)$ for the post-measurement state. However, the first assumption holds only for a restricted class of measurements, and the second holds if the initial state is a momentum eigenstate, but neither holds generally. In 1929, Howard Percy “Bob” Robertson [3] extended Kennard’s relation to an arbitrary pair of observables $A$ and $B$ as $\sigma(A)\,\sigma(B)\ge\frac{1}{2}\vert\langle\psi\vert[A,B]\vert\psi\rangle\vert$, with the commutator $[A,B]=AB-BA$. The corresponding generalized form of Heisenberg’s original error-disturbance uncertainty relation would read $\epsilon(A)\,\eta(B)\ge\frac{1}{2}\vert\langle\psi\vert[A,B]\vert\psi\rangle\vert$. But the validity of this relation is known to be limited to specific circumstances.
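Robertson’s relation is easy to check numerically. The following sketch (our own illustration, not part of the original text) verifies $\sigma(A)\,\sigma(B)\ge\frac{1}{2}\vert\langle[A,B]\rangle\vert$ for the Pauli spin observables $\sigma_x$ and $\sigma_y$ on random qubit states:

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def std(M, psi):
    """Standard deviation sigma(M) = sqrt(<M^2> - <M>^2) in state |psi>."""
    m = np.real(psi.conj() @ M @ psi)
    m2 = np.real(psi.conj() @ M @ M @ psi)
    return np.sqrt(max(m2 - m**2, 0.0))

for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)                     # random pure qubit state
    lhs = std(sx, psi) * std(sy, psi)               # sigma(A) sigma(B)
    rhs = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)  # (1/2)|<[A,B]>|
    assert lhs >= rhs - 1e-12

print("Robertson's inequality holds for 1000 random qubit states")
```

For a qubit with Bloch vector $(r_x,r_y,r_z)$ one has $\sigma(\sigma_x)\,\sigma(\sigma_y)=\sqrt{r_z^2+r_x^2 r_y^2}\ge\vert r_z\vert$, so the inequality is always satisfied, with equality only when $r_x r_y=0$.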
Ozawa’s Generalized Uncertainty Relation
In the year 2003 the Japanese theoretical physicist Masanao Ozawa proposed a new error-disturbance uncertainty relation [4], $\epsilon(A)\,\eta(B)+\epsilon(A)\,\sigma(B)+\sigma(A)\,\eta(B)\ge\frac{1}{2}\vert\langle\psi\vert[A,B]\vert\psi\rangle\vert$, and proved its universal validity in the general theory of quantum measurements (see here for details of the derivation). Here $\epsilon(A)$ denotes the root-mean-square (r.m.s.) error of an arbitrary measurement for an observable $A$, $\eta(B)$ is the r.m.s. disturbance on another observable $B$ induced by the measurement, and $\sigma(A)$ and $\sigma(B)$ are the standard deviations of $A$ and $B$ in the state before the measurement. Here error and disturbance are defined via an indirect measurement model for an apparatus A measuring an observable $A$ of an object system S as $\epsilon(A)=\Vert(U^{\dagger}(\mathbb{1}\otimes M)U-A\otimes\mathbb{1})\,\vert\psi\rangle\vert\xi\rangle\Vert$ and $\eta(B)=\Vert(U^{\dagger}(B\otimes\mathbb{1})U-B\otimes\mathbb{1})\,\vert\psi\rangle\vert\xi\rangle\Vert$. Here $\vert\psi\rangle$ is the initial state of system S, which is associated with Hilbert space $\mathcal{H}_S$; $\vert\xi\rangle$ is the state of the probe system P before the measurement, defined on Hilbert space $\mathcal{H}_P$, and $M$ is an observable of P, referred to as the meter observable. The time evolution of the composite system S+P during the measurement interaction is described by a unitary operator $U$ on $\mathcal{H}_S\otimes\mathcal{H}_P$. The norm of a state vector $\vert\phi\rangle$ in Hilbert space is given by the square root of its inner product: $\Vert\phi\Vert=\sqrt{\langle\phi\vert\phi\rangle}$. A schematic illustration of a measurement apparatus A is given aside.
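To make these definitions concrete, here is a minimal numerical sketch (a qubit example of our own choosing, in the spirit of the neutron spin experiments; the state, observables, and angle φ are assumptions, not from the text). A projective measurement of $O=\sigma_\phi=\cos\phi\,\sigma_x+\sin\phi\,\sigma_y$ serves as an approximate measurement of $A=\sigma_x$ and disturbs $B=\sigma_y$; for such a projective measurement Ozawa’s definitions reduce to $\epsilon(A)=\Vert(O-A)\vert\psi\rangle\Vert$ and $\eta(B)^2=\sum_i\Vert[B,P_i]\vert\psi\rangle\Vert^2$ with the projectors $P_i$ of $O$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

psi = np.array([1, 0], dtype=complex)   # spin-up along z
A, B = sx, sy

phi = 0.3                                # assumed detuning angle
O = np.cos(phi) * sx + np.sin(phi) * sy  # projectively measured observable
Pp = (I2 + O) / 2                        # projector, eigenvalue +1
Pm = (I2 - O) / 2                        # projector, eigenvalue -1

def ev(M):                               # real expectation value <psi|M|psi>
    return np.real(psi.conj() @ M @ psi)

def norm2(v):                            # squared vector norm
    return np.real(v.conj() @ v)

# r.m.s. error and disturbance for a projective measurement
eps = np.sqrt(ev((O - A) @ (O - A)))
eta = np.sqrt(norm2((B @ Pp - Pp @ B) @ psi) + norm2((B @ Pm - Pm @ B) @ psi))

sigA = np.sqrt(ev(A @ A) - ev(A)**2)
sigB = np.sqrt(ev(B @ B) - ev(B)**2)
C = 0.5 * abs(psi.conj() @ (A @ B - B @ A) @ psi)  # (1/2)|<[A,B]>| = |<s_z>|

print(eps * eta, "vs commutator bound", C)              # can drop below C
print(eps * eta + eps * sigB + sigA * eta, ">=", C)     # Ozawa always holds
```

For small φ the product $\epsilon\,\eta$ drops below the commutator bound, violating the naive Heisenberg-type EDR, while Ozawa’s three-term sum always stays above it.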
Tighter Uncertainty Relations
Though universally valid, Ozawa’s relation from above is not optimal. The reason for this is that the three terms in Ozawa’s relation come from three independent uses of Robertson’s relation (see here for details) applied to different pairs of observables. Although this indeed leads to a valid relation, it is not optimal, because the three Robertson relations, and consequently Ozawa’s relation, generally cannot be saturated simultaneously. Cyril Branciard [6] showed that one can improve on the sub-optimality of Ozawa’s proof and derived the following trade-off relation between error and disturbance by applying two geometric lemmas: $\epsilon^2\sigma(B)^2+\sigma(A)^2\eta^2+2\,\epsilon\,\eta\sqrt{\sigma(A)^2\sigma(B)^2-C_{AB}^2}\ge C_{AB}^2$, where we have again $C_{AB}=\frac{1}{2}\vert\langle\psi\vert[A,B]\vert\psi\rangle\vert$. For the special case $C_{AB}=\sigma(A)\,\sigma(B)$, for which the square-root term vanishes, and replacing $\epsilon$ and $\eta$ by $\epsilon\sqrt{1-\epsilon^2/4\sigma(A)^2}$ and $\eta\sqrt{1-\eta^2/4\sigma(B)^2}$, respectively, the above equation can be strengthened, yielding the tight relation $\epsilon^2\sigma(B)^2\bigl(1-\tfrac{\epsilon^2}{4\sigma(A)^2}\bigr)+\sigma(A)^2\eta^2\bigl(1-\tfrac{\eta^2}{4\sigma(B)^2}\bigr)\ge C_{AB}^2$, as illustrated aside in comparison with Heisenberg’s and Ozawa’s error-disturbance uncertainty relations.
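Continuing the qubit sketch from above (again our own illustration with assumed values: $A=\sigma_x$, $B=\sigma_y$, state $\vert{+}z\rangle$, projective measurement of $\sigma_\phi$, for which $\epsilon=2\sin(\phi/2)$ and $\eta=\sqrt{2}\cos\phi$), one can scan the trade-off over φ and observe that the strengthened relation is saturated in this case, where $C_{AB}=\sigma(A)\,\sigma(B)$:

```python
import numpy as np

sigA = sigB = 1.0   # standard deviations of s_x, s_y in |+z>
C = 1.0             # C_AB = (1/2)|<[A,B]>| = |<s_z>| = sigA*sigB here

lhs_values = []
for phi in np.linspace(0, np.pi, 181):
    eps = 2 * abs(np.sin(phi / 2))         # error of the s_phi measurement
    eta = np.sqrt(2) * abs(np.cos(phi))    # disturbance on B
    # General Branciard relation (sqrt term vanishes since C = sigA*sigB)
    assert eps**2 * sigB**2 + sigA**2 * eta**2 >= C**2 - 1e-12
    # Strengthened (tight) form with the replaced quantities
    te = eps * np.sqrt(max(1 - eps**2 / (4 * sigA**2), 0.0))
    th = eta * np.sqrt(max(1 - eta**2 / (4 * sigB**2), 0.0))
    lhs_values.append(te**2 * sigB**2 + sigA**2 * th**2)

# Minimum over phi comes out at C^2 (up to rounding): relation is saturated
print(min(lhs_values))
```

Analytically, with $c=\cos\phi$ the strengthened left-hand side equals $1+c^2(1-c^2)\ge 1$, touching the bound at $\phi=0$ and $\phi=\pi/2$.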
Entropic Uncertainty Relations
The uncertainty relation as formulated by Robertson in terms of standard deviations has two flaws, first recognized by the Israeli-born British physicist David Deutsch in 1983 [7]: (i) the standard deviation is not an optimal measure of uncertainty for all states; there exist certain states for which the standard deviation diverges; (ii) the boundary (the right-hand side of the uncertainty relation) can become zero for non-commuting observables (this is also the case for our neutron spins for certain combinations of the spin observables). So Deutsch suggested to seek a theorem of linear algebra of the form $\mathcal{U}(A,B;\psi)\ge\mathcal{B}(A,B)$, with a bound $\mathcal{B}$ that does not depend on the state. Heisenberg’s (Kennard’s) inequality has that form, but Robertson’s generalization does not, since its right-hand side is state-dependent. In order to represent a quantitative physical notion of “uncertainty”, $\mathcal{U}$ must at least possess the following elementary property: $\mathcal{U}$ may become zero if and only if $\vert\psi\rangle$ is a simultaneous eigenstate of $A$ and $B$. From this we can infer a property of $\mathcal{B}$, namely that it must vanish if and only if $A$ and $B$ have an eigenstate in common. “The most natural measure of the uncertainty in the result of a measurement or preparation of a single discrete observable is the (Shannon) entropy [7].” According to Claude Shannon, entropy is the expected (average) information received, which combines the so-called information content $-\log_2 p_i$ of an outcome with its probability $p_i$ to occur, defined as $H=-\sum_i p_i\log_2 p_i$,
which can be expressed in the language of quantum mechanics as $H(A)=-\sum_i\vert\langle a_i\vert\psi\rangle\vert^2\,\log_2\vert\langle a_i\vert\psi\rangle\vert^2$, where the $\vert a_i\rangle$ denote the eigenstates of the measured observable $A$.
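As a small illustration (our own example, not from the text), the Shannon entropy of the outcome distribution of a spin measurement quantifies this uncertainty: it vanishes for an eigenstate and reaches 1 bit when both outcomes are equally likely:

```python
import numpy as np

def shannon_entropy(p):
    """H = -sum_i p_i log2 p_i (in bits), with 0*log(0) := 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

# Outcome probabilities |<a_i|psi>|^2 for measuring s_z on the state |+x>
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
probs = [abs(up.conj() @ psi)**2, abs(down.conj() @ psi)**2]

print(shannon_entropy(probs))         # ~1 bit: outcome completely uncertain
print(shannon_entropy([1.0, 0.0]))    # 0 bits: outcome certain (eigenstate)
```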
Information-theoretic definitions of noise and disturbance in quantum measurements were proposed in [8] and are schematically illustrated below:
Here $N(M,A)$ denotes the information-theoretic noise, given by the conditional entropy $H(A\vert M)$ of the values of $A$ given the outcomes of the apparatus M when eigenstates of $A$ are sent in, and $D(M,B)$ denotes the information-theoretic disturbance, given by the conditional entropy $H(B\vert B')$ between $B$ and an (optimally corrected) subsequent measurement $B'$ of $B$ behind the apparatus. From that a state-independent noise-disturbance trade-off relation in terms of Shannon entropies, $N(M,A)+D(M,B)\ge-\log_2 c^2$, with $c=\max_{i,j}\vert\langle a_i\vert b_j\rangle\vert$, is inferred.
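The trade-off can be probed numerically. The sketch below (our own qubit example with assumed parameters; the disturbance is computed without the optimal correction step, which can only make the left-hand side larger) feeds eigenstates of $A=\sigma_z$ and $B=\sigma_x$ through a projective measurement of $\sigma_\phi$ and checks $N(M,A)+D(M,B)\ge-\log_2 c^2=1$ bit:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a (joint or marginal) distribution."""
    p = np.asarray(p, dtype=float).ravel()
    nz = p[p > 1e-15]
    return float(-np.sum(nz * np.log2(nz)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

A_states = [np.array([1, 0], complex), np.array([0, 1], complex)]  # s_z basis
B_states = [np.array([1, 1], complex) / np.sqrt(2),
            np.array([1, -1], complex) / np.sqrt(2)]               # s_x basis

phi = 0.4                      # assumed angle between s_z (0) and s_x (pi/2)
O = np.cos(phi) * sz + np.sin(phi) * sx
P = [(I2 + O) / 2, (I2 - O) / 2]  # projectors of the measured observable

# Noise: uniform ensemble of A eigenstates, joint distribution p(a, m)
p_am = np.array([[0.5 * np.real(a.conj() @ Pm @ a) for Pm in P]
                 for a in A_states])
N = H(p_am) - H(p_am.sum(axis=0))     # H(A|M) = H(A,M) - H(M)

# Disturbance (uncorrected, an upper bound on D): B eigenstates in,
# projective measurement applied, then B measured again: p(b, b')
p_bb = np.array([[0.5 * sum(abs(bp.conj() @ Pm @ b)**2 for Pm in P)
                  for bp in B_states] for b in B_states])
D = H(p_bb) - H(p_bb.sum(axis=0))     # H(B|B')

c = max(abs(a.conj() @ b) for a in A_states for b in B_states)  # = 1/sqrt(2)
print(N + D, ">=", -2 * np.log2(c))   # trade-off relation holds
```

For $\phi=0$ the apparatus measures $A$ perfectly ($N=0$) but fully randomizes $B$ ($D=1$ bit), saturating the bound; intermediate angles trade noise against disturbance.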