
# What is Quantum Infodynamics?

Quantum Infodynamics is a theory of motion of matter-energy-information which takes into account the reality of space. From the primitive materialistic point of view space is "nothing", i.e. it exists only in relation to something positive and material. This view is erroneous. Space is real and it contains and conditions motion. It even moves: space respiration on cosmological scale is just one example of such motion, called primary motion. The confusion of modern physicists stems from the failure to comprehend the reality of space. Despite the human mind being rather rigidly bound to space, the mind is in fact the only instrument that allows us to transcend space, albeit partially. Our finite status must not discourage us from the exploration of the true nature and dynamic structure of space, because space itself is — to all beings of creature status — relatively finite.

To understand Quantum Infodynamics we need to consider carefully the following three levels of the description of motion:

1. Hamiltonian Dynamics. On this level we have particles which obey equations of motion represented by ordinary differential equations. The initial state is a set of coordinates and velocities (or momenta) and our task is to discover the time-evolution of this state.
2. Classical Infodynamics. The initial coordinates and momenta can never be specified with infinite precision and, therefore, their time-evolution is likewise indeterminate. Instead, we can specify some probability distribution $$f_0(x,p)$$ of the initial coordinates and momenta and ask for the corresponding distribution $$f(x,p,t)$$ at some later (or earlier) moment of time $$t$$. This is the level of classical infodynamics, and the word "classical" here means "non-quantum", which, in turn, means "without taking into account the reality of space" (see below). The concrete physical model is fully specified (say, in one spatial dimension) by the form of the potential energy's $$U(x)$$ dependence on the coordinate and the kinetic energy's $$T(p)$$ dependence on the momentum. The dynamics is determined by the exponential propagators containing the ordinary derivatives of these two entities: $${\partial U\over\partial x}$$ and $${\partial T\over\partial p}$$ respectively.
3. Quantum Infodynamics. On the classical level there is nothing in principle precluding the specification of both the coordinate and momentum of a particle simultaneously with as high precision as is desired. The empirical evidence suggests that this is not the case in reality, and so Classical Infodynamics needs modification. This brings us to the level of Quantum Infodynamics. Here, unlike on the Classical level, the specification of $$U(x)$$ and $$T(p)$$ as real-valued functions of real arguments is insufficient, even though in the formal equations this may seem to be the case. In actual fact, one has to perform the analytic continuation of these two functions into the complex plane $$\mathbb{C}$$ and consider the quantum differentials (defined below) of the complex-valued functions $$U(z_x)$$ and $$T(z_p)$$. We shall see below exactly what this amounts to.

# Hamiltonian Dynamics

In theoretical physics (in contrast with school-level physics) Newton's second law can be conveniently reformulated using either the Lagrangian or the Hamiltonian formalism. In the Lagrangian approach the state of the system is described by the generalised coordinates $$q_i$$ and velocities $$\dot{q_i}$$. The specifics of the system are encoded in the Lagrange function $$L(q,\dot{q},t)$$ and the equations of motion arise as the extremals of the action functional $$S[q(t)]$$: \begin{align} & S[q(t)] = \int\limits_{t_1}^{t_2}L(q,\dot{q},t)\,dt \\ & \delta S[q(t)] = 0 \Rightarrow {\partial L\over\partial q_i} = {d\over dt}{\partial L\over\partial\dot{q_i}}, (i=1\dots s) \end{align}

For a system with $$s$$ degrees of freedom, these comprise $$s$$ ordinary differential equations of second order. The Cauchy problem is stated by specifying the initial value of all coordinates and velocities and asking for their time evolution: $$$$(q(0),\dot{q}(0)) \mapsto (q(t),\dot{q}(t))$$$$

In the Hamiltonian approach the state of the system is described by the generalised coordinates $$q_i$$ and momenta $$p_i$$ and the specifics of the system are encoded in the Hamilton function $$H(q,p,t)$$ which is connected with $$L(q,\dot{q},t)$$ via Legendre transformation as follows: \begin{align} & p_i \equiv {\partial L\over\partial\dot{q_i}}\\ & H(q,p,t) = \sum\limits_{i=1}^{s}p_i\dot{q_i} - L \end{align}

and the equations of motion are: \begin{align} & \dot x^i = {\partial H\over\partial p_i} \equiv \{x^i,H\}\label{dotxi}\\ & \dot p_i = -{\partial H\over\partial x^i} \equiv \{p_i,H\}\label{dotpi}\\ & \{A,B\} = A(x,p)(\overset{\leftarrow}{\partial_x}\overset{\rightarrow}{\partial_p} - \overset{\leftarrow}{\partial_p}\overset{\rightarrow}{\partial_x})B(x,p) \end{align}

where we have defined the Poisson bracket $$\{A,B\}$$ on the algebra of smooth functions on the phase space.

The phase flow providing the time evolution of the position and momentum $$(q(0),p(0))$$ defines a one-parameter semigroup $$\{g^t\}$$ of local diffeomorphisms: $$$$g^t(q(0),p(0)) = (q(t),p(t))$$$$

In the case of an autonomous system ($$\partial_t H\equiv 0$$) the semigroup becomes a proper group.
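The Cauchy problem above is easy to illustrate numerically. A minimal Python sketch (the mass, frequency, initial data and step count below are arbitrary choices, not taken from the text) integrates Hamilton's equations for a harmonic oscillator with the standard leapfrog scheme and checks that the state returns to its initial data after one period:

```python
import math

m, w = 1.0, 2.0                          # arbitrary mass and frequency

def leapfrog(x, p, dt, steps):
    """Symplectic (leapfrog) integration of dx/dt = p/m, dp/dt = -m w^2 x."""
    for _ in range(steps):
        p -= 0.5*dt*m*w**2*x             # half kick: dp/dt = -dH/dx
        x += dt*p/m                      # drift:     dx/dt = +dH/dp
        p -= 0.5*dt*m*w**2*x             # half kick
    return x, p

x0, p0 = 1.0, 0.0                        # initial data of the Cauchy problem
T = 2*math.pi/w                          # one full period of the oscillator
x1, p1 = leapfrog(x0, p0, T/10000, 10000)
err = math.hypot(x1 - x0, p1 - p0)       # distance from the initial state
```

The leapfrog scheme is itself a phase-flow map (symplectic), which is why it reproduces the periodic orbit so accurately.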

Geometrically, the difference between the Lagrangian and Hamiltonian formalisms is the same as that between the tangent $$T\mathcal{M}$$ and cotangent bundle $$T^*\mathcal{M}$$ over a base configuration manifold $$\mathcal{M}$$.

# Classical Infodynamics

## Main Principles

If we switch from concentrating the information about the state of a system in a tuple of numbers $$\{q_i(t),p_i(t)\}$$ to a real-valued (and reasonably smooth) distribution function $$f(x,p,t)$$, the equations of motion will have to be modified considerably.

We give the function $$f(x,p,t)$$ the following physical meaning: if the phase space $$\mathcal{P}$$ has an integration measure $$dx\,dp$$, then the expression $$f(x,p,t)\,dx\,dp$$ corresponds to the probability of finding the particle within the neighbourhood of the point $$(x,p)$$ of infinitesimal volume $$dx\,dp$$. For a finite domain $$G\subseteq\mathcal{P}$$ we have the probability $$P_G(t)$$ of finding the particle in the phase image $$g^t(G)$$ at a moment of time $$t$$ given by the following integral: $$$$P_G(t) = \int\limits_{g^t(G)} f(x,p,t)\,dx\,dp$$$$

To obtain the dynamical equation for the information field $$f(x,p,t)$$ (which on this level has the simple meaning of a non-negative distribution function, i.e. a probability density) we can demand a very natural conservation law: $$$$P_G(t) = const \label{Pconst}$$$$

This simply means that if the particle was initially ($$t=0$$) somewhere in the domain $$G$$, then it can only end up in the phase image $$g^t(G)$$ of this domain at a subsequent moment of time $$t$$. Particles do not disappear and do not appear out of nowhere. This is a purely classical condition and will have to be abandoned in the quantum regime.

Now we can make use of the well-known Transport Theorem which we here give for the most generic case of a non-autonomous dynamical system: \begin{align} & \dot{x}^i = F^i(x,t),\\ & x^i(0) = y^i,\quad g^t(y) = x(t;y),\\ & {d\over dt}\int\limits_{g^t(G)} f(x,t)\,d^nx = \int\limits_{g^t(G)}\left\{{\partial f\over\partial t} + \sum\limits_{i=1}^n {\partial\over\partial x^i}(F^i f)\right\}\,d^nx\label{transport} \end{align}

Notice the similarity of the right-hand side of (\ref{transport}) with the continuity equation from hydrodynamics: $$\partial_t\rho + \operatorname{div}(\rho\mathbf{v}) = 0$$. This signifies an important fact: the information flows in phase space like an ordinary liquid, with the role of the velocity played by the phase flow. As we shall see later, this is true only in the classical regime, i.e. when the reality of space is ignored.

For a Hamiltonian system the sum in the above expression turns into the Poisson bracket $$\{f,H\}$$ and so we have: $${d\over dt}\int\limits_{g^t(G)} f(x,p,t)\,dx\,dp = \int\limits_{g^t(G)}\left\{{\partial f\over\partial t} - \{H,f\}\right\}\,dx\,dp$$

Thus, the demand (\ref{Pconst}) is equivalent to the following partial differential equation on the information field: $$$$\partial_t f = \{H,f\}$$$$

The original ordinary differential equations of motion serve as the characteristics of this partial differential equation.

We can look at what we have achieved so far in the following way. The system consists of: a) particles moving along the characteristic trajectories and b) the information field flowing as a liquid driven by the phase flow. The combined Dynamical Equations of Energy-Information Field are so important that we collect them all here again: \begin{align} \dot x & = \{x,H\}\label{dotx}\\ \dot p & = \{p,H\}\label{dotp}\\ \partial_t f & = \{H,f\}\label{dotf} \end{align}

As we see above, when the equations of motion are expressed by means of Poisson brackets, the equations of time evolution of particles and information about particles are similar, up to the sign.

The formal solution $$f(x,p,t)$$ of the Cauchy problem for (\ref{dotf}) is given by the following exponential of differential operators (written here for an autonomous Hamiltonian): $$$$f(x,p,t) = \exp\left\{t\left({\partial H\over\partial x}{\partial\over\partial p} - {\partial H\over\partial p}{\partial\over\partial x}\right)\right\}f_0(x,p)$$$$

In order to cast this formal solution into a form usable for numerical simulations, we split the exponential over a small time step $$\Delta t$$ using the Baker-Campbell-Hausdorff formula: $$$$f(x,p,t+\Delta t) \approx \exp\left\{-\Delta t{\partial H\over\partial p}{\partial\over\partial x}\right\} \exp\left\{\Delta t{\partial H\over\partial x}{\partial\over\partial p}\right\}f(x,p,t)$$$$

And here we can directly apply the method of the spectral split propagator: by hopping from the space of functions of $$x-p$$ to the space of Fourier images over $$p$$ and then over $$x$$, we can solve the Cauchy problem. In the classical regime there are other stable methods of solving this equation, but in the quantum regime the spectral split propagator is practically the only usable algorithm, and I have successfully used it for all my computer simulations.
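The classical spectral sweep can be sketched in a few lines of Python for the simplest case, a free particle with $$H=p^2/2m$$ (all grid sizes and the initial Gaussian below are arbitrary choices): the Liouville equation $$\partial_t f = -(p/m)\partial_x f$$ is solved by an FFT over $$x$$, multiplication by the exponential propagator, and an inverse FFT, then compared against the exact solution $$f_0(x-pt/m,\,p)$$:

```python
import numpy as np

# Classical Liouville equation for a free particle, H = p^2/(2m):
#   df/dt = -(p/m) df/dx
# solved spectrally: FFT over x, multiply by the propagator, FFT back.
m, dt, nsteps = 1.0, 0.05, 40
N, L = 256, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
p = np.linspace(-L/2, L/2, N, endpoint=False)
X, P = np.meshgrid(x, p, indexing="ij")
lam = 2*np.pi*np.fft.fftfreq(N, d=L/N)      # variable conjugate to x

f = np.exp(-X**2 - P**2)                     # initial Gaussian f0(x, p)
prop = np.exp(-1j*dt*np.outer(lam, p)/m)     # exp{-dt (p/m) d/dx} in Fourier space
for _ in range(nsteps):
    f = np.real(np.fft.ifft(prop*np.fft.fft(f, axis=0), axis=0))

t = dt*nsteps
exact = np.exp(-(X - P*t/m)**2 - P**2)       # exact solution f0(x - p t/m, p)
err = np.max(np.abs(f - exact))
```

For a full Hamiltonian $$H=T(p)+U(x)$$ one alternates such sweeps in $$x$$ and in $$p$$, which is exactly the splitting just described.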

There is a well-defined concept of entropy $$S_G(t)$$ associated with the given information field $$f(x,p,t)$$: $$$$S_G(t) = - \int\limits_{g^t(G)} f(x,p,t)\ln f(x,p,t)\,dx\,dp = const$$$$

This quantity is well-defined because if the information field is non-negative initially, it will remain such forever: $$$$f_0(x,p)\geqslant 0 \Rightarrow f(x,p,t) \geqslant 0$$$$

This fact follows from the fact that the information field is constant along the characteristics: $$$${df(x(t),p(t),t)\over dt} = {\partial f \over\partial t} + \{f,H\} = 0$$$$

Note that in the classical regime the entropy is real-valued. This is not the case in the quantum case, where entropy becomes complex-valued and the imaginary part thereof corresponds to what is known as "negentropy" or "syntropy" — its increase corresponds to the appearance of order, i.e. describes self-organisation. This effect is clearly seen in one of my simulations.

## Harmonic Oscillator

As an illustration, we shall consider here the case of the Harmonic Oscillator, i.e. the system whose Hamilton function $$H(x,p)$$ is a quadratic polynomial in both the coordinate $$x$$ and the momentum $$p$$: $$$$H(x,p) = {p^2\over2m} + {m\omega^2 x^2\over2}$$$$

The characteristic system (\ref{dotx}-\ref{dotp}) has the form: \begin{align} & \dot x = {p\over m}\label{harmdotx}\\ & \dot p = -m\omega^2 x\label{harmdotp}\\ & x(0) = x_0\\ & p(0) = p_0\\ \end{align}

The general solution of (\ref{harmdotx}-\ref{harmdotp}) in Cauchy form is: \begin{align} x(t) & = x_0\cos{\omega t} + {p_0\over m\omega}\sin{\omega t}\label{harmxsol}\\ p(t) & = p_0\cos{\omega t} - {m\omega x_0}\sin{\omega t}\label{harmpsol}\\ \end{align}

Solving (\ref{harmxsol}-\ref{harmpsol}) for $$x_0,p_0$$ we can use the method of characteristics to generate the time evolution of the classical information field $$f(x,p,t)$$: $$$$f(x,p,t) = f_0(x_0(x,p,t),p_0(x,p,t)) = f_0(x\cos{\omega t} - {p\over m\omega}\sin{\omega t}, p\cos{\omega t} + {m\omega x}\sin{\omega t})\label{harmfsol}$$$$

The form of solution (\ref{harmfsol}) shows that the support of the initial distribution $$f_0(x,p)$$ rotates in the $$x-p$$ plane with the cyclic frequency $$\omega$$. It could be immediately obtained by noticing that the vector field generating the phase flow $$p\partial_x - x\partial_p$$ could be brought to the form $$\partial_{\varphi}$$ and this is a well-known generator of rotations.
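The rotation can be checked directly by evaluating (\ref{harmfsol}); in the Python sketch below the packet parameters are arbitrary. The peak of the packet follows the classical trajectory, and the field returns to $$f_0$$ after one full period:

```python
import math

m, w = 1.0, 1.5                      # arbitrary parameters

def f0(x, p):
    # arbitrary initial Gaussian centred at the phase point (1, 0)
    return math.exp(-((x - 1.0)**2 + p**2))

def f(x, p, t):
    # method of characteristics: evaluate f0 at the back-rotated point
    c, s = math.cos(w*t), math.sin(w*t)
    return f0(x*c - (p/(m*w))*s, p*c + m*w*x*s)

T = 2*math.pi/w
# after one full period the field returns to the initial distribution
recur = abs(f(0.3, -0.7, T) - f0(0.3, -0.7))

# the peak of the packet follows the classical trajectory x(t), p(t)
t = 0.4
xt = 1.0*math.cos(w*t)               # x(t) with x0 = 1, p0 = 0
pt = -m*w*1.0*math.sin(w*t)          # p(t) with x0 = 1, p0 = 0
peak = f(xt, pt, t)                  # equals f0(1, 0) = 1
```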

Now, let's start with the initial distribution having the normalised Gaussian form centered at some arbitrary point $$(x_0,p_0)\in\mathbb{R}^2$$: $$$$f_0(x,p) = {1\over {2\pi\sigma_x\sigma_p}} \exp\left\{ -{(x-x_0)^2\over{2\sigma_x^2}} - {(p-p_0)^2\over{2\sigma_p^2}} \right\}\label{harmGaussf}$$$$

Let's calculate the energy $$E$$ in a state $$f_0$$: $$$$E = \int H(x,p)f_0(x,p)\,dx\,dp = {p_0^2\over2m} + {m\omega^2 x_0^2\over2} + {\sigma_p^2\over2m} + {m\omega^2\sigma_x^2\over2}\label{harmE}$$$$

As we see from (\ref{harmE}), the energy consists of two parts: the purely mechanical energy of the wave packet's centre, and the contribution from the information (or, more precisely, the lack thereof), which has the form of a quasi-particle with momentum $$\sigma_p$$ and coordinate $$\sigma_x$$.
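The decomposition (\ref{harmE}) is easy to verify by numerical quadrature; the packet parameters in the Python sketch below are arbitrary:

```python
import numpy as np

m, w = 1.0, 2.0
x0, p0, sx, sp = 0.5, -0.3, 0.7, 0.9       # arbitrary packet parameters

x = np.linspace(-10, 10, 801)
p = np.linspace(-10, 10, 801)
X, P = np.meshgrid(x, p, indexing="ij")
f0 = np.exp(-(X - x0)**2/(2*sx**2) - (P - p0)**2/(2*sp**2))/(2*np.pi*sx*sp)
H = P**2/(2*m) + m*w**2*X**2/2

dxdp = (x[1] - x[0])*(p[1] - p[0])
E_num = np.sum(H*f0)*dxdp                  # E = Int H f0 dx dp (Riemann sum)
E_formula = p0**2/(2*m) + m*w**2*x0**2/2 + sp**2/(2*m) + m*w**2*sx**2/2
```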

Let us now consider the Boltzmann distribution, i.e. the state of maximum entropy under the constraints of fixed energy and unit normalisation: $$$$f_B(x,p) = {1\over Z(\beta)}\exp(-\beta H(x,p))\label{harmBoltzf}$$$$

If we compare the functions (\ref{harmGaussf}) and (\ref{harmBoltzf}), we notice that they coincide under the following conditions: \begin{align} & x_0 = 0, p_0=0\\ & \sigma_x = {1\over{\omega\sqrt{m\beta}}}, \sigma_p = \sqrt{m\over\beta}\label{harmsigmaxp} \end{align}

From (\ref{harmsigmaxp}) we immediately obtain the classical analogue of Heisenberg's Uncertainty Principle: $$$$\sigma_x\sigma_p = {1\over{\beta\omega}}$$$$ What is more important is that there is an explicit connection between $$\sigma_x$$ and $$\sigma_p$$: $$$$\boxed{\sigma_x = {\sigma_p\over{m\omega}}}\label{sxsp}$$$$ In other words, if the oscillator is in the stationary impure state centered around the rest equilibrium position (i.e. the state of maximum entropy for a given total energy and normalised distribution), then its "uncertainties" of the coordinate $$\sigma_x$$ and momentum $$\sigma_p$$ cannot be arbitrary, but are related by the equation (\ref{sxsp}). This proves that there is a perfectly valid Classical Uncertainty Principle in Classical Infodynamics: the stationary impure states form a one-parameter family of functions (labelled by $$\beta$$) and, necessarily, the two values $$\sigma_x$$ and $$\sigma_p$$ cannot be chosen independently: fixing one automatically determines the other.
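Both relations can be confirmed numerically: for arbitrary $$m,\omega,\beta$$ the Gaussian (\ref{harmGaussf}) with the widths (\ref{harmsigmaxp}) coincides pointwise with the Boltzmann distribution (\ref{harmBoltzf}), whose classical partition function is $$Z=2\pi/\beta\omega$$. A Python sketch (parameter values arbitrary):

```python
import numpy as np

m, w, beta = 1.0, 2.0, 0.8                 # arbitrary parameters
sx = 1.0/(w*np.sqrt(m*beta))               # widths from (harmsigmaxp)
sp = np.sqrt(m/beta)

x = np.linspace(-5, 5, 201)
p = np.linspace(-5, 5, 201)
X, P = np.meshgrid(x, p, indexing="ij")

f_gauss = np.exp(-X**2/(2*sx**2) - P**2/(2*sp**2))/(2*np.pi*sx*sp)
Z = 2*np.pi/(beta*w)                       # classical partition function
f_boltz = np.exp(-beta*(P**2/(2*m) + m*w**2*X**2/2))/Z

gap = np.max(np.abs(f_gauss - f_boltz))    # the two distributions coincide
```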

# Quantum Infodynamics

## Operation of Quantisation

The operation of quantisation is a bit of a "magic", in the following sense. It can be disguised in many different ways, but if you follow carefully the logical steps you will inevitably arrive at an assumption or "postulate" which is inexplicable. For example, I have seen some people and textbooks actually claiming to "derive" the Schrödinger Equation but it amounts to nothing more than substituting the operators $$\hat{E} = i\hbar{\partial\over\partial t}$$ and $$\hat{P} = -i\hbar{\partial\over\partial x}$$ into the classical formula for energy $$E = {P^2\over 2m} + U(x)$$. We shall avoid all such jiggery-pokery here and honestly admit that the exact form of the procedure of quantisation is a direct act of the Transcendental Architects of the Master Universe and originating as it does from the Paradise (absolute) level of reality, cannot be satisfactorily explained in terms of anything pertaining to this, finite, level of reality.

For our purposes the procedure of quantisation (i.e. "activating" the reality of space) can be concisely stated as follows: \begin{align} & [A,B] \equiv {1\over i\hbar}(A\star B - B\star A)\label{quantumbrackets}\\ & \star \equiv \exp\left\{{i\hbar\over2}(\overset{\leftarrow}{\partial_x}\overset{\rightarrow}{\partial_p} - \overset{\leftarrow}{\partial_p}\overset{\rightarrow}{\partial_x})\right\}\\ & \{A,B\} \mapsto [A,B], \label{quantisation} \end{align}

i.e. the classical Poisson brackets $$\{A,B\}$$ are replaced with the quantum brackets $$[A,B]$$ as defined by (\ref{quantumbrackets}). This is our main postulate and it cannot be reduced to the more basic or elementary forms.

In the classical limit $$\hbar\rightarrow 0$$ the quantum brackets coincide with the classical Poisson brackets: $$$$\lim_{\hbar\rightarrow 0}[A,B] = \{A,B\}$$$$

## Fundamental Equations

Now we can rewrite the fundamental system of equations of classical infodynamics in terms of the quantum brackets and arrive at the fundamental system of quantum infodynamics: \begin{align} \dot x & = [x,H]\label{qdotx}\\ \dot p & = [p,H]\label{qdotp}\\ \partial_t\rho & = [H,\rho]\label{qdotrho} \end{align}

where we have designated the quantum information field as $$\rho(t)$$ in anticipation of the fact that it coincides with what is known in the orthodox Quantum Mechanics as the density matrix.

What is most interesting here is that the quantum brackets of the coordinate $$x$$ and momentum $$p$$ with the Hamilton function $$H(x,p,t)$$ coincide with the classical ones: \begin{align} & \{x,H\} \equiv [x,H]\\ & \{p,H\} \equiv [p,H] \end{align}

This means that the first two equations of quantum infodynamics coincide with the classical ones, i.e. the characteristics of the classical information field remain intact under the procedure of quantisation. In other words, and more strikingly emphasising the difference between our interpretation of QM and the "orthodox" one: the quantum particles actually move along the classical trajectories, but the information field associated with them evolves according to the law (\ref{qdotrho}), which is much more complicated than the classical law for the function $$f(x,p,t)$$. And when the coordinates and momentum of a particle are measured, we interact with the information about the particle rather than the particle itself; the latter becomes an "abstract" intangible entity on the quantum level, i.e. simply a pattern which provides a skeleton for the information field to be shaped around, in accordance with the reality of space both inside and surrounding the particle. Thus we have reconciled both statements in the Urantia Papers (65:6.1 and 42:5.14): the one confirming Heisenberg's Uncertainty Principle, and the seemingly impossible statement that all particles move along well-defined trajectories.
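The truncation of the quantum brackets for $$x$$ and $$p$$ can be verified symbolically. The sketch below is pure Python: phase-space polynomials are encoded as coefficient dictionaries, the star product is expanded into its bidifferential series, and the value of $$\hbar$$ is an arbitrary nonzero number. It checks that $$[x,H]=\{x,H\}$$ and $$[p,H]=\{p,H\}$$ even for an anharmonic $$H=p^2/2+x^4$$, while a genuinely nonlinear pair such as $$x^3,p^3$$ picks up an $$O(\hbar^2)$$ correction:

```python
from math import comb, factorial

# Polynomials on phase space encoded as {(i, j): c}, meaning sum of c x^i p^j.
def dx(a):
    return {(i - 1, j): c*i for (i, j), c in a.items() if i > 0}

def dp(a):
    return {(i, j - 1): c*j for (i, j), c in a.items() if j > 0}

def mul(a, b):
    out = {}
    for (i, j), c in a.items():
        for (k, l), d in b.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c*d
    return out

def add(a, b, s=1):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + s*c
    return {k: c for k, c in out.items() if c != 0}

def J(a, b, n):
    # n-th power of the bidifferential operator in the star product:
    # sum_k (-1)^k C(n,k) (dx^{n-k} dp^k a)(dp^{n-k} dx^k b)
    out = {}
    for k in range(n + 1):
        u, v = a, b
        for _ in range(n - k): u = dx(u)
        for _ in range(k):     u = dp(u)
        for _ in range(k):     v = dx(v)
        for _ in range(n - k): v = dp(v)
        out = add(out, mul(u, v), s=(-1)**k * comb(n, k))
    return out

def bracket(a, b, hbar, terms=5):
    # [a,b] = sum_n (-1)^n (hbar/2)^{2n} / (2n+1)! * J_{2n+1}(a,b)
    out = {}
    for n in range(terms):
        out = add(out, J(a, b, 2*n + 1), s=(-1)**n*(hbar/2)**(2*n)/factorial(2*n + 1))
    return out

hbar = 0.7                                    # arbitrary nonzero value
Xp, Pp = {(1, 0): 1}, {(0, 1): 1}             # the phase-space variables x and p
H = add({(0, 2): 0.5}, {(4, 0): 1.0})         # H = p^2/2 + x^4  (m = 1)

poisson = lambda a, b: add(mul(dx(a), dp(b)), mul(dp(a), dx(b)), s=-1)
lhs_x, lhs_p = bracket(Xp, H, hbar), bracket(Pp, H, hbar)
corr = bracket({(3, 0): 1}, {(0, 3): 1}, hbar)  # [x^3,p^3]: O(hbar^2) term appears
```

The higher bidifferentials $$J_3, J_5, \dots$$ vanish whenever one argument is linear in $$x,p$$, which is exactly why the characteristics survive quantisation.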

## Density Matrix And Four-Operator Algebra $$(\hat{x},\hat{p},\hat{\theta},\hat{\lambda})$$

Now let us rewrite the equation (\ref{qdotrho}) in the form connecting it with the density matrix known from the orthodox QM. To do this we must notice a certain symmetry between configuration and momentum spaces. We use the same notation as in the very important recent paper by Cabrera, Bondar et al (see "Efficient Method to generate time evolution of the Wigner function for open quantum systems", Nov 2015). It turns out that we can split the quantum operators of coordinate and momentum into a linear combination of the operators corresponding to their classical (i.e. simultaneously measurable) counterparts and "quantum corrections" as follows: \begin{align} & \qx = \hat{x} - {\hbar\over 2}\hat{\theta}\label{Xdef}\\ & \qp = \hat{p} + {\hbar\over 2}\hat{\lambda}\label{Pdef}\\ & [\qx,\qp] = i\hbar\label{XPcomm} \end{align}

To satisfy the main commutation relation (\ref{XPcomm}) we can impose the following six commutation relations on the four operators $$\hat{x},\hat{p},\hat{\lambda},\hat{\theta}$$: \begin{align} & [\hat{x},\hat{\lambda}] = i\label{xpcomm1}\\ & [\hat{p},\hat{\theta}] = i\\ & [\hat{x},\hat{p}] = 0\\ & [\hat{x},\hat{\theta}] = 0\\ & [\hat{\lambda},\hat{\theta}] = 0\\ & [\hat{p},\hat{\lambda}] = 0\label{xpcomm2}\\ \end{align}

As we see from the above, the meaning of the operators of classical coordinate $$\hat{x}$$ and classical momentum $$\hat{p}$$ is that they correspond to the simultaneously measurable (commuting, in quantum language) physical values of coordinate and momentum.

We also need a mirror algebra of operators $$\qxm$$ and $$\qpm$$: \begin{align} & \qxm = \hat{x} + {\hbar\over 2}\hat{\theta}\\ & \qpm = \hat{p} - {\hbar\over 2}\hat{\lambda}\\ & [\qxm,\qpm] = -i\hbar\label{XP1comm} \end{align}

It is straightforward to verify that (\ref{XP1comm}) is automatically satisfied by virtue of (\ref{xpcomm1}-\ref{xpcomm2}).

Now we are ready to write the equation for density matrix in the most general form: $$$$i\hbar{\partial\rho\over\partial t} = \{H(\qx,\qp) - H(\qxm,\qpm)\}\rho = \{H(\hat{x} - {\hbar\over 2}\hat{\theta},\hat{p} + {\hbar\over 2}\hat{\lambda}) - H(\hat{x} + {\hbar\over 2}\hat{\theta},\hat{p} - {\hbar\over 2}\hat{\lambda})\}\rho\label{densmat}$$$$

The formal solution of (\ref{densmat}) can be written without fixing a representation for the four operators $$(\hat{x},\hat{p},\hat{\theta},\hat{\lambda})$$: $$$$\rho(t) = \exp\left\{-{it\over\hbar}\left( H(\hat{x} - {\hbar\over 2}\hat{\theta},\hat{p} + {\hbar\over 2}\hat{\lambda}) - H(\hat{x} + {\hbar\over 2}\hat{\theta},\hat{p} - {\hbar\over 2}\hat{\lambda}) \right)\right\}\rho(0) \label{rhosol}$$$$

## $$x-p$$ Representation: The Wigner Function $$W(x,p,t)$$

With the four operators $$\hat{x},\hat{p},\hat{\lambda},\hat{\theta}$$ we have the freedom of choosing the representation in such a way that any two of them will correspond to simple multiplication and the other two to differentiation (this is analogous to the choice of $$X$$ vs $$P$$ representation in the orthodox QM).

For example, we can choose to represent $$\hat{x}$$ and $$\hat{p}$$ by multiplication by number and the other two by differentiation: \begin{align} & \hat{x} = x\label{xprepx}\\ & \hat{p} = p\label{xprepp}\\ & \hat{\theta} = -i\partial_p\label{xpreptheta}\\ & \hat{\lambda} = -i\partial_x\label{xpreplambda}\\ \end{align}

Then, the equation for density matrix (\ref{densmat}) becomes: $$$$i\hbar{\partial W\over\partial t} = \left\{H(x + {i\hbar\over 2}\partial_p, p - {i\hbar\over 2}\partial_x) - H(x - {i\hbar\over 2}\partial_p, p + {i\hbar\over 2}\partial_x)\right\}W(x,p,t), \label{WignerEq}$$$$

where we immediately recognise the well-known Moyal equation for the Wigner function $$W(x,p,t)$$.

## Olavo Representation: $$F(x,p,t)=\Psi^*(x,p,t)\Psi(x,p,t)$$

In the series of papers by L.S.F. Olavo titled "Quantum Mechanics as a Classical Theory" it was shown how a non-negative phase-space density can be obtained, and not in some arbitrary manner, but in a way that preserves the average value of the energy; in the non-relativistic case this is treated rigorously in paper XVI of the series.

Let us see what this corresponds to in our formalism. To this end, let us begin with the Schrödinger equation rather than the equation for the density matrix: $$$$i\hbar{\partial\Psi\over\partial t} = H(\qx,\qp)\Psi$$$$

Now, substitute the operators $$\qx,\qp$$ according to their definition (\ref{Xdef}-\ref{Pdef}) and concrete representation (\ref{xprepx}-\ref{xpreplambda}) to obtain the Schrödinger equation in the phase space: $$$$i\hbar{\partial\Psi(x,p,t)\over\partial t} = H(x+{i\hbar\over 2}\partial_p, p-{i\hbar\over 2}\partial_x)\Psi(x,p,t)$$$$

We can construct the distribution function which can be interpreted as a probability distribution: $$$$F(x,p,t) = \Psi^*(x,p,t)\Psi(x,p,t)$$$$

which has the important property (proved in Olavo, 2003) that the energy calculated by means of $$F(x,p,t)$$ is the same as if it were calculated from the original probability amplitude $$\psi(x,t)$$ satisfying the usual Schrödinger equation: $$$$\bar{E} = \int\left({p^2\over{2m}} + U(x)\right)F(x,p,t)\,dx\,dp = \int\psi^*(x,t)\left(-{\hbar^2\over{2m}}{\partial^2\over\partial x^2} + U(x)\right)\psi(x,t)\,dx$$$$

Now let us explore whether the Olavo distribution $$F(x,p,t)$$ can be a candidate for the quantum information field. To answer this question it is sufficient to investigate the classical limit for a free non-relativistic particle, which we shall do in the $$x-p$$ representation of the Hilbert phase space: $$$$H = {\hat{P}^2\over{2m}} = {(p-{i\hbar\over2}\partial_x)^2\over{2m}}$$$$

The Schrödinger equation becomes: $$$$i\hbar{\partial\Psi(x,p,t)\over\partial t} = {1\over2m}\left(p^2 - i\hbar p\partial_x - {\hbar^2\over4}\partial_x^2\right)\Psi(x,p,t)$$$$

and its complex conjugate is: $$$$-i\hbar{\partial\Psi^*(x,p,t)\over\partial t} = {1\over2m}\left(p^2 + i\hbar p\partial_x - {\hbar^2\over4}\partial_x^2\right)\Psi^*(x,p,t)$$$$

Now we can find the equation satisfied by the Olavo distribution $$F(x,p,t)$$: $$$$i\hbar\partial_t F(x,p,t) = i\hbar(\partial_t\Psi^*)\Psi + i\hbar\Psi^*\partial_t\Psi = \Psi {1\over2m}\left(-p^2 - i\hbar p\partial_x + {\hbar^2\over4}\partial_x^2\right)\Psi^* + \Psi^*{1\over2m}\left( p^2 - i\hbar p\partial_x - {\hbar^2\over4}\partial_x^2\right)\Psi$$$$ $$$$i\hbar{\partial F(x,p,t)\over\partial t} = -{i\hbar p\over2m}{\partial F\over\partial x} + {\hbar^2\over8m}\left(\Psi\Psi^*_{xx} - \Psi^*\Psi_{xx}\right)$$$$

The last equation can be written in the following more compact form: $$$${\partial F\over\partial t} + {p\over2m}{\partial F\over\partial x} = {\hbar\over4m}\Im(\Psi\Psi^*_{xx})$$$$

From here we can immediately observe that the classical limit $$\hbar\to0$$ of this equation does not coincide with the Liouville equation. It differs by an extra factor of $$1/2$$ in the velocity $$p/m$$: $$$${\partial F\over\partial t} + {p\over2m}{\partial F\over\partial x} = 0$$$$

This result most likely corresponds to the well-known fact that the phase velocity of the de Broglie wave of a free non-relativistic massive particle is equal to $$V/2$$, where $$V$$ is the actual velocity of the particle itself.

This means that the classical limit of the Olavo function lags behind the true classical distribution function and, therefore, this object (useful though it is, due to its non-negativity) unfortunately cannot be considered as a candidate for the quantum information field either.

## $$x-\theta$$ Representation: The Blokhintsev Function $$B(x,\theta,t)$$

Sometimes it can be convenient to use a different representation, namely the one where the operators $$\hat{x}$$ and $$\hat{\theta}$$ have a simple form: \begin{align} & \hat{x} = x\\ & \hat{p} = i\partial_\theta\\ & \hat{\theta} = \theta\\ & \hat{\lambda} = -i\partial_x\\ \end{align}

The master equation takes the form, originally found by Blokhintsev (in the particular case of non-relativistic Hamiltonian): $$$$i\hbar{\partial B\over\partial t} = \left\{H(x - {\hbar\over 2}\theta, i\partial_\theta - {i\hbar\over 2}\partial_x) - H(x + {\hbar\over 2}\theta, i\partial_\theta + {i\hbar\over 2}\partial_x)\right\}B(x,\theta,t) \label{BlokhintsevEq}$$$$

The $$x$$-dependence of the Hamiltonian $$H(x,p)$$ can be very complex, because it comes from the potential energy term $$U(x)$$, whereas the $$p$$-dependence comes from the kinetic energy term $$T(p)$$ and in the non-relativistic case is a simple quadratic polynomial. In this case, the $$x-\theta$$ representation allows one to deal with a much simpler partial differential equation than the pseudo-differential equation of the $$x-p$$ representation.

In any case, one has to deal with both the $$x-p$$ and the $$x-\theta$$ representations when solving the Cauchy problem numerically using the spectral split propagator technique, because in this method one splits the full time-propagator into a composition of pieces, each of which takes a simple form in one of these representations but not in the other.

The connection between Wigner and Blokhintsev functions is established via Fourier transform: \begin{align} W(x,p,t) & = {1\over2\pi}\int B(x,\theta,t)e^{ip\theta}\,d\theta & \equiv \mathcal{F}_{\theta\rightarrow p}[B(x,\theta,t)]\\ B(x,\theta,t) & = \int W(x,p,t)e^{-ip\theta}\,dp & \equiv \mathcal{F}^{-1}_{p\rightarrow\theta}[W(x,p,t)] \end{align}
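The discrete counterpart of this Fourier pair is a plain FFT, up to phase factors accounting for the offset of the $$p$$ grid. A Python sketch with an arbitrary grid and test profile:

```python
import numpy as np

# Discrete version of the Fourier pair at a fixed x (grids are arbitrary).
N, dp = 256, 0.1
p = (np.arange(N) - N//2)*dp                  # momentum grid
theta = 2*np.pi*np.fft.fftfreq(N, d=dp)       # conjugate grid: dp*dtheta*N = 2*pi

W = np.exp(-(p - 0.5)**2)                     # a p-slice of some Wigner function

# B(x,theta) = Int W(x,p) e^{-ip theta} dp, as a Riemann sum via the FFT
# (the phase factor accounts for the offset p[0] of the grid)
B = dp*np.exp(-1j*p[0]*theta)*np.fft.fft(W)

# W(x,p) = (1/2pi) Int B(x,theta) e^{+ip theta} dtheta: the inverse transform
W_back = np.real(np.fft.ifft(B*np.exp(1j*p[0]*theta))/dp)

roundtrip = np.max(np.abs(W_back - W))
norm = B[0].real                              # B(theta=0) = Int W dp = sqrt(pi)
```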

## Spectral Split Propagator in General Case

Frequently, the Hamiltonian $$H(x,p)$$ happens to be a sum of the kinetic and potential energy terms: $$$$H(x,p) = T(p) + U(x)$$$$

Then, the formula (\ref{rhosol}) becomes: $$$$\rho(t) = \exp\left\{-{it\over\hbar}\left( T(\hat{p} + {\hbar\over 2}\hat{\lambda}) - T(\hat{p} - {\hbar\over 2}\hat{\lambda}) + U(\hat{x} - {\hbar\over 2}\hat{\theta}) - U(\hat{x} + {\hbar\over 2}\hat{\theta}) \right)\right\}\rho(0) \label{rhosolTU}$$$$

The formula (\ref{rhosolTU}) can be rewritten in a more compact form using the definition of a quantum differential (see also \ref{qd}): \begin{align} & \qd f(x,dx) \equiv {1\over i\hbar}\left(f(x+{i\hbar\over 2}dx) - f(x-{{i\hbar\over 2}}dx)\right)\\ & \rho(t) = \exp\left\{t\left(\qd T(\hat{p},-i\hat{\lambda}) + \qd U(\hat{x},i\hat{\theta})\right)\right\}\rho(0)\label{rhosolqd} \end{align}

Now, let us assume that our initial state $$\rho(0)$$ is realised in the form of a function depending on the classical phase space variables $$x,p$$, (i.e. the Wigner Function) $$W_0(x,p)$$. Then, we can make the formula (\ref{rhosolqd}) more specific: $$$$W(x,p,t) = \exp\left\{t\left(\qd T(\hat{p},-i\hat{\lambda}) + \qd U(\hat{x},i\hat{\theta})\right)\right\}W_0(x,p)\label{Wsolqd}$$$$

The formula (\ref{Wsolqd}) can be turned into a computational scheme for calculating the value $$W(x,p,t+\Delta t)$$ from its value at the previous moment of time $$W(x,p,t)$$ (at all points of the phase space $$x,p$$, not just the current one!): $$$$W(x,p,t+\Delta t) \approx \exp\left\{\Delta t\left(\qd T(\hat{p},-i\hat{\lambda}) + \qd U(\hat{x},i\hat{\theta})\right)\right\}W(x,p,t)\label{Wsoldt}$$$$

Now we have a small parameter $$\Delta t$$ inside the exponent and so we can make use of one of the following approximations: \begin{align} &\exp\left\{\Delta t(\hat{A}+\hat{B})\right\} \approx \exp\left\{\Delta t\hat{A}\right\} \exp\left\{\Delta t\hat{B}\right\}\label{forder}\\ &\exp\left\{\Delta t(\hat{A}+\hat{B})\right\} \approx \exp\left\{\Delta t\hat{A}\over2\right\} \exp\left\{\Delta t\hat{B}\right\} \exp\left\{\Delta t\hat{A}\over2\right\}\label{sorder} \end{align}

The first order formula (\ref{forder}) is called Lie splitting and the second-order formula (\ref{sorder}) is known as symmetric Strang splitting approximation. The error in the first case diminishes linearly with decreasing step $$\Delta t$$ and in the second case it diminishes quadratically, making it more attractive for computations.

The local error can be estimated easily for the Lie splitting method: \begin{align} & u' = Au + Bu, u_0 = u(0)\\ & u(t) = e^{t(A+B)}u_0\\ & u_{n+1} \approx e^{\Delta tA}e^{\Delta tB}u_n\\ & \Delta^{(1)}(h) \equiv u_1(h) - u(h) = \left(e^{hA}e^{hB} - e^{h(A+B)}\right)u_0 = {h^2\over2}[A,B]u_0 + O(h^3) \end{align}

For the second-order (Strang) splitting we have a slightly longer expression: $$$$\Delta^{(2)}(h) \equiv u_1(h) - u(h) = \left(e^{hA/2}e^{hB}e^{hA/2} - e^{h(A+B)}\right)u_0 = h^3\left({1\over12}[B,[B,A]]-{1\over24}[A,[A,B]]\right)u_0 + O(h^4)$$$$
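Both error estimates are easy to confirm numerically with small matrices standing in for the operators $$\hat{A},\hat{B}$$ (the random matrices, seed and step sizes below are arbitrary): halving $$h$$ should reduce the local error roughly 4-fold for Lie splitting and 8-fold for Strang splitting:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = 0.5*rng.normal(size=(4, 4))
B = 0.5*rng.normal(size=(4, 4))
u0 = rng.normal(size=4)

def lie_err(h):     # || (e^{hA} e^{hB} - e^{h(A+B)}) u0 || = O(h^2)
    return np.linalg.norm((expm(h*A) @ expm(h*B) - expm(h*(A+B))) @ u0)

def strang_err(h):  # || (e^{hA/2} e^{hB} e^{hA/2} - e^{h(A+B)}) u0 || = O(h^3)
    return np.linalg.norm((expm(h*A/2) @ expm(h*B) @ expm(h*A/2) - expm(h*(A+B))) @ u0)

r1 = lie_err(0.02)/lie_err(0.01)        # expect ~ 2^2 = 4
r2 = strang_err(0.02)/strang_err(0.01)  # expect ~ 2^3 = 8
```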

Let us introduce two operators $$\tilde{T}(\hat{p},\hat{\lambda},\Delta t)$$ and $$\tilde{U}(\hat{x},\hat{\theta},\Delta t)$$ as follows: \begin{align} & \tilde{T}(\hat{p},\hat{\lambda},\Delta t) \equiv \exp\left\{\Delta t\qd T(\hat{p},-i\hat{\lambda})\right\}\label{tildeT}\\ & \tilde{U}(\hat{x},\hat{\theta},\Delta t) \equiv \exp\left\{\Delta t\qd U(\hat{x},i\hat{\theta})\right\}\label{tildeU} \end{align}

Then we can use, for example, the first-order BCH approximation to rewrite (\ref{Wsoldt}) in the following form: $$$$W(x,p,t+\Delta t) \approx \tilde{T}(\hat{p},\hat{\lambda},\Delta t)\tilde{U}(\hat{x},\hat{\theta},\Delta t)W(x,p,t)\label{Wsoldt2}$$$$

To simplify the action of the operator $$\tilde{U}(\hat{x},\hat{\theta},\Delta t)$$ we must Fourier-transform from the initial $$x-p$$ representation to Blokhintsev representation $$x-\theta$$: $$$$W(x,p,t+\Delta t) \approx \tilde{T}(\hat{p},\hat{\lambda},\Delta t) \mathcal{F}_{\theta\rightarrow p} \tilde{U}(x,\theta,\Delta t)\mathcal{F}^{-1}_{p\rightarrow\theta}W(x,p,t)\label{Wsoldt3}$$$$

Notice how the hats over $$\hat{x}$$ and $$\hat{\theta}$$ in $$\tilde{U}(\hat{x},\hat{\theta},\Delta t)$$ are gone: in the $$x-\theta$$ representation these operators act as simple multiplication by $$x$$ and $$\theta$$ respectively.

The same can be done for $$\tilde{T}(\hat{p},\hat{\lambda},\Delta t)$$ by temporarily switching to $$p-\lambda$$ representation: $$$$W(x,p,t+\Delta t) \approx \mathcal{F}_{\lambda\rightarrow x} \tilde{T}(p,\lambda,\Delta t) \mathcal{F}^{-1}_{x\rightarrow\lambda} \mathcal{F}_{\theta\rightarrow p} \tilde{U}(x,\theta,\Delta t)\mathcal{F}^{-1}_{p\rightarrow\theta}W(x,p,t)\label{Wsoldt4}$$$$

The formula (\ref{Wsoldt4}) is in a form that can be used on a computer, because all its terms are readily computable. Nevertheless, let us rewrite it in terms of the original exponentials, to make the whole computational procedure more explicit: $$$$W(x,p,t+\Delta t) \approx \mathcal{F}_{\lambda\rightarrow x} \exp\left\{\Delta t\qd T(p,-i\lambda)\right\} \mathcal{F}^{-1}_{x\rightarrow\lambda} \mathcal{F}_{\theta\rightarrow p} \exp\left\{\Delta t\qd U(x,i\theta)\right\} \mathcal{F}^{-1}_{p\rightarrow\theta}W(x,p,t)\label{Wsoldt5}$$$$

Likewise, we can use the second-order decomposition (\ref{sorder}) to construct a much more accurate computational scheme than (\ref{Wsoldt5}): $$$$W(x,p,t+\Delta t) \approx \mathcal{F}_{\lambda\rightarrow x} \exp\left\{{\Delta t\over2}\qd T(p,-i\lambda)\right\} \mathcal{F}_{\theta\rightarrow p} \mathcal{F}^{-1}_{x\rightarrow\lambda} \exp\left\{\Delta t\qd U(x,i\theta)\right\} \mathcal{F}^{-1}_{p\rightarrow\theta} \mathcal{F}_{\lambda\rightarrow x} \exp\left\{{\Delta t\over2}\qd T(p,-i\lambda)\right\} \mathcal{F}^{-1}_{x\rightarrow\lambda}W(x,p,t)\label{Wsoldt6}$$$$
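One full Strang step of this kind can be sketched in a few lines of Python. This is only an illustration under stated assumptions: $$W$$ is stored as a 2-D array `W[ix, ip]`, the names `expT_half` and `expU` are hypothetical arrays holding the two exponential propagators already evaluated elementwise in their multiplicative representations, and numpy's FFT conventions stand in for the $$\mathcal{F}$$ operators:

```python
# Sketch of one second-order (Strang) split-propagator step.
# Assumptions (not from the original text):
#   W         -- complex array indexed [ix, ip]
#   expT_half -- array indexed [ilam, ip]: exp{(dt/2) qd T(p, -i*lambda)}
#   expU      -- array indexed [ix, ith]:  exp{dt qd U(x, i*theta)}
import numpy as np

def strang_step(W, expT_half, expU):
    # half kinetic step: x -> lambda, multiply, lambda -> x
    W = np.fft.ifft(expT_half * np.fft.fft(W, axis=0), axis=0)
    # full potential step: p -> theta, multiply, theta -> p
    W = np.fft.ifft(expU * np.fft.fft(W, axis=1), axis=1)
    # second half kinetic step
    W = np.fft.ifft(expT_half * np.fft.fft(W, axis=0), axis=0)
    return W
```

With both propagator arrays set to unity the step reduces to forward/inverse FFT pairs and leaves $$W$$ unchanged, which is a quick sanity check of the transform bookkeeping.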

## Free Relativistic Quantum Particle in One Dimension

In this case the Hamiltonian consists of the kinetic energy alone (including the rest mass): $$$$H(x,p) = T(p) = c \sqrt{p^2 + m^2 c^2}$$$$

The equation for the information field takes the following simple form: $$$$i\hbar{\partial\rho\over\partial t} = \left[T(\hat{p}+{\hbar\over2}\hat{\lambda}) - T(\hat{p}-{\hbar\over2}\hat{\lambda})\right]\rho$$$$

Its formal solution in Cauchy form is: $$$$\rho(t) = \exp\left\{-{it\over\hbar}\left[T(\hat{p}+{\hbar\over2}\hat{\lambda}) - T(\hat{p}-{\hbar\over2}\hat{\lambda})\right]\right\}\rho(0) \label{rhosolT}$$$$

It is convenient to specify the initial state as a function $$W_0(x,p)$$ defined on the classical phase space, i.e. in the $$x-p$$ representation. Therefore, we shall need to "jump" through one intermediate space of Fourier images to compute the operator-valued exponential in (\ref{rhosolT}): $$$$(x-p)\overset{\mathcal{F}_{x\rightarrow\lambda}^{-1}}{\mapsto}(\lambda-p)\overset{\mathcal{F}_{\lambda\rightarrow x}}{\mapsto}(x-p)$$$$

This procedure allows us to write the explicit form for the information field in $$x-p$$ representation: $$$$W(x,p,t) = \mathcal{F}_{\lambda\rightarrow x} \exp\left\{-{it\over\hbar}\left[T(p+{\hbar\over2}\lambda) - T(p-{\hbar\over2}\lambda)\right]\right\} \mathcal{F}_{x\rightarrow\lambda}^{-1}[W_0(x,p)]\label{Wsol}$$$$

Notice that, unlike (\ref{rhosolT}), in (\ref{Wsol}) we no longer have hats over $$p$$ and $$\lambda$$, because they are not operators anymore, but simple multiplications by numbers. Great indeed is the power of spectral analysis: it allows us to convert differential operators into multiplications. This provides the simplest illustration of the "spectral split propagator", which does not even require us to "split" the exponential, as both operators $$\hat{p}$$ and $$\hat{\lambda}$$ were cast into the multiplicative representation simultaneously. (If we had an interaction term $$U(x)$$, it would have been impossible to do so for all four operators $$\hat{x},\hat{p},\hat{\theta},\hat{\lambda}$$ at once, and we would solve this problem by "splitting" the exponential (à la BCH, to first or even second order) into components containing only pairs of them.)

Now we have a convenient and concrete formula which can be turned into an algorithm simulated on a computer: $$$$W(x,p,t) = \mathcal{F}_{\lambda\rightarrow x} \exp\left\{-{ict\over\hbar}\left[\sqrt{(p+{\hbar\over2}\lambda)^2+m^2c^2} - \sqrt{(p-{\hbar\over2}\lambda)^2+m^2c^2}\right]\right\} \mathcal{F}_{x\rightarrow\lambda}^{-1}[W_0(x,p)]$$$$

Here $$\mathcal{F}_{\lambda\rightarrow x}$$ and $$\mathcal{F}_{x\rightarrow\lambda}^{-1}$$ can be computed using an FFT (Fast Fourier Transform) implementation, either in C or in Python.
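As a sketch of how the formula above can be turned into code, here is a minimal numpy implementation in units where $$\hbar = c = m = 1$$. The grid sizes and the Gaussian initial state are illustrative assumptions, not taken from the text; note that $$\lambda$$, the Fourier conjugate of $$x$$, comes from `np.fft.fftfreq` (with a factor $$2\pi$$ for numpy's conventions):

```python
# Sketch: free relativistic particle propagated in x-p phase space,
# in units hbar = c = m = 1. Grids and the initial state are illustrative.
import numpy as np

Nx, Np = 128, 128
x = np.linspace(-10, 10, Nx, endpoint=False)
p = np.linspace(-10, 10, Np, endpoint=False)
X, P = np.meshgrid(x, p, indexing='ij')

# lambda: Fourier conjugate of x (2*pi accounts for numpy's FFT convention)
lam = 2 * np.pi * np.fft.fftfreq(Nx, d=x[1] - x[0])
LAM, PP = np.meshgrid(lam, p, indexing='ij')

def T(p):
    return np.sqrt(p**2 + 1.0)   # c*sqrt(p^2 + m^2 c^2) with m = c = 1

def propagate(W0, t):
    # exp{-(i t/hbar)[T(p + hbar*lam/2) - T(p - hbar*lam/2)]}, hbar = 1
    phase = np.exp(-1j * t * (T(PP + LAM / 2) - T(PP - LAM / 2)))
    return np.fft.ifft(phase * np.fft.fft(W0, axis=0), axis=0)

# Gaussian (coherent-state-like) initial state centred at x = -5, p = 2
W0 = np.exp(-(X + 5)**2 - (P - 2)**2)
W1 = propagate(W0, 2.0)          # W1 is real up to round-off
```

Since the exponent is odd in $$\lambda$$, the propagated $$W$$ stays real up to round-off, and the packet drifts in $$+x$$ at roughly the relativistic group velocity $$p/\sqrt{p^2+1}$$.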

Unfortunately, once the number of spatial dimensions goes beyond one (i.e. the phase space has 4 or more dimensions), the only realistic way to solve the equations of quantum infodynamics is to use a multigrid method with adaptive mesh refinement (AMR). This technique is well developed for classical numerical methods such as the finite element method, but, to my knowledge, it has not yet been applied to the spectral split propagator technique. Therefore, this is what I am developing right now: an adaptive mesh refinement generalisation of the spectral split propagator, so that it can be applied not only to $$3+3$$ phase spaces, but also to $$4+4$$ (GR) and even $$5+5$$ (Kaluza-Klein type 5D gravity).

# Bibliography

1. C.K. Zachos, D.B. Fairlie, T.L. Curtright (eds.), "Quantum Mechanics in Phase Space: An Overview with Selected Papers", World Scientific 2005
2. L.D. Landau, E.M. Lifshitz, Theoretical Physics, Vol. 3: "Quantum Mechanics: Non-Relativistic Theory", Moscow 1989
3. T. Aivazian, "Extended Hilbert Phase Space and Dissipative Quantum Systems", 2017, arXiv
4. Renan Cabrera, Denys Bondar, Kurt Jacobs, Herschel A. Rabitz, "Efficient method to generate time evolution of the Wigner function for open quantum systems", 2015, arXiv
5. L.S.F. Olavo, "Quantum Mechanics as a Classical Theory XVI", 2008, arXiv
6. D.I. Blokhintsev, "Space and Time in the Microworld", Moscow 1982
7. D.I. Blokhintsev, "Fundamental Questions of Quantum Mechanics", Moscow 1987 (2nd ed.)
8. A. Cohn, M. Rabinowitz, "Classical Tunneling"
9. Kirk T. McDonald, "A Damped Oscillator as a Hamiltonian System", Joseph Henry Laboratories, Princeton University, 2015
10. Emmanuel Saucedo-Flores, Ruben Ruelas, Victor Manuel Rangel Cobian, "Particle in a Finite Dual Well Potential System: A Simple Quantum Model for Well Super-Localization and Ultra-Dispersion Duality Effect. An Alternative Model for the Hydrogen Atom", ResearchGate
11. N.A. Kozyrev, "Selected Works", Leningrad State University 1991
12. V.F. Zolotarev, B.B. Shamshev, "The Structure and Properties of the Physical Vacuum Medium", Izvestiya Vuzov MV i SSO SSSR, 1985
13. L.S. Shikhobalov, "N.A. Kozyrev's Causal Mechanics: An Analysis of the Foundations", in "Selected Works of N.A. Kozyrev", Leningrad State University 1991
14. A.A. Vlasov, "Many-Particle Theory", Moscow 1950 (reissued by URSS in 2016 in the series "Physics and Mathematics Heritage")
15. A.A. Vlasov, "Statistical Distribution Functions", Moscow, Nauka, 1966
16. A.A. Vlasov, "Nonlocal Statistical Mechanics", Moscow, Nauka, 1978 (published after the death of Anatoly Aleksandrovich; the publication went ahead despite the objections of certain "official circles", thanks to the involvement of Acad. N.N. Bogolyubov)

# Programs used for Simulations in Quantum Infodynamics

All programs used for simulations in Quantum Infodynamics are freely available under the GNU General Public License in my GitHub quantum-infodynamics repository. The main programs are solve.py, solanim.py and solplay.py. At some point I will describe how to use them, but for now you can look in the "bin" directory for scripts showing the basic usage.

The main solver solve.py uses the second-order spectral split propagator method described in the recent paper by Cabrera et al. The $$x-p$$ mesh grid is of fixed size (specified on the command line), but the time evolution is controlled via adaptive step-size adjustment to match the specified absolute tolerance (the "-tol" option).

The program solanim.py generates the frames which are subsequently turned into an animation (using ffmpeg, see bin/harmonic-oscillator.sh). Its input is the solution produced by the main solver solve.py.

The program solplay.py is very similar to solanim.py, but instead of generating frames it displays them on the screen. It can be used to quickly preview the results of a simulation while waiting for all the frames produced by solanim.py to be ready. It can also save the animation to a file specified on the command line (the "-o" option).

Presently I am designing a more generic infrastructure called "Quantum Infodynamics Explorer", which will allow one to explore dynamical systems without using up huge amounts of disk space and memory. When a particularly interesting phenomenon is detected during such a (potentially very long, lasting days or weeks) simulation run, it will be able to create a snapshot from which that scenario can be re-created using the standard "saving to disk" solver program solve.py.