arXiv:quant-ph/0408192v5 17 Oct 2005

Entropy 2005, 7[4], 253-299    ISSN 1099-4300    www.mdpi.org/entropy/

Review

Differential Entropy and Time

Piotr Garbaczewski
Institute of Physics, University of Zielona Góra, ul. Szafrana 4a, 65-516 Zielona Góra, Poland
E-mail: [email protected]

Received: 22 August 2005 / Accepted: 17 October 2005 / Published: 18 October 2005

Abstract: We give a detailed analysis of the Gibbs-type entropy notion and its dynamical behavior in case of time-dependent continuous probability distributions of varied origins: related to classical dissipative processes and to the quantum Schrödinger picture wave-packet evolution. The purpose-dependent usage of conditional Kullback-Leibler and Gibbs (Shannon) entropies is explained in case of non-equilibrium Smoluchowski processes. A very different temporal behavior of Gibbs and Kullback entropies is confronted. A specific conceptual niche is addressed, where quantum von Neumann, classical Kullback-Leibler and Gibbs entropies can be consistently introduced as information measures for the same physical system. If the dynamics of probability densities is driven by the Schrödinger picture wave-packet evolution, Gibbs-type and related Fisher information functionals appear to quantify nontrivial power transfer processes in the mean. This observation supports the view that the Shannon entropy extends to non-equilibrium phenomena and provides an insight into physically relevant dynamics which is typically inaccessible, and in turn ignored, in the literature.

Keywords: differential entropy; von Neumann entropy; Shannon entropy; Kullback-Leibler entropy; Gibbs entropy; entropy methods; entropy functionals; Fisher information; Ornstein-Uhlenbeck process; Schrödinger picture evolution; dynamics of probability densities; invariant density.

PACS codes: 05.45.+b, 02.50.-r, 03.65.Ta, 03.67.-a

1 Introduction

Among numerous manifestations of the concept of entropy in physics and mathematics, the information-theory based entropy methods have been devised to investigate the large time behavior of solutions of various (mostly dissipative) partial differential equations. Shannon, Kullback and von Neumann entropies are typical tools, designed to quantify the information content and possibly the information loss for various classical and quantum systems in a specified state. For quantum systems the von Neumann entropy vanishes on pure states, hence one presumes to have complete information about the state of such a system. On the other hand, for pure states the Gibbs-type (Shannon, e.g. differential) entropy gives access to another information-theory level, associated with a probability distribution inferred from an L²(Rⁿ) wave packet. Although Shannon or Kullback entropies are interpreted as information measures, it is quite natural to think of entropy as a measure of uncertainty. In view of the profound role played by the Shannon entropy in the formulation of entropic indeterminacy relations [1, 2], the term information in the present paper is mostly used in the technical sense, meaning the inverse of uncertainty. In physics, the notion of entropy is typically regarded as a measure of the degree of randomness and of the tendency (trends) of physical systems to become less and less organized. Throughout the paper we shall attribute a more concrete meaning to the term organization, both in the classical and quantum contexts. Namely, we shall pay special attention to quantifying, in terms of suitable entropy functionals, the degree of the probability distribution (de)localization, and the dynamical behavior of this specific - localization uncertainty - feature of a physical system.

1.1 Notions of entropy Notions of entropy, information and uncertainty are intertwined and cannot be sharply differen- tiated. While entropy and uncertainty are - to some extent synonymous - measures of ignorance (lack of information, uncertainty), the complementary notion of information basically quantifies the ability of observers to make reliable predictions about the system, [4, 6, 7]: the more aware one is about chances of a concrete outcome, the lower is the uncertainty of this outcome. Normally, the growth of uncertainty is identified with an increase of the entropy which in turn is interpreted as an information loss. Consult e.g. standard formulations of the celebrated Boltzmann H-theorem. Following Ref. [3] let us recall that entropy - be it thermodynamical (Gibbs-Boltzmann), dynam- ical, von Neumann, Wehrl, Shannon, Renyi, Tsallis or any other conceivable candidate - has an exceptional status among physical quantities. As a derived quantity it does not show up in any fundamental equation of motion. Generically, there is no a priori preferred notion of entropy (perhaps, except for the thermodynamical Clausius case) in physical applications and its specific choice appears to be purpose-dependent. As an obvious remnant of the standard thermodynamical reasoning, one expects entropy to be a state function of the system (thermodynamical notions of equilibrium or near-equilibrium are Entropy 2005, 7[4], 253-299 3 implicit). This state connotation is a source of ambiguities, since inequivalent notions of the system state are used in the description of physical systems, be them classical, thermodynamical and quantum. Not to mention rather specialized meaning of state employed in the standard information theory, [4, 9, 7]. A primitive information-theory system is simply a bit whose two admissible states are binary digits 1 and 0. Its quantum equivalent is a qubit whose admissible states are vectors in a two-dimensional Hilbert space, hence an infinity of pure states of a two-level quantum system. The information theory framework, if extended to more complicated systems, employs a plethora of notions of state [4, 7]. As very special cases we may mention a phase-space point as the determi- native of the state of a classical dynamical system, or the macroscopic notion of a thermodynamical state in its classical and quantum versions, [3]. A symbolic mathematical representation of quantum states in terms of wave vectors and/or density operators is expected to provide an experimentally verifiable ”information” about the system. To obtain a catalogue of the corresponding statistical predictions, an a priori choice of suitable observables (and thus measurement procedures) is necessary. Then, a casual interpretation of entropy as a measure of one’s uncertainty about measurable properties of a system in a prescribed quantum state may acquire an unambiguous meaning. When adopting the state notion to the Hilbert space language of quantum theory, we realize that normalized wave functions and density operators, which are traditionally supposed to determine the quantum state, allow to extend the notion of entropy to certain functionals of the state of the quantum system. The von Neumann entropy

S(ρ̂) = −k_B Tr(ρ̂ ln ρ̂)   (1)

of a quantum state (e.g. the density operator ρ̂), though often infinite, is typically related to the degree of departure from purity (e.g. the "mixedness" level) of the state and is particularly useful while quantifying measurements performed upon finite quantum systems. For a given density operator ρ̂, the von Neumann entropy is commonly accepted as a reliable measure of the information content (about the departure from purity), to be experimentally extracted from a quantum system in a given state. Only under very specific circumstances, like e.g. in an optimal "quantum experiment" [11, 12] which refers to the diagonal density operator (with p_i, 1 ≤ i ≤ N, being its eigenvalues), the information gain can be described in terms of both von Neumann's and the standard Shannon measure of information:

−Tr(ρ̂ ln ρ̂) = −Σ_i p_i ln p_i .   (2)

Since the von Neumann entropy is invariant under unitary transformations, the result exhibits an invariance under the change of the Hilbert space basis and the conservation in time for a closed system (when there is no information/energy exchange with the environment). Thus, Schrödinger dynamics has no impact on the von Neumann encoding of information, see e.g. also [13, 14] for a related discussion.

Pure states have vanishing von Neumann entropy ( (ˆρ) = 0 ”for the pure states and only for S them”, [3]) and are normally considered as irrelevant from the quantum information theory per- spective, since ”one has complete information” [3] about such states. One may even say that a pure state is an unjustified over-idealization, since otherwise it would constitute e.g. a completely measured state of a system in an infinite Hilbert space, [15]. A colloquial interpretation of this situation is: since the wave function provides a complete description of a quantum system, surely we have no uncertainty about this quantum system, [16]. Note that as a side comment we find in Ref. [15] a minor excuse: this idealization, often employed for position-momentum degrees of freedom, is usually an adequate approximation, to be read as an answer to an objection of Ref. [17]: ”although continuous observables such as the position are familiar enough, they are really unphysical idealizations”, c.f. in this connection [18] for an alternative view. On the other hand, the classic Shannon entropy is known to be a natural measure of the amount of uncertainty related to measurements for pairs of observables, discrete and continuous on an equal footing, when a quantum system actually is in a pure state. Hence, a properly posed question reveals obvious uncertainties where at the first glance we have no uncertainty. The related entropic uncertainty relations for finite and infinite quantum systems have received due attention in the literature, in addition to direct investigations of the configuration space entropic uncertainty/information measure of L2(Rn) wave packets, [15], [19] -[42]. The commonly used in the literature notions of Shannon and von Neumann entropy, although coinciding in some cases, definitely refer to different categories of predictions and information measures for physical systems. In contrast to the exclusively quantum concept of von Neumann entropy, the Shannon entropy - quite apart from its purely classical provenance - appears to capture a number of properties of quantum systems which cannot be detected nor described by means of the von Neumann entropy. Obviously, there is no use of Shannon entropy if one is interested in verifying for mixed quantum states, how much actually a given state is mixed. On the other hand, von Neumann entropy appears to be useless in the analysis of L2(R) wave packets and their dynamical manifestations (time-dependent analysis) which are currently in the reach of experimental techniques, [44, 43]. It is enough to invoke pure quantum states in L2(Rn) and standard position-momentum observ- ables which, quite apart from a hasty criticism [17], still stand for a valid canonical quantization cornerstone of quantum theory, [18]. Those somewhat underestimated facts seem to underlie statements about an inadequacy of Shan- non entropy in the quantum context, [11], while an equally valid statement is that the von Neu- mann entropy happens to be inadequate. The solution of the dilemma lies in specifying the purpose, see also [12].

1.2 Differential entropy We are primarily interested in the information content of pure quantum states in L2(Rn), and thus pursue the following (albeit scope-limited, c.f. [44, 43] for experimental justifications) view: Entropy 2005, 7[4], 253-299 5 an isolated system is represented in quantum mechanics by a state vector that conveys statistical predictions for possible measurement outcomes. Consequently, it is the state vector which we regard as an information (alternatively, predictions and uncertainty) resource and therefore questions like, [45]: how much information in the state vector or information about what, may be considered meaningful. Let us emphasize that we do not attempt to define an information content of a physical system as a whole, but rather we wish to set appropriate measures of uncertainty and information for concrete pure states of a quantum system. A particular quantum state should not be misinterpreted to provide a complete description of the corresponding physical system itself, [46]. In fact, when we declare any Schr¨odinger’s ψ as the state of a quantum system, we effectively make a statement about the probabilities of obtaining certain results upon measurement of suitable observables, hence we refer to a definite experimental setup. Therefore, to change or influence this state is not quite the same as changing or influencing the system. Our, still vague notion of information, does not refer to qubits since we shall basically operate in an infinite dimensional Hilbert space. This does not prohibit a consistent use of information- theory concepts, since an analytic information content of a quantum state vector, in the least reduced to a properly handled plane wave, is not merely an abstraction and can be dug out in realistic experiments, including those specific to time-dependent quantum mechanics, [43, 44]. On the way one may verify a compliance with quantum theory of a number of well defined properties of the quantum system for which: the only features known before an experiment is performed are probabilities of various events to occur, [11]. In the case of a quantum mechanical position probability density, its analytic form is assumed to arise in conjunction with solutions of the Schr¨odinger equation. Then, we need to generalize the original Shannon’s entropy for a discrete set of probabilities to the entropy of a continuous distribution with the density distribution function [4], which is also named the differential entropy, [5, 6]. Most of our further discussion will be set in a specific context of quantum position-momentum information/uncertainty measures, where the classical form of Shannon differential entropy [4] has been used for years in the formulation of entropic versions of Heisenberg-type indeterminacy relations, [19, 20, 21, 25]. The entropic form of indeterminacy relations, enters the stage through the of L2(Rn) wave packets, in conjunction with the Born statistical interpretation, hence with ψ-induced probability measures in position and momentum space, [19, 20]. The experimental connotations pertaining to the notion of uncertainty or indeterminacy are rather obvious, although they do not quite fit to the current quantum information idea of a ”useful” quantum measurement, [11]. Given the probability density ρ(x) on Rn, we define the differential entropy [6, 5, 48]), as follows:

S(ρ) = −∫ ρ(x) ln ρ(x) dx .   (3)

One may consider a subset Γ ⊂ Rⁿ to be a support of ρ instead of Rⁿ; this is guaranteed by the convention that the integrand in Eq. (3) vanishes if ρ does. Note a minor but crucial notational difference between ρ̂ and ρ. Let us stress that, modulo minor exceptions, throughout the paper we carefully avoid dimensional quantities (all relevant dimensional constants like the Planck constant ℏ are scaled away), since otherwise the above differential entropy definition may be dimensionally defective and have no physical meaning. An extensive list of differential entropy values for various probability densities can be found in Ref. [5]. Since our paper is concerned with physical applications, in Section II we shall analyze the issue of how the differential entropy definition depends on the units used. The related difficulty, often overlooked in the literature, refers to literally taking the logarithm of a dimensional argument, see e.g. [9, 10].
In the quantum mechanical context, we shall invoke either position S(ρ) or momentum S(ρ̃) information entropies, with no recourse to the classical entropy given in terms of classical phase-space distributions f(q,p) or (Wehrl entropy) their Wigner/Husimi analogues, [3, 28]. The notion of entropic uncertainty relations, [2, 21, 22, 25], explicitly relies on the differential entropy input. Namely, an arithmetic sum of (presumed to be finite) momentum and position information entropies for any normalized L²(Rⁿ) wave packet ψ(x) is bounded from below:

S(ρ) + S(ρ̃) ≥ n(1 + ln π)   (4)

where n stands for the configuration space (respectively momentum space) dimension, [21]. This feature is worth emphasizing, since neither S(ρ) nor S(ρ̃) on their own is bounded from below or from above. Nonetheless, both take finite values in physically relevant situations and their sum is always positive.
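As a quick numerical illustration of Eqs. (3) and (4) (added here for the reader's convenience; it is not part of the original text), the following Python/numpy sketch evaluates the position differential entropy of a Gaussian wave packet and checks that the sum of position and momentum entropies equals 1 + ln π, the value at which the bound (4) is saturated. The packet width s and the integration grid are arbitrary choices, and ℏ = 1 is assumed.

```python
import numpy as np

# Minimal sketch: Gaussian packet psi(x) ~ exp(-x^2/(4 s^2)), so rho = |psi|^2 is N(0, s^2).
s = 0.7                                   # assumed position standard deviation
x = np.linspace(-20, 20, 200001)
rho = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

def differential_entropy(p, grid):
    """S(p) = -int p ln p dx, with the 0*ln(0) = 0 convention."""
    integrand = np.where(p > 0, p * np.log(p), 0.0)
    return -np.trapz(integrand, grid)

S_x = differential_entropy(rho, x)
print(S_x, 0.5 * np.log(2 * np.pi * np.e * s**2))   # both ~ (1/2) ln(2 pi e s^2)

# The momentum-space density of the same packet is Gaussian with variance 1/(4 s^2),
# hence S(rho~) = (1/2) ln(2 pi e / (4 s^2)) and the sum saturates Eq. (4) with n = 1.
S_p = 0.5 * np.log(2 * np.pi * np.e / (4 * s**2))
print(S_x + S_p, 1 + np.log(np.pi))
```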

1.3 Temporal behavior - preliminaries
Since a normalized wave function ψ represents a pure state of a quantum system whose dynamics is governed by the Schrödinger equation, only for stationary states is the differential entropy S(ρ) for sure a conserved quantity. In general, the Schrödinger picture evolution of ψ(x,t), and thus of |ψ(x,t)|² = ρ(x,t), may give rise to a nontrivial dynamics of the information entropy associated with the wave packet ψ(x,t).
Let us point out that most of the "entropic" research pertains to time-independent situations, like in the case of stationary solutions of the Schrödinger equation. Notable exceptions are Refs. [27, 33, 34]. On general non-quantum grounds an information (differential entropy) dynamics is addressed in Refs. [6, 47] and [59]-[66], see also [67, 68, 69, 71, 70].
The differential entropy, for a number of reasons [4, 6], is said not to quantify the absolute "amount of information carried by the state of the system" (Shannon's uncertainty), unless carefully interpreted. Up to measure preserving coordinate transformations the latter objection remains invalid, and this feature gave some impetus to numerically assisted comparative studies of the Shannon information content of different pure states of a given quantum system. Results range from simple atoms to molecules, nuclei, aggregates of particles, many-body Bose and Fermi systems, and Bose-Einstein condensates, see e.g. Refs. [29]-[42]. In these cases,

Shannon’s differential entropy appears to be a fully adequate measure for the localization degree (which in turn is interpreted as both the uncertainty measure and the information content) of the involved wave packets. A difference of two information entropies (evaluated with respect to the same coordinate system) (ρ) (ρ′) is known to quantify an absolute change in the information content when passing S − S from one state of a given system to another. All potential problems with dimensional units would disappear in this case, [4, 9]. Alternatively, to this end one may invoke the familiar notion of the relative Kullback entropy ρ (ln ρ ln ρ′) dx, [4, 6], provided ρ′ is strictly positive. − Γ − Cogent recommendations towards the use of the Shannon information measure, plainly against R the Kullback option, can be found in Ref. [72]. We shall come to this point later. For arguments just to the opposite see e.g. [73] and also [92, 93]. In the present paper, we predominantly invoke the differential entropy. In Section IV we shall describe a number of limitations upon the use of the Kullback entropy. If both entropies can be safely employed for the same physical model (like e.g. for the diffusion type dynamics with asymptotic invariant densities), we establish direct links between the Shannon and Kullback en- tropy dynamics. In the context of the induced (by time development of probability densities) ”information dy- namics” (t), [6, 56], it is the difference (t) (t′) between the (presumed to be finite) S → S S − S information entropy values for the time-dependent state of the same physical system, considered at times t′ < t, which properly captures the net uncertainty/information change in the respective time interval [t′, t]. Let us mention that the very same strategy underlies the Kolmogorov-Sinai entropy notion for classical dynamical systems, [60, 61, 48]. dS In particular, the rate in time of information entropy dt is a well defined quantity characterizing the temporal changes (none, gain or loss) in the information content of a given L2(Rn) normalized wave packet ψ(x, t) (strictly speaking, of the related probability density). We indicate that, at variance with standard thermodynamical intuitions, quantum mechanical information (differential) entropy needs not to be a monotonic function of time. In the course of its evolution it may oscillate, increase or decrease with the flow of time, instead of merely increasing with time or staying constant, as customarily expected. That, regardless from the intrinsic time reversal property of the quantum dynamics. To conform with the information theory lore, we need to address an information entropy balance in the course of time, since for an isolated quantum system there is no analog of thermal reservoir, capable of decreasing (removal) or increasing (putting into) an information entropy of the partic- ular state in which the system actually is. Since there in no quantifiable energy exchange with the environment, the actual purpose and ”fate” of the differential entropy need to be investigated. The entropic method we follow in the present paper extends to any formalism operating with general time-dependent spatial probability densities, [47, 48, 50], even if set out of the explicit thermodynamic context (e.g. the phase space formulation of statistical mechanics). Information entropy and its intrinsic dynamics, like e.g. 
the information flow and information entropy production rate, quantify properties of general reversible and/or irreversible dynamical systems. Normally, the microscopic dynamics of such systems is expected to follow well defined Entropy 2005, 7[4], 253-299 8 trajectories (deterministic paths of a dynamical system or sample paths of a stochastic process) and those may be thought to induce a corresponding dynamics for statistical ensembles of trajectories. It is seldom possible to have a sharp wisdom of the initial data x X for the trajectory dynamics 0 ∈ taking place in a phase space X of the system. This imprecision extends to the terminal data (x x after time t> 0) as well. 0 → t Therefore, even if one knows exact dynamical rules governing the behavior of individual trajectories in time, it is basically impossible to tell more about the system then: if its initial state can be found in a subset A X with a probability prob(x A), then after time t one can identify the terminal ⊂ 0 ∈ state of the system x X in a subset B X with a probability prob(x B). An evolution of t ∈ ⊂ t ∈ derived probability densities eventually may be obtained as a solution of an appropriate partial differential transport equation, [47, 48, 59, 63] In the present paper we take a more general view and we bypass a concept of the underly- ing trajectory dynamics by emphasizing the role of transport equations and their density so- lutions. Under such premises, we can safely address the dynamics of uncertainty/information generated by the Schr¨odinger picture quantum evolution of wave packets in closed (no system - reservoir/environment coupling) quantum mechanical systems. Remark 1: Keeping in touch with quantum mechanical tradition, let us recall that at least two different ”trajectory pictures ” can be related to the very same mathematical model based on the Schr¨odinger wave packet dynamics: deterministic Bohmian paths [49] and random paths of (basically singular) diffusion-type processes, [50, 51, 52]. Additionally, under suitable restrictions (free motion, harmonic attraction) classical deterministic phase-space paths are supported by the associated with ψ(x, t) positive Wigner distribution function and its spatial marginal distribution. However, none of the above derived trajectory ”pictures” deserves the status of an underlying physical ”reality” for quantum phenomena although each of them may serve as an adequate pictorial description of the wave-packet dynamics. Remark 2: In view of Born’s statistical interpretation postulate, the Schr¨odinger picture dynam- . ics sets a well defined transport problem for a probability density ρ(x, t) = ψ(x, t) 2. Therefore, | | one is tempted to resolve such dynamics in terms of (Markovian) diffusion-type processes and their sample paths, see e.g. [50, 51, 52] and [53, 54]. A direct interpretation in terms of random ”trajectories” of a Markovian diffusion-type process is here in principle possible under a number of mathematical restrictions, but is non-unique and not necessarily global in time. The nontrivial boundary data, like the presence of wave function nodes, create additional problems although the nodes are known to be never reached by the pertinent processes. The main source of difficulty lies in guaranteing the existence of a process per se i.e. of the well defined transition probability den- sity function solving a suitable parabolic partial differential equation (Fokker-Planck or Kramers). 
By adopting milder conditions upon the drift fields (instead of too restrictive growth restrictions, one may simply admit smooth functions) it is possible to construct well defined, albeit non-unique, diffusion-type processes. They are consistent with the time development of a given probability density, see Chap. 3 of Ref. [68] and [52].

1.4 Outline of the paper The paper is structured as follows. We begin by recalling the standard lore of the Shannon infor- mation theory to attribute an unambiguous meaning to two principal notions, this of information and that of uncertainty. To this end various notions of state of a model system are invoked and suitable information measures are discussed. Next we turn to the coarse-graining issue and set a connection between the Shannon entropy of a discrete probability measure and the differential entropy of a related (through a suitable limiting procedure) continuous probability density. On the way, we analyze the dimensional units impact on the entropy definition. We discuss various entropic inequalities for both differential and coarse-grained entropies of quantum mechanical densities. In Section III, the localization degree of probability densities is analyzed by means of so-called entropy powers and of the Fisher information measure. We infer two chain inequalities, Eqs. (51) and (52), which imply that typically the differential entropy is a well behaved quantity, bounded both from below and above. The formalism is general enough to include quantum mechanical densities as merely the special case. In Section IV we set a conceptual framework for time-dependent problems. Since classical dynam- ical, stochastic and quantum systems (in their pure states) in general give rise to time-dependent probability densities and information entropies, we resolve the exemplary density dynamics in terms of Smoluchowski diffusion processes, albeit with no explicit random path (e.g. random variable) input. The entropy and Fisher information evolution equations are established. Close links of the dif- ferential and conditional Kullback entropies are established for Smoluchowski diffusion processes, when asymptotic invariant densities enter the scene. We discuss a compliance of the induced continual power release in the course of the diffusion process with the mean energy conservation law, Eqs (104) and (108). In section V we analyze differential entropy dynamics and time evolution of the Fisher localization measure in quantum theory and next exemplify the general formalism for simple analytically tractable cases. The emergent continual power transfer effect has been analyzed in connection with the finite energy constraint for the mean energy of quantum motion, Eqs. (114) and (117). Although uncertainty dynamics scenarios of sections IV and V are fundamentally different, nonethe- less the respective methodologies appear to have an overlap, when restricted to steady states which support invariant densities for (reversible) stationary diffusion-type processes.

2 Differential entropy: uncertainty versus information

2.1 Prerequisites
The original definition of Shannon entropy conveys a dual meaning of both uncertainty and information measure. It is useful to interpret those features in a complementary (albeit colloquial) way: the less is the uncertainty of the system or its state, the larger (and more valuable) is the information we acquire as a result of the measurement (observation) upon the system, and in reverse.
We know that a result of an observation of any random phenomenon cannot be predicted a priori (i.e. before an observation), hence it is natural to quantify an uncertainty of this phenomenon. Let us consider µ = (µ_1, ..., µ_N) as a probability measure on N distinct (discrete) events A_j, 1 ≤ j ≤ N, pertaining to a model system. Assume that Σ_{j=1}^N µ_j = 1 and that µ_j = prob(A_j) stands for the probability of the event A_j to occur in the game of chance with N possible outcomes.
Let us call −log µ_j an uncertainty function of the event A_j. Interestingly, we can coin here the name of the ("missing") information function, if we wish to interpret what can be learned via direct observation of the event A_j: the less probable is that event, the more valuable (larger) is the information we would retrieve through its registration. Then, the expression

S(µ) = −Σ_{j=1}^N µ_j log µ_j   (5)

stands for the measure of the mean uncertainty of the possible outcome of the game, [6], and at the same time quantifies the mean information which is accessible from an experiment (i.e. actually playing the game). The base of the logarithm is, for a while, taken equal to 2, but we recall that log b · ln 2 = ln b, with ln 2 ≃ 0.6931 and the base e ≃ 2.71828.
Thus, if we identify the event values A_1, ..., A_N with labels for particular discrete "states" of the system, we may interpret Eq. (5) as a measure of uncertainty of the "state" of the system, before this particular "state" is chosen out of the set of all admissible ones. This well conforms with the standard meaning attributed to the Shannon entropy: it is a measure of the degree of ignorance concerning which possibility (event A_j) may hold true in the set {A_1, A_2, ..., A_N} with a given a priori probability distribution {µ_1, ..., µ_N}.
Notice that

0 ≤ S(µ) ≤ log N   (6)

ranges from certainty (one entry whose probability equals 1 and thus no information is missing) to maximum uncertainty when a uniform distribution µ_j = 1/N for all 1 ≤ j ≤ N occurs. In the latter situation, all events (or measurement outcomes) are equiprobable and log N sets the maximum for a measure of the "missing information". By looking at all intermediate levels of randomness allowed by the inequalities Eq. (6) we realize that the lower is the Shannon entropy, the less information about "states" of the system we are missing, i.e. we have more information about the system. If the Shannon entropy increases, we actually lose information available about the system. Consequently, the difference between two uncertainty measures can be interpreted as an information gain or loss.

2.2 Events, states, microstates and macrostates
The Boltzmann formula

S ≐ k_B ln W = −k_B ln P   (7)

sets a link of entropy of the (thermodynamical) system with the probability P = 1/W that an appropriate "statistical microstate" can occur. Here, W stands for the number of all possible (equiprobable) microstates that imply the prescribed macroscopic (e.g. thermodynamical) behavior corresponding to a fixed value of S.
It is instructive to recall that if P is a probability of an event, i.e. of a particular microstate, then −ln P (actually, with log_2 instead of ln) may be interpreted [8] as "a measure of information produced when one message is chosen from the set, all choices being equally likely" ("message" to be identified with a "microstate"). Another interpretation of −ln P is that of a degree of uncertainty in the trial experiment, [7].
As a pedestrian illustration let us invoke a classic example of a molecular gas in a box which is divided into two halves denoted "1" and "2". We allow the molecules to be in one of two elementary states: A_1 if a molecule can be found in the half-box "1" and A_2 if it is placed in the other half "2". Let us consider a particular n-th macrostate of a molecular gas comprising a total of G molecules in a box, with n molecules in the state A_1 and G − n molecules in the state A_2. The total number of ways in which G molecules can be distributed between the two halves of the box in this prescribed macrostate, i.e. the number W = W(n) of distinct equiprobable microstates, clearly is W(n) = G!/[n!(G − n)!]. Here, P(n) = 1/W(n) is a probability with which any of the microstates may occur in a system bound to "live" in the given macrostate. The maximum of W(n), and thus of k_B ln W(n), corresponds to equal occupation numbers of the two halves, i.e. n = G/2.
To get a better insight into the information-uncertainty intertwine, let us consider an ensemble of finite systems which are allowed to appear in any of N > 0 distinct elementary states. The meaning of "state" is left unspecified, although an "alphabet" letter may be invoked for convenience.
Let us pick up randomly a large sample composed of G ≫ 1 single systems, each one in a certain (randomly assigned) state. We record the frequencies n_1/G ≐ p_1, ..., n_N/G ≐ p_N with which the elementary states of the type 1, ..., N actually occur. This sample is a substitute for a "message" or a "statistical microstate" in the previous discussion. Next, we identify the number of all possible samples of that fixed size G which would show up the very same statistics p_1, ..., p_N of elementary states. We interpret those samples to display the same "macroscopic behavior". It was the major discovery due to Boltzmann, see e.g. [4], that the number W of relevant "microscopic states" can be approximately read out from each single sample and is directly related to the introduced a priori probability measure µ_1, ..., µ_N, with an identification p_i ≐ µ_i for all 1 ≤ i ≤ N, by the Shannon formula:

ln W ≃ −G Σ_{i=1}^N p_i ln p_i = G · S(µ)   (8)

On the basis of this formula, we can consistently introduce S(µ) as the mean information per each (i-th) elementary state of the N-state system, as encoded in a given sample whose size G ≫ 1 is sufficiently large, [9].
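To make the Boltzmann-Shannon link of Eq. (8) tangible, here is a small Python sketch (added by the editor, not part of the original text) that compares the exact log-multiplicity ln W = ln[G!/(n_1!···n_N!)] with G·S(µ). The sample size G and the frequencies p_i are arbitrary illustrative choices.

```python
from math import lgamma, log

# ln W computed exactly via log-gamma: ln n! = lgamma(n + 1).
def ln_multiplicity(counts):
    G = sum(counts)
    return lgamma(G + 1) - sum(lgamma(n + 1) for n in counts)

def shannon(probs):
    return -sum(p * log(p) for p in probs if p > 0)

G = 10_000
p = [0.1, 0.2, 0.3, 0.4]                          # assumed state frequencies p_i = mu_i
counts = [int(round(pi * G)) for pi in p]
print(ln_multiplicity(counts), G * shannon(p))    # the two numbers nearly coincide
```

The small residual difference is the usual Stirling correction, which becomes negligible relative to G·S(µ) as G grows.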

To exemplify the previous considerations, let us consider N = 2. It is instructive to compare the uncertainty level (alternatively, the information content) S(µ) for the two-state system, if we take 2 as the logarithm base instead of e. Then, Eq. (8) would refer to the binary encoding of the message (string) with G entries.
We find that µ_1 = 0.1 and µ_2 = 0.9 yield S(µ) = 0.469. Analogously, 0.2 and 0.8 imply 0.7219, while 0.3 and 0.7 give 0.8813. Next, 0.4 and 0.6 imply 0.971, and we reach an obvious maximum S = 1 for µ_1 = µ_2 = 0.5. An instructive example of the "dog-flea" model workings, with G = 50 fleas jumping back and forth between their "states of residence" on a dog "1" or dog "2", can be found in Ref. [55]. Albeit, in a number of specific cases, an evolution of the Gibbs entropy may show up some surprises if the "entropy growth dogma" is uncritically accepted, see e.g. examples in [55, 56] and the discussion of Refs. [57, 58].
By pursuing Shannon's communication theory track, [4], we can as well identify states of the model system with "messages" (strings) of an arbitrary length G > 0 which are entirely composed by means of the prescribed N "alphabet" entries (e.g. events or alphabet letters A_j with the previous probability measure µ). Then, Eq. (8) may be interpreted as a measure of information per alphabet letter, obtained after a particular message (≡ string state of the model system) has been received or measured, c.f. our discussion preceding Eq. (8). In this case, the Shannon entropy interpolates between a maximal information (one certain event) and a minimal information (uniform distribution), cf. Eq. (6).
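The base-2 entropy values quoted above are easy to reproduce; the short check below (an editorial addition, not part of the original text) evaluates the binary entropy for the listed probabilities.

```python
from math import log2

def binary_entropy(p):
    # Two-state Shannon entropy with logarithm base 2.
    return -(p * log2(p) + (1 - p) * log2(1 - p))

for p in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(p, round(binary_entropy(p), 4))   # 0.469, 0.7219, 0.8813, 0.971, 1.0
```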

Remark 3: Any string containing G = 10,000 symbols which are randomly sampled from among equiprobable N = 27 alphabet letters, [9], stands for a concrete microstate. In view of µ_j = 1/27, the corresponding macrostate is described via Eqs. (5) and (8) in terms of the number S(µ) = −log_2(1/27) ≃ 4.76. Accordingly, log_2 W = G · S(µ) ≃ 47,600, where W is the number of admissible microstates.

2.3 Shannon entropy and differential entropy
2.3.1 Bernoulli scheme and its Gaussian approximation

Let us consider again a two-state system where A_1 appears with a probability µ_1 = p while A_2 with a probability µ_2 = 1 − p. The probability with which A_1 would have appeared exactly n times in the series of G repetitions of the two-state experiment is given by the Bernoulli formula:

P_n = {G!/[n!(G − n)!]} pⁿ (1 − p)^{G−n}   (9)

where, in view of the Newton formula for the binomial (p + q)^G, after setting q = 1 − p we arrive at Σ_{n=0}^G P_n = 1.
Since the number n of successes in the Bernoulli scheme is restricted only by the inequalities 0 ≤ n ≤ G, what we have actually defined is a probability measure µ = {P_0, P_1, ..., P_G} for G + 1 distinct random events denoted B_0, B_1, ..., B_G. Accordingly, we can introduce a random variable B and say that it has the Bernoulli distribution, if B takes the values n = 0, 1, ..., G with the Bernoulli

probabilities P_n of Eq. (9) for all n. A random event B_n is interpreted as "taking the value n in the Bernoulli scheme".
Let us denote P(B = k) ≐ P(B_k) = P_k. We know that P(B < n) = Σ_{k<n} P_k. For sufficiently large G (and n not too close to 0 or G), the Bernoulli weights are well approximated by the Gaussian (de Moivre-Laplace) form:

P_n ≡ [2πGp(1 − p)]^{−1/2} exp[ −(n − Gp)²/(2Gp(1 − p)) ] .   (10)

At this point we shall take an inspiration from Ref. [74] and relate the Bernoulli "success" probabilities with probabilities of locating a particle in an arbitrary interval on a line R. Namely, let us first consider an interval of length L: [0, L] ⊂ R. With G ≫ 1, we define an interval grating unit to be r = L/G and next redefine Eq. (10) to arrive at a probability per bin of length r ≪ 1:

(1/r) P_n = ρ(x_n) = [2πσ²]^{−1/2} exp[ −(x_n − x_0)²/(2σ²) ]   (11)

with: x_n = nr, x_0 = Gpr and σ² = Gr²p(1 − p). Obviously, ρ(x_n) is not a probability on its own, while r · ρ(x_n) = P_n is the probability to find a particle in the n-th interval of length r, out of the admitted number G = L/r of bins.
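The quality of the Gaussian replacement of Eqs. (10)-(11) is easy to probe numerically. The sketch below (an editorial addition, not part of the original text; the values of G, p and L = 1 are arbitrary) compares the exact binomial weights with the Gaussian approximation and verifies that r·ρ(x_n) behaves as a probability per bin.

```python
import numpy as np
from math import comb

G, p = 1000, 0.5
n = np.arange(0, G + 1)
P_exact = np.array([comb(G, k) * p**k * (1 - p)**(G - k) for k in range(G + 1)])
P_gauss = np.exp(-(n - G * p)**2 / (2 * G * p * (1 - p))) / np.sqrt(2 * np.pi * G * p * (1 - p))
print(np.max(np.abs(P_exact - P_gauss)))   # small absolute deviation, Eq. (10)

# Interpreting r * rho(x_n) = P_n on [0, L] with L = 1 and grating r = L/G, Eq. (11):
r = 1.0 / G
rho = P_gauss / r                          # probability per unit length
print(np.trapz(rho, n * r))                # integrates to ~1 over [0, 1]
```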

For convenience let us specify p = 1/2, which implies x_0 = rG/2 and σ = r√G/2. We recall that almost all of the probability "mass" of the Gauss distribution is contained in the interval −3σ < x − x_0 < +3σ about the mean value x_0. Indeed, we have prob(|x − x_0| < 2σ) = 0.954 while prob(|x − x_0| < 3σ) = 0.998.
The lower bound 100 < n ≤ G justifies the usage of simplified versions of the standard Stirling formula, nⁿ exp(−n)√(2πn) < n! < nⁿ exp[−n + 1/(12n)]√(2πn), in view of the above probability "mass" estimates. Therefore, we can safely replace the Bernoulli probability measure by its (still discrete) Gaussian approximation Eq. (10) and next pass to Eq. (11) and its obvious continuous generalization.
Remark 4: By taking a concrete input of L = 1 and G = 10⁴ we get the grating (spacing, resolution) unit r = 10⁻⁴. Then, x_0 = 1/2 while σ = (1/2) · 10⁻². It is thus a localization interval [1/2 − 3σ, 1/2 + 3σ] of length 6σ = 3 · 10⁻² to be compared with L = 1. By setting G = 10⁶ we would get 6σ = 3 · 10⁻³.
For future reference, let us stress that generally we expect r ≪ σ, which implies a sharp distinction between the grating (resolution) unit r and the localization properties of the Gauss function expressed through its half-width σ.

2.3.2 Coarse-graining

For a given probability density function on R we can adopt the coarse-graining procedure, [47], giving account of the imprecision with which a spatial position x can be measured or estimated. Thus, if compared with the previous Bernoulli → Gauss argument, we shall proceed in reverse: from density functions to approximating them by piece-wise constant, histogram-type discontinuous functions.

We need to partition the configuration space R into a family of disjoint subsets (intervals) {B_k}, such that ∪_k B_k ⊆ R and B_i ∩ B_j = Ø for i ≠ j. We denote by µ(B_k) ≐ µ_k the length of the k-th interval, where µ stands for the Lebesgue measure on R.

The probability that a Gaussian random variable with the density ρ takes its value x in an interval B_k equals prob(B_k) ≐ p_k = ∫_{B_k} ρ(x) dx. The average of the density ρ over B_k we denote ⟨ρ⟩_k = p_k/µ_k, where µ_k = ∫_{B_k} dx.
The probability density ρ coarse grained with respect to the partition {B_k} reads:

ρ_B(x) ≐ Σ_k ⟨ρ⟩_k 1_k(x)   (12)

where 1_k(x) is the indicator (characteristic) function of the set B_k, which is equal to 1 for x ∈ B_k and vanishes otherwise. Since ∫ 1_k(x) dx = µ_k, it is clear that

∫ ρ_B(x) dx = Σ_k ⟨ρ⟩_k µ_k = Σ_k p_k = 1   (13)

where an interchange of the summation with integration is presumed to be allowed.
By invoking arguments of the previous subsection, we choose a grating unit µ_k = r ≪ 1 for all k and notice that ⟨ρ⟩_k = p_k/r, where p_k ≃ ρ(x_k) · r for certain x_k ∈ B_k.
In view of the twice "triple half-width" spatial localization property of the Gauss function, we can safely assert that an interval L ∼ 6σ about x_0 may be used in the coarse graining procedure, instead of the full configuration space R. Effectively, we arrive at a finite partition on L with the resolution L/G = r, and then we can safely invoke the definition p_k ≐ P_k = r · ρ(x_k), in conformity with Eq. (11).
For a coarse grained probability density we introduce a coarse grained Shannon entropy whose relationship to the original differential entropy is of major interest. We have:

S(ρ_B) = −Σ_k p_k ln p_k ≃ −Σ_k [rρ(x_k)] ln r − Σ_k [rρ(x_k)] ln[ρ(x_k)]   (14)

with a standard interpretation of the mean information per bin of length r.
Here, if the partition (grating) unit r is small, one arrives at an approximate formula (we admit |ln r| ≫ 1):

S(ρ_B) ≃ −ln r − ∫ ρ(x) ln[ρ(x)] dx = −ln r + S(ρ)   (15)

with the obvious proviso that S(ρ_B) ≥ 0 and hence, in view of S(ρ) ≥ ln r, we need to maintain a proper balance between σ and the chosen grating level r.
Remark 5: It is instructive to validate the above approximation for the choice of r = 10⁻⁶, hence −ln r = 6 ln 10 ∼ 13.8. We have: S(ρ) = (1/2) ln(2πeσ²) ∼ 1.42 + ln σ. By setting

σ = (1/2)·10⁻³ we realize that S(ρ) ∼ 1.42 − ln 2 − 3 ln 10 ∼ −6.2, hence S(ρ) stays well above the allowed lower bound ln r ≈ −13.8 (its magnitude is roughly half of |ln r|). The approximate value of the coarse grained entropy is here S(ρ_B) ∼ 7.6 and stands for the mean information per partition bin in the "string" composed of G bins.
In view of Eq. (15), as long as we keep in memory the strictly positive grating unit r, there is a well defined "regularization" procedure (add −ln r to S(ρ)) which relates the coarse grained entropy with a given differential entropy. In a number of cases it is computationally simpler to evaluate the differential entropy, and then to extract the - otherwise computationally intractable - coarse grained entropy.
Notice that one cannot allow a naive zero grating limit in Eq. (15), although r may be arbitrarily small. Indeed, for e.g. the Gaussian densities, the differential entropy takes finite values and this would suggest that the coarse grained entropy might be arbitrarily large. This obstacle does not arise if one proceeds with some care. One should remember that we infer the coarse grained entropy from the differential entropy exactly at the price of introducing the resolution unit r. The smaller is r, the better is the approximation of the differential entropy by the second term on the right-hand side of Eq. (15), but −ln r needs to remain as a finite entry in Eq. (15).
We have the inequalities 0 ≤ S(ρ_B) ≤ ln G, where L = G · r. They extend to all approximately equal entries in Eq. (15). Since −ln r = −ln L + ln G, we arrive at new inequalities:

ln r ≤ −Σ_k [rρ(x_k)] ln[ρ(x_k)] ≤ ln L   (16)

where −Σ_k [rρ(x_k)] ln[ρ(x_k)] ⇒ −∫ ρ ln ρ dx with r → 0 and possibly L → ∞. A conclusion is that the differential entropy is unbounded both from below and from above. In particular, S(ρ) may take arbitrarily low negative values, in plain contrast to its coarse grained version S(ρ_B) which is always nonnegative.
We can be more detailed in connection with the approximations employed in Eqs. (15) and (16). Actually, the right-hand side of Eq. (15) sets a lower bound for the coarse-grained entropy S(ρ_B). Let us recall that the value of the convex function x ln x at the mean value ⟨x⟩ of its argument does not exceed the mean value ⟨x ln x⟩ of the function itself. Then, in our notation which follows Eq. (12), we can directly employ an averaging over B_k:

(1/r) ∫_{B_k} ρ ln ρ dx ≥ [(1/r) ∫_{B_k} ρ dx] · ln[(1/r) ∫_{B_k} ρ dx] .   (17)

Taking the minus sign, executing summations with respect to k (convergence of the series being presumed) and using Eqs. (15) and (16) we get:

S(ρ) − ln r ≤ S(ρ_B)   (18)

as a complement to Eq. (17), see e.g. also [27, 22].
Equations (16) and (18) allow, with suitable reservations, to extend the standard information/uncertainty measure meaning from coarse-grained entropies to differential entropies per se. Namely, the difference of two coarse grained entropies, corresponding to the same partition but to different (coarse grained) densities, may be adequately approximated by the difference of the corresponding differential entropies:

S(ρ_B) − S(ρ′_B) ≃ S(ρ) − S(ρ′) ,   (19)

provided they take finite values, [6, 27].
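A numerical cross-check of Eqs. (15) and (18), using the parameters of Remark 5, is given below (an editorial addition, not part of the original text). A Gaussian density with σ = 0.5·10⁻³ is coarse grained over bins of width r = 10⁻⁶ covering a few standard deviations around the mean; numpy and the specific grid are assumptions of the sketch.

```python
import numpy as np

sigma, r = 0.5e-3, 1e-6
S_diff = 0.5 * np.log(2 * np.pi * np.e * sigma**2)   # Gaussian differential entropy
print(S_diff)                                        # ~ -6.2, as in Remark 5

edges = np.arange(-5 * sigma, 5 * sigma, r)          # bins of width r around x0 = 0
x_mid = edges[:-1] + r / 2
p_k = r * np.exp(-x_mid**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
p_k = p_k / p_k.sum()                                # renormalize the truncated tails
S_coarse = -np.sum(p_k * np.log(p_k))
print(S_coarse, -np.log(r) + S_diff)                 # both ~ 7.6, consistent with Eqs. (15), (18)
```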

2.3.3 Coarse-graining exemplified: exponential density
An exponential density on the positive half-line R⁺ is known to maximize the differential entropy among all R⁺ density functions with the first moment fixed at 1/λ > 0. The density has the form: ρ(x) = λ exp(−λx) for x ≥ 0 and vanishes for x < 0. Its variance is 1/λ². The differential entropy of the exponential density reads S(ρ) = 1 − ln λ.
In the notation of the previous subsection, let us coarse-grain this density at a particular value of λ = 1 and then evaluate the corresponding entropy as follows. We choose r ≪ 1, p_k ≃ ρ(x_k) · r with x_k = kr, where k is a natural number. One can directly verify that for small r we can write r ≃ 1 − exp(−r) and thence consider p_k ≃ [1 − exp(−r)] exp(−kr), such that Σ_{k=0}^∞ p_k = 1, with a well known quantum mechanical connotation.
Namely, let us set r = hν/k_BT; notice that for the first time in the present paper we explicitly invoke dimensional units, in terms of which the dimensionless constant r is defined. We readily arrive at the probability of the kν-th oscillator mode in a thermal bath at the temperature T. Our assumption of r ≪ 1 corresponds to the low frequency oscillator problem with ν ≪ 1, see e.g. [9]. Clearly, we have for the mean number of modes

⟨n⟩ = Σ_k k p_k = 1/[exp(r) − 1] ≃ 1/r   (20)

which implies the familiar Planck formula

⟨E⟩ = hν/[exp(hν/k_BT) − 1] ≃ hν/r = k_BT .   (21)

For the variance we get

⟨(n − ⟨n⟩)²⟩ = ⟨n⟩² + ⟨n⟩ ≃ 1/r² .   (22)

The standard Shannon entropy of the discrete probability distribution p_k, k ∈ N, reads

S(ρ_B) = −Σ_k p_k ln p_k = −ln[1 − exp(−r)] + ⟨n⟩ r   (23)

so that in view of S(ρ_B) ≃ −ln r + 1 and S(ρ) = 1 for ρ(x) = exp(−x), we clearly have S(ρ_B) ≃ −ln r + S(ρ), as expected.
Let us point out our quite redundant sophistication. In fact, we could have skipped all the above reasoning and taken Eq. (15) as the starting point to evaluate S(ρ_B) for the coarse grained exponential density, at the assumed resolution r = hν/k_BT ≪ 1, with the obvious result

S(ρ_B) ≃ 1 − ln(hν/k_BT) ≫ 1 .   (24)
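The geometric weights p_k = [1 − exp(−r)] exp(−kr) make Eqs. (20), (23) and (24) straightforward to verify; the following sketch (an editorial addition, not part of the original text; the value of r and the truncation of the infinite sum are assumptions) does so for a small resolution r.

```python
import numpy as np

r = 1e-3
k = np.arange(0, 200_000)                    # enough terms for the tail to be negligible
p = (1 - np.exp(-r)) * np.exp(-k * r)
n_mean = np.sum(k * p)
S_B = -np.sum(p * np.log(p))
print(n_mean, 1 / (np.exp(r) - 1))                 # Eq. (20): both ~ 1/r
print(S_B, -np.log(1 - np.exp(-r)) + n_mean * r)   # Eq. (23)
print(S_B, 1 - np.log(r))                          # Eq. (24): S(rho_B) ~ 1 - ln r
```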

Remark 6: An analogous procedure can be readily adopted to analyze phenomenological histograms of energy spectra for complex systems, with an obvious extension of its range of validity to spectral properties of the classically chaotic case. In particular, semiclassical spectral series of various quantum systems admit an approximation of spacing histograms by continuous distributions on the positive half-line R⁺. Although a full fledged analysis is quite complicated, one may invoke quite useful, albeit approximate, formulas for adjacent level spacing distributions. The previously mentioned exponential density corresponds to the so-called Poisson spectral series. In the family of radial densities of the form ρ_N(x) = [2/Γ(N/2)] x^{N−1} exp(−x²), where N > 1 and Γ is the Euler gamma function, [79], the particular cases N = 2, 3, 5 correspond to the generic level spacing distributions, based on the exploitation of the Wigner surmise. The respective histograms plus their continuous density interpolations are often reproduced in "quantum chaos" papers, see for example [80].

2.3.4 Spatial coarse graining in quantum mechanics
The coarse grained entropy attributes the "mean information per bin of length r" to systems described by continuous probability densities and their differential entropies. Effectively one has a tool which allows to accompany the coarse grained density histogram (of p_k in the k-th bin on R) by the related histogram of uncertainties −ln p_k, c.f. Section II.A where an uncertainty function has been introduced.
The archetypal example of position measurement in quantum mechanics presumes that position is measured in bins corresponding to the resolution of the measurement apparatus. This means that the continuous spectrum of the position observable is partitioned into a countable set of intervals (bins) whose maximum length we regard as a "resolution unit". For an interval B_k ⊂ R we may denote by p_k the probability of finding the outcome of a position measurement to have a value in B_k. We are free to set the bin size arbitrarily, especially if computer assisted procedures are employed, [78].
Following [17] one may take the view that the most natural measure of the uncertainty in the result of a position measurement, or of the preparation of specific localization features of a state vector (amenable to an analysis via spectral properties of the position operator), should be the information entropy coming from Eqs. (12) and (13): S(ρ_B) ≐ −Σ_k p_k ln p_k with the direct quantum input p_k ≐ ∫_{B_k} |ψ(x)|² dx, where ψ ∈ L²(R) is normalized. This viewpoint is validated by current experimental techniques in the domain of matter wave interferometry, [44, 43], and the associated numerical experimentation where various histograms are generated, [78].
The formula Eq. (15) gives meaning to the intertwine of the differential and coarse grained entropies in the quantum mechanical context. When an analytic form of the entropy is in reach, the coarse graining is straightforward. One should realize that most of the results known to date have been obtained numerically, hence with an implicit coarse-graining, although they were interpreted in terms of the differential entropy, see e.g. [28]-[37].
In connection with the entropic inequality Eq. (4) let us point out [2] that it is a generic property of normalized L²(Rⁿ) wave functions that, by means of the Fourier transformation, they give rise to two interrelated densities (presently we refer to L²(R)): ρ = |ψ|² and ρ̃ = |F(ψ)|², where

(Fψ)(k) = (1/√(2π)) ∫ ψ(x) exp(−ikx) dx   (25)

is the Fourier transform of ψ(x). The inequality (4) for the corresponding (finite) differential entropies follows, here with n = 1.
By choosing resolutions r ≪ 1 and r̃ ≪ 1 we can introduce the respective coarse grained entropies, each fulfilling an inequality Eq. (18). Combining these inequalities with Eq. (4), we get the prototype entropic inequalities for coarse grained entropies:

S(ρ_B) + S(ρ̃_B) ≥ 1 + ln π − ln(r · r̃)   (26)

with the corresponding resolutions r and r̃.
By referring to Eq. (15) we realize that the knowledge of S(ρ_B) completely determines S(ρ̃_B) at the presumed resolution levels:

S(ρ̃_B) ≃ 1 + ln π − ln(r · r̃) − S(ρ_B) ≥ 0   (27)

and in reverse. This in turn implies that in all computer generated position-momentum differential entropy inequalities, where the coarse graining is implicit, the knowledge of the position entropy and of the resolution levels provides sufficient data to deduce the combined position-momentum outcomes, see also [14]-[37].
In standard units (with ℏ reintroduced), the previous discussion pertains to quantum mechanical position-momentum entropic uncertainty relations. In the notation of Refs. [26] and [22] we have:

S^x + S^p ≥ 1 − ln 2 − ln(δx · δp / h)   (28)

for measurement entropies with position and momentum "abstract measuring device" resolutions δx and δp respectively, such that δx · δp ≪ h.
Note that to reintroduce ℏ we must explain how the differential entropy definition is modified if we pass to dimensional units, see e.g. the next subsection. Let us also point out that one should not confuse the above resolution units r, r̃ and δx, δp with the standard mean square deviation values ∆X and ∆P which are present in the canonical indeterminacy relation: ∆X · ∆P ≥ ℏ/2.
Let us set ℏ ≡ 1 (natural units system). If, following conventions, we define the squared standard deviation (i.e. variance) value for an observable A in a pure state ψ as (∆A)² = (ψ, [A − ⟨A⟩]²ψ) with ⟨A⟩ = (ψ, Aψ), then for the position X and momentum P operators we have the following version of the entropic uncertainty relation (here expressed through so-called entropy powers, see e.g. [2]):

∆X · ∆P ≥ (1/2πe) exp[S(ρ) + S(ρ̃)] ≥ 1/2   (29)

which is an alternative for Eq. (4); n = 1 being implicit.
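For a wave packet that is not Gaussian, the inequalities (4) and (29) are strict, which can be checked numerically by building the momentum density with a discrete Fourier transform. The sketch below (an editorial addition, not part of the original text) does this for a symmetric superposition of two Gaussians; ℏ = 1, the grid parameters and the particular packet are assumptions of the example.

```python
import numpy as np

N, L = 2**14, 80.0
x = (np.arange(N) - N // 2) * (L / N)
dx = L / N
psi = np.exp(-(x - 3)**2 / 4) + np.exp(-(x + 3)**2 / 4)      # two-bump packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

rho = np.abs(psi)**2
phi = np.fft.fftshift(np.fft.fft(np.fft.fftshift(psi))) * dx / np.sqrt(2 * np.pi)
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dk = k[1] - k[0]
rho_k = np.abs(phi)**2

def S(p, dz):
    p = np.where(p > 1e-300, p, 1e-300)
    return -np.sum(p * np.log(p)) * dz

Sx, Sp = S(rho, dx), S(rho_k, dk)
print(Sx + Sp, 1 + np.log(np.pi))                            # Eq. (4): left side exceeds 1 + ln(pi)
dX = np.sqrt(np.sum(x**2 * rho) * dx - (np.sum(x * rho) * dx)**2)
dP = np.sqrt(np.sum(k**2 * rho_k) * dk - (np.sum(k * rho_k) * dk)**2)
print(dX * dP, np.exp(Sx + Sp) / (2 * np.pi * np.e))         # Eq. (29): first value exceeds the second
```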

2.4 Impact of dimensional units
Let us come back to the issue of reintroducing physical units in Eq. (28). In fact, if x and p stand for one-dimensional phase space labels and f(x,p) is a normalized phase-space density, ∫ f(x,p) dxdp = 1, then the related dimensionless differential entropy is defined as follows, [10, 40]:

S_h = −∫ (hf) ln(hf) dxdp/h = −∫ f ln(hf) dxdp   (30)

where h = 2πℏ is the Planck constant. If ρ(x) = |ψ|²(x), where ψ ∈ L²(R), then ρ̃_h(p) = |F_h(ψ)|²(p) is defined in terms of the dimensional Fourier transform:

(F_h ψ)(p) = (1/√(2πℏ)) ∫ ψ(x) exp(−ipx/ℏ) dx .   (31)

Let us consider the joint density

f(x,p) ≐ ρ(x) ρ̃_h(p)   (32)

and evaluate the differential entropy S_h for this density. Remembering that ∫ ρ(x) dx = 1 = ∫ ρ̃_h(p) dp, we have formally, [40] Eq. (9):

S_h = −∫ ρ ln ρ dx − ∫ ρ̃_h ln ρ̃_h dp − ln h = S^x + S^p − ln h   (33)

to be compared with Eq. (28). The formal use of the logarithm properties before executing the integration in ∫ ρ̃_h ln(hρ̃_h) dp has left us with the previously mentioned issue (Section 1.2) of "literally taking the logarithm of a dimensional argument", i.e. that of ln h.
We recall that S_h is a dimensionless quantity, while if x has dimensions of length, then the probability density has dimensions of inverse length, and analogously in connection with momentum dimensions.
Let us denote x ≐ rδx and p ≐ r̃δp, where the labels r and r̃ are dimensionless, while δx and δp stand for the respective position and momentum dimensional (hitherto resolution) units. Then:

−∫ ρ ln ρ dx − ln(δx) = −∫ ρ ln(δx·ρ) dx   (34)

is a dimensionless quantity. Analogously,

−∫ ρ̃_h ln ρ̃_h dp − ln(δp) = −∫ ρ̃_h ln(δp·ρ̃_h) dp   (35)

is dimensionless. The first left-hand-side terms in the two above equations we recognize as S^x and S^p respectively. Hence, formally we get a manifestly dimensionless decomposition

S_h = −∫ ρ ln(δx·ρ) dx − ∫ ρ̃_h ln(δp·ρ̃_h) dp + ln(δxδp/h) = S^x_{δx} + S^p_{δp} + ln(δxδp/h)   (36)

instead of the previous one, Eq. (33). The last identity, Eq. (36), gives an unambiguous meaning to the preceding formal manipulations with dimensional quantities.

As a byproduct of our discussion, we have resolved the case of the spatially interpreted real axis, when x has dimensions of length, c.f. also [10]: S^x_{δx} is the pertinent dimensionless differential entropy definition for spatial probability densities.
Example 1: Let us go through an explicit example involving the Gauss density

ρ(x) = (1/σ√(2π)) exp[−(x − x_0)²/2σ²]   (37)

where σ is the standard deviation (its square stands for the variance). There holds S(ρ) = (1/2) ln(2πeσ²), which is a dimensionless outcome. If we pass to x with dimensions of length, then inevitably σ must have dimensions of length. It is instructive to check that in this dimensional case we have a correct dimensionless result:

S^x_{δx} = (1/2) ln[2πe (σ/δx)²]   (38)

to be compared with Eq. (34). Then we realize that S^x_{δx} vanishes if σ/δx = (2πe)^{−1/2}, hence at the dimensional value of the standard deviation σ = (2πe)^{−1/2} δx, compare e.g. [10].
Example 2: Let us invoke the simplest (naive) text-book version of the Boltzmann H-theorem, valid in the case of a rarified gas (of particles of mass m), without external forces, close to its thermal equilibrium, under an assumption of space homogeneity, [57, 58]. If the probability density function f(v) is a solution of the corresponding Boltzmann kinetic equation, then the Boltzmann H-functional (which is simply the negative of the differential entropy) H(t) = ∫ f(v) ln f(v) dv does not increase: (d/dt) H(t) ≤ 0. In the present case we know that there exists an invariant (asymptotic) density, which in the one-dimensional case has the form f∗(v) = (m/2πk_BT)^{1/2} exp[−m(v − v_0)²/2k_BT]. H(t) is known to be time-independent only if f ≐ f∗(v). We can straightforwardly, albeit formally, evaluate H∗ = ∫ f∗ ln f∗ dv = −(1/2) ln(2πek_BT/m) and become faced with an apparent dimensional difficulty, [9], since the argument of the logarithm is not dimensionless. For sure, a consistent integration outcome for H(t) should involve a dimensionless argument k_BT/m[v]² instead of k_BT/m, provided [v] stands for any unit of velocity; examples are [v] = 1 m/s (here m stands for the SI length unit, and not for a mass parameter) or 10⁻⁵ m/s. To this end, in conformity with our previous discussion, it suffices to redefine H∗ as follows, [9, 10]:

$$H_* \to H_*^{[v]} = \int f_*\,\ln([v]\cdot f_*)\,dv\,. \qquad (39)$$

Multiplying f_* by [v] we arrive at a dimensionless argument of the logarithm in the above and cure the dimensional obstacle.

Remark 7: Let us also mention the effect of a scaling transformation upon the differential entropy, [4]. We denote

$$\rho_{\alpha,\beta} = \beta\,\rho[\beta(x-\alpha)] \qquad (40)$$

where α > 0, β > 0. The respective Shannon entropy reads:

$$\mathcal{S}(\rho_{\alpha,\beta}) = \mathcal{S}(\rho) - \ln\beta\,. \qquad (41)$$

In the case of the Gaussian ρ, Eq. (37), we get S(ρ_{α,β}) = ln[(σ/β)√(2πe)]. An obvious interpretation is that the β-scaling transformation of ρ(x-α) broadens this density if β < 1 and shrinks it when β > 1. Clearly, S(ρ_{α,β}) takes the value 0 at σ = (2πe)^{-1/2}β, in analogy with our previous dimensional considerations. If the argument of ρ is assumed to have dimensions, then the scaling transformation with a dimensional β may be interpreted as a method to restore the dimensionless differential entropy value.
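The statements of Examples 1 and 2 and of Remark 7 are easily checked numerically. The following sketch (Python; the grid, the test value of σ, and the illustrative resolution unit δx = 1 mm are assumptions of the example, not quantities taken from the text) evaluates the Gaussian differential entropy, its dimensionless variant of Eq. (38), and the scaling law (41).

\begin{verbatim}
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx          # simple Riemann sum on the uniform grid

def entropy(rho):
    """Numerical S(rho) = -int rho ln(rho) dx."""
    return -integrate(rho * np.log(np.clip(rho, 1e-300, None)))

sigma = 0.7                                    # test standard deviation
rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
S_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(entropy(rho), S_exact)                   # Eq. (37)-(38): both ~ 0.5*ln(2*pi*e*sigma^2)

# Dimensionless entropy of Eq. (38) with a hypothetical resolution unit dx_unit:
dx_unit = 1.0e-3                               # 1 mm, an assumed unit of length
sigma_m = 2.0e-3                               # sigma expressed in metres (assumed)
print(0.5 * np.log(2 * np.pi * np.e * (sigma_m / dx_unit)**2))
sigma_zero = dx_unit / np.sqrt(2 * np.pi * np.e)
print(0.5 * np.log(2 * np.pi * np.e * (sigma_zero / dx_unit)**2))   # ~ 0, as stated in Example 1

# Scaling law, Eq. (41): S(rho_{alpha,beta}) = S(rho) - ln(beta).
alpha, beta = 1.0, 2.5
rho_scaled = beta * np.exp(-(beta * (x - alpha))**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(entropy(rho_scaled), S_exact - np.log(beta))
\end{verbatim}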

3 Localization: differential entropy and Fisher information

We recall that among all one-dimensional distribution functions ρ(x) with a finite mean, subject to the constraint that the standard deviation is fixed at σ, it is the Gauss function with half-width σ which sets the maximum of the differential entropy, [4]. For the record, let us add that if only the mean is given for probability density functions on R, then there is no maximum entropy distribution in their set.
Let us consider the Gaussian probability density on the real line R as a reference density function: ρ(x) = (1/σ√2π) exp[-(x-x_0)²/2σ²], in conformity with Eq. (11), but without any restriction on the value of x_0 ∈ R.
The differential entropy of the Gauss density has a simple analytic form, independent of the mean value x_0, and saturates the inequality:

$$\mathcal{S}(\rho) \le \frac{1}{2}\,\ln(2\pi e\sigma^2)\,. \qquad (42)$$

This imposes a useful bound upon the entropy power:

$$\frac{1}{\sqrt{2\pi e}}\,\exp[\mathcal{S}(\rho)] \le \sigma \qquad (43)$$

with an obvious bearing on the spatial localization of the density ρ, hence on the spatial (un)certainty of position measurements. We can say that almost surely, with probability 0.998, there is nothing to be found (measured) beyond the interval of length 6σ which is centered about the mean value x_0 of the Gaussian density ρ.
Knowing that for arbitrary density functions the differential entropy Eq. (3) is unbounded from below and from above, we realize that in the subset of all densities with a finite mean and a fixed variance σ², we actually have an upper bound set by Eq. (42). However, in contrast to coarse-grained entropies, which are always nonnegative, even for a relatively large standard deviation σ < 1/√(2πe) ≃ 0.26 the differential entropy S(ρ) is negative.
Therefore, quite apart from the previously discussed direct information theory links, cf. Eqs. (15), (18) and (19), the major role of the differential entropy is to be a measure of localization in the "state space" (actually, configuration space) of the system, [75, 76, 77].

Let us consider a one-parameter family of probability densities ρ_α(x) on R whose first moment (the mean) and second moment (effectively, the variance) are finite. The parameter dependence is here not completely arbitrary and we assume standard regularity properties that allow one to differentiate various functions of ρ_α with respect to the parameter α under the sign of an (improper) integral, [82].

Namely, let us denote ∫xρ_α(x)dx = f(α) and assume ∫x²ρ_α dx < ∞. We demand that, as a function of x ∈ R, the modulus of the partial derivative ∂ρ_α/∂α is bounded by a function G(x) which, together with xG(x), is integrable on R. This implies, [82], the existence of ∂f/∂α and an important inequality:

$$\int (x-\alpha)^2\rho_\alpha\,dx\;\cdot\;\int\left(\frac{\partial\ln\rho_\alpha}{\partial\alpha}\right)^2\rho_\alpha\,dx \;\ge\; \left(\frac{df(\alpha)}{d\alpha}\right)^2 \qquad (44)$$

directly resulting from

$$\frac{df}{d\alpha} = \int\left[(x-\alpha)\,\rho_\alpha^{1/2}\right]\left[\frac{\partial(\ln\rho_\alpha)}{\partial\alpha}\,\rho_\alpha^{1/2}\right]dx \qquad (45)$$

via the standard Schwarz inequality. Equality appears in Eq. (44) if ρ_α(x) is the Gauss function with mean value α.
At this point let us assume that the mean value of ρ_α actually equals α, and let us fix at σ² the value ⟨(x-α)²⟩ = ⟨x²⟩ - α² of the variance (σ being the standard deviation from the mean value) of the probability density ρ_α. The previous inequality Eq. (44) now takes the familiar form:

$$\mathcal{F}_\alpha \doteq \int\frac{1}{\rho_\alpha}\left(\frac{\partial\rho_\alpha}{\partial\alpha}\right)^2 dx \;\ge\; \frac{1}{\sigma^2} \qquad (46)$$

where the integral on the left-hand side is the so-called Fisher information of ρ_α, known to appear in various problems of statistical estimation theory, as well as an ingredient of a number of information-theoretic inequalities, [82, 23, 24, 75, 83]. In view of F_α ≥ 1/σ², we realize that the Fisher information is a more sensitive indicator of the wave packet localization than the entropy power, Eq. (43).
Let us define ρ_α(x) ≐ ρ(x-α). Then the Fisher information F ≐ F_α is no longer mean value (α) dependent and can readily be transformed to the conspicuously quantum mechanical form (up to a factor D² with D = ℏ/2m):

$$\frac{1}{2}\,\mathcal{F} = \frac{1}{2}\int\frac{1}{\rho}\left(\frac{\partial\rho}{\partial x}\right)^2 dx = \int\rho\cdot\frac{u^2}{2}\,dx = -\langle Q\rangle \qquad (47)$$

where u ≐ ∇ln ρ is named an osmotic velocity field, [50, 53], and the average ⟨Q⟩ = ∫ρ·Q dx is carried out with respect to the function

$$Q = 2\,\frac{\Delta\rho^{1/2}}{\rho^{1/2}}\,. \qquad (48)$$

As a consequence of Eq. (46), we have -⟨Q⟩ ≥ 1/2σ² for all relevant probability densities with any finite mean and variance fixed at σ².
When multiplied by D², the above expression for Q(x) notoriously appears in the hydrodynamical formalism of quantum mechanics as the so-called de Broglie-Bohm quantum potential (D = ℏ/2m). It appears as well in the corresponding formalism for diffusion-type processes, including the standard Brownian motion (then D = k_BT/mβ), see e.g. [53, 54, 84].
An important inequality, valid under the assumption ρ_α(x) = ρ(x-α), has been proved in [23], see also [24, 85]:

$$\frac{1}{\sigma^2} \;\le\; (2\pi e)\,\exp[-2\mathcal{S}(\rho)] \;\le\; \mathcal{F} \qquad (49)$$

It tells us that the lower bound for the Fisher information is in fact given in a sharper form by means of the (squared) inverse entropy power. Our two information measures thus appear to be correlated.
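A minimal numerical illustration of the Cramér-Rao type bound (46) and of the identity (47)-(48), F/2 = -⟨Q⟩, follows (Python sketch; the two test densities and the grid are arbitrary choices made for the illustration, not quantities from the text).

\begin{verbatim}
import numpy as np

x = np.linspace(-12.0, 12.0, 40001)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx

def fisher(rho):
    """F = int (d rho/dx)^2 / rho dx, Eq. (46) with alpha a translation parameter."""
    drho = np.gradient(rho, dx)
    return integrate(drho**2 / np.clip(rho, 1e-300, None))

def mean_Q(rho):
    """<Q> with Q = 2 (sqrt(rho))'' / sqrt(rho), Eq. (48); expected to equal -F/2."""
    s = np.sqrt(rho)
    d2s = np.gradient(np.gradient(s, dx), dx)
    return integrate(2.0 * s * d2s)            # rho * Q = 2 * sqrt(rho) * (sqrt(rho))''

sigma = 1.3
gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# A non-Gaussian density (two-component mixture), normalized on the grid.
mix = 0.5 * np.exp(-(x - 1.5)**2 / 2) + 0.5 * np.exp(-(x + 1.5)**2 / 2)
mix /= integrate(mix)
var_mix = integrate(x**2 * mix) - integrate(x * mix)**2

print(fisher(gauss), 1.0 / sigma**2)           # equality case of Eq. (46)
print(fisher(mix), 1.0 / var_mix)              # strict inequality F > 1/sigma^2
print(mean_Q(gauss), -0.5 * fisher(gauss))     # <Q> = -F/2, Eq. (47)
\end{verbatim}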

Remark 8: Let us point out that the Fisher information F(ρ) may blow up to infinity under a number of circumstances, [83]: when ρ approaches the Dirac delta behavior, when ρ vanishes over some interval in R, or when it is discontinuous. We observe that F > 0, because F may vanish only when ρ is constant everywhere on R, hence when ρ is not a probability density.

Remark 9: The values of F(ρ_α) and S(ρ_α) are α-independent if we consider ρ_α(x) = ρ(x-α). This reflects the translational invariance of the Fisher and Shannon information measures, [86]. The scaling transformation ρ_{α,β} = β ρ[β(x-α)], where α > 0, β > 0, transforms Eq. (43) to the form (2πe)^{-1/2} exp[S(ρ_{α,β})] ≤ σ/β.
Under an additional decomposition/factorization ansatz (of the quantum mechanical L²(Rⁿ) provenance) that ρ(x) ≐ |ψ|²(x), where the real or complex function ψ = √ρ exp(iφ) is a normalized element of L²(R), another important inequality holds true, [23, 2]:

$$\mathcal{F} = 4\int\left(\frac{\partial\sqrt{\rho}}{\partial x}\right)^2 dx \;\le\; 16\pi^2\tilde{\sigma}^2\,, \qquad (50)$$

provided the Fisher information takes finite values. Here, σ̃² is the variance of the "quantum mechanical momentum canonically conjugate to the position observable", up to (skipped) dimensional factors. In the above, we have exploited the Fourier transform ψ̃ ≐ (Fψ) of ψ to arrive at ρ̃ ≐ |ψ̃|² of Eq. (4), whose variance the above σ̃² actually is.
In view of the two previous inequalities (49) and (50), we find that not only the Fisher information, but also the entropy power, is bounded from below and above. Namely, we have:

$$\frac{1}{\sigma^2} \;\le\; \mathcal{F} \;\le\; 16\pi^2\tilde{\sigma}^2 \qquad (51)$$

which implies 1/2σ² ≤ -⟨Q⟩ ≤ 8π²σ̃², and furthermore

$$\frac{1}{4\pi\tilde{\sigma}} \;\le\; \frac{1}{\sqrt{2\pi e}}\,\exp[\mathcal{S}(\rho)] \;\le\; \sigma \qquad (52)$$

as a complement to Eq. (43). The most important outcome of Eq. (52) is that the differential entropy S(ρ) may typically be expected to be a well behaved quantity, with both finite lower and upper bounds.
We find it rather interesting that the Heisenberg indeterminacy relationship Eq. (29), which is normally interpreted to set a lower bound on the experimentally accessible phase-space data (e.g. volume), according to Eq. (52) ultimately appears to give rise to lower and upper bounds upon the configurational (spatial) information measure and thence upon the uncertainty (information) measure. To our knowledge, the inequalities Eqs. (51) and (52), although implicit in various information theory papers, see especially [23] and [2], have hitherto never been explicitly spelled out.
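The two-sided bounds (49)-(52) can be verified directly for a concrete wave packet. The sketch below (Python; the non-Gaussian ψ is an arbitrary test choice, and the momentum variance is obtained via Parseval's theorem as ∫|ψ′(x)|²dx, which is legitimate for a real ψ) checks the full inequality chain numerically.

\begin{verbatim}
import numpy as np

x = np.linspace(-15.0, 15.0, 2**15)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx

# A real, nodeless, non-Gaussian wave packet psi with rho = |psi|^2 (test choice).
psi = np.exp(-x**2 / 3.0) * (1.0 + 0.4 * np.cos(x))
psi /= np.sqrt(integrate(psi**2))
rho = psi**2

S = -integrate(rho * np.log(rho))
var_x = integrate(x**2 * rho) - integrate(x * rho)**2
F = 4.0 * integrate(np.gradient(psi, dx)**2)      # Eq. (50): F = 4 int (d sqrt(rho)/dx)^2 dx
var_p = integrate(np.gradient(psi, dx)**2)        # momentum variance of rho-tilde (Parseval, real psi)

print(1.0 / var_x, (2*np.pi*np.e) * np.exp(-2*S), F)   # Eq. (49): increasing chain
print(F, 16 * np.pi**2 * var_p)                        # upper bound of Eqs. (50)-(51)
print(1.0 / (4*np.pi*np.sqrt(var_p)),
      np.exp(S) / np.sqrt(2*np.pi*np.e), np.sqrt(var_x))   # entropy power bounds, Eq. (52)
\end{verbatim}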

4 Asymptotic approach towards equilibrium: Smoluchowski processes

4.1 Random walk

Let us consider a classic example of a one-dimensional random walk, where a particle is assumed to be displaced along R¹ with probability 1/2 forth and back, each step being of a unit length, [74]. If one begins from the origin 0, after G steps a particle can be found at any of the points -G, -G+1, ..., -1, 0, 1, ..., G. The probability that after G displacements the particle can be found at the point g ∈ [-G, G] is given by the Bernoulli distribution:

$$P_g = \frac{G!}{[(1/2)(G+g)]!\,[(1/2)(G-g)]!}\left(\frac{1}{2}\right)^G \qquad (53)$$

where Σ_{g=-G}^{G} P_g = 1.
We are interested in the asymptotic formula, valid for large G and g ≪ G. (Note that even for the relatively small value G = 20, and |g| ≤ 16, the accuracy level is satisfactory.) There holds:

$$\ln P_g \simeq -\frac{g^2}{2G} + \frac{1}{2}\,\ln\left(\frac{2}{\pi G}\right) \qquad (54)$$

and, accordingly,

$$P_g \simeq \left(\frac{2}{\pi G}\right)^{1/2}\exp\left(-\frac{g^2}{2G}\right)\,. \qquad (55)$$

We assume r ∼ 10⁻⁶ m to be a grating unit (i.e. the minimal step length for the walk). Let r ≪ Δx ≪ Gr (a size Δx ∼ 10⁻⁴ m is quite satisfactory). For large G and |g| ≪ G, we denote x ≐ g·r and ask for the probability ρ_G(x)Δx that a particle can be found in the interval [x, x+Δx] after G displacements. The result is [74]:

$$\rho(x) = \frac{1}{2r}\,P_g = \frac{1}{(2\pi Gr^2)^{1/2}}\,\exp\left(-\frac{x^2}{2Gr^2}\right) \qquad (56)$$

and, by assuming that a particle suffers k displacements per unit time, we can give Eq. (56) the familiar form of the heat kernel:

$$\rho(x,t) = \frac{1}{(4\pi Dt)^{1/2}}\,\exp\left(-\frac{x^2}{4Dt}\right) \qquad (57)$$

with the diffusion coefficient D = kr²/2. It is the fundamental solution of the heat equation

$$\partial_t\rho = D\,\Delta\rho\,,$$

which is the Fokker-Planck equation for the Wiener process. The differential entropy of the above time-dependent density reads:

$$\mathcal{S}(t) = \frac{1}{2}\,\ln(4\pi eDt) \qquad (58)$$

and its time evolution clearly displays the localization uncertainty growth. By means of the formula Eq. (19) we can quantify the differential entropy dynamics for all solutions of the heat equation.
Since the heat kernel determines the transition probability density for the Wiener process (free Brownian motion in R), by setting x → x - x′ and t → t - t′ ≥ 0, we can replace the previous

ρ(x,t) of Eq. (57) by p(x-x′, t-t′). This transition density allows one to deduce any given solution ρ(x,t) of the heat equation from its past data, according to: ρ(x,t) = ∫p(x-x′, t-t′)ρ(x′,t′)dx′. In particular, we can consider the process starting at t′ = 0 with any initial density ρ_0(x).
Let ρ_υ denote the convolution of a probability density ρ with a Gaussian probability density of variance υ. The transition density of the Wiener process generates such a convolution for ρ_0, with υ ≐ σ² = 2Dt. Then the de Bruijn identity, [23, 75], dS(ρ_υ)/dυ = (1/2)F(ρ_υ), directly yields the information entropy time rate for S(ρ) = S(t):

$$\frac{d\mathcal{S}}{dt} = D\cdot\mathcal{F} = D\cdot\int\frac{(\nabla\rho)^2}{\rho}\,dx \;>\; 0\,. \qquad (59)$$

The Fisher information F(ρ) is the α = 0 version of the general definition given in Eqs. (46) and (47). (The derivation of Eq. (59) amounts to differentiating an υ-dependent integrand under the sign of an improper integral, [82, 83].)
The monotonic growth of S(t) is paralleled by the linear-in-time growth of σ²(t) and the decay of F; it hence quantifies the uncertainty (disorder) increase related to the "flattening" down of ρ, see also [83, 86].
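The de Bruijn identity and the rate formula (59) lend themselves to a quick numerical check. The sketch below (Python; D and the bimodal initial density are arbitrary test choices) propagates ρ_0 by convolution with the heat kernel and compares the finite-difference rate dS/dt with D·F(ρ).

\begin{verbatim}
import numpy as np

D = 0.5
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx

def heat_evolve(rho0, t):
    """Convolution of rho0 with the heat kernel (57) of variance 2 D t."""
    kernel = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    out = np.convolve(rho0, kernel, mode="same") * dx
    return out / integrate(out)

def entropy(rho):
    return -integrate(rho * np.log(np.clip(rho, 1e-300, None)))

def fisher(rho):
    drho = np.gradient(rho, dx)
    return integrate(drho**2 / np.clip(rho, 1e-300, None))

# Non-Gaussian (bimodal) initial density, normalized on the grid.
rho0 = np.exp(-(x - 2.0)**2) + 0.5 * np.exp(-(x + 3.0)**2 / 0.5)
rho0 /= integrate(rho0)

t, dt = 1.0, 1e-3
rho_t = heat_evolve(rho0, t)
rate_fd = (entropy(heat_evolve(rho0, t + dt)) - entropy(heat_evolve(rho0, t - dt))) / (2 * dt)
print(rate_fd, D * fisher(rho_t))           # Eq. (59): dS/dt = D * F, both positive
\end{verbatim}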

4.2 Kullback entropy versus differential entropy

We emphasize that in the present paper we have deliberately avoided the use of the relative Kullback-Leibler entropy, [81, 6, 47]. This entropy notion is often invoked to tell "how far from each other" two probability densities are. The Kullback entropy is particularly useful if one investigates an approach of the system toward (or its deviation from) equilibrium, this being normally represented by a stationary density function, [48, 88]. In this context, it is employed to investigate the major issue of the dynamical origins of the increasing entropy, see [47, 48, 92]. Consult also both the standard motivations and the apparent problems encountered in connection with the celebrated Boltzmann H-theorem, [57, 58] and [67].
However, the reliability of the Kullback entropy may be questioned in the case of general parameter-dependent densities. In particular, this entropy fails to quantify properly certain features of a non-stationary dynamics of probability densities, specifically if we wish to make a "comparison" of a given density function with itself, but at different stages (instants) of its time evolution.
Let us consider a one-parameter family of Gaussian densities ρ_α = ρ(x-α), with mean α ∈ R and the standard deviation fixed at σ. These densities are not differentiated by the information (differential) entropy and share its very same value S_σ = (1/2) ln(2πeσ²), independent of α.
If we admit σ to be another free parameter, a two-parameter family of Gaussian densities ρ_α → ρ_{α,σ}(x) appears. Such densities, corresponding to different values σ and σ′, do admit an "absolute comparison" in terms of the Shannon entropy, in accordance with Eq. (19):

$$\mathcal{S}_{\sigma'} - \mathcal{S}_\sigma = \ln\left(\frac{\sigma'}{\sigma}\right)\,. \qquad (60)$$

By denoting σ ≐ σ(t) = √(2Dt) and σ′ ≐ σ(t′), we make the non-stationary (heat kernel) density amenable to the "absolute comparison" formula at different time instants t′ > t > 0: (σ′/σ) = √(t′/t).

In the above we have "compared" differential entropies of quite akin, albeit different, probability densities. Among the many inequivalent ways to evaluate the "divergence" between probability distributions, the relative (Kullback) entropy is typically used to quantify such divergence from an a priori prescribed reference density, [88, 48].
We define the Kullback entropy K(θ, θ′) for a one-parameter family of probability densities ρ_θ, so that the "distance" between any two densities in this family can be directly evaluated. Let ρ_{θ′} stand for the prescribed (reference) probability density. We have, [81, 88, 6]:

$$\mathcal{K}(\theta,\theta') \doteq \mathcal{K}(\rho_\theta|\rho_{\theta'}) = \int \rho_\theta(x)\,\ln\frac{\rho_\theta(x)}{\rho_{\theta'}(x)}\,dx\,, \qquad (61)$$

which, in view of the convexity of the function f(w) = w ln w, is nonnegative.
Let us indicate that the negative of K, namely H_c ≐ -K, named the conditional entropy [6], is predominantly used in the literature [6, 92, 48] because of its affinity (regarded as a formal generalization) to the differential entropy. Then, e.g., one investigates the approach of -K towards its maximum (usually achieved at the value zero) when a running density is bound to have a unique stationary asymptotic, [92].
If we take θ′ ≐ θ + Δθ with Δθ ≪ 1, the following approximate formula holds true under a number of standard assumptions, [81]:

$$\mathcal{K}(\theta,\theta+\Delta\theta) \simeq \frac{1}{2}\,\mathcal{F}_\theta\cdot(\Delta\theta)^2 \qquad (62)$$

where F_θ denotes the Fisher information measure, previously defined by Eq. (46). With this proviso, we can evaluate the Kullback distance within a two-parameter (α, σ) family of Gaussian densities, by taking θ → α.
Passing to α′ = α + Δα at a fixed value of σ, we arrive at:

$$\mathcal{K}(\alpha,\alpha+\Delta\alpha) \simeq \frac{(\Delta\alpha)^2}{2\sigma^2}\,. \qquad (63)$$

For the record, we note that the respective Shannon entropies do coincide: S_α = S_{α+Δα}.
Analogously, we can proceed with respect to the label σ at fixed α:

$$\mathcal{K}(\sigma,\sigma+\Delta\sigma) \simeq \frac{(\Delta\sigma)^2}{\sigma^2} \qquad (64)$$

when, irrespective of α:

$$\mathcal{S}_{\sigma+\Delta\sigma} - \mathcal{S}_\sigma \simeq \frac{\Delta\sigma}{\sigma}\,. \qquad (65)$$

By choosing θ → σ² at fixed α (now the variance σ² is modified by its increment Δ(σ²)), we get:

$$\mathcal{K}(\sigma^2,\sigma^2+\Delta(\sigma^2)) \simeq \frac{[\Delta(\sigma^2)]^2}{4\sigma^4} \qquad (66)$$

while

$$\mathcal{S}_{\sigma^2+\Delta(\sigma^2)} - \mathcal{S}_{\sigma^2} \simeq \frac{\Delta(\sigma^2)}{2\sigma^2}\,, \qquad (67)$$

which, upon the identifications σ² = 2Dt and Δ(σ²) = 2DΔt, sets an obvious connection with the differential (ΔS)(t) and thence with the time derivative Ṡ = 1/2t of the heat kernel differential entropy, Eq. (58), and the de Bruijn identity.
Our previous observations are a special case of a more general reasoning. Namely, if we consider a two-parameter θ ≐ (θ₁, θ₂) family of densities, then instead of Eq. (62) we would have arrived at

$$\mathcal{K}(\theta,\theta+\Delta\theta) \simeq \frac{1}{2}\sum_{i,j}\mathcal{F}_{ij}\,\Delta\theta_i\,\Delta\theta_j \qquad (68)$$

where i, j = 1, 2 and the Fisher information matrix F_{ij} has the form

$$\mathcal{F}_{ij} = \int \rho_\theta\,\frac{\partial\ln\rho_\theta}{\partial\theta_i}\cdot\frac{\partial\ln\rho_\theta}{\partial\theta_j}\,dx\,. \qquad (69)$$

In the case of Gaussian densities, labelled by the independent parameters θ₁ = α and θ₂ = σ (alternatively θ₂ = σ²), the Fisher matrix is diagonal and defined in terms of the previous entries F_α and F_σ (or F_{σ²}).
It is useful to note (cf. also [92]) that, in self-explanatory notation, for two Gaussian densities labelled by θ and θ′ there holds:

$$\mathcal{K}(\theta,\theta') = \ln\frac{\sigma'}{\sigma} + \frac{1}{2}\left(\frac{\sigma^2}{\sigma'^2}-1\right) + \frac{1}{2\sigma'^2}\,(\alpha-\alpha')^2\,. \qquad (70)$$

The first entry in Eq. (70) coincides with the "absolute comparison formula" for Shannon entropies, Eq. (60). However, for |θ′ - θ| ≪ 1, hence in the regime of interest for us, the second term dominates the first one.
Indeed, let us set α′ = α and consider σ² = 2Dt, Δ(σ²) = 2DΔt. Then S(σ′) - S(σ) ≃ Δt/2t, while K(θ, θ′) ≃ (Δt)²/4t². Although, for finite increments Δt, we have

$$\mathcal{S}(\sigma') - \mathcal{S}(\sigma) \simeq \sqrt{\mathcal{K}(\theta,\theta')} \simeq \frac{\Delta t}{2t}\,, \qquad (71)$$

the time derivative notion Ṡ can be defined exclusively for the differential entropy, and is meaningless in terms of the Kullback "distance".
Let us mention that no such obstacles arise in the standard cautious use of the relative Kullback entropy H_c. Indeed, normally one of the involved densities stands for the stationary reference one, ρ_{θ′}(x) ≐ ρ_*(x), while the other evolves in time, ρ_θ(x) ≐ ρ(x,t), t ∈ R⁺, thence H_c(t) ≐ -K(ρ_t|ρ_*), see e.g. [48, 92].
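The closed form (70) and its small-increment reductions (62)-(67) are easy to verify directly. The following sketch (Python; the parameter values are arbitrary) does so for the Gaussian family.

\begin{verbatim}
import numpy as np

def kullback_gauss(alpha, sigma, alpha_p, sigma_p):
    """K(theta, theta') for two Gaussians, Eq. (70); the primed density is the reference."""
    return (np.log(sigma_p / sigma)
            + 0.5 * (sigma**2 / sigma_p**2 - 1.0)
            + (alpha - alpha_p)**2 / (2.0 * sigma_p**2))

alpha, sigma, d = 0.0, 1.0, 1e-3           # base parameters and a small increment

# Eq. (63): shift of the mean at fixed sigma.
print(kullback_gauss(alpha, sigma, alpha + d, sigma), d**2 / (2 * sigma**2))

# Eq. (64): change of sigma at fixed mean.
print(kullback_gauss(alpha, sigma, alpha, sigma + d), d**2 / sigma**2)

# Eq. (66): change of the variance sigma^2 by an increment dv.
dv = 1e-3
print(kullback_gauss(alpha, sigma, alpha, np.sqrt(sigma**2 + dv)), dv**2 / (4 * sigma**4))

# By contrast, the Shannon "absolute comparison" of Eq. (60) is first order in the increment:
print(np.log((sigma + d) / sigma), d / sigma)
\end{verbatim}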

4.3 Entropy dynamics in the Smoluchowski process

We consider spatial Markov diffusion processes in R with a diffusion coefficient (constant or time-dependent) D and admit them to drive space-time inhomogeneous probability densities ρ = ρ(x,t). In the previous section we have addressed the special case of the free Brownian motion, characterized by the current velocity (field) v ≐ v(x,t) = -u(x,t) = -D∇ln ρ(x,t) and the diffusion current j ≐ v·ρ which obeys the continuity equation ∂_tρ = -∇j, this in turn being equivalent to the heat equation.

It is instructive to notice that the gradient of a potential-type function Q = Q(x, t), c.f. Eq. (48), entirely composed in terms of u:

$$Q = 2D^2\,\frac{\Delta\rho^{1/2}}{\rho^{1/2}} = \frac{1}{2}\,u^2 + D\,\nabla\cdot u \qquad (72)$$

almost trivially appears (i.e. merely as a consequence of the heat equation, [53, 94]) in the hydrodynamical (momentum) conservation law appropriate for the free Brownian motion:

$$\partial_t v + (v\cdot\nabla)\,v = -\nabla Q\,. \qquad (73)$$

A straightforward generalization refers to the diffusive dynamics of a mass m particle in an external field of force, here taken to be conservative: F = F(x) = -∇V. The associated Smoluchowski diffusion process with the forward drift b(x) = F/(mβ) is analyzed in terms of the Fokker-Planck equation for the spatial probability density ρ(x,t), [88, 89, 90, 91]:

$$\partial_t\rho = D\,\triangle\rho - \nabla\cdot(b\,\rho) \qquad (74)$$

with the initial data ρ_0(x) = ρ(x,0).
Note that if things are specialized to the standard Brownian motion in an external force field, we know a priori (due to the Einstein fluctuation-dissipation relationship, [74]) that D = k_BT/(mβ), where β is interpreted as the friction (damping) parameter, T is the temperature of the bath, and k_B is the Boltzmann constant.
We assume, modulo restrictions upon the drift function [52, 68], that the Smoluchowski dynamics can be resolved in terms of (possibly non-unique) Markovian diffusion-type processes. Then, the following compatibility equations follow in the form of hydrodynamical conservation laws for the diffusion process, [53, 94]:

$$\partial_t\rho + \nabla(v\rho) = 0 \qquad (75)$$

$$(\partial_t + v\cdot\nabla)\,v = \nabla(\Omega - Q) \qquad (76)$$

where, not to confuse this notion with the previous force field potential V, we denote by Ω(x) the so-called volume potential for the process:

$$\Omega = \frac{1}{2}\left(\frac{F}{m\beta}\right)^2 + D\,\nabla\cdot\left(\frac{F}{m\beta}\right)\,, \qquad (77)$$

while the functional form of Q is given by Eq. (72). Obviously, the free Brownian law, Eq. (73), comes out as a special case.
In the above (we use the short-hand notation v = v(x,t)):

$$v \doteq b - u = \frac{F}{m\beta} - D\,\frac{\nabla\rho}{\rho} \qquad (78)$$

defines the current velocity of Brownian particles in the external force field. This formula allows us to transform the continuity equation into the Fokker-Planck equation and back.

With a solution ρ(x,t) of the Fokker-Planck equation, we associate a differential (information) entropy S(t) = -∫ρ ln ρ dx, which is typically not a conserved quantity, [59]-[28]. The rate of change in time of S(t) readily follows.
Boundary restrictions upon ρ, vρ and bρ, namely that they vanish at spatial infinities (or at finite spatial volume boundaries), yield the rate equation:

$$\frac{d\mathcal{S}}{dt} = \int\left[\rho\,(\nabla\cdot b) + D\,\frac{(\nabla\rho)^2}{\rho}\right]dx \qquad (79)$$

to be compared with the previous, b = 0 case, Eq. (59).
Anticipating further discussion, let us stress that even in the case of a plainly irreversible diffusive dynamics, it is by no means obvious whether the differential entropy should grow, decay (diminish), or show a mixed behavior. It is often tacitly assumed that one should "typically" have Ṡ > 0, which is not true, [67, 87].
We can rewrite Eq. (79) in a number of equivalent forms, like e.g. (note that ⟨u²⟩ = -D⟨∇·u⟩)

$$D\,\dot{\mathcal{S}} = D\,\langle\nabla\cdot b\rangle + \langle u^2\rangle = D\,\langle\nabla\cdot v\rangle \qquad (80)$$

and specifically, [67, 68], as the major entropy balance equation:

$$D\,\dot{\mathcal{S}} = \langle v^2\rangle - \langle b\cdot v\rangle = -\langle u\cdot v\rangle \qquad (81)$$

where ⟨·⟩ denotes the mean value with respect to ρ.
This balance equation is extremely persuasive, since b = F/(mβ) and j = vρ combine into a characteristic "power release" expression:

$$\dot{\mathcal{Q}} \doteq \frac{d\mathcal{Q}}{dt} = \frac{1}{D}\,\frac{1}{m\beta}\int F\cdot j\,dx = \frac{1}{D}\,\langle b\cdot v\rangle\,. \qquad (82)$$

Like in the case of the not necessarily positive Ṡ, the "power release" expression Q̇ may be positive, which represents the power removal to the environment, as well as negative, which corresponds to the power absorption from the environment.
In the formal thermodynamical lore, in the above we deal with the time rate at which the mechanical work per unit of mass may possibly be dissipated (removed to the reservoir) in the form of heat, in the course of the Smoluchowski diffusion process: k_BT·Q̇ = ∫F·j dx, with T being the temperature of the bath. When there are no external forces, we have b = 0, and then the differential entropy time rate formula for the free Brownian motion, Eq. (59), reappears.
On the other hand, the positive terms in Eq. (81) and Eq. (59) represent the rate at which information entropy is put (pumped) into the diffusing system by the thermally active environment, thus causing a disorder/uncertainty growth. This particular "entropy production" rate may possibly be counterbalanced (to this end we need external forces) by the heat removal due to dissipation, according to:

$$\frac{d\mathcal{S}}{dt} = \left(\frac{d\mathcal{S}}{dt}\right)_{in} - \frac{d\mathcal{Q}}{dt} \qquad (83)$$

where Q̇ is defined in Eq. (82), while (Ṡ)_{in} = (1/D)⟨v²⟩.
Remark 10: In Refs. [68, 67, 69] a measure-theoretic and probabilistic justification was given to the interpretation of (1/D)⟨v²⟩ as the entropy production rate of the (originally stationary) diffusion process with current velocity v. We would like to point out that, traditionally, [70, 71, 61], the statistical mechanical notion of entropy production refers to the excess entropy that is pumped out of the system. An alternative statement tells about the entropy production by the physical system into the thermostat. In the present discussion, an increase of the information entropy of the Smoluchowski process definitely occurs due to the thermal environment: the differential entropy is being generated (produced) in the physical system by its environment.

Of particular interest is the case of constant information entropy, Ṡ = 0, which amounts to the existence of steady states. In the simplest case, when the diffusion current vanishes, we encounter the primitive realization of the state of equilibrium with an invariant density ρ. Then b = u = D∇ln ρ, and we readily arrive at the classic equilibrium identity for the Smoluchowski process:

$$-\frac{1}{k_BT}\,\nabla V = \nabla\ln\rho \qquad (84)$$

which determines the functional form of the invariant density in the case of a given conservative force field, [68, 88].
There is an ample discussion in Ref. [68] of how these properties match with the time reversal of the stationary diffusion process and the vanishing of the entropy production (in our lore, see e.g. [61, 60]) rate (Ṡ)_{in}.
Coming back to the general discussion, let us define the so-called thermodynamic force F_{th} ≐ v/D associated with the Smoluchowski diffusion and introduce its corresponding time-dependent potential function Ψ(x,t):

$$k_BT\,F_{th} \doteq F - k_BT\,\nabla\ln\rho = -\nabla\Psi\,. \qquad (85)$$

Notice that v = -(1/mβ)∇Ψ. In the absence of external forces (free Brownian motion), we obviously get F_{th} = -∇ln ρ = -(1/D)u.
The mean value of the potential

$$\Psi = V + k_BT\,\ln\rho \qquad (86)$$

of the thermodynamic force associates with the diffusion process an obvious analogue of the Helmholtz free energy:

$$\langle\Psi\rangle = \langle V\rangle - T\,\mathcal{S}_G \qquad (87)$$

where the dimensional version S_G ≐ k_B·S of the information entropy has been introduced (actually, it is a direct configuration-space analog of the Gibbs entropy). The expectation value ⟨V⟩ of the mechanical force potential plays here the role of the (mean) internal energy, [71, 67].
By assuming that ρVv vanishes at the integration volume boundaries (or at infinity), we easily get the time rate of the Helmholtz free energy at a constant temperature T:

$$\frac{d}{dt}\,\langle\Psi\rangle = -k_BT\,\dot{\mathcal{Q}} - T\,\dot{\mathcal{S}}_G\,. \qquad (88)$$

By employing Eq. (83), we readily arrive at

$$\frac{d}{dt}\,\langle\Psi\rangle = -(k_BT)\left(\frac{d\mathcal{S}}{dt}\right)_{in} = -(m\beta)\,\langle v^2\rangle \qquad (89)$$

which either identically vanishes (equilibrium) or remains negative.
Thus, the Helmholtz free energy either remains constant in time or decreases as a function of time at the rate set by the information entropy "production" Ṡ_{in}. One may expect that ⟨Ψ⟩(t) actually drops down to a finite minimum as t → ∞. However, this feature is a little bit deceiving. One should be aware that a finite minimum may well not exist, which is the case e.g. for the free Brownian motion. Multiple minima need to be excluded as well.

4.4 Kullback entropy versus Shannon entropy in the Smoluchowski process

In the presence of external forces the property Eq. (89) may consistently quantify an asymptotic approach towards a minimum corresponding to an invariant (presumed to be unique) probability density of the process. Indeed, by invoking Eq. (84) we realize that

$$\rho_*(x) = \frac{1}{Z}\,\exp\left(-\frac{V(x)}{k_BT}\right)\,, \qquad (90)$$

where Z = ∫exp(-V(x)/k_BT)dx, sets the minimum of ⟨Ψ⟩(t) at ⟨Ψ⟩_* = Ψ_* = -k_BT ln Z.
Let us take the above ρ_*(x) as a reference density with respect to which the divergence of ρ(x,t) is evaluated in the course of the pertinent Smoluchowski process. This divergence is well quantified by the conditional Kullback entropy H_c(t). Let us notice that

$$\mathcal{H}_c(t) = -\int \rho\,\ln\left(\frac{\rho}{\rho_*}\right)dx = \mathcal{S}(t) - \ln Z - \frac{\langle V\rangle}{k_BT}\,. \qquad (91)$$

Consequently, in view of Eqs. (88) and (83), we get

$$\dot{\mathcal{H}}_c = \dot{\mathcal{S}} + \dot{\mathcal{Q}} = (\dot{\mathcal{S}})_{in} \;\ge\; 0 \qquad (92)$$

so that (d/dt)⟨Ψ⟩ = -(k_BT)·Ḣ_c. The approach of ⟨Ψ⟩(t) towards its minimum proceeds at the very same rate as that of H_c(t) towards its maximum.
In contrast to Ḣ_c, which is non-negative, we have no growth guarantee for the differential entropy rate Ṡ, whose sign is unspecified. Nonetheless, the balance between the time rate of entropy production/removal and the power release into or out of the environment is definitely correct. We have Ṡ ≥ -Q̇.
The relationship between the two different forms of entropy, differential (Shannon) and conditional (Kullback), and their dynamics is thereby set. An exhaustive discussion of the temporal approach to equilibrium of the Gibbs and conditional entropies can be found in Refs. [92, 93]. Both invertible deterministic and non-invertible stochastic systems were addressed there.

Typically, the conditional entropy either remains constant or monotonically increases to its maximum at zero. This stays in conformity with expectations motivated by the Boltzmann H-theorem and the second law of thermodynamics.
To the contrary, the Gibbs entropy displays a different behavior, even under the very same approach-to-equilibrium circumstances: it may monotonically increase or decrease, and may as well display an oscillatory pattern. Below we shall demonstrate that this potentially strange behavior of the differential entropy gives an insight into nontrivial power transfer processes in the mean, whose assessment would not be possible in terms of the conditional entropy.

4.5 One-dimensional Ornstein-Uhlenbeck process

It is quite illuminating to exemplify the previous considerations by a detailed presentation of the standard one-dimensional Ornstein-Uhlenbeck process. We denote b(x) = -γx with γ > 0.
If the initial density is chosen in the Gaussian form, with mean value α_0 and variance σ_0², the Fokker-Planck evolution Eq. (74) preserves the Gaussian form of ρ(x,t), while modifying the mean value α(t) = α_0 exp(-γt) and the variance according to

$$\sigma^2(t) = \sigma_0^2\,\exp(-2\gamma t) + \frac{D}{\gamma}\,[1 - \exp(-2\gamma t)]\,. \qquad (93)$$

Accordingly, since the unique invariant density has the form ρ_* = (γ/2πD)^{1/2} exp(-γx²/2D), we obtain, [92]:

$$\mathcal{H}_c(t) = \exp(-2\gamma t)\,\mathcal{H}_c(\rho_0,\rho_*) = -\frac{\gamma\alpha_0^2}{2D}\,\exp(-2\gamma t) \qquad (94)$$

while, in view of our previous considerations, we have S(t) = (1/2) ln[2πeσ²(t)] and F = 1/σ²(t). Therefore

$$\dot{\mathcal{S}} = \frac{\gamma\,(D-\gamma\sigma_0^2)\,\exp(-2\gamma t)}{D - (D-\gamma\sigma_0^2)\,\exp(-2\gamma t)}\,. \qquad (95)$$

We observe that if σ_0² > D/γ, then Ṡ < 0, while σ_0² < D/γ implies Ṡ > 0. In both cases the behavior of the differential entropy is monotonic, though its growth or decay critically relies on the choice of σ_0². Irrespective of σ_0², the asymptotic value of S(t) as t → ∞ reads (1/2) ln[2πe(D/γ)].
The differential entropy evolution is anti-correlated with that of the localization, since

$$\dot{\mathcal{F}} = -\,\frac{2\gamma\,\dot{\mathcal{S}}}{D - (D-\gamma\sigma_0^2)\,\exp(-2\gamma t)}\,. \qquad (96)$$

For all σ_0², the asymptotic value of F reads γ/D.
We have here a direct control of the behavior of the "power release" expression Q̇ = Ḣ_c - Ṡ. Since

$$\dot{\mathcal{H}}_c = \frac{\gamma^2\alpha_0^2}{D}\,\exp(-2\gamma t) \;>\; 0\,, \qquad (97)$$

in the case Ṡ < 0 we encounter a continual power supply Q̇ > 0 by the thermal environment.
In the case Ṡ > 0 the situation is more complicated. For example, if α_0 = 0, we can easily check that Q̇ < 0, i.e. we have a power drainage from the environment for all t ∈ R⁺. More generally, the sign of Q̇ is negative for α_0² < (D - γσ_0²)/γ. If the latter inequality is reversed, the sign of Q̇ is not uniquely specified and suffers a change at a suitable time instant t_{change}(α_0², σ_0²).
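A direct numerical illustration of this example is straightforward (Python sketch; γ, D, α_0 and σ_0² are arbitrary test values, not taken from the text). The Gaussian parameters are propagated exactly, S(t), F(t) and the conditional-entropy rate are evaluated from them, and the asymptotics S → (1/2) ln(2πeD/γ), F → γ/D are exhibited.

\begin{verbatim}
import numpy as np

gamma, D = 1.0, 0.5                  # drift strength and diffusion coefficient (test values)
alpha0, var0 = 2.0, 0.1              # initial mean and variance of the Gaussian density
var_inf = D / gamma                  # variance of the invariant density rho_*

def var(t):                          # Eq. (93)
    return var0 * np.exp(-2*gamma*t) + var_inf * (1.0 - np.exp(-2*gamma*t))

def S(t):                            # differential entropy of the Gaussian rho(x,t)
    return 0.5 * np.log(2.0 * np.pi * np.e * var(t))

def Hc(t):                           # conditional entropy -K(rho_t | rho_*), via Eq. (70)
    a, v = alpha0 * np.exp(-gamma*t), var(t)
    return -(0.5*np.log(var_inf/v) + 0.5*(v/var_inf - 1.0) + a**2/(2.0*var_inf))

dt = 1e-6
for t in (0.01, 0.5, 2.0, 8.0):
    S_dot  = (S(t+dt)  - S(t-dt))  / (2*dt)
    Hc_dot = (Hc(t+dt) - Hc(t-dt)) / (2*dt)          # equals (S_dot)_in, Eq. (92); always >= 0
    Q_dot  = Hc_dot - S_dot                          # power-release term of Eq. (83)
    print(f"t={t:5.2f}  S={S(t):+.4f}  F={1.0/var(t):.4f}  "
          f"S_dot={S_dot:+.4e}  Hc_dot={Hc_dot:+.4e}  Q_dot={Q_dot:+.4e}")

print("asymptotics:", 0.5*np.log(2*np.pi*np.e*var_inf), gamma/D)   # S_inf and F_inf
\end{verbatim}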

4.6 Mean energy and the dynamics of Fisher information

By considering -(∇ρ)(x,t) and s(x,t), such that v = ∇s, as canonically conjugate fields, we can invoke the variational calculus. Namely, one may derive the continuity (and thus Fokker-Planck) equation together with the Hamilton-Jacobi type equation (whose gradient implies the hydrodynamical conservation law Eq. (76)):

$$\partial_t s + \frac{1}{2}\,(\nabla s)^2 - (\Omega - Q) = 0\,, \qquad (98)$$

by means of the extremal (least, with fixed end-point variations) action principle involving the mean Lagrangian:

$$\mathcal{L} = -\int \rho\left[\partial_t s + \frac{1}{2}\,(\nabla s)^2 - \left(\frac{u^2}{2} + \Omega\right)\right]dx\,. \qquad (99)$$

The related Hamiltonian (which is the mean energy of the diffusion process per unit of mass) reads

$$\mathcal{H} \doteq \int \rho\left[\frac{1}{2}\,(\nabla s)^2 - \left(\frac{u^2}{2} + \Omega\right)\right]dx\,, \qquad (100)$$

i.e. H = (1/2)(⟨v²⟩ - ⟨u²⟩) - ⟨Ω⟩.
We can evaluate the expectation value of Eq. (98), which implies the identity H = -⟨∂_t s⟩. By invoking Eq. (86), with the time-independent V, we arrive at

$$\dot{\Psi} = -\frac{k_BT}{\rho}\,\nabla(v\rho) \qquad (101)$$

whose expectation value ⟨Ψ̇⟩, in view of vρ = 0 at the integration volume boundaries, identically vanishes. Since v = -(1/mβ)∇Ψ, we define

$$s(x,t) \doteq -(1/m\beta)\,\Psi(x,t) \;\Longrightarrow\; \langle\partial_t s\rangle = 0\,, \qquad (102)$$

so that H ≡ 0 identically.
We have thus arrived at the following interplay between the mean energy and the information entropy "production" rate:

$$\frac{D}{2}\left(\frac{d\mathcal{S}}{dt}\right)_{in} = \frac{1}{2}\,\langle v^2\rangle = \int \rho\left[\frac{u^2}{2} + \Omega\right]dx \;\ge\; 0\,, \qquad (103)$$

generally valid for Smoluchowski processes with non-vanishing diffusion currents.
By recalling the notion of the Fisher information, Eq. (47), and setting F ≐ D²F_α, we can rewrite the above formula as follows:

$$\mathcal{F} = \langle v^2\rangle - 2\,\langle\Omega\rangle \;\ge\; 0 \qquad (104)$$

where F/2 = -⟨Q⟩ > 0 holds true for probability densities with finite mean and variance.
We may evaluate directly the uncertainty dynamics of the Smoluchowski process, by recalling that the Fisher information F/2 is the localization measure, which for probability densities with finite mean value and variance σ² is bounded from below by 1/σ², see e.g. Section 3.

Namely, by exploiting the hydrodynamical conservation laws Eq. (76) for the Smoluchowski process, we get:

$$\partial_t(\rho v^2) = -\nabla\cdot(\rho v^3) - 2\,\rho v\cdot\nabla(Q-\Omega)\,. \qquad (105)$$

We assume to have secured the conditions allowing to take a derivative under an indefinite integral, and take for granted that ρv³ vanishes at the integration volume boundaries. This implies the following expression for the time derivative of ⟨v²⟩:

$$\frac{d}{dt}\,\langle v^2\rangle = 2\,\langle v\cdot\nabla(\Omega - Q)\rangle\,. \qquad (106)$$

Proceeding in the same vein, in view of Ω̇ = 0, we find that

$$\frac{d}{dt}\,\langle\Omega\rangle = \langle v\cdot\nabla\Omega\rangle \qquad (107)$$

and so the equation of motion for F follows:

$$\frac{d}{dt}\,\mathcal{F} = \frac{d}{dt}\,[\langle v^2\rangle - 2\langle\Omega\rangle] = -2\,\langle v\cdot\nabla Q\rangle\,. \qquad (108)$$

Since we have ∇Q = ∇P/ρ, where P = D²ρ Δln ρ, the previous equation takes the form Ḟ = -∫ρ v·∇Q dx = -∫v·∇P dx, which is an analog of the familiar expression for the power release (dE/dt = F·v, with F = -∇V) in classical mechanics; this is to be compared with our previous discussion of the "heat dissipation" term, Eq. (82).
Remark 11: As our previous example of the one-dimensional Ornstein-Uhlenbeck process indicates, there is nothing obvious to say about the growth or decay of the various quantities involved. In this particular case we have e.g. ⟨v²⟩(t) = D·Ḣ_c = γ²α_0² exp(-2γt), hence the asymptotic value 0, while ⟨u²⟩(t) = D²/σ²(t) → γD. Accordingly, we have ⟨Ω⟩(t) → -γD/2.

5 Differential entropy dynamics in quantum theory

5.1 Balance equations

In the discussion of Smoluchowski diffusions, our major reference point was the conventional Fokker-Planck equation (74) for a probability density supporting a Markovian diffusion process. The (time-independent) drift function b was assumed to be known a priori (e.g. the conservative external forces were established on phenomenological or model construction grounds), while the initial and/or boundary data for the probability density of the process could be chosen (to a high degree) arbitrarily. Under such "normal" circumstances, the hydrodynamical conservation laws (76) come out as a direct consequence of the Fokker-Planck equation. Also, the functional expression (77) for Ω is basically known to arise if one attempts to replace an elliptic diffusion operator by a Hermitian (and possibly self-adjoint) one, [88, 68, 53].
We shall depart from the standard Brownian motion setting to more general Markovian diffusion-type processes which, while still respecting the Fokker-Planck equation, admit general time-dependent forward drifts. In fact, we invoke at this point a well defined stochastic counterpart of the Schrödinger picture quantum dynamics of wave packets, [50, 51, 53, 54, 52, 68, 85], where the notion of differential entropy and its dynamics finds a proper place. The dynamics of quantal probability densities is here resolved in terms of diffusion-type processes.
Let us assume to have chosen an arbitrary continuous (it is useful if bounded from below) function V = V(x,t) with dimensions of energy. We consider the Schrödinger equation (set D = ℏ/2m) in the form

$$i\,\partial_t\psi = -D\,\Delta\psi + \frac{\mathcal{V}}{2mD}\,\psi\,. \qquad (109)$$

The Madelung decomposition ψ = ρ^{1/2} exp(is), with the phase function s = s(x,t) defining v = ∇s, is known to imply two coupled equations: the standard continuity equation ∂_tρ = -∇(vρ) and the Hamilton-Jacobi-type equation

$$\partial_t s + \frac{1}{2}\,(\nabla s)^2 + (\Omega - Q) = 0 \qquad (110)$$

where Ω ≐ V/m and the functional form of Q coincides with that introduced previously in Eq. (72). Notice a "minor" sign change in Eq. (110) in comparison with Eq. (98).
These two equations form a coupled system, whose solutions describe a Markovian diffusion-type process: the probability density is propagated by a Fokker-Planck dynamics of the form Eq. (74) with the drift b = v + u, where u = D∇ln ρ is the osmotic velocity field.
We can mimic the calculus of variations steps of the previous section, thus arriving at the Hamiltonian (actually, the mean energy of the quantum motion per unit of mass):

$$\mathcal{H} \doteq \int \rho\left[\frac{1}{2}\,(\nabla s)^2 + \frac{u^2}{2} + \Omega\right]dx\,, \qquad (111)$$

to be compared with Eq. (100). There holds

$$\mathcal{H} = \frac{1}{2}\,[\langle v^2\rangle + \langle u^2\rangle] + \langle\Omega\rangle = -\langle\partial_t s\rangle\,. \qquad (112)$$

Of particular interest (due to its relative simplicity) is the case of time-independent V, when

$$\mathcal{H} = -\langle\partial_t s\rangle \doteq \mathcal{E} = \mathrm{const} \qquad (113)$$

is known to be a conserved finite quantity, which is not necessarily positive. Since generally H ≠ 0, we deal here with so-called finite energy diffusion-type processes, [51, 52]. The corresponding Fokker-Planck equation propagates the probability density |ψ|² = ρ, whose differential entropy S may quite nontrivially evolve in time.
Keeping intact the previous derivation procedures for (Ṡ)_{in} (while assuming the validity of mathematical restrictions upon the behavior of integrands), we encounter the information entropy balance equations in their general form, disclosed in Eqs. (81)-(83). The related differential entropy "production" rate reads:

$$(\dot{\mathcal{S}})_{in} = \frac{2}{D}\left[\mathcal{E} - \left(\frac{1}{2}\,\mathcal{F} + \langle\Omega\rangle\right)\right] \;\ge\; 0\,. \qquad (114)$$

We recall that (1/2)F = -⟨Q⟩ > 0, which implies E - ⟨Ω⟩ ≥ (1/2)F > 0. Therefore, the localization measure F has a definite upper bound: the pertinent wave packet cannot be localized too sharply. We notice that the localization (Fisher) measure

$$\mathcal{F} = 2\,(\mathcal{E} - \langle\Omega\rangle) - \langle v^2\rangle \qquad (115)$$

in general evolves in time. Here E is a constant and Ω̇ = 0.
By invoking the hydrodynamical conservation laws, we find that the dynamics of the Fisher information follows the equation:

$$\frac{d\mathcal{F}}{dt} = +2\,\langle v\cdot\nabla Q\rangle \qquad (116)$$

and that there holds

$$\frac{1}{2}\,\dot{\mathcal{F}} = -\frac{d}{dt}\left[\frac{1}{2}\,\langle v^2\rangle + \langle\Omega\rangle\right] \qquad (117)$$

which is to be compared (notice the opposite sign of the right-hand-side expression) with the result we have obtained for Smoluchowski processes.
Obviously, now we have Ḟ = +∫v·∇P dx, with the same functional form for P as before. We interpret Ḟ as a measure of the power transfer in the course of which the (de)localization "feeds" the diffusion current and in reverse. Here we encounter a negative feedback between the localization and the proper energy of motion, which keeps intact the overall mean energy H = E of the quantum motion. See e.g. also [53].
In the case v = 0, we have E = (1/2)F + ⟨Ω⟩ and no entropy "production" nor dynamics of uncertainty. There holds Ṡ = 0 and we deal with time-reversible stationary diffusion processes and their invariant probability densities ρ(x), [68, 52].
Remark 12: Let us indicate that the phase function s(x,t) shows certain (remnant) features of the Helmholtz potential Ψ and of ⟨Ψ⟩. This behavior is not unexpected, since e.g. the ground state densities (and other invariant densities of stationary states) are directly related to the time-reversible stationary diffusion-type processes of Refs. [52, 68]. We have -⟨∂_t s⟩ = E. In view of v = ∇s and the assumed vanishing of sρv at the integration volume boundaries, we get:

$$\frac{d}{dt}\,\langle s\rangle = \langle v^2\rangle - \mathcal{E}\,. \qquad (118)$$

The previously mentioned case of no entropy "production" refers to v = 0 and thus s = s_0 - E·t.
We recall that the corresponding derivation of Eq. (89) has been carried out for v = -(1/mβ)∇Ψ, with ⟨Ψ̇⟩ = 0. Hence, as close as possible a link with the present discussion is obtained if we re-define s into s_Ψ ≐ -s. Then we have

$$\frac{d}{dt}\,\langle s_\Psi\rangle = \mathcal{E} - \langle v^2\rangle\,. \qquad (119)$$

For stationary quantum states, when v = 0 identically, we get (d/dt)⟨s_Ψ⟩ = E, in contrast to the standard Fokker-Planck case of (d/dt)⟨Ψ⟩ = 0.
Interestingly enough, we can write the generalized Hamilton-Jacobi equation, when specified to the v = 0 regime, with respect to s_Ψ. Indeed, there holds ∂_t s_Ψ = Ω - Q, in close affinity with Eq. (98) in the same regime.

5.2 Differential entropy dynamics exemplified

5.2.1 Free evolution

Let us consider the probability density in one space dimension:

$$\rho(x,t) = \frac{\alpha}{[\pi(\alpha^4+4D^2t^2)]^{1/2}}\,\exp\left[-\frac{x^2\alpha^2}{\alpha^4+4D^2t^2}\right] \qquad (120)$$

and the phase function

$$s(x,t) = \frac{2D^2x^2t}{\alpha^4+4D^2t^2} - D\,\arctan\left(\frac{2Dt}{\alpha^2}\right) \qquad (121)$$

which determine a free wave packet solution of equations (109) and (110), i.e. one obtained for the case V ≡ 0, with the initial data ψ(x,0) = (πα²)^{-1/4} exp(-x²/2α²).
We have:

$$b(x,t) = v(x,t) + u(x,t) = \frac{2D\,(2Dt-\alpha^2)\,x}{\alpha^4+4D^2t^2} \qquad (122)$$

and the Fokker-Planck equation with the forward drift b(x,t) is solved by the above ρ. In the present case, the differential entropy reads:

$$\mathcal{S}(t) = \frac{1}{2}\,\ln\left[2\pi e\,\langle X^2\rangle(t)\right] \qquad (123)$$

where ⟨X²⟩ ≐ ∫x²ρ dx = (α⁴+4D²t²)/2α². Its time rate DṠ = ⟨v²⟩ - ⟨b·v⟩ equals:

$$D\,\frac{d\mathcal{S}}{dt} = \frac{4D^3t}{\alpha^4+4D^2t^2} \;\ge\; 0 \qquad (124)$$

for t ≥ 0. Its large time asymptotic is D/t.
Furthermore, we have

$$D\,(\dot{\mathcal{S}})_{in} = \langle v^2\rangle = \frac{8D^4t^2}{\alpha^2(\alpha^4+4D^2t^2)} \qquad (125)$$

with the obvious large time asymptotic value 2D²/α²: the differential entropy production remains untamed for all times. Due to ⟨u²⟩ = 2D²α²/(α⁴+4D²t²), there holds

$$\mathcal{E} = \frac{1}{2}\,(\langle v^2\rangle + \langle u^2\rangle) = \frac{D^2}{\alpha^2}\,. \qquad (126)$$

Accordingly, the quantum mechanical analog of the (entropy, rather than heat) "dissipation" term D·Q̇ reads

$$D\,\dot{\mathcal{Q}} = \langle b\cdot v\rangle = \frac{4D^3t\,(2Dt-\alpha^2)}{\alpha^2(\alpha^4+4D^2t^2)} \qquad (127)$$

and, while taking negative values for t < α²/2D, it turns out to be positive for larger times. Formally speaking, after a short entropy "absorption" period we pass to the entropy "dissipation" regime; in view of the D/t asymptotic of DṠ, for large times the dissipation term nearly cancels D(Ṡ)_{in} ∼ 2D²/α².

These differential entropy balance features parallel a continual growth of the mean kinetic energy (1/2)⟨v²⟩ from the initial value 0 towards its asymptotic value D²/α² = E. Note that the negative feedback is here displayed by the behavior of ⟨u²⟩, which drops down from the initial value 2D²/α² towards 0. It is also instructive to notice that in the present case F(t) = D²/⟨X²⟩(t). We can readily check that Ḟ = d⟨u²⟩/dt = -d⟨v²⟩/dt.
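The closed-form statements of this subsection are easily cross-checked numerically (Python sketch; α, D, the grid, and the sample times are arbitrary test choices): the density (120) is sampled on a grid, S(t) is compared with (1/2) ln[2πe⟨X²⟩(t)], and the constancy of E = (1/2)(⟨v²⟩+⟨u²⟩) = D²/α² is verified.

\begin{verbatim}
import numpy as np

D, a = 0.5, 1.2                        # D = hbar/2m and the width parameter alpha (test values)
x = np.linspace(-60.0, 60.0, 60001)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx

def rho(t):                            # Eq. (120)
    w = a**4 + 4.0 * D**2 * t**2
    return (a / np.sqrt(np.pi * w)) * np.exp(-x**2 * a**2 / w)

def v(t):                              # current velocity, gradient of the phase (121)
    return 4.0 * D**2 * x * t / (a**4 + 4.0 * D**2 * t**2)

def u(t):                              # osmotic velocity u = D d(ln rho)/dx
    return -2.0 * D * a**2 * x / (a**4 + 4.0 * D**2 * t**2)

for t in (0.0, 1.0, 5.0, 20.0):
    r = rho(t)
    S_num = -integrate(r * np.log(r))
    X2 = integrate(x**2 * r)
    E = 0.5 * (integrate(r * v(t)**2) + integrate(r * u(t)**2))
    print(f"t={t:5.1f}  S={S_num:.5f}  "
          f"0.5*ln(2*pi*e*<X^2>)={0.5*np.log(2*np.pi*np.e*X2):.5f}  "
          f"E={E:.6f}  D^2/a^2={D**2/a**2:.6f}")
\end{verbatim}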

5.2.2 Steady state

We choose the probability density in the form:

$$\rho(x,t) = \left(\frac{\omega}{2\pi D}\right)^{1/2}\exp\left[-\frac{\omega}{2D}\,(x-q(t))^2\right] \qquad (128)$$

where the classical harmonic dynamics with particle mass m and frequency ω is involved, such that q(t) = q_0 cos(ωt) + (p_0/mω) sin(ωt) and p(t) = p_0 cos(ωt) - mωq_0 sin(ωt).
One can easily verify that (109) and (110) hold true identically once we set V = (1/2)ω²x² and consider:

$$s(x,t) = \frac{1}{2m}\left[x\,p(t) - \frac{1}{2}\,p(t)\,q(t) - mD\omega t\right]\,. \qquad (129)$$

The forward drift takes the form:

$$b(x,t) = \frac{1}{m}\,p(t) - \omega\,(x - q(t)) \qquad (130)$$

and the above ρ solves the corresponding Fokker-Planck equation.
The differential entropy is a constant, equal to S = (1/2) ln(2πeD/ω). Although trivially dS/dt = 0, all the previous arguments can be verified. For example, we have v = ∇s = p(t)/2m and therefore an oscillating entropy "production" term D(Ṡ)_{in} = p²(t)/4m², which is balanced by an oscillating "dissipative" counter-term to yield Ṡ = 0. Suitable expressions for ⟨s⟩ and ⟨∂_t s⟩ easily follow.
Concerning the Fisher measure, we obviously have F = ω/D, which is a constant of motion.

5.2.3 Squeezed state

Let us consider [33] the squeezed wave function of the harmonic oscillator. We adopt the re-scaled units ℏ = ω = m = 1, hence also D = 1/2. The solution of the Schrödinger equation i∂_tψ = -(1/2)Δψ + (x²/2)ψ, with the initial data ψ(x,0) = (γ²π)^{-1/4} exp(-x²/2γ²) and γ ∈ (0,∞), is defined in terms of the probability density:

$$\rho(x,t) = \frac{1}{(2\pi)^{1/2}\,\sigma(t)}\,\exp\left(-\frac{x^2}{2\sigma^2(t)}\right) \qquad (131)$$

where

$$2\sigma^2(t) = \frac{1}{\gamma^2}\,\sin^2 t + \gamma^2\cos^2 t \qquad (132)$$

and the phase function

$$s(x,t) = \frac{t}{2} + \frac{\pi}{4} + \arctan(\gamma^2\cot t) + \frac{(1/\gamma^2 - \gamma^2)\,\sin 2t}{8\,\sigma^2(t)}\,x^2\,. \qquad (133)$$

Now the differential entropy S = (1/2) ln[2πeσ²(t)] displays a periodic behavior in time, whose level of complexity depends on the particular value of the squeezing parameter γ. The previously mentioned negative feedback is here manifested through (counter)oscillations of the localization, in conformity with the dynamics of σ²(t) and the corresponding oscillating dynamics of the Fisher measure F = 1/σ²(t).
See e.g. also [33] for a pictorial analysis and an instructive computer-assisted discussion of the Schrödinger cat state (a superposition of harmonic oscillator coherent states with the same amplitude but opposite phases), with the time evolution of the corresponding differential entropy.
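The oscillatory behavior described above can be displayed with a few lines (Python sketch; the re-scaled units of the text are used and the value of the squeezing parameter γ is an arbitrary test choice).

\begin{verbatim}
import numpy as np

gamma = 0.4                            # squeezing parameter (test value)

def var(t):                            # Eq. (132): 2 sigma^2 = sin^2(t)/gamma^2 + gamma^2 cos^2(t)
    return 0.5 * (np.sin(t)**2 / gamma**2 + gamma**2 * np.cos(t)**2)

for t in np.linspace(0.0, np.pi, 9):
    S = 0.5 * np.log(2.0 * np.pi * np.e * var(t))   # differential entropy of the Gaussian density
    F = 1.0 / var(t)                                 # Fisher localization measure
    print(f"t={t:5.3f}  sigma^2={var(t):.4f}  S={S:+.4f}  F={F:8.3f}")

# S(t) and F(t) oscillate with period pi, in opposite phase: sharper localization
# (small sigma^2, large F) corresponds to the minima of the differential entropy.
\end{verbatim}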

5.2.4 Stationary states

In contrast to generic applications of the standard Fokker-Planck equation, where one takes for granted that there is a unique positive stationary probability density, the situation looks otherwise if we admit the Schrödinger equation as the primary dynamical rule for the evolution of (inferred) probability densities. For a chosen potential, all available stationary quantum states may serve the purpose, since then we have nonnegative (zeroes are now admitted) ρ_*(x), and v(x) = 0 identically (we stay in one spatial dimension).
The standard harmonic oscillator may serve as an instructive example. One may e.g. consult Fig. 3 in [30] to check the behavior of both position and momentum differential entropies, and of their sum, depending on the energy eigenvalue. All these stationary state values grow monotonically with n = 1, 2, ..., 60, [30], and follow the same pattern in the asymptotic regime n ∼ 500, [33].
For convenience we shall refer to the Schrödinger eigenvalue problem with scaled-away physical units. We consider (compare e.g. Eq. (109) with D → 1/2)

$$\left[-\frac{1}{2}\,\Delta + \frac{x^2}{2}\right]\sqrt{\rho_*} = \left(n+\frac{1}{2}\right)\sqrt{\rho_*}\,. \qquad (134)$$

In terms of a suitable Hamilton-Jacobi type equation we can address the same problem by seeking solutions of the equation

$$n + \frac{1}{2} = \Omega - Q \qquad (135)$$

with respect to √ρ_*, provided we set Ω = x²/2, define u = ∇ln√ρ_*, and demand that Q = u²/2 + (1/2)∇·u.
For the harmonic oscillator problem, we can refer to standard textbooks. For each value of n we recover a corresponding unique stationary density: √ρ_* → ρ_n^{1/2} with n = 0, 1, 2, .... We have:

$$\rho_n^{1/2}(x) = \frac{1}{(2^n\,n!\,\sqrt{\pi})^{1/2}}\,\exp\left(-\frac{x^2}{2}\right)H_n(x) \qquad (136)$$

where H_n(x) stands for the n-th Hermite polynomial: H_0 = 1, H_1 = 2x, H_2 = 2(2x²-1), H_3 = 4x(2x²-3), and so on.
We immediately infer e.g. b_0 = -x → Q = x²/2 - 1/2, next b_1 = (1/x) - x → Q = x²/2 - 3/2, then b_2 = [4x/(2x²-1)] - x → Q = x²/2 - 5/2, and b_3 = [(1/x) + 4x/(2x²-3)] - x → Q = x²/2 - 7/2, to be continued for n > 3. Therefore Eq. (135) is here a trivial identity.

Obviously, except for the ground state, which is strictly positive, all remaining stationary states are merely nonnegative. An open problem, first generally addressed in [95], see also [96], is to implement a continuous dynamical process for which any of the induced stationary densities may serve as an invariant asymptotic one. An obvious (Ornstein-Uhlenbeck) solution is known for the ground state density. Its ample discussion (albeit without mentioning the quantum connotation) has been given in Section 4.
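For the stationary densities (136), the position differential entropy can be evaluated numerically. The sketch below (Python, using numpy's physicists' Hermite utilities; the grid and the range of n are arbitrary choices) reproduces the monotonic growth with n mentioned above; it also prints the ground-state value (1/2) ln(πe) for comparison.

\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

x = np.linspace(-12.0, 12.0, 24001)
dx = x[1] - x[0]
integrate = lambda f: np.sum(f) * dx

def rho_n(n):
    """Stationary density rho_n = |(2^n n! sqrt(pi))^{-1/2} exp(-x^2/2) H_n(x)|^2, Eq. (136)."""
    c = np.zeros(n + 1); c[n] = 1.0                   # coefficients selecting H_n
    amp = hermval(x, c) * np.exp(-x**2 / 2.0) / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return amp**2

for n in range(7):
    r = rho_n(n)
    S = -integrate(np.where(r > 0, r * np.log(np.clip(r, 1e-300, None)), 0.0))
    print(f"n={n}:  norm={integrate(r):.6f}  S_n={S:.5f}")   # S_n grows monotonically with n

print("0.5*ln(pi*e) =", 0.5 * np.log(np.pi * np.e))           # ground-state differential entropy
\end{verbatim}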

6 Outlook

One may raise the issue of what the entropy functionals are good for. In non-equilibrium statistical mechanics of gases one invokes them to solve concrete physical problems: for example, to address the second law of thermodynamics and the related Boltzmann H-theorem from a probabilistic point of view.
We are strongly motivated by a number of reported problems with the rigorous formulation of the effects of noise on entropy evolution and with the justification of the entropy growth paradigm for model systems, [58, 57, 89, 92, 13, 97], and by a long list of mathematically oriented papers on the large time asymptotics of stochastic systems, [98]-[102]. Therefore, the major issue addressed in the present paper is that of quantifying the dynamics of probability densities in terms of proper entropy functionals. The formalism was designed to encompass standard diffusion processes of non-equilibrium statistical physics and the Schrödinger-picture implemented dynamics of probability densities related to pure quantum states in L²(R). In the latter case an approach towards equilibrium has not been expected to occur at all.
To this end, the behavior in time of the Shannon and Kullback-Leibler entropies has been investigated in classical and quantum mechanical contexts. The utility of a particular form of entropy for a given dynamical model appears to be basically purpose dependent. The use of the Kullback-Leibler entropy encounters limitations, not shared by the differential entropy, when the dynamical process is rapid or, alternatively, if one is interested in its short-time features.
On the contrary, it is the conditional Kullback-Leibler entropy which is often invoked in rigorous formulations of the Boltzmann H-theorem under almost-equilibrium conditions, and of its analogues for stochastic systems. The large time asymptotics of solutions of Fokker-Planck equations, if analyzed in terms of this entropy, gives reliable results.
However, our analysis of Smoluchowski diffusion processes and of the exemplary Ornstein-Uhlenbeck process demonstrates that a deeper insight into the underlying non-equilibrium physical phenomena (the inherent power transfer processes) is available only in terms of the Shannon entropy and its time rate of change. This insight is inaccessible in terms of the Kullback-Leibler entropy. The differential entropy need not increase, even in the case of a plainly irreversible dynamics.
The monotonic growth in time of the conditional Kullback entropy (when applicable) should not necessarily be related to the "dynamical origins of the increasing entropy", [47, 93]. We would rather say that the conditional entropy is well suited to stay in correspondence with the lore of the second law of thermodynamics, since by construction its time behavior is monotonic if one quantifies an asymptotic approach towards a stationary density. In the case of Smoluchowski processes, the time rate of the conditional Kullback entropy was found to coincide with the corresponding differential (Shannon) entropy "production" rate. The differential entropy itself need not grow and may as well change its dynamical regime from growth to decay and in reverse, even with the entropy "production" involved.
Balance equations for the differential entropy and the Fisher information measure involve a nontrivial power transfer. In the case of Smoluchowski processes this power release can be easily attributed to the entropy removal from the system or to the entropy absorption from (drainage of) the thermostat.
In the quantum mechanical regime, the inherent power transfer is related to metamorphoses of various forms of mean energy among themselves and does not need the notion of a thermostat external to the system.
Apart from the above observations, we have provided a comprehensive review of the varied appearances of the differential entropy in the existing literature on both classical and quantum dynamical systems. As a byproduct of the general discussion, we have described its specific quantum manifestations, in the specific (pure quantum states) regime where the traditional von Neumann entropy is of not much use.

Acknowledgement: The paper has been supported by the Polish Ministry of Scientific Research and Information Technology under the (solicited) grant No PBZ-MIN-008/P03/2003. I would like to thank Professor Robert Alicki for help in the quest for some hardly accessible references.

Dedication: This paper is dedicated to Professor Rafael Sorkin on the occasion of his 60th birthday, with friendly admiration.

References

[1] Alicki, R. and Fannes, M.: Quantum Dynamical Systems, Oxford University Press, Oxford, 2001

[2] Ohya, M. and Petz, D.:Quantum Entropy and Its use, Springer-Verlag, Berlin, 1993

[3] Wehrl, A.: General properties of entropy, Rev. Mod. Phys. 50 (1978), 221-260

[4] Shannon, C. E.: A mathematical theory of communication, Bell Syst. Techn. J. 27, (1948), 379-423, 623-656

[5] Cover, T. M. and Thomas, J. A.: Elements of Information Theory, Wiley, NY, 1991

[6] Sobczyk, K.: Information Dynamics: Premises, Challenges and Results, Mechanical Systems and Signal Processing 15 (2001), 475-498

[7] Yaglom, A. M. and Yaglom, I. M.: Probability and Information, D. Reidel, Dordrecht, 1983

[8] Hartley, R. V. L.: Transmission of information, Bell Syst. Techn. J. 7 (1928), 535-563

[9] Brillouin, L.: Science and Information Theory, Academic Press, NY, 1962

[10] Ingarden, R. S., Kossakowski, A. and Ohya M.: Information Dynamics and Open Systems, Kluwer, Dordrecht, 1997

[11] Brukner, Č. and Zeilinger, A.: Conceptual inadequacy of the Shannon information in quantum measurements, Phys. Rev. A 63 (2002), 022113

[12] Mana, P. G. L.: Consistency of the Shannon entropy in quantum experiments, Phys. Rev. A 69 (2004), 062108

[13] Jaynes, E. T.: Information theory and statistical mechanics. II, Phys. Rev. 108 (1957), 171-190

[14] Stotland, A. et al.: The information entropy of quantum mechanical states, Europhys. Lett. 67 (2004), 700-706

[15] Partovi, M. H.: Entropic formulation of uncertainty for quantum measurements, Phys. Rev. Lett. 50 (1983), 1883-1885

[16] Adami, C.: Physics of information, arXiv:quant-ph/040505, (2004)

[17] Deutsch, D.: Uncertainty in quantum measurements, Phys. Rev. Lett. 50 (1983), 631-633

[18] Garbaczewski, P. and Karwowski, W.: Impenetrable barriers and canonical quantization, Am. J. Phys. 72 (2004), 924-933

[19] Hirschman, I. I.: A note on entropy, Am. J. Math. 79 (1957), 152-156

[20] Beckner, W.: Inequalities in Fourier analysis, Ann. Math. 102 (1975), 159-182

[21] Białynicki-Birula, I. and Mycielski, J.: Uncertainty Relations for Information Entropy in Wave Mechanics, Commun. Math. Phys. 44 (1975), 129-132

[22] Białynicki-Birula, I. and Madajczyk, J.: Entropic uncertainty relations for angular distributions, Phys. Lett. A 108 (1985), 384-386

[23] Stam, A. J.: Some inequalities satisfied by the quantities of information of Fisher and Shannon, Inf. and Control 2 (1959), 101-112

[24] Dembo, A. and Cover, T.: Information theoretic inequalities, IEEE Trans. Inf. Th. 37 (1991), 1501-1518

[25] Maasen, H. and Uffink, J. B. M.: Generalized Entropic Uncertainty Relations, Phys. Rev. Lett. 60 (1988), 1103-1106

[26] Blankenbecler, R. and Partovi, M. H.: Uncertainty, entropy, and the statistical mechanics of microscopic systems, Phys. Rev. Lett. 54 (1985), 373-376

[27] Sánchez-Ruiz, J.: Asymptotic formula for the quantum entropy of position in energy eigenstates, Phys. Lett. A 226 (1997), 7-13

[28] Halliwell, J. J.: Quantum-mechanical histories and the uncertainty principle: Information-theoretic inequalities, Phys. Rev. D 48 (1993), 2739-2752

[29] Gadre, S. R. et al.: Some novel characteristics of atomic information entropies, Phys. Rev. A 32 (1985), 2602-2606

[30] Yáñez, R. J., Van Assche, W. and Dehesa, J. S.: Position and momentum information entropies of the D-dimensional harmonic oscillator and hydrogen atom, Phys. Rev. A 50 (1994), 3065-3079

[31] Yáñez, R. J. et al.: Entropic integrals of hyperspherical harmonics and spatial entropy of D-dimensional central potentials, J. Math. Phys. 40 (1999), 5675-5686

[32] Buyarov, V. et al.: Computation of the entropy of polynomials orthogonal on an interval, SIAM J. Sci. Comp. to appear (2004), also math.NA/0310238

[33] Majernik, V. and Opatrný, T.: Entropic uncertainty relations for a quantum oscillator, J. Phys. A: Math. Gen. 29 (1996), 2187-2197

[34] Majernik, V. and Richterek, L.: Entropic uncertainty relations for the infinite well, J. Phys. A: Math. Gen. 30 (1997), L49-L54

[35] Massen, S. E. and Panos C. P.: Universal property of the information entropy in atoms, nuclei and atomic clusters, Phys. Lett. A 246 (1998), 530-532

[36] Massen, S. E. et al.: Universal property of information entropy in fermionic and bosonic systems, Phys. Lett. A 299 (2002), 131-135

[37] Massen, S. E.: Application of information entropy to nuclei, Phys. Rev. C 67 (2003), 014314

[38] Coffey, M. W.: Asymptotic relation for the quantum entropy of momentum in energy eigenstates, Phys. Lett. A 324 (2004), 446-449

[39] Coffey, M. W.: Semiclassical position entropy for hydrogen-like atoms, J. Phys. A: Math. Gen. 36 (2003), 7441-7448

[40] Dunkel, J. and Trigger, S. A.: Time-dependent entropy of simple quantum model systems, Phys. Rev. A 71 (2005), 052102

[41] Santhanam, M. S.: Entropic uncertainty relations for the ground state of a coupled system, Phys. Rev. A 69 (2004), 042301

[42] Balian, R.: Random matrices and information theory, Nuovo Cim. B 57 (1968), 183-103

[43] Werner, S. A. and Rauch, H.: Neutron interferometry: Lessons in Experimental Quantum Physics, Oxford University Press, Oxford, 2000

[44] Zeilinger, A. et al.: Single- and double-slit diffraction of neutrons, Rev. Mod. Phys. 60 (1988), 1067-1073

[45] Caves, C. M. and Fuchs, C.: Quantum information: how much information in a state vector?, Ann. Israel Phys. Soc. 12 (1996), 226-237

[46] Newton, R. G.: What is a state in quantum mechanics?, Am. J. Phys. 72 (2004), 348-350

[47] Mackey, M. C.: The dynamic origin of increasing entropy, Rev. Mod. Phys. 61 (1989), 981-1015

[48] Lasota, A. and Mackey, M. C.: Chaos, Fractals and Noise, Springer-Verlag, Berlin, 1994

[49] Berndl, K. et al.: On the global existence of Bohmian mechanics, Commun. Math. Phys. 173 (1995), 647-673

[50] Nelson, E.: Dynamical Theories of Brownian Motion, Princeton University Press, Princeton, 1967

[51] Carlen, E.: Conservative diffusions, Commun. Math. Phys. 94 (1984), 293-315

[52] Eberle, A.: Uniqueness and Non-uniqueness of Semigroups Generated by Singular Diffusion Operators, LNM vol. 1718, Springer-Verlag, Berlin, 2000

[53] Garbaczewski, P.: Perturbations of noise: Origins of isothermal flows, Phys. Rev. E 59 (1999), 1498-1511

[54] Garbaczewski, P. and Olkiewicz, R.: Feynman-Kac kernels in Markovian representations of the Schrödinger interpolating dynamics, J. Math. Phys. 37 (1996), 732-751

[55] Ambegaokar, V. and Clerk, A.: Entropy and time, Am. J. Phys. 67 (1999), 1068-1073

[56] Trębicki, J. and Sobczyk, K.: Maximum entropy principle and non-stationary distributions of stochastic systems, Probab. Eng. Mechanics 11 (1996), 169-178

[57] Huang, K.: Statistical Mechanics, Wiley, New York, 1987

[58] Cercignani, C.: Theory and Application of the Boltzmann Equation, Scottish Academic Press, Edinburgh, 1975

[59] Daems, D. and Nicolis, G.: Entropy production and phase space volume contraction, Phys. Rev. E 59 (1999), 4000-4006

[60] Dorfman, J. R.: An Introduction to Chaos in Nonequilibrium Statistical Physics, Cambridge Univ. Press, Cambridge, 1999

[61] Gaspard, P.: Chaos, Scattering and Statistical Mechanics, Cambridge Univ. Press, Cambridge, 1998

[62] Deco, G. et al.: Determining the information flow of dynamical systems from continuous probability distributions, Phys. Rev. Lett. 78 (1997), 2345-2348

[63] Bologna, M. et al.: Trajectory versus probability density entropy, Phys. Rev. E 64 (2001), 016223

[64] Bag, B. C. et al.: Noise properties of stochastic processes and entropy production, Phys. Rev. E 64 (2001), 026110

[65] Bag, B. C.: Upper bound for the time derivative of entropy for nonequilibrium stochastic processes, Phys. Rev. E 65 (2002), 046118

[66] Hatano, T. and Sasa, S.: Steady-State Thermodynamics of Langevin Systems, Phys. Rev. Lett. 86 (2001), 3463-3466

[67] Qian, H.: Mesoscopic nonequilibrium thermodynamics of single macromolecules and dynamic entropy-energy compensation, Phys. Rev. E 65 (2001), 016102

[68] Jiang, D-Q, Qian, M. and Qian, M-P.: Mathematical theory of nonequilibrium steady states, LNM vol. 1833, Springer-Verlag, Berlin, 2004

[69] Qian, H., Qian, M. and Tang, X.: Thermodynamics of the general diffusion process: time- reversibility and entropy production, J. Stat. Phys. 107 (2002), 1129-1141

[70] Ruelle, D.: Positivity of entropy production in nonequilibrium statistical mechanics, J. Stat. Phys. 85 (1996), 1-23

[71] Munakata, T., Igarashi, A. and Shiotani, T.: Entropy and entropy production in simple stochastic models, Phys. Rev. E 57 (1998), 1403-1409

[72] Tribus, M. and Rossi, R.: On the Kullback information measure as a basis for information theory: Comments on a proposal by Hobson and Chang, J. Stat. Phys. 9 (1973), 331-338

[73] Smith, J. D. H.: Some observations on the concepts of information-theoretic entropy and randomness, Entropy 3 (2001), 1-11

[74] Chandrasekhar, S.: Stochastic problems in physics and astronomy, Rev. Mod. Phys. 15 (1943), 1-89

[75] Hall, M. J. W.: Universal geometric approach to uncertainty, entropy and information, Phys. Rev. A 59 (1999), 2602-2615

[76] Pipek, J. and Varga, I.: Universal classification scheme for the spatial-localization properties of one-particle states in finite, d-dimensional systems, Phys. Rev. A 46 (1992), 3148-3163

[77] Varga, I. and Pipek, J.: Rényi entropies characterizing the shape and the extension of the phase-space representation of quantum wave functions in disordered systems, Phys. Rev. E 68 (2003), 026202

[78] McClendon, M. and Rabitz, H.: Numerical simulations in stochastic mechanics, Phys. Rev. A 37 (1988), 3479-3492

[79] Garbaczewski, P.: Signatures of randomness in quantum spectra, Acta Phys. Pol. B 33 (2002), 1001-1024

[80] Hu, B. et al.: Quantum chaos of a kicked particle in an infinite potential well, Phys. Rev. Lett. 82 (1999), 4224-4227

[81] Kullback, S.: Information Theory and Statistics, Wiley, NY, 1959

[82] Cramér, H.: Mathematical Methods of Statistics, Princeton University Press, Princeton, 1946

[83] Hall, M. J. W.: Exact uncertainty relations, Phys. Rev. A 64 (2001), 052103

[84] Garbaczewski, P.: Stochastic models of exotic transport, Physica A 285 (2000), 187-198

[85] Carlen, E. A.: Superadditivity of Fisher’s information and logarithmic Sobolev inequalities, J. Funct. Anal. 101 (1991), 194-211

[86] Frieden, B. R. and Soffer, B. H.: Lagrangians of physics and the game of Fisher-information transfer, Phys. Rev. E 52 (1995), 2274-2286

[87] Catalan, R. G., Garay, J. and López-Ruiz, R.: Features of the extension of a statistical measure of complexity to continuous systems, Phys. Rev. E 66 (2002), 011102

[88] Risken, H.: The Fokker-Planck Equation, Springer-Verlag, Berlin, 1989

[89] Hasegawa, H.: Thermodynamic properties of non-equilibrium states subject to Fokker-Planck equations, Progr. Theor. Phys. 57 (1977), 1523-1537

[90] Vilar, J. M. G. and Rubi, J. M.: Thermodynamics "beyond" local equilibrium, Proc. Nat. Acad. Sci. (NY) 98 (2001), 11081-11084

[91] Kurchan, J.: Fluctuation theorem for stochastic dynamics, J. Phys. A: Math. Gen. 31 (1998), 3719-3729

[92] Mackey, M. C. and Tyran-Kamińska, M.: Effects of noise on entropy evolution, arXiv.org preprint cond-mat/0501092 (2005)

[93] Mackey, M. C. and Tyran-Kamińska, M.: Temporal behavior of the conditional and Gibbs entropies, arXiv.org preprint cond-mat/0509649 (2005)

[94] Czopnik, R. and Garbaczewski, P.: Frictionless Random Dynamics: Hydrodynamical Formalism, Physica A 317 (2003), 449-471

[95] Fortet, R.: Résolution d'un système d'équations de M. Schrödinger, J. Math. Pures Appl. 9 (1940), 83

[96] Blanchard, Ph. and Garbaczewski, P.: Non-negative Feynman-Kac kernels in Schrödinger's interpolation problem, J. Math. Phys. 38 (1997), 1-15

[97] Jaynes, E. T.: Violations of Boltzmann’s H Theorem in Real Gases, Phys. Rev. A 4 (1971), 747-750

[98] Voigt, J.: Stochastic operators, Information and Entropy, Commun. Math. Phys. 81 (1981), 31-38

[99] Voigt, J.: The H-Theorem for Boltzmann type equations, J. Reine Angew. Math. 326 (1981), 198-213

[100] Toscani, G.: Kinetic approach to the asymptotic behaviour of the solution to diffusion equation, Rend. di Matematica Serie VII 16 (1996), 329-346

[101] Bobylev, A. V. and Toscani, G.: On the generalization of the Boltzmann H-theorem for a spatially homogeneous Maxwell gas, J. Math. Phys. 33 (1992), 2578-2586

[102] Arnold, A. et al.: On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations, Comm. Partial Diff. Equations 26 (2001), 43-100

© 2005 by MDPI (http://www.mdpi.org). Reproduction for noncommercial purposes permitted.