
ARTICLE

Inferring broken detailed balance in the absence of observable currents

https://doi.org/10.1038/s41467-019-11051-w

Ignacio A. Martínez^{1,5}, Gili Bisker^{2,5}, Jordan M. Horowitz^{3,4} & Juan M. R. Parrondo^{1}

Identifying dissipation is essential for understanding the physical mechanisms underlying nonequilibrium processes. In living systems, for example, the dissipation is directly related to the hydrolysis of fuel molecules such as adenosine triphosphate (ATP). Nevertheless, detecting broken time-reversal symmetry, which is the hallmark of dissipative processes, remains a challenge in the absence of observable directed motion, flows, or fluxes. Furthermore, quantifying the entropy production in a complex system requires detailed information about its dynamics and internal degrees of freedom. Here we introduce a novel approach to detect time irreversibility and estimate the entropy production from time-series measurements, even in the absence of observable currents. We apply our technique to two different physical systems, namely, a partially hidden network and a molecular motor. Our method does not require complete information about the system dynamics and thus provides a new tool for studying nonequilibrium phenomena.

1 Departamento de Estructura de la Materia, Física Térmica y Electrónica and GISC, Universidad Complutense de Madrid, 28040 Madrid, Spain. 2 Department of Biomedical Engineering, Faculty of Engineering, Center for Physics and Chemistry of Living Systems, Center for Nanoscience and Nanotechnology, Center for Light-Matter Interaction, Tel Aviv University, Tel Aviv 6997801, Israel. 3 Department of Biophysics, University of Michigan, Ann Arbor, MI 48109, USA. 4 Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI 48104, USA. 5 These authors contributed equally: Ignacio A. Martínez, Gili Bisker. Correspondence and requests for materials should be addressed to I.A.M. (email: [email protected]) or to G.B. (email: [email protected]).


Irreversibility is the telltale sign of nonequilibrium dissipation [1,2]. Systems operating far from equilibrium utilize part of their free energy budget to perform work, while the rest is dissipated into the environment. Estimating the amount of free energy lost to dissipation is mandatory for a complete energetic characterization of such physical systems. For example, it is essential for understanding the underlying mechanism and efficiency of natural Brownian engines, such as RNA polymerases or kinesin molecular motors, or for optimizing the performance of artificial devices [3-5]. Often the manifestation of irreversibility is quite dramatic, signaled by directed flow or movement, as in transport through mesoscopic devices [6], traveling waves in nonlinear chemical reactions [7], directed motion of molecular motors along biopolymers [8], and the periodic beating of a cell's flagellum [9,10] or cilia [11]. This observation has led to a handful of experimentally validated methods to identify irreversible behavior by confirming the existence of such flows or fluxes [3,12-14]. However, in the absence of directed motion, it can be challenging to determine whether an observed system is out of equilibrium, especially in small noisy systems where fluctuations could mask any obvious irreversibility [15]. One possibility is to observe a violation of the fluctuation-dissipation theorem [16-18]; this approach, though, requires not just passive observations of a correlation function but active perturbations in order to measure response properties, which can be challenging in practice. Thus, the development of noninvasive methods to quantitatively measure irreversibility and dissipation is necessary to characterize nonequilibrium phenomena.

Our understanding of the connection between irreversibility and dissipation has deepened in recent years with the formulation of stochastic thermodynamics, which has been verified in numerous experiments on meso-scale systems [19-22]. Within this framework, it is possible to evaluate quantities such as the entropy along single nonequilibrium trajectories [23]. A cornerstone of this approach is the establishment of a quantitative identification of dissipation, or more specifically the entropy production rate $\dot S$, as the Kullback-Leibler divergence (KLD) between the probability $P(\gamma_t)$ to observe a trajectory $\gamma_t$ of length $t$ and the probability $P(\tilde\gamma_t)$ to observe the time-reversed trajectory $\tilde\gamma_t$ [1,24-29]:

$$\dot S \;\geq\; \dot S_{\rm KLD} \equiv \lim_{t\to\infty} \frac{k_{\rm B}}{t}\, D\big[P(\gamma_t)\,||\,P(\tilde\gamma_t)\big], \qquad (1)$$

where $k_{\rm B}$ is Boltzmann's constant. The KLD between two probability distributions $p$ and $q$ is defined as $D[p||q] = \sum_x p(x)\ln\left(p(x)/q(x)\right)$ and is an information-theoretic measure of distinguishability [30]. For the rest of the paper we take $k_{\rm B} = 1$, so the entropy production rate has units of time$^{-1}$. The entropy production $\dot S$ in Eq. (1) has a clear physical meaning. It is the usual entropy production defined in irreversible thermodynamics by assuming that the reservoirs surrounding the system are in equilibrium. For instance, in the case of isothermal molecular motors hydrolyzing ATP to ADP+P at temperature $T$, the entropy production in Eq. (1) is $\dot S = r\Delta\mu/T - \dot W/T$, where $r$ is the ATP consumption rate, $\Delta\mu = \mu_{\rm ATP} - \mu_{\rm ADP} - \mu_{\rm P}$ is the difference between the ATP, and the ADP and P, chemical potentials, and $\dot W$ is the power of the motor [31]. In many experiments, all these quantities can be measured except the rate $r$. Therefore, the techniques that we develop in this paper can help to estimate the ATP consumption rate, even at stalling conditions.

The equality in Eq. (1) is reached if the trajectory $\gamma_t$ contains all the meso- and microscopic variables out of equilibrium. Hence the relative entropy in Eq. (1) links the statistical time-reversal symmetry breaking in the mesoscopic dynamics directly to dissipation. Based on this connection, estimators of the relative entropy between stationary trajectories and their time reverses allow one to determine if a system is out of equilibrium or even bound the amount of energy dissipated to maintain a nonequilibrium state. Such an approach, however, is challenging to implement accurately as it requires large amounts of data, especially when there is no observable current [32].

Despite the absence of observable average currents, irreversibility can still leave a mark in fluctuations. Consider, for example, a particle hopping on a 1D lattice, as in Fig. 1, where up and down jumps have equal probabilities, but the timings of the jumps have different likelihoods. Although there is no net drift on average, the process is irreversible, since any trajectory can be distinguished from its time reverse due to the asymmetry in jump times. Thus, beyond the sequence of events, the timing of events can reveal statistical irreversibility. Such a concept was used, for example, to determine that the E. coli flagellar motor operates out of equilibrium based on the motor dwell-time statistics [33].

In this work, we establish a technique that allows one to identify and quantify irreversibility in fluctuations in the timing of events, by applying Eq. (1) to stochastic jump processes with arbitrary waiting time distributions, that is, semi-Markov processes, also known as continuous time random walks (CTRW) in the context of anomalous diffusion. Such models emerge in a plethora of contexts [34-36], ranging from economy and finance [37] to biology, as in the case of kinesin dynamics [38] or in the anomalous diffusion of the Kv2.1 potassium channel [39]. In fact, as we show below and in the Methods section, semi-Markov processes result in experimentally relevant scenarios where one has access only to a limited set of observables of Markov kinetic networks with certain topologies. We begin by reviewing the semi-Markov framework, where we present our main result, the entropy production rate estimator. Next, we apply our approach to general hidden networks, where an observer has access only to a subset of the states, comparing our estimator with previous proposals for partial entropy production that are zero in the absence of currents. Finally, we address the particularly important case of molecular motors, where the translational motion is easily observed, but the biochemical reactions that power the motion are hidden. Remarkably, our technique allows us to reveal even the existence of parasitic mechano-chemical cycles at stalling, where the observed current vanishes and the motor is stationary, simply from the distribution of step times. In addition, our quantitative lower bound on the entropy production rate can be used to shed light on the efficiency of molecular motor operation and on the entropic cost of maintaining their far-from-equilibrium dynamics [40-44].

Fig. 1 Brownian particle jumping on a one-dimensional lattice. Jumps up and down are equally likely ($p_{\rm up} = p_{\rm down} = 1/2$), but with asymmetric jump rates (characteristic jump times of 1 s upward and 2 s downward in the illustration). As a result, the irreversibility of the dynamics is contained solely in the timing fluctuations.
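The example of Fig. 1 is easy to reproduce numerically. The following sketch (our own illustration, not code from the paper) simulates a walker whose up and down jumps are equally likely but whose waiting times are drawn from exponential distributions with the means quoted in the figure (1 s up, 2 s down), and compares the two empirical waiting-time distributions with a crude histogram estimate of the KLD; the exponential choice and the estimator are assumptions made only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_steps(n_steps, tau_up=1.0, tau_down=2.0):
    """Symmetric 1D walker: up and down jumps each occur with probability 1/2,
    but the waiting time before an up jump has mean tau_up and before a down
    jump mean tau_down (exponential waiting times assumed for simplicity)."""
    jumps = rng.choice([+1, -1], size=n_steps)
    waits = np.where(jumps > 0,
                     rng.exponential(tau_up, n_steps),
                     rng.exponential(tau_down, n_steps))
    return jumps, waits

def kld_histogram(x, y, bins=100):
    """Crude histogram plug-in estimate of D[p_x || p_y] from two sample sets."""
    edges = np.histogram_bin_edges(np.concatenate([x, y]), bins=bins)
    px, _ = np.histogram(x, bins=edges, density=True)
    py, _ = np.histogram(y, bins=edges, density=True)
    w = np.diff(edges)
    m = (px > 0) & (py > 0)          # drop empty bins (small finite-data bias)
    return float(np.sum(w[m] * px[m] * np.log(px[m] / py[m])))

jumps, waits = simulate_steps(200_000)
print("mean displacement per step:", jumps.mean())                  # ~ 0, no current
print("D[psi_up || psi_down] (nats):",
      kld_histogram(waits[jumps > 0], waits[jumps < 0]))            # clearly > 0
```

The mean displacement per step is consistent with zero, yet the KLD between the two timing distributions is clearly positive, which is exactly the signature of irreversibility without current discussed above.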


Results

Irreversibility in semi-Markov processes. A semi-Markov process is a stochastic renewal process $\alpha(t)$ that takes values in a discrete set of states, $\alpha = 1, 2, \ldots$. The renewal property implies that the waiting time intervals $t_\alpha$ in a given state are positive, independent, and identically distributed random variables. If the system arrives at state $\alpha$ at $t = 0$, the probability to jump to a different state $\beta$ in the time interval $[t, t + dt]$ is $\psi_{\beta\alpha}(t)dt$, with $\psi_{\beta\alpha}(t)$ being the probability density of transition times [45]. These densities are not normalized: $p_{\beta\alpha} \equiv \int_0^\infty \psi_{\beta\alpha}(t)\,dt$ is the probability for the next jump to be $\alpha \to \beta$ given that the walker arrived at $\alpha$. We assume that the particle eventually leaves any site $\alpha$, i.e., $\psi_{\alpha\alpha}(t) = 0$ and $\sum_\beta p_{\beta\alpha} = 1$, so the matrix $p_{\beta\alpha}$ is a stochastic matrix. Its normalized (right) eigenvector $R_\alpha$ with eigenvalue 1 then represents the fraction of visits to each state $\alpha$. The waiting time distribution at site $\alpha$, $\psi_\alpha(t) = \sum_\beta \psi_{\beta\alpha}(t)$, is normalized with average waiting time $\tau_\alpha$. We can also define the waiting time distribution conditioned on a given jump $\alpha \to \beta$ as $\psi(t|\alpha \to \beta) \equiv \psi_{\beta\alpha}(t)/p_{\beta\alpha}$, which is already normalized.

Consider now a generic semi-Markovian trajectory $\gamma_t$ of length $t$ with $n$ jumps, which is fully described by the sequence of jumps and jump times, $\gamma_t = \{\alpha_1 \xrightarrow{t_1} \alpha_2 \xrightarrow{t_2} \cdots \xrightarrow{t_{n-1}} \alpha_n \xrightarrow{t_n} \alpha_{n+1}\}$ with $\sum_n t_n = t$, occurring with probability $P(\gamma_t) = \psi_{\alpha_2,\alpha_1}(t_1)\psi_{\alpha_3,\alpha_2}(t_2)\cdots\psi_{\alpha_{n+1},\alpha_n}(t_n)$. In order to characterize the dissipation of this single trajectory, we must define its time reverse $\tilde\gamma_t = \{\alpha_n \xrightarrow{t_n} \alpha_{n-1} \xrightarrow{t_{n-1}} \cdots \xrightarrow{t_2} \alpha_1 \xrightarrow{t_1} \alpha_0\}$, whose probability is given by $P(\tilde\gamma_t) = \psi_{\alpha_0,\alpha_1}(t_1)\cdots\psi_{\alpha_{n-1},\alpha_n}(t_n)$; see Methods and Fig. 5.

Directly applying Eq. (1) to this scenario shows that the KLD between the probability distributions of the forward and backward trajectories can be split into two contributions (see Methods):

$$\dot S_{\rm KLD} = \dot S_{\rm aff} + \dot S_{\rm WTD}. \qquad (2)$$

The first term, $\dot S_{\rm aff}$, or affinity entropy production, results entirely from the divergence between the state trajectories, regardless of the jump times, $\sigma \equiv \{\alpha_1, \alpha_2, \ldots, \alpha_{n+1}\}$ and $\tilde\sigma = \{\alpha_n, \ldots, \alpha_1, \alpha_0\}$, that is, it accounts for the affinity between states:

$$\dot S_{\rm aff} = \frac{1}{\mathcal T}\sum_{\alpha\beta} p_{\beta\alpha}R_\alpha \ln\frac{p_{\beta\alpha}}{p_{\alpha\beta}} = \frac{1}{\mathcal T}\sum_{\alpha<\beta} J^{\rm ss}_{\beta\alpha}\ln\frac{p_{\beta\alpha}}{p_{\alpha\beta}}, \qquad (3)$$

where $J^{\rm ss}_{\beta\alpha} = p_{\beta\alpha}R_\alpha - p_{\alpha\beta}R_\beta$ is the net probability flow per step, or current, from $\alpha$ to $\beta$, and the factor $\mathcal T = \sum_\alpha \tau_\alpha R_\alpha$ is the mean duration of each step, which can be used to transform the units from per-step to per-time [46]. We see that the affinity entropy production vanishes in the absence of currents, as occurs in arbitrary Markov systems [32,47].

The contribution due to the waiting times is expressed in terms of the KLD between the waiting time distributions:

$$\dot S_{\rm WTD} = \frac{1}{\mathcal T}\sum_{\alpha\beta\mu} p_{\mu\beta}p_{\beta\alpha}R_\alpha\, D\big[\psi(t|\beta\to\mu)\,||\,\psi(t|\beta\to\alpha)\big], \qquad (4)$$

which is the main result of this paper and allows one to detect irreversibility in stationary trajectories with zero current. Notice that, $R_\alpha$ being the occupancy of state $\alpha$, $p_{\beta\alpha}R_\alpha$ is the probability to observe the sequence $\alpha \to \beta$ in a stationary forward trajectory, while $p_{\mu\beta}p_{\beta\alpha}R_\alpha$ is the probability to observe the sequence $\alpha \to \beta \to \mu$.

Equation (2) is the chain rule of the relative entropy applied to the semi-Markov process and the core of our proposed estimator. In the special case of Poisson jumps, $D[\psi(t|\beta\to\mu)||\psi(t|\beta\to\alpha)] = 0$, since all waiting time distributions for jumps starting at a given site $\beta$ are equal (see Methods), and we recover the standard expression for the relative entropy of Markov processes, $\dot S_{\rm KLD} = \dot S_{\rm aff}$. It is worth mentioning that previous attempts to establish the entropy production of semi-Markov processes failed to identify the term $\dot S_{\rm WTD}$ because they assumed that the waiting time distributions were independent of the final state, as occurs in Markov processes [48-50]. However, such a strong assumption does not hold in many situations of interest, such as the ones discussed below.

Decimation of Markov chains and second-order semi-Markov processes. Semi-Markov processes appear when sites are decimated from Markov chains of certain topologies. Figure 2 shows representative examples. In Fig. 2a, b, we show two models of a molecular motor that runs along a track with sites $\{\ldots, i-1, i, i+1, \ldots\}$ and has six internal states. If the spatial jumps (red lines) and the transitions between internal states (black lines) are Poissonian jumps, then the motor is described by a Markov process. On the other hand, when the internal states are not accessible to the experimenter, the waiting time distributions corresponding to the spatial jumps $i \to i \pm 1$ are no longer exponential and the motion of the motor must be described by a semi-Markov process. Figure 2a shows an example where the decimation of internal states directly yields a semi-Markov process ruling the spatial motion of the motor. The second example, sketched in Fig. 2b, is more involved, since the upward and the downward jumps end in different sets of internal states. As a consequence, the waiting time distribution of, say, the jump $i \to i+1$ depends on the site that the motor visited before site $i$. Then, the resulting dynamics must be described by a second-order semi-Markov process, that is, one has to consider the states $\alpha(t) = [i_{\rm prev}(t), i(t)]$, where $i(t)$ is the current position of the motor and $i_{\rm prev}(t)$ is the previous position, right before the jump.

The same applies to generic kinetic networks, such as the one depicted in Fig. 2c. Suppose that the original network is Markovian with states $i = 1, \ldots, 5$. However, if the experimenter only has access to states 1 and 2, with the rest clumped together into a hidden state $H$, then the resulting dynamics is also a second-order semi-Markov process with the reduced set of states $i = 1, 2, H$.

For second-order semi-Markov processes the affinity entropy production reads

$$\dot S_{\rm aff} = \frac{1}{\mathcal T}\sum_{i,j,k} p(ijk)\ln\frac{p([ij]\to[jk])}{p([kj]\to[ji])}, \qquad (5)$$

where $p(ijk) \equiv R_{[ij]}\,p([ij]\to[jk])$ is the probability to observe the sequence $i \to j \to k$.

Fig. 2 Decimation of Markov processes. a, b Molecular motor model: an observer with access only to the position (vertical axis) cannot resolve the internal states (circles). a Decimation to position results in a first-order semi-Markov process, since spatial jumps connect the same internal state. b Decimation results in a second-order semi-Markov process, where the waiting time distribution for spatial transitions depends on whether the motor previously jumped down or up. c Hidden kinetic network: an observer unable to resolve states 3, 4, and 5 treats them as a single hidden state $H$. The resulting decimated network is a second-order semi-Markov process on the three states 1, 2, and $H$, where the non-Poissonian waiting time distributions for transitions out of state $H$ depend on the past.
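To make the bookkeeping behind second-order semi-Markov states concrete, the following sketch (our own construction, with a hand-written toy trajectory) converts an observed sequence of coarse-grained states and residence times into the two-jump statistics that enter Eqs. (5) and (6): the empirical probability $p(ijk)$ of each sequence $i \to j \to k$ and the residence times in $j$ conditioned on that sequence.

```python
from collections import defaultdict

def two_jump_statistics(states, dwell_times):
    """Tabulate second-order semi-Markov statistics from an observed trajectory.

    states      : visited coarse-grained states, e.g. [1, 2, 'H', 1, ...]
    dwell_times : residence time spent in each visited state (same length)

    Returns (p_ijk, waits): p_ijk[(i, j, k)] is the empirical probability of the
    two-jump sequence i -> j -> k, and waits[(i, j, k)] collects the residence
    times in j for that sequence, i.e. samples of psi(t | [ij] -> [jk])."""
    counts, waits = defaultdict(int), defaultdict(list)
    for (i, j, k), t_j in zip(zip(states, states[1:], states[2:]), dwell_times[1:]):
        counts[(i, j, k)] += 1
        waits[(i, j, k)].append(t_j)
    total = sum(counts.values())
    p_ijk = {seq: c / total for seq, c in counts.items()}
    return p_ijk, waits

# toy usage on a hand-written trajectory (numbers are purely illustrative)
states = [1, 2, 'H', 1, 'H', 2, 1, 2, 'H', 1]
dwell = [0.3, 1.2, 0.7, 0.4, 2.1, 0.9, 0.2, 1.5, 0.8, 0.5]
p_ijk, waits = two_jump_statistics(states, dwell)
print(p_ijk)
```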

This entropy is still proportional to the current for one-dimensional processes and therefore vanishes in the absence of flows in the observed dynamics; see Methods. The entropy production contribution due to the irreversibility of the waiting time distributions is

$$\dot S_{\rm WTD} = \frac{1}{\mathcal T}\sum_{i,j,k} p(ijk)\, D\big[\psi(t|[ij]\to[jk])\,||\,\psi(t|[kj]\to[ji])\big]. \qquad (6)$$

Let us emphasize that the calculation of $\dot S_{\rm WTD}$ requires collecting statistics on sequences of two consecutive jumps, i.e., $i \to j \to k$. We now proceed to apply these results to generic cases of simple kinetic networks and molecular motors.

Hidden networks. We first apply our formalism to estimate the dissipation in kinetic networks with hidden states, which have received increasing attention in recent years owing to their many practical and experimental implications [24,32,47,51-53]. Consider a network where $\omega_{ij}$ is the transition rate from state $j$ to $i$, with $\pi_i$ the steady-state distribution. The total entropy production rate at steady state is [54]

$$\dot S = \sum_{i<j}\left(\omega_{ji}\pi_i - \omega_{ij}\pi_j\right)\ln\frac{\omega_{ji}\pi_i}{\omega_{ij}\pi_j}. \qquad (7)$$

For the KLD estimator, we assume that the observer can record whether the system is in states 1 or 2, or in the hidden part of the network, $H$, which is a coarse-grained state representing the unobserved subsystem. In this case, the resulting contracted network has three states, $\{1, 2, H\}$. Jumps between states 1 and 2 follow Poissonian statistics, as in a general continuous-time Markov process, with the same rates as in the original network. On the other hand, jumps from $H$ to 1 or 2 are not Poissonian and depend on the state just prior to entering the hidden part. To apply our results for semi-Markov processes, we thus have to consider the states $\alpha(t) = [i_{\rm prev}(t)\, i(t)]$, where $i(t) = 1, 2, H$ is the current state and $i_{\rm prev}(t) = 1, 2, H$ is the state right before the last jump. To make the equations more compact, we will use the short-hand notation $ij \equiv [ij]$ for the remainder of this section.

Similar to Eq. (2), the semi-Markov entropy production rate for hidden networks, $\dot S_{\rm KLD}$, consists of two contributions: the affinity estimator $\dot S_{\rm aff}$ and the WTD estimator $\dot S_{\rm WTD}$. In this case, the affinity estimator, Eq. (5), is given by

$$\dot S_{\rm aff} = \frac{J^{\rm ss}_{21}}{\mathcal T}\ln\frac{p(12\to 2H)\,p(2H\to H1)\,p(H1\to 12)}{p(1H\to H2)\,p(H2\to 21)\,p(21\to 1H)}. \qquad (10)$$
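A minimal plug-in implementation of the estimator is sketched below (our own code, not the authors'): it estimates $\dot S_{\rm aff}$ and $\dot S_{\rm WTD}$ from Eqs. (5) and (6) using empirical two-jump statistics, with a simple histogram stand-in for the KLD between waiting-time distributions (the analysis reported in Fig. 3 uses kernel density estimation instead); two-jump sequences whose reverse is never observed are simply skipped, which introduces a finite-data bias.

```python
import numpy as np

def kld_samples(x, y, bins=64):
    """Histogram plug-in estimate of D[p_x || p_y]; the original analysis uses
    kernel density estimates, which converge better for small data sets."""
    edges = np.histogram_bin_edges(np.concatenate([x, y]), bins=bins)
    px, _ = np.histogram(x, bins=edges, density=True)
    py, _ = np.histogram(y, bins=edges, density=True)
    w = np.diff(edges)
    m = (px > 0) & (py > 0)
    return float(np.sum(w[m] * px[m] * np.log(px[m] / py[m])))

def semi_markov_estimator(states, dwell_times):
    """Plug-in estimates of S_aff and S_WTD (per unit time) from a coarse-grained
    trajectory, following Eqs. (5) and (6): all statistics are gathered over
    sequences of two consecutive jumps i -> j -> k."""
    T = float(np.mean(dwell_times))                # mean time per step
    counts, waits = {}, {}
    for (i, j, k), t_j in zip(zip(states, states[1:], states[2:]), dwell_times[1:]):
        counts[(i, j, k)] = counts.get((i, j, k), 0) + 1
        waits.setdefault((i, j, k), []).append(t_j)
    n = sum(counts.values())
    s_aff = s_wtd = 0.0
    for (i, j, k), c in counts.items():
        rev = (k, j, i)
        if rev not in counts:
            continue                               # reverse sequence never seen
        p_fwd = c / n
        # conditional probabilities p([ij] -> [jk]) and p([kj] -> [ji])
        n_ij = sum(v for (a, b, _), v in counts.items() if (a, b) == (i, j))
        n_kj = sum(v for (a, b, _), v in counts.items() if (a, b) == (k, j))
        s_aff += p_fwd * np.log((c / n_ij) / (counts[rev] / n_kj))
        s_wtd += p_fwd * kld_samples(np.asarray(waits[(i, j, k)]),
                                     np.asarray(waits[rev]))
    return s_aff / T, s_wtd / T
```

Applied to a coarse-grained trajectory over $\{1, 2, H\}$, the two returned values are the plug-in contributions to $\dot S_{\rm KLD}$.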


Fig. 3 Hidden network. a Four-state network, as seen by an observer with access only to states 1, 2, $H$. b, c Illustration of a trajectory over the four possible states (b), where the gray region corresponds to the hidden part, and the resulting observed semi-Markov dynamics (c). d Kernel density estimation of the waiting time distributions at $F = F_{\rm stall}$. e Estimated total entropy production rate $\dot S$ (solid red line), entropy production for the semi-Markov model $\dot S_{\rm KLD}$ (dashed blue curve), informed partial entropy production rate $\dot S_{\rm IP}$ (dashed-dotted black curve), passive partial entropy production rate $\dot S_{\rm PP}$ (dotted green curve), and the experimental entropy production rate estimated according to the semi-Markov model $\dot S^{\rm Exp}_{\rm KLD}$ (blue crosses). f, g Relative error (ratio of experimental entropy production rate to analytical value) for three random trajectories as a function of the number of steps, at $F = F_{\rm stall}$ (f) and $F = 3\beta^{-1}L^{-1}$ (g), showing faster convergence away from the stalling force. Inset: $p$-value for rejecting the null hypothesis that the experimental data were sampled from a zero-mean distribution, as a function of the number of steps, for $F = F_{\rm stall}$ (blue curve) and $F = 3\beta^{-1}L^{-1}$ (red curve), showing that the average is statistically significantly different from zero. The numerical simulations were done using the Gillespie algorithm with the following transition rates: $\omega_{12} = 2\,{\rm s}^{-1}$, $\omega_{13} = 0\,{\rm s}^{-1}$, $\omega_{14} = 1\,{\rm s}^{-1}$, $\omega_{21} = 3\,{\rm s}^{-1}$, $\omega_{23} = 2\,{\rm s}^{-1}$, $\omega_{24} = 35\,{\rm s}^{-1}$, $\omega_{31} = 0\,{\rm s}^{-1}$, $\omega_{32} = 50\,{\rm s}^{-1}$, $\omega_{34} = 0.7\,{\rm s}^{-1}$, $\omega_{41} = 8\,{\rm s}^{-1}$, $\omega_{42} = 0.2\,{\rm s}^{-1}$, $\omega_{43} = 75\,{\rm s}^{-1}$, where the diagonal elements were chosen to give zero column sums.
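Since the caption quotes the full set of rates, the simulated data underlying Fig. 3 can be sketched as follows (our own reconstruction; the force-dependent tuning of $\omega_{12}$ and $\omega_{21}$ used for the sweep in panel e is not included, and the two unobserved states are lumped into $H$ as seen by the observer).

```python
import numpy as np

rng = np.random.default_rng(1)

# omega[i, j] = rate of the jump j -> i (states 1..4 mapped to indices 0..3),
# taken from the caption above; the force-dependent tuning of omega_12 and
# omega_21 used for the sweep of panel e is not reproduced here.
omega = np.array([[0.0,  2.0,  0.0,  1.0],
                  [3.0,  0.0,  2.0, 35.0],
                  [0.0, 50.0,  0.0,  0.7],
                  [8.0,  0.2, 75.0,  0.0]])      # units of 1/s

def gillespie(n_jumps, start=0):
    """Standard Gillespie simulation; returns visited states and dwell times."""
    states, dwells, x = [start], [], start
    for _ in range(n_jumps):
        rates = omega[:, x]
        total = rates.sum()
        dwells.append(rng.exponential(1.0 / total))
        x = int(rng.choice(4, p=rates / total))
        states.append(x)
    return states, dwells

def coarse_grain(states, dwells, hidden=(2, 3)):
    """Lump the unobserved states into the single symbol 'H' and merge the
    consecutive dwell times spent inside the hidden part."""
    cg_states, cg_dwells = [], []
    for s, t in zip(states[:-1], dwells):
        label = 'H' if s in hidden else int(s)
        if cg_states and cg_states[-1] == label:
            cg_dwells[-1] += t                    # still inside the hidden block
        else:
            cg_states.append(label)
            cg_dwells.append(t)
    return cg_states, cg_dwells

states, dwells = gillespie(200_000)
cg_states, cg_dwells = coarse_grain(states, dwells)
```

Feeding cg_states and cg_dwells to the two-jump plug-in estimator sketched after Eq. (10) then gives empirical values of $\dot S_{\rm aff}$ and $\dot S_{\rm WTD}$ for this network.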

In order to assess the rate of convergence with an increasing number of simulated steps, we calculated $\dot S^{\rm Exp}_{\rm KLD}$ for different fractions of the $10^7$-step trajectories, finding <20% error above $10^5$ steps at stalling, and <5% error away from stalling for trajectories with as little as $10^4$ steps (Fig. 3f, g). Let us stress that the hidden semi-Markov entropy production rate averaged over three simulated experimental trajectories produced a lower bound on the total entropy production rate that was strictly positive and statistically significantly different from zero ($p < 0.05$, Fig. 3g, inset) for all trajectory lengths tested.

Molecular motors. A slight modification of the case analyzed in the previous section allows us to study molecular motors with hidden internal states. We are interested in the schemes previously sketched in Fig. 2a, b, where a motor can physically move in space or switch between internal states. The observed motor position is labeled by $\{\ldots, i-1, i, i+1, \ldots\}$. All jumps are Poissonian and obey local detailed balance, with an external source of chemical work, $\Delta\mu$, and an additional mechanical force $F$ that can act only on the spatial transitions.

Analogous to the previous example, the observed dynamics is a second-order semi-Markov process. To make the following equations more intuitive, we label the four possible pairs of consecutive spatial jumps as up-up (two consecutive upward jumps, $i-1 \to i \to i+1$), down-up (a downward jump followed by an upward one), up-down (an upward jump followed by a downward one), and down-down (two consecutive downward jumps). The corresponding probabilities are normalized, $p(\text{up-up}) + p(\text{down-up}) + p(\text{up-down}) + p(\text{down-down}) = 1$. Similar to Eq. (2), we have the decomposition of the KLD estimator into a contribution from state affinities given by

$$\dot S_{\rm aff} = \frac{J^{\rm ss}}{\mathcal T}\ln\frac{p([i-1,i]\to[i,i+1])}{p([i+1,i]\to[i,i-1])}, \qquad (12)$$

where the current per step is $J^{\rm ss} = R_{\rm up} - R_{\rm down}$, with $R_{\rm up} = R_{[i,i+1]}$ ($R_{\rm down} = R_{[i,i-1]}$) corresponding to the occupancy rate of states moving upward (downward). The contribution due to the relative entropy between waiting time distributions is

$$\dot S_{\rm WTD} = \frac{1}{\mathcal T}\Big\{ p(\text{up-up})\, D\big[\psi(t|\text{up-up})\,||\,\psi(t|\text{down-down})\big] + p(\text{down-down})\, D\big[\psi(t|\text{down-down})\,||\,\psi(t|\text{up-up})\big]\Big\}. \qquad (13)$$

As in the previous examples, the latter term can produce a lower bound on the total entropy production rate even in the absence of observable currents, in which case $\dot S_{\rm aff} = 0$. Without chemical work ($\Delta\mu = 0$), however, the waiting time distributions of the up-up and down-down processes become identical and the contribution of $\dot S_{\rm WTD}$ vanishes as well.

Let us apply the molecular motor semi-Markov entropy production framework to a specific example. We consider the following two-state molecular motor model of a power-stroke engine that works by hydrolyzing ATP against an external force $F$; see Fig. 4a. The state of the motor is described by its physical position and its internal state, which can be either active, that is, capable of hydrolyzing ATP, or passive. We label the active and passive states as $i'$ and $i$, respectively, with $i = 0, \pm 1, \pm 2, \ldots$. Owing to the translational symmetry in the system, all the spatial positions are essentially equivalent. The position of the motor is accessible to an external observer, whereas the two internal states $i$ and $i'$ are indistinguishable. An example of a trajectory is illustrated in Fig. 4b.

The chemical affinity $\Delta\mu$, arising from ATP hydrolysis, determines the degree of nonequilibrium in our system and biases the transitions $i' \leftrightarrow i+1$, whereas the external force $F$ affects all the spatial transitions, regardless of the internal state. The transition rates between the two internal states are defined as $\omega_{i'i} = \omega_{ii'} = k_s$. Transition rates between passive states obey local detailed balance, $\omega_{i,i+1}/\omega_{i+1,i} = e^{-\beta F L}$, where $L$ is the length of a single spatial jump. From the active state, the system can use the ATP to move upward, with rates verifying local detailed balance $\omega_{i',i+1}/\omega_{i+1,i'} = e^{-\beta(FL - \Delta\mu)}$.
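For completeness, here is one way to simulate this two-state motor and extract the observed spatial jumps and their waiting times (our own sketch). The topology follows the description above (passive-passive links between neighboring sites, an active link $i' \leftrightarrow i+1$, and internal switching at rate $k_s$); how the local-detailed-balance ratios are split between forward and backward rates is not specified in the text, so the symmetric exponential splitting used here, as well as the value $F = 1$ in the usage lines, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, L, ks, k0 = 1.0, 1.0, 1.0, 0.01          # k_s and k_0 as in the Fig. 4 caption

def rates(F, dmu):
    """Spatial jump rates for the passive (P) and active (A) internal states.
    Only the ratios are fixed by local detailed balance; the symmetric
    exponential splitting between forward and backward rates is our choice."""
    upP = k0 * np.exp(-beta * F * L / 2)
    dnP = k0 * np.exp(+beta * F * L / 2)
    upA = k0 * np.exp(-beta * (F * L - dmu) / 2)
    dnA = k0 * np.exp(+beta * (F * L - dmu) / 2)
    return upP, dnP, upA, dnA

def simulate_motor(F, dmu, n_spatial_jumps):
    """Gillespie simulation over (position, internal state); only the spatial
    jumps (+1 / -1) and the waiting times between them are recorded, mimicking
    an observer who cannot resolve the internal state."""
    upP, dnP, upA, dnA = rates(F, dmu)
    internal, t_wait = 'P', 0.0
    jumps, waits = [], []
    while len(jumps) < n_spatial_jumps:
        if internal == 'P':
            # switch internal state, up/down along passive links, or down into
            # the active state of the site below (the link (i-1)' <-> i)
            moves = [('A', 0), ('P', +1), ('P', -1), ('A', -1)]
            r = np.array([ks, upP, dnP, dnA])
        else:
            # switch back to passive, or move up along the active link i' <-> i+1
            moves = [('P', 0), ('P', +1)]
            r = np.array([ks, upA])
        t_wait += rng.exponential(1.0 / r.sum())
        internal, step = moves[int(rng.choice(len(moves), p=r / r.sum()))]
        if step != 0:
            jumps.append(step)
            waits.append(t_wait)
            t_wait = 0.0
    return np.array(jumps), np.array(waits)

jumps, waits = simulate_motor(F=1.0, dmu=10.0, n_spatial_jumps=50_000)
up_up   = waits[1:][(jumps[:-1] == +1) & (jumps[1:] == +1)]
down_dn = waits[1:][(jumps[:-1] == -1) & (jumps[1:] == -1)]
```

Comparing the up_up and down_dn samples, for instance with the histogram KLD sketched earlier, reproduces qualitatively the behavior of Fig. 4c, d: the two distributions coincide for $\Delta\mu = 0$ and differ once the chemical drive is switched on.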



Fig. 4 Molecular motor. a Illustration: active states (red boxed squares) can use a source of chemical energy while passive states (circles) cannot. The chemical energy is used to power the motor against an external force $F$. b Illustration of a trajectory for four positions, where the hidden internal active state is denoted by the red shaded regions. c, d Waiting time distributions $\psi(t)$ for the up-up (red) and down-down (blue) transitions at stalling, for $\Delta\mu = 0\,\beta^{-1}$ (c) and $\Delta\mu = 10\,\beta^{-1}$ (d). Notice that the distributions are only different in the presence of a chemical drive. e Total entropy production rate $\dot S$ (red), affinity estimator $\dot S_{\rm aff}$ (green), and entropy production for the semi-Markov model $\dot S_{\rm KLD}$, for $\Delta\mu = 0$ (left) up to $\Delta\mu = 10$ (right), as a function of the force $F$, centered at the stall force. f Same as (e) at stalling, as a function of the chemical drive. The affinity estimator $\dot S_{\rm aff}$ offers a lower bound constrained by the statistical uncertainty due to the finite amount of data (green shaded region). Calculations were done using the parameters $k_s = 1\,{\rm s}^{-1}$ and $k_0 = 0.01\,{\rm s}^{-1}$, and the trajectories were sampled using the Gillespie algorithm [60].

The resulting waiting time distributions are shown in Fig. 4c, d, and the estimated entropy production rates as a function of the external force are depicted in Fig. 4e, with the chemical drive ranging from $\Delta\mu = 0\,\beta^{-1}$ to $10\,\beta^{-1}$. The total entropy production rate $\dot S$ is calculated using Eq. (7). As expected, the dissipation increases with the nonequilibrium driving force, and vanishes when $FL = \Delta\mu = 0$. Notice that the affinity estimator $\dot S_{\rm aff}$ does not provide a lower bound to the total entropy production rate $\dot S$ at stalling, as it is not statistically different from zero (Fig. 4f), and thus cannot distinguish between nonequilibrium and equilibrium processes. In contrast, the semi-Markov estimator $\dot S_{\rm KLD}$, which accounts for the asymmetry of the waiting time distributions, provides a nontrivial positive bound, even in the absence of observable current.

Discussion

We have analytically derived an estimator of the total entropy production rate using the framework of semi-Markov processes. The novelty of our approach is the utilization of the waiting time distributions, which can be non-Poissonian, allowing us to unravel irreversibility in hidden degrees of freedom arising in any time-series measurement of an arbitrary experimental setup. Our estimator can thus provide a lower bound on the total entropy production rate even in the absence of observable currents. Hence, it can be applied to reveal an underlying nonequilibrium process, even if no net current, flow, or drift is present. We stress that our method fully quantifies irreversibility. Owing to the direct link between the entropy production rate and the relative entropy between a trajectory and its time reversal, as manifested in Eq. (1), our estimator provides the best possible bound on the dissipation rate utilizing time irreversibility. One can consider utilizing other properties of the waiting time distribution to bound the entropy production, through the thermodynamic uncertainty relations [4,61,62], for example.

We have illustrated our method with two possible applications: a situation where only a subsystem is accessible to an external observer, and a molecular motor whose internal degrees of freedom cannot be resolved. Using these examples, we have demonstrated the advantage of our semi-Markov estimator compared with other entropy production bounds, namely, the passive and informed partial entropy production rates, both of which vanish at stalling conditions.

In summary, we have developed an analytic tool that can expose irreversibility that is otherwise undetectable, and distinguish between equilibrium and nonequilibrium processes. This framework is completely generic and thus opens opportunities in numerous experimental scenarios by providing a new perspective for data analysis.

Methods

Semi-Markov processes, waiting time distributions and steady states. A semi-Markov stochastic process is a renewal process $\alpha(t)$ with a discrete set of states $\alpha = 1, 2, \ldots, N$. The dynamics is determined by the probability densities of transition times $\psi_{\beta\alpha}(t)$, which are defined so that $\psi_{\beta\alpha}(t)dt$ equals the probability that the system jumps from state $\alpha$ to state $\beta$ in the time interval $[t, t + dt]$ if it arrived at site $\alpha$ at time $t = 0$. By definition $\psi_{\alpha\alpha}(t) = 0$. When the system is a particle jumping between the sites of a lattice, the semi-Markov process is also called a CTRW. For clarity, we will assume this CTRW picture, that is, the system in our discussion will be a particle jumping between sites $\alpha$. The probability densities $\psi_{\beta\alpha}(t)$ are not normalized:

$$p_{\beta\alpha} = \int_0^\infty \psi_{\beta\alpha}(t)\,dt \qquad (14)$$

is the probability that, given that the particle arrived at site $\alpha$, the next jump is $\alpha \to \beta$. We will assume that the particle eventually leaves any site $\alpha$, i.e., $\sum_\beta p_{\beta\alpha} = 1$. Then

$$\psi_\alpha(t) = \sum_\beta \psi_{\beta\alpha}(t) \qquad (15)$$

is normalized and is the probability density of the residence time at site $\alpha$. It is also called the waiting time distribution. Its average

$$\tau_\alpha = \int_0^\infty dt\, t\, \psi_\alpha(t) \qquad (16)$$

is the mean residence time or mean waiting time. We can also define the waiting time distribution conditioned on a given jump $\alpha \to \beta$,

$$\psi(t|\alpha\to\beta) = \frac{\psi_{\beta\alpha}(t)}{p_{\beta\alpha}}, \qquad (17)$$

which is normalized. The function $\psi_{\beta\alpha}(t)$ is in fact the joint probability distribution of the time $t$ and the jump $\alpha \to \beta$.

The transition probabilities $p_{\beta\alpha}$ determine a Markov chain given by the visited states $\alpha_1, \alpha_2, \alpha_3, \ldots$, regardless of the times when the jumps occur. The transition matrix of this Markov chain is $\{p_{\beta\alpha}\}$ and the stationary probability distribution $R_\alpha$ verifies

$$R_\beta = \sum_\alpha p_{\beta\alpha}R_\alpha, \qquad (18)$$

i.e., the distribution $R_\alpha$ is the right eigenvector of the stochastic matrix $\{p_{\beta\alpha}\}$ with eigenvalue 1. Moreover, if the Markov chain is ergodic, then the distribution $R_\alpha$ is precisely the fraction of visits the system makes to site $\alpha$ in the stationary regime. Thus, we call $R_\alpha$ the distribution of visits.

From the distribution of visits one can easily obtain the stationary distribution of the process $\alpha(t)$,

$$\pi_\alpha = \frac{\tau_\alpha R_\alpha}{\mathcal T}, \qquad (19)$$

since the particle visits the state $\alpha$ a fraction of steps $R_\alpha$ and spends an average time $\tau_\alpha$ in each step. The normalization constant $\mathcal T \equiv \sum_\alpha R_\alpha\tau_\alpha$ is the average time per step.

The stationary current in the Markov chain from state $\alpha$ to $\beta$ is

$$J^{\rm ss}_{\beta\alpha} = p_{\beta\alpha}R_\alpha - p_{\alpha\beta}R_\beta. \qquad (20)$$

This is in fact the current per step in the original semi-Markov system since, in an ensemble of very long trajectories, it is the net number of particles that jump from $\alpha$ to $\beta$ divided by the number of steps. Since the duration of a long stationary trajectory with $K$ steps ($K \gg 1$) is $K\mathcal T$, the current per unit of time is $J^{\rm ss}_{\beta\alpha}/\mathcal T$. Notice that the average time per step $\mathcal T$ acts as a conversion factor that allows one to express currents, entropy production, etc. either as per step or as per unit of time.

The Markovian case. If the process $\alpha(t)$ is Markovian, then the jumps are Poissonian and the transition time densities are exponential. Let $\omega_{\beta\alpha}$ be the rate of jumps from $\alpha$ to $\beta$. The mean waiting time at site $\alpha$ is the inverse of the total outgoing rate:

$$\tau_\alpha = \frac{1}{\sum_\beta \omega_{\beta\alpha}}, \qquad (21)$$

and the waiting time distributions are

$$\psi_{\beta\alpha}(t) = \omega_{\beta\alpha}e^{-t/\tau_\alpha}, \qquad \psi_\alpha(t) = \psi(t|\alpha\to\beta) = \frac{e^{-t/\tau_\alpha}}{\tau_\alpha}, \qquad (22)$$

with jump probabilities $p_{\beta\alpha} = \tau_\alpha\omega_{\beta\alpha}$. Notice that the waiting time distribution $\psi(t|\alpha\to\beta)$ does not depend on $\beta$. The distribution of visits $R_\alpha$ verifies

$$R_\beta = \sum_\alpha \tau_\alpha\omega_{\beta\alpha}R_\alpha, \qquad (23)$$

and the stationary distribution $\pi_\alpha$ obeys

$$\pi_\beta = \tau_\beta\sum_\alpha \omega_{\beta\alpha}\pi_\alpha, \qquad (24)$$

which is the equation for the stationary distribution that one obtains from the master equation

$$\dot P_\beta(t) = \sum_\alpha\left[\omega_{\beta\alpha}P_\alpha(t) - \omega_{\alpha\beta}P_\beta(t)\right] = \sum_\alpha \omega_{\beta\alpha}P_\alpha(t) - \frac{P_\beta(t)}{\tau_\beta}. \qquad (25)$$

Decimation of Markov chains. Semi-Markov processes arise in a natural way when states are removed or decimated from Markov processes with certain topologies. Consider a Markov process where two sites, 1 and 2, are connected through a closed network of states $i = 3, 4, \ldots$ that we want to decimate, as sketched in Fig. 2c. If the observer cannot discern between states $i = 3, 4, \ldots$, the resulting three-state process with $i(t) = 1, 2, H$ is a second-order semi-Markov chain. We want to calculate the effective transition time distribution $\psi^{\rm decim}_{21}(t)$ from state 1 to state 2 in terms of the distributions $\psi_{ij}(t)$ of the initial Markov chain. For this purpose, we have to sum over all possible paths from 1 to 2 through the decimated network.

Consider first the paths with exactly $n + 1$ jumps, like $\gamma_{n+1} = \{1 \to i_1 \to i_2 \to \cdots \to i_n \to 2\}$, where $i_k = 3, 4, \ldots$. The probability that such a path occurs with an exact duration $t$ is

$$P(\gamma_{n+1}, t) = \int_{\sum_k t_k = t} dt_1\, dt_2 \cdots dt_{n+1}\, \psi_{i_1,1}(t_1)\psi_{i_2,i_1}(t_2)\cdots\psi_{2,i_n}(t_{n+1}). \qquad (26)$$

This is a convolution. If one performs the Laplace transform on all time-dependent functions, generically denoted by a tilde,

$$\tilde\psi(s) \equiv \int_0^\infty dt\, e^{-st}\psi(t), \qquad (27)$$

then Eq. (26) simplifies to

$$\tilde P(\gamma_{n+1}, s) = \tilde\psi_{i_1,1}(s)\tilde\psi_{i_2,i_1}(s)\cdots\tilde\psi_{2,i_n}(s). \qquad (28)$$

The transition time distribution $\psi^{\rm decim}_{21}(t)$ in the decimated network is the sum of $P(\gamma_{n+1}, t)$ over all possible paths with an arbitrary number of steps. For Laplace-transformed distributions, this is written as

$$\tilde\psi^{\rm decim}_{21}(s) = \sum_{n=0}^\infty\sum_{\{i_1,\ldots,i_n\}} \tilde\psi_{i_1,1}(s)\tilde\psi_{i_2,i_1}(s)\cdots\tilde\psi_{2,i_n}(s), \qquad (29)$$

where the sum runs over all possible paths, that is, the indexes $i_k = 3, 4, \ldots$ take on all possible values corresponding to decimated sites. Then the sum can be expressed in terms of the matrix $\Psi(t)$ whose entries are the transition time densities $[\Psi(t)]_{ji} = \psi_{ji}(t)$, $i, j = 3, 4, \ldots$. If $\tilde\Psi(s)$ is the corresponding Laplace transform of that matrix, one has

$$\tilde\psi^{\rm decim}_{21}(s) = \sum_{n=0}^\infty\sum_{i,j} \tilde\psi_{i,1}(s)\left[\tilde\Psi(s)^n\right]_{ji}\tilde\psi_{2,j}(s) = \sum_{i,j}\tilde\psi_{i,1}(s)\left[I - \tilde\Psi(s)\right]^{-1}_{ji}\tilde\psi_{2,j}(s), \qquad (30)$$

which is a sum only over the decimated sites $i, j = 3, 4, \ldots$ that are connected to sites 1 and 2, respectively.

The decimation procedure can be used to derive transition time distributions in a kinetic network when the observer cannot discern among a set of states, say $3, 4, 5, \ldots$, that are generically labeled as $H$ for hidden, as in Fig. 2c. For the specific case of the figure, the effective transition time distribution from site 1 to site $H$, for instance, can be written as

$$\psi^{\rm eff}_{H1}(t) = \psi_{31}(t) + \psi_{51}(t), \qquad (31)$$

whereas the distributions for jumps starting at $H$ depend on the previous state. For instance, if $H$ is reached from 1, the random walk within $H$ starts at site 3 with probability $p_{31}/(p_{31} + p_{51})$ and at site 5 with probability $p_{51}/(p_{31} + p_{51})$. The transition time distribution corresponding to the jump $[1H] \to [H2]$ is

$$\tilde\psi^{\rm eff}_{[H2][1H]}(s) = \frac{p_{31}}{p_{31} + p_{51}}\left[I - \tilde\Psi(s)\right]^{-1}_{43}\tilde\psi_{24}(s) + \frac{p_{51}}{p_{31} + p_{51}}\left[I - \tilde\Psi(s)\right]^{-1}_{45}\tilde\psi_{24}(s), \qquad (32)$$

where the matrix $\tilde\Psi(s)$ is a $3\times 3$ matrix corresponding to the Laplace transform of the transition time distributions among sites 3, 4, and 5.

Irreversibility in semi-Markov processes. Here we calculate the relative entropy between a stationary trajectory $\gamma$ and its time reversal $\tilde\gamma$ in a generic semi-Markov process. A trajectory $\gamma$ is fully described by the sequence of jumps (see Fig. 5):

$$\gamma = \{(\alpha_1\to\alpha_2, t_1), (\alpha_2\to\alpha_3, t_2), \ldots, (\alpha_{n-1}\to\alpha_n, t_{n-1}), (\alpha_n\to\alpha_{n+1}, t_n)\}, \qquad (33)$$

and occurs with a probability (conditioned on the initial jump $\alpha_0 \to \alpha_1$ at $t = 0$)

$$P(\gamma) = \psi_{\alpha_2,\alpha_1}(t_1)\psi_{\alpha_3,\alpha_2}(t_2)\cdots\psi_{\alpha_{n+1},\alpha_n}(t_n). \qquad (34)$$

The reverse trajectory is

$$\tilde\gamma = \{(\tilde\alpha_n\to\tilde\alpha_{n-1}, t_n), \ldots, (\tilde\alpha_2\to\tilde\alpha_1, t_2), (\tilde\alpha_1\to\tilde\alpha_0, t_1)\}, \qquad (35)$$

where we assume, for the sake of generality, that states can change under time reversal, $\tilde\alpha$ being the time reversal of state $\alpha$. The probability to observe $\tilde\gamma$, conditioned on the initial jump $\tilde\alpha_{n+1}\to\tilde\alpha_n$ at $t = 0$, is

$$P(\tilde\gamma) = \psi_{\tilde\alpha_0,\tilde\alpha_1}(t_1)\psi_{\tilde\alpha_1,\tilde\alpha_2}(t_2)\cdots\psi_{\tilde\alpha_{n-1},\tilde\alpha_n}(t_n). \qquad (36)$$

Fig. 5 A trajectory of a semi-Markov process.
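As a numerical illustration of the path-summation formula, Eq. (30), the sketch below (our construction, with placeholder rates that are not those of the paper) builds the Laplace-transformed transition-time densities of a Markov network in which sites 1 and 2 communicate only through three hidden states, and evaluates $\tilde\psi^{\rm decim}_{21}(s)$ via the matrix inverse $(I - \tilde\Psi(s))^{-1}$.

```python
import numpy as np

# Full Markov network on 5 states (1 and 2 observed, 3-5 hidden), with
# omega[i, j] = rate of the jump j -> i.  The numbers are placeholders chosen
# only to make the example runnable; 1 and 2 communicate only through the
# hidden block, so every 1 -> 2 transition is captured by the path sum.
omega = np.array([[0.0, 0.0, 0.5, 0.0, 2.0],
                  [0.0, 0.0, 0.0, 3.0, 0.0],
                  [2.0, 0.0, 0.0, 1.0, 0.5],
                  [0.0, 2.5, 1.0, 0.0, 1.0],
                  [1.0, 0.0, 0.5, 2.0, 0.0]])
lam = omega.sum(axis=0)                     # total escape rate of each state
hidden = [2, 3, 4]                          # indices of the hidden states 3, 4, 5

def psi_tilde(i, j, s):
    """Laplace transform of psi_ij(t) = omega_ij * exp(-lam_j * t)."""
    return omega[i, j] / (s + lam[j])

def psi_decim_21(s):
    """Effective transition-time density 1 -> 2 through the hidden block, Eq. (30)."""
    Psi = np.array([[psi_tilde(a, b, s) for b in hidden] for a in hidden])
    G = np.linalg.inv(np.eye(len(hidden)) - Psi)               # (I - Psi(s))^(-1)
    vec_in = np.array([psi_tilde(a, 0, s) for a in hidden])    # jumps 1 -> hidden
    vec_out = np.array([psi_tilde(1, b, s) for b in hidden])   # jumps hidden -> 2
    return vec_out @ G @ vec_in

# At s = 0 this is the probability that a walker leaving state 1 into the hidden
# block exits it at state 2 (rather than returning to 1), a quick sanity check.
print(psi_decim_21(0.0))
```

Evaluating the result at $s = 0$ therefore gives a splitting probability between 0 and 1, which is a convenient check on the construction before inverting the transform or sampling the effective waiting times.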


It is again convenient to consider the forward and backward trajectories without the waiting times, i.e.,

$$\sigma = \{\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_n, \alpha_{n+1}\}, \qquad (37)$$

$$\tilde\sigma = \{\tilde\alpha_n, \tilde\alpha_{n-1}, \ldots, \tilde\alpha_2, \tilde\alpha_1, \tilde\alpha_0\}, \qquad (38)$$

and the probabilities to observe those trajectories are

$$P(\sigma) = p_{\alpha_2,\alpha_1}p_{\alpha_3,\alpha_2}\cdots p_{\alpha_{n+1},\alpha_n}, \qquad (39)$$

$$P(\tilde\sigma) = p_{\tilde\alpha_0,\tilde\alpha_1}p_{\tilde\alpha_1,\tilde\alpha_2}\cdots p_{\tilde\alpha_{n-1},\tilde\alpha_n}. \qquad (40)$$

The initial jumps of $\gamma$ and $\tilde\gamma$ do not contribute to the entropy production in the stationary regime. Then the relative entropy per jump reads

$$\delta S_{\rm KLD} = \lim_{n\to\infty}\frac{1}{n}\sum_\gamma P(\gamma)\ln\frac{P(\gamma)}{P(\tilde\gamma)} = \lim_{n\to\infty}\frac{1}{n}\sum_\sigma\int_0^\infty dt_1\cdots\int_0^\infty dt_n\, \psi_{\alpha_2,\alpha_1}(t_1)\cdots\psi_{\alpha_{n+1},\alpha_n}(t_n)\left[\ln\frac{\psi_{\alpha_2,\alpha_1}(t_1)}{\psi_{\tilde\alpha_0,\tilde\alpha_1}(t_1)} + \cdots + \ln\frac{\psi_{\alpha_{n+1},\alpha_n}(t_n)}{\psi_{\tilde\alpha_{n-1},\tilde\alpha_n}(t_n)}\right]. \qquad (41)$$

Each time integral can be written as

$$\int_0^\infty dt\,\psi_{\mu\beta}(t)\ln\frac{\psi_{\mu\beta}(t)}{\psi_{\tilde\alpha\tilde\beta}(t)} = p_{\mu\beta}\ln\frac{p_{\mu\beta}}{p_{\tilde\alpha\tilde\beta}} + p_{\mu\beta}\,D\big[\psi(t|\beta\to\mu)\,||\,\psi(t|\tilde\beta\to\tilde\alpha)\big], \qquad (42)$$

where $\alpha, \beta, \mu$ is a substring of the forward trajectory $\sigma$ ($\alpha = \alpha_k$, $\beta = \alpha_{k+1}$, $\mu = \alpha_{k+2}$). Inserting this expression in Eq. (41),

$$\delta S_{\rm KLD} = \lim_{n\to\infty}\frac{1}{n}D[P(\sigma)||P(\tilde\sigma)] + \sum_{\alpha\beta\mu}p_{\mu\beta}p_{\beta\alpha}R_\alpha\,D\big[\psi(t|\beta\to\mu)\,||\,\psi(t|\tilde\beta\to\tilde\alpha)\big] = \sum_{\alpha\beta}p_{\beta\alpha}R_\alpha\ln\frac{p_{\beta\alpha}}{p_{\tilde\alpha\tilde\beta}} + \sum_{\alpha\beta\mu}p_{\mu\beta}p_{\beta\alpha}R_\alpha\,D\big[\psi(t|\beta\to\mu)\,||\,\psi(t|\tilde\beta\to\tilde\alpha)\big]. \qquad (43)$$

Notice that $p_{\beta\alpha}R_\alpha$ is the probability to observe the sequence $\alpha, \beta$ in the stationary forward trajectory and $p_{\mu\beta}p_{\beta\alpha}R_\alpha$ is the probability to observe the sequence $\alpha, \beta, \mu$. Finally, we can obtain the expression used in the main text for the entropy production per unit of time by dividing by the conversion factor $\mathcal T$ (average time per step), that is, $\dot S = \delta S/\mathcal T$. The result is

$$\dot S_{\rm KLD} = \dot S_{\rm aff} + \dot S_{\rm WTD}, \qquad (44)$$

where the entropy production corresponding to the affinity of states reads $\dot S_{\rm aff} = \mathcal T^{-1}\sum_{\alpha\beta}p_{\beta\alpha}R_\alpha\ln(p_{\beta\alpha}/p_{\tilde\alpha\tilde\beta})$ and the contribution of the waiting time distributions reads $\dot S_{\rm WTD} = \mathcal T^{-1}\sum_{\alpha\beta\mu}p_{\mu\beta}p_{\beta\alpha}R_\alpha\,D[\psi(t|\beta\to\mu)||\psi(t|\tilde\beta\to\tilde\alpha)]$, cf. Eqs. (3) and (4) of the main text.

For a second-order semi-Markov process, the states are the pairs $[ij]$ of consecutively visited sites (see Fig. 6). The contribution to the entropy production (per step) due to the state affinities now reads

$$\dot S_{\rm aff} = \frac{1}{\mathcal T}\sum_{i,j,k} R_{[ij]}\,p([ij]\to[jk])\ln\frac{p([ij]\to[jk])}{p([kj]\to[ji])} = \frac{1}{\mathcal T}\sum_{i,j,k} p(ijk)\ln\frac{p([ij]\to[jk])}{p([kj]\to[ji])}, \qquad (51)$$

and the contribution due to the waiting time distributions is given by

$$\dot S_{\rm WTD} = \frac{1}{\mathcal T}\sum_{i,j,k} R_{[ij]}\,p([ij]\to[jk])\,D\big[\psi(t|[ij]\to[jk])\,||\,\psi(t|[kj]\to[ji])\big] = \frac{1}{\mathcal T}\sum_{i,j,k} p(ijk)\,D\big[\psi(t|[ij]\to[jk])\,||\,\psi(t|[kj]\to[ji])\big], \qquad (52)$$

where $p(ijk) = R_{[ij]}\,p([ij]\to[jk])$ is the probability to observe the sequence $i\to j\to k$ in the trajectory and $p(ij) = R_{[ij]}$ is the probability to observe the sequence $i\to j$.

It is interesting to particularize Eq. (51) to a ring with $N$ sites. This is the case of our examples, the hidden network and the molecular motor. In this case, in the stationary regime,

$$p(ijk) - p(kji) = p(ijk) + p(iji) - p(iji) - p(kji) = p(ij) - p(ji) = J^{\rm ss}, \qquad (53)$$

since each site has only two neighbors and therefore $p(ijk) + p(iji) = p(ij)$ for any triplet of contiguous sites $ijk$. Here $J^{\rm ss}$ is the stationary current between any pair of contiguous sites. Hence, we can write the affinity as

$$\dot S_{\rm aff} = \frac{1}{\mathcal T}\sum_{i<k}\left[p(ijk) - p(kji)\right]\ln\frac{p([ij]\to[jk])}{p([kj]\to[ji])} = \frac{J^{\rm ss}}{\mathcal T}\sum_{i<k}\ln\frac{p([ij]\to[jk])}{p([kj]\to[ji])}. \qquad (54)$$

Fig. 6 A trajectory of a second-order semi-Markov process. Forward trajectory: $[i_0 i_1], [i_1 i_2], [i_2 i_3], \ldots, [i_{n-1} i_n]$; backward trajectory: $[i_2 i_1], [i_3 i_2], \ldots, [i_{n+1} i_n]$.
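When the semi-Markov model is fully specified, the decomposition of Eq. (44) can also be evaluated directly rather than estimated from data. The sketch below (our own toy example) takes a three-state first-order semi-Markov process with gamma-distributed conditional waiting times, computes the distribution of visits $R_\alpha$ from Eq. (18), and evaluates $\dot S_{\rm aff}$ and $\dot S_{\rm WTD}$ by numerical quadrature of the KLD integrals; the jump matrix, the gamma parameters, and the quadrature grid are arbitrary choices for illustration, and states are taken to be even under time reversal.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape, scale):
    return t**(shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale**shape)

# Toy 3-state semi-Markov model.  p[b, a] is the probability that the jump out of
# a goes to b; wt[(a, b)] = (shape, scale) of the gamma-distributed waiting time
# conditioned on the jump a -> b.  All numbers are illustrative only, and states
# are assumed even under time reversal.
p = np.array([[0.0, 0.3, 0.6],
              [0.7, 0.0, 0.4],
              [0.3, 0.7, 0.0]])
wt = {(0, 1): (1.0, 1.0), (0, 2): (2.0, 0.5), (1, 0): (1.5, 1.0),
      (1, 2): (3.0, 0.4), (2, 0): (1.0, 2.0), (2, 1): (2.5, 0.8)}

t = np.linspace(1e-6, 50.0, 50_000)
dt = t[1] - t[0]

def kld(d1, d2):
    """D[psi_1 || psi_2] by simple quadrature on the time grid."""
    p1, p2 = gamma_pdf(t, *d1), gamma_pdf(t, *d2)
    return float(np.sum(p1 * np.log(p1 / p2)) * dt)

# distribution of visits R: right eigenvector of p with eigenvalue 1, Eq. (18)
evals, evecs = np.linalg.eig(p)
R = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
R /= R.sum()

# mean waiting time at each state and mean time per step T, Eq. (19)
tau = np.array([sum(p[b, a] * wt[(a, b)][0] * wt[(a, b)][1]
                    for b in range(3) if b != a) for a in range(3)])
T = float(np.sum(tau * R))

S_aff = sum(p[b, a] * R[a] * np.log(p[b, a] / p[a, b])
            for a in range(3) for b in range(3) if a != b) / T
S_wtd = sum(p[m, b] * p[b, a] * R[a] * kld(wt[(b, m)], wt[(b, a)])
            for a in range(3) for b in range(3) for m in range(3)
            if a != b and b != m) / T
print("S_aff =", S_aff, " S_WTD =", S_wtd)
```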


At stall force, the generalized detailed balance condition in Eq. (57) holds and can be written as

$$p([1H]\to[H2])\,\omega^{\rm stall}_{12}\,\omega_{H1} = p([2H]\to[H1])\,\omega^{\rm stall}_{21}\,\omega_{H2}, \qquad (59)$$

where we have taken into account that only the rates $\omega_{12}$ and $\omega_{21}$ are tuned in the protocol proposed by Polettini and Esposito to obtain the informed partial entropy production.

The current in the direction $1 \to 2 \to H \to 1$ in the stationary regime can be written as $J^{\rm ss}/\mathcal T = \omega_{12}\pi_2 - \omega_{21}\pi_1$. Then, at stall force, $\omega^{\rm stall}_{12}\pi^{\rm stall}_2 = \omega^{\rm stall}_{21}\pi^{\rm stall}_1$. With all these considerations, the argument of the logarithm in Eq. (9) of the main text can be written as

$$\frac{\omega_{12}\,\pi^{\rm stall}_2}{\omega_{21}\,\pi^{\rm stall}_1} = \frac{\omega_{12}\,\omega^{\rm stall}_{21}}{\omega_{21}\,\omega^{\rm stall}_{12}} = \frac{\omega_{12}\,p([1H]\to[H2])\,\omega_{H1}}{\omega_{21}\,p([2H]\to[H1])\,\omega_{H2}} = \frac{p([1H]\to[H2])\,p([H2]\to[21])\,p([21]\to[1H])}{p([2H]\to[H1])\,p([H1]\to[12])\,p([12]\to[2H])}. \qquad (60)$$

Comparing Eq. (9) in the main text with Eq. (54), one immediately gets $\dot S_{\rm IP} = \dot S_{\rm aff}$.

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Code availability
Source code is available from the corresponding authors upon reasonable request.

Received: 30 January 2019; Accepted: 12 June 2019

References
1. Maes, C. & Netočný, K. Time-reversal and entropy. J. Stat. Phys. 110, 269–310 (2003).
2. Parrondo, J. M., Van den Broeck, C. & Kawai, R. Entropy production and the arrow of time. New J. Phys. 11, 073008 (2009).
3. Gnesotto, F., Mura, F., Gladrow, J. & Broedersz, C. Broken detailed balance and non-equilibrium dynamics in living systems: a review. Rep. Prog. Phys. 81, 066601 (2018).
4. Li, J., Horowitz, J. M., Gingrich, T. R. & Fakhri, N. Quantifying dissipation using fluctuating currents. Nat. Commun. 10, 1666 (2019).
5. Brown, A. I. & Sivak, D. A. Toward the design principles of molecular machines. Physics in Canada 73 (2017).
6. Datta, S. Electronic Transport in Mesoscopic Systems (Cambridge University Press, 1997).
7. Castets, V., Dulos, E., Boissonade, J. & De Kepper, P. Experimental evidence of a sustained standing Turing-type nonequilibrium chemical pattern. Phys. Rev. Lett. 64, 2953 (1990).
8. Astumian, R. D. & Bier, M. Fluctuation driven ratchets: molecular motors. Phys. Rev. Lett. 72, 1766 (1994).
9. Brokaw, C. Calcium-induced asymmetrical beating of triton-demembranated sea urchin sperm flagella. J. Cell Biol. 82, 401 (1979).
10. Battle, C. et al. Broken detailed balance at mesoscopic scales in active biological systems. Science 352, 604–607 (2016).
11. Vilfan, A. & Jülicher, F. Hydrodynamic flow patterns and synchronization of beating cilia. Phys. Rev. Lett. 96, 058102 (2006).
12. Gladrow, J., Fakhri, N., MacKintosh, F., Schmidt, C. & Broedersz, C. Broken detailed balance of filament dynamics in active networks. Phys. Rev. Lett. 116, 248301 (2016).
13. Fodor, É. et al. Nonequilibrium dissipation in living oocytes. EPL (Europhys. Lett.) 116, 30008 (2016).
14. Zia, R. & Schmittmann, B. Probability currents as principal characteristics in the statistical mechanics of non-equilibrium steady states. J. Stat. Mech.: Theory Exp. 2007, P07012 (2007).
15. Rupprecht, J.-F. & Prost, J. A fresh eye on nonequilibrium systems. Science 352, 514–515 (2016).
16. Martin, P., Hudspeth, A. & Jülicher, F. Comparison of a hair bundle's spontaneous oscillations with its response to mechanical stimulation reveals the underlying active process. Proc. Natl Acad. Sci. USA 98, 14380–14385 (2001).
17. Mizuno, D., Tardin, C., Schmidt, C. F. & MacKintosh, F. C. Nonequilibrium mechanics of active cytoskeletal networks. Science 315, 370 (2007).
18. Bohec, P. et al. Probing active forces via a fluctuation-dissipation relation: application to living cells. EPL (Europhys. Lett.) 102, 50005 (2013).
19. Liphardt, J., Dumont, S., Smith, S. B., Tinoco, I. & Bustamante, C. Equilibrium information from nonequilibrium measurements in an experimental test of Jarzynski's equality. Science 296, 1832–1835 (2002).
20. Collin, D. et al. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature 437, 231 (2005).
21. Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. Nat. Phys. 6, 988 (2010).
22. Xiong, T. et al. Experimental verification of a Jarzynski-related information-theoretic equality by a single trapped ion. Phys. Rev. Lett. 120, 010601 (2018).
23. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 95, 040602 (2005).
24. Kawai, R., Parrondo, J. & Van den Broeck, C. Dissipation: the phase-space perspective. Phys. Rev. Lett. 98, 080602 (2007).
25. Maes, C. The fluctuation theorem as a Gibbs property. J. Stat. Phys. 95, 367–392 (1999).
26. Roldán, É., Barral, J., Martin, P., Parrondo, J. M. & Jülicher, F. Arrow of time in active fluctuations. Preprint at https://arxiv.org/pdf/1803.04743 (2018).
27. Horowitz, J. & Jarzynski, C. Illustrative example of the relationship between dissipation and relative entropy. Phys. Rev. E 79, 021106 (2009).
28. Gaveau, B., Granger, L., Moreau, M. & Schulman, L. Dissipation, interaction, and relative entropy. Phys. Rev. E 89, 032107 (2014).
29. Gaveau, B., Granger, L., Moreau, M. & Schulman, L. S. Relative entropy, interaction energy and the nature of dissipation. Entropy 16, 3173–3206 (2014).
30. Cover, T. M. & Thomas, J. A. Elements of Information Theory (John Wiley & Sons, 2012).
31. Parrondo, J. & de Cisneros, B. J. Energetics of Brownian motors: a review. Appl. Phys. A 75, 179–191 (2002).
32. Roldán, É. & Parrondo, J. M. Estimating dissipation from single stationary trajectories. Phys. Rev. Lett. 105, 150607 (2010).
33. Tu, Y. The nonequilibrium mechanism for ultrasensitivity in a biological switch: sensing by Maxwell's demons. Proc. Natl Acad. Sci. USA 105, 11737–11741 (2008).
34. Kindermann, F. et al. Nonergodic diffusion of single atoms in a periodic potential. Nat. Phys. 13, 137 (2017).
35. Schulz, J. H., Barkai, E. & Metzler, R. Aging renewal theory and application to random walks. Phys. Rev. X 4, 011028 (2014).
36. Metzler, R., Jeon, J.-H., Cherstvy, A. G. & Barkai, E. Anomalous diffusion models and their properties: nonstationarity, non-ergodicity, and ageing at the centenary of single particle tracking. Phys. Chem. Chem. Phys. 16, 24128–24164 (2014).
37. Scalas, E. The application of continuous-time random walks in finance and economics. Phys. A: Stat. Mech. Appl. 362, 225–239 (2006).
38. Fisher, M. E. & Kolomeisky, A. B. Simple mechanochemistry describes the dynamics of kinesin molecules. Proc. Natl Acad. Sci. USA 98, 7748–7753 (2001).
39. Weigel, A. V., Simon, B., Tamkun, M. M. & Krapf, D. Ergodic and nonergodic processes coexist in the plasma membrane as observed by single-molecule tracking. Proc. Natl Acad. Sci. USA 108, 6438–6443 (2011).
40. Horowitz, J. M., Zhou, K. & England, J. L. Minimum energetic cost to maintain a target nonequilibrium state. Phys. Rev. E 95, 042102 (2017).
41. Horowitz, J. & England, J. Information-theoretic bound on the entropy production to maintain a classical nonequilibrium distribution using ancillary control. Entropy 19, 333 (2017).
42. Pietzonka, P., Barato, A. C. & Seifert, U. Universal bound on the efficiency of molecular motors. J. Stat. Mech.: Theory Exp. 2016, 124004 (2016).
43. Brown, A. I. & Sivak, D. A. Allocating dissipation across a molecular machine cycle to maximize flux. Proc. Natl Acad. Sci. USA 114, 11057–11062 (2017).
44. Large, S. J. & Sivak, D. A. Optimal discrete control: minimizing dissipation in discretely driven nonequilibrium systems. Preprint at https://arxiv.org/pdf/1812.08216 (2018).
45. Cinlar, E. Introduction to Stochastic Processes (Courier Corporation, 2013).
46. Bedeaux, D., Lakatos-Lindenberg, K. & Shuler, K. E. On the relation between master equations and random walks and their solutions. J. Math. Phys. 12, 2116–2123 (1971).
47. Roldán, É. & Parrondo, J. M. Entropy production and Kullback-Leibler divergence between stationary trajectories of discrete systems. Phys. Rev. E 85, 031129 (2012).
48. Esposito, M. & Lindenberg, K. Continuous-time random walk for open systems: fluctuation theorems and counting statistics. Phys. Rev. E 77, 051119 (2008).
49. Maes, C., Netočný, K. & Wynants, B. Dynamical fluctuations for semi-Markov processes. J. Phys. A: Math. Theor. 42, 365002 (2009).
50. Wang, H. & Qian, H. On detailed balance and reversibility of semi-Markov processes and single-molecule enzyme kinetics. J. Math. Phys. 48, 013303 (2007).
51. Andrieux, D. et al. Entropy production and time asymmetry in nonequilibrium fluctuations. Phys. Rev. Lett. 98, 150601 (2007).


52. Shiraishi, N. & Sagawa, T. Fluctuation theorem for partially masked nonequilibrium dynamics. Phys. Rev. E 91, 012130 (2015).
53. Polettini, M. & Esposito, M. Effective thermodynamics for a marginal observer. Phys. Rev. Lett. 119, 240601 (2017).
54. Van den Broeck, C. & Esposito, M. Ensemble and trajectory thermodynamics: a brief introduction. Phys. A: Stat. Mech. Appl. 418, 6–16 (2015).
55. Esposito, M. & Parrondo, J. M. Stochastic thermodynamics of hidden pumps. Phys. Rev. E 91, 052114 (2015).
56. Bisker, G., Polettini, M., Gingrich, T. R. & Horowitz, J. M. Hierarchical bounds on entropy production inferred from partial information. J. Stat. Mech.: Theory Exp. 2017, 093210 (2017).
57. Polettini, M. & Esposito, M. Effective fluctuation and response theory. Preprint at https://arxiv.org/pdf/1803.03552 (2018).
58. Terrell, G. R. & Scott, D. W. Variable kernel density estimation. Ann. Stat. 20, 1236–1265 (1992).
59. Botev, Z. I. et al. Kernel density estimation via diffusion. Ann. Stat. 38, 2916–2957 (2010).
60. Gillespie, D. T. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81, 2340–2361 (1977).
61. Gingrich, T. R., Horowitz, J. M., Perunov, N. & England, J. L. Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. 116, 120601 (2016).
62. Barato, A. C. & Seifert, U. Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 114, 158101 (2015).

Acknowledgements
I.A.M. and J.M.R.P. acknowledge funding from the Spanish Government through grants TerMic (FIS2014-52486-R) and Contract (FIS2017-83709-R). I.A.M. acknowledges funding from the Juan de la Cierva program. G.B. acknowledges the Zuckerman STEM Leadership Program. J.M.H. is supported by the Gordon and Betty Moore Foundation as a Physics of Living Systems Fellow through Grant No. GBMF4513.

Author contributions
J.M.R.P. conceived the project. I.A.M. and G.B. performed the numerical simulations and analyzed the data. All authors discussed the results and wrote the paper.

Additional information
Supplementary Information accompanies this paper at https://doi.org/10.1038/s41467-019-11051-w.

Competing interests: The authors declare no competing interests.

© The Author(s) 2019
