
Article

Quantifying Information without Entropy: Identifying Intermittent Disturbances in Dynamical Systems

Angela Montoya 1, Ed Habtour 2 and Fernando Moreu 3,*

1 Sandia National Laboratories, Albuquerque, NM 87185, USA; [email protected]
2 William E. Boeing Department of Aeronautics & Astronautics, University of Washington, Seattle, WA 98195, USA; [email protected]
3 Department of Civil, Construction, & Environmental Engineering, University of New Mexico, Albuquerque, NM 87131, USA
* Correspondence: [email protected]; Tel.: +1-217-417-1204

 Received: 23 September 2020; Accepted: 19 October 2020; Published: 23 October 2020 

Abstract: A system’s response to disturbances in an internal or external driving signal can be characterized as performing an implicit computation, where the dynamics of the system are a manifestation of its new state holding some memory about those disturbances. Identifying small disturbances in the response signal requires detailed information about the dynamics of the inputs, which can be challenging. This paper presents a new method called the Information Impulse Function (IIF) for detecting and time-localizing small disturbances in system response data. The novelty of the IIF is its ability to measure relative information content without using Boltzmann’s equation by modeling signal transmission as a series of dissipative steps. Since a detailed expression of the informational structure in the signal is achieved with the IIF, it is ideal for detecting disturbances in the response signal, i.e., the system dynamics. Those findings are based on numerical studies of the topological structure of the dynamics of a nonlinear system due to perturbed driving signals. The IIF is compared to both Permutation entropy and Shannon entropy to demonstrate its entropy-like relationship with system state and its degree of sensitivity to perturbations in a driving signal.

Keywords: information entropy; discontinuity detection; intermittent disturbance; nonlinear dynamical system

Entropy 2020, 22, 1199; doi:10.3390/e22111199; www.mdpi.com/journal/entropy

1. Introduction

The detection of small transient events, i.e., momentary internal or external discontinuities in the inputs or outputs of nonlinear dynamical systems, has received considerable attention in recent years. It continues to be an open scientific problem in many disciplines, such as physics [1], engineering [2], and the biological sciences [3]. Dynamical systems can be thought of as networks of interacting elements that change over time [4–9]. Those systems display complex changing behaviors at the global (macroscopic) scale, emerging from the collective actions of the interacting elements at the microscopic scale [10,11]. Due to the complexity of nonlinear systems, small global or microscopic transient events can influence the emerging dynamics and trajectories of those systems [2]. A natural propensity for chaotic behavior can exist in any complex system or network [12]. Thus, it is crucial to develop efficient methods for detecting small transient events and to gain insights into the ramifications of those events by tracking the evolution of the system dynamics. Examples of transient events may include malicious attacks on social or communication networks [13], whirls in sand particles instigated by tiny disturbances [14], intermittent switching between open and closed cracks in mechanical structures leading to sudden cascaded failure [15], acoustic flame extinction [16], or evidence of a distant catastrophic event like an earthquake [17].

Detecting discontinuities can be particularly challenging in systems exhibiting nonlinear, chaotic, or time-varying behaviors [18–21]. Although effective in many cases, detection methods that are based on signal statistics ignore causal or correlational relationships between data points. For example, the mean and standard deviation are typically calculated with the assumption that each data point is independent and part of the same population. If the assumption is reasonable for most data points, i.e., any relationship between points is weak, then outliers can be detected by merely noting points that are significantly different from the mean. However, transient events that correlate with the dynamics of the system produce subtle changes to the shape of the distribution at the tails. This subtlety can mask an unusual behavior if the data are not partitioned or not modeled appropriately [22].

For a dynamical system, consecutive points are limited to physically realizable solutions of the underlying governing equations of the system. In other words, consecutive data points in a dynamical system are inherently causal, so sudden disturbances will change the system trajectory, if only for a short amount of time [23]. As a result, the system will have a topological structure intrinsic to the dynamic behaviors instigated by the disturbances. The topological structure holds some memory of the disturbances, which is reflected in the dynamics of the new state of the system [23,24]. Memory, however brief, implies the storage or transference of information [7]. In recent years, measures of information have become an important tool for discriminating between deterministic and stochastic behavior in dynamical systems [25,26].
The concept that physical systems produce information was proposed to the dynamics community by Robert Shaw [27]. His idea revolutionized the approach to dynamical systems analysis by allowing the use of information theory, namely entropy, to quantify changes in a dynamical system’s topological structure. Information entropy measures the variation of pattern in a sequence, thereby capturing the internal structure or relationships that exist between data points [28]. However, information entropy, like its thermodynamic counterpart, describes macroscopic, not microscopic, behaviors. Just as with signal statistics, small or localized system changes can be easily engulfed by larger or more dominant behaviors.

This paper aims to solve the technical challenge of creating a reliable method for detecting small discontinuities in the response signal of a nonlinear dynamical system, even during chaos. This is particularly important since small disturbances in chaotic systems may have large consequences. To do this, the authors chose to take an information theoretical approach to account for changes in the topological structure of the system. Some background on dynamical systems and information theory is provided in Section 2 to explain this approach. To overcome the limitation of using statistically based methods to examine changes in response data that are statistically insignificant, an entropy-like function that does not require Boltzmann’s equation was devised. Discussion and derivation of the new function are provided in Section 3. The function appears to reveal a detailed informational structure of the response signal that, as demonstrated by the example in Section 4, is sensitive to small intermittent events or impulses. Therefore, the authors call it the Information Impulse Function (IIF); it examines a response signal for discontinuity events on a more localized scale than traditional information entropy methods.
Numerical experiments were then conducted to examine the detection sensitivity of the IIF. Results in Section 5 show that the IIF outperforms Permutation entropy and Shannon entropy in detecting small discontinuities in a signal and is able to capture the change in the dynamic structure of the system using the Poincaré distance plot, which is the standard for characterizing nonlinear and chaotic behaviors in dynamical systems [3]. Finally, the conclusion and future work are presented in Section 6.

2. Theory and Background

2.1. Topological Structures of Dynamical Systems

In this section, a general overview of the theory and applicability of nonlinear dynamical systems is provided, with an emphasis on the state behaviors of a Duffing system [29]. The Duffing system was chosen because of its rich complexity, its considerable utility in physics and engineering applications, and its role as a benchmark in analyzing nonlinear systems. From a mathematical perspective, the topological structure of any dynamical system can be referred to as the system state space, S, such that η(τ) ∈ S defines the current dynamic states of the interacting elements in the system. Evolving from a present to a future state in a continuous-time system is governed by an evolution law that acts on S. Typically, the data or signal collected from a dynamical system evolve under this law, which acts on S and can be expressed as a system of differential equations (Equation (1)):

\dot{\eta} = \mathcal{F}(\eta, \tau), \quad \eta \in S, \quad \tau \in \mathbb{R} \tag{1}

A state space S can be a set of discrete elements or a manifold. The vector η(τ) represents the state of the dynamical system at a nondimensional characteristic time τ, where η has dimension n × 1. The function $\mathcal{F}(\cdot)$ is the evolution law that represents the dynamics of the data.

To improve the sensitivity of detecting momentary disturbances in a dynamical system, it is necessary to describe its topological changes due to small discontinuities in the input signal (global disturbances) and in the parameters (internal disturbances) of the system. To this end, the Duffing system, which contains a cubic nonlinear term, can be represented as a nonlinear ordinary differential equation (ODE), as follows:

\ddot{\eta} = -2\zeta\dot{\eta} + \eta - k\eta^{3} + E(\Omega, \tau) \tag{2}

The forcing function E(Ω, τ) can be thought of as an input signal into a dynamical system. A harmonic signal, where E(Ω, τ) = F cos(Ωτ), is selected in this study. The forcing amplitude and frequency are F and Ω, respectively. The dissipative and nonlinear restoring forces contain a damping ratio, ζ, and a cubic coefficient, k, respectively. The system’s natural frequency, ω, is equal to one, such that the linear restoring force is ω²η = η. The over-dots represent the temporal derivatives of the system response η, which can be an output signal at time τ. In this paper, the dynamical system in Equation (2) is treated as a mechanical system. Therefore, η̇ and η̈ are referred to as velocity and acceleration, respectively. The coefficients in Equation (2) are nominally set to the dimensionless values ζ = 0.15, k = 1, and Ω = 1.2. These values were chosen because they produce a wide range of dynamical behavior over a relatively short range of excitation values. The excitation range used in this paper was 0.1 to 0.45. The topological evolution of a dynamical system can be visualized via the Poincaré distance [30], which is based on the system phase portrait, i.e., velocity versus response, as shown in Figure 1.
The Poincaré distance reveals important intrinsic features about the dynamics of the system, such as its stability, nonlinearity, and chaotic behaviors. The Poincaré distance, r, is computed with Equation (3), where η and η̇ are taken at intervals of ∆τ = 0.45. The parameter r is the Euclidean distance from the origin on a phase diagram for η(τ):

r = \sqrt{\eta^{2} + \dot{\eta}^{2}} \tag{3}
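As a concrete illustration, Equation (3) can be evaluated on any sampled phase-space trajectory. The sketch below uses an illustrative decaying oscillation rather than actual Duffing response data; only the sampling interval ∆τ = 0.45 follows the text.

```python
import numpy as np

# A minimal sketch of the Poincaré distance in Eq. (3): sample the phase
# portrait at intervals of delta_tau = 0.45 and take the Euclidean distance
# from the origin. The decaying oscillation is a stand-in trajectory,
# not Duffing data.
dt = 0.05
tau = np.arange(0.0, 100.0, dt)
eta = np.exp(-0.05 * tau) * np.cos(tau)         # illustrative response eta(tau)
deta = np.gradient(eta, dt)                     # velocity d(eta)/d(tau)

step = round(0.45 / dt)                         # samples per Poincare interval
r = np.sqrt(eta[::step]**2 + deta[::step]**2)   # Eq. (3)
```

For a decaying trajectory like this one, the sampled r values spiral toward the origin; for the chaotic Duffing cases in the paper, the same construction scatters across an attractor.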

Figure 1. Poincaré distance showing the topological evolution of the Duffing oscillator with parameters: ζ = 0.15, k = 1, and Ω = 1.2. Forward and backward solutions are given in grey and black, respectively.

From the Poincaré plot, it can be observed that the system displays steady state behavior as long as 0

The redundancy in a sequence M is calculated by categorizing each mj into one of N possible categories, which are referred to as states henceforth. By inspection of Equation (4), it can be inferred that SE results are highly dependent on both the number of states and the criteria for assigning each


mj to a particular state, as that determines the probabilities pj. In classical information theory, states are defined by an encoding scheme used to package a signal and transmit it across a channel [28]. The main criterion for avoiding information loss is that the encoding scheme must contain enough dimensionality to uniquely describe all possible states. Implicit in Shannon’s derivation is that states have significance to the analyst. The examples in his seminal work use words and letters in English sentences to make that point. The analog for response data of a mechanical or biological system, however, is ambiguous. Significance is an arbitrary condition, which means the definition of a state and the set of all possible states is open to the interpretation of the analyst.

As a consequence, there are many extensions of SE that retain the utility of Boltzmann’s distribution, H = -\sum p \log(p), and address the concept of state for a dynamical system in different ways. Notable examples relevant to this paper are Kolmogorov–Sinai entropy, Permutation entropy, spectral entropy, and Singular Entropy. Unlike the extensions mentioned above, the IIF, as will be explained in the next section, produces comparable results even though the use of Boltzmann’s equation is explicitly abandoned.

Kolmogorov–Sinai entropy, also known as metric entropy, defines states as partitions in phase space, making a direct connection to the physics of dynamical systems [33,34]. A reconstruction of the phase space is required for the computation of the entropy, which can make metric entropy impractical for real-time applications [35]. Permutation entropy was introduced to account for the complexity of order in a sequence [36]. Unlike metric entropy, permutation entropy does not require a sequence to be the result of a physical process. The states are defined from the set of all possible permutations for a given segment length.
Despite permutation entropy’s disconnection from the physics of the system, the general symbolic equality of permutation entropy and metric entropy was demonstrated, which implies that permutation entropy can be employed to study physical systems [37]. Since permutation entropy is computationally simpler than metric entropy, it is widely utilized in many applications. Garland et al., for example, used a rolling permutation entropy to examine anomalies in paleoclimate data [38]. Yao et al. cited the computational ease of permutation entropy as making it an attractive representative entropy method for quantifying complex processes [39].

Spectral entropy was devised for periodic, stationary signals, in which the Fourier spectrum is applied to partition the trajectory of a signal into repeatable behaviors [40]. Spectral entropy is commonly used in a variety of applications for signal classification. Recent applications include structural health monitoring [41] and identification of sleep disorders [42]. The continued utilization of spectral entropy is because a dynamic behavior can be partitioned easily based on its frequency content. The current implementation of the IIF uses the short-time Fourier transform (STFT) on this premise (see Supplementary Materials).

Of the four SE extensions, the IIF is most similar to wavelet singular entropy (WSE), which uses the singular values from a singular value decomposition (SVD) of a wavelet time-frequency decomposition in the Boltzmann equation to express an entropy [43]. WSE operates on the premise that the singular values correlate to dynamical states. The IIF assumes that dynamical behavior is expressed in the singular vectors instead.

For simplicity, the SE probabilities are calculated by binning acceleration amplitude values (a_j) according to Equation (5):

p_j = \frac{a_j}{\sum_{j=1}^{N} a_j} \quad \text{for } j = 1, \ldots, N \tag{5}

In this study, the total number of bins, N, was 250, which was chosen as a nominally representative and easily computable example of an SE solution given a signal length of 5000 points.

The permutation entropy of order 3, lag 1 (PEn3) is computed by collecting consecutive sequences of three values from a discretely sampled signal, rank-ordering them, and comparing them to a list

of six possible patterns: \{\pi_k\}_{k=1}^{6} = \{[1\,2\,3], [2\,3\,1], \ldots, [2\,1\,3]\}. The signal is incremented by one sample (lag 1), so the patterns overlap. PEn3 is thus computed via Equation (6):

\mathrm{PEn}_3 = -\frac{1}{\log_2 3!} \sum_{k=1}^{3!} p_k \log_2 p_k \tag{6}

where p_k is calculated by taking the number of times a specific pattern, π_k, is observed in the signal and dividing it by the total number of categorized sequences. Details on computing PEn3 can be found in [44]. The permutation entropy results reported in this paper were limited to order 3. Higher orders increased the computation time beyond what would be a reasonable comparison to the IIF without significant gain in sensitivity.

This paper compares the normalized permutation entropy of order 3, lag 1 (PEn3) and an SE solution to the IIF. SE and PEn3 are chosen over Spectral and Singular Entropies because they do not share as many common features with the IIF. Additionally, with the parameters chosen, both of these entropy variations are comparable to the IIF in terms of algorithmic simplicity.
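The pattern-counting procedure behind Equation (6) can be sketched in a few lines. The following is a minimal illustration of the standard Bandt–Pompe computation, not the implementation used in the paper; the test signals are our own.

```python
import numpy as np
from math import factorial, log2

# A minimal sketch of permutation entropy of order 3, lag 1 (Eq. (6)):
# count ordinal patterns of overlapping 3-sample windows, convert counts
# to probabilities, and normalize the entropy by log2(3!).
def pen3(x):
    n = 3
    counts = {}
    for i in range(len(x) - n + 1):
        pattern = tuple(np.argsort(x[i:i + n]))   # ordinal pattern of 3 samples
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    p = np.array([c / total for c in counts.values()])
    # normalized by log2(3!) so the result lies in [0, 1]
    return float(-np.sum(p * np.log2(p)) / log2(factorial(n)))

rng = np.random.default_rng(1)
print(pen3(rng.standard_normal(5000)))   # white noise: close to 1
print(pen3(np.arange(100.0)))            # monotone ramp: a single pattern, entropy 0
```

A white-noise signal visits all six patterns nearly uniformly, so the normalized value approaches 1; a monotone signal produces a single pattern and a value of 0.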

3. The Information Impulse Function

Prior to computing the IIF, the signal must be decomposed such that it is uniquely yet completely described. This is necessary for the SVD computation in the IIF and is analogous to encoding the signal for transmission. Vector space representations are common for both dynamical systems analysis and encoding algorithms [45–47]. The decomposition method required by the IIF is otherwise not strictly prescribed. Any one of many common time-frequency decomposition methods can map a time history into an appropriate vector space. For the purposes of demonstration, a signal f(τ) is decomposed into a vector space using the discrete STFT modulated with a periodic Hann window, W. Equation (7) describes the STFT for f(τ):

Y(\kappa, \omega) = \sum_{\tau=-\infty}^{\infty} f(\tau)\, W(\tau - \kappa)\, e^{-i\omega\tau} \tag{7}

The decomposed signal f(τ) is represented by Y, which is a function of two dimensions: frequency ω and discrete time step κ. The resulting size of Y is m × n, where n corresponds to the number of discrete time steps and m corresponds to the number of frequency bins. In general, this paper assumes n ≥ m.

Performing an SVD on Y forms a rotation from the original signal basis to the most compact description of the signal that can be given only by linear transformations. Parenthetically, the SVD description of Y can be used as an expression of the algorithmic or Kolmogorov complexity of the signal. Equation (8) is the decomposition for Y, which contains complex values:

Y(\kappa, \omega) = U S V^{T} \tag{8}

Here, V^T denotes the conjugate transpose of V. The singular value matrix, S, is rank-ordered such that S_{1,1} > S_{2,2} > … > S_{r,r}. Y is a rank-C matrix, where C is equal to the number of nonzero singular values in S. U and V are the left and right singular vector matrices, respectively. The columns of U and V form a set of orthonormal vectors such that U and V are unitary. Therefore, if Y is m × n, then U is m × m, S is m × n, and V is n × n, for n ≥ m. U and V can be thought of as representations of different behaviors in Y, which is the signal under study in the frequency domain.

Since USV^T is linearly decomposable and rank-ordered, Y can be approximated by rank reduction or SVD truncation [48]. The matrix Y is approximated by either subtracting out singular values in reverse order starting with the smallest value, or leaving them out of the sum, as expressed in Equation (9):

Y^{r} = \sum_{q=1}^{r} U S_{q} V^{T}, \quad r \le C \tag{9}

The resulting rank of the approximation, given by r, is equal to the highest index of the singular value used in the sum. At full rank (r = C), Y is approximated by the SVD with minimal error [48]. The net effect on Y of subsequent approximations by rank reduction is a steady increase in information loss. The product of the truncated matrix V and the transpose of its complex conjugate V̂, and likewise U and Û, only reach the identity matrix I when all vectors (index j) are included. Equation (10) shows this by writing the unitary product as a sum, similarly to Equation (9):

\lim_{r \to n} \sum_{q=1}^{r} V_{iq} \hat{V}_{qj} = I, \quad i = 1, \ldots, n, \; j = 1, \ldots, n

\lim_{r \to m} \sum_{q=1}^{r} U_{iq} \hat{U}_{qj} = I, \quad i = 1, \ldots, m, \; j = 1, \ldots, m \tag{10}

The repeated index does not imply summation, as the sum is shown explicitly. The intent of the IIF is to quantify the work performed by the rank reduction on the signal subspaces U and V, and thereby to examine the signal’s behavior. The rate of convergence to the identity matrix for each index i as a function of r is used to represent the information loss incurred by a rank reduction specific to each subspace. This rate can be quantified by collecting the diagonal elements resulting from the sums in Equation (10) for each value of r. The resulting expression simplifies to the cumulative sum of the conjugate square of the singular vector elements over the index j, which represents different values of r. Subsequently, the IIF potential functions for U and V are computed in Equation (11):

\Phi^{R}_{ij} = \sum_{q=1}^{j} \left|V_{iq}\right|^{2}, \quad i = 1, \ldots, n, \; j = 1, \ldots, n

\Phi^{L}_{ij} = \sum_{q=1}^{j} \left|U_{iq}\right|^{2}, \quad i = 1, \ldots, m, \; j = 1, \ldots, m \tag{11}

For each index i, Φ^R_{ij} and Φ^L_{ij} monotonically increase with j. As such, the potentials describe the contribution gradient to the full signal with respect to column index j. Alternatively, rank reduction can be thought of as a force acting in the j direction, i.e., performing work. Constraining the direction of action to a single dimension implies that the curl is zero. Without an explicit proof, the expressions in Equation (11) are assumed to represent conservative potential functions in j for every value of i.

The ranges for the expressions in Equation (11) (n and m, respectively) are determined by the size of Y. Unity is reached, as shown in Equation (10), when all vectors are included. However, Equation (11) reaches a maximum in i when j corresponds to a full-rank approximation of Y. This is a consequence of rank ordering in the SVD. The corresponding singular vectors in U and V must contain redundant information because they are effectively removed from the product. Thus, a full-rank approximation of Y is equated with the maximum information potential. The authors use C to denote this value because the maximum potential can also be thought of as the minimum channel capacity needed to transmit Y with minimum error. The work can be calculated by integrating the potential functions in the direction j up to the maximum potential at C, as follows:

\mathrm{IIF}^{R}(i) = \frac{1}{C(C+1)} \sum_{j=1}^{C} \Phi^{R}_{ij}, \quad i = 1, \ldots, n, \quad C \le m

\mathrm{IIF}^{L}(i) = \frac{1}{C(C+1)} \sum_{j=1}^{C} \Phi^{L}_{ij}, \quad i = 1, \ldots, m, \quad C \le m \tag{12}

Equation (12) implies that IIF^R and IIF^L are the work required per index i to compress the signal Y (analogous to compressing gas particles in a thermodynamic system) with respect to the right and left singular vectors, respectively. IIF^R and IIF^L are normalized such that the sum over index i for either function is 1/2. The fact that only a scalar value is needed to normalize Equation (12) is a consequence of defining the potential functions on normalized subspaces. IIF^R and IIF^L are given the generic unit of intensity, since Equation (12) is proportional to the square of a vector subspace. Either IIF^R or IIF^L may be used to interrogate the local information content of a signal. To use the IIF to find a discontinuity in time, this paper uses the IIF^R form exclusively.
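The pipeline of Equations (7)–(12) can be sketched end to end: decompose a signal with a Hann-windowed STFT, take the SVD, accumulate the potentials of Equation (11), and integrate them into IIF^R via Equation (12). The code below is a simplified illustration with our own window length, hop size, and test signal, not the paper's exact implementation. Note that the normalization identity (the IIF^R values summing to 1/2) follows directly from the unit norm of the columns of V.

```python
import numpy as np

# End-to-end sketch of Eqs. (7)-(12): Hann-windowed STFT, SVD of the
# time-frequency matrix Y, cumulative potentials Phi^R (Eq. (11)), and
# the right-form IIF^R (Eq. (12)). Window length, hop, and test signal
# are illustrative choices of our own.
def stft(f, win_len=16, hop=4):
    W = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(win_len) / win_len)  # periodic Hann
    starts = range(0, len(f) - win_len + 1, hop)
    frames = np.array([f[s:s + win_len] * W for s in starts])
    return np.fft.rfft(frames, axis=1).T          # m frequency bins x n time steps

def iif_right(Y):
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T                               # columns: right singular vectors
    C = int(np.sum(s > 1e-12 * s[0]))             # rank C: nonzero singular values
    phi = np.cumsum(np.abs(V[:, :C])**2, axis=1)  # Phi^R_ij, Eq. (11)
    return phi.sum(axis=1) / (C * (C + 1))        # IIF^R(i), Eq. (12)

f = np.sin(1.2 * 0.05 * np.arange(5000))          # smooth harmonic signal
f[3200] += 0.5                                    # small single-sample discontinuity
iif = iif_right(stft(f))
print(round(float(iif.sum()), 6))                 # sums to 1/2 by construction
print(int(np.argmax(iif)))                        # peaks at the disturbed columns
```

Because the columns of V containing the spike carry disproportionate weight (high leverage) in the residual singular vectors, IIF^R spikes at the STFT columns covering the disturbance, which is the localization behavior the paper exploits.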

4. Application of the IIF to Simulated Dynamic Response Data

4.1. Simulation of Discontinuities in Nonlinear Dynamical Systems

The ability of the IIF algorithm to detect discontinuities in a system response was examined in this paper by introducing both global and local discontinuities to the system defined in Equation (2). The global discontinuities were injected into the input signal, E(Ω, τ). The local discontinuities, on the other hand, were introduced into the dissipative and nonlinear restoring forces, i.e., the damping and stiffness parameters, respectively. Discontinuities were simulated by adding a delta function to the parameter of interest in Equation (2) as follows:

(F_d, k_d, \zeta_d) = A_d\, (F, k, \zeta)\, \delta(\tau - \tau_i) \tag{13}

The impulse amplitude A_d is a percentage of the tested parameter F, k, or ζ that varies from 1% to 10% in increments of 1%. The Dirac delta function, δ, is centered at τ_i. The result in Equation (13) is zero everywhere until τ reaches τ_i. The results presented in this paper are for discontinuities at τ_i = 160. Equations (14) through (16) represent the test cases for discontinuities in force excitation, stiffness, and damping:

\ddot{\eta} + 2\zeta\dot{\eta} - \eta + k\eta^{3} = (F + F_d)\cos(\Omega\tau) \tag{14}

\ddot{\eta} + 2\zeta\dot{\eta} - \eta + (k + k_d)\eta^{3} = F\cos(\Omega\tau) \tag{15}

\ddot{\eta} + 2(\zeta + \zeta_d)\dot{\eta} - \eta + k\eta^{3} = F\cos(\Omega\tau) \tag{16}

The above equations were solved numerically using MATLAB ODE45, which is a Runge–Kutta ordinary differential equation (ODE) solver. The algorithm was devised to ensure that the outputs can be evenly sampled and that the parameters may be defined as functions of time: ζ(τ), k(τ), F(τ), and Ω(τ). Figure 2 describes the procedure.
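The paper solves Equations (14)–(16) with MATLAB ODE45; as a language-neutral illustration, the forced test case in Equation (14) can be sketched with a fixed-step fourth-order Runge–Kutta loop. The Dirac delta is approximated here by adding the impulse amplitude to the forcing over a single time step, which is one possible discrete-time reading of Equation (13); the step size and simulation span are our own choices.

```python
import numpy as np

# Fixed-step RK4 sketch of the Duffing test case in Eq. (14).
# Parameter values follow the paper; the delta function is approximated
# by boosting the forcing amplitude over one sampling interval at tau_i.
zeta, k, F, Omega = 0.15, 1.0, 0.447, 1.2
tau_i, A_d, dt = 160.0, 0.10, 0.05

def forcing(tau):
    F_d = A_d * F if tau_i <= tau < tau_i + dt else 0.0
    return (F + F_d) * np.cos(Omega * tau)

def deriv(tau, y):
    eta, v = y
    return np.array([v, -2.0*zeta*v + eta - k*eta**3 + forcing(tau)])

tau = np.linspace(0.0, 250.0, 5001)          # evenly sampled output grid
y = np.zeros((tau.size, 2))                  # columns: eta, d(eta)/d(tau)
for n in range(tau.size - 1):
    t, h = tau[n], dt
    k1 = deriv(t, y[n])
    k2 = deriv(t + h/2, y[n] + h/2*k1)
    k3 = deriv(t + h/2, y[n] + h/2*k2)
    k4 = deriv(t + h, y[n] + h*k3)
    y[n+1] = y[n] + h/6*(k1 + 2*k2 + 2*k3 + k4)

accel = np.gradient(y[:, 1], dt)             # acceleration response signal
```

The fixed step guarantees the evenly sampled output that the paper's procedure requires and prevents the solver from stepping over the one-sample impulse, which an adaptive solver could otherwise miss.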

Figure 2. Algorithm used to simulate a Duffing oscillator for variable parameters.

In addition to the three test cases described by Equations (14)–(16), a set of baseline test cases were made that contained no discontinuities of any kind. Baseline cases were not needed to detect discontinuities with the IIF but were created to provide additional insight into the numeric results presented in the next section.

4.2. Example Output

While the IIF can be computed for any input or output variable, η̈ was chosen since it was more sensitive to changes than η or η̇. Therefore, the results reported in this paper were computed from the output signal η̈, which is a common measurement in physics and engineering applications. An example of calculating the IIF from an acceleration signal, η̈, due to an input signal, E, with a disturbance, F_d, while the system was experiencing chaotic behaviors is provided in Figure 3.

The disturbance was simulated as a Dirac delta function with an amplitude F_d = 0.0179 (10% of F), which was a discontinuity superimposed on the harmonic input signal with an amplitude F = 0.447 at time τ_i = 160.0. For comparison, the simulation was also computed without the disturbance and superimposed on the case with a disturbance, as shown in Figure 3. It can be observed from Figure 3a that the excess of the disturbed input signal, shown in red, over the undisturbed signal, in black, is barely visible near τ = 160. However, the aftermath of that minute event is evident in the acceleration signal well after the occurrence of the disturbance due to the chaotic nature of the system (Figure 3b). The IIF^R with and without the disturbance was calculated using Equation (12) and plotted in Figure 3c. Clearly, the IIF identified the occurrence of the disturbance and amplified its presence, which was the reason for selecting F_d = 0.1F to illustrate the mechanics of the IIF. Choosing F_d < 0.1F would make it more challenging to show the effect of the disturbance in the acceleration signal since the system was in a chaotic regime. Even though η̈ is more sensitive to disturbances than η̇ or η, the chaotic response of the system can easily camouflage the presence of small discontinuities in the signal.


.. . inEntropy a chaotic 2020 regime., 22, x FOR Even PEER thoughREVIEW η is more sensitive to disturbances than η or η, the chaotic response10 of 21 of the system can easily camouflage the presence of small discontinuities in the signal.

Figure 3. Example IIF analysis on the acceleration time history of Duffing oscillator in chaos. Plotted Figure 3. Example IIF analysis on the acceleration time history of Duffing oscillator in chaos. Plotted in black are results without a discontinuity. Plotted in red are results containing a discontinuity. in black are results without a discontinuity. Plotted in red are results containing a discontinuity. (a) (a) The sinusoidal excitation. The discontinuity at τ = 160 is almost not visible. (b) The acceleration The sinusoidal excitation. The discontinuity at τ = 160 is almost not visible. (b) The acceleration response. Due to the chaotic nature of the system, the effect of a discontinuity is evident for τ > 160 response. Due to the chaotic nature of the system, the effect of a discontinuity is evident for τ > 160 when compared to the response without a discontinuity. (c) The IIF results. The IIF computed on when compared to the response without a discontinuity. (c) The IIF results. The IIF computed on response data containing a discontinuity produced a peak value near τ = 160 that easily distinguishable response data containing a discontinuity produced a peak value near τ = 160 that easily from the background. In contrast, the IIF computed on the response data without a discontinuity did distinguishable from the background. In contrast, the IIF computed on the response data without a not produce any large peaks. discontinuity did not produce any large peaks. The application off the IIF in Figure3c clearly shows the occurrence of the disturbance when its The applicationF off the IIF in Figure 3c clearly shows theIIF occurrenceR of the disturbance when its amplitude was 0.10 even when the system was chaotic. The peak푅 value was 89.9% greater than amplitude was 0.10R 퐹 even when the system was chaotic.4 The 퐼퐼퐹 peak 1value was 89.9% greater the mean value of IIF , which was approximately 4.0 10− . 
The IIF mean value is ½ over the total number of singular vectors in V (Equation (9)). Incidentally, the IIF mean value is a convenient datum as it does not depend on the signal complexity.

However, the total number of vectors in V is a consequence of the parameters used in the STFT decomposition procedure (Equation (7)). One of those parameters is the size of the Hann window in the STFT, which determines the time resolution of the IIF and influences its sensitivity. A 16-point window was applied in the simulation shown in Figure 3. Subsequently, the resulting time resolution in the IIF was 0.2τ, which was coarser than that of the η̈ signal at 0.05τ. Additionally, the STFT caused the energy to distribute or 'leak' across multiple time bands as an artifact of sampling, which is one of the well-known shortcomings of decomposition procedures [49]. This choice of initial decomposition method impacts the sensitivity of the IIF and adds uncertainty to the location of the disturbance. The STFT was chosen as a representative decomposition method for this paper because it is one of the most well understood methods and is able to produce IIF results with a good signal-to-noise ratio. The tradeoff between sensitivity and uncertainty for different decomposition methods is currently under study.
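The decomposition step can be sketched numerically. Below is a minimal short-time Fourier transform with a 16-point Hann window, matching the window size chosen above; the hop of 4 samples is an assumption selected so that, at the 0.05τ sampling interval, successive frames advance by the stated 0.2τ time resolution (the exact parameters of Equation (7) are not reproduced here).

```python
import numpy as np

def stft_hann(x, nperseg=16, hop=4):
    """Minimal short-time Fourier transform with a Hann window.

    A sketch of the decomposition step (Equation (7)); nperseg=16 matches
    the 16-point Hann window used in the text, while hop=4 is an assumption
    chosen so that frames advance by hop * 0.05 = 0.2 tau, the stated IIF
    time resolution.
    """
    window = np.hanning(nperseg)
    frames = [np.fft.rfft(x[i:i + nperseg] * window)
              for i in range(0, len(x) - nperseg + 1, hop)]
    return np.array(frames).T  # rows: frequency bins, columns: time frames

d_tau = 0.05                          # sampling interval of the response
tau = np.arange(0.0, 320.0, d_tau)    # simulation span, as in Figure 3
signal = np.sin(tau)                  # synthetic stand-in for the response
Z = stft_hann(signal)                 # matrix whose columns feed the SVD
```

Each column of `Z` is one time frame; the column count fixes the total number of singular vectors in V after the SVD, which is why the window and hop choices directly set the IIF time resolution.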

Entropy 2020, 22, 1199 11 of 21

To gain insight into this limitation of the IIF, a closer examination of the results in Figure 3 based on the net effects (baseline − test case) is provided in Figure 4. The data points are marked to highlight the difference in temporal resolution. The net changes due to a disturbance in the input and response signals are shown in Figure 4a,b, respectively. Figure 4c shows the IIF for STFT with 8-, 16-, and 32-point window sizes. The 16-point window appears to be the most effective coverage for a single point discontinuity. Therefore, the results from the experimental simulations provided in this study were likewise based on a Hann window size of 16 points in the STFT prior to computing the IIF.

Figure 4. (a) Difference in excitation from the baseline to the test case. This figure shows the precise location of the discontinuity. (b) Difference in response from the baseline to the test case. This figure shows that the largest difference in response due to a discontinuity does not determine the location of the peak IIF value. (c) IIF^R values for three different window lengths of the STFT. The difference in time resolution and possible leaking in the STFT creates uncertainty in the exact peak location.

5. Results

The numerical results reported in this section demonstrate the sensitivity of the IIF in detecting momentary global and local discontinuities occurring in a dynamical system with cubic nonlinearity described in Section 2.1. The algorithm was examined for multiple steady-state excitation amplitudes, which were varied from 0.10 to 0.45 by an increment of 0.001. The objective was to examine the IIF response to a change in the state of the system regardless of the amplitude level of the input signal.
Discontinuities in the (i) input excitation; (ii) stiffness; and (iii) damping were investigated for ten impulse amplitudes in each of the three cases by Equation (13). The results for the first case are provided in Section 5.1 with a detailed discussion. The results and analyses of the IIF's ability to detect internal disturbances in the stiffness and damping components, i.e., cases (ii) and (iii), respectively, are reported in Section 5.2.
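To make the excitation case concrete, the sketch below integrates a Duffing-type oscillator with cubic stiffness and injects a one-sample impulse into the excitation at τ = 160. The coefficients (ζ = 0.05, unit linear stiffness, unit forcing frequency) and the semi-implicit Euler integrator are illustrative assumptions, not the paper's exact Equations (13) and (14).

```python
import numpy as np

def duffing_response(F, F_d, zeta=0.05, d_tau=0.05, tau_end=320.0):
    """Simulate a Duffing-type oscillator with a momentary excitation impulse.

    eta'' + 2*zeta*eta' + eta + eta**3 = F*sin(tau) + impulse,
    where a single-sample impulse of amplitude F_d is injected at tau = 160,
    mimicking the global-disturbance case (i). All coefficients here are
    illustrative assumptions, not the paper's Equation (14).
    """
    tau = np.arange(0.0, tau_end, d_tau)
    eta, deta = 0.0, 0.0
    accel = np.empty_like(tau)
    for i, t in enumerate(tau):
        force = F * np.sin(t)
        if abs(t - 160.0) < 0.5 * d_tau:   # discontinuity lasts one sample
            force += F_d
        accel[i] = force - 2.0 * zeta * deta - eta - eta**3
        deta += accel[i] * d_tau           # semi-implicit Euler step
        eta += deta * d_tau
    return tau, accel

tau, a_base = duffing_response(F=0.25, F_d=0.0)
_, a_test = duffing_response(F=0.25, F_d=0.25 * 0.10)  # 10% discontinuity
# The two responses coincide before tau = 160 and diverge afterwards.
```

Running the baseline and test cases from identical initial conditions is what makes the net-effect plots (baseline − test case) meaningful: any difference is attributable to the injected impulse alone.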


5.1. Detecting Global Disturbances

This section presents IIF computational results for input signals with different forcing and discontinuity amplitudes for the nonlinear dynamical system given in Equation (14). This test case represents situations where the system experiences an external (or global) disturbance. For each numerical experiment, the peak IIF^R (occurring at τ = 160.0 in all trials) was computed and compared to Shannon entropy (SE) and permutation entropy (PEn3) calculations, which were calculated using Equations (5) and (6), respectively.
Figures 5 and 6 show SE and PEn3 results, respectively, for all the experiments performed in case (i) as a function of excitation and discontinuity amplitudes, F and F_d. On very close examination, SE results showed an ability to differentiate between discontinuity levels by a very small drop in value for some excitation levels. Close examination of PEn3 results, however, did not show similar success. For example, the percent change from 1 to 10% A_d for SE was −0.04% at F = 0.25, with a steady decrease in values. The percent change from 1 to 10% A_d in peak IIF^R values for F = 0.25 was 14.9%. PEn3 showed no change.
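The two entropy baselines can be estimated as follows. This is a minimal sketch: the histogram bin count and the normalization of the permutation entropy by log2(3!) are assumptions, and the exact estimators of Equations (5) and (6) may differ.

```python
import math
from collections import Counter
import numpy as np

def shannon_entropy(x, bins=64):
    """Histogram-based Shannon entropy (cf. Equation (5)).

    The bin count is an assumption; the paper's exact SE estimator may differ.
    """
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def permutation_entropy(x, order=3):
    """Permutation entropy of order 3 (PEn3, cf. Equation (6)).

    Counts ordinal patterns of `order` consecutive samples and normalizes
    by log2(order!) so the value lies in [0, 1] (a common convention; the
    paper may report unnormalized values).
    """
    patterns = Counter(
        tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    p = np.array([c / total for c in patterns.values()])
    return float(-np.sum(p * np.log2(p)) / math.log2(math.factorial(order)))

# A monotone signal has a single ordinal pattern, hence zero PEn3:
pe_ramp = permutation_entropy(np.arange(100.0))
```

Both estimators summarize the whole record with one number, which is one plausible reason they wash out a single-sample discontinuity: one anomalous window among thousands barely shifts either distribution.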

Figure 5. Shannon entropy values as a function of background excitation and discontinuity amplitude. The separation in the discontinuity amplitude test cases is not clear.

Figure 6. Permutation entropy values as a function of background excitation and discontinuity amplitude. There is no discernable separation in the discontinuity amplitude for most test cases.


Figure 7 shows peak IIF^R values for the same data set. It can be observed that the IIF peak values monotonically, although not linearly, increased when the discontinuity amplitudes were increased. The IIF^R was able to detect the presence of a single point discontinuity clearly in all experiments, i.e., F_d = F × {0.01, 0.02, ..., 0.10}, including the chaotic region.
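The peak-versus-datum comparison used here can be sketched as follows. The IIF curve itself (Equation (9)) is not reproduced; a synthetic curve stands in for it, the vector count n = 1250 is hypothetical (chosen so the ½/n datum equals the reported ~4.0 × 10^−4 mean), and the 50% exceedance margin is likewise an assumed threshold.

```python
import numpy as np

def detect_disturbance(iif, n_vectors, margin=0.5):
    """Flag and time-localize a disturbance from an IIF-like curve.

    Per the text, the IIF mean is the fixed datum 1/2 divided by the number
    of singular vectors in V, independent of signal complexity; a peak well
    above that datum indicates a disturbance. The margin is an assumed
    threshold, and the IIF curve itself (Equation (9)) is not reproduced.
    """
    datum = 0.5 / n_vectors
    peak_idx = int(np.argmax(iif))
    excess = (iif[peak_idx] - datum) / datum   # relative exceedance
    return excess > margin, peak_idx, excess

# Synthetic stand-in curve; n = 1250 is hypothetical, chosen so the datum
# equals the ~4.0e-4 mean reported in the text.
n = 1250
iif = np.full(n, 0.5 / n)
iif[800] = (0.5 / n) * 1.899      # a peak 89.9% above the mean, as reported
found, where, excess = detect_disturbance(iif, n)
```

Because the datum ½/n is fixed by the decomposition rather than by the signal, the same threshold can be reused across excitation amplitudes without recalibration, which is what makes the separation in Figure 7 usable as a detector.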

Figure 7. IIF peak values as a function of background excitation and discontinuity amplitude. Clear separation in the discontinuity amplitude test cases shows how well the IIF is able to detect a global disturbance even when the system is chaotic. The separation between amplitude test cases is in stark contrast to the SE and PEn results shown in Figures 5 and 6.

It should be clear from Figures 5–7 that the IIF is much more sensitive to small discontinuities in the excitation force than either of the more traditional entropy approaches. Neither technique was able to clearly differentiate between 1% and 10% discontinuities in the input signal when the system was chaotic.

Commonalities between the SE, PEn3, and IIF calculations are shown in Figures 8–10, respectively. These calculations are superimposed on a 10% discontinuity Poincaré plot to illustrate the detection of the three methods in various stable and chaotic regions.
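The Poincaré plots referenced above can be produced by strobing the state once per forcing period. The sketch below assumes a unit forcing period and a synthetic periodic trajectory; the 'Poincaré distance' metric used in Figures 8–10 is defined elsewhere in the paper and is not reproduced here.

```python
import numpy as np

def poincare_section(eta, deta, tau, period):
    """Sample (eta, deta) once per forcing period to build a Poincaré section.

    A sketch of how plots like those in Figures 8-10 can be generated:
    periodic motion collapses to a few points, while chaotic motion
    scatters across the section.
    """
    samples = []
    t_next = tau[0]
    for i in range(len(tau)):
        if tau[i] >= t_next:
            samples.append((eta[i], deta[i]))
            t_next += period
    return np.array(samples)

# A period-1 limit cycle collapses to (nearly) a single Poincaré point:
tau = np.arange(0.0, 200.0, 0.01)
eta = np.cos(2 * np.pi * tau)          # synthetic periodic trajectory
deta = -2 * np.pi * np.sin(2 * np.pi * tau)
pts = poincare_section(eta, deta, tau, period=1.0)
spread = float(np.ptp(pts[:, 0]))      # near zero for periodic motion
```

The spread of the strobed points is what distinguishes the stable and chaotic excitation regions onto which the SE, PEn3, and IIF curves are superimposed.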

Figure 8. Shannon entropy against the Poincaré distance. The shape that the SE values make as a function of excitation force mimics the Poincaré distance, with similar inflection points near regions where the system behavior changes character.


Figure 9. Permutation entropy against the Poincaré distance. The shape that the PEn values make as a function of excitation force also mimics the Poincaré distance, with similar inflection points. However, the shape loses form past F = 0.3.

Figure 10. IIF peaks against the Poincaré distance. The shape that the IIF peak values make as a function of excitation force mimics the Poincaré distance similarly to both the SE and PEn results, but with a smoother curve.

The three approaches share similar characteristic behavior. In general, they appeared to trend, to some degree, with the state of the system expressed in the Poincaré plots (Figures 8–10). Each had clear inflection points near F = 0.15 and 0.26, which was consistent with the Poincaré transitions. Changes in the slopes of all three approaches occurred near or after sharp declines, in trends with the transitions in the Poincaré diagram. Additionally, all three approaches appeared to identify the chaos-stable transition between F ≈ 0.35 to 0.38. This was strong evidence that the IIF, despite not being derived from Boltzmann's formula, behaves like an entropy with regard to system state.

5.2. Detecting Internal Disturbances

This section provides a detailed discussion for the (ii) stiffness and (iii) damping cases based on the numerical results for local discontinuities for the nonlinear dynamical systems presented in Equations (15) and (16), respectively. As with the previous section, for each numerical experiment the peak IIF (occurring at τ = 160.0 in all trials) was computed and compared to Shannon entropy (SE) and permutation entropy (PEn3) calculations.


Figures 11 and 12 show SE and PEn3 results, respectively, for all the experiments performed in case (ii) as a function of excitation and stiffness discontinuity amplitudes, F and K_d. Likewise, Figures 13 and 14 show SE and PEn3 results for all experiments performed in case (iii) as a function of the excitation and damping discontinuity amplitudes, F and ζ_d. These results are similar to those found in Section 5.1 in both the general shape of the function with respect to excitation level and the insensitivity to small discontinuities.

In contrast, the IIF peak values in Figures 15 and 16 show that the IIF was able to detect the presence of a single point discontinuity clearly in all experiments, i.e., K_d = K × {0.01, 0.02, ..., 0.10} and ζ_d = ζ × {0.01, 0.02, ..., 0.10}, including the chaotic region. Similar to case (i), the IIF peak values for cases (ii) and (iii) monotonically increased when the discontinuity amplitudes were increased.
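Case (ii) can be sketched in the same spirit: the linear stiffness term is perturbed by K_d for a single time step at τ = 160, and the sweep over K_d = K × {0.01, ..., 0.10} is mimicked below. The coefficients and the semi-implicit Euler integrator are illustrative assumptions, not the paper's exact Equation (15).

```python
import numpy as np

def response_with_stiffness_blip(F, K_d, zeta=0.05, d_tau=0.05, tau_end=320.0):
    """Case (ii) sketch: a one-sample stiffness discontinuity at tau = 160.

    The linear stiffness coefficient (nominally 1) is perturbed by K_d for
    a single time step; coefficients are illustrative assumptions, not the
    paper's Equation (15).
    """
    tau = np.arange(0.0, tau_end, d_tau)
    eta, deta = 0.0, 0.0
    accel = np.empty_like(tau)
    for i, t in enumerate(tau):
        k = 1.0 + (K_d if abs(t - 160.0) < 0.5 * d_tau else 0.0)
        accel[i] = F * np.sin(t) - 2.0 * zeta * deta - k * eta - eta**3
        deta += accel[i] * d_tau           # semi-implicit Euler step
        eta += deta * d_tau
    return accel

# Sweep of internal-discontinuity amplitudes K_d = K * {0.01, ..., 0.10}:
baseline = response_with_stiffness_blip(F=0.25, K_d=0.0)
diffs = [np.max(np.abs(response_with_stiffness_blip(0.25, 0.01 * j) - baseline))
         for j in range(1, 11)]
```

Unlike case (i), the injected energy here is proportional to the instantaneous displacement η at τ = 160, not to a fixed impulse, which is consistent with the different curve shapes reported for the internal-disturbance cases.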

Figure 11. Shannon entropy values as a function of background excitation and discontinuity amplitude for case (ii). Inexplicably, values near F = 0.17 seem to show an increased entropy at 10% over lower values. Despite some small differences, these results closely follow those reported in Figure 5 for case (i).

Figure 12. Permutation entropy values as a function of background excitation and discontinuity amplitude for case (ii). Like SE, these results closely follow those reported in Figure 6 for case (i).


Figure 13. Shannon entropy values as a function of background excitation and discontinuity amplitude for case (iii). Again, these results closely follow those reported in Figure 5 for case (i) and Figure 11 for case (ii).

Figure 14. Permutation entropy values as a function of background excitation and discontinuity amplitude for case (iii). These results closely follow those reported in Figure 6 for case (i) and Figure 12 for case (ii).

Another important observation is that the shapes of the curves in Figures 15 and 16 are significantly different from those in the global disturbance case (Figure 7). This is due to the fact that the IIF is sensitive to the system's instantaneous response to a discontinuity. In contrast, the SE and PEn3 results for case (ii), shown in Figures 11 and 12, respectively, closely resemble those in case (i). Likewise, the SE and PEn3 results for case (iii) (Figures 13 and 14) are also similar to the case (i) results.

To more explicitly show the effectiveness of the IIF over SE and PEn3, the percent difference from the largest discontinuity (0.10A_d from Equation (13)) to the smallest discontinuity (0.01A_d) was calculated by Equation (17):

% Difference = 100 × (0.10 A_d − 0.01 A_d) / (0.10 A_d)    (17)

In Figure 17, the % difference results for cases (i), (ii), and (iii) are shown for IIF peak values in (a), SE in (b), and PEn3 in (c). The effectiveness of the IIF over SE and PEn3 can be concluded from the large difference in scale on the percent difference coordinate between Figure 17a and Figure 17b,c, respectively. Additionally, both the SE and PEn3 approaches have percent differences with both positive and negative signs. This suggests that the SE and PEn3 approaches are not only insensitive to small discontinuities, they are also unreliable as a detection method when the system is chaotic.
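Read literally, Equation (17) compares the discontinuity amplitudes themselves (which would always yield 90%); in context, the compared quantities are the detection statistics (IIF peak, SE, or PEn3) evaluated at the 0.10A_d and 0.01A_d discontinuities. A sketch under that interpretation, with hypothetical statistic values:

```python
def percent_difference(stat_at_10pct, stat_at_1pct):
    """Equation (17): percent change of a detection statistic from the
    largest discontinuity (0.10 A_d) to the smallest (0.01 A_d)."""
    return 100.0 * (stat_at_10pct - stat_at_1pct) / stat_at_10pct

# Hypothetical statistic values at the two discontinuity levels; a large
# positive result means the statistic grows with the discontinuity size,
# while values near zero (or negative) indicate an unreliable detector.
pd_iif = percent_difference(1.0, 0.851)
```

Under this reading, the consistently positive values in Figure 17c indicate a monotone response to discontinuity size, while sign flips (as seen for SE and PEn3) indicate no usable ordering.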


Figure 15. IIF peak values as a function of background excitation and discontinuity amplitude for case FigureFigure 15. 15.IIF IIF peak peak values values as aas function a function of backgroundof background excitation excitation and and discontinuity discontinuity amplitude amplitude for for case case (ii). The separation in the discontinuity amplitude is apparent. The shape of the curves is very (ii).(ii). The The separation separation in the in discontinuity the discontinuity amplitude amplitude is apparent. is apparent. The shape The of the shape curves of the is very curves different is very different from the global disturbance case due to the difference in sensitivity of the system to changes fromdifferent the global from disturbance the global casedisturbance due to the case diff dueerence to the in sensitivitydifference ofin thesensitivity system toof changesthe system in stitoff changesness. in stiffness. This was not the case for either SE or PEn. Thisin was stiffness. not the This case was for not either the SEcase or for PEn. either SE or PEn.

Figure 16. IIF peak values as a function of background excitation and discontinuity amplitude for case (iii). The separation in the discontinuity amplitude is apparent. The shape of the curves is very different from the global disturbance case and case (ii) due to the difference in sensitivity of the system to changes in damping. The difference in shape between the cases reported in Figure 7 for case (i) and Figure 15 for case (ii) and the current image for case (iii) results from the sensitivity of the IIF to the response of the system.

Entropy 2020, 22, 1199

Figure 17. Difference values for cases (i), (ii), and (iii) as a function of excitation force. (a) SE results; (b) PEn3 results; (c) IIF results. The low values in (a) and (b), contrasted with the larger, consistently positive values in (c), show that the IIF is much better at distinguishing between the discontinuity levels.

6. Discussion

A new method to analyze small, momentary internal or external disturbances in time history data, called the Information Impulse Function (IIF), has been presented. The IIF is derived by solving for the notional work done to transmit a signal that has been decomposed or represented in a vector space. The IIF methodology provides a way to estimate localized information content in a signal without the a priori information needed to rely on the Boltzmann distribution explicitly.

Numerical experiments were conducted to test the ability of the IIF to detect single-point discontinuities in a nonlinear system. Disturbances between 1% and 10% of the nominal parameter amplitudes were introduced into the excitation, stiffness, and damping terms. The IIF was able to detect all momentary internal and external disturbances in the response signal even when the system was chaotic. The results suggest that the amplitude of the IIF was sensitive to both the background complexity of the system and the change in system response. Topological changes in the dynamic behavior of the system due to small discontinuities can be identified easily by tracking the IIF peak values. This was true whether the source of the disturbance was external or internal. The IIF followed similar trends to both Shannon and permutation entropy, suggesting that the IIF behaved like an entropy. However, the IIF was far superior to permutation entropy and Shannon entropy in detecting small disturbances.
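The peak-tracking step described above might look like the following sketch (Python; `iif` is a hypothetical, precomputed IIF time series, and the median-plus-spread threshold rule is an illustrative assumption, not the authors' procedure):

```python
import numpy as np

def locate_disturbances(iif, n_sigma=3.0):
    """Flag samples where the IIF rises well above its background level.

    iif     : 1-D array of IIF values (assumed precomputed, e.g. via an STFT
              decomposition as in the paper's supplementary MATLAB function).
    n_sigma : how many standard deviations above the median counts as a peak.
    """
    iif = np.asarray(iif, dtype=float)
    background = np.median(iif)      # robust estimate of the background level
    threshold = background + n_sigma * np.std(iif)
    return np.flatnonzero(iif > threshold)  # time indices of flagged peaks

# Synthetic example: flat background with one injected spike at index 500.
rng = np.random.default_rng(0)
signal = 0.1 * rng.random(1000)
signal[500] += 5.0
print(locate_disturbances(signal))  # → [500]
```

The median is used for the background so that a single large peak does not drag the baseline upward; this is one of several reasonable choices for separating IIF peaks from background complexity.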
The ability of the IIF to detect and localize small discontinuities in an input to a dynamical system can be a useful feature in many biological, medical, scientific, and engineering applications. Investigating the robustness of the IIF to random noise, as well as studies on the utility of the IIF for source

identification, are on-going research activities. Future work also includes research into the use of alternate decomposition methods.

Supplementary Materials: The following are available online at http://www.mdpi.com/1099-4300/22/11/1199/s1, MATLAB Function, title: Information Impulse Function (IIF) Using Short-time Fourier Transform.

Author Contributions: IIF concept and development: A.M.; numeric simulation test design: E.H.; analysis of results: A.M. and E.H.; writing—original draft preparation: A.M. and E.H.; writing—review and editing: A.M., E.H., and F.M.; supervision: F.M. All authors have read and agreed to the published version of the manuscript.

Funding: This work is part of the Advanced Simulation and Computing (ASC) initiative at Sandia National Laboratories. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

Acknowledgments: The authors would like to additionally thank Dan Roettgen, Mike Ross, and John Pott at Sandia National Laboratories for technical review of the manuscript and support.

Conflicts of Interest: The authors declare no conflict of interest.


Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).