arXiv:1504.05429v11 [cs.OH] 4 Apr 2021

Signal filtering to obtain number of Hamiltonian paths

BRYCE KIM

This paper consists of two parts (1) and (2):

(1) function encoding of an undirected graph into a combination f(t) of sinusoidal signals, such that the number of Hamiltonian paths becomes the amplitude at zero frequency of f(t),
(2) a lowpass wide bandwidth filtering strategy for f(t), consisting of two sub-strategies: frequency binary divide-and-conquer and numerical differentiation filter-based local polynomial extrapolation sub-strategies,
(3) filtering out angular frequency 1/2 to 1, then 1/4 to 1/2, then 1/8 to 1/4 and so on.

An actual implementation of this strategy involves careful local polynomial extrapolation using numerical differentiation filters, such that obtaining the zero-frequency amplitude of f(t) takes polynomial time relative to the number of vertices. When conjectures regarding the required number of samples for specified filter designs and the time complexity of obtaining filter coefficients hold, P = NP holds conditionally.

CCS Concepts: • Mathematics of computing → Paths and connectivity problems; • Theory of computation → Complexity classes; • Hardware → Digital signal processing.

Additional Key Words and Phrases: digital signal processing, digital filter, numerical differentiation, extrapolation, lowpass wide bandwidth filtering, Hamiltonian path, P=NP problem

Author's address: Bryce Kim, [email protected].

1 INTRODUCTION

This paper consists of two parts. First, the (undirected) Hamiltonian path problem is reduced to a signal filtering problem - the problem of filtering (a combination of) sinusoidal signals, with the number of Hamiltonian paths becoming the amplitude at zero frequency (of a combination of sinusoidal signals). Then a 'divide and conquer' strategy to filtering out wide bandwidth components of a signal f(t) that encodes a graph is suggested - one filters out angular frequency 1/2 to 1, then 1/4 to 1/2, then 1/8 to 1/4 and so on. An actual implementation of this strategy involves careful local polynomial extrapolation using numerical differentiation filters. The remaining issue then is to demonstrate that this strategy works: when the conjectures of Equation (2), Equation (3) and Equation (6) - stating the minimal required filter order bound, the filter coefficient magnitude upper bound and the filter coefficient time complexity bound - hold, P = NP holds conditionally.

• Corresponding to above (1): a way of encoding an undirected graph G = (V, E) into x(t) is shown. This is done by assigning each vertex (with assigned ordinal i, when vertices are ordered from 1 to η ≡ |V|) angular frequency η^i. Let a sum of vertices refer to a sum of these vertex angular frequencies. Then each η-walk w allowed in G - a walk of length η − 1 - is assigned angular frequency ω_w that is the sum of all vertices visited; for example, if v1 is visited twice and v2 is visited once in 3-walk w, then ω_w = 2v1 + v2. The angular frequency of Hamiltonian paths in x(t) is then ω_h = Σ_{i=1}^{η} η^i. To make signal filtering convenient, we shift the Hamiltonian path angular frequency from ω_h to zero with f(t) = e^{−iω_h t} x(t), and then we re-scale time appropriately. The amplitude at zero frequency of f(t) turns out to be the number of Hamiltonian paths. Therefore, finding the number of Hamiltonian paths is reduced to finding the zero-frequency amplitude by lowpass filtering. Furthermore, Algorithm 1 computes x(t) in polynomial time relative to η. The goal then is to assign Hamiltonian paths an angular frequency not shared by other paths - this is done by utilizing the basis representation theorem, with

basis being |V| ≡ η and vertex frequencies assigned as V = {η, η², .., η^η}, given that each η-walk (a walk of length η − 1) cannot visit the same vertex η times.

The rest of the proof that Algorithm 1, which computes x(t), works as advertised is given in subsection 2.4.

• Corresponding to above (2): the problem with f(t) turns out to be that non-zero angular frequencies of f(t) with potential non-zero amplitude have domain 1/η^{η+1} ≤ |ω| ≤ 1. That is, the least positive angular frequency with potential non-zero amplitude is 1/η^{η+1}, with the maximum angular frequency with potential non-zero amplitude being 1. Conventionally, this filtering problem is considered to be infeasible, requiring exponential computational resource (relative to η). This paper presents a 'divide and conquer' strategy that demonstrates polynomial computational resource, in the sense that time complexity is O(η^{k_ffc}), where k_ffc is some constant, assuming a particular digital filter order upper bound when given filter design constraints, and polynomial time complexity relative to η in obtaining filter coefficients. In that sense P = NP, up to the filter order bound and coefficient time complexity assumptions.

This is done by effectively first filtering out 1/2 of active angular frequencies and extrapolating new samples from filter output samples, then filtering out 1/4 of originally active angular frequencies and extrapolating new samples, then filtering out 1/8 of originally active angular frequencies and so forth. This allows the continued use of the same modest cutoff frequency, instead of the extremely low cutoff frequency that f(t) seems to demand.
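As a small illustrative check (ours, not the paper's), one can count how many halvings this schedule needs before the remaining active band sits entirely below the eventual cutoff 1/η^(η+1):

```python
def cycles_needed(eta: int) -> int:
    """Number of halvings until the remaining active band [1/2**c, 1]
    lies entirely below the target cutoff 1/eta**(eta+1)."""
    target = eta ** (eta + 1)  # reciprocal of the required cutoff
    c = 0
    while 2 ** c < target:
        c += 1
    return c

# For eta >= 2, eta**2 halvings always suffice, since 2**(eta**2) > eta**(eta+1).
counts = {eta: cycles_needed(eta) for eta in range(2, 10)}
```

This is why only polynomially many (in fact η²) filtering passes are needed even though the cutoff itself is exponentially small.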

Extrapolation error is controlled by re-doing numerical differentiation whenever new samples are extrapolated using a constructed local polynomial that utilizes numerical differentiation data - this allows us to treat extrapolation errors as if they come from a combination of sinusoid inputs, given that numerical differentiation filters admit the frequency response interpretation. This tames extrapolation errors. Error analysis is then conducted with heavy use of Parseval's theorem and the 'extrapolation error of extrapolation error' strategy.

1.1 Preliminary assumption

A sufficiently large |V| ≡ η > η_0 is assumed throughout the paper for simplification purposes.

1.2 Notation style
For notation simplicity, it is assumed that η^{η^η} ≡ η^{(η^η)}. j, in contrast to typical engineering convention, does not refer to an imaginary number. Depending on context, i is used either as the imaginary unit or as an index i, as is typical in mathematical convention. k refers to positive natural number constants, with different subscripts. For notation, a^{b²} ≡ a^{(b²)} ≠ (a^b)². Cycles do not refer to graph-theoretic cycles or Hamiltonian cycles and are defined differently. Formulas that are technically not equations may still be referred to as equations for expositional convenience. Superscripts always represent exponentiation. The sampling interval is always assumed to be Δt = 1. z ≡ N_f + N_d + 1, z_2 ≡ 2(N_f + N_d) + N_f − 2. Time refers to the variable t in functions such as f(t).

1.3 Preliminary terminology
A 'sinusoid(al) contribution' always refers to a combination of sinusoid(al) contributions. That is, (a combination of) sinusoid(al) contributions, where the term in the parenthesis is often not written.

From here on, each vertex is labelled with its angular frequency, instead of its ordinal based on order from 1 to η, as V = {η, η², .., η^η} shows. Each vertex with assigned ordinal i is assigned vertex label and angular frequency of η^i. A sum of vertices then refers to a sum of vertex angular frequencies (labels).

In contrast to vertices, a sum q(t) of walks refers to a sum of signals with walk angular frequencies. Formally notated, q(t) = Σ_w e^{iω_{w,q} t}, where w refers to each walk being summed up to form q(t), and ω_{w,q} refers to the angular frequency of walk w.

A η-walk refers to a walk of length η − 1, with the restrictions that edges connecting one vertex to itself are disallowed and that a walk has to respect allowed edge connections. Edges may be used more than once. The length of a walk refers to the number of times any edge is walked upon by the walk; equivalently, it is the number of times any vertex is visited, minus one. A walk can visit one vertex more than once, as far as edge connections allow it.

1.4 Style issues in exponents and subscripts

Due to the style of the manuscript, exponents and subscripts may not be clearly identified. The θc in g_{θc}(t), w_{θc,j}(t), γ_{θc}(t), ψ_{θc}(t), γ_{θc,j}(t), ψ_{θc,j}(t) and h_{θc,j}(t) of subsubsection 3.12.3 are subscripts. The denominator of k_{..}/2^{η^{k_{1af}}} should be read as two to the power of η^{k_{1af}}. Similarly, the denominator 2^{η^{k_{333..}}} in k_{..}/2^{η^{k_{333..}}} should be read as two to the power of η^{k_{333..}}, where 333.. refers to some random subscript. 2^{η^{k_{333..}}} must be distinguished from 2η^{..}, which refers to 2 times η^{..}.

2 FUNCTION ENCODING OF GRAPH
The idea is to encode or translate an undirected graph as a computation circuit - for example, for the graph in Figure 1, the circuit of Figure 2 is generated. We use the circuit to generate x(t), from which we generate f(t).

2.1 What is x(t)?
Essentially, x(t) is the sum of all η-walks, which refer to walks of length η − 1. (Each η-walk contains η vertices, potentially repeated, and η − 1 edges, potentially repeated. This follows standard terminology.) Each η-walk w is assigned angular frequency ω_w - thus each walk represents e^{iω_w t}. That is, x(t) = Σ_w e^{iω_w t}, where w is restricted to a possible η-walk according to graph G = (V, E).

The eventual goal of this paper is to calculate the number of Hamiltonian paths. Therefore, it is essential to distinguish Hamiltonian paths from other η-walks. One simple implementation is to assign a unique frequency to walks sharing the same vertices up to permutation. That is, in Figure 1, one possible 4-walk is A-B-A-C and another is C-A-B-A. These walks share the following profile: A has been visited twice, B has been visited once, C has been visited once, D has been visited zero times. This is the implementation followed here.

Then the issues are, 1) the exact way frequency is assigned to each walk, 2) how x(t) may then be computed efficiently, without having to compute each walk signal w(t) and then summing up.

2.2 Vertex and walk frequency assignment
One first orders the vertices in V and assigns ordinal number i to each vertex, with i ranging from 1 to η. Then angular frequency η^i is assigned to each vertex, and is used directly to refer to the vertex. A walk is then assigned angular frequency as a sum of vertices. That is, ω_w = Σ_v c_v v, where v is a vertex visited by walk w and c_v is the number of times v is visited by walk w. One can then see that walks with different vertex visit counts have distinct angular frequencies via the basis representation theorem - η essentially works as a basis, and one vertex can only be visited fewer than η times: V = {η, η², .., η^η}.
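The claim that Hamiltonian paths receive a frequency shared by no other walk can be brute-force checked on a small graph. The edge set below is an illustrative assumption (Figure 1's exact edges are not reproduced in this extraction):

```python
from itertools import permutations, product

eta = 4
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]  # assumed 4-vertex example graph
adj = {i: set() for i in range(1, eta + 1)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def walk_freq(walk):
    """Frequency of a walk: sum of eta**i over visited vertices (with multiplicity)."""
    return sum(eta ** v for v in walk)

# All eta-walks (vertex sequences of length eta respecting edges, no self-loops).
all_walks = [w for w in product(range(1, eta + 1), repeat=eta)
             if all(w[k + 1] in adj[w[k]] for k in range(eta - 1))]

omega_h = sum(eta ** i for i in range(1, eta + 1))  # Hamiltonian-path frequency

# Base-eta digits of a walk frequency recover its visit counts (each < eta),
# so frequency omega_h is hit exactly by walks visiting every vertex once.
ham_by_freq = [w for w in all_walks if walk_freq(w) == omega_h]
ham_direct = [w for w in permutations(range(1, eta + 1))
              if all(w[k + 1] in adj[w[k]] for k in range(eta - 1))]
```

On this graph both counts agree: the walks at frequency ω_h are exactly the Hamiltonian paths (counting each direction separately).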

2.3 How x(t) and f(t) are computed
Recall that x(t) is the sum of all η-walks. The computation procedure is stated in Algorithm 1.

Algorithm 1: computation of x(t)
input : Graph G = (V, E), |V| = η, V = {η, η², .., η^η}, time t
output: x(t)
1  Initialize arrays u1 and u2 of length η, index starting from 1;
2  x ← 0;
3  for j ← 1 to η do
4      u1[j] ← e^{iη^j t};
5  end
6  for j ← 2 to η do
7      for m ← 1 to η do
8          u2[m] ← 0;
9          for p ← 1 to η do
10             if (η^p, η^m) ∈ E then
11                 u2[m] ← u2[m] + u1[p];
12             end
13         end
14         u2[m] ← u2[m] · e^{iη^m t};
15     end
16     for m ← 1 to η do
17         u1[m] ← u2[m];
18     end
19 end
20 for m ← 1 to η do
21     x ← x + u1[m];
22 end
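A runnable sketch of Algorithm 1 under the reconstruction above (the example graph and the function name are assumptions for illustration):

```python
import cmath
from itertools import product

def x_of_t(edges, eta, t):
    """Dynamic-programming evaluation of x(t): u1[m] accumulates the sum of
    e^{i*omega*t} over all j-walks ending at vertex m; each level appends
    that vertex's oscillator factor e^{i*eta**m*t}."""
    adj = {m: set() for m in range(1, eta + 1)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    u1 = {m: cmath.exp(1j * (eta ** m) * t) for m in range(1, eta + 1)}  # 1-walks
    for _ in range(2, eta + 1):  # levels j = 2 .. eta
        u1 = {m: sum(u1[p] for p in adj[m]) * cmath.exp(1j * (eta ** m) * t)
              for m in range(1, eta + 1)}
    return sum(u1.values())

# Cross-check against direct enumeration of all eta-walks (assumed graph):
edges, eta, t = [(1, 2), (1, 3), (2, 3), (3, 4)], 4, 0.37
adj = {m: set() for m in range(1, eta + 1)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
brute = sum(cmath.exp(1j * sum(eta ** v for v in w) * t)
            for w in product(range(1, eta + 1), repeat=eta)
            if all(w[k + 1] in adj[w[k]] for k in range(eta - 1)))
diff = abs(x_of_t(edges, eta, t) - brute)
```

The dynamic program touches each (level, vertex, neighbor) triple once, so its cost is polynomial in η even though the number of walks it sums over is exponential.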

Then it can be seen that finding the number of Hamiltonian paths amounts to finding F(0) in the frequency domain of f(t), which means lowpass filtering of f(t), just with a wide bandwidth to be filtered out.

Fig. 1. A 4-vertex graph, with vertices A, B, C, D.

Fig. 2. The circuit expansion representation of the graph in Figure 1. Edges in this figure are wires consistent with edge connections in the graph, with circles representing vertices at each level. ∼ represents the function generator associated with each vertex, ×∼ represents multiplying the sum coming from the adder with the function generator (oscillator) associated with each vertex. L1, L2, L3, L4 represent levels. The function generator at each vertex circle with angular frequency v sends out e^{ivt} to every outgoing wire. The adder (Σ) simply sums up values coming from incoming wires. [Figure: four rows of ∼ → ×∼ → ×∼ → ×∼ chains, one per vertex A, B, C, D, across levels L1 to L4, combined by adders into x(t).]

2.4 Explaining the x(t) algorithm
Let us prove, by induction, that Algorithm 1 does what it is supposed to do - compute x(t) as the sum of all η-walks, with vertex frequencies V = {η, η², .., η^η}.

Figure 2 helps in the proof - let us use the level notation there. At the Line 9-13 loop of Algorithm 1, the algorithm is at the vertex with index m (or simply, notated with vertex angular frequency, η^m), with the level denoted by index j of Line 6 in the algorithm. In the Line 9-13 loop, it searches for all vertices η^p connected to vertex η^m by allowed edges and adds up the values u1[p] stored in these vertices.

u1[p] (is assumed to and will be proven to) stand for the sum of all j−1-walks that end with vertex p. Therefore, summing up u1[p] for (η^p, η^m) ∈ E stands for the sum of all j−1-walks that can be connected with vertex η^m at the end to form j-walks. Then Line 14 of the algorithm multiplies this sum by the oscillator factor e^{iη^m t}.

2.4.1 The main proof of the x(t) algorithm. Now onto proving that each u1[m] created at Line 17 of Algorithm 1 and level j faithfully captures the angular frequency of every j-walk ending with vertex η^m. The proof goes by induction.

Suppose that up to level j, each u1[p] faithfully captures the angular frequency (and associated signals) of every j-walk w_p that ends with vertex η^p. Then u1[p](t) = Σ_{w_p} e^{iω_{w_p} t}, where ω_{w_p} refers to its angular frequency. Then u2[m] at Line 17 of the algorithm is:

u2[m](t) = ( Σ_{(η^p, η^m) ∈ E} u1[p](t) ) · e^{iη^m t}
         = Σ_{(η^p, η^m) ∈ E} Σ_{w_p} e^{iω_{w_p} t} · e^{iη^m t}
         = Σ_{(η^p, η^m) ∈ E} Σ_{w_p} e^{i(ω_{w_p} + η^m) t}        (1)

Equation (1) demonstrates that to each existing j-walk angular frequency, η^m is added to produce a j+1-walk angular frequency, as demanded for walk angular frequencies. Therefore, the inductive step from level j to level j + 1 is demonstrated. One thus only needs to demonstrate that at level 1, each u1[p] faithfully captures the angular frequency of a 1-walk. This is true by design at Line 4 of the algorithm. Therefore, the proof by induction is complete - x(t) faithfully captures the angular frequency (and associated signals) of every η-walk.

3 LOWPASS WIDE BANDWIDTH FILTERING
3.1 Section introduction
For a conventional lowpass filter setup, if a required cutoff frequency is very low relative to the maximum frequency magnitude of input signal f(t), then too many samples are required for successful lowpass filtering. Let us provide the example setup, which is used throughout the paper.

When f(t) ∈ ℂ is analyzed as f(t) = Σ_ω a_ω e^{iωt} with |ω| ≤ 1, let the required cutoff angular frequency be 1/η^{η+1}. Assume that except for zero frequency, no angular frequency |ω| < 1/η^{η+1} exists that has a non-zero amplitude contribution to f(t). Then conventional filtering requires extremely many samples relative to η, in case we wish to filter away frequencies equal to or greater than the cutoff frequency sufficiently such that, to the nearest integer (for real and imaginary parts), there is zero contribution of these frequencies.

This is avoided by the following idea. First, lowpass filter away angular frequency 1/2 to 1, double the frequency of the resulting filter output and then again lowpass filter away angular frequency 1/2 to 1. This effectively amounts to filtering out angular frequency 1/4 to 1/2. This can be continued such that one filters out angular frequency 1/8 to 1/4, 1/16 to 1/8 and so forth until 1/η^{η+1} to 2/η^{η+1} is reached.
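As an illustrative check of the infeasibility claim (the constants here are ours): the plain N-sample average attenuates e^{iωt} by |sin(Nω/2)/(N sin(ω/2))|, roughly 2/(Nω), so suppressing a frequency near the required cutoff 1/η^(η+1) needs on the order of η^(η+1) samples, exponential in η:

```python
import cmath

def avg_mag(w: float, n: int) -> float:
    """Magnitude of the plain n-sample average of e^{i*w*t}, t = 0..n-1."""
    return abs(sum(cmath.exp(1j * w * t) for t in range(n)) / n)

low_w = 1e-3                      # stand-in for a very low unwanted frequency
m_few = avg_mag(low_w, 100)       # far fewer than 1/w samples
m_many = avg_mag(low_w, 100_000)  # on the order of 1/w samples
```

With too few samples the unwanted tone passes almost unattenuated; only sample counts comparable to 1/ω suppress it, which is exactly what the divide and conquer schedule avoids paying at the final cutoff.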

3.2 A cycle
Each filtering process is considered part of a cycle - for example, (effectively) filtering angular frequency 1/2 to 1 is part of cycle 1, while 1/4 to 1/2 is part of cycle 2, 1/8 to 1/4 is cycle 3 and so forth.

However, filters consume samples, so doubling frequency naively would only return us to exponentially many samples relative to η. Therefore, one needs extrapolation from cycle filter output samples, which then become next cycle filter input samples. This idea is inspired by the point that as long as we overcome the extrapolation issues that the filtered high-frequency parts cause, since high frequency parts are already filtered, one can move onto the next frequency range.

A cycle c, with c starting from 1, first filters cycle input f_c(t) with lowpass filter F_f. This produces filter output g_c(t), with its samples then used for numerical derivative calculation using differentiation (derivative) filter F_{d,μ}, where μ refers to derivative order. After numerical derivatives are calculated at t = T_e, one produces an extrapolated value of g_c(T_e + 1) via a local polynomial construction, which in turn is used along with other samples and past extrapolated values to calculate numerical derivatives of g_c(t) at t = T_e + 1, which are then used to produce an extrapolated value of g_c(T_e + 2) and so forth.

3.3 On extrapolation and numerical derivatives
The traditional polynomial extrapolation strategy uses a fixed polynomial calculated from existing samples. This leaves no correction mechanism and eventually creates costly extrapolation errors.

The local polynomial extrapolation strategy presented in this paper, by contrast, allows for a correction mechanism by allowing extrapolation errors to be treated as (a combination of) sinusoids. This is done by re-computing numerical derivatives (to order N_μ) with each newly extrapolated (single) sample, using numerical differentiation filters for each derivative order. This allows treatment of extrapolation errors as having come from a combination of sinusoids, as far as digital filters admit the frequency response interpretation.
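The correction mechanism can be sketched with a simplified stand-in for the paper's designed differentiation filters: Newton backward differences, re-computed at every step, with each newly extrapolated sample entering the next window. The window length and the test signal below are illustrative assumptions:

```python
import math

def extrapolate_next(samples):
    """One-step Newton backward-difference extrapolation with step size 1:
    g(t+1) = sum over k of the order-k backward difference at time t.
    Exact for polynomials of degree < len(samples)."""
    total, d = 0.0, list(samples)
    while d:
        total += d[-1]                         # order-k backward difference at t
        d = [b - a for a, b in zip(d, d[1:])]  # raise the difference order by one
    return total

# Re-evaluated local extrapolation of a slow sinusoid (illustrative numbers):
window = [math.cos(0.05 * t) for t in range(8)]
errors = []
for t in range(8, 11):
    nxt = extrapolate_next(window[-8:])        # differences re-computed each step
    errors.append(abs(nxt - math.cos(0.05 * t)))
    window.append(nxt)                         # extrapolated value feeds the next step
max_err = max(errors)
```

With a fixed global polynomial, the truncation error is committed once and grows unchecked; re-evaluating the differences at each step keeps the one-step error at the size of the highest-order difference of the signal.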

3.4 Why extrapolation-based filtering is possible
The aforementioned strategies may be questioned for the following reason: given the limited number of pre-extrapolation samples, would not the discrete Fourier transform of these samples suggest that valuable information about low frequency signals is lost?

The answer is that even if we replace f(t) with f_T(t), where f_T(t) is constructed from the discrete Fourier transform of samples of f(t), the process of doubling frequency for digital frequencies introduces new aliasing such that high frequency parts of f_T(t) are mingled with low frequency parts of f_T(t). Therefore, fidelity of f_T(t) is lost anyway in passing through each filtering-extrapolation cycle.
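The aliasing introduced by the frequency-doubling step can be seen directly (illustrative numbers): keeping every other sample maps digital frequency ω to 2ω mod 2π, so a tone just below π lands at a low frequency:

```python
import cmath
import math

w_high = math.pi - 0.1                      # a high digital frequency
x = [cmath.exp(1j * w_high * t) for t in range(200)]
y = x[::2]                                  # every other sample: frequency doubling
w_alias = 2 * w_high - 2 * math.pi          # = -0.2, i.e. a *low* frequency
ref = [cmath.exp(1j * w_alias * t) for t in range(len(y))]
mismatch = max(abs(a - b) for a, b in zip(y, ref))
```

The decimated high tone is numerically indistinguishable from a low-frequency tone, so high and low frequency content mingle after each cycle, as claimed.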

3.5 Additional terminology

For each cycle c, high frequencies refer to post-cutoff frequencies - that is, |ω| > ω_c. Low frequencies refer to pre-cutoff frequencies: |ω| ≤ ω_c. These frequencies are what filter F_f perceives, not 'effective' frequencies scaled to the original f(t).

3.6 On error analysis

For each cycle, the extrapolation processes produce deviations from the actual filter F_f output results - these are called (polynomial) extrapolation errors - though almost. Analysis of polynomial extrapolation errors for a high frequency is a little problematic. First of all, we should really consider entire filtered high frequency signals as errors, since we do not want them. Second, given the initial naive definition of extrapolation errors, heavy errors are reported for high-frequency signals, in contrast to the modified definition, where extremely low errors are reported after having adjusted constants appropriately. Therefore, for each cycle, the post-cutoff frequency parts of the filter F_f output and their extrapolation errors together are considered extrapolation errors.

The next important point is on the use of Parseval's theorem. Without Parseval's theorem, there is some difficulty in producing tight bounds that connect values of samples with frequency amplitudes - which makes frequency analysis difficult. Parseval's theorem therefore is crucial for error analysis.

For extrapolation error analysis, one takes the following strategy. First, at each (extrapolation) sample time t, analysis is done as if samples of the ideal function being extrapolated are available up to time t − 1. Then take the initial extrapolation error function produced and consider its extrapolation error and so forth ('extrapolation error of extrapolation error'). Together with Parseval's theorem, this strategy simplifies extrapolation error analysis significantly.
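The discrete Parseval identity relied on here can be verified directly for the length-N DFT (the signal below is illustrative):

```python
import cmath
import math

# Parseval for the length-N DFT: sum_n |x[n]|^2 = (1/N) * sum_k |X[k]|^2.
N = 64
x = [cmath.exp(1j * 0.3 * n) + 0.5 * math.cos(1.1 * n) for n in range(N)]
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / N
```

This is the bound that lets sample-domain error magnitudes be converted into frequency-amplitude bounds and back.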

3.7 Derivative filter design
Matching the goal of minimizing high-frequency contributions after filtering and extrapolation, digital differentiation filters F_{d,μ} are designed so as to satisfy frequency response |F_{d,μ}(ω)| < |F_{d,μ}(ω_c)| for |ω| > ω_c. The choice of cutoff angular frequency ω_c is briefly explained below.
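The frequency response interpretation behind this design criterion can be made concrete with the simplest centered differentiator (an illustrative stand-in, not the paper's designed filters):

```python
import cmath

# Centered first-difference differentiator: y[t] = (x[t+1] - x[t-1]) / 2.
taps = {-1: -0.5, 1: 0.5}
w = 0.05                        # a low (pre-cutoff) frequency

# Acting on x(t) = e^{i*w*t}, an FIR filter returns H(w) * e^{i*w*t}, with
# H(w) = sum_m taps[m] * e^{i*w*m}; here H(w) = i*sin(w) ~ i*w for small w,
# i.e. approximately the exact derivative factor of e^{i*w*t}.
H = sum(c * cmath.exp(1j * w * m) for m, c in taps.items())
t = 12.0
filtered = sum(c * cmath.exp(1j * w * (t + m)) for m, c in taps.items())
response_check = abs(filtered - H * cmath.exp(1j * w * t))
deriv_err = abs(H - 1j * w)     # ~ w**3 / 6 for this stand-in
```

Any linear-phase differentiator admits the same reading: its entire effect on a sinusoid is a single complex gain H(ω), which is why extrapolation errors can later be analyzed purely in terms of angular frequency.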

3.8 On what remains
The rest of this paper is then about tuning constants (labelled with k's, with different subscripts) so that the right balance is achieved and only polynomially many samples are required relative to η - for this, see subsections 3.11 and 3.14. The demonstration of the claim depends on the three conjectures of Equation (2), Equation (3) and Equation (6), which state the minimal required filter order bound, the filter coefficient magnitude upper bound and the filter coefficient time complexity bound, and which are left unproved.

For the choice of the constants, a brief summary may be provided: k_c = 1 (which by definition gives us 'cutoff angular frequency' ω_c = 1/η) is determined by Equation (30) when one considers high-frequency and low-frequency extrapolation errors together; k_μ = 2 (which by definition gives us N_μ = η², the maximum derivative order considered in numerical differentiation) is determined mainly by the low-frequency error equation of Equation (22) (considering |ω| < ω_c); k_a (with 1/η^{η^{k_a}} providing the minimum magnitude attenuation ratio at ω_c) is determined by Equation (32) at the end of the high-frequency and low-frequency extrapolation consolidation analysis. The rest of the k's are then set by the aforementioned k's so that non-extrapolation errors do not intervene in the extrapolation error analysis, which was carried out assuming non-extrapolation errors do not exist.

3.9 Overview of the divide and conquer strategy
The divide and conquer strategy is described in Algorithm 2. The general idea goes as follows. Each 'cycle' c starts with input f_c(t) and ends with the generation of filter output samples that become next cycle input f_{c+1}(t). Input f_c(t) is lowpass-filtered by filter F_f (Line 8 to 13 in Algorithm 2 - furthermore, f_1(t) ≡ f(t)). This produces filter output g_c(t) from t = N_f to t = N_f + N_d with sample interval Δt = 1. These samples are used to produce numerical derivative data of g_c(t) at t = N_f + N_d, with derivative orders computed to N_μ - that is, numerical computation of g_c^{(k)}(t), from k = 1 to k = N_μ, where (k) refers to numerical derivative order, not exponentiation, at t = N_f + N_d is conducted. This is used to extrapolate g_c(N_f + N_d + 1) using a local polynomial constructed from numerical derivatives. This numerically computed g_c(N_f + N_d + 1) is then treated as if it were the actual g_c(N_f + N_d + 1) - it is used again to obtain numerical derivatives of g_c(t) down to order N_μ at t = N_f + N_d + 1, and these derivatives are used to construct a local polynomial that is used to obtain extrapolated g_c(t) at t = N_f + N_d + 2 and so forth (Line 14 to Line 25). Then we use the computed and numerically extrapolated g_c(t) to construct next cycle input f_{c+1}(t) after doubling angular frequency and adjusting the time frame (Line 27 of Algorithm 2). There are η² cycles - ranging from the first cycle (c = 1, or r = 1 in terms of Algorithm 2) to the final η²-th cycle (c = η², or r = η²).

3.9.1 Local polynomial extrapolation. The idea behind the local extrapolation is this: the usual polynomial interpolation / extrapolation usually accumulates errors rapidly without any taming factors. Why is this so?
This is because each polynomial coefficient essentially encodes derivative information, and all derivative information is numerically determined at the single time the interpolating polynomial is taken from samples. Because there is no re-evaluation of derivative information, errors simply accumulate.

By contrast, the local extrapolation approach re-evaluates derivatives at each sample (discrete) time. Given that the extrapolation interval equals the sample interval - where the extrapolation interval refers to how far a sample at t′ is extrapolated using numerical derivative information at t, with t′ − t = Δt = 1 - this allows for protection against extrapolation error explosion. Furthermore, the extrapolation error in each sample can now be treated as if it comes from an imaginary sinusoid function. This allows numerical derivative filter analysis fully in terms of angular frequency.

We can then think of the initial extrapolation error at time t′ as the extrapolation error arising out of ideal (without numerical errors) samples of g_c(t) from t = t′ − N_d − 1 to t = t′ − 1. Then we treat this initial extrapolation error function as a sinusoid function via discrete Fourier transform from samples ranging from t = N_f + N_d + 1 to t = 2(N_f + N_d) + N_f − 2. This initial extrapolation error function generates its own extrapolation error function via discrete Fourier transform from error samples ranging from t = N_f + N_d + 2 to t = 2(N_f + N_d) + N_f − 2 and so forth.

3.9.2 Derivative filter. It is well-known that numerical derivative filters have problems with a high-frequency signal, while they can be designed so as to achieve high numerical derivative accuracy for a low-frequency signal. This issue is evaded by designing numerical derivative filters such that the absolute numerical derivative result for a high-frequency signal is low.
This means high numerical derivative inaccuracies, but we do not really care about the fidelity of a high-frequency signal, given that we want to filter high-frequency signals out. All we want is to minimize, as far as possible, the contributions of a high-frequency signal.

3.9.3 Extrapolation error analysis strategy. As will be seen, the sub-theme and strategy in extrapolation error analysis is summing up all 'extrapolation errors of extrapolation errors,' along with Parseval's theorem machineries.

3.10 Number of filtering-extrapolation cycles
First, about the number of cycles. As stated before, the idea is to implement the 'divide and conquer' strategy by filtering out 1/2 of 'active' frequencies of the original f(t). (That is, we filter out angular frequency 1/2 to 1, then 1/4 to 1/2, then 1/8 to 1/4 and so forth until 1/η^{η+1} to 2/η^{η+1} is reached.) Since 2^{η²} > η^{η+1}, assuming η is large enough, η² cycles are sufficient to filter out all meaningful non-zero frequencies of f(t).

This is implemented by doubling the frequency of cycle output p_{c−1}(t) of cycle c − 1 and then re-labelling it as cycle input f_c(t), which is input to filter F_f at cycle c.

Algorithm 2: overview of the divide-and-conquer strategy

input : f(t); coefficient array h1 of digital filter F_f of length N_f, indexed from 1 to N_f; coefficient arrays h2,μ of derivative filters F_{d,μ} for derivative order μ, with length N_d + 1, indexed from 1 to N_d + 1, and μ ranging from 1 to N_μ; values of N_f, N_d, N_μ
output: a, zero-frequency amplitude
1  Initialize array fc of length N_f + N_d, index starting from 1;
2  Initialize array gc of length 2(N_f + N_d) − 1, index starting from N_f to 2(N_f + N_d) + N_f − 2;
3  Initialize array d of length N_μ, index starting from 1;
4  for j ← 1 to N_f + N_d do
5      fc[j] ← f(j);
6  end
   /* For each filtering-extrapolation cycle, with a total of η² cycles */
7  for r ← 1 to η² do
       /* Lowpass filtering */
8      for j ← N_f to N_f + N_d do
9          gc[j] ← 0;
10         for m ← 1 to N_f do
11             gc[j] ← gc[j] + h1[m] fc[j − N_f + m];
12         end
13     end
       /* Numerical derivative computation down to order N_μ for each time t and extrapolation to t + 1 */
14     for j ← N_f + N_d to 2(N_f + N_d) + N_f − 3 do
           /* Numerical derivative computation down to order N_μ for time t */
15         for μ ← 1 to N_μ do
16             d[μ] ← 0;
17             for m ← 1 to N_d + 1 do
18                 d[μ] ← d[μ] + h2,μ[m] gc[j − N_d + m − 1];
19             end
20         end
           /* Polynomial extrapolation to t + 1 */
21         gc[j + 1] ← gc[j];
22         for μ ← 1 to N_μ do
23             gc[j + 1] ← gc[j + 1] + d[μ]/μ!;
24         end
25     end
       /* Translating cycle output as new cycle input, after doubling frequency: we need new N_f + N_d samples with Δt = 1 in the new frequency/time scale. */
26     for j ← 1 to (N_f + N_d) do
27         fc[j] ← gc[N_f + 2(j − 1)];
28     end
29 end
   /* Final computation of a, zero-frequency amplitude, to nearest integer */
30 round gc[2(N_f + N_d) + N_f − 2] to the nearest integer: this becomes a;

Some remarks about filter output p_c(t). In Algorithm 2, there is no mention of p_c(t). This is because p_c(t) is simply g_c(t) up to t = N_f + N_d (with the implicit assumption that f_c(t) starts from t = 1 with Δt = 1), and is numerically extrapolated for t = N_f + N_d + 1 to t = 2(N_d + N_f) + N_f − 2 from available g_c(t). In Algorithm 2, this was simply labelled together as g_c(t) for expositional convenience, but it is better to distinguish the actual g_c(t) from the numerical version p_c(t).

In the algorithm, c was not used as an index - instead, r was used as the cycle index. In the main text, c would indeed be used as the cycle index. Thus, f_1(t) refers to the cycle input of the first cycle, f_2(t) refers to the cycle input of the second cycle and so forth. f_1(t) = f(t) by the design of the algorithm.
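The filter-then-decimate shape of one cycle (Lines 8-13 and 26-28 of Algorithm 2) can be sketched with stand-in components: a plain moving average in place of the designed lowpass filter, and the extrapolation stage omitted. All constants here are illustrative:

```python
import cmath

L = 32
h1 = [1.0 / L] * L                      # illustrative lowpass taps, not the paper's filter

def one_cycle(samples):
    """Lowpass-filter the cycle input, then keep every other output sample,
    which doubles every surviving digital frequency for the next cycle."""
    filtered = [sum(h1[m] * samples[t - m] for m in range(L))
                for t in range(L - 1, len(samples))]
    return filtered[::2]

low = [cmath.exp(1j * 0.05 * t) for t in range(200)]   # pre-cutoff tone
high = [cmath.exp(1j * 2.5 * t) for t in range(200)]   # post-cutoff tone
low_kept = max(abs(v) for v in one_cycle(low))
high_left = max(abs(v) for v in one_cycle(high))
```

The pre-cutoff tone survives nearly intact (now at doubled digital frequency) while the post-cutoff tone is strongly attenuated; Algorithm 2 repeats this η² times with designed filters, using the extrapolation stage to replenish the samples each filtering pass consumes.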

3.11 Filter design specifications
We start by specifying filter design parameters, which aid in understanding the filter requirements behind Algorithm 2.

For filter F_f:
• 'Cutoff angular frequency' ω_c = 1/η^{k_c}: the cutoff angular frequency is assumed to have post-cutoff minimum attenuation a_m in filter F_f.
• Post-cutoff minimum attenuation a_m = 1/η^{η^{k_a}}: a signal of angular frequency |ω| ≥ ω_c is attenuated by at least a_m of the original amplitude by filter F_f. For example, if the original signal is e^{iωt} with |ω| ≥ ω_c, then the F_f filter output is, in maximum magnitude, less than or equal to a_m.
• The requirement that the lowpass filter has no overshoot in the lowpass region. That is, frequency response |F(ω)| < 1 for |ω| ≠ 0. Furthermore, F(ω) = 1 for ω = 0.

For derivative filters F_{d,μ}:
• Subscript μ refers to derivative order. For example, taking numerical differentiation of f(t) with derivative order μ = 2 means numerically calculating f′′(t).
• 'Cutoff angular frequency' ω_c = 1/η^{k_c}: ω_c is shared with filter F_f. The meaning of cutoff angular frequency, however, changes. For derivative filters, up to angular frequency ω_c, the derivative of e^{iωt} is very accurate, up to maximum error magnitude e_d. For |ω| > ω_c, the frequency response satisfies |F_{d,μ}(ω)| < |F_{d,μ}(ω_c)|.
• Maximum error magnitude when obtaining the numerical derivative of e^{iωt} for |ω| ≤ ω_c: e_d = 1/η^{η^{k_d}}: the deviation magnitude from the actual derivative (of any derivative order μ) is bounded above by e_d.

It will be assumed, for further convenience, that $k_a = k_d$. The reason behind the particular form of $\omega_c$ will be explained when extrapolation error issues are discussed, along with the choices for the $k$ constants. The required filter order(s) for $\mathcal{F}_f$ and $\mathcal{F}_{d,\mu}$ are then stated as:

$$N_f, N_d \le \eta^{k_a k_{aa} + k_c k_{ca}} \qquad (2)$$

where $k_{aa}$ and $k_{ca}$ are other constants. In this paper this bound is left unproved, deferred to the subsequent paper, along with the question of proving polynomial time complexity of obtaining filter coefficients. It is further assumed that filter coefficients are bounded as:

$$|h[i]| \le 2^{\eta^{k_a k_{ab} + k_c k_{cb}}} \qquad (3)$$

where $h[i]$ refers to any filter coefficient $h_{2,\mu}[m]$ or $h_1[m]$ in Algorithm 2. The maximum number $i_d$ of integer (non-fractional) digits possible for any number variable used in Algorithms 1 and 2 then is, via Line 11 and Line 18 of Algorithm 2:

$$i_d \le 4\eta^{k_a k_{ab} + k_c k_{cb}} \qquad (4)$$

Then,

$$k_{bai} = k_{bmi} = 2(k_a k_{ab} + k_c k_{cb}) \qquad (5)$$

$\eta^{k_{bai}}$ refers to the number of integer binary digits that addition computations keep. $\eta^{k_{bmi}}$ refers to the number of integer binary digits that multiplication computations keep. The time complexity of obtaining each filter coefficient is assumed to be bounded as:

$$O\!\left(\eta^{k_a k_{ac} + k_c k_{cc} + k_p k_{pc}}\right) \qquad (6)$$

where $1/\eta^{\eta^{k_p}}$ refers to the maximum filter coefficient error magnitude tolerated. $k_{ffc}$ is then defined as:

$$k_{ffc} \equiv k_a k_{ac} + k_c k_{cc} + k_p k_{pc} \qquad (7)$$

Note that $k_{aa}$, $k_{ca}$, $k_{ab}$, $k_{cb}$, $k_{ac}$, $k_{cc}$ and $k_{pc}$ are all undetermined constants.

3.12 General error analysis

3.12.1 What is error? We need to define what counts as error when considering extrapolation or filter outputs. Things are initially straightforward. One can first consider the various sources of errors: addition computation errors, multiplication computation errors, filter coefficient computation errors and extrapolation errors. They all have clear ideal references. However, our aim is to compute $a \equiv n_h$, the zero-frequency amplitude rounded to the nearest integer, which is the number of Hamiltonian paths. Therefore, filtered high-frequency signals, demarcated by the cutoff frequency $\omega_c$, should also be considered errors, since our aim is that all angular frequencies except zero are filtered out.

3.12.2 General error analysis, continued. Any type of error induced by numerical approaches can be thought of as generated by sinusoids. Why this is possible is given by discrete Fourier transform analysis.

• First, there are filter $\mathcal{F}_f$ output errors, caused by a combination of multiplication and addition computation errors and filter coefficient computation errors. Lines 11, 18 and 23 in Algorithm 2 are the sources of these errors.
• The next type of error is filter $\mathcal{F}_{d,\mu}$ errors in computing numerical derivatives of order $\mu$. But error analysis for the numerical derivative of a high-frequency ($|\omega| > \omega_c$) signal has to be thought of differently: it is not divergence from the actual derivative that matters but the computed numerical derivatives themselves. We intend to make high-frequency numerical derivatives as small as possible without sacrificing faithful polynomial extrapolation capacities. Numerical derivative errors themselves do not matter until polynomial extrapolation is conducted using these numerical derivatives.
• Basically, the point is that we want none of the high-frequency components; thus high-frequency components are themselves considered errors.
• Numerical derivative computation errors only become relevant when polynomial extrapolation is carried out in Line 23 of Algorithm 2. (Numerical differentiation itself is carried out in Line 18 of Algorithm 2.)
• Even when there is no numerical differentiation error, the fact that we use polynomials to extrapolate means there is an inherent extrapolation error governed by the polynomial order $N_\mu$, relevant for Line 23 of Algorithm 2.

• Initial errors in $f_c(t)$ for $c \ge 2$. At each cycle, errors accumulated from the previous cycle $c-1$ become the errors in $f_c(t)$. They are treated as a new sinusoidal input contribution in error analysis.
• Then there is error in computing $f(t) \equiv f_1(t)$. Lines 4 and 14 in Algorithm 1 contain the sources of sinusoidal ($e^{i\omega t}$) calculation errors. Lines 11 and 21 contain the sources of addition errors. Line 14 contains the source of multiplication errors.

It will initially be assumed that the sole type of error is polynomial extrapolation error, which is induced by numerical derivative error that becomes polynomial coefficient error, and also by inherent finite-degree polynomial extrapolation error. The remaining types of errors are revisited later. Note that filter output errors become next-cycle input errors.

3.12.3 Error analysis strategy.

• The key functions to be used are $\theta_c(t)$, $g_{\theta_c}(t)$, $g'_{\theta_c}(t)$, $w_{\theta_c,j}(t)$ and $w'_{\theta_c,j}(t)$. The subscript $c$ refers to the fact that error analysis is done per one cycle.
• $\theta_c(t)$ stands for the cycle input - or the filter input to filter $\mathcal{F}_f$ at cycle $c$.
• $g'_{\theta_c}(t)$ is the function to be extrapolated, originating as the filter output when the filter input is $\theta_c(t)$ under filter $\mathcal{F}_f$.
• Let $g'_{\theta_c}(t) = \gamma_{\theta_c}(t) + \psi'_{\theta_c}(t)$, where $\gamma_{\theta_c}(t)$ is the low-frequency part of $g'_{\theta_c}(t)$ and $\psi'_{\theta_c}(t)$ is the high-frequency part of $g'_{\theta_c}(t)$, demarcated by angular frequency $\omega_c$. Note that the angular frequency decomposition of $g'_{\theta_c}(t)$ comes from the full continuous Fourier transform from $t = -\infty$ to $t = \infty$.
• Let $g_{\theta_c}(t) = \gamma_{\theta_c}(t) + \psi_{\theta_c}(t)$.
• $\psi_{\theta_c}(t) = \psi'_{\theta_c}(t)$ for positive integer $t < z-1$ and $\psi_{\theta_c}(t) = \psi'_{\theta_c}(z-1)$ for positive integer $t \ge z-1$, where $z = N_f + N_d + 1$. While $\psi_{\theta_c}(t)$ is incompletely defined, this is not a problem, since we treat $\psi_{\theta_c}(t)$ as if it were a periodic signal of period $z_c$, given by the discrete Fourier transform from samples ranging from $t = 1$ to $t = z_c$.
• The change from $g'_{\theta_c}(t)$ to $g_{\theta_c}(t)$ reflects the view that high-frequency signals are to be treated as errors. Furthermore, since samples of any function are only available from $t = 1$ to $t = N_f + N_d$, there is freedom as to what function really is being extrapolated. For low-frequency signals this freedom cannot be exploited, since we need fidelity of these signals to extract $a$, the zero-frequency amplitude of $f(t)$. By contrast, high-frequency signals are filtered ones and we do not need them. The essence is that extrapolating a function at time $t$ starts from the value of the function at time $t-1$ (this is why $\psi_{\theta_c}(t) = \psi'_{\theta_c}(z-1)$), and we add to this value using numerical derivatives computed at time $t-1$, which then becomes the extrapolation result. This point becomes clear when defining $\psi_{\theta_c,j}(t)$ and especially $\chi_{\theta_c,j}(t)$, which roughly acts as the additional numerical extrapolation contribution to $\psi'_{\theta_c,j-1}(j-1)$ when extrapolating $\psi'_{\theta_c,j-1}(t)$ at $t = j$, with $j \ge z+1$. The full definition of $\chi_{\theta_c,j}(t)$ is given by Equation (16), which roughly says that $\chi_{\theta_c,j}(t)$ is the additional extrapolation contribution to $\psi_{\theta_c,j-1}(t-1)$ for positive integer $t \ge j$, assuming samples of $\psi_{\theta_c,j-1}(t)$ are available from time 1 to $t-1$ without inaccuracies. Given that $\chi_{\theta_c,j}(t)$ itself has to be 'extrapolated', one does not have to worry so much about $t > j$, except to note that this is a convenient trick for error analysis.
• Let us think of the initial extrapolation error at time $t$ when extrapolating $g_{\theta_c}(t)$, assuming samples of $g_{\theta_c}(t)$ from time 1 to $t-1$ are available without inaccuracies. This allows us to formulate the function $w'_{\theta_c,z}(t)$, where $z = N_f + N_d + 1$ refers to the starting point of non-zero extrapolation errors.

However, the high-frequency part of $g_{\theta_c}(t)$ produces large extrapolation errors, despite actual extrapolation results not being so. Therefore it is better to think of the high-frequency part itself as if it were error. This means forming $w'_{\theta_c,z}(t)$ from $g_{\theta_c}(t)$ as, with $z$ defined as $z = N_f + N_d + 1$:

$$w'_{\theta_c,z}(t) = \zeta_{\theta_c,z}(t) + \chi_{\theta_c,z}(t) = \gamma_{\theta_c,z}(t) + \psi'_{\theta_c,z}(t) \qquad (8)$$

where $\zeta_{\theta_c,z}(t)$ refers to the extrapolation error for $\gamma_{\theta_c}(t)$ assuming that samples of $\gamma_{\theta_c}(t)$ are available from time 1 to $t-1$ without inaccuracies, and $\chi_{\theta_c,z}(t)$ contains $\psi_{\theta_c}(t)$ plus the additional extrapolation output when extrapolating $\psi_{\theta_c}(t)$, assuming samples of $\psi_{\theta_c}(t)$ are available from time 1 to $t-1$ without inaccuracies. $\gamma_{\theta_c,z}(t)$ is the low-frequency part of $w'_{\theta_c,z}(t)$ and $\psi'_{\theta_c,z}(t)$ is the high-frequency part of $w'_{\theta_c,z}(t)$, demarcated by angular frequency $\omega_c$. Note that the Fourier decomposition is given by the discrete Fourier transform using samples from $t = 1$ to $t = z_c$.

If $\gamma_{\theta_c}(t)$ is given by $\sum_\omega a_\omega e^{i\omega t}$, then for positive integer $t \ge z$ (otherwise, for positive integer $t < z$, $\zeta_{\theta_c,z}(t) = 0$):

$$\zeta_{\theta_c,z}(t) = \sum_\omega a_\omega e^{i\omega(t-1)} \left( \sum_{k=1}^{N_\mu} \frac{e_{d,k,\omega}}{k!} + (i\omega)^{N_\mu+1} e^{i\omega t_b} \frac{1}{(N_\mu+1)!} \right) \qquad (9)$$

where $e_{d,k,\omega}$ refers to the error in computing the numerical derivative of order $k$ for $e^{i\omega t}$ at $t = 0$, and $0 \le t_b \le 1$. The term $\sum_\omega a_\omega e^{i\omega(t-1)} \sum_{k=1}^{N_\mu} e_{d,k,\omega}/k!$ refers to the errors induced by errors in polynomial coefficients when extrapolating $\sum_\omega a_\omega e^{i\omega t}$, which in turn are about errors in computing numerical derivatives. The term $\sum_\omega a_\omega e^{i\omega(t-1)} (i\omega)^{N_\mu+1} e^{i\omega t_b} \frac{1}{(N_\mu+1)!}$ refers to the inherent polynomial extrapolation error, given by Taylor's theorem.

If $\psi_{\theta_c}(t)$ is given by $\sum_\omega a_\omega e^{i\omega t}$, then for positive integer $t \ge z$ (otherwise, for positive integer $t < z$, $\chi_{\theta_c,z}(t) = 0$):

$$\chi_{\theta_c,z}(t) = \sum_\omega a_\omega e^{i\omega(t-1)} + \sum_\omega a_\omega e^{i\omega(t-1)} \sum_{k=1}^{N_\mu} \frac{d_{n,k,\omega}}{k!} \qquad (10)$$

where $d_{n,k,\omega}$ is the numerical derivative of $e^{i\omega t}$ of order $k$ at $t = 0$.

Let $\psi_{\theta_c,z}(t)$ be formed from $\psi'_{\theta_c,z}(t)$ by:

$$\psi_{\theta_c,z}(t) = \psi'_{\theta_c,z}(t) \qquad (11)$$

for positive integer $t < z$ and

$$\psi_{\theta_c,z}(t) = \psi'_{\theta_c,z}(z) \qquad (12)$$

for positive integer $t \ge z$. This change from $\psi'_{\theta_c,z}$ to $\psi_{\theta_c,z}$ does not pose any issue in that, given the non-existence of samples of any function from $t = z$, we can choose arbitrary values as the 'right' result for $t = z$ to $t = 2(N_f + N_d) + N_f - 2$.

Furthermore, the form of Equation (12) helps in that it suggests we are really counting the additional contributions to the extrapolation result for high-frequency signals when moving

from $t-1$ to $t$, not usual extrapolation errors. That is, $\psi_{\theta_c,z}$ really just keeps (or maintains) the initial extrapolation contribution at time $z$, which is simply $\psi'_{\theta_c,z}(z)$, for positive integer $t \ge z$.

In short, the aforementioned functional form allows invoking Equation (25) when conducting 'high-frequency' error analysis calculations. Let us now define $w_{\theta_c,z}(t)$:

$$w_{\theta_c,z}(t) = \gamma_{\theta_c,z}(t) + \psi_{\theta_c,z}(t) \qquad (13)$$

Then $\psi_{\theta_c,j}(t)$ for $j > z$ is defined in a general sense: $\psi_{\theta_c,j}(t)$ maintains the 'additional' extrapolation contribution at time $j$ for positive integer $j > z$. That is,

$$\psi_{\theta_c,j}(t) = \psi'_{\theta_c,j}(j) \qquad (14)$$

for positive integer $t \ge j$. For other positive integer $t < j$, $\psi_{\theta_c,j}(t) = \psi'_{\theta_c,j}(t)$. But $\psi'_{\theta_c,j}(t)$ for $j > z$ has not been defined yet; this is provided later. Let $w'_{\theta_c,j}(t)$ for positive integer $j > z$ satisfy:

$$w'_{\theta_c,j}(t) = \zeta_{\theta_c,j}(t) + \chi_{\theta_c,j}(t) = \gamma_{\theta_c,j}(t) + \psi'_{\theta_c,j}(t) \qquad (15)$$

where $\zeta_{\theta_c,j}(t)$ is the extrapolation error for extrapolating $\gamma_{\theta_c,j-1}(t)$ (defined below), assuming samples of $\gamma_{\theta_c,j-1}$ are available without inaccuracies from time 1 to time $t-1$. ($\zeta_{\theta_c,j}(t) = 0$ for positive integer $t < j$.) $\gamma_{\theta_c,j}(t)$ is defined as the low-frequency components of $w'_{\theta_c,j}(t)$, demarcated by $|\omega| \le \omega_c$, with $\psi'_{\theta_c,j}(t)$ defined as the high-frequency components of $w'_{\theta_c,j}(t)$.

$\chi_{\theta_c,j}(t)$ is defined for positive integer $t \ge j$ (and $j > z$) as:

$$\chi_{\theta_c,j}(t) = \psi_{n,\theta_c,j-1}(t) - \psi_{\theta_c,j-1}(t-1) \qquad (16)$$

where $\psi_{n,\theta_c,j-1}(t)$ refers to the extrapolation result (not error) of extrapolating $\psi_{\theta_c,j-1}(t)$ using samples of $\psi_{\theta_c,j-1}$ from time 1 to time $t-1$ that are assumed to be available without inaccuracies. Otherwise, for positive integer $t$,

$$\chi_{\theta_c,j}(t) = 0 \qquad (17)$$

This completes defining $\chi_{\theta_c,j}(t)$ as the additional extrapolation contribution induced at time $j$ when extrapolating $\psi_{\theta_c,j-1}(t)$.

Now let us refer back to Equation (15). $\gamma_{\theta_c,j}(t)$ is defined as the low-frequency part of $w'_{\theta_c,j}(t)$, along with $\psi'_{\theta_c,j}(t)$ defined as the high-frequency part of $w'_{\theta_c,j}(t)$, demarcated by $|\omega| > \omega_c$ (for low frequency, $|\omega| \le \omega_c$). Then we define $w_{\theta_c,j}(t)$ for $j > z$ by:

$$w_{\theta_c,j}(t) = \gamma_{\theta_c,j}(t) + \psi_{\theta_c,j}(t) \qquad (18)$$

with $\psi_{\theta_c,j}(t)$ satisfying:

$$\psi_{\theta_c,j}(t) = \psi'_{\theta_c,j}(j) \qquad (19)$$

for positive integer $t \ge j$ and

$$\psi_{\theta_c,j}(t) = \psi'_{\theta_c,j}(t) \qquad (20)$$

for positive integer $t < j$. This completes the setup required for error analysis calculations.

• The total extrapolation 'error' at the end of cycle $c$, provided filter input $\theta_c(t)$ at cycle $c$ and accounting for the high-frequency modification, is then given by $\sum_{j=z}^{z_c} w_{\theta_c,j}(t)$.

3.12.4 'Low-frequency' extrapolation error. Let us examine $w_{\theta_c,m}(t)$ for positive integer $m \ge z$. Given the discrete Fourier transform decomposition such that

$$\gamma_{\theta_c,m}(t) = \sum_\omega a_\omega e^{i\omega t} \qquad (21)$$

the following holds for positive integer $t \ge m+1$:

$$\zeta_{\theta_c,m+1}(t) = \sum_\omega a_\omega e^{i\omega(t-1)} \left( \sum_{k=1}^{N_\mu} \frac{e_{d,k,\omega}}{k!} + (i\omega)^{N_\mu+1} e^{i\omega t_b} \frac{1}{(N_\mu+1)!} \right) \qquad (22)$$

where $e_{d,k,\omega}$ refers to the error in computing the numerical derivative of order $k$ for $e^{i\omega t}$ at $t = 0$, and $0 \le t_b \le 1$. Let us examine Equation (22). The term $\sum_\omega a_\omega e^{i\omega(t-1)} \sum_{k=1}^{N_\mu} e_{d,k,\omega}/k!$ refers to the errors induced by errors in polynomial coefficients when extrapolating $\sum_\omega a_\omega e^{i\omega t}$, which in turn are about errors in computing numerical derivatives. The term $\sum_\omega a_\omega e^{i\omega(t-1)} (i\omega)^{N_\mu+1} e^{i\omega t_b} \frac{1}{(N_\mu+1)!}$ refers to the inherent polynomial extrapolation error, given by Taylor's theorem. We now see that as long as $e_{d,k,\omega}$ is small enough and $N_\mu$ is large enough, the low-frequency extrapolation error can be ignored - as long as the high-frequency contribution of $\zeta_{\theta_c,m+1}(t)$ to $w_{\theta_c,m+2}(t)$ is small enough. This is confirmed in the following discussion. As aforementioned, let $e_{d,k,\omega}$ be bounded as, applying only for $|\omega| \le \omega_c$:

$$|e_{d,k,\omega}| \le \frac{1}{\eta^{\eta^{k_d}}} \qquad (23)$$

where $e_d = 1/\eta^{\eta^{k_d}}$ as a convenient shortcut.

3.12.5 'High-frequency' extrapolation error. Given the high-frequency signal $\psi'_{\theta_c,m}(t)$, $\psi_{\theta_c,m}(t)$ is analyzed with the discrete Fourier transform, with transform-relevant samples ranging from $t = 1$ to $t = z_c$, with $m \ge z$:

$$\psi_{\theta_c,m}(t) = \sum_\omega a_\omega e^{i\omega t} \qquad (24)$$

Then the following holds for positive integer $t \ge m+1$:

$$\chi_{\theta_c,m+1}(t) = \sum_\omega a_\omega e^{i\omega(t-1)} \sum_{k=1}^{N_\mu} \frac{d_{n,k,\omega}}{k!} \qquad (25)$$

where $d_{n,k,\omega}$ is the numerical derivative of $e^{i\omega t}$ of order $k$ at $t = 0$. $d_{n,k,\omega}$ is bounded by:

$$|d_{n,k,\omega}| \le \omega_c^k \qquad (26)$$

which follows from the derivative filter requirement that the frequency response satisfy $|\mathcal{F}_{d,\mu}(\omega)| < |\mathcal{F}_{d,\mu}(\omega_c)|$ for $|\omega| \ne \omega_c$.
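The inherent Taylor term can be checked directly: a one-step extrapolation of $e^{i\omega t}$ from $t-1$ using exact derivatives up to order $N_\mu$ misses the true value by at most $\omega^{N_\mu+1}/(N_\mu+1)!$. A minimal sketch with illustrative values:

```python
import numpy as np
from math import factorial

omega, N_mu, t = 0.1, 4, 7.0
exact = np.exp(1j * omega * t)
base = np.exp(1j * omega * (t - 1))          # value at t-1
# Taylor step: sum_{k=0}^{N_mu} (d^k/dt^k e^{i*omega*t})|_{t-1} / k!
extrap = sum((1j * omega) ** k / factorial(k) * base for k in range(N_mu + 1))
inherent = abs(omega) ** (N_mu + 1) / factorial(N_mu + 1)
assert abs(extrap - exact) <= inherent + 1e-15   # Taylor remainder bound
```

With $\omega = 0.1$ and $N_\mu = 4$, the bound is about $8.3 \times 10^{-8}$, illustrating why small $\omega_c$ and large $N_\mu$ suppress the low-frequency extrapolation error.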

We intend to minimize the magnitude of $\chi_{\theta_c,m+1}(t)$. That is, we want $|\chi_{\theta_c,m+1}(t)| \ll 1$.

3.12.6 Parseval's theorem. We will invoke Parseval's theorem heavily in the following error analysis. Parseval's theorem states that:

$$\frac{1}{N}\sum_{t=1}^{N} |u(t)|^2 = \sum_\omega |U(\omega)|^2 \qquad (27)$$

where $N$ represents the total number of samples assumed to be available.
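In the normalization of Equation (27), with the amplitudes taken as $U = \mathrm{FFT}(u)/N$, the identity can be verified numerically (a quick check, not part of the algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
U = np.fft.fft(u) / N                    # amplitudes a_omega in Eq. (27)'s convention
lhs = np.sum(np.abs(u) ** 2) / N
rhs = np.sum(np.abs(U) ** 2)
assert abs(lhs - rhs) < 1e-10 * lhs      # Parseval: time and frequency energies match
```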

3.12.7 Consolidating both 'high-frequency' and 'low-frequency' errors. Let us consolidate both high-frequency and low-frequency extrapolation errors. From how $w_{\theta_c,j}(t)$ is defined, the following is the magnitude upper bound on $w_{\theta_c,j+1}(t)$ for sufficiently large $\eta$:

$$\max\left[\sum_{t=1}^{z_c} |w_{\theta_c,j+1}(t)|^2\right] < (2\omega_c)^2 \max\left[\sum_{t=1}^{z_c} |w_{\theta_c,j}(t)|^2\right] \qquad (28)$$

where $z_c = 2(N_f + N_d) + N_f - 2$. Equation (28) comes from the fact that $d_{n,k,\omega}$ in Equation (25) is bounded by Equation (26). Therefore, we make the following observation. Let $\theta_c(t)$ be decomposed via Fourier transform as:

$$\theta_c(t) = \sum_\omega a_\omega e^{i\omega t} \qquad (29)$$

with $\sum_\omega |a_\omega| \le \eta^\eta$. Then,

$$\frac{1}{z_c}\sum_{t=1}^{z_c} |w_{\theta_c,z}(t)|^2 \le \sum_{|\omega| \le \omega_c} \left(\frac{2|a_\omega|}{\eta^{\eta^{k_a}}}\right)^2 + \sum_{|\omega| > \omega_c} \left(\frac{(1+2\omega_c)|a_\omega|}{\eta^{\eta^{k_a}}}\right)^2 \qquad (30)$$

where $z = N_f + N_d + 1$ and $z_c = 2(N_f + N_d) + N_f - 2$, with Parseval's theorem implicitly invoked. First, $a_\omega e^{i\omega t}$ in $\theta_c(t)$ for angular frequency $\omega > \omega_c$ decays in magnitude to $|a_\omega|/\eta^{\eta^{k_a}}$ in $g_{\theta_c}(t)$. The contribution to $w_{\theta_c,m}$ induced by this high-frequency ($|\omega| > \omega_c$) part of $g_{\theta_c}$ is the high-frequency part plus the additional error contribution given by Equation (25). This gives the upper bound in magnitude of the total contribution as $(1+2\omega_c)|a_\omega|/\eta^{\eta^{k_a}}$. Furthermore, for $a_\omega e^{i\omega t}$ in $\theta_c(t)$ with angular frequency $\omega \le \omega_c$, Equation (23) along with Equation (22) suggests that its contribution to $w_{\theta_c,z}$ is upper bounded in magnitude by $2|a_\omega|/\eta^{\eta^{k_a}}$, given that $k_a = k_d$, as assumed. As long as $\omega_c \ll 1$ (by which it is meant that $\omega_c = 1/\eta$ with the default assumption of large $\eta$), as guaranteed, the following upper bound holds from Equation (28):

$$\sum_{m'=m+1}^{z_c} \max\left[\sum_{t=1}^{z_c} |w_{\theta_c,m'}(t)|^2\right] \le \frac{\max\left[\sum_{t=1}^{z_c} |w_{\theta_c,m}(t)|^2\right]}{1 - (2\omega_c)^2} \qquad (31)$$

$$\frac{1}{z_c}\sum_{m=z}^{z_c}\sum_{t=1}^{z_c} |w_{\theta_c,m}(t)|^2 < 5\sum_\omega \left(\frac{2|a_\omega|}{\eta^{\eta^{k_a}}}\right)^2 \qquad (32)$$

Equation (32) provides a magnitude upper bound on extrapolation error contributions to the next cycle input, assuming there was no error contribution to the present cycle input. This establishes that as long as $k_a \gg 1$ (by which it is meant $k_a = 2$) and $k_\mu \gg 1$ (by which it is meant $k_\mu = 2$, where $\eta^{k_\mu} \equiv N_\mu$), extrapolation error does not pose an issue for Algorithm 2.

3.12.8 Settings for analysis of other types of errors. Having considered per-cycle extrapolation errors, we now consider the other types of errors: sinusoid computation errors, addition computation errors, multiplication computation errors and filter coefficient errors. General settings are discussed below. First, we need to define how many integer and fractional bits we are going to maintain for addition and multiplication.

• The requirement of maintaining $\eta^{k_{bai}}$ integer bits (for each of the real and imaginary parts) and $\eta^{k_{baf}}$ fractional bits for each sum (addition) of two numbers is imposed. This assumes that each number involved has a maximum of $\eta^{k_{bai}} - 1$ integer bits.
• The requirement of maintaining $\eta^{k_{bmi}}$ integer digits and $\eta^{k_{bmf}}$ fractional digits per each product (multiplication) of two numbers is imposed. This assumes that each number involved has a maximum of $(\eta^{k_{bmi}} - 2)/2$ integer digits.
• To simplify analysis, $k_{bai} = k_{bmi}$. $k_{baf} + 2 = k_{bmf}$ is imposed for a reason explained later.

The above requirements simplify error analysis in the sense that we may separate addition computation error and multiplication computation error analysis from sinusoidal calculation error analysis for Algorithm 1 - to be discussed in subsubsection 3.12.9. Eventually, the aim is to leave only addition errors and extrapolation errors as the sole concerns for Algorithms 1 and 2.

3.12.9 Errors in computing x(t) and f(t): sinusoid computation error. Lines 4 and 14 in Algorithm 1 contain sources of sinusoid computation errors. Line 4 sinusoid computation errors can be brushed aside, given Line 11 (and Line 21, in that they are considered together) in the same algorithm, when:

$$\frac{1}{\eta^{\eta^{k_s}}} \ll \frac{2}{2^{\eta^{k_{baf}}}} \qquad (33)$$

where the left-hand side refers to the maximum error magnitude when computing the sinusoid $e^{i\omega t}$. Line 14 sinusoid computation errors can be brushed aside as long as:

$$\frac{\max|u_c[m]|}{\eta^{\eta^{k_s}}} \ll \frac{2}{2^{\eta^{k_{bmf}}}} \qquad (34)$$

where $\max|u_c[m]|$ is assumed to be $\eta^{\eta+1}$, accounting for errors. In the ideal case without other errors, $\max|u_c[m]| \le \eta^\eta$.

3.12.10 Errors in computing x(t) and f(t): multiplication error. Multiplication errors, for Algorithm 1, occur in Line 14. We may simplify error analysis for the multiplication error contribution to $x(t)$ (and $f(t)$) as:

$$|e_{mx}(t)| \le \eta^{\eta+1}\,\frac{2}{2^{\eta^{k_{bmf}}}} \qquad (35)$$

where $e_{mx}$ refers to the multiplication error contribution to $x(t)$ (and in turn $f(t)$).

3.12.11 Errors in computing x(t) and f(t): addition error. Similarly, for the addition error contribution:

$$|e_{ax}(t)| \le \eta^2\,\eta^{\eta+1}\,\frac{2}{2^{\eta^{k_{baf}}}} \qquad (36)$$

where $e_{ax}$ refers to the addition error contribution to $x(t)$ (and in turn $f(t)$). Here $\eta$ again refers to the number of levels. $2/2^{\eta^{k_{baf}}}$ refers to the maximum error magnitude induced per each addition. $\eta^2$ refers to the fact that Line 11 in Algorithm 1 has two inner loops with index from 1 to $\eta$. The exponential factor $\eta^\eta$ refers to the number of levels across which errors exponentially build up, with Lines 11 and 21 considered together.

3.12.12 Errors in per-cycle computation: filter coefficient error. Let the cycle input (in Algorithm 2) be $\theta_c(t)$ (that is, assume as if $f_c(t) = \theta_c(t)$). Furthermore, assume $|\theta_c(t)| \le \eta^\eta$. The goal here is to make a filter coefficient error at its occurrence smaller in magnitude than the multiplication and addition errors, so as to allow us to ignore filter coefficient errors. Line 11 of Algorithm 2 has the first occurrence of filter coefficient errors. In order to ignore the filter coefficient error and consider only the multiplication error in the line,

$$|\theta_{cm}|\,\frac{1}{2^{\eta^{k_p}}} < \frac{2}{2^{\eta^{k_{bmf}}}} \qquad (37)$$

where $\theta_{cm}$ represents the maximum magnitude of $\theta_c(t)$, which we may take as $\eta^\eta$. $1/2^{\eta^{k_p}}$ on the left-hand side of Equation (37) represents the maximum error magnitude in computing each filter coefficient. The analysis for Line 18 of Algorithm 2 remains the same as for Line 11.

3.12.13 Errors in per-cycle computation: multiplication error. The issue to be discussed here is how to make the multiplication error magnitude smaller than the addition error magnitude. Line 11 of Algorithm 2 contains the first occurrence of multiplication errors. To make the addition error cover up the multiplication error in the line,

$$\frac{2}{2^{\eta^{k_{bmf}}}} < \frac{1}{2^{\eta^{k_{baf}}}} \qquad (38)$$

Lines 18 and 23 share the same analysis.

3.12.14 Errors in per-cycle computation: addition errors without extrapolation errors. For subsubsections 3.12.15, 3.12.16 and 3.12.17, discussing addition errors in Algorithm 2, it is initially assumed that there is no extrapolation error. Extrapolation errors induced by addition errors are discussed separately.

3.12.15 Errors in per-cycle computation: Line 11 addition error. Line 11 of Algorithm 2 accumulates, per each $g_{\theta_c}(t)$ (or as if $g_c(t)$ in the algorithm satisfies $g_c(t) = g_{\theta_c}(t)$), the following maximum addition error magnitude at the end of Line 13:

$$\frac{2N_f}{2^{\eta^{k_{baf}}}} \qquad (39)$$

3.12.16 Errors in per-cycle computation: Line 18 addition error. Line 18 in Algorithm 2 accumulates the following maximum addition error magnitude at the end of Line 20 for each $d[\mu]$:

$$\frac{2(N_d + 1)}{2^{\eta^{k_{baf}}}} \qquad (40)$$

This contributes to $g_c[j+1]$ at Line 23 (though not looping over all possible $j$ - one looks at how $d[\mu]$ at each $j$ affects $g_c[j+1]$) with the following maximum magnitude:

$$\frac{4(N_d + 1)}{2^{\eta^{k_{baf}}}} \qquad (41)$$

where the multiplicative factor of 2 relative to Equation (40) comes from the denominator $\mu!$.
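The per-addition error magnitudes in this subsection come from keeping a fixed number of fractional bits per sum; a minimal sketch of the underlying rounding bound (the bit count F here is an illustrative stand-in for the paper's fractional-bit count $\eta^{k_{baf}}$):

```python
import numpy as np

F = 20                                    # illustrative fractional-bit count
def fx_add(a, b, frac_bits=F):
    """Add, then round the sum to frac_bits fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round((a + b) * scale) / scale

a, b = 0.1, 0.2
# rounding onto a grid of spacing 2**-F errs by at most half a grid step,
# so each addition contributes an error bounded by 2**-(F+1)
assert abs(fx_add(a, b) - (a + b)) <= 2.0 ** -(F + 1)
```

A chain of $N_f$ such additions then accumulates at most $N_f$ times the per-operation bound, which is the shape of the bounds above.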

3.12.17 Errors in per-cycle computation: Line 23 addition error. The additions in Line 23 of Algorithm 2 initially contribute the following maximum magnitude per each $g_{\theta_c}(t)$:

$$\frac{2N_\mu}{2^{\eta^{k_{baf}}}} \qquad (42)$$

3.12.18 Errors in per-cycle computation: addition error and extrapolation error. We now utilize the results in subsection 3.12. Equation (32) just needs to be modified and adapted for the analysis of extrapolation errors caused by addition error contributions. Since addition error analysis centers around contributions to $g_{\theta_c}(t) = \sum_\omega a_\omega e^{i\omega t}$, Equation (32) changes to:

$$\frac{1}{z_c}\sum_{m=z}^{z_c}\sum_{t=1}^{z_c} |w_{\theta_c,m}(t)|^2 < 5\sum_\omega (2|a_\omega|)^2 \qquad (43)$$

Let $e_{\ell,c,j}(t)$ refer to $w_{\theta_c,j}(t)$ when $g_{\theta_c}(t)$ is the Algorithm 2 Line $\ell$ addition error induced at time $t$, with $\ell$ referring to the line number. Let $e_{\ell,c,z-1}(t)$ refer to $g_{\theta_c}(t)$ itself.

Let us now discuss the Algorithm 2 Line 11 addition errors and their induced extrapolation errors. Given Equation (39), $\sum_\omega |a_\omega|^2$ in Equation (43) is bounded by:

$$\sum_\omega |a_\omega|^2 \le \left(\frac{2N_f}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (44)$$

Therefore:

$$\frac{1}{z_c}\sum_{j=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{11,c,j}(t)|^2 \le 26\left(\frac{2N_f}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (45)$$

For the Algorithm 2 Line 18 addition error, $\sum_\omega |a_\omega|^2$ in Equation (43) is bounded by, given Equation (41):

$$\sum_\omega |a_\omega|^2 \le \left(\frac{4(N_d+1)}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (46)$$

Therefore:

$$\frac{1}{z_c}\sum_{j=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{18,c,j}(t)|^2 \le 26\left(\frac{4(N_d+1)}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (47)$$

For the Algorithm 2 Line 23 addition error, $\sum_\omega |a_\omega|^2$ in Equation (43) is bounded by, given Equation (42):

$$\sum_\omega |a_\omega|^2 \le \left(\frac{2N_\mu}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (48)$$

Therefore:

$$\frac{1}{z_c}\sum_{j=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{23,c,j}(t)|^2 \le 26\left(\frac{2N_\mu}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (49)$$

3.13 Final error analysis

It is important to note that addition errors do not interact with each other, in the sense that each addition error analysis can be done separately in terms of an upper bound. Furthermore, pure extrapolation errors do not induce addition errors, in the sense, again, that addition error analysis can be done independently of pure extrapolation errors in terms of an upper bound - though addition errors do induce extrapolation errors. Therefore, per each cycle, there are pure extrapolation errors, addition errors and extrapolation errors induced by addition errors. This simplifies calculations.

Let us define $e_{\beta,\ell,a_1,a_2,..,a_{\eta^2}}(t)$ at cycle $\eta^2$. $\beta$ here refers to the error source algorithm. $\ell$ refers to the initial addition error source line number. (In case $\ell = 0$, errors are pure extrapolation errors.) $a_j$ refers to at which time $t'$ in cycle $j$ the extrapolation error was induced, with $j$ referring to the $j$th cycle, and with the input source that induced the error having come from the extrapolation error at time $a_k$ in cycle $k < j$. To summarize, the extrapolation error function $e_{\beta,\ell,a_1,a_2,..,a_{\eta^2}}(t)$ at cycle $\eta^2$ was generated at time $t = a_{\eta^2}$ (therefore this function equals zero at integer times $t < a_{\eta^2}$), with input coming from the extrapolation error function generated at time $a_{\eta^2-1}$ in cycle $\eta^2 - 1$, which in turn comes from the input of the extrapolation error function generated at time $a_{\eta^2-2}$ in cycle $\eta^2 - 2$ (note that time $a_{\eta^2-2}$ does not refer to time ordering in cycle $\eta^2 - 1$ or $\eta^2$ or $\eta^2 - 3$), which in turn comes from the input of the extrapolation error function generated at time $a_{\eta^2-3}$ in cycle $\eta^2 - 3$, and so forth. When $a_j = z - 1$, the error function generated at time $a_{j+1}$ in cycle $j+1$ comes from the input of the extrapolation error function generated at time $a_{j-1}$ in cycle $j-1$. In case $a_j = z - 1$ for all integers $1 \le j \le \eta^2$, then no extrapolation error was induced.

There are $\eta^2$ cycles - therefore, the sum of absolute values squared of extrapolation error functions caused by ideal $f_c(t)$ at the end of cycle $\eta^2$ (the final cycle), where $g_{\eta^2}(t)$ is calculated, is bounded above by $q_{f_c}$ as, derived from Equation (32):

$$\frac{1}{z_c}\left[\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{1,0,a_1,a_2,..,a_{\eta^2}}(t)|^2 - \sum_{t=1}^{z_c} |e_{1,0,z-1,z-1,..,z-1}(t)|^2\right] \le 2\eta^2\cdot 5\sum_\omega \left(\frac{2|a_\omega|}{\eta^{\eta^{k_a}}}\right)^2 \qquad (50)$$

$$2\eta^2\cdot 5\sum_\omega \left(\frac{2|a_\omega|}{\eta^{\eta^{k_a}}}\right)^2 \le q_{f_c} \qquad (51)$$

with

$$q_{f_c} = 50\,\eta^2\,(\eta^\eta)^2\left(\frac{2}{\eta^{\eta^{k_a}}}\right)^2 \qquad (52)$$

with the assumption of $\sum_\omega |a_\omega| \le \eta^\eta$, as justified from the definition of $f_c(t)$.

Contributions of Algorithm 1 addition errors - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - at the end of cycle $\eta^2$, given Equations (36) and (32), are bounded above by $q_{1,11}$ as:

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{1,11,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{1,11} \qquad (53)$$

$$q_{1,11} = \left(\frac{2\eta^2\eta^{\eta+1}}{2^{\eta^{k_{baf}}}}\right)^2 + 25\left(\frac{2\eta^2\eta^{\eta+1}}{2^{\eta^{k_{baf}}}}\right)^2\left(\frac{1}{\eta^{\eta^{k_a}}}\right)^2\eta^2 \qquad (54)$$

Note the factor 25. Algorithm 1 errors are produced only for cycle 1, since samples of $f(t)$ are only used for cycle 1 in Algorithm 2.

Contributions of Algorithm 1 multiplication errors - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - at the end of cycle $\eta^2$, given Equations (35) and (32), are bounded above by $q_{1,14}$ as:

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{1,14,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{1,14} \qquad (55)$$

$$q_{1,14} = \left(\frac{2\eta^{\eta+1}}{2^{\eta^{k_{bmf}}}}\right)^2 + 25\left(\frac{2\eta^{\eta+1}}{2^{\eta^{k_{bmf}}}}\right)^2\left(\frac{1}{\eta^{\eta^{k_a}}}\right)^2\eta^2 \qquad (56)$$

When one translates $x(t)$ into $f(t)$ via $y(t)$, a sinusoid computation error is incurred. This contributes - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - at the end of cycle $\eta^2$, given Equation (32):

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{1,22,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{1,22} \qquad (57)$$

$$q_{1,22} = \left(\frac{2\cdot 2}{2^{\eta^{k_{bmf}}}}\right)^2 + 25\left(\frac{2\cdot 2}{2^{\eta^{k_{bmf}}}}\right)^2\left(\frac{1}{\eta^{\eta^{k_a}}}\right)^2\eta^2 \qquad (58)$$

assuming $\eta^{\eta+1}/\eta^{\eta^{k_s}} \ll 2/2^{\eta^{k_{bmf}}}$.

From Equations (44) and (43), the Algorithm 2 Line 11 addition error contributions at the end of cycle $\eta^2$ - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - can be summed up with upper bound $q_{2,11}$:

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{2,11,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{2,11} \qquad (59)$$

$$q_{2,11} = 52\,\eta^2\left(\frac{2N_f}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (60)$$

where $\eta^2$ comes from the number of cycles, combined with Equation (32) for the multiplicative factor of 2. From Equations (46) and (43), the Algorithm 2 Line 18 addition error contributions at the end of cycle $\eta^2$ - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - can be summed up with magnitude upper bound $q_{2,18}$:

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{2,18,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{2,18} \qquad (61)$$

$$q_{2,18} = 52\,\eta^2\left(\frac{4(N_d+1)}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (62)$$

From Equations (48) and (43), the Algorithm 2 Line 23 addition error contributions at the end of cycle $\eta^2$ - the sum of absolute values squared of addition error functions plus absolute values squared of extrapolation error functions - can be summed up with magnitude upper bound $q_{2,23}$:

$$\frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{2,23,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{2,23} \qquad (63)$$

$$q_{2,23} = 52\,\eta^2\left(\frac{2N_\mu}{2^{\eta^{k_{baf}}}}\right)^2 \qquad (64)$$

Therefore, the upper bound for the sum of absolute values squared of the error functions is:

$$\sum_\beta\sum_\ell \frac{1}{z_c}\sum_{a_1=z-1}^{z_c}..\sum_{a_{\eta^2}=z-1}^{z_c}\sum_{t=1}^{z_c} |e_{\beta,\ell,a_1,a_2,..,a_{\eta^2}}(t)|^2 \le q_{f_c} + q_{1,11} + q_{1,14} + q_{1,22} + q_{2,11} + q_{2,18} + q_{2,23} \qquad (65)$$

This demonstrates that error contributions can be made negligible with the right selections for the constants notated with $k$.

3.14 Constants k

All constants with the $k$ label are positive integers.

• $k_a = k_d$: used in the post-cutoff minimum attenuation $a_m$ ($k_a$) for filter $\mathcal{F}_f$ and in the maximum error magnitude for angular frequency $|\omega| \le \omega_c$ for derivative filters $\mathcal{F}_{d,\mu}$. Set as $k_a = k_d = 2$. This is about the design choice to attenuate a high-frequency signal with amplitude magnitude of $\eta^\eta$ sufficiently so as not to hamper the computation of $n_h$, the number of undirected Hamiltonian paths in a graph.
• $k_c$: used in the cutoff frequency $\omega_c$ of both filter $\mathcal{F}_f$ and derivative filters $\mathcal{F}_{d,\mu}$. $k_c = 1$.
• $\eta^{k_{bai}} = \eta^{k_{bmi}}$: the number of integer bits kept in product computation ($\eta^{k_{bmi}}$) or in sum computation ($\eta^{k_{bai}}$). Via Equation (5), related to $k_a$ and $k_c$.
• $k_{ffc}$: used in the time complexity of finding filter coefficients. Related to $k_a$, $k_c$ and $k_p$ via Equation (7).
• $k_p$: $1/\eta^{\eta^{k_p}}$ refers to the maximum error magnitude when computing each filter coefficient. $k_p = 7$, given the consideration in Equation (37), assuming $k_{bmf} = 5$.
• $\eta^{k_{baf}}$, $\eta^{k_{bmf}}$: the number of fractional digits kept in sum computation ($\eta^{k_{baf}}$) and in product computation ($\eta^{k_{bmf}}$). Set as $k_{baf} = 3$, $k_{bmf} = 5$, with the reason for this setup given by Equation (38).
• $k_s$: $1/\eta^{\eta^{k_s}}$ represents the maximum magnitude of the error in computing $e^{i\omega t}$. Set as $k_s = 7$, coming from considering Equations (33) and (34).
• $k_{aa}$, $k_{ca}$, $k_{ab}$, $k_{cb}$, $k_{ac}$, $k_{cc}$, $k_{pc}$: undetermined constant coefficients used in Equation (2) in determining filter order ($k_{aa}$, $k_{ca}$), in Equation (5) in determining $k_{bai}$ ($k_{ab}$, $k_{cb}$), and in Equation (6) reflecting filter coefficient precision and the $k_a$, $k_c$ dependence of filter coefficient time complexity ($k_{ac}$, $k_{cc}$, $k_{pc}$).
• $k_\mu$: used in $N_\mu = \eta^{k_\mu}$, which is the final order of the numerical derivative computed. $N_f, N_d \gg N_\mu$ is required. Thus, because of Equation (2), $k_a k_{aa} + k_c k_{ca} \gg k_\mu$. $k_\mu = 2$.

Let us think of the relationships between the different $N$s and the constant $k$s. As aforementioned, $N_f, N_d \gg N_\mu$. $N_d$ is determined by $N_\mu$ (and thus $k_\mu$), $k_a$ and $k_c$. $N_f$ is determined by $k_a$ and $k_c$ (along with $k_{aa}$ and $k_{ca}$ for both $N_f$ and $N_d$, though they are not parameters we can choose - and $1/\eta^{k_c} = \omega_c$). This suggests that the real choice we have is about $k_a$ and $k_c$.

3.15 Time complexity analysis

In Algorithm 1, the line that dominates time complexity is Line 14. The time complexity of Algorithm 1 is thus given by $O(\eta^2 c_{sin})$, where $c_{sin}$ refers to the time complexity of calculating each $e^{i\omega t}$ to the precision set by $k_s$. In Algorithm 2, the dominant line in terms of time complexity is Line 18, with $O(\eta^2 (N_f + N_d) N_\mu N_d)$ operations.

4 CONCLUSION

The undirected Hamiltonian path problem was reduced to a signal filtering problem based on a special graph encoding into $f(t)$. The construction of $f(t)$ assigns a distinct and unique angular frequency to Hamiltonian paths - zero frequency - with each possible path contributing an amplitude of 1. Non-zero frequencies with potentially non-zero amplitude lie in the domain $1/|V|^{|V|+1} \le |\omega| \le 1$ for $f(t)$. This makes the lowpass filtering problem of extracting the zero-frequency amplitude - the number of Hamiltonian paths - very difficult if one sticks with conventional filtering strategies, given the wide bandwidth to be filtered out.

Assuming the validity of the filter order bound and the filter time complexity assumption corresponding to the required filter design, one can find a way to filter out the wide bandwidth and extract the zero-frequency signal efficiently, taking $O(|V|^{k_{ffc}})$ time, where $k_{ffc}$ is a constant but is left undetermined, as it depends on the exact time complexity of finding filter coefficients.

The lowpass wide bandwidth filtering strategy was developed based on a frequency 'divide and conquer' idea. One must filter out angular frequencies 1/2 to 1, and then 1/4 to 1/2, and so on until one reaches $1/\eta^{\eta+1}$, not requiring a change in cutoff frequency, accounting for time re-scaling. Each frequency range filtering is then considered to be a filtering cycle, or a filtering-extrapolation cycle. In order to provide a sufficient number of filter input samples at the beginning of a subsequent cycle, filter output extrapolation is necessary at each cycle. This is done through careful construction of a local polynomial based on numerical differentiation.
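The cycle structure can be caricatured by an idealized sketch. The assumptions here are loud: an FFT brickwall lowpass stands in for the paper's FIR filter, and keeping every other sample stands in for time re-scaling; the actual algorithm must instead extrapolate filter outputs to regenerate samples, since future samples are not available.

```python
import numpy as np

def one_cycle(x):
    """Filter out |omega| > pi/2, then re-scale time by 2 (keep every other sample)."""
    X = np.fft.fft(x)
    w = np.fft.fftfreq(len(x)) * 2 * np.pi      # angular frequencies in [-pi, pi)
    X[np.abs(w) > np.pi / 2] = 0                # remove the top half-band
    return np.fft.ifft(X)[::2]                  # halving time re-maps omega -> 2*omega

# A zero-frequency amplitude of 3 plus a high-frequency tone: the tone is
# removed in the first cycle, while the zero-frequency amplitude survives.
n = 1024
t = np.arange(n)
x = 3.0 + np.cos(2.4 * t)
for _ in range(5):
    x = one_cycle(x)
assert abs(np.mean(x).real - 3.0) < 0.1
```

Each halving doubles every surviving angular frequency, so the same fixed cutoff filters the next octave down - which is why no change in cutoff frequency is needed across cycles.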