
FAST Weighted Average Calculation Algorithm
Matthew Tambiah

To cite this version:

Matthew Tambiah. FAST Weighted Average Calculation Algorithm. 2021. hal-03142251v2

HAL Id: hal-03142251
https://hal.archives-ouvertes.fr/hal-03142251v2
Preprint submitted on 22 Feb 2021

FAST Weighted Average Calculation Algorithm

Matthew A. Tambiah
Bachelor's in Electrical and Computer Engineering, Harvard University, Cambridge, United States of America
Founder and Chief Education Officer of Fast Education Group
https://orcid.org/0000-0001-6761-3723

Abstract

This paper presents an efficient algorithm (method) for calculating Weighted Averages, called the "Fast Weighted Average" calculation algorithm. In general, the algorithm presented in textbooks for calculating a Weighted Average of n elements (each with a Value and a Weight) will require n multiplication operations, assuming no Weights and no Values are zero.

The Fast Weighted Average calculation algorithm can calculate the Weighted Average of n elements using at most (n − 1) multiplication operations when using Normalized Weights. Furthermore, in many examples the computational complexity of the multiplication operations of the Fast Weighted Average algorithm is reduced compared to the standard Weighted Average calculation algorithm, as the operands in the Fast Weighted Average algorithm often have fewer digits, though this depends on the data set. The Fast Weighted Average algorithm is particularly useful when people perform and interpret the calculations and make decisions based on the results.

Review

The Weighted Average of n data points or Values of variable x with Values {x1, x2, ..., xi, ..., xn}, each with corresponding non-negative Normalized Weight {w1, w2, ..., wi, ..., wn}, is denoted by x̄ and calculated as:

Weighted Average of x: x̄ = ∑_{i=1}^{n} (wi ⋅ xi)

Normalized Weights mean that:

∑_{i=1}^{n} wi = 1

If arbitrary Weights {W1, W2, ..., Wi, ..., Wn} are given, which must be non-negative, they can be normalized as follows, where wi represents the Normalized Weight:

wi = Wi / ∑_{i=1}^{n} Wi
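As an illustration (not part of the original paper), the normalization step and the standard n-multiplication Weighted Average formula can be sketched in Python; the function names are illustrative:

```python
def normalize(raw_weights):
    """Convert arbitrary non-negative Weights W_i into Normalized Weights w_i."""
    total = sum(raw_weights)
    return [W / total for W in raw_weights]

def weighted_average_standard(values, weights):
    """Textbook formula: n multiplications, one per (w_i * x_i) term."""
    return sum(w * x for w, x in zip(weights, values))

w = normalize([2, 3, 5])                           # -> [0.2, 0.3, 0.5]
print(weighted_average_standard([10, 20, 30], w))  # -> 23.0
```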

www.fastmath.net 1 of 21 © 2020 – 2021 Matthew Tambiah

I will assume none of the Normalized Weights wi are 0. If any are zero, they will be ignored and excluded from the count n. So we will assume n positive Normalized Weights {w1, w2, ..., wi, ..., wn}.

The standard implementation of the Weighted Average calculation with Normalized Weights of n elements will require n multiplication operations if each Weight is positive (not zero) and each Value is non-zero. If any of the Values are zero, this reduces the number of multiplication operations required, but the standard method may require up to n multiplication operations.

In practice, any elements where either the Value is zero or the Weight is zero can be discarded (after Weights are Normalized, if needed) and the number of elements and multiplication operations reduced accordingly. This paper deals with situations where Weights are Normalized and each of the n elements in the Weighted Average calculation has a positive (non-zero, non-negative) Weight and a non-zero Value.

FAST Weighted Average Calculation Algorithm

The Fast Weighted Average calculation algorithm is implemented as follows. First, a Value in the set of variables is selected as a Reference Value. We will use x1 as the Reference Value for the purposes of the formula. Any Value can be used as the Reference Value, and can simply be relabeled as x1 without loss of generality.

Define Δxi as follows:

Δxi = xi − x1 (Difference between xi and the Reference Value x1)

This means:

xi = x1 + Δxi [Note that Δx1 = 0]

The Fast Weighted Average calculation algorithm to calculate the Weighted Average of x, denoted by x̄, is expressed in the following formula:

x̄ = x1 + ∑_{i=2}^{n} (wi ⋅ Δxi)

Note that the index of the sum i goes from 2 to n, and so this requires (n − 1) multiplication operations. The sum would otherwise run from i = 1 to n, but the first term is zero since (Δx1 = 0), so the term when (i = 1) can be ignored.
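The formula above can be sketched in Python as follows (an illustrative sketch; the function name is not from the paper):

```python
def weighted_average_fast(values, weights):
    """Fast Weighted Average: x1 + sum over i >= 2 of w_i * (x_i - x1),
    which uses (n - 1) multiplications given Normalized Weights."""
    x1 = values[0]  # the Reference Value
    return x1 + sum(w * (x - x1) for w, x in zip(weights[1:], values[1:]))

vals = [98, 95, 99, 97]
wts = [0.25, 0.25, 0.25, 0.25]
print(weighted_average_fast(vals, wts))  # -> 97.25, matching the standard formula
```

Because the values are close together, each Difference (−3, 1, −1) has fewer digits than the original values, which is the point of the method.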

I will call each Δxi term the Difference (from a Reference Value), and each (wi ⋅ Δxi) term a Weighted Difference term.

Note that the Difference terms are generally smaller than the original Values when the Values are close to each other, so this method can be easier for people to execute with pen and paper, and easier for people to use to estimate the Weighted Average given the Values and Normalized Weights.

FAST Weighted Average Algorithm:

1) Discard any Values which have zero Normalized Weight

2) Select a Reference Value from the set of data points, which is referred to as x1

3) Calculate the Difference between every other Value and the Reference Value

4) Calculate the Weighted Difference of each term (aside from the Reference Value) by multiplying the Weight times the Difference from the Reference Value

5) Sum the Weighted Differences

6) Add the Sum of the Weighted Differences to the Reference Value x1

Arbitrary Reference Value

The formula can be modified if a different Value of x is selected as the Reference Value, or an arbitrary value is selected as a Reference Value. Define the Reference Value as A ("A" is for Anchor), which can be any value.

Now define Δxi as follows:

Δxi = xi − A (Difference between xi and the Reference Value or Anchor)

The Weighted Average of x can be calculated as:

x̄ = A + ∑_{i=1}^{n} (wi ⋅ Δxi)

In general, this formula will require n multiplication operations. However, if the Reference Value A is selected to be xj, meaning (A = xj), then (Δxj = 0) and the corresponding Weighted Difference term can be excluded from the sum, so only (n − 1) multiplication operations are required.

That is, if (j = 1) and (A = x1), the sum runs from 2 to n as given in the first instance:

x̄ = A + ∑_{i=2}^{n} (wi ⋅ Δxi)

If (j = n) and (A = xn), the sum runs from 1 to (n − 1):

x̄ = A + ∑_{i=1}^{n−1} (wi ⋅ Δxi)

If (1 < j < n), the sum excludes j:

x̄ = A + ∑_{i=1}^{j−1} (wi ⋅ Δxi) + ∑_{i=j+1}^{n} (wi ⋅ Δxi)

In each of these cases, only (n − 1) multiplication operations are required if (A = xj) for some j, rather than n multiplication operations with the traditional Weighted Average formula.
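The anchored variant can be sketched in Python as well (an illustrative sketch; assumes Normalized Weights):

```python
def weighted_average_anchored(values, weights, anchor):
    """Anchored form: A + sum of w_i * (x_i - A), given Normalized Weights.
    Uses n multiplications unless the anchor equals one of the values
    (that term is then zero and can be skipped)."""
    return anchor + sum(w * (x - anchor) for w, x in zip(weights, values))

vals = [102, 98, 101]
wts = [0.5, 0.3, 0.2]
# The result does not depend on the anchor, because the weights sum to 1:
print(weighted_average_anchored(vals, wts, 100))      # about 100.6
print(weighted_average_anchored(vals, wts, vals[0]))  # same value
```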

In addition to reducing the number of multiplication operations, the Weighted Difference algorithm is generally easier for people to calculate: when the Values are close together, the Difference terms are smaller and often have fewer digits than the original Values, so the computational complexity of the multiplication operations is reduced because there are fewer digits to multiply. This applies in many practical circumstances but may not be applicable in certain circumstances.

The remainder of the document is designed for a wide audience; therefore, it may use slightly different terms for concepts than are used in academic papers. For example, "Normalized Weights" are sometimes called "Percentage Frequencies".

Expected Value

Also note that the Expected Value calculation is essentially the same calculation as the Weighted Average calculation, where pi is the Probability of the given event occurring (in place of the Percentage Frequency), and Ni is the Value assigned to that event.

The Expected Value is essentially the Weighted Average outcome of the Value based on the probabilities. The Expected Value of N is denoted as N̄ and is calculated as:

Expected Value of N: N̄ = ∑_{i=1}^{n} (pi ⋅ Ni)

In general, calculating the Expected Value will require n multiplication operations for n Values which have non-zero (and positive) probabilities and when none of the Values are zero. The calculation can be simplified by selecting a Reference Value (or Anchor) equal to one of the Values Nj for some j, similar to the Weighted Average. Everything stated about Weighted Average calculations applies to the Expected Value. So an Expected Value of n outcomes can be calculated with at most (n − 1) multiplication operations if the probabilities and Values associated with each outcome are known.

Weighted Average (or Expected Value) of Two Values

Given two Values (N1, N2) we can simplify the Weighted Average calculation. Note that the Weighted Average of two Values will always be between the two Values, which should provide some intuition regarding whether to add or subtract the Weighted Difference from the Reference Value. I also use the term Percentage Frequency for Normalized Weight, and use pi instead of wi. Given two Values (N1, N2) and corresponding Percentage Frequencies (Normalized Weights) (p1, p2):

N̄ = ∑_{i=1}^{2} (pi ⋅ Ni) = (p1 ⋅ N1) + (p2 ⋅ N2)

If either p1 or p2 is zero, then no multiplication is required, as the other of p1 or p2 will be 1.

Without loss of generality, we can say (N2 ≥ N1), as we simply label the larger of the two Values N2. Define the variable D as the Difference between N1 and N2:

D = N2 − N1 (D will be positive if N2 ≥ N1)

Note that N2 = N1 + D

Thus, by the definition, D will be positive. The Weighted Average of (N1, N2), written as N̄, can be calculated as either:

N̄ = N1 + (D ⋅ p2) (Version 1)

N̄ = N2 − (D ⋅ p1) (Version 2)

As discussed earlier, these equations require only a single multiplication operation to execute. Do you understand why this works? What does your intuition tell you?

Average of Two

Let's consider the example where we have to calculate the Unweighted (or Direct) Average of two values/numbers (N1, N2). This is also equivalent to the scenario where these values have equal Weight in a Weighted Average. In this section N̄ will refer to the Straight (Unweighted or Equally Weighted) Average.
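Before moving on, the two-value shortcut (Version 1 above) can be sketched in Python (the function name is illustrative):

```python
def weighted_average_two(n1, n2, p2):
    """Two-value shortcut (Version 1): N1 + D * p2 with D = N2 - N1.
    A single multiplication; p1 = 1 - p2 is never needed explicitly."""
    return n1 + (n2 - n1) * p2

# 70% weight on 80 and 30% weight on 90:
print(weighted_average_two(80, 90, 0.3))  # -> 83.0, same as 0.7*80 + 0.3*90
```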

The Traditional formula for the Average is:

N̄ = (N1 + N2) / 2

If the two values are the same, then the Average is just the value which is repeated, and no further calculation is required: if N1 = N2, then N̄ = N1 = N2.

If the two values are different, let N2 be the larger of the two values (N1, N2). We can do this without loss of generality.

Define the Difference D as:

D = N2 − N1

Define d as:

d = D / 2

Note that both D and d are positive, based on N2 being the larger of the two values. The Average can then be calculated as:

N̄ = N1 + d = N1 + D/2 ⟹ N1 = N̄ − d
N̄ = N2 − d = N2 − D/2 ⟹ N2 = N̄ + d

The average of two numbers can also be thought of as the Midpoint of two numbers, and many Algebra classes teach a Midpoint formula equivalent to the average formula above. In this case, M denotes the Midpoint of the two values (N1, N2) and the "Traditional" Midpoint formula is:

M = (N1 + N2) / 2

Therefore:

M = N1 + d = N1 + D/2 ⟹ N1 = M − d
M = N2 − d = N2 − D/2 ⟹ N2 = M + d

In this context, the Average or Midpoint is the point that is in the middle of the two values. That is, the "Distance" or "Difference" from the Midpoint to each of the values N1 and N2 is the same. In this example, the "Difference" really means the Absolute Value of the Difference. The idea is that from the Midpoint, you must travel the same distance (in opposite directions) to reach N1 and N2.

To summarize:

M = N̄ = N1 + d = N1 + D/2
M = N̄ = N2 − d = N2 − D/2

Proof of FAST Weighted Average Algorithm

In general, a Weighted Average calculation of n values can be done with (n − 1) multiplication operations, given Normalized Weights.

As discussed, the Weighted Average of N is denoted by N̄ and calculated by:

N̄ = ∑_{i=1}^{n} (pi ⋅ Ni)

Where pi is the Percentage Frequency or Normalized Weight for each value (wi in previous sections). Now we will define each value relative to N1 as a Reference Value. Define ΔNi as:

ΔNi = Ni − N1 [We subtract N1 from each value]

This means:

Ni = N1 + ΔNi

Inserting this expression for Ni into the Weighted Average Formula:

N̄ = ∑_{i=1}^{n} (pi ⋅ Ni) = ∑_{i=1}^{n} (pi ⋅ (N1 + ΔNi))

Distributing pi:

N̄ = ∑_{i=1}^{n} ((pi ⋅ N1) + (pi ⋅ ΔNi))

Separating terms of the sum:

N̄ = ∑_{i=1}^{n} (pi ⋅ N1) + ∑_{i=1}^{n} (pi ⋅ ΔNi)

N1 is a constant value which can be factored out:

N̄ = N1 ⋅ ∑_{i=1}^{n} pi + ∑_{i=1}^{n} (pi ⋅ ΔNi)

We also have the constraint that:

∑_{i=1}^{n} pi = 1

N̄ = N1 ⋅ 1 + ∑_{i=1}^{n} (pi ⋅ ΔNi)

This simplifies to our Fast Weighted Average Formula for the Weighted Average Calculation:

N̄ = N1 + ∑_{i=1}^{n} (pi ⋅ ΔNi)

Also note that in this context:

ΔN1 = 0, as ΔN1 = N1 − N1 = 0

Therefore, the term for (i = 1) will be zero, so the index can run from 2 to n, which saves one multiplication operation:

N̄ = N1 + ∑_{i=2}^{n} (pi ⋅ ΔNi)

We can pick whichever value we want to serve as the Reference Value by assigning it the label N1. We therefore have freedom over which value we use as a Reference Value.

Proof of Arbitrary Reference Value Formula

We can also perform Weighted Average calculations with an arbitrary Reference Value. Let's define the Reference Value with the variable A, for Anchor.

Now we define all our values relative to the Anchor A:

Ni = A + ΔNi, where ΔNi = Ni − A

We can go through the prior algebra, where the only difference is that N1 becomes A:

N̄ = ∑_{i=1}^{n} (pi ⋅ Ni) = ∑_{i=1}^{n} (pi ⋅ (A + ΔNi))

Distributing pi:

N̄ = ∑_{i=1}^{n} ((pi ⋅ A) + (pi ⋅ ΔNi))

Separating Sum terms:

N̄ = ∑_{i=1}^{n} (pi ⋅ A) + ∑_{i=1}^{n} (pi ⋅ ΔNi)

A is a constant value which can be factored out:

N̄ = A ⋅ ∑_{i=1}^{n} pi + ∑_{i=1}^{n} (pi ⋅ ΔNi)

Again, we have the constraint that: ∑_{i=1}^{n} pi = 1

Therefore, the last expression becomes:

N̄ = A + ∑_{i=1}^{n} (pi ⋅ ΔNi)

This is the Weighted Differences method for Weighted Average and Expected Value calculations.

The variable ΔNi is a "Difference" from the Anchor (Reference Value), and the expression (pi ⋅ ΔNi) is a Weighted Difference term. To calculate the Weighted Average, we can sum the Weighted Differences and add them to the Anchor Value.

This will generally require n multiplications unless the Anchor value is selected as one of the values of N. Note we are excluding cases where a Percentage Frequency is 0, as these can be ignored, so we have n values which have non-zero Percentage Frequencies or Weights.

Again, since:

ΔNi = Ni − A

Again, if the Anchor is set equal to a particular value of N, such as Nj for some j, then (ΔNj = 0):

ΔNj = Nj − A

By assumption:

A = Nj

Therefore:

ΔNj = Nj − Nj = 0

Therefore, if the Anchor is set to any of the values of N, such as Nj for some j, then the Weighted Difference Method (Algorithm) will require (n − 1) multiplication operations, as the corresponding term in the Weighted Difference sum can be ignored: since (ΔNj = 0), we skip that term in the sum and do not multiply by pj, as:

pj ⋅ ΔNj = 0 because (ΔNj = 0)

Proof of FAST Weighted Average Calculation Algorithm for Two Values

Here is a formal proof of why the Fast Weighted Average Algorithm works for two values.

As a review, given two Values (N1, N2) and corresponding Percentage Frequencies (Normalized Weights) (p1, p2), the traditional formula for the Weighted Average of variable N, denoted N̄, when N can take two Values is:

N̄ = ∑_{i=1}^{2} (pi ⋅ Ni) = (p1 ⋅ N1) + (p2 ⋅ N2)

N̄ = (N1 ⋅ p1) + (N2 ⋅ p2)

If either p1 or p2 is zero, then no multiplication is required, as the other of p1 or p2 will be 1.

If they are not equal, without loss of generality, we can say (N2 ≥ N1), as we simply label the larger of the two values N2. Define the variable D as the Difference between N1 and N2:

D = N2 − N1 (D will be positive if N2 ≥ N1)

Note that: N2 = N1 + D

Thus, by the definition, D will be positive. The Weighted Average of (N1, N2), written as N̄, can be calculated as either:

N̄ = N1 + (D ⋅ p2) (Version 1)

N̄ = N2 − (D ⋅ p1) (Version 2)

We will start with our Traditional Weighted Average Formula and transform it into both versions using the relationship (p1 + p2 = 1).

N̄ = (N1 ⋅ p1) + (N2 ⋅ p2)

Inserting the expression for N2, that N2 = N1 + D:

N̄ = (N1 ⋅ p1) + (N1 + D) ⋅ p2

Distributing p2:

N̄ = (N1 ⋅ p1) + (N1 ⋅ p2) + (D ⋅ p2)

Merging terms with N1:

N̄ = N1 ⋅ (p1 + p2) + (D ⋅ p2)

Inserting the expression (p1 + p2 = 1):

N̄ = N1 ⋅ (1) + (D ⋅ p2)

N̄ = N1 + (D ⋅ p2)

This is the expression we are looking for. We can also derive the other variant, using N2 as the Reference Value, in a similar way. Note that N1 = N2 − D:

N̄ = (N1 ⋅ p1) + (N2 ⋅ p2)

Inserting the expression for N1:

N̄ = (N2 − D) ⋅ p1 + (N2 ⋅ p2)

Distributing p1 and moving the D term to the end:

N̄ = (N2 ⋅ p1) + (N2 ⋅ p2) − (D ⋅ p1)

Merging terms with N2:

N̄ = N2 ⋅ (p1 + p2) − (D ⋅ p1)

Inserting the expression (p1 + p2 = 1):

N̄ = N2 ⋅ (1) − (D ⋅ p1)

N̄ = N2 − (D ⋅ p1)

Alternate Derivations

We can also derive these in a different way. Begin with:

N̄ = (N1 ⋅ p1) + (N2 ⋅ p2)

Insert the expression (p1 = 1 − p2), as (p1 + p2 = 1):

N̄ = N1 ⋅ (1 − p2) + (N2 ⋅ p2)

Distributing N1:

N̄ = (N1 ⋅ 1) − (N1 ⋅ p2) + (N2 ⋅ p2)

Rearrange terms:

N̄ = N1 + (N2 ⋅ p2) − (N1 ⋅ p2)

Factor out p2:

N̄ = N1 + (N2 − N1) ⋅ p2 = N1 + (D ⋅ p2)

N̄ = N1 + (p2 ⋅ D)

Similarly, begin with:

N̄ = (N1 ⋅ p1) + (N2 ⋅ p2)

Insert the expression (p2 = 1 − p1), as (p1 + p2 = 1):

N̄ = (N1 ⋅ p1) + N2 ⋅ (1 − p1)

Distributing N2:

N̄ = (N1 ⋅ p1) + (N2 ⋅ 1) − (N2 ⋅ p1)

Rearrange terms:

N̄ = N2 + (N1 ⋅ p1) − (N2 ⋅ p1)

Factor out p1:

N̄ = N2 + p1 ⋅ (N1 − N2)

Note that:

(N1 − N2) = −(N2 − N1) = −D

Therefore:

N̄ = N2 + (p1 ⋅ (−D)) = N2 − (p1 ⋅ D)

N̄ = N2 − (p1 ⋅ D)

Proof of Difference-Based MidPoint Formula (Average of Two Values)

Let's prove the Difference-based MidPoint formula starting with the Traditional Midpoint Formula:

Traditional Midpoint Formula: M = (N1 + N2) / 2

The Fast MidPoint calculation algorithms we have are:

M = N1 + D/2 [Version 1]
M = N2 − D/2 [Version 2]

Where (D = N2 − N1) and (N2 ≥ N1)

Proof Method A

Version 1

Begin by inserting the expression for D in Version 1, which gives:

M = N1 + (N2 − N1)/2

It should be noted that:

N1 = (2 ⋅ N1) / 2

Therefore:

M = N1 + (N2 − N1)/2 = (2 ⋅ N1)/2 + (N2 − N1)/2

Combining terms into a single fraction:

M = (2 ⋅ N1 + N2 − N1) / 2

Cancelling N1 terms:

M = (N1 + N2) / 2

Version 2

We will also show that Version 2 is consistent with the Traditional MidPoint Formula. Version 2 is:

M = N2 − D/2, where D = (N2 − N1)

Inserting the expression for D into the equation above gives:

M = N2 − (N2 − N1)/2

It should be noted that:

N2 = (2 ⋅ N2) / 2

Therefore:

M = N2 − (N2 − N1)/2 = (2 ⋅ N2)/2 − (N2 − N1)/2

Combining terms into a single fraction:

M = (2 ⋅ N2 − N2 + N1) / 2

Cancelling N2 terms:

M = (N2 + N1) / 2

Therefore these variations of the MidPoint Formula are mutually consistent.

Proof Method B

We can also go the other way and begin with the Traditional MidPoint Formula and manipulate it:

M = (N1 + N2) / 2

Version 1: Add and Subtract N1 in the Numerator:

M = (N1 + N2 + N1 − N1) / 2

Rearrange terms:

M = ((2 ⋅ N1) + (N2 − N1)) / 2 = (2 ⋅ N1)/2 + (N2 − N1)/2

M = N1 + (N2 − N1)/2 = N1 + D/2

For Version 2, begin with:

M = (N1 + N2) / 2

Then Add and Subtract N2 in the Numerator:

M = (N1 + N2 + N2 − N2) / 2 = (N1 + (2 ⋅ N2) − N2) / 2

Rearrange terms:

M = ((N1 − N2) + (2 ⋅ N2)) / 2 = (N1 − N2)/2 + (2 ⋅ N2)/2 = (N1 − N2)/2 + N2

M = N2 + (N1 − N2)/2 = N2 − (N2 − N1)/2

M = N2 − D/2

Proof Method C

The Fast MidPoint Formula can also be derived from the Fast Weighted Average Formula for two values, which were:

N̄ = N1 + (D ⋅ p2) (Version 1)

N̄ = N2 − (D ⋅ p1) (Version 2)

Where (D = N2 − N1) and (N2 ≥ N1)

Note that the MidPoint or Average of two values is equivalent to the Weighted Average Formula where each weight is 50%:

M = (N1 + N2)/2 = N1/2 + N2/2

M = (1/2 ⋅ N1) + (1/2 ⋅ N2)

Since 1/2 = 50%:

M = (50% ⋅ N1) + (50% ⋅ N2)

This means the MidPoint is the Weighted Average of the two values given equal weights, so (p1 = p2 = 50%). The MidPoint is therefore:

M = N1 + (50% ⋅ D) = N1 + d = N1 + D/2
M = N2 − (50% ⋅ D) = N2 − d = N2 − D/2
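The Difference-based MidPoint can be sketched in one line of Python (the function name is illustrative):

```python
def midpoint(n1, n2):
    """Difference-based MidPoint (Version 1): N1 + (N2 - N1) / 2."""
    return n1 + (n2 - n1) / 2

print(midpoint(37, 53))  # -> 45.0, same as (37 + 53) / 2
```

Note that the formula is symmetric in practice: swapping the arguments gives the same result, since the Difference changes sign along with the direction of travel.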

FAST Direct Average Algorithm

The Direct Average of n data points or Values of variable x with Values {x1, x2, ..., xi, ..., xn} is denoted by x̄ and calculated as:

Average of x: x̄ = (∑_{i=1}^{n} xi) / n

The Direct Average of a set of numbers is the Unweighted Average of the set of numbers, and can also be thought of as the Weighted Average of the set of Values where each Value is given Equal Weight:

x̄ = (∑_{i=1}^{n} xi) / n = ∑_{i=1}^{n} (1/n ⋅ xi) [The Weight of each Value is 1/n]

These Weights are normalized, as:

∑_{i=1}^{n} (1/n) = n ⋅ (1/n) = 1

Therefore, we can apply our Fast Weighted Average Formula with Weight 1/n. We can select any Value as a Reference Value, or use an Arbitrary Reference Value. Selecting a Value as the Reference Value and labeling it x1 gives:

x̄ = x1 + (∑_{i=2}^{n} (xi − x1)) / n

Selecting an arbitrary Reference Value A gives:

x̄ = A + (∑_{i=1}^{n} (xi − A)) / n

The version using x1 as the Reference Value makes the (x1 − x1) term equal to zero, and so it is excluded from the sum.

Proof

Again, set the Reference Value equal to A, and express each value relative to A:

Δxi = xi − A ⟹ xi = A + Δxi

x̄ = (∑_{i=1}^{n} xi) / n = (∑_{i=1}^{n} (A + Δxi)) / n

x̄ = (∑_{i=1}^{n} A) / n + (∑_{i=1}^{n} Δxi) / n

x̄ = (n ⋅ A) / n + (∑_{i=1}^{n} Δxi) / n

x̄ = A + (∑_{i=1}^{n} Δxi) / n = A + (∑_{i=1}^{n} (xi − A)) / n

Setting A equal to x1 eliminates the first term in the sum, so doing so allows the sum to run from i equals 2 to n, as the first term is zero.

Assumptions

Throughout this paper, I have assumed that none of the Normalized Weights wi or Percentage Frequency values pi are 0 or negative. If any are zero, they will be ignored and excluded from the count n. So we will assume n positive weights {w1, w2, ..., wi, ..., wn}. The number n refers to the number of Values that remain after discarding Values with zero Weight.

Arbitrary Weights

A similar method can be used if arbitrary (non-Normalized) Weights {W1, W2, ..., Wi, ..., Wn} are given, to calculate the Weighted Average in at most (n − 1) multiplication operations given n positive arbitrary Weights and n Values of variable x with Values {x1, x2, ..., xi, ..., xn}. Again, the Weights must be non-negative, and any Weights which are equal to zero will be discarded along with the corresponding Value.

The Weighted Average of variable x given arbitrary (non-Normalized) Weights is defined as:

x̄ = (∑_{i=1}^{n} (Wi ⋅ xi)) / (∑_{i=1}^{n} Wi)

Defining the variable WS ("S" is for "Sum") as:

WS = ∑_{i=1}^{n} Wi

We can rewrite the equation for the Weighted Average as:

x̄ = (∑_{i=1}^{n} (Wi ⋅ xi)) / WS

In general, the formula above will require n multiplication operations and a division operation at the end to divide by WS. Normalizing the Weights would also require n division operations to divide each original Weight by the sum of the Weights (WS). We can calculate the Weighted Average without Normalizing the Weights, and with at most (n − 1) multiplication operations, through the formula below. Again, assume that we use the value x1 as the Reference Value.

Again, define Δxi as follows:

Δxi = xi − x1 ⟺ xi = x1 + Δxi

The Weighted Average of x can be calculated as:

x̄ = x1 + (∑_{i=2}^{n} (Wi ⋅ Δxi)) / WS

Note the sum runs from i = 2 to n, and WS is outside of the sum, so we can calculate the sum of the Weighted Differences and divide once by WS. Therefore, implementing this formula will require at most (n − 1) multiplication operations and a single division calculation.

Pseudo-Code

Pseudo-code for implementing this calculation algorithm is shown below.

Assume the inputs are arrays W[n] and x[n] which contain floating point numbers for the Weights and Values respectively. We will also assume x[0] is the Reference Value. Each array is zero-indexed and has n values. We also use the arrays D[n] for the Difference values, and WD[n] for the Weighted Difference values.

WD_Sum = 0
W_Sum = W[0]
for i = 1 to (n-1) {
    D[i] = x[i] - x[0];
    WD[i] = W[i] * D[i];
    WD_Sum = WD_Sum + WD[i];
    W_Sum = W_Sum + W[i];
}
WA = x[0] + (WD_Sum / W_Sum);
Return WA;
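For reference, here is a runnable Python rendering of the algorithm in the pseudocode above (a sketch; Python lists stand in for the arrays, and the function name is illustrative):

```python
def fast_weighted_average(W, x):
    """Runnable rendering of the pseudocode: x[0] is the Reference Value;
    Weights need not be Normalized. Uses (n - 1) multiplications and a
    single division by the Sum of the Weights."""
    WD_sum = 0.0   # running Sum of Weighted Differences
    W_sum = W[0]   # running Sum of Weights
    for i in range(1, len(x)):
        D = x[i] - x[0]      # Difference from the Reference Value
        WD_sum += W[i] * D   # Weighted Difference term
        W_sum += W[i]
    return x[0] + WD_sum / W_sum

print(fast_weighted_average([2, 3, 5], [10, 20, 30]))  # -> 23.0
```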

The above calculations can also be done in-place, so as not to need the additional arrays D[n] and WD[n]. Note that when retaining the Weights, if you have the Weighted Differences and the Reference Value, then one can reconstruct the original set of values. Pseudo-code for the in-place calculation is given below.

WD_Sum = 0
W_Sum = W[0]
for i = 1 to (n-1) {
    x[i] = x[i] - x[0];
    x[i] = W[i] * x[i];
    WD_Sum = WD_Sum + x[i];
    W_Sum = W_Sum + W[i];
}
WA = x[0] + (WD_Sum / W_Sum);
Return WA;

For both of these, if the Weights are Normalized, the code can be modified to simply not calculate the W_Sum value and not divide by it, since W_Sum will always be 1 given Normalized Weights. Pseudocode for Normalized Weights is shown below:

WD_Sum = 0
for i = 1 to (n-1) {
    x[i] = x[i] - x[0];
    x[i] = w[i] * x[i];
    WD_Sum = WD_Sum + x[i];
}
WA = x[0] + WD_Sum;
Return WA;

Arbitrary Reference Value

We can also use an Arbitrary Reference Value A. Again, in general, an Arbitrary Reference Value which is not one of the values xi will require n multiplication operations. Define the Reference Value as A ("A" is for Anchor), which can be any value.

Define Δxi as follows:

Δxi = xi − A ⟺ xi = Δxi + A

The Weighted Average of x can be calculated as:

x̄ = A + (∑_{i=1}^{n} (Wi ⋅ Δxi)) / WS

Again, setting A equal to xj will make (Δxj = 0), so we can ignore the corresponding term in the sum, as we would be multiplying by zero.

Pseudo-code for implementing this calculation algorithm is shown below. Inputs are arrays W[n] and x[n] which contain floating point numbers for the Weights and Values respectively, and a separate variable A for the Reference Value.

WD_Sum = 0
W_Sum = 0
for i = 0 to (n-1) {
    x[i] = x[i] - A;
    x[i] = W[i] * x[i];
    WD_Sum = WD_Sum + x[i];
    W_Sum = W_Sum + W[i];
}
WA = A + (WD_Sum / W_Sum);
Return WA;

Note that in this example, the loop runs from i = 0 to n-1, and W_Sum is initialized to 0 rather than W[0], since the loop now adds W[0] along with every other Weight. Again, the arrays are zero-indexed as per computer science standards.

Pseudocode for arbitrary Reference Values given Normalized Weights is shown below:

WD_Sum = 0
for i = 0 to (n-1) {
    x[i] = x[i] - A;
    x[i] = w[i] * x[i];
    WD_Sum = WD_Sum + x[i];
}
WA = A + WD_Sum;
Return WA;

This can also be further refined to save an operation and an intermediate variable as:

Temp = A
for i = 0 to (n-1) {
    x[i] = x[i] - A;
    x[i] = w[i] * x[i];
    Temp = Temp + x[i];
}
Return Temp;
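A Python rendering of this refined variant (a sketch that skips the in-place mutation of x[]; the function name is illustrative and assumes Normalized Weights):

```python
def anchored_average_normalized(w, x, A):
    """Refined variant: accumulate the Weighted Differences directly onto
    the Anchor A. Assumes the Weights in w are Normalized (sum to 1)."""
    temp = A
    for i in range(len(x)):
        temp += w[i] * (x[i] - A)  # add each Weighted Difference in place
    return temp

print(anchored_average_normalized([0.5, 0.5], [10, 20], 15))  # -> 15.0
```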
