A Law and order

Relation preserving maps. Symmetric relations, asymmetric relations, and antisymmetry. The Cartesian product and the Venn diagram. Euler circles, power sets, partitioning or fibering.
A-1 Law and order: Cartesian product, power sets
In daily life, we make comparisons of the type "... is higher than ...", "... is smarter than ...", "... is faster than ...". Indeed, we are dealing with order. Mathematically speaking, order is a binary relation of the type
xR1y : x < y , with the inverse relation xR1^−1 y : x > y . (A.1)
Example A.1 (Real numbers). x and y, for instance, can be real numbers: x, y ∈ R. End of Example.
Question: "What is a relation?" Answer: "Let us explain the term relation in the frame of the following example." Example A.2 (Cartesian product).
Define the left set A = {a1, a2, a3} as the set of balls in a left basket, with {a1, a2, a3} = {red, green, blue}. In contrast, define the right set B = {b1, b2} as the set of balls in a right basket, with {b1, b2} = {yellow, pink}. Sequentially, we take a ball from the left basket as well as from the right basket such that we are led to the combinations

A × B = {(a1, b1), (a1, b2), (a2, b1), (a2, b2), (a3, b1), (a3, b2)} , (A.2)

A × B = {(red, yellow), (red, pink), (green, yellow), (green, pink), (blue, yellow), (blue, pink)} . (A.3)
End of Example.
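The combinations (A.2)–(A.3) can be generated mechanically. A minimal Python sketch (the color labels follow Example A.2; `itertools.product` enumerates the ordered pairs in the same left-to-right order):

```python
from itertools import product

# Left basket A and right basket B as in Example A.2
A = ["red", "green", "blue"]
B = ["yellow", "pink"]

# The Cartesian product A x B: all ordered pairs (a, b) with a in A, b in B
pairs = list(product(A, B))
print(pairs)
```

Since |A| = 3 and |B| = 2, the product contains 3 · 2 = 6 ordered pairs.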
Definition A.1 (Reflexive partial order).
Let M be a non-empty set. The binary relation R2 on M is called a reflexive partial order if for all x, y, z ∈ M the following three conditions are fulfilled:

(i) x ≤ x (reflexivity) ,
(ii) if x ≤ y and y ≤ x, then x = y (antisymmetry) , (A.4)
(iii) if x ≤ y and y ≤ z, then x ≤ z (transitivity) .
End of Definition.
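The three conditions of Definition A.1 can be tested directly on a finite set. In the following Python sketch (the function names are ours, not from the text), a relation R on M is encoded as a set of ordered pairs (x, y) meaning "x R y":

```python
def is_reflexive(M, R):
    """(i) every x in M is in relation to itself."""
    return all((x, x) in R for x in M)

def is_antisymmetric(M, R):
    """(ii) x R y and y R x together imply x = y."""
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(M, R):
    """(iii) x R y and y R z together imply x R z."""
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_reflexive_partial_order(M, R):
    return is_reflexive(M, R) and is_antisymmetric(M, R) and is_transitive(M, R)

# The relation "less than or equal" on M = {1, 2, 3}
M = {1, 2, 3}
leq = {(x, y) for x in M for y in M if x <= y}
lt = {(x, y) for x in M for y in M if x < y}   # the strict order, for contrast
```

For `leq` all three conditions hold, while the strict relation `lt` fails reflexivity (it is an irreflexive order, cf. Definition A.5).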
Obviously, in a reflexive relation R, any element x ∈ M is in relation R to itself. But a relation is not symmetric if at least one element x ∈ M is in relation to an element y ∈ M which in turn is not in relation to x. If xRy, but by no means yRx, applies, we speak of an asymmetric relation. This notion should not be confused with antisymmetry: if xRy and yRx applies for all x, y ∈ M, then x = y is implied. And R is transitive in M if, for all x, y, z ∈ M, xRy and yRz implies xRz. Now we are prepared to introduce more strictly the method of a Cartesian product.
Fig. A.1. Cartesian product, Cartesian coordinate system.
The elements of the Cartesian product can be illustrated as a point set if A and B are bounded subsets of R. Actually, we consider the ordered pair (a, b) as the (x, y) coordinates of the point P(a, b) in a Cartesian coordinate system such that all ordered pairs of A × B are represented as points within a rectangle. Example A.2 and Fig. A.1 have indeed prepared the following definition. Definition A.2 (Cartesian product). The Cartesian product A × B of arbitrary sets A and B is the set of all ordered pairs (a, b) whose left element is a ∈ A and whose right element is b ∈ B. Symbolically, we write
A × B := {(a, b) | a ∈ A, b ∈ B} . (A.5)
End of Definition.
For the Cartesian product, alternative notions are direct product, product set, cross set, pair set, or union set. Exercise A.1 (Cartesian product). The set-theoretical operations ∩, ∪, ∆, \, and × we shortly call intersection, union, symmetric difference, difference, and Cartesian product. They are illustrated by Figs. A.2–A.8. With respect to these operations, draw the Cartesian product A × B of the following sets A and B:
(i) A := {x ∈ N | x ∈ [1; 3] ∨ x = 4} , B := {y ∈ N | y ∈ [1; 2] ∨ y = 3} ,
(ii) A := {1, 2, 3} , B := {y ∈ N | y ∈ [1; 2[ ∪ {3}} , (A.6)
(iii) A := [1; 2] ∪ ]3; 4[ , B := [0; 1] ∪ [3; 4[ .

Here, we have applied the definitions of closed, left open, and right open intervals:

[x; y] := {• | x ≤ • ≤ y} , ]x; y] := {• | x < • ≤ y} , [x; y[ := {• | x ≤ • < y} . (A.7)

End of Exercise.

Fig. A.2. Venn diagram/Euler circles A ∩ B: the intersection A ∩ B of two sets A and B is the set of all elements which are elements of the set A and the set B: A ∩ B := {x | x ∈ A ∧ x ∈ B}.

Fig. A.3. Venn diagram/Euler circles A ∪ B: the union A ∪ B of two sets A and B is the set of all elements which are in the set A or the set B: A ∪ B := {x | x ∈ A ∨ x ∈ B}.

Fig. A.4. Venn diagram/Euler circles: the symmetric difference A∆B of two sets A and B is the set of all elements which are either in set A or in set B: A∆B := {x | x ∈ A ∨̇ x ∈ B}.

Fig. A.5. Venn diagram/Euler circles: the difference set A \ B of two sets A and B is the set of all elements of A which are not in B: A \ B := {x | x ∈ A ∧ x ∉ B}.

Fig. A.6. Cartesian product A × B; A := {x ∈ N | x ∈ [1; 3] ∨ x = 4} and B := {y ∈ N | y ∈ [1; 2] ∨ y = 3}.

Fig. A.7. Cartesian product A × B; A := {1, 2, 3} and B := [1; 2[ ∪ {3}.

Fig. A.8. Cartesian product A × B; A := [1; 2] ∪ ]3; 4[ and B := [0; 1] ∪ [3; 4[.

In order to interpret the Cartesian product A × B as a set of third kind, we have to understand better the power set P(A) of a set, the intersection and union of set systems, and the partitioning of a set system into subsets, called fibering.

Example A.3 (Power set).
The power set, as the set of all subsets of a set A, may be demonstrated by the set A = {1, 2, 3}, whose complete list of subsets reads

M1 = ∅ , M2 = {1} , M3 = {2} , M4 = {3} ,
M5 = {1, 2} , M6 = {2, 3} , M7 = {3, 1} , M8 = {1, 2, 3} , (A.8)

namely built on 8 elements:

power(A) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}} . (A.9)

End of Example.

Exercise A.2 (Power set).

If n is the number of elements of a set A, for which we write |A| = n, then

|power(A)| = 2^n , (A.10)

namely the power set of A has exactly 2^n elements. The result motivates the name power set. The proof is based on complete induction:

|A| : 0 1 2 3 4 5 6 ... n
|power(A)| : 1 2 4 8 16 32 64 ... 2^n . (A.11)

End of Exercise.

Example A.3 and Exercise A.2 have already used the following definition.

Definition A.3 (Power set).

The power set of a set A, shortly written power(A), is by definition the set of all subsets M of A:

power(A) := {M | M ⊆ A} . (A.12)

End of Definition.

power(A) is a set system whose elements are just all subsets of A. If A is a set of first kind, power(A) is a set of second kind. Inclusions of the above type can be illustrated by Hasse diagrams, also called order diagrams (H. Hasse, 1898–1979). In such a diagram, two sets M1 and M2 are identified by two points and are connected by a straight line if the lower set M2 is a subset of M1, or M2 ⊆ M1. In this way, a set M is contained in any set which is above M, illustrated by an upward line.

Example A.4 (Hasse diagram).

For the set A = {1, 2, 3}, |A| = 3:

power(A) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}} , |power(A)| = 8 . (A.13)

End of Example.

Fig. A.9. Hasse diagram for power(A), |A| = 3, |power(A)| = 8.

A-2 Law and order: Fibering

For a set system (which we experienced by power(A), for instance) M = {A1, A2, ..., An}, we call ∩M = ∩_{i=1}^n Ai and ∪M = ∪_{i=1}^n Ai the intersection and the union of the set system.
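The count |power(A)| = 2^n of Exercise A.2 is easy to confirm numerically; a Python sketch of the power set construction using `itertools.combinations`:

```python
from itertools import combinations

def power_set(A):
    """All subsets of A, returned as a set of frozensets."""
    elems = list(A)
    return {frozenset(c)
            for r in range(len(elems) + 1)       # subset sizes 0, 1, ..., |A|
            for c in combinations(elems, r)}

P = power_set({1, 2, 3})
```

For |A| = 3 this reproduces the eight subsets M1, ..., M8 of Example A.3.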
The inverse operation of the union of a set system, namely the partitioning or fibering of a set system into specific subsets, is given by the following definition.

Definition A.4 (Fibering).

A set system M = {M1, M2, ..., Mn} (n ∈ N*) of subsets M1, ..., Mn of M is called a partitioning or a fibering of M if and only if

(i) Mi ≠ ∅ for any i ∈ {1, 2, ..., n} ,
(ii) Mi ∩ Mj = ∅ for any i, j ∈ {1, 2, ..., n}, i ≠ j ,
(iii) M = M1 ∪ M2 ∪ ... ∪ Mn = ∪M = ∪_{i=1}^n Mi

holds. These subsets of M, the elements of M, are called fibres of M or of M, respectively. End of Definition.

In other words, M1, ..., Mn are non-empty subsets of M, their paired intersection Mi ∩ Mj is the empty set, and their ordered union is M again.

Fig. A.10. Hasse diagram, lexicographic order: abacus, absolute value, algebra, algorithm, analysis, axiom.

Example A.5 (Fibering).

Let M = N*, M1 = {1}, M2 = P (the set of prime numbers), and M3 = {x | ab = x for some a, b ∈ N* \ {1}}, the set of compound numbers. Then

M = {M1, M2, M3} (A.14)

is a partitioning or a fibering of M. For instance,

M1 = {1} ,
M2 = {2, 3, 5, 7, 11, 13, 17, ...} , (A.15)
M3 = {4, 6, 8, 9, 10, 12, 14, ...}

fulfills

(i) Mi ≠ ∅ ,
(ii) Mi ∩ Mj = ∅ (i, j ∈ {1, 2, 3}, i ≠ j) , (A.16)
(iii) M = M1 ∪ M2 ∪ M3 = N* .

End of Example.

Exercise A.3 (Fibering).

Find three fibres of the set Z:

M = {M0, M1, M2} . (A.17)

Answer:

M0 := 3Z := {x | 3y = x and y ∈ Z} = {..., −6, −3, 0, 3, 6, ...} ,
M1 := 3Z + 1 := {x | 3y + 1 = x and y ∈ Z} = {..., −5, −2, 1, 4, 7, ...} , (A.18)
M2 := 3Z + 2 := {x | 3y + 2 = x and y ∈ Z} = {..., −4, −1, 2, 5, 8, ...} .

End of Exercise.

Example A.6 (Hasse diagram, lexicographic order).

The Hasse diagram of a lexicographic order in a set of words which begin with the initial letter "a" is called a chain. That is, all elements are ordered along a half line or line; there are no bifurcations: compare with Fig. A.10. End of Example.

Example A.7 (Inverse relation).
The smaller-equal relation ≤ is, in the space of real numbers, a reflexive partial order. Its inverse relation, the relation ≥, is again a reflexive partial order. End of Example.

Definition A.5 (Irreflexive partial order).

Let M be a non-empty set. The binary relation R1 on M is called an irreflexive partial order if for all x, y, z ∈ M the two conditions (i) x < x never holds (irreflexivity) and (ii) x < y and y < z imply x < z (transitivity) are fulfilled. End of Definition.

B The inverse of a multivariate homogeneous polynomial

For inversion problems of map projections, like the computation of conformal coordinates of type Gauss–Krueger or Universal Transverse Mercator Projection (UTM), or alternative coordinates of type Riemann or Soldner–Fermi, we may take advantage of the inversion of (i) a univariate homogeneous polynomial, outlined in Section B-1, (ii) a bivariate homogeneous polynomial, outlined in Section B-2, or (iii) a trivariate, in general multivariate, homogeneous polynomial of degree n, which is discussed in Section B-3.

Technical aside. Note that, on the basis of an algorithm that is outlined in R. Koenig and K. H. Weise (1951, p. 465–466, 501–511), H. Glasmacher and K. Krack (1964, degree 6) as well as G. Joos and K. Jörg (1991, degree 5) have developed symbolic computer manipulation software for the inversion of a bivariate homogeneous polynomial. Furthermore, solutions for the inversion of a univariate homogeneous polynomial are already tabulated in M. Abramowitz and I. A. Stegun (1965, p. 16, degree 7). However, we follow here E. Grafarend, T. Krarup, and R. Syffus (1996), where the inversion of a general multivariate homogeneous polynomial of degree n suited for symbolic computer manipulation is presented. For the mathematical foundation of the GKS algorithm, we refer to H. Bass, E. H. Connell, and D. Wright (1982).

B-1 Inversion of a univariate homogeneous polynomial of degree n

Assume the univariate homogeneous polynomial of degree n, namely y(x) of Box B.1, to be given and find the inverse univariate homogeneous polynomial of degree n, namely x(y), i. e.
from the set of coefficients {a11, a12, ..., a1,n−1, a1n}, by the algorithm that is outlined in Box B.1, find the set of coefficients {b11, b12, ..., b1,n−1, b1n}.

Box B.1 (Algorithm for the construction of an inverse univariate homogeneous polynomial of degree n).

y(x) = a11 x + a12 x^2 + ··· + a1,n−1 x^(n−1) + a1n x^n ,
x(y) = b11 y + b12 y^2 + ··· + b1,n−1 y^(n−1) + b1n y^n . (B.1)

GKS algorithm: given {a11, a12, ..., a1,n−1, a1n}, find {b11, b12, ..., b1,n−1, b1n}.

Forward substitution:

x = b11 y + b12 y^2 + ··· + b1,n−1 y^(n−1) + b1n y^n + β1,n+1 ,
x^2 = b22 y^2 + b23 y^3 + ··· + b2,n−1 y^(n−1) + b2n y^n + β2,n+1 , (B.2)
...
x^(n−1) = b(n−1,n−1) y^(n−1) + b(n−1,n) y^n + β(n−1,n+1) ,
x^n = bnn y^n + βn,n+1 , (B.3)

[y; y^2; ...; y^n] = [a11 a12 ··· a1n ; 0 a22 ··· a2n ; ··· ; 0 0 ··· ann] [x; x^2; ...; x^n] + αn (B.4)

(column arrays and matrices written with rows separated by semicolons), subject to

a22 = a11^2 , a33 = a11^3 , a44 = a11^4 ,
a23 = 2 a11 a12 , a34 = 3 a11^2 a12 , a45 = 4 a11^3 a12 , (B.5)
a24 = 2 a11 a13 + a12^2 , a35 = 3 a11^2 a13 + 3 a11 a12^2 , a25 = 2 a11 a14 + 2 a12 a13 , etc. ,

A = [a11 a12 a13 ··· a1n ; 0 a11^2 2a11a12 ··· a2n ; ··· ; 0 0 ··· a(n−1,n−1) a(n−1,n) ; 0 0 ··· 0 ann] , (B.6)

B = [b11 b12 ··· b1,n−1 b1n ; 0 b22 ··· b2,n−1 b2n ; ··· ; 0 0 ··· b(n−1,n−1) b(n−1,n) ; 0 0 ··· 0 bnn] . (B.7)

Consult Box B.4 for the general representation of amn.

Backward substitution:

AB = I ⇔
(i) b11 a11 = 1 ⇒ b11 = a11^−1 ,
(ii) b11 a12 + b12 a22 = 0 ⇒ b12 = −b11 a12 a22^−1 = −a11^−3 a12 , (B.8)
(iii) b11 a13 + b12 a23 + b13 a33 = 0 ⇒ b13 = −(b11 a13 + b12 a23) a33^−1 = a11^−1 (a12 a22^−1 a23 − a13) a33^−1 ,
(iv) b11 a14 + b12 a24 + b13 a34 + b14 a44 = 0 ⇒ b14 = −(b11 a14 + b12 a24 + b13 a34) a44^−1 = a11^−1 [a12 a22^−1 (a24 − a23 a33^−1 a34) + a13 a33^−1 a34 − a14] a44^−1 .

Consult Box B.5 for the general representation of b1n.

Notable for the GKS algorithm is the following.
In the first step, or the forward substitution, a set of equations {x, x^2, ..., x^(n−1), x^n} is constructed by substituting x(y) into the powers x, x^2, ..., x^(n−1), x^n, finally written into a matrix equation. The upper triangular matrix A is gained by a multinomial expansion as indicated. In contrast, the second step, or the backward substitution, is based upon the upper triangular matrix B := A^−1, the inverse of A. Its first row contains the unknown coefficients we are looking for: {b11, b12, ..., b1,n−1, b1n}. The construction of A as well as A^−1 can be based on symbolic computer manipulation. The algebraic manipulation becomes more concrete when we pay attention to Examples B.1 and B.2. The first example aims at the inversion of a univariate homogeneous polynomial of degree n = 2, namely y(x) = a11 x + a12 x^2 → x(y) = b11 y + b12 y^2. The GKS algorithm determines the set of coefficients {b11, b12} from the two given coefficients a11 and a12. In contrast, the second example focuses on the inversion of a univariate homogeneous polynomial of degree n = 3, namely y(x) = a11 x + a12 x^2 + a13 x^3 → x(y) = b11 y + b12 y^2 + b13 y^3. The GKS algorithm determines the set of coefficients {b11, b12, b13} from the three given coefficients a11, a12, and a13.

Example B.1 (Inversion of a univariate homogeneous polynomial of degree n = 2).

Assume the univariate homogeneous polynomial y(x) = a11 x + a12 x^2 to be given and find the inverse univariate homogeneous quadratic polynomial x(y) = b11 y + b12 y^2 by the GKS algorithm.

1st step:

x(y) = b11 y + b12 y^2 = b11 a11 x + (b11 a12 + b12 a11^2) x^2 + β13 ,
x^2(y) = b22 y^2 + β23 = b22 a11^2 x^2 + β23 . (B.9)

2nd step (forward substitution):

[x; x^2] = [b11 b12 ; 0 b22] [a11 a12 ; 0 a22] [x; x^2] + [β13; β23] = [b11 a11, b11 a12 + b12 a22 ; 0, b22 a22] [x; x^2] + [β13; β23] , (B.10)

subject to a22 = a11^2. Both the matrices A and B are upper triangular such that AB = I2 ⇔ B = A^−1.
(B.11)

3rd step (backward substitution):

B = A^−1 = [a11 a12 ; 0 a22]^−1 = (a11 a22)^−1 [a22 −a12 ; 0 a11] = [a11^−1 −a11^−3 a12 ; 0 a11^−2] ⇒ b11 = a11^−1 , b12 = −a11^−3 a12 , (B.12)

or

b22 a22 = b22 a11^2 = 1 ⇒ b22 = a22^−1 = a11^−2 , b11 a11 = 1 ⇒ b11 = a11^−1 ,
b11 a12 + b12 a22 = 0 ⇒ b12 = −a22^−1 a12 b11 = −a11^−3 a12 , (B.13)

x(y) = a11^−1 y − a11^−3 a12 y^2 . (B.14)

End of Example.

Example B.2 (Inversion of a univariate homogeneous polynomial of degree n = 3).

Assume the univariate homogeneous polynomial y(x) = a11 x + a12 x^2 + a13 x^3 to be given and find the inverse univariate homogeneous cubic polynomial x(y) = b11 y + b12 y^2 + b13 y^3 by the GKS algorithm.

1st step:

x(y) = b11 y + b12 y^2 + b13 y^3 = b11 a11 x + (b11 a12 + b12 a11^2) x^2 + (b11 a13 + 2 b12 a11 a12 + b13 a11^3) x^3 + β14 ,
x^2(y) = b22 y^2 + b23 y^3 + β24 = b22 a11^2 x^2 + (2 b22 a11 a12 + b23 a11^3) x^3 + β24 , (B.15)
x^3(y) = b33 y^3 + β34 = b33 a11^3 x^3 + β34 .

2nd step (forward substitution):

[x; x^2; x^3] = [b11 b12 b13 ; 0 b22 b23 ; 0 0 b33] [a11 a12 a13 ; 0 a22 a23 ; 0 0 a33] [x; x^2; x^3] + [β14; β24; β34] , (B.16)

subject to a22 = a11^2, a23 = 2 a11 a12, and a33 = a11^3. Both the matrices A and B are upper triangular such that AB = I3 ⇔ B = A^−1.
(B.17)

3rd step (backward substitution):

B = A^−1 = [a11 a12 a13 ; 0 a11^2 2a11a12 ; 0 0 a11^3]^−1 = [a11^−1, −a11^−3 a12, a11^−4 (2 a11^−1 a12^2 − a13) ; 0, a11^−2, −2 a11^−4 a12 ; 0, 0, a11^−3] (B.18)

⇒ b11 = a11^−1 , b12 = −a11^−3 a12 , b13 = a11^−4 (2 a11^−1 a12^2 − a13) ,

or

b33 a33 = b33 a11^3 = 1 ⇒ b33 = a33^−1 = a11^−3 , b22 a22 = b22 a11^2 = 1 ⇒ b22 = a22^−1 = a11^−2 ,
b22 a23 + b23 a33 = 2 a11^−1 a12 + b23 a11^3 = 0 ⇒ b23 = −2 a11^−4 a12 , (B.19)
b11 a11 = 1 ⇒ b11 = a11^−1 , b11 a12 + b12 a22 = a11^−1 a12 + b12 a11^2 = 0 ⇒ b12 = −a11^−3 a12 ,
b11 a13 + b12 a23 + b13 a33 = a11^−1 a13 − a11^−3 a12 a23 + b13 a11^3 = 0 ⇒ b13 = a11^−4 (2 a11^−1 a12^2 − a13) ,

x(y) = a11^−1 y − a11^−3 a12 y^2 + a11^−4 (2 a11^−1 a12^2 − a13) y^3 . (B.20)

End of Example.

B-2 Inversion of a bivariate homogeneous polynomial of degree n

Assume the bivariate homogeneous polynomial of degree n, namely y(x) or {y1(x1, x2), y2(x1, x2)} of Box B.2, to be given and find the inverse bivariate homogeneous polynomial of degree n, namely x(y) or {x1(y1, y2), x2(y1, y2)}, i. e. from the set of coefficients {A11, A12, ..., A1,n−1, A1n}, by the algorithm that is outlined in Box B.2, find the set of coefficients {B11, B12, ..., B1,n−1, B1n}.

Box B.2 (Algorithm for the construction of an inverse bivariate homogeneous polynomial of degree n).

y(x) := [y1; y2] = A11 [x1; x2] + A12 [x1; x2]^[2] + ··· + A1,n−1 [x1; x2]^[n−1] + A1n [x1; x2]^[n] = Σ_{k=1}^n A1k [x1; x2]^[k] , (B.21)

x(y) := [x1; x2] = B11 [y1; y2] + B12 [y1; y2]^[2] + ··· + B1,n−1 [y1; y2]^[n−1] + B1n [y1; y2]^[n] = Σ_{k=1}^n B1k [y1; y2]^[k] , (B.22)

where [x1; x2]^[k] := [x1; x2] ⊗ ··· ⊗ [x1; x2] (k factors) denotes the k-fold Kronecker–Zehfuss product.

GKS algorithm: given {A11, A12, ..., A1,n−1, A1n}, find {B11, B12, ..., B1,n−1, B1n}.
1st polynomial:

[x1; x2] = B11 Σ_{k=1}^n A1k [x1; x2]^[k] + B12 (Σ_{k1=1}^n A1,k1 [x1; x2]^[k1]) ⊗ (Σ_{k2=1}^n A1,k2 [x1; x2]^[k2]) + ··· + B1,n−1 (Σ_{k1=1}^n A1,k1 [x1; x2]^[k1]) ⊗ ··· ⊗ (Σ_{kn−1=1}^n A1,kn−1 [x1; x2]^[kn−1]) + B1n (Σ_{k1=1}^n A1,k1 [x1; x2]^[k1]) ⊗ ··· ⊗ (Σ_{kn=1}^n A1,kn [x1; x2]^[kn]) =
= B11 A11 [x1; x2] + (B11 A12 + B12 A11 ⊗ A11) [x1; x2]^[2] + ··· + (B11 A1,n−1 + ··· + B1,n−1 A(n−1,n−1)) [x1; x2]^[n−1] + (B11 A1n + ··· + B1n Ann) [x1; x2]^[n] + β1,n+1 . (B.23)

2nd polynomial:

[x1; x2]^[2] = B22 [y1; y2]^[2] + B23 [y1; y2]^[3] + ··· + B2,n−1 [y1; y2]^[n−1] + B2n [y1; y2]^[n] + β2,n+1 =
= B22 (A11 ⊗ A11) [x1; x2]^[2] + (B22 A23 + B23 A33) [x1; x2]^[3] + ··· + (B22 A2n + ··· + B2n Ann) [x1; x2]^[n] + β2,n+1 . (B.24)

nth polynomial:

[x1; x2]^[n] = Bnn Ann [x1; x2]^[n] + βn,n+1 . (B.25)

(According to E. Grafarend and B. Schaffrin (1993) or W. H. Steeb (1991).)

Forward substitution:

[[x1; x2]^[1]; [x1; x2]^[2]; ...; [x1; x2]^[n]] = [B11 B12 ··· B1n ; 0 B22 ··· B2n ; ··· ; 0 0 ··· Bnn] [A11 A12 ··· A1n ; 0 A22 ··· A2n ; ··· ; 0 0 ··· Ann] [[x1; x2]^[1]; [x1; x2]^[2]; ...; [x1; x2]^[n]] + [β1,n+1; β2,n+1; ...; βn,n+1] , (B.26)

subject to

A22 = A11 ⊗ A11 , A23 = A11 ⊗ A12 + A12 ⊗ A11 , A2n = Σ_{i=1}^{n−1} A1i ⊗ A1,n−i ;
A33 = A11^[3] , A3n = Σ_{i=1}^{n−2} Σ_{j=1}^{n−i−1} A1i ⊗ A1j ⊗ A1,n−i−j ; (B.27)
A44 = A11^[4] , A4n = Σ_{i=1}^{n−3} Σ_{j=1}^{n−i−2} Σ_{k=1}^{n−i−j−1} A1i ⊗ A1j ⊗ A1k ⊗ A1,n−i−j−k ;
A5n = Σ_{i=1}^{n−4} Σ_{j=1}^{n−i−3} Σ_{k=1}^{n−i−j−2} Σ_{l=1}^{n−i−j−k−1} A1i ⊗ A1j ⊗ A1k ⊗ A1l ⊗ A1,n−i−j−k−l .

(Consult Box B.4 for the general representation of Amn.)

Backward substitution:

AB = I ⇔ [B11 B12 ··· B1,n−1 B1n ; 0 B22 ··· B2,n−1 B2n ; ··· ; 0 0 ··· B(n−1,n−1) B(n−1,n) ; 0 0 ··· 0 Bnn] [A11 A12 ··· A1,n−1 A1n ; 0 A22 ··· A2,n−1 A2n ; ··· ; 0 0 ··· A(n−1,n−1) A(n−1,n) ; 0 0 ··· 0 Ann] = I . (B.28)

(i) B11 A11 = I2 ⇒ B11 = A11^−1 ;
(ii) B11 A12 + B12 A22 = 0 ⇒ B12 = −B11 A12 A22^−1 = −A11^−1 A12 (A11^−1 ⊗ A11^−1) ;
(iii) B11 A13 + B12 A23 + B13 A33 = 0 ⇒ B13 = −(B11 A13 + B12 A23) A33^−1 = A11^−1 (A12 A22^−1 A23 − A13) A33^−1 ; (B.29)
(iv) B11 A14 + B12 A24 + B13 A34 + B14 A44 = 0 ⇒ B14 = −(B11 A14 + B12 A24 + B13 A34) A44^−1 = A11^−1 [A12 A22^−1 (A24 − A23 A33^−1 A34) + A13 A33^−1 A34 − A14] A44^−1 .

(Consult Box B.5 for the general representation of B1n.)

Notable for the GKS algorithm is the following. In the first step, or the forward substitution, a set of equations for the powers (B.30) with respect to the Kronecker–Zehfuss product is constructed by substituting (B.31) and (B.32) into the powers (B.33), set up in matrix equations for the first polynomial, the second polynomial, and finally the nth polynomial:

[x1; x2], [x1; x2]^[2], ..., [x1; x2]^[n−1], [x1; x2]^[n] , (B.30)

y(x) := [y1; y2] = A11 [x1; x2] + A12 [x1; x2] ⊗ [x1; x2] + ··· , (B.31)

x(y) := [x1; x2] = B11 [y1; y2] + B12 [y1; y2] ⊗ [y1; y2] + ··· , (B.32)

[x1; x2]^[n] = [x1; x2] ⊗ ··· ⊗ [x1; x2] (n times) . (B.33)

Throughout, we particularly take advantage of the fundamental Kronecker–Zehfuss product rule (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D), i. e. its reduction to the Cayley product of two matrices. The heavy computation of the matrices {A22, A23, ..., A33, A34, ...} is taken over by the general representation of Amn for all m ≤ n given in Box B.4. The elaborate algebraic manipulation becomes more clear when we consider Examples B.3 and B.4. The first example illustrates our intention to invert a vector-valued bivariate homogeneous polynomial of degree n = 2, namely y(x) = A11 x + A12 x ⊗ x → x(y) = B11 y + B12 y ⊗ y. The Kronecker–Zehfuss product of column arrays is explicitly given. The GKS algorithm determines the set of matrices {B11, B12} from the two given matrices A11 and A12.
In contrast, the second example introduces the problem of inversion of a vector-valued bivariate homogeneous polynomial of degree n = 3, namely y(x) = A11 x + A12 x ⊗ x + A13 x ⊗ x ⊗ x → x(y) = B11 y + B12 y ⊗ y + B13 y ⊗ y ⊗ y. An explicit representation of the Kronecker–Zehfuss product of double and triple column arrays is presented. Again, the GKS algorithm determines the set of matrices {B11, B12, B13} from the three given matrices A11, A12, and A13.

Example B.3 (Inversion of a bivariate homogeneous polynomial of degree n = 2).

Assume the bivariate homogeneous polynomial y(x) = A11 x + A12 x ⊗ x to be given and find the inverse bivariate homogeneous polynomial x(y) = B11 y + B12 y ⊗ y by the GKS algorithm.

Basic equations:

y(x) = [y1; y2] = A11 [x1; x2] + A12 [x1; x2] ⊗ [x1; x2] (B.34)

or

y1 = a11^11 x1 + a11^12 x2 + a12^11 x1^2 + (a12^12 + a12^13) x1 x2 + a12^14 x2^2 ,
y2 = a11^21 x1 + a11^22 x2 + a12^21 x1^2 + (a12^22 + a12^23) x1 x2 + a12^24 x2^2 , (B.35)

[x1; x2] ⊗ [x1; x2] = [x1; x2]^[2] = [x1^2; x1 x2; x2 x1; x2^2] (B.36)

(Kronecker–Zehfuss product);

x(y) = [x1; x2] = B11 [y1; y2] + B12 [y1; y2] ⊗ [y1; y2] (B.37)

or

x1 = b11^11 y1 + b11^12 y2 + b12^11 y1^2 + (b12^12 + b12^13) y1 y2 + b12^14 y2^2 ,
x2 = b11^21 y1 + b11^22 y2 + b12^21 y1^2 + (b12^22 + b12^23) y1 y2 + b12^24 y2^2 , (B.38)

[y1; y2] ⊗ [y1; y2] = [y1; y2]^[2] = [y1^2; y1 y2; y2 y1; y2^2] (B.39)

(Kronecker–Zehfuss product).

1st polynomial:

[x1; x2] = B11 (A11 [x1; x2] + A12 [x1; x2] ⊗ [x1; x2]) + B12 (A11 [x1; x2] + A12 [x1; x2] ⊗ [x1; x2]) ⊗ (A11 [x1; x2] + A12 [x1; x2] ⊗ [x1; x2]) =
= B11 A11 [x1; x2] + (B11 A12 + B12 A11 ⊗ A11) [x1; x2]^[2] + β13 . (B.40)

(According to E. Grafarend and B. Schaffrin (1993) or W. H. Steeb (1991).) Note that we here have used (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D), i. e.

(A11 [x1; x2]) ⊗ (A11 [x1; x2]) = (A11 ⊗ A11) ([x1; x2] ⊗ [x1; x2]) . (B.41)

2nd polynomial:

[x1; x2] ⊗ [x1; x2] = B22 [y1; y2] ⊗ [y1; y2] + β23 = B22 (A11 ⊗ A11) [x1; x2] ⊗ [x1; x2] + β23 . (B.42)

Forward substitution:

[[x1; x2]; [x1; x2] ⊗ [x1; x2]] = [B11 B12 ; 0 B22] [A11 A12 ; 0 A22] [[x1; x2]; [x1; x2] ⊗ [x1; x2]] + [β13; β23] , (B.43)

subject to A22 = A11 ⊗ A11. Note that the matrices A and B are upper triangular such that AB = I6 ⇔ B = A^−1.

Backward substitution:

AB = I6 ⇔ [B11 B12 ; 0 B22] [A11 A12 ; 0 A22] = I6 ; (B.44)

(i) B11 A11 = I2 ⇒ B11 = A11^−1 ,
(ii) B11 A12 + B12 A22 = 0 ⇒ B12 = −B11 A12 A22^−1 = −A11^−1 A12 (A11^−1 ⊗ A11^−1) . (B.45)

First, we have used (A ⊗ B)^−1 = A^−1 ⊗ B^−1 for two invertible square matrices A and B. Second, we have used the standard solution of a system of upper triangular matrix equations. For the inverse polynomial representation, only the elements of the first row of the matrix B are of interest. An explicit write-up is

[x1; x2] = A11^−1 [y1; y2] − A11^−1 A12 (A11^−1 ⊗ A11^−1) [y1; y2]^[2] . (B.46)

End of Example.

Example B.4 (Inversion of a bivariate homogeneous polynomial of degree n = 3).

Assume the bivariate homogeneous polynomial y(x) = A11 x + A12 x ⊗ x + A13 x ⊗ x ⊗ x to be given and find the inverse bivariate homogeneous polynomial x(y) = B11 y + B12 y ⊗ y + B13 y ⊗ y ⊗ y by the GKS algorithm.

Basic equations:

y(x) = [y1; y2] = A11 [x1; x2] + A12 [x1; x2]^[2] + A13 [x1; x2]^[3] , (B.47)

[x1; x2] ⊗ [x1; x2] ⊗ [x1; x2] = [x1; x2]^[3] = [x1^3; x1^2 x2; x1 x2 x1; x1 x2^2; x2 x1^2; x2 x1 x2; x2^2 x1; x2^3] ∈ R^(8×1) (B.48)

(triple Kronecker–Zehfuss product);

x(y) = [x1; x2] = B11 [y1; y2] + B12 [y1; y2]^[2] + B13 [y1; y2]^[3] , (B.49)

[y1; y2] ⊗ [y1; y2] ⊗ [y1; y2] = [y1; y2]^[3] = [y1^3; y1^2 y2; y1 y2 y1; y1 y2^2; y2 y1^2; y2 y1 y2; y2^2 y1; y2^3] ∈ R^(8×1) (B.50)

(triple Kronecker–Zehfuss product).
1st polynomial:

[x1; x2] = B11 Σ_{k=1}^3 A1k [x1; x2]^[k] + B12 (Σ_{k1=1}^3 A1,k1 [x1; x2]^[k1]) ⊗ (Σ_{k2=1}^3 A1,k2 [x1; x2]^[k2]) + B13 (Σ_{k1=1}^3 A1,k1 [x1; x2]^[k1]) ⊗ (Σ_{k2=1}^3 A1,k2 [x1; x2]^[k2]) ⊗ (Σ_{k3=1}^3 A1,k3 [x1; x2]^[k3]) =
= B11 A11 [x1; x2] + (B11 A12 + B12 A11 ⊗ A11) [x1; x2]^[2] + [B11 A13 + B12 (A11 ⊗ A12 + A12 ⊗ A11) + B13 (A11 ⊗ A11 ⊗ A11)] [x1; x2]^[3] + β14 . (B.51)

2nd polynomial:

[x1; x2]^[2] = B22 [y1; y2]^[2] + B23 [y1; y2]^[3] + β24 =
= B22 (A11 ⊗ A11) [x1; x2]^[2] + [B22 (A11 ⊗ A12 + A12 ⊗ A11) + B23 (A11 ⊗ A11 ⊗ A11)] [x1; x2]^[3] + β24 . (B.52)

3rd polynomial:

[x1; x2]^[3] = B33 [y1; y2]^[3] + β34 = B33 (A11 ⊗ A11 ⊗ A11) [x1; x2]^[3] + β34 . (B.53)

Forward substitution:

[[x1; x2]^[1]; [x1; x2]^[2]; [x1; x2]^[3]] = [B11 B12 B13 ; 0 B22 B23 ; 0 0 B33] [A11 A12 A13 ; 0 A22 A23 ; 0 0 A33] [[x1; x2]^[1]; [x1; x2]^[2]; [x1; x2]^[3]] + [β14; β24; β34] , (B.54)

subject to

A22 = A11 ⊗ A11 , A23 = A11 ⊗ A12 + A12 ⊗ A11 , A33 = A11 ⊗ A11 ⊗ A11 . (B.55)

Both the matrices A and B are upper triangular such that

AB = I14 ⇔ B = A^−1 . (B.56)

Backward substitution:

AB = I14 ⇔ [B11 B12 B13 ; 0 B22 B23 ; 0 0 B33] [A11 A12 A13 ; 0 A22 A23 ; 0 0 A33] = I14 ; (B.57)

(i) B11 A11 = I2 ⇒ B11 = A11^−1 ;
(ii) B11 A12 + B12 A22 = 0 ⇒ B12 = −B11 A12 A22^−1 = −A11^−1 A12 (A11^−1 ⊗ A11^−1) ; (B.58)
(iii) B11 A13 + B12 A23 + B13 A33 = 0 ⇒ B13 = −(B11 A13 + B12 A23) A33^−1 =
= −A11^−1 A13 (A11^−1 ⊗ A11^−1 ⊗ A11^−1) + A11^−1 A12 (A11^−1 ⊗ A11^−1) (A11 ⊗ A12 + A12 ⊗ A11) (A11^−1 ⊗ A11^−1 ⊗ A11^−1) =
= −A11^−1 [A13 − A12 (A11^−1 ⊗ A11^−1) (A11 ⊗ A12 + A12 ⊗ A11)] (A11^−1 ⊗ A11^−1 ⊗ A11^−1) .

First, we have used (A ⊗ B)^−1 = A^−1 ⊗ B^−1 for two invertible square matrices A and B; second, we have used the standard solution of a system of upper triangular matrix equations.
For the inverse polynomial representation, only the elements of the first row of the matrix B are of interest. An explicit write-up is

[x1; x2] = A11^−1 [y1; y2] − A11^−1 A12 (A11^−1 ⊗ A11^−1) [y1; y2]^[2] − A11^−1 [A13 − A12 (A11^−1 ⊗ A11^−1) (A11 ⊗ A12 + A12 ⊗ A11)] (A11^−1 ⊗ A11^−1 ⊗ A11^−1) [y1; y2]^[3] . (B.59)

End of Example.

B-3 Inversion of a multivariate homogeneous polynomial of degree n

Assume a multivariate homogeneous polynomial of degree n, namely y(x), to be given and find the inverse multivariate homogeneous polynomial of degree n, namely x(y):

y(x) = A11 x + A12 x^[2] + ··· + A1,n−1 x^[n−1] + A1n x^[n] = Σ_{k=1}^n A1k x^[k] ∀ x ∈ R^p ,
x(y) = B11 y + B12 y^[2] + ··· + B1,n−1 y^[n−1] + B1n y^[n] = Σ_{k=1}^n B1k y^[k] ∀ y ∈ R^p . (B.60)

This defines the general problem of homogeneous polynomial series inversion. It reduces to constructing the matrices {B11, B12, ..., B1,n−1, B1n} from the given matrices {A11, A12, ..., A1,n−1, A1n} by the GKS algorithm as outlined in Box B.3.

Box B.3 (Algorithm for the construction of an inverse multivariate homogeneous polynomial of degree n).

1st polynomial:

x = Σ_{k1=1}^n B1,k1 (Σ_{k2=1}^n A1,k2 x^[k2])^[k1] + β1,n+1 . (B.61)

2nd polynomial:

x^[2] = Σ_{k=2}^n B2k y^[k] + β2,n+1 = Σ_{k1=2}^n B2,k1 (Σ_{k2=1}^n A1,k2 x^[k2])^[k1] + β2,n+1 . (B.62)

nth polynomial:

x^[n] = Bnn y^[n] + βn,n+1 = Bnn Ann x^[n] + βn,n+1 . (B.63)

Again taking advantage of the basic product rule between Cayley and Kronecker–Zehfuss multiplication, i. e. (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D), we arrive at the following results.

Forward substitution:

[x; x^[2]; ...; x^[n−1]; x^[n]] = B A [x; x^[2]; ...; x^[n−1]; x^[n]] . (B.64)

(Note that A is given by Box B.4.)

Backward substitution:

AB = I ⇒ see the first row B1n given in Box B.5. (B.65)
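The univariate case of Boxes B.1 and B.5 is easily checked numerically. The following Python sketch (our own illustration; the coefficient values are arbitrary) builds the rows of the upper triangular matrix A from powers of y(x) and performs the backward substitution for the first row of B = A^−1:

```python
def pmul(p, q, nmax):
    """Multiply two polynomials without constant term (dict: degree -> coeff),
    truncating all terms of degree greater than nmax."""
    r = {}
    for i, ci in p.items():
        for j, cj in q.items():
            if i + j <= nmax:
                r[i + j] = r.get(i + j, 0.0) + ci * cj
    return r

def invert_univariate(a):
    """Given y(x) = a[0] x + a[1] x^2 + ... + a[n-1] x^n, return the
    coefficients [b11, ..., b1n] of the inverse series
    x(y) = b11 y + ... + b1n y^n (forward/backward substitution)."""
    n = len(a)
    y = {k + 1: a[k] for k in range(n)}
    # rows of the triangular matrix: A[m][k] = coefficient of x^k in y(x)^m
    A = {1: y}
    for m in range(2, n + 1):
        A[m] = pmul(A[m - 1], y, n)
    # backward substitution for the first row of B = A^(-1)
    b = [0.0] * (n + 1)
    b[1] = 1.0 / A[1][1]
    for k in range(2, n + 1):
        b[k] = -sum(b[i] * A[i].get(k, 0.0) for i in range(1, k)) / A[k][k]
    return b[1:]

a11, a12, a13 = 2.0, 0.5, -0.25
b11, b12, b13 = invert_univariate([a11, a12, a13])
```

The results agree with the closed forms of Example B.2: b11 = a11^−1, b12 = −a11^−3 a12, and b13 = a11^−4 (2 a11^−1 a12^2 − a13).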
First, in the forward substitution, we construct a set of equations for {x, x^[2], ..., x^[n−1], x^[n]}, where the column array powers are to be understood with respect to the Kronecker–Zehfuss product. Again, advantage is taken of the basic product rule (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D), also called "reduction of the Kronecker–Zehfuss product to the Cayley product", by substituting such an identity into the power series equations, the upper triangular matrix A as well as its inverse B := A^−1. Second, while Box B.4 collects the input matrices Amn, which build up A, by means of Box B.5 we compute its inverse B, namely its first row {B11, B12, ..., B1,n−1, B1n}, the only one being needed. A symbolic computer manipulation software package of Box B.5 is available from the author, in particular for the backward substitution step.

Box B.4 (Inversion of a multivariate homogeneous polynomial of degree n: the upper triangular matrix A).

Recurrence relation:

Amn = Σ_{i=1}^{n−(m−1)} A1i ⊗ A(m−1,n−i) ∀ m ≤ n . (B.66)

Inversion relations (A11, ..., A1n given):

A22 = A11^[2] , A2n = Σ_{i=1}^{n−1} A1i ⊗ A1,n−i ,
A33 = A11^[3] , A3n = Σ_{i=1}^{n−2} Σ_{j=1}^{n−i−1} A1i ⊗ A1j ⊗ A1,n−i−j ,
A44 = A11^[4] , A4n = Σ_{i=1}^{n−3} Σ_{j=1}^{n−i−2} Σ_{k=1}^{n−i−j−1} A1i ⊗ A1j ⊗ A1k ⊗ A1,n−i−j−k , (B.67)
A55 = A11^[5] , A5n = Σ_{i=1}^{n−4} Σ_{j=1}^{n−i−3} Σ_{k=1}^{n−i−j−2} Σ_{l=1}^{n−i−j−k−1} A1i ⊗ A1j ⊗ A1k ⊗ A1l ⊗ A1,n−i−j−k−l ,
...
Amn = Σ_{k1=1}^{n−(m−1)} Σ_{k2=1}^{n−k1−(m−2)} Σ_{k3=1}^{n−k1−k2−(m−3)} ··· Σ_{km−1=1}^{n−k1−···−km−2−1} A1,k1 ⊗ A1,k2 ⊗ A1,k3 ⊗ ··· ⊗ A1,km−1 ⊗ A1,n−k1−k2−···−km−1 .

Box B.5 (Inversion of a multivariate homogeneous polynomial of degree n: first row of the triangular matrix B).

Recurrence relation:

B1n = −(Σ_{i=1}^{n−1} B1i Ain) Ann^−1 ∀ n ≥ 2 , subject to Ann = A11^[n] , Ann^−1 = (A11^−1)^[n] . (B.68)

Inversion relations (A11, ..., A1n given):

B11 = +A11^−1 , B12 = −A11^−1 A12 A22^−1 , B13 = +A11^−1 (A12 A22^−1 A23 − A13) A33^−1 ,
B14 = +A11^−1 [A12 A22^−1 (A24 − A23 A33^−1 A34) + A13 A33^−1 A34 − A14] A44^−1 ,
B15 = +A11^−1 [A12 A22^−1 (A25 − A24 A44^−1 A45 − A23 A33^−1 (A35 − A34 A44^−1 A45)) + A13 A33^−1 (A35 − A34 A44^−1 A45) + A14 A44^−1 A45 − A15] A55^−1 , (B.69)
...
B1n = +A11^−1 [Σ_{i=2}^{n−1} A1i Aii^−1 Ain − Σ_{i=2}^{n−2} A1i Aii^−1 Ai,n−1 A(n−1,n−1)^−1 A(n−1,n) − Σ_{i=2}^{n−3} A1i Aii^−1 Ai,n−2 A(n−2,n−2)^−1 (A(n−2,n) − A(n−2,n−1) A(n−1,n−1)^−1 A(n−1,n)) − ··· − A1n] Ann^−1 ∀ n ≥ 2 .

J Gauss surface normal coordinates in geometry and gravity space

Three-dimensional geodesy, minimal distance mapping, geometric heights. Reference plane, reference sphere, reference ellipsoid-of-revolution, reference triaxial ellipsoid.

With the advent of artificial satellites in an Earth-bound orbit, geodesists succeeded in positioning points of the topographic surface T² by a set of {X, Y, Z} ∈ R³ coordinates in a three-dimensional reference frame at the mass center of the Earth, oriented along the equatorial axes at some reference epoch t0 ∈ R. In particular, global positioning systems ("global problem solver": GPS) were responsible for the materialization of three-dimensional geodesy in a Euclidean space. Based upon a triple {X, Y, Z} ∈ R³ of coordinates, new concepts for converting these coordinates into heights with respect to a reference surface have been developed.

In the geometry space, the triplet {X, Y, Z} ∈ T² is transformed by a geodesic projection into geometric heights with respect to (i) a reference plane P², (ii) a reference sphere S²_r, (iii) a reference ellipsoid-of-revolution E²_{A1,A2}, or (iv) a reference triaxial ellipsoid E²_{A1,A2,A3}. First, the geodesic projection is performed by a straight line as the geodesic in flat geometry space. Second, the special geodesic passing the point {X, Y, Z} ∈ T² has been chosen which has minimal distance S to the reference surface.
The length of the geodesic from $\{X, Y, Z\} \in \mathbb{T}^2$ to $\{\hat x, \hat y, \hat z\} \in \mathbb{P}^2$ or $\mathbb{S}^2_r$ or $\mathbb{E}^2_{A_1,A_2}$ or $\mathbb{E}^2_{A_1,A_2,A_3}$, in short $\{\hat x, \hat y, \hat z\}$ being determined by the minimal distance mapping, constitutes the projective height in geometry space, namely of type (i) planar, (ii) spherical, (iii) ellipsoidal, or (iv) triaxial ellipsoidal.

Section J-1. By algebraic means, Section J-1 outlines various step procedures to establish projection heights in geometry space. By means of minimal distance mapping, various computational steps, either forward or backward, are reviewed depending on the nature of the projection surface. The projection surfaces include (i) the plane, (ii) the sphere, (iii) the ellipsoid-of-revolution, and (iv) the triaxial ellipsoid.

Section J-2. More specifically, Section J-2 reviews various algorithms for computing Gauss surface normal coordinates for the case of an ellipsoid-of-revolution. The highlight is the computational algorithm by means of a Gröbner basis and the Buchberger algorithm, establishing an ideal for the polynomial solution of the minimum distance mapping. From the Baltic Sea Level Project, we refer to detailed solutions of twenty-one points distributed over Finland, Sweden, Lithuania, Poland, and Germany, taking reference to the World Geodetic Datum 2000 with the $\{A_1, A_2\}$ data $A_1 = 6\,378\,136.602$ m (semi-major axis) and $A_2 = 6\,356\,751.860$ m (semi-minor axis) following E. Grafarend and A. Ardalan (1999).

Section J-3. Finally, Section J-3 presents the computation of Gauss surface normal coordinates for the case of a triaxial ellipsoid. For the Earth, we compute the position, the orientation, and the form parameters of the best fitting triaxial ellipsoid, where we chose the geoid as the ideal Earth figure closest to the mean sea level. This important result is extended to other celestial bodies of triaxial nature, namely the Moon, Mars, Phobos, Amalthea, Io, and Mimas.
638 J Gauss surface normal coordinates in geometry and gravity space

J-1 Projective heights in geometry space: from planar/spherical to ellipsoidal mapping

First, we outline how points in $\{\mathbb{R}^3, \delta_{kl}\}$ are connected by geodesics, namely by straight lines which are derived from a variational principle. The general solution of the differential equations of a geodesic, in particular in terms of an affine parameter of its length, is represented by a linear one-dimensional manifold embedded into $\{\mathbb{R}^3, \delta_{kl}\}$. Second, based upon geodesics in $\{\mathbb{R}^3, \delta_{kl}\}$, in Fig. J.1, we introduce the orthogonal projection of a point $P$ (the peak of a mountain, the top of a tower) onto a horizontal plane $\mathbb{P}^2$ through $P_0$ by $p = \pi(P)$, generating the height $\overline{pP}$ of $P$ with respect to the plane $\mathbb{P}^2$ through $P_0$. Alternatively, we may interpret the orthogonal projection $p = \pi(P)$ along a geodesic/straight line as a minimal distance mapping of $P$ with respect to $P_0$ generating $p = \pi(P)$. Third, by Fig. J.2, we illustrate the minimal distance mapping of a topographic point $P \in \mathbb{T}^2$ as an element of the topographic surface (two-dimensional Riemann manifold) of the Earth along a geodesic/straight line onto a plane $\mathbb{P}^2$, which may be chosen as the horizontal plane at some reference point. In this way, we generate the orthogonal projection $p = \pi(P)$ and the length of the shortest distance $\overline{pP}$, called the geometric height of $P$ with respect to $\mathbb{P}^2$. The choice of the height $\overline{pP}$ is very popular in photogrammetric and engineering surveying. By contrast, by Figs. J.3 and J.4, we illustrate the minimal distance mapping of a point $P$ onto $p \in \mathbb{S}^2_r$ or $p \in \mathbb{E}^2_{A_1,A_2}$ or $p \in \mathbb{E}^2_{A_1,A_2,A_3}$ along a geodesic/straight line through $P$. Let us here assume that the reference surface is no longer the plane $\mathbb{P}^2$, but the sphere $\mathbb{S}^2_r$ of radius $r$ or the ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$ of semi-major axis $A_1$ and semi-minor axis $A_2$ or the triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$ with the axes $A_1 > A_2 > A_3$.
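The minimal distance mapping described above minimizes the squared Euclidean distance between the fixed point $X$ and a parameterized surface point $x(u, v)$; Box J.1 formalizes this as the functional $S$. As a quick plausibility check, the sketch below (the parameterization, the test point, and all names are illustrative choices, not from the text) evaluates that functional for a sphere of radius $r$ and confirms that the geometrically obvious foot point $x = rX/\|X\|$ yields a smaller value than neighbouring surface points.

```python
import numpy as np

def S(X, surface, u, v):
    # squared distance functional between X and x(u, v), cf. Box J.1
    d = np.asarray(X, float) - surface(u, v)
    return d @ d

r = 2.0  # radius of the reference sphere (illustrative value)

def sphere(u, v):
    # longitude/latitude parameterization of the sphere of radius r
    return r * np.array([np.cos(v) * np.cos(u),
                         np.cos(v) * np.sin(u),
                         np.sin(v)])

X = np.array([3.0, 0.0, 0.0])    # topographic point outside the sphere
S0 = S(X, sphere, 0.0, 0.0)      # foot point x = r X/|X| at u = v = 0
h = np.linalg.norm(X) - r        # spherical height, here h = 1
# S0 equals h^2, and every perturbed parameter pair gives a larger S
worse = min(S(X, sphere, du, dv)
            for du in (-0.3, 0.3) for dv in (-0.3, 0.3))
```

Here `S0` evaluates to $h^2 = 1$, while `worse` exceeds it, consistent with $u = v = 0$ being the minimizer.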
Fourth, by Box J.1, let us here outline a variant of the analytical treatment of generating geometric heights with respect to (i) the plane $\mathbb{P}^2$, (ii) the sphere $\mathbb{S}^2_r$, (iii) the ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$, and (iv) the triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$ by the minimal distance principle. By (J.15)–(J.18), the constraint Lagrangean is defined with respect to the Euclidean distance $\|X - x\|^2/2$ subject to $X \in \mathbb{T}^2$, $x \in \mathbb{P}^2$ or $\mathbb{S}^2_r$ or $\mathbb{E}^2_{A_1,A_2}$ or $\mathbb{E}^2_{A_1,A_2,A_3}$, and the following constraint: the point $x$ is an element of the plane $\mathbb{P}^2$, the sphere $\mathbb{S}^2_r$, the ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$, or the triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$. The constraint enters the Lagrangean by a Lagrange multiplier $\Lambda$. The routine of constrained optimization is followed by (J.9)–(J.13). The focus is on the normal equations (J.10)–(J.13), which constitute a system of algebraic equations of second degree. A solution algorithm is outlined by (J.19)–(J.23). In order to guarantee a minimal distance solution, the solution points of the nonlinear equations of normal type have to be tested with respect to the second variation, i.e. the positivity of the Hesse matrix of second derivatives with respect to the unknown coordinates $\{x^1, x^2, x^3\}$ of $p = \pi(P)$. First, we alternatively present a second variant of the construction of projective heights in geometry space by a minimal distance mapping of a topographic point $X \in \mathbb{T}^2$ onto a reference surface of type (i) plane $\mathbb{P}^2$, (ii) sphere $\mathbb{S}^2_r$, (iii) ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$, and (iv) triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$, namely based upon $\|X - x(u, v)\| = \mathrm{ext.}$ Here, $x(u, v)$ indicates a suitable parameterization of the surfaces (i)–(iv) by means of coordinates $\{u, v\}$, which constitute a chart of the Riemannian manifold (i)–(iv).
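For the plane, the constrained optimization closes in elementary form: eliminating $\hat x^k = X^k - a_k\hat\Lambda$ from the normal equations and inserting into the constraint $a_1\hat x^1 + a_2\hat x^2 + a_3\hat x^3 + a_4 = 0$ yields the multiplier and, by back-substitution, the foot point and height. The sketch below is an illustrative implementation (function and variable names are not from the text).

```python
import numpy as np

def project_to_plane(X, a, a4):
    """Minimal distance mapping onto the plane a.x + a4 = 0:
    the normal equations give x = X - a*Lambda, and the constraint
    yields (a.a)*Lambda = a.X + a4; the foot point p = pi(P) and the
    geometric height h = ||X - p|| follow by back-substitution."""
    X, a = np.asarray(X, float), np.asarray(a, float)
    lam = (a @ X + a4) / (a @ a)    # Lagrange multiplier
    p = X - lam * a                 # foot point on the plane
    h = np.linalg.norm(X - p)       # geometric height
    return p, h
```

For the horizontal plane $z = 0$ and $X = (1, 2, 3)$, this returns $p = (1, 2, 0)$ and $h = 3$; since the Hesse matrix of the planar case is the unit matrix, every solution is automatically a minimum.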
The first variation $\delta L(u, v) = \delta\|X - x(u, v)\| = 0$ leads to the normal equations (J.15)–(J.18), which establish the orthogonality of type $X - x(\tilde u, \tilde v) = h\,n$ and $\partial x/\partial u^\alpha\,(\tilde u, \tilde v) = t_\alpha$, namely of the surface normal $n$ and the surface tangent vectors $t_\alpha$ for all $\alpha \in \{1, 2\}$. In particular, we obtain projective heights for (i) the plane $\mathbb{P}^2$ by (J.5), (ii) the sphere $\mathbb{S}^2_r$ by (J.6), (iii) the ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$ by (J.7), and (iv) the triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$ by (J.8). With respect to the second variation, besides (J.15)–(J.19) as the necessary condition for a minimal distance mapping, (J.19)–(J.23) establish the sufficiency condition. The sufficiency condition has been interpreted by the matrices of the first and second fundamental form in E. Grafarend and P. Lohse (1991). In addition, we like to refer to N. Bartelme and P. Meissl (1975), W. Benning (1974), H. Fröhlich and H. H. Hansen (1976), B. Heck (2002), M. Heikkinen (1982), M. K. Paul (1973), P. O. Penev (1978), M. Pick (1985), H. Sünkel (1976), T. Vincenty (1976, 1980), and recently J. Awange and E. Grafarend (2005).

$$\overline{P_0 p} = \sqrt{\overline{P_0 P}^{\,2} - \overline{pP}^{\,2}} = \overline{P_0 P}\,\sqrt{1 - \left(\overline{pP}/\overline{P_0 P}\right)^2} \approx \overline{P_0 P}\left(1 - \overline{pP}^{\,2}/2\,\overline{P_0 P}^{\,2}\right)$$

Fig. J.1. Projective heights in geometry space, orthogonal projection $p = \pi(P)$ of a topographic point $X \in \mathbb{T}^2$ onto a local horizontal plane through $X_0 \in \mathbb{T}^2$ ($X - x = h$), minimal distance mapping $\delta\|X - x\| = 0$. Indeed, for a suitable choice of surface parameters/surface coordinates of a chart, the unconstrained optimization problem may be preferable to constrained optimization, which, however, we do not want to treat here.

Fig. J.2. Projective heights in geometry space, minimal distance mapping with respect to a reference plane $\mathbb{P}^2$ at $X_0 \in \mathbb{T}^2$, orthogonal projection $p = \pi(P)$.

Fig. J.3.
Projective heights in geometry space, minimal distance mapping with respect to a reference sphere $\mathbb{S}^2_r$, spherical heights $h_S$ (length of the geodesic from $X \in \mathbb{T}^2$ to $x \in \mathbb{S}^2_r$).

Let us calculate the projection heights in geometry space, in detail the minimal distance with respect to a reference surface of type (i) plane $\mathbb{P}^2$, (ii) sphere $\mathbb{S}^2_r$, (iii) ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$, and (iv) triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$, in Box J.2.

Fig. J.4. Projective heights in geometry space, minimal distance mapping with respect to a reference triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$ or a reference ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$, ellipsoidal heights $h_E$ (length of the geodesic from $X \in \mathbb{T}^2$ to $x \in \mathbb{E}^2_{A_1,A_2}$ or $\mathbb{E}^2_{A_1,A_2,A_3}$).

Box J.1 (Projection heights in geometry space: minimal distance mapping).

Geodesics in geometry space:
$$\frac{d}{dt}\,\frac{\dot x_k}{\sqrt{\dot x^2 + \dot y^2 + \dot z^2}} = 0\,. \tag{J.1}$$

Minimal distance mapping:
$$S := [X - x(u, v)]^2 + [Y - y(u, v)]^2 + [Z - z(u, v)]^2\,. \tag{J.2}$$

Reference surface:
$$\text{(i)}\;\; x(u, v) \in \mathbb{P}^2 \;\text{(plane)}\,, \quad \text{(ii)}\;\; x(u, v) \in \mathbb{S}^2_r \;\text{(sphere)}\,, \quad \text{(iii)}\;\; x(u, v) \in \mathbb{E}^2_{A_1,A_2} \;\text{(ellipsoid-of-revolution)}\,, \quad \text{(iv)}\;\; x(u, v) \in \mathbb{E}^2_{A_1,A_2,A_3} \;\text{(triaxial ellipsoid)}\,. \tag{J.3}$$

Heights in geometry space:
$$h := \|X - x(u, v)\|\,. \tag{J.4}$$

Box J.2 (Projection heights in geometry space: minimal distance mapping, formulae and relations).
Stationary functional:

(i)
$$L(x^1, x^2, x^3, x^4) := \tfrac{1}{2}\|X - x\|^2 + \Lambda(a_1 x + a_2 y + a_3 z + a_4) = \tfrac{1}{2}\left[(X - x^1)^2 + (Y - x^2)^2 + (Z - x^3)^2\right] + x^4(a_1 x^1 + a_2 x^2 + a_3 x^3 + a_4)\,, \tag{J.5}$$

(ii)
$$L(x^1, x^2, x^3, x^4) := \|X - x\|^2 + \Lambda(x^2 + y^2 + z^2 - r^2) = (X - x^1)^2 + (Y - x^2)^2 + (Z - x^3)^2 + x^4\left[(x^1)^2 + (x^2)^2 + (x^3)^2 - r^2\right]\,, \tag{J.6}$$

(iii)
$$L(x^1, x^2, x^3, x^4) := \|X - x\|^2 + \Lambda\left[\frac{(x^1)^2 + (x^2)^2}{A_1^2} + \frac{(x^3)^2}{A_2^2} - 1\right]$$
or
$$L(x^1, x^2, x^3, x^4) := (X - x^1)^2 + (Y - x^2)^2 + (Z - x^3)^2 + x^4\left(A_2^2[(x^1)^2 + (x^2)^2] + A_1^2 (x^3)^2 - A_1^2 A_2^2\right)\,, \tag{J.7}$$

(iv)
$$L(x^1, x^2, x^3, x^4) := \|X - x\|^2 + \Lambda\left[\frac{(x^1)^2}{A_1^2} + \frac{(x^2)^2}{A_2^2} + \frac{(x^3)^2}{A_3^2} - 1\right]$$
or
$$L(x^1, x^2, x^3, x^4) := \|X - x\|^2 + x^4\left[A_2^2 A_3^2 (x^1)^2 + A_1^2 A_3^2 (x^2)^2 + A_1^2 A_2^2 (x^3)^2 - A_1^2 A_2^2 A_3^2\right]\,. \tag{J.8}$$

642 J Gauss surface normal coordinates in geometry and gravity space

Continuation of Box.

First variation:
$$\frac{\partial L}{\partial x^\mu}(\hat x^\nu) = 0 \quad \forall\, \mu, \nu \in \{1, 2, 3, 4\} \tag{J.9}$$
$$\Longleftrightarrow$$

(i) plane $\mathbb{P}^2$:
$$-(X - \hat x^1) + a_1 \hat x^4 = 0\,, \quad -(Y - \hat x^2) + a_2 \hat x^4 = 0\,, \quad -(Z - \hat x^3) + a_3 \hat x^4 = 0\,, \quad a_1 \hat x^1 + a_2 \hat x^2 + a_3 \hat x^3 + a_4 = 0\,; \tag{J.10}$$

(ii) sphere $\mathbb{S}^2_r$:
$$-(X - \hat x^1) + \hat x^1 \hat x^4 = 0\,, \quad -(Y - \hat x^2) + \hat x^2 \hat x^4 = 0\,, \quad -(Z - \hat x^3) + \hat x^3 \hat x^4 = 0\,, \quad (\hat x^1)^2 + (\hat x^2)^2 + (\hat x^3)^2 - r^2 = 0\,; \tag{J.11}$$

(iii) ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$:
$$-(X - \hat x^1) + A_2^2 \hat x^1 \hat x^4 = 0\,, \quad -(Y - \hat x^2) + A_2^2 \hat x^2 \hat x^4 = 0\,, \quad -(Z - \hat x^3) + A_1^2 \hat x^3 \hat x^4 = 0\,, \quad A_2^2[(\hat x^1)^2 + (\hat x^2)^2] + A_1^2 (\hat x^3)^2 - A_1^2 A_2^2 = 0\,; \tag{J.12}$$

(iv) triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$:
$$-(X - \hat x^1) + A_2^2 A_3^2 \hat x^1 \hat x^4 = 0\,, \quad -(Y - \hat x^2) + A_1^2 A_3^2 \hat x^2 \hat x^4 = 0\,, \quad -(Z - \hat x^3) + A_1^2 A_2^2 \hat x^3 \hat x^4 = 0\,, \quad A_2^2 A_3^2 (\hat x^1)^2 + A_1^2 A_3^2 (\hat x^2)^2 + A_1^2 A_2^2 (\hat x^3)^2 - A_1^2 A_2^2 A_3^2 = 0\,. \tag{J.13}$$

Second variation. The second variation decides whether a solution is of type "minimum", "maximum", or "turning point".
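The role of the second variation can be seen concretely for the sphere: the normal equations (J.11) admit two multiplier roots, $\hat\Lambda = -1 \pm \|X\|/r$, one projecting $X$ to the near side and one to the far side of the sphere, and only the near-side root passes the positivity test. The values and helper variables below are an illustrative choice, not from the text.

```python
import numpy as np

# Both roots Lambda = -1 +- |X|/r solve the spherical normal
# equations (J.11); the Hesse positivity condition 1 + Lambda > 0
# singles out the minimal distance (near side) solution.
X = np.array([0.0, 0.0, 5.0])     # illustrative topographic point
r = 2.0                           # illustrative sphere radius
nX = np.linalg.norm(X)
results = []
for lam in (-1.0 + nX / r, -1.0 - nX / r):
    x = X / (1.0 + lam)           # back-substituted foot point
    results.append((1.0 + lam > 0,            # second variation test
                    np.linalg.norm(X - x)))   # distance to X
# admissible root: foot point (0, 0, 2), distance 3 (near side);
# rejected root: antipodal point (0, 0, -2), distance 7 (far side)
```

Both candidate points are stationary, yet only the first is a minimum; this is exactly the discrimination the Hesse matrices below perform.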
In our case:
$$\frac{1}{2}\,\frac{\partial^2 L}{\partial x^k \partial x^l}(\hat x^\gamma) > 0 \tag{J.14}$$
$$\Longleftrightarrow$$

(i) plane $\mathbb{P}^2$:
$$\begin{bmatrix} +1 & 0 & 0 \\ 0 & +1 & 0 \\ 0 & 0 & +1 \end{bmatrix} > 0\,; \tag{J.15}$$

(ii) sphere $\mathbb{S}^2_r$:
$$\begin{bmatrix} +1 + \hat\Lambda & 0 & 0 \\ 0 & +1 + \hat\Lambda & 0 \\ 0 & 0 & +1 + \hat\Lambda \end{bmatrix} > 0\,; \tag{J.16}$$

(iii) ellipsoid-of-revolution $\mathbb{E}^2_{A_1,A_2}$:
$$\begin{bmatrix} +1 + A_2^2\hat\Lambda & 0 & 0 \\ 0 & +1 + A_2^2\hat\Lambda & 0 \\ 0 & 0 & +1 + A_1^2\hat\Lambda \end{bmatrix} > 0\,; \tag{J.17}$$

(iv) triaxial ellipsoid $\mathbb{E}^2_{A_1,A_2,A_3}$:
$$\begin{bmatrix} +1 + A_2^2 A_3^2\hat\Lambda & 0 & 0 \\ 0 & +1 + A_1^2 A_3^2\hat\Lambda & 0 \\ 0 & 0 & +1 + A_1^2 A_2^2\hat\Lambda \end{bmatrix} > 0\,. \tag{J.18}$$

The solution algorithm presented in Box J.3 determines in the forward step, from the first variational equations (i), (ii), and (iii), the quantities $\hat x^1$, $\hat x^2$, and $\hat x^3$, and inserts them afterwards into (iv) as a second forward step. The backward step is organized by first solving for $\hat x^4$ in a polynomial equation of type linear, quadratic, or of order three. Second, we have to decide whether our solution fulfills the condition of positivity of the Hesse matrix of second derivatives in order to discriminate the non-admissible solutions.

Box J.3 (Solution algorithm).

Forward step:

(i) plane:
$$(a_1^2 + a_2^2 + a_3^2)\hat\Lambda = a_1 X + a_2 Y + a_3 Z + a_4\,; \tag{J.19}$$

(ii) sphere:
$$(\hat\Lambda + 1)^2 = (X^2 + Y^2 + Z^2)/r^2 \;\Rightarrow\; \hat\Lambda_- = -1 - \sqrt{X^2 + Y^2 + Z^2}/r\,, \quad \hat\Lambda_+ = -1 + \sqrt{X^2 + Y^2 + Z^2}/r\,; \tag{J.20}$$

(iii) ellipsoid-of-revolution:
$$A_1^2 A_2^2 (1 + A_1^2\hat\Lambda)^2 (1 + A_2^2\hat\Lambda)^2 - A_2^2 (X^2 + Y^2)(1 + A_1^2\hat\Lambda)^2 - A_1^2 Z^2 (1 + A_2^2\hat\Lambda)^2 = 0\,; \tag{J.21}$$

(iv) triaxial ellipsoid:
$$A_2^2 A_3^2 X^2 (1 + A_1^2 A_3^2\hat\Lambda)^2 (1 + A_1^2 A_2^2\hat\Lambda)^2 + A_1^2 A_3^2 Y^2 (1 + A_2^2 A_3^2\hat\Lambda)^2 (1 + A_1^2 A_2^2\hat\Lambda)^2 + A_1^2 A_2^2 Z^2 (1 + A_2^2 A_3^2\hat\Lambda)^2 (1 + A_1^2 A_3^2\hat\Lambda)^2 - (1 + A_2^2 A_3^2\hat\Lambda)^2 (1 + A_1^2 A_3^2\hat\Lambda)^2 (1 + A_1^2 A_2^2\hat\Lambda)^2 A_1^2 A_2^2 A_3^2 = 0 \tag{J.22}$$
or
$$a_6 (\hat\Lambda^2)^3 + a_4 (\hat\Lambda^2)^2 + a_2 (\hat\Lambda^2) + a_0 = 0 \tag{J.23}$$
(cubic equation for $\hat\Lambda^2$).
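For case (iii), the quartic (J.21) can be assembled and solved numerically. The sketch below (helper names are illustrative; NumPy's polynomial class does the root finding) picks the largest real root, which for a point outside the surface is the one satisfying the Hesse positivity condition (J.17), and then performs the backward step by resetting the multiplier into the normal equations (J.12).

```python
import numpy as np

def mdm_ellipsoid_of_revolution(X, Y, Z, A1, A2):
    """Minimal distance mapping onto the ellipsoid-of-revolution:
    solve the quartic (J.21) for the Lagrange multiplier and reset it
    into the normal equations (J.12). For a point outside the surface
    the largest real root is the one passing the Hesse test (J.17);
    production code should verify the positivity explicitly."""
    p1 = np.polynomial.Polynomial([1.0, A1**2])   # 1 + A1^2 Lambda
    p2 = np.polynomial.Polynomial([1.0, A2**2])   # 1 + A2^2 Lambda
    q = (A1**2 * A2**2) * p1**2 * p2**2 \
        - A2**2 * (X**2 + Y**2) * p1**2 - A1**2 * Z**2 * p2**2
    lam = max(rt.real for rt in q.roots() if abs(rt.imag) < 1e-8)
    # backward step: x^1, x^2 divide by 1 + A2^2 L, x^3 by 1 + A1^2 L
    x = np.array([X, Y, Z]) / np.array([p2(lam), p2(lam), p1(lam)])
    h = np.linalg.norm(np.array([X, Y, Z]) - x)   # ellipsoidal height
    return x, lam, h
```

Two sanity checks: in the spherical degeneration $A_1 = A_2 = 1$, the point $(2, 0, 0)$ maps to $(1, 0, 0)$ with $h = 1$; for $A_1 = 2$, $A_2 = 1$, the polar point $(0, 0, 3)$ maps to $(0, 0, 1)$ with $h = 2$.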
Backward step: "Reset $\hat x^4 \sim \hat\Lambda$ into (i), (ii), and (iii), solve the three equations for $\hat x^1$, $\hat x^2$, and $\hat x^3$, and test the condition of positivity of the Hesse matrix in order to discriminate the admissible solutions."

J-2 Gauss surface normal coordinates: case study ellipsoid-of-revolution

First, we review surface normal coordinates for the ellipsoid-of-revolution. Second, we extend the derivation to three-dimensional surface normal coordinates in terms of the forward transformation as well as of the backward transformation by means of the constrained minimum distance mapping in terms of the Buchberger algorithm.

644 J Gauss surface normal coordinates in geometry and gravity space

J-21 Review of surface normal coordinates for the ellipsoid-of-revolution

The coordinates of the ellipsoid-of-revolution of type {ellipsoidal longitude, ellipsoidal latitude, ellipsoidal height} are surface normal coordinates in the following sense. They are founded on the famous Gauss map of the surface normal vector $\nu(x)$ of the ellipsoid-of-revolution (J.24) in terms of the semi-major axis $A_1 > A_2$ and of the semi-minor axis $A_2$