J. Inverse Ill-Posed Probl. 2021; 29(3): 351–367

Research Article

Jan Boman*

A hypersurface containing the support of a Radon transform must be an ellipsoid. II: The general case

https://doi.org/10.1515/jiip-2020-0139 Received October 21, 2020; accepted December 8, 2020

Abstract: If the Radon transform of a compactly supported distribution f ̸= 0 in ℝ^n is supported on the set of tangent planes to the boundary ∂D of a bounded convex domain D, then ∂D must be an ellipsoid. The special case of this result when the domain D is symmetric was treated in [J. Boman, A hypersurface containing the support of a Radon transform must be an ellipsoid. I: The symmetric case, J. Geom. Anal. (2020), DOI 10.1007/s12220-020-00372-8]. Here we treat the general case.

Keywords: Support of Radon transform, Radon transform supported in hypersurface

MSC 2010: 44A12

1 Introduction

It has been known for a long time that the Radon transform of a compactly supported distribution f on ℝ^n can be given a natural definition and that the result is a distribution on the manifold of hyperplanes in ℝ^n; see, e.g., [3]. For instance, if n = 2 and f is the Dirac measure at the point a = (a1, a2) ∈ ℝ², then the Radon transform Rf, defined by ⟨Rf, φ⟩ = ⟨f, R∗φ⟩ for test functions φ (see Section 2 for details), is a smooth measure supported on the set of lines that contain the point a. This set of lines corresponds to a sinus shaped curve p = a1 cos α + a2 sin α in the αp-plane, called the sinogram by tomographers. A distribution whose Radon transform is a measure on the set of tangents to a circle can be constructed as follows: Define the functions

f0(x) = (1/π)(1 − |x|²)_+^{−1/2} and f1(x) = (1/π)(1 − |x|²)_+^{1/2}

in the plane. Simple calculations show that

Rf0(ω, p) = 1 and Rf1(ω, p) = (1/2)(1 − p²)

for |p| < 1 and all ω, and obviously Rf(ω, p) = 0 for |p| ≥ 1. Denote the Laplace operator in two variables by ∆ and the characteristic function of the interval [−1, 1] by χ_{[−1,1]}. Using the identity

R(∆f)(ω, p) = ∂_p² Rf(ω, p),

denoting the Dirac measure at the origin in ℝ by δ(⋅), and noting that

(1/2) ∂_p² (1 − p²)_+ = −χ_{[−1,1]}(p) + δ(p − 1) + δ(p + 1),

we see that the distribution f = ∆f1 + f0 satisfies Rf(ω, p) = δ(p − 1) + δ(p + 1).
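These simple calculations are easy to check numerically. The following sketch (plain Python; the function names, the midpoint quadrature, and the tolerances are my choices, not from the paper) evaluates the two line integrals, using the substitution t = (1 − p²)^{1/2} sin θ to tame the endpoint singularity of f0:

```python
import math

def Rf1(p, n=100000):
    # Rf1(omega, p): integral of (1/pi)(1 - |x|^2)_+^{1/2} over the line x.omega = p,
    # i.e. (1/pi) * integral of sqrt(1 - p^2 - t^2) over t^2 <= 1 - p^2 (midpoint rule).
    a = math.sqrt(1.0 - p * p)
    h = 2.0 * a / n
    return sum(math.sqrt(max(1.0 - p * p - (-a + (i + 0.5) * h) ** 2, 0.0))
               for i in range(n)) * h / math.pi

def Rf0(p, n=100000):
    # Rf0(omega, p): the same line integral for (1/pi)(1 - |x|^2)_+^{-1/2}.  The
    # integrand blows up at the endpoints, so substitute t = sqrt(1-p^2)*sin(theta).
    a = math.sqrt(1.0 - p * p)
    h = math.pi / n
    total = 0.0
    for i in range(n):
        th = -math.pi / 2 + (i + 0.5) * h
        t = a * math.sin(th)
        total += (1.0 - p * p - t * t) ** -0.5 * a * math.cos(th) * h / math.pi
    return total

p = 0.3
assert abs(Rf1(p) - 0.5 * (1.0 - p * p)) < 1e-6   # Rf1 = (1 - p^2)/2
assert abs(Rf0(p) - 1.0) < 1e-9                   # Rf0 = 1 for |p| < 1
```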

*Corresponding author: Jan Boman, Department of Mathematics, Stockholm University, Stockholm, Sweden, e-mail: [email protected]. https://orcid.org/0000-0003-1885-6387

Open Access. © 2021 Boman, published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.

By means of affine transformations, we can easily construct similar examples where the circle is replaced by an ellipse. However, for other convex domains than ellipses such distributions do not exist. This was proved in [2] for the case of domains D that are symmetric with respect to a point, which may be assumed to be the origin, D = −D. Here we will treat the general case by proving the following theorem; see [2] for additional background and references.

Theorem 1.1. Let D be an open, convex, bounded subset of ℝ^n, n ≥ 2, with boundary ∂D. If there exists a distribution f ̸= 0 with support in D such that the Radon transform of f is supported on the set of supporting planes for D, then ∂D must be an ellipsoid.

Note that a supporting plane for D is a tangent plane to the boundary ∂D if the boundary is C¹ smooth. The search for distributions whose Radon transforms are supported on the set of tangent planes to the boundary surface of a domain D ⊂ ℝ^n was motivated by an attempt to prove the following conjecture.

Conjecture 1.2. Let D be a bounded convex domain in the plane and let K be a closed subset of D. Then there exists a smooth function f, not identically zero, supp f ⊂ D, such that its Radon transform Rf(L) vanishes for all lines L that intersect K.

If D is a ball, the assertion of Conjecture 1.2 is true and has been well known for a long time (see, e.g., [5]), and by an affine transformation the same holds if the boundary of D is an ellipsoid. However, to my knowledge, for other convex sets Conjecture 1.2 is still open.

To explain the connection between Conjecture 1.2 and Theorem 1.1, let Kε = {x ∈ ℝ² : dist(x, K) < ε}, where ε > 0 is so small that K2ε ⊂ D. If we could find a compactly supported distribution f ̸= 0 in the plane such that Rf is supported in the set of tangents to Kε, then the convolution f ∗ ϕ, where ϕ is a smooth function with support in a sufficiently small neighborhood of the origin, would prove the conjecture. Indeed, the Radon transform of f ∗ ϕ is equal to the one-dimensional convolution of the Radon transforms p ↦ Rf(ω, p) and p ↦ Rϕ(ω, p) for fixed ω, so R(f ∗ ϕ) would be supported in a neighborhood of the set of tangent planes to Kε, which we can make as small as we please by choosing the support of ϕ sufficiently small. Theorem 1.1 obviously shows that this strategy to prove Conjecture 1.2 must fail. However, we think that Theorem 1.1 has independent interest. For instance, Theorem 1.1 turned out to provide a new proof of a special case of a well-known conjecture of Arnold; see [2] for details and references.

From another point of view we can see the assertion of Theorem 1.1 as a partial answer to the following question: which sets of hyperplanes can be the support of a Radon transform with compact support? Not much seems to be known about this question. For instance, if D ⊂ ℝ^n is a bounded convex set whose boundary ∂D is not an ellipsoid, we do not know if an arbitrarily small neighborhood of the set of tangent planes to ∂D can contain the support of Rf for some f of compact support (cf. Conjecture 1.2).
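The one-dimensional convolution identity used here, R(f ∗ ϕ)(ω, ⋅) = Rf(ω, ⋅) ∗ Rϕ(ω, ⋅), is easy to illustrate numerically with isotropic Gaussians, whose line integrals are one-dimensional Gaussians in closed form. The following is only an illustrative sketch (function names and sample parameters are mine; Gaussians are of course not compactly supported, so this illustrates the convolution identity itself, not the construction above):

```python
import math

def proj(p, s):
    # line integral ("projection") of the centered isotropic 2-D Gaussian with
    # covariance s^2 I: a 1-D Gaussian density with the same variance s^2
    return math.exp(-p * p / (2.0 * s * s)) / (math.sqrt(2.0 * math.pi) * s)

def conv(p, a, b, L=12.0, n=40000):
    # one-dimensional convolution (proj(., a) * proj(., b))(p), midpoint rule
    h = 2.0 * L / n
    return sum(proj(-L + (i + 0.5) * h, a) * proj(p - (-L + (i + 0.5) * h), b)
               for i in range(n)) * h

a, b, p = 0.7, 1.1, 0.4
# f_a * f_b is the isotropic Gaussian with variance a^2 + b^2, so its projection is:
assert abs(conv(p, a, b) - proj(p, math.hypot(a, b))) < 1e-9
```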
However, we can easily characterize the sets that can be the complement of the unbounded connected component of the complement of the support of the Radon transform of a compactly supported distribution (Theorem 7.1). An outline of the proof of Theorem 1.1 is given at the end of Section 3.

2 Distributions on the manifold of hyperplanes

The manifold ℙ^n of hyperplanes in ℝ^n can be identified with the manifold (S^{n−1} × ℝ)/(±1), the set of pairs (ω, p) ∈ S^{n−1} × ℝ, where (ω, p) is identified with (−ω, −p). Thus a function on ℙ^n can be represented as an even function g(ω, p) = g(−ω, −p) on S^{n−1} × ℝ. As in [2], a distribution on ℙ^n will be a linear form on C^∞_e(S^{n−1} × ℝ), the set of smooth even functions on S^{n−1} × ℝ, and a locally integrable even function h(ω, p) on S^{n−1} × ℝ will be identified with the distribution

C^∞_e(S^{n−1} × ℝ) ∋ φ ↦ ∫_ℝ ∫_{S^{n−1}} h(ω, p)φ(ω, p) dω dp,

where dω is area measure on S^{n−1}. Using the standard definition of R∗, i.e.

R∗φ(x) = ∫_{S^{n−1}} φ(ω, x ⋅ ω) dω,

we can then define the Radon transform of the compactly supported distribution f on ℝ^n as the linear form

C^∞_e(S^{n−1} × ℝ) ∋ φ ↦ ⟨f, R∗φ⟩.

Let D ⊂ ℝ^n be an open, convex, bounded subset of ℝ^n with boundary ∂D. We may assume that the origin is contained in D. Introduce the supporting function of D:

ρ(ω) = sup{x ⋅ ω : x ∈ D}, ω ∈ S^{n−1}.

If we introduce the temporary notation

ρ−(ω) = inf{x ⋅ ω : x ∈ D},

we observe that

ρ−(ω) = −sup{−x ⋅ ω : x ∈ D} = −sup{x ⋅ (−ω) : x ∈ D} = −ρ(−ω).

An arbitrary measure g(ω, p) on S^{n−1} × ℝ supported on the graphs of ρ and ρ− can be written as

g(ω, p) = q+(ω)δ(p − ρ(ω)) + q−(ω)δ(p − ρ−(ω))
= q+(ω)δ(p − ρ(ω)) + q−(ω)δ(p + ρ(−ω))

for some measures q+(ω) and q−(ω). Since δ(t) = δ(−t), we then have

g(−ω, −p) = q+(−ω)δ(−p − ρ(−ω)) + q−(−ω)δ(−p + ρ(ω))
= q+(−ω)δ(p + ρ(−ω)) + q−(−ω)δ(p − ρ(ω)).

The condition for g(ω, p) to be even, g(ω, p) = g(−ω, −p), therefore becomes q−(−ω) = q+(ω) for all ω. It is therefore sufficient to introduce one density function, say q+(ω) = q(ω), because then q−(ω) = q(−ω) (we shall see later that q(ω) must be a continuous function). We conclude that an arbitrary such measure on the manifold ℙ^n can be represented:

g(ω, p) = q(ω)δ(p − ρ(ω)) + q(−ω)δ(p + ρ(−ω)).

If the boundary ∂D is smooth, and hence ρ(ω) is smooth, we can argue similarly, using the fact that δ^{(j)}(⋅) is odd if j is odd, to see that an arbitrary distribution g(ω, p) on ℙ^n that is supported on

{(ω, p) : p = ρ(ω)} ∪ {(ω, p) : p = −ρ(−ω)} (2.1)

can be written as

g(ω, p) = ∑_{j=0}^{m−1} (qj(ω)δ^{(j)}(p − ρ(ω)) + (−1)^j qj(−ω)δ^{(j)}(p + ρ(−ω))) (2.2)

for some distributions q0(ω), ..., q_{m−1}(ω) on S^{n−1}. But if ρ(ω) is not smooth, this is not always true. Moreover, the product qj(ω)δ^{(j)}(p − ρ(ω)) does not even make sense as a distribution if j ≥ 1, qj is a distribution of positive order, and ρ(ω) is only continuous. However, if g = Rf for some compactly supported distribution f, then the arguments in [2, Lemma 2] prove that the representation (2.2) is valid and qj(ω) must be continuous. For the convenience of the reader, we repeat the argument briefly in Lemma 2.1.

The theory of the wave front set of distributions is not needed in this article, but it can be used to make a helpful comment on the next lemma. It is well known that the restriction of a distribution u in ℝ^n to a smooth submanifold Σ makes sense as a distribution on Σ provided the conormal of Σ ⊂ ℝ^n is disjoint from the wave front set WF(u); see [4, Chapter 8]. The 1–1 correspondence between the wave front set of a distribution f and that of its Radon transform g = Rf shows in particular that if f has compact support, then WF(g) cannot contain any conormal to a submanifold of ℙ^n of the form ω = constant.¹ This is reflected in the fact, well known from computerized tomography, that the so-called sinogram, the density plot of a 2-dimensional Radon transform Rf(ω, p) in an αp-plane with ω = (cos α, sin α), never contains vertical discontinuities. More precisely, if supp f is contained in the ball |x| ≤ B, then all conormals to the hypersurface γy = {L ∈ ℙ^n : y ∈ L} ⊂ ℙ^n must be disjoint from WF(Rf) if |y| > B. In particular, for any compactly supported distribution f, the restriction of the distribution Rf(ω, p) to ω = ω⁰ always makes sense as a distribution on ℝ; cf. the distribution Rω f in (2.3).

Lemma 2.1. Let D be an open, convex, bounded subset of ℝ^n. Let f be a compactly supported distribution in ℝ^n and assume that g = Rf is supported on the set of supporting planes to D. Then there exist a number m and continuous functions qj(ω) such that the distribution g can be written in the form (2.2).

Proof. For arbitrary ω ∈ S^{n−1}, define the distribution Rω f on ℝ by

⟨Rω f, ψ⟩ = ⟨f, x ↦ ψ(x ⋅ ω)⟩ for ψ ∈ C^∞(ℝ). (2.3)

The map ω ↦ Rω f must be continuous in the sense that ω ↦ ⟨Rω f, ψ⟩ is continuous for every test function ψ ∈ C^∞(ℝ). Moreover, Rf can be expressed in terms of Rω f as follows: If φ(ω, p) = φ0(ω)φ1(p), then

⟨Rf, φ⟩ = ⟨f, R∗φ⟩ = ⟨f, ∫_{S^{n−1}} φ0(ω)φ1(x ⋅ ω) dω⟩
= ∫_{S^{n−1}} φ0(ω)⟨f, φ1(x ⋅ ω)⟩ dω
= ∫_{S^{n−1}} φ0(ω)⟨Rω f, φ1⟩ dω. (2.4)

Formula (2.4) shows that if g = Rf is supported on the hypersurface (2.1), then Rω f must be supported on the union of the two points p = ρ(ω) and p = −ρ(−ω) for every ω. Hence Rω f can be represented as the right-hand side of (2.2) for every ω. It remains only to prove that all qj(ω) are continuous. It is enough to prove that qj(ω) is continuous in some neighborhood of an arbitrary ω⁰ ∈ S^{n−1}. If we choose ψ such that ψ(p) = 0 in some neighborhood of −ρ(−ω⁰), then

⟨Rω f, ψ⟩ = ∑_{j=0}^{m−1} qj(ω)⟨δ^{(j)}(p − ρ(ω)), ψ(p)⟩ = ∑_{j=0}^{m−1} (−1)^j qj(ω)ψ^{(j)}(ρ(ω)).

If we choose ψ(p) such that ψ(p) = 1 in a neighborhood of ρ(ω⁰), we get ⟨Rω f, ψ⟩ = q0(ω) in a neighborhood of ω⁰. Recalling that ω ↦ Rω f is continuous, we see that q0 must be continuous at ω⁰. Next, choosing ψ(p) such that ψ(p) = p in a neighborhood of ρ(ω⁰), we get

⟨Rω f, ψ⟩ = q0(ω)ρ(ω) − q1(ω), which shows that q1(ω) must be continuous since ρ(ω) is continuous. Continuing in this way completes the proof.
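The triangular structure of this "continuing in this way" argument can be made explicit: pairing Rω f with test functions that equal 1, p, p², ... near p = ρ(ω) determines q0(ω), q1(ω), ... one at a time. A small numerical sketch (the sample values of ρ and the qj are arbitrary choices of mine, not from the paper), here for three coefficients:

```python
from math import factorial

def c(k, j):
    # c_{k,j} = k!/(k-j)! for 0 <= j <= k, and 0 otherwise, as in (3.1) below
    return factorial(k) // factorial(k - j) if 0 <= j <= k else 0

def moments(q, rho):
    # mu_k = <sum_j q_j delta^(j)(p - rho), p^k> = sum_j (-1)^j c_{k,j} q_j rho^(k-j)
    return [sum((-1) ** j * c(k, j) * q[j] * rho ** (k - j) for j in range(k + 1))
            for k in range(len(q))]

def recover(mu, rho):
    # q_k enters the k-th moment with coefficient (-1)^k k!; everything else is
    # already known, exactly as in the proof of Lemma 2.1
    q = []
    for k, mk in enumerate(mu):
        known = sum((-1) ** j * c(k, j) * q[j] * rho ** (k - j) for j in range(k))
        q.append((mk - known) / ((-1) ** k * factorial(k)))
    return q

rho, q = 1.5, [2.0, -0.5, 3.0]
assert recover(moments(q, rho), rho) == q
```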

Remark 2.2. Theorem 1.1 shows that the function ρ(ω) must be real analytic and, as we shall see later, the functions qj(ω) must also be real analytic. It would be interesting to know if one could prove by arguments like those of the proof of Lemma 2.1 that qj(ω) must be real analytic. That would make the crucial Proposition 5.1 an immediate consequence of formula (5.5) and thereby considerably simplify the proof of Theorem 3.1. In particular, Lemma 5.2 and Remarks 4.3 and 4.6 could be omitted.

Next, we write down the conditions on qj(ω) and ρ(ω) for g(ω, p) to belong to the range of the Radon transform.

¹ Up to the sign of the cotangent vector, this correspondence can be described as follows: to (y, ±ξ) ∈ T∗(ℝ^n) correspond the conormals to the hypersurface γy = {L ∈ ℙ^n : y ∈ L} at the point Ly,ξ = {x ∈ ℝ^n : (x − y) ⋅ ξ = 0}.

3 The range conditions

A compactly supported (ω, p)-even function or distribution g(ω, p) belongs to the range of the Radon transform if and only if the function

ω = (ω1, ..., ωn) ↦ ∫_ℝ g(ω, p)p^k dp

is the restriction to the unit sphere of a homogeneous polynomial of degree k in ω for every non-negative integer k; see [3].

We now compute the moments ∫_ℝ g(ω, p)p^k dp for the expression (2.2). The computations are only slightly different from those of [2]. By the definition of δ^{(j)}, for any a ̸= 0,

∫_ℝ δ^{(j)}(p − a)p^k dp = 0 if j > k,

and

∫_ℝ δ^{(j)}(p − a)p^k dp = (−1)^j ∫_ℝ δ(p − a)∂_p^j p^k dp = (−1)^j (k!/(k − j)!) a^{k−j} if j ≤ k.

For arbitrary non-negative integers k, j, we define the constant c_{k,j} by c_{k,j} = 0 if j > k and

c_{k,j} = k!/(k − j)! = k(k − 1) ⋯ (k − j + 1) if 0 ≤ j ≤ k. (3.1)

For instance, if j = 2, then c_{k,j} = k(k − 1) for all k. It follows that if g(ω, p) is defined by (2.2), then

∫_ℝ g(ω, p)p^k dp = ∑_{j=0}^{m−1} c_{k,j}(−1)^j (qj(ω)ρ(ω)^{k−j} + qj(−ω)(−ρ(−ω))^{k−j})

for every k ≥ 0. Thus, for g(ω, p) to be the Radon transform of a distribution, it is necessary and sufficient that

∑_{j=0}^{m−1} c_{k,j}((−1)^j qj(ω)ρ(ω)^{k−j} + (−1)^k qj(−ω)ρ(−ω)^{k−j})

is equal to the restriction to S^{n−1} of a homogeneous polynomial of degree k for every k. In Sections 5 and 6, we will show that those conditions imply that the graph of ρ(ω) is a second degree surface in ℙ^n, which, since the surface is bounded, implies that it is an ellipsoid.

Thus Theorem 1.1 follows from the following purely algebraic result. We shall denote the set of homogeneous polynomials of degree k by Pk.

Theorem 3.1. Assume that the strictly positive and continuous function ρ(ω) on S^{n−1} and the continuous functions q0, q1, ..., q_{m−1}, not all zero, satisfy the infinitely many identities

∑_{j=0}^{m−1} c_{k,j}((−1)^j ρ(ω)^{k−j} qj(ω) + (−1)^k ρ(−ω)^{k−j} qj(−ω)) = pk(ω) ∈ Pk (3.2)

for k = 0, 1, ..., where c_{k,j} is defined by (3.1). Then the graph of ρ(ω), the set of supporting planes to ∂D, is a quadric in ℙ^n.

In order to shorten formulas, we will sometimes write

q(−ω) = q̌(ω), ρ(−ω) = ρ̌(ω).

For instance, if m = 3, the first six of equations (3.2) can then be written in matrix form as follows:

( 1    0     0      1     0      0    ) ( q0 )   ( p0 )
( ρ   −1     0     −ρ̌    −1      0    ) ( q1 )   ( p1 )
( ρ²  −2ρ    2      ρ̌²    2ρ̌     2    ) ( q2 ) = ( p2 )   (3.3)
( ρ³  −3ρ²   6ρ    −ρ̌³   −3ρ̌²   −6ρ̌   ) ( q̌0 )   ( p3 )
( ρ⁴  −4ρ³   12ρ²   ρ̌⁴    4ρ̌³    12ρ̌² ) ( q̌1 )   ( p4 )
( ρ⁵  −5ρ⁴   20ρ³  −ρ̌⁵   −5ρ̌⁴   −20ρ̌³ ) ( q̌2 )   ( p5 )
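The matrix in (3.3) can be generated mechanically from (3.2): in row k the coefficient of qj is c_{k,j}(−1)^j ρ^{k−j} and the coefficient of q̌j is c_{k,j}(−1)^k ρ̌^{k−j}. A sketch that rebuilds two rows of the m = 3 matrix this way (the helper names and the numeric values of ρ, ρ̌ are mine):

```python
from math import factorial

def c(k, j):
    # the constant c_{k,j} of (3.1)
    return factorial(k) // factorial(k - j) if 0 <= j <= k else 0

def system_row(k, rho, rho_check, m=3):
    # coefficients of (q_0,...,q_{m-1}, qcheck_0,...,qcheck_{m-1}) in equation k of (3.2)
    left = [c(k, j) * (-1) ** j * rho ** (k - j) if j <= k else 0.0 for j in range(m)]
    right = [c(k, j) * (-1) ** k * rho_check ** (k - j) if j <= k else 0.0 for j in range(m)]
    return left + right

rho, rc = 2.0, 3.0
M = [system_row(k, rho, rc) for k in range(6)]
# row k = 1 of (3.3): (rho, -1, 0, -rho_check, -1, 0)
assert M[1] == [2.0, -1.0, 0.0, -3.0, -1.0, 0.0]
# row k = 3 of (3.3): (rho^3, -3 rho^2, 6 rho, -rc^3, -3 rc^2, -6 rc)
assert M[3] == [8.0, -12.0, 12.0, -27.0, -27.0, -18.0]
```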

Here is an outline of the proof of Theorem 3.1. As in [2], we first eliminate the functions qj from systems of, in this case 2m + 1, consecutive equations from the infinite system (3.2). This gives an infinite set of linear equations in the 2m quantities ρ(ω)^j and ρ̌(ω)^j = ρ(−ω)^j for 0 ≤ j ≤ m − 1 with coefficients that are multiples of the polynomials pj. The matrix of the system of the first 2m of those equations is

Π0 = (p_{j+k})_{j,k=0}^{2m−1}.

If this matrix is non-singular, then, just as in [2], we can easily prove that certain symmetric polynomial expressions in ρ(ω) and −ρ(−ω), for instance ρ(ω)^m ρ(−ω)^m and ρ(ω) − ρ(−ω), must be rational functions of ω = (ω1, ..., ωn), and in fact that ρ(ω)^m ρ(−ω)^m and even ρ(ω)ρ(−ω) must be polynomials. As in [2], the fact that q_{m−1} is not identically zero implies that the matrix Π0 has maximal rank (Proposition 5.1), but this was more difficult to prove than the corresponding statement in [2]. The determinant of the matrix Π0(ω) is equal to a multiple of q_{m−1}(ω)q_{m−1}(−ω), so it was necessary to exclude the possibility that q_{m−1}(ω)q_{m−1}(−ω) is identically zero without q_{m−1} being identically zero. Knowing that Π0(ω) is non-singular, it would have been easy to prove that q_{m−1}(ω)q_{m−1}(−ω) ̸= 0. But without yet knowing that Π0(ω) is non-singular, we had to consider for a moment the possibility that the rank of Π0(ω) is smaller than maximal and deal with a linear system with fewer equations and unknowns (Lemma 5.2). Finally, knowing that ρ(ω)ρ(−ω) is a polynomial and that the other symmetric polynomials in ρ(ω) and −ρ(−ω) are rational was not enough to prove Theorem 3.1 (see Section 6). We had to deduce more information from system (3.2) than we did in Section 5. However, comparing two expressions for the trace of the matrix S̃k (see Section 6), we could prove (Lemma 6.1) that the function ρ(ω) − ρ(−ω) must be a homogeneous first degree polynomial, that is, linear in ω.
By using this fact, it is easy to complete the proof of Theorem 3.1. Alternatively, the fact that ρ(ω) − ρ(−ω) is linear implies that we can make a translation of the coordinate system so that ρ(ω) − ρ(−ω) vanishes, which means that we can apply the special case treated in [2].

4 Algebraic preliminaries

This section contains a study of the matrix that defines the infinite set of linear expressions in qj(ω) and qj(−ω) that are given by (3.2), of which (3.3) is an example. The main point is, just as in [2], that the sequence of 2m × 2m submatrices forms a geometric sequence, where the left and right quotients have special properties. The role of the identities (4.12) is to help us eliminate the qj from the system. The left half of the matrix M, which describes the dependence on qj(ω), is similar to the corresponding matrix in [2], but simpler: here both the left and right quotients have an extremely simple structure; see (4.2) and (4.4). What is interesting, and perhaps not quite so obvious, is that the corresponding right quotient in the case of the full matrix is also very simple; it is a two-block matrix (4.9) where each block has the same form as T in (4.4).

Consider a matrix L that consists of m columns and infinitely many rows and is defined as follows: the first column is 1, x, x², ..., the elements of the second column are the formal derivatives of those of the first, 0, 1, 2x, 3x², ..., and the elements of the third column are the second derivatives of the same elements, and so on. In other words, the entries ℓ_{k,j} of the matrix L can be written (D denotes formal differentiation with respect to x) as

ℓ_{k,j} = D^j x^k = c_{k,j} x^{k−j} for 0 ≤ j ≤ m − 1 and all k ≥ 0, (4.1)

where the constants c_{k,j} are defined by (3.1). Denote the successive m × m submatrices of L by L0, L1, L2, etc. For instance, if m = 4, then

     ( 1   0    0   0 )        ( x   1    0    0   )
L0 = ( x   1    0   0 ),  L1 = ( x²  2x   2    0   ),  etc.
     ( x²  2x   2   0 )        ( x³  3x²  6x   6   )
     ( x³  3x²  6x  6 )        ( x⁴  4x³  12x² 24x )

We are interested in the dependence of the matrix Lk on k. We shall see that there are matrices S and T such that

Lk = S^k L0 = L0 T^k for all k ≥ 0. (4.2)

Denote by δ(⋅) the function on the set ℤ of integers for which δ(0) = 1 and δ(j) = 0 for all j ̸= 0. Define the m × m matrices S = (s_{k,j}) and T = (t_{k,j}), 0 ≤ k, j ≤ m − 1, by

s_{k,j} = δ(j − k − 1) if k ≤ m − 2, s_{m−1,j} = −(m choose j)(−x)^{m−j}, t_{k,j} = xδ(j − k) + (k + 1)δ(j − k − 1). (4.3)

For instance, if m = 4, this means that

    (  0    1    0   0 )        ( x  1  0  0 )
S = (  0    0    1   0 )  and T = ( 0  x  2  0 ).   (4.4)
    (  0    0    0   1 )        ( 0  0  x  3 )
    ( −x⁴  4x³ −6x²  4x )       ( 0  0  0  x )
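The identities S Lk = Lk+1 = Lk T behind (4.2), proved in Lemma 4.1 below, can be sanity-checked by direct matrix multiplication. A plain-Python sketch with exact rational arithmetic (the helper names and the sample value of x are mine), for m = 4:

```python
from fractions import Fraction
from math import comb, factorial

def c(k, j):
    return factorial(k) // factorial(k - j) if 0 <= j <= k else 0   # (3.1)

def Lk(k, x, m=4):
    # rows k, ..., k+m-1 of L; entries l_{k,j} = c_{k,j} x^(k-j)   (4.1)
    return [[c(k + r, j) * x ** (k + r - j) if j <= k + r else x * 0
             for j in range(m)] for r in range(m)]

def Smat(x, m=4):
    S = [[Fraction(int(j == k + 1)) for j in range(m)] for k in range(m)]
    S[m - 1] = [-comb(m, j) * (-x) ** (m - j) for j in range(m)]    # (4.3)
    return S

def Tmat(x, m=4):
    return [[x if j == k else Fraction(k + 1 if j == k + 1 else 0)
             for j in range(m)] for k in range(m)]

def mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = Fraction(5, 3)          # exact arithmetic, so == comparisons are safe
L0, L1, L2 = Lk(0, x), Lk(1, x), Lk(2, x)
S, T = Smat(x), Tmat(x)
assert mul(S, L0) == L1 == mul(L0, T)    # S L0 = L1 = L0 T, the step behind (4.2)
assert mul(S, L1) == L2 == mul(L1, T)
```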

Lemma 4.1. The matrices S, T, and Lk are non-singular as matrices over the field of rational functions in x, (4.2) holds, and det S = det T = x^m. Moreover, det L0 = b_m, where

b_m = 1! ⋅ 2! ⋯ (m − 1)!.

Proof. The identity of the first m − 1 rows in SLk = Lk+1 is trivial. The coefficients s_{m−1,j} in the last row of the matrix S satisfy the identity

F(t) = (t − x)^m = ∑_{j=0}^{m} (m choose j)(−x)^{m−j} t^j = t^m − ∑_{j=0}^{m−1} s_{m−1,j} t^j. (4.5)

The identity of the last rows of SL0 and L1 means that

∑_{k=0}^{m−1} s_{m−1,k} ℓ_{k,j} = ℓ_{m,j} for all j,

that is,

∑_{k=0}^{m−1} s_{m−1,k} D^j x^k = D^j x^m, 0 ≤ j ≤ m − 1.

But this is (4.5) differentiated j times with respect to t and then t set equal to x.

We next prove that Lk T = Lk+1 for all k ≥ 0. Taking into account the description of T in (4.3), we see that Lk T = Lk+1 means precisely that

ℓ_{k,j−1} j + ℓ_{k,j} x = ℓ_{k+1,j} for 0 ≤ j ≤ m − 1 and all k ≥ 0.

By (4.1), the assertion therefore follows from the formula

D^j x^{k+1} = D^j(x ⋅ x^k) = x D^j x^k + j D^{j−1} x^k.

Now we can prove that S^k L0 = Lk for all k by making repeated use of the identity SL0 = L0T. In fact,

S^k L0 = S^{k−1} SL0 = S^{k−1} L0 T = S^{k−2} L0 T² = ⋯ = L0 T^k = Lk. (4.6)

It is obvious that L0 and T are non-singular as matrices over the field of rational functions in x and that det T = x^m. Now it follows from (4.2) that all Lk are non-singular. More precisely, it is easily seen from the definition of L0 that det L0 = b_m = 1! ⋅ 2! ⋯ (m − 1)!, and hence det Lk = b_m x^{km}. Since S = L0 T L0^{−1}, we also have det S = det T = x^m. This completes the proof of Lemma 4.1.

In the proof of Theorem 3.1, we shall have to consider matrices M = M(x, y) with 2m columns and infinitely many rows that consist of two blocks, each of the same kind as the matrix L above, the left block containing the variable x and the right block containing the variable y. As before, we introduce the successive square submatrices, so that, for instance if m = 3,

     ( 1   0    0     1   0    0    )
     ( x   1    0     y   1    0    )
M0 = ( x²  2x   2     y²  2y   2    )   (4.7)
     ( x³  3x²  6x    y³  3y²  6y   )
     ( x⁴  4x³  12x²  y⁴  4y³  12y² )
     ( x⁵  5x⁴  20x³  y⁵  5y⁴  20y³ )

and

     ( x   1    0     y   1    0    )
M1 = ( ⋮                  ⋮         ).
     ( x⁶  6x⁵  30x⁴  y⁶  6y⁵  30y⁴ )

Define the 2m × 2m matrix S = (sk,j) by means of a sequence of 1’s next to the main diagonal,

s_{k,j} = δ(j − k − 1) if k ≤ 2m − 2, and the entries in the last row by s_{2m−1,j} = σ_{2m−j}(x, y), where the polynomials σ_{2m−j} are defined by the identity

G(t) = (t − x)^m (t − y)^m = t^{2m} − ∑_{j=0}^{2m−1} σ_{2m−j} t^j. (4.8)

For instance, σ_{2m} = −x^m y^m and σ1 = m(x + y). The expression σ_ν(x, y) is up to sign equal to the elementary symmetric polynomial of degree ν in 2m variables evaluated at (x, ..., x, y, ..., y). Furthermore, we define the matrix T as the block diagonal matrix

    ( Tx  0  )
T = (        ),   (4.9)
    ( 0   Ty )

where Tx is the m × m matrix defined by (4.3) and Ty is the same with y instead of x. For instance, if m = 3, we have

     ( x  1  0 )           ( y  1  0 )
Tx = ( 0  x  2 )  and  Ty = ( 0  y  2 ),   (4.10)
     ( 0  0  x )           ( 0  0  y )

and

    ( 0   1   0   0   0   0  )
    ( 0   0   1   0   0   0  )
S = ( 0   0   0   1   0   0  )   (4.11)
    ( 0   0   0   0   1   0  )
    ( 0   0   0   0   0   1  )
    ( σ6  σ5  σ4  σ3  σ2  σ1 )

where

σ1 = 3(x + y),
σ2 = −3(x² + 3xy + y²),
σ3 = x³ + 9x²y + 9xy² + y³,
σ4 = −3xy(x² + 3xy + y²),
σ5 = 3x³y² + 3x²y³,
σ6 = −x³y³.
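The two-block identities S Mk = Mk+1 = Mk T of Lemma 4.2 below, with the σj obtained by expanding G(t) in (4.8), can likewise be checked by direct computation. A sketch with exact rational arithmetic (the helper names and the sample values are mine), for m = 3:

```python
from fractions import Fraction
from math import factorial

def c(k, j):
    return factorial(k) // factorial(k - j) if 0 <= j <= k else 0

def Mk(k, x, y, m=3):
    # rows k, ..., k+2m-1 of the two-block matrix M(x, y)
    def e(v, r, j):
        return c(k + r, j) * v ** (k + r - j) if j <= k + r else v * 0
    return [[e(x, r, j) for j in range(m)] + [e(y, r, j) for j in range(m)]
            for r in range(2 * m)]

def sigmas(x, y, m=3):
    # expand G(t) = (t-x)^m (t-y)^m; sigma_{2m-j} = -(coefficient of t^j)   (4.8)
    coeffs = [Fraction(1)]                    # lowest degree first
    for root in [x] * m + [y] * m:            # multiply by (t - root), 2m times
        new = [Fraction(0)] * (len(coeffs) + 1)
        for i, a in enumerate(coeffs):
            new[i + 1] += a
            new[i] -= root * a
        coeffs = new
    return {2 * m - j: -coeffs[j] for j in range(2 * m)}

def Smat(x, y, m=3):
    S = [[Fraction(int(j == k + 1)) for j in range(2 * m)] for k in range(2 * m)]
    s = sigmas(x, y, m)
    S[2 * m - 1] = [s[2 * m - j] for j in range(2 * m)]   # (sigma_6, ..., sigma_1)
    return S

def Tmat(x, y, m=3):
    def tb(v):
        return [[v if j == k else Fraction(k + 1 if j == k + 1 else 0)
                 for j in range(m)] for k in range(m)]
    Tx, Ty, Z = tb(x), tb(y), [[Fraction(0)] * m for _ in range(m)]
    return [Tx[r] + Z[r] for r in range(m)] + [Z[r] + Ty[r] for r in range(m)]

def mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x, y = Fraction(2), Fraction(-3, 2)
s = sigmas(x, y)
assert s[1] == 3 * (x + y) and s[6] == -x**3 * y**3      # two of the sigma's above
M0, M1 = Mk(0, x, y), Mk(1, x, y)
assert mul(Smat(x, y), M0) == M1 == mul(M0, Tmat(x, y))  # the step behind (4.12)
```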

Lemma 4.2. The matrices M0, S, and T are non-singular, det S = det T = x^m y^m, and

Mk = S^k M0 = M0 T^k for all k ≥ 0. (4.12)

Proof. Due to the block structure of the matrix T, the fact that Mk T = Mk+1 is an immediate consequence of (4.2). In fact, if we write Mk = (Ak, Bk), then the identity Mk T = Mk+1 becomes

Mk T = (Ak, Bk) (Tx 0; 0 Ty) = (Ak Tx, Bk Ty) = (Ak+1, Bk+1) = Mk+1,

where the second last equality follows from (4.2). The identity SM0 = M1 can be proved in the same way as the identity SL0 = L1 above. For the first 2m − 1 rows the assertion is trivial. For the first element in the last row the assertion means that

∑_{j=0}^{2m−1} σ_{2m−j} x^j = x^{2m},

and this identity results if we set t = x in (4.8). For the second element in the last row we first differentiate (4.8) with respect to t and then set t = x. And so on up to the m-th element in the last row. For the last m elements in the last row we argue similarly, but set t = y instead. We can now finish the proof of (4.12) in exactly the same way as we did the analogous step for the matrix Lk. Namely, reasoning as in (4.6), we get

S^k M0 = S^{k−1} SM0 = S^{k−1} M0 T = S^{k−2} M0 T² = ⋯ = M0 T^k = Mk.

The fact that M0 is non-singular will follow from the next lemma, which gives an exact expression for the determinant of M0. However, to see that M0 is non-singular, it is enough to observe that the matrix M0(0, y) is non-singular. And after we have set x = 0, it is easy to make elementary column operations so that this matrix gets block diagonal form

( D_{m−1}  0     )
( 0        Lm(y) ),

where D_{m−1} is a diagonal matrix of positive integers and Lm(y) is the matrix Lm considered in Lemma 4.1, whose determinant is given there to be b_m y^{m²}.

Remark 4.3. The matrix identities described in Lemma 4.2 remain valid if a number of columns in the matrix M are deleted from the right, so that M consists of m x-columns as before and only the first r y-columns, 1 ≤ r < m. For instance, if m = 3 and r = 2,

     ( 1   0    0     1   0   )
     ( x   1    0     y   1   )
M0 = ( x²  2x   2     y²  2y  )
     ( x³  3x²  6x    y³  3y² )
     ( x⁴  4x³  12x²  y⁴  4y³ )

and

     ( x   1    0     y   1   )
M1 = ( ⋮                 ⋮    ),
     ( x⁵  5x⁴  20x³  y⁵  5y⁴ )

then S = M1 M0^{−1} is a 5 × 5 matrix analogous to (4.11) with

σ5 = x³y²,
σ4 = −x²y(2x + 3y),
σ3 = x(x² + 6xy + 3y²),
σ2 = −3x² − 6xy − y²,
σ1 = 3x + 2y.

These facts are proved with exactly the same arguments as in Lemma 4.2, if only the expression G(t) in (4.8) is replaced by

G(t) = (t − x)^m (t − y)^r = t^{m+r} − ∑_{j=0}^{m+r−1} σ_{m+r−j} t^j.

Lemma 4.4. The determinant of the 2m × 2m matrix M0 is given by

det M0 = b_m² (y − x)^{m²},

where b_m = 1! ⋅ 2! ⋯ (m − 1)!.

Proof. If we introduce the column 2m-vectors

X0 = (1, x, x², ..., x^{2m−1})^t,
X1 = DX0 = (0, 1, 2x, 3x², ..., (2m − 1)x^{2m−2})^t,
X2 = D²X0, etc., where the elements of X_{k+1} are formal derivatives of the corresponding elements of X_k, and let Yk have the analogous meaning with the variable x replaced by y, then the matrix M0 can be written as

M0 = M0(x, y) = (X0, X1, ..., X_{m−1}, Y0, Y1, ..., Y_{m−1}).

To simplify notation and increase readability, we will first consider the case m = 3. To find the order of the zero of D(x, y) = det M0(x, y) at x = y, we will study the polynomial F(t) = D(x, x + t). Then note that

Y0(x + t) = X0 + tX1 + (t²/2)X2 + (t³/3!)X3 + (t⁴/4!)X4 + (t⁵/5!)X5,
Y1(x + t) = X1 + tX2 + (t²/2)X3 + (t³/3!)X4 + (t⁴/4!)X5,     (4.13)
Y2(x + t) = X2 + tX3 + (t²/2)X4 + (t³/3!)X5.

Recall that M0 is the 6 × 6 matrix (X0, X1, X2, Y0, Y1, Y2). Replacing the columns Y0, Y1, Y2 here with the respective expressions from (4.13), we see immediately that we can make elementary column operations to get rid of all terms containing X0, X1, and X2 in the expressions for Yj. This shows that det M0(x, x + t) can be written as

det(X0, X1, X2, (t³/3!)X3 + (t⁴/4!)X4 + (t⁵/5!)X5, (t²/2!)X3 + (t³/3!)X4 + (t⁴/4!)X5, tX3 + (t²/2!)X4 + (t³/3!)X5).

This shows already that

det M0(x, x + t) = O(t^{3+2+1}) = O(t⁶) as t → 0.

However, we can also get rid of the X3 terms in the fourth and fifth columns, by subtracting suitable multiples of the last column. And finally we can get rid of the X4 term in the fourth column by subtracting a multiple of the fifth column from the fourth. Then we get

det M0(x, x + t) = det(X0, X1, X2, at⁵X5, bt³X4 + O(t⁴), tX3 + O(t²))

for some constants a and b. This shows that

det M0(x, x + t) = O(t^{5+3+1}) = O(t⁹) as t → 0,

which implies that

det M0(x, y) = c(y − x)⁹

for some constant c. To compute the value of c we can set x = 0 in (4.7), which gives

                     ( y³  3y²  6y   )           ( 1  3  6  )
det M0(0, y) = 2 det ( y⁴  4y³  12y² ) = 2y⁹ det ( 1  4  12 ) = 4y⁹.
                     ( y⁵  5y⁴  20y³ )           ( 1  5  20 )

It follows that

det M0(x, y) = 4(y − x)⁹.
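Both the special value det M0 = 4(y − x)⁹ and the m = 2 case of Lemma 4.4 can be confirmed by exact computation over the rationals; a sketch (the helper names and the sample values of x, y are mine):

```python
from fractions import Fraction
from math import factorial

def c(k, j):
    return factorial(k) // factorial(k - j) if 0 <= j <= k else 0

def M0(x, y, m):
    # the 2m x 2m matrix of (4.7) for general m
    def e(v, r, j):
        return c(r, j) * v ** (r - j) if j <= r else v * 0
    return [[e(x, r, j) for j in range(m)] + [e(y, r, j) for j in range(m)]
            for r in range(2 * m)]

def det(A):
    # exact determinant by Gaussian elimination over Fraction
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv], d = A[piv], A[i], -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][j] - f * A[i][j] for j in range(n)]
    return d

x, y = Fraction(3, 4), Fraction(-5, 2)
assert det(M0(x, y, 3)) == 4 * (y - x) ** 9    # m = 3: b_3^2 = (1! 2!)^2 = 4
assert det(M0(x, y, 2)) == (y - x) ** 4        # m = 2: b_2^2 = 1
```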

If we perform the same computations for M0(x, y) with arbitrary m, we get, instead of the number 5 + 3 + 1 = 9, the number

1 + 3 + 5 + ⋯ + (2m − 1) = m ⋅ (1 + (2m − 1))/2 = m².

And for the determinant of M0(0, y) we get

1 ⋅ 2! ⋅ 3! ⋯ (m − 1)! ⋅ det Lm.

It follows from Lemma 4.1 and formula (4.2) that

det Lm = det L0 (det Tx)^m = 1! ⋅ 2! ⋅ 3! ⋯ (m − 1)! x^{m²}.

This completes the proof of Lemma 4.4.

The matrix

Π0 = (p_{j+k})_{j,k=0}^{2m−1}

can be written as a row of column vectors (P0, P1, ..., P_{2m−1}), where Pk is the column vector

Pk = (pk, pk+1, ..., pk+2m−1)^t. (4.14)

For ξ, η ∈ ℝ^m we denote by ζ = (ξ, η) the column vector

ζ = (ξ1, ..., ξm, η1, ..., ηm)^t.

Since the matrix M0 is non-singular and SM0 = M0T, the matrix (ζ, Sζ, ..., S^{2m−1}ζ) is non-singular if and only if the matrix

W = (ζ, Tζ, ..., T^{2m−1}ζ) (4.15)

is non-singular. Therefore, we now study the matrix W.

Lemma 4.5. The vectors ζ, Tζ, ..., T^{2m−1}ζ are linearly independent in the vector space ℂ(x, y)^{2m} over the field ℂ(x, y) of rational functions in x and y if and only if ξm ηm ̸= 0.

Proof. The structure of the matrix Tx, see (4.10), shows that the vectors Tx^k ξ belong to the subspace {ξm = 0} for all k if ξm = 0, and the analogous assertion is of course valid for Ty. Therefore, it is obvious that the condition ξm ηm ̸= 0 is necessary for the vectors ζ, Tζ, ..., T^{2m−1}ζ to be linearly independent. It remains to prove the sufficiency. This statement is a special case of a well-known fact from linear algebra, so we only sketch the proof here. Denote the vector space ℂ(x, y)^{2m} by V and the subspace generated by all vectors T^k ζ by E. It is enough to prove that E = V, because if for instance T^{2m−1}ζ can be written as a linear combination

T^{2m−1}ζ = a0ζ + a1Tζ + ⋯ + a_{2m−2}T^{2m−2}ζ

with aj ∈ ℂ(x, y), then the same identity holds with ζ replaced by T^k ζ for all k ≥ 0, and using this fact for k = 1, 2, ... shows that E must then be contained in a (2m − 1)-dimensional subspace of V. To prove that E = V we first note that TE ⊂ E. Then we use the standard fact, valid for an arbitrary linear operator on a finite-dimensional space, that E must be the sum of its generalized eigenspaces for T, which means in this case that

E = Ex ⊕ Ey, (4.16)

where Ex = {u ∈ E : (T − x)^k u = 0 for some k} and Ey is defined analogously. Define

Vx = {u ∈ V : (T − x)^k u = 0 for some k}

and Vy similarly. The fact that ξm ̸= 0 is easily seen to imply that Ex = Vx, and in the same way ηm ̸= 0 implies Ey = Vy. The fact that E = V now follows immediately from (4.16).

By using the arguments of Lemma 4.4, the determinant of the 2m × 2m matrix (4.15) can in fact be computed explicitly, and its value is

det W = c_m ξm^m ηm^m (y − x)^{m²}, (4.17)

where c_m > 0. We have c_2 = 1, c_3 = 2²b_3², c_4 = 3²3²b_4², and c_5 = 4²6²4²b_5², where b_m is the constant defined in Lemma 4.4. Let us explain the computation for m = 3. By using the notation from the previous lemma, the transpose W^t of W can be written as

W^t = (ξ0X0 + ξ1X1 + ξ2X2, ξ1X0 + 2ξ2X1, ξ2X0, η0Y0 + η1Y1 + η2Y2, η1Y0 + 2η2Y1, η2Y0),

where the Xj and Yj denote column 6-vectors. Since ξ2 and η2 are different from zero, we can get rid of the X0 and Y0 in the first, second, fourth, and fifth columns by elementary column operations and obtain

det W^t = det(ξ1X1 + ξ2X2, 2ξ2X1, ξ2X0, η1Y1 + η2Y2, 2η2Y1, η2Y0).

Similarly, we can now get rid of X1 and Y1 in the first and fourth columns, which gives

det W = det W^t = det(ξ2X2, 2ξ2X1, ξ2X0, η2Y2, 2η2Y1, η2Y0)
= 4ξ2³η2³ det(X2, X1, X0, Y2, Y1, Y0)
= 4ξ2³η2³ det M0
= 4ξ2³η2³ ⋅ 4(y − x)⁹
= 16ξ2³η2³(y − x)⁹.

It is clear that the analogous operations can be done in the general case and that this proves (4.17).
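Formula (4.17) for m = 3, with c_3 = 16, can likewise be confirmed by exact computation of W from the definition (4.15); a sketch (the helper names are mine, and the sample vector ζ is an arbitrary choice with ξ2 η2 ̸= 0):

```python
from fractions import Fraction

def Tmat(x, y, m=3):
    # the block diagonal matrix (4.9) built from Tx, Ty of (4.10)
    def tb(v):
        return [[v if j == k else Fraction(k + 1 if j == k + 1 else 0)
                 for j in range(m)] for k in range(m)]
    Tx, Ty, Z = tb(x), tb(y), [[Fraction(0)] * m for _ in range(m)]
    return [Tx[r] + Z[r] for r in range(m)] + [Z[r] + Ty[r] for r in range(m)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def det(A):
    # exact determinant by Gaussian elimination over Fraction
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv], d = A[piv], A[i], -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][j] - f * A[i][j] for j in range(n)]
    return d

x, y = Fraction(1, 2), Fraction(7, 3)
xi = [Fraction(5), Fraction(-1), Fraction(2)]    # last entry xi_2 = 2
eta = [Fraction(0), Fraction(3), Fraction(-4)]   # last entry eta_2 = -4
T, v, cols = Tmat(x, y), xi + eta, []
for _ in range(6):
    cols.append(v)
    v = matvec(T, v)
W = [[cols[j][i] for j in range(6)] for i in range(6)]   # columns zeta, T zeta, ...
assert det(W) == 16 * xi[2] ** 3 * eta[2] ** 3 * (y - x) ** 9   # (4.17), c_3 = 16
```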

Remark 4.6. The assertion of Lemma 4.5 is also valid if the matrices Tx and Ty have different dimensions, for instance Tx is m × m as before and Ty is r × r. Write t ζ = (ξ1,..., ξm , η1,..., ηr)

in ℝ^{m+r}. The assertion is then that the vectors ζ, Tζ, ..., T^{m+r−1}ζ are linearly independent in the vector space ℂ(x, y)^{m+r} if and only if ξm ηr ̸= 0. The proof of this statement is the same as the proof of Lemma 4.5 (cf. Remark 4.3).

5 Proving that ρ(ω)ρ(−ω) must be a polynomial

We will now use the matrices introduced in the previous section to formulate the equation system (3.2). Denote by M̃ the matrix with 2m columns and infinitely many rows that is obtained if x and y in M = M(x, y) are replaced by ρ(ω) and −ρ(−ω), respectively. The notations M̃0, M̃1,... and S̃ and T̃ will have the analogous meaning. The value of the determinant det M0 given by Lemma 4.4 shows that the determinant of M̃0 is equal to

det M̃0(ω) = bm (ρ(ω) + ρ(−ω))^{m²},

which is a positive continuous function. Hence M̃0 is invertible in the ring of matrices over the ring of continuous functions. This fact will be important for us in the future. Moreover, we introduce the column 2m-vector

Q = (q0(ω), q1(ω),..., qm−1(ω), q0(−ω), q1(−ω),..., qm−1(−ω))^t,

and for each k ≥ 0 the column 2m-vector of polynomials

Pk = (pk(ω), pk+1(ω),..., pk+2m−1(ω))^t, k = 0, 1,....

The equation M̃0Q = P0 is now almost the same as the first 2m equations from (3.2), but not quite, because of the factor (−1)^j in the first term of (3.2). This inconvenience could of course have been avoided with a different definition of the matrix M(x, y), but we wanted M(x, y) to have as much symmetry as possible. But this problem can easily be fixed by the introduction of one more matrix. Introduce the diagonal 2m × 2m matrix

J = diag(1, −1,..., (−1)^{m−1}, 1, −1,..., (−1)^{m−1}).

If m = 3, then M̃0 J is the matrix (3.3), and M̃0 J Q = P0 for arbitrary m. System (3.2) can now be written as

M̃k J Q = Pk, k ≥ 0. (5.1)

Using S̃M̃k = M̃k+1 and (5.1), we then get

S̃Pk = S̃M̃k J Q = M̃k+1 J Q = Pk+1 for all k. (5.2)

Our goal is to deduce information about the entries of S̃ from (5.2). Incidentally, we note that solving for Q from (5.1) with k = 0 shows that the assumptions of Theorem 3.1 imply that all qj(ω) must be continuous, which we already proved in Lemma 2.1. The entries of Pk are homogeneous polynomials. The entries of the last row of S̃, which we shall denote by σ̃2m(ω),..., σ̃1(ω), are defined by

σ̃k(ω) = σk(ρ(ω), −ρ(−ω)). The identity (5.2) therefore means that

∑_{j=0}^{2m−1} σ̃2m−j pj+k = p2m+k, k ≥ 0. (5.3)
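The recurrence (5.3) says that p2m+k is a fixed linear combination of the 2m preceding entries, with coefficients determined by the characteristic roots ρ(ω) (m times) and −ρ(−ω) (m times). A small numerical sketch, with hypothetical numbers x and y standing in for the two roots, confirms that any generalized power solution satisfies such a recurrence:

```python
def poly_from_roots(roots):
    # monic coefficients of prod(lam - r), highest degree first
    c = [1.0]
    for r in roots:
        c = c + [0.0]
        for i in range(len(c) - 1, 0, -1):
            c[i] -= r * c[i - 1]
    return c

# hypothetical stand-ins: m = 3, root x for rho(omega), root y for -rho(-omega)
m = 3
x, y = 1.5, -0.7
c = poly_from_roots([x] * m + [y] * m)   # lam^6 + c[1] lam^5 + ... + c[6]

def p(k):
    # a generalized power solution of the recurrence (each root has multiplicity 3)
    return x**k + k * x**k + y**k + k**2 * y**k

for k in range(10):
    # recurrence (5.3): p_{2m+k} is a fixed combination of p_k, ..., p_{2m-1+k}
    lhs = p(2 * m + k)
    rhs = -sum(c[j] * p(2 * m - j + k) for j in range(1, 2 * m + 1))
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
```

The coefficients c[j] are, up to sign, the elementary symmetric polynomials of the 2m roots, which is exactly the role played by the σ̃j above.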

We want to solve for the σ̃j(ω) from this system to show that the σ̃j(ω) must be rational functions. Therefore, the fact that the matrix

Π0(ω) = (pi+j(ω))0≤i,j≤2m−1 (5.4)

is non-singular is of fundamental importance. Recalling that the matrix Π0(ω) can be written as a row of column vectors (4.14) and that

Pk = S̃^k M̃0 J Q = M̃0 J T̃^k Q, and hence Π0 = M̃0 J W̃ with W̃ = (Q, T̃Q,..., T̃^{2m−1}Q),

we see that det Π0(ω0) ≠ 0 if and only if qm−1(ω0) qm−1(−ω0) ≠ 0 by Lemma 4.5. And more precisely, formula (4.17) shows that

det Π0(ω) = bm cm (ρ(ω) + ρ(−ω))^{2m²} qm−1(ω)^m qm−1(−ω)^m. (5.5)

Proposition 5.1. Assume that assumption (3.2) of Theorem 3.1 is valid and that the function qm−1(ω) is not identically zero. Then the matrix Π0(ω) is non-singular.

For the proof we need the following lemma.

Lemma 5.2. Assume that assumption (3.2) of Theorem 3.1 is valid and that the function qm−1(ω) is not identically zero. Then the functions σ̃j(ω) are rational functions and ρ(ω) is algebraic.

Proof. To make the simple idea of the proof clear, we first assume that we already know that the matrix Π0(ω) is non-singular. Then the linear system consisting of the first 2m of the equations (5.3) can be solved for the “unknowns” σ̃k(ω), which gives

σ̃k(ω) = Fk(ω)/G(ω), k = 1,..., 2m,

where Fk(ω) and G(ω) are homogeneous polynomials and

G(ω) = det Π0(ω).

Recalling that σ̃1(ω),..., σ̃2m(ω) are the elementary symmetric polynomials in m copies of each of the variables ρ(ω) and −ρ(−ω), we see that ρ(ω) and ρ(−ω) must be roots of a polynomial equation with polynomial coefficients, so ρ(ω) must be an algebraic function.
To consider the possibility that the determinant of Π0(ω) is identically zero, we will argue similarly, but working with linear systems with fewer unknowns and fewer equations. To begin with, Lemma 4.5 shows that in this case qm−1(ω) qm−1(−ω) must be identically zero. At this point, we cannot exclude that also

qm−1(ω) qm−2(−ω) is identically zero. Let r be the largest number for which qm−1(ω) qr(−ω) is not identically zero. Choose a point ω0 where qm−1(ω0) qr(−ω0) ≠ 0. For a while we shall have to consider (m + r)-vectors of numbers and polynomials. Set d = m + r and introduce the notation Pk^d for the column d-vector

Pk^d(ω) = (pk(ω),..., pk+m+r−1(ω))^t.

By Remark 4.6, the fact that qm−1(ω0) qr(−ω0) ≠ 0 implies that the vectors of real numbers P0^d(ω),..., Pm+r−1^d(ω) are linearly independent in ℂ^{m+r} for ω in some neighborhood of ω0, and hence the determinant of the corresponding matrix is different from zero. But this determinant is a polynomial function of ω. Therefore, the m + r vectors of polynomials P0^d,..., Pm+r−1^d are linearly independent in the vector space ℂ(ω)^{m+r}. By Remark 4.3, the relationship between Pk^d and Pk+1^d is given by a matrix S̃, whose entries in the last row are the elementary symmetric polynomials in m copies of ρ(ω) and r copies of −ρ(−ω). We can therefore consider a linear system of m + r equations

S̃ Pk^d = Pk+1^d, k = 0,..., m + r − 1,

and reason as above to conclude that the symmetric polynomials in ρ(ω) and ρ(−ω) must be rational functions of ω. And this in turn implies that ρ(ω) must be an algebraic function. This completes the proof of Lemma 5.2.

Proof of Proposition 5.1. Recall the equation P0(ω) = M̃0(ω)JQ(ω), which expresses the 2m-vector P0 of polynomials in terms of the vector Q of density functions qj. The matrix M̃0(ω) is pointwise invertible, and since ρ(ω) is algebraic, the entries of M̃0(ω) are algebraic, and so are the entries of the inverse M̃0(ω)^{−1}. It follows that Q(ω) = J^{−1} M̃0(ω)^{−1} P0(ω) must be algebraic. In particular, qm−1(ω) must be an algebraic function. By assumption, qm−1(ω) is not identically zero. Hence qm−1(ω) cannot vanish in an open set, and qm−1(ω) qm−1(−ω) cannot vanish everywhere. Choose a point ω0 such that qm−1(ω0) qm−1(−ω0) ≠ 0. By Lemma 4.5, we can now conclude that det Π0(ω) ≠ 0 in some neighborhood of ω0, and hence the polynomial det Π0(ω) cannot be the zero-polynomial. This completes the proof.

Lemma 5.3. If the assumptions of Theorem 3.1 are valid, then ρ(ω)ρ(−ω) must be a homogeneous quadratic polynomial.

Proof. From Lemma 5.2 we learnt that all σ̃j(ω) must be rational functions of ω. But observe also that, now that we know from Proposition 5.1 that the matrix Π0(ω) is non-singular, the argument in the first couple of lines of the proof of Lemma 5.2 proves very easily that all σ̃j(ω), and in particular σ̃2m(ω) = −ρ(ω)^m (−ρ(−ω))^m, must be rational functions of ω.
The idea of the proof that ρ(ω)ρ(−ω) must be a polynomial is to consider 2m of the equations (5.2) together and express the resulting equation as a matrix equation. Observe that (5.2) implies

S̃^k Pj = Pk+j for all j and k. (5.6)

Generalizing (5.4), we introduce the matrices

Πk(ω) = (pk+i+j(ω))0≤i,j≤2m−1, k = 0, 1,....

Then we can combine 2m of the equations (5.6) into the matrix identity

S̃^{2m} Π0 = Π2m.

This equation contains no more information than (5.2), but it has the advantage that we can operate with S̃ and obtain the identity

S̃^{2m+k} Π0 = Π2m+k (5.7)

for arbitrary k. Now we use the product law for determinants. The determinant of S̃ is ±ρ(ω)^m ρ(−ω)^m and the determinant of Πk is a polynomial in ω for every k. Denoting the determinant of Π0(ω) by d(ω), we can conclude that

ρ(ω)^{mk} ρ(−ω)^{mk} d(ω) must be a homogeneous polynomial for every k.

Since ρ(ω)ρ(−ω) is already known to be a rational function, it now follows that ρ(ω)ρ(−ω) must be a homoge- neous polynomial. Since ρ(ω) by definition is a homogeneous function of degree 1, it follows that ρ(ω)ρ(−ω) must be a homogeneous quadratic polynomial. This completes the proof of Lemma 5.3.
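The action of S̃ on the matrices Πk can be illustrated numerically: S̃ shifts each column Pk to Pk+1, so applying it 2m times to Π0 reproduces Π2m. The sketch below uses m = 2 and hypothetical numerical roots standing in for ρ(ω) and −ρ(−ω):

```python
# hypothetical roots standing in for rho(omega) and -rho(-omega); m = 2
m = 2
n = 2 * m
x, y = 1.3, -0.6

# monic characteristic polynomial of the roots x, x, y, y (highest degree first)
c = [1.0]
for r in [x] * m + [y] * m:
    c = c + [0.0]
    for i in range(len(c) - 1, 0, -1):
        c[i] -= r * c[i - 1]

# a generalized power solution of the recurrence (the roots are double)
p = [x**k + k * x**k + y**k for k in range(8 * m)]

# companion-type matrix: shifts (p_k, ..., p_{k+n-1}) to (p_{k+1}, ..., p_{k+n})
S = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    S[i][i + 1] = 1.0
S[n - 1] = [-c[n - j] for j in range(n)]   # last row implements the recurrence

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def Pi(k):
    # the Hankel-type matrix Pi_k = (p_{k+i+j})
    return [[p[k + i + j] for j in range(n)] for i in range(n)]

M = Pi(0)
for _ in range(n):          # apply S exactly 2m times
    M = matmul(S, M)
for i in range(n):
    for j in range(n):
        assert abs(M[i][j] - Pi(n)[i][j]) < 1e-8
```

The determinant bookkeeping in the proof follows from this identity together with det S = ±x^m y^m, the product of the roots with multiplicity.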

6 Proving that ρ(ω) − ρ(−ω) must be a polynomial

The arguments in Section 5 do not suffice for proving that the graph of ρ(ω) is a quadric. To explain why, we observe first that if the graph of ρ(ω) is a quadric, then ρ(ω) − ρ(−ω) must be a linear function of ω. To see this, assume that p = ρ(ω) satisfies an equation F(p, ω) = p² + p q1(ω) + q2(ω) = 0, where q1 is a linear function of ω and q2 is a homogeneous quadratic polynomial. Subtracting the equation F(−p, −ω) = 0 from F(p, ω) = 0 and dividing by ρ(ω) + ρ(−ω) gives ρ(ω) − ρ(−ω) + q1(ω) = 0, which proves the claim.
For a small ε > 0, we consider the function

ρ(ξ) = ((ε²ξ1⁶ + |ξ|⁶)^{1/2} + ε ξ1³) / |ξ|², ξ ∈ ℝ² \ {0}.

The function ρ(ξ) is homogeneous of degree 1, and tends to |ξ| as ε tends to zero. The latter function is strictly convex, and hence ρ(ξ) must be strictly convex if ε is small enough. Moreover,

ρ(ξ)ρ(−ξ) = (ε²ξ1⁶ + |ξ|⁶)/|ξ|⁴ − ε²ξ1⁶/|ξ|⁴ = |ξ|⁶/|ξ|⁴ = |ξ|²

is a polynomial, and

ρ(ξ) − ρ(−ξ) = 2ε ξ1³/|ξ|²

is a rational function. But ρ(ξ) − ρ(−ξ) is not a linear function.
This example shows that we have to deduce more information about ρ from the assumptions of Theorem 3.1. Up to now we used properties of the determinant of a matrix. We shall now also use properties of the trace of a matrix.
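The claimed properties of this example are easy to verify numerically. The following sketch (with an arbitrary hypothetical small value of ε) checks homogeneity, the polynomial product, and the non-linear rational difference at random points:

```python
import math, random

eps = 0.05  # a hypothetical small value of epsilon

def rho(x1, x2):
    # rho(xi) = ((eps^2 xi1^6 + |xi|^6)^(1/2) + eps xi1^3) / |xi|^2
    n2 = x1 * x1 + x2 * x2
    return (math.sqrt(eps**2 * x1**6 + n2**3) + eps * x1**3) / n2

random.seed(0)
for _ in range(100):
    x1 = random.uniform(-2, 2)
    x2 = random.choice([-1, 1]) * random.uniform(0.5, 2)  # keep away from 0
    n2 = x1 * x1 + x2 * x2
    # rho(xi) rho(-xi) = |xi|^2, a polynomial
    assert abs(rho(x1, x2) * rho(-x1, -x2) - n2) < 1e-8
    # rho(xi) - rho(-xi) = 2 eps xi1^3 / |xi|^2, rational but not linear
    assert abs(rho(x1, x2) - rho(-x1, -x2) - 2 * eps * x1**3 / n2) < 1e-8
    # homogeneity of degree 1
    t = random.uniform(0.5, 3.0)
    assert abs(rho(t * x1, t * x2) - t * rho(x1, x2)) < 1e-8
```

Both identities are exact; the tolerances only absorb floating-point rounding.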

Lemma 6.1. Under the assumptions of Theorem 3.1, the trace of T̃, i.e. σ̃1(ω) = m(ρ(ω) − ρ(−ω)), must be a polynomial.

Proof. The idea is to compare two different expressions for the trace of T̃^k for large k. If k is even, the trace of T̃^k is equal to

m ρ(ω)^k + m(−ρ(−ω))^k = m ρ(ω)^k + m ρ(−ω)^k.

By Lemma 5.3, we know already that σ̃1 = m(ρ(ω) − ρ(−ω)) is a rational function. To shorten formulas we write for a moment ρ(−ω) = ρ̌ as before. Assume that

ρ − ρ̌ = r/s,

where r and s are polynomials without common factor and the degree of s is greater than or equal to 1. We will show that this assumption leads to a contradiction. Since ρρ̌ is a polynomial, which we denote by q, we can write

ρ² + ρ̌² = (ρ − ρ̌)² + 2ρρ̌ = r²/s² + 2q = q1/s²,

where q1 is a polynomial that lacks a common factor with s. Similarly,

ρ⁴ + ρ̌⁴ = (ρ² + ρ̌²)² − 2ρ²ρ̌² = q1²/s⁴ − 2q² = q2/s⁴,

where q2 is another polynomial without common factor with s. Continuing in this way, we obtain for arbitrary k of the form 2^ν that

ρ^k + ρ̌^k = qk/s^k, (6.1)

where qk is a polynomial without common factor with s. On the other hand, the trace of T̃^p is equal to the trace of S̃^p, and S̃^{2m+p} = Π2m+p Π0^{−1} for arbitrary p by (5.7). The trace of S̃^{2m+p} is equal to the coefficient of λ^{2m−1} in the polynomial

det(S̃^{2m+p} − λI) = det(Π2m+p Π0^{−1} − λI).

Here det Π0 is a homogeneous polynomial of degree d = 2m(2m − 1). Since the entries of Π2m+p are polynomials, the entries of the matrix Π2m+p Π0^{−1} are rational functions whose denominators have degree at most d. The important fact is that this number is independent of p. It follows that the trace of T̃^{2m+p} is a rational function whose denominator has degree ≤ d. Comparing this with (6.1), we have a contradiction and the lemma is proved.
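The growth of the denominators in (6.1) rests on the doubling identity ρ^{2k} + ρ̌^{2k} = (ρ^k + ρ̌^k)² − 2(ρρ̌)^k. A small exact-arithmetic sketch, with hypothetical rational numbers standing in for ρ and ρ̌ (so that ρ − ρ̌ has denominator 15), shows the denominator of ρ^k + ρ̌^k growing like s^k, which is the unbounded growth contradicted above:

```python
from fractions import Fraction

# hypothetical rational stand-ins for rho and rho-check
rho, rhoc = Fraction(7, 3), Fraction(-2, 5)
q = rho * rhoc                       # plays the role of the polynomial q
s = {1: rho + rhoc}                  # s_k = rho^k + rhoc^k; s_1 = 29/15
k = 1
for _ in range(4):                   # k = 2, 4, 8, 16 (the powers 2^nu)
    s[2 * k] = s[k] ** 2 - 2 * q ** k
    k *= 2
for k, val in s.items():
    assert val == rho ** k + rhoc ** k      # the doubling identity is exact
    assert val.denominator == 15 ** k       # denominator grows like s^k
```

In the proof the same computation is carried out with polynomials r, s, q, and the coprimality of qk and s is what keeps the denominator at exactly s^k.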

Rest of the proof of Theorem 3.1. By Lemma 6.1, we know that ρ − ρ̌ is a polynomial. But ρ and ρ̌ are restrictions to S^{n−1} of homogeneous functions on ℝ^n \ {0} of degree 1. Hence ρ − ρ̌ is a homogeneous polynomial of degree 1, in other words a linear function, say ρ − ρ̌ = a1ω1 + a2ω2 = a ⋅ ω. Multiplying by ρ, we obtain

ρ² − ρρ̌ = ρ a ⋅ ω.

Recalling that ρρ̌ is a quadratic polynomial, we now see that p = ρ(ω) and ω = (ω1, ω2) satisfy a quadratic equation as claimed.
Alternatively, we can observe that a translation of coordinates x ↦ x + a/2 changes ρ(ω) to ρ(ω) − ω ⋅ a/2 and ρ̌(ω) to ρ(−ω) + ω ⋅ a/2, so that ρ − ρ̌ will be replaced by

(ρ(ω) − ω ⋅ a/2) − (ρ(−ω) + ω ⋅ a/2) = ρ(ω) − ρ(−ω) − ω ⋅ a = ω ⋅ a − ω ⋅ a = 0.

Thus ρ(ω) = ρ(−ω), D = −D, and ρ(ω)² is a quadratic polynomial in a suitably chosen coordinate system, so the boundary of D is an ellipse. This completes the proof of Theorem 3.1 and hence of Theorem 1.1.
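As a concrete check of the final step, the support function of an ellipse does satisfy a quadratic equation of the predicted form p² + p q1(ω) + q2(ω) = 0, and ρ(ω) − ρ(−ω) is linear. The sketch below uses a hypothetical ellipse with semi-axes a, b and center c:

```python
import math, random

# hypothetical ellipse {x : ((x1-c1)/a)^2 + ((x2-c2)/b)^2 <= 1}
a, b = 2.0, 1.0
c = (0.4, -0.3)

def rho(w1, w2):
    # support function of the ellipse: rho(w) = c.w + sqrt(a^2 w1^2 + b^2 w2^2)
    return c[0] * w1 + c[1] * w2 + math.sqrt(a * a * w1 * w1 + b * b * w2 * w2)

random.seed(1)
for _ in range(100):
    t = random.uniform(0, 2 * math.pi)
    w1, w2 = math.cos(t), math.sin(t)
    p = rho(w1, w2)
    q1 = -2 * (c[0] * w1 + c[1] * w2)                           # linear in omega
    q2 = (c[0] * w1 + c[1] * w2) ** 2 - (a * a * w1 * w1 + b * b * w2 * w2)
    assert abs(p * p + p * q1 + q2) < 1e-9      # the quadratic equation holds
    # rho(omega) - rho(-omega) = 2 c.omega, a linear function
    assert abs(rho(w1, w2) - rho(-w1, -w2) - 2 * (c[0] * w1 + c[1] * w2)) < 1e-9
```

Translating by −c makes q1 vanish, which is exactly the symmetrization used in the alternative argument above.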

7 The support of a Radon transform

If Σ is a closed bounded subset of ℙ^n, then the complement ∁Σ contains a unique unbounded, connected component, which we shall denote by (∁Σ)∞. We shall also need an abbreviation for the complement of (∁Σ)∞, namely

F(Σ) = ∁(∁Σ)∞. (7.1)

This definition makes sense also if Σ is replaced by a subset of ℝ^n. For instance, if K is the union of the boundaries of two disjoint closed disks, then F(K) is the union of the disks. So, intuitively, the effect of the operation F is to fill the holes in K. It is obvious that F is a hull operation, Σ ⊂ F(Σ) = F(F(Σ)), and that Σ1 ⊂ Σ2 implies F(Σ1) ⊂ F(Σ2).
Moreover, for x ∈ ℝ^n we will denote by x̂ the set {L : x ∈ L} of hyperplanes that contain x, and similarly for a subset K ⊂ ℝ^n we define K̂ as the union of all x̂ for x ∈ K. Analogously, for L ∈ ℙ^n we could define L̂ as the subset of ℝ^n consisting of all x ∈ L, but we shall not need this notion here.
If K is convex (and bounded), then the complements of K and K̂ are of course connected, so (∁K)∞ = ∁K and (∁K̂)∞ = ∁K̂. For a general compact set K neither ∁K nor ∁K̂ is necessarily connected. But

(∁K̂)∞ = ∁(ch K)̂ (7.2)

because it is obvious that (∁K̂)∞ = (∁(ch K)̂)∞ and the latter is equal to ∁(ch K)̂. Here we have used the common notation ch K for the convex hull of K.
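The hull operation F can be illustrated on a pixel grid: the unbounded component of the complement is found by a flood fill from the border of a bounding box, and F(K) is everything the fill does not reach. This is only a discrete sketch (4-connectivity is assumed) of the topological definition (7.1):

```python
from collections import deque

def F(mask):
    # mask[r][c] is True on K; returns the grid version of F(K): fill the holes
    rows, cols = len(mask), len(mask[0])
    outside = [[False] * cols for _ in range(rows)]
    # seed the flood fill with every empty cell on the border ("infinity")
    dq = deque((r, c) for r in range(rows) for c in range(cols)
               if (r in (0, rows - 1) or c in (0, cols - 1)) and not mask[r][c])
    for r, c in dq:
        outside[r][c] = True
    while dq:
        r, c = dq.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and \
                    not outside[rr][cc] and not mask[rr][cc]:
                outside[rr][cc] = True
                dq.append((rr, cc))
    return [[not outside[r][c] for c in range(cols)] for r in range(rows)]

# K = boundary of a 3x3 square inside a 5x5 grid; F(K) is the filled square
K = [[(1 <= r <= 3 and 1 <= c <= 3 and not (r == 2 and c == 2))
      for c in range(5)] for r in range(5)]
FK = F(K)
assert all(FK[r][c] for r in range(1, 4) for c in range(1, 4))  # hole filled
assert not FK[0][0]                                             # outside stays out
assert F(FK) == FK                                              # hull operation
```

The last assertion mirrors the identity F(Σ) = F(F(Σ)) stated above.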

If f is a compactly supported distribution in ℝ^n, it is obvious that

F(supp Rf) ⊂ (ch supp f)̂, (7.3) where the right-hand side should be interpreted as ̂K with K = ch supp f. To verify (7.3) we note that if L belongs to the complement of the right-hand side, then first of all L ∉ supp Rf, but in addition L must belong to the unbounded component of the complement of the support of Rf . The next theorem asserts that there is actually equality in (7.3).

Theorem 7.1. For a compact subset Σ of ℙ^n we define the set F(Σ) by (7.1). If f is a compactly supported distribution in ℝ^n, then

F(supp Rf) = (ch supp f)̂.

This shows in particular that, up to possible “holes” in the support, we can characterize the subsets of ℙ^n that can be the support of the Radon transform of a compactly supported distribution.

Proof of Theorem 7.1. It remains to prove

F(supp Rf) ⊃ (ch supp f)̂. Taking complements, we get the equivalent inclusion

(∁(supp Rf))∞ ⊂ ∁K̂, (7.4)

where again K = ch supp f. To prove (7.4) take an arbitrary hyperplane L0 in the set on the left-hand side. By the definition of that set, there is a continuous path L(t), t ∈ [0, 1], L(0) = L0, L(1) = L1, inside the open set of hyperplanes in the complement of supp Rf that connects L0 to “infinity”, and hence in particular to a hyperplane L1 that does not intersect the convex hull of the support of f. Clearly, f vanishes in some neighborhood of L1. By the choice of the path L(t), we know that Rf(L) vanishes in a neighborhood of L(t) for each t. By continued use of the local unique continuation property of solutions to the equation Rf(L) = 0 as given by Strichartz [6], we can now infer that f must vanish in the union of all L(t) (for an example of this kind of application of the Strichartz theorem, see, e.g., [1, Theorem 3.1]). This shows that L0 does not meet the convex hull of the support of f, which completes the proof.

Acknowledgment: The author is grateful to Rikard Bögvad for helpful discussions on the contents of Section 4.

References

[1] J. Boman, A local uniqueness theorem for weighted Radon transforms, Inverse Probl. Imaging 4 (2010), 631–637.
[2] J. Boman, A hypersurface containing the support of a Radon transform must be an ellipsoid. I: The symmetric case, J. Geom. Anal. (2020), DOI 10.1007/s12220-020-00372-8.
[3] S. Helgason, The Radon Transform, Birkhäuser, Boston, 1980.
[4] L. Hörmander, The Analysis of Linear Partial Differential Operators. I, Springer, Berlin, 1983.
[5] F. Natterer, The Mathematics of Computerized Tomography, Teubner, Stuttgart, 1986.
[6] R. S. Strichartz, Radon inversion–variations on a theme, Amer. Math. Monthly 89 (1982), 377–384.