
"Multispectral Images Denoising by Intrinsic Tensor Sparsity Regularization": Supplementary Material

Qi Xie^1, Qian Zhao^1, Deyu Meng^1,*, Zongben Xu^1, Shuhang Gu^2, Wangmeng Zuo^3, Lei Zhang^2
^1 Xi'an Jiaotong University; ^2 The Hong Kong Polytechnic University; ^3 Harbin Institute of Technology
* Corresponding author.

Abstract

In this supplementary material, we provide the proofs of Theorems 1 and 2 presented in the main text. We also present more clarifications on the parameter settings used in our experiments. Furthermore, we show more experimental results to further substantiate the effectiveness of our method.

1. Proofs of Theorems 1 and 2

We first give the following lemma [4].

Lemma 1 (von Neumann's trace inequality). For any $m \times n$ matrices $A$ and $B$, let $\sigma(A) = [\sigma_1(A), \sigma_2(A), \ldots, \sigma_r(A)]^T$ and $\sigma(B) = [\sigma_1(B), \sigma_2(B), \ldots, \sigma_r(B)]^T$, with $r = \min(m, n)$, denote their vectors of singular values. Then

$$\mathrm{tr}(A^T B) \le \sigma(A)^T \sigma(B).$$

The equality is achieved when there exist unitary matrices $U$ and $V$ such that $A = U \Sigma_A V^T$ and $B = U \Sigma_B V^T$ are the SVDs of $A$ and $B$, respectively.

Based on the result of Lemma 1, we can deduce the following theorem.

Theorem 1. For any $A \in \mathbb{R}^{m \times n}$, the following problem:

$$\max_{U^T U = I} \; \langle A, U \rangle, \qquad (1)$$

has the closed-form solution $\hat{U} = BC^T$, where $A = BDC^T$ denotes the condensed SVD of $A$.

Proof. Without loss of generality, we assume $m > r$. For any $U \in \mathbb{R}^{m \times n}$ satisfying $U^T U = I$, it is easy to see that all of its singular values are 1. Thus, the condensed SVD of $U$ can be written as

$$U = \hat{B} I_{r \times r} \hat{C}^T,$$

where $\hat{B} \in \mathbb{R}^{m \times r}$, $\hat{C} \in \mathbb{R}^{r \times r}$, and $I_{r \times r} \in \mathbb{R}^{r \times r}$ is the identity matrix. By von Neumann's trace inequality, we can easily deduce that $\langle A, U \rangle = \mathrm{tr}(A^T U)$ attains its upper bound when $\hat{B} = B$ and $\hat{C} = C$. We can then obtain $\hat{U} = BC^T$.

When $m \le r$, we can obtain the result in a similar way. $\square$

Before proving Theorem 2, we first give two lemmas. It should be mentioned that Gong et al. [2] proved a result similar to Lemma 2, but in a less rigorous way.

Lemma 2. Let $\lambda > 0$ and $0 < \varepsilon < \min\left(\sqrt{\lambda}, \lambda/|y|\right)$. The following problem:

$$\min_x \; f(x) = \lambda \log(|x| + \varepsilon) + \frac{1}{2}(x - y)^2 \qquad (2)$$

has a local minimum

$$D_{\lambda,\varepsilon}(y) = \begin{cases} 0, & \text{if } c_2 \le 0, \\ \mathrm{sign}(y)\,\dfrac{c_1 + \sqrt{c_2}}{2}, & \text{if } c_2 > 0, \end{cases} \qquad (3)$$

where $c_1 = |y| - \varepsilon$ and $c_2 = c_1^2 - 4(\lambda - \varepsilon|y|)$.

Proof. We first consider the situation that $y \ge 0$. It is easy to see that $f(x) \ge f(0)$ when $x < 0$.

When $x \ge 0$, $f(x)$ is differentiable, and by rearranging $f'(x) = 0$ we get

$$\frac{\lambda}{x + \varepsilon} + x - y = 0,$$

which is equivalent to

$$(x + \varepsilon) f'(x) = x^2 + (\varepsilon - y)x + \lambda - \varepsilon y = 0. \qquad (4)$$

Thus we have:

1) If $c_2 < 0$, (4) has no real solution, and it is easy to see that $f'(x) > 0$ for $x \in (0, +\infty)$. Since $f(x)$ is a continuous function, its global minimum is attained at $x = 0$.

2) If $c_2 = 0$, (4) has the single solution $x = \frac{c_1}{2}$, and it is easy to see that $f'(x) \ge 0$ for $x \in (0, +\infty)$. Since $f(x)$ is a continuous function, its global minimum is attained at $x = 0$.

3) If $c_2 > 0$, (4) has two solutions, $x_1 = \frac{c_1 - \sqrt{c_2}}{2}$ and $x_2 = \frac{c_1 + \sqrt{c_2}}{2}$. Besides, since $0 < \varepsilon < \min\left(\sqrt{\lambda}, \lambda/y\right)$, we can also obtain that

$$
\begin{aligned}
c_2 = (y - \varepsilon)^2 - 4(\lambda - \varepsilon y) > 0
&\;\Leftrightarrow\; y^2 - 2\varepsilon y + \varepsilon^2 > 4\lambda - 4\varepsilon y \\
&\;\Leftrightarrow\; (y + \varepsilon)^2 > 4\lambda \\
&\;\Rightarrow\; y - \varepsilon > 2\sqrt{\lambda} - 2\varepsilon > 0 \\
&\;\Rightarrow\; c_1 > 0. \qquad (5)
\end{aligned}
$$

Moreover, $\lambda - \varepsilon y > 0$ gives $c_1 > \sqrt{c_2}$, so $x_2 > x_1 > 0$, and it is then easy to obtain that $f'(x) > 0$ when $0 < x < x_1$ or $x > x_2$, and $f'(x) < 0$ when $x_1 < x < x_2$. Therefore, $x = x_2$ is the only local minimum of $f(x)$ over $(0, +\infty)$.

For $y < 0$, we can prove the result in a similar way. Therefore, the lemma holds. $\square$
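The closed form in Lemma 2 is easy to check numerically. The following Python/NumPy sketch (not part of the original supplementary material; the function name `D_log_eps` and all parameter values are our own choices for illustration) evaluates $D_{\lambda,\varepsilon}(y)$ and verifies that, whenever it is nonzero, it is a stationary point of $f$ with non-negative curvature, i.e., a local minimum.

```python
import numpy as np

def D_log_eps(y, lam, eps):
    """Closed-form local minimizer of f(x) = lam*log(|x|+eps) + 0.5*(x-y)^2, as in Eq. (3)."""
    c1 = abs(y) - eps
    c2 = c1 ** 2 - 4.0 * (lam - eps * abs(y))
    if c2 <= 0:
        return 0.0
    return float(np.sign(y)) * (c1 + np.sqrt(c2)) / 2.0

# Check: a nonzero output is a stationary point of f with non-negative curvature.
rng = np.random.default_rng(0)
lam = 0.5
for _ in range(1000):
    y = rng.uniform(-5.0, 5.0)
    if abs(y) < 1e-3:
        continue
    eps = 0.5 * min(np.sqrt(lam), lam / abs(y))   # respects 0 < eps < min(sqrt(lam), lam/|y|)
    x_hat = D_log_eps(y, lam, eps)
    if x_hat != 0.0:
        grad = lam * np.sign(x_hat) / (abs(x_hat) + eps) + (x_hat - y)   # f'(x_hat)
        curv = 1.0 - lam / (abs(x_hat) + eps) ** 2                       # f''(x_hat)
        assert abs(grad) < 1e-8 and curv >= -1e-8
```

With such parameter choices, the zero branch ($c_2 \le 0$) fires for small $|y|$ and the shrinkage branch for large $|y|$, which is the behavior exploited when the operator is applied to singular values in Theorem 2 below.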
Lemma 3. Given $Y \in \mathbb{R}^{m \times n}$, $m \ge n$, let $Y = U \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n) V^T$ be the SVD of $Y$, and define $d_i$ as the $i$-th singular value of $X$. The optimum of the following problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \; g(d_1, d_2, \ldots, d_n) + \frac{1}{2}\|X - Y\|_F^2 \qquad (6)$$

can be expressed as $\hat{X} = U \mathrm{diag}(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n) V^T$, where $(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n)$ is the optimum of the following optimization problem:

$$\min_{d_1, d_2, \ldots, d_n} \; \frac{1}{2}\sum_{i=1}^{n} (\sigma_i - d_i)^2 + g(d_1, d_2, \ldots, d_n), \quad \text{s.t. } d_1 \ge d_2 \ge \cdots \ge d_n \ge 0. \qquad (7)$$

Proof. For any $X \in \mathbb{R}^{m \times n}$, denote $\bar{U}\bar{D}\bar{V}^T$ as the SVD of $X$, where $\bar{D} = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ is the singular value matrix, with $d_1 \ge d_2 \ge \cdots \ge d_n \ge 0$. Based on the property of the Frobenius norm, we have

$$
\begin{aligned}
g(d_1, d_2, \ldots, d_n) + \frac{1}{2}\|X - Y\|_F^2
&= \frac{1}{2}\|Y\|_F^2 - \mathrm{Tr}\left(Y^T X\right) + \frac{1}{2}\|X\|_F^2 + g(d_1, d_2, \ldots, d_n) \\
&= -\mathrm{Tr}\left(Y^T X\right) + \frac{1}{2}\sum_{i=1}^{n}\left(\sigma_i^2 + d_i^2\right) + g(d_1, d_2, \ldots, d_n).
\end{aligned}
$$

Therefore, (6) is equivalent to

$$
\begin{aligned}
&\min_{\bar{U}, \bar{D}, \bar{V}} \; -\mathrm{Tr}\left(Y^T X\right) + \frac{1}{2}\sum_{i=1}^{n}\left(\sigma_i^2 + d_i^2\right) + g(d_1, d_2, \ldots, d_n) \\
\Leftrightarrow\;& \min_{\bar{D}} \left\{ -\max_{\bar{U}, \bar{V}} \mathrm{Tr}\left(Y^T X\right) + \frac{1}{2}\sum_{i=1}^{n}\left(\sigma_i^2 + d_i^2\right) + g(d_1, d_2, \ldots, d_n) \right\}.
\end{aligned}
$$

Based on von Neumann's trace inequality, we know that $\mathrm{Tr}(Y^T X)$ achieves its upper bound $\sum_{i=1}^{n} \sigma_i d_i$ if $\bar{U} = U$ and $\bar{V} = V$. Thus (6) is equivalent to

$$
\begin{aligned}
&\min_{\bar{D}} \; \frac{1}{2}\sum_{i=1}^{n}\left(\sigma_i^2 - 2\sigma_i d_i + d_i^2\right) + g(d_1, d_2, \ldots, d_n) \\
\Leftrightarrow\;& \min_{d_1, d_2, \ldots, d_n} \; \frac{1}{2}\sum_{i=1}^{n}(\sigma_i - d_i)^2 + g(d_1, d_2, \ldots, d_n), \quad \text{s.t. } d_1 \ge d_2 \ge \cdots \ge d_n \ge 0.
\end{aligned}
$$

From the above derivation, we can see that the optimal solution to (6) is

$$\hat{X} = U \mathrm{diag}(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n) V^T,$$

where $(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n)$ is the solution to (7). The proof is then completed. $\square$

Based on the above two lemmas, we can now present the following theorem.

Theorem 2. Given $Y \in \mathbb{R}^{m \times n}$, $m \ge n$, let $Y = U \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n) V^T$ be the SVD of $Y$. Let $\lambda > 0$, $0 < \varepsilon < \min\left(\sqrt{\lambda}, \lambda/\sigma_1\right)$, and define $d_i$ as the $i$-th singular value of $X$. The solution of the following problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \; \sum_{i=1}^{n} \lambda \log(d_i + \varepsilon) + \frac{1}{2}\|X - Y\|_F^2 \qquad (8)$$

can then be expressed as $\hat{X} = U \mathrm{diag}(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n) V^T$, where $\hat{d}_i = D_{\lambda,\varepsilon}(\sigma_i)$, $i = 1, 2, \ldots, n$.

Proof. According to Lemma 3, the optimum of (8) can be expressed as $\hat{X} = U \mathrm{diag}(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n) V^T$, where $(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n)$ is the optimum of the following optimization problem:

$$\min_{d_1, d_2, \ldots, d_n} \; \sum_{i=1}^{n} \left[\frac{1}{2}(\sigma_i - d_i)^2 + \lambda \log(d_i + \varepsilon)\right], \quad \text{s.t. } d_1 \ge d_2 \ge \cdots \ge d_n \ge 0. \qquad (9)$$

We first consider the following unconstrained problem:

$$\min_{d_1, d_2, \ldots, d_n} \; \sum_{i=1}^{n} \left[\frac{1}{2}(\sigma_i - d_i)^2 + \lambda \log(d_i + \varepsilon)\right]. \qquad (10)$$

According to Lemma 2, a local optimum of (10) is $\bar{d}_i = D_{\lambda,\varepsilon}(\sigma_i)$, and it is easy to see that $\bar{d}_1 \ge \bar{d}_2 \ge \cdots \ge \bar{d}_n \ge 0$ holds. Hence $\hat{d}_i = \bar{d}_i$, $i = 1, 2, \ldots, n$, corresponds to a local optimum of (9). The proof is completed. $\square$
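In effect, Theorem 2 reduces problem (8) to applying the scalar operator $D_{\lambda,\varepsilon}$ of Lemma 2 to the singular values of $Y$. Below is a minimal Python/NumPy sketch of this construction (again not part of the original material; the function names and the test matrix are our own), together with a check of the per-singular-value first-order conditions of (10) at the returned solution.

```python
import numpy as np

def D_log_eps(sigma, lam, eps):
    """Operator of Lemma 2, applied elementwise to nonnegative singular values."""
    c1 = sigma - eps
    c2 = c1 ** 2 - 4.0 * (lam - eps * sigma)
    return np.where(c2 > 0, (c1 + np.sqrt(np.maximum(c2, 0.0))) / 2.0, 0.0)

def logdet_svt(Y, lam, eps):
    """Closed-form (local) minimizer of sum_i lam*log(d_i+eps) + 0.5*||X-Y||_F^2 (Theorem 2)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)   # economy SVD, s sorted in descending order
    d = D_log_eps(s, lam, eps)
    return U @ np.diag(d) @ Vt, s, d

# Demonstration on a noisy low-rank matrix.
rng = np.random.default_rng(0)
m, n, rank = 40, 30, 5
Y = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n)) \
    + 0.1 * rng.standard_normal((m, n))
lam = 1.0
sigma1 = np.linalg.svd(Y, compute_uv=False)[0]
eps = 0.5 * min(np.sqrt(lam), lam / sigma1)            # respects 0 < eps < min(sqrt(lam), lam/sigma_1)
X_hat, s, d = logdet_svt(Y, lam, eps)

# First-order conditions of the separable problem (10):
# shrunk values are stationary points; zeroed values have a non-negative right derivative at 0.
kept = d > 0
assert np.all(np.abs(lam / (d[kept] + eps) + d[kept] - s[kept]) < 1e-8)
assert np.all(lam / eps - s[~kept] >= 0)
print("retained singular values:", int(kept.sum()), "of", n)
```

Small, noise-level singular values fall into the $c_2 \le 0$ branch and are set to zero, while large ones are only mildly shrunk, which is exactly the low-rank-promoting behavior of the log-based regularizer used in the main text.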
2. Parameter setting details

In our experiments we set $\beta = c v^{-1}$, and in this section we provide clarifications on this parameter setting strategy. Let us first consider the following matrix-based proximal problem:

$$\hat{X} = \arg\min_{X \in \mathbb{R}^{m \times n}} \; \sum_i \frac{1}{\beta} \log(|d_i| + \varepsilon) + \frac{1}{2}\|X - Y\|_F^2, \qquad (11)$$

where $d_i$ is the $i$-th singular value of $X$. Based on Theorem 2, we can define $\hat{X} = U\hat{D}V^T$ as the SVD of $\hat{X}$ and $Y = U\Sigma V^T$ as the SVD of $Y$, where $\hat{D} = \mathrm{diag}(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_n)$ and $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n)$. Approximately letting $\varepsilon = 0$, we then have

$$
\begin{aligned}
\langle \hat{X}, Y - \hat{X} \rangle
&= \mathrm{Tr}\left(\hat{X}^T (Y - \hat{X})\right) \\
&= \mathrm{Tr}\left(\left(U\hat{D}V^T\right)^T U\left(\Sigma - \hat{D}\right)V^T\right) \\
&= \sum_{i=1}^{r} \hat{d}_i\left(\sigma_i - \hat{d}_i\right) \\
&= \sum_{i=1}^{r} \left(\frac{\sigma_i + \sqrt{\sigma_i^2 - 4\beta^{-1}}}{2}\right) \left(\frac{\sigma_i - \sqrt{\sigma_i^2 - 4\beta^{-1}}}{2}\right) \\
&= \sum_{i=1}^{r} \frac{\sigma_i^2 - \left(\sigma_i^2 - 4\beta^{-1}\right)}{4} \\
&= r\beta^{-1},
\end{aligned}
$$

where $r$ denotes the number of nonzero singular values of $\hat{X}$. (A numerical check of this identity is sketched at the end of this document.)

To further depict the denoising performance of our method, we show in Figs. 1-3 two bands of several MSIs, centered at 400nm (the darker one) and 700nm (the brighter one), respectively. From the figures, it is easy to observe that the proposed method evidently performs better than the other competing ones, in the recovery of both finer-grained textures and coarser-grained structures. In particular, when the band energy is low, most competing methods begin to fail, while our method still performs consistently well in such harder cases.

3.2. More real MSI denoising demonstration

The natural scene data set: We have used a natural scene data set [1], which contains MSIs captured from real-world scenes, in our real data experiment. This dataset comprises 15 rural scenes (containing rocks, trees, leaves, grass, earth, etc.) and 15 urban scenes (containing walls, roofs, windows, plants, indoor scenes, etc.).
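Referring back to the derivation in Section 2, the identity $\langle \hat{X}, Y - \hat{X} \rangle = r\beta^{-1}$ can also be verified numerically. The sketch below (not part of the original material; all names and the choice of $\beta$ are ours) sets $\varepsilon = 0$ and picks $\beta$ large enough that no singular value is thresholded to zero, so that $r$ equals the full number of singular values of $Y$.

```python
import numpy as np

def logdet_prox_eps0(Y, beta):
    """Solution of (11) with eps = 0: each singular value sigma_i of Y is shrunk to
    (sigma_i + sqrt(sigma_i^2 - 4/beta)) / 2 if sigma_i^2 > 4/beta, and set to 0 otherwise."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    c2 = s ** 2 - 4.0 / beta
    d = np.where(c2 > 0, (s + np.sqrt(np.maximum(c2, 0.0))) / 2.0, 0.0)
    return U @ np.diag(d) @ Vt, d

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 20))
sigma_min = np.linalg.svd(Y, compute_uv=False)[-1]
beta = 8.0 / sigma_min ** 2                 # ensures sigma_i^2 > 4/beta for every i
X_hat, d = logdet_prox_eps0(Y, beta)

r = int(np.count_nonzero(d))                # number of retained singular values (here r = 20)
lhs = float(np.sum(X_hat * (Y - X_hat)))    # <X_hat, Y - X_hat> = Tr(X_hat^T (Y - X_hat))
assert abs(lhs - r / beta) < 1e-8 * max(1.0, abs(lhs))
print(lhs, r / beta)
```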