Does Distributionally Robust Supervised Learning Give Robust Classifiers?

A. Proof of Theorem 1

Given $\theta$ (the parameter of the prediction function), we obtain the adversarial risk by the following optimization:
$$
\max_{r \in \mathcal{U}_f} \mathbb{E}_{p(x,y)}\left[ r(x,y)\, \ell(g_\theta(x), y) \right], \tag{24}
$$
where
$$
\mathcal{U}_f \equiv \left\{ r(x,y) \;\middle|\; \mathbb{E}_{p(x,y)}[f(r(x,y))] \le \delta,\ \mathbb{E}_{p(x,y)}[r(x,y)] = 1,\ r(x,y) \ge 0\ (\forall (x,y) \in \mathcal{X} \times \mathcal{Y}) \right\}. \tag{25}
$$

Here, we consider $\ell(\cdot,\cdot)$ to be the 0-1 loss. Let $\Omega_\theta^{(0)} \equiv \{(x,y) \mid \ell(g_\theta(x),y) = 0\} \subseteq \mathcal{X} \times \mathcal{Y}$. Then, we have $\Omega_\theta^{(1)} \equiv \{(x,y) \mid \ell(g_\theta(x),y) = 1\} = \mathcal{X} \times \mathcal{Y} \setminus \Omega_\theta^{(0)}$. Let $r^*(\cdot,\cdot)$ be the optimal solution of Eq. (24).

Note that Eq. (24) is a convex optimization problem: the objective is linear in $r(\cdot,\cdot)$, and the uncertainty set is convex, which follows from the fact that $f(\cdot)$ is convex. Therefore, any local maximum of Eq. (24) is the global maximum. Nonetheless, there can be multiple solutions that attain the same global maximum. Among those solutions, we now show that there exists $r^*(\cdot,\cdot)$ that takes constant values within $\Omega_\theta^{(0)}$ and $\Omega_\theta^{(1)}$, respectively, i.e.,
$$
r^*(x,y) = r_0^* \quad \forall (x,y) \in \Omega_\theta^{(0)}, \qquad r^*(x,y) = r_1^* \quad \forall (x,y) \in \Omega_\theta^{(1)}, \tag{26}
$$
where $r_0^*$ and $r_1^*$ are some constant values. This is because for any given optimal solution $r'^*(\cdot,\cdot)$ of Eq. (24), we can always obtain an optimal solution $r^*(\cdot,\cdot)$ that satisfies Eq. (26) by the following transformation:
$$
r_0^* = \mathbb{E}_{p(x,y)}\left[ r'^*(x,y) \cdot \mathbf{1}\{(x,y) \in \Omega_\theta^{(0)}\} \right] / \mathbb{E}_{p(x,y)}\left[ \mathbf{1}\{(x,y) \in \Omega_\theta^{(0)}\} \right], \tag{27}
$$
$$
r_1^* = \mathbb{E}_{p(x,y)}\left[ r'^*(x,y) \cdot \mathbf{1}\{(x,y) \in \Omega_\theta^{(1)}\} \right] / \mathbb{E}_{p(x,y)}\left[ \mathbf{1}\{(x,y) \in \Omega_\theta^{(1)}\} \right], \tag{28}
$$
where $\mathbf{1}\{\cdot\}$ is an indicator function. Eqs. (27) and (28) are simple averaging operations of $r'^*(\cdot,\cdot)$ on the regions $\Omega_\theta^{(0)}$ and $\Omega_\theta^{(1)}$, respectively. Utilizing the convexity of $f(\cdot)$ (via Jensen's inequality, within-region averaging cannot increase $\mathbb{E}_{p(x,y)}[f(r)]$), it is straightforward to see that $r^*(\cdot,\cdot)$ constructed in this way is still in the feasible region of Eq. (25). It also attains exactly the same objective value in Eq. (24) as the original solution $r'^*(\cdot,\cdot)$ does.
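The averaging argument behind Eqs. (27) and (28) can be checked numerically. The sketch below is an illustration, not part of the proof: it assumes the KL generator $f(r) = r \log r$ and synthetic 0-1 losses, averages a feasible weight function within the two regions, and confirms that the $f$-divergence does not increase while $\mathbb{E}[r]$ and the objective are preserved.

```python
import math
import random

# Illustrative check (assumed generator f(r) = r*log(r), i.e. KL; synthetic
# data): averaging a feasible weight function r separately over the
# correctly classified region Omega^(0) and the misclassified region
# Omega^(1), as in Eqs. (27)-(28), cannot increase E[f(r)] by Jensen's
# inequality, and leaves E[r] and the objective E[r * loss] unchanged.
random.seed(0)

def f(r):
    return r * math.log(r) if r > 0 else 0.0

n = 1000
loss = [1.0 if random.random() < 0.3 else 0.0 for _ in range(n)]  # 0-1 losses
r = [random.gammavariate(2.0, 0.5) for _ in range(n)]
mean_r = sum(r) / n
r = [ri / mean_r for ri in r]                                     # enforce E[r] = 1

# Within-region averages, mirroring Eqs. (27)-(28).
r1_avg = sum(ri for ri, li in zip(r, loss) if li == 1) / sum(loss)
r0_avg = sum(ri for ri, li in zip(r, loss) if li == 0) / (n - sum(loss))
r_avg = [r1_avg if li == 1 else r0_avg for li in loss]

Ef_before = sum(map(f, r)) / n
Ef_after = sum(map(f, r_avg)) / n
obj_before = sum(ri * li for ri, li in zip(r, loss)) / n
obj_after = sum(ri * li for ri, li in zip(r_avg, loss)) / n

assert Ef_after <= Ef_before + 1e-12          # Jensen: divergence does not grow
assert math.isclose(obj_before, obj_after)    # objective of Eq. (24) preserved
assert math.isclose(sum(r_avg) / n, 1.0)      # E[r] = 1 preserved
```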
This concludes our proof that there exists an optimal solution of Eq. (24) that takes constant values within $\Omega_\theta^{(0)}$ and $\Omega_\theta^{(1)}$, respectively.

Let $p_{\Omega_\theta^{(i)}} = \mathbb{E}_{p(x,y)}[\mathbf{1}\{(x,y) \in \Omega_\theta^{(i)}\}]$ for $i = 0, 1$, which is the proportion of data that is correctly and incorrectly classified, respectively. We note that $p_{\Omega_\theta^{(0)}} + p_{\Omega_\theta^{(1)}} = 1$. Also, $p_{\Omega_\theta^{(1)}}$ is by definition the misclassification rate; thus, it is equal to the ordinary risk, i.e., $\mathbb{E}_{p(x,y)}[\ell(g_\theta(x),y)]$. By using Eq. (26), we can simplify Eq. (24) as
$$
\sup_{(r_0, r_1) \in \mathcal{U}'_f} p_{\Omega_\theta^{(1)}}\, r_1, \tag{29}
$$
where
$$
\mathcal{U}'_f \equiv \left\{ (r_0, r_1) \;\middle|\; p_{\Omega_\theta^{(0)}} f(r_0) + p_{\Omega_\theta^{(1)}} f(r_1) \le \delta,\ p_{\Omega_\theta^{(0)}} r_0 + p_{\Omega_\theta^{(1)}} r_1 = 1,\ r_0 \ge 0,\ r_1 \ge 0 \right\}. \tag{30}
$$

In the following, we show that Eq. (29) has a monotonic relationship with $p_{\Omega_\theta^{(1)}}$. With a fixed value of $p_{\Omega_\theta^{(1)}}$, we can obtain the optimal $(r_0, r_1)$ by solving Eq. (29). Let $(r_0^*(p_1), r_1^*(p_1))$ be the solution of Eq. (29) when $p_{\Omega_\theta^{(1)}}$ is fixed to $p_1$ with $0 \le p_1 \le 1$.

First, we note that the first inequality constraint in Eq. (30) defines a convex set that includes $(1, 1)$ in its relative interior, because $p_{\Omega_\theta^{(0)}} f(1) + p_{\Omega_\theta^{(1)}} f(1) = 0 < \delta$ and $p_{\Omega_\theta^{(0)}} \cdot 1 + p_{\Omega_\theta^{(1)}} \cdot 1 = 1$. Note that in a two-dimensional space, the number of intersections between a line and the boundary of a convex set is at most two. For $\delta > 0$, there are always exactly two different points that satisfy both $p_{\Omega_\theta^{(0)}} f(r_0) + p_{\Omega_\theta^{(1)}} f(r_1) = \delta$ and $p_{\Omega_\theta^{(0)}} r_0 + p_{\Omega_\theta^{(1)}} r_1 = 1$. We further see that the optimal $r_1$ is always greater than 1 because the objective in Eq. (29) is an increasing function of $r_1$. Taking these facts into account, we see that the optimal solution $(r_0^*(p_1), r_1^*(p_1))$ satisfies either of the following two cases, depending on whether the inequality constraint $r_0 \ge 0$ in Eq. (30) is active or not:
$$
\text{Case 1:} \quad p_0 \cdot f(r_0^*(p_1)) + p_1 \cdot f(r_1^*(p_1)) = \delta, \quad p_0 \cdot r_0^*(p_1) + p_1 \cdot r_1^*(p_1) = 1, \quad 0 < r_0^*(p_1) < 1 < r_1^*(p_1), \tag{31}
$$
$$
\text{Case 2:} \quad r_0^*(p_1) = 0, \quad r_1^*(p_1) = \frac{1}{p_1}, \tag{32}
$$
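The reduced two-variable problem in Eq. (29) can be solved numerically. The sketch below is an illustration under assumed choices not fixed by the proof (the KL generator $f(r) = r \log r$ and $\delta = 0.1$): it finds $r_1^*(p_1)$ by bisection along the line $p_0 r_0 + p_1 r_1 = 1$, distinguishes Case 2 from Case 1 by checking the $r_0 = 0$ endpoint, and confirms that the adversarial risk $p_1 \cdot r_1^*(p_1)$ increases with the ordinary risk $p_1$, as Eq. (9) asserts.

```python
import math

# Illustrative solver for Eq. (29) (assumed generator f(r) = r*log(r)).
def f(r):
    return r * math.log(r) if r > 0 else 0.0

def adversarial_risk(p1, delta, tol=1e-12):
    p0 = 1.0 - p1

    def div(r1):
        # f-divergence along the line p0*r0 + p1*r1 = 1 of Eq. (30)
        r0 = (1.0 - p1 * r1) / p0
        return p0 * f(r0) + p1 * f(r1)

    hi = 1.0 / p1                 # endpoint with r0 = 0
    if div(hi) <= delta:          # Case 2: the constraint r0 >= 0 is active
        return 1.0
    lo = 1.0                      # Case 1: div is increasing on (1, 1/p1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if div(mid) <= delta:
            lo = mid
        else:
            hi = mid
    return p1 * lo                # adversarial risk p1 * r1*(p1)

delta = 0.1
p1_grid = (0.1, 0.2, 0.3, 0.4)
risks = [adversarial_risk(p1, delta) for p1 in p1_grid]
# Eq. (9): the adversarial risk increases strictly with the ordinary risk.
assert all(a < b for a, b in zip(risks, risks[1:]))
# In Case 1 the risk exceeds p1 (since r1* > 1) but stays below 1.
assert all(p < r < 1 for p, r in zip(p1_grid, risks))
```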
where $p_0 = 1 - p_1$. We now show that there is a monotonic relation between $p_{\Omega_\theta^{(1)}}$ and Eq. (29) in both cases. To this end, pick any $p_1'$ such that $p_1 < p_1' \le 1$, and let $(r_0^*(p_1'), r_1^*(p_1'))$ be the solution of Eq. (29) when $p_{\Omega_\theta^{(1)}}$ is fixed to $p_1'$.

Regarding Case 2 in Eq. (32), the adversarial risk in Eq. (29) is $p_1 \cdot \frac{1}{p_1} = 1$. On the other hand, it is easy to see that the constraint $r_0 \ge 0$ in Eq. (30) stays active for $p_{\Omega_\theta^{(1)}}$ larger than $p_1$. Hence, we can show that $r_0^*(p_1') = 0$ and $r_1^*(p_1') = \frac{1}{p_1'}$. Therefore, the adversarial risk in Eq. (29) stays $p_1' \cdot \frac{1}{p_1'} = 1$. This concludes our proof of Eq. (10) in Theorem 1.

Regarding Case 1 in Eq. (31), we note that both the ordinary risk $p_1$ and the adversarial risk $p_1 \cdot r_1^*(p_1)$ are strictly less than 1. Our goal is to show Eq. (9) in Theorem 1, which is equivalent to showing
$$
p_1 \cdot r_1^*(p_1) < p_1' \cdot r_1^*(p_1'). \tag{33}
$$
To do so, we further consider the following two sub-cases of Case 1 in Eq. (31):
$$
\text{Case 1-a:} \quad p_1' < p_1 \cdot r_1^*(p_1), \tag{34}
$$
$$
\text{Case 1-b:} \quad p_1' \ge p_1 \cdot r_1^*(p_1). \tag{35}
$$

In Case 1-b, we can straightforwardly show Eq. (33) as follows:
$$
p_1 \cdot r_1^*(p_1) \le p_1' < p_1' \cdot r_1^*(p_1'), \tag{36}
$$
where the last inequality follows from $1 < r_1^*(p_1')$.

Now, assume Cases 1 and 1-a in Eqs. (31) and (34). Our goal is to show that $r_1^*(p_1')$ satisfies $r_1^*(p_1') > \frac{p_1}{p_1'} r_1^*(p_1)$, because then Eq. (33) holds. To this end, we show that
$$
r_1' = \frac{p_1}{p_1'} \cdot r_1^*(p_1) \tag{37}
$$
is contained in the relative interior (excluding the boundary) of Eq. (30) with $p_{\Omega_\theta^{(1)}}$ fixed to $p_1'$. Then, because the objective in Eq. (29) is linear in $r_1$, $r_1' < r_1^*(p_1')$ holds in our setting, and we arrive at $r_1^*(p_1') > \frac{p_1}{p_1'} \cdot r_1^*(p_1)$. Formally, our goal is to show that $r_1'$ in Eq. (37) satisfies
$$
p_0' \cdot f(r_0') + p_1' \cdot f(r_1') < \delta, \quad p_0' r_0' + p_1' r_1' = 1, \quad r_0' > 0, \quad r_1' > 0, \tag{38}
$$
where $p_0' = 1 - p_1'$. By Eq. (31), Eq. (37) and the second equality of Eq. (38), we have
$$
r_0' = \frac{1 - p_1' r_1'}{p_0'} = \frac{1 - p_1 \cdot r_1^*(p_1)}{p_0'} = \frac{p_0}{p_0'} \cdot r_0^*(p_1). \tag{39}
$$

The latter two inequalities of Eq. (38), i.e., $r_0' > 0$ and $r_1' > 0$, follow straightforwardly from the assumptions. Combining the assumption in Eq. (34) and the last inequality in Eq. (31), we have the following inequality:
$$
0 < r_0^*(p_1) < r_0' < 1 < r_1' < r_1^*(p_1). \tag{40}
$$
Thus, we can write $r_0'$ (resp. $r_1'$) as a linear interpolation of $r_0^*(p_1)$ and 1 (resp. 1 and $r_1^*(p_1)$) as follows:
$$
r_0' = \alpha \cdot r_0^*(p_1) + (1 - \alpha) \cdot 1, \qquad r_1' = \beta \cdot r_1^*(p_1) + (1 - \beta) \cdot 1, \tag{41}
$$
where $0 < \alpha, \beta < 1$. Substituting Eqs. (37) and (39), we have
$$
\alpha = \frac{p_0' - p_0 \cdot r_0^*(p_1)}{(1 - r_0^*(p_1)) \cdot p_0'}, \tag{42}
$$
$$
\beta = \frac{p_1 \cdot r_1^*(p_1) - p_1'}{(r_1^*(p_1) - 1) \cdot p_1'} = \frac{p_0' - p_0 \cdot r_0^*(p_1)}{(r_1^*(p_1) - 1) \cdot p_1'}. \tag{43}
$$
Then, we have
$$
\begin{aligned}
p_0' \cdot f(r_0') + p_1' \cdot f(r_1')
&= p_0' \cdot f(\alpha \cdot r_0^*(p_1) + (1 - \alpha) \cdot 1) + p_1' \cdot f(\beta \cdot r_1^*(p_1) + (1 - \beta) \cdot 1) \\
&\le p_0' \cdot \{\alpha \cdot f(r_0^*(p_1)) + (1 - \alpha) \cdot f(1)\} + p_1' \cdot \{\beta \cdot f(r_1^*(p_1)) + (1 - \beta) \cdot f(1)\} && (\because \text{convexity of } f(\cdot)) \\
&= p_0' \cdot \alpha \cdot f(r_0^*(p_1)) + p_1' \cdot \beta \cdot f(r_1^*(p_1)) && (\because f(1) = 0) \\
&= (p_0' - p_0 \cdot r_0^*(p_1)) \left( \frac{f(r_0^*(p_1))}{1 - r_0^*(p_1)} + \frac{f(r_1^*(p_1))}{r_1^*(p_1) - 1} \right) && (\because \text{Eqs. (42) and (43)}) \\
&= (p_1 \cdot r_1^*(p_1) - p_1') \left( \frac{f(r_0^*(p_1))}{1 - r_0^*(p_1)} + \frac{f(r_1^*(p_1))}{r_1^*(p_1) - 1} \right) \\
&< (p_1 \cdot r_1^*(p_1) - p_1) \left( \frac{f(r_0^*(p_1))}{1 - r_0^*(p_1)} + \frac{f(r_1^*(p_1))}{r_1^*(p_1) - 1} \right) && (\because p_1' > p_1) \\
&= p_0 \cdot f(r_0^*(p_1)) + p_1 \cdot f(r_1^*(p_1)) \\
&= \delta. && (\because \text{the first equation of Eq. (31)})
\end{aligned}
$$
This concludes our proof that Eq. (38) holds; hence, $r_1'$ lies in the relative interior of the feasible region, Eq. (33) follows, and the proof of Eq. (9) in Theorem 1 is complete.
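The Case 1-a argument can be verified end-to-end on a concrete instance. The sketch below uses illustrative assumptions not fixed by the proof (the KL generator $f(r) = r \log r$, with $p_1 = 0.2$, $p_1' = 0.3$, and $\delta = 0.1$): it solves Eq. (31) by bisection, then checks the ordering in Eq. (40), the interpolation identities in Eqs. (41)-(43), and the concluding chain bounding the perturbed divergence strictly by $\delta$, i.e., Eq. (38).

```python
import math

# Illustrative check of the Case 1-a step (assumed f(r) = r*log(r)).
def f(r):
    return r * math.log(r) if r > 0 else 0.0

def solve_case1(p1, delta, tol=1e-12):
    """Bisect for r1 in (1, 1/p1) on the constraint curve of Eq. (31)."""
    p0 = 1.0 - p1
    lo, hi = 1.0, 1.0 / p1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p0 * f((1.0 - p1 * mid) / p0) + p1 * f(mid) <= delta:
            lo = mid
        else:
            hi = mid
    return (1.0 - p1 * lo) / p0, lo              # (r0*(p1), r1*(p1))

p1, delta = 0.2, 0.1
p0 = 1.0 - p1
r0s, r1s = solve_case1(p1, delta)

p1p = 0.3                                        # Case 1-a: p1 < p1' < p1*r1*(p1)
p0p = 1.0 - p1p
assert p1 < p1p < p1 * r1s                       # Eq. (34)

r1p = p1 / p1p * r1s                             # Eq. (37)
r0p = p0 / p0p * r0s                             # Eq. (39)
assert r0s < r0p < 1 < r1p < r1s                 # Eq. (40)

alpha = (p0p - p0 * r0s) / ((1 - r0s) * p0p)     # Eq. (42)
beta = (p1 * r1s - p1p) / ((r1s - 1) * p1p)      # Eq. (43)
assert 0 < alpha < 1 and 0 < beta < 1
assert math.isclose(r0p, alpha * r0s + (1 - alpha) * 1)   # Eq. (41)
assert math.isclose(r1p, beta * r1s + (1 - beta) * 1)

# Convexity of f plus f(1) = 0 give the chain ending in "< delta", i.e.
# the strict feasibility of Eq. (38) at the larger error rate p1'.
assert p0p * f(r0p) + p1p * f(r1p) <= p0p * alpha * f(r0s) + p1p * beta * f(r1s)
assert p0p * alpha * f(r0s) + p1p * beta * f(r1s) < delta
assert p0p * f(r0p) + p1p * f(r1p) < delta       # Eq. (38)
```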
