n a l r u o o f J P c r i o o n b a E l e c t r b i l i t y

Vol. 16 (2011), Paper no. 67, pages 1844–1879.

Journal URL http://www.math.washington.edu/~ejpecp/

Quasi-sure Stochastic Analysis through Aggregation

† ‡ H. Mete SONER∗ Nizar TOUZI Jianfeng ZHANG

Abstract This paper is on developing stochastic analysis simultaneously under a general family of prob- ability measures that are not dominated by a single probability . The interest in this question originates from the probabilistic representations of fully nonlinear partial differential equations and applications to mathematical finance. The existing literature relies either on the capacity theory (Denis and Martini [5]), or on the underlying nonlinear partial differential equa- tion (Peng [13]). In both approaches, the resulting theory requires certain smoothness, the so called quasi-sure continuity, of the corresponding processes and random variables in terms of the underlying canonical process. In this paper, we investigate this question for a larger class of “non-smooth" processes, but with a restricted family of non-dominated probability measures. For smooth processes, our approach leads to similar results as in previous literature, provided the restricted family satisfies an additional density property. .

Key words: non-dominated probability measures, weak solutions of SDEs, uncertain volatility model, quasi-sure stochastic analysis.

AMS 2010 Subject Classification: Primary 60H10, 60H30.

Submitted to EJP on March 24, 2010, final version accepted August 19, 2011.

∗ETH (Swiss Federal Institute of Technology), Zürich and Swiss Finance Institute, [email protected]. Research partly supported by the European Research Council under the grant 228053-FiRM. Financial from the Swiss Finance Institute and the ETH Foundation are also gratefully acknowledged. †CMAP, Ecole Polytechnique Paris, [email protected]. Research supported by the Chair Financial Risks of the Risk Foundation sponsored by Société Générale, the Chair Derivatives of the Future sponsored by the Fédération Bancaire Française, and the Chair Finance and Sustainable Development sponsored by EDF and Calyon. ‡University of Southern California, Department of , [email protected]. Research supported in part by NSF grant DMS 06-31366 and DMS 10-08873.

1844 1 Introduction

It is well known that all probabilistic constructions crucially depend on the underlying probability measure. In particular, all random variables and stochastic processes are defined up to null sets of this measure. If, however, one needs to develop stochastic analysis simultaneously under a family of probability measures, then careful constructions are needed as the null sets of different measures do not necessarily coincide. Of course, when this family of measures is dominated by a single measure this question trivializes as we can simply work with the null sets of the dominating measure. However, we are interested exactly in the cases where there is no such dominating measure. An interesting example of this situation is provided in the study of financial markets with uncertain volatility. Then, essentially all measures are orthogonal to each other. Since for each probability measure we have a well developed theory, for simultaneous stochastic analysis, we are naturally led to the following problem of aggregation. Given a family of random variables or stochastic processes, X P, indexed by probability measures P, can one find an aggregator P X that satisfies X = X , P almost surely for every probability measure P? This paper studies exactly this abstract problem.− Once aggregation is achieved, then essentially all classical results of stochastic analysis generalize as shown in Section 6 below. This probabilistic question is also closely related to the theory of second order backward stochastic differential equations (2BSDE) introduced in [3]. These type of stochastic equations have several applications in stochastic optimal control, risk measures and in the Markovian case, they provide probabilistic representations for fully nonlinear partial differential equations. A uniqueness result is also available in the Markovian context as proved in [3] using the theory of viscosity solutions. Although the definition given in [3] does not require a special structure, the non-Markovian case, however, is better understood only recently. Indeed, [17] further develops the theory and proves a general existence and uniqueness result by probabilistic techniques. The aggregation result is a central tool for this result and in our accompanying papers [15, 16, 17]. Our new approach to 2BSDE is related to the quasi sure analysis introduced by Denis and Martini [5] and the G-stochastic analysis of Peng [13]. These papers are motivated by the volatility uncertainty in mathematical finance. In such financial models the volatility of the underlying stock process is only known to stay between two given bounds 0 a < a. Hence, in this context one needs to define probabilistic objects simultaneously for all probability≤ measures under which the canonical process B is a square integrable martingale with absolutely continuous quadratic variation process satisfying

ad t d B t ad t. ≤ 〈 〉 ≤ Here d B t is the quadratic variation process of the canonical map B. We denote the set of all such 〈 〉 measures by W , but without requiring the bounds a and a, see subsection 2.1. P As argued above, stochastic analysis under a family of measures naturally leads us to the problem of aggregation. This question, which is also outlined above, is stated precisely in Section 3, Definition 3.1. The main difficulty in aggregation originates from the fact that the above family of probabil- ity measures are not dominated by one single probability measure. Hence the classical stochastic analysis tools can not be applied simultaneously under all probability measures in this family. As a specific example, let us consider the case of the stochastic . Given an appropriate integrand P R t H, the stochastic I H dB can be defined classically under each probability measure t = 0 s s P. However, these processes may depend on the underlying probability measure. On the other hand

1845 we are free to redefine this integral outside the support of P. So, if for example, we have two proba- bility measures P1, P2 that are orthogonal to each other, see e.g. Example 2.1, then the integrals are immediately aggregated since the supports are disjoint. However, for uncountably many probability measures, conditions on H or probability measures are needed. Indeed, in order to aggregate these integrals, we need to construct a stochastic process It defined on all of the so that P P It = It for all t, almost surely. Under smoothness assumptions on the integrand H this aggrega- tion is possible and− a pointwise definition is provided by Karandikar [10] for càdlàg integrands H. Denis and Martini [5] uses the theory of capacities and construct the integral for quasi-continuous integrands, as defined in that paper. A different approach based on the underlying partial differen- tial equation was introduced by Peng [13] yielding essentially the same results as in [5]. In Section 6 below, we also provide a construction without any restrictions on H but in a slightly smaller class than W . P For general stochastic processes or random variables, an obvious consistency condition (see Defini- tion 3.2, below) is clearly needed for aggregation. But Example 3.3 also shows that this condition is in general not sufficient. So to obtain aggregation under this minimal condition, we have two alternatives. First is to restrict the family of processes by requiring smoothness. Indeed the previ- ous results of Karandikar [10], Denis-Martini [5], and Peng [13] all belong to this case. A precise statement is given in Section 3 below. The second approach is to slightly restrict the class of non- dominated measures. The main goal of this paper is to specify these restrictions on the probability measures that allows us to prove aggregation under only the consistency condition (3.4). Our main result, Theorem 5.1, is proved in Section 5. For this main aggregation result, we assume that the class of probability measures are constructed from a separable class of diffusion processes as defined in subsection 4.4, Definition 4.8. This class of diffusion processes is somehow natural and the conditions are motivated from stochastic optimal control. Several simple examples of such sets are also provided. Indeed, the processes obtained by a straightforward concatenation of determin- istic piece-wise constant processes forms a separable class. For most applications, this set would be sufficient. However, we believe that working with general separable class helps our understanding of quasi-sure stochastic analysis. The construction of a probability measure corresponding to a given diffusion process, however, contains interesting technical details. Indeed, given an F-progressively measurable process α, we would like to construct a unique measure Pα. For such a construction, we start with the Wiener P S>0 measure 0 and assume that α takes values in d (symmetric, positive definite matrices) and also R t satisfy α ds < for all t 0, P -almost surely. We then consider the P stochastic integral 0 s 0 0 | | ∞ ≥ Z t α 1/2 X t := αs dBs. (1.1) 0 α P Pα P Classically, the quadratic variation density of X under 0 is equal to α. We then set S := 0 α 1 Pα ◦ (X )− (here the subscript S is for the strong formulation). It is clear that B under S has the same α P Pα distribution as X under 0. 
One can show that the quadratic variation density of B under S is α equal to a satisfying a(X (ω)) = α(ω) (see Lemma 8.1 below for the existence of such a). Hence, Pα Pα S W . Let S W be the collection of all such local martingale measures S . Barlow [1] has observed∈ P that thisP inclusion⊂ P is strict. Moreover, this procedure changes the density of the quadratic variation process to the above defined process a. Therefore to be able to specify the quadratic variation a priori, in subsection 4.2, we consider the weak solutions of a stochastic differential

1846 equation ((4.4) below) which is closely related to (1.1). This class of measures obtained as weak solutions almost provides the necessary structure for aggregation. The only additional structure we need is the uniqueness of the map from the diffusion process to the corresponding probability measure. Clearly, in general, there is no uniqueness. So we further restrict ourselves into the class with uniqueness which we denote by W . This set and the probability measures generated by them, A W , are defined in subsection 4.2. P The implications of our aggregation result for quasi-sure stochastic analysis are given in Section 6. In particular, for a separable class of probability measures, we first construct a quasi sure stochastic integral and then prove all classical results such as Kolmogrov continuity criterion, martingale rep- resentation, Ito’s formula, Doob-Meyer decomposition and the Girsanov theorem. All of them are proved as a straightforward application of our main aggregation result. If in addition the family of probability measures is dense in an appropriate sense, then our aggrega- tion approach provides the same result as the quasi-sure analysis. These type of results, of course, require continuity of all the maps in an appropriate sense. The details of this approach are investi- gated in our paper [16], see also Remark 7.5 in the context of the application to the hedging problem under uncertain volatility. Notice that, in contrast with [5], our approach provides existence of an optimal hedging strategy, but at the price of slightly restricting the family of probability measures.

The paper is organized as follows. The local martingale measures W and a universal filtration are studied in Section 2. The question of aggregation is defined in SectionP 3. In the next section, we define W , W and then the separable class of diffusion processes. The main aggregation result, TheoremA P 5.1, is proved in Section 5. The next section generalizes several classical results of stochastic analysis to the quasi-sure setting. Section 7 studies the application to the hedging problem under uncertain volatility. In Section 8 we investigate the class S of mutually singular measures induced from strong formulation. Finally, several examples concerningP weak solutions and the proofs of several technical results are provided in the Appendix.

Notations. We close this introduction with a list of notations introduced in the paper.

R Rd P Ω := ω C( +, ) : ω(0) = 0 , B is the canonical process, 0 is the Wiener measure on Ω. • { ∈ } For a given stochastic process X , FX is the filtration generated by X . • F FB := = t t 0 is the filtration generated by B. • {F } ≥ F+ + + T := t , t 0 , where t := t+ := s>t s, • {F ≥ } F F F P P + P + + P t := t ( t ) and t := t ( ), where •F F ∨ N F F F ∨ N F∞ P ¦ © ( ) := E Ω : there exists E˜ such that E E˜ and P[E˜] = 0 . N G ⊂ ∈ G ⊂ is the class of polar sets defined in Definition 2.2. •NP P − T P  ˆtP := P t is the universal filtration defined in (2.3). • F ∈P F ∨ NP F R is the set of all stopping times τ taking values in + . •T − ∪ {∞} F ˆP is set of all ˆ P stopping times. • T − 1847 B is the universally defined quadratic variation of B, defined in subsection 2.1. •〈 〉 aˆ is the density of the quadratic variation B , also defined in subsection 2.1. • 〈 〉 S d is the set of d d symmetric matrices. • × S>0 d is the set of positive definite symmetric matrices. • W is the set of measures defined in subsection 2.1. • P S W is defined in the Introduction, see also Lemma 8.1. • P ⊂ P

MRP W are the measures with the martingale representation property, see (2.2). • P ⊂ P

Sets W , S, MRP are defined in subsection 4.2 and section 8, as the subsets of W , S, MRP • withP the additionalP P requirement of weak uniqueness. P P P

S>0 is the set of integrable, progressively measurable processes with values in d . • A S : P P and P is the set of diffusion matrices satisfying (4.1). W = W W ( ) W ( ) • A ∈P A A

W , S, MRP are defined as above using W , S, MRP, see section 8. •A A A P P P a a,b ab Sets Ωτ, Ωτ and the stopping time θ are defined in subsection 4.3. • ˆ ˆ L0 Lp P Lp H0 Hp Pa H2 Pa Hp H2 spaces , ( ), ˆ , and the integrand spaces , ( ), loc( ), ˆ , ˆ loc are • defined in Section 6.

2 Non-dominated mutually singular probability measures

R Rd F FB Let Ω := C( +, ) be as above and = be the filtration generated by the canonical process B. Then it is well known that this natural filtration F is left-continuous, but is not right-continuous. This P P F+ P F paper makes use of the right-limiting filtration , the completed filtration := t , t 0 , P P P F − {F ≥ } and the augmented filtration := t , t 0 , which are all right continuous. − {F ≥ } 2.1 Local martingale measures

We say a probability measure P is a local martingale measure if the canonical process B is a local martingale under P. It follows from Karandikar [10] that there exists an F progressively measur- R t able process, denoted as B dB , which coincides with the Itô’s integral, P− almost surely for all 0 s s local martingale measure P. In particular, this provides a pathwise definition− of

Z t T 1 B t := Bt Bt 2 BsdBs and aˆt := lim [ B t B t "]. " 0 0 " − 〈 〉 − ↓ 〈 〉 − 〈 〉 Clearly, B coincides with the P quadratic variation of B, P almost surely for all local martingale measure〈P〉. − −

1848 P Let W denote the set of all local martingale measures such that P P S>0 -almost surely, B t is absolutely continuous in t and aˆ takes values in d , (2.1) 〈 〉 S>0 where d denotes the space of all d d real valued positive definite matrices. We note that, for P P P × P different 1, 2 W , in general 1 and 2 are mutually singular, as we see in the next simple ∈ P example. Moreover, there is no dominating measure for W . P P P 1 Example 2.1. Let d = 1, 1 := 0 (p2B)− , and Ωi := B t = (1 + i)t, t 0 , i = 0, 1. Then, P P P P ◦P P {〈 〉 ≥ } P 0, 1 W , 0(Ω0) = 1(Ω1) = 1, 0(Ω1) = 1(Ω0) = 0, and Ω0 and Ω1 are disjoint. That is, 0 P ∈ P and 1 are mutually singular. P In many applications, it is important that W has martingale representation property (MRP, P ∈ P P for short), i.e. for any (F , P)-local martingale M, there exists a unique (P-almost surely) F - progressively measurable Rd valued process H such that Z t Z t 1/2 2 P aˆs Hs ds < and Mt = M0 + HsdBs, t 0, -almost surely. 0 | | ∞ 0 ≥ We thus define ¦P P© MRP := W : B has MRP under . (2.2) P ∈ P

The inclusion MRP W is strict as shown in Example 9.3 below. P ⊂ P Another interesting subclass is the set S defined in the Introduction. Since in this paper it is not directly used, we postpone its discussionP to Section 8.

2.2 A universal filtration

We now fix an arbitrary subset W . By a slight abuse of terminology, we define the following notions introduced by Denis andP Martini ⊂ P [5]. Definition 2.2. (i) We say that a property holds -quasi-surely, abbreviated as -q.s., if it holds P-almost surely for all P . P P P (ii) Denote := P ∈ P( ) and we call -polar sets the elements of . (iii) A probabilityNP measure∩ ∈P N PFis∞ called absolutelyP continuous with respect toNP if P(E) = 0 for all E . P ∈ NP In the stochastic analysis theory, it is usually assumed that the filtered probability space satisfies the usual hypotheses. However, the key issue in the present paper is to develop stochastic analysis tools simultaneously for non-dominated mutually singular measures. In this case, we do not have a good filtration satisfying the usual hypotheses under all the measures. In this paper, we shall use F P P the following universal filtration ˆ P for the mutually singular probability measures , : { ∈ P } F \ P  ˆ P := ˆtP t 0 where ˆtP := t for t 0. (2.3) {F } ≥ F P F ∨ NP ≥ ∈P F F Moreover, we denote by (resp. ˆP ) the set of all -stopping times τ (resp., ˆ P -stopping times R T T τˆ) taking values in + . ∪ {∞} 1849 P P P Remark 2.3. Notice that F+ F F . The reason for the choice of this completed filtration F is as follows. If we use the small⊂ filtration⊂ F+, then the crucial aggregation result of Theorem 5.1 P below will not hold true. On the other hand, if we use the augmented filtrations F , then Lemma 5.2 below does not hold. Consequently, in applications one will not be able to check the consistency condition (5.2) in Theorem 5.1, and thus will not be able to apply the aggregation result. See also Remarks 5.3 and 5.6 below. However, this choice of the completed filtration does not cause any problems in the applications.

F F We note that ˆ P is right continuous and all -polar sets are contained in ˆ0P . But ˆ P is not complete under each P . However, thanksP to the Lemma 2.4 below, all theF properties we need still hold under this filtration.∈ P P For any sub-σ algebra of and any probability measure P, it is well-known that an - P measurable random− variableG FX ∞is [ ( )] measurable if and only if there exists aF ∞- measurable X˜ suchG∨ that N X F=∞ X˜−, P-almost surely. The following result ex-G tends this property to processes and states that one can always consider any process in its F+- F+ F F+ progressively measurable version. Since ˆ P , the -progressively measurable version is also F ⊂ ˆ P -progressively measurable. This important result will be used throughout our analysis so as to F consider any process in its ˆ P -progressively measurable version. However, we emphasize that the F P ˆ P -progressively measurable version depends on the underlying probability measure . Lemma 2.4. Let P be an arbitrary probability measure on the canonical space , , and let X be P (Ω ) an F -progressively measurable process. Then, there exists a unique (P-almost surelyF) F∞+-progressively measurable process X˜ such that X˜ = X, P almost surely. If, in addition, X is càdlàg P-almost surely, then we can choose X˜ to be càdlàg P-almost− surely.

The proof is rather standard but it is provided in Appendix for completeness. We note that, the identity X˜ = X , P-almost surely, is equivalent to that they are equal d t dP-almost surely. However, ˜ P × if both of them are càdlàg, then clearly X t = X t , 0 t 1, -almost surely. ≤ ≤

3 Aggregation

We are now in a position to define the problem. P P F Definition 3.1. Let W , and let X , be a family of ˆ P -progressively measur- able processes. An FˆP-progressively ⊂ P measurable{ ∈ process P } X is called a -aggregator of the family P PP X , P if X = X , P-almost surely for every P . P { ∈ P } ∈ P Clearly, for any family X P, P which can be aggregated, the following consistency condition must hold. { ∈ P } Definition 3.2. We say that a family X P, P satisfies the consistency condition if, for any P P P { P ∈ P } 1, 2 , and τˆ ˆP satisfying 1 = 2 on ˆτP we have ∈ P ∈ T Fˆ P P 1 2 P X = X on [0, τˆ], 1 almost surely. (3.4) − Example 3.3 below shows that the above condition is in general not sufficient. Therefore, we are left with following two alternatives.

1850 F Restrict the range of aggregating processes by requiring that there exists a sequence of ˆ P - • n n P P progressively measurable processes X n 1 such that X X , -almost surely as n P { } ≥ → n → ∞ for all . In this case, the -aggregator is X := limn X . Moreover, the class ∈ P P →∞ P can be taken to be the largest possible class W . We observe that the aggregation results of Karandikar [10], Denis-Martini [5], and PengP [13] all belong to this case. Under some regularity on the processes, this condition holds. Restrict the class of mutually singular measures so that the consistency condition (3.4) is • sufficient for the largestP possible family of processes X P, P . This is the main goal of the present paper. { ∈ P }

We close this section by constructing an example in which the consistency condition is not sufficient for aggregation. Example 3.3. Let d 2. First, for each x, y 1, 2 , let Px,y : P xB1, yB2 1 and = [ ] = 0 (p p )− 1 2 ∈ Px,y ◦ Px,y Ωx,y := B t = x t, B t = y t, t 0 . Cleary for each (x, y), W and [Ωx,y ] = 1. Next, for{〈 each〉 a [1,〈 2],〉 we define ≥ } ∈ P ∈ 1 Z 2 P Pa,z Pz,a a[E] := ( [E] + [E])dz for all E . 2 1 ∈ F∞ We claim that P . Indeed, for any t t and any bounded -measurable random variable a W 1 < 2 t1 η, we have ∈ P F Z 2 P Pa,z Pz,a 2E a B B E B B E B B dz 0. [( t2 t1 )η] = [( t2 t1 )η] + [( t2 t1 )η] = − 1 { − − } P Hence a is a martingale measure. Similarly, one can easily show that I2d t d B t 2I2d t, P ≤ 〈 〉 ≤ a-almost surely, where I2 is the 2 2 identity matrix. × For a [1, 2] set ∈ 1 2 ” — Ωa := B t = at, t 0 B t = at, t 0 z [1,2] Ωa,z Ωz,a P {〈 〉 ≥ } ∪ {〈 〉 ≥ } ⊇ ∪ ∈ ∪ so that a[Ωa] = 1. Also for a = b, we have Ωa Ωb = Ωa,b Ωb,a and thus 6 P ∩P ∪ a[Ωa Ωb] = b[Ωa Ωb] = 0. ∩ ∩ P a P P Now let := a, a [1, 2] and set X t (ω) = a for all t, ω. Notice that, for a = b, a and b P +{ ∈ } 6 disagree on 0 ˆ0P . Then the consistency condition (3.4) holds trivially. However, we claim that a a there is no F-aggregator⊂ F X of the family X , a [1, 2] . Indeed, if there is X such that X = X , P P { ∈ } a-almost surely for all a [1, 2], then for any a [1, 2], ∈ ∈2 1 Z   P a P Pa,z Pz,a 1 = a[X. = a] = a[X. = a] = [X. = a] + [X. = a] dz. 2 1 n Let λn the on [1, 2] for integer n 1. Then, we have    ≥  Pa,z Pz,a λ1 z : [X. = a] = 1 = λ1 z : [X. = a] = 1 = 1, for all a [1, 2]. { } { } ∈ Pa,z Pz,a Set A1 := (a, z) : [X. = a] = 1 , A2 := (z, a) : [X. = a] = 1 so that λ2(A1) = λ2(A2) = 1. { } { } Moreover, A1 A2 (a, a) : a (0, 1] and λ2(A1 A2) = 0. Now we directly calculate that ∩ ⊂ { ∈ } ∩ 1 λ2(A1 A2) = λ2(A1) + λ2(A2) λ2(A1 A2) = 2. This contradiction implies that there is no aggregator.≥ ∪ − ∩

1851 4 Separable classes of mutually singular measures

The main goal of this section is to identify a condition on the probability measures that yields aggre- gation as defined in the previous section. It is more convenient to specify this restriction through the diffusion processes. However, as we discussed in the Introduction there are technical difficulties in the connection between the diffusion processes and the probability measures. Therefore, in the first two subsections we will discuss the issue of uniqueness of the mapping from the diffusion process to a martingale measure. The separable class of mutually singular measures are defined in subsection 4.4 after a short discussion of the supports of these measures in subsection 4.3.

4.1 Classes of diffusion matrices

Let t n Z o R S>0 F := a : + d -progressively measurable and as ds < , for all t 0 . A → | 0 | | ∞ ≥ P For a given W , let ∈ P P n P o W ( ) := a : a = aˆ, -almost surely . (4.1) A ∈ A Recall that aˆ is the density of the quadratic variation of B and is defined pointwise. We also define 〈 〉 [ P W := W ( ). A P A W ∈P

A subtle technical point is that W is strictly included in . In fact, the process A A a : 1 31 is clearly in . t = aˆt 2 + aˆt <2 W { ≥ } { } A\ A P P For any W and a W ( ), by the Lévy characterization, the following Itô’s stochastic integral under∈ PP is a P-Brownian∈ A motion: Z t Z t P 1/2 1/2 P Wt := aˆs− dBs = as− dBs, t 0. a.s. (4.2) 0 0 ≥ − Also since B is the canonical process, a = a(B ) and thus · 1/2 P P P P dBt = at (B )dWt , -almost surely, and Wt is a -Brownian motion. (4.3) ·

4.2 Characterization by diffusion matrices

In view of (4.3), to construct a measure with a given quadratic variation a W , we consider the stochastic differential equation, ∈ A 1/2 P dX t = at (X )dBt , 0-almost surely. (4.4) · In this generality, we consider only weak solutions P which we define next. Although the following definition is standard (see for example Stroock & Varadhan [18]), we provide it for specificity.

1852 Definition 4.1. Let a be an element of W . (i) For F stopping times Aand a probability measure P1 on , we say that P is a τ1 τ2 τ1 − ≤ ∈ T P1 P F P1 weak solution of (4.4) on [τ1, τ2] with initial condition , denoted as (τ1, τ2, , a), if the followings hold: ∈ P 1. P P1 on ; = τ1 F P 2. The canonical process Bt is a -local martingale on [τ1, τ2]; R t 1/2 P P 3. The process Wt : a B dBs, defined almost surely for all t τ1, τ2 , is a -Brownian = τ1 s− ( ) [ ] Motion. · − ∈ P1 (ii) We say that the equation (4.4) has weak uniqueness on [τ1, τ2] with initial condition if any two weak solutions P and P in τ , τ , P1, a satisfy P P on . 0 ( 1 2 ) = 0 τ2 P F (iii) We say that (4.4) has weak uniqueness if (ii) holds for any τ1, τ2 and any initial condition P1 on . ∈ T τ1 F We emphasize that the stopping times in this definition are F-stopping times. P P P R Note that, for each W and a W ( ), is a weak solution of (4.4) on + with initial value P ∈ P ∈ A P (B0 = 0) = 1. We also need uniqueness of this map to characterize the measure in terms of the Pa diffusion matrix a. Indeed, if (4.4) with a has weak uniqueness, we let W be the unique R Pa ∈ P weak solution of (4.4) on + with initial condition (B0 = 0) = 1, and define, ¦ © Pa W := a W : (4.4) has weak uniqueness , W := : a W . (4.5) A ∈ A P { ∈ A } We also define Pa MRP := MRP W , MRP := a W : MRP . (4.6) P P ∩ P A { ∈ A ∈ P } For notational simplicity, we denote Pa Fa FPa Fa F := , := , for all a W . (4.7) ∈ A P It is clear that, for each W , the weak uniqueness of the equation (4.4) may depend on the P ∈ P version of a W ( ). This is indeed the case and the following example illustrates this observation. ∈ A Example 4.2. Let a0(t) := 1, a2(t) := 2 and

n Bh B o a t : 1 1 1 t , where E : lim 0 1 +. 1( ) = + E (0, )( ) = p − = 0 ∞ h 0 2hln ln h 1 6 ∈ F ↓ − P Pa2 Then clearly both a0 and a2 belong to W . Also a1 = a0, 0-almost surely and a1 = a2, -almost P PAa2 surely. Hence, a1 W ( 0) W ( ). Therefore the equation (4.4) with coefficient a1 has two P ∈ A Pa2 ∩ A weak solutions 0 and . Thus a1 / W . ∈ A P Remark 4.3. In this paper, we shall consider only those W W . However, we do not know ∈ P ⊂ P P whether this inclusion is strict or not. In other words, given an arbitrary W , can we always P ∈ P find one version a W ( ) such that a W ? ∈ A ∈ A It is easy to construct examples in W in the Markovian context. Below, we provide two classes of A path dependent diffusion processes in W . These sets are in fact subsets of S W , which is defined in (8.11) below. We also constructA some counter-examples in the Appendix.A ⊂ Denote A

¦ d © Q := (t, x) : t 0, x C([0, t], R ) . (4.8) ≥ ∈

1853 Example 4.4. (Lipschitz coefficients) Let

2 S>0 at := σ (t, B ) where σ : Q d · → 2 is Lebesgue measurable, uniformly Lipschitz continuous in x under the uniform , and σ ( , 0) · ∈ . Then (4.4) has a unique strong solution and consequently a W . A ∈ A P Example 4.5. (Piecewise constant coefficients) Let a ∞ an1 τ ,τ where τn n 0 is a = n=0 [ n n+1) nondecreasing sequence of F stopping times with 0, as n , and{ a} ≥ ⊂ T with τ0 = τn n τn S>0 − ↑ ∞ → ∞ ∈ F values in d for all n. Again (4.4) has a unique strong solution and a W . ∈ A This example is in fact more involved than it looks like, mainly due to the presence of the stopping times. We relegate its proof to the Appendix.

4.3 Support of Pa

In this subsection, we collect some properties of measures that are constructed in the previous Pa subsection. We fix a subset W , and denote by := : a the corresponding subset A ⊂ A P { ∈ A } of W . In the sequel, we may also say P a property holds quasi surely if it holds quasi surely. A − P − F For any a and any ˆ P stopping time τˆ ˆP , let ∈ A − ∈ T Z t Z t [ n 1 o a : a ds a ds, for all t 0, τ . (4.9) Ωτˆ = ˆs = s [ ˆ + n] n 1 0 0 ∈ ≥ It is clear that

a ˆ , a is non-increasing in t, a a , and Pa a 1. (4.10) Ωτˆ τˆP Ωt Ωτˆ+ = Ωτˆ (Ω ) = ∈ F ∞ We next introduce the first disagreement time of any a, b , which plays a central role in Section 5: ∈ A Z t Z t a,b n o θ := inf t 0 : asds = bsds , ≥ 0 6 0 F and, for any ˆ P stopping time τˆ ˆP , the agreement set of a and b up to τˆ: − ∈ T a,b a,b a,b Ωτ := τˆ < θ τˆ = θ = . ˆ { } ∪ { ∞} Here we use the convention that inf = . It is obvious that ; ∞ a,b a,b a b a,b θ ˆP , Ωτ ˆτP and Ωτ Ωτ Ωτ . (4.11) ∈ T ˆ ∈ Fˆ ˆ ∩ ˆ ⊂ ˆ Remark 4.6. The above notations can be extended to all diffusion processes a, b . This will be important in Lemma 4.12 below. ∈ A

1854 4.4 Separability

We are now in a position to state the restrictions needed for the main aggregation result Theorem 5.1.

Definition 4.7. A subset 0 W is called a generating class of diffusion coefficients if A ⊂ A (i) 0 satisfies the concatenation property: a1[0,t) + b1[t, ) 0 for a, b 0, t 0. A ∞ ∈ A a,b ∈ A ≥ (ii) 0 has constant disagreement times: for all a, b 0, θ is a constant or, equivalently, a,bA ∈ A Ωt = or Ω for all t 0. ; ≥ We note that the concatenation property is standard in the stochastic control theory in order to establish the dynamic programming principle, see, e.g. page 5 in [14]. The constant disagreement times property is important for both Lemma 5.2 below and the aggregation result of Theorem 5.1 below. We will provide two examples of sets with these properties, after stating the main restriction for the aggregation result.

Definition 4.8. We say is a separable class of diffusion coefficients generated by 0 if 0 W is a generating class of diffusionA coefficients and consists of all processes a of theA form,A ⊂ A A X∞ X∞ n a a 1 n 1 , (4.12) = i Ei [τn,τn+1) n=0 i=1 n where (ai )i,n 0, (τn)n is nondecreasing with τ0 = 0 and ⊂ A ⊂ T inf n : τn = < , τn < τn+1 whenever τn < , and each τn takes at most countably many values,• { ∞} ∞ ∞ for each n, En, i 1 form a partition of . i τn Ω • { ≥ } ⊂ F We emphasize that in the previous definition the ’s are F stopping times and En . The τn i τn following are two examples of generating classes of diffusion− coefficients. ∈ F

Example 4.9. Let 0 be the class of all deterministic mappings. Then clearly 0 W and satisfies both propertiesA ⊂ (theA concatenation and the constant disagreement times properties)A ⊂ A of a generating class.

Example 4.10. Recall the set Q defined in (4.8). Let 0 be a set of deterministic Lebesgue measur- S>0 D able functions σ : Q d satisfying, → L 2 - σ is uniformly Lipschitz continuous in x under ∞-norm, and σ ( , 0) and R Rd · ∈ A - for each x C( +, ) and different σ1, σ2 0, the Lebesgue measure of the set A(σ1, σ2, x) is equal to 0, where∈ ∈ D n o A(σ1, σ2, x) := t : σ1(t, x [0,t]) = σ2(t, x [0,t]) . | | Let be the class of all possible concatenations of 0, i.e. σ takes the following form: D D ∈ D X σ t, x : ∞ σ t, x 1 t , t, x Q, ( ) = i( ) [ti ,ti+1)( ) ( ) i=0 ∈ 2 for some sequence ti and σi 0, i 0. Let 0 := σ (t, B ) : σ . It is immediate to check ↑ ∞ ∈ D ≥ A { · ∈ D} that 0 W and satisfies the concatenation and the constant disagreement times properties. Thus it is alsoA ⊂ a Agenerating class.

1855 We next prove several important properties of separable classes.

Proposition 4.11. Let be a separable class of diffusion coefficients generated by 0. Then W , A A A ⊂ A and -quasi surely is equivalent to 0-quasi surely. Moreover, if 0 MRP, then MRP. A A A ⊂ A A ⊂ A We need the following two lemmas to prove this result. The first one provides a convenient structure for the elements of . A Lemma 4.12. Let be a separable class of diffusion coefficients generated by 0. For any a F A A ∈ A and -stopping time τ , there exist τ τ˜ , a sequence an, n 1 0, and a partition ∈ T ≤ ∈ T { ≥ } ⊂ A En, n 1 τ of Ω, such that τ˜ > τ on τ < and { ≥ } ⊂ F { ∞} X a a t 1 for all t ˜. t = n( ) En < τ n 1 ≥ a,an a,an In particular, En Ωτ and consequently nΩτ = Ω. Moreover, if a takes the form (4.12) and ⊂ ∪ τ τn, then one can choose τ˜ τn+1. ≥ ≥ The proof of this lemma is straightforward, but with technical notations. Thus we postpone it to the Appendix. a,a a,an n We remark that at this point we do not know whether a W . But the notations θ and Ωτ ∈ A P P1 are well defined as discussed in Remark 4.6. We recall from Definition 4.1 that (τ1, τ2, , a) P ∈ P P1 means is a weak solution of (4.4) on [τ˜1, τ˜2] with coefficient a and initial condition . i Lemma 4.13. Let τ1, τ2 with τ1 τ2, and a , i 1 W (not necessarily in W ) and let E , i 1 ∈be T a partition≤ of . Let{ P0 ≥be a} probability⊂ A measure on Aand i τ1 Ω τ1 Pi { ≥P0 }i ⊂ F F (τ1, τ2, , a ) for i 1. Define ∈ P ≥ X X P E : Pi E E for all E and a : ai 1 , t , . ( ) = ( i) τ2 t = t Ei [τ1 τ2] i 1 ∩ ∈ F i 1 ∈ ≥ ≥ P P0 Then (τ1, τ2, , a). ∈ P R t Proof. Clearly, P P0 on . It suffices to show that both B and B BT a ds are P-local = τ1 t t t τ s F − 1 martingales on [τ1, τ2]. By a standard localization argument, we may assume without loss of generality that all the random variables below are integrable. Now for any τ1 τ3 τ4 τ2 and any bounded random variable , we have ≤ ≤ ≤ η τ3 ∈ F P X Pi h i E B B E B B 1 [( τ4 τ3 )η] = ( τ4 τ3 )η Ei − i 1 − ≥ X Pi h Pi   i E E B B 1 0. = τ4 τ3 τ3 η Ei = i 1 − |F ≥ R t Therefore B is a P-local martingale on τ , τ . Similarly one can show that B BT a ds is also [ 1 2] t t τ s P − 1 a -local martingale on [τ1, τ2].

1856 Proof of Proposition 4.11. Let a be given as in (4.12). ∈ A (i) We first show that a . Fix , with and a probability measure P0 on . W θ1 θ2 θ1 θ2 θ1 Set ∈ A ∈ T ≤ F

τ˜0 := θ1 and τ˜n := (τn θ1) θ2, n 1. ∨ ∧ ≥ P0 We shall show that (θ1, θ2, , a) is a singleton, that is, the (4.4) on [θ1, θ2] with coefficient a and initial conditionPP0 has a unique weak solution. To do this we prove by induction on n that P0 (τ˜0, τ˜n, , a) is a singleton. P First, let n 1. We apply Lemma 4.12 with ˜ and choose ˜ ˜ . Then, a P a t 1 = τ = τ0 τ = τ1 t = i 1 i( ) Ei for all t ˜ , where a and E , i 1 form a partition of . For i 1, let P≥0,i be the < τ1 i 0 i τ˜0 Ω ∈ A {P0 ≥ } ⊂ F ≥ unique weak solution in (τ˜0, τ˜1, , ai) and set P X P0,a E : P0,i E E for all E . ( ) = ( i) τ˜1 i 1 ∩ ∈ F ≥ P0,a P0 P We use Lemma 4.13 to conclude that (τ˜0, τ˜1, , a). On the other hand, suppose P0 ∈ P Pi ∈ (τ˜0, τ˜1, , a) is an arbitrary weak solution. For each i 1, we define by P ≥ Pi E : P E E P0,i E E c for all E . ( ) = ( i) + ( ( i) ) τ˜1 ∩ ∩ ∈ F i We again use Lemma 4.13 and notice that a1 a 1 c a . The result is that P Ei + i (Ei ) = i P0 P0 Pi P0,i ∈ (τ˜0, τ˜1, , ai). Now by the uniqueness in (τ˜0, τ˜1, , ai) we conclude that = on P . This , in turn, implies that P E E PP0,i E E for all E and i 1. Therefore, τ˜1 ( i) = ( i) τ˜1 PF E P P0,i E E P0,a E for∩ all E . Hence∩ ˜ , ˜ , ∈P0 F, a is a singleton.≥ ( ) = i 1 ( i) = ( ) τ˜1 (τ0 τ1 ) ≥ ∩ ∈ F P P0 We continue with the induction step. Assume that (τ˜0, τ˜n, , a) is a singleton, and denote its Pn P unique element by . Without loss of generality, we assume τ˜n < τ˜n+1. Following the same Pn arguments as above we know that (τ˜n, τ˜n 1, , a) contains a unique weak solution, denoted by R t + Pn+1. Then both B and B BT Pa ds are Pn+1-local martingales on τ˜ , τ˜ and on τ˜ , τ˜ . t t t 0 s [ 0 n] [ n n+1] Pn+1 − P0 P P0 This implies that (τ˜0, τ˜n+1, , a). On the other hand, let (τ˜0, τ˜n+1, , a) be an ∈ P P P0 ∈ P arbitrary weak solution. Since we also have (τ˜0, τ˜n, , a), by the uniqueness in the induction assumption we must have the equality P Pn∈on P . Therefore, P τ˜ , τ˜ , Pn, a . Thus by = τ˜n ( n n+1 ) uniqueness P Pn+1 on . This proves the inductionF claim for n∈ P1. = τ˜n 1 + F + Finally, note that Pm E Pn E for all E and m n. Hence, we may define P E : Pn E ( ) = ( ) τ˜n ∞( ) = ( ) for E . Since inf n : , then∈ F inf n : ˜ ≥ and thus . So we τ˜n τn = < τn = θ2 < θ2 = n 1 τ˜n can uniquely∈ F extend P{ to ∞}. Now∞ we directly{ check that}P ∞ θ , θ ,FP0, a ∨and≥ isF unique. ∞ θ2 ∞ ( 1 2 ) F ∈ P Pa (ii) We next show that (E) = 0 for all 0 polar set E. Once again we apply Lemma 4.12 with . Therefore a P a t 1 forA all−t 0, where a , i 1 and E , i 1 τ = t = i 1 i( ) Ei i 0 i ∞ ≥ ≥ { ≥ } ⊂ A { ≥ } ⊂ F∞ form a partition of Ω. Now for any 0-polar set E, A X X Pa Pa Pai (E) = (E Ei) = (E Ei) = 0. i 1 ∩ i 1 ∩ ≥ ≥ This clearly implies the equivalence between -quasi surely and 0-quasi surely. A A

1857 Pa (iii) We now assume 0 MRP and show that a MRP. Let M be a -local martingale. We A ⊂ A ∈ A Pa prove by induction on n again that M has a martingale representation on [0, τn] under for each n 1. This, together with the assumption that inf n : τn = < , implies that M has martingale ≥ R Pa { Pa ∞} ∞ representation on + under , and thus proves that MRP. ∈ A Since τ0 = 0, there is nothing to prove in the case of n = 0. Assume the result holds on [0, τn]. Apply Lemma 4.12 with τ = τn and recall that in this case we can choose the τ˜ to be τn+1. Hence a P a t 1 , t < τ , where a , i 1 and E , i 1 form a partition of . t = i 1 i( ) Ei n+1 i 0 i τn Ω For each≥i 1, define { ≥ } ⊂ A { ≥ } ⊂ F ≥ M i : M M 1 1 t for all t 0. t = [ t τn+1 τn ] Ei [τn, )( ) ∧ − ∞ ≥ i Pai i Then one can directly check that M is a -local martingale. Since ai 0 MRP, there exists H i i a P i such that dM H dB , P i -almost surely. Now define H : H 1∈ A, τ ⊂ At < τ . Then we t = t t t = i 1 t Ei n n+1 Pa ≥ ≤ have dMt = Ht dBt , τn t < τn+1, -almost surely. ≤ We close this subsection by the following important example. R S>0 Example 4.14. Assume 0 consists of all deterministic functions a : + d taking the form Pn 1 A Q → at − at 1 t ,t at 1 t , where ti and at has rational entries. This is a special case = i=0 i [ i i+1) + n [ n ) i ∞ ∈ of Example 4.9 and thus 0 W . In this case 0 is countable. Let 0 = ai i 1 and define P P iPai PA ⊂ A A PaA { } ≥ ˆ := ∞i 1 2− . Then ˆ is a dominating probability measure of all , a , where is the = ∈ A A separable class of diffusion coefficients generated by 0. Therefore, -quasi surely is equivalent to Pˆ -almost surely. Notice however that is not countable.A A A

5 Quasi-sure aggregation

In this section, we fix

a separable class of diffusion coefficients generated by 0 (5.1) A A a and denote := P , a . Then we prove the main aggregation result of this paper. P { ∈ A } For this we recall that the notion of aggregation is defined in Definition 3.1 and the notations θ a,b and a,b are introduced in subsection 4.3. Ωτˆ a F Theorem 5.1 (Quasi sure aggregation). For satisfying (5.1), let X , a be a family of ˆ P - progressively measurable processes. Then there existsA a unique ( q.s.){ -aggregator∈ A } X if and only if X a, a satisfies the consistency condition P − P { ∈ A } a b Pa a,b X = X , almost surely on [0, θ ) for any a 0 and b . (5.2) − ∈ A ∈ A Moreover, if X a is càdlàg Pa-almost surely for all a , then we can choose a -q.s. càdlàg version of the -aggregator X . ∈ A P P We note that the consistency condition (5.2) is slightly different from the condition (3.4) before. The condition (5.2) is more natural in this framework and is more convenient to check in applications. Before the proof of the theorem, we first show that, for any a, b , the corresponding probability measures Pa and Pb agree as long as a and b agree. ∈ A

1858 Lemma 5.2. For satisfying (5.1) and a, b , θ a,b is an F-stopping time taking countably many values and A ∈ A

Pa a,b Pb a,b (E Ωτ ) = (E Ωτ ) for all τˆ ˆP and E ˆτP . (5.3) ∩ ˆ ∩ ˆ ∈ T ∈ Fˆ a,b F Proof. (i) We first show that θ is an -stopping time. Fix an arbitrary time t0. In view of Lemma 4.12 with τ = t0, we assume without loss of generality that X X a a t 1 and b b t 1 for all t ˜, t = n( ) En t = n( ) En < τ n 1 n 1 ≥ ≥ where ˜ t , a , b and E , n 1 form a partition of . Then τ > 0 n n 0 n t0 Ω ∈ A { ≥ } ⊂ F

a,b [ ” an,bn — θ t0 = θ t0 En . { ≤ } n { ≤ } ∩

an,bn an,bn By the constant disagreement times property of 0, θ is a constant. This implies that θ t is equal to either or . Since E , weA conclude that a,b t for all t { 0. That≤ 0 Ω n t0 θ 0 t0 0 is,}θ a,b is an F-stopping; time. ∈ F { ≤ } ∈ F ≥

(ii) We next show that θ a,b takes only countable many values. In fact, by (i) we may now apply a,b Lemma 4.12 with τ = θ . So we may write X X a a˜ t 1 and b ˜b t 1 for all t < θ˜, t = n( ) E˜n t = n( ) E˜n n 1 n 1 ≥ ≥ ˜ a,b ˜ a,b ˜ ˜ where θ > θ or θ = θ = , a˜n, bn 0, and En, n 1 θ a,b form a partition of Ω. Then ˜ a,b a˜n,bn ∞˜ ∈ A { ≥ } ⊂ F it is clear that θ = θ on En, for all n 1. For each n, by the constant disagreement times ˜ a˜n,bn a,b≥ property of 0, θ is constant. Hence θ takes only countable many values. A (iii) We now prove (5.3). We first claim that,

a,b h Pa i E a,b for any E ˆ . (5.4) Ωτˆ θ ( ) τˆP ∩ ∈ F ∨ N F∞ ∈ F Indeed, for any t 0, ≥ a,b a,b a,b a,b E Ωτ θ t = E τˆ < θ θ t ∩ ˆ ∩ { ≤ } ∩ { } ∩ { ≤ } [ h 1 i E τ < θ a,b τ t θ a,b t . = ˆ ˆ m m 1 ∩ { } ∩ { ≤ − } ∩ { ≤ } ≥ a,b By (i) above, θ t t . For each m 1, { ≤ } ∈ F ≥ 1 Pa Pa E τ < θ a,b τ t ˆ + , ˆ ˆ tP 1 t 1 ( ) t ( ) m m m ∩ { } ∩ { ≤ − } ∈ F − ⊂ F − ∨ N F∞ ⊂ F ∨ N F∞ and (5.4) follows. a,i b,i By (5.4), there exist E , E θ a,b , i = 1, 2, such that ∈ F a,1 a,b a,2 b,1 a,b b,2 Pa a,2 a,1 Pb b,2 b,1 E E Ωτ E , E E Ωτ E , and (E E ) = (E E ) = 0. ⊂ ∩ ˆ ⊂ ⊂ ∩ ˆ ⊂ \ \ 1859 1 a,1 b,1 2 a,2 b,2 Define E := E E and E := E E , then ∪ ∩ 1 2 1 2 Pa 2 1 Pb 2 1 E , E θ a,b , E E E , and (E E ) = (E E ) = 0. ∈ F ⊂ ⊂ \ \ a a,b a 2 b a,b b 2 2 Thus P E P E and P E P E . Finally, since E a,b , following the ( Ωτˆ ) = ( ) ( Ωτˆ ) = ( ) θ definition of∩ Pa and Pb, in particular∩ the uniqueness of weak solution of∈ (4.4) F on the interval a,b a 2 b 2 [0, θ ], we conclude that P (E ) = P (E ). This implies (5.3) immediately. Remark 5.3. The property (5.3) is crucial for checking the consistency conditions in our aggregation result in Theorem 5.1. We note that (5.3) does not hold if we replace the completed σ algebra a b a b − τ τ with the augmented σ algebra τ τ. To see this, let d = 1, at := 1, bt := 1+1[1, )(t). F ∩F a,b − F ∩aF a,b Pa ∞ In this case, θ = 1. Let τ := 0, E := Ω1. One can easily check that Ω0 = Ω, (E) = 1, Pb a b a,b Pa Pb (E) = 0. This implies that E 0 0 and E Ω0 . However, (E) = 1 = 0 = (E). See also Remark 2.3. ∈ F ∩ F ⊂ 6

Proof of Theorem 5.1. The uniqueness of aggregator is immediate. By Lemma 5.2 and the P − a,b Pa Pb uniqueness of weak solutions of (4.4) on [0, θ ], we know = on θ a,b . Then the exis- tence of the -aggregator obviously implies (5.2). We now assume that theF condition (5.2) holds and prove theP existence of the -aggregator. P We first claim that, without loss of generality, we may assume that X a is càdlàg. Indeed, suppose that the theorem holds for càdlàg processes. Then we construct a -aggregator for a family X a, a , not necessarily càdlàg, as follows: P { ∈ A } R t - If X a R for some constant R > 0 and for all a , set Y a : X ads. Then, the family t = 0 s Y a,|a | ≤ inherits the consistency condition (5.2). Since∈ A Y a is continuous for every a , this family{ ∈ admits A } a -aggregator Y . Define X : lim 1 Y Y . Then one can verify∈ Adirectly t = " 0 " [ t+" t ] that X satisfies allP the requirements. → − R,a a - In the general case, set X := ( R) X R. By the previous arguments there exists -aggregator R R,a − ∨ ∧ R P X of the family X , a and it is immediate that X := limR X satisfies all the require- ments. { ∈ A } →∞ We now assume that X a is càdlàg, Pa-almost surely for all a . In this case, the consistency condition (5.2) is equivalent to ∈ A

a b a,b Pa X t = X t , 0 t < θ , -almost surely for any a 0 and b . (5.5) ≤ ∈ A ∈ A Step 1. We first introduce the following quotient sets of 0. For each t, and a, b 0, we say a,b A a,b ∈ A a t b if Ωt = Ω (or, equivalently, the constant disagreement time θ t). Then t is an ∼ ≥ ∼ equivalence relationship in 0. Thus one can form a partition of 0 based on t . Pick an element A A ∼ from each partition set to construct a quotient set 0(t) 0. That is, for any a 0, there exists a,b A ⊂ A a ∈ A a unique b 0(t) such that Ωt = Ω. Recall the notation Ωt defined in (4.9). By (4.11) and the ∈ A a constant disagreement times property of 0, we know that Ωt , a 0(t) are disjoint. R A { ∈ A } Step 2. For fixed t +, define ∈ X a ξ ω : X ω 1 a ω for all ω . (5.6) t ( ) = t ( ) Ωt ( ) Ω a 0(t) ∈ ∈A

1860 a The above uncountable sum is well defined because the sets Ωt , a 0(t) are disjoint. In this step, we show that { ∈ A } a Pa ξt is ˆtP -measurable and ξt = X t , -almost surely for all a . (5.7) F ∈ A We prove this claim in the following three sub-cases. a a a a c 2.1. For each a 0(t), by definition ξt = X t on Ωt . Equivalently ξt = X t (Ωt ) . Moreover, Pa ∈a Ac a + a {Pa 6 } ⊂a by (4.10), ((Ωt ) ) = 0. Since Ωt t and t is complete under , ξt is t -measurable and Pa a ∈ F F F (ξt = X t ) = 1. b 2.2. Also, for each a 0, there exists a unique b 0(t) such that a t b. Then ξt = X t on b a,b ∈ A ∈P Aa Pb + ∼Pa b Pb b Ωt . Since Ωt = Ω, it follows from Lemma 5.2 that = on t and (Ωt ) = (Ωt ) = 1. Pa b F Hence (ξt = X t ) = 1. Now by the same argument as in the first case, we can prove that ξt is a Pa a b t -measurable. Moreover, by the consistency condition (5.8), (X t = X t ) = 1. This implies that FPa a (ξt = X t ) = 1. 2.3. Now consider a . We apply Lemma 4.12 with τ = t. This implies that there exist a ∈ A a,aj sequence aj, j 1 0 such that Ω = j 1Ωt . Then { ≥ } ⊂ A ∪ ≥ a [ h a a,aj i ξt = X t = ξt = X t Ωt . { 6 } j 1 { 6 } ∩ ≥ Now for each j 1, ≥ a a,aj h aj a,aj i [ h aj a a,aj i ξt = X t Ωt ξt = X t Ωt X t = X t Ωt . { 6 } ∩ ⊂ { 6 } ∩ { 6 } ∩ Applying Lemma 5.2 and using the consistency condition (5.5), we obtain

 a a,a   a a,a  Pa j a j Paj j a j X t = X t Ωt = X t = X t Ωt { 6 } ∩ { 6 } ∩  a  Paj j a a,aj = X t = X t t < θ = 0. { 6 } ∩ { } a a j P j + Moreover, for aj 0, by the previous sub-case, ξt = X t ( t ). Hence there exists a + ∈Pa Aj j { 6 } ∈ N F D t such that (D) = 0 and ξt = X t D. Therefore ∈ F { 6 } ⊂ a a,a a,a a,a a,a j j j Pa j Paj j ξt = X t Ωt D Ωt and (D Ωt ) = (D Ωt ) = 0. { 6 } ∩ ⊂ ∩ ∩ ∩ a a,a a j j P + a This means that ξt = X t Ωt ( t ). All of these together imply that ξt = X t Pa + { 6 } ∩a P∈a N Fa { 6 } ∈ ( t ). Therefore, ξt t and (ξt = X t ) = 1. N F a ∈ F Finally, since ξt t for all a , we conclude that ξt ˆtP . This completes the proof of (5.7). ∈ F ∈ A ∈ F Step 3. For each n 1, set t n : i , i 0 and define i = n ≥ ≥

a,n a X∞ a n X∞ X : X 1 0 X n 1 tn ,tn for all a and X : ξ01 0 ξtn 1 tn ,tn , = 0 + ti ( i 1 i ] = + i ( i 1 i ] { } i=1 − ∈ A { } i=1 − Fn a,n n Fn where ξtn is defined by (5.6). Let ˆ : ˆ 1 , t 0 . By Step 2, X , X are ˆ -progressively i = tP {F + n ≥ } Pa n a,n measurable and (X t = X t , t 0) = 1 for all a . We now define ≥ ∈ A X : lim X n. = n →∞ 1861 Fn F F F Since ˆ is decreasing to ˆ P and ˆ P is right continuous, X is ˆ P -progressively measurable. More- over, for each a , ∈ A a \ h \ n a,n i \ a X t = X t , t 0 X is càdlàg X t = X t , t 0 X is càdlàg . { ≥ } { } ⊇ n 1{ ≥ } { } ≥ a a Therefore X = X and X is càdlàg, P -almost surely for all a . In particular, X is càdlàg, -quasi surely. ∈ A P

a Let τˆ ˆP and ξ , a be a family of ˆτP -measurable random variables. We say an ˆτP - ˆ a a a ˆ measurable∈ T random{ variable∈ A }ξ is a -aggregatorF of the family ξ , a if ξ = ξ , P -almostF P { ∈ A } surely for all a . Note that we may identify any ˆτP -measurable random variable ξ with the F ∈ A Fˆ ˆ P -progressively measurable process X t := ξ1[τˆ, ). Then a direct consequence of Theorem 5.1 is the following. ∞

Corollary 5.4. Let be satisfying (5.1) and τ ˆ . Then the family of ˆ -measurable random ˆ P τˆP variables ξa, a A has a unique ( -q.s.) -aggregator∈ T ξ if and only ifF the following consistency condition{ holds: ∈ A } P P

a b a,b Pa ξ = ξ on Ωτ , -almost surely for any a 0 and b . (5.8) ˆ ∈ A ∈ A

For the next result, we recall that the P-Brownian motion W P is defined in (4.2). As a direct consequence of Theorem 5.1, the following result defines the -Brownian motion.

a P Corollary 5.5. For satisfying (5.1), the family W P , a admits a unique -aggregator W. a Since W P is a Pa-BrownianA motion for every a { , we call∈ W A a } -universal BrownianP motion. ∈ A P Proof. Let a, b . For each n, denote ∈ A Z t n o a,b τn := inf t 0 : aˆs ds n θ . ≥ 0 | | ≥ ∧

Then B is a Pb-square integrable martingale. By standard construction of stochastic integral, see τn ·∧ b,m e.g. [11] Proposition 2.6, there exist F-adapted simple processes β such that

Z τn 1 1 b P n 2 b,m 2 2 o lim E as β as− ds 0. (5.9) m ˆ ( s ˆ ) = →∞ 0 | − | Define the universal process

Z t b,m b,m Wt := βs dBs. 0 Then

2 Pb n Pb o lim E sup W b,m W 0. (5.10) m t t = 0 t τn − →∞ ≤ ≤

1862 F a,b By Lemma 2.4, all the processes in (5.9) and (5.10) can be viewed as -adapted. Since τn θ , applying Lemma 5.2 we obtain from (5.9) and (5.10) that ≤

Z τn 1 1 a a b 2 P n 2 b,m 2 2 o P n b,m P o lim E as β as− ds 0, lim E sup W W 0. m ˆ ( s ˆ ) = m t t = 0 | − | 0 t τn − →∞ →∞ ≤ ≤ The first limit above implies that

2 Pa n Pa o lim E sup W b,m W 0, m t t = 0 t τn − →∞ ≤ ≤ which, together with the second limit above, in turn leads to

Pa Pb Pa Wt = Wt , 0 t τn, a.s. ≤ ≤ − a,b Clearly τn θ as n . Then ↑ → ∞ Pa Pb a,b Pa Wt = Wt , 0 t < θ , a.s. ≤ − a That is, the family W P , a satisfies the consistency condition (5.2). We then apply Theorem 5.1 directly to obtain{ the ∈aggregator A } W. P −

The Brownian motion W is our first example of a stochastic integral defined simultaneously underP all − Pa, a : ∈ A Z t 1/2 Wt = aˆs− dBs, t 0, q.s. (5.11) 0 ≥ P − We will investigate in detail the universal integration in Section 6.

a Remark 5.6. Although a and W P are F-progressively measurable, from Theorem 5.1 we can only deduce that a and W are Fˆ -progressively measurable. On the other hand, if we take a version of a ˆ P a W P that is progressively measurable to the augmented filtration F , then in general the consistency condition (5.2) does not hold. For example, let d = 1, at := 1, and bt := 1 + 1[1, )(t), t 0, as in Pa Pb ∞ ≥ Remark 5.3. Set W ω : B ω 1 a c ω and W ω : B ω B ω B ω 1 t . t ( ) = t ( ) + (Ω1) ( ) t ( ) = t ( ) + [ t ( ) 1( )] [1, )( ) ∞ Pa Pb a b a,b − Then both W and W are F F -progressively measurable. However, θ = 1, but ∩ Pb Pa Pb Pb a (W0 = W0 ) = (Ω1) = 0,

Pa Pb b so we do not have W = W , P -almost surely on [0, 1].

6 Quasi-sure stochastic analysis

In this section, we fix again a separable class of diffusion coefficients generated by 0, and set a := P : a . We shall develop the A-quasi sure stochastic analysis. We emphasizeA again P { ∈ A } P

1863 that, when a probability measure P is fixed, by Lemma 2.4 there is no need to distinguish the P P filtrations F+, F , and F . ∈ P L0 We first introduce several spaces. Denote by the collection of all ˆP -measurable random vari- p ables with appropriate dimension. For each p [1, ] and P F∞, we denote by L (P) the corresponding under the measure P and∈ ∞ ∈ P

p \ p Lˆ := L (P). P ∈P H0 H0 Rd Rd F Similarly, := ( ) denotes the collection of all valued ˆ P -progressively measurable pro- p a 0 cesses. H (P ) is the subset of all H H satisfying ∈ Z T a h p/2i p EP 1/2 2 H p a : a Hs ds < for all T > 0, T,H (P ) = s k k 0 | | ∞ R T and H2 Pa is the subset of H0 whose elements satisfy a1/2H 2ds < , Pa-almost surely, for loc( ) 0 s s all T 0. Finally, we define | | ∞ ≥ Hp \ Hp P H2 \ H2 P ˆ := ( ) and ˆ loc := loc( ). P P ∈P ∈P The following two results are direct applications of Theorem 5.1. Similar results were also proved in [5, 6], see e.g. Theorem 2.1 in [5], Theorem 36 in [6] and the Kolmogorov criterion of Theorem 31 in [6]. Proposition 6.1 (Completeness). Fix p 1, and let be satisfying (5.1). Lp ≥ APa (i) Let (Xn)n ˆ be a Cauchy sequence under each , a . Then there exists a unique random L⊂p Lp Pa ∈ A variable X ˆ such that Xn X in ( , ˆP ) for every a . ∈ p → F∞ ∈ A H p a (ii) Let (Xn)n ˆ be a Cauchy sequence under the norm T,H (P ) for all T 0 and a . Then p ⊂ H k · k p≥ a ∈ A there exists a unique process X ˆ such that Xn X under the norm T,H (P ) for all T 0 and a . ∈ → k · k ≥ ∈ A Lp Pa a Lp Pa a Proof. (i) By the completeness of ( , ˆP ), we may find X ( , ˆP ) such that Xn X in Lp Pa F∞ ∈ F∞ → a ( , ˆP ). The consistency condition of Theorem 5.1 is obviously satisfied by the family X , a , andF the∞ result follows. (ii) can be proved by a similar argument. { ∈ A} F Proposition 6.2 (Kolmogorov continuity criteria). Let be satisfying (5.1), and X be an ˆ P - Rn A Lp progressively measurable process with values in . We further assume that for some p > 1,X t ˆ for all t 0 and satisfy ∈ ≥ Pa E  p n+"a X t Xs ca t s for some constants ca, "a > 0. | − | ≤ | − | F ˜ Then X admits an ˆ P -progressively measurable version X which is Hölder continuous, -q.s. (with Pa P Hölder exponent αa < "a/p, -almost surely for every a ). ∈ A

1864 Proof. We apply the Kolmogorov continuity criterion under each Pa, a . This yields a family of Pa a a ∈ Aa a F -progressively measurable processes X , a such that X = X , P -almost surely, and X is { ∈ A }Pa Hölder continuous with Hölder exponent αa < "a/p, -almost surely for every a . Also in view a F ∈ A of Lemma 2.4, we may assume without loss of generality that X is ˆ P -progressively measurable for every a . Since each X a is a Pa-modification of X for every a , the consistency condition of Theorem∈ 5.1 A is immediately satisfied by the family X a, a . Then,∈ A the aggregated process X˜ constructed in that theorem has the desired properties.{ ∈ A }

Remark 6.3. The statements of Propositions 6.1 and 6.2 can be weakened further by allowing p to depend on a.

We next construct the stochastic integral with respect to the canonical process B which is simul- taneously defined under all the mutually singular measures Pa, a . Such constructions have been given in the literature but under regularity assumptions on the∈ integrand. A Here we only place standard conditions on the integrand but not regularity. H2 Theorem 6.4 (Stochastic integration). For satisfying (5.1), let H ˆ loc be given. Then, there F A ∈ exists a unique ( -q.s.) ˆ P -progressively measurable process M such that M is a local martingale under each Pa andP Z t Pa Mt = HsdBs, t 0, -almost surely for all a . 0 ≥ ∈ A If in addition H Hˆ 2, then for every a , M is a square integrable Pa-martingale. Moreover, Pa Pa R t E M 2 E ∈ a1/2H 2ds for all t ∈ A0. [ t ] = [ 0 s s ] | | ≥ R t Proof. For every a , the stochastic integral M a : H dB is well-defined Pa-almost surely as t = 0 s s Pa ∈ A a F -progressively measurable process. By Lemma 2.4, we may assume without loss of generality a F that M is ˆ P -adapted. Following the arguments in Corollary 5.5, in particular by applying Lemma 5.2, it is clear that the consistency condition (5.2) of Theorem 5.1 is satisfied by the family M a, a . Hence, there exists an aggregating process M. The remaining statements in the{ theorem∈ followsA} from classical results for standard stochastic integration under each Pa.

We next study the martingale representation. Theorem 6.5 (Martingale representation). Let be a separable class of diffusion coefficients gener- F A ated by 0 MRP. Let M be an ˆ P -progressively measurable process which is a quasi sure local martingale,A ⊂ that A is, M is a local martingale under P for all P . Then there existsP a − unique ( -q.s.) H2 ∈ P P process H ˆ loc such that ∈ Z t Mt = M0 + HsdBs, t 0, q.s.. 0 ≥ P − P P Proof. By Proposition 4.11, MRP. Then for each , all martingales can be represented as stochastic integrals with respectA ⊂ A to the canonical process.∈ P Hence,− there exists unique (P almost P H2 P − surely) process H loc( ) such that ∈ Z t P P Mt = M0 + Hs dBs, t 0, -almost surely. 0 ≥

Then the quadratic covariation under $P^b$ satisfies
\[ \langle M, B\rangle_t = \int_0^t H^{P^b}_s \hat a_s\,ds, \quad t\ge 0, \quad P^b\text{-almost surely.} \tag{6.1} \]
Now for any $a, b\in\mathcal A$, from the construction of the quadratic covariation and that of Lebesgue integrals, following similar arguments as in Corollary 5.5 one can easily check that
\[ \int_0^t H^{P^a}_s \hat a_s\,ds = \langle M, B\rangle^{P^a}_t = \langle M, B\rangle^{P^b}_t = \int_0^t H^{P^b}_s \hat a_s\,ds, \quad 0\le t < \theta^{a,b}, \quad P^a\text{-almost surely.} \]
This implies that
\[ H^{P^a} 1_{[0,\theta^{a,b})} = H^{P^b} 1_{[0,\theta^{a,b})}, \quad dt\times dP^a\text{-almost surely.} \]
That is, the family $\{H^P, P\in\mathcal P\}$ satisfies the consistency condition (5.2). Therefore, we may aggregate them into a process $H$. Then one may directly check that $H$ satisfies all the requirements.

There is also a $\mathcal P$-quasi sure decomposition of supermartingales.

Proposition 6.6 (Doob-Meyer decomposition). For $\mathcal A$ satisfying (5.1), assume that an $\hat{\mathbb F}^{\mathcal P}$-progressively measurable process $X$ is a $\mathcal P$-quasi sure supermartingale, i.e., $X$ is a $P^a$-supermartingale for all $a\in\mathcal A$. Then there exist unique ($\mathcal P$-q.s.) $\hat{\mathbb F}^{\mathcal P}$-progressively measurable processes $M$ and $K$ such that $M$ is a $\mathcal P$-quasi sure local martingale, $K$ is predictable and increasing, $\mathcal P$-q.s., with $M_0 = K_0 = 0$, and
\[ X_t = X_0 + M_t - K_t, \quad t\ge 0, \quad \mathcal P\text{-quasi surely.} \]
If further $X$ is in class (D), $\mathcal P$-quasi surely, i.e. the family $\{X_{\hat\tau}, \hat\tau\in\hat{\mathcal T}\}$ is $P$-uniformly integrable for all $P\in\mathcal P$, then $M$ is a $\mathcal P$-quasi surely uniformly integrable martingale.

Proof. For every $P\in\mathcal P$, we apply the Doob-Meyer decomposition theorem (see e.g. Dellacherie-Meyer [4], Theorem VII-12). Hence there exist a $P$-local martingale $M^P$ and a $P$-almost surely increasing process $K^P$ such that $M^P_0 = K^P_0 = 0$, $P$-almost surely. The consistency condition of Theorem 5.1 follows from the uniqueness of the Doob-Meyer decomposition. Then the aggregated processes provide the universal decomposition.

The following results also follow from similar applications of our main result.

Proposition 6.7 (Itô's formula). For $\mathcal A$ satisfying (5.1), let $A$, $H$ be $\hat{\mathbb F}^{\mathcal P}$-progressively measurable processes with values in $\mathbb R$ and $\mathbb R^d$, respectively. Assume that $A$ has finite variation over each time interval $[0,t]$ and that $H\in\hat H^2_{loc}$. For $t\ge 0$, set $X_t := A_t + \int_0^t H_s\,dB_s$. Then for any $C^2$ function $f:\mathbb R\to\mathbb R$ we have
\[ f(X_t) = f(A_0) + \int_0^t f'(X_s)(dA_s + H_s\,dB_s) + \frac12\int_0^t H_s^{\mathrm T}\hat a_s H_s f''(X_s)\,ds, \quad t\ge 0, \quad \mathcal P\text{-q.s.} \]

Proof. Apply Itô's formula under each $P\in\mathcal P$, and proceed as in the proof of Theorem 6.4.
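The statement of Proposition 6.7 can also be checked numerically path by path. The sketch below is only a sanity check under simplifying assumptions of ours (a constant diffusion coefficient $\hat a = 2$, $A_t = t$, $H_t = \cos(B_t)$ and $f = \sin$); it compares the two sides of the formula along one discretized path.

    import numpy as np

    rng = np.random.default_rng(1)
    a, T, n = 2.0, 1.0, 200_000          # constant quadratic variation density
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    B = np.concatenate([[0.0], np.cumsum(np.sqrt(a * dt) * rng.normal(size=n))])

    A = t                                 # finite variation part A_t = t
    H = np.cos(B)                         # an integrand in H^2_loc
    X = A + np.concatenate([[0.0], np.cumsum(H[:-1] * np.diff(B))])

    f, f1, f2 = np.sin, np.cos, lambda x: -np.sin(x)
    lhs = f(X[-1])
    rhs = (f(A[0])
           + np.sum(f1(X[:-1]) * (np.diff(A) + H[:-1] * np.diff(B)))
           + 0.5 * np.sum(H[:-1] ** 2 * a * f2(X[:-1]) * dt))
    print(lhs, rhs)                       # agree up to discretization error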

Proposition 6.8 (Local time). For $\mathcal A$ satisfying (5.1), let $A$, $H$ and $X$ be as in Proposition 6.7. Then for any $x\in\mathbb R$, the local time $\{L^x_t, t\ge 0\}$ exists $\mathcal P$-quasi surely and is given by
\[ 2L^x_t = |X_t - x| - |X_0 - x| - \int_0^t \mathrm{sgn}(X_s - x)(dA_s + H_s\,dB_s), \quad t\ge 0, \quad \mathcal P\text{-q.s.} \]

Proof. Apply Tanaka's formula under each $P\in\mathcal P$ and proceed as in the proof of Theorem 6.4.

Proceeding exactly as in the previous results, we also obtain a Girsanov theorem in this context.

Proposition 6.9 (Girsanov). For $\mathcal A$ satisfying (5.1), let $\phi$ be $\hat{\mathbb F}$-progressively measurable with $\int_0^t |\phi_s|^2\,ds < \infty$ for all $t\ge 0$, $\mathcal P$-quasi surely. Let
\[ Z_t := \exp\Big(\int_0^t \phi_s\,dW_s - \frac12\int_0^t |\phi_s|^2\,ds\Big) \quad\text{and}\quad \tilde W_t := W_t - \int_0^t \phi_s\,ds, \quad t\ge 0, \]
where $W$ is the $\mathcal P$-Brownian motion of (5.11). Suppose that, for each $P\in\mathcal P$, $E^P[Z_T] = 1$ for some $T\ge 0$. On $\hat{\mathcal F}_T$ we define the probability measure $Q^P$ by $dQ^P = Z_T\,dP$. Then
\[ Q^P\circ\tilde W^{-1} = P\circ W^{-1} \quad\text{for every } P\in\mathcal P, \]
i.e. $\tilde W$ is a $Q^P$-Brownian motion on $[0,T]$ for every $P\in\mathcal P$.

We finally discuss stochastic differential equations in this framework. Set $Q^m := \{(t,x) : t\ge 0,\ x\in C([0,t],\mathbb R^m)\}$. Let $b, \sigma$ be two functions from $\Omega\times Q^m$ to $\mathbb R^m$ and $\mathbb M_{m,d}(\mathbb R)$, respectively, where $\mathbb M_{m,d}(\mathbb R)$ is the space of $m\times d$ matrices with real entries. We are interested in the problem of solving the following stochastic differential equation simultaneously under all $P\in\mathcal P$:
\[ X_t = X_0 + \int_0^t b_s(\underline X_s)\,ds + \int_0^t \sigma_s(\underline X_s)\,dB_s, \quad t\ge 0, \quad \mathcal P\text{-q.s.}, \tag{6.2} \]
where $\underline X_t := (X_s, s\le t)$.

Proposition 6.10. Let $\mathcal A$ satisfy (5.1), and assume that, for every $P\in\mathcal P$ and $\tau\in\mathcal T$, the equation (6.2) has a unique $\mathbb F^P$-progressively measurable strong solution on the interval $[0,\tau]$. Then there is a $\mathcal P$-quasi surely aggregated solution to (6.2).

Proof. For each $P\in\mathcal P$, there is a $P$-solution $X^P$ on $[0,\infty)$, which we may consider in its $\hat{\mathbb F}^{\mathcal P}$-progressively measurable version by Lemma 2.4. The uniqueness on each $[0,\tau]$, $\tau\in\mathcal T$, implies that the family $\{X^P, P\in\mathcal P\}$ satisfies the consistency condition of Theorem 5.1.
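For intuition, the measure-by-measure solutions that Proposition 6.10 aggregates can be visualized with a standard Euler-Maruyama scheme: under each $P^a$ (here taken with a constant diffusion coefficient $a$), one solves the same equation with $dB_t$ replaced by $\sqrt a\,dW_t$. The sketch below uses Markovian Lipschitz coefficients, which is a simplification of the path-dependent setting of (6.2); all names and numerical choices are ours.

    import numpy as np

    def euler_solution(b, sigma, a, X0=0.0, T=1.0, n=10_000, seed=0):
        # Euler-Maruyama scheme for dX_t = b(t, X_t) dt + sigma(t, X_t) dB_t,
        # where B has constant quadratic variation density a under P^a,
        # so that dB_t = sqrt(a) dW_t for a Brownian motion W.
        rng = np.random.default_rng(seed)
        dt = T / n
        X = np.empty(n + 1)
        X[0] = X0
        for k in range(n):
            dB = np.sqrt(a * dt) * rng.normal()
            X[k + 1] = X[k] + b(k * dt, X[k]) * dt + sigma(k * dt, X[k]) * dB
        return X

    b = lambda t, x: -x                          # Lipschitz drift
    sigma = lambda t, x: 1.0 + 0.1 * np.sin(x)   # Lipschitz diffusion
    for a in (0.5, 1.0, 2.0):
        print(a, euler_solution(b, sigma, a)[-1])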

7 An application

As an application of our theory, we consider the problem of super-hedging contingent claims under volatility uncertainty, which was studied by Denis and Martini [5]. In contrast with their approach, our framework allows us to obtain the existence of the optimal hedging strategy. However, this is achieved at the price of restricting the non-dominated family of probability measures. We also mention a related recent paper by Fernholz and Karatzas [8], whose existence results are obtained in the Markov case with a continuity assumption on the corresponding value function.

Let $\mathcal A$ be a separable class of diffusion coefficients generated by $\mathcal A_0$, and let $\mathcal P := \{P^a : a\in\mathcal A\}$ be the corresponding family of measures. We consider a fixed time horizon, say $T = 1$. Clearly all the results in the previous sections can be extended to this setting, after some obvious modifications. Fix a nonnegative $\hat{\mathcal F}_1$-measurable real-valued random variable $\xi$. The superhedging cost of $\xi$ is defined by

\[ v(\xi) := \inf\Big\{ x : x + \int_0^1 H_s\,dB_s \ge \xi, \ \mathcal P\text{-q.s. for some } H\in\mathcal H \Big\}, \]
where the stochastic integral $\int_0^\cdot H_s\,dB_s$ is defined in the sense of Theorem 6.4, and $H\in H^0$ belongs to $\mathcal H$ if and only if
\[ \int_0^1 H_t^{\mathrm T}\hat a_t H_t\,dt < \infty, \ \mathcal P\text{-q.s.}, \quad\text{and}\quad \int_0^\cdot H_s\,dB_s \text{ is a } \mathcal P\text{-q.s. supermartingale.} \]

We shall provide a dual formulation of the problem $v(\xi)$ in terms of the following dynamic optimization problem:

\[ V^{P^a}_{\hat\tau} := \operatorname*{ess\,sup}_{b\in\mathcal A(\hat\tau, a)} E^{P^b}\big[\xi\,\big|\,\hat{\mathcal F}_{\hat\tau}\big], \quad P^a\text{-a.s.}, \quad a\in\mathcal A,\ \hat\tau\in\hat{\mathcal T}, \tag{7.1} \]
where
\[ \mathcal A(\hat\tau, a) := \{ b\in\mathcal A : \theta^{a,b} > \hat\tau \text{ or } \theta^{a,b} = \hat\tau = 1 \}. \]

Theorem 7.1. Let $\mathcal A$ be a separable class of diffusion coefficients generated by $\mathcal A_0\subset\mathcal A_{MRP}$. Assume that the family of random variables $\{V^P_{\hat\tau}, \hat\tau\in\hat{\mathcal T}\}$ is uniformly integrable under all $P\in\mathcal P$. Then
\[ v(\xi) = V(\xi) := \sup_{a\in\mathcal A}\big\|V^{P^a}_0\big\|_{L^\infty(P^a)}. \tag{7.2} \]
Moreover, if $v(\xi) < \infty$, then there exists $H\in\mathcal H$ such that $v(\xi) + \int_0^1 H_s\,dB_s \ge \xi$, $\mathcal P$-q.s.
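Before turning to the proof, the right-hand side of (7.2) is easy to illustrate numerically in the simplest uncertain volatility situation. The sketch below is a toy Monte Carlo computation under strong simplifying assumptions of ours (only constant diffusion coefficients $a\in[0.5, 2]$ and the convex payoff $\xi = (B_1)^+$, for which the supremum is attained at the largest $a$); it merely shows what the quantity $\sup_a E^{P^a}[\xi]$ looks like in a concrete case and is not part of the theorem.

    import numpy as np

    def price_under_Pa(a, K=0.0, n_paths=200_000, seed=0):
        # Monte Carlo estimate of E^{P^a}[(B_1 - K)^+] for a constant
        # diffusion coefficient a, i.e. B_1 ~ N(0, a) under P^a.
        rng = np.random.default_rng(seed)
        B1 = np.sqrt(a) * rng.normal(size=n_paths)
        return np.mean(np.maximum(B1 - K, 0.0))

    grid = np.linspace(0.5, 2.0, 7)
    values = [price_under_Pa(a) for a in grid]
    # For this convex payoff the supremum over constant a in [0.5, 2] is
    # attained at a = 2; the closed form is E[(sqrt(2) N)^+] = sqrt(2/(2*pi)).
    print(max(values), np.sqrt(2.0 / (2.0 * np.pi)))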

To prove the theorem, we need the following (partial) dynamic programming principle.

Lemma 7.2. Let $\mathcal A$ satisfy (5.1), and assume $V(\xi) < \infty$. Then, for any $\hat\tau_1, \hat\tau_2\in\hat{\mathcal T}$ with $\hat\tau_1\le\hat\tau_2$,
\[ V^{P^a}_{\hat\tau_1} \ge E^{P^b}\big[V^{P^b}_{\hat\tau_2}\,\big|\,\hat{\mathcal F}_{\hat\tau_1}\big], \quad P^a\text{-almost surely, for all } a\in\mathcal A \text{ and } b\in\mathcal A(a, \hat\tau_1). \]

Proof. By the definition of the essential supremum (see e.g. Neveu [12], Proposition VI-1-1), there exists a sequence $\{b_j, j\ge 1\}\subset\mathcal A(b, \hat\tau_2)$ such that $V^{P^b}_{\hat\tau_2} = \sup_{j\ge 1} E^{P^{b_j}}[\xi\,|\,\hat{\mathcal F}_{\hat\tau_2}]$, $P^b$-almost surely. For $n\ge 1$, denote $V^{b,n}_{\hat\tau_2} := \sup_{1\le j\le n} E^{P^{b_j}}[\xi\,|\,\hat{\mathcal F}_{\hat\tau_2}]$. Then $V^{b,n}_{\hat\tau_2}\uparrow V^{P^b}_{\hat\tau_2}$, $P^b$-almost surely, as $n\to\infty$. By the monotone convergence theorem, we also have $E^{P^b}[V^{b,n}_{\hat\tau_2}\,|\,\hat{\mathcal F}_{\hat\tau_1}]\uparrow E^{P^b}[V^{P^b}_{\hat\tau_2}\,|\,\hat{\mathcal F}_{\hat\tau_1}]$, $P^b$-almost surely, as $n\to\infty$. Since $b\in\mathcal A(a, \hat\tau_1)$, $P^b = P^a$ on $\hat{\mathcal F}_{\hat\tau_1}$. Then $E^{P^b}[V^{b,n}_{\hat\tau_2}\,|\,\hat{\mathcal F}_{\hat\tau_1}]\uparrow E^{P^b}[V^{P^b}_{\hat\tau_2}\,|\,\hat{\mathcal F}_{\hat\tau_1}]$, $P^a$-almost surely, as $n\to\infty$. Thus it suffices to show that
\[ V^{P^a}_{\hat\tau_1} \ge E^{P^b}\big[V^{b,n}_{\hat\tau_2}\,\big|\,\hat{\mathcal F}_{\hat\tau_1}\big], \quad P^a\text{-almost surely, for all } n\ge 1. \tag{7.3} \]
Fix $n$ and define

\[ \theta^b_n := \min_{1\le j\le n}\theta^{b, b_j}. \]
By Lemma 5.2, the $\theta^{b,b_j}$ are $\mathbb F$-stopping times taking only countably many values, and then so is $\theta^b_n$. Moreover, since $b_j\in\mathcal A(b, \hat\tau_2)$, we have either $\theta^b_n > \hat\tau_2$ or $\theta^b_n = \hat\tau_2 = 1$. Following exactly the same arguments as in the proof of (5.4), we arrive at
\[ \hat{\mathcal F}_{\hat\tau_2} \subset \mathcal F^{P^b}_{\theta^b_n\wedge 1}\vee\mathcal N^{P^b}(\mathcal F_\infty). \]
Since $P^{b_j} = P^b$ on $\hat{\mathcal F}_{\hat\tau_2}$, without loss of generality we may assume that the random variables $E^{P^{b_j}}[\xi\,|\,\hat{\mathcal F}_{\hat\tau_2}]$ and $V^{b,n}_{\hat\tau_2}$ are $\mathcal F_{\theta^b_n}$-measurable. Set $A_j := \{E^{P^{b_j}}[\xi\,|\,\hat{\mathcal F}_{\hat\tau_2}] = V^{b,n}_{\hat\tau_2}\}$ and $\tilde A_1 := A_1$, $\tilde A_j := A_j\setminus(\cup_{i<j}A_i)$, $2\le j\le n$. Then $\tilde A_1, \dots, \tilde A_n$ are $\mathcal F_{\theta^b_n}$-measurable and form a partition of $\Omega$. Now set
\[ \tilde b(t) := b(t)1_{[0,\hat\tau_2)}(t) + \sum_{j=1}^n b_j(t)1_{\tilde A_j}1_{[\hat\tau_2, 1]}(t). \]

We claim that $\tilde b\in\mathcal A$. Equivalently, we need to show that $\tilde b$ takes the form (4.12). We know that $b$ and the $b_j$ have the form

\[ b(t) = \sum_{m=0}^\infty\sum_{i=1}^\infty b^{0,m}_i 1_{E^{0,m}_i} 1_{[\tau^0_m, \tau^0_{m+1})}(t) \quad\text{and}\quad b_j(t) = \sum_{m=0}^\infty\sum_{i=1}^\infty b^{j,m}_i 1_{E^{j,m}_i} 1_{[\tau^j_m, \tau^j_{m+1})}(t), \]
with the stopping times and sets as before. Since $b_j(t) = b(t)$ for $t\le\theta^b_n$ and $j = 1, \dots, n$,
\[ \tilde b(t) = b(t)1_{[0,\theta^b_n)}(t) + \sum_{j=1}^n 1_{\tilde A_j} b_j(t)1_{[\theta^b_n, 1]}(t) \]
\[ = \sum_{m=0}^\infty\sum_{i=1}^\infty b^{0,m}_i 1_{E^{0,m}_i\cap\{\tau^0_m<\theta^b_n\}} 1_{[\tau^0_m\wedge\theta^b_n,\ \tau^0_{m+1}\wedge\theta^b_n)}(t) + \sum_{j=1}^n\sum_{m=0}^\infty\sum_{i=1}^\infty b^{j,m}_i 1_{E^{j,m}_i\cap\tilde A_j\cap\{\tau^j_{m+1}>\theta^b_n\}} 1_{[\tau^j_m\vee\theta^b_n,\ \tau^j_{m+1}\vee\theta^b_n)}(t). \]
By Definition 4.8, it is clear that $\tau^0_m\wedge\theta^b_n$ and $\tau^j_m\vee\theta^b_n$ are $\mathbb F$-stopping times taking only countably many values, for all $m\ge 0$ and $1\le j\le n$. For $m\ge 0$ and $1\le j\le n$, one can easily see that $E^{0,m}_i\cap\{\tau^0_m<\theta^b_n\}$ is $\mathcal F_{\tau^0_m\wedge\theta^b_n}$-measurable and that $E^{j,m}_i\cap\tilde A_j\cap\{\tau^j_{m+1}>\theta^b_n\}$ is $\mathcal F_{\tau^j_m\vee\theta^b_n}$-measurable. By ordering the stopping times $\tau^0_m\wedge\theta^b_n$ and $\tau^j_m\vee\theta^b_n$ we prove our claim that $\tilde b\in\mathcal A$.

It is now clear that $\tilde b\in\mathcal A(b, \hat\tau_2)\subset\mathcal A(a, \hat\tau_1)$. Thus,
\[ V^{P^a}_{\hat\tau_1} \ge E^{P^{\tilde b}}\big[\xi\,\big|\,\hat{\mathcal F}_{\hat\tau_1}\big] = E^{P^{\tilde b}}\Big[E^{P^{\tilde b}}\big[\xi\,\big|\,\hat{\mathcal F}_{\hat\tau_2}\big]\,\Big|\,\hat{\mathcal F}_{\hat\tau_1}\Big] = E^{P^{\tilde b}}\Big[\sum_{j=1}^n E^{P^{\tilde b}}\big[\xi 1_{\tilde A_j}\,\big|\,\hat{\mathcal F}_{\hat\tau_2}\big]\,\Big|\,\hat{\mathcal F}_{\hat\tau_1}\Big] \]
\[ = E^{P^{\tilde b}}\Big[\sum_{j=1}^n E^{P^{b_j}}\big[\xi 1_{\tilde A_j}\,\big|\,\hat{\mathcal F}_{\hat\tau_2}\big]\,\Big|\,\hat{\mathcal F}_{\hat\tau_1}\Big] = E^{P^{\tilde b}}\Big[\sum_{j=1}^n V^{b,n}_{\hat\tau_2} 1_{\tilde A_j}\,\Big|\,\hat{\mathcal F}_{\hat\tau_1}\Big] = E^{P^{\tilde b}}\big[V^{b,n}_{\hat\tau_2}\,\big|\,\hat{\mathcal F}_{\hat\tau_1}\big], \quad P^a\text{-almost surely.} \]

Finally, since $P^{\tilde b} = P^b$ on $\hat{\mathcal F}_{\hat\tau_2}$ and $P^{\tilde b} = P^a$ on $\hat{\mathcal F}_{\hat\tau_1}$, we prove (7.3) and hence the lemma.

Proof of Theorem 7.1. We first prove that $v(\xi)\ge V(\xi)$. If $v(\xi) = \infty$, the inequality is obvious. If $v(\xi) < \infty$, there are $x\in\mathbb R$ and $H\in\mathcal H$ such that the process $X_t := x + \int_0^t H_s\,dB_s$ satisfies $X_1\ge\xi$, $\mathcal P$-quasi surely. Notice that the process $X$ is a $P^b$-supermartingale for every $b\in\mathcal A$. Hence
\[ x = X_0 \ge E^{P^b}[X_1\,|\,\hat{\mathcal F}_0] \ge E^{P^b}[\xi\,|\,\hat{\mathcal F}_0], \quad P^b\text{-a.s., for all } b\in\mathcal A. \]
By Lemma 5.2, we know that $P^a = P^b$ on $\hat{\mathcal F}_0$ whenever $a\in\mathcal A$ and $b\in\mathcal A(0, a)$. Therefore,
\[ x \ge E^{P^b}[\xi\,|\,\hat{\mathcal F}_0], \quad P^a\text{-a.s.} \]
The definition of $V^{P^a}$ and the above inequality imply that $x\ge V^{P^a}_0$, $P^a$-almost surely. This implies that $x\ge\|V^{P^a}_0\|_{L^\infty(P^a)}$ for all $a\in\mathcal A$. Therefore, $x\ge V(\xi)$. Since this holds for any initial data $x$ that is super-replicating $\xi$, we conclude that $v(\xi)\ge V(\xi)$.

We next prove the opposite inequality. Again, we may assume that $V(\xi) < \infty$. Then $\xi\in\hat L^1$. For each $P\in\mathcal P$, by Lemma 7.2 the family $\{V^P_{\hat\tau}, \hat\tau\in\hat{\mathcal T}\}$ satisfies the (partial) dynamic programming principle. Then, following standard arguments (see e.g. [7], Appendix A2), we construct from this family a càdlàg $(\hat{\mathbb F}^{\mathcal P}_+, P)$-supermartingale $\hat V^P$ defined by
\[ \hat V^P_t := \lim_{\mathbb Q\ni r\downarrow t} V^P_r, \quad t\in[0,1]. \tag{7.4} \]
Also, for each $\hat\tau\in\hat{\mathcal T}$, it is clear that the family $\{V^P_{\hat\tau}, P\in\mathcal P\}$ satisfies the consistency condition (5.8). Then it follows immediately from (7.4) that $\{\hat V^P_t, P\in\mathcal P\}$ satisfies the consistency condition (5.8) for all $t\in[0,1]$. Since $P$-almost surely $\hat V^P$ is càdlàg, the family of processes $\{\hat V^P, P\in\mathcal P\}$ also satisfies the consistency condition (5.2). We then conclude from Theorem 5.1 that there exists a unique aggregating process $\hat V$.

Note that $\hat V$ is a $\mathcal P$-quasi sure supermartingale. Then it follows from the Doob-Meyer decomposition of Proposition 6.6 that there exist a $\mathcal P$-quasi sure local martingale $M$ and a $\mathcal P$-quasi sure increasing process $K$ such that $M_0 = K_0 = 0$ and $\hat V_t = \hat V_0 + M_t - K_t$, $t\in[0,1)$, $\mathcal P$-quasi surely. Using the uniform integrability hypothesis of this theorem, we conclude that the previous decomposition holds on $[0,1]$ and that the process $M$ is a $\mathcal P$-quasi sure martingale on $[0,1]$.

In view of the martingale representation Theorem 6.5, there exists an $\hat{\mathbb F}^{\mathcal P}$-progressively measurable process $H$ such that $\int_0^1 H_t^{\mathrm T}\hat a_t H_t\,dt < \infty$ and $\hat V_t = \hat V_0 + \int_0^t H_s\,dB_s - K_t$, $t\ge 0$, $\mathcal P$-quasi surely. Notice that $\hat V_1 = \xi$ and $K\ge K_0 = 0$. Hence $\hat V_0 + \int_0^1 H_s\,dB_s\ge\xi$, $\mathcal P$-quasi surely. Moreover, by the definition of $V(\xi)$, it is clear that $V(\xi)\ge\hat V_0$, $\mathcal P$-quasi surely. Thus $V(\xi) + \int_0^1 H_s\,dB_s\ge\xi$, $\mathcal P$-quasi surely. Finally, since $\xi$ is nonnegative, $\hat V\ge 0$. Therefore,
\[ V(\xi) + \int_0^t H_s\,dB_s \ge \hat V_0 + \int_0^t H_s\,dB_s \ge \hat V_t \ge 0, \quad \mathcal P\text{-q.s.} \]
This implies that $H\in\mathcal H$, and thus $V(\xi)\ge v(\xi)$.

Remark 7.3. Denis and Martini [5] require
\[ \underline a \le a \le \overline a \quad\text{for all } a\in\mathcal A, \tag{7.5} \]
for some given constant matrices $\underline a\le\overline a$ in $\mathbb S^{>0}_d$. We do not impose this constraint. In other words, we may allow $\underline a = 0$ and $\overline a = \infty$. Such a relaxation is important in problems of static hedging in finance; see e.g. [2] and the references therein. However, we still require that each $a\in\mathcal A$ takes values in $\mathbb S^{>0}_d$.

We shall introduce the set $\overline{\mathcal A}_S\subset\mathcal A_{MRP}$ induced from the strong formulation in Section 8. When $\mathcal A_0\subset\overline{\mathcal A}_S$, we have the following additional interesting properties.

Remark 7.4. If each $P\in\mathcal P$ satisfies the Blumenthal zero-one law (e.g. if $\mathcal A_0\subset\overline{\mathcal A}_S$, by Lemma 8.2 below), then $V^{P^a}_0$ is a constant for all $a\in\mathcal A$, and thus (7.2) becomes
\[ v(\xi) = V(\xi) := \sup_{a\in\mathcal A} V^{P^a}_0. \]

Remark 7.5. In general, the value $V(\xi)$ depends on $\mathcal A$, and then so does $v(\xi)$. However, when $\xi$ is uniformly continuous in $\omega$ under the uniform norm, we show in [16] that
\[ \sup_{P\in\mathcal P_S} E^P[\xi] = \inf\Big\{ x : x + \int_0^1 H_s\,dB_s \ge \xi, \ P\text{-a.s. for all } P\in\mathcal P_S, \text{ for some admissible } H \Big\}, \tag{7.6} \]
and the optimal superhedging strategy $H$ exists; here the admissible processes $H$ are the $\mathbb F$-progressively measurable ones such that, for all $P\in\mathcal P_S$, $\int_0^1 H_t^{\mathrm T}\hat a_t H_t\,dt < \infty$, $P$-almost surely, and $\int_0^\cdot H_s\,dB_s$ is a $P$-supermartingale. Moreover, if $\mathcal A\subset\overline{\mathcal A}_S$ is dense in some sense, then
\[ V(\xi) = v(\xi) = \text{the } \mathcal P_S\text{-superhedging cost in (7.6)}. \]
In particular, all these quantities are independent of the choice of $\mathcal A$. This issue is discussed in detail in our accompanying paper [16] (Theorem 5.3 and Proposition 5.4), where we establish a duality result for a more general setting called the second order target problem. However, the set-up in [16] is more general, and this independence can be proved by the above arguments under suitable assumptions.

8 Mutually singular measures induced by strong formulation

We recall the set $\mathcal P_S$ introduced in the Introduction as
\[ \mathcal P_S := \big\{ P^\alpha_S : \alpha\in\mathcal A \big\}, \quad\text{where } P^\alpha_S := P_0\circ(X^\alpha)^{-1}, \tag{8.7} \]
and $X^\alpha$ is given in (1.1). Clearly $\mathcal P_S\subset\mathcal P_W$. Although we do not use it in the present paper, this class is important both in theory and in applications. We remark that Denis-Martini [5] and our paper [15] consider the class $\mathcal P_W$, while Denis-Hu-Peng [6] and our paper [17] consider the class $\mathcal P_S$, up to some technical restriction of the diffusion coefficients.

We start the analysis of this set by noting that
\[ \alpha \text{ is the quadratic variation density of } X^\alpha \text{ and } dB_s = \alpha_s^{-1/2}\,dX^\alpha_s, \text{ under } P_0. \tag{8.8} \]
Since $B$ under $P^\alpha_S$ has the same distribution as $X^\alpha$ under $P_0$, it is clear that
\[ \text{the } P^\alpha_S\text{-distribution of } (B, \hat a, W^{P^\alpha_S}) \text{ is equal to the } P_0\text{-distribution of } (X^\alpha, \alpha, B). \tag{8.9} \]
In particular, this implies that

\[ \hat a(X^\alpha) = \alpha(B), \ P_0\text{-a.s.}, \qquad \hat a(B) = \alpha(W^{P^\alpha_S}), \ P^\alpha_S\text{-a.s.}, \tag{8.10} \]
and, for any $a\in\mathcal A_W(P^\alpha_S)$, $X^\alpha$ is a strong solution of the SDE (4.4) with coefficient $a$. Moreover, we have the following characterization of $\mathcal P_S$ in terms of the filtrations.

Lemma 8.1. $\mathcal P_S = \big\{ P\in\mathcal P_W : \mathbb F^{W^P}(P) = \mathbb F^P \big\}$.

Proof. By (8.8), $\alpha$ and $B$ are $\mathbb F^{X^\alpha}$-progressively measurable. Since $\mathbb F$ is generated by $B$, we conclude that $\mathbb F\subset\mathbb F^{X^\alpha}(P_0)$. By completing the filtration we next obtain that $\mathbb F^{P_0}\subset\mathbb F^{X^\alpha}(P_0)$. Moreover, for any $\alpha\in\mathcal A$, it is clear that $\mathbb F^{X^\alpha}\subset\mathbb F$. Thus, $\mathbb F^{X^\alpha}(P_0) = \mathbb F^{P_0}$. Now we invoke (8.9) and conclude that $\mathbb F^{W^P}(P) = \mathbb F^P$ for any $P = P^\alpha_S\in\mathcal P_S$.

Conversely, suppose that $P\in\mathcal P_W$ is such that $\mathbb F^{W^P}(P) = \mathbb F^P$. Then $\hat a(B) = \beta(W^P_\cdot)$ for some measurable mapping $\beta: Q\to\mathbb S^{>0}_d$. Set $\alpha := \beta(B_\cdot)$; we conclude that $P = P^\alpha_S$.
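The strong formulation (8.7)-(8.9) is straightforward to reproduce numerically: one simulates the canonical process $B$ under the Wiener measure $P_0$, builds $X^\alpha_t = \int_0^t\alpha_s^{1/2}\,dB_s$ for an adapted diffusion coefficient $\alpha$, and reads off its quadratic variation as in (8.8). The sketch below is only an illustration; the particular choice of $\alpha$ and all names are ours.

    import numpy as np

    rng = np.random.default_rng(2)
    T, n = 1.0, 100_000
    dt = T / n

    # Canonical process B under the Wiener measure P_0 (a Brownian motion).
    B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.normal(size=n))])

    # A strictly positive, adapted diffusion coefficient alpha (a function of
    # the past of B); this particular choice is purely illustrative.
    alpha = 1.0 + np.minimum(np.maximum.accumulate(np.abs(B)), 1.0)

    # X^alpha_t = int_0^t alpha_s^{1/2} dB_s; its P_0-law is P^alpha_S by (8.7).
    X = np.concatenate([[0.0], np.cumsum(np.sqrt(alpha[:-1]) * np.diff(B))])

    # (8.8): the quadratic variation of X^alpha recovers int_0^1 alpha_s ds.
    print(np.sum(np.diff(X) ** 2), np.sum(alpha[:-1]) * dt)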

The following result shows that the measures $P\in\mathcal P_S$ satisfy MRP and the Blumenthal zero-one law.

Lemma 8.2. $\mathcal P_S\subset\mathcal P_{MRP}$, and every $P\in\mathcal P_S$ satisfies the Blumenthal zero-one law.

Proof. Fix $P\in\mathcal P_S$. We first show that $P\in\mathcal P_{MRP}$. Indeed, for any $(\mathbb F^P, P)$-local martingale $M$, Lemma 8.1 implies that $M$ is an $(\mathbb F^{W^P}(P), P)$-local martingale. Recall that $W^P$ is a $P$-Brownian motion. Hence we can now use the standard martingale representation theorem: there exists a unique $\mathbb F^{W^P}(P)$-progressively measurable process $\tilde H$ such that
\[ \int_0^t |\tilde H_s|^2\,ds < \infty \quad\text{and}\quad M_t = M_0 + \int_0^t \tilde H_s\,dW^P_s, \quad t\ge 0, \ P\text{-a.s.} \]

Since $\hat a > 0$, $dW^P = \hat a^{-1/2}\,dB$. So one can check directly that the process $H := \hat a^{-1/2}\tilde H$ satisfies all the requirements.

We next prove the Blumenthal zero-one law. For this purpose, fix $E\in\mathcal F_{0+}$. By Lemma 8.1, $E\in\mathcal F^{W^P}_{0+}(P)$. Again we recall that $W^P$ is a $P$-Brownian motion and use the standard Blumenthal zero-one law for the Brownian motion. Hence $P(E)\in\{0, 1\}$.

We now define analogously the following spaces of measures and diffusion processes.

\[ \overline{\mathcal P}_S := \mathcal P_S\cap\overline{\mathcal P}_W, \qquad \overline{\mathcal A}_S := \big\{ a\in\overline{\mathcal A}_W : P^a\in\overline{\mathcal P}_S \big\}. \tag{8.11} \]
Then it is clear that

\[ \overline{\mathcal P}_S\subset\mathcal P_{MRP}\subset\overline{\mathcal P}_W \quad\text{and}\quad \overline{\mathcal A}_S\subset\mathcal A_{MRP}\subset\overline{\mathcal A}_W. \]
The inclusion $\overline{\mathcal P}_S\subset\overline{\mathcal P}_W$ is strict; see Barlow [1]. We remark that one can easily check that the diffusion process $a$ in Examples 4.4 and 4.5 and the generating class $\mathcal A_0$ in Examples 4.9, 4.10, and 4.14 are all in $\overline{\mathcal A}_S$. Our final result extends Proposition 4.11.

Proposition 8.3. Let $\mathcal A$ be a separable class of diffusion coefficients generated by $\mathcal A_0$. If $\mathcal A_0\subset\overline{\mathcal A}_S$, then $\mathcal A\subset\overline{\mathcal A}_S$.

Proof. Let $a\in\mathcal A$ be given in the form (4.12) and, by Proposition 4.11, let $P$ be the unique weak solution to the SDE (4.4) on $[0,\infty)$ with coefficient $a$ and initial condition $P(B_0 = 0) = 1$. By Lemma 8.1 and its proof, it suffices to show that $a$ is $\mathbb F^{W^P}(P)$-adapted. Recall (4.12). We prove by induction on $n$ that
\[ a_t 1_{\{t<\tau_n\}} \text{ is } \mathcal F^{W^P}_{t\wedge\tau_n}(P)\text{-measurable for all } t\ge 0. \tag{8.12} \]
Since $\tau_0 = 0$, $a_0$ is $\mathcal F_0$-measurable, and $P(B_0 = 0) = 1$, (8.12) holds when $n = 0$. Assume (8.12) holds true for $n$. We now consider $n+1$. Note that
\[ a_t 1_{\{t<\tau_{n+1}\}} = a_t 1_{\{t<\tau_n\}} + a_t 1_{\{\tau_n\le t<\tau_{n+1}\}}. \]
By the induction assumption it suffices to show that
\[ a_t 1_{\{\tau_n\le t<\tau_{n+1}\}} \text{ is } \mathcal F^{W^P}_{\tau_n\vee(t\wedge\tau_{n+1})}(P)\text{-measurable for all } t\ge 0. \tag{8.13} \]
Applying Lemma 4.12, we have $a_t = \sum_{m\ge 1} a_m(t)1_{E_m}$ for $t<\tau_{n+1}$, where $a_m\in\mathcal A_0$ and $\{E_m, m\ge 1\}\subset\mathcal F_{\tau_n}$ form a partition of $\Omega$. Let $P_m$ denote the unique weak solution to the SDE (4.4) on $[0,\infty)$ with coefficient $a_m$ and initial condition $P_m(B_0 = 0) = 1$. Then, by Lemma 5.2 we have, for each $m\ge 1$,
\[ P(E\cap E_m) = P_m(E\cap E_m), \quad \forall E\in\mathcal F_{\tau_{n+1}}. \tag{8.14} \]
Moreover, by (4.2) it is clear that
\[ W^P_t = W^{P_m}_t, \quad 0\le t\le\tau_{n+1}, \quad P\text{-a.s. on } E_m \ (\text{and } P_m\text{-a.s. on } E_m). \tag{8.15} \]
Now, since $a_m\in\mathcal A_0\subset\overline{\mathcal A}_S$, we know that $a_m(t)1_{\{t<\tau_{n+1}\}}$ is $\mathcal F^{W^{P_m}}_{t\wedge\tau_{n+1}}(P_m)$-measurable. This, together with the fact that $E_m\in\mathcal F_{\tau_n}$, implies that $a_m(t)1_{\{t<\tau_{n+1}\}}1_{E_m}$ is $\mathcal F^{W^{P_m}}_{\tau_n\vee(t\wedge\tau_{n+1})}(P_m)$-measurable. By (8.14), (8.15) and the fact that $a_t = a_m(t)$ for $t<\tau_{n+1}$ on $E_m$, we see that $a_t 1_{\{t<\tau_{n+1}\}}1_{E_m}$ is $\mathcal F^{W^P}_{\tau_n\vee(t\wedge\tau_{n+1})}(P)$-measurable. Since $m$ is arbitrary, we conclude that
\[ a_t 1_{\{t<\tau_{n+1}\}} = \sum_{m\ge 1} a_t 1_{\{t<\tau_{n+1}\}}1_{E_m} \]
is $\mathcal F^{W^P}_{\tau_n\vee(t\wedge\tau_{n+1})}(P)$-measurable. This proves (8.13), and hence the proposition.

9 Appendix

In this Appendix we provide a few more examples concerning weak solutions of (4.4) and complete the remaining technical proofs.

9.1 Examples

Example 9.1 (No weak solution). Let $a_0 = 1$ and, for $t > 0$,
\[ a_t := 1 + 1_E, \quad\text{where } E := \Big\{ \limsup_{h\downarrow 0}\frac{B_h - B_0}{\sqrt{2h\ln\ln h^{-1}}} \ne \sqrt 2 \Big\}. \]
Then $E\in\mathcal F_{0+}$. Assume $P$ is a weak solution to (4.4). On $E$, $a = 2$, and then $\limsup_{h\downarrow 0}(B_h - B_0)/\sqrt{2h\ln\ln h^{-1}} = \sqrt 2$, $P$-almost surely on $E$; thus $P(E) = 0$. On $E^c$, $a = 1$, and then $\limsup_{h\downarrow 0}(B_h - B_0)/\sqrt{2h\ln\ln h^{-1}} = 1$, $P$-almost surely on $E^c$; thus $P(E^c) = 0$. Hence there cannot be any weak solution.
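The mechanism behind Example 9.1 is the law of the iterated logarithm: near time $0$ the normalized increments detect the value of the diffusion coefficient, since the limsup equals $\sqrt a$ when the quadratic variation density is the constant $a$. The crude Python sketch below (a finite-$h$ proxy, not a proof; all choices are ours) illustrates how the statistic separates $a = 1$ from $a = 2$.

    import numpy as np

    def lil_statistic(a, h_min=1e-6, h_max=1e-2, n=200_000, seed=0):
        # Finite-range proxy for limsup_{h -> 0} B_h / sqrt(2 h ln ln(1/h))
        # when B_t = sqrt(a) W_t; the limiting value is sqrt(a).
        rng = np.random.default_rng(seed)
        h = np.linspace(h_min, h_max, n)
        dW = rng.normal(0.0, np.sqrt(np.diff(h, prepend=0.0)))
        B = np.sqrt(a) * np.cumsum(dW)
        return np.max(B / np.sqrt(2.0 * h * np.log(np.log(1.0 / h))))

    # With a common seed the statistic for a = 2 is exactly sqrt(2) times the
    # one for a = 1; the finite-range maximum typically undershoots sqrt(a).
    for a in (1.0, 2.0):
        print(a, lil_statistic(a), np.sqrt(a))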

Example 9.2 (Martingale measure without Blumenthal 0-1 law). Let $\Omega_0 := \{1, 2\}$ and let $P^0_0$ be the probability measure on $\Omega_0$ with
\[ P^0_0(\{1\}) = P^0_0(\{2\}) = \tfrac12. \]
Let $\tilde\Omega := \Omega\times\Omega_0$ and let $\tilde P_0$ be the product of $P_0$ and $P^0_0$. Define
\[ \tilde B_t(\omega, 1) := \omega_t, \qquad \tilde B_t(\omega, 2) := 2\omega_t. \]
Then $\tilde P := \tilde P_0\circ(\tilde B)^{-1}$ is in $\mathcal P_W$. Denote
\[ E := \Big\{ \limsup_{h\downarrow 0}\frac{\tilde B_h - \tilde B_0}{\sqrt{2h\ln\ln h^{-1}}} = 1 \Big\}. \]
Then $E\in\mathcal F^{\tilde B}_{0+}$, and $\tilde P_0(E) = P^0_0(\{1\}) = \frac12$.

Example 9.3 (Martingale measure without MRP). Let $\tilde\Omega := (C[0,1])^2$, let $(\tilde W, \tilde W')$ be the canonical process, and let $\tilde P_0$ be the Wiener measure, so that $\tilde W$ and $\tilde W'$ are independent Brownian motions under $\tilde P_0$. Let $\varphi:\mathbb R\to[0,1]$ be a measurable function, and
\[ \tilde B_t := \int_0^t \tilde\alpha_s\,d\tilde W_s, \quad\text{where } \tilde\alpha_t := [1 + \varphi(\tilde W'_t)]^{1/2}, \quad t\ge 0. \]
This induces the following probability measure $P$ on $\Omega$ with $d = 1$:
\[ P := \tilde P_0\circ\tilde B^{-1}. \]
Then $P$ is a square integrable martingale measure with $d\langle B\rangle_t/dt\in[1,2]$, $P$-almost surely.

We claim that $B$ has no MRP under $P$. Indeed, if $B$ has MRP under $P$, then so does $\tilde B$ under $\tilde P_0$. Let $\tilde\xi := E^{\tilde P_0}[\tilde W'_1\,|\,\mathcal F^{\tilde B}_1]$. Since $\tilde\xi\in\mathcal F^{\tilde B}_1$ and is obviously $\tilde P_0$-square integrable, there exists $\tilde H\in H^2_a(\tilde P_0, \mathbb F^{\tilde B})$ such that
\[ \tilde\xi = E^{\tilde P_0}[\tilde\xi] + \int_0^1 \tilde H_t\,d\tilde B_t = E^{\tilde P_0}[\tilde\xi] + \int_0^1 \tilde H_t\tilde\alpha_t\,d\tilde W_t, \quad \tilde P_0\text{-a.s.} \]
Since $\tilde W$ and $\tilde W'$ are independent under $\tilde P_0$, we get $0 = E^{\tilde P_0}[\tilde\xi\tilde W'_1] = E^{\tilde P_0}[|\tilde\xi|^2]$. Then $\tilde\xi = 0$, $\tilde P_0$-almost surely, and thus
\[ E^{\tilde P_0}\big[\tilde W'_1|\tilde B_1|^2\big] = E^{\tilde P_0}\big[\tilde\xi|\tilde B_1|^2\big] = 0. \tag{9.1} \]
However, it follows from Itô's formula, together with the independence of $\tilde W$ and $\tilde W'$, that
\[ E^{\tilde P_0}\big[\tilde W'_1|\tilde B_1|^2\big] = E^{\tilde P_0}\Big[\tilde W'_1\int_0^1 2\tilde B_t\tilde\alpha_t\,d\tilde W_t + \tilde W'_1\int_0^1 \tilde\alpha_t^2\,dt\Big] = E^{\tilde P_0}\Big[\int_0^1 \tilde W'_t\big(1 + \varphi(\tilde W'_t)\big)\,dt\Big] = E^{\tilde P_0}\Big[\int_0^1 \tilde W'_t\varphi(\tilde W'_t)\,dt\Big], \]
and we obtain a contradiction to (9.1) by observing that the latter expectation is non-zero for $\varphi(x) := 1_{\mathbb R_+}(x)$.
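As a numerical sanity check of the last display (and not part of the argument), one can estimate $E^{\tilde P_0}[\tilde W'_1|\tilde B_1|^2]$ by Monte Carlo for the specific choice $\varphi = 1_{\mathbb R_+}$; by the computation above it equals $\int_0^1 E[(\tilde W'_t)^+]\,dt = \frac{2}{3\sqrt{2\pi}}\approx 0.266$, and in particular it is non-zero. All numerical choices below are ours.

    import numpy as np

    rng = np.random.default_rng(3)
    n_paths, n_steps = 10_000, 400
    dt = 1.0 / n_steps
    phi = lambda x: (x >= 0.0).astype(float)          # phi = indicator of R_+

    # Independent Brownian motions W and W' under the Wiener measure.
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    dWp = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    Wp = np.cumsum(dWp, axis=1)

    # B_1 = int_0^1 alpha_t dW_t with alpha_t = (1 + phi(W'_t))^{1/2},
    # using left endpoints of the time grid for the integrand.
    Wp_left = np.concatenate([np.zeros((n_paths, 1)), Wp[:, :-1]], axis=1)
    alpha = np.sqrt(1.0 + phi(Wp_left))
    B1 = np.sum(alpha * dW, axis=1)

    estimate = np.mean(Wp[:, -1] * B1 ** 2)
    exact = 2.0 / (3.0 * np.sqrt(2.0 * np.pi))
    print(estimate, exact)        # agree up to Monte Carlo error; both non-zero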

We note, however, that we are not able to find a good example such that $a\in\overline{\mathcal A}_W$ (so that (4.4) has a unique weak solution) but $B$ has no MRP under $P^a$ (and consequently (4.4) has no strong solution).

9.2 Some technical proofs

Proof of Lemma 2.4. The uniqueness is obvious. We now prove the existence.

(i) Assume $X$ is càdlàg, $P$-almost surely. Let $E_0 := \{\omega: X_\cdot(\omega)\text{ is not càdlàg}\}$. For each $r\in\mathbb Q\cap(0,\infty)$, there exists $\tilde X_r\in\mathcal F^+_r$ such that $E_r := \{\tilde X_r\ne X_r\}\in\mathcal N^P(\mathcal F_\infty)$. Let $E := E_0\cup(\cup_r E_r)$. Then $P(E) = 0$. For integers $n\ge 1$ and $k\ge 0$, set $t^n_k := k/n$, and define
\[ \tilde X^n_t := \tilde X_{t^n_{k+1}} \ \text{for } t\in(t^n_k, t^n_{k+1}], \quad\text{and}\quad \tilde X := \big(\lim_{n\to\infty}\tilde X^n\big)1_{\{\lim_{n\to\infty}\tilde X^n\in\mathbb R\}}. \]
Then, for any $t\in(t^n_k, t^n_{k+1}]$, $\tilde X^n_t\in\mathcal F^+_{t^n_{k+1}}$ and $\tilde X^n|_{[0,t]}\in\mathcal B([0,t])\times\mathcal F^+_{t^n_{k+1}}$. Since $\mathbb F^+$ is right continuous, we get $\tilde X_t\in\mathcal F^+_t$ and $\tilde X|_{[0,t]}\in\mathcal B([0,t])\times\mathcal F^+_t$. That is, $\tilde X\in\mathbb F^+$. Moreover, for any $\omega\notin E$ and $n\ge 1$, if $t\in(t^n_k, t^n_{k+1}]$, we get
\[ \lim_{n\to\infty}\tilde X^n_t(\omega) = \lim_{n\to\infty}\tilde X_{t^n_{k+1}}(\omega) = \lim_{n\to\infty}X_{t^n_{k+1}}(\omega) = X_t(\omega). \]
So $\{\omega: \text{there exists } t\ge 0 \text{ such that } \tilde X_t(\omega)\ne X_t(\omega)\}\subset E$. Then $\tilde X$ is $P$-indistinguishable from $X$ and thus $\tilde X$ also has càdlàg paths, $P$-almost surely.

(ii) Assume $X$ is $\mathbb F$-progressively measurable and bounded. Let $Y_t := \int_0^t X_s\,ds$. Then $Y$ is continuous. By (i), there exists an $\mathbb F^+$-progressively measurable continuous process $\tilde Y$ such that $\tilde Y$ and $Y$ are $P$-indistinguishable. Let $E_0 := \{\text{there exists } t\ge 0 \text{ such that } \tilde Y_t\ne Y_t\}$; then $P(E_0) = 0$ and $Y_\cdot(\omega)$ is continuous for each $\omega\notin E_0$. Define, for $n\ge 1$,
\[ X^n_t := n\big[\tilde Y_t - \tilde Y_{t-\frac1n}\big], \qquad \tilde X := \big(\lim_{n\to\infty}X^n\big)1_{\{\lim_{n\to\infty}X^n\in\mathbb R\}}. \]
As in (i), we see $\tilde X\in\mathbb F^+$. Moreover, for each $\omega\notin E_0$, $X^n_t(\omega) = n\int_{t-\frac1n}^t X_s(\omega)\,ds$. Then $\tilde X_\cdot(\omega) = X_\cdot(\omega)$, $dt$-almost surely. Therefore, $\tilde X = X$, $P$-almost surely.

(iii) For general $\mathbb F$-progressively measurable $X$, let $X^m_t := (-m)\vee(X_t\wedge m)$ for any $m\ge 1$. By (ii), $X^m$ has an $\mathbb F^+$-adapted modification $\tilde X^m$. Then obviously the following process $\tilde X$ satisfies all the requirements: $\tilde X := (\lim_{m\to\infty}\tilde X^m)1_{\{\lim_{m\to\infty}\tilde X^m\in\mathbb R\}}$.

To prove Example 4.5, we need a simple lemma.

Lemma 9.4. Let $\tau$ be an $\mathbb F$-stopping time and let $X$ be an $\mathbb F$-progressively measurable process. Then $\tau(X_\cdot)$ is also an $\mathbb F$-stopping time. Moreover, if $Y$ is $\mathbb F$-progressively measurable and $Y_t = X_t$ for all $t\le\tau(X_\cdot)$, then $\tau(Y_\cdot) = \tau(X_\cdot)$.

Proof. Since $\tau$ is an $\mathbb F$-stopping time, we have $\{\tau(X_\cdot)\le t\}\in\mathcal F^X_t$ for all $t\ge 0$. Moreover, since $X$ is $\mathbb F$-progressively measurable, we know that $\mathcal F^X_t\subset\mathcal F_t$. Then $\{\tau(X_\cdot)\le t\}\in\mathcal F_t$, and thus $\tau(X_\cdot)$ is an $\mathbb F$-stopping time.

Now assume $Y_t = X_t$ for all $t\le\tau(X_\cdot)$. For any $t\ge 0$, on $\{\tau(X_\cdot) = t\}$ we have $Y_s = X_s$ for all $s\le t$. Since $\{\tau(X_\cdot) = t\}\in\mathcal F^X_t$ and by definition $\mathcal F^X_t = \sigma(X_s, s\le t)$, we obtain $\tau(Y_\cdot) = t$ on the event $\{\tau(X_\cdot) = t\}$. Therefore, $\tau(Y_\cdot) = \tau(X_\cdot)$.

Proof of Example 4.5. Without loss of generality, we prove only that (4.4) on $\mathbb R_+$ with $X_0 = 0$ has a unique strong solution. In this case the stochastic differential equation becomes
\[ dX_t = \sum_{n=0}^\infty a^{1/2}_n(X_\cdot) 1_{[\tau_n(X_\cdot), \tau_{n+1}(X_\cdot))}(t)\,dB_t, \quad t\ge 0, \quad P_0\text{-a.s.} \]
We prove the result by induction on $n$. Let $X^0$ be the solution to the SDE
\[ X^0_t = \int_0^t a^{1/2}_0(X^0_\cdot)\,dB_s, \quad t\ge 0, \quad P_0\text{-almost surely.} \]
Note that $a_0$ is a constant; thus $X^0_t = a_0^{1/2}B_t$ and is unique. Denote $\tilde\tau_0 := 0$ and $\tilde\tau_1 := \tau_1(X^0_\cdot)$. By Lemma 9.4, $\tilde\tau_1$ is an $\mathbb F$-stopping time. Now let $X^1_t := X^0_t$ for $t\le\tilde\tau_1$, and
\[ X^1_t = X^0_{\tilde\tau_1} + \int_{\tilde\tau_1}^t a^{1/2}_1(X^1_\cdot)\,dB_s, \quad t\ge\tilde\tau_1, \quad P_0\text{-a.s.} \]
Note that $a_1\in\mathcal F_{\tau_1}$, that is, for any $y\in\mathbb R$ and $t\ge 0$, $\{a_1(B_\cdot)\le y,\ \tau_1(B_\cdot)\le t\}\in\mathcal F_t$. Thus, for any $x, \tilde x\in C(\mathbb R_+, \mathbb R^d)$, if $x_s = \tilde x_s$, $0\le s\le t$, then $a_1(x_\cdot)1_{\{\tau_1(x_\cdot)\le t\}} = a_1(\tilde x_\cdot)1_{\{\tau_1(\tilde x_\cdot)\le t\}}$. In particular, noting that $\tau_1(X^1_\cdot) = \tau_1(X^0_\cdot) = \tilde\tau_1$, for each $\omega$, by choosing $t = \tilde\tau_1$ we obtain that $a_1(X^1_\cdot) = a_1(X^0_\cdot)$. Thus $X^1_t = X^0_{\tilde\tau_1} + a^{1/2}_1(X^0_\cdot)[B_t - B_{\tilde\tau_1}]$, $t\ge\tilde\tau_1$, and is unique. Now repeat the procedure for $n = 1, 2, \dots$; we obtain the unique strong solution $X$ on $[0, \tilde\tau_\infty)$, where $\tilde\tau_\infty := \lim_{n\to\infty}\tau_n(X_\cdot)$. Since $a$ is bounded, it is obvious that $X_{\tilde\tau_\infty} := \lim_{t\uparrow\tilde\tau_\infty}X_t$ exists $P_0$-almost surely. Then, by setting $X_t := X_{\tilde\tau_\infty}$ for $t\in(\tilde\tau_\infty, \infty)$, we complete the construction.

Proof of Lemma 4.12. Let $a$ be given as in (4.12) and let $\tau\in\mathcal T$ be fixed. First, since $\{E^n_i, i\ge 1\}$ is a partition of $\Omega$, for any $n\ge 0$,
\[ \Big\{ \bigcap_{j=0}^n E^j_{i_j},\ (i_j)_{0\le j\le n}\in\mathbb N^{n+1} \Big\} \ \text{also form a partition of } \Omega. \]
Next, assume $\tau_n$ takes values $t^n_k$ (possibly including the value $\infty$), $k\ge 1$. Then $\{\{\tau_n = t^n_k\}, k\ge 1\}$ form a partition of $\Omega$. Similarly we have, for any $n\ge 0$,
\[ \Big\{ \bigcap_{j=0}^{n+1}\{\tau_j = t^j_{k_j}\},\ (k_j)_{0\le j\le n+1}\in\mathbb N^{n+2} \Big\} \ \text{form a partition of } \Omega. \]
These in turn form another partition of $\Omega$, given by
\[ \Big\{ \Big[\bigcap_{j=0}^n\big(E^j_{i_j}\cap\{\tau_j = t^j_{k_j}\}\big)\Big]\cap\{\tau_{n+1} = t^{n+1}_{k_{n+1}}\},\ (i_j, k_j)_{0\le j\le n}\in\mathbb N^{2(n+1)},\ k_{n+1}\in\mathbb N \Big\}. \tag{9.2} \]

Denote by $\mathcal I$ the family of all finite sequences of indices $I := (i_j, k_j)_{0\le j\le n}$, for some $n$, such that $0 = t^0_{k_0} < \cdots < t^n_{k_n} < \infty$. Then $\mathcal I$ is countable. For each $I\in\mathcal I$, denote by $|I|$ the corresponding $n$, and define
\[ E_I := \Big[\bigcap_{j=0}^{|I|}\big(E^j_{i_j}\cap\{\tau_j = t^j_{k_j}\}\big)\Big]\cap\Big(\{\tau_{|I|+1} > \tau_{|I|}\}\cup\{\tau_{|I|+1} = \tau_{|I|} = \infty\}\Big), \]
\[ \tilde\tau := \sum_{I\in\mathcal I}\tau_{|I|+1}1_{E_I}, \quad\text{and}\quad a_I := \sum_{j=0}^{|I|-1} a^j_{i_j}1_{[t^j_{k_j},\, t^{j+1}_{k_{j+1}})} + a^{|I|}_{i_{|I|}}1_{[t^{|I|}_{k_{|I|}},\, \infty)}. \]

It is clear that $E_I$ is $\mathcal F_\tau$-measurable. Then, in view of the concatenation property of $\mathcal A_0$, $a_I\in\mathcal A_0$. In light of (9.2), we see that the $E_I$, $I\in\mathcal I$, are disjoint. Moreover, since $\tau_n = \infty$ for $n$ large enough, we know that $\{E_I, I\in\mathcal I\}$ form a partition of $\Omega$. Then $\tilde\tau$ is an $\mathbb F$-stopping time and either $\tilde\tau > \tau$ or $\tilde\tau = \tau = \infty$. We now show that
\[ a_t = \sum_{I\in\mathcal I} a_I(t)1_{E_I} \quad\text{for all } t < \tilde\tau. \tag{9.3} \]
In fact, for each $I = (i_j, k_j)_{0\le j\le n}$, $\omega\in E_I$, and $t < \tilde\tau(\omega)$, we have $\tau_j(\omega) = t^j_{k_j}\le t$ for $j\le n$ and $\tau_{n+1}(\omega) = \tilde\tau(\omega) > t$. Let $j_0 = j_0(t,\omega)\le n$ be such that $\tau_{j_0}(\omega)\le t < \tau_{j_0+1}(\omega)$. Then $1_{[\tau_{j_0}(\omega), \tau_{j_0+1}(\omega))}(t) = 1$ and $1_{[\tau_j(\omega), \tau_{j+1}(\omega))}(t) = 0$ for $j\ne j_0$, and thus
\[ a_t(\omega) = \sum_{j=0}^\infty\sum_{i=1}^\infty a^j_i(t,\omega)1_{E^j_i}(\omega)1_{[\tau_j(\omega), \tau_{j+1}(\omega))}(t) = \sum_{i=1}^\infty a^{j_0}_i(t,\omega)1_{E^{j_0}_i}(\omega) = a^{j_0}_{i_{j_0}}(t,\omega), \]
where the last equality is due to the fact that $\omega\in E_I\subset E^{j_0}_{i_{j_0}}$ and that $\{E^{j_0}_i, i\ge 1\}$ is a partition of $\Omega$. On the other hand, by the definition of $a_I$, it is also straightforward to check that $a_I(t,\omega) = a^{j_0}_{i_{j_0}}(t,\omega)$. This proves (9.3). Now, since $\mathcal I$ is countable, by enumerating the elements of $\mathcal I$ we prove the lemma. Finally, we should point out that, if $\tau = \tau_n$, then we can choose $\tilde\tau = \tau_{n+1}$.

References

[1] Barlow, M.T. (1982) One-dimensional stochastic differential equations with no strong solution, Journal of the London Mathematical Society, 26 (2), 335-347. MR0675177

[2] Carr, P. and Lee, R. (2010) Hedging Variance Options on Continuous Semimartingales, Finance and Stochastics, 14 (2), 179-207. MR2607762

[3] Cheridito, P., Soner, H.M., Touzi, N. and Victoir, N. (2007) Second order BSDE's and fully nonlinear PDE's, Communications on Pure and Applied Mathematics, 60 (7), 1081-1110. MR2319056

[4] Dellacherie, C. and Meyer P-A. (1980), Probabilités et potentiel, Chapters V-VIII, Hermann. MR0488194

[5] Denis, L. and Martini, C. (2006) A Theoretical Framework for the Pricing of Contingent Claims in the Presence of Model Uncertainty, Annals of Applied Probability, 16 (2), 827-852. MR2244434

[6] Denis, L., Hu, M. and Peng, S. (2011) Function Spaces and Capacity Related to a Sublinear Expectation: Application to G-Brownian Motion Paths, Potential Analysis, 34 (2), 139-161. MR2754968

[7] El Karoui, N. and Quenez, M.-C. (1995) Dynamic programming and pricing of contingent claims in an incomplete market, SIAM J. Control Optim., 33 (1), 29-66. MR1311659

[8] Fernholz, D. and Karatzas, I. (2010) On optimal arbitrage, Annals of Applied Probability, 20 (4), 1179-1204. MR2676936

[9] Fleming, W.H. and Soner, H.M., Controlled Markov processes and viscosity solutions. Applica- tions of Mathematics (New York), 25. Springer-Verlag, New York, 1993. MR1199811

[10] Karandikar, R. (1995) On pathwise stochastic integration, Stochastic Processes and Their Applications, 57, 11-18. MR1327950

[11] Karatzas, I. and Shreve, S. (1991) Brownian Motion and Stochastic Calculus, 2nd Edition, Springer. MR1121940

[12] Neveu, J. (1975) Discrete Parameter Martingales. North Holland Publishing Company. MR0402915

[13] Peng, S. (2007) G-Brownian motion and dynamic risk measure under volatility uncertainty, G-expectation, G-Brownian motion and related stochastic calculus of Itô type. Stochastic Analysis and Applications, 541-567, Abel Symp., 2, Springer, Berlin, 2007. MR2397805

[14] Soner, H.M. and Touzi, N. (2002) Dynamic programming for stochastic target problems and geometric flows, Journal of the European Mathematical Society, 4 (3), 201-236. MR1924400

[15] Soner, H.M., Touzi, N. and Zhang, J. (2011) Martingale representation theorem for G-expectation, Stochastic Processes and their Applications, 121, 265-287. MR2746175

[16] Soner, H.M., Touzi, N. and Zhang, J. (2009) Dual Formulation of Second Order Target Problems, arXiv:1003.6050v1 [math.PR]

[17] Soner, H.M., Touzi, N. and Zhang, J. (2011) Wellposedness of second order backward SDEs, Probability Theory and Related Fields, to appear. MR2746175

[18] Stroock, D.W. and Varadhan, S.R.S. (1979), Multidimensional Diffusion Processes, Springer- Verlag, Berlin, Heidelberg, New York. MR0532498
