
Randomness characterization through Bayesian model selection

Rafael Díaz Hernández Rojas

Sapienza University of Rome

Isaac Pérez Castillo (IF-UNAM); Jorge Hirsch, Alfred U'Ren, Aldo Solís, Alí Angulo (ICN-UNAM); Matteo Marsili (ICTP, Italy)

AISIS, UNAM. October 2019


How to tell if a number sequence is random?

Dynamical-systems mappings, spin systems, correlated photons, particle decay:
ŝ = HHTTTHTHT...THTT
ŝ = LLRRRLRLR...RLRR
ŝ = 110001010...0100


Uses: Monte Carlo methods, cryptography, probabilistic algorithms.


(Maximally) random sequences


ŝ = HHTTTHTHT...THTT → ŝ = 01001101...0010

What if the coin is biased?


H[X] ~ measure of randomness: H[X] = − Σ_x p_x log₂ p_x

0 ≤ H[X] ≤ log2 |X|

H_max = log₂|X| ⟺ p_x = 1/|X|

(Plot: the binary entropy H as a function of p₀, maximal at p₀ = 0.5.)

H[X] = −p₀ log₂ p₀ − (1 − p₀) log₂(1 − p₀)
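As a quick numerical sketch of the entropy formula above (my own illustration, not part of the original slides; the function name is mine):

```python
import math

def binary_entropy(p0: float) -> float:
    """Shannon entropy (in bits) of a coin with P(0) = p0."""
    if p0 in (0.0, 1.0):
        return 0.0  # a deterministic source carries no entropy
    p1 = 1.0 - p0
    return -p0 * math.log2(p0) - p1 * math.log2(p1)

# Entropy is maximal (1 bit) only for a fair coin; a biased coin loses entropy.
print(binary_entropy(0.5))            # 1.0
print(round(binary_entropy(0.9), 3))  # 0.469
```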


Pragmatic approach: NIST battery of tests

ŝ = 01001101...0010 ⟹
- Same frequency of '0' and '1' (k₀ ≈ k₁)
- Longest string of consecutive 0's
- Fourier transform ~ white noise
- ...

Each property is analysed as a hypothesis test ⟹ obtain a p-value
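As a sketch of how one such test produces a p-value, here is the monobit frequency test in the style of NIST SP 800-22 (test 1); this is an illustrative simplification, not the full suite:

```python
import math

def monobit_pvalue(bits: str) -> float:
    """Monobit frequency test (NIST SP 800-22 style): the normalized
    excess of '1's over '0's is compared with a half-normal law."""
    n = len(bits)
    s_n = sum(1 if b == "1" else -1 for b in bits)  # +1 for '1', -1 for '0'
    s_obs = abs(s_n) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

balanced = "01" * 500            # k0 = k1: p-value = 1, cannot reject
biased   = "1" * 900 + "0" * 100 # strongly biased: p-value ~ 0, reject
print(monobit_pvalue(balanced))
print(monobit_pvalue(biased))
```

Note how the balanced-but-regular sequence "0101...01" sails through this particular test, which is exactly the "properties ⇏ randomness" caveat.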


If "random" ⟹ the properties examined with the tests hold. But properties ⇏ randomness. A frequentist approach based on p-values:

R. L. Wasserstein and N. A. Lazar, "The ASA's statement on p-values: context, process, and purpose", The American Statistician, 129–133 (2016)

M. Baker, “Statisticians issue warning on p-values”, Nature 531, 151 (2016)

Is π random?

NIST Random Number Test Suite for 10⁶ digits of π

(Plot of p-values against the significance level for the NIST tests: MonobitFrequencyTest, BlockFrequencyTest, RunsTest, LongestRunsOnes10000, BinaryMatrixRankTest, SpectralTest, NonOverlappingTemplateMatching, OverlappingTemplateMatching, MaurersUniversalStatisticTest, LinearComplexityTest, SerialTest, ApproximateEntropyTest, CumulativeSumsTest, RandomExcursionsTest, RandomExcursionsVariantTest, CumulativeSumsTestReverse, LempelZivCompressionTest.)

(a) Binary representation of the first 302,500 digits of π. (b) Results of the 15 NIST tests.

π = 4 Σ_{n=0}^{∞} (−1)ⁿ/(2n + 1),   π = 2 · (2/√2) · (2/√(2 + √2)) · (2/√(2 + √(2 + √2))) · ···

Randomness as “incompressibility”


Algorithmic Information Theory (AIT): ŝ is random iff the "shortest" algorithm to generate it is print(ŝ). AIT (Chaitin, Kolmogorov, Solomonoff) is a mathematically formal theory that identifies (computationally) random ~ incompressible. There is NO general algorithm capable of assessing whether an arbitrary sequence is random... but there is Borel's Normality criterion.

C. S. Calude, Information and randomness: an algorithmic perspective, 2nd Edition (Springer, 2010)



Primer on statistical inference for bit sequences

|ŝ| = M bits, with k₀ and k₁ being the frequencies of '0's and '1's; k₀ + k₁ = M.

M : p₀ = θ; p₁ = 1 − θ ⟹ P(ŝ|θ, M) = θ^{k₀} (1 − θ)^{k₁}.

Maximization of P(ŝ|θ, M) ⟹ θ* = k₀/M.
Fair "coin" (RNG) ⟹ θ* = 0.5.
(Plot: log₁₀ P(ŝ|θ, M) vs. θ for M = 1000, with k₀ = 400 (θ* = 0.4) and k₀ = 900 (θ* = 0.9).)
What happens if θ* ≈ 0.5? ... p-values


Randomness characterization through Bayesian model selection 8/16 Model selection as hypothesis test

A model, M, defines a family of probability distributions, including its dependence on parameters, P(ŝ|θ, M), and their distribution P(θ|M).

Once a model M is chosen, the usual question is: How well does it describe the set of observations ŝ? → P(ŝ|M, θ) or P(ŝ|M)
The right question is: Given the observations ŝ, how likely is it that M is the true model? → P(M, θ|ŝ) or P(M|ŝ)

Bayes' theorem: P(M|ŝ) = P(ŝ|M) P(M) / P(ŝ) = P(ŝ|M) P(M) / Σᵢ P(ŝ|Mᵢ) P(Mᵢ)
A recipe for how to update our (un)certainty about a model — a hypothesis — given some data.

{M_α}_{α=1}^N —(Bayes, ŝ)→ {P(M_α|ŝ)}_{α=1}^N ⟹ M* = argmax_{M_α} P(M_α|ŝ)
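A minimal sketch of this model-selection recipe (names are illustrative; a uniform prior over models is assumed when none is given):

```python
def model_posteriors(evidences, priors=None):
    """P(M_i|s) = P(s|M_i) P(M_i) / sum_j P(s|M_j) P(M_j)."""
    if priors is None:
        priors = [1.0 / len(evidences)] * len(evidences)  # uniform prior
    joint = [e * p for e, p in zip(evidences, priors)]
    z = sum(joint)  # the denominator P(s)
    return [j / z for j in joint]

# Two hypothetical models with evidences 0.03 and 0.01, uniform prior:
post = model_posteriors([0.03, 0.01])
print(post)                    # approximately [0.75, 0.25]
print(post.index(max(post)))   # index of M* = argmax
```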

Binary models: Likelihoods and inference


ŝ = 0100110101...1110100101, |ŝ| = M bits, with k₀ and k₁ the frequencies of '0's and '1's; k₀ + k₁ = M.
M₀ : p₀ = p₁ = 1/2 ⟹ P(ŝ|θ, M₀) = 1/2^M ;
M₁ : p₀ = θ; p₁ = 1 − θ ⟹ P(ŝ|θ, M₁) = θ^{k₀} (1 − θ)^{k₁}.

P(θ|M₀) = δ(θ − 1/2);   P(θ|M₁) = P_Jeff(θ) = Γ(1) / [Γ(1/2)² √(θ(1 − θ))]

Likelihoods of the models: P(ŝ|M) = ∫ dθ P(θ|M) P(ŝ|θ, M)

P(ŝ|M₀) = ∫ dθ δ(θ − 1/2) P(ŝ|θ, M₀) = 1/2^M
P(ŝ|M₁) = ∫₀¹ dθ [Γ(1)/Γ(1/2)²] θ^{k₀ − 1/2} (1 − θ)^{k₁ − 1/2} = Γ(1) Γ(k₀ + 1/2) Γ(k₁ + 1/2) / [Γ(1/2)² Γ(M + 1)]
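These closed-form evidences are easy to evaluate in log space with `math.lgamma`; a sketch (function names are mine; Γ(1) = 1 is dropped):

```python
import math

def log_evidence_M0(M: int) -> float:
    """ln P(s|M0) = -M ln 2 (fair-coin model, no free parameters)."""
    return -M * math.log(2.0)

def log_evidence_M1(k0: int, k1: int) -> float:
    """ln P(s|M1) = ln[ Gamma(k0+1/2) Gamma(k1+1/2) / (Gamma(1/2)^2 Gamma(M+1)) ]."""
    return (math.lgamma(k0 + 0.5) + math.lgamma(k1 + 0.5)
            - 2.0 * math.lgamma(0.5) - math.lgamma(k0 + k1 + 1))

def log10_bayes_factor(k0: int, k1: int) -> float:
    """log10 [P(s|M0) / P(s|M1)]: positive favours the fair-coin model."""
    return (log_evidence_M0(k0 + k1) - log_evidence_M1(k0, k1)) / math.log(10.0)

print(round(log10_bayes_factor(500, 500), 2))  # balanced: favours M0 (positive)
print(round(log10_bayes_factor(900, 100), 2))  # biased: strongly favours M1 (negative)
```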

Binary models: phase diagrams


Posterior: P(M_α|ŝ) = P(ŝ|M_α) P₀(M_α) / P₀(ŝ) ∝ P(ŝ|M_α) ⟹ compare P(ŝ|M₀) against P(ŝ|M₁).

(Phase diagram in the (M, γ₀) plane: the region where P(M₀|ŝ)/P(M₁|ŝ) > 1 — i.e. {{0,1}} is selected — the region allowed by Borel Normality, and the regions where {{0},{1}} is selected.)

Ξ = {0,1} → 2 partitions ⟺ 2 models: M₀ ↔ {{0,1}}; M₁ ↔ {{0},{1}}

Randomness characterization through Bayesian model selection 10/16 β = 1, sˆ =  110100101011...010110 “read” β bits  sˆ = 110 100101011...10110 −→ β = 2, sˆ = 310223...112 simultaneously  β β = 3s ˆ = 6453...15

β = 2 =⇒ sˆreg = 11111...111.

Partitions as a tool to identify regularities

What about the sequence ŝ_reg = 0101010101...010101? Is it random?


"Read" β bits simultaneously: ŝ = 110 100 101 011 ... 010 110 →
β = 1 : ŝ₁ = 110100101011...010110
β = 2 : ŝ₂ = 310223...112
β = 3 : ŝ₃ = 6453...15


β = 2 =⇒ sˆreg = 11111...111.
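Re-reading a sequence β bits at a time is straightforward to implement; a sketch (helper name is mine) that reproduces the examples above:

```python
def read_beta_blocks(s: str, beta: int) -> list[int]:
    """Re-read a bit string beta bits at a time, mapping each block to the
    integer in {0, ..., 2**beta - 1} that it encodes."""
    usable = len(s) - len(s) % beta  # drop a trailing partial block
    return [int(s[i:i + beta], 2) for i in range(0, usable, beta)]

print(read_beta_blocks("110100101011", 2))  # [3, 1, 0, 2, 2, 3] -> "310223"
print(read_beta_blocks("110100101011", 3))  # [6, 4, 5, 3]       -> "6453"

s_reg = "01" * 8                            # 0101...01
print(read_beta_blocks(s_reg, 2))           # all 1's: the hidden regularity
```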


Partitions of a set Ξ_β = {0, ..., 2^β − 1}

A partition, α_K, is a grouping of the elements of Ξ_β into K disjoint, non-empty subsets, {ω_K(r)}_{r=1}^K ⟹ α_K = ∪_{r=1}^K ω_K(r).


e.g., with β = 2 there are six partitions of Ξ₂ = {0,1,2,3} into K = 3 subsets.

α₃⁽¹⁾ = {{0}, {1}, {2,3}};  α₃⁽²⁾ = {{0}, {2}, {1,3}};  α₃⁽³⁾ = {{0}, {3}, {1,2}}
α₃⁽⁴⁾ = {{1}, {2}, {0,3}};  α₃⁽⁵⁾ = {{1}, {3}, {0,2}};  α₃⁽⁶⁾ = {{2}, {3}, {0,1}}
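These partitions can be enumerated recursively via the standard recurrence S(n, k) = S(n−1, k−1) + k·S(n−1, k); the helper below is my own sketch:

```python
def partitions_into_k(elements, k):
    """Yield every partition of `elements` (a list) into k non-empty,
    unordered blocks."""
    if len(elements) < k or k < 1:
        return
    if k == 1:
        yield [list(elements)]
        return
    first, rest = elements[0], elements[1:]
    # Case 1: `first` forms its own block.
    for p in partitions_into_k(rest, k - 1):
        yield [[first]] + p
    # Case 2: `first` joins one of the k blocks of a partition of the rest.
    for p in partitions_into_k(rest, k):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

parts = list(partitions_into_k([0, 1, 2, 3], 3))
print(len(parts))  # 6, matching the six partitions listed above
```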


One partition ⟺ one model: partitions represent different ways to assign biases to the strings.

M_{α_K^{(l)}} : p_j = θ_r / |ω_K^{(l)}(r)| ,  ∀ j ∈ ω_K^{(l)}(r);  r = 1, ..., K.

α₃⁽³⁾ = {{0}, {3}, {1,2}} ⟹ M_{α₃⁽³⁾} : p₀ = θ₀ ≠ p₃ = θ₁ ≠ p₁ = p₂ = θ₂/2.


α₁ is unique ⟹ M_{α₁} ≡ M_sym : p_j = 1/2^β is the only model describing a maximally random process.

Phase diagrams: β = 2

We can perform the model selection using only information about the frequencies {γᵢ = kᵢ/M}, i = 0, ..., 2^β − 1, and the sequence length, M.

(Phase diagrams in the (M, γ₀) plane showing the likeliest partition model in each region, e.g. {{0,1,2,3}}, {{0},{1,2,3}}, {{0},{1,2},{3}}, {{0},{1},{2},{3}}.)

(a) γ1 = γ2 = 1/4. (b) γ1 = 1/6; γ2 = 1/4.

Comparing with Borel's Normality bounds

A sequence ŝ is Borel-Normal if: |γ_j^{(β)} − 1/2^β| < √(log₂ M / M), for every j and every β ≤ log₂ log₂ M.

(Phase diagrams in the (M, γ₀) plane for γ₁ = γ₂ = 1/4; γ₁ = γ₂ = 1/5; and γ₁ = 1/6, γ₂ = 1/4.)

Regions shown: where M_sym is the likeliest; the region allowed by the Borel Normality test; and approximated bounds obtained by an expansion of log[P(ŝ|M_sym)/P(ŝ|M_{α₂})].

Comparing with NIST tests

Used an RNG (Mathematica) to generate 100 bit sequences of different lengths, with p₀ = b ∈ (0.48, 0.52).

(Panels as functions of the bias b and the sequence length M (×10³ bits): (a) average number of NIST tests passed; (b) fraction of times M_sym was selected using β = {1, 2, 3}.)

Analysing an experimentally generated sequence

Experimental setup to produce correlated photons.

A. Solis et al., "How random are random numbers generated using photons?", Physica Scripta 90, 074034 (2015)


M = 4 × 10⁹ bits!  β_max ≲ 5

Distribution of ∆t between the simultaneous detections. It is used to generate strings of 0 and 1.


β | P(M_sym|ŝ) | log₁₀ BF_sym,α
1 | 0.999965 | 4.45
2 | 0.999562 | ≥ 3.72
3 | 0.968353 | ≥ 2.01
4 | 0.46718 | ≥ 3.46

For β = 4, only models associated to partitions into K = 2 subsets were considered; there are 32,767 of them ⟹ P(M) = 3 × 10⁻⁵.


The device functions as a random source.

Thank you!
