
Machine learning for Monte-Carlo integration

Valentin Hirschi, ETHZ

ACAT, 11th January 2019

Plan

• Types of integrals appearing in HEP
• Standard phase-space integration techniques
• Current numerical methods for loop integrals
• Machine learning assisted MC-integration
• Prospects


Cross-section computation

[Figure: p p collision; incoming partons carry momenta x₁E and x₂E, probed at scale µF; long-distance physics is factorised into the PDFs]

σ = Σ_{a,b} ∫ dx₁ dx₂ dΦ_FS f_a(x₁, µF) f_b(x₂, µF) × σ̂_{ab→X}(ŝ, µF, µR)

dΦ_FS: phase-space measure;  f_a, f_b: parton density functions;  σ̂: parton-level cross section

Dim[Φ(n)] ∼ 3n, and the integrand is peaked.

Phase-space integrals

∫ d⁴p₁ … d⁴p_N |M({pᵢ})|² δ(p₁²) … δ(p_N²) δ⁴(p₁ + … + p_N)

• Need a parametrisation to solve the deltas, e.g. N = 2:
  s = (p₁ + p₂)²,  t = (p₁ − p₃)²,  u = (p₁ − p₄)²,  with s + t + u = Σᵢ Mᵢ²
• Neural networks may help to cope with the high PS dimension (d = 3N − 3) and the peak structure of the PDFs & matrix elements.
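The N = 2 parametrisation above can be sketched in a few lines; this is a minimal illustration (the helper name and the numerical inputs are hypothetical, and the external particles are taken massless):

```python
import numpy as np

# Minimal sketch: a massless 2 -> 2 phase-space point in the CM frame,
# parametrised by (cos(theta), phi), and the Mandelstam invariants s, t, u.
def two_body_point(sqrt_s, cos_theta, phi):
    E = sqrt_s / 2.0
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    p1 = np.array([E, 0.0, 0.0,  E])                   # incoming momenta
    p2 = np.array([E, 0.0, 0.0, -E])
    p3 = np.array([E, E * sin_theta * np.cos(phi),     # outgoing momentum
                      E * sin_theta * np.sin(phi),
                      E * cos_theta])
    p4 = p1 + p2 - p3                                  # momentum conservation
    sq = lambda p: p[0]**2 - p[1:] @ p[1:]             # Minkowski square
    return sq(p1 + p2), sq(p1 - p3), sq(p1 - p4)       # s, t, u

s, t, u = two_body_point(100.0, 0.3, 0.7)
print(s, t, u)   # s + t + u = 0 for massless external momenta
```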

• Timings (per evaluation):

  Process        Tree-level, MG5aMC    One-loop, MadLoop     Two-loop, VVamp
                 [arXiv:1405.0301]     [arXiv:1103.0621]     [arXiv:1503.08835]
  dd̄ → ZZ        7 µs × 10²            0.6 ms × 10⁴          O(1 min)
  dd̄ → ZZg       35 µs × 10³           38 ms                 N/A
  dd̄ → ZZgg      220 µs × 10⁴          1200 ms               N/A

Loop integrals

∫ dᵈk₁ … dᵈk_{N_l}  N({kᵢ}) / ∏ᵢ₌₁^{N_d} Dᵢ

• Dimensionality fixed by the number of loops, not by the external kinematics
• Features mass-threshold singularities and IR+UV divergences
• Polynomial integrand, fast evaluation time for all topologies
• Advanced analytic integration techniques developed, but purely numerical methods are now (re-)surfacing.
• Machine learning can help cope with the large variance of the resulting integrands.

Plan

• Types of integrals appearing in HEP
• Standard phase-space integration techniques
• Current numerical methods for loop integrals
• Machine learning assisted MC-integration
• Prospects

Importance sampling

[Figure: MC estimates of the same integral with flat sampling (left) and importance sampling (right)]

I = ∫₀¹ dx cos(πx/2) = 2/π ≈ 0.637

Flat sampling:   I_N = 0.637 ± 0.307/√N

Importance sampling with p(x) ∝ (1 − cx²):

I = ∫₀¹ dx (1 − cx²) cos(πx/2)/(1 − cx²) = ∫₀¹ dξ cos(πx[ξ]/2)/(1 − cx[ξ]²) ≃ const

                 I_N = 0.637 ± 0.031/√N

The phase-space parametrisation is key to taming the variance.
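The comparison above can be reproduced directly; this is a sketch only, and c = 0.9 is an illustrative choice (the slide does not fix the value of c):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
f = lambda x: np.cos(0.5 * np.pi * x)        # I = ∫₀¹ f dx = 2/π ≈ 0.637

# flat sampling: x uniform in [0, 1]
w_crude = f(rng.random(N))

# importance sampling with p(x) ∝ 1 − c·x² (c = 0.9 is an illustrative choice)
c = 0.9
p = lambda x: (1.0 - c * x**2) / (1.0 - c / 3.0)     # normalised on [0, 1]
grid = np.linspace(0.0, 1.0, 10_001)
cdf = (grid - c * grid**3 / 3.0) / (1.0 - c / 3.0)   # analytic CDF of p
x = np.interp(rng.random(N), cdf, grid)              # numerical inverse CDF
w_is = f(x) / p(x)

print(w_crude.mean(), w_crude.std())   # ≈ 0.637 with spread ≈ 0.31
print(w_is.mean(),    w_is.std())      # ≈ 0.637 with a much smaller spread
```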

Importance sampling

∫ dq² / |q² − M² + iMΓ|²   is flattened by   ξ = arctan((q² − M²)/(MΓ))

Why Importance Sampling?

Probability of using that point p(x)

The change of variables ensures that the integrand is evaluated where it is largest.
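The arctan mapping above can be tried on a toy Breit-Wigner propagator squared; M, Γ and the q² range below are illustrative numbers, not values from the talk:

```python
import numpy as np

M, Gamma = 91.0, 2.5                                 # illustrative mass and width
q2min, q2max = 50.0**2, 130.0**2
bw = lambda q2: 1.0 / ((q2 - M**2)**2 + (M * Gamma)**2)

xi_min = np.arctan((q2min - M**2) / (M * Gamma))
xi_max = np.arctan((q2max - M**2) / (M * Gamma))
exact = (xi_max - xi_min) / (M * Gamma)              # analytic value of ∫ bw dq²

rng = np.random.default_rng(0)
N = 100_000

# flat sampling in q²: almost all points miss the narrow peak
q2 = q2min + rng.random(N) * (q2max - q2min)
w_flat = bw(q2) * (q2max - q2min)

# flat sampling in ξ = arctan((q² − M²)/(MΓ)): the Jacobian cancels the peak
w_bw = np.full(N, (xi_max - xi_min) / (M * Gamma))   # integrand is exactly flat

print(w_flat.mean(), w_flat.std())                   # right on average, huge spread
print(w_bw.mean(), w_bw.std())                       # exact, zero variance
```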

Importance sampling

Adaptive Monte-Carlo

• Create a piece-wise approximation of the function on the fly

Essence of the implementation

1. Set up bins such that each of them contains the same contribution.
   ➡ Many bins where the integrand is large
2. Use the resulting approximation for importance sampling.
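The two steps above can be sketched in one dimension; this is a minimal sketch with a simplified (undamped) rebinning rule, not the smoothed update of the real VEGAS, and `vegas_1d` is a hypothetical helper name:

```python
import numpy as np

def vegas_1d(f, n_bins=50, n_iter=5, n_samples=2000, seed=0):
    """Minimal VEGAS-style adaptive importance sampler on [0, 1]."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)       # start from a uniform grid
    for _ in range(n_iter):
        b = rng.integers(n_bins, size=n_samples)    # pick a bin uniformly
        lo, hi = edges[b], edges[b + 1]
        x = lo + rng.random(n_samples) * (hi - lo)  # uniform within the bin
        p = 1.0 / (n_bins * (hi - lo))              # piece-wise constant density
        w = f(x) / p
        est, err = w.mean(), w.std() / np.sqrt(n_samples)
        # accumulate |contribution| per bin and rebuild the edges so that each
        # bin carries an equal share: many narrow bins where the integrand is large
        d = np.zeros(n_bins)
        np.add.at(d, b, np.abs(w))
        cum = np.concatenate([[0.0], np.cumsum(np.maximum(d, 1e-30))])
        edges = np.interp(np.linspace(0.0, cum[-1], n_bins + 1), cum, edges)
    return est, err

est, err = vegas_1d(lambda x: np.exp(-((x - 0.5) / 0.05)**2))
print(est, err)   # ≈ 0.0886 (= 0.05·√π), with a small statistical error
```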

VEGAS: more than one dimension

• VEGAS works only with 1 (few) dimension(s)
  ➡ Otherwise the number of “bins” scales like Nᵈ :(


Solution

• Factorisation ansatz: projection on the axes → p(x, y, z, …) = p(x)·p(y)·p(z)…


• Factorisation breaks down

➡ An additional change of variables is needed

VEGAS: Monte Carlo integration [MadWeight slides]

The choice of the phase-space parametrization has a strong impact on the efficiency of an adaptive MC integration:

Case 1: every peak is aligned along a single direction of the P-S parametrization
[Figure: adaptive grids in (y₁, y₂) capturing axis-aligned peaks]
→ the adaptive Monte-Carlo P-S integration is very efficient

Case 2: some peaks are not aligned along a single direction of the P-S parametrization
[Figure: adaptive grids in (y₁, y₂) failing to capture a diagonal peak]
→ the adaptive Monte-Carlo P-S integration converges slowly

Solution to the previous case: perform a change of variables in order to align the peaks along a single direction of the P-S parametrization, e.g. the rotation (y₁, y₂) → (y₁ + y₂, y₁ − y₂)
→ the adaptive Monte-Carlo P-S integration is very efficient again

MULTI-CHANNEL

It is often the case that no transformation can align all integrand peaks to the chosen axes.


Solution: combine different transformations (“channels”):

p(x) = Σᵢ₌₁ⁿ αᵢ pᵢ(x)   with   Σᵢ₌₁ⁿ αᵢ = 1

with each pᵢ(x) taking care of one “peak” at a time

[Figure: two channel densities p₁(x) and p₂(x), each mapping one of the peaks]

Then,

I = ∫ f(x) dx = ∫ (f(x)/p(x)) p(x) dx = Σᵢ₌₁ⁿ αᵢ ∫ (f(x)/p(x)) pᵢ(x) dx
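The decomposition above can be sketched on a two-peak toy integrand; the Gaussian channels and all numbers below are illustrative choices, not from the talk:

```python
import numpy as np

# Multi-channel sampling: draw each point from channel i with probability
# alpha_i, but weight every point by the FULL density p(x) = Σ alpha_i p_i(x),
# so each channel only needs to map one peak.
rng = np.random.default_rng(0)
n = 200_000
f = lambda x: np.exp(-((x - 0.25) / 0.05)**2) + np.exp(-((x - 0.75) / 0.05)**2)

mus, sig = [0.25, 0.75], 0.1                 # one Gaussian channel per peak
alphas = np.array([0.5, 0.5])
gauss = lambda x, mu: np.exp(-0.5 * ((x - mu) / sig)**2) / (sig * np.sqrt(2 * np.pi))

i = rng.choice(2, size=n, p=alphas)          # pick a channel for each point
x = rng.normal(np.take(mus, i), sig)         # sample from the chosen channel
p = alphas[0] * gauss(x, mus[0]) + alphas[1] * gauss(x, mus[1])
w = f(x) / p

print(w.mean(), w.std() / np.sqrt(n))        # ≈ 0.1·√π ≈ 0.177
```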

Example on a 2 → 2 process: u ū → g g

[Feynman diagrams made by MadGraph5: the three QCD diagrams for u ū → g g]

diagram 1 ∝ 1/ŝ = 1/(p₁ + p₂)²
diagram 2 ∝ 1/t̂ = 1/(p₁ − p₃)²
diagram 3 ∝ 1/û = 1/(p₁ − p₄)²

Three very different pole structures contributing to the same matrix element.


Diagram-based multi-channels [MadEvent hep-ph/0208156]

∫ |M_tot|² = ∫ Σᵢ (|Mᵢ|² / Σⱼ |Mⱼ|²) |M_tot|² = Σᵢ ∫ (|Mᵢ|² / Σⱼ |Mⱼ|²) |M_tot|²

Key Idea
– Any single diagram is “easy” to integrate (pole structures / suitable integration variables known from the propagators)
– Divide the integration into pieces, based on diagrams
– All other peaks are taken care of by the denominator sum
– A similar idea is possible using the jacobians of the parametrisation. [RACOON hep-ph/9912261]

N integrals:
– Errors add in quadrature, so no extra cost
– “Weight” functions already calculated during the |M|² calculation
– Parallel in nature

Diagram-based multi-channels [MadEvent hep-ph/0208156]

Drawbacks
– Cannot account for the imposed cuts.
– For higher-order computations, it cannot reproduce the intricate shape of IR-subtracted real-emission matrix elements.
– Fails badly when computing interferences only.
– Necessitates access to the single squared topologies |Mⱼ|²
– The multi-channel+VEGAS variance reduction saturates quickly

Event generation

1. pick x
2. calculate f(x)
3. pick 0 < y < f_max: if y < f(x), accept the event, else reject it.

I = f_max × accepted / (total tries),   accepted / (total tries) = efficiency
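The hit-or-miss procedure above is a few lines in practice; the integrand and f_max below are illustrative choices:

```python
import numpy as np

# Hit-or-miss event generation: accept x when a uniform y in [0, f_max]
# falls below f(x); the acceptance rate is the unweighting efficiency.
rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)     # toy "differential cross section" on [0, 1]
f_max = 1.0                         # known (or estimated) maximum of f

n_tries = 200_000
x = rng.random(n_tries)
y = rng.random(n_tries) * f_max
events = x[y < f(x)]                # unweighted events

efficiency = events.size / n_tries
integral = f_max * efficiency       # unit-volume domain: I = f_max * efficiency
print(integral, efficiency)         # both ≈ 2/π ≈ 0.637
```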

From integration to generation

[Plot: dσ/dO vs O, as produced by the MC integrator (weighted points)]

Acceptance-Rejection ↓

[Plot: dσ/dO vs O, as produced by the event generator (unweighted events)]

This is possible only if f(x) < ∞ and has a definite sign.

Improving unweighting efficiency

By combining it with importance sampling:

1. pick x distributed as p(x)
2. calculate f(x) and p(x)
3. pick 0 < y < max(f/p): if y < f(x)/p(x), accept the event, else reject it.

Plan

• Types of integrals appearing in HEP
• Standard phase-space integration techniques
• Current numerical methods for loop integrals
• Machine learning assisted MC-integration
• Prospects

Numerical loop integration

∫ dᵈk₁ … dᵈk_{N_l}  N({kᵢ}) / ∏ᵢ₌₁^{N_d} (qᵢ² − mᵢ² + iε)

Using Feynman parameters and sector decomposition: [PySecDec arXiv:1703.09692] [FIESTA arXiv:1511.03614] (not discussed here)

Direct integration in momentum space: the naive approach is inapplicable even for finite integrals in d = 4:

∫₋∞^∞ dx i/(x² − m² + iε) = π   (for m² = 1)

Setting m² = 1 and integrating with NIntegrate at a finite ε is not reliable:

[Plot: NIntegrate result as a function of −log₁₀(ε), failing to settle at π]


Numerical loop integration in momentum space

An exact numerical representation can be obtained using analytic continuation and integration along a complex contour.

[Plot: complex x-plane, poles at x = ±1, with the deformed integration contour]

x̄(x) = x + iλ(x)

∫₋∞^∞ dx I[x]  →  ∫₋∞^∞ dx (dx̄/dx) I[x̄(x)]
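The contour trick can be checked on the slide's own example, ∫ dx i/(x² − 1 + iε) = π with m² = 1: deform x̄(x) = x + iλ(x) so that the contour avoids the poles at x = ±1 in the direction dictated by the +iε prescription, then integrate at ε = 0 along the deformed contour. The specific λ below is an illustrative choice, not the one used in the talk:

```python
import numpy as np

lam = 0.5
def contour(t):
    # deformation peaked at the poles t = ±1, oriented by the +iε prescription
    d = lam * t * np.exp(-(t**2 - 1.0)**2)
    dd = lam * np.exp(-(t**2 - 1.0)**2) * (1.0 - 4.0 * t**2 * (t**2 - 1.0))
    return t + 1j * d, 1.0 + 1j * dd                  # x̄(t) and dx̄/dt

# map t = tan(u) to cover the whole real line, then use a midpoint rule
edges = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
u = 0.5 * (edges[:-1] + edges[1:])
t = np.tan(u)
x, jac = contour(t)
integrand = 1j / (x**2 - 1.0) * jac / np.cos(u)**2    # includes dt/du = 1/cos²u
val = np.sum(integrand * np.diff(edges))

print(val)   # ≈ π + 0i, with no iε regulator needed
```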

Complex loop deformation

Quickly becomes very intricate:

∫ d⁴k 1/(k² (k − p₁)² (k − p₁ − p₂)²),   k̄µ(k) = kµ + iλµ(k)

[Figure: the singular surfaces k² = 0, (k − p₁)² = 0 and (k − p₁ − p₂)² = 0 in the (k⁰, kz) plane, for external momenta p⃗₁, p⃗₂]

Deformation required along a specific direction for each coloured surface.

Complex loop deformation

Valid deformation in 4·N_loop-dimensional Minkowski space [ Weinzierl & al., arxiv:1211.0509 ], applicable to arbitrary loop count

Double-box deformation

Complicated (non-C¹) deformation yields an integrand with large variance. Still, the integrands are typically fast, O(50 μs).

Complex loop deformation

• Rather imprecise results (slow convergence), but for impressive topologies! [ Weinzierl & al., arxiv:1211.0509 ]

• Note: the integrand is a product, so multi-channeling methods are rather inefficient.
• Integrand-level counterterms can be devised for divergent integrals: [ Weinzierl & al., arxiv:1112.3521 ] [ Anastasiou & al., arxiv:1812.03753 ]
• Recently, the focus shifted to an alternative formulation of direct integration in momentum space: Loop-Tree Duality.
• It consists in analytically integrating over the energies of the loop momenta using Cauchy’s theorem, resulting in an integration over a Cartesian product of 3D Euclidean spaces, with a simplified deformation.

Loop-Tree Duality (LTD)

Analytically integrate over the loop energies using Cauchy’s theorem. [ Catani & al., arxiv:0804.3170 ]

Cut propagators: δ(qᵢ²) Θ(qᵢ⁰)

I^LTD = I₁ − I₂ − I₃

[Figure: one-loop triangle with external momenta p₁, p₂, p₁ + p₂; the dual integrals I₁, I₂, I₃ cut the propagators k², (k − p₁)², (k − p₁ − p₂)², whose on-shell surfaces are shown in the (k⁰, kz) plane]

• Integration along 3D Euclidean thick lines
• Deformation only needed on part of the singular surfaces!

Loop-Tree Duality (LTD)

Promising results for one-loop finite integrals: [ Rodrigo & al., arxiv:1510.00187 ]

• Formal LTD developments beyond one loop: [ Bierenbaum & al., arxiv:1007.0213 ] [ Weinzierl & al., arxiv:1902.02135 ]
• First two-loop numerical result for a full 3-point amplitude (no deformation though): H @ 2 loops [ Rodrigo & al., arxiv:1901.09853 ]

Summary: integrals in HEP

Phase-space integrals:
• Relatively high dimensionality (10-20)
• Very variable integrand evaluation time (10 μs - 1 min)
• Prior knowledge of the integrand
  ➡ Possible multi-channeling and parametrisation optimisation
• Differential unweighted predictions are desirable (cuts).

Loop integrals:
• Lower dimensionality (< 10)
• Fast integrands (~100 μs)
• Large variance and little prior integrand knowledge
• Only the inclusive result is of interest

Plan

• Types of integrals appearing in HEP
• Standard phase-space integration techniques
• Current numerical methods for loop integrals
• Machine learning assisted MC-integration
• Prospects

NN regression of the integrand

First simple application: use a NN to create a reliable integrand fit:

[Plot: NN fit of a 9-dimensional camel function]

• Must be checked on real-life MEs.

• Observable cuts (theta functions) must be smoothed

• Many hyperparameters still need to be tuned for the particular case at hand.

[ Bendavid arxiv:1707.00028 ]

If NN inference is much faster than the integrand, then we are basically done: [ Eq. 2.102, arxiv:1405.0301 ]

V := original integrand

Ṽ_k := NN approximant at iteration k

NN integration replacing VEGAS

If the integrand is faster than NN inference, then all is not lost:

[ Klimek, Perelstein, arxiv:1810.11509 ]

A change of variables (s, t) → (cos²θ, φ) reduces the variance:  σ(I(cos²θ, φ)) << σ(I(s, t))

Consider a generative NN model effectively learning a change of variables. Contrary to VEGAS, it is not a piece-wise ansatz and needs no factorised approximation. Saturation of the variance reduction is much delayed. If perfectly trained, then V = 0 and a single evaluation yields the exact integral.
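A toy stand-in for this idea (not the NN of the paper): a one-parameter invertible map x = uᵃ of the unit interval, whose parameter is "trained" by minimising the variance of the reweighted integrand, which is exactly what the generative model does with a far more flexible map.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 3.0 * x**2            # integrand on [0, 1], exact integral 1
u = rng.random(50_000)              # latent uniform samples

def weights(a):
    x = u**a                        # candidate change of variables
    return f(x) * a * u**(a - 1.0)  # f(x(u)) times the Jacobian dx/du

# "training": pick the map parameter with the smallest weight variance
a_best = min(np.linspace(0.1, 2.0, 39), key=lambda a: weights(a).std())
w = weights(a_best)
print(a_best, w.mean(), w.std())    # at a = 1/3 the weights would be exactly flat
```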

“NNVegas” Results

9-dimensional camel function [ Bendavid arxiv:1707.00028 ]

e⁺e⁻ → qq̄g

[ Klimek, Perelstein, arxiv:1810.11509 ]

Plan

• Types of integrals appearing in HEP
• Standard phase-space integration techniques
• Current numerical methods for loop integrals
• Machine learning assisted MC-integration
• Prospects

Prospects

• The time is right for something like “NNVegas”
  ➡ Make it applicable by non-experts, automatically adjusting the hyperparameters for the case at hand.

• Explore different models (e.g. Boltzmann Machines).

• Explore the idea of feeding redundant inputs to the NN, thereby providing it with prior integrand knowledge.

• Study NN integration thoroughly by setting up an interface to a generic HEP simulation tool (e.g. MG5aMC)
