Diffusion Approximation for Bayesian Markov Chains

Michael Duff
[email protected]
Gatsby Computational Neuroscience Unit, University College London, WC1N 3AR, England

Abstract

Given a Markov chain with uncertain transition probabilities modelled in a Bayesian way, we investigate a technique for analytically approximating the mean transition frequency counts over a finite horizon. Conventional techniques for addressing this problem either require the enumeration of a set of generalized process "hyperstates" whose cardinality grows exponentially with the terminal horizon, or are limited to the two-state case and expressed in terms of hypergeometric series. Our approach makes use of a diffusion approximation technique for modelling the evolution of information state components of the hyperstate process. Interest in this problem stems from a consideration of the policy evaluation step of policy iteration algorithms applied to Markov decision processes with uncertain transition probabilities.

... state→action→state transitions of each kind with their associated expected immediate rewards. Questions about value in this case reduce to questions about the distribution of transition frequency counts. These considerations lead to the focus of this paper: the estimation of the mean transition-frequency counts associated with a Markov chain (an MDP governed by a fixed policy), with unknown transition probabilities (whose uncertainty distributions are updated in a Bayesian way), over a finite time-horizon.

This is a challenging problem in itself with a long history,¹ and though in this paper we do not explicitly pursue the more ambitious goal of developing full-blown policy-iteration methods for computing optimal value functions and policies for uncertain MDPs, one can envision using the approximation techniques described here as a basis for constructing effective static stochastic policies or receding-horizon approaches that repeatedly compute such policies.
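As a point of reference for the quantity being approximated, the following Python sketch estimates the mean transition-frequency counts by plain Monte Carlo, under the assumption of independent Dirichlet priors on the rows of the transition matrix (the standard conjugate model for this setting; the function name and parameter choices are illustrative, not taken from the paper). Because the Dirichlet-multinomial model is exchangeable, sampling one transition matrix from the prior and simulating it forward yields the same mean counts as running the Bayesian posterior-predictive process that updates the Dirichlet counts after every observed transition.

import numpy as np

def mean_transition_counts_mc(alpha, start, horizon, n_samples=10000, seed=0):
    """Monte Carlo estimate of E[N_ij], the expected number of i->j
    transitions over `horizon` steps, for a Markov chain whose unknown
    transition matrix has an independent Dirichlet(alpha[i]) prior on
    each row i.  `alpha` is an (S, S) array of pseudo-counts."""
    rng = np.random.default_rng(seed)
    S = alpha.shape[0]
    mean_counts = np.zeros((S, S))
    for _ in range(n_samples):
        # Draw one transition matrix from the prior, row by row.
        P = np.vstack([rng.dirichlet(alpha[i]) for i in range(S)])
        counts = np.zeros((S, S))
        s = start
        for _ in range(horizon):
            s_next = rng.choice(S, p=P[s])
            counts[s, s_next] += 1
            s = s_next
        mean_counts += counts
    return mean_counts / n_samples

# Two-state example with a uniform Dirichlet prior on each row.
alpha = np.ones((2, 2))
print(mean_transition_counts_mc(alpha, start=0, horizon=10))

An exact alternative would enumerate the reachable hyperstates (the current state paired with the matrix of accumulated Dirichlet counts); since each step can branch into as many successors as there are states, the number of distinct hyperstates grows exponentially with the horizon, which is exactly the blow-up the diffusion approximation is intended to avoid.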