
On the Analysis of a Random Walk-Jump Chain with Tree-based Transitions, and its Applications to Faulty Dichotomous Search∗

Anis Yazidi† and B. John Oommen‡

Abstract

Random Walks (RWs) have been extensively studied for more than a century [1]. These walks have traditionally been on a line, and the generalizations to two and three dimensions have been achieved by extending the random steps to the corresponding neighboring positions in one or more of the dimensions. Among the most popular RWs on a line are the various models for birth and death processes, renewal processes and the gambler's ruin problem. All of these RWs operate "on a discretized line", and the walk is achieved by performing small steps to the current state's neighbor states. Indeed, it is this neighbor-step motion that renders their analyses tractable. When some of the transitions are to non-neighbor states, a formal analysis is, typically, impossible because the difference equations for the steady-state probabilities are not solvable. One endeavor on such an analysis is found in [2]. The problem is far more complex when the transitions of the walk follow an underlying tree-like structure. The analysis of RWs on a tree has received little attention, even though it is an important topic, since a tree is a counterpart space representation of a line whenever there is some ordering on the nodes of the line. Nevertheless, RWs on a tree entail moving to non-neighbor states in the space, which makes the analysis involved, and in many cases, impossible. In this paper, we consider the analysis of one such fascinating RW. We demonstrate that an analysis of the chain is feasible because we can invoke the phenomenon of "time reversibility". Apart from the analysis being interesting in itself from an analytical perspective, the RW on the tree that this paper models is a type of generalization of dichotomous search with faulty feedback about the direction of the search, rendering the real-life application of the model pertinent. To resolve this, we advocate the concept of "backtracking" transitions in order to efficiently explore the search space. Interestingly, it is precisely these "backtracking" transitions that naturally render the chain "time reversible". By doing this, we are able to bridge the gap between deterministic dichotomous search and its faulty version. The paper contains the analysis of the chain, reports some fascinating limiting properties, and also includes simulations that justify the analytic steady-state results.

∗The second author is grateful for the partial support provided by NSERC, the Natural Sciences and Engineering Research Council of Canada.
†This author can be contacted at: Dept. of ICT, Oslo and Akershus University College, Oslo, Norway. E-mail: [email protected].
‡Author's status: Chancellor's Professor; Fellow: IEEE and Fellow: IAPR. This author can be contacted at: School of Computer Science, Carleton University, Ottawa, Canada: K1S 5B6. The author is also an Adjunct Professor with the University of Agder, Grimstad, Norway. E-mail: [email protected].

Keywords: Time Reversibility, Controlled Random Walk, Random Walk with Jumps, Dichotomous Search, Learning Systems

1 Introduction

The theory of Random Walks (RWs) and its applications have gained an "exponential" amount of research interest since the early part of the last century.
From the recorded literature, one perceives that the pioneering treatment of a one-dimensional RW was due to Pearson in [3]. The RW is, usually, defined as a trajectory involving a series of successive random steps, which are, quite naturally, modeled using Markov Chains (MCs). MCs are probabilistic structures that possess the so-called "Markov property" – which, informally speaking, implies that the next "state" of the walk depends on the current state and not on the entire past states (or history). The latter property is also referred to as the "lack of memory" property, which imparts practical implications to the structure, since it permits the modeler to predict how the chain will behave in the immediate and distant future, and to thus quantify its behavior.

Applications of RWs: It would be no exaggeration to state that tens of thousands of papers have been written that either deal with the analysis of RWs or with their applications. Embarking on a comprehensive survey would thus be meaningless. In all brevity, we mention that RWs have been utilized in a myriad of applications stemming from areas as diverse as biology, computer science, economics and physics. For instance, concrete examples of these applications in biology are the epidemic models described in [4], and the Wright-Fisher and Moran models in [5]. RWs arise in the modeling and analysis of queuing systems [6], ruin problems [7], risk theory [8], and sequential analysis and learning theory, as demonstrated in [9]. In addition to the above-mentioned classical applications of RWs, recent applications include mobility models in mobile networks [10], collaborative recommendation systems [11], web search algorithms [12], and reliability theory for both software and hardware components [13] (pp. 83–111).

Classification of RWs: RWs can be broadly classified in terms of their Markovian representations. Generally speaking, RWs are either ergodic or possess absorbing barriers. In the simplest case, the induced MC is ergodic, implying that sooner or later, each state will be visited (w.p. 1), independent of the initial state. In such MCs, the limiting distribution of being in any state is independent of the corresponding initial distribution. This feature is desirable when the directives dictating the steps of the chain are a consequence of interacting with a non-stationary environment, allowing the walker to not get trapped into choosing any single state. Thus, before one starts the analysis of a MC, it is imperative that one understands the nature of the chain, i.e., whether it is ergodic, which will determine whether or not it possesses a stationary distribution.

A RW can also possess absorbing barriers. In this case, the associated MC has a set of transient states which, sooner or later, it will never visit again. When the walker reaches an absorbing barrier, it is "trapped", and is destined to remain there forever. RWs with two absorbing barriers have also been applied to analyze problems akin to the two-choice bandit problems in [14] and the gambler's ruin problem in [7], while their generalizations to chains with multiple absorbing barriers have their analogous extensions.

Although RWs are traditionally considered to be uni-dimensional (i.e., on the line), multi-dimensional RWs operate on the plane or in a higher dimensional space. The most popularly-studied RWs are those with single-step transitions. The properties of such RWs have been extensively investigated in the literature.
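To make the notion of a single-step walk concrete, the following minimal sketch (in Python; our own illustration, and not drawn from the paper or its references) simulates such a walk on the line of integers. The step probability p, the number of steps and the seed are illustrative parameters only.

```python
import random

def single_step_rw(n_steps=1000, p=0.5, start=0, seed=42):
    """Simulate a single-step RW on the integers: at each step the
    walker moves to a neighboring state, +1 with probability p and
    -1 with probability 1 - p."""
    random.seed(seed)
    state = start
    trajectory = [state]
    for _ in range(n_steps):
        state += 1 if random.random() < p else -1
        trajectory.append(state)
    return trajectory

# A short unbiased walk; the position fluctuates around the starting state.
print(single_step_rw(n_steps=20))
```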
A classical example of a RW of this type is the ruin problem in [7]. In this case, a gambler starts with a fortune of size s, and decides to play until he is either ruined (i.e., his fortune decreases to 0), or until he has reached a fortune of M. At each step, the gambler has a probability, p, of incrementing his fortune by a unit, and a chance q = 1 − p of losing a unit. The actual capital possessed by the gambler is represented by a RW on the line of integers from 0 to M, with the states 0 and M serving as the respective absorbing barriers. Of course, the game changes drastically to be ergodic if a player is freely given a unit of wealth whenever he is bankrupt, i.e., when his fortune is 0, and he forfeits a unit if he attains the maximum wealth of M. In these cases, the respective boundaries are said to be "reflecting".

Analysis of Ergodic RWs: Ergodic MCs possess the fascinating property that the probabilities of being in the various states converge to an asymptotic value, also known as the steady-state or stationary distribution. For a chain with W states, characterized by the Markov matrix, $H$, this distribution, say $\Pi$, satisfies:

$H^T \Pi = \Pi.$    (1)

Most of the RWs that have been formally analyzed operate "on a discretized line", and since the walk is achieved by performing small steps to the current state's neighbor states, such a neighbor-step motion renders their analyses tractable. This is because the asymptotic probability, $\pi_i$, of being in state $i$, can be written in terms of $\pi_j$, where the indices $\{j\}$ are integers centered around, or in the neighborhood of, $i$. The problem then reduces to solving difference equations of $\pi_i$ in terms of the $\pi_j$'s.

Analysis of Ergodic RWs with "Jumps" (RWJ): When some of the transitions are to non-neighbor states, the MC takes a "jump" to such a non-neighbor state, rather than a step. A formal analysis of RWJs is, typically, impossible because there are no known techniques to solve the corresponding difference equations of the steady-state probabilities. The literature on RWJs is extremely sparse. One example RWJ was reported in [2], and it was applied in the online tracking of spatio-temporal event patterns in [15, 16].

Analysis of Ergodic RWs on Trees: Although RWs with transitions on a line, such as the gambler's ruin problem, have been extensively studied for almost a century, as one can observe from [1], problems involving the analysis of RWs on a tree are intrinsically hard and have received little research attention. This is because they involve the hardest concepts of two arenas: Firstly, they involve specific RWJs, where the transitions are to non-neighbor states. Secondly, the non-neighbor states have an additional constraint in that they are associated with an underlying tree structure, as opposed to a line.
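As a concrete illustration of Equation (1), the following sketch (in Python with NumPy; our own illustrative construction, not part of the paper) builds the transition matrix of the ruin chain with reflecting boundaries described above, for arbitrarily chosen values M = 5 and p = 0.6, and recovers its stationary distribution as the eigenvector of $H^T$ associated with the eigenvalue 1.

```python
import numpy as np

def reflecting_ruin_matrix(M=5, p=0.6):
    """Transition matrix H of the ruin chain on the states 0..M with
    reflecting boundaries: a bankrupt player (state 0) is given a unit,
    and a player at the maximum wealth M forfeits one."""
    q = 1.0 - p
    H = np.zeros((M + 1, M + 1))
    H[0, 1] = 1.0            # reflecting lower boundary
    H[M, M - 1] = 1.0        # reflecting upper boundary
    for i in range(1, M):
        H[i, i + 1] = p      # win a unit
        H[i, i - 1] = q      # lose a unit
    return H

def stationary_distribution(H):
    """Solve H^T Pi = Pi, i.e. take the eigenvector of H^T for the
    eigenvalue 1, normalized so that its entries sum to one."""
    eigvals, eigvecs = np.linalg.eig(H.T)
    k = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

H = reflecting_ruin_matrix(M=5, p=0.6)
pi = stationary_distribution(H)
print("Stationary distribution:", np.round(pi, 4))
print("Satisfies H^T Pi = Pi:", np.allclose(H.T @ pi, pi))
```

For a birth-death chain of this kind, the same distribution could equally well be obtained by solving the difference equations directly; the eigenvector computation is simply the most direct numerical route for a general Markov matrix, and the same check applies to the tree-based chain analyzed later in the paper.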