Fuzzy Situation based Navigation of Autonomous Mobile Robot using Reinforcement Learning

Rhea Guanlao, Petr Musilek, Farooq Ahmed, and Armita Kaboli
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 2V4, Canada
[email protected]

Abstract- Design of high-level control systems for autonomous agents, such as mobile robots, is a challenging task. The complexity of robotic tasks, the number of inputs and outputs of such systems, and their inherent ambiguity preclude the designer from finding an analytical description of the problem. Using the technology of fuzzy sets, it is possible to use general knowledge and intuition to design a fuzzy control system that encodes the relationships in the control domain in the form of fuzzy rules. However, control systems designed in this way are severely limited in size and are usually far from optimal. In this paper, several techniques are combined to overcome such limitations. The control system is selected in the form of a general fuzzy rule based system. Antecedents of this system correspond to various situations encountered by the robot and are partitioned using a fuzzy clustering approach. Consequents of the rules describe fuzzy sets for the change of heading necessary to avoid collisions. While the parameters of the input and output fuzzy sets are designed before the robot engages the real world, the rules that govern its behaviour are acquired autonomously, endowing the robot with the ability to continuously improve its performance and to adapt to a changing environment. This process is based on reinforcement learning, which is well suited for on-line and real-time learning tasks.

Index Terms- Robotics, Navigation, Fuzzy Control, Fuzzy Clustering, Reinforcement Learning.

I. INTRODUCTION

The navigation task for autonomous mobile robots involves traversing unfamiliar environments while avoiding obstacles and dealing with noisy, imperfect sensor information. While several control schemes have been developed to accomplish the navigation task, the behaviour based control paradigm continues to be an essential element of autonomous mobile robot navigation. Behaviour based controllers often incorporate reactive control schemes that use range sensors such as sonar and ladar to provide responsive control in dynamic and uncertain environments.

Fuzzy logic control is well suited for behaviour based approaches to navigation, as it can be made robust against sensor noise and imperfect odometry, and fuzzy behaviours have been implemented in many successful autonomous navigation systems [10]. However, while fuzzy logic behaviour based design can produce good performance in the navigation task, the design of fuzzy behaviours often involves an ad-hoc trial and error approach that is inflexible and time consuming to implement. In recent research, the adoption of learning strategies such as evolutionary and reinforcement learning has attempted to address this problem by automating the design of low level behaviours and behaviour integration [4], [14]. However, the large state space involved in mobile robot navigation often slows the learning of navigation tasks, and attempts to lower the dimensionality of the state space by grouping input data using expert knowledge merely shift the element of ad-hoc human input from one design stage to another.

Another way to reduce the state space dimensionality is to cluster the input data using a fuzzy clustering algorithm, so that the resulting fuzzy clusters can be treated as fuzzy states by the learning algorithm. This input clustering approach to learning obstacle avoidance behaviours is presented in this paper. First, sonar data from sample indoor environments are partitioned with a fuzzy clustering algorithm. The resulting partitions may not correspond to the state space partitions a human expert would use, but they provide a good description of the typical situations the robot will encounter in its environment. In the learning stage, each fuzzy state is associated with an action to be taken in that state, and a reinforcement learning algorithm is used to associate punishments generated by obstacle collisions with the state-action pair that caused the collision, so that over several iterations an optimal obstacle avoidance policy is learned.

The paper is organized in five sections. Section II summarizes background knowledge of the navigation problem, fuzzy rule based systems, clustering, and reinforcement learning. The proposed fuzzy situation based navigation system is described in Section III, and results obtained in simulation are presented and analysed in Section IV. Finally, Section V brings main conclusions and indicates possible directions of future work.

II. BACKGROUND

A. Navigation Problem

The basic task of an autonomous mobile robot is to navigate safely to a specified goal location. The high level problem of goal seeking is that of selecting strategies to specify the goals and of planning routes from the current position to these goals. The other aspect of navigation, safety, can be handled by various methods of obstacle avoidance.

Obstacle avoidance typically uses information about the robot's immediate surroundings to choose an action (e.g. turning) appropriate to avoid obstacles in the environment. Such information is provided by means of the robot's sensory subsystem, equipped e.g. with proximity sensing devices. The Pioneer 2DX robotic platform considered in this paper has 8 sonar sensors spaced nearly evenly from -90 degrees to 90 degrees around its front side (cf. Figure 1).

Fig. 1. Sonar configuration on the mobile robotic platform Pioneer 2DX.

B. Fuzzy Clustering

Clustering, or cluster analysis, is a technique for partitioning and finding structures in data [6]. By partitioning a data set into clusters, similar data are assigned to the same cluster whereas different data should belong to different clusters. While it is possible to assign each data-point strictly to only one cluster, such crisp assignment rarely captures the actual relationships among the data: real data-points can to some degree belong to several clusters simultaneously. This leads to the formulation of fuzzy clustering [2], where membership degrees between zero and one are used instead of crisp assignment of the data to clusters. In addition to better conformity with the data, fuzzy clustering provides the ability to interpolate between the cluster prototypes. This latter property is very important when considering clustering in connection with rule-based systems, as described in the following section.

The fuzzy c-means clustering algorithm [2] is a commonly used method of fuzzy clustering. It is based on the minimization of an objective function J with respect to U, a fuzzy partition of the data set, and V, a set of c prototypes:

J(U, V) = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \, \lVert x_j - v_i \rVert^2     (1)

where u_{ij} denotes the membership of data-point x_j, j \in [1, n], in cluster v_i, i \in [1, c], m \ge 1 is the partition order, and \lVert \cdot \rVert is any norm expressing the similarity between the measured data and the prototypes (e.g. Euclidean distance). The fuzzy partition is obtained through iterative optimization of (1), with the gradual update of the membership degrees U and the cluster prototypes V.
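For illustration, the iterative optimization of (1) can be written as the standard alternating update of prototypes and memberships. The sketch below assumes the Euclidean norm and m > 1; the function name, default parameters, and convergence test are illustrative and not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=None):
    """Minimal fuzzy c-means sketch: X is (n, d); returns (U, V).

    U[i, j] is the membership of data-point X[j] in cluster i,
    V[i] is the prototype (centre) of cluster i. Requires m > 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)                 # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # prototype update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # (c, n) distances
        d = np.fmax(d, 1e-12)                         # guard against division by zero
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))        # membership update ...
        U_new /= U_new.sum(axis=0, keepdims=True)     # ... normalized over clusters
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, V
```

With the 8-element sonar vectors considered in this paper stacked as the rows of X, the columns of U would describe the fuzzy situations of the individual readings and V their prototypes.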
C. Fuzzy Rule Based Systems

Fuzzy behaviours are expressed as linguistic rules of the form:

IF there is an obstacle close to the front-left of the robot, THEN turn right slightly.

The labels close and slightly make the behaviours fuzzy and endow the underlying fuzzy control system with the ability to interpolate between the discrete cases described by a limited number of rules.
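To make this interpolation concrete, the sketch below evaluates a few rules of this kind with a weighted-average defuzzification. The membership function for close, the rule set, the sector names, and the sign convention for heading change are illustrative assumptions, not the fuzzy sets actually designed in the paper.

```python
import numpy as np

def close(distance_mm):
    """Illustrative 'close' membership: fully close below 300 mm, fades out by 1000 mm."""
    return float(np.clip((1000.0 - distance_mm) / 700.0, 0.0, 1.0))

# Illustrative rule consequents: heading change in degrees suggested by each rule.
# Positive = turn left, negative = turn right (sign convention assumed here).
RULES = [
    ("front_left",  -15.0),   # IF obstacle close to front-left THEN turn right slightly
    ("front_right", +15.0),   # IF obstacle close to front-right THEN turn left slightly
    ("front",       +45.0),   # IF obstacle close to front THEN turn left sharply
]

def heading_change(sonar_mm):
    """Weighted-average defuzzification over the fired rules.

    sonar_mm maps sector names to the shortest sonar reading in that sector;
    interpolation between the discrete rules comes from the graded memberships.
    """
    firing = [(close(sonar_mm[sector]), delta) for sector, delta in RULES]
    total = sum(w for w, _ in firing)
    if total == 0.0:
        return 0.0                      # no obstacle close: keep current heading
    return sum(w * delta for w, delta in firing) / total

# Example: obstacle at 400 mm on the front-left, everything else far away -> turn right.
print(heading_change({"front_left": 400.0, "front_right": 2500.0, "front": 3000.0}))
```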
D. Reinforcement Learning

Reinforcement learning problems require online learning and dynamic adjustment in order to search for optimal methods and actions. Examples include adaptive control systems, game playing machines, and autonomous robot navigation and exploration. Search heuristics such as genetic algorithms, genetic programming, or simulated annealing have previously been applied to such problems. However, they are limited in the sense that they cannot learn while interacting with the environment and update themselves accordingly. Instead, a more general machine learning technique based on positive/negative reinforcement and trial and error interactions with the environment is efficient. It is favoured for its generality and its similarity to how humans think and learn on a higher level: through experience. Sutton describes reinforcement learning as a computational approach to understanding and automating goal-directed learning and decision-making [10]. The learning agent determines which actions yield the most reward in a particular environment by exploring and performing several different actions repeatedly, thus learning which actions to exploit. Reinforcement learning is a continual combination of exploration and exploitation; a progressive combination of the two should be performed to continually evaluate and improve the agent.

A reinforcement learning problem consists of a set of defined elements. The actions are the varying methods the learning agent explores in order to determine the optimal actions. The reward function defines the desired goal of the learning problem. The value function defines which actions produce the highest rewards in the long run, over several states, as opposed to which actions produce the highest rewards in the immediate state. The policy determines how the learning agent evaluates and responds to the rewards and values measured; it may define, perhaps with a set of stimulus-response rules, when and how to explore and exploit actions. Finally, the model is used simply to mimic the behaviour of the environment.
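This excerpt does not specify the learning algorithm itself; as a generic illustration of the elements listed above (actions, reward, value function, policy), the sketch below uses tabular Q-learning over fuzzy states obtained from clustering, with a negative reward delivered on collision. The class name, parameter values, and reward scheme are assumptions for illustration and are not the authors' algorithm.

```python
import numpy as np

class FuzzyStateQLearner:
    """Generic tabular Q-learning over fuzzy states (illustrative only).

    A 'fuzzy state' is taken as the index of the cluster with the highest
    membership for the current sonar reading; actions are discrete heading changes.
    """

    def __init__(self, n_states, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions                          # e.g. [-45, -15, 0, +15, +45] degrees
        self.Q = np.zeros((n_states, len(actions)))     # value estimates per state-action pair
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, memberships):
        """Pick the dominant fuzzy state from a cluster-membership vector."""
        return int(np.argmax(memberships))

    def act(self, s):
        """Epsilon-greedy policy: mostly exploit, occasionally explore."""
        if np.random.random() < self.epsilon:
            return np.random.randint(len(self.actions))
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, reward, s_next):
        """Standard Q-learning update; a collision would supply a negative reward."""
        target = reward + self.gamma * np.max(self.Q[s_next])
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
```

In use, the state index could be taken from the cluster memberships of the current sonar reading (for example, a column of the U produced by the fuzzy c-means sketch above), with reward = -1.0 when a collision occurs and 0 otherwise.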

III. FUZZY SITUATION BASED NAVIGATION

The proposed navigation system is based ...
