
Asian Journal of Control, Vol. 18, No. 5, pp. 1–14, September 2016 Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/asjc.1237

FULLY BAYESIAN FIELD SLAM USING GAUSSIAN MARKOV RANDOM FIELDS

Huan N. Do, Mahdi Jadaliha, Mehmet Temel, and Jongeun Choi

ABSTRACT This paper presents a fully Bayesian way to solve the simultaneous localization and spatial prediction problem using a Gaussian Markov random field (GMRF) model. The objective is to simultaneously localize robotic sensors and predict a spatial field of interest using sequentially collected noisy observations by robotic sensors. The set of observations consists of the observed noisy positions of robotic sensing vehicles and noisy measurements of a spatial field. To be flexible, the spatial field of interest is modeled by a GMRF with uncertain hyperparameters. We derive an approximate Bayesian solution to the problem of computing the predictive inferences of the GMRF and the localization, taking into account observations, uncertain hyperparameters, measurement noise, kinematics of robotic sensors, and uncertain localization. The effectiveness of the proposed algorithm is illustrated by simulation results as well as by experiment results. The experiment results successfully show the flexibility and adaptability of our fully Bayesian approach in a data-driven fashion.

Key Words: Vision-based localization, spatial modeling, simultaneous localization and mapping (SLAM), Gaussian process regression, Gaussian Markov random fields.

I. INTRODUCTION

Simultaneous localization and mapping (SLAM) addresses the problem of a robot exploring an unknown environment under localization uncertainty [1]. The SLAM technology is essential to robotic tasks [2]. The variations of the SLAM problem are surveyed and categorized with different perspectives in [3]. In general, most SLAM problems have geometric models [1,3,4]. For example, a robot learns the locations of the landmarks while localizing itself using triangulation algorithms. Such geometric models can be classified into two groups, namely, a sparse set of features that can be individually identified, often used in Kalman filtering methods [1], and a dense representation such as an occupancy grid, often used in particle filtering methods [5].

However, there are a number of situations in which such models are inapplicable. For example, underwater autonomous gliders for ocean sampling cannot find usual geometrical models from measurements of environmental variables such as pH, salinity, and temperature [6]. Therefore, in contrast to popular geometrical models, there is a growing number of practical scenarios in which measurable spatial fields are exploited instead of geometric models. In this regard, we may consider localization using various spatially distributed (continuous) signals such as distributed wireless ethernet signal strength [7] or multi-dimensional magnetic fields [8]. For instance, a localization approach (so-called vector field SLAM) that learns the spatial variation of an observed continuous signal was developed by modeling the signal as a piecewise linear function and applying SLAM subsequently [9]. Gutmann et al. [9] demonstrated the approach by an indoor experiment in which a mobile robot performed localization based on an optical sensor detecting unique infrared patterns projected on the ceiling. Do et al. [10] localized a mobile robot both indoors and outdoors by modeling the high-dimensional visual feature vector extracted from omni-directional images as a multivariate Gaussian random field.

In this paper, we consider scenarios without geometric models and tackle the problem of simultaneous localization and prediction of a spatial field of interest. With an emphasis on spatial field modeling, our problem will be referred to as field SLAM for the rest of the paper.

Manuscript received July 8, 2014; revised March 2, 2015; accepted August 9, 2015. Huan N. Do, Mahdi Jadaliha, and Mehmet Temel are with the Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48824, USA. Jongeun Choi (corresponding author, e-mail: [email protected]) is with the Department of Mechanical Engineering and the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA. The first two authors contributed equally to this paper. This work has been supported by the National Science Foundation through CAREER Award CMMI-0846547. This support is gratefully acknowledged. The authors would like to thank Mr. Alex Wolf (supported by the Professorial Assistantship Program at MSU) for his work on the PID control of the mobile robot in Section IV.

© 2015 Chinese Automatic Control Society and Wiley Publishing Asia Pty Ltd

Spatial modeling and prediction techniques for random fields have been exploited for mobile robotic sensors [6,11–18]. Random fields such as Gaussian processes and Gaussian Markov random fields (GMRFs) [19,20] have been frequently used for mobile sensor networks to statistically model physical phenomena such as harmful algal blooms, pH, salinity, temperature, and wireless signal strength [15–18,21].

The recent research efforts that are closely related to our problem are summarized as follows. The authors in [22] formulated Gaussian process regression under uncertain localization, distributed versions of which are also reported in [23]. In [24], a physical spatio-temporal random field was modeled as a GMRF with uncertain hyperparameters and the prediction problems with localization uncertainty were tackled. However, kinematics or dynamics of the sensor vehicles were not incorporated in [22,24]. Brooks et al. [25] used Gaussian process regression to model geo-referenced sensor measurements (obtained from a camera). The noisy measurements and their exact sampling positions were utilized in the training step. Then, the locations of newly sampled measurements were estimated by a maximum likelihood. However, this was not a SLAM problem since the training step had to be performed a priori for a given environment [25]. The authors in [8,26] used Gaussian process regression to implement SLAM based on a magnetic field and experimentally showed its feasibility. O'Callaghan et al. [27] used laser range-finder data to probabilistically classify a robot's environment into regions of occupancy. They provided a continuous representation of a robot's surroundings by employing a Gaussian process. In [7], a WiFi SLAM problem was solved using a Gaussian process latent variable model (GP-LVM).

To the best of our knowledge, the research efforts on the field SLAM problem to date have estimated hyperparameters off-line a priori. Therefore, they have not addressed the uncertainties in the hyperparameters of the spatial model (e.g., Gaussian process) in a fully Bayesian manner. For example, Gutmann et al. [9] assumed a piecewise linear function for the field and estimated the linear model off-line a priori. Additionally, Do et al. [10] performed Gaussian process regression by estimating its hyperparameters off-line to build the localization model a priori.

The main contribution of our work is to incorporate the uncertainties in the hyperparameters into a model such that they converge to the optimal values in a data-driven fashion. The advantage of our fully Bayesian approach is well demonstrated by simulation (Section III) and experiment results (Section IV). The results show the flexibility and adaptability of our fully Bayesian approach in a data-driven fashion.

In this paper, we first formulate the field SLAM problem in order to simultaneously localize robotic sensors and predict a spatial random field of interest using sequentially collected noisy observations by robotic sensors. The set of observations consists of observed noisy positions of robotic sensing vehicles and noisy measurements of a spatial field. To be flexible, the spatial field of interest is modeled by a GMRF with uncertain hyperparameters. We then derive an approximate Bayesian solution to the problem of computing the predictive inferences of the GMRF and the localization. Our method takes into account observations, uncertain hyperparameters, measurement noise, kinematics of robotic sensing vehicles, and noisy localization. The effectiveness of the proposed algorithm is illustrated by both simulation and experiment results. In particular, the experiment with a mobile robot equipped with a panoramic vision camera shows that the fully Bayesian approach successfully converges to the maximum likelihood estimate of the hyperparameter vector among a number of prior candidates.

A preliminary version of this paper without experimental validation was reported at the 2013 American Control Conference [28].

Notation. Standard notations will be used throughout the paper. Let R and Z_{>0} be the sets of real and positive integer numbers, respectively. The collection of n m-dimensional vectors {q_i ∈ R^m | i = 1, ···, n} is denoted by q := col(q_1, ···, q_n) ∈ R^{nm}. The operators of expectation and covariance are denoted by E and Cov, respectively. A random vector x, which has a multivariate normal distribution with mean vector μ and covariance matrix Σ, is denoted by x ~ N(μ, Σ). For given G = {c, d} and H = {1, 2}, the multiplication between two sets is defined as H × G = {(1, c), (1, d), (2, c), (2, d)}. Other notations will be explained in due course.

II. SEQUENTIAL BAYESIAN INFERENCE FOR FIELD SLAM

In this section, we provide a foundation of our approach by formulating the field SLAM problem and subsequently by providing its approximate sequential Bayesian solution.

2.1 From Gaussian processes to Gaussian Markov random fields

In this section, we introduce a GMRF as a discretized Gaussian process on a lattice. Consider a Gaussian process


z(q)∼(m(q), (q, q′)), ̃[j] [i] 𝜖[j], zt = z + t (1) where m(q) is a mean function, (q, q′) is a covariance 𝜖[j] function defined with respect to locations q, q′ in a com- where the measurement errors { t } are assumed to , ′  be the independent and identically distributed (i.i.d.) pact domain, i.e., q q ∈ c ∶= [0 xmax]×[0 ymax], . . . 𝜖[j] i i d  ,𝜎2 and z(q) is the realization of the Gaussian process z at Gaussian , i.e., t ∼ (0 𝜖 ). The measure- 𝜎2 > the robot position q. We discretize the compact domain ment noise level( 𝜖 0 is assumed) to be known, and we  [1], , [n] ⊂ Rd Sc into n spatial sites ∶= {s ··· s } ,where define 𝜖 ∶= col 𝜖[1], ···,𝜖[N] ∈ RN . n = hx × hy . h is chosen such that n ∈ Z. Note that t t t max max In addition, at time t, robot j samples a noisy obser- n → ∞ as h → ∞. The collection of realized values of the vation of its own vehicle position. random field in  is denoted by z ∶= (z[1], ···, z[n])T ∈ Rn,wherez[i] ∶= z(s[i]). The prior distribution of z is then ̃[j] [j] [j], given by z ∼  (𝜇, Σ), and so we have qt = qt + et (2) ( ) 1 T −1 [j] 𝜋(z)∝exp − (z − 𝜇) Σ (z − 𝜇) , where the observation errors {et } are distributed by 2 . . . [j] i i d  ,𝜎2 . et ∼ (0 e I) where 𝜇 ∶= (m(s[1]), ···, m(s[n]))T ∈ Rn and the i, j -th ele- 𝜎2 > The observation noise level( e 0 is assumed) to be ment of Σ is defined as Σ[ij] ∶= (s[i], s[j])∈R. The prior known, and we define e ∶= col e[1], ···, e[N] ∈ Rd×N . distribution of z can be written by a precision matrix t t t Q =Σ−1, i.e., z ∼  (𝜇, Q−1). This can be viewed as a We then have the following collective notations. discretized version of the Gaussian process (or a GMRF) ̃ 𝜖 , with a precision matrix Q on . Note that Q of this zt = Hq z + t t (3) GMRF is not sparse. 
However, a sparse version of Q, i.e., ̃ , qt = Ltqt + et Q̂ with local neighborhood that can represent the origi- nal Gaussian process can be found, for example, making ̂ where Lt is the observation matrix for the vehicle states, Q close to Q in some norm [29–31]. This approximate RN×n and Hq ∈ is defined by GMRF will be computationally efficient due to the spar- t ̂ { sity of Q. In our approach, any model for 𝜇𝜃 and Q𝜃, 1, if s[j] = q[i], where 𝜃 is the hyperparameter vector, can be used. In H[ij] = t qt 0, otherwise. our simulation and experimental studies, we will use a GMRF with a sparse precision matrix that represents a Gaussian process precisely as shown in [24,32]. 2.3 Kinematics of robotic vehicles

2.2 Multiple robotic sensors In this section, we introduce a specific model for the motion of robotic vehicles for a clear presentation. Each Consider N spatially distributed robots with sensors robotic sensor is modeled by a nonholonomic differen-  , , indexed by j ∈ ∶= {1 ··· N} sampling at time t ∈ tially driven vehicle in a two dimensional domain, i.e., Z Z >0. Suppose that the sampling time t ∈ is discrete.  ∈ R2. In this case, the kinematics of robot i can be Recall that the surveillance region is now discretized as given by a lattice that consists of n spatial sites, the set of which   [ ] [ ] is denoted by .Letn spatial( sites in be) indexed by ̇ [1,i] [i] 𝜓[i] [1] [n] n qt ut cos t [i]  ∶= {1, ···, n},andz ∶= col z , ···, z ∈ R be the , = + w , (4) q̇ [2 i] u[i] sin 𝜓[i] t corresponding static values of the scalar field at n special t t t sites.( We denote all) robots’ locations at time t by qt ∶= , , [1], , [N] N where {q[1 i], q[2 i]}, {𝜓[i]}, {u[i]},andw[i] denote the iner- col qt ··· qt ∈ , the( observations) made by all t t t t t tial position, the orientation, the linear speed, and the robots at time t by z̃ ∶= col z̃[1], ···,̃z[N] ∈ RN ,and t t t system noise of robot i in time t, respectively. the observed positions of all robots at time t by q̃ ∶= 𝜓[i] |  ( ) t We may assume that the orientation { t ∀i ∈ } ̃[1], , ̃[N] N ̃ ̃ 𝜂[i] |  col qt ··· qt ∈ . qt and zt are noisy observations can be updated from the turn rate { t ∀i ∈ }.Inthis case, the kinematics of the vehicle network can be further of qt and z at time t, respectively. At time t, robot j samples a noise corrupted mea- described in a collective and discretized form as follows. [j] [i] , surement at its current location qt = s ∈ ∀j ∈ ,  , i ∈ , i.e., qt+1 = qt + Ftut + wt (5)
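To make the collective notation concrete, the following minimal sketch builds the selection matrix H_{q_t} of (3) for robots sitting on lattice sites and simulates one step of the noisy observations (1)-(2) and the discrete kinematics (5). All sizes, seeds, and variable names are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lattice of n sites on a small 2-D grid (toy sizes).
nx, ny = 5, 5
n = nx * ny
sites = np.array([(i, j) for i in range(nx) for j in range(ny)], dtype=float)
z = rng.normal(size=n)              # static field values at the n sites

N = 3                               # number of robots
site_idx = np.array([0, 7, 12])     # each robot occupies one lattice site

# Selection matrix H_{q_t}: row i has a single 1 at the site of robot i.
H = np.zeros((N, n))
H[np.arange(N), site_idx] = 1.0

sigma_eps, sigma_e = 0.1, 0.5
z_tilde = H @ z + rng.normal(0.0, sigma_eps, size=N)      # field obs, eqs (1)/(3)
q = sites[site_idx]                                       # true positions q_t
q_tilde = q + rng.normal(0.0, sigma_e, size=q.shape)      # position obs, eq. (2)

# Collective discrete-time kinematics, eq. (5): q_{t+1} = q_t + F_t u_t + w_t,
# here with F_t = I and a small process noise w_t.
u = np.tile([1.0, 0.0], (N, 1))                           # move one cell in x
q_next = q + u + rng.normal(0.0, 0.05, size=q.shape)
```

Each row of H has exactly one nonzero entry, so H @ z simply reads the field value at each robot's site.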


where u_t is the known control input and w_t is the i.i.d. white noise realized by a known normal distribution N(0, Σ_{w_t}).

2.4 Problem formulation and its Bayesian predictive inference

In this section, we formulate the field SLAM problem with a GMRF over a lattice and provide its Bayesian solution. To be precise, we present the following assumptions A1-A5 for the problem formulation.

A1. The scalar random field z is generated by a GMRF model which is given by z ~ N(μ_θ, Q_θ^{−1}), where μ_θ and Q_θ are given functions of a hyperparameter vector θ.
A2. The noisy measurements {z̃_t} and the noisy sampling positions {q̃_t}, as in (3), are collected by robotic sensors at times t = 1, 2, ···.
A3. The control inputs u_t and the orientations ψ_t are known deterministic values at time t.
A4. The prior distribution of the hyperparameter vector θ is discrete with a support Θ = {θ^(1), ···, θ^(L)}.
A5. The prior distribution of the sampling positions at time t, π(q_t), is discrete with a support Ω(t) = {q^(k) | k ∈ K(t)}, which is given at time t. Here, K(t) = {1, ···, ν(t)} denotes the index set of the support and ν(t) is the number of the probable possibilities for q_t.

Remark II.1. A1 states that the spatial field is modeled by any GMRF. A2 is a standard noise assumption. Regarding A3, when ψ_t is not accurately integrated from the noisy turn rate η_t, we will show how to relax this condition of accurate orientation using the extended Kalman filter (EKF) [33]. A4 and A5 indicate that we consider the hyperparameter vector and sampling positions as discrete random vectors.

Problem II.2. Consider the assumptions A1-A5. Our field SLAM problem is to find the predictive distribution, mean, and variance of z conditional on D_t := {z̃_{1:t}, q̃_{1:t}}, where

z̃_{1:t} := col(z̃_1, ···, z̃_t) ∈ R^{Nt},
q̃_{1:t} := col(q̃_1, ···, q̃_t) ∈ R^{dNt}.

The solution to Problem II.2 is derived as follows. The distribution of the GMRF is given by π(z | θ, D_{t−1}) = N(μ_{z|θ,D_{t−1}}, Σ_{z|θ,D_{t−1}}). Recall that the evolution of q_t is given by (5) and the input u_t is a known deterministic value at time t. Therefore, π(q_t | D_{t−1}) can be updated by the Gaussian approximation of π(q_{t−1} | D_{t−1}):

π(q_t | D_{t−1}) ≈ N( μ_{q_{t−1}|D_{t−1}} + F_{t−1} u_{t−1}, Σ_{q_{t−1}|D_{t−1}} + Σ_{w_{t−1}} ).

Similarly, π(z̃_t | θ, D_{t−1}, q_t) is updated by the Gaussian approximation of π(z | θ, D_{t−1}) as follows:

π(z̃_t | θ, D_{t−1}, q_t) ≈ N( H_{q_t} μ_{z|θ,D_{t−1}}, Σ_ε + H_{q_t} Σ_{z|θ,D_{t−1}} H_{q_t}^T ).   (6)

Remark II.3. For the sake of decreasing complexity and to make the entire algorithm sequential, the distributions of q_t | D_{t−1} and z̃_t | θ, D_{t−1}, q_t are approximated by normal distributions.

The joint distribution of z, q_t, θ | D_{t−1} is obtained as follows:

π(z, q_t, θ | D_{t−1}) = π(z | θ, q_t, D_{t−1}) π(θ | q_t, D_{t−1}) π(q_t | D_{t−1}).

Note that q̃_t and z̃_t are conditionally independent of {z, θ, D_{t−1}} and q̃_t, respectively. We can then simplify π(q̃_t | z, θ, q_t, D_{t−1}) and π(z̃_t | z, θ, q_t, q̃_t, D_{t−1}) to π(q̃_t | q_t) and π(z̃_t | z, θ, q_t, D_{t−1}), respectively. The observation model is given by (3); thus the probabilities of the observed data are π(z̃_t | z, q_t) = N(H_{q_t} z, Σ_ε) and π(q̃_t | q_t) = N(L_t q_t, Σ_{e_t}). The measured random variables have the following conditional joint distribution:

π(z̃_t, q̃_t | z, q_t, θ, D_{t−1}) = π(z̃_t | z, q_t, θ, D_{t−1}) π(q̃_t | q_t).

From Bayes' rule, the posterior joint distribution of the scalar field values, the sampling positions, and the hyperparameter vector is given as follows:

π(z, q_t, θ | D_t) = π(z̃_t, q̃_t | z, q_t, θ, D_{t−1}) π(z, q_t, θ | D_{t−1}) / π(z̃_t, q̃_t | D_{t−1}).

In addition, π(q_t, θ | D_t) = ∫ π(z, q_t, θ | D_t) dz is given as follows:

∫ π(z, q_t, θ | D_t) dz
  = [ π(q̃_t | q_t) π(θ | q_t, D_{t−1}) π(q_t | D_{t−1}) / π(z̃_t, q̃_t | D_{t−1}) ]
    × ∫ π(z | θ, q_t, D_{t−1}) π(z̃_t | z, q_t, θ, D_{t−1}) dz,   (7)


where ∫ π(z | θ, q_t, D_{t−1}) π(z̃_t | z, q_t, θ, D_{t−1}) dz = π(z̃_t | q_t, θ, D_{t−1}), and π(z̃_t | q_t, θ, D_{t−1}) is given by (6).

Remark II.4. From Bayes' rule, π(θ | q_t, D_{t−1}) is given by π(θ, q_t | D_{t−1}) / π(q_t | D_{t−1}). We can compute π(θ, q_t | D_{t−1}) for all the possible combinations of q_t in the previous iteration using (7). However, for the sake of reducing the computational cost, we approximate π(θ | q_t, D_{t−1}) by π(θ | D_{t−1}). Therefore, we have

π(q_t, θ | D_t) ≈ π(q̃_t | q_t) π(θ | D_{t−1}) π(q_t | D_{t−1}) π(z̃_t | q_t, θ, D_{t−1}) / π(z̃_t, q̃_t | D_{t−1}).

Marginalizing out uncertainties on the possible q_t and θ, we obtain the following:

π(z, θ | D_t) = Σ_{q_t ∈ Ω(t)} π(z, q_t, θ | D_t),
π(z | D_t) = Σ_{θ ∈ Θ} π(z, θ | D_t).

Our estimates of q_t and θ can be updated using measured data up to time t as follows:

π(q_t | D_t) = Σ_{θ ∈ Θ} π(q_t, θ | D_t),
π(θ | D_t) = Σ_{q_t ∈ Ω(t)} π(q_t, θ | D_t).   (8)

The predictive probability and mean of z | θ, D_t are obtained as follows:

π(z | θ, D_t) = π(z, θ | D_t) / π(θ | D_t),
μ_{z|θ,D_t} = (1 / π(θ | D_t)) Σ_{q_t ∈ Ω(t)} μ_{z|q_t,θ,D_t} π(q_t, θ | D_t).

The predictive covariance matrix of z | θ, D_t can be obtained using the law of total variance Σ_{z|θ,D_t} = E(Σ_{z|q_t,θ,D_t}) + Cov(μ_{z|q_t,θ,D_t}), where the expectation E and covariance Cov are computed over q_t. Such variables are obtained as follows:

E(Σ_{z|q_t,θ,D_t}) = Σ_{q_t ∈ Ω(t)} Σ_{z|q_t,θ,D_t} π(q_t | θ, D_t),
Cov(μ_{z|q_t,θ,D_t}) = Σ_{q_t ∈ Ω(t)} (μ_{z|q_t,θ,D_t} − μ_{z|θ,D_t})(μ_{z|q_t,θ,D_t} − μ_{z|θ,D_t})^T π(q_t | θ, D_t),

which together give

Σ_{z|θ,D_t} = −μ_{z|θ,D_t} μ_{z|θ,D_t}^T + (1 / π(θ | D_t)) Σ_{q_t ∈ Ω(t)} ( Σ_{z|q_t,θ,D_t} + μ_{z|q_t,θ,D_t} μ_{z|q_t,θ,D_t}^T ) π(q_t, θ | D_t),

where the predictive mean and covariance of z | q_t, θ, D_t are calculated using the Gaussian process regression as follows:

μ_{z|q_t,θ,D_t} = μ_{z|θ,D_{t−1}} + Σ_{z|θ,D_{t−1}} H_{q_t}^T Σ_{z̃_t|θ,D_{t−1},q_t}^{−1} ( z̃_t − μ_{z̃_t|θ,D_{t−1},q_t} ),
Σ_{z|q_t,θ,D_t} = Σ_{z|θ,D_{t−1}} − Σ_{z|θ,D_{t−1}} H_{q_t}^T Σ_{z̃_t|θ,D_{t−1},q_t}^{−1} H_{q_t} Σ_{z|θ,D_{t−1}}.

Finally, the first and second moments of q_t | D_t are obtained as follows:

μ_{q_t|D_t} = Σ_{q_t ∈ Ω(t)} q_t π(q_t | D_t),
Σ_{q_t|D_t} = Σ_{q_t ∈ Ω(t)} (q_t − μ_{q_t|D_t})(q_t − μ_{q_t|D_t})^T π(q_t | D_t).

The fully Bayesian field SLAM is now summarized as Algorithm 1 in the Appendix. To demonstrate the usefulness of the proposed field SLAM, we apply it to two case studies under simulation and experimental environments in the following sections.

III. SIMULATION STUDY

In this section, we demonstrate the effectiveness of the proposed sequential Bayesian inference algorithm (i.e., field SLAM) using a numerical experiment. We construct our simulation scenario as follows. The robot moves through a scalar field z that can be characterized by a GMRF model, and takes noisy measurements of its 2-D locations and the scalar field. We assume that we know the control signal that was sent to the robot and the initial prior distribution of the hyperparameter vector of the GMRF. In particular, we consider that a robot is moving in a discretized surveillance region S. The spatial sites in S consist of 31 × 31 grid points, i.e., n = 961,
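One iteration of the resulting sequential update can be sketched as follows: for every pair (q_t, θ) on the discrete supports, weight the prior by the position likelihood π(q̃_t | q_t) and the field likelihood π(z̃_t | q_t, θ, D_{t−1}), normalize, and condition each hypothesis's GMRF moments on z̃_t. The names, sizes, and Gaussian-density helper below are our own illustrative choices, not the paper's code.

```python
import numpy as np

def gauss_pdf(x, mean, cov):
    """Multivariate normal density (helper; small dimensions only)."""
    d = np.atleast_1d(x) - np.atleast_1d(mean)
    cov = np.atleast_2d(cov)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        np.linalg.det(2.0 * np.pi * cov))

def field_slam_step(z_t, q_t_obs, supports, priors, mu, Sigma, H_of, s_eps, s_e):
    """One sequential update over the discrete supports.

    supports: candidate positions q^(k) forming Omega(t);
    priors:   dict with 'q' (len K) and 'theta' (len L) prior weights;
    mu, Sigma: per-theta GMRF prior moments, lists of length L;
    H_of(q):  returns the selection matrix H_q for a candidate position.
    Returns normalized joint weights w[k, l] approximating
    pi(q^(k), theta^(l) | D_t) and the conditioned per-hypothesis moments.
    """
    K, L = len(supports), len(priors['theta'])
    w = np.zeros((K, L))
    post = [[None] * L for _ in range(K)]
    for k, q in enumerate(supports):
        H = H_of(q)
        lik_q = gauss_pdf(q_t_obs, q, s_e**2 * np.eye(len(q)))
        for l in range(L):
            S = s_eps**2 * np.eye(H.shape[0]) + H @ Sigma[l] @ H.T
            lik_z = gauss_pdf(z_t, H @ mu[l], S)
            w[k, l] = lik_q * priors['q'][k] * priors['theta'][l] * lik_z
            # GMRF regression (Gaussian conditioning) for this hypothesis:
            G = Sigma[l] @ H.T @ np.linalg.inv(S)
            post[k][l] = (mu[l] + G @ (z_t - H @ mu[l]),
                          Sigma[l] - G @ H @ Sigma[l])
    w /= w.sum()
    return w, post
```

The marginals (8) are then row and column sums of w, and the predictive field moments follow from the mixture formulas above.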


uniformly distributed over the surveillance region S_c := [−15, 15] × [−15, 15]. The evolution of the robot location can be detailed as follows:

q_{t+1} = Q(q_t + F_t u_t + v_t) = q_t + F_t u_t + w_t,   (9)

where Q : S_c → S is the nearest-neighbor-rule quantizer that takes an input and returns a projected value on S. v_t is the process noise and w_t is the quantization error between the continuous and discretized states, i.e., w_t := Q(q_t + F_t u_t + v_t) − (q_t + F_t u_t). As the cardinality of S increases, we have that w_t → v_t. A special case of (9) is that F_t u_t is controlled and w_t is chosen such that the next location q_{t+1} is on a grid point in S; in this case, we have v_t = w_t.

A robot takes measurements at times t ∈ {1, 2, ···, 100} with localization uncertainty. In Fig. 1d,e,f, true, noisy, and probable sampling positions are shown in circles, stars, and dots, respectively, at time t = 100. In this simulation, the standard deviation of the noise in the observed sampling position is given by σ_e = 10 in (2). The probable sampling positions that form the support Ω(t) are selected within the confidence region of Pr(‖q_t^[i] − q̃_t^[i]‖ ≤ σ_e).

3.1 GMRF configuration

In this simulation study, we realize the spatial field developed in [24], where a GMRF is wrapped around a torus structure. Thus the top edge (respectively, the left edge) and the bottom edge (respectively, the right edge) are neighbors to each other. The parameters of the model in [24] are selected as follows. The mean vector μ_θ is chosen to be zero, and the precision matrix Q_θ is chosen with hyperparameters α = 0.1, equivalent to a bandwidth ℓ = √2/√α ≈ 4.47, and κ = 50, equivalent to σ_f² = 1/(4πακ) ≈ 0.016. The prior distribution of the hyperparameter vector θ is discrete with a support

Θ = {(κ, α), (0.1κ, α), (10κ, α), (κ, 0.1α), (κ, 10α)},

along with the corresponding uniform probabilities {0.2, 0.2, 0.2, 0.2, 0.2}. The measurement noise level in (1) is given by σ_ε = 0.1.
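For readers who want to reproduce a sparse precision matrix on a torus lattice, the following sketch builds a simple conditional-autoregressive precision Q_θ = κ(αI + L), where L is the graph Laplacian of the 4-nearest-neighbor lattice with wrapped (torus) edges. This is an illustrative construction in the spirit of the model above, not necessarily the exact parameterization of [24].

```python
import numpy as np

def torus_precision(nx, ny, alpha, kappa):
    """GMRF precision on an nx-by-ny torus lattice:
    Q = kappa * (alpha * I + L), with L the Laplacian of the
    4-nearest-neighbor graph whose edges wrap around.
    Illustrative CAR-type construction (an assumption of this sketch)."""
    n = nx * ny
    Q = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            Q[k, k] = kappa * (alpha + 4.0)        # each site has degree 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = ((i + di) % nx) * ny + (j + dj) % ny
                Q[k, nb] -= kappa                  # -kappa per neighbor
    return Q

Q = torus_precision(8, 8, alpha=0.1, kappa=50.0)
```

For α > 0 this Q is symmetric positive definite with only five nonzeros per row, which is exactly the sparsity that makes GMRF computations cheap.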

Fig. 1. The prediction results of Cases 1, 2, and 3 at time t = 100 are shown in the first, second, and third columns, respectively. The first, second, and third rows correspond to the prediction, prediction error variance, and squared empirical error fields between the predicted and true fields. True, noisy, and probable sampling positions are shown in circles, stars, and dots, respectively. The x and y axes represent the 2-D location, and colors represent the value of the plotted quantity at each location.


3.2 Simulation results with accurate orientation

The results of the simultaneous localization and spatial prediction are summarized for three methods as follows.

• Case 1. Fig. 1a,d,g shows the prediction, prediction error variance, and squared (empirical) error fields, using exact sampling positions. With the true sampling positions, the best prediction quality is expected for this case.
• Case 2. Fig. 1b,e,h shows the resulting fields obtained by using the sampled noisy positions. The results clearly illustrate that naively applying GMRF regression to noisy sampling positions can distort the prediction at a significant level. Fig. 1h shows that the squared error of this case is considerably higher than that of Case 1.
• Case 3. Fig. 1c,f,i shows the resulting fields obtained by applying Algorithm 1. The resulting prediction quality is much improved as compared to Case 2 and is even comparable to the result for Case 1.

The true positions of the robot in the simulation for time t ∈ T_s := {10, 11, ···, 30} are shown in Fig. 2 by red diamonds and lines. The estimated sampling positions of the robot E(q_t | D_t) for t ∈ T_s are shown in blue dots with estimated confidence regions. Fig. 2 clearly shows that the proposed approach significantly reduces the localization uncertainty as compared to the noise level of the sampled positions (denoted by green stars).

Table I shows the root mean squared errors (RMSE) in predictions of the scalar field and localizations using GMRF regression with true sampling positions (Case 1), GMRF regression with noisy sampling positions (Case 2), and the proposed approach with uncertain sampling positions (Case 3). This shows the effectiveness of our solution to Problem II.2.

3.3 Simulation results with noisy orientation

In this section, we relax the assumption of known orientation: ψ_t is not known and needs to be computed from the noisy turn rate η_t using the EKF. We extend the evolution of the robot location described in (9) to include the orientation state as follows. For the sake of simplicity, we will describe the algorithm for one robot.

Define x_t ∈ R³ as the state vector that includes the position and orientation of the robot, i.e., x_t = [q_t^[1], q_t^[2], ψ_t]^T. Therefore, we have the state transition equation of the mobile robot:

x_{t+1} = x_t + Δt [ u_t cos ψ_t ; u_t sin ψ_t ; η_t ] + ξ_t = f(x_t, u_t, η_t) + ξ_t,   (10)

where ξ_t ~ N(0, Σ_ξ) is the system process noise. Notice that ξ_t ∈ R³ includes w_t in (9) and the noise on the angular rate. The robot takes a measurement of its own spatial location, i.e.,

q̃_t = M x_t + e_t,   (11)

where

M = [ 1 0 0 ; 0 1 0 ].

We consider two localization schemes as follows.

Table I. The RMSE of predicted random field and localization.

RMSE   | Predicted field | Localization
Case 1 | 0.0862          | 0.00*
Case 2 | 0.1133          | 6.74
Case 3 | 0.1021          | 3.63

*Since Case 1 uses exact sampling positions, the localization error is zero.

Fig. 2. Cases 1-3: The trajectories of true, predicted, and noisy sampling positions of the robot are shown by red diamonds, blue dots, and green stars for time t ∈ {10, 11, ···, 30}. The blue ellipsoids show the confidence regions of about 90% for the estimated sampling positions.
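The EKF used in the schemes below can be sketched directly from the unicycle transition (10) and the position measurement (11); the Jacobian of f with respect to the state is the only nontrivial ingredient. The noise covariances in the example are illustrative placeholders, not the experiment's values.

```python
import numpy as np

def ekf_step(x, P, u, eta, dt, q_tilde, Sigma_xi, Sigma_e):
    """One EKF predict/update cycle for the unicycle state x = [q1, q2, psi]
    with transition (10) and position measurement (11)."""
    _q1, _q2, psi = x
    # Predict with f(x, u, eta) from (10).
    x_pred = x + dt * np.array([u * np.cos(psi), u * np.sin(psi), eta])
    F = np.array([[1.0, 0.0, -dt * u * np.sin(psi)],
                  [0.0, 1.0,  dt * u * np.cos(psi)],
                  [0.0, 0.0,  1.0]])                # Jacobian df/dx
    P_pred = F @ P @ F.T + Sigma_xi
    # Update with q~_t = M x_t + e_t from (11).
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = M @ P_pred @ M.T + Sigma_e                  # innovation covariance
    K = P_pred @ M.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ (q_tilde - M @ x_pred)
    P_new = (np.eye(3) - K @ M) @ P_pred
    return x_new, P_new
```

Note that the orientation ψ is corrected only indirectly, through its correlation with the position states in P.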


• Case 4. The EKF recursively predicts the location for the next iteration using (10) and then corrects it with the noisy measurement q̃_t based on the updated covariance matrix. Notice that in Case 4, the EKF does not utilize the field measurement.
• Case 5. We apply our proposed method as a post-processing step that utilizes the results from the EKF in Case 4, i.e., q̃_t and ψ_t are the estimated states from the EKF.

We present a Monte Carlo simulation result with 50 iterations to compare our proposed method with the EKF. The averaged RMSEs are shown in Table II. The EKF and our proposed method are indicated as "EKF" and "GP-EKF", respectively. Table II clearly indicates that our method outperforms the EKF by 19.5%.

Table II. Comparison between GP-EKF and EKF localization.

Method         | Averaged RMSE
Case 4: EKF    | 12.3
Case 5: GP-EKF | 9.9

In this example study, the fixed running time using Matlab on a PC (3.2 GHz Intel i7 Processor, Intel Corporation, Santa Clara, CA, USA) is about 100 seconds for each iteration of time, which is fast enough for a real-world implementation.

IV. EXPERIMENTAL STUDY

In this section, we present experiment results when our scheme is applied to vision-based localization.

We built a two-wheeled mobile robot equipped with a panoramic vision camera, a micro-controller, and two wireless routers, as shown on the right side of Fig. 3. The micro-controller (Arduino MEGA board, Open Source Hardware platform, Italy) runs a proportional-integral-derivative (PID) controller. One wireless router (TP-Link TL-WR703N 150M, TP-LINK Inc., San Dimas, CA, USA) is used for streaming the video recorded from the web-cam (Logitech HD webcam C310, Logitech, Newark, CA, USA) equipped with a 360° panoramic lens (Kogeto Panoramic Dot Optic Lens, Kogeto, New York, NY, USA), and the other router is used for receiving the control commands remotely via the Internet.

The robot takes command inputs, and a low-level PID controller programmed in the micro-controller tracks the inputs so that the mobility of the robot can be described by

q_{t+1} = q_t + Δt [ u_t cos ψ_t ; u_t sin ψ_t ],   (12)

where Δt is the sampling time, and q_t, ψ_t, and u_t are the robot position, the robot orientation, and the command input, respectively. The orientation ψ_t is re-calculated from the given turn rates so that (12) is updated online.

Fig. 3 shows the mobile robot in the testing environment. The mobile robot is controlled remotely by a user while the panoramic vision of the surrounding scene is recorded. Another overhead web camera (Logitech HD webcam C910, Logitech, Newark, CA, USA) is mounted on the ceiling to measure the ground-truth positions of the robot for evaluating the proposed approach. Two PCs are used in the experiment, i.e., one for sending command inputs to the robot and the other one for

Fig. 3. Left: The testing environment. Right: The mobile robot equipped with a panoramic vision camera.

processing the images from the overhead web camera for tracking the robot's trajectory for later evaluation, and for recording the scene streamed from the panoramic vision camera installed on the robot. In summary, there are three data sets collected from the experiment, all sampled at 0.5 Hz. The command inputs, turn rates, and sampling times {u_t, η_t, Δt} are sampled. The scene recorded by the panoramic vision web-cam is streamed on-line. The positions of the mobile robot are tracked by the overhead camera.

4.1 Informative scalar field

In this section, we select an informative scalar field from the vision data to be useful in localization. For each sampling time, the frame (the doughnut-shaped image in Fig. 4a) from the video captured by the panoramic vision camera is unwrapped into a panoramic image. The height of the panoramic image is the thickness of the doughnut-shaped image (Fig. 4a) and the width is 2π × r_outer, where r_outer is its outer radius. Each panoramic image is converted into gray scale and resized as a square picture as shown in Fig. 4b. To find an informative feature from the vision data, a fast Fourier transform (FFT) is applied to the square images. For an image of size M × M captured at sampling time t, the two-dimensional FFT is given by

F[t](u, v) = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} f[t](a, b) exp( −j2π( u a/M + v b/M ) ),  ∀u, v ∈ {0, 1, ···, M − 1},   (13)

where f[t](a, b) is the gray-scale square image in the spatial coordinates, u and v are coordinates in the frequency domain, and j is the imaginary unit. For example, the two-dimensional magnitude plot from one image is shown in Fig. 4c. Note that the energy mainly distributes along the horizontal axis due to the dominant vertical patterns occurring in the surrounding vision images. Note also that only the phase of the FFT is affected by the direction of the robot, while the magnitude is robust with respect to the heading angle. Therefore, we utilize the magnitude of the FFT of the features distributed in the horizontal direction from (13) as follows:

m[t](u) = |F[t](u, 0)|,

where | ⋅ | denotes the magnitude of a complex number. To extract a good feature value, we fit m[t](u) to an exponential function:

ln m̂[t](u, a_0(t), a_1(t)) = a_1(t) u + ln a_0(t),  ∀u ∈ {0, ···, M − 1}.

The noisy scalar field observation by the robot at t is now considered to be

{a_0^⋆(t), a_1^⋆(t)} = argmin_{a_0(t), a_1(t)} Σ_{u=0}^{M−1} ( ln m̂[t] − ln m[t] )²,
z̃(t) := a_1^⋆(t).

The standardized measurements {z̃(t)} are plotted in Fig. 4d.

In this section, we briefly explained how the informative scalar field was selected for vision-based localization. However, feature selection for this application is another rich research topic in the robotics and machine learning communities [34], which is beyond the scope of this paper.

4.2 GMRF modeling

In this section, we show how to model z(t) via a GMRF model. The discretized surveillance region S consists of 41 × 61 grid points (instead of 31 × 31 as in the

Fig. 4. Feature extraction process: (a) The original doughnut shape image. (b) The resized panoramic image. (c) The zoomed-in frequency-domain image. (d) The sampled (standardized) scalar field of interest over time iteration.
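The per-image feature extraction of Section 4.1 (two-dimensional FFT, horizontal-frequency magnitude, exponential fit in the log domain) can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name, the choice of FFT axis paired with 𝔲, and the small eps guard against log(0) are ours.

```python
import numpy as np

def scalar_field_feature(gray_square, eps=1e-12):
    """Compute the scalar feature z~(t) from an M x M gray-scale image.

    Following (13): take the two-dimensional FFT, keep the magnitudes
    m[t](u) = |F[t](u, 0)| along one frequency axis, and fit ln m[t](u)
    to the line a1*u + ln a0 by least squares; the slope a1 is returned
    as the feature (a1 is standardized over time in the experiment).
    """
    M = gray_square.shape[0]
    F = np.fft.fft2(gray_square)        # F[t](u, v)
    m = np.abs(F[:, 0])                 # m[t](u) = |F[t](u, 0)|, u = 0..M-1
    u = np.arange(M)
    # Exponential fit carried out in the log domain; eps avoids log(0).
    a1, ln_a0 = np.polyfit(u, np.log(m + eps), 1)
    return float(a1)
```

In the experiment, the sequence of per-image features {z̃(t)} would additionally be standardized before being used as scalar field measurements.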


simulation study) to be consistent with the frame ratio of the overhead camera.

The maximum likelihood estimate (MLE) [20] of the hyperparameter vector is found, which yields α = 0.0661 (equivalent to a bandwidth 𝓁 = 5.5) and κ = 1.672 (equivalent to σ_f² = 0.72). To test the data-driven adaptability of our fully Bayesian SLAM, we construct a prior distribution (as if we did not have the MLE) centered around the MLE, such as

Θ = {(κ, α), (0.1κ, α), (10κ, α), (κ, 0.1α), (κ, 10α)},

along with the corresponding uniform probabilities {0.2, 0.2, 0.2, 0.2, 0.2}. We then investigate the posterior distribution of θ over the support set Θ. Note that Θ contains the MLE (κ, α), hoping that our scheme converges to this MLE. The measurement noise level is set by σ_ε = 0.1. The running time (using the same PC as in the simulation case) is about 1000 seconds for each iteration, which can be reduced when the structure of the loop in Algorithm 1 is exploited for parallelized computation.

Fig. 5. The probability distribution of the hyperparameter vector θ at (a) t = 1, (b) t = 3 and (c) t = 40.

Fig. 6. The top-view of the prediction error variance plotted with the real robot's trajectory in a dashed white line.

Fig. 7. The 3D view of the predicted scalar field and the measurements of the robot at each sampling point, plotted as a gray surface and a red line, respectively.

Fig. 8. The ground truth, predicted path, and noisy observations are plotted in red diamonds, blue dots, and green stars, respectively.
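Because Θ is a finite support set, the posterior over θ is maintained by a standard discrete Bayes update: multiply the current weights by the marginal likelihood of the new measurement under each candidate, then renormalize. A minimal sketch follows; the κ and α values come from the text, but the per-candidate likelihood numbers are hypothetical stand-ins (in the paper they come from the GMRF predictive distribution):

```python
import numpy as np

# Discrete support centered on the MLE, as constructed in the text.
kappa, alpha = 1.672, 0.0661
support = [(kappa, alpha), (0.1 * kappa, alpha), (10 * kappa, alpha),
           (kappa, 0.1 * alpha), (kappa, 10 * alpha)]
posterior = np.full(len(support), 0.2)   # uniform prior over Theta

def bayes_update(posterior, likelihoods):
    """p(theta | z_1:t) ∝ p(z_t | theta, z_1:t-1) * p(theta | z_1:t-1)."""
    w = posterior * np.asarray(likelihoods, dtype=float)
    return w / w.sum()

# Hypothetical likelihood values of one new measurement z_t under each
# candidate; the paper evaluates these from the GMRF predictive density.
likelihoods = [0.9, 0.2, 0.1, 0.3, 0.15]
posterior = bayes_update(posterior, likelihoods)
```

Repeated updates with informative measurements concentrate the weights on the MLE candidate (κ, α), which is the convergence behavior shown in Fig. 5.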


4.3 Experimental results

The mobile robot moves through the testing environment and takes measurements for t ∈ 𝒯_e := {1, ···, 40}. The updated discrete probability distributions of the hyperparameter vector θ in (8) at the sampling times t = 1, t = 3, and t = 40 are shown in Fig. 5. Fig. 5 clearly demonstrates the convergence of the posterior distribution of θ by the peak at the MLE (κ, α).

The top-view of the prediction error variance is depicted in Fig. 6, with the true positions of the robot plotted in a dashed white line. The 3D view of the predicted field is plotted in Fig. 7 along with the interpolated measurements from the robot over all sampling points. In contrast to the simulation case, the robot mobility model may have a large modeling error due to the difference between the model and the real robot dynamics. The RMSE of the predicted robot locations when only using open-loop kinematics is 0.3393 (m) over the length of a 9 (m) trajectory. The estimated sampling positions of the robot for all t ∈ 𝒯_e are shown as blue dots in Fig. 8, along with the ground truth and noisy measured positions in red diamonds and green stars, respectively. Our proposed approach reduces the RMS error to 0.1332 (m), a 61% improvement over the open-loop prediction.

V. CONCLUSION

In this paper, we provide an approximate Bayesian solution to the problem of simultaneous localization and spatial prediction (field SLAM), taking into account the kinematics of robots and the uncertainties in the precision matrix, the sampling positions, and the measurements of a GMRF in a fully Bayesian manner. In contrast to [24], the kinematics of the robotic vehicles are integrated into the inference algorithm. The simulation results show that the proposed approach successfully estimates the sampling positions and predicts the spatial field along with their prediction error variances, in a fixed computational time. The feasibility of the proposed approach is tested in a real-world experiment. The experiment results indicate the adaptability of the approach to handle the noise from location measurements and scalar field observations, and the uncertainty of the kinematics model, in a data-driven fashion.

REFERENCES

1. Dissanayake, M., P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba, "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Trans. Rob. Autom., Vol. 17, No. 3, pp. 229–241 (2001).
2. Laut, J., E. Henry, O. Nov, and M. Porfiri, "Development of a mechatronics-based citizen science platform for aquatic environmental monitoring," IEEE/ASME Trans. Mechatron., Vol. 19, No. 5, pp. 1541–1551 (2014).
3. Thrun, S., "Simultaneous localization and mapping," in Jefferies, M. E. and W.-K. Yeap (Eds.), Robotics and Cognitive Approaches to Spatial Mapping, Springer Berlin Heidelberg, pp. 13–41 (2008).
4. Leonard, J. and H. Durrant-Whyte, "Simultaneous map building and localization for an autonomous mobile robot," Proc. Intell. Robots Syst. (IROS), IEEE, Piscataway, New Jersey, USA, pp. 1442–1447 (1991).
5. Grisetti, G., C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," Proc. IEEE Int. Conf. Robot. Autom., IEEE, Piscataway, New Jersey, USA, pp. 2432–2437 (2005).
6. Leonard, N., D. Paley, F. Lekien, R. Sepulchre, D. Fratantoni, and R. Davis, "Collective motion, sensor networks, and ocean sampling," Proc. IEEE, Vol. 95, pp. 48–74 (2007).
7. Ferris, B., D. Fox, and N. Lawrence, "WiFi-SLAM using Gaussian process latent variable models," Proc. 20th Int. Joint Conf. Artif. Intell., AAAI Press, Menlo Park, California, pp. 2480–2485 (2007).
8. Vallivaara, I., J. Haverinen, A. Kemppainen, and J. Röning, "Simultaneous localization and mapping using ambient magnetic field," Proc. IEEE Int. Conf. Multisensor Fusion Integ. Intell. Syst., IEEE, Piscataway, New Jersey, USA, pp. 14–19 (2010).
9. Gutmann, J. S., E. Eade, P. Fong, and M. Munich, "Vector field SLAM: Localization by learning the spatial variation of continuous signals," IEEE Trans. Robot., Vol. 28, pp. 650–667 (2012).
10. Do, H. N., M. Jadaliha, J. Choi, and C. Y. Lim, "Feature selection for position estimation using an omnidirectional camera," Image Vis. Comput., Vol. 39, pp. 1–9 (2015).
11. Lynch, K. M., I. B. Schwartz, P. Yang, and R. A. Freeman, "Decentralized environmental modeling by mobile sensor networks," IEEE Trans. Robot., Vol. 24, pp. 710–724 (2008).
12. Choi, J., S. Oh, and R. Horowitz, "Distributed learning and cooperative control for multi-agent systems," Automatica, Vol. 45, pp. 2802–2814 (2009).


13. Cortés, J., "Distributed Kriged Kalman filter for spatial estimation," IEEE Trans. Autom. Control, Vol. 54, pp. 2816–2827 (2009).
14. Xu, Y., J. Choi, S. Dass, and T. Maiti, "Sequential Bayesian prediction and adaptive sampling algorithms for mobile sensor networks," IEEE Trans. Autom. Control, Vol. 57, No. 8, pp. 2078–2084 (2012).
15. Xu, Y. and J. Choi, "Adaptive sampling for learning Gaussian processes using mobile sensor networks," Sensors, Vol. 11, pp. 3051–3066 (2011).
16. Xu, Y., J. Choi, and S. Oh, "Mobile sensor network navigation using Gaussian processes with truncated observations," IEEE Trans. Robot., Vol. 27, pp. 1118–1131 (2011).
17. Xu, Y., J. Choi, S. Dass, and T. Maiti, "Efficient Bayesian spatial prediction with mobile sensor networks using Gaussian Markov random fields," Automatica, Vol. 49, No. 12, pp. 3520–3530 (2013).
18. Xu, Y. and J. Choi, "Spatial prediction with mobile sensor networks using Gaussian processes with built-in Gaussian Markov random fields," Automatica, Vol. 48, No. 8, pp. 1735–1740 (2012).
19. Cressie, N., "Kriging nonstationary data," J. Am. Stat. Assoc., Vol. 81, pp. 625–634 (1986).
20. Rasmussen, C. E. and C. K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press, Cambridge, Massachusetts (2006).
21. Krause, A., A. Singh, and C. Guestrin, "Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies," J. Mach. Learn. Res., Vol. 9, pp. 235–284 (2008).
22. Jadaliha, M., Y. Xu, and J. Choi, "Gaussian process regression using Laplace approximations under localization uncertainty," Proc. Amer. Control Conf., IEEE, Piscataway, New Jersey, USA, pp. 1394–1399 (2012).
23. Choi, S., M. Jadaliha, J. Choi, and S. Oh, "Distributed Gaussian process regression under localization uncertainty," J. Dyn. Syst. Meas. Control-Trans. ASME, Vol. 137, No. 3, p. 031002 (2015).
24. Jadaliha, M., Y. Xu, and J. Choi, "Efficient spatial prediction using Gaussian Markov random fields under uncertain localization," Proc. ASME Dyn. Syst. Control Conf., ASME, New York, pp. 253–262 (2012).
25. Brooks, A., A. Makarenko, and B. Upcroft, "Gaussian process models for indoor and outdoor sensor-centric robot localization," IEEE Trans. Robot., Vol. 24, pp. 1341–1351 (2008).
26. Kemppainen, A., J. Haverinen, I. Vallivaara, and J. Röning, "Near-optimal SLAM exploration in Gaussian processes," Proc. IEEE Int. Conf. Multisensor Fusion Integ. Intell. Syst., IEEE, Piscataway, New Jersey, USA, pp. 7–13 (2010).
27. O'Callaghan, S., F. Ramos, and H. Durrant-Whyte, "Contextual occupancy maps using Gaussian processes," Proc. IEEE Int. Conf. Robot. Autom., IEEE, Piscataway, New Jersey, USA, pp. 1054–1060 (2009).
28. Jadaliha, M. and J. Choi, "Fully Bayesian simultaneous localization and spatial prediction using Gaussian Markov random fields (GMRFs)," Amer. Control Conf. (ACC), IEEE, Piscataway, New Jersey, USA, pp. 4592–4597 (2013).
29. Rue, H. and H. Tjelmeland, "Fitting Gaussian Markov random fields to Gaussian fields," Scand. J. Stat., Vol. 29, pp. 31–49 (2002).
30. Cressie, N. and N. Verzelen, "Conditional-mean least-squares fitting of Gaussian Markov random fields to Gaussian fields," Comput. Stat. Data Anal., Vol. 52, pp. 2794–2807 (2008).
31. Hartman, L. and O. Hössjer, "Fast kriging of large data sets with Gaussian Markov random fields," Comput. Stat. Data Anal., Vol. 52, pp. 2331–2349 (2008).
32. Lindgren, F., H. Rue, and J. Lindström, "An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach," J. R. Stat. Soc. Ser. B-Stat. Methodol., Vol. 73, pp. 423–498 (2011).
33. Ribeiro, M. I., Kalman and extended Kalman filters: Concept, derivation and properties, Institute for Systems and Robotics, Technical University of Lisbon, Lisbon, Portugal (2004).
34. DeSouza, G. N. and A. C. Kak, "Vision for mobile robot navigation: A survey," IEEE Trans. Pattern Anal. Mach. Intell., Vol. 24, No. 2, pp. 237–267 (2002).

VI. APPENDIX

The proposed fully Bayesian SLAM algorithm is summarized in Algorithm 1.


Huan N. Do earned his Bachelor of Science with Cum Laude Honors in Mechanical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2012. He was awarded the Vietnam Education Foundation fellowship to pursue a Ph.D. program in Mechanical Engineering at Michigan State University. His research involves Bayesian methods, machine learning, large-scale data analysis, feature selection, appearance-based localization, and biomedical engineering. He is a member of ASME.


Mahdi Jadaliha received the B.S. degree in Electrical Engineering from Iran University of Science and Technology, Tehran, Iran, in 2005 and the M.S. degree in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2007. He earned the Ph.D. degree in Mechanical Engineering at Michigan State University, East Lansing, Michigan, U.S.A. under the guidance of Prof. J. Choi. Currently, he holds the position of Data Scientist at Monsanto. In the last few years, he has focused on using big data along with different statistical models to predict phenotype and genotype data for agricultural products.

Mehmet Temel received the B.Sc. degree in Mechatronics Engineering from Sabanci University, Istanbul, Turkey in 2007 and the M.Sc. degree in Mechanical Engineering from the University of Delaware, Newark, DE in 2010. He served as a visiting scholar at Michigan State University, East Lansing, MI. His research interests are in the areas of design, mechatronics, robotics, and control. Currently, he is employed as a mechanical engineer in industry.

Jongeun Choi received his Ph.D. and M.S. degrees in Mechanical Engineering from the University of California at Berkeley in 2006 and 2002, respectively. He also received the B.S. degree in Mechanical Design and Production Engineering from Yonsei University, Seoul, Republic of Korea in 1998. He is currently an Associate Professor with the Departments of Mechanical Engineering and Electrical and Computer Engineering at Michigan State University. His current research interests include systems and control, system identification, and Bayesian methods, with applications to mobile robotic sensors, environmental adaptive sampling, engine control, human motor control systems, and biomedical problems. He is an Associate Editor for the Journal of Dynamic Systems, Measurement and Control (JDSMC). He co-organized a special issue on Stochastic Models, Control, and Algorithms in Robotics in JDSMC and served as its co-editor during 2014–2015. His papers were finalists for the Best Student Paper Award at the 24th American Control Conference (ACC) 2005 and the Dynamic Systems and Control Conference (DSCC) 2011 and 2012. He is a recipient of an NSF CAREER Award in 2009. Dr. Choi is a member of ASME and IEEE.
