
Journal of Machine Learning Research 22 (2021) 1-61. Submitted 8/19; Revised 4/20; Published 1/21.

Finite Time LTI System Identification

Tuhin Sarkar [email protected]
Department of Electrical Engineering and Computer Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA

Alexander Rakhlin [email protected]
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA

Munther A. Dahleh [email protected]
Department of Electrical Engineering and Computer Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA

Editor: Benjamin Recht

Abstract

We address the problem of learning the parameters of a stable linear time invariant (LTI) system with unknown latent space dimension, or order, from a single time series of noisy input-output data. We focus on learning the best lower order approximation allowed by finite data. Motivated by subspace algorithms in systems theory, where the doubly infinite system Hankel matrix captures both order and good lower order approximations, we construct a Hankel-like matrix from noisy finite data using ordinary least squares. This circumvents the non-convexities that arise in system identification, and allows accurate estimation of the underlying LTI system. Our results rely on careful analysis of self-normalized martingale difference terms that helps bound identification error up to logarithmic factors of the lower bound. We provide a data-dependent scheme for order selection and find an accurate realization of system parameters, corresponding to that order, by an approach that is closely related to the Ho-Kalman subspace algorithm. We demonstrate that the proposed model order selection procedure is not overly conservative, i.e., for the given data length it is not possible to estimate higher order models or find higher order approximations with reasonable accuracy.

Keywords: Linear Dynamical Systems, System Identification, Non-parametric Statistics, Control Theory, Statistical Learning Theory

1. Introduction

Finite-time system identification, the problem of estimating the system parameters given a finite single time series of its output, is an important problem in the context of control theory, time series analysis, robotics, and economics, among many others. In this work, we focus on parameter estimation and model approximation of linear time invariant (LTI) systems, or linear dynamical systems (LDS), which are described by

$$X_{t+1} = A X_t + B U_t + \eta_{t+1}, \qquad Y_t = C X_t + w_t. \qquad (1)$$

Here $C \in \mathbb{R}^{p \times n}$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $\{\eta_t, w_t\}_{t=1}^{\infty}$ are process and output noise, $U_t$ is an external control input, $X_t$ is the latent state variable, and $Y_t$ is the observed output. The goal here is parameter estimation, i.e., learning $(C, A, B)$ from a single finite time series $\{Y_t, U_t\}_{t=1}^{T}$ when the order, $n$, is unknown. Since typically $p, m < n$, it becomes challenging to find suitable parametrizations of LTI systems for provably efficient learning. When $\{X_j\}_{j=1}^{\infty}$ are observed (or, $C$ is known to be the identity matrix), identification of $(C, A, B)$ in Eq. (1) is significantly easier, and ordinary least squares (OLS) is a statistically optimal estimator. It is, in general, unclear how (or if) OLS can be employed in the case when the $X_t$'s are not observed.
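To make this setup concrete, the following is a minimal simulation sketch of Eq. (1) in the fully observed case ($C$ equal to the identity), together with the OLS regression mentioned above. All dimensions, noise levels, and variable names are illustrative assumptions on our part, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 4, 2, 2000                       # illustrative dimensions and horizon

# A random stable A (spectral radius scaled below one) and an arbitrary B.
A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))
B = rng.standard_normal((n, m))

# One trajectory of Eq. (1) in the fully observed case (states seen directly).
X = np.zeros((T + 1, n))
U = rng.standard_normal((T, m))            # external control inputs U_t
for t in range(T):
    X[t + 1] = A @ X[t] + B @ U[t] + 0.1 * rng.standard_normal(n)   # process noise eta_{t+1}

# OLS: regress X_{t+1} on the stacked regressor [X_t; U_t].
Z = np.hstack([X[:-1], U])                 # shape (T, n + m)
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
print(np.linalg.norm(A_hat - A), np.linalg.norm(B_hat - B))   # both small for large T
```

When only the noisy outputs $Y_t = C X_t + w_t$ are available, this regression can no longer be formed directly from the data, which is the setting addressed in this paper.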
To motivate the study of a lower-order approximation of a high-order system, consider the following example.

Example 1 Consider $M_1 = (A_1, B_1, C_1)$ with

$$A_1 = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ -a & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}_{n \times n}, \qquad B_1 = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}_{n \times 1}, \qquad C_1 = B_1^{\top}, \qquad (2)$$

where $na \ll 1$ and $n > 20$. Here the order of $M_1$ is $n$. However, it can be approximated well by $M_2$, which is of a much lower order and given by

$$A_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \qquad B_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad C_2 = B_2^{\top}. \qquad (3)$$

For the same input $U_t$, if $Y_t^{(1)}$ and $Y_t^{(2)}$ denote the outputs generated by $M_1$ and $M_2$ respectively, then a simple computation shows that

$$\sup_{U \neq 0} \frac{\sum_{t=1}^{\infty} \big(Y_t^{(1)} - Y_t^{(2)}\big)^2}{\sum_{t=1}^{\infty} U_t^2} \leq 4 n^2 a^2 \ll 1.$$

This suggests that the actual value of $n$ is not important; rather, there exists an effective order, $r$ (which is 2 in this case). This lower order model captures "most" of the LTI system.

Since the true model order is not known in many cases, we emphasize a nonparametric approach to identification: one which adaptively selects the best model order for the given data and approximates the underlying LTI system better as $T$ (the length of data) grows. The key to this approach will be designing an estimator $\hat{M}$ from which we obtain a realization $(\hat{C}, \hat{A}, \hat{B})$ of the selected order.
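The claim in Example 1 is easy to probe numerically. The sketch below uses our own choices of $n$, $a$, horizon, and a random input; the energy ratio for any single input is at most the supremum above, so it should fall well below $4n^2a^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, T = 25, 1e-3, 2000                    # illustrative choices with n * a << 1

# M1: the order-n system of Eq. (2).
A1 = np.diag(np.ones(n - 1), k=1)           # ones on the superdiagonal
A1[-1, 0] = -a
B1 = np.zeros((n, 1))
B1[-1] = 1.0
C1 = B1.T

# M2: the order-2 approximation of Eq. (3).
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C2 = B2.T

def response(A, B, C, u):
    """Noise-free output of x_{t+1} = A x_t + B u_t, y_t = C x_t, with x_0 = 0."""
    x = np.zeros((A.shape[0], 1))
    y = np.zeros(len(u))
    for t, ut in enumerate(u):
        y[t] = (C @ x).item()
        x = A @ x + B * ut
    return y

u = rng.standard_normal(T)
y1 = response(A1, B1, C1, u)
y2 = response(A2, B2, C2, u)
ratio = np.sum((y1 - y2) ** 2) / np.sum(u ** 2)
print(ratio, 4 * n**2 * a**2)               # the ratio stays far below 4 n^2 a^2 << 1
```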
1.1 Related Work

Linear time invariant systems are an extensively studied class of models in control and systems theory. These models are used in feedback control systems (for example, in planetary soft landing systems for rockets (Açıkmeşe et al., 2013)) and as linear approximations to many non-linear systems that nevertheless work well in practice. In the absence of process and output noise, subspace-based system identification methods are known to learn $(C, A, B)$ (up to similarity transformation) (Ljung, 1987; Van Overschee and De Moor, 2012). These typically involve constructing a Hankel matrix from the input-output pairs and then obtaining system parameters by a singular value decomposition. Such methods are inspired by the celebrated Ho-Kalman realization algorithm (Ho and Kalman, 1966); a minimal sketch of this realization idea appears at the end of this subsection. The correctness of these methods is predicated on the knowledge of $n$ or the presence of infinite data. Other approaches include rank minimization-based methods for system identification (Fazel et al., 2013; Grussler et al., 2018), which further relax the rank constraint to a suitable convex formulation. However, there is a lack of statistical guarantees for these algorithms, and it is unclear how much data is required to obtain accurate estimates of system parameters from finite noisy data. Empirical methods such as the EM algorithm (Dempster et al., 1977) are also used in practice; however, these suffer from non-convexity in the problem formulation and can get trapped in local minima. Learning simpler approximations to complex models in the presence of finite noisy data was studied in Venkatesh and Dahleh (2001), where identification error is decomposed into error due to approximation and error due to noise; however, the analysis assumes the knowledge of a "good" parametrization and does not provide statistical guarantees for learning the system parameters of such an approximation.

More recently, there has been a resurgence in the study of statistical identification of LTI systems from a single time series in the machine learning community. In cases when $C = I$, i.e., $X_t$ is observed directly, sharp finite time error bounds for identification of $A, B$ from a single time series are provided in Faradonbeh et al. (2017); Simchowitz et al. (2018); Sarkar and Rakhlin (2018). The approach to finding $A, B$ is based on a standard ordinary least squares (OLS) estimator, given by

$$(\hat{A}, \hat{B}) = \arg\min_{A, B} \sum_{t=1}^{T} \big\| X_{t+1} - [A, B]\,[X_t^{\top}, U_t^{\top}]^{\top} \big\|_2^2.$$

Another closely related area is that of online prediction in time series (Hazan et al., 2018; Agarwal et al., 2018). Finite time regret guarantees for prediction in linear time series are provided in Hazan et al. (2018). The approach there circumvents the need for system identification and instead uses a filtering technique that convolves the time series with eigenvectors of a specific Hankel matrix. Closest to our work is that of Oymak and Ozay (2018). Their algorithm, which takes inspiration from the Kalman-Ho algorithm, assumes the knowledge of the model order $n$. This limits the applicability of the algorithm in two ways: first, it is unclear how the techniques can be extended to the case when $n$ is unknown, as is usually the case; and second, in many cases $n$ is very large and a much lower order LTI system can be a very good approximation of the original system. In such cases, constructing the order $n$ estimate might be unnecessarily conservative (see Example 1). Consequently, the error bounds do not reflect accurate dependence on the system parameters.

When $n$ is unknown, it is unclear when a singular value decomposition should be performed to obtain the parameter estimates via the Ho-Kalman algorithm. This leads to the question of model order selection from data. For subspace-based methods, such problems have been addressed in Shibata (1976) and Bauer (2001). These papers address the question of estimating order in the context of subspace methods. Specifically, order estimation is achieved by analyzing the information contained in the estimated singular values and/or the estimated innovation variance. Furthermore, they provide guarantees for asymptotic consistency of the methods described. It is unclear, however, if these techniques and guarantees can be extended to the case when only finite data is available. Another line of literature, studied for example in Ljung et al. (2015), approaches the identification of systems with unknown order by first learning the largest possible model that fits the data and then performing model reduction to obtain the final system. Although one can show that asymptotically this method outputs the true model, we show that such a two-step procedure may underperform in a finite time setting.
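Since several of the methods discussed above revolve around the Hankel-plus-SVD realization idea, we include a textbook-style sketch of a Ho-Kalman-type step on exact Markov parameters. This is our own minimal illustration (the function name, dimensions, and pseudo-inverse recovery of $A$ are our choices), not the algorithm proposed in this paper, which must additionally handle noise and data-driven order selection.

```python
import numpy as np

def ho_kalman(markov, r, d):
    """Recover (C, A, B) up to a similarity transform from exact Markov
    parameters h_k = C A^{k-1} B, k = 1, ..., 2d, by truncating the block
    Hankel matrix at rank r (assumes d is at least the true order)."""
    p, m = markov[0].shape
    # Block Hankel matrix H[i, j] = h_{i+j+1} and its one-step shift.
    H = np.block([[markov[i + j] for j in range(d)] for i in range(d)])
    H_plus = np.block([[markov[i + j + 1] for j in range(d)] for i in range(d)])
    U, s, Vt = np.linalg.svd(H)
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
    O = U_r * np.sqrt(s_r)                  # observability-like factor, (p d) x r
    Q = np.sqrt(s_r)[:, None] * Vt_r        # controllability-like factor, r x (m d)
    C_hat = O[:p, :]                        # first block row of O
    B_hat = Q[:, :m]                        # first block column of Q
    A_hat = np.linalg.pinv(O) @ H_plus @ np.linalg.pinv(Q)
    return C_hat, A_hat, B_hat

# Usage sketch: exact Markov parameters of a random order-r system.
rng = np.random.default_rng(2)
r, p, m, d = 3, 2, 2, 6
A = rng.standard_normal((r, r))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))
B, C = rng.standard_normal((r, m)), rng.standard_normal((p, r))
markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * d)]
C_hat, A_hat, B_hat = ho_kalman(markov, r, d)
print(np.allclose(C_hat @ A_hat @ B_hat, C @ A @ B))   # Markov parameters match: True
```

On exact Markov parameters of a true order-$r$ system this recovers $(C, A, B)$ up to an invertible change of basis; with noisy, finite data the difficulty is deciding the truncation rank and controlling the estimation error, which is the subject of this paper.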