ISSN: 2332-2071 Volume 6 Number 1 2018

Mathematics and Statistics

Horizon Research Publishing, USA

http://www.hrpub.org

Mathematics and Statistics

Mathematics and Statistics is an international peer-reviewed journal that publishes original and high-quality research papers in all areas of mathematics and statistics. As an important academic exchange platform, scientists and researchers can learn about the most up-to-date academic trends and seek valuable primary sources for reference. The subject areas include, but are not limited to, the following fields: Algebra, Analysis, Applied Mathematics, Approximation Theory, Combinatorics, Computational Statistics, Computing in Mathematics, Design of Experiments, Discrete Mathematics, Dynamical Systems, Geometry and Topology, Logic and Foundations of Mathematics, Number Theory, Numerical Analysis, Probability Theory, Quantity, Recreational Mathematics, Sample Survey, Statistical Modelling, Statistical Theory.

General Inquiries
Publish with HRPUB and learn about our policies, submission guidelines, etc.
Email: [email protected]
Tel: +1-626-626-7940

Subscriptions
Journal Title: Mathematics and Statistics
Journal's Homepage: http://www.hrpub.org/journals/jour_info.php?id=34
Publisher: Horizon Research Publishing Co., Ltd
Address: 2880 ZANKER RD STE 203, SAN JOSE, CA 95134, USA
Publication Frequency: bimonthly
Electronic Version: freely available online at http://www.hrpub.org/journals/jour_info.php?id=34

Online Submission
Manuscripts should be submitted through the Online Manuscript Tracking System (http://www.hrpub.org/submission.php). If you are experiencing difficulties during the submission process, please feel free to contact the editor at [email protected].

Copyright
Authors retain all copyright interest, or it is retained by another copyright holder, as appropriate, and agree that the manuscript remains permanently open access on HRPUB's site under the terms of the Creative Commons Attribution International License (CC BY). HRPUB shall have the right to use and archive the content for the purpose of creating a record and may reformat or paraphrase to benefit the display of the record.

Creative Commons Attribution License (CC-BY)
All articles published by HRPUB are distributed under the terms and conditions of the Creative Commons Attribution License (CC-BY). Anyone is therefore allowed to copy, distribute, and transmit an article on condition that the original article and source are correctly cited.

Open Access
Open access is the practice of providing unrestricted access to peer-reviewed articles via the internet. It is also increasingly being provided to scholarly monographs and book chapters. All original research papers published by HRPUB are freely and permanently accessible online immediately after publication. Readers are free to copy and distribute the contribution under the Creative Commons Attribution Non-Commercial licence. Authors benefit from the open access publication model in the following ways:
• High availability and high visibility: free and unlimited accessibility of the publication over the internet without any restrictions;
• Rigorous peer review of research papers: fast, high-quality double-blind peer review;
• Faster publication with less cost: papers published on the internet without any subscription charge;
• Higher citation: open access publications are more frequently cited.

Mathematics and Statistics

Editor-in-Chief

Prof. Dshalalow Jewgeni, Florida Inst. of Technology, USA

Members of Editorial Board

Jiafeng Lu Zhejiang Normal University, China

Nadeem-ur Rehman Aligarh Muslim University, India

Debaraj Sen Concordia University, Canada

Mauro Spreafico University of São Paulo, Brazil

Veli Shakhmurov Okan University, Turkey

Antonio Maria Scarfone Institute of Complex Systems - National Research Council, Italy

Liang-yun Zhang Nanjing Agricultural University, China

Ilgar Jabbarov Ganja state university, Azerbaijan

Mohammad Syed Pukhta Sher-e-Kashmir University of Agricultural Sciences and Technology, India

Vadim Kryakvin Southern Federal University, Russia

Rakhshanda Dzhabarzadeh National Academy of Science of Azerbaijan, Azerbaijan

Sergey Sudoplatov Sobolev Institute of Mathematics, Russia

Birol Altın Gazi University, Turkey

Araz Aliev Baku State University, Azerbaijan

Francisco Gallego Lupianez Universidad Complutense de Madrid, Spain

Hui Zhang St. Jude Children's Research Hospital, USA

Yusif Abilov Odlar Yurdu University, Azerbaijan

Evgeny Maleko Magnitogorsk State Technical University, Russia

İmdat İşcan Giresun University, Turkey

Emanuele Galligani University of Modena and Reggio Emillia, Italy

Mahammad Nurmammadov Baku State University, Azerbaijan

Table of Contents

Mathematics and Statistics

Volume 6 Number 1 2018

New Gradient Methods for Bandwidth Selection in Bivariate Kernel Density Estimation (https://www.doi.org/10.13189/ms.2018.060101) Siloko, I. U., Ishiekwene, C. C., Oyegue, F. O...... 1

Exponential Dichotomy and Bifurcation Conditions of Solutions of the Hamiltonian Operators Boundary Value Problems in the Hilbert Space (https://www.doi.org/10.13189/ms.2018.060102) Pokutnyi Oleksandr ...... 9

Mathematics and Statistics 6(1): 1-8, 2018 http://www.hrpub.org DOI: 10.13189/ms.2018.060101

New Gradient Methods for Bandwidth Selection in Bivariate Kernel Density Estimation

Siloko, I. U.1,*, Ishiekwene, C. C.2, Oyegue, F. O.2

1 Department of Mathematical Sciences, Edwin Clark University, Nigeria
2 Department of Mathematics, University of Benin, Nigeria

Copyright © 2018 by the authors, all rights reserved. The authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution 4.0 International License.

Abstract  The bivariate kernel density estimator is fundamental in data smoothing methods, especially for data exploration and visualization purposes, due to its ease of graphical interpretation of results. The crucial factor which determines its performance is the bandwidth. We present new methods for bandwidth selection in bivariate kernel density estimation based on the principle of the gradient method and compare the results with the biased cross-validation method. The results show that the new methods are reliable and that they provide improved choices of the smoothing parameter. The asymptotic mean integrated squared error is used as the measure of performance of the new methods.

Keywords  Bandwidth, Bivariate Kernel Density Estimator, Biased Cross-validation, Gradient Method, Asymptotic Mean Integrated Squared Error

1. Introduction

Kernel density estimators are widely used nonparametric estimation techniques due to their simple forms and smoothness. Kernel density estimation is the construction of a probability density estimate from a given sample with few assumptions about the underlying probability density function and the kernel function. Kernel density estimation is a popular tool for visualising the distribution of data [1]. These estimates depend on a bandwidth, also known as the smoothing parameter, which controls the smoothness, and on a kernel function which plays the role of a weighting function [2]. Bandwidth selection is a key issue in kernel methods and has attracted the attention of researchers over the years. It is still an active research area in kernel density estimation. Progress has been made recently on data-based smoothing parameter selectors by some researchers [3, 4].

The importance of the bivariate kernel density estimator cannot be overemphasized because it occupies the unique position of bridging the univariate kernel density estimator and other higher dimensional kernel estimators [5]. The usefulness of the bivariate kernel density estimator lies mainly in its simplicity of presentation of probability density estimates, either as surface plots or contour plots. It also helps in understanding other higher dimensional kernel estimators [2]. In bivariate kernel density estimation, X and Y are taken to be the two random variables with a joint probability density function f(x, y). The random variables X_i, Y_i, i = 1, 2, ..., n are the set of observations and n is the sample size. The bivariate kernel density estimate of f(x, y) is of the form

\hat{f}(x, y) = \frac{1}{n h_x h_y} \sum_{i=1}^{n} K\left( \frac{x - X_i}{h_x}, \frac{y - Y_i}{h_y} \right),   (1.1)

where h_x > 0 and h_y > 0 are the smoothing parameters in the X and Y axes and K(x, y) is a bivariate kernel function [5, 6]. The bivariate kernel density estimator in (1.1) can be written as [7]

\hat{f}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_x} K\left( \frac{x - X_i}{h_x} \right) \frac{1}{h_y} K\left( \frac{y - Y_i}{h_y} \right).   (1.2)

Bivariate bandwidth selection is a difficult problem which may be simplified by imposing constraints on h_x and h_y. For example, h_x and h_y may be restricted to be the diagonal elements of the bandwidth matrix, and the advantages of imposing restrictions on h_x and h_y have been investigated [8].
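To make (1.2) concrete, the following is a minimal sketch, not taken from the paper, of the product-kernel estimator with the standard normal kernel; the function name bivariate_kde and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of the bivariate product-kernel estimator (1.2) with the
# standard normal kernel; names and sample data are illustrative, not the
# authors' code.
import numpy as np

def bivariate_kde(x, y, X, Y, hx, hy):
    """Evaluate f_hat(x, y) from (1.2) at a single point (x, y)."""
    phi = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # standard normal kernel
    u = (x - X) / hx
    v = (y - Y) / hy
    return np.mean(phi(u) * phi(v)) / (hx * hy)

# usage on synthetic bivariate data
rng = np.random.default_rng(0)
X = rng.normal(size=200)
Y = 0.8 * X + 0.6 * rng.normal(size=200)
print(bivariate_kde(0.0, 0.0, X, Y, hx=0.4, hy=0.4))
```

Evaluating such a function on a grid of (x, y) points gives the surface and contour plots discussed later.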

One of the popular data-based methods of bandwidth selection is the biased cross-validation method, which considers the asymptotic mean integrated squared error [9]. The bivariate biased cross-validation method is based on minimizing an estimate of the asymptotic mean integrated squared error and is of the form [10]

BCV(h_x, h_y) = \frac{1}{4\pi n h_x h_y} + \frac{1}{4 n(n-1) h_x h_y} \sum_{i=1}^{n} \sum_{j \neq i} \left[ \left( \Delta_1^2 + \Delta_2^2 \right)^2 - 8\left( \Delta_1^2 + \Delta_2^2 \right) + 8 \right] \phi(\Delta_1)\,\phi(\Delta_2),   (1.3)

where \Delta_1 = (X_i - X_j)/h_x, \Delta_2 = (Y_i - Y_j)/h_y and \phi is the standard normal density function.

This paper focuses on methods of selecting the bandwidth for the bivariate kernel density estimator using gradient methods. The rest of the paper is organized as follows: in Section 2, we present the asymptotic mean integrated squared error of the bivariate kernel density estimator; in Section 3, we present the gradient methods; Section 4 gives numerical illustrations of the results; Section 5 concludes the paper.
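As a companion to (1.3), here is a minimal sketch of the BCV criterion exactly as reconstructed above, with Δ1, Δ2 taken as the pairwise scaled differences and φ the standard normal density; the function name bcv is an assumption, and this is an illustration rather than the authors' implementation.

```python
# Sketch of the biased cross-validation criterion (1.3) as written above;
# Delta_1, Delta_2 are the pairwise scaled differences and phi is the standard
# normal density. Illustrative only.
import numpy as np

def bcv(hx, hy, X, Y):
    n = len(X)
    phi = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    d1 = (X[:, None] - X[None, :]) / hx          # Delta_1 for all pairs (i, j)
    d2 = (Y[:, None] - Y[None, :]) / hy          # Delta_2 for all pairs (i, j)
    s = d1**2 + d2**2
    w = (s**2 - 8.0 * s + 8.0) * phi(d1) * phi(d2)
    np.fill_diagonal(w, 0.0)                     # exclude the j == i terms
    return 1.0 / (4.0 * np.pi * n * hx * hy) + w.sum() / (4.0 * n * (n - 1) * hx * hy)

# the BCV bandwidths are the (hx, hy) minimizing bcv(hx, hy, X, Y), e.g. over a grid
```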

2. Asymptotic MISE Approximations

The performance of the estimate \hat{f}(x, y) in (1.2) is measured by the asymptotic mean integrated squared error (AMISE). A straightforward asymptotic approximation of (1.2) using the multivariate version of Taylor's series expansion yields the integrated variance (IV) and the integrated squared bias (ISB) as

IV = \frac{R(K)^d}{n h_x h_y}  and  ISB = \frac{1}{4}\,\mu_2(K)^2 \int \mathrm{tr}^2\left\{ \sum_{j=1}^{2} h_j^2\, \frac{\partial^2 f(x, y)}{\partial x_j^2} \right\} dx\, dy,   (2.1)

where R(K) is the roughness of the kernel, tr denotes the trace of the matrix of second partial derivatives of f, d is the dimension of the kernel, \mu_2(K) is the second moment of the kernel, and (x_1, x_2) = (x, y), (h_1, h_2) = (h_x, h_y). Combining the terms in (2.1) yields an estimate of the asymptotic mean integrated squared error, which is of the form

AMISE(h_x, h_y) = \frac{R(K)^d}{n h_x h_y} + \frac{1}{4}\,\mu_2(K)^2 \int \mathrm{tr}^2\left\{ \sum_{j=1}^{2} h_j^2\, \frac{\partial^2 f(x, y)}{\partial x_j^2} \right\} dx\, dy = \frac{R(K)^d}{n h_x h_y} + \frac{1}{4}\,\mu_2(K)^2 R(f''),   (2.2)

where R(f'') = \int \mathrm{tr}^2\left\{ \sum_{j=1}^{2} h_j^2\, \frac{\partial^2 f(x, y)}{\partial x_j^2} \right\} dx\, dy is the roughness of f''(x, y).

The smoothing parameter that minimizes the AMISE in (2.2) is given by

H_{AMISE} \approx \left[ \frac{d\, R(K)^{d}}{\mu_2(K)^{2}\, R(f'')} \right]^{\frac{1}{d+4}} \times n^{-\frac{1}{d+4}}.   (2.3)

The H_{AMISE} yields an AMISE of order O\!\left(n^{-4/(d+4)}\right), and the bandwidths are of order n^{-1/(d+4)}. The problem with (2.3) is that it contains a bias term which depends on R(f'') and cannot be evaluated if the true density function f is not known. Several suggestions have been made, and one of the simplest solutions to this problem is to obtain the value of R(f'') from the normal distribution. This works for unimodal data, but for multimodal data (which is precisely what kernel density estimation is expected to reveal) it performs poorly, and in that situation it serves instead as an initial point for other iterative methods [11, 12].
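The trade-off in (2.2)–(2.3) can be sketched numerically as below for the bivariate (d = 2) standard normal product kernel, for which R(K)^d = 1/(4π) and μ2(K) = 1, writing the criterion for a common bandwidth h = h_x = h_y and treating R(f″) as a given constant; the values R_fpp = 0.1 and n = 120 are placeholders, not numbers from the paper.

```python
# Sketch of the AMISE trade-off behind (2.2)-(2.3) for the d = 2 standard normal
# product kernel, with a common bandwidth h = hx = hy and an h-free roughness
# R_fpp standing in for R(f''); the numbers fed in are placeholders.
import numpy as np

d = 2
RK = 1.0 / (4.0 * np.pi)       # R(K)^d for the bivariate standard normal kernel
mu2 = 1.0                      # mu_2(K)

def amise(h, n, R_fpp):
    iv = RK / (n * h**d)                       # integrated variance term
    isb = 0.25 * mu2**2 * h**4 * R_fpp         # integrated squared bias term
    return iv + isb

def h_amise(n, R_fpp):
    # minimizer of amise(h), which reproduces the form of (2.3)
    return (d * RK / (mu2**2 * R_fpp * n)) ** (1.0 / (d + 4))

h = h_amise(n=120, R_fpp=0.1)
print(h, amise(h, n=120, R_fpp=0.1))
```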

3. The Gradient Method

Gradient methods are iterative methods for optimizing functions that are at least twice continuously differentiable [13]. Gradient methods involve generating successive points in the direction of the gradient function with the aim of obtaining the stationary points [14]. The gradient of a function f(x), denoted by \nabla f(x), is given by

\nabla f(x) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)^{T}.   (3.1)

Gradient methods are iterative techniques in which the elements of the gradient are nonlinear and a positive scalar (the stepsize) determines the distance between the current point and the previous point [13]. Generally, the gradient method requires the higher derivatives of the function to be approximated. In this work, we modify the gradient method of Barzilai and Borwein and the relaxed steepest descent method of Raydan and Svaiter [15, 16]. In the application of the gradient methods, we replace the unknown quantity R(f'') in (2.3) by a suitable kernel based estimate denoted by \hat{R}(f''(x)), i.e. R(f'') = \hat{R}(f''(x)). The kernel function used is the standard normal kernel, which produces smooth density estimates and simplifies the mathematical computations needed. In the case of the standard normal kernel, successive smoothing parameters are obtained from the approximation

H_{AMISE} \approx \left[ \frac{d\, R(K)^{d}}{\mu_2(K)^{2}\, \hat{R}(f''(x))} \right]^{\frac{1}{d+4}} \times n^{-\frac{1}{d+4}}.   (3.2)

Since all gradient methods are iterative and require the choice of a starting value, say X_o, we apply a kernel based estimate as the initial point for the iteration process, which is of the form

X_o = \left[ \frac{R(K)^{d}}{\mu_2(K)^{2}\, \sigma_j} \right]^{\frac{1}{d+4}} \times n^{-\frac{1}{d+4}},   (3.3)

where \sigma_j is the standard deviation of the jth variate.

3.1. The Barzilai and Borwein Gradient Method

This method solves the problems associated with the classical gradient method through a novel approach to the stepsize selection. It requires fewer computations and its rate of convergence is accelerated; it uses the same search direction as the classical gradient method but with better performance [17]. The method is used for solving the large sparse linear systems of equations that arise from the solution of partial differential equations, and it also applies in optimization theory due to its ease of computation and implementation [18, 19]. The Barzilai and Borwein gradient method is of the form

X_{t+1} = X_t + S_t,   t = 0, 1, 2, \ldots,   (3.4)

where S_t = -\lambda_t g_t and \lambda_t is the stepsize. The value of the stepsize can be obtained from (3.5) as

\lambda_t = \frac{g_t^T g_t}{g_t^T A\, g_t},   t = 0, 1, 2, \ldots,   (3.5)

where g_t = \nabla f(X_t) and A is the Hessian matrix of f evaluated at X_t, which must be symmetric. The value of the kernel based estimate \hat{R}(f''(x)) must be positive, i.e. \hat{R}(f''(x)) > 0, because the smoothing parameter must be positive. The estimate \hat{R}(f''(x)) = X_{t+1}, when substituted into (3.2), gives the smoothing parameter that minimizes the AMISE. In all the gradient methods considered, the modifications lie in the method of obtaining the initial point of the iteration, which is kernel based, and in the introduction of the third step that is used to obtain the required solution.

ALGORITHM 1 (The Modified Barzilai and Borwein Method).

STEP 1. Compute X_o = \left[ R(K)^{d} / \left( \mu_2(K)^{2} \sigma_j \right) \right]^{1/(d+4)} \times n^{-1/(d+4)}, where \sigma_j is the standard deviation of the jth variate for j = 1, 2, n is the sample size and d is the dimension of the kernel.
STEP 2. For t = 0, 1, 2, \ldots,
(a) Compute the vector g_t = \nabla f(X_t).
(b) Compute the step size \lambda_t = g_t^T g_t / (g_t^T A g_t).
(c) Set S_t = -\lambda_t g_t.
(d) Update X_{t+1} = X_t + S_t.
STEP 3. Employ \hat{R}(f''(x)) = X_{t+1} to compute the smoothing parameter using (3.2) above.
STEP 4. Test a criterion for stopping the iterations. If the test is satisfied, then stop. Otherwise, set X_t = X_{t+1} and continue with Step 2 by updating the step size \lambda_{t+1} = g_{t+1}^T g_{t+1} / (g_{t+1}^T A g_{t+1}), where g_{t+1} = \nabla f(X_{t+1}).

3.2. The Relaxed Steepest Descent Method

Another modification of the steepest descent method was made by Raydan and Svaiter [16], who concluded that the poor performance of the method is a function of the stepsize and not of the search direction. To solve the problem of stepsize selection, they introduced a scalar \theta, called the relaxation parameter, which lies between 0 and 2, into the classical steepest descent method. The classical steepest descent method is of the form

X_{t+1} = X_t - \lambda_t g_t,   t = 0, 1, 2, \ldots,   (3.6)

where g_t = \nabla f(X_t). The introduction of the relaxation parameter \theta, which is a random scalar on the interval [0, 2], results in the modified steepest descent method

X_{t+1} = X_t - \theta \lambda_t g_t,   t = 0, 1, 2, \ldots,   (3.7)

where \lambda_t = g_t^T g_t / (g_t^T A g_t). Multiplying the stepsize by the relaxation parameter improves the method by accelerating its rate of convergence when applied to numerical problems [20]. It should be noted that when \theta = 1 the classical steepest descent method is recovered, and so \theta \neq 1.

ALGORITHM 2 (The Modified Relaxation Method of Raydan and Svaiter).

STEP 1. Compute X_o = \left[ R(K)^{d} / \left( \mu_2(K)^{2} \sigma_j \right) \right]^{1/(d+4)} \times n^{-1/(d+4)}, where \sigma_j is the standard deviation of the jth variate for j = 1, 2, n is the sample size and d is the dimension of the kernel.
STEP 2. For t = 0, 1, 2, \ldots,
(a) Compute the gradient vector g_t = \nabla f(X_t).
(b) Compute the step size \lambda_t = g_t^T g_t / (g_t^T A g_t).
(c) Update X_{t+1} = X_t - \theta \lambda_t g_t.
STEP 3. Employ \hat{R}(f''(x)) = X_{t+1} to compute the smoothing parameter using (3.2) above.
STEP 4. Test a criterion for stopping the iterations. If the test is satisfied, then stop. Otherwise, set X_t = X_{t+1} and continue with Step 2 by updating the step size \lambda_{t+1} = g_{t+1}^T g_{t+1} / (g_{t+1}^T A g_{t+1}), where g_{t+1} = \nabla f(X_{t+1}).

In the application of this algorithm, our relaxation parameter is \theta = 1/n, where n is the sample size; the relaxation parameter is chosen as \theta = 1/n because of the role of the sample size in the choice of the smoothing parameter.
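The update rules in Algorithms 1 and 2 can be illustrated generically, as in the sketch below, on a toy quadratic objective with a known symmetric Hessian; the matrices, tolerance and function names are hypothetical, and this is not the authors' bandwidth-selection code.

```python
# Generic illustration of the update rules in Algorithms 1 and 2 on a toy
# quadratic objective f(x) = 0.5 x^T A x - b^T x (gradient g = A x - b,
# Hessian A); this is not the authors' bandwidth-selection code.
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 1.0]])       # symmetric positive definite Hessian
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

def gradient_iteration(x0, theta=1.0, tol=1e-5, max_iter=500):
    """theta = 1 gives the step (3.4)-(3.5) of Algorithm 1 as written;
    theta = 1/n gives the relaxed step (3.7) used in Algorithm 2."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        lam = (g @ g) / (g @ A @ g)          # stepsize (3.5)
        x_new = x - theta * lam * g          # update (3.7); reduces to (3.4) when theta = 1
        if np.linalg.norm(x_new - x) < tol:  # stopping criterion ||X_{t+1} - X_t|| < 1e-5
            return x_new
        x = x_new
    return x

n = 120                                       # sample size -> relaxation parameter 1/n
print(gradient_iteration(np.ones(2)))                                # Algorithm 1-style run
print(gradient_iteration(np.ones(2), theta=1.0 / n, max_iter=20000)) # Algorithm 2-style run
```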

Generally, it is known that the smoothing parameter depends very strongly on the sample size: as the sample size increases, the smoothing parameter tends to be reduced [21]. The stopping criterion is ||X_{t+1} - X_t|| < \varepsilon with \varepsilon = 10^{-5}, where \varepsilon is the tolerance level. The results of the modified methods shall be compared with the biased cross-validation method.

4. Results and Discussion

In order to illustrate the efficiency of these methods, we compare their performance with the biased cross-validation method using the asymptotic mean integrated squared error (AMISE) as the error criterion function. The results are presented by comparing the gradient methods (the Modified Barzilai and Borwein (MBB) and the Modified Relaxed Steepest Descent (MRSD)) with the Biased Cross-Validation (BCV) for the bivariate kernel density estimator, using the asymptotic mean integrated squared error (AMISE) as the error criterion function for measuring their performance. As is generally known, one method is better than another when it gives a smaller value of the AMISE [12]. The comparison also involves the kernel estimates (graphs) of the methods considered, since kernel density estimation has direct applications in data analysis such as exploratory data analysis and data visualisation [1, 2, 6].

Two sets of data were used to illustrate the results of the methods, showing in tabular form the smoothing parameter, the Asymptotic Integrated Variance (AIV), the Asymptotic Integrated Squared Bias (AISB) and the Asymptotic Mean Integrated Squared Error (AMISE) using the bivariate standard normal kernel. An important point to note from the tables below is that, in terms of performance, the gradient methods resulted in a smaller value of the AMISE.

One very important and notable step to take when examining a bivariate data set is to consider the scatterplot of the data. However, as is the case in most situations, while the kernel density estimate will reveal or highlight important features, the scatterplot cannot play this vital role [2]. Scatterplots have been regarded as the most frequently used tools for graphically displaying bivariate data sets, but they have the serious disadvantage that the eye is drawn only to the peripheries of the data cloud, while structure in the main body of the data is hidden by the high density of the points [22]. In kernel density estimates these disadvantages of the scatterplot are removed, because the estimates have an advantage in presenting information about the distribution of the data set. As noted from the scatterplots of the data sets considered, the modes are not apparent from the scatterplots as they are in the kernel density estimates, and this exemplifies the usefulness of bivariate kernel density estimates for highlighting structure.

The first data set examined is the Volcanic Crater data of the Bunyaruguru Volcanic Field in Western Uganda [23]. It involves the locations of the centers of the craters of 120 volcanoes in two variables, in which variable X represents the first center while variable Y represents the second center. Figure 1 shows the scatterplot of the Crater data, and the scatterplot clearly shows a strong relationship between the variables, with correlation coefficient \rho = 0.888888. It is evident that the two locations of the centers of the craters are highly positively correlated. A significant feature of this data set that is very noticeable from the kernel density estimates (graphs) is the bimodality of the data, but this is hidden in the scatterplot. We standardized the data in order to obtain equal variances in each dimension because, in most multivariate statistical analyses, the data should be standardized in order to make sure that the difference among the ranges of the variables disappears [2, 10, 24, 25]. Figures 2, 3, and 4 below show the kernel estimates of the Crater data.
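A minimal sketch of the preprocessing just described (standardizing each variable to unit variance before evaluating the product-kernel estimate on a grid for plotting) is given below; the synthetic data, bandwidth values and grid are stand-ins for the Crater and Old Faithful computations.

```python
# Sketch of the preprocessing described above: standardize each variable to unit
# variance, then evaluate a product-kernel density estimate on a grid for a
# surface/contour plot.  Synthetic data stand in for the Crater/Old Faithful sets.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=272)
Y = 0.9 * X + 0.45 * rng.normal(size=272)

Xs = (X - X.mean()) / X.std(ddof=1)          # standardized variables:
Ys = (Y - Y.mean()) / Y.std(ddof=1)          # equal (unit) variance in each dimension
print("correlation:", np.corrcoef(Xs, Ys)[0, 1])

hx, hy = 0.36, 0.39                          # bandwidths of the order reported in Table 2
phi = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
gx, gy = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
dens = np.mean(phi((gx[..., None] - Xs) / hx) * phi((gy[..., None] - Ys) / hy),
               axis=-1) / (hx * hy)
# dens can now be passed to a contour or surface plotting routine
```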

Figure 1. Scatterplot of the Volcanic Crater Data


Figure 2. Kernel Estimate of the Crater Data using BCV Smoothing Parameter

Figure 3. Kernel Estimate of the Crater Data using MBB Smoothing Parameter.

Figure 4. Kernel Estimate of the Crater Data using MRSD Smoothing Parameter

The biased cross-validation method yields a smoothing parameter that produces an estimate in which the bimodality is clearly present, as shown in Figure 2. The gradient methods also yield smoothing parameter values that retain the bimodality of the data, as shown in Figure 3 and Figure 4 respectively. The table below shows the bandwidths, the asymptotic integrated variance (AIV), the asymptotic integrated squared bias (AISB) and the asymptotic mean integrated squared error (AMISE) of the methods considered.

Table 1. Bandwidths, AIV, AISB and AMISE for the Crater Data

Methods    h_x        h_y        AIV           AISB          AMISE
BCV        0.45042    0.30224    0.00487124    0.00066901    0.00554025
MBB        0.48675    0.48268    0.00282256    0.00148993    0.00431249
MRSD       0.48802    0.48393    0.00280795    0.00150548    0.00431343
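As a quick consistency check, the AMISE column in Table 1 is the sum of the AIV and AISB columns, in line with (2.2):

```python
# The AMISE column of Table 1 equals AIV + AISB, as in (2.2); check the values:
rows = {"BCV":  (0.00487124, 0.00066901, 0.00554025),
        "MBB":  (0.00282256, 0.00148993, 0.00431249),
        "MRSD": (0.00280795, 0.00150548, 0.00431343)}
for name, (aiv, aisb, amise) in rows.items():
    print(name, abs((aiv + aisb) - amise) < 1e-8)   # True for every row
```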


From Table 1 above, it is obvious that, in terms of performance, the biased cross-validation method produced the largest AMISE value. The gradient methods yield smoothing parameters with smaller AMISE values, as shown in Table 1.

The second data set examined is the waiting time between eruptions and the duration of the eruption for the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA [26]. The data set is made up of 272 observations on two variables, in which variable X represents the duration of the eruption while variable Y represents the waiting time between eruptions. One very important point to note from the bivariate kernel estimates of these data is that the data set is bimodal, and this provides very strong evidence in favour of the eruption times and the time interval until the next eruption exhibiting a bimodal distribution [2]. The bivariate kernel density estimates are bimodal, and it is evident that the time interval until the next eruption is highly positively correlated with the duration of the eruption. The scatterplot of the Old Faithful data is shown in Figure 5, while Figures 6, 7 and 8 are the bivariate kernel estimates of the data for the methods considered. The scatterplot shows a strong relationship between the variables, with correlation coefficient \rho = 0.99999. The data are also standardized to obtain equal variances in each dimension [24, 25].

Table 2 below shows the smoothing parameters, the asymptotic integrated variance (AIV), the asymptotic integrated squared bias (AISB) and the asymptotic mean integrated squared error (AMISE) of the methods considered.

Figure 5. Scatterplot of the Old Faithful Data

Figure 6. Kernel Estimate of the Old Faithful Data using BCV Smoothing Parameter


Figure 7. Kernel Estimate of the Old Faithful Data using MBB Smoothing Parameter

Figure 8. Kernel Estimate of the Old Faithful Data using MRSD Smoothing Parameter

Table 2. Bandwidths, AIV, AISB and AMISE for the Old Faithful Data

Methods    h_x        h_y        AIV           AISB          AMISE
BCV        0.29382    0.39688    0.00250897    0.00040169    0.00291066
MBB        0.36263    0.38846    0.00207688    0.00048786    0.00256474
MRSD       0.36364    0.38955    0.00206532    0.00049335    0.00255867

Table 2 shows the performance of the biased cross-validation method and the gradient methods; from the results, the biased cross-validation method yielded the larger AMISE value. Again, the gradient methods yield smoothing parameters with the smaller AMISE values, as presented in Table 2.

5. Conclusions

The methods presented are compared with the biased cross-validation method because they are based on a suitable estimate of the asymptotic mean integrated squared error (AMISE). The results presented show that the new methods are reliable and that they provide improved choices of the smoothing parameter. An advantage of the gradient methods is that they can be easily computed provided the function f is at least twice differentiable. For the bivariate case, which sits between the univariate and higher dimensional kernels, and with K a standard normal product kernel, the gradient methods are, on the basis of their performance, at least as competitive as the existing biased cross-validation method.

REFERENCES

[1] Simonoff, J. S. Smoothing Methods in Statistics. Springer-Verlag, New York, 1996.

[2] Silverman, B. W. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London, 1986.

[3] Chacón, J. E. and Duong, T. Data-Driven Density Derivative Estimation, with Applications to Nonparametric Clustering and Bump Hunting. Electronic Journal of Statistics, 7, 499–532, 2013.

[4] Jiang, M. and Provost, S. B. A Hybrid Bandwidth Selection Methodology for Kernel Density Estimation. Journal of Statistical Computation and Simulation, 84(3), 614–627, 2014.

[5] Duong, T. and Hazelton, M. L. Plug-In Bandwidth Matrices for Bivariate Kernel Density Estimation. Nonparametric Statistics, 15(1), 17–30, 2003.

[6] Scott, D. W. Multivariate Density Estimation: Theory, Practice and Visualisation. Wiley, New York, 1992.

[7] Zhang, X., Wu, X., Pitt, D. and Liu, Q. A Bayesian Approach to Parameter Estimation for Kernel Density Estimation via Transformation. Annals of Actuarial Science, 5(2), 181–193, 2011.

[8] Wand, M. P. and Jones, M. C. Comparison of Smoothing Parameterizations in Bivariate Kernel Density Estimation. Journal of the American Statistical Association, 88, 520–528, 1993.

[9] Scott, D. W. and Terrell, G. R. Biased and Unbiased Cross-Validation in Density Estimation. Journal of the American Statistical Association, 82, 1131–1146, 1987.

[10] Sain, R. S., Baggerly, A. K. and Scott, D. W. Cross-Validation of Multivariate Densities. Journal of the American Statistical Association, 89, 807–817, 1994.

[11] Zhang, X., King, M. L. and Hyndman, R. J. A Bayesian Approach to Bandwidth Selection for Multivariate Kernel Density Estimation. Computational Statistics and Data Analysis, 50, 3009–3031, 2006.

[12] Jarnicka, J. Multivariate Kernel Density Estimation with a Parametric Support. Opuscula Mathematica, 29(1), 41–45, 2009.

[13] Hansen, B. E. Econometrics. University of Wisconsin, Spring, 2013.

[14] Cameron, A. C. and Trivedi, P. K. Microeconometrics: Methods and Applications. Cambridge University Press, New York, USA, 2005.

[15] Barzilai, J. and Borwein, J. M. Two-Point Step Size Gradient Method. IMA Journal of Numerical Analysis, 8(1), 141–148, 1988.

[16] Raydan, M. and Svaiter, B. F. Relaxed Steepest Descent and Cauchy-Barzilai-Borwein Method. Computational Optimization and Applications, 21(2), 155–167, 2002.

[17] Raydan, M. Convergence Properties of the Barzilai and Borwein Gradient Method. PhD Thesis, Rice University, Houston, Texas, 1991.

[18] Farid, M., Leong, W. J. and Hassan, M. A. A New Two-Step Gradient-Type Method for Large-Scale Unconstrained Optimization. Computers and Mathematics with Applications, 59, 3301–3307, 2012.

[19] Dai, Y. H. A New Analysis on the Barzilai-Borwein Gradient Method. JORC, 1, 187–198, 2013.

[20] Battaglia, J. P. The Eigenstep Method (An Iterative Method for Unconstrained Quadratic Optimization). American Journal of Operational Research, 3(2), 57–64, 2013.

[21] Zambom, A. Z. and Dias, R. A Review of Kernel Density Estimation with Applications to Econometrics. Universidade Estadual de Campinas, 2012.

[22] Wand, M. P. and Jones, M. C. Kernel Smoothing. Chapman and Hall, London, 1995.

[23] Bailey, T. C. and Gatrell, A. C. Interactive Spatial Data Analysis. Longman, Harlow, 1995.

[24] Cula, S. G. and Toktamis, O. Estimation of Multivariate Probability Density Function with Kernel Functions. Journal of the Turkish Statistical Association, 3(2), 29–39, 2000.

[25] Sain, R. S. Multivariate Locally Adaptive Density Estimation. Computational Statistics and Data Analysis, 39, 165–186, 2002.

[26] Azzalini, A. and Bowman, A. W. A Look at Some Data on the Old Faithful Geyser. Applied Statistics, 39, 357–365, 1990.

Mathematics and Statistics 6(1): 9-15, 2018 http://www.hrpub.org DOI: 10.13189/ms.2018.060102

Exponential Dichotomy and Bifurcation Conditions of Solutions of the Hamiltonian Operators Boundary Value Problems in the Hilbert Space

Pokutnyi Oleksandr

Institute of Mathematics, National Academy of Sciences of Ukraine, Kiev, 01004, Ukraine

Copyright © 2018 by the authors, all rights reserved. The authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution 4.0 International License.

Abstract  Sufficient conditions for the existence of solutions of a weakly linear perturbed boundary value problem are obtained in the so-called resonance (critical) case. An iterative process for finding the solutions is presented. Necessary and sufficient conditions for the existence of solutions, bounded solutions, generalized solutions and quasi-solutions are obtained.

Keywords  Bifurcation Conditions, Lyapunov Equation, Exponential Dichotomy, Vishik-Lyusternik Method

1 Introduction

Methods of perturbation theory, whose foundations were laid by Poincare and Lyapunov, are a powerful tool in applied mathematics and mechanics and permit one to obtain approximate analytical representations of solutions of rather complicated boundary value problems. Most of these methods arose when solving specific problems of mechanics, celestial mechanics, and physics [1]-[8]. Numerous examples of various problems that can be studied by operator methods of perturbation theory can be found in the monograph [6]. The present paper uses the theory of generalized inverse and Moore-Penrose pseudoinverse operators [9]-[13] to construct a perturbation theory for the Lyapunov type equation [14], [15] in resonance cases [16]. We investigate bifurcation conditions for solutions of boundary value problems in the Hilbert space when the problem under consideration does not have solutions for every right-hand side of the equation. We use a modification of the well-known Vishik-Lyusternik method [17] for the case of operator differential boundary value problems.

Statement of the problem. Consider the boundary value problem

dX(t, ε)/dt = [A, X(t, ε)] + Φ(t) + εD(t)X(t, ε),  t ∈ J,   (1)

lX(·, ε) = α.   (2)

Here X(t, ε) is an unknown operator-valued function from the space C^1(J, L(H)), where

C^1(J, L(H)) := {X : J → L(H), ||X|| = sup_{t∈J} max{||X(t)||, ||dX(t)/dt||} < ∞}

is the space of continuously differentiable operator-valued functions (or another function space), J ⊂ R; A ∈ L(H) is a linear and bounded operator, [A, X(t)] = AX(t) − X(t)A; D(t), Φ(t) ∈ C(J, L(H)) are given strongly continuous operator-valued functions; and l : C^1(J, L(H)) → H_1 is a linear and bounded operator which translates solutions of (1) into the Hilbert space H_1. At first we seek the solution of boundary value problem (1), (2) which for ε = 0 turns into one of the solutions of the generating problem (3), (4):

dX(t)/dt = [A, X(t)] + Φ(t),   (3)

lX(·, ε) = α.   (4)

We find a solution X(t, ε) ∈ C^1(J; L(H)) × C(0, ε_0] for a fixed ε_0 > 0 (J ⊂ [a; b]).

2 Unperturbed equation

1) Consider the case when the generating problem has solutions [18], [19]. We find the solution of boundary value problem (1), (2) in the form of a series

X(t, ε) = Σ_{i=0}^{+∞} ε^i X_i(t).

Equating coefficients of the corresponding powers of ε, for ε^0 we obtain the following boundary value problem:

dX_0(t)/dt = [A, X_0(t)] + Φ(t),   (5)

lX_0(·) = α.   (6)

Solutions of the equation (5) have the following form:

X_0(t, M) = e^{tA} M e^{−tA} + ∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ,   (7)

for every operator M ∈ L(H). Indeed,

dX_0(t, M)/dt = A e^{tA} M e^{−tA} − e^{tA} M e^{−tA} A + Φ(t) + A ∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ − (∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ) A = A X_0(t, M) − X_0(t, M) A + Φ(t),

since

(∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ)'_t = e^{(t−t)A} Φ(t) e^{(t−t)A} + ∫_0^t (e^{(t−τ)A} Φ(τ) e^{(τ−t)A})'_t dτ
= Φ(t) + ∫_0^t (e^{(t−τ)A})'_t Φ(τ) e^{(τ−t)A} dτ + ∫_0^t e^{(t−τ)A} Φ(τ) (e^{(τ−t)A})'_t dτ
= Φ(t) + ∫_0^t A e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ − ∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} A dτ
= Φ(t) + A ∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ − (∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ) A.

Substituting into the boundary condition (6), we obtain the following operator equation:

QM = g_0,   (8)

where

QM = l e^{·A} M e^{−·A},   g_0 = α − l ∫_0^· e^{(·−τ)A} Φ(τ) e^{(τ−·)A} dτ.
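Formula (7) can be sanity-checked numerically in a finite-dimensional setting (H = R^2, so that L(H) consists of 2×2 matrices); the matrices A, M and the forcing Φ(t) below are hypothetical, and the sketch only illustrates the variation-of-constants formula, it is not part of the paper.

```python
# Finite-dimensional numerical check of formula (7): with hypothetical 2x2
# matrices A, M and a matrix-valued forcing Phi(t), the function
# X0(t) = e^{tA} M e^{-tA} + int_0^t e^{(t-s)A} Phi(s) e^{(s-t)A} ds
# should satisfy dX0/dt = [A, X0] + Phi(t), i.e. equation (5).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])          # hypothetical bounded operator A
M = np.array([[1.0, 0.0], [0.3, -1.0]])           # hypothetical constant operator M
Phi = lambda t: np.array([[np.sin(t), 0.0], [0.0, np.cos(2.0 * t)]])   # forcing term

def X0(t, n=4000):
    """Right-hand side of (7), with the integral done by the trapezoid rule."""
    s = np.linspace(0.0, t, n)
    terms = [expm((t - si) * A) @ Phi(si) @ expm((si - t) * A) for si in s]
    integral = (sum(terms) - 0.5 * (terms[0] + terms[-1])) * (s[1] - s[0])
    return expm(t * A) @ M @ expm(-t * A) + integral

t, h = 0.7, 1e-4
lhs = (X0(t + h) - X0(t - h)) / (2.0 * h)          # numerical derivative dX0/dt
rhs = A @ X0(t) - X0(t) @ A + Phi(t)               # [A, X0(t)] + Phi(t)
print(np.max(np.abs(lhs - rhs)))                   # close to 0: (7) solves (5)
```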

Let us study the solvability of the operator equation (8) in the Hilbert spaces. We distinguish three types of solutions.

1. Classical solutions. If the operator Q is normally solvable, i.e., has closed range, then it is well known [11] that g_0 ∈ R(Q) if and only if P_{N(Q*)} g_0 = 0, where P_{N(Q*)} is the projection onto the cokernel of Q [11]. In this case, there exists a generalized inverse Q^- [11] and the solution set of equation (8) can be represented in the form

M = Q^- g_0 + P_{N(Q)} M_0,  M_0 ∈ L(H),

where P_{N(Q)} is the projection onto the kernel of Q.

2. Strong generalized solutions. Now assume that the range of Q is not closed. Let us show that Q admits a normally solvable extension Q̄ with closed range. Since the operator Q is linear and bounded, we have the following direct sum decomposition of the space H_1:

H_1 = closure(R(Q)) ⊕ Y,

where Y = closure(R(Q))^⊥. Suppose that

L(H) = N(Q) ⊕ X.

By virtue of these decompositions, there exist projections P_{N(Q)}, P_X and orthogonal projections P_{closure(R(Q))} and P_Y onto the respective subspaces. Let H_2 = L(H)/N(Q) be the quotient space of L(H) by the kernel N(Q). It is well known [13], [20] that there exist a continuous bijection p : X → H_2 and a projection j : L(H) → H_2. The triple (L(H), H_2, j) is a locally trivial bundle with fiber P_{N(Q)}L(H). Consider the operator

Q̄ = P_{closure(R(Q))} Q j^{-1} p : X → R(Q) ⊂ closure(R(Q)).

One can readily verify that it is linear, injective, and continuous. Now we use completion [21] with respect to the norm ||x||_X = ||Q̄x||_F, where F = closure(R(Q)), and obtain a new space X̄ and an extended operator Q̄. Then

Q̄ : X̄ → closure(R(Q)),  X ⊂ X̄,

and the operator thus constructed is a homeomorphism between X̄ and closure(R(Q)). Consider the extended operator Q̄ = Q̄ P_X̄ : L̄(H) → H_1,

L̄(H) = N(Q) ⊕ X̄,  H_1 = closure(R(Q)) ⊕ Y.

Obviously, Q̄M = QM for M ∈ L(H), and the operator Q̄ is normally solvable and hence generalized-invertible, with generalized inverse Q̄^-, which will be called the strong generalized inverse of Q [13], [20].

Now we use the operator Q̄^- to establish the generalized solvability of equation (8). The criterion for the generalized solvability of equation (8) has the form

P_Y g_0 = 0,

and any element of the set {Q̄^- g_0 + P_{N(Q)} M_0 : M_0 ∈ L(H)} will be called a corresponding strong generalized solution of equation (8).

Remark 1. i. Note that P_{N(Q̄)} = P_{N(Q)} and the set {Q̄^- g_0 + P_{N(Q)} M_0 : M_0 ∈ L(H)} consists of usual solutions of the equation Q̄M = g_0; hence the element g_0 belongs to the range of the operator Q̄. If g_0 ∈ R(Q), then the strong generalized solution defined above is a classical solution.
ii. If we consider the equality QM = g_0 at every point x ∈ H, then for the representation of solutions we can use the Moore-Penrose pseudoinverse operator Q^+ or Q̄^+.

3. Generalized quasi-solutions. Now consider the case in which g_0 does not belong to closure(R(Q)), or, equivalently, the element g_0 satisfies the condition P_Y g_0 ≠ 0. Then there exist no strong generalized solutions, but there exist elements of X̄ that are solutions of the variational problem inf ||Q̄M − g_0||_{H_1}, where Q̄ = Q̄P_X̄ and the infimum is taken over all M_0 ∈ X̄. The set of these elements has the form {Q̄^- g_0 + P_{N(Q)} M_0 : M_0 ∈ L(H)}. We call them generalized quasi-solutions by analogy with the usual quasi-solutions [11]. Thus, the following theorem holds.

Theorem 1. The boundary value problem (5), (6) in Hilbert spaces has:
1. Strong generalized solutions if and only if

P_Y {α − l ∫_0^· e^{(·−τ)A} Φ(τ) e^{(τ−·)A} dτ} = 0;   (9)

2. Generalized quasi-solutions if and only if

P_Y {α − l ∫_0^· e^{(·−τ)A} Φ(τ) e^{(τ−·)A} dτ} ≠ 0;   (10)

3. Under condition (9) or (10), the strong generalized solutions, classical solutions, and quasi-solutions of the boundary value problem (5), (6) have the form

X_0(t, M_0) = e^{tA} P_{N(Q)} M_0 e^{−tA} + e^{tA} Q^- α e^{−tA} + (G[Φ])(t),   (11)

where

(G[Φ])(t) = ∫_0^t e^{(t−τ)A} Φ(τ) e^{(τ−t)A} dτ − l ∫_0^· e^{(·−τ)A} Φ(τ) e^{(τ−·)A} dτ

is the generalized Green operator of the boundary value problem (5), (6) and M_0 is an arbitrary operator of L(H).

Remark 2. If the range of Q is closed, i.e., the operator Q is normally solvable, then condition (9) guarantees the existence of classical solutions. In this case, it is equivalent to the condition

α − l ∫_0^· e^{(·−τ)A} Φ(τ) e^{(τ−·)A} dτ ∈ R(Q).

Remark 3. If the operator A and the operator-valued function X(t, ε) commute, [A, X(t, ε)] = 0, then we obtain the following boundary value problem:

X'(t, ε) = Φ(t) + εD(t)X(t, ε),   (12)

lX(·, ε) = α.   (13)

The generating boundary value problem has the form

X'(t, ε) = Φ(t),   (14)

lX(·, ε) = α.   (15)

Due to Theorem 1 we obtain the following corollary (in this case Q = l).

Corollary. The boundary value problem (14), (15) in Hilbert spaces has:
1. Strong generalized solutions if and only if

P_Y {α − l ∫_0^· Φ(τ) dτ} = 0;   (16)

2. Generalized quasi-solutions if and only if

P_Y {α − l ∫_0^· Φ(τ) dτ} ≠ 0;   (17)

3. Under condition (16) or (17), the strong generalized solutions, classical solutions, and quasi-solutions of the boundary value problem (14), (15) have the form

X_0(t, C) = P_{N(Q)} C + Q^- {α − l ∫_0^· Φ(τ) dτ} + ∫_0^t Φ(τ) dτ,   (18)

C ∈ L(H). We can see that in the case when [A, X(t, ε)] = 0 we do not get a trivial situation.

3 Bifurcation condition

Assume that the boundary value problem (3), (4) ((5), (6)) has classical solutions, i.e., condition (9) is satisfied. Let us find conditions on the operators D(t) and l under which the perturbed boundary value problem (1), (2) has classical solutions. Let us show that this problem can be solved with the use of the operator

B_0 M = P_Y l ∫_0^· e^{(·−τ)A} D(τ) e^{τA} P_{N(Q)} M e^{−τA} e^{(τ−·)A} dτ.   (19)

As we said earlier, following [22], we seek solutions of the boundary value problem (1), (2) in the form of the series

X(t, ε) = Σ_{i=0}^{+∞} ε^i X_i(t)   (20)

in powers of the small parameter ε. The operator M_0 ∈ L(H) will be determined at the next stage of the iterative process. The coefficient X_1(t) of ε^1 is a solution of the boundary value problem

dX_1(t)/dt = [A, X_1(t)] + D(t) X_0(t, M_0),   (21)

lX_1(·) = 0.   (22)

In view of condition (9), the solvability criterion of problem (21), (22) acquires the form

−P_Y l ∫_0^· e^{(·−τ)A} D(τ) X_0(τ, M_0) e^{(τ−·)A} dτ = 0,   (23)

whence we finally obtain the operator equation

B_0 M_0 = −P_Y [ l ∫_0^· e^{(·−τ)A} D(τ) { e^{τA} Q^- α e^{−τA} + (G[Φ])(τ) } e^{(τ−·)A} dτ ].   (24)

For simplicity, suppose that the operator B_0 is generalized invertible and P_{N(B_0*)} P_Y = 0, where P_{N(B_0*)} is the projection onto the kernel of the adjoint operator B_0*. Then equation (24) is solvable. The set of strong generalized solutions of equation (24) has the form

M_0 = −B_0^- P_Y [ l ∫_0^· e^{(·−τ)A} D(τ) { e^{τA} Q^- α e^{−τA} + (G[Φ])(τ) } e^{(τ−·)A} dτ ] + P_{N(B_0)} M_ρ,   (25)

where M_ρ ∈ L(H) is arbitrary and P_{N(B_0)} is a projection onto the kernel of the operator B_0. For convenience, we rewrite this relation in the form

M_0 = M̄_0 + P_{N(B_0)} M_ρ,

where

M̄_0 = −B_0^- P_Y [ l ∫_0^· e^{(·−τ)A} D(τ) { e^{τA} Q^- α e^{−τA} + (G[Φ])(τ) } e^{(τ−·)A} dτ ].

Then the solution set of the boundary value problem (5), (6) has the form

X_0(t, M_ρ) = X_0(t, M̄_0) + Ȳ_0(t)[P_{N(B_0)} M_ρ],  M_ρ ∈ L(H),

X_0(t, M̄_0) = e^{tA} P_{N(Q)} M̄_0 e^{−tA} + e^{tA} Q^- α e^{−tA} + G[Φ](t),

where the operator Ȳ_0 is defined by the rule

Ȳ_0(t)[R] := e^{tA} P_{N(Q)} R e^{−tA}.

Now we use the linearity of the generalized Green operator to represent the solution set of the boundary value problem in the form

X_1(t, M_1) = e^{tA} P_{N(Q)} M_1 e^{−tA} + (G[D(·) X_0(·, M_ρ)])(t),

or

X_1(t, M_1) = e^{tA} P_{N(Q)} M_1 e^{−tA} + (G[D(·) X_0(·, M̄_0)])(t) + (G[D(·) Ȳ_0(·)[P_{N(B_0)} M_ρ]])(t),

where the operator M_1 will be found at the next step of the iterative process. The coefficient X_2(t) of ε^2 in the series (20) is a solution of the boundary value problem

dX_2(t)/dt = [A, X_2(t)] + D(t) X_1(t, M_1),   (26)

lX_2(·) = 0.   (27)

The solvability condition (9) for the problem (26), (27) becomes

−P_Y l ∫_0^· e^{(·−τ)A} D(τ) X_1(τ, M_1) e^{(τ−·)A} dτ = 0,   (28)

or, in the form of an operator equation,

B_0 M_1 = −P_Y l ∫_0^· e^{(·−τ)A} D(τ) G[D(·) X_0(·, M̄_0)](τ) e^{(τ−·)A} dτ − P_Y l ∫_0^· e^{(·−τ)A} D(τ) G[D(·) Ȳ_0(·)[P_{N(B_0)} M_ρ]](τ) e^{(τ−·)A} dτ.   (29)

By virtue of solvability, we find the operator

M_1 = M̄_1 + F_1[P_{N(B_0)} M_ρ],

where

M̄_1 = −B_0^- P_Y l ∫_0^· e^{(·−τ)A} D(τ) G[D(·) X_0(·, M̄_0)](τ) e^{(τ−·)A} dτ,

F_1[P_{N(B_0)} M_ρ] = P_{N(B_0)} M_ρ − B_0^- P_Y l ∫_0^· e^{(·−τ)A} D(τ) G[D(·) Ȳ_0(·)[P_{N(B_0)} M_ρ]](τ) e^{(τ−·)A} dτ.

Then the set of solutions of the boundary value problem (21), (22) has the form

X_1(t, M_ρ) = X_1(t, M̄_1) + Ȳ_1(t)[P_{N(B_0)} M_ρ],  M_ρ ∈ L(H),

X_1(t, M̄_1) = e^{tA} P_{N(Q)} M̄_1 e^{−tA} + (G[D(·) X_0(·, M̄_0)])(t),

Ȳ_1(t)[R] := (G[D(·) Ȳ_0(·)[R]])(t) + e^{tA} P_{N(Q)} F_1[R] e^{−tA}.

Hence the solution set of the boundary value problem (26), (27) has the form

X_2(t, M_2) = e^{tA} P_{N(Q)} M_2 e^{−tA} + (G[D(·) X_1(·, M_ρ)])(t),

or, in the form,

X_2(t, M_2) = e^{tA} P_{N(Q)} M_2 e^{−tA} + (G[D(·) X_1(·, M̄_1)])(t) + (G[D(·) Ȳ_1(·)[P_{N(B_0)} M_ρ]])(t).

Arguing further by induction, one can readily show that if the condition on the product of projections is satisfied, then the problem of determining the coefficient X_i(t) of ε^i in the series (20) is reduced to the solvability of the operator boundary value problem

dX_i(t)/dt = [A, X_i(t)] + D(t) X_{i−1}(t, M_{i−1}),   (30)

lX_i(·) = 0.   (31)

The solvability condition (9) for the problem (26), (27) beco- lXi(·) = 0. (31) mes The operator Mi is determined as follows: Z · (·−τ)A (τ−·)A −PY l e D(τ)X1(τ, M1)e dτ = 0, (28) Mi = M i + Fi[P Mρ], 0 N(B0) Mathematics and Statistics 6(1): 9-15, 2018 13 where 4 Conclusions Z · − (·−τ)A M i = −B0 PY l e D(τ)× 0 1. Proposed in the article method works in the following case: assume that unperturbed boundary value problem (5), (6) (τ−·)A ×G[D(·)Xi−1(·, M i−1)](τ)e dτ, does not have strong generalized (or classical solutions), i.e. condition (10) is satisfied. In this case we seek solutions of the

Fi[PN(B0)Mρ] = PN(B0)Mρ− boundary value problem (1), (2) in the form of part of the series

Z · +∞ − −B P l e(·−τ)AD(τ)× X i 0 Y X(t, ε) = ε Xi(t). 0 i=−k ×G[D(·)Y (·)[P M ]](τ)e(τ−·)Adτ, i−1 N(B0) ρ 2. Number of solutions of boundary value problem (1), (2) depends from the dimension of the subspace P L(H). and then the set of solutions of the boundary value problem N(B0) (30), (31) can be represented in the form 3. We can use the modification of theorem 1 and 2 for inves- tigating the following boundary value problem X (t, M ) = X (t, M ) + Y (t)[P M ],M ∈ L(H) i ρ i i i N(B0) ρ ρ dX(t, ε) = [A(t),X(t, ε)] + Φ(t) + εD(t)X(t, ε), t ∈ J dt where (32)

tA −tA lX(·, ε) = α, (33) Xi(t, M i) = e PN(Q)M ie +(G[D(·)Xi−1(·, M i−1)])(t), with nonstationary operator-valued function A(t) which can be unbounded in general case [23]. Y (t)[R] := (G[D(·)Y (·)[R]])(t) + etAP F [R]e−tA. i i−1 N(Q) i 4. Proposed in the article method can be used for finding of The convergence of the series (20) for given ε can be proved by bounded on the whole axis solutions. Namely, consider opera- the standard majorant method as in [22]. Thus, the following tor differential equation theorem holds. dX(t) Theorem 2. If the unperturbed operator boundary value = [A(t),X(t)] + Φ(t), (34) problem (3), (4) has classical solutions, then, under the condi- dt tion where t ∈ R, Φ(t) ∈ BC(R, L(H)), BC(R, L(H)) is the P ∗ P = 0, Banach space of continuous and bounded on R operator-valued N(B0 ) Y functions: the operator boundary value problem (1), (2) has a ρ- parameter family of classical solutions in the form of the series BC(R,H) := {Φ: R → H, ||Φ||L(H) = sup ||Φ(t)|| < ∞}; t∈R +∞ X i A(t) ∈ L(H), t ∈ R: Xi(t, ε, Mρ) = ε [Xi(t, M i) + Y i(t)[PN(B0)Mρ]], i=0 |||A||| = sup ||A(t)|| < ∞, t∈R for any Mρ ∈ L(H), which is absolutely convergent for suffi- ciently small given ε ∈ (0, ε∗]; here and homogeneous equation

X (t, M ) = X (t, M ) + Y (t)[P M ],M ∈ L(H), dX(t) 0 ρ 0 0 0 N(B0) ρ ρ = [A(t),X(t)]. (35) dt tA −tA tA − −tA X0(t, M 0) = e PN(Q)M 0e + e Q αe + G[Φ](t), We say that evolution operator U(t) is defined on (35) if the

tA −tA following equality is hold: Y 0(t)[PN(B0)Mρ] := e PN(Q)PN(B0)Mρe , dU(t) tA −tA = [A(t),U(t)],U(0) = I. Xi(t, M i) = e PN(Q)M ie +(G[D(·)Xi−1(·, M i−1)])(t), dt Definition. [15], [24]. Equation (35) admits an exponential Y (t)[P M ] := (G[D(·)Y (·)[P M ]])(t)+ i N(B0) ρ i−1 N(B0) ρ dichotomy on the interval J if there exist a projector P (P 2 = tA −tA P ) and constants K ≥ 1 and α > 0 such that, for any t, s ∈ J, +e P Fi[P Mρ]e , i = 1, 2, .... N(Q) N(B0) the following estimates are true:

Fi[P Mρ] = P Mρ− N(B0) N(B0) ||U(t)PU −1(s)|| ≤ Ke−α(t−s), t ≥ s, Z · − (·−τ)A −1 α(t−s) −B0 PY l e D(τ)× ||U(t)(E − P )U (s)|| ≤ Ke , s ≥ t, 0 where U(t) = U(t, 0) is the evolution operator of equation (τ−·)A ×G[D(·)Y i−1(·)[PN(B0)Mρ]](τ)e dτ. (35). As in [23] we can prove the following theorem. Exponential Dichotomy and Bifurcation Conditions of Solutions of 14 the Hamiltonian Operators Boundary Value Problems in the Hilbert Space

Theorem 3. Suppose that the homogeneous equation (35) admits an exponential dichotomy on the semiaxes R_+ and R_− with projectors P and Q and constants K_1, α_1 and K_2, α_2, respectively. If the operator

D = P − (E − Q) : L(H) → L(H),   (36)

which acts from the Banach space L(H) into itself, is generalized invertible, then the following assertions are true:

(i) in order that solutions of equation (34) bounded on the entire real axis exist, it is necessary and sufficient that the operator-valued function Φ(t) ∈ BC(R, L(H)) satisfies the condition

∫_{−∞}^{+∞} H(t) Φ(t) dt = 0;   (37)

(ii) under condition (37), the solutions of equation (34) bounded on the entire axis have the form

X(t, M) = U(t) P P_{N(D)} M + (G[Φ])(t),  M ∈ L(H),   (38)

where

(G[Φ])(t) = U(t) { ∫_0^t P U^{−1}(s) Φ(s) ds − ∫_t^{+∞} (E − P) U^{−1}(s) Φ(s) ds + P D^- ( ∫_0^{+∞} (E − P) U^{−1}(s) Φ(s) ds + ∫_{−∞}^0 Q U^{−1}(s) Φ(s) ds ) },  t ≥ 0,

(G[Φ])(t) = U(t) { ∫_{−∞}^t Q U^{−1}(s) Φ(s) ds − ∫_t^0 (E − Q) U^{−1}(s) Φ(s) ds + (E − Q) D^- ( ∫_0^{+∞} (E − P) U^{−1}(s) Φ(s) ds + ∫_{−∞}^0 Q U^{−1}(s) Φ(s) ds ) },  t ≤ 0,

is the generalized Green operator of the problem of solutions bounded on the entire axis R, possessing the following properties:

(G[Φ])(0 + 0) − (G[Φ])(0 − 0) = −∫_{−∞}^{+∞} H(t) Φ(t) dt = 0,

(LG[Φ])(t) = Φ(t),  t ∈ R,  where  LX(t) := dX(t)/dt − [A(t), X(t)],

and where H(t) = P_{N(D*)} Q U^{−1}(t) = P_{N(D*)} (I − P) U^{−1}(t), D^- is the generalized inverse of the operator D, and P_{N(D)} and P_{N(D*)} are the projectors that project L(H) onto the kernel N(D) and the cokernel N(D*) of the operator D, respectively.

Remark. We now show the convergence of one of the infinite integrals:

||U(t) ∫_t^{+∞} (E − P) U^{−1}(s) Φ(s) ds|| ≤ ∫_t^{+∞} ||U(t)(E − P) U^{−1}(s) Φ(s)|| ds ≤ ||Φ||_{L(H)} ∫_t^{+∞} ||U(t)(E − P) U^{−1}(s)|| ds ≤ ||Φ||_{L(H)} ∫_t^{+∞} K_1 e^{α_1(t−s)} ds = ||Φ||_{L(H)} K_1 e^{α_1 t} [ e^{−α_1 s} / (−α_1) ]_t^{+∞} = ||Φ||_{L(H)} K_1 / α_1.

REFERENCES

[1] Bogolyubov N.N. and Mitropol'skii Yu.A. Asimptoticheskie metody v teorii nelineinykh kolebanii (Asymptotic Methods in the Theory of Nonlinear Oscillations), Gostekhteorizdat, Moscow, 1955.

[2] Maslov V.P. Asimptoticheskie metody i teoriya vozmushchenii (Asymptotic Methods and Perturbation Theory), Nauka, Moscow, 1988.

[3] Arnold V.I., Afraimovich V.S., Il'yashenko Yu.S., and Shil'nikov L.P. Teoriya bifurkatsii (Bifurcation Theory), VINITI, Moscow, 1980.

[4] De la Llave R. A tutorial on KAM theory, Proceedings of Symposia in Pure Mathematics, Providence: Amer. Math. Soc., vol. 69, 175–?96, 2001.

[5] Maslov V.P. Operatornye metody (Operator Methods), Nauka, Moscow, 1973.

[6] Nayfeh A.H. Perturbation Methods, Wiley, New York, 1973.

[7] Grebennikov E.A. and Ryabov Yu.A. Konstruktivnye metody analiza nelineinykh sistem (Constructive Methods for the Analysis of Nonlinear Systems), Nauka, Moscow, 1979.

[8] Treschev D. and Zubelevich O. Introduction to the Perturbation Theory of Hamiltonian Systems, Springer-Verlag, Berlin, Heidelberg, 2010.

[9] Moore E. H. On the Reciprocal of the General Algebraic Matrix (Abstract), Bull. Amer. Math. Soc., vol. 26, 394–395, 1920.

[10] Penrose R. A Generalized Inverse for Matrices, Proc. Cambridge Philos. Soc., vol. 51, 406–413, 1955.

[11] Boichuk A.A. and Samoilenko A.M. Generalized Inverse Operators and Fredholm Boundary Value Problems, VSP, Utrecht, 2004.

[12] Korolyuk V.S. and Turbin A.F. Matematicheskie osnovy fazovogo ukrupneniya slozhnykh sistem (Mathematical Foundations of the Phase Lumping of Complex Systems), Naukova Dumka, Kiev, 1978.

[13] Pokutnyi O.O. Generalized-invertible operator in the Banach, Hilbert and Frechet spaces, Visnik of the Kiev National Taras Shevchenko University, Series of Physical and Mathematical Sciences, No. 4, 167–171, 2013.

[14] Boichuk O. A. and Krivosheya S. A. Criterion for the solvability of matrix equations of the Lyapunov type, Ukr. Mat. Zh., 50, No. 8, 1162–?169, 1998.

[15] Krein M.G. and Daletskiy Yu. L. Ustoichivost' resheniy differencial'nih uravneniy v banahovom prostranstve (Stability of Solutions of Differential Equations in the Banach Space), Nauka, Moscow, 1979.

[16] Tikhonov A.N. and Arsenin V.Ya. Metody resheniya nekorrektnykh zadach (Methods for Solving Ill-Posed Problems), Nauka, Moscow, 1986.

[17] Vishik M.I. and Lyusternik L.A. The solutions of some perturbation problems for matrices and selfadjoint or non-selfadjoint differential equations I, Russian Mathematical Surveys, vol. 15, No. 3, 3–80, 1960.

[18] Panasenko E. V. and Pokutnyi O. O. Boundary-Value Problems for the Lyapunov Equation in Banach Spaces, Journal of Mathematical Sciences, v. 223, 1–7, 2017.

[19] Pokutnyi O.O. Boundary value problem for an operator-differential Riccati equation in the Hilbert space on the interval, Advances in Pure Mathematics, 5, 865–873, 2015.

[20] Boichuk A.A. and Pokutnyi A.A. Perturbation theory of operator equations in the Frechet and Hilbert spaces, Ukrainian Mathematical Journal, vol. 67, No. 9, 1327–1335, 2016.

[21] Lyashko S.I., Nomirovskii D.A., Petunin Yu.I., and Semenov V.V. Dvadtsataya problema Gilberta. Obobshchennye resheniya operatornykh uravnenii (Hilbert's Twentieth Problem. Generalized Solutions of Operator Equations), Vil'yams, Moscow, 2009.

[22] Boichuk A.A. and Pokutnij A.A. Bounded solutions of linear perturbed differential equations in a Banach space, Tatra Mountains Math. Publ., vol. 38, 29–?1, 2007.

[23] Pokutnyi O.A. Bounded solutions of linear and weakly nonlinear differential equations in Banach space with unbounded linear part, Differential Equations, v. 48, No. 6, 803–?13, 2012.

[24] Palmer K. J. Exponential dichotomies and transversal homoclinic points, J. Diff. Eq., vol. 55, 225–256, 1984.

Mathematics and Statistics

Call for Papers

Mathematics and Statistics is an international peer-reviewed journal that publishes original and high-quality research papers in all areas of mathematics and statistics. As an important academic exchange platform, scientists and researchers can know the most up-to-date academic trends and seek valuable primary sources for reference.

Aims & Scope

Algebra Discrete Mathematics Analysis Dynamical Systems Applied Mathematics Geometry and Topology Approximation Theory Statistical Modelling Combinatorics Number Theory Computational Statistics Numerical Analysis Computing in Mathematics Probability Theory Design of Experiments Recreational Mathematics

Editorial Board

Dshalalow Jewgeni, Florida Inst. of Technology, USA
Jiafeng Lu, Zhejiang Normal University, China
Nadeem-ur Rehman, Aligarh Muslim University, India
Debaraj Sen, Concordia University, Canada
Mauro Spreafico, University of São Paulo, Brazil
Veli Shakhmurov, Okan University, Turkey
Antonio Maria Scarfone, National Research Council, Italy
Liang-yun Zhang, Nanjing Agricultural University, China
Ilgar Jabbarov, Ganja State University, Azerbaijan
Mohammad Syed Pukhta, Sher-e-Kashmir University, India
Vadim Kryakvin, Southern Federal University, Russia
Rakhshanda Dzhabarzadeh, National Academy of Science of Azerbaijan, Azerbaijan
Sergey Sudoplatov, Sobolev Institute of Mathematics, Russia
Birol Altin, Gazi University, Turkey
Araz Aliev, Baku State University, Azerbaijan
Francisco Gallego Lupianez, Universidad Complutense de Madrid, Spain
Hui Zhang, St. Jude Children's Research Hospital, USA
Yusif Abilov, Odlar Yurdu University, Azerbaijan
Evgeny Maleko, Magnitogorsk State Technical University, Russia
İmdat İşcan, Giresun University, Turkey
Emanuele Galligani, University of Modena and Reggio Emillia, Italy
Mahammad Nurmammadov, Baku State University, Azerbaijan

Contact Us

Horizon Research Publishing
2880 ZANKER RD STE 203
SAN JOSE, CA 95134
USA
Email: [email protected]

Manuscripts Submission

Manuscripts to be considered for publication have to be submitted via the Online Manuscript Tracking System (http://www.hrpub.org/submission.php). If you are experiencing difficulties during the submission process, please feel free to contact the editor at [email protected].