Sensors

Review

Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances

Jungang Yang, Tian Jin, Chao Xiao * and Xiaotao Huang

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Correspondence: [email protected]

Received: 3 June 2019; Accepted: 11 July 2019; Published: 13 July 2019

Abstract: In recent years, sparsity-driven regularization and compressed sensing (CS)-based radar imaging methods have attracted significant attention. This paper provides an introduction to the fundamental concepts of this area. In addition, we will describe both sparsity-driven regularization and CS-based radar imaging methods, along with other approaches, in a unified mathematical framework. This will provide readers with a systematic overview of radar imaging theories and methods from a clear mathematical viewpoint. The methods presented in this paper include minimum variance unbiased estimation, least squares (LS) estimation, Bayesian maximum a posteriori (MAP) estimation, matched filtering, regularization, and CS reconstruction. The characteristics of these methods and their connections are also analyzed. Sparsity-driven regularization and CS-based radar imaging methods represent an active research area; there are still many unsolved or open problems, such as the sampling scheme, computational complexity, sparse representation, influence of clutter, and model error compensation. We will summarize the challenges as well as recent advances related to these issues.

Keywords: radar imaging; synthetic aperture radar; compressed sensing; sparse reconstruction; regularization

1. Introduction

Radar imaging techniques go back to at least the 1950s. In the past 60 years, they have been stimulated by hardware performance, imaging theories, and signal processing technologies. Figure 1 shows the developmental history of radar imaging methods.

Figure 1. Developmental history of radar imaging methods.

Sensors 2019, 19, 3100; doi:10.3390/s19143100; www.mdpi.com/journal/sensors

Since the development of radar imaging techniques, the main theory that has been used has always been matched filtering [1–3]. Matched filtering is a linear process; it has the advantages of simplicity and stability. However, the drawbacks of the matched filtering method are also obvious. Since it does not exploit any prior information concerning the expected targets, its performance is limited by the signal bandwidth. It also requires dense sampling to record the signals, according to the Shannon–Nyquist sampling theorem. Thus, the matched filtering method places significant requirements on the measured data, but only produces results with limited performance. As higher and higher imaging performance is demanded, the matched filtering method will struggle to meet the requirements.

Apart from the matched filtering framework, from a more generic mathematical viewpoint, radar imaging can be viewed as an inverse problem [4–7], whereby a spatial map of the scene is recovered from measurements of the scattered electric field. The radar observation process is a Fredholm integral (F-I) equation of the first kind [8]. Due to observation limitations, such as limited bandwidth and limited observation angles, this inverse problem is usually ill-posed [9,10]. The classic least squares (LS) estimation method cannot solve such ill-posed inverse problems efficiently. The matched filtering method can be viewed as using an approximation to eliminate the non-invertible or unstable term in the LS solution. This approximation leads to limited resolution and side-lobes in the results. Thus, matched filtering methods typically provide an image that blurs the details of the scene. Using proper models for the targets, super-resolution methods can improve the resolution of the imaging result [11,12]. Besides using approximation, the ill-posed inverse problem can be solved by another approach, i.e., adding an extra constraint to the LS formula to yield a stable solution.
This approach is called regularization [8]. In order to make the solution after regularization closer to the true value, the additional constraint should appropriately represent some prior knowledge. The regularization approach can also be explained by the Bayesian maximum a posteriori (MAP) estimation theory [6,13,14], which uses prior knowledge in a probabilistic way. In the radar imaging scenario, imposing sparsity is one possible form of prior knowledge [15]. The advantages of the sparsity-driven regularization methods include increased image quality and robustness to limitations in data quantity. Compressed sensing (CS) refers to the use of under-sampled measurements to obtain the coefficients of a sparse expansion [16–20].

This paper summarizes the fundamentals, challenges, and recent advances of sparse regularization and CS-based radar imaging methods. Using a unified mathematical model, we derive the best estimator (i.e., the minimum variance unbiased estimator), the LS estimator, the Bayesian MAP estimator, matched filtering, regularization, and CS reconstructions of the scene. The characteristics of these methods and their connections are also analyzed. Finally, we present some key challenges and recent advances in this area. These include the sampling scheme, the computational complexity, the sparse representation, the influence of clutter, and the model error compensation.

2. Mathematical Fundamentals of Radar Imaging

2.1. Radar Observation Model

In the continuous signal domain, under the Born approximation, the radar observation process can be denoted as [4]

s(r) = \int A(r, r') g(r') \, dr' + n    (1)

where s(r) denotes the observed data at the observation position r, g(r') denotes the reflectivity coefficient at r' in the scene, A(r, r') denotes the system response from r' to r, and n denotes noise. Assuming the system is shift invariant, Equation (1) can be rewritten as

s(r) = \int A(r - r') g(r') \, dr' + n    (2)

It can be seen that the radar observation model is a convolution process. Equation (1) is a Fredholm integral (F-I) equation of the first kind [8]. From a mathematical viewpoint, radar imaging can be viewed as the solution of the F-I equation—i.e., we want to recover g(r) from the observed data s(r) using the observation equation. Unfortunately, according to the theory of integral equations, solving the F-I equation is usually an ill-posed problem [8]. In practice, since digitization is commonly used, the observed data are discrete. Based on Equation (1), the discrete observation model can be written as

s = Ag + n    (3)

where s is stacked from the samples of s(r), g is stacked from the samples of g(r'), A is formed from the samples of A(r, r'), and n is the observation noise vector.
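The discrete model in Equation (3) can be sketched numerically. The following is a minimal, illustrative example (not from the paper): the system response is assumed to be a shift-invariant, sinc-shaped kernel, so each column of A is a shifted copy of the same band-limited response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete observation model s = A g + n (Equation (3)).
# Assumption: a shift-invariant, sinc-shaped system response, so each
# column of A is a shifted copy of the same band-limited kernel.
N = 64                                    # number of scene grid points
M = 64                                    # number of measurements
x = np.arange(M)
A = np.array([np.sinc((x - k) / 2.0) for k in range(N)]).T   # M x N

g = np.zeros(N)
g[[20, 45]] = [1.0, 0.7]                  # two hypothetical point targets

n = 0.01 * rng.standard_normal(M)         # additive observation noise
s = A @ g + n                             # stacked observed data

print(A.shape, s.shape)                   # (64, 64) (64,)
```

Any other discretized response would do; the essential point is that the whole observation collapses into one matrix–vector product plus noise.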

2.2. Best Linear Unbiased Estimate and Least Squares Estimate of the Scene

From the observation model shown in (3), radar imaging can be viewed as an estimation problem, in which the scene g is estimated from the observed data s in a noisy environment. According to estimation theory, the minimum variance unbiased estimate is the "best" estimate in terms of estimation mean square error. From Equation (3), it can be seen that when the radar observation model is linear, the minimum variance unbiased estimate is the best linear unbiased estimate [13], i.e., the expression of the best estimate of the scene is

\hat{g} = (A^H C^{-1} A)^{-1} A^H C^{-1} s    (4)

where C is the covariance matrix of the noise term (C = E[n n^H]). In practice, a more tractable approach is LS estimation, which can be denoted as

\hat{g} = \arg\min_g \| s - Ag \|_2^2    (5)

Therefore, the LS estimate of the scene is

\hat{g} = (A^H A)^{-1} A^H s    (6)

If n is white Gaussian noise, we have C = \sigma^2 I, where I is the identity matrix. Under this condition, Equations (4) and (6) are the same. Therefore, the LS estimate equals the best estimate in white Gaussian noise [13].

If we want to use Equation (6) to calculate the best estimate of the scene, a prerequisite is that (A^H A) is invertible. However, in practice, this prerequisite is usually not satisfied, as discussed below. We assume that the size of A is M × N, where M denotes the number of measurements and N denotes the number of unknown grid points. Then, the size of (A^H A) is N × N.

One case is that M < N, i.e., the number of measurements is less than the number of unknown variables. CS is a typical example of this case. In such a case, rank(A^H A) = rank(A) ≤ M < N, i.e., (A^H A) is not invertible.

In the above case, it can be seen that, due to the limited number of measurements, (A^H A) is not invertible. Is it possible to make (A^H A) invertible by increasing the number of measurements (i.e., making M > N)? As mentioned previously, due to physical limitations, such as limited bandwidth and limited observation angles, if we take more measurements, the interval between adjacent measurements will be smaller. Thus, the coherence between adjacent columns of A will increase. Consequently, (A^H A)^{-1} will probably be ill-conditioned.

In summary, the LS solution usually contains non-invertible or ill-conditioned terms. This problem is inherent, and derives from the properties of the F-I equation of the first kind [8].
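Both failure modes discussed above can be checked numerically. The sketch below uses illustrative matrices only (a random A for the M < N case, and an assumed, densely sampled wide-sinc response for the coherent-columns case), not an actual radar measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Case 1: M < N. Then rank(A^H A) = rank(A) <= M < N, so A^H A is singular.
M, N = 30, 64
A = rng.standard_normal((M, N))
AHA = A.conj().T @ A                       # N x N Gram matrix
rank = np.linalg.matrix_rank(AHA)
print(rank <= M < N)                       # True: A^H A cannot be inverted

# Case 2: M > N, but the columns of A are highly coherent (densely sampled,
# band-limited responses, assumed here as closely shifted wide sincs).
M, N = 128, 64
x = np.arange(M)
A = np.array([np.sinc((x - 0.2 * k) / 8.0) for k in range(N)]).T
cond = np.linalg.cond(A.conj().T @ A)
print(cond > 1e6)                          # True: inversion is ill-conditioned
```

Taking more measurements in case 2 does not help: the new columns are nearly linear combinations of the old ones, which is exactly the coherence argument in the text.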

2.3. Matched Filtering Method

Examining Equation (6), it can be seen that the non-invertible or ill-posed term is (A^H A)^{-1}. We can left-multiply Equation (6) by (A^H A) to eliminate (A^H A)^{-1}. In this way, we can avoid explicitly calculating the nonexistent or unstable term (A^H A)^{-1}. This leads to the matched filtering method, which can be denoted as

\hat{g}_{MF} = (A^H A)\hat{g} = A^H s    (7)

Equation (7) can be viewed as multiplying the best estimate of the scene by (A^H A). The matrix (A^H A) is the autocorrelation of the system response, which usually has a sinc pulse shape [1,21]. The matched filtering result can be viewed as the convolution of the best estimate of the scene with the sinc function. A point target will be spread, and side-lobes will also appear in the matched filtering result [21]. This implies that the matched filtering method can only provide an image that blurs the details of the scene. The matched filtering method has a limited resolution, which depends on the autocorrelation of the system response [1].

Figure 2 shows an example of the matched filtering method. Six point targets are set in the scene. It can be seen that the matched filtering result is the convolution of the targets and the autocorrelation of the system response. As a result, an ideal point target is spread into a sinc waveform. Consequently, targets will interfere with each other, and two closely spaced targets may not be resolved in the matched filtering result.

Equation (7) is the original form of the matched filtering equation. In practice, in order to reduce the computational cost and make it more convenient for implementation, some transformations and approximations are usually adopted for Equation (7). Equation (7) can represent many widely used imaging algorithms, such as backprojection algorithms, range-Doppler algorithms, chirp scaling algorithms, and ωK algorithms [1].
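A one-dimensional sketch of Equation (7), under the same assumed sinc-shaped response as in the earlier example: the matched filtering output for a single point target is the autocorrelation of the system response, so the target is spread and energy leaks into neighbouring cells.

```python
import numpy as np

# Matched filtering (Equation (7)): g_MF = A^H s.
# Assumption: sinc-shaped, shift-invariant system response.
N = 128
x = np.arange(N)
A = np.array([np.sinc((x - k) / 4.0) for k in range(N)]).T

g = np.zeros(N)
g[64] = 1.0                        # one ideal point target
s = A @ g                          # noise-free observation

g_mf = A.conj().T @ s              # matched filtering result

# The peak is at the true target position, but the response is spread:
# several neighbouring cells exceed 10% of the peak (main lobe/side-lobes).
peak = int(np.argmax(np.abs(g_mf)))
spread = int(np.count_nonzero(np.abs(g_mf) > 0.1 * np.abs(g_mf).max()))
print(peak, spread > 1)            # 64 True
```

This is exactly the blurring behaviour illustrated in Figure 2: a single ideal point target becomes a spread waveform after matched filtering.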

Figure 2. Matched filtering example. Two closely spaced targets cannot be resolved.

2.4. Regularization Method

Examining the LS formula (Equation (5)), it can be seen that it relies only on the observed data. In order to make the ill-posed inverse problem become well-posed, we can add an extra constraint to the LS formula [8–10]. This leads to the regularization method, which can be denoted as

\hat{g} = \arg\min_g \left\{ \| s - Ag \|_2^2 + \lambda L(g) \right\}    (8)

where \lambda is the regularization parameter and L(g) is the added penalty function. In order to make the solution of Equation (8) closer to the true value, L(g) should represent appropriate prior knowledge for the problem. A typical choice of L(g) is
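As a concrete special case of Equation (8) (not one singled out in the paper, but standard), choosing L(g) = ||g||_2^2 (Tikhonov regularization) gives the closed-form solution \hat{g} = (A^H A + \lambda I)^{-1} A^H s: the added \lambda I makes the inversion well-conditioned even when A^H A alone is singular. A minimal sketch with an assumed random A:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tikhonov special case of Equation (8): L(g) = ||g||_2^2 gives the
# closed form g_hat = (A^H A + lambda I)^{-1} A^H s.
M, N = 40, 64                       # M < N: A^H A alone is singular
A = rng.standard_normal((M, N))
g_true = rng.standard_normal(N)
s = A @ g_true + 0.01 * rng.standard_normal(M)

lam = 0.1
AHA = A.conj().T @ A
g_hat = np.linalg.solve(AHA + lam * np.eye(N), A.conj().T @ s)

print(np.linalg.matrix_rank(AHA) < N)               # True: singular without lam*I
print(np.linalg.cond(AHA + lam * np.eye(N)) < 1e5)  # True: stabilized by lam*I
```

The quadratic penalty stabilizes the inversion but does not enforce sparsity; that is what the p-norm choices below are for.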

L(g) = \| g \|_p^p    (9)

where \| \cdot \|_p denotes the \ell_p-norm, i.e.,

\| g \|_p = \begin{cases} \left( \sum_{i=1}^{N} |g_i|^p \right)^{1/p}, & p > 0 \\ \text{number of nonzero elements in } g, & p = 0 \end{cases}    (10)

Then, Equation (8) can be rewritten as

\hat{g} = \arg\min_g \left\{ \| s - Ag \|_2^2 + \lambda \| g \|_p^p \right\}    (11)

The choice of p can control the result of the regularization method. If we want to enforce sparsity in the result, we should choose p in the range 0 ≤ p ≤ 1 [16,17]. For p = 1, Equation (11) can be compared to the solution of the CS-type methods [16]. Equation (11) can be solved by gradient search algorithms, such as the Newton iteration [22].
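For p = 1, a widely used solver for Equation (11) is the iterative soft-thresholding algorithm (ISTA), a proximal-gradient method; this is shown below as a sketch (an alternative to the Newton-type iterations mentioned above), with assumed toy problem sizes. The 0.5 factor on the data term simply rescales λ relative to Equation (11).

```python
import numpy as np

rng = np.random.default_rng(3)

def ista(A, s, lam, n_iter=500):
    """Minimize 0.5*||s - A g||_2^2 + lam*||g||_1 by iterative
    soft-thresholding (proximal gradient); this matches Equation (11)
    with p = 1 up to a rescaling of the regularization parameter."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    g = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = g + step * (A.T @ (s - A @ g))          # gradient step on data term
        g = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrink
    return g

# Assumed toy problem: sparse scene, under-sampled random measurements.
M, N = 40, 100
A = rng.standard_normal((M, N)) / np.sqrt(M)
g_true = np.zeros(N)
g_true[[10, 55, 80]] = [1.0, -0.8, 0.6]
s = A @ g_true + 0.001 * rng.standard_normal(M)

g_hat = ista(A, s, lam=0.02)
support = set(np.argsort(np.abs(g_hat))[-3:])
print(support == {10, 55, 80})                      # True: support recovered
```

The soft-thresholding step is the proximal operator of the ℓ1 penalty; it is what drives small coefficients exactly to zero and hence produces a sparse estimate.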

2.5. Bayesian Maximum a Posteriori Estimation

It should be noted that in Equation (11), the added constraint term \lambda \| g \|_p^p represents prior knowledge [17,23]. Another prior knowledge-based estimation method is Bayes theory. The main idea behind the Bayesian estimation framework is to account explicitly for the errors, and also for incomplete prior knowledge. Assuming that the noise n in Equation (3) is white and Gaussian, we have

p(n) \propto \exp\left( -\frac{1}{2\sigma^2} \| n \|_2^2 \right)    (12)

where \sigma^2 is the noise variance. Then we obtain the expression of the likelihood

p(s|g) \propto \exp\left( -\frac{1}{2\sigma^2} \| s - Ag \|_2^2 \right)    (13)

We assume that the scene has a prior probability density function, as

p(g) \propto \exp\left( -\alpha \| g \|_p^p \right)    (14)

If 0 ≤ p ≤ 1, the magnitude of the scene is more likely to concentrate around zero, which implies that the scene is sparse. For a review of sparsity-enforcing priors for the Bayesian estimation approach, the reader can refer to [6]. Using the prior probability density of g shown in (14), and according to the Bayes rule, we obtain

p(g|s) = \frac{p(s|g)\, p(g)}{p(s)} \propto \frac{1}{p(s)} \exp\left( -\frac{1}{2\sigma^2} \| s - Ag \|_2^2 - \alpha \| g \|_p^p \right)    (15)

Then the MAP estimate can be obtained easily as

\hat{g} = \arg\max_g p(g|s) = \arg\min_g \left\{ \| s - Ag \|_2^2 + 2\sigma^2 \alpha \| g \|_p^p \right\}    (16)

Comparing Equations (11) and (16), it can be seen that when \lambda = 2\sigma^2 \alpha, these two equations are equivalent, i.e., the regularization method is equivalent to Bayesian MAP estimation.

2.6. Compressed Sensing Method

For the observation model shown in Equation (3), if the scene (i.e., g) is sparse, then according to CS theory it can be stably reconstructed using reduced data samples. The reconstruction method can be written as [16,17]

\hat{g} = \arg\min_g \| g \|_0 \quad \text{s.t.} \quad \| s - Ag \|_2^2 < \varepsilon    (17)

where s.t. means subject to, and \varepsilon denotes the allowed data error in the reconstruction process. Equation (17) is NP-hard and computationally difficult to solve [17]. Matching pursuit is an approximate method for obtaining an \ell_0 sparse solution. In CS theory, a more tractable approach is taking the \ell_1-norm instead of the \ell_0-norm, which is called the \ell_1 relaxation:

\hat{g} = \arg\min_g \| g \|_1 \quad \text{s.t.} \quad \| s - Ag \|_2^2 < \varepsilon    (18)

If g is sparse and A satisfies some specific conditions, Equations (18) and (17) will have the same solution, and this solution is the exact or approximate recovery of g [16,17]. Equation (18) can be solved using convex programming, which is more tractable than the original \ell_0-norm minimization problem. Unlike the matched filtering method, the CS method does not have an exact or pre-defined resolution, since it is a non-linear method. Generally, the resolution capability of the CS method is much better than that of the matched filtering method if the targets are sparse.

Figure 3 shows an example of compressed sensing. The simulated scene is the same as in the matched filtering example shown in Figure 2. Only 1/20 of the signal samples are used for the CS reconstruction. It can be seen that the two closely spaced targets are well resolved. This implies that the CS method can obtain better results using less data than the matched filtering method. The reason is that prior information concerning signal sparsity is utilized in the CS model.

Equation (18) is a constrained optimization problem. According to the Lagrange theory, it can be transformed into an unconstrained optimization problem, which has the same form as Equation (11). For appropriate choices of \lambda and p = 1, Equations (11) and (18) will be equivalent [16,17]. This implies that CS is a special case of the regularization method.

Figure 3. Compressed sensing example; closely spaced targets are well resolved.
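The behaviour in Figure 3 can be mimicked with a toy example. The sketch below uses orthogonal matching pursuit, a matching-pursuit-type greedy approximation of Equation (17), with an assumed random measurement matrix and half of the grid size as measurements (a milder under-sampling than the 1/20 used in the figure, chosen so this small demo is reliable).

```python
import numpy as np

rng = np.random.default_rng(4)

def omp(A, s, k):
    """Orthogonal matching pursuit: a greedy, matching-pursuit-type
    approximation of the l0 problem in Equation (17), assuming the
    sparsity level k is known."""
    residual, support = s.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], s, rcond=None)
        residual = s - A[:, support] @ coef
    g = np.zeros(A.shape[1])
    g[support] = coef
    return g

# Two closely spaced point targets on a 128-point grid (assumed scene).
N = 128
g_true = np.zeros(N)
g_true[[60, 63]] = [1.0, 1.0]

# Under-sampled random measurements (M = N/2 here, for a reliable toy demo).
M = 64
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
s = Phi @ g_true + 0.001 * rng.standard_normal(M)

g_cs = omp(Phi, s, k=2)
print(sorted(int(i) for i in np.flatnonzero(g_cs)))   # [60, 63]: resolved
```

Because the sparsity prior is built into the recovery, the two adjacent targets come back as two distinct spikes rather than one blurred lobe, which is the point the figure makes.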

2.7. Summary of Radar Imaging Methods

The above subsections introduced the LS estimator, matched filtering, regularization methods, Bayesian MAP estimation, and the CS method. In this subsection, we summarize these methods and analyze their connections.

Table 1 lists the main characteristics and describes some connections between these imaging methods. The LS estimation relies only on the observed data, and cannot solve the ill-posed radar imaging problem efficiently. The matched filtering method can be viewed as using an approximation to avoid the ill-posed term in the LS solution. The regularization method, Bayesian MAP estimation, and the CS method exploit prior knowledge concerning the targets in addition to the observed data, and they are equivalent in some cases. Table 1 also shows the equivalent geometric illustration for each method in R^2. The observation equation can only confine the solution to a hyperplane (which becomes a line in R^2), but cannot reliably produce a certain solution [17,23]. The other methods aim at obtaining a stable solution close to the true value, using modifications that represent prior knowledge concerning the targets.

Figure 4 shows the block diagram and the relationships of the radar imaging methods. All of the radar imaging methods can be divided into two branches. The first branch does not use prior information about the targets or the scene, and it leads to the linear imaging methods; the most typical and widely used one in this branch is matched filtering. The other branch uses prior information about the targets or the scene, and this leads to the non-linear methods. The most recently developed methods, including regularization methods, Bayesian methods, and CS methods, belong to this branch.

Table 1. Characteristics and connections of radar imaging methods.

Radar observation model: s = Ag + n (s: observed data; A: measurement matrix; g: scene; n: noise).

Least Squares (LS) Estimation
Mathematical model: \hat{g} = \arg\min_g \| s - Ag \|_2^2
Characteristics: \hat{g} = (A^H A)^{-1} A^H s; the term (A^H A)^{-1} is usually ill-posed or nonexistent, so a stable solution cannot be obtained [9,13].

Matched Filtering
Mathematical model: \hat{g} = A^H s
Characteristics: Avoids the ill-posed term in the LS solution, but the resolution is limited by the system bandwidth, and side-lobes will appear in the final image [4,21].

Range Doppler, Chirp Scaling, ωK, etc.
Mathematical model: approximations and transformations of A^H s
Characteristics: Approximations and transformations of the original matched filtering, in order to reduce the computational cost and make it more convenient to implement in practice [1].

Regularization Method
Mathematical model: \hat{g} = \arg\min_g \{ \| s - Ag \|_2^2 + \lambda L(g) \}
Characteristics: Adds an extra constraint to the LS formula, so that the ill-posed inverse problem becomes well-posed. If the added constraint is chosen appropriately, the result will be better than that of matched filtering [8,9]. The result depends on the expression of L(g).

Sparsity-Driven Regularization
Mathematical model: \hat{g} = \arg\min_g \{ \| s - Ag \|_2^2 + \lambda \| g \|_p^p \} (0 ≤ p ≤ 1)
Characteristics: Chooses L(g) as the \ell_p-norm (0 ≤ p ≤ 1), in order to obtain a sparse reconstruction result [6,23].

Bayesian MAP Estimation
Mathematical model: \hat{g} = \arg\min_g \{ \| s - Ag \|_2^2 + 2\sigma^2\alpha \| g \|_p^p \}, with prior p(g) \propto \exp(-\alpha \| g \|_p^p)
Characteristics: For 2\sigma^2\alpha = \lambda, the MAP estimation is equivalent to the sparsity-driven regularization method [6,14].

2 ^ gˆ =argmin g s.t. s - Ag 2  gˆ =argmin g022 s.t. s -For Ag an2 appropriate For an appropriate choice of λ choice, the CS of  , Compressedg = argmin gˆ g=argmins.t. g s-Ag g02 < s.t. ε s - Ag  For an appropriate choice of  , g 0 g 2022 For an appropriate choice of  , Compressed Sensing Compressed k k kg k method2 willthe be CS equivalent method will tothe be equivalent SensingCompressed (CS) gˆ =argmin gor s.t. s - Ag 2  Forthe anCS appropriatemethod will choice be equivalent of  , Sensing (CS) gˆ =argminor g g02or s.t. s - Ag  Forthe anCS appropriatemethod will choice be equivalent of  , (CS) Method CompressedSensing^ (CS) g 02or sparsity-driven2 toFor the an regularization sparsity appropriate-driven choice method of  , CompressedMethod g 2 2 thetoto thethe CS sparsitysparsitymethod-- drivenwill be equivalent SensingMethodg (CS)= argmin ggsˆ =g argmin1 Ags.t. s- Ag s.t.or2 < -ε 2  regularizationthe CS method method will be equivalent[17,23]. SensingMethod (CS) g ggsˆk=kargmin Ag gk s.t.k12or - [17,23]. theregularization CS method methodwill be equivalent[17,23]. Sensing (CS) ggsˆ =argmin Ag g s.t.12or -  toregularization the sparsity- drivenmethod [17,23]. Method g 122 to the sparsity-driven Method ggsˆ =argmin Ag s.t. - 2  to the sparsity-driven Method ggsˆ =argmin Ag s.t.12 -  regularization method [17,23]. ggsˆ =argmin Ag g s.t.12 -  regularization method [17,23]. g 12

Sensors 2019, 19, x 8 of 19

3. Challenges and Advances in Compressed Sensing-Based Radar Imaging

The use of regularization methods in radar imaging goes back at least to the year 2000 [21,24]. Since the CS theory was proposed in 2006, it has been explored for a wide range of radar [25–33] and radar imaging applications [4,34–38], including synthetic aperture radar (SAR) [39–42], inverse SAR (ISAR) [43–45], tomographic SAR [46–51], three-dimensional (3D) SAR [52–54], SAR ground moving target indication (SAR/GMTI) [55–61], ground penetrating radar (GPR) [62–64], and through-the-wall radar (TWR) [65–67]. In this paper, we will focus on two-dimensional (2D) imaging radar systems, i.e., SAR, GPR, and TWR.

After several years of development, although many interesting ideas have been presented in this area, there still exist a number of challenges, both in theory and in practice [68]. The state of the art in this area has not yet reached the stage of practical application. We will present some challenges as well as recent advances in this part of the paper.

Figure 4. Block diagram and relationship of the radar imaging methods.

3.1. Sampling Scheme

CS usually involves random under-sampling [16,17]. A widely used waveform in traditional radar imaging is the linear frequency modulated (LFM) waveform. If we adopt the LFM waveform in CS-based radar imaging, a random-sampling analog to digital (A/D) converter is needed, which is not easily realized in practice. This would require extra hardware components, which means that LFM waveforms are not ideally suited for CS.

Recently, many researchers have found that the stepped frequency waveform is much more suitable for CS than the LFM waveform [35,62,63,66,69]. Sparse and discrete frequencies are more convenient for hardware implementation. For a CS-based radar imaging system, a stepped frequency waveform may therefore be the preferred choice. In practical applications, a set of adjustable pseudorandom numbers can be generated to select the frequency points in the stepped frequencies. In this way, randomly generated frequencies, i.e., random and sparse measurements, can be realized, and the CS-based imaging model can be implemented.

Figures 5 and 6 show an example of CS-based stepped frequency radar imaging. The main equipment in the experimental system is a vector network analyzer (VNA). The experiment is carried out in a non-reflective microwave chamber. Five targets in the scene are shown in Figure 5. Figure 6a shows the backprojection result using the fully sampled data (81 azimuth measurements × 2001 frequencies). Figure 6b shows the CS reconstruction result using under-sampled data (27 azimuth measurements × 128 frequencies). Considering resolution and sidelobe levels, the CS reconstruction result is even better than the backprojection result, although it uses less sampled data. The reason is that prior information concerning signal sparsity is used in the CS model, while the backprojection method uses no prior information.
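The pseudorandom frequency selection scheme described above can be sketched in a few lines. The band, scene grid, and the standard monostatic stepped-frequency range model s(f_k) = Σ_i g_i exp(−j4πf_k r_i/c) are illustrative assumptions, not the settings of the VNA experiment in Figures 5 and 6 (only the counts 2001 and 128 are borrowed from it).

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # adjustable pseudorandom seed

# Illustrative stepped-frequency grid (assumed band, not the actual VNA settings).
f_start, f_stop, n_steps = 8e9, 12e9, 2001
full_grid = np.linspace(f_start, f_stop, n_steps)

# Randomly select a sparse subset of frequency points (128 of 2001, as in Figure 6b).
n_selected = 128
selected_idx = np.sort(rng.choice(n_steps, size=n_selected, replace=False))
selected_freqs = full_grid[selected_idx]

# Each selected frequency contributes one row of the under-sampled CS measurement
# matrix for a 1D range profile: s(f_k) = sum_i g_i * exp(-j*4*pi*f_k*r_i/c).
c = 3e8
ranges = np.linspace(1.0, 5.0, 256)  # hypothetical range grid of the scene
A = np.exp(-1j * 4 * np.pi * np.outer(selected_freqs, ranges) / c)
print(A.shape)  # (128, 256)
```

Because the retained frequencies are a random subset of a uniform grid, the same hardware sweep logic can be reused; only the sequence of frequency points changes from run to run via the pseudorandom seed.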


Figure 5. Experimental scene for CS-based stepped frequency radar imaging. (a) Five reflectors in the microwave chamber. (b) Transmitter and receiver antennas.

Figure 6. (a) Backprojection result of full data (81 azimuth measurements × 2001 frequencies). (b) CS result of under-sampled data (27 azimuth measurements × 128 frequencies).

3.2. Computational Complexity

In the regularization or CS model for a 2D radar imaging system, the 2D observed data and the 2D scene grid are both stacked into column vectors. This leads to a huge measurement matrix. For example, suppose the original fully sampled data are 2048 × 2540 points (azimuth × range), and a 512 × 512 pixel image is to be reconstructed from reduced sampling data consisting of 256 × 256 points. Then the size of the matrix A is 65,536 × 262,144. Since regularization or CS reconstruction is a non-linear process, such a large measurement matrix will result in a huge computational burden for image reconstruction. In addition, the total memory needed to store the measurement matrix is 128 gigabytes (assuming single-precision floating-point complex numbers are used). This is too much memory for normal desktop computers. Considering that the data size is usually larger than the above example in practice, it is difficult for conventional methods to reconstruct a moderate-size scene using normal computers.

A common idea for reducing computational complexity and memory occupancy is to split big data into sets of small data [70]. Based on this idea, a segmented reconstruction method for CS-based SAR imaging has been proposed [71]. In this method, the whole scene is split into a set of small subscenes. Since the computational complexity is non-linear in the data size, the reconstruction time can be reduced significantly. The sensing matrices for the method proposed in [71] are much smaller than those for the conventional method; therefore, the method also needs much less memory. Due to the short reconstruction time and lower memory requirement of the method proposed in [71],

reconstructing a moderate-size scene in a short time is no longer a difficult task. The processing steps of the segmented reconstruction method are shown in Figure 7.
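The sizes quoted above follow directly from stacking the data and the scene. The sketch below reproduces that arithmetic and then shows the per-segment matrix size for an even five-way split of the scene; the even split, and keeping all measurements per segment, are rough illustrative assumptions rather than the actual segmentation of [71].

```python
# Reproduces the arithmetic of the example above: a 256 x 256 reduced data set and a
# 512 x 512 pixel scene give a 65,536 x 262,144 measurement matrix, which at
# 8 bytes per entry (single-precision complex: 4-byte real + 4-byte imaginary)
# occupies 128 GiB.
data_rows = 256 * 256        # stacked under-sampled measurements
scene_cols = 512 * 512       # stacked scene pixels
bytes_per_entry = 8          # complex64
total_bytes = data_rows * scene_cols * bytes_per_entry
print(data_rows, scene_cols, total_bytes / 2**30)  # 65536 262144 128.0

# Segmented reconstruction (sketch of the idea in [71]): split the scene into
# n_seg subscenes, so each subscene sees a much smaller sensing matrix.
n_seg = 5
seg_cols = scene_cols // n_seg
seg_bytes = data_rows * seg_cols * bytes_per_entry
print(round(seg_bytes / 2**30, 1))  # ~25.6 GiB per segment
```

Since regularization and CS solvers scale super-linearly with the matrix size, the savings in reconstruction time from segmentation are even larger than this linear memory reduction, which matches the 44,032 s versus 1498 s comparison reported below.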


Figure 7. Processing steps of the segmented reconstruction method for CS-based synthetic aperture radar (SAR) imaging (taken from [71]).

Figures 8 and 9 show an example of the segmented reconstruction method [71]. Figure 8 shows the experimental scene of an airborne SAR system, which contains six trihedral reflectors. Figure 9a shows the conventional CS reconstruction result, where the reconstruction time is 44,032 s (12 h 14 min). The whole scene is split into five segments, and Figure 9b shows the segmented reconstruction result, where the reconstruction time is reduced to 1498 s (25 min). It can be seen that, using the segmented reconstruction method, the reconstruction time is significantly reduced, while the reconstruction precision is nearly the same.

(a) Sight A of the scene. (b) Sight B of the scene.

Figure 8. Trihedral reflectors in the scene. Trihedral reflectors 1–4 are large, and trihedral reflectors 5 and 6 are small (taken from [71]).


Figure 9. (a) Conventional CS reconstruction result (reconstruction time = 44,032 s). (b) Segmented reconstruction result (reconstruction time = 1498 s) (taken from [71]).

3.3. Sparsity and Sparse Representation

Sparsity of the scene is an essential requirement for sparse regularization or CS methods. For an SAR scene, an extended scene is usually not sparse in itself (not sparse in the canonical basis), except for the case of a few dominant scatterers in a low reflective background [35]. Therefore, a sparse representation is needed to use a sparsity-driven method.

CS-based optical imaging has successfully used sparse representations [72]. However, radar imaging involves complex-valued quantities; the raw data and the imaging result are both complex-valued. Since the phases of the scene are potentially random, it is very difficult to find a transform basis to sparsify a complex-valued and extended scene [73,74].

Structured dictionaries and dictionary learning ideas are proposed in [75] and [76], respectively. An alternative approach is to handle the magnitude and phase separately [41]. Although the phase of the scene is potentially random, the magnitude of the scene usually has better sparse characteristics. However, this approach has a much higher computational complexity than standard CS reconstruction. Another method investigates physical scattering behavior [4,77]. For example, a car can be represented as the superposition of responses from plate and dihedral shapes.

Figure 10 shows a simulation example for an extended and complex-valued scene. There are two extended objects in the scene, one of which has a round shape while the other has a rectangular shape. Both objects have random phases associated with them. It can be seen that the DCT (Discrete Cosine Transform) results of the magnitude are sparse.

Figure 11a shows the result of matched filtering. Since the random phase leads to speckle, although the scene has a smooth shape, the matched filtering result has obvious fluctuations. Figure 11b shows the result of conventional CS reconstruction without sparse representation; the reconstruction algorithm is SPGL1 [78]. Since the scene is not sparse in the canonical basis, the reconstruction is not accurate. Figure 11c shows the result of the method using a magnitude sparse representation [41]; it can be seen that the reconstruction result is much better than Figure 11a,b. Figure 11d shows the result of the improved magnitude sparse representation method proposed in [79]. In that method, besides the sparsity, the real-valued information of the magnitude and the coefficient distribution of the sparse representation are also utilized. It can be seen that both the shape and the speckle are further improved.

Figure 10. (a) Magnitude of the scene, (b) phase of the scene, (c) DCT result of the magnitude (taken from [79]).
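The magnitude/phase observation behind Figures 10 and 11 can be reproduced in a few lines: a random-phase scene is incompressible in a fixed transform basis, while its real-valued magnitude is highly compressible. The rectangular scene, the orthonormal DCT-II basis built by hand, and the 100-coefficient cutoff are illustrative assumptions, and only the real part of the complex scene is transformed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] /= np.sqrt(2.0)
    return D

def energy_fraction(coeffs, k):
    """Fraction of total energy captured by the k largest-magnitude coefficients."""
    v = np.sort(np.abs(coeffs).ravel())[::-1]
    return np.sum(v[:k] ** 2) / np.sum(v ** 2)

# Smooth extended magnitude (a rectangle; shapes are illustrative, not the scene of
# Figure 10), with an independent random phase at every pixel.
n = 64
mag = np.zeros((n, n))
mag[16:48, 24:40] = 1.0
scene = mag * np.exp(2j * np.pi * rng.random((n, n)))

D = dct_matrix(n)
k = 100
# The random phase spreads the energy of the scene across the whole DCT basis ...
ef_scene = energy_fraction(D @ scene.real @ D.T, k)
# ... while the real-valued magnitude concentrates in a few DCT coefficients.
ef_mag = energy_fraction(D @ mag @ D.T, k)
print(ef_scene < ef_mag)  # True
```

This is exactly why the methods of [41,79] sparsify the magnitude rather than the complex reflectivity itself.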


Figure 11. Simulation results: (a) matched filtering result, (b) conventional CS reconstruction result without sparse representation, (c) result of the method with magnitude sparse representation, and (d) result of the method with improved magnitude sparse representation (taken from [79]).

Figure 12 shows the real data results. The raw data was acquired by an airborne SAR system. Figure 12 contains a scene of farmland with trellises. The reflectivity from the trellises is very strong. From the real data result, it can be seen that CS with the improved magnitude sparse representation method can produce an image with less speckle and clearer edges between different regions than the previous methods.

Figure 12. Real data reconstruction results (scene of farmland with trellises): (a) matched filtering result (full data), (b) conventional CS reconstruction result without sparse representation, (c) result of CS with magnitude sparse representation, and (d) result of CS with improved magnitude sparse representation (taken from [79]).


3.4. Influence of Clutter

Another practical case is when the targets of interest are sparse, but there also exists clutter in the scene. Clutter arises from reflections within the scene, so the image may no longer be sparse if significant clutter returns are present. Typical examples include GPR and TWR imaging. The targets of interest, such as landmines and humans, are usually sparse, but they are often buried in ground surface clutter and wall clutter. Some methods have been proposed to remove the ground surface clutter and wall clutter for downward-looking GPR and TWR [64,65]. These methods are effective when the clutter is concentrated in a fixed range cell or limited to several range cells.

Another scenario is TWR/SAR imaging of moving targets. A sparsity-driven change detection method is proposed in [67]. The stationary targets and clutter are removed via change detection, and then CS reconstruction is applied to the resulting sparse scene. In [55], a SAR/GMTI method using distributed CS is proposed, which can cope with non-sparse stationary clutter.

A more difficult case is when both the targets and the clutter are stationary, and the clutter is distributed over the whole scene. Forward-looking GPR may fall into this category. Figure 13 shows a real data example for this case. In such a scenario, shrubs and rocks above the ground surface may cause strong azimuth clutter. Short-range clutter is usually also strong, due to the large grazing angle and short range. Besides the strong clutter far away from the target (a landmine), there is also ground surface clutter around the target. In [68], an idea is proposed to build a model in which the clutter is also taken into account as a norm in the objective function. In [80], the forward-looking clutter is suppressed in two steps. In the first step, the strong clutter outside of the reconstruction region is suppressed.
In the second step, the clutter in the reconstruction region is suppressed by selecting a proper β, which represents the ratio of the non-zero area in the reconstructed scene. The reconstruction results are shown in Figure 14.
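The role of β can be illustrated with a deliberately loose sketch: treating β as the fraction of image pixels allowed to remain non-zero and zeroing the rest. This is an assumed simplification for illustration only, not the two-step suppression algorithm of [80]; the function `keep_fraction` and the toy image are invented for this sketch.

```python
import numpy as np

def keep_fraction(image, beta):
    """Retain only the strongest beta fraction of pixels, zeroing the rest
    (an assumed interpretation of the non-zero-area ratio, for illustration)."""
    flat = np.abs(image).ravel()
    n_keep = max(1, int(round(beta * flat.size)))
    thresh = np.sort(flat)[-n_keep]
    return np.where(np.abs(image) >= thresh, image, 0)

# Toy image: one strong target plus weak clutter distributed over the whole scene.
rng = np.random.default_rng(3)
img = 0.1 * rng.random((32, 32))
img[16, 16] = 1.0
cleaned = keep_fraction(img, beta=0.01)
print(np.count_nonzero(cleaned))  # 10 pixels survive (1% of 1024)
```

Choosing β too large leaves distributed clutter in the result, while choosing it too small risks suppressing weak targets, which is why [80] reports results for different parameter settings (Figure 14).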

3.4. Influence of Clutter Another practical case is when the targets of interest are sparse, but there also exists clutter in the scene. Clutter arises from reflections within the scene, so the image may no longer be sparse if significant clutter returns are present. Typical examples include GPR and TWR imaging. The interesting targets, such as landmines and humans, are usually sparse, but they are often buried in the ground surface clutter and wall clutter. Some methods have been proposed to remove the ground surface clutter and wall clutter for downward-looking GPR and TWR [64,65]. These methods are effective in cases when the clutter is concentrated in a fixed range cell or limited to several range cells. Another scenario is TWR/SAR imaging of moving targets. A sparsity-driven change detection method is proposed in [67]. The stationary targets and clutter are removed via change detection, and then CS reconstruction is applied to the resulting sparse scene. In [55], a SAR/GMTI method using distributed CS is proposed, which can cope with the non-sparse stationary clutter. A more difficult case is when both the targets and clutter are stationary, and the clutter is distributed over the whole scene. Forward-looking GPR may fall into this category. Figure 13 shows a real data example for this case. In such a scenario, shrubs and rocks above the ground surface may cause strong azimuth clutter. Short range clutter is usually also strong, due to the large grazing angle and short range. Besides the strong clutter far away from the target (landmine), there is also ground surface clutter around the target. In [68], an idea is proposed to build a model in which the clutter is also taken into account as a norm in the objective function. In [80], the forward-looking clutter is suppressed in two steps. In the first step, the strong clutter outside of the reconstruction region is suppressed first. 
In the second step, the clutter in the reconstruction region is suppressed by selecting a proper β, which represents the ratio of the non-zero area in the reconstructed scene. The reconstruction results are shown in Figure 14.
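One simple way to interpret the β constraint described above, sketched here as a hypothetical post-processing step (not the exact selection procedure of [80]): keep only the strongest fraction β of pixels by magnitude and force the rest to zero.

```python
import numpy as np

def keep_top_beta(image, beta):
    """Keep only the strongest fraction `beta` of pixels (by magnitude).

    A hypothetical illustration of constraining the non-zero-area ratio,
    not the exact selection procedure of [80].
    """
    flat = np.abs(image).ravel()
    k = max(1, int(np.ceil(beta * flat.size)))   # number of pixels to keep
    thresh = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return np.where(np.abs(image) >= thresh, image, 0)
```

For a 10 × 10 image and β = 0.05, only the five strongest pixels survive; everything else, including weak distributed clutter, is forced to zero.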

Figure 13. Real data example for the clutter problem in forward-looking GPR (backprojection result using full-sampled data). Taken from [80].

Figure 14. Reconstruction results in clutter environment with different parameters (taken from [80]).
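The change-detection-then-CS idea of [67] discussed in this section can be sketched as follows. This is a minimal toy illustration, assuming an idealized linear model y = Ax, exact cancellation of stationary returns by subtracting a reference pass, and a basic ISTA solver; none of the parameter values come from the cited papers.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Basic ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (complex x)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        r = x - step * (A.conj().T @ (A @ x - y))  # gradient step
        mag = np.abs(r)
        shrink = np.maximum(mag - lam * step, 0.0) # complex soft threshold
        x = np.where(mag > 0, r / np.maximum(mag, 1e-12), 0) * shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))
x_stat = rng.standard_normal(100)                  # dense stationary clutter
x_move = np.zeros(100, dtype=complex)
x_move[7], x_move[42] = 2.0, -1.5j                 # sparse moving targets
y_ref = A @ x_stat                                 # reference (stationary) pass
y_cur = A @ (x_stat + x_move)                      # current pass
x_hat = ista(A, y_cur - y_ref)                     # CS on the sparse change
```

The subtraction removes the dense stationary clutter, so the CS step only has to reconstruct the sparse change signal, which is exactly why the scheme tolerates non-sparse backgrounds.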

3.5. Model Error Compensation

In the regularization or CS methods, we usually assume that the model is exact. However, in practice, the model may also contain errors. For example, imperfect knowledge of the observation position will lead to errors in the measurement matrix. This effect resembles the motion errors that arise in traditional airborne SAR imaging. Figure 15 shows the geometry of the observation position errors, or motion errors, in SAR.

Several methods have been proposed to deal with model errors in CS-based or sparsity-driven radar imaging. A phase error correction method for sparsity-driven SAR imaging is proposed in [81]. An autofocus method for compressively sampled SAR is proposed in [82]. This method can correct phase errors in the reconstruction process. Both the methods proposed in [81,82] deal with phase errors in the observed data, or approximately treat the observation position-induced model errors as phase errors in the observed data. In [83], the platform position errors are investigated and compensated. That method considers the azimuth offset errors and also uses some approximations. In [84], a model error compensation method is proposed.
An iterative algorithm that cycles through steps of target reconstruction and observation position error estimation and compensation is used. This method can estimate the observation position error accurately while relying only on the observed data.
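The alternating structure of such compensation methods can be sketched in a toy form. Here, as in the approximation used by [81,82], the model errors are represented as one unknown phase per measurement; the loop below alternates a basic ISTA reconstruction with a closed-form phase re-estimate. It is an illustrative sketch under these assumptions, not the exact algorithm of [84].

```python
import numpy as np

def soft(r, t):
    """Complex soft-thresholding."""
    mag = np.abs(r)
    return np.where(mag > 0, r / np.maximum(mag, 1e-12), 0) * np.maximum(mag - t, 0.0)

def autofocus_cs(A, y, lam=0.05, outer=10, inner=100):
    """Alternate sparse reconstruction and per-measurement phase estimation.

    Toy model: y_m = exp(j*phi_m) * (A x)_m with unknown phases phi_m,
    a rough stand-in for position-induced model errors.
    """
    phi = np.zeros(A.shape[0])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(outer):
        yc = np.exp(-1j * phi) * y                         # phase-corrected data
        for _ in range(inner):                             # ISTA for sparse x
            x = soft(x - step * (A.conj().T @ (A @ x - yc)), lam * step)
        phi = np.angle(y * np.conj(A @ x))                 # re-estimate phases
    return x, phi

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))
x_true = np.zeros(100, dtype=complex)
x_true[7], x_true[42] = 2.0, -1.5j
phi_true = rng.uniform(-0.5, 0.5, 40)                      # unknown phase errors
y = np.exp(1j * phi_true) * (A @ x_true)
x_hat, phi_hat = autofocus_cs(A, y)
```

Note the global phase ambiguity: a constant phase can move freely between the image and the phase estimates, so only the relative phase profile (and the target support) is identifiable.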



Figure 15. Geometry of the observation position errors in SAR. (Taken from [84]).
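For intuition on why such position errors defocus the image: in the monostatic case, a line-of-sight position error Δr changes the two-way echo path by 2Δr, giving a phase error of 4πΔr/λ. A quick numeric check (the wavelength value is illustrative, not taken from the cited experiment):

```python
import math

def phase_error(delta_r, wavelength):
    """Two-way phase error (radians) caused by a line-of-sight position
    error delta_r, assuming monostatic geometry: the echo path changes
    by 2*delta_r, so phi = 4*pi*delta_r / wavelength."""
    return 4.0 * math.pi * delta_r / wavelength

# Illustrative numbers: at a 3 cm wavelength (roughly X band), a 1 cm
# trajectory error already produces a phase error larger than pi.
phi = phase_error(0.01, 0.03)   # 4*pi/3 ≈ 4.19 rad
```

Because even centimeter-level trajectory deviations produce phase errors of this size, uncompensated observation position errors quickly destroy the coherence that the sparse reconstruction relies on.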

Figure 16 shows a real data result using the method proposed in [84]. The data set used in this figure is the same as that used for Figure 9. In the data acquisition process, the airplane is expected to fly along a straight line. However, due to the influence of air currents, the trajectory of the airplane may deviate slightly from the expected one. As a result, the observation position data inevitably contain some errors.

Figure 16a shows the original CS reconstruction result. Since the observation position errors are not compensated, it can be seen that the targets are somewhat defocused. Figure 16b shows the corresponding CS reconstruction result with compensation for the observation position error. It can be seen that the focusing quality is improved using the method proposed in [84]. The peak of the targets increases by about 20%, and the sidelobes are also significantly reduced.

Figure 16. Observation position error compensation for airborne SAR data. (a) Result without observation position error compensation. (b) Result with observation position error compensation (taken from [84]).


4. Conclusions

In the radar imaging area, there are many relevant techniques and methods, such as matched filtering, the range Doppler algorithm, the chirp scaling algorithm, the ωK algorithm, regularized methods, and CS methods. These techniques and methods are quite different in their forms. This paper has tried to present them within a unified mathematical framework. Based on the theoretical analysis, it can be seen that sparsity-driven regularization and CS-based radar imaging methods have potentially significant advantages. However, although many interesting ideas have been presented, very few of them have been verified with real data. There are still many unsolved or open problems in this area. Among the issues discussed in this paper, the sampling scheme, fast reconstruction strategy, and model error problems are largely solved. However, issues concerning the sparsity or sparse representation of a complex and extended scene are still not completely solved. Strong clutter may break the sparsity of a scene, and sparse representation methods for an extended scene are currently imperfect. The state of the art in these areas has not yet reached the stage of practical application, and further investigation is needed.

Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 61401474, and in part by the Hunan Provincial Natural Science Foundation under Grant 2016JJ3025.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Cumming, I.; Wong, F. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Boston, MA, USA, 2005.
2. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [CrossRef]
3. Mensa, D.L. High Resolution Radar Imaging; Artech House: Norwood, MA, USA, 1981.
4. Potter, L.C.; Ertin, E.; Parker, J.T.; Çetin, M. Sparsity and compressed sensing in radar imaging. Proc. IEEE 2010, 98, 1006–1020. [CrossRef]
5. Mohammad-Djafari, A. Inverse Problems in Vision and 3D Tomography; ISTE: London, UK; John Wiley and Sons: New York, NY, USA, 2010.
6. Mohammad-Djafari, A. Bayesian approach with prior models which enforce sparsity in signal and image processing. EURASIP J. Adv. Signal Process. 2012, 2012, 52. [CrossRef]
7. Mohammad-Djafari, A.; Daout, F.; Fargette, P. Fusion and inversion of SAR data to obtain a superresolution image. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 569–572.
8. Groetsch, C.W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind; Pitman Publishing Limited: London, UK, 1984.
9. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; Wiley: New York, NY, USA, 1977.
10. Miller, K. Least squares methods for ill-posed problems with a prescribed bound. SIAM J. Math. Anal. 1970, 1, 52–57. [CrossRef]
11. Blacknell, D.; Quegan, S. SAR super-resolution with a stochastic point spread function. In Proceedings of the IEE Colloquium on Synthetic Aperture Radar, London, UK, 29 November 1989.
12. Pryde, G.C.; Delves, L.M.; Luttrell, S.P. Transputer based super resolution of SAR images: Two approaches to a parallel solution. In Proceedings of the IEE Colloquium on Transputers for Image Processing Applications, London, UK, 13 February 1989.
13. Kay, S.M. Fundamentals of Statistical Signal Processing; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1993.
14. Bouman, C.; Sauer, K. A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans. Image Process. 1993, 2, 296–310. [CrossRef] [PubMed]

15. Chang, L.T.; Gupta, I.J.; Burnside, W.D.; Chang, C.T. A data compression technique for scattered fields from complex targets. IEEE Trans. Antennas Propag. 1997, 45, 1245–1251. [CrossRef]
16. Candès, E.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [CrossRef]
17. Baraniuk, R.G. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–121. [CrossRef]
18. Jafarpour, S.; Weiyu, X.; Hassibi, B.; Calderbank, R. Efficient and robust compressed sensing using optimized expander graphs. IEEE Trans. Inf. Theory 2009, 55, 4299–4308. [CrossRef]
19. Yuejie, C.; Scharf, L.L.; Pezeshki, A.; Calderbank, R. Sensitivity to basis mismatch in compressed sensing. IEEE Trans. Signal Process. 2011, 59, 2182–2195.
20. Raginsky, M.; Jafarpour, S.; Harmany, Z.T.; Marcia, R.F.; Willett, R.M.; Calderbank, R. Performance bounds for expander-based compressed sensing in Poisson noise. IEEE Trans. Signal Process. 2011, 59, 4139–4153. [CrossRef]
21. Logan, C.L. An Estimation-Theoretic Technique for Motion Compensated Synthetic-Aperture Array Imaging. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
22. Browne, K.E.; Burkholder, R.J.; Volakis, J.L. Fast optimization of through-wall radar images via the method of Lagrange multipliers. IEEE Trans. Antennas Propag. 2013, 61, 320–328. [CrossRef]
23. Romberg, J. Imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 14–20. [CrossRef]
24. Çetin, M.; Karl, W.C. Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. Image Process. 2001, 10, 623–631.
25. Kalogerias, D.; Sun, S.; Petropulu, A.P. Sparse sensing in colocated MIMO radar: A matrix completion approach. In Proceedings of the 2013 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Athens, Greece, 12–15 December 2013; pp. 496–502.
26. Yao, Y.; Petropulu, A.P.; Poor, H.V. Measurement matrix design for compressive sensing-based MIMO radar. IEEE Trans. Signal Process. 2011, 59, 5338–5352.
27. Yao, Y.; Petropulu, A.P.; Poor, H.V. MIMO radar using compressive sampling. IEEE J. Sel. Top. Signal Process. 2010, 4, 146–163.
28. Rossi, M.; Haimovich, A.M.; Eldar, Y.C. Spatial compressive sensing for MIMO radar. IEEE Trans. Signal Process. 2014, 62, 419–430. [CrossRef]
29. Rossi, M.; Haimovich, A.M.; Eldar, Y.C. Global methods for compressive sensing in MIMO radar with distributed sensors. In Proceedings of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 1506–1510.
30. Rossi, M.; Haimovich, A.M.; Eldar, Y.C. Spatial compressive sensing in MIMO radar with random arrays. In Proceedings of the 46th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 21–23 March 2012.
31. Herman, M.; Strohmer, T. Compressed sensing radar. In Proceedings of the 2008 IEEE Radar Conference, Rome, Italy, 26–30 May 2008.
32. Herman, M.; Strohmer, T. High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 2009, 57, 2275–2284. [CrossRef]
33. Herman, M.; Strohmer, T. General deviants: An analysis of perturbations in compressed sensing. IEEE J. Sel. Top. Signal Process. 2010, 4, 342–349. [CrossRef]
34. Baraniuk, R.G.; Steeghs, P. Compressive radar imaging. In Proceedings of the 2007 IEEE Radar Conference, Boston, MA, USA, 17–20 April 2007; pp. 128–133.
35. Ender, J.H.G. On compressive sensing applied to radar. Signal Process. 2010, 90, 1402–1414. [CrossRef]
36. Çetin, M.; Stojanovic, I.; Önhon, N.Ö.; Varshney, K.; Samadi, S.; Karl, W.C.; Willsky, A. Sparsity-driven synthetic aperture radar imaging: Reconstruction, autofocusing, moving targets, and compressed sensing. IEEE Signal Process. Mag. 2014, 31, 27–40. [CrossRef]
37. Zhang, B.; Hong, W.; Wu, Y. Sparse microwave imaging: Principles and applications. Sci. China Inf. Sci. 2012, 55, 1722–1754. [CrossRef]
38. Bi, H.; Zhang, B.; Zhu, X.; Jiang, C.; Hong, W. Extended chirp scaling-baseband azimuth scaling-based azimuth-range decouple L1 regularization for TOPS SAR imaging via CAMP. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3748–3763. [CrossRef]
39. Patel, V.M.; Easley, G.R.; Healy, D.M., Jr.; Chellappa, R. Compressed synthetic aperture radar. IEEE J. Sel. Top. Signal Process. 2010, 4, 244–254. [CrossRef]

40. Alonso, M.T.; López-Dekker, P.; Mallorquí, J.J. A novel strategy for radar imaging based on compressive sensing. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4285–4295. [CrossRef]
41. Samadi, S.; Çetin, M.; Masnadi-Shirazi, M.A. Sparse representation-based synthetic aperture radar imaging. IET Radar Sonar Navig. 2011, 5, 182–193. [CrossRef]
42. Yang, L.; Zhou, J.; Hu, L.; Xiao, H. A perturbation-based approach for compressed sensing radar imaging. IEEE Antennas Wirel. Propag. Lett. 2017, 16, 87–90. [CrossRef]
43. Zhang, L.; Qiao, Z.; Xing, M.; Li, Y.; Bao, Z. High-resolution ISAR imaging with sparse stepped-frequency waveforms. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4630–4651. [CrossRef]
44. Wang, H.; Quan, Y.; Xing, M.; Zhang, S. ISAR imaging via sparse probing frequencies. IEEE Geosci. Remote Sens. Lett. 2011, 8, 451–455. [CrossRef]
45. Du, X.; Duan, C.; Hu, W. Sparse representation based autofocusing technique for ISAR images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1826–1835. [CrossRef]
46. Zhu, X.X.; Bamler, R. Tomographic SAR inversion by L1-norm regularization: The compressive sensing approach. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3839–3846. [CrossRef]
47. Zhu, X.X.; Bamler, R. Super-resolution power and robustness of compressive sensing for spectral estimation with application to spaceborne tomographic SAR. IEEE Trans. Geosci. Remote Sens. 2012, 50, 247–258. [CrossRef]
48. Zhu, X.X.; Ge, N.; Shahzad, M. Joint sparsity in SAR tomography for urban mapping. IEEE J. Sel. Top. Signal Process. 2015, 9, 1498–1509. [CrossRef]
49. Budillon, A.; Evangelista, A.; Schirinzi, G. Three-dimensional SAR focusing from multipass signals using compressive sampling. IEEE Trans. Geosci. Remote Sens. 2011, 49, 488–499. [CrossRef]
50. Xilong, S.; Anxi, Y.; Zhen, D.; Diannong, L. Three-dimensional SAR focusing via compressive sensing: The case study of angel stadium. IEEE Geosci. Remote Sens. Lett. 2012, 9, 759–763. [CrossRef]
51. Aguilera, E.; Nannini, M.; Reigber, A. Multisignal compressed sensing for polarimetric SAR tomography. IEEE Geosci. Remote Sens. Lett. 2012, 9, 871–875. [CrossRef]
52. Kajbaf, H. Compressed Sensing for 3D Microwave Imaging Systems. Ph.D. Thesis, Missouri University of Science and Technology, Rolla, MO, USA, 2012.
53. Zhang, S.; Dong, G.; Kuang, G. Superresolution downward-looking linear array three-dimensional SAR imaging based on two-dimensional compressive sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2184–2196. [CrossRef]
54. Tian, H.; Li, D. Sparse flight array SAR downward-looking 3-D imaging based on compressed sensing. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1395–1399. [CrossRef]
55. Lin, Y.G.; Zhang, B.C.; Hong, W.; Wu, Y.R. Along-track interferometric SAR imaging based on distributed compressed sensing. Electron. Lett. 2010, 46, 858–860. [CrossRef]
56. Prünte, L. Compressed sensing for joint ground imaging and target indication with airborne radar. In Proceedings of the 4th Workshop on Signal Processing with Adaptive Sparse Structured Representations, Edinburgh, UK, 27–30 June 2011.
57. Wu, Q.; Xing, M.; Qiu, C.; Liu, B.; Bao, Z.; Yeo, T.S. Motion parameter estimation in the SAR system with low PRF sampling. IEEE Geosci. Remote Sens. Lett. 2010, 7, 450–454. [CrossRef]
58. Stojanovic, I.; Karl, W.C. Imaging of moving targets with multi-static SAR using an overcomplete dictionary. IEEE J. Sel. Top. Signal Process. 2010, 4, 164–176. [CrossRef]
59. Önhon, N.Ö.; Çetin, M. SAR moving target imaging in a sparsity-driven framework. In Proceedings of the SPIE Optics + Photonics Symposium, Wavelets and Sparsity XIV, San Diego, CA, USA, 21–25 August 2011; Volume 8138, pp. 1–9.
60. Prünte, L. Detection performance of GMTI from SAR images with CS. In Proceedings of the 2014 European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014.
61. Prünte, L. Compressed sensing for removing moving target artifacts and reducing noise in SAR images. In Proceedings of the European Conference on Synthetic Aperture Radar, Hamburg, Germany, 6–9 June 2016.
62. Suksmono, A.B.; Bharata, E.; Lestari, A.A.; Yarovoy, A.G.; Ligthart, L.P. Compressive stepped-frequency continuous-wave ground-penetrating radar. IEEE Geosci. Remote Sens. Lett. 2010, 7, 665–669. [CrossRef]
63. Gurbuz, A.C.; McClellan, J.H.; Scott, W.R. A compressive sensing data acquisition and imaging method for stepped frequency GPRs. IEEE Trans. Signal Process. 2009, 57, 2640–2650. [CrossRef]

64. Tuncer, M.A.C.; Gurbuz, A.C. Ground reflection removal in compressive sensing ground penetrating radars. IEEE Geosci. Remote Sens. Lett. 2012, 9, 23–27. [CrossRef]
65. Lagunas, E.; Amin, M.G.; Ahmad, F.; Nájar, M. Joint wall mitigation and compressive sensing for indoor image reconstruction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 891–906. [CrossRef]
66. Amin, M.G. Through-the-Wall Radar Imaging; CRC Press: Boca Raton, FL, USA, 2010.
67. Ahmad, F.; Amin, M.G. Through-the-wall human motion indication using sparsity-driven change detection. IEEE Trans. Geosci. Remote Sens. 2013, 51, 881–890. [CrossRef]
68. Strohmer, T. Radar and compressive sensing: A perfect couple? Keynote speech at the 1st International Workshop on Compressed Sensing Applied to Radar (CoSeRa 2012), Bonn, Germany, 14–16 May 2012.
69. Yang, J.; Thompson, J.; Huang, X.; Jin, T.; Zhou, Z. Random-frequency SAR imaging based on compressed sensing. IEEE Trans. Geosci. Remote Sens. 2013, 51, 983–994. [CrossRef]
70. Qin, S.; Zhang, Y.D.; Wu, Q.; Amin, M. Large-scale sparse reconstruction through partitioned compressive sensing. In Proceedings of the International Conference on Digital Signal Processing, Hong Kong, China, 20–23 August 2014; pp. 837–840.
71. Yang, J.; Thompson, J.; Huang, X.; Jin, T.; Zhou, Z. Segmented reconstruction for compressed sensing SAR imaging. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4214–4225. [CrossRef]
72. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91. [CrossRef]
73. Çetin, M.; Karl, W.C.; Willsky, A.S. Feature-preserving regularization method for complex-valued inverse problems with application to coherent imaging. Opt. Eng. 2006, 45, 017003.
74. Çetin, M.; Önhon, N.Ö.; Samadi, S. Handling phase in sparse reconstruction for SAR: Imaging, autofocusing, and moving targets. In Proceedings of the EUSAR 2012, 9th European Conference on Synthetic Aperture Radar, Nuremberg, Germany, 23–26 April 2012.
75. Varshney, K.R.; Çetin, M.; Fisher, J.W., III; Willsky, A.S. Sparse representation in structured dictionaries with application to synthetic aperture radar. IEEE Trans. Signal Process. 2008, 56, 3548–3560. [CrossRef]
76. Abolghasemi, V.; Gan, L. Dictionary learning for incomplete SAR data. In Proceedings of the CoSeRa 2012, Bonn, Germany, 14–16 May 2012.
77. Halman, J.; Burkholder, R.J. Sparse expansions using physical and polynomial basis functions for compressed sensing of frequency domain EM scattering. IEEE Antennas Wirel. Propag. Lett. 2015, 14, 1048–1051. [CrossRef]
78. Van Den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [CrossRef]
79. Yang, J.; Jin, T.; Huang, X. Compressed sensing radar imaging with magnitude sparse representation. IEEE Access 2019, 7, 29722–29733. [CrossRef]
80. Yang, J.; Jin, T.; Huang, X.; Thompson, J.; Zhou, Z. Sparse MIMO array forward-looking GPR imaging based on compressed sensing in clutter environment. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4480–4494. [CrossRef]
81. Önhon, N.Ö.; Çetin, M. A sparsity-driven approach for joint SAR imaging and phase error correction. IEEE Trans. Image Process. 2012, 21, 2075–2088.
82. Kelly, S.I.; Yaghoobi, M.; Davies, M.E. Auto-focus for compressively sampled SAR. In Proceedings of the CoSeRa 2012, Bonn, Germany, 14–16 May 2012.
83. Wei, S.J.; Zhang, X.L.; Shi, J. An autofocus approach for model error correction in compressed sensing SAR imaging. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2012), Munich, Germany, 22–27 July 2012.
84. Yang, J.; Huang, X.; Thompson, J.; Jin, T.; Zhou, Z. Compressed sensing radar imaging with compensation of observation position error. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4608–4620. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).