Editorial Board (Supported by NSFC)

Honorary Editor General: ZHOU GuangZhao (Zhou Guang Zhao)
Editor General: ZHU ZuoYan, Institute of Hydrobiology, CAS
Editor-in-Chief: LI Wei, Beihang University
Executive Associate Editor-in-Chief: WANG DongMing, Centre National de la Recherche Scientifique

Associate Editors-in-Chief

GUO Lei, Academy of Mathematics and Systems Science, CAS
HUANG Ru, Peking University
QIN YuWen, National Natural Science Foundation of China
SUN ZengQi, Tsinghua University
YOU XiaoHu, Southeast University
ZHAO QinPing, Beihang University
ZHAO Wei, University of Macau

Members

CHEN JianEr, Texas A&M University
DU Richard LiMin, Voxeasy Institute of Technology
GAO Wen, Peking University
GE ShuZhi Sam, University of Electronic Science and Technology of China
GUO GuangCan, University of Science and Technology of China
HAN WenBao, PLA Information Engineering University
HE JiFeng, East China Normal University
HU WeiWu, Institute of Computing Technology, CAS
HU ZhanYi, Institute of Automation, CAS
IDA Tetsuo, University of Tsukuba
JI YueFeng, Beijing University of Posts and Telecommunications
JIN Hai, Huazhong University of Science and Technology
JIN YaQiu, Fudan University
JING ZhongLiang, Shanghai Jiao Tong University
LI Joshua LeWei, University of Electronic Science and Technology of China
LIN HuiMin, Institute of Software, CAS
LIN ZongLi, University of Virginia
LIU DeRong, Institute of Automation, CAS
LONG KePing, University of Science and Technology Beijing
LU Jian, Nanjing University
MEI Hong, Peking University
MENG LuoMing, Beijing University of Posts and Telecommunications
PENG LianMao, Peking University
PENG QunSheng, Zhejiang University
SHEN ChangXiang, Computing Technology Institute of China Navy
SUN JiaGuang, Tsinghua University
TANG ZhiMin, Institute of Computing Technology, CAS
TIAN Jie, Institute of Automation, CAS
TSAI WeiTek, Arizona State University
WANG Ji, National University of Defense Technology
WANG JiangZhou, University of Kent
WANG Long, Peking University
WU YiRong, Institute of Electronics, CAS
XIE WeiXin, Shenzhen University
XU Jun, Tsinghua University
XU Ke, Beihang University
YIN QinYe, Xi'an Jiaotong University
YING MingSheng, Tsinghua University
ZHA HongBin, Peking University
ZHANG HuanGuo, Wuhan University
ZHOU Dian, The University of Texas at Dallas
ZHOU ZhiHua, Nanjing University
ZHUANG YueTing, Zhejiang University

Editorial Staff: SONG Fei, FENG Jing, ZHAO DongXia

SCIENCE CHINA Information Sciences

Contents Vol. 55 No. 11 November 2012

RESEARCH PAPER

Study on co-occurrence character networks from Chinese essays in different periods ...... 2417
LIANG Wei, SHI YuMing, TSE Chi K & WANG YanLi

Automatic composition of information-providing web services based on query rewriting ...... 2428
ZHAO WenFeng, LIU ChuanChang & CHEN JunLiang

A construction method of matroidal networks ...... 2445
YUAN Chen, KAN HaiBin, WANG Xin & IMAI Hideki

Communication network designing: Transmission capacity, cost and scalability ...... 2454
ZHANG GuoQiang & ZHANG GuoQing

Corner occupying theorem for the two-dimensional integral rectangle packing problem ...... 2466
HUANG WenQi, YE Tao & CHEN DuanBing

Round-optimal zero-knowledge proofs of knowledge for NP ...... 2473
LI HongDa, FENG DengGuo, LI Bao & XUE HaiXia

Binary particle swarm optimization with multiple evolutionary strategies ...... 2485
ZHAO Jing, HAN ChongZhao & WEI Bin

Communications and control co-design: a combined dynamic-static scheduling approach ...... 2495
LU ZiBao & GUO Ge

Optimized statistical analysis of software trustworthiness attributes ...... 2508
ZHANG Xiao, LI Wei, ZHENG ZhiMing & GUO BingHui

A new one-bit difference collision attack on HAVAL-128 ...... 2521
ZHANG WenYing, LI YanYan & WU Lei

Signcryption with fast online signing and short signcryptext for secure and private mobile communication ...... 2530
YOUN Taek-Young & HONG Dowon

An ID-based authenticated dynamic group key agreement with optimal round ...... 2542
TENG JiKai, WU ChuanKun & TANG ChunMing

Evolutionary ciphers against differential power analysis and differential fault analysis ...... 2555
TANG Ming, QIU ZhenLong, YANG Min, CHENG PingPan, GAO Si, LIU ShuBo & MENG QinShu

An oblivious fragile watermarking scheme for images utilizing edge transitions in BTC bitmaps ...... 2570
ZHANG Yong, LU ZheMing & ZHAO DongNing

The essential ability of sparse reconstruction of different compressive sensing strategies ...... 2582
ZHANG Hai, LIANG Yong, GOU HaiLiang & XU ZongBen

Waveform design and high-resolution imaging of cognitive radar based on compressive sensing ...... 2590
LUO Ying, ZHANG Qun, HONG Wen & WU YiRong

An efficient sparse channel estimator combining time-domain LS and iterative shrinkage for OFDM systems with IQ-imbalances ...... 2604
SHU Feng, ZHAO JunHui, YOU XiaoHu, WANG Mao, CHEN Qian & STEVAN Berber

Object registration for remote sensing images using robust kernel pattern vectors ...... 2611
DING MingTao, JIN Zi, TIAN Zheng, DUAN XiFa, ZHAO Wei & YANG LiJuan

Quasi-linear modeling of gyroresonance between different MLT chorus and geostationary orbit electrons ...... 2624
ZHANG ZeLong, XIAO FuLiang, HE YiHua, HE ZhaoGuo, YANG Chang, ZHOU XiaoPing & TANG LiJun

A variational method for contour tracking via covariance matching ...... 2635
WU YuWei, MA Bo & LI Pei

Near lossless compression of hyperspectral images based on distributed source coding ...... 2646
NIAN YongJian, WAN JianWei, TANG Yi & CHEN Bo

A 10 GHz multiphase LC VCO with a ring capacitive coupling structure ...... 2656
CHEN YingMei, WANG Hui, YAN ShuangChao & ZHANG Li

SCIENCE CHINA Information Sciences

RESEARCH PAPER · November 2012, Vol. 55, No. 11: 2582–2589 · doi: 10.1007/s11432-011-4502-6

The essential ability of sparse reconstruction of different compressive sensing strategies

ZHANG Hai1,2*, LIANG Yong3, GOU HaiLiang1 & XU ZongBen1

1 Institute for Information and System Science, Xi'an Jiaotong University, Xi'an 710049, China;
2 Department of Mathematics, Northwest University, Xi'an 710069, China;
3 University of Science and Technology, Macau 999078, China

Received March 30, 2010; accepted July 8, 2010; published online January 14, 2012

Abstract We show the essential ability of sparse signal reconstruction of different compressive sensing strategies, which include the $L_1$ regularization, the $L_0$ regularization (thresholding iteration algorithm and OMP algorithm), the $L_q$ $(0 < q < 1)$ regularization (reweighted $L_1$ algorithm), the Log regularization and the SCAD regularization, based on the phase diagram.

Keywords compressive sensing, regularization, sparsity, L1/2 regularization

Citation Zhang H, Liang Y, Gou H L, et al. The essential ability of sparse reconstruction of different compressive sensing strategies. Sci China Inf Sci, 2012, 55: 2582–2589, doi: 10.1007/s11432-011-4502-6

1 Introduction

Compressive sensing [1,2], also called sparse signal reconstruction, is a novel sampling paradigm for reconstructing sparse signals. It has been widely used in the areas of signal processing, image processing and machine learning in recent years. Different from the traditional Shannon/Nyquist theory, compressive sensing makes it possible to recover a sparse signal from far fewer samples than Shannon sampling requires. It therefore provides a new way of acquiring, processing and applying complex information, and the theory has become a remarkable achievement in the area of signal processing. In fact, compressive sensing amounts to finding the sparse solution of the underdetermined linear equation

$$b = Ax, \qquad A \in \mathbb{R}^{m \times N}\ (m \ll N),$$
where $x$ is the signal, $A$ is an $m \times N$ measurement matrix and $b$ is the measured vector. Because $m \ll N$, the system is underdetermined and its solution is not unique. A sparse signal means that the signal, when expressed in some basis or dictionary, has very few significant coefficients, while the remaining ones are zero or approximately equal to zero.

* Corresponding author (email: [email protected])

For example, communications signals are often sparse in the short-time Fourier domain, and radar signals are sparse in the chirplet domain. Following tradition, we write $\|x\|_0$ for the number of nonzero components of a vector $x$, and we wish to single out the sparsest solution. Traditionally, the following optimization model is used to obtain the sparsest solutions:
$$(P_0)\quad \min \|x\|_0, \quad \text{s.t. } b = Ax.$$
A minimal problem-setup sketch is given below.
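To make the setting concrete, here is a minimal sketch, assuming NumPy and illustrative sizes ($m = 64$, $N = 256$, $k = 8$) of our own choosing, of how such an underdetermined problem $b = Ax$ with a $k$-sparse $x$ can be generated; it is not the authors' code.

```python
# Minimal sketch (illustrative sizes): a k-sparse x observed through
# an m x N Gaussian measurement matrix A with m << N.
import numpy as np

def make_sparse_problem(m=64, N=256, k=8, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, N)) / np.sqrt(m)    # i.i.d. Gaussian measurements
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)  # k nonzero positions
    x[support] = rng.standard_normal(k)
    b = A @ x                                       # noiseless observations
    return A, x, b

A, x, b = make_sparse_problem()
print(np.count_nonzero(x))  # ||x||_0 = 8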

The problem $(P_0)$ is of little practical use, however, since it is nonconvex and therefore generally difficult to solve, as its solution usually requires an intractable combinatorial search. To solve the above problem, alternative strategies for finding sparsest solutions have been put forward, such as the orthogonal greedy algorithm and basis pursuit. The latter replaces the problem $(P_0)$ by $L_1$ minimization,

$$(P_1)\quad \min \|x\|_1, \quad \text{s.t. } b = Ax,$$
which can be solved by linear programming. This method can be traced back to works of the 1960s [3,4] and has been widely applied in statistics and signal processing since the 1990s. Lasso (least absolute shrinkage and selection operator) [5] and BPDN (basis pursuit de-noising) [6] are representatives of this $L_1$ method. Recently, due to the works of Donoho, Candes and Tao et al. [2,7], general conditions under which the solutions of $(P_0)$ and $(P_1)$ are equivalent have been established, given some conditions on the measurement matrix $A$. After their significant work, the $L_1$ method has become so popular that it has been considered the "modern least squares".

In general, compressive sensing focuses on the following three questions: (i) the sparse representation of the information (signal), that is, how to choose the basis under which the signal has the sparsest representation; (ii) the simple sampling mechanism, that is, how to design a proper sampling mechanism under which the signal can be exactly reconstructed; (iii) the exact reconstruction strategy, that is, how to choose a proper strategy to reconstruct the sparse signal efficiently. Obviously, these three questions are correlated. The sparse representation of a signal is the foundation; the sampling mechanism is the basis for application and reconstruction; and the exact reconstruction algorithm is the central problem of the three, whose efficiency relies on the representation and the sampling mechanism.

The sparse representation of signals has been widely studied [8–10]. Based on the theory of wavelet analysis, Mallat provided matching pursuit to decompose any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. This is the major sparse signal representation method. The sampling mechanism has also been widely studied [11,12], in accordance with the signal reconstruction algorithm. Typically, Candes proved that if the measurement matrix is an i.i.d. Gaussian matrix, then the sparse signal can be exactly recovered with high probability under some weak conditions.

Designing a sparse signal reconstruction strategy is the central problem of compressive sensing. On the one hand, faster and more efficient algorithms are being designed to solve the $L_0$ problem directly, which is the focus of [13,14]. On the other hand, new recovery strategies are being proposed, such as the $L_1$ method which minimizes the $L_1$ norm [6], the $L_q$ $(0 < q < 1)$ method [15,16] and the Log method [17]. A sketch of the orthogonal greedy algorithm mentioned above is given below.
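As a concrete reference for the orthogonal greedy strategy discussed above, here is a hedged NumPy sketch of orthogonal matching pursuit in the spirit of [14]; the function name and the assumption that the sparsity level $k$ is known in advance are ours.

```python
# Sketch of orthogonal matching pursuit (OMP, cf. [14]): repeatedly pick
# the column most correlated with the residual, then re-fit by least
# squares on the selected support.
import numpy as np

def omp(A, b, k):
    """Recover a k-sparse x from b = A x by orthogonal matching pursuit."""
    m, N = A.shape
    residual = b.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # orthogonalized residual
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```

For instance, `omp(A, b, 8)` applied to the problem generated in the earlier sketch should recover `x` exactly in the noiseless case.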

2 The framework of sparse signal reconstruction and phase diagram

In this section, we first introduce the sparse signal reconstruction problem and some classical sparse signal reconstruction algorithms.

2.1 The sparse signal reconstruction

In general, the sparse signal reconstruction problem can be stated as follows: consider a real-valued, finite-length signal $x$, viewed as an $N \times 1$ vector in $\mathbb{R}^N$. Then $x$ can be represented in an orthonormal basis $\{\varphi_i\}_{i=1}^N$, using the $N \times N$ basis matrix $\Psi = [\varphi_1, \varphi_2, \ldots, \varphi_N]$ with the vectors $\varphi_i$ as columns. Therefore $x$ can be expressed as $x = \Psi s$, where $s$ is the $N \times 1$ column vector of weighting coefficients $s_i = \langle x, \varphi_i \rangle$. Suppose that we measure $x$ by the measurement matrix $\Phi$; then we get the observation $y = \Phi x = \Phi\Psi s = \Theta s$, where $\Theta = \Phi\Psi$ is an $n \times N$ matrix. We need to recover the unknown vector $s$ from the observation $y$, and then we can get the signal $x$. It is well known that the reconstruction of $x$ can be modeled as the following $L_0$ problem:
$$\min \|s\|_0, \quad \text{s.t. } y = \Phi x = \Phi\Psi s,$$
where $\|s\|_0 = \sum_{i=1}^N I(s_i \neq 0)$ is called the $L_0$ norm. It is transformed into the following $L_1$ norm minimization ($L_1$ problem):
$$\min \|s\|_1, \quad \text{s.t. } y = \Phi x = \Phi\Psi s,$$
where $\|s\|_1 = \sum_{i=1}^N |s_i|$. The $L_0$ problem and the $L_1$ problem can be transformed into the following regularization framework, that is, the $L_0$ regularization:
$$\min \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N I(s_i \neq 0),$$

and the $L_1$ regularization
$$\min \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N |s_i|,$$
where $\lambda$ is the tuning parameter, controlling the balance between the first and second parts. Because the $L_1$ regularization can be solved as a quadratic programming problem, the NP optimization problem can be avoided. Therefore the $L_1$ regularization has become the dominant tool for sparse signal reconstruction; a minimal iterative-thresholding sketch for it is given below.
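The $L_1$ regularization above can be minimized by iterative soft thresholding. The following is a minimal sketch of one such solver (ISTA); this is our illustration, not the L1-Magic solver used in the paper's experiments, and the step size $1/L_f$ and iteration count are assumptions.

```python
# Minimal ISTA sketch for  min ||y - Theta s||^2 + lam * ||s||_1.
import numpy as np

def soft_threshold(z, t):
    """Entrywise soft-thresholding operator S_t(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_l1(Theta, y, lam, n_iter=500):
    # Lipschitz constant of the gradient of the data-fit term ||y - Theta s||^2.
    Lf = 2.0 * np.linalg.norm(Theta, 2) ** 2
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Theta.T @ (Theta @ s - y)      # gradient of the data-fit term
        s = soft_threshold(s - grad / Lf, lam / Lf)  # proximal (shrinkage) step
    return s
```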

In recent years, there has been an explosion of research on the properties of the $L_1$ regularization. However, for many practical applications, the solutions of the $L_1$ regularization are often less sparse than those of the $L_0$ regularization. And in the area of sparse signal reconstruction, a basic problem is how to reconstruct the signal exactly with the fewest samplings. Two methods are available. One is to use new nonconvex regularization strategies. For example, Candes proposed the Log regularization [17],
$$\min \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N \log(\varepsilon + |s_i|),$$

where $\varepsilon$ is a constant. Chartrand proposed the $L_q$ $(0 < q < 1)$ regularization [15],
$$\min \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N |s_i|^q,$$

and Xu proposed the $L_{1/2}$ regularization [16],
$$\min \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N |s_i|^{1/2}.$$

The other method is to solve the $L_0$ regularization directly using approximation algorithms [13,14], such as iterative hard thresholding, sketched below. A natural question is how to evaluate the different methods. In this paper, we show the essential reconstruction abilities of the major classical methods based on the phase diagram [18], and we propose an experimental method for designing efficient reconstruction strategies.
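For the first method, a minimal sketch of iterative hard thresholding in the spirit of [13] follows. Keeping the $k$ largest entries after each gradient step is the standard rule; the unit step size (which assumes $\|\Theta\|_2 \leq 1$) and the iteration count are illustrative assumptions.

```python
# Sketch of iterative hard thresholding (cf. [13]); assumes Theta is
# scaled so that ||Theta||_2 <= 1, making a unit step size admissible.
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]   # indices of the k largest magnitudes
    out[idx] = z[idx]
    return out

def iht(Theta, y, k, n_iter=300):
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        s = hard_threshold(s + Theta.T @ (y - Theta @ s), k)
    return s
```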

2.2 Sparsity and regularization

In this subsection, we analyze the relation between the sparse problem and regularization. We first review some classical regularizations in machine learning, which help to propose new reconstruction strategies for sparse signal reconstruction.

In general, regularization methods have the form
$$\min \frac{1}{n}\sum_{i=1}^n l(y_i, f(x_i)) + \lambda \|f\|_k, \qquad (1)$$
where $l(\cdot,\cdot)$ is a loss function, $(x_i, y_i)_{i=1}^n$ is a data set, and $\lambda$ is the regularization parameter. When $f$ is in linear form and the loss function is the square loss, $\|f\|_k$ is normally taken as the norm of the coefficients of the linear model. Almost all existing learning algorithms can be considered special cases of this regularization framework. For example, when $k = 0$ it is AIC or BIC, referred to as the $L_0$ regularization; when $k = 1$ it is the Lasso, called the $L_1$ regularization; when $k = 2$ it is ridge regression, called the $L_2$ regularization; and when $k = \infty$ it is the $L_\infty$ regularization. It is easy to see that the loss function of compressive sensing is the square loss, and the regularizations are the $L_0$, the $L_1$ and the $L_p$ $(0 < p < 1)$ regularizations. Consider the linear model

$$Y = X^{\mathrm{T}}\beta + \epsilon, \quad E\epsilon = 0, \quad \mathrm{Cov}(\epsilon) = \sigma^2 I, \qquad (2)$$

where $Y = (Y_1, \ldots, Y_n)^{\mathrm{T}}$ is an $n \times 1$ response vector; $X = (X_1, X_2, \ldots, X_n)^{\mathrm{T}}$ with $X_i = (x_{i1}, \ldots, x_{ip})^{\mathrm{T}}$, $i = 1, \ldots, n$; $\beta = (\beta_1, \ldots, \beta_p)^{\mathrm{T}}$ is a $p \times 1$ vector of unknown parameters; $\epsilon$ is the random error; and $\sigma^2$ is a positive constant. Let $A = \{j : \beta_j \neq 0\}$ and $p_0 = |A|$. Then the true model depends only on a subset of the predictors; that is to say, $Y$ is relevant to $p_0$ predictors, while the others are irrelevant predictors. Without loss of generality, we assume that the data are normalized. We use the square loss function, and then the unified regularized framework bears the form
$$\hat{\beta} = \arg\min_{\beta} \frac{1}{n}\sum_{i=1}^n (Y_i - X_i^{\mathrm{T}}\beta)^2 + \sum_{i=1}^p P(\lambda, |\beta_i|). \qquad (3)$$

Different penalty functions $P(\cdot, \cdot)$ yield different regularizations. We list some classical regularizations given by different penalty functions.

Example 1. L0 Penalties.

1. $P(\lambda, |\beta|) = (\lambda^2/2)\, I(|\beta| \neq 0)$.
2. $P(\lambda, |\beta|) = \begin{cases} -\beta^2/2 + \lambda|\beta|, & |\beta| < \lambda, \\ \lambda^2/2, & |\beta| \geq \lambda, \end{cases}$ due to Antoniadis [20].
3. $P(\lambda, |\beta|) = \begin{cases} \lambda|\beta|, & |\beta| < \lambda, \\ \lambda^2/2, & |\beta| \geq \lambda, \end{cases}$ due to Fan [21].

Example 2. $L_1$ Penalty. $P(\lambda, |\beta|) = \lambda|\beta|$, due to Tibshirani and Donoho [3,4].
Example 3. $L_{1/2}$ Penalty. $P(\lambda, |\beta|) = \lambda|\beta|^{1/2}$, due to Xu [16].
Example 4. SCAD Penalty.
$$P(\lambda, |\beta|) = \begin{cases} \lambda, & |\beta| \leq \lambda, \\ (a\lambda - |\beta|)/(a-1), & \lambda < |\beta| \leq a\lambda, \\ 0, & |\beta| > a\lambda, \end{cases}$$
due to Fan [22].

Example 5. Transformed $L_1$ Penalty. $P(\lambda, |\beta|) = \dfrac{\lambda b|\beta|}{1 + b|\beta|}$, due to Geman [23].

Example 6. Log Penalty. $P(\lambda, |\beta|) = \lambda \log(\varepsilon + |\beta|)$, due to Candes [17].
The solutions of the foregoing regularizations are sparse; that is, when the model has redundant irrelevant predictors, the foregoing regularizations can select the true model. It is obvious that regularization in machine learning is essentially similar to sparse signal recovery, so it can provide new ways for sparse signal recovery. Fan [22,24] proposed the SCAD regularization after studying the different regularizations, and in [25] Fan proved that SCAD has excellent statistical properties. Inspired by this, we propose SCAD compressive sensing1) as follows:

$$\min \|y - \Theta s\|^2 + P(\lambda, |s|),$$

where
$$P(\lambda, |s|) = \begin{cases} \lambda, & |s| \leq \lambda, \\ (a\lambda - |s|)/(a-1), & \lambda < |s| \leq a\lambda, \\ 0, & |s| > a\lambda, \end{cases}$$
and $a = 3.7$. The results of this paper can clarify the sparse reconstruction abilities of these regularizations. A thresholding-operator sketch for this penalty is given below.
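For concreteness, the following sketch implements the standard SCAD thresholding rule of Fan and Li [22] with $a = 3.7$, as one plausible operator inside an iterative thresholding loop for the model above; using it as a drop-in operator here is our assumption, not necessarily the exact operator of the SCAD compressive sensing algorithm.

```python
# Sketch of the standard SCAD thresholding rule (Fan and Li [22], a = 3.7),
# applied entrywise; intended as the shrinkage step of an iterative
# thresholding loop (our assumption).
import numpy as np

def scad_threshold(z, lam, a=3.7):
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    absz = np.abs(z)
    small = absz <= 2 * lam                        # soft-thresholding zone
    mid = (absz > 2 * lam) & (absz <= a * lam)     # linear transition zone
    big = absz > a * lam                           # identity (no shrinkage) zone
    out[small] = np.sign(z[small]) * np.maximum(absz[small] - lam, 0.0)
    out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)
    out[big] = z[big]
    return out
```

The three zones are continuous at their boundaries, so large coefficients are left unshrunken while small ones are soft-thresholded, which is the design motivation behind SCAD.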

2.3 The phase diagram of sparse signal recovery

In this subsection, we present the phase diagram experiment framework, which can be used to analyze the different sparse signal reconstruction strategies.

For an underdetermined system of linear equations $y = \Phi x$: when the model $(P_0)$ has a unique sparse solution which is also the unique solution of the regularization with penalty $P(\cdot,\cdot)$, and this solution can be obtained by the regularization procedure with penalty $P(\cdot,\cdot)$, we say that the regularization with penalty $P(\cdot,\cdot)$ and the $L_0$ regularization are equivalent, or briefly $L_0$/LP equivalent. When a vector is not only a solution of $(P_0)$ but also a solution of the regularization problem with penalty $P(\cdot,\cdot)$, it is said to be a point of $L_0$/LP equivalence.

In the context of $L_0$/$L_1$ equivalence, Donoho [26] introduced the notion of a phase diagram to illustrate how sparsity ({number of nonzeros in $x$}/{number of rows in $\Phi$}) and indeterminacy ({number of rows in $\Phi$}/{number of columns in $\Phi$}) affect the success of the $L_1$ regularization. Using techniques of high-dimensional geometric analysis, Donoho provided a necessary and sufficient condition for any matrix $\Phi$ of size $n \times N$ such that every $x \in \chi^N(k)$ is a point of $L_1$/$L_0$ equivalence, where $\chi^N(k) = \{x \in \mathbb{R}^N : \|x\|_0 \leq k\}$. The performance exhibits two phases (success/failure) in a diagram, as shown in Figure 1. Each point on the plot corresponds to a statistical model for certain values of $n$, $N$ and $k$. The abscissa runs from zero to one and gives values of $\delta = n/N$. The ordinate is $\rho = k/n$, measuring the level of sparsity in the model. Above the plotted phase transition curve, the $L_1$ method fails to find the sparsest solution; below the curve the solution of $(P_1)$ is precisely the solution of $(P_0)$.

Donoho and Stodden [19] conducted a series of simulation experiments on the problem of variable selection when the number of variables exceeds that of the observations. They defined a Problem Suite $S\{k, n, N\}$ as a collection of problems with sparse solutions, each problem having an $n \times N$ model matrix $\Phi$ and a $k$-sparse $N$-vector of coefficients $x$. We adopt the method of Donoho and Stodden [19] in this paper. For each $(k, n, N)$ combination we run the algorithm in question multiple times and measure its success according to a quantitative criterion. We choose the normalized root mean square error $\mathrm{nRMSE} = \|\hat{x} - x\|_2 / \|x\|_2$ as the quantitative criterion, and then compare results across models built with different problem sizes. The following recipe is employed to study an algorithm for a regularization model (a code sketch of this recipe is given below):

1. Generate a prototype model $y = \Phi x$ which has a $k$-sparse solution.
2. Run the algorithm of the regularization to obtain a reconstructed solution $\hat{x}$. Evaluate performance by testing whether $\|\hat{x} - x\|_2 / \|x\|_2 \leq \gamma$, where $\gamma$ is a tolerance bound set in advance.
3. Plot the phase diagram.

We solve the $L_0$ regularization by iterative thresholding [13] and the OMP algorithm [14], the $L_1$ regularization by L1-Magic [11], the $L_q$ regularization by the reweighted $L_1$ algorithm [15,16], the Log regularization by the reweighted $L_1$ algorithm [17], and the SCAD regularization by the hard iterative thresholding algorithm.
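A driver-loop sketch of this recipe follows; the grid, repetition count, tolerance $\gamma$ and the pluggable `solver` hook (for example, the OMP sketch from Section 1) are our assumptions, not the authors' exact experimental setup.

```python
# Sketch of the phase-diagram recipe: sweep (delta, rho), generate k-sparse
# problems, run a reconstruction algorithm, and record the fraction of runs
# whose nRMSE falls below the tolerance gamma.
import numpy as np

def phase_diagram(solver, p=256, n_grid=10, n_rep=30, gamma=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    deltas = np.linspace(0.1, 0.9, n_grid)   # delta = n / p (undersampling)
    rhos = np.linspace(0.1, 0.9, n_grid)     # rho = k / n (sparsity)
    success = np.zeros((n_grid, n_grid))
    for i, delta in enumerate(deltas):
        n = int(delta * p)
        for j, rho in enumerate(rhos):
            k = max(1, int(rho * n))
            for _ in range(n_rep):
                Phi = rng.standard_normal((n, p)) / np.sqrt(n)
                x = np.zeros(p)
                x[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
                y = Phi @ x
                x_hat = solver(Phi, y, k)    # e.g. the OMP sketch above
                nrmse = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
                success[j, i] += (nrmse <= gamma) / n_rep
    return deltas, rhos, success             # plot success over (delta, rho)
```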

1) Xu C. SCAD compressed sensing (submitted to Acta Mathematica Sinica).

[Figure 1 near here: theoretical phase transition curve in the $(\delta, \rho)$ plane, with $\delta = n/p$ on the horizontal axis and $\rho = k/n$ on the vertical axis; the region above the curve is labeled "Combinatorial search", the region below "Eq. (2) solves Eq. (1)".]

Figure 1 Theoretical phase transition diagram: the theoretical threshold at which equivalence of the solutions to the $L_1$ and $L_0$ optimization problems breaks down. The curve delineates a phase transition from the lower region, where the equivalence holds, to the upper region, where recovering the optimal sparse model seems to require combinatorial search. Along the x-axis the level of underdetermination decreases, and along the y-axis the level of sparsity of the underlying model increases.


3 Analysis of the phase diagrams

In this section, we show the phase diagram results of the different compressive sensing strategies. In the following experiments, we let $p = 256$ and repeat each experiment 30 times. We use the hard iterative thresholding algorithm and OMP to solve the $L_0$ regularization, L1-Magic to solve the $L_1$ regularization, the reweighted $L_1$ algorithm for the $L_q$ regularization, and the iterative thresholding algorithm to solve SCAD. The major results are shown in Figures 2 and 3. We find the following.

(1) In Figure 2, the results of L1-Magic for the $L_1$ regularization accord with the theoretical results for the $L_1$ regularization. The ratio of the successful reconstruction area to the whole area is 39.92%.

(2) In Figure 2, we also find that the sparse signal reconstruction ability of the $L_q$ regularization is stronger than that of the $L_1$ regularization, and as $q$ decreases, the reconstruction ability becomes stronger. When $q = 0.9$, the ratio of the exact reconstruction area to the whole area is 45.72%; when $q$ decreases to $1/2$, the ratio increases to 56.12%. However, when $q$ becomes smaller than $1/2$, the ratio shows no significant increase, which means that, as far as sparse reconstruction is considered, the $L_{1/2}$ regularization has representative properties.

(3) As shown in Figure 3, the signal reconstruction ability of the SCAD regularization with the thresholding algorithm is stronger than that of the $L_1$ regularization. The ratio of the exact reconstruction area to the whole area attains 50.0%, which is, however, smaller than that of the $L_{1/2}$ regularization.

(4) As shown in Figure 3, the signal reconstruction ability of the Log regularization with the reweighted $L_1$ algorithm, with the optimization model
$$s^{(t+1)} = \arg\min_s \|y - \Theta s\|^2 + \lambda \sum_{i=1}^N \frac{|s_i|}{|s_i^{(t)}| + \varepsilon},$$
where $s^{(t+1)}$ denotes the iterate at step $t+1$, is stronger than that of the $L_1$ regularization. The phase diagram results show that the signal reconstruction ability depends on $\varepsilon$: when $\varepsilon \in (0.1, 1)$, the signal reconstruction performs better. However, the ratio is smaller than that of the $L_{1/2}$ regularization. A sketch of this reweighted update is given below.
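The reweighted update displayed in item (4) can be sketched as follows, with a small weighted ISTA loop as the inner $L_1$ solver; the choice of inner solver, $\varepsilon = 0.5$ and the iteration counts are our assumptions.

```python
# Sketch of the reweighted-L1 iteration for the Log regularization:
# each outer step solves a weighted L1 problem with weights
# 1 / (|s_i^(t)| + eps), here via a small inner ISTA loop.
import numpy as np

def weighted_ista(Theta, y, lam, w, n_iter=200):
    Lf = 2.0 * np.linalg.norm(Theta, 2) ** 2   # Lipschitz constant of the gradient
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        g = 2.0 * Theta.T @ (Theta @ s - y)
        z = s - g / Lf
        s = np.sign(z) * np.maximum(np.abs(z) - lam * w / Lf, 0.0)
    return s

def reweighted_l1_log(Theta, y, lam, eps=0.5, n_outer=5):
    # Initialize with an unweighted L1 solve, then reweight.
    s = weighted_ista(Theta, y, lam, np.ones(Theta.shape[1]))
    for _ in range(n_outer):
        w = 1.0 / (np.abs(s) + eps)            # weights from the previous iterate
        s = weighted_ista(Theta, y, lam, w)
    return s
```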

(5) In Figure 3, the signal reconstruction ability of the $L_0$ regularization with different algorithms shows different performance. It is interesting that both the iterative thresholding algorithm and the OMP algorithm have an ability of sparse signal reconstruction similar to that of the $L_1$ regularization. The ratios of the exact reconstruction area for the iterative thresholding algorithm and the OMP algorithm are bigger than that of the $L_1$ regularization, attaining 41.66% and 45.0%, respectively.

[Figure 2 near here: six phase-diagram panels for $q = 0.1, 0.3, 0.5, 0.7, 0.9, 1.0$.]

Figure 2 Phase diagram of signal reconstruction of the $L_q$ $(0 < q \leq 1)$ regularization. Horizontal axis: undersampling fraction $\delta = n/p$. Vertical axis: sparsity fraction $\rho = k/n$.

[Figure 3 near here: six phase-diagram panels labeled Log regularization, Log regularization (step = 0.1), SCAD regularization (step = 0.1), $L_0$ regularization (hard thresholding), $L_0$ regularization (OMP), and $L_{1/2}$ regularization.]

Figure 3 Phase diagram of signal reconstruction of the SCAD, Log, $L_0$ and $L_{1/2}$ regularizations. Horizontal axis: undersampling fraction $\delta = n/p$. Vertical axis: sparsity fraction $\rho = k/n$. The signals with $(k, n, p)$ in the blue region can be exactly reconstructed, while those in the red region cannot be exactly reconstructed.

4 Conclusions

In this paper we have proposed a unified experimental framework, following the work of Donoho, to analyze the ability of sparse signal reconstruction. We have shown the different abilities of different reconstruction strategies. We have found that the $L_q$ regularization, the SCAD regularization and the Log regularization can efficiently improve the ability of sparse signal reconstruction. The $L_{1/2}$ regularization can be taken as a representative of the $L_q$ regularizations; that is, when $1/2 \leq q < 1$, the reconstruction ability becomes stronger as $q$ decreases, while when $0 < q \leq 1/2$, the performance is similar to that of the $L_{1/2}$ regularization.

Acknowledgements This work was supported by National Basic Research Program of China (Grant No. 2007CB311002), National Natural Science Foundation of China (Grant Nos. 60975036, 11171272), and Macau Science and Technology Development Fund (Grant No. 021/2008/A) of Macau Special Administrative Region of the People’s Republic of China.

References

1 Candes E, Romberg J, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pur Appl Math, 2006, 59: 1207–1223
2 Donoho D. Compressed sensing. IEEE Trans Inform Theory, 2006, 52: 1289–1306
3 Taylor H, Banks S, McCoy J. Deconvolution with the L1 norm. Geophys, 1979, 44: 39–52
4 Logan B F. Properties of high-pass signals. PhD thesis. New York: Columbia University, 1965
5 Tibshirani R. Regression shrinkage and selection via the lasso. J Roy Stat Soc B, 1996, 58: 267–288
6 Chen S, Donoho D, Saunders M. Atomic decomposition by basis pursuit. SIAM J Sci Comput, 1998, 20: 33–61
7 Candes E, Tao T. Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans Inform Theory, 2006, 52: 5406–5425
8 Chen S. Basis Pursuit. PhD thesis. Stanford: Stanford University, 1995
9 Coifman R, Wickerhauser M. Entropy-based algorithms for best basis selection. IEEE Trans Inform Theory, 1992, 38: 713–718
10 Mallat S, Zhang Z. Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Proces, 1993, 41: 3397–3415
11 Candes E, Tao T. Decoding by linear programming. IEEE Trans Inform Theory, 2005, 51: 4203–4215
12 Candes E, Romberg J. Sparsity and incoherence in compressive sampling. Inverse Probl, 2007, 23: 969–985
13 Blumensath T, Davies M. Iterative hard thresholding for compressed sensing. Appl Comput Harmon A, 2009, 27: 265–274
14 Tropp J, Gilbert A C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inform Theory, 2007, 53: 4655–4666
15 Chartrand R. Exact reconstructions of sparse signals via nonconvex minimization. IEEE Signal Proc Lett, 2007, 14: 707–710
16 Xu Z B, Zhang H, Wang Y, et al. L1/2 regularization. Sci China Inf Sci, 2010, 40: 411–422
17 Candes E J, Wakin M B, Boyd S. Enhancing sparsity by reweighted L1 minimization. J Fourier Anal Appl, 2008, 14: 877–905
18 Donoho D, Tanner J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philos Trans Roy Soc A, 2009, 367: 4273–4293
19 Donoho D, Stodden V. Breakdown point of model selection when the number of variables exceeds the number of observations. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN'06), 2006. 1916–1921
20 Antoniadis A. Wavelets in statistics: a review (with discussion). J Ital Stat Soc, 1997, 6: 97–144
21 Fan J. Comment on 'Wavelets in statistics: a review' by A. Antoniadis. J Ital Stat Soc, 1997, 6: 131–138
22 Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc, 2001, 96: 1348–1360
23 Geman D, Reynolds G. Constrained restoration and the recovery of discontinuities. IEEE Trans Pattern Anal, 1992, 14: 367–383
24 Fan J, Li R. Statistical challenges with high dimensionality: feature selection in knowledge discovery. In: Solé M, Soria J, Varona J L, et al., eds. Proceedings of the International Congress of Mathematicians. Madrid: European Mathematical Society Publishing House, 2006. 595–622
25 Fan J, Peng H. On non-concave penalized likelihood with diverging number of parameters. Ann Stat, 2004, 32: 928–961
26 Donoho D. High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete Comput Geom, 2006, 35: 617–652
