Subgradient Based Multiple-Starting-Point Algorithm for Non-Smooth Optimization of Analog Circuits

Wenlong Lv1, Fan Yang1∗, Changhao Yan1, Dian Zhou1,2 and Xuan Zeng1∗
1State Key Lab of ASIC & System, School of Microelectronics, Fudan University, Shanghai, P. R. China
2Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, U.S.A.
∗Corresponding authors: {yangfan, xzeng}@fudan.edu.cn

Abstract—Starting from a set of starting points, multiple-starting-point optimization searches for local optima by gradient-guided local search; the global optimum is then selected from these local optima. The region-hit property of multiple-starting-point optimization makes the approach more likely to reach the global optimum. However, for non-smooth objective functions, e.g., in worst-case optimization, traditional gradient-based local search methods may get stuck at non-smooth points, even if the objective function is smooth "almost everywhere". In this paper, we propose a subgradient based multiple-starting-point algorithm for non-smooth optimization of analog circuits. Subgradients instead of traditional gradients are used to guide the local search of the non-smooth optimization, and Shor's R algorithm is used to accelerate the subgradient based local search. A two-stage optimization strategy is proposed to deal with the constraints in analog circuit optimization. Our experiments on two circuits show that the proposed method is very efficient for worst-case optimization: it achieves much better solutions with fewer simulations than the traditional gradient based method, the smoothing approximation method, the smooth relaxation method, and differential evolution algorithms.

Index Terms—Worst-case optimization; Shor's R algorithm; Multiple-starting-point

I. INTRODUCTION

Due to the continuous scaling of IC technology and the growing demand for higher performance and lower power, manual analog circuit design can hardly meet the performance and power requirements within a limited time to market. Automated analog circuit design has therefore attracted great interest in recent years, as it improves circuit performance while reducing design time. In particular, optimal device sizing has attracted much research interest because it can be well formulated as a constrained nonlinear optimization problem and thus be efficiently solved by well-developed optimization algorithms.

Most analog circuit optimization algorithms fall into two categories: model based and simulation based methods. For model based methods, analytical expressions of the circuit performances are required. The expressions can be obtained by manual analysis or fitting [1]. A well-known model based algorithm is Geometric Programming (GP) [2], where the circuit performances are expressed as posynomials [3]. The posynomial approximation guarantees that the optimization problem is convex and the global optimum can be found. Recent advances show that global optima can be found even for general polynomial optimization problems; the idea is to transform the general polynomial optimization problem into convex programming by Semidefinite Programming (SDP) relaxations [4]. The advantage of model based algorithms is that, given the circuit performance models, the optimal solutions can be found very efficiently even for large-scale problems. What limits their usage is the generation of accurate performance models: even with a large number of simulations, the accuracy of the generated models cannot be guaranteed over the whole design space.

Simulation based analog circuit optimization methods require no explicit analytical performance models; SPICE simulations are used to guide the optimization directly. Simulated Annealing (SA) [5], Genetic Algorithms (GA) [6], Particle Swarm Optimization (PSO) [7] and Differential Evolution (DE) [8] are the best-known simulation based approaches. The stochastic characteristics of these algorithms help them explore the solution space and avoid falling into local optima. However, although these algorithms can efficiently explore the design space, they suffer from a relatively low convergence rate. Note that design specifications like gain, bandwidth and phase margin are smooth and differentiable over the whole design space, so gradients can be used to efficiently guide the local search. It has been reported in [9], [10] that gradient based local search with multiple starting points (MSP) is very efficient for these smooth performances. If a starting point is located in a valley, it converges rapidly to the local optimum through the local search. The region-hit property of multiple-starting-point optimization makes the MSP approach more likely to reach the global optimum.

To improve the robustness of the design under PVT variations, the optimization problem can be reformulated as optimizing the worst performance over several PVT corners. Compared with Monte-Carlo based yield optimization, worst-case optimization tends to be more pessimistic. Nevertheless, it is widely used by designers, accepted by industry, and has become a de facto design methodology [10], [11].

Another example of a non-smooth objective function arises in rail-to-rail amplifiers. The amplifier should satisfy all specifications with the input common mode voltage varying from 0 to Vdd [12], which means that the transconductance gm of the input stage should remain constant across the whole common mode input voltage range. Such an objective can be formulated as optimizing the worst performance at several representative input voltages, e.g., 0, 0.25Vdd, 0.5Vdd, 0.75Vdd and Vdd.

Worst-case optimization poses new challenges to gradient-based local search and MSP algorithms. When the objective is expressed as the maximum, minimum or absolute value of smooth performance functions, the objective and constraint functions are smooth "almost everywhere" but can be non-smooth on zero-measure subsets of the design space. In these cases, traditional gradient-based methods may get stuck at non-smooth points, even though the objective function is smooth "almost everywhere". Note, however, that gradients remain very useful for guiding the search, and they are available "almost everywhere".

Traditional methods for non-smooth optimization try to smooth the non-smooth functions by approximations such as the log-sum-exp approximation [13]. However, these smooth approximations deviate from the real non-smooth objective functions and thus degrade the optimization quality. Smooth relaxation transforms the non-smooth optimization problem into a smooth one by introducing new constraints; the objective function and the constraints are then tightly coupled in the relaxed formulation, which degrades the convergence rate.
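For reference, the log-sum-exp approximation replaces the pointwise maximum with a smooth surrogate; the form below is the standard one, with a generic scale parameter t not taken from [13]:

    max(f_1(x), ..., f_M(x)) ≈ (1/t) ln( Σ_{j=1}^{M} exp(t f_j(x)) ),  t > 0.

The surrogate overestimates the true maximum by at most ln(M)/t, so a larger t is more accurate but numerically worse conditioned, which is exactly the accuracy/quality trade-off noted above.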
In this paper, we propose a subgradient based multiple-starting-point algorithm for non-smooth optimization of analog circuits. Subgradients instead of traditional gradients are used to guide the local search of the non-smooth optimization, and Shor's R algorithm is used to accelerate the subgradient based local search. A two-stage optimization strategy is proposed to deal with the constraints in automated analog circuit design: in the first stage, we move the solution into the feasible region; in the second stage, the objective function is minimized while keeping the constraints satisfied. Our experiments on two circuits show that the proposed method is very efficient for worst-case optimization. The proposed approach achieves much better solutions with fewer simulations than the traditional gradient based method, the smoothing approximation method, the smooth relaxation method and differential evolution algorithms.

A. Problem Formulation

The nominal sizing problem can be formulated as

    minimize  f(x)
    s.t.      c_i(x) < 0,  ∀i ∈ 1 ... N,                        (1)

where f(x) is the figure of merit (FOM) to be minimized and c_1, c_2, ..., c_N are N constraints which should be met during the optimization. Typically, the objective function f(x) and the constraints c_1, c_2, ..., c_N are smooth.

If multiple scenarios are considered, we aim to minimize the worst-case FOM while the constraints are still satisfied. For M scenarios, (1) can be reformulated as

    minimize  max(f_1(x), ..., f_M(x))
    s.t.      c_{i,j}(x) < 0,  ∀i ∈ 1 ... N,  ∀j ∈ 1 ... M.     (2)

Compared to (1), the objective function in (2) becomes non-smooth, and the number of constraints increases to M × N, i.e., it grows linearly with the number of scenarios. In analog circuit optimization, most constraints are mapped from design specifications, which tend to be tight and difficult to satisfy during the optimization. The M × N constraints can also be rewritten as a single non-smooth constraint g(x) < 0:

    minimize  F(x)
    s.t.      g(x) < 0,                                          (3)

where g(x) = max(c_1(x), ..., c_{M×N}(x)) and F(x) = max(f_1(x), ..., f_M(x)).

The minimax formulation in (3) makes the objective function less correlated with the constraints, since over most of the design space only one FOM and one design specification dominate the objective function and the constraint function, respectively. A discussion of the advantages of the minimax formulation can be found in [14].
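To make the construction in (3) concrete, the sketch below assembles F(x) and g(x) from smooth per-scenario pieces and selects a valid subgradient of a pointwise maximum. This is a minimal illustration, not the authors' implementation; all function names are our own.

```python
import numpy as np

def make_minimax(foms, cons):
    """Build F(x) and g(x) of formulation (3) from smooth pieces.

    foms: list of M smooth per-scenario FOM functions f_j(x)
    cons: list of M*N smooth constraint functions c_k(x),
          each required to satisfy c_k(x) < 0
    Both returned functions are non-smooth pointwise maxima.
    """
    F = lambda x: max(f(x) for f in foms)
    g = lambda x: max(c(x) for c in cons)
    return F, g

def max_subgradient(x, funcs, grads):
    """A valid subgradient of max_i funcs[i] at x: because every
    piece is smooth, the gradient of any maximizing (active) piece
    belongs to the subdifferential of the pointwise maximum."""
    i = int(np.argmax([f(x) for f in funcs]))
    return np.asarray(grads[i](x))
```

This active-piece rule is what makes subgradient methods attractive here: a subgradient of F or g costs no more than one gradient of a smooth performance function.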
B. Multiple-Starting-Point Algorithm

The Multiple-Starting-Point (MSP) algorithm is an efficient algorithm for general constrained optimization problems. Starting from a set of starting points, multiple-starting-point optimization searches for local optima by gradient-guided local search.
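The sketch below illustrates how such a multiple-starting-point loop could look when the local search is driven by subgradients, using the classical space-dilation update of Shor's R algorithm. It is a simplified, unconstrained sketch under our own assumptions (step-size schedule, dilation factor alpha, iteration budgets); it is not the paper's two-stage constrained procedure.

```python
import numpy as np

def shor_r_search(F, subgrad, x0, alpha=3.0, h0=1.0, max_iter=200):
    """Local search for non-smooth F via Shor's R algorithm.

    Classical form: after each step, the space is dilated along the
    difference of two successive subgradients. h0, alpha and the
    geometric step decay are placeholder choices, not tuned values.
    """
    x, g = x0.copy(), subgrad(x0)
    B = np.eye(x0.size)            # accumulated space-dilation transform
    beta = 1.0 / alpha             # dilation coefficient, alpha > 1
    h = h0
    best_x, best_F = x.copy(), F(x)
    for _ in range(max_iter):
        gt = B.T @ g               # subgradient in the dilated space
        ng = np.linalg.norm(gt)
        if ng < 1e-10:             # zero subgradient: stationary point
            break
        x = x - h * (B @ gt) / ng  # normalized step in original space
        g_new = subgrad(x)
        r = B.T @ (g_new - g)      # dilation direction
        nr = np.linalg.norm(r)
        if nr > 1e-10:
            eta = r / nr
            # B <- B (I + (beta - 1) eta eta^T): rank-1 dilation update
            B += (beta - 1.0) * np.outer(B @ eta, eta)
        g = g_new
        h *= 0.98                  # crude step-size decay
        if F(x) < best_F:
            best_x, best_F = x.copy(), F(x)
    return best_x, best_F

def msp(F, subgrad, lb, ub, n_starts=20, seed=0):
    """MSP driver: run the local search from uniformly sampled
    starting points and keep the best local optimum found."""
    rng = np.random.default_rng(seed)
    best = (None, np.inf)
    for _ in range(n_starts):
        x0 = rng.uniform(lb, ub)
        x, fx = shor_r_search(F, subgrad, x0)
        if fx < best[1]:
            best = (x, fx)
    return best
```

The outer loop embodies the region-hit argument from the introduction: with enough random starts, at least one is likely to land in the basin of the global optimum, and the accelerated subgradient search then converges to it quickly.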