Hindawi Mathematical Problems in Engineering Volume 2019, Article ID 7313808, 15 pages https://doi.org/10.1155/2019/7313808

Research Article
An Improved Truncated Newton Method for the Logit-Based Stochastic User Equilibrium Problem

Min Xu,1 Bojian Zhou,2 and Jie He2

1Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
2School of Transportation, Southeast University, Nanjing 210096, China

Correspondence should be addressed to Bojian Zhou; [email protected]

Received 16 March 2019; Revised 5 August 2019; Accepted 3 September 2019; Published 8 October 2019

Academic Editor: Roberta Di Pace

Copyright © 2019 Min Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This study proposes an improved truncated Newton (ITN) method for the logit-based stochastic user equilibrium problem. The ITN method incorporates a preprocessing procedure into the traditional truncated Newton method so that a good initial point is generated, on the basis of which a useful principle is developed for the choice of the basic variables. We discuss the rationale of both improvements from a theoretical point of view and demonstrate that they can enhance the computational efficiency in the early and late iteration stages, respectively, when solving the logit-based stochastic user equilibrium problem. The ITN method is compared with other related methods in the literature. Numerical results show that the ITN method performs favorably over these methods.

1. Introduction

The main role of traffic assignment models is to forecast equilibrium link or path flows in a transportation network. These models are widely used in the fields of transportation planning and network design. Traditionally, traffic assignment models are formulated as user equilibrium (UE) or stochastic user equilibrium (SUE) problems, in which no traveler can reduce his/her actual or perceived travel time by unilaterally changing routes at equilibrium [1, 2]. Among the various types of traffic assignment models in the literature, the logit-based stochastic user equilibrium traffic assignment problem is the most widely adopted and extensively studied [3]. This problem incorporates a random error term, which follows the Gumbel distribution, in the route cost function to simulate travelers' imperfect perceptions [4]. The logit-based SUE problem can be equivalently formulated as a mathematical programming problem with a unique solution. This feature facilitates its usage in both theoretical and practical studies [5–8].

The widespread application of the logit-based SUE model has made its solution approaches receive considerable interest in recent years. In general, there are two classes of solution algorithms for the logit-based SUE problem. The first class is link-based algorithms. This type of algorithm uses link flow as its variable. Since link flow is an aggregate variable of different path flows, link-based algorithms do not require explicit path enumeration. They only assume an implicit path choice set, such as the set of all efficient paths (Dial [9]; Maher [10]) or of all cyclic and acyclic paths (Bell [11]; Akamatsu [12]). The most well-known link-based algorithm is the method of successive averages (MSA) proposed in Sheffi and Powell's study [4]. This algorithm uses a stochastic loading procedure to produce an auxiliary link flow pattern, and the search direction equals the difference between the auxiliary link flow and the current link flow. The step size for MSA is a predetermined sequence decreasing towards zero, such as 1/k, where k is the iteration index. Maher [10] made further modifications to the method of successive averages. In his study, the Davidon–Fletcher–Powell (DFP) method was used to generate a search direction, and cubic (or quadratic) interpolation was applied to estimate the optimal step sizes.

The other class is path-based algorithms. This kind of algorithm is built on path flow variables. It requires an explicit choice of a subset of feasible paths prior to or during the assignment. Unlike the link flow variable, the path flow variable is a disaggregate variable which cannot be further decomposed. Therefore, different methods can be utilized in a more flexible way. For example, Damberg et al. [13] extended the disaggregate simplicial decomposition method of Larsson and Patriksson [14] to solve the logit-based SUE problem. This path-based method iteratively solves subproblems that are generated through partial linearization of the objective function. The search direction is obtained as the difference between the solution of the subproblem and the current iteration point. Bekhor and Toledo [15] proposed using the gradient projection (GP) method to solve this problem. In their study, the gradient of the objective function was projected on a linear manifold of the equality constraints, with the scaling matrix being the diagonal elements of the Hessian.

This study focuses on path-based algorithms for the logit-based SUE problem. To the best of our knowledge, almost all existing path-based algorithms have a linear or sublinear convergence rate, which is relatively slow when the iteration point is approaching the optimal solution. In order to improve the convergence, it is desirable to develop an algorithm with a superlinear convergence rate. Recently, Zhou et al. [16] proposed a modified truncated Newton (MTN) method to solve the logit-based SUE problem. This method consists of two phases. The major iteration phase is performed in the original space, while the minor iteration phase is performed in the reduced space. At each major iteration, a reduced Newton equation is approximately solved using the preconditioned conjugate gradient (PCG) method. The reduced variables in the reduced Newton equation can be changed dynamically, which facilitates the usage of the PCG method. Zhou et al. proved that the convergence rate of the MTN method is superlinear. It works very fast once the iteration point gets near the optimal SUE solution.

However, two important problems are not resolved in Zhou et al.'s research. First, when the iteration point is far from the optimal SUE solution, truncated Newton type methods are relatively slow, and the reason for this phenomenon is not clear. Second, Zhou et al. propose a dynamic principle for choosing the basic route, but this is only an intuitive principle whose rationale is not explained.

With the aim of addressing the above two problems, in this study we propose an improved truncated Newton (ITN) method for the logit-based SUE problem. The ITN method makes two improvements over the traditional truncated Newton method. First, a preprocessing procedure is introduced. This procedure utilizes the partial linearization method (Patriksson [17]) to generate a good initial point in the original space. It can largely replace the early iteration stage of the traditional truncated Newton method. Second, on the basis of the generated initial point, a static principle on how to partition the coefficient matrix and the variables is developed. With this principle, the computational efficiency of the truncated Newton method in the late iteration stage can be enhanced. Furthermore, the rationale behind these two improvements is analyzed theoretically, which broadens the theoretical significance of this study.

The remainder of the paper is organized as follows. Section 2 outlines the traditional truncated Newton method for a linear equality constrained optimization problem. Section 3 discusses some implementation issues when applying the traditional truncated Newton method to the logit-based SUE problem. Section 4 proposes a preprocessing procedure to determine a good initial point. Section 5 develops a maximal flow principle for the choice of the basic/nonbasic variables. Numerical experiments are conducted in Section 6. Section 7 wraps up the paper with conclusions and future research directions.

2. The Truncated Newton Method for a Linear Equality Constrained Optimization Problem

Consider the following convex problem:

[P1] minimize f(x)
     subject to Ax = b,                                                  (1)

where f: R^n → R is a strictly convex function that is twice continuously differentiable, A is an m × n matrix of full row rank, and b ∈ R^m. [P1] can be viewed as a general formulation of the logit-based stochastic user equilibrium problem that will be investigated in this study. We will first show how to solve [P1] by the truncated Newton method.

Since [P1] only involves linear equality constraints, it can be transformed into an unconstrained optimization problem using the variable reduction technique. Specifically, the matrix A and the variable x are partitioned as

A = [A_B, A_N],   x = (x_B, x_N)^T,                                      (2)

where A_B ∈ R^{m×m} is a nonsingular matrix, A_N ∈ R^{m×(n−m)}, x_B ∈ R^m, and x_N ∈ R^{n−m}. A_B is called the basic matrix, and its columns correspond to the basic variables x_B. A_N is called the nonbasic matrix; the columns of A_N correspond to the nonbasic variables x_N.

Therefore, the constraints Ax = b can be rewritten as

A_B x_B + A_N x_N = b.                                                   (3)

By rearranging the above equation, the basic variables x_B can be expressed as

x_B = A_B^{−1}(b − A_N x_N).                                             (4)

Substituting equation (4) into [P1], we obtain the following reduced unconstrained problem:

[P1-RED] minimize_{x_N ∈ R^{n−m}} f̃(x_N) = f(A_B^{−1}(b − A_N x_N), x_N),   (5)

where f̃(x_N) is referred to as the reduced objective function.
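To make the reduction in equations (2)–(5) concrete, the following NumPy sketch (the data A, b and the quadratic test objective are illustrative assumptions, not taken from the paper) partitions A into basic and nonbasic columns, recovers x_B from x_N via equation (4), and evaluates the reduced objective of equation (5):

```python
import numpy as np

# Illustrative data (assumed): m = 2 equality constraints, n = 5 variables.
A = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
b = np.array([10.0, 5.0])

basic = [0, 3]                        # columns forming a nonsingular A_B
nonbasic = [1, 2, 4]
A_B, A_N = A[:, basic], A[:, nonbasic]

def lift(x_N):
    """Equation (4): recover the basic variables from the nonbasic ones."""
    x_B = np.linalg.solve(A_B, b - A_N @ x_N)
    x = np.empty(A.shape[1])
    x[basic], x[nonbasic] = x_B, x_N
    return x

def f(x):                             # assumed strictly convex test objective
    return 0.5 * x @ x

def f_red(x_N):
    """Equation (5): the reduced objective, defined on x_N alone."""
    return f(lift(x_N))

x_N = np.array([2.0, 3.0, 1.0])
x = lift(x_N)
assert np.allclose(A @ x, b)          # any x_N yields a feasible point x
```

Any choice of x_N yields a feasible x, which is exactly what allows [P1] to be treated as the unconstrained problem [P1-RED].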

Let x_N^k be a feasible point for [P1-RED]. By approximating f̃ with a second-order Taylor series around x_N^k, the following subproblem can be obtained:

[SUB-1] min_{p ∈ R^{n−m}} (g̃^k)^T p + (1/2) p^T H̃^k p,                  (6)

where g̃^k ≜ ∇f̃(x_N^k) and H̃^k ≜ ∇²f̃(x_N^k) are the reduced gradient and reduced Hessian of f̃, and p ≜ x_N − x_N^k is the difference between the nonbasic variable x_N and the feasible point x_N^k.

Clearly, [SUB-1] is a quadratic programming problem. A typical method for this problem is the preconditioned conjugate gradient (PCG) method. This method constructs a sequence of conjugate directions using the objective gradient and minimizes the objective function along each of the directions. Interested readers may refer to Chapter 5 in Nocedal and Wright [18] for a detailed description of this method. It is commonly known that for large-scale optimization problems, finding the exact solution of [SUB-1] is computationally intensive. The truncated Newton method is thus designed to alleviate this drawback by solving [SUB-1] approximately if x_N^k is far from the optimal solution of [P1] and solving [SUB-1] more accurately when the optimal solution is approached.

Let p̃^k be an approximate solution of [SUB-1] generated by the PCG method. According to Lemma A.2 in Dembo and Steihaug [19], p̃^k defines a descent direction with respect to the reduced objective function f̃(x_N). Hence, by finding an appropriate step size in this direction, the new solution point for the next iteration can be obtained.

In what follows, we give a detailed description of the truncated Newton method for the linear equality constrained optimization problem [P1]. As elaborated above, this method consists of two phases. The major iteration phase transforms the original problem into an unconstrained one and applies the truncated Newton framework to solve it. The minor iteration phase uses the PCG method to solve a quadratic programming subproblem approximately.

The detailed steps of the major iteration are described in Algorithm 1 below. It is performed in the reduced variable space.

  Step 0: partition the matrix A and vector x as A = (A_B, A_N), x = (x_B, x_N), respectively. Transform [P1] into [P1-RED] according to equation (5).
  Step 1: let x_N^k be an initial point in the reduced space. Set k = 1.
  Step 2: if x_N^k is a sufficiently accurate approximation to the minimizer of [P1-RED], terminate the algorithm.
  Step 3: solve [SUB-1] approximately to generate a search direction p̃^k in the reduced space.
  Step 4: compute a step size λ^k along p̃^k, for which λ^k produces a sufficient decrease in the reduced function f̃.
  Step 5: set x_N^{k+1} = x_N^k + λ^k p̃^k, k = k + 1. Go to Step 2.

ALGORITHM 1: Major iteration.

The minor iteration is elaborated in Algorithm 2 below. In each minor iteration, z^j is the sequence of iterates, r^j is the gradient of the objective of [SUB-1] evaluated at z^j, d^j is the conjugate search direction, and α^j and β^j are scalars that are used to determine z^{j+1} and d^{j+1}, respectively.

  Given a preconditioner M^k and a forcing term η^k:
  Step 1: set z^0 = 0, r^0 = g̃^k. Solve M^k l^0 = r^0 for l^0. Set d^0 = −l^0. Set j = 0.
  Step 2: set α^j = (r^j)^T l^j / (d^j)^T H̃^k d^j. Set z^{j+1} = z^j + α^j d^j. Set r^{j+1} = r^j + α^j H̃^k d^j. If the termination criterion ‖r^{j+1}‖/‖g̃^k‖ < η^k is satisfied, return p̃^k = z^{j+1}, s^k = r^{j+1}, and terminate. Else continue with Step 3.
  Step 3: solve M^k l^{j+1} = r^{j+1} for l^{j+1}. Set β^{j+1} = (r^{j+1})^T l^{j+1} / (r^j)^T l^j. Set d^{j+1} = −l^{j+1} + β^{j+1} d^j. Set j = j + 1. Go to Step 2.

ALGORITHM 2: Minor iteration.

From the above description of the truncated Newton method, we have the following two remarks:

(1) The forcing term η^k in Algorithm 2 is usually chosen as (Nocedal and Wright [18], p. 168)

η^k = min{ρ, √‖g̃^k‖},                                                    (7)

where ρ is a given positive parameter. Clearly, η^k plays the role of controlling the solution accuracy of [SUB-1] at each major iteration. When the incumbent solution is far from the optimal solution, we have η^k = ρ > 0, which means that only a few inner iterations are sufficient to satisfy the termination criterion in Step 2. When the incumbent solution is near the optimal solution, we have η^k = √‖g̃^k‖ → 0, which implies that more inner iterations should be performed.

(2) In Step 2 of Algorithm 2, the reduced Hessian H̃^k need not be formed explicitly. Algorithm 2 only requires matrix-vector products, i.e., the value of H̃^k d for some vector d. In the next section, we will use this feature to simplify the calculation process for the logit-based SUE problem.

Furthermore, we would like to emphasize that the truncated Newton method employed in this study is performed in a fixed reduced space. In other words, once the partitions of the matrix A and vector x are made, they remain unchanged in all major iterations. This is different from the modified truncated Newton (MTN) method proposed in Zhou et al. [16], for which the partitions can be changed from one iteration to another. Using a fixed partition simplifies both the theoretical analysis and the practical implementation of the algorithm. However, it puts forward higher requirements for the selection of the initial point and the basic/nonbasic variables, which is the reason why the improved truncated Newton (ITN) method is proposed. In what follows, we first apply the traditional truncated Newton method to the logit-based SUE problem (Section 3) and then propose the improvements made by the ITN method (Sections 4 and 5).

3. Solving the Stochastic User Equilibrium Problem Using the Truncated Newton Method

The optimal solution to the logit-based SUE problem yields an equilibrium flow distribution, which is fundamental to many transportation planning and network design problems. For practical transportation networks, in order to obtain the equilibrium flow distribution in a reasonably short time, a fast method that can cope with the problem size should be employed. As is known, the truncated Newton method is one of the most accurate and fast methods for large-scale problems. In what follows, we discuss how to use this method to solve [SUE].

3.1. Stochastic User Equilibrium Problem. As discussed in Section 1, the stochastic user equilibrium problem is fundamental to the analysis of transportation systems. It concerns the distribution of travel demands to routes in a transportation network under the assumption that travelers have different perception errors when selecting routes. This problem is defined over a transportation network G(V, L), where V is the set of nodes and L is the set of directed links in the network. Let W be the set of all origin-destination (OD) pairs in the network, R^w be the set of simple (loop-free) routes between OD pair w ∈ W, and b_w be the travel demand between OD pair w ∈ W. For a route r ∈ R^w connecting OD pair w ∈ W, the route flow is denoted as x_r^w. Let t_a(v_a) be the travel time on link a ∈ L, which is assumed to be a continuous and differentiable function of the flow v_a on that link only. The logit-based SUE problem can be expressed as the following minimization problem (Fisk [3]):

[SUE] min f(x) = Σ_{a∈L} ∫_0^{v_a} t_a(τ) dτ + (1/θ) Σ_{w∈W} Σ_{r∈R^w} x_r^w ln x_r^w,   (8)

subject to

Σ_{r∈R^w} x_r^w = b_w,  ∀w ∈ W,                                           (9)

x_r^w ≥ 0,  ∀w ∈ W, r ∈ R^w,                                              (10)

v_a = Σ_{w∈W} Σ_{r∈R^w} x_r^w δ_{ar}^w,  ∀a ∈ L.                          (11)

In the above formulation, equation (8) is the objective function. It consists of an integral term and an entropy term. The parameter θ reflects an aggregate measure of travelers' perception of travel costs. Equation (9) defines the demand/route flow conservation conditions. Equation (10) states the nonnegativity constraints. The link-route flow relationship is characterized by equation (11), in which the indicator δ_{ar}^w = 1 if route r between OD pair w uses link a and δ_{ar}^w = 0 otherwise.

By substituting equation (11) into equation (8), we obtain a minimization problem in terms of the route flow variables x_r^w only. Fisk [3] proved that the objective function f is strictly convex with respect to x, which ensures the uniqueness of the equilibrium route flows. On the other hand, it is well known that the logit model assigns strictly positive flows to all paths in the choice set. Therefore, the nonnegativity constraints (10) are not binding at the optimal solution. Consequently, constraints (10) and (11) can be ignored, and problem [SUE] is essentially equivalent to the equality constrained minimization problem [P1].

3.2. Implementing the Truncated Newton Method to Solve the Stochastic User Equilibrium Problem. From a practical point of view, when applying the truncated Newton method to solve the logit-based SUE problem, some implementation issues should be addressed. Next, we discuss them in turn.

3.2.1. Application of the Variable Reduction Technique. The coefficient matrix A for equation (9) can be written as

A = [ 1, 1, ..., 1   0           ...  0
      0            1, 1, ..., 1  ...  0
      ...
      0            0             ...  1, 1, ..., 1 ].                     (12)

This is a block diagonal matrix whose diagonal consists of vectors of all ones (of different lengths). Clearly, for each row of matrix A, any variable whose coefficient is "1" can be chosen as the basic variable, and the remaining variables in this row are nonbasic variables. The basic and nonbasic matrices are then formed by combining the columns that correspond to the basic and nonbasic variables. It is easy to see that the basic matrix is an identity matrix, which is obviously invertible.

As is known, each variable in the logit-based SUE problem represents a specific route flow. For OD pair w ∈ W, let x_{r_B}^w denote the basic route flow variable, where r_B is the index of the basic route, and let x_{r_N}^w denote a nonbasic route flow variable, where r_N is the index of a nonbasic route. Therefore, we can express the basic flow variable in terms of the nonbasic flow variables, i.e.,

x_{r_B}^w = b_w − Σ_{r_N ∈ R_N^w} x_{r_N}^w,  ∀w ∈ W,                     (13)

where R_N^w is the set of indices of nonbasic routes between OD pair w ∈ W.

Next, we present an example to show how the coefficient matrix A and the variable x are formed and partitioned for a specific network.

Example 1. Consider the network shown in Figure 1, consisting of 4 nodes and 5 links. There are two OD pairs in the network: one is from node 1 to node 4 (indexed as OD pair 1), and the other is from node 2 to node 4 (indexed as OD pair 2). The demand between each OD pair is b_1 = 10, b_2 = 5.
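In the SUE setting, this reduction is particularly cheap because the basic matrix is the identity. The following small sketch (the route counts, demands, and nonbasic flows are illustrative assumptions) builds the block diagonal matrix of equation (12) and eliminates one basic route per OD pair as in equation (13):

```python
import numpy as np

# Illustrative OD structure: OD pair 1 has 3 routes, OD pair 2 has 2 routes.
routes_per_od = [3, 2]
b = np.array([10.0, 5.0])             # travel demands b_w

# Equation (12): block diagonal coefficient matrix of ones.
n = sum(routes_per_od)
A = np.zeros((len(routes_per_od), n))
col = 0
for w, k in enumerate(routes_per_od):
    A[w, col:col + k] = 1.0
    col += k

# Equation (13): taking the first route of each OD pair as basic, the
# basic flow is the demand minus the nonbasic flows of that pair.
x_N = {0: np.array([2.0, 3.0]), 1: np.array([1.0])}   # nonbasic flows
x_B = {w: b[w] - x_N[w].sum() for w in range(len(b))}

assert x_B[0] == 5.0 and x_B[1] == 4.0   # flows of each pair sum to b_w
```

Because each OD pair contributes exactly one basic column, A_B is an identity matrix and no linear solve is ever needed to recover the basic flows.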

[Figure 1: An example to explain the basic/nonbasic flow variable choice.]

There are three routes connecting OD pair 1, which are indexed as route (1, 1), route (1, 2), and route (1, 3), respectively. The two routes that connect OD pair 2 are numbered as route (2, 1) and route (2, 2). The node sequence of each route is as follows:

route (1, 1): node sequence 1-2-4,
route (1, 2): node sequence 1-3-4,
route (1, 3): node sequence 1-2-3-4,
route (2, 1): node sequence 2-4,
route (2, 2): node sequence 2-3-4.

Let x_1^1, x_2^1, x_3^1, x_1^2, and x_2^2 denote the flows through the five routes; then, the demand/route flow conservation conditions of equation (9) are

x_1^1 + x_2^1 + x_3^1 = 10,                                               (14)

x_1^2 + x_2^2 = 5.                                                        (15)

The coefficient matrix A for equations (14) and (15) is

A = [ 1 1 1 0 0
      0 0 0 1 1 ].                                                        (16)

If we choose x_2^1 as the basic flow variable for OD pair 1 and x_1^2 as the basic flow variable for OD pair 2, then the basic flow variables in equations (14) and (15) can be expressed in terms of the nonbasic flow variables, i.e.,

x_2^1 = 10 − x_1^1 − x_3^1,                                               (17)

x_1^2 = 5 − x_2^2.                                                        (18)

Equations (17) and (18) can be written in the following matrix form:

[ 1 0 ] (x_2^1, x_1^2)^T = (10, 5)^T − [ 1 1 0 ] (x_1^1, x_3^1, x_2^2)^T,
[ 0 1 ]                                [ 0 0 1 ]                           (19)

from which we can observe that the basic and nonbasic matrices for A (cf. equation (16)) are

A_B = [ 1 0      A_N = [ 1 1 0
        0 1 ],           0 0 1 ].                                         (20)

3.2.2. Computation of a Search Direction. By substituting equation (13) into equation (8), the reduced objective function can be rewritten as

f̃(x_N) = Σ_{a∈L} ∫_0^{v_a} t_a(τ) dτ + (1/θ) Σ_{w∈W} Σ_{r_N∈R_N^w} x_{r_N}^w ln x_{r_N}^w
        + (1/θ) Σ_{w∈W} (b_w − Σ_{r_N∈R_N^w} x_{r_N}^w) ln(b_w − Σ_{r_N∈R_N^w} x_{r_N}^w).   (21)

Elements of the reduced gradient are given by

∂f̃/∂x_{r_N}^w = Σ_{a∈L} t_a(v_a) · (δ_{a r_N}^w − δ_{a r_B}^w)
              + (1/θ) [ln x_{r_N}^w − ln(b_w − Σ_{r_N∈R_N^w} x_{r_N}^w)].                    (22)

Elements of the reduced Hessian matrix are

∂²f̃/(∂x_{r_N}^w ∂x_{r̄_N}^{w̄}) = Σ_{a∈L} (∂t_a/∂v_a) · (δ_{a r̄_N}^{w̄} − δ_{a r̄_B}^{w̄})(δ_{a r_N}^w − δ_{a r_B}^w)
  + (1/θ) ((1/x_{r_N}^w) · δ_{r_N r̄_N}^{w w̄} + (1/(b_w − Σ_{r_N∈R_N^w} x_{r_N}^w)) · δ_w^{w̄}),   (23)

where the indicator δ_{r_N r̄_N}^{w w̄} is equal to 1 if r_N = r̄_N and w = w̄, and 0 otherwise, and the indicator δ_w^{w̄} is equal to 1 if w = w̄, and 0 otherwise.

As mentioned before, when solving [SUB-1], we only need to compute the following matrix-vector product for some vector d:

u = H̃^k · d.                                                             (24)

In equation (24), d is any vector whose dimension equals the total number of nonbasic route flow variables. The coordinates of the vector u can be calculated by

u_{r_N}^w = Σ_{a∈L} (∂t_a/∂v_a) (Σ_{w̄∈W} Σ_{r̄_N∈R_N^{w̄}} d_{r̄_N}^{w̄} · δ_{a r̄_N}^{w̄} − Σ_{w̄∈W} Σ_{r̄_N∈R_N^{w̄}} d_{r̄_N}^{w̄} · δ_{a r̄_B}^{w̄}) · (δ_{a r_N}^w − δ_{a r_B}^w)
  + (1/θ) · (1/x_{r_N}^w) · d_{r_N}^w + (1/θ) · (1/(b_w − Σ_{r_N∈R_N^w} x_{r_N}^w)) · Σ_{r̄_N∈R_N^w} d_{r̄_N}^w,
  ∀w ∈ W, r_N ∈ R_N^w.                                                    (25)

The diagonal elements of the reduced Hessian matrix are

∂²f̃/∂(x_{r_N}^w)² = Σ_{a∈L} (∂t_a/∂v_a) · (δ_{a r_N}^w − δ_{a r_B}^w)² + (1/θ) (1/x_{r_N}^w + 1/(b_w − Σ_{r_N∈R_N^w} x_{r_N}^w)).   (26)
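Since equations (24)–(26) supply the product H̃^k d and the Hessian diagonal, Algorithm 2 can be implemented against a matrix-vector routine alone. The following generic sketch uses a small dense matrix as a stand-in for the reduced SUE Hessian (an assumption for testing; in practice Hv would evaluate equation (25)):

```python
import numpy as np

def pcg(Hv, g, M_diag, eta, max_iter=100):
    """Algorithm 2: preconditioned CG on (1/2) p'Hp + g'p, truncated
    once ||r|| / ||g|| < eta. Hv(d) must return the product H d."""
    z = np.zeros_like(g)
    r = g.copy()                      # gradient of [SUB-1] at z = 0
    l = r / M_diag                    # solve M l = r (diagonal M)
    d = -l
    for _ in range(max_iter):
        Hd = Hv(d)
        alpha = (r @ l) / (d @ Hd)
        z = z + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) / np.linalg.norm(g) < eta:
            return z                  # truncated Newton direction p
        l_new = r_new / M_diag
        beta = (r_new @ l_new) / (r @ l)
        d = -l_new + beta * d
        r, l = r_new, l_new
    return z

# Stand-in reduced Hessian and gradient (assumed SPD test data).
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
p = pcg(lambda d: H @ d, g, np.diag(H), eta=1e-10)
assert np.allclose(H @ p, -g)         # a tight eta recovers the Newton step
```

With a loose forcing term η the loop stops after only a few matrix-vector products, yielding the truncated direction; a tight η recovers the exact Newton step.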

Following the suggestion in Nash [20], the preconditioner matrix M^k is chosen as a diagonal matrix whose elements are equal to the diagonal elements of the reduced Hessian. Obviously, with M^k defined in this way, it can be easily inverted.

Up to now, all the ingredients required in the computation of the search direction are available. The search direction can be obtained by iteratively performing Algorithm 2.

3.2.3. Determination of the Step Size. For the logit-based SUE problem, although the optimal solution satisfies the nonnegativity constraints (10), some iteration points may violate these constraints during the iterative process. When this happens, the term ln x_{r_N}^w that appears in equations (21) and (22) becomes undefined. To avoid such a circumstance, at each iteration a restriction should be imposed to ensure strictly positive path flows, i.e., the step size λ^k should satisfy the following two constraints:

(x_{r_N}^w)^k + λ^k (p̃_{r_N}^w)^k > 0,  ∀w ∈ W, r_N ∈ R_N^w,             (27)

Σ_{r_N∈R_N^w} ((x_{r_N}^w)^k + λ^k (p̃_{r_N}^w)^k) < b_w,  ∀w ∈ W.        (28)

By incorporating the above restriction into the well-known Armijo rule, the step size can be determined by

λ^k = β^i,                                                                (29)

where i is the smallest nonnegative integer which satisfies equations (27) and (28) and the following inequality:

f̃(x_N^k + β^i p̃^k) ≤ f̃(x_N^k) + α β^i g̃(x_N^k)^T p̃^k,  α, β ∈ (0, 1).   (30)

4. A Preprocessing Procedure for the Determination of a Good Initial Point

The first important feature of the improved truncated Newton method is the incorporation of a preprocessing procedure into the traditional truncated Newton method. This procedure overcomes a drawback of the truncated Newton method and generates a good initial point to start with. In essence, it replaces the early iteration stage of the truncated Newton method.

4.1. A Drawback of the Truncated Newton Method. The restriction in equations (27) and (28) is indispensable when applying the truncated Newton method, since it keeps the reduced objective function (21) and the reduced gradient (22) well defined in all iterations. However, when the iteration point is far from the optimal SUE solution (which usually occurs in the early iteration stage), this restriction may deteriorate the performance of the algorithm. The reason is as follows. By the procedure of the Armijo rule (equations (29) and (30)), if any one of the nonnegativity constraints (10) is violated, the step size must be reduced so that each iterate is strictly feasible. Hence, in the early iteration stage, the actual step size is usually much smaller than needed, which results in a very slow rate of convergence.

However, when the iteration point is near the optimal solution (which usually occurs in the late iteration stage), the restriction in equations (27) and (28) no longer takes effect. In addition, by Theorem 2.3 in Dembo and Steihaug [19], the truncated Newton method converges superlinearly when the iteration point is within a neighbourhood of the optimal SUE solution. As a result, the performance of the truncated Newton method is quite fast in the late iteration stage.

The following proposition rigorously validates the above phenomenon.

Proposition 1. Let {x_N^k} be the sequence of iterates generated by Algorithm 1. Suppose that η^k is chosen as in equation (7) and the step size is chosen according to equations (27)–(30); then there exists a positive integer k̄ such that the restriction in equations (27) and (28) is automatically satisfied for all k ≥ k̄.

Proof. By Theorem 2.1 in Dembo and Steihaug [19], we have

lim_{k→∞} ‖g̃^k‖ = 0,                                                      (31)

which, together with Step 2 in Algorithm 2 and equation (7), implies

lim_{k→∞} ‖r^{j+1}‖ / ‖g̃^k‖ = 0.                                          (32)

Since r^{j+1} is the gradient of the objective of [SUB-1] evaluated at p̃^k, we have

r^{j+1} = H̃^k p̃^k + g̃^k.                                                 (33)

Substituting equation (33) into equation (32) and noting that H̃^k is positive definite, we obtain

lim_{k→∞} ‖p̃^k + (H̃^k)^{−1} g̃^k‖ / ‖g̃^k‖ = 0.                           (34)

Define

q^k = p̃^k / ‖g̃^k‖,   e^k = g̃^k / ‖g̃^k‖.                                 (35)

Then equation (34) can be rewritten as

lim_{k→∞} ‖q^k + (H̃^k)^{−1} e^k‖ = 0.                                     (36)

Since H̃^k is positive definite and ‖e^k‖ = 1, it follows that {q^k} is a bounded sequence. In view of equations (35) and (31), we have

lim_{k→∞} ‖p̃^k‖ = 0.                                                      (37)

Hence, for every ε > 0, there exists a positive integer k̄ such that for all k ≥ k̄,

‖p̃^k‖ ≤ ε.                                                                (38)

Equation (38) implies that for every component of p̃^k,

|(p̃_{r_N}^w)^k| ≤ ε,  ∀w ∈ W, r_N ∈ R_N^w.                                (39)

Assume that at iteration k (k ≥ k̄), both (x_{r_N}^w)^k and (x_{r_B}^w)^k are bounded away from zero. Then there exist δ_1 > 0 and δ_2 > 0 such that

(x_{r_N}^w)^k > δ_1,   (x_{r_B}^w)^k > δ_2.                                (40)

Since ε can be chosen sufficiently small and λ^k ≤ 1, it follows from equations (27), (28), and (40) that

(x_{r_N}^w)^k + λ^k (p̃_{r_N}^w)^k > δ_1 − λ^k ε > 0,  ∀w ∈ W, r_N ∈ R_N^w,

Σ_{r_N∈R_N^w} ((x_{r_N}^w)^k + λ^k (p̃_{r_N}^w)^k) = b_w − (x_{r_B}^w)^k + λ^k Σ_{r_N∈R_N^w} (p̃_{r_N}^w)^k
  < b_w − δ_2 + λ^k ε |R_N^w| < b_w,  ∀w ∈ W.                              (41)

The above two inequalities suggest that (x_{r_N}^w)^{k+1} and (x_{r_B}^w)^{k+1} are also bounded away from zero. Therefore, the conclusion of Proposition 1 holds true. □

Furthermore, we would like to point out that in the early iteration stage of the truncated Newton method, although subproblem (6) is only solved approximately, it usually still requires more than one inner iteration per major iteration. This consumes more CPU time when compared with algorithms that do not need any inner iteration.

From all the discussions above, if we can find an initial point that is close to the optimal SUE solution, the drawback of the truncated Newton method can be avoided. This is achieved by replacing the early iteration stage of the truncated Newton method with a preprocessing procedure, which is discussed in the next subsection.

4.2. Preprocessing Procedure. The preprocessing procedure is proposed to find a good initial point to start with. It can largely replace the early iteration stage of the truncated Newton method. This procedure is based on the partial linearization method of Patriksson [17]. When performing this procedure, all iteration points strictly satisfy the nonnegativity constraints, so the step size restriction (27) and (28) is not needed. Furthermore, at each iteration of the procedure, the generated subproblem has a closed form solution; therefore, no inner iteration is required to solve the subproblem. Next, we elaborate the preprocessing procedure.

From equation (8), we know that the objective function of the logit-based SUE problem is composed of two terms. Without loss of generality, we reconsider problem [P1] in the following equivalent form:

[P2] min f(x) = f_1(x) + f_2(x)
     subject to Ax = b.                                                    (42)

Suppose that at iteration k, a feasible point x^k is given. By approximating the first term of the objective function in [P2] with a first-order Taylor series around x^k, the following subproblem can be obtained:

[SUB-2] min ∇f_1(x^k)^T (x − x^k) + f_2(x)
        subject to Ax = b.                                                 (43)

Let x̄^k be an exact solution to [SUB-2]. By Theorem 2.1 in [17], it is known that if the vector x̄^k − x^k is nonzero, it is a descent direction with respect to the original objective function f(x). The next iteration point is then obtained through a line search along this descent direction. Details of the preprocessing procedure are described in Algorithm 3.

Clearly, by letting f_1(x) = Σ_{a∈L} ∫_0^{v_a} t_a(τ) dτ and f_2(x) = (1/θ) Σ_{w∈W} Σ_{r∈R^w} x_r^w ln x_r^w in [P2], we obtain the logit-based SUE problem. The resulting subproblem that corresponds to [SUB-2] is

[SUB-2-SUE] min Σ_{w∈W} Σ_{r∈R^w} (c_r^w)^k x_r^w + (1/θ) Σ_{w∈W} Σ_{r∈R^w} x_r^w ln x_r^w,   (44)

subject to

Σ_{r∈R^w} x_r^w = b_w,  ∀w ∈ W,                                            (45)

where (c_r^w)^k is the travel cost on path r ∈ R^w, w ∈ W, based on the vector of path flows at iteration k.

Unlike the complicated solution approach to [SUB-1], the solution to [SUB-2-SUE] in the preprocessing process can be given in closed form. The next proposition presents a detailed derivation of this solution.

Proposition 2. The subproblem [SUB-2-SUE] has a closed form solution, which can be explicitly expressed as

x_r^w = b_w · exp(−θ(c_r^w)^k) / Σ_{l∈R^w} exp(−θ(c_l^w)^k),  ∀w ∈ W, r ∈ R^w.   (46)

Proof. Consider the following Lagrange function:

L(x, μ) = Σ_{w∈W} Σ_{r∈R^w} (c_r^w)^k x_r^w + (1/θ) Σ_{w∈W} Σ_{r∈R^w} x_r^w ln x_r^w + Σ_{w∈W} μ_w (Σ_{r∈R^w} x_r^w − b_w),   (47)

where μ_w is the Lagrange multiplier associated with equation (45). Then, [SUB-2-SUE] can be transformed into the following minimization problem:

min L(x, μ).                                                               (48)

The first-order conditions for the above problem state that

∂L(x, μ)/∂x_r^w = 0, ∀w ∈ W, r ∈ R^w,
∂L(x, μ)/∂μ_w = 0, ∀w ∈ W. (49)

Hence, it follows from equation (49) that

∂L(x, μ)/∂x_r^w = (c_r^w)^k + (1/θ)(ln x_r^w + 1) + μ_w = 0, ∀w ∈ W, r ∈ R^w. (50)

Solving the above equation yields

x_r^w = exp(−θ(c_r^w)^k − 1 − θμ_w). (51)

Inserting x_r^w into equation (45), we have

x_r^w = b_w · exp(−θ(c_r^w)^k) / Σ_{l∈R^w} exp(−θ(c_l^w)^k), ∀w ∈ W, r ∈ R^w. (52)

This completes the proof. □

In conclusion, the preprocessing procedure is an application of the partial linearization method to the logit-based SUE problem. Compared with the truncated Newton method, the preprocessing procedure has two advantages. On the one hand, it is strictly feasible at all iteration points. Therefore, the step size restriction, (27) and (28), is not necessary, so the step size will not be forced to be reduced. On the other hand, it exploits the special structure of the logit-based SUE model so that the resulting subproblem has a closed-form solution. As a result, inner iterations are not needed, which saves a lot of CPU time. However, since the preprocessing procedure only uses a first-order approximation of the objective function, its convergence rate is sublinear. It may become very slow during the late iteration stage. This is why the procedure is used only in place of the truncated Newton method in the early iteration stage.

Convergence of the partial linearization method is established in Patriksson [17], which ensures that it is easy for the preprocessing procedure to find a good initial point. Since the preprocessing procedure is performed in the original space, it does not need to determine the basic and nonbasic variables. However, the outcome of this procedure lays a good foundation for which variables should be chosen as the basic/nonbasic variables. □

Step 0: let x^k be an initial feasible point in the original space. Set k = 0.
Step 1: if x^k is close to the minimizer of [P1], terminate this procedure.
Step 2: solve [SUB-2] to obtain a solution x̄^k; then the search direction in the original space is given by p^k = x̄^k − x^k.
Step 3: compute a step size λ^k = β^i, where i is the smallest nonnegative integer such that f(x^k + β^i p^k) ≤ f(x^k) + α β^i g(x^k)^T p^k, with α, β ∈ (0, 1).
Step 4: set x^{k+1} = x^k + λ^k p^k, k = k + 1. Go to Step 1.

ALGORITHM 3: Preprocessing procedure.

5. A Maximal Flow Principle for the Choice of the Basic/Nonbasic Variables

The second feature of the improved truncated Newton method is the development of a practical principle for choosing the basic variables in the logit-based SUE problem. This principle is applied after the preprocessing procedure. It relies on the information contained in the initial point that belongs to the original space. Such a principle accelerates the PCG method used in the minor iterations, and it in essence improves the computational efficiency of the truncated Newton method in the late iteration stage.

5.1. The Principle and Its Rationale. The preprocessing procedure generates an initial point that is near to the optimal SUE solution. Since this point is in the original space, it can provide us with valuable information on how to optimally partition the variables and the coefficient matrix. In view of the special structure of matrix A, any variable whose coefficient is nonzero can potentially be used as the basic route flow variable. However, in practice, different choices of the basic route flow variables affect the performance of the PCG method differently and thus yield different convergence rates of the inner iteration.

As discussed in Nocedal and Wright [18], the convergence behavior of the PCG method is strongly dependent on the condition number of the quadratic optimization problem (6), which is defined as

cond(H̃^k) = ‖H̃^k‖ ‖(H̃^k)^{−1}‖ = σ_n / σ_1, (53)

where σ_n and σ_1 are the largest and smallest eigenvalues of the matrix H̃^k. Nocedal and Wright [18] showed that the larger the condition number, the slower the likely convergence of the PCG method. For the logit-based SUE problem, it is obvious that a different choice of the basic route flow variables yields a different H̃^k. Therefore, among all possible choices, the most suitable way is to select a group of basic route variables such that the condition number cond(H̃^k) is as small as possible. However, in practice, the value of cond(H̃^k) is difficult to evaluate, which makes it hard to rank different condition numbers. Fortunately, based on the information contained in the initial point, we can at least avoid the case in which the condition number is very large. The next principle presents a strategy to avoid such a case.

5.1.1. The Maximum Flow Principle. Let {(x̄_r^w)_Int | w ∈ W, r ∈ R^w} be the initial point generated by the preprocessing procedure. The basic route can be set as the one that corresponds to the maximum flow route of each OD pair at the initial point. In other words, the basic route index r_B for OD pair w should satisfy

r_B = arg max_{r∈R^w} (x̄_r^w)_Int, ∀w ∈ W. (54)

It is worth noting that the maximum flow principle proposed in this study is static. Once the basic routes are selected, they remain unchanged in all remaining iterations. In [16], a dynamic principle with a similar idea is also discussed. However, that is only an intuitive principle inspired by a simple example, and the rationale behind it is not clear. In this study, we fill this gap by rigorously explaining the rationale behind the maximum flow principle. Details of the explanation are given in the following two propositions.

Proposition 3. At a certain initial point, if the basic route for each OD pair is chosen as the one whose flow is nearly zero, then the condition number of the reduced Hessian matrix defined in equation (53) will be very large.

Proof. Elements of the reduced Hessian matrix are given in equation (23). This equation can be decomposed into three parts. The first part is

Σ_{a∈A} (∂t_a/∂v_a) · (δ_{a,r̄_N}^{w̄} − δ_{a,r̄_B}^{w̄})(δ_{a,r_N}^{w} − δ_{a,r_B}^{w}). (55)

The second part is

(1/θ) · (1/x_{r_N}^{w}) · δ_{r_N r̄_N}^{w w̄}. (56)

The third part is

(1/θ) · [1 / (b_w − Σ_{r_N∈R_N^w} x_{r_N}^{w})] · δ^{w w̄}. (57)

Assume that for each OD pair, we choose the basic route as the one whose flow is nearly zero. Under this assumption, the values of equations (55) and (56) are limited, whereas the value of equation (57) will be quite large. The reason is that b_w − Σ_{r_N∈R_N^w} x_{r_N}^{w} is the flow on the basic route. By the assumption that b_w − Σ_{r_N∈R_N^w} x_{r_N}^{w} is nearly zero, 1/(b_w − Σ_{r_N∈R_N^w} x_{r_N}^{w}) will tend to infinity. Therefore, we can omit equations (55) and (56) and only consider equation (57). As a result, equation (23) approximately equals

∂²f / (∂y_{r_N}^{w} ∂y_{r̄_N}^{w̄}) ≈ (1/θ) · [1 / (b_w − Σ_{r_N∈R_N^w} x_{r_N}^{w})] · δ^{w w̄}. (58)

From equation (58), we can observe that if the flow on the basic route is nearly zero, the reduced Hessian matrix H̃^k can be approximately viewed as a block diagonal matrix in which the elements of each block are nearly equal. Hence, there exist nearly linearly dependent columns in H̃^k. This means that H̃^k is almost singular and has eigenvalues that are nearly zero. Since the reduced Hessian matrix H̃^k is positive definite, all its eigenvalues are positive. As a result, the smallest eigenvalue σ_1 presented in equation (53) is very small, which means that cond(H̃^k) is quite large. □

From Proposition 3, we know that at the initial point, if the basic route for each OD pair is chosen as one whose flow is bounded away from zero, we can avoid the case of very large condition numbers. At the remaining iteration points, this conclusion still holds. Proposition 4 below establishes it theoretically.

Proposition 4. Let x^0 be an initial point that is close to the optimal SUE solution. At x^0, if the basic route is chosen according to the maximum flow principle, then for all remaining iteration points x^k (k > 0), the flow on the basic route is always bounded away from zero.

Proof. At x^0, let (x_{r_B}^w)^0 be the flow on the basic route. By the maximum flow principle, (x_{r_B}^w)^0 is the maximum variable for OD pair w ∈ W. Therefore, (x_{r_B}^w)^0 is bounded away from zero, and there exists δ > 0 such that

(x_{r_B}^w)^0 ≥ δ > 0, ∀w ∈ W. (59)

Since x^0 is close to the optimal SUE solution x^*, we can assume that

‖x^0 − x^*‖ ≤ ε, (60)

where ε is a very small positive number. The above inequality implies that

|(x_{r_B}^w)^0 − (x_{r_B}^w)^*| ≤ ε, ∀w ∈ W. (61)

By Theorem 2.1 in Dembo and Steihaug [19], x^k converges to the SUE solution x^* as k → ∞, and accordingly, the basic route flow variable (x_{r_B}^w)^k also converges to (x_{r_B}^w)^*, i.e.,

lim_{k→∞} (x_{r_B}^w)^k = (x_{r_B}^w)^*, ∀w ∈ W. (62)

In view of equation (62) and the definition of the limit, we have that for all k ≥ 0,

|(x_{r_B}^w)^k − (x_{r_B}^w)^*| ≤ ε. (63)

Adding equation (61) to equation (63) yields

|(x_{r_B}^w)^0 − (x_{r_B}^w)^*| + |(x_{r_B}^w)^k − (x_{r_B}^w)^*| ≤ 2ε. (64)

Using the triangle inequality, we have

|(x_{r_B}^w)^0 − (x_{r_B}^w)^k| ≤ 2ε. (65)

Since ε is very small, as a result of equations (59) and (65), we obtain that for all k > 0,

(x_{r_B}^w)^k ≥ (x_{r_B}^w)^0 − 2ε ≥ δ − 2ε > 0, (66)

which implies that the flow on the basic route is bounded away from zero for all k > 0. □

The above two propositions rigorously explain the rationale behind the maximum flow principle. In Proposition 3, the special structure of H̃^k is crucial to proving the assertion. To illustrate H̃^k more clearly, we introduce an example below.
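The mechanism behind Proposition 3 can also be checked numerically: by equations (57) and (58), choosing a basic route whose flow x_B is near zero adds a term of the form (1/θ)(1/x_B)·E to the reduced Hessian, where E is an all-ones block, and this inflates the condition number (53). The sketch below uses NumPy on a synthetic 3 × 3 matrix; all values are illustrative and do not come from the paper's network:

```python
import numpy as np

def cond_with_basic_flow(x_basic, theta=0.5):
    """Condition number sigma_max/sigma_min (eq. (53)) of a reduced-Hessian-like
    matrix: a well-conditioned part plus a term of the form (57),
    (1/theta) * (1/x_basic) * E, with E the all-ones matrix."""
    base = np.diag([1.0, 2.0, 3.0])   # stands in for the bounded terms (55)-(56)
    ones = np.ones((3, 3))            # nearly equal entries, as in eq. (58)
    H = base + (1.0 / theta) * (1.0 / x_basic) * ones
    sigma = np.linalg.eigvalsh(H)     # ascending eigenvalues; H is symmetric PD
    return sigma[-1] / sigma[0]

# Shrinking the basic route flow drives the condition number up.
for x_b in (50.0, 1.0, 0.01):
    print(f"basic flow {x_b:6.2f}: cond = {cond_with_basic_flow(x_b):.1f}")
```

As x_basic shrinks, the rank-one-dominated all-ones term makes the columns of H nearly linearly dependent, which is exactly the near-singularity argued in the proof of Proposition 3.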

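Looking back at Algorithm 3, its Step 3 is a standard Armijo backtracking rule. A minimal sketch in Python (the paper's experiments are coded in MATLAB; the toy objective and the values of α and β below are illustrative, not the paper's settings):

```python
def armijo_step(f, x, p, g_dot_p, alpha=0.1, beta=0.5, max_backtracks=50):
    """Step 3 of Algorithm 3: return lambda = beta^i for the smallest
    nonnegative integer i satisfying the Armijo condition
    f(x + beta^i * p) <= f(x) + alpha * beta^i * g(x)^T p."""
    fx = f(x)
    step = 1.0
    for _ in range(max_backtracks):
        trial = [xi + step * pi for xi, pi in zip(x, p)]
        if f(trial) <= fx + alpha * step * g_dot_p:
            return step
        step *= beta
    return step

# Toy objective f(x) = x1^2 + x2^2 at x = (2, 2) with descent direction
# p = -grad f(x) = (-4, -4), so g(x)^T p = -32.
f = lambda x: sum(xi * xi for xi in x)
lam = armijo_step(f, x=[2.0, 2.0], p=[-4.0, -4.0], g_dot_p=-32.0)
print(lam)  # 0.5
```

The full step (λ = 1) overshoots the Armijo bound here, so one halving is performed; in the preprocessing procedure the accepted λ^k scales the direction p^k = x̄^k − x^k toward the closed-form subproblem solution.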
5.2. An Illustrative Example

Example 2. Consider the grid network in Figure 2. There are 9 nodes and 12 links in the network, with only one OD pair between node 1 and node 9. The OD demand is 100. There are six different routes connecting the origin and the destination:

Route 1: 1–2–5–6–9
Route 2: 1–2–3–6–9
Route 3: 1–4–7–8–9
Route 4: 1–2–5–8–9
Route 5: 1–4–5–8–9
Route 6: 1–4–5–6–9

The link travel time function is defined as follows:

t_a(v_a) = t_a^0 (1 + β (v_a / C_a)^n), (67)

where t_a^0, C_a, and t_a(v_a), respectively, are link a's free-flow travel time, capacity, and travel time under flow v_a, and β and n are deterministic parameters. In this example, we set β = 0.6 and n = 4. We assume that C_a = 100 for all links; for links (4, 5), (5, 6), and (7, 8), t_a^0 = 1; for the remaining links, t_a^0 = 2.

Figure 2: A simple grid network (nodes 1–9 arranged in a 3 × 3 grid).

In this example, there are 6 route flow variables. The initial point is given by x^0 = (13.93, 0.12, 15.09, 1.46, 6.57, 62.83). At this point, if route i (i = 1, ..., 6) is chosen as the basic route, the reduced Hessian matrix corresponding to this route can be formed, and its condition number can be calculated. For abbreviation, we use "the reduced Hessian matrix for route i" and "the condition number for route i" to express these notions.

Tables 1–6 present the reduced Hessian matrices for routes 1–6. Substituting these matrices into equation (53), we can calculate the condition number for each route, which is shown below each table.

Table 1: The reduced Hessian matrix for route 1.
1.399761 0.022992 0.022817 0.022992 0.012136
0.022992 0.085817 0.045225 0.074526 0.041262
0.022817 0.045225 0.159536 0.04525 0.011961
0.022992 0.074526 0.04525 0.107955 0.049282
0.012136 0.041262 0.011961 0.049282 0.051935
Condition number: 65.88.

Table 2: The reduced Hessian matrix for route 2.
1.399761 1.376769 1.376944 1.376769 1.387625
1.376769 1.439594 1.399177 1.428303 1.405895
1.376944 1.399177 1.513663 1.399202 1.376769
1.376769 1.428303 1.399202 1.461731 1.413915
1.387625 1.405895 1.376769 1.413915 1.427424
Condition number: 397.28.

Table 3: The reduced Hessian matrix for route 3.
0.085817 0.062825 0.040592 0.011292 0.044556
0.062825 1.439594 0.040417 0.011292 0.0337
0.040592 0.040417 0.154903 0.011316 0.011292
0.011292 0.011292 0.011316 0.04472 0.019312
0.044556 0.0337 0.011292 0.019312 0.055229
Condition number: 81.69.

Table 4: The reduced Hessian matrix for route 4.
0.159536 0.136719 0.114311 0.114286 0.147575
0.136719 1.513663 0.114486 0.114461 0.136894
0.114311 0.114486 0.154903 0.143587 0.143612
0.114286 0.114461 0.143587 0.176991 0.151607
0.147575 0.136894 0.143612 0.151607 0.187549
Condition number: 88.17.

Table 5: The reduced Hessian matrix for route 5.
0.107955 0.084963 0.033429 0.062705 0.058672
0.084963 1.461731 0.033429 0.06253 0.047816
0.033429 0.033429 0.04472 0.033404 0.025408
0.062705 0.06253 0.033404 0.176991 0.025383
0.058672 0.047816 0.025408 0.025383 0.061325
Condition number: 78.84.

By comparing these tables, we can observe that the column vectors in Table 2 are nearly linearly dependent. This coincides with the fact that route 2 is the route whose flow is nearest to 0 (f_2 = 0.12). Clearly, the condition number for route 2 is 397.28, which is much larger than the condition number for any other route.

6. Numerical Results

In order to numerically justify the theoretical analysis conducted in this research, this section presents some performance comparisons between the ITN method and other typical methods in the literature. These methods include the MSA method (Sheffi and Powell [4]), the MTN method (Zhou et al. [16]), and the GP method (Bekhor and Toledo [15]). Among the above four methods, the MSA method is an implicit enumeration method. It uses Dial's STOCH procedure for network loading, which implicitly considers all efficient paths of the network. The other three methods are explicit enumeration methods, which require an explicit choice of a subset of feasible paths prior to the traffic assignment. These methods are tested on the Sioux Falls and Winnipeg networks. The Sioux Falls network is a medium-size network composed of 76 links, 24 nodes, and 528 OD pairs. The Winnipeg network is a real-size network consisting of 2836 links, 1052 nodes, and 4344 OD pairs. Both networks are taken from Bar-Gera [21].

To provide a common basis for the comparison of the above three explicit enumeration methods, each of them is performed on the same working route set. We use a combination of the link elimination method (Azevedo et al. [22]) and the link penalty method (De La Barra et al. [23]) to generate this working route set. For the Sioux Falls network, the average number of generated routes is 7.3 per OD pair, and the maximum number of generated routes is 11 for any OD pair. For the Winnipeg network, the average and maximum numbers of generated routes per OD pair are 20.3 and 29, respectively.
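The travel time function (67) used in Example 2 can be coded directly. A sketch with the example's parameters (β = 0.6, n = 4, C_a = 100); the function name is our own:

```python
def link_travel_time(v, t0, capacity=100.0, beta=0.6, n=4):
    """Link travel time function, eq. (67): t_a(v_a) = t0 * (1 + beta*(v_a/C_a)^n)."""
    return t0 * (1.0 + beta * (v / capacity) ** n)

# At zero flow the function returns the free-flow time t0;
# at capacity (v = C_a) it returns t0 * (1 + beta).
print(link_travel_time(0.0, t0=2.0))    # free-flow time
print(link_travel_time(100.0, t0=2.0))  # t0 * 1.6
```

Because n = 4, travel times stay close to free-flow for lightly loaded links and grow steeply once flow approaches capacity, which is what couples the route flows in the reduced Hessian term (55).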

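The MSA baseline recalled above updates flows by averaging toward an auxiliary stochastic loading with the predetermined step size 1/k. A minimal single-OD-pair sketch; `logit_load` is a simple stand-in for Dial's STOCH loading, the route costs follow the form of equation (67), and all numeric values are illustrative:

```python
import math

def logit_load(costs, demand, theta):
    # Stand-in for a stochastic network loading (e.g., Dial's STOCH procedure).
    w = [math.exp(-theta * c) for c in costs]
    s = sum(w)
    return [demand * wi / s for wi in w]

def msa(cost_fn, demand, theta, n_routes, iters=200):
    """Method of successive averages with predetermined step size 1/k:
    x_{k+1} = x_k + (1/k) * (y_k - x_k), where y_k is the auxiliary flow."""
    x = [demand / n_routes] * n_routes                 # uniform start
    for k in range(1, iters + 1):
        y = logit_load(cost_fn(x), demand, theta)      # auxiliary flow pattern
        x = [xi + (yi - xi) / k for xi, yi in zip(x, y)]
    return x

# Two identical routes with flow-dependent costs of the eq. (67) form.
cost_fn = lambda x: [2.0 * (1 + 0.6 * (x[0] / 100) ** 4),
                     2.0 * (1 + 0.6 * (x[1] / 100) ** 4)]
flows = msa(cost_fn, demand=100.0, theta=0.5, n_routes=2)
print([round(f, 3) for f in flows])  # symmetric network: near 50/50 split
```

The diminishing 1/k step is what makes MSA robust but slow in the late iteration stage, as the convergence curves in Figure 3 confirm.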
Table 6: The reduced Hessian matrix for route 6.
0.051935 0.039799 0.010673 0.039974 0.002653
0.039799 1.427424 0.021529 0.050655 0.013509
0.010673 0.021529 0.055229 0.043937 0.035917
0.039974 0.050655 0.043937 0.187549 0.035942
0.002653 0.013509 0.035917 0.035942 0.061325
Condition number: 68.24.

The performances of the three explicit enumeration methods are compared in terms of iteration numbers and CPU times under different circumstances. For the MTN method, we only count its major iterations, because the number of minor iterations is inherently reflected by its CPU times. Similarly, for the ITN method, the iteration number is calculated as the sum of the iteration number of the preprocessing procedure (Algorithm 3) and the major iteration number of the truncated Newton method (Algorithm 1). The convergence criterion for the ITN method is based on the root mean square error (RMSE) of the reduced gradient (Zhou et al. [24]), i.e.,

RMSE = √(‖g̃^k‖² / |H|) ≤ ε, (68)

where ‖g̃^k‖ is the norm of the reduced gradient at the kth iteration and |H| is the total number of routes.

In Step 2 of Algorithm 1, we use ε = 10^−4 to terminate the ITN method. In Step 1 of Algorithm 3, we terminate the preprocessing procedure when the RMSE is smaller than one-tenth of its initial value for the Sioux Falls network, and one-half of its initial value for the Winnipeg network. The parameter ρ in the forcing term (7) in Algorithm 2 is set to 0.5.

For all of the computation instances in this section, the start point is obtained by evenly assigning the demand to each route in the working route set between each OD pair. Our computer programs are coded in MATLAB and executed on a notebook computer.
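The stopping rule (68) is straightforward to implement. A sketch, with the reduced gradient passed as a plain vector and the 10^−4 tolerance taken from the text:

```python
import math

def rmse(reduced_gradient):
    """Root mean square error of the reduced gradient, eq. (68):
    sqrt(||g||^2 / |H|), with |H| the total number of routes."""
    n = len(reduced_gradient)
    return math.sqrt(sum(g * g for g in reduced_gradient) / n)

def converged(reduced_gradient, eps=1e-4):
    return rmse(reduced_gradient) <= eps

print(rmse([3.0, 4.0]))          # sqrt(12.5) ~ 3.5355
print(converged([1e-5] * 100))   # True: RMSE = 1e-5 <= 1e-4
```

Normalizing by the number of routes makes the criterion comparable across networks of very different size, such as Sioux Falls and Winnipeg.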

Figure 3: Convergence performance in terms of iteration numbers. (a) Sioux Falls network. (b) Winnipeg network.

6.1. Algorithm Performance. The algorithm performance test is carried out to compare the convergence progress of the different methods at different iteration stages. In this test case, we use ln(RMSE) as the convergence progress measure. The dispersion parameter θ is assumed to be 0.5.

Figure 3 displays the convergence performance of the explicit and implicit methods in terms of iteration numbers on the Sioux Falls and Winnipeg networks. This figure in essence illustrates the convergence rate of the four methods. We find that in the early iteration stage, the convergence of the MTN and GP methods is relatively slow in comparison with the ITN and MSA methods on both networks. Since the MTN method belongs to the category of truncated Newton methods, this phenomenon in essence confirms the analysis developed in Section 4.1, which indicates that the step size restriction may deteriorate the convergence rate of truncated Newton type methods in the early iteration stage. As for the GP method, it also requires a maximum step size restriction to ensure strictly positive path flows in each iteration (cf. Bekhor and Toledo [15]). By a similar reasoning, it is also slower than the ITN method in the early iteration stage.

We can also observe from Figure 3 that in the late iteration stage, the slope of the curve for the MSA method is much gentler than those of the other three methods. The reason is that the predetermined step size employed in the MSA method diminishes to zero as k approaches infinity, which deteriorates its rate of convergence. Furthermore, if we compare the three explicit methods, we find that the slopes of the curves for the ITN and MTN methods are similar, both of which are steeper than that of the GP method. This observation is consistent with the fact that the ITN and MTN methods are locally superlinearly convergent, while the convergence rate of the GP method is only linear. Furthermore, the fast convergence rate of the ITN method in the late iteration stage also validates Proposition 1, which suggests that the restriction on the step size no longer takes effect when the iteration point is close to the optimal SUE solution.

Figure 4 shows the convergence performance of the explicit and implicit methods in terms of CPU times on the Sioux Falls and Winnipeg networks. This figure in essence


Figure 4: Convergence performance in terms of CPU times. (a) Sioux Falls network. (b) Winnipeg network.


illustrates the computational efficiency of the four methods. From this figure, we can see that the computational efficiency of the ITN method is very high, which validates the effectiveness of the maximum flow principle proposed in Section 5.1. Compared with the MTN and GP methods, the ITN method can reduce the CPU times by roughly 50%–80%. The reason for the greater efficiency of the ITN method is twofold. First, the number of iterations required by the ITN method is smaller than that of the MTN and GP methods. Second, the preprocessing procedure of the ITN method does not entail any inner iteration, thus saving a lot of CPU time in the early iteration stage.

Figure 5: Effect of the dispersion parameter on iteration numbers. (a) Sioux Falls network. (b) Winnipeg network.

6.2. Sensitivity Analysis. By varying the value of θ or multiplying the model demand by different factors, we can examine the impact of the dispersion parameter or demand factor on the performance of the different methods. As we learned from Section 6.1, the MSA method (i.e., the implicit method) is much inferior to the other three methods, especially when computing exact solutions. In order for the curves of the different methods in the relevant figures to be comparable, in the sensitivity analysis we omit the MSA method and only examine the performance of the ITN method against the MTN and GP methods under various conditions.

6.2.1. Sensitivity Tests for Different Dispersion Parameters. The iteration numbers and CPU times for different θ for the three methods on the Sioux Falls and Winnipeg networks are illustrated in Figures 5 and 6. As can be observed from


Figure 6: Effect of the dispersion parameter on CPU times. (a) Sioux Falls network. (b) Winnipeg network.


Figure 7: Effect of the demand factor on iteration numbers. (a) Sioux Falls network. (b) Winnipeg network.

Figure 5, the ITN method is superior to the MTN and GP methods in terms of iteration number for all values of θ on both networks. This suggests that the average convergence rate of the ITN method is higher than that of the other two methods. As for the required CPU times shown in Figure 6, we can observe the following. For the Sioux Falls network, the ITN method consumes less CPU time than the MTN and GP methods for all θ. For the Winnipeg network, the CPU times for the three methods are similar when θ is very small (i.e., θ = 0.1), but the ITN method consumes much less CPU time than the other two methods when θ is relatively large. This indicates that the ITN method is more efficient than the MTN and GP methods for most practical cases.

Furthermore, we can find that the larger the dispersion parameter θ is, the greater the advantage the ITN method gains. The reason for this phenomenon is that if θ is large, the start point is much farther from the optimal solution than in the case when θ is small. Since the ITN method contains a preprocessing procedure that is tailored to avoid the step size restriction in the early iteration stage, the effect of this procedure becomes more prominent when the start point is farther from the optimal solution.

6.2.2. Sensitivity Tests for Different Levels of Demand. Figures 7 and 8 present the iteration numbers and CPU times required by different demand levels for the three


methods on the Sioux Falls and Winnipeg networks. As illustrated in Figure 7, for the Sioux Falls network, the number of iterations of the ITN method is smaller than that of the MTN and GP methods for all demand factors. For the Winnipeg network, when the demand level is relatively small, the ITN method requires fewer iterations in comparison with the other two methods. When the demand level is high, the performance of the ITN method and that of the MTN method are comparable, both of which are superior to the GP method. From Figure 8, we can observe similar results for the curves of the three methods in terms of CPU times.

In view of all discussions above, we conclude that the ITN method performs better than the MTN and GP methods for most practical cases.

Figure 8: Effect of the demand factor on CPU times. (a) Sioux Falls network. (b) Winnipeg network.

7. Conclusions and Future Research

This study investigated truncated Newton type algorithms for the logit-based SUE problem. We showed that using the traditional truncated Newton method to solve this problem may be very slow in the early iteration stage, and we thus proposed an improved truncated Newton method to overcome this drawback. We compared the ITN method with the GP and MTN methods on the Sioux Falls network. Numerical results validate that the ITN method can indeed improve the performance of these methods.

Further research work can be undertaken in the following aspects. First, the current research concentrates on solving the logit-based SUE model. Extending the ITN method to more general traffic assignment models, such as the combined distribution and assignment model [25], the cross-nested logit model [26], and the generalized extreme value (GEV) family of models [27], is highly anticipated. Second, when implementing the ITN method, routes are generated prior to the traffic assignment. It is desirable to incorporate a column generation scheme into the ITN method, which can generate the path set dynamically during the assignment.

Data Availability

The data used to support the findings of this study are available from http://www.bgu.ac.il/∼bargera/tntp/.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (nos. 71601046 and 51778141), the Natural Science Foundation of Jiangsu Province (BK20160686), the National Key R&D Program of China (2018YFB1600900), and the Hong Kong Polytechnic University (1-BE1V).

References

[1] J. G. Wardrop, "Some theoretical aspects of road traffic research," Proceedings, Institute of Civil Engineers, Part II, vol. 1, pp. 325–378, 1952.
[2] C. F. Daganzo and Y. Sheffi, "On stochastic models of traffic assignment," Transportation Science, vol. 11, no. 3, pp. 253–274, 1977.
[3] C. Fisk, "Some developments in equilibrium traffic assignment," Transportation Research Part B: Methodological, vol. 14, no. 3, pp. 243–255, 1980.
[4] Y. Sheffi and W. B. Powell, "An algorithm for the equilibrium assignment problem with random link times," Networks, vol. 12, no. 2, pp. 191–207, 1982.
[5] Z. Liu, S. Wang, B. Zhou, and Q. Cheng, "Robust optimization of distance-based tolls in a network considering stochastic day to day dynamics," Transportation Research Part C: Emerging Technologies, vol. 79, pp. 58–72, 2017.
[6] C. Sun, L. Cheng, S. Zhu, F. Han, and Z. Chu, "Multi-criteria user equilibrium model considering travel time, travel time reliability and distance," Transportation Research Part D: Transport and Environment, vol. 66, pp. 3–12, 2019.
[7] C. Wang, C. Xu, J. Xia, and Z. Qian, "Modeling faults among e-bike-related fatal crashes in China," Traffic Injury Prevention, vol. 18, no. 2, pp. 175–181, 2017.
[8] M. Du and L. Cheng, "Better understanding the characteristics and influential factors of different travel patterns in free-floating bike sharing: evidence from Nanjing, China," Sustainability, vol. 10, no. 4, p. 1244, 2018.
[9] R. B. Dial, "A probabilistic multipath traffic assignment algorithm which obviates path enumeration," Transportation Research, vol. 5, no. 2, pp. 83–111, 1971.
[10] M. Maher, "Algorithms for logit-based stochastic user equilibrium assignment," Transportation Research Part B: Methodological, vol. 32, no. 8, pp. 539–549, 1998.
[11] M. G. H. Bell, "Alternatives to Dial's logit assignment algorithm," Transportation Research Part B: Methodological, vol. 29, no. 4, pp. 287–295, 1995.
[12] T. Akamatsu, "Cyclic flows, Markov process and stochastic traffic assignment," Transportation Research Part B: Methodological, vol. 30, no. 5, pp. 369–386, 1996.
[13] O. Damberg, J. T. Lundgren, and M. Patriksson, "An algorithm for the stochastic user equilibrium problem," Transportation Research Part B: Methodological, vol. 30, no. 2, pp. 115–131, 1996.
[14] T. Larsson and M. Patriksson, "Simplicial decomposition with disaggregated representation for the traffic assignment problem," Transportation Science, vol. 26, no. 1, pp. 4–17, 1992.
[15] S. Bekhor and T. Toledo, "Investigating path-based solution algorithms to the stochastic user equilibrium problem," Transportation Research Part B: Methodological, vol. 39, no. 3, pp. 279–295, 2005.
[16] B. Zhou, M. C. Bliemer, X. Li, and D. Huang, "A modified truncated Newton algorithm for the logit-based stochastic user equilibrium problem," Applied Mathematical Modelling, vol. 39, no. 18, pp. 5415–5435, 2015.
[17] M. Patriksson, "Partial linearization methods in nonlinear programming," Journal of Optimization Theory and Applications, vol. 78, no. 2, pp. 227–246, 1993.
[18] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, USA, 2nd edition, 2006.
[19] R. S. Dembo and T. Steihaug, "Truncated-Newton algorithms for large-scale unconstrained optimization," Mathematical Programming, vol. 26, no. 2, pp. 190–212, 1983.
[20] S. G. Nash, "Preconditioning of truncated-Newton methods," SIAM Journal on Scientific and Statistical Computing, vol. 6, no. 3, pp. 599–616, 1985.
[21] H. Bar-Gera, "Transportation network test problems," 2019, http://www.bgu.ac.il/∼bargera/tntp/.
[22] J. A. Azevedo, M. E. O. Santos Costa, J. J. E. R. Silvestre Madeira, and E. Q. Vieira Martins, "An algorithm for the ranking of shortest paths," European Journal of Operational Research, vol. 69, no. 1, pp. 97–106, 1993.
[23] T. De la Barra, B. Perez, and J. Anez, "Multidimensional path search and assignment," in Proceedings of the 21st PTRC Summer Annual Meeting, pp. 307–319, Manchester, UK, September 1993.
[24] Z. Zhou, A. Chen, and S. Bekhor, "C-logit stochastic user equilibrium model: formulations and solution algorithm," Transportmetrica, vol. 8, no. 1, pp. 17–41, 2012.
[25] S. P. Evans, "Derivation and analysis of some models for combining trip distribution and assignment," Transportation Research, vol. 10, no. 1, pp. 37–57, 1976.
[26] P. Vovsha and S. Bekhor, "Link-nested logit model of route choice: overcoming the route overlapping problem," Transportation Research Record, vol. 1645, no. 1, pp. 133–142, 1998.
[27] D. McFadden, "Modelling the choice of residential location," in Spatial Interaction Theory and Residential Location, A. Karlquist, Ed., pp. 75–96, North-Holland, Amsterdam, Netherlands, 1978.
