A SMOOTHING NEWTON METHOD FOR NCP BASED ON A NEW CLASS OF SMOOTHING FUNCTIONS

  • Zhu, Jianguang (School of Science, Shandong University of Science and Technology) ;
  • Hao, Binbin (School of Science, China University of Petroleum)
  • Received : 2013.02.08
  • Accepted : 2013.04.20
  • Published : 2014.01.30

Abstract

A new class of smoothing functions is introduced in this paper, which includes some important smoothing complementarity functions as special cases. Based on this new class of smoothing functions, we propose a smoothing Newton method. Our algorithm needs to solve only one linear system of equations per iteration. Without requiring nonemptiness and boundedness of the solution set, the proposed algorithm is proved to be globally convergent. Numerical results indicate that the smoothing Newton method based on the new class of smoothing functions with $\theta\in(0,1)$ seems to have better numerical performance than those based on some other important smoothing functions, which also demonstrates that our algorithm is promising.


1. Introduction

Consider the following nonlinear complementarity problem (NCP): find a vector $x \in \mathbb{R}^n$ such that
$$x \ge 0, \quad F(x) \ge 0, \quad x^T F(x) = 0, \tag{1.1}$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable with $F := (F_1, F_2, \ldots, F_n)^T$. The NCP has been studied extensively due to its many applications in operations research, engineering and economics (see, for example, [1,2]).

Many solution methods have been developed for NCPs, such as interior-point methods [3,4] and smoothing methods [5,6,7]. In this paper, we are interested in smoothing Newton methods for solving the NCP. These methods reformulate the NCP as a system of smooth equations by means of a smoothing function and solve this system approximately at each iteration by Newton's method. The smoothing function plays an important role in smoothing Newton algorithms. Up to now, many smoothing functions have been proposed: the Kanzow smoothing function [8], the Chen-Harker-Kanzow-Smale smoothing function [5], the Chen-Mangasarian smoothing function [9], the Huang-Han-Chen smoothing function [10], and so on. Generally, the construction of a smoothing function is based on a so-called NCP-function: an NCP-function is a mapping $\phi : \mathbb{R}^2 \to \mathbb{R}$ having the property
$$\phi(a, b) = 0 \iff a \ge 0, \; b \ge 0, \; ab = 0.$$

Many NCP-functions have been studied. Among them, the Fischer-Burmeister function and the minimum function are the most prominent NCP-functions; they are defined respectively by
$$\phi_{FB}(a, b) := a + b - \sqrt{a^2 + b^2} \quad\text{and}\quad \phi_{\min}(a, b) := \min\{a, b\}.$$

By smoothing the symmetrically perturbed Fischer-Burmeister function, Huang, Han, Xu and Zhang [11] proposed the smoothing function (1.2).

By smoothing the symmetrically perturbed minimum function, Huang et al. [10] proposed the smoothing function (1.3).

Recently, by combining the Fischer-Burmeister function and the minimum function, Liu and Wu [12] proposed a further smoothing function of this kind.

Motivated by [10,11,12], we introduce in this paper the following class of smoothing functions:
$$\phi_\theta(\mu, a, b) := \theta\,\phi_1(\mu, a, b) + (1 - \theta)\,\phi_2(\mu, a, b), \tag{4}$$

where $\phi_1$ and $\phi_2$ denote the smoothing functions (1.3) and (1.2), respectively, and θ ∈ [0,1] is a given constant. It is easy to see that when θ = 1, ϕθ reduces to the smoothing function (1.3), and when θ = 0, ϕθ reduces to the smoothing function (1.2). Thus, the class of smoothing functions defined by (4) contains the smoothing functions (1.2) and (1.3) as special cases.
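As a concrete illustration, the following MATLAB sketch evaluates a function of the form (4). Since the displayed formulas for (1.2) and (1.3) are not reproduced above, the sketch substitutes two commonly used stand-ins — the smoothed minimum function $a + b - \sqrt{(a-b)^2 + 4\mu^2}$ and the smoothed Fischer-Burmeister function $a + b - \sqrt{a^2 + b^2 + 2\mu^2}$ — so the component functions here are an assumption for illustration, not the paper's own (1.2) and (1.3).

```matlab
% Hypothetical member of the class (4): a convex combination of two
% standard smoothing functions. The paper's own (1.2) and (1.3) should
% be substituted for phi1 and phi2 below.
function v = phi_theta(mu, a, b, theta)
    phi1 = a + b - sqrt((a - b).^2 + 4*mu.^2);   % smoothed minimum function (stand-in)
    phi2 = a + b - sqrt(a.^2 + b.^2 + 2*mu.^2);  % smoothed Fischer-Burmeister (stand-in)
    v = theta*phi1 + (1 - theta)*phi2;           % theta in [0,1]
end
```

For instance, phi_theta(0, 2, 0, 0.5) returns 0, consistent with Lemma 2.3(i) below.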

Motivated by the above-mentioned work, by using the symmetric perturbation technique and the idea of convex combination, we propose a new class of smoothing functions. We also investigate a smoothing Newton method to solve the NCP based on this new class of smoothing functions. Our algorithm has the following nice properties: (a) it needs to solve only one linear system of equations and to perform one line search per iteration; (b) we establish the boundedness of the level set, so the iteration sequence is bounded and has at least one accumulation point; in particular, we do not need to assume nonemptiness and boundedness of the solution set of NCP (1.1), although this assumption is widely used in the literature; (c) the function we use is a parametric class of smoothing functions containing some important smoothing complementarity functions as special cases, and the parameters μ and θ can be adjusted to obtain a better effect in practice. The numerical experiments indicate that the algorithm is efficient and promising.

The organization of this paper is as follows. In Section 2, we recall some useful definitions and give some properties of the new smoothing function. In Section 3, we propose a smoothing Newton algorithm. Convergence results are analyzed in Section 4. Some preliminary computational results are reported in Section 5. Some words about notation are needed. All vectors are column vectors; $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the nonnegative and positive orthants of $\mathbb{R}^n$, respectively. We define N = {1, 2, . . . , n}.

 

2. Preliminaries

In this section, we recall some useful definitions and give some properties of the new smoothing function defined by (4).

Definition 2.1. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be a P0-matrix if all its principal minors are nonnegative.

Definition 2.2. A function $F : \mathbb{R}^n \to \mathbb{R}^n$ is said to be a P0-function if, for all $x, y \in \mathbb{R}^n$ with $x \ne y$, there exists an index i0 ∈ N such that
$$x_{i_0} \ne y_{i_0} \quad\text{and}\quad (x_{i_0} - y_{i_0})\big[F_{i_0}(x) - F_{i_0}(y)\big] \ge 0.$$

The following lemma gives some properties of the smoothing function ϕθ(·, ·, ·) defined by (4). Its proof is obvious.

Lemma 2.3. Let ϕθ be defined by (4). Then:

(i) ϕθ(0, a, b) = 0 ⇔ a ≥ 0, b ≥ 0, ab = 0.

(ii) ϕθ(μ, a, b) is continuously differentiable at every point of $\mathbb{R}^3$ different from (0, c, c) for arbitrary $c \in \mathbb{R}$. In particular, ϕθ(μ, a, b) is continuously differentiable at every (μ, a, b) ∈ $\mathbb{R}^3$ with μ ≠ 0.

(iii) ϕθ(μ, a, b) is semismooth on $\mathbb{R}^3$.

Define $H : \mathbb{R}^{1+n} \to \mathbb{R}^{1+n}$ by
$$H(z) := \begin{pmatrix} \mu \\ \Phi_\theta(\mu, x) \end{pmatrix}, \tag{5}$$

where
$$\Phi_\theta(\mu, x) := \big(\phi_\theta(\mu, x_1, F_1(x)), \ldots, \phi_\theta(\mu, x_n, F_n(x))\big)^T \tag{6}$$

and z := (μ, x) ∈ $\mathbb{R} \times \mathbb{R}^n$.

By (5) and Lemma 2.3, we know that solving NCP (1.1) is equivalent to solving H(z) = 0.

Define the merit function
$$h(z) := \|H(z)\|^2.$$

We also know that NCP (1.1) is equivalent to the equation h(z) = 0.

For simplicity, we denote ϕi := ϕθ(μ, xi, Fi(x)) for i ∈ N.
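Under the stacking (5)-(6), evaluating H and h is straightforward. A minimal MATLAB sketch, reusing the stand-in phi_theta above and taking F as a function handle, is:

```matlab
% Sketch of H(z) from (5)-(6) and the merit function h(z) = ||H(z)||^2,
% for z = (mu, x). F is a handle mapping R^n to R^n; phi_theta is the
% stand-in defined earlier and acts componentwise on vectors.
function [H, h] = H_and_merit(mu, x, F, theta)
    Fx  = F(x);                         % F(x) in R^n
    Phi = phi_theta(mu, x, Fx, theta);  % componentwise smoothing residual
    H   = [mu; Phi];                    % H(z) = (mu, Phi_theta(mu, x))
    h   = H' * H;                       % merit function h(z) = ||H(z)||^2
end
```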

Lemma 2.4. Let H and Φθ be defined by (5) and (6), respectively. Then:

(i) Φθ is continuously differentiable at any z = (μ, x) ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$.

(ii) H is continuously differentiable at any z = (μ, x) ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$ with its Jacobian
$$H'(z) = \begin{pmatrix} 1 & 0 \\ v(z) & D_1(z) + D_2(z)\,F'(x) \end{pmatrix}, \tag{9}$$

where $v(z) := \partial \Phi_\theta(\mu, x)/\partial \mu$ and
$$D_1(z) := \operatorname{diag}\big(\partial_a \phi_\theta(\mu, x_i, F_i(x))\big)_{i \in N}, \qquad D_2(z) := \operatorname{diag}\big(\partial_b \phi_\theta(\mu, x_i, F_i(x))\big)_{i \in N},$$

with $\partial_a$ and $\partial_b$ denoting differentiation of ϕθ with respect to its second and third arguments, respectively.

If F is a P0-function, then the matrix H′(z) is nonsingular for any z ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$.

Proof. It is easy to see that Φθ is continuously differentiable at any z ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$, which proves (i).

Next we prove (ii). It follows from (i) and the continuous differentiability of F that H is continuously differentiable at any z ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$. From the definition (5) of H(z), it follows that (9) holds. For all i ∈ N,

By the above equation, we have

Since

which, together with (2.6), gives

Thus,

which implies that D1(z) and D2(z) are positive diagonal matrices for any z ∈ $\mathbb{R}_{++} \times \mathbb{R}^n$. Since F is a P0-function, F′(x) is a P0-matrix for any x ∈ $\mathbb{R}^n$ by Lemma 5.4 in [13]. In view of the fact that D2(z) is a positive diagonal matrix, a straightforward calculation shows that all principal minors of the matrix D2(z)F′(x) are nonnegative. By Definition 2.1, the matrix D2(z)F′(x) is a P0-matrix. Hence, by Theorem 3.1 in [14], the matrix D1(z) + D2(z)F′(x) is nonsingular, which implies that H′(z) is nonsingular.
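Numerically, the block structure (9) makes the Jacobian cheap to assemble once F′(x) is available. The sketch below does this for the stand-in phi_theta used earlier; its diagonal entries are the partial derivatives of that stand-in, so for the paper's own ϕθ they would be replaced by the expressions of Lemma 2.4.

```matlab
% Assembling the Jacobian (9), assuming the block structure
% H'(z) = [1, 0; dPhi/dmu, D1 + D2*F'(x)] and the stand-in phi_theta.
% Jx is the n-by-n Jacobian F'(x), supplied by the caller.
function J = H_jacobian(mu, x, Fx, Jx, theta)
    r1 = sqrt((x - Fx).^2 + 4*mu^2);   % root term of the smoothed minimum
    r2 = sqrt(x.^2 + Fx.^2 + 2*mu^2);  % root term of the smoothed FB
    d1  = theta*(1 - (x - Fx)./r1) + (1 - theta)*(1 - x./r2);   % d phi / d a
    d2  = theta*(1 + (x - Fx)./r1) + (1 - theta)*(1 - Fx./r2);  % d phi / d b
    dmu = -theta*4*mu./r1 - (1 - theta)*2*mu./r2;               % d phi / d mu
    n = numel(x);
    J = [1, zeros(1, n); dmu, diag(d1) + diag(d2)*Jx];
end
```

For μ > 0 one has |xi − Fi(x)| < r1 and |xi| < r2, so the entries d1 and d2 are strictly positive, matching the positivity of D1(z) and D2(z) used in the nonsingularity argument above.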

 

3. Algorithm

In this section we shall present a smoothing Newton method for NCP and prove that the proposed algorithm is well defined.

Algorithm 3.1. (Smoothing Newton algorithm)

S0 Choose constants δ ∈ (0, 1) and σ ∈ (0, 1/2), and let μ̄ > 0.

Take γ ∈ (0, 1) such that γμ̄ < 1.

Let x0 ∈ $\mathbb{R}^n$ be an arbitrary vector; set μ0 := μ̄, z0 := (μ0, x0) and k := 0.

S1 Termination criterion. If ∥H(zk)∥ = 0, stop.

S2 Compute Δzk := (Δμk, Δxk) ∈ $\mathbb{R} \times \mathbb{R}^n$ by solving the linear system
$$H(z^k) + H'(z^k)\,\Delta z^k = \beta_k \bar{\mu}\, e^1, \tag{11}$$

where e1 := (1, 0, . . . , 0)T ∈ $\mathbb{R}^{1+n}$ and βk = β(zk) is defined by β(z) := γ min{1, h(z)}.

S3 Let mk be the smallest nonnegative integer m such that the line-search criterion (13) holds. Set λk := δ^{mk}.

S4 Set zk+1 = zk + λkΔzk and k := k + 1. Go to S1.
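To make the iteration concrete, here is a schematic MATLAB sketch of the main loop. Since the displayed forms of the Newton equation (11) and the line-search rule (13) are only partially recoverable, the sketch follows the standard framework the algorithm builds on: one linear system of the form $H(z^k) + H'(z^k)\Delta z^k = \beta_k\bar{\mu}e^1$ and one Armijo-type test $h(z^k + \delta^m \Delta z^k) \le [1 - 2\sigma(1 - \gamma\bar{\mu})\delta^m]\, h(z^k)$ per iteration; both of these forms, and all parameter values, are assumptions for illustration.

```matlab
% Schematic main loop of Algorithm 3.1. The Newton equation and the
% Armijo-type test below are assumed standard forms, not verbatim (11)/(13).
function [mu, x] = smoothing_newton(F, Fprime, x0, theta)
    mubar = 0.5; gamma = 0.2; delta = 0.5; sigma = 1e-4; tol = 1e-6; % illustrative
    mu = mubar; x = x0; n = numel(x0);
    e1 = [1; zeros(n, 1)];
    for k = 0:999
        [H, h] = H_and_merit(mu, x, F, theta);
        if norm(H) <= tol, break; end                 % S1: termination criterion
        beta = gamma*min(1, h);                       % beta_k = gamma*min{1, h(z^k)}
        J  = H_jacobian(mu, x, F(x), Fprime(x), theta);
        dz = J \ (beta*mubar*e1 - H);                 % S2: Newton equation (assumed form)
        lambda = 1;                                   % S3: backtracking line search
        while true
            [Ht, ht] = H_and_merit(mu + lambda*dz(1), x + lambda*dz(2:end), F, theta);
            if ht <= (1 - 2*sigma*(1 - gamma*mubar)*lambda)*h, break; end
            lambda = delta*lambda;
            if lambda < 1e-16, break; end             % safeguard for the sketch
        end
        mu = mu + lambda*dz(1);                       % S4: update z^{k+1}
        x  = x + lambda*dz(2:end);
    end
end
```

Note that the first component of (11) gives Δμk = −μk + βkμ̄, so μ stays positive along the iteration whenever λk ∈ (0, 1].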

The following theorem proves that Algorithm 3.1 is well-defined and generates an infinite sequence. Define the set
$$\Omega := \{\, z = (\mu, x) \in \mathbb{R} \times \mathbb{R}^n : \mu \ge \beta(z)\,\bar{\mu} \,\}.$$

Theorem 3.1. Suppose F is a continuously differentiable P0-function. Then Algorithm 3.1 is well-defined and generates an infinite sequence {zk = (μk, xk)} with zk ∈ Ω and μk > 0 for all k ≥ 0.

Proof. If μk > 0, then since F is a continuously differentiable P0-function, it follows from Lemma 2.4 that the matrix H′(zk) is nonsingular. Hence, step S2 is well-defined at the k-th iteration. By (11) we have

which implies

where the second inequality follows from

Hence, by the first equation of (3.1), we can get

From (2.1) and (2.4), we have

Let Rk(α) := h(zk + αΔzk) − h(zk) − αh′(zk)Δzk. It is easy to see that Rk(α) = o(α). When

Then by (3.1), (3.2), (3.4) and (3.5), we have

Since Rk(α) = o(α), for α sufficiently small the line-search criterion (13) is satisfied; this shows that step S3 is well-defined at the k-th iteration. Therefore, Algorithm 3.1 is well-defined and generates an infinite sequence {zk}.

Next, we prove that zk ∈ Ω for all k ≥ 0 by induction. First, z0 ∈ Ω is evident from the choice of the starting point. Now suppose that zk ∈ Ω; then by (13) we obtain zk+1 ∈ Ω, which completes the induction.

 

4. Convergence of Algorithm 3.1

In this section, we discuss the global convergence and local superlinear convergence of Algorithm 3.1. We need the following Lemma 4.1, which can be found in [15].

Lemma 4.1. Let ε > 0 and let the function ϕ be defined by

Let {ak}, {bk} ⊆ $\mathbb{R}$ be any two sequences such that ak, bk → +∞, or ak → −∞, or bk → −∞. Then |ϕ(ak, bk)| → +∞.

Lemma 4.2. Let ϕθ be defined by (4) and let μ > 0 be fixed. Assume that {ak}, {bk} ⊆ $\mathbb{R}$ are any two sequences such that ak, bk → +∞, or ak → −∞, or bk → −∞. Then |ϕθ(μ, ak, bk)| → +∞.

Proof. (i) Suppose that ak → −∞. If {bk} is bounded, then the result holds obviously; if instead bk → +∞, we have −ak > 0 and bk > 0 for all k sufficiently large, and hence,

which, together with −ak → +∞, implies that |ϕθ(μ, ak, bk)| → +∞.

(ii) For the case bk → −∞, the result follows from the symmetry of ϕθ with respect to its last two arguments.

(iii) Suppose that ak → +∞ and bk → +∞. Thus, for sufficiently large k,

hence,

By Lemma 4.1, we know that |ϕθ(μ, ak, bk)| → +∞ in this case as well. This completes the proof.

Lemma 4.3. Let F be a continuous P0-function and let Φθ(μ, x) be defined by (6). For any μ > 0 and c > 0, define the level set
$$L_\mu(c) := \{\, x \in \mathbb{R}^n : \|\Phi_\theta(\mu, x)\| \le c \,\}.$$

Then, for any 0 < μ1 ≤ μ2 and c > 0, the set $\bigcup_{\mu_1 \le \mu \le \mu_2} L_\mu(c)$ is bounded.

Proof. Suppose, to the contrary, that this set is unbounded for some fixed c > 0. Then we can find a sequence {(μk, xk)} such that μ1 ≤ μk ≤ μ2, ∥Φθ(μk, xk)∥ ≤ c and ∥xk∥ → ∞.

Since the sequence {xk} is unbounded, the index set J := {i ∈ N : {xik} is unbounded} is nonempty. Without loss of generality, we can assume that |xik| → +∞ for all i ∈ J. Define the sequence {yk} by
$$y_i^k := \begin{cases} 0, & i \in J, \\ x_i^k, & i \notin J. \end{cases}$$

Then {yk} is bounded. Note that F is a P0-function; by Definition 2.2, we have
$$0 \le \max_{i \in N,\; x_i^k \ne y_i^k} (x_i^k - y_i^k)\big[F_i(x^k) - F_i(y^k)\big] = x_j^k\big[F_j(x^k) - F_j(y^k)\big], \tag{4.3}$$

where j is one of the indices for which the max is attained, and j is assumed, without loss of generality, to be independent of k.

We consider the following two cases:

Case 1. In this case, since {Fj(yk)} is bounded by the continuity of Fj, we deduce from Equation (4.3) that

By Lemma 4.2, we know that |ϕθ(μk, xjk, Fj(xk))| → +∞.

Case 2. In this case, since {Fj(yk)} is bounded by the continuity of Fj, we deduce from Equation (4.3) that

for any k. Since μ1 ≤ μk ≤ μ2, we have

which, together with Lemma 4.2, gives |ϕθ(μk, xjk, Fj(xk))| → +∞.

In either case we obtain ∥Φθ(μk, xk)∥ → +∞, which contradicts ∥Φθ(μk, xk)∥ ≤ c. This completes the proof.

Corollary 4.3. Suppose that F is a P0-function and μ > 0. Then the function ∥Φθ(μ, x)∥ is coercive, i.e., $\lim_{\|x\|\to\infty} \|\Phi_\theta(\mu, x)\| = +\infty$.
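As a quick numerical sanity check of this coercivity (under the stand-in phi_theta from Section 1, so the exact values are illustrative only), one can evaluate ∥Φθ(μ, x)∥ along a ray with ∥x∥ → ∞ for the identity map F(x) = x, which is a P0-function:

```matlab
% Numerical illustration of Corollary 4.3 with the stand-in phi_theta:
% for F(x) = x and fixed mu > 0, ||Phi_theta(mu, x)|| grows without
% bound as ||x|| -> infinity along the ray x = -t*(1,...,1)^T.
F = @(x) x;  mu = 0.1;  theta = 0.5;
for t = [1e1, 1e2, 1e3, 1e4]
    x = -t*ones(4, 1);
    Phi = phi_theta(mu, x, F(x), theta);
    fprintf('||x|| = %8.1e,  ||Phi_theta|| = %8.1e\n', norm(x), norm(Phi));
end
```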

Theorem 4.4. Suppose F is a continuously differentiable P0-function, and the sequence {zk = (μk, xk)} is generated by Algorithm 3.1. Then the sequence {zk} is bounded, and any accumulation point z∗ = (μ∗, x∗) of {zk} is a solution of H(z) = 0.

Proof. Since h(zk) is monotonically decreasing and bounded from below by zero, the sequence ∥Φθ(μk, xk)∥ is bounded. By Corollary 4.3, we immediately obtain that {xk} is bounded. Note that the boundedness of {h(zk)} implies the boundedness of {μk}. So {zk} is bounded. Without loss of generality, suppose zk → z∗. Then h(zk) → h∗ and β(zk) → β∗. If h∗ = 0, we obtain the desired result. Now we prove h∗ = 0 by contradiction. In fact, if h∗ ≠ 0, then h∗ > 0 and β∗ = γ min{1, h∗} > 0; since zk ∈ Ω for all k, we also have μ∗ ≥ β∗μ̄ > 0. It follows from Lemma 2.4 that H′(z∗) is nonsingular. By the continuity of H′(z), there exists a closed neighborhood N(z∗) of z∗ such that for any z ∈ N(z∗) the matrix H′(z) is invertible. So, for all sufficiently large k, zk ∈ N(z∗) and H′(zk) is invertible. Let Δzk be the unique solution of the following system:

It follows from the continuity of H and the definition of β(·) that {μk} and {βk} converge to μ∗ and β∗, respectively. This, together with (3.2), implies that

Thus, for sufficiently large k, the stepsize λk/δ does not satisfy (3.2); then

which implies that

Taking limits on both sides of the inequalities (4.5), from (4.6) we have

This indicates that σ ≥ 1, which contradicts σ ∈ (0, 1/2). Hence h∗ = 0, i.e., z∗ = (μ∗, x∗) is a solution of H(μ, x) = 0.

Theorem 4.5. Suppose that F is a continuously differentiable P0-function. Let z∗ be an accumulation point of the iteration sequence {zk} generated by Algorithm 3.1. If all V ∈ ∂H(z∗) are nonsingular, then:

(1) λk ≡ 1 for all zk sufficiently close to z∗;

(2) the whole sequence {zk} converges to z∗;

(3) ∥zk+1 − z∗∥ = o(∥zk − z∗∥) (or ∥zk+1 − z∗∥ = O(∥zk − z∗∥²) if F′ is Lipschitz continuous on $\mathbb{R}^n$).

Proof. The proof is similar to that of Theorem 3.2 in [16].

 

5. Numerical experiments

In this section, we report some numerical results for Algorithm 3.1. All experiments were performed on a PC with a 1.6 GHz CPU and 512 MB of RAM, and all codes were written in MATLAB 7.5. Throughout our computational experiments, the parameters used in the algorithm are chosen as

In our implementation, we use $\|H(z^k)\| \le 10^{-6}$ as the stopping rule.

Example 5.1. Kojima-Shindo Problem. This test problem was used by Pang and Gabriel [17], Mangasarian and Solodov [18], Kanzow [19], and Jiang and Qi [20] with four variables. Let

Table 1 gives the results for this example with starting points a1 = (0, 0, 0, 1)T, a2 = (1,−2, 1,−2)T, a3 = (1, 2, 6, 8)T.
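The displayed mapping F of this example is not reproduced above. For orientation, the following MATLAB snippet states the Kojima-Shindo mapping in the form it usually takes in the literature (cf. [17]) and shows how the solver sketched in Section 3 would be invoked; treat the exact coefficients as an assumption to be checked against [17].

```matlab
% Kojima-Shindo NCP mapping as commonly stated in the literature (cf. [17]);
% verify the coefficients against the original source before use.
F = @(x) [3*x(1)^2 + 2*x(1)*x(2) + 2*x(2)^2 + x(3) + 3*x(4) - 6;
          2*x(1)^2 + x(1) + x(2)^2 + 10*x(3) + 2*x(4) - 2;
          3*x(1)^2 + x(1)*x(2) + 2*x(2)^2 + 2*x(3) + 9*x(4) - 9;
          x(1)^2 + 3*x(2)^2 + 2*x(3) + 3*x(4) - 3];
Fprime = @(x) [6*x(1) + 2*x(2), 2*x(1) + 4*x(2),  1, 3;
               4*x(1) + 1,      2*x(2),          10, 2;
               6*x(1) + x(2),   x(1) + 4*x(2),    2, 9;
               2*x(1),          6*x(2),           2, 3];
[mu, x] = smoothing_newton(F, Fprime, [0; 0; 0; 1], 0.5);  % starting point a1, theta = 0.5
```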

TABLE 1. Numerical results for Examples 5.1 to 5.4

Example 5.2. Josephy Problem. This test problem was used by Dirkse and Ferris [22] with four variables. Let

Table 1 gives the results for this example with starting points a1 = (2,−2,−2,−2)T, a2 = (2, 3, 4, 6)T, a3 = (0, 2, 0, 6)T.

Example 5.3. Mathiesen Problem. This test problem was used by Pang and Gabriel [17] with four variables, and was also tested by Kanzow [19]. Let

where α = 0.75, b2 = 1, b3 = 2. Table 1 gives the results for this example with starting points a1 = (0.5, 0.5, 0.5, 2)T, a2 = (2,−2,−2,−2)T, a3 = (0,−2,−2, 0)T.

Example 5.4. HS 34 Problem. This test problem is from the book of Hock and Schittkowski [21]; its Karush-Kuhn-Tucker (KKT) optimality conditions lead to a complementarity problem of dimension 8. Let

Table 1 gives the results with starting points a1 = (−1,−1,−1, 1, 1, 1, 1, 1)T, a2 = (0, 0, 0, 1, 1, 1, 1, 1)T, a3 = (1, 1, 1,−10,−10,−10,−10,−10)T.

In Table 1, IT denotes the number of iterations; NF denotes the number of function evaluations; CPU denotes the CPU time in seconds for solving the underlying problem; and − denotes that the algorithm fails to find a solution within 1000 iterations.

Table 1 shows that not all of the best numerical results occur in the case θ = 0 (in which the smoothing function is the one proposed by Huang et al. in [11]) or θ = 1 (in which the smoothing function is the one proposed by Huang et al. in [10]). This demonstrates that the new smoothing function introduced in this paper is worth investigating. Figures 1 and 2 below plot the convergence of the merit function h(zk) against the iteration number. As the two figures show, for θ = 0.5 and θ = 0.75, h(zk) decreases faster than for θ = 0 and θ = 1. This again demonstrates that the new smoothing function is worth investigating, and the numerical experiments confirm the feasibility and efficiency of the new algorithm. The proposed class of complementarity functions has the practical advantage that the parameter θ can be adjusted to solve a given NCP more efficiently.

FIGURE 1. Convergence behavior of Example 5.3 with the initial point a1

FIGURE 2. Convergence behavior of Example 5.3 with the initial point a3

References

  1. P.T. Harker, J.-S. Pang, Finite dimensional variational inequality and nonlinear complementarity problem: A survey of theory, algorithms and applications, Math. Program. 48 (1990) 161-220. https://doi.org/10.1007/BF01582255
  2. M.C. Ferris, J.-S. Pang, Engineering and economic applications of complementarity problems, SIAM Review 39 (1997) 669-713. https://doi.org/10.1137/S0036144595285963
  3. F.A. Potra, Y. Ye, Interior-point methods for nonlinear complementarity problems, J. Optim. Theory Appl. 88 (3) (1996) 617-642. https://doi.org/10.1007/BF02192201
  4. S. Wright, D. Ralph, A superlinear infeasible-interior-point algorithm for monotone complementarity problems, Math. Oper. Res. 21 (4) (1996) 815-838. https://doi.org/10.1287/moor.21.4.815
  5. K. Hotta, A. Yoshise, Global convergence of a class of non-interior point algorithms using Chen-Harker-Kanzow-Smale functions for nonlinear complementarity problems, Math. Program. 86 (1) (1999) 105-133. https://doi.org/10.1007/s101070050082
  6. L. Qi, D. Sun, G. Zhou, A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities, Math. Program. 87 (1) (2000) 1-35. https://doi.org/10.1007/s101079900127
  7. L. Fang, A new one-step smoothing Newton method for nonlinear complementarity problem with P0-function, Appl. Math. Comput. 216 (2010) 1087-1095. https://doi.org/10.1016/j.amc.2010.02.001
  8. C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM J. Matrix Anal. Appl. 17 (1996) 851-868. https://doi.org/10.1137/S0895479894273134
  9. C. Chen, O.L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Comput. Optim. Appl. 5 (1996) 97-138. https://doi.org/10.1007/BF00249052
  10. Z.H. Huang, J. Han, Z. Chen, Predictor-corrector smoothing Newton method, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function, J. Optim. Theory Appl. 117 (1) (2003) 39-68. https://doi.org/10.1023/A:1023648305969
  11. Z.H. Huang, J. Han, D.C. Xu, L.P. Zhang, The non-interior continuation methods for solving the P0 function nonlinear complementarity problem, Science in China 44 (9) (2001) 1107-1114. https://doi.org/10.1007/BF02877427
  12. X. Liu, W. Wu, Coerciveness of some merit functions over symmetric cones, J. Ind. Manag. Optim. 5 (2009) 603-613. https://doi.org/10.3934/jimo.2009.5.603
  13. M. Kojima, N. Megiddo, T. Noma, Homotopy continuation methods for nonlinear complementarity problems, Math. Oper. Res. 16 (1991) 754-774. https://doi.org/10.1287/moor.16.4.754
  14. B. Chen, P.T. Harker, A non-interior continuation algorithm for linear complementarity problems, SIAM J. Matrix Anal. Appl. 14 (1993) 1168-1190. https://doi.org/10.1137/0614081
  15. C. Kanzow, Global convergence properties of some iterative methods for linear complementarity problems, SIAM J. Optim. 6 (1) (1996) 326-341. https://doi.org/10.1137/0806019
  16. Z.H. Huang, Y. Zhang, W. Wu, A smoothing-type algorithm for solving system of inequalities, J. Comput. Appl. Math. 220 (1) (2008) 355-363. https://doi.org/10.1016/j.cam.2007.08.024
  17. J.-S. Pang, S.A. Gabriel, NE/SQP: A robust algorithm for the nonlinear complementarity problem, Math. Program. 60 (1993) 295-337. https://doi.org/10.1007/BF01580617
  18. O.L. Mangasarian, M.V. Solodov, Nonlinear complementarity as unconstrained and constrained minimization, Math. Program. 62 (1993) 277-297. https://doi.org/10.1007/BF01585171
  19. C. Kanzow, Some equation-based methods for the nonlinear complementarity problem, Optim. Meth. Soft. 3 (1994) 327-340. https://doi.org/10.1080/10556789408805573
  20. H. Jiang, L. Qi, A new nonsmooth equations approach to nonlinear complementarity problems, SIAM J. Control Optim. 35 (1997) 178-193. https://doi.org/10.1137/S0363012994276494
  21. W. Hock, K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems 187, Springer-Verlag, Berlin, 1981.
  22. S.P. Dirkse, M.C. Ferris, MCPLIB: A collection of nonlinear mixed complementarity problems, Optim. Meth. Soft. 5 (1995) 319-345. https://doi.org/10.1080/10556789508805619