
A MODIFIED PROXIMAL POINT ALGORITHM FOR SOLVING A CLASS OF VARIATIONAL INCLUSIONS IN BANACH SPACES

  • LIU, YING (College of Mathematics and Information Science, Hebei University)
  • Received : 2015.01.03
  • Accepted : 2015.02.03
  • Published : 2015.05.30

Abstract

In this paper, we propose a modified proximal point algorithm which consists of a resolvent operator technique step followed by a generalized projection onto a moving half-space for approximating a solution of a variational inclusion involving a maximal monotone mapping and a monotone, bounded and continuous operator in Banach spaces. The weak convergence of the iterative sequence generated by the algorithm is also proved.

1. Introduction

Variational inclusions, as a generalization of variational inequalities, are among the most interesting and important mathematical problems and have been widely studied in recent years, since they have wide applications in mechanics, physics, optimization and control, nonlinear programming, economics, transportation equilibrium, and the engineering sciences. It is well known that the general monotonicity and accretivity of mappings play an important role in the theory and algorithms of variational inclusions. Various kinds of iterative algorithms for solving variational inclusions have been developed by many authors; for details, we refer to [3-23]. In this paper, we mainly consider the following nonlinear variational inclusion problem: find u ∈ E such that

0 ∈ f(u) + M(u),    (1.1)

where E is a Banach space, f : E → E∗ is a single-valued mapping and M : E → 2E∗ is a multi-valued mapping. The set of solutions of Problem (1.1) is denoted by V I(E, f, M), i.e., V I(E, f, M) = {x ∈ E : 0 ∈ f(x) + M(x)}. Throughout this paper, we always assume that V I(E, f, M) ≠ ∅.
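To place Problem (1.1) in a familiar context, we mention two standard special cases for illustration. If M = ∂φ is the subdifferential of a proper convex lower semicontinuous function φ : E → (−∞,+∞], which is a maximal monotone mapping, then (1.1) becomes the mixed variational inequality of finding u ∈ E such that

⟨v − u, f(u)⟩ + φ(v) − φ(u) ≥ 0, for all v ∈ E.

If, moreover, φ is the indicator function of a nonempty, closed and convex set K ⊂ E, this reduces further to the classical variational inequality of finding u ∈ K such that ⟨v − u, f(u)⟩ ≥ 0 for all v ∈ K.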

If f ≡ 0, then (1.1) reduces to

0 ∈ M(u),

which is known as the zero point problem of a multi-valued operator and has been studied by many authors when M is monotone or accretive; see [9-11,17-19,22,24,25] and the references therein.
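For orientation, the classical proximal point algorithm of Rockafellar [17] approximates a zero of a maximal monotone operator M on a Hilbert space by the resolvent iteration xk+1 = (I + λkM)−1(xk). The following sketch runs this baseline iteration in ℝn for an affine monotone operator; the operator, step size and data are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Classical proximal point iteration x_{k+1} = (I + lam*M)^{-1}(x_k) for the
# zero problem 0 in M(x), sketched for the affine monotone operator
# M(x) = A x + b with A positive semidefinite (an assumed example).
def proximal_point(A, b, x0, lam=1.0, iters=50):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Resolvent step: x_next solves x_next + lam*M(x_next) = x,
        # i.e. (I + lam*A) x_next = x - lam*b.
        x = np.linalg.solve(np.eye(n) + lam * A, x - lam * b)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, -1.0])
print(proximal_point(A, b, x0=[5.0, 5.0]))  # tends to the zero of A x + b, i.e. (-1.0, 1.0)
```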

If M is accretive, then Problem (1.1) has also been studied by many authors in Banach spaces by using the resolvent operator technique; see [5,12] and the references therein.

However, when M is monotone, Problem (1.1) in Banach spaces has been studied far less than when M is accretive. In [13], Lou et al. constructed an iterative algorithm for approximating a solution of a class of generalized variational inclusions involving monotone mappings in Banach spaces. However, strong accretivity and Lipschitz continuity are assumed on the perturbed operator f, which are very strong conditions. Therefore, under weaker assumptions on the perturbed operator f, the development of an efficient and implementable algorithm for solving Problem (1.1) and its generalizations in Banach spaces when M is monotone is interesting and important.

When E is a Hilbert space and M is a maximal monotone, H-monotone or A−monotone mapping, Problem (1.1) has been studied in [15,23,26]. In particular, Zhang [26] constructed the following iterative algorithm:

Algorithm 1.1

Step0. (Initiation) Select an initial point z0 ∈ ℍ (a Hilbert space) and set k = 0.

Step1. (Resolvent step) Find xk ∈ ℍ such that

where and a positive sequence {λk} satisfies

Step2. (Projection step) Set K = {z ∈ ℍ : ⟨A(zk) − A(xk), z − A(xk)⟩ ≤ 0}. If A(zk) = A(xk), then stop; otherwise, take zk+1 such that A(zk+1) = PK(A(zk)).

Step3. Let k = k + 1 and return to Step1.

Moreover, Zhang [26] proved that the iterative sequence {xk} generated by Algorithm 1.1 converges weakly to a solution of (1.1) when M : ℍ → 2ℍ is an A−monotone mapping and f : ℍ → ℍ is only monotone and continuous.

We should note that:

(1) Algorithm 1.1 requires only that the perturbed operator f be monotone and continuous, which is weaker than the strong monotonicity and Lipschitz continuity assumed in some related works, see [13,15,23] and the references therein;

(2) the next iterate A(zk+1) is the metric projection of the current iterate A(zk) onto the half-space K, which is not expensive at all from a numerical point of view, as the sketch below illustrates.
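The metric projection onto a half-space has a closed form, which is why the projection step above is cheap. The following sketch (the function name and the test data are illustrative assumptions) computes PK(w) for K = {z ∈ ℝn : ⟨a, z − c⟩ ≤ 0}; in Algorithm 1.1 one would take a = A(zk) − A(xk), c = A(xk) and w = A(zk).

```python
import numpy as np

def project_halfspace(w, a, c):
    """Metric projection of w onto the half-space {z : <a, z - c> <= 0} in R^n.

    If w already lies in the half-space it is returned unchanged; otherwise it
    is moved along a onto the boundary hyperplane <a, z - c> = 0.
    """
    violation = np.dot(a, w - c)
    if violation <= 0:
        return w
    return w - (violation / np.dot(a, a)) * a

# Example: project (2, 2) onto the half-plane {z : z_1 <= 1}.
print(project_halfspace(np.array([2.0, 2.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0])))
# expected: [1. 2.]
```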

But we should also note that Algorithm 1.1 is confined to Hilbert spaces. Since the metric projection step strictly depends on the inner product structure of Hilbert spaces, it can no longer be applied to variational inclusions in Banach spaces.

The above fact motivates us to develop alternative methods for approximating solutions of variational inclusions in Banach spaces. Therefore, the purpose of this paper is to modify Algorithm 1.1 so that it applies in Banach spaces for approximating a solution of Problem (1.1) when M is maximal monotone and the perturbed operator f is only monotone and continuous. This paper is organized as follows. In Section 2, we recall some basic concepts and properties. In Section 3, we consider Problem (1.1) involving a maximal monotone mapping and a monotone, bounded and continuous operator in Banach spaces and prove Theorem 3.1, which extends the zero point problem of a monotone operator studied in [6,9,10,19,25] to Problem (1.1) and also extends Problem (1.1) considered in [15,23,24,26] from Hilbert spaces to Banach spaces. Furthermore, Theorem 3.1 is also a development of the results of [5,11,12] in different directions. In Section 4, we consider the zero point problem of a maximal monotone mapping and construct Algorithm 4.1. Moreover, we give a simple example to compare Algorithm 4.1 with the algorithm of [19].

 

2. Preliminaries

Throughout this paper, let E be a Banach space with norm ║·║ and let E∗ be the dual space of E. ⟨·, ·⟩ denotes the duality pairing of E and E∗. When {xn} is a sequence in E, we denote strong convergence of {xn} to x ∈ E by xn → x and weak convergence by xn ⇀ x. Let 2E∗ denote the family of all nonempty subsets of E∗. Let U = {x ∈ E : ║x║ = 1} be the unit sphere of E. A Banach space E is said to be strictly convex if ║(x + y)/2║ < 1 for all x, y ∈ U with x ≠ y. It is said to be uniformly convex if ║xn − yn║ → 0 for any two sequences {xn}, {yn} in U with ║(xn + yn)/2║ → 1. E is said to be smooth provided the limit lim_{t→0} (║x + ty║ − ║x║)/t exists for each x, y ∈ U. It is said to be uniformly smooth if the limit is attained uniformly for x, y ∈ U.

Let J : E → 2E∗ be the normalized duality mapping defined by

J(x) = {f ∈ E∗ : ⟨x, f⟩ = ║x║² = ║f║²}, x ∈ E.    (2.1)

The following properties of J can be found in [2,6] :

(i) If E is smooth, then J is single-valued.

(ii) If E is strictly convex, then J is strictly monotone and one to one.

(iii) If E is reflexive, then J is surjective.

(iv) If E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.

The duality mapping J from a smooth Banach space E into E∗ is said to be weakly sequentially continuous [4,6] if xn ⇀ x implies Jxn ⇀ Jx.

Definition 2.1 ([7,20]). Let f : E → E∗ be a single-valued mapping. f is said to be

(i) monotone if ⟨x − y, f(x) − f(y)⟩ ≥ 0 for all x, y ∈ E;

(ii) strictly monotone if ⟨x − y, f(x) − f(y)⟩ ≥ 0 for all x, y ∈ E,

and equality holds if and only if x = y.

(iii) γ−strongly monotone if there exists a constant γ > 0 such that ⟨x − y, f(x) − f(y)⟩ ≥ γ║x − y║² for all x, y ∈ E;

(iv) δ−Lipschitz continuous if there exists a constant δ > 0 such that ║f(x) − f(y)║ ≤ δ║x − y║ for all x, y ∈ E;

(v) α−inverse-strongly-monotone if there exists a constant α > 0 such that ⟨x − y, f(x) − f(y)⟩ ≥ α║f(x) − f(y)║² for all x, y ∈ E.

It is obvious that an α−inverse-strongly-monotone mapping is monotone and (1/α)−Lipschitz continuous. Indeed, by the Cauchy–Schwarz inequality, α║f(x) − f(y)║² ≤ ⟨x − y, f(x) − f(y)⟩ ≤ ║x − y║║f(x) − f(y)║, which gives ║f(x) − f(y)║ ≤ (1/α)║x − y║.

Definition 2.2 ([3,9,13,20]). Let A, H : E → E∗ be two nonlinear operators. A multi-valued operator M : E → 2E∗ with domain D(M) = {z ∈ E : Mz ≠ ∅} and range R(M) = ∪{Mz : z ∈ D(M)} is said to be

(i) monotone if ⟨x1 − x2, u1 − u2⟩ ≥ 0 for each xi ∈ D(M) and ui ∈ M(xi), i = 1, 2.

(ii) α−strongly monotone, if there exists a constant α > 0 such that ⟨x1 − x2, u1 − u2⟩ ≥ α║x1 − x2║² for each xi ∈ D(M) and ui ∈ M(xi), i = 1, 2.

(iii) m−relaxed monotone, if there exists a constant m > 0 such that ⟨x1 − x2, u1 − u2⟩ ≥ −m║x1 − x2║² for each xi ∈ D(M) and ui ∈ M(xi), i = 1, 2.

(iv)maximal monotone, if M is monotone and its graph G(M) = {(x, u) : u ∈ Mx} is not properly contained in the graph of any other monotone operator. It is known that a monotone mapping M is maximal if and only if for (x, u) ∈ E × E∗, ⟨x − y, u − v⟩ ≥ 0 for every (y, v) ∈ G(M) implies u ∈ Mx.

(v) general H−monotone, if M is monotone and (H + λM)E = E∗, for all λ > 0.

(vi) general A−monotone, if M is m−relaxed monotone and (A+λM)E = E∗, for all λ > 0.

Remark 2.1. We have from [16] that if E is a reflexive Banach space, then a monotone mapping M : E → 2E∗ is maximal if and only if R(J + λM) = E∗, ∀λ > 0.

Remark 2.2. We note that general A-monotonicity generalizes general H−monotonicity. On the other hand, if E is a Hilbert space, then the general A-monotone operator reduces to the A-monotone operator studied in [26] and the general H-monotone operator reduces to the H-monotone operator studied in [10,23]. For examples of these operators and their relations, we refer the reader to [3,10,23] and the references therein.

Let E be a smooth Banach space. Define

ϕ(x, y) = ║x║² − 2⟨x, Jy⟩ + ║y║², for all x, y ∈ E.

Clearly, from the definition of ϕ we have that

(A1) (║x║ − ║y║)² ≤ ϕ(y, x) ≤ (║x║ + ║y║)²,

(A2) ϕ(x, y) = ϕ(x, z) + ϕ(z, y) + 2⟨x − z, Jz − Jy⟩,

(A3) ϕ(x, y) = ⟨x, Jx − Jy⟩ + ⟨y − x, Jy⟩ ≤ ║x║║Jx − Jy║ + ║y − x║║y║.
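For completeness, (A3) follows directly from the definition of ϕ and the fact that ⟨x, Jx⟩ = ║x║² = ║Jx║²:

⟨x, Jx − Jy⟩ + ⟨y − x, Jy⟩ = ║x║² − 2⟨x, Jy⟩ + ║y║² = ϕ(x, y),

while the stated upper bound follows from ⟨x, Jx − Jy⟩ ≤ ║x║║Jx − Jy║ and ⟨y − x, Jy⟩ ≤ ║y − x║║y║.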

Remark 2.3. We have from Remark 2.1 in [14] that if E is a strictly convex and smooth Banach space, then for x, y ∈ E, ϕ(y, x) = 0 if and only if x = y.

Let E be a reflexive, strictly convex and smooth Banach space, and let K denote a nonempty, closed and convex subset of E. By Alber [2], for each x ∈ E, there exists a unique element x0 ∈ K (denoted by ΠK(x)) such that

ϕ(x0, x) = min{ϕ(y, x) : y ∈ K}.

The mapping ΠK : E → K defined by ΠK(x) = x0 is called the generalized projection operator from E onto K. Moreover, x0 is called the generalized projection of x. See [1] for some properties of ΠK. If E is a Hilbert space, then ΠK is coincident with the metric projection PK from E onto K.

Lemma 2.3 ([2]). Let E be a reflexive, strictly convex and smooth Banach space, let C be a nonempty, closed and convex subset of E and let x ∈ E. Then

ϕ(y, ΠC(x)) + ϕ(ΠC(x), x) ≤ ϕ(y, x)

for all y ∈ C.

Lemma 2.4 ([2]). Let C be a nonempty, closed and convex subset of a smooth Banach space E, and let x ∈ E. Then, x0 = ΠC(x) if and only if

⟨x0 − y, Jx − Jx0⟩ ≥ 0, for all y ∈ C.

Lemma 2.5 ([8]). Let E be a uniformly convex and smooth Banach space. Let {yn}, {zn} be two sequences of E. If ϕ(yn, zn) → 0, and either {yn}, or {zn} is bounded, then yn − zn → 0.

An operator A of C into E∗ is said to be hemi-continuous if for all x, y ∈ C, the mapping f of [0, 1] into E∗ defined by f(t) = A(tx + (1 − t)y) is continuous with respect to the weak∗ topology of E∗.

Lemma 2.6 ([16]). Let E be a reflexive Banach space. If T : E → 2E∗ is a maximal monotone mapping and P : E → E∗ is a hemi-continuous bounded monotone operator with D(P) = E, then the sum S = T + P is a maximal monotone mapping.

Lemma 2.7 ([16]). Let E be a reflexive Banach space and λ be a positive number. If T : E → 2E∗ is a maximal monotone mapping, then R(J + λT) = E∗ and (J + λT)−1 : E∗ → E is a demi-continuous single-valued maximal monotone mapping.

Lemma 2.8 ([7]). Let S be a nonempty, closed and convex subset of a uniformly convex, smooth Banach space E. Let {xn} be a sequence in E. Suppose that, for all u ∈ S,

ϕ(u, xn+1) ≤ ϕ(u, xn)

for every n = 1, 2, .... Then {ΠSxn} is a Cauchy sequence.

 

3. Variational inclusion

In this section, we construct the following iterative algorithm for solving the variational inclusion (1.1) involving a maximal monotone mapping M and a continuous, bounded and monotone operator f.

Algorithm 3.1

Step0. (Initiation) Arbitrarily select initial z0 ∈ E and set k = 0.

Step1. (Resolvent step) Find xk ∈ E such that

J(zk) ∈ J(xk) + λk(f(xk) + M(xk)),    (3.1)

where a positive sequence {λk} satisfies

Step2. (Projection step) Set Ck = {z ∈ E : ⟨z − xk, J(zk) − J(xk)⟩ ≤ 0}. If zk = xk, then stop; otherwise, take zk+1 = ΠCk(zk).

Step3. Let k = k + 1 and return to Step1.

Remark 3.1. (1) We show the existence of xk satisfying (3.1). In fact, (3.1) is equivalent to the following problem: find xk ∈ E such that

J(zk) ∈ (J + λkf + λkM)(xk).

Since M : E → 2E∗ is maximal monotone and f : E → E∗ is a continuous, bounded and monotone operator with D(f) = E, we have by Lemma 2.6 that M + f is maximal monotone. By Lemma 2.7, for any λk > 0, the mapping J + λkf + λkM is surjective. Hence, there is an xk ∈ E such that (3.1) holds, i.e., Step1 of Algorithm 3.1 is well-defined.

(2) If xk = zk, then by (3.4) we have xk ∈ V I(E, f, M). Thus, the iterative sequence {xk} is finite and its last term is a solution of Problem (1.1). If zk ≠ xk, then zk ∉ Ck. Therefore, Algorithm 3.1 is well-defined.

(3) In Algorithm 3.1, the resolvent step (3.1) is used to construct a half-space Ck, and the next iterate zk+1 is the generalized projection of the current iterate zk onto this half-space, which is not expensive at all from a numerical point of view; a numerical sketch in the Euclidean setting is given below.
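The algorithm is stated in a general Banach space, where J and ΠCk have no closed form. Purely as an illustration, the following sketch runs the three steps in ℝn with the Euclidean structure, so that J is the identity and the generalized projection onto Ck is the metric projection; the linear choices of f and M, the constant λk and the data are assumptions made only for this sketch. Note that in the Euclidean setting the projection of zk onto Ck is exactly xk, so the iteration collapses to a proximal-point-type scheme; the half-space projection differs from xk only in a non-Hilbert geometry.

```python
import numpy as np

# Illustrative Euclidean sketch of Algorithm 3.1: J = identity, generalized
# projection = metric projection.  The choices
#     f(x) = B x + c   (monotone, continuous, bounded on bounded sets)
#     M(x) = A x       (maximal monotone, single-valued here for simplicity)
# and the constant step size lam are assumptions for this sketch only.
def algorithm_3_1_sketch(A, B, c, z0, lam=1.0, iters=50):
    n = len(z0)
    z = np.asarray(z0, dtype=float)
    x = z.copy()
    for _ in range(iters):
        # Resolvent step (3.1) with J = I: find x with z in x + lam*(f(x) + M(x)),
        # i.e. solve (I + lam*(A + B)) x = z - lam*c.
        x = np.linalg.solve(np.eye(n) + lam * (A + B), z - lam * c)
        d = z - x
        if np.allclose(d, 0):
            return x                      # z_k = x_k: stop, x_k solves (1.1)
        # Projection step: metric projection of z onto
        # C_k = {v : <v - x, z - x> <= 0}; since <z - x, z - x> > 0 this equals x.
        z = z - (np.dot(z - x, d) / np.dot(d, d)) * d
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0]])   # M(x) = A x
B = np.array([[1.0, 0.0], [0.0, 1.0]])   # f(x) = B x + c
c = np.array([2.0, -3.0])
print(algorithm_3_1_sketch(A, B, c, z0=[10.0, 10.0]))  # tends to the solution of (A+B)x + c = 0, i.e. (-1.0, 1.0)
```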

Now we show the convergence of the iterative sequence generated by Algorithm 3.1 in the Banach space E.

Theorem 3.1. Let E be a uniformly convex, uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous, and let M : E → 2E∗ be a maximal monotone mapping. Let f : E → E∗ be a continuous, bounded and monotone operator with D(f) = E. Then, the iterative sequence {xk} generated by Algorithm 3.1 converges weakly to an element x̄ ∈ V I(E, f, M). Further, x̄ = limk→∞ ΠV I(E,f,M)zk.

Proof. We split the proof into five steps.

Step1. Show that {zk} is bounded.

Suppose x∗ ∈ V I(E, f, M). Then we have −f(x∗) ∈ M(x∗). From (3.4), it follows that

By the monotonicity of M, we deduce that

It follows from the monotonicity of f and (3.5) that

This implies that

which leads to

Since zk+1 = ΠCk (zk), by Lemma 2.3, we deduce that

Thus,

which yields that the sequence {ϕ(x∗, zk)} is convergent. From (A1), we know that {zk} is bounded.

Step2. Show that {xk} is also bounded and {xk} and {zk} have the same weak accumulation points.

It follows from (3.6) that

Thus we know that

By Lemma 2.5, we have

From zk+1 = ΠCk (zk) ∈ Ck, we have that

By (A1), (A2) and (3.10),(3.8), we have

This implies that

By (A1), we have

Since {zk} is bounded, we have from (3.12) that {xk} is also bounded. Moreover {xk} and {zk} have the same weak accumulation points.

Step3. Show that each weak accumulation point of the sequence {xk} is a solution of Problem (1.1).

Since J is uniformly norm-to-norm continuous on bounded sets, it follows from (3.12) that

Since {xk} is bounded, let x̄ be a weak accumulation point of {xk}. Hence, we can extract a subsequence of {xk} that weakly converges to x̄. Without loss of generality, let us suppose that xk ⇀ x̄ as k → ∞. Then from (3.12), we have zk ⇀ x̄ as k → ∞. For any fixed v ∈ E, take an arbitrary u ∈ f(v) + M(v). Then, there exists a point w ∈ M(v) such that w + f(v) = u. Therefore, it follows from the monotonicity of f and M that

Adding these inequalities, we have

Note w + f(v) = u, we have

which implies that

Taking limits in (3.14), by (3.13) and the boundedness of {xk} and , we have

Since M + f is maximal monotone, by the arbitrariness of (v, u) ∈ G(M + f), we conclude that (x̄, 0) ∈ G(M + f) and hence x̄ is a solution of Problem (1.1), i.e., x̄ ∈ V I(E, f, M).

Step4. Show that V I(E, f, M) is closed and convex.

Take {yn} ⊂ V I(E, f, M) with yn → ȳ as n → ∞. Since yn ∈ V I(E, f, M), we have −f(yn) ∈ M(yn). For any fixed v ∈ E, take w ∈ M(v). It follows from the monotonicity of M that

Taking limits in (3.15), by the continuity of f, we have

By the arbitrariness of (v, w) ∈ G(M) and the maximality of M, we conclude that −f(ȳ) ∈ M(ȳ) and hence ȳ ∈ V I(E, f, M), i.e., V I(E, f, M) is closed.

Taking v1, v2 ∈ V I(E, f, M), we have 0 ∈ f(vi) + M(vi), i = 1, 2. For any (v, u) ∈ G(M + f) and t ∈ (0, 1), we have

and

Adding (3.16) and (3.17), we have

By the arbitrariness of (v, u) ∈ G(M + f), we conclude that (tv1 + (1 − t)v2, 0) ∈ G(M + f) and hence tv1 + (1 − t)v2 ∈ V I(E, f, M), i.e., V I(E, f, M) is convex.

Step5. Show that xk ⇀ x̄ as k → ∞ and x̄ = limk→∞ ΠV I(E,f,M)zk.

Put uk = ΠV I(E,f,M)zk. It follows from (3.7) and Lemma 2.8 that {uk} is a Cauchy sequence. Since V I(E, f, M) is closed, {uk} converges strongly to some w ∈ V I(E, f, M). By the uniform smoothness of E, we also have Juk → Jw. Finally, we prove x̄ = w. It follows from Lemma 2.4, uk = ΠV I(E,f,M)zk and x̄ ∈ V I(E, f, M) that

So, we have

Taking limits in (3.19), by the weak sequential continuity of J, we obtain ⟨w − x̄, Jx̄ − Jw⟩ ≥ 0 and hence, by the monotonicity of J, ⟨w − x̄, Jx̄ − Jw⟩ = 0. Since E is strictly convex, J is strictly monotone, and we get x̄ = w. Therefore, the sequence {xk} converges weakly to x̄ = limk→∞ ΠV I(E,f,M)zk. □

Remark 3.2. If M = 0, then Theorem 3.1 reduces to the problem 0 ∈ f(x) for a monotone operator f, which has been studied in [6] by the hybrid projection method under the assumption that f : E → E∗ is inverse-strongly-monotone, a condition stronger than the monotonicity and continuity assumed in Theorem 3.1.

Remark 3.3. If f = 0, then Theorem 3.1 reduces to the zero point problem of a maximal monotone mapping; see Section 4 for details.

Remark 3.4. The idea of Theorem 3.1 comes from [26], i.e., Algorithm 1.1 of this paper. It is a development of [26] in terms of the spatial structure, since Banach spaces form a wider class than Hilbert spaces, although Theorem 3.1 does not thoroughly generalize [26], since a maximal monotone mapping in a Hilbert space is the special case of the A-monotone mapping studied in [26] with A = I (the identity mapping).

Remark 3.5. It follows from Lemma 2.3 of [7] that the normalized duality mapping J defined by (2.1) is strongly monotone in a 2-uniformly convex Banach space, and hence a maximal monotone mapping becomes a special case of the A−monotone mapping with m = 0 and A = J, where A has the strong monotonicity and continuity assumed in [3,9,24,26]. Therefore, it is interesting to construct iterative algorithms for approximating solutions of Problem (1.1) when M is an A−monotone mapping, f is a continuous, monotone and bounded operator, and A is a strongly monotone and continuous operator in a 2-uniformly convex and uniformly smooth Banach space. This would thoroughly generalize the results of [26] from Hilbert spaces to Banach spaces.

 

4. The zero point problem

Let M : E → 2E∗ be a maximal monotone mapping. We consider the following problem: find x ∈ E such that

0 ∈ M(x).    (4.1)

This is the zero point problem of a maximal monotone mapping. We denote the set of solutions of problem (4.1) by V I(E,M) and suppose V I(E,M) ≠ ∅.

Theorem 4.1. Let E be a uniformly convex, uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous. Let the sequence {xk} be generated by the following Algorithm.

Algorithm 4.1:

Step0. (Initiation) Arbitrarily select initial z0 ∈ E and set k = 0.

Step1. (Resolvent step) Find xk ∈ E such that

J(zk) ∈ J(xk) + λkM(xk),    (4.2)

where a positive sequence {λk} satisfies

Step2. (Projection step) Set Ck = {z ∈ E : ⟨z − xk, J(zk) − J(xk)⟩ ≤ 0}. If zk = xk, then stop; otherwise, take

zk+1 = ΠCk(zk).    (4.4)

Step3. Let k = k + 1 and return to Step1.

Then, the iterative sequence {xk} generated by Algorithm 4.1 converges weakly to an element x̄ ∈ V I(E,M). Further, x̄ = limk→∞ ΠV I(E,M)zk.

Proof. Taking f ≡ 0 in Theorem 3.1, we can obtain the desired conclusion. □

Remark 4.1. The setting of Problem (4.1) considered in Theorem 4.1 is a Banach space, which is more general than the Hilbert space setting considered in [25].

Remark 4.2. In [19], the authors also constructed an iterative algorithm for approximating a solution of Problem (4.1). More precisely, they constructed the following iterative algorithm:

Algorithm 4.2:

where {αn} ⊂ [0, 1] with αn ≤ 1 − δ for some δ ∈ (0, 1), {rn} ⊂ (0,+∞) with infn≥0 rn > 0, and the error sequence {en} ⊂ E satisfies ║en║ → 0 as n → ∞. They proved that the iterative sequence generated by (4.5) converges strongly to ΠV I(E,M)x0.

Now, we give a simple example to compare Algorithm 4.1 with Algorithm 4.2.

Example 4.1. Let E = ℝ, M : ℝ → ℝ and M(x) = x. It is obvious that M is maximal monotone and V I(E,M) = {0} ≠ ∅.

The numerical experiment result of Algorithm 4.1. Take a positive sequence {λk} (with λ0 = 2) and the initial point z0 = −1 ∈ ℝ. Then {xk} generated by Algorithm 4.1 is the following sequence:

and xk → 0 as k → ∞, where 0 ∈ V I(E,M).

Proof. By (4.2), z0 = (1 + λ0)x0. Since z0 = −1 and λ0 = 2, we have x0 = −1/3. By Algorithm 4.1, we have C0 = {z ∈ ℝ : ⟨z − x0, z0 − x0⟩ ≤ 0} = [x0,+∞). By (4.4), z1 = PC0(−1) = x0 < 0. It follows from z1 = x0 < 0 and (4.2) that x0 = (1 + λ1)x1, i.e.,

Suppose that

By Algorithm 4.1, Ck+1 = {z ∈ ℝ : ⟨z − xk+1, zk+1 − xk+1⟩ ≤ 0}. It follows from hypothesis (4.7) that xk+1 > xk and zk+1 − xk+1 < 0. Therefore, Ck+1 = [xk+1,+∞). Since zk+2 = PCk+1zk+1 = P[xk+1,+∞)xk, we have zk+2 = xk+1 < 0. From (4.2), we have xk+1 = zk+2 = (1 + λk+2)xk+2. Hence, xk+2 = xk+1/(1 + λk+2). By induction, (4.6) holds. □
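The particular choice of {λk} used in the experiment is not reproduced above; purely for illustration, the following sketch runs Algorithm 4.1 for Example 4.1 with the assumed constant value λk = 2 (only λ0 = 2 is visible in the proof), in which case xk = −(1/3)^(k+1) → 0.

```python
# Illustrative run of Algorithm 4.1 for Example 4.1: E = R, M(x) = x, J = identity.
# The constant step size lam = 2 is an assumption (only lam_0 = 2 appears above).
def algorithm_4_1_example(z0=-1.0, lam=2.0, iters=10):
    z = z0
    xs = []
    for _ in range(iters):
        x = z / (1.0 + lam)      # resolvent step (4.2): z_k = x_k + lam*M(x_k) = (1 + lam)*x_k
        xs.append(x)
        if z == x:               # stopping rule z_k = x_k
            break
        z = x                    # projection step: C_k = [x_k, +inf) here, so P_{C_k}(z_k) = x_k
    return xs

print(algorithm_4_1_example())   # [-1/3, -1/9, -1/27, ...] -> 0
```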

Next, we give the numerical experiment results in the following Table 4.1, which shows the iteration process of the sequence {xk} with initial point z0 = −1 and the above choice of {λk}. From the table, we can see that {xk} converges to 0.

Table 4.1

The numerical experiment result of Algorithm 4.2. Take ek = 0 for all k ≥ 0 and an initial point x0 ∈ ℝ. Then {xk} generated by Algorithm 4.2 is the following sequence:

and xk → 0 as k → ∞, where 0 ∈ V I(E,M).

Proof. By Algorithm 4.2, we have , H0 = {v ∈ ℝ, ║v − z0║ ≤ ║v − x0║} = , W0 = {v ∈ ℝ, ⟨v −x0, x0 −x0⟩ ≤ 0} = ℝ. Therefore, Suppose that , and hence,

Hk+1 = {v ∈ ℝ : ║v − zk+1║ ≤ ║v − xk+1║} = ⊂ [xk+1,+∞), Wk+1 = {v ∈ ℝ : ⟨v − xk+1, x0 − xk+1⟩ ≤ 0} = [xk+1,+∞). Therefore, and

Combine (4.9) with (4.10), we obtain that . By induction, (4.8) holds. □

Next, we give the numerical experiment results in the following Table 4.2, which shows the iteration process of the sequence {xk} for the chosen initial point. From the table, we can see that {xk} converges to 0.

Table 4.2

Remark 4.3. Comparing Table 4.1 with Table 4.2, we can intuitively see that the convergence speed of Algorithm 4.1 constructed in this paper is faster than that of Algorithm 4.2 constructed in [19].

 

5. Conclusion

In this paper, we construct Algorithm 3.1 under very mild conditions for approximating a solution of Problem (1.1). The results of this paper develop the corresponding results in some references from the following aspects.

1) From a numerical point of view, the iterative steps of Algorithm 3.1 are fewer than those of [6,15,19], because we need not compute the intersection of two nonempty closed convex sets. Furthermore, the next iterate zk+1 is the generalized projection of the current iterate zk onto the half-space Ck, which is simpler than the generalized projection onto a general nonempty closed convex set.

2) In terms of the spatial structure, the Banach space setting considered in this paper is more general than the Hilbert space setting considered in [15,23,24,26].

3) We obtain that the weak limit of {xk} generated by Algorithm 3.1 is x̄ = limk→∞ ΠV I(E,f,M)zk, which is more concrete than the related conclusions of [19,25,26] and so on.

4) The perturbed operator f is only required to be monotone and continuous, which is weaker than the strong monotonicity and Lipschitz continuity assumed in [13,15,23] and the references therein.

References

  1. Y.I. Alber and S. Guerre-Delabriere, On the projection methods for fixed point problems, Analysis, 21 (2001), 17-39. https://doi.org/10.1524/anly.2001.21.1.17
  2. Y.I. Alber and S. Reich, An iterative method for solving a class of nonlinear operator equations in Banach spaces, Panamer. Math.J., 4 (1994), 39-54.
  3. L.C. Cai, H.Y. Lan and Y. Zou, Perturbed algorithms for solving nonlinear relaxed coco-ercive operator equations with general A-monotone operators in Banach spaces, Commun Nonlinear Sci Numer Simulat, 16 (2011), 3923-3932. https://doi.org/10.1016/j.cnsns.2011.01.024
  4. J. Diestel, Geometry of Banach Spaces, Springer-Verlag, Berlin, 1975.
  5. Y.P. Fang and N.J. Huang, H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces, Appl. Math. Lett., 17 (2004), 647-653. https://doi.org/10.1016/S0893-9659(04)90099-7
  6. H. Iiduka and W. Takahashi, Strong convergence studied by a hybrid type method for mono-tone operators in a Banach space, Nonlinear Analysis, 68 (2008), 3679-3688. https://doi.org/10.1016/j.na.2007.04.010
  7. H. Iiduka and W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, J. Math. Anal. Appl., 339 (2008), 668-679. https://doi.org/10.1016/j.jmaa.2007.07.019
  8. S. Kamimura and W. Takahashi, Strong convergence of a proximal-type algorithm in a Banach space, SIAM J. Optim., 13 (2002), 938-945. https://doi.org/10.1137/S105262340139611X
  9. H. Lan, Convergence analysis of new over-relaxed proximal point algorithm frameworks with errors and applications to general A-monotone nonlinear inclusion forms, Applied Mathematics and Computation, 230 (2014), 154-163. https://doi.org/10.1016/j.amc.2013.12.028
  10. S. Liu, H. He and R. Chen, Approximating solution of 0 ∈ Tx for an H-monotone operator in Hilbert spaces, Acta Mathematica Scientia 33B (2013), 1347-1360. https://doi.org/10.1016/S0252-9602(13)60086-7
  11. S. Liu and H. He, Approximating solution of 0 ∈ Tx for an H-accretive operator in Banach spaces, J. Math. Anal. Appl., 385 (2012), 466-476. https://doi.org/10.1016/j.jmaa.2011.06.074
  12. Y. Liu and Y. Chen, Viscosity iteration algorithms for nonlinear variational inclusions and fixed point problems in Banach spaces, J. Appl. Math. Comput., 45 (2014), 165-181. https://doi.org/10.1007/s12190-013-0717-6
  13. J. Lou, X.F. He and Z. He, Iterative methods for solving a system of variational inclusions involving H-η-monotone operators in Banach spaces, Computers and Mathematics with Applications, 55 (2008), 1832-1841. https://doi.org/10.1016/j.camwa.2007.07.010
  14. S. Matsushita and W. Takahashi, A strong convergence theorem for relatively nonexpansive mappings in a Banach space, J. Approx. Theory, 134 (2005), 257-266. https://doi.org/10.1016/j.jat.2005.02.007
  15. L. Min and S.S. Zhang, A new iterative method for finding common solutions of generalized equilibrium problem, fixed point problem of infinite k-strict pseudo-contractive mappings and quasi-variational inclusion problem, Acta Mathematica Scientia, 32B (2012), 499-519. https://doi.org/10.1016/S0252-9602(12)60033-2
  16. D. Pascali, Nonlinear mappings of monotone type, Sijthoff and Noordhoff International Publishers, Alphen aan den Rijn, 1978.
  17. R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877-898. https://doi.org/10.1137/0314056
  18. N.K. Sahu and R.N. Mohapatra, Approximation solvability of a class of A-monotone implicit variational inclusion problems in semi-inner product spaces, Applied Mathematics and Computation, 236 (2014), 109-117. https://doi.org/10.1016/j.amc.2014.02.095
  19. L. Wei and H.Y. Zhou, Strong convergence of projection scheme for zeros of maximal monotone operators, Nonlinear Analysis 71 (2009), 341-346. https://doi.org/10.1016/j.na.2008.10.081
  20. F.Q. Xia and N.J. Huang, Variational inclusions with a general H-monotone operator in Banach spaces, Computers and Mathematics with Applications, 54 (2007), 24-30. https://doi.org/10.1016/j.camwa.2006.10.028
  21. Z. Yang and B.S. He, A relaxed approximate proximal point algorithm, Ann. Oper. Res., 133 (2005), 119-125. https://doi.org/10.1007/s10479-004-5027-9
  22. L.C. Zeng, S.M. Guu, H.Y. Hu and J.C. Yao, Hybrid shrinking projection method for a generalized equilibrium problem, a maximal monotone operator and a countable family of relatively nonexpansive mappings, Computers and Mathematics with Applications, 61 (2011), 2468-2479. https://doi.org/10.1016/j.camwa.2011.02.028
  23. L.C. Zeng, S.M. Guu and J.C. Yao, Characterization of H-monotone operators with applications to variational inclusions, Computers and Mathematics with Applications, 50 (2005), 329-337. https://doi.org/10.1016/j.camwa.2005.06.001
  24. Q.B. Zhang, An algorithm for solving the general variational inclusion involving A-monotone operators, Computers and Mathematics with Applications, 61 (2011), 1682-1686. https://doi.org/10.1016/j.camwa.2011.01.039
  25. Q.B. Zhang, A modified proximal point algorithm with errors for approximating solution of the general variational inclusion, Operations Research Letters, 40 (2012), 564-567. https://doi.org/10.1016/j.orl.2012.09.008
  26. Q.B. Zhang, A new resolvent algorithm for solving a class of variational inclusions, Mathematical and Computer Modelling, 55 (2012), 1981-1986. https://doi.org/10.1016/j.mcm.2011.11.057