ALGORITHM FOR WEBER PROBLEM WITH A METRIC BASED ON THE INITIAL FARE

  • Kazakovtsev, Lev A. (Department of Information Technologies, Siberian State Aerospace University) ;
  • Stanimirovic, Predrag S. (Faculty of Sciences and Mathematics, University of Nis)
  • Received : 2014.02.14
  • Accepted : 2014.07.24
  • Published : 2015.01.30

Abstract

We introduce a non-Euclidean metric for transportation systems with a defined minimum transportation cost (initial fare) and investigate the continuous single-facility Weber location problem based on this metric. The proposed algorithm uses the result of the Weiszfeld procedure for the Weber problem with the Euclidean metric as the initial point for a special local search procedure. The result of the local search is then checked for optimality by calculating directional derivatives of modified objective functions in a finite number of directions. If the local search result is not optimal, the algorithm solves constrained Weber problems with the Euclidean metric to obtain the final result. An illustrative example is presented.


1. Introduction

The Weber problem [29] is a continuous optimization problem of finding a point X∗ ∈ ℝn satisfying

X∗ = arg min_{X∈ℝn} Σ_{i=1}^{N} wi ∥X − Ai∥.    (1)

Here, Ai ∈ ℝn, i = 1, . . . , N are known demand points, wi ∈ ℝ, wi ≥ 0 are weighting coefficients, and ∥ · ∥ : ℝn → ℝ is a vector norm [20].

The main applications of the Weber problem include warehouse location [10,7], positioning of computer and communication networks [14] and locating base stations of wireless networks. Solving a Weber problem (searching for a centroid) is also a step of many clustering algorithms [25,19,9].

The problem (1) was originally formulated by Weber [29] with the Euclidean norm (∥ · ∥ = l2(·)) and has been generalized to lp norms and other metrics [29,6].

A detailed explanation of various norms and metrics is presented in [21,18,8]. The lp norms play an important role in the theory and practice of location problems. The most common distance metrics in continuous space are the Euclidean (l2), rectangular (l1) and Chebyshev (l∞) metrics, but other metrics are also important in specific cases [1,8,23]. Various distance metrics can be used for solving clustering problems [26,30]. In [16], the authors consider norm approximation and an approximate solution of Weber problems with an arbitrary metric using random search [15]. Problems with barriers are described in [18]. In special cases, such problems can be transformed into discrete problems [22].

In the case of public transportation systems, the price usually depends on the distance. However, some minimum price is usually defined. For example, the initial fare of a taxi cab may include some distance, usually 1-5 km. Having rescaled the distances so that the distance included in the initial price is equal to 1, we can define the price function dP as

dP(X, Y) = max{1, ∥X − Y∥},    (2)

where ∥ · ∥ is a vector norm. We use the term "taxi metric" to denote the metric defined by (2). In this paper, we consider ∥ · ∥ to be the Euclidean norm in ℝ2 only (∥ · ∥2).
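For illustration, the following minimal Python sketch evaluates the price function (2) and the resulting objective of problem (1); the function names d_p and taxi_objective are ours and are not part of the paper.

```python
import numpy as np

def d_p(x, y):
    """Price function (2): the Euclidean distance, but never below the initial fare 1."""
    return max(1.0, float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))))

def taxi_objective(x, demand_points, weights):
    """Objective of problem (1) with the 'taxi metric' (2): sum_i w_i * d_p(x, A_i)."""
    return sum(w * d_p(x, a) for a, w in zip(demand_points, weights))
```

For example, taxi_objective((0.5, 0.5), [(0, 0), (3, 0)], [1.0, 2.0]) charges only the initial fare for the first (close) demand point and the full Euclidean distance for the second one.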

In clustering problems, such a metric can be used to describe the distance between the samples and the core of a cluster [27] with a fixed core diameter. A metric which neglects distances smaller than some pre-defined observational error ε is equivalent to our "taxi" metric:

dε(X, Y) = max{ε, ∥X − Y∥}.

The Radar Screen metric [3] is a very similar metric, with the distance function defined by

dR(X, Y) = min{1, ∥X − Y∥}.    (3)

The Weber problem with the Radar Screen metric is a special case of the problem considered in [11]. Unlike (3), our distance function (2) is convex, and our approach differs significantly from that proposed in [11].

The paper is organized as follows. In Section 2, we restate some basic definitions, describe existing algorithms and investigate some features of the objective function. In Section 3, we present the algorithm for the Weber problem with the new metric. In Section 4, we give a simple example and the results of the algorithm.

 

2. Preliminaries

The single-facility Weber problem (1) in ℝ2 (planar problem) with the "taxi metric" (2) can be formulated as

min_{X∈ℝ2} f(X).    (4)

Here,

f(X) = Σ_{i=1}^{N} wi max{1, ∥X − Ai∥2}.

The problem originally proposed by Weber is based on the Euclidean metric:

fE(X) = Σ_{i=1}^{N} wi ∥X − Ai∥2 → min_{X∈ℝ2}.    (5)

The most common algorithm for the Weber problem with metrics induced by the lp norms is the Weiszfeld procedure [28,10].
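For the reader's convenience, here is a hedged Python sketch of one possible implementation of the Weiszfeld iteration for the unconstrained Euclidean problem (5); the starting point, the stopping tolerance and the treatment of an iterate that hits a demand point are our own choices rather than prescriptions of [28,10].

```python
import numpy as np

def weiszfeld(demand_points, weights, tol=1e-9, max_iter=10000):
    """Weiszfeld procedure for problem (5): min sum_i w_i * ||X - A_i||_2."""
    A = np.asarray(demand_points, dtype=float)
    w = np.asarray(weights, dtype=float)
    x = (w[:, None] * A).sum(axis=0) / w.sum()     # start from the weighted centroid
    for _ in range(max_iter):
        d = np.linalg.norm(A - x, axis=1)
        if np.any(d < 1e-12):                      # the iterate hit a demand point
            return A[np.argmin(d)]
        coef = w / d
        x_new = (coef[:, None] * A).sum(axis=0) / coef.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```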

For simplicity, we assume that

Lemma 2.1. If

SE = {X ∈ ℝ2 : ∥X − Ai∥2 ≤ 1, i = 1, . . . , N} ≠ ∅,

then any point X ∈ SE is a solution of problem (4). Moreover, any X′ ∉ SE is not a minimizer of (4).

Proof. Let us assume that X∗ ∈ SE. Then for an arbitrary X ∈ ℝ2 we have

f(X∗) = Σ_{i=1}^{N} wi max{1, ∥X∗ − Ai∥2} = Σ_{i=1}^{N} wi ≤ Σ_{i=1}^{N} wi max{1, ∥X − Ai∥2} = f(X),

which implies f(X∗) = min{f(X), X ∈ ℝ2}. □

Lemma 2.1 describes the case when a non-iterative solution is possible. Several more cases where a non-iterative approach is applicable are described in [2].

Let us denote the set

R0 = {X ∈ ℝ2 : ∥X − Ai∥2 ≥ 1, i = 1, . . . , N}.    (7)

Lemma 2.2. If X∗ is the solution of the problem (4) and X∗ ∈ R0 then X∗ is the solution of Weber problem (5) and vice versa.

Proof. Under the assumption X∗ ∈ R0 we have f (X∗) = fE (X∗). □

Lemma 2.3. The objective function of problem (4) is convex.

Proof. The sum of convex functions fi (X) = max{1, ||X − Ai||2}, i = 1, . . . , N is convex. □

For an arbitrary point X ∈ ℝ2, let us denote the sets of demand point indices

S>(X) = {i ∈ {1, . . . , N} : ∥X − Ai∥2 > 1},    S≤(X) = {i ∈ {1, . . . , N} : ∥X − Ai∥2 ≤ 1},

and a set of points (a region)

R(X) = {Y ∈ ℝ2 : ∥Y − Ai∥2 ≤ 1 ∀i ∈ S≤(X), ∥Y − Ai∥2 ≥ 1 ∀i ∈ S>(X)}.
The regions R(X) of the points X ∈ ℝ2 are bounded by arcs [11,12] of radius 1 with centres at the points Ai, i = 1, . . . , N (see Fig. 1). In [4], the authors prove that the number of regions is quadratically bounded by the number of demand points.

FIGURE 1. Illustration of the problem (4), its regions and unions U1, U2.

An algorithm for solving constrained Weber problems with regions bounded by arcs is proposed in [17].

Let our problem have M different regions Rk, k = 1, . . . , M.

Note that R0 was introduced above in (7). The border points of each region belong to at least one other region.

An algorithm for enumerating all the disc intersection points is given in [5]. Let our problem have I disc intersection points D1, . . . , DI.
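The exact enumeration procedure is the one from [5]; as a rough illustration only, the following Python sketch lists the pairwise intersection points of the unit circles centred at the demand points, which is sufficient for small instances. The names are ours.

```python
import numpy as np
from itertools import combinations

def unit_circle_intersections(demand_points):
    """Pairwise intersection points of the unit circles centred at the demand points."""
    A = np.asarray(demand_points, dtype=float)
    points = []
    for i, j in combinations(range(len(A)), 2):
        v = A[j] - A[i]
        d = float(np.linalg.norm(v))
        if d == 0.0 or d > 2.0:              # coincident centres or circles too far apart
            continue
        mid = (A[i] + A[j]) / 2.0
        h = np.sqrt(max(1.0 - (d / 2.0) ** 2, 0.0))
        u = np.array([-v[1], v[0]]) / d      # unit vector perpendicular to A_i A_j
        points.extend([mid + h * u, mid - h * u])
    return points
```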

Lemma 2.4. If X∗ is a solution of the problem (4), X∗ ∈ Rk, k ∈ {0, . . . , M} and S>(X∗) ≠ ∅, then X∗ is the solution of the following constrained Weber problem with the Euclidean metric:

min Σ_{i∈S>(X∗)} wi ∥X − Ai∥2    (12)

subject to

X ∈ Rk.    (13)

Proof. The value of the objective function for X ∈ Rk is

f(X) = Σ_{i∉S>(X∗)} wi + Σ_{i∈S>(X∗)} wi ∥X − Ai∥2.    (14)

Since the first summand in (14) is constant, we obtain the equivalent problem (12) with the constraint (13). □

The solution of a constrained optimization problem with a convex objective function coincides with the solution of the corresponding unconstrained problem or lies on the border of the forbidden region [12] (moreover, the solution of the constrained problem is said to be visible from the solution of the unconstrained problem).

Corollary 2.5. If X∗ is a solution of problem (4) then it is the solution of the unconstrained problem (12) or ∃i ∈ {1, . . . , N} which satisfies ∥Ai − X∗∥2 = 1.

Let us denote by Uq, q = 1, . . . , NU the unions of the regions Rk, k = 1, . . . , M surrounded by the region R0.

Denote also the borders of those unions by Bq, q = 1, . . . , NU, and the set of points of all the borders as

B = ∪_{q=1}^{NU} Bq.

Lemma 2.6. If X∗∗ is the unique solution of the problem (5), X∗ is a solution of the problem (4), and X∗∗ belongs to the union Uq′ for some q′ ∈ {1, . . . , NU}, then X∗ ∈ Uq′.

Proof. Let us consider a constrained problem with the Euclidean metric

Let X′ be a solution of this problem.

As the objective function of this problem is convex, two cases are possible.

Case 1. X′ = X∗∗.

Case 2. The solution X′ of this constrained problem lies on the borderline of the feasible set, i.e. X′ ∈ B. Moreover, X′ is visible from X∗∗.

In Case 1, in accordance with Lemma 2.2, X′ ≠ X∗∗ unless X′ ∈ B. Thus, if X∗∗ ∈ Uq′ then X′ ∈ Uq′. From X′ ∈ R0, we have X′ ∈ Bq′. Let us denote the set (see Fig. 1)

From f (X) = fE (X) ∀X ∈ R0, X′ is the solution of the constrained problem

From the convexity of the objective function f(·), it immediately follows that S is convex. Let us denote by X′S the set of optimizers of the constrained problem (15)–(16). From

we have

Thus,

Therefore, the set S does not contain any borders Bq of the unions Uq (q = 1, . . . , NU) except the points from X′S, and ∃X′ ∈ X′S : X′ ∈ Uq′. Since X′ ∈ Uq′ and (S ∩ R0) ⊂ B, we have

Since X∗ is the optimizer of (4), f(X∗) ≤ f(X′). Thus, X∗ ∈ S and X∗ ∈ Uq′. □

Lemma 2.7. Let X′S be the set of solutions of the constrained problem (12)–(13). Let Gk be the set of border points of the region Rk. Then the set Gk ∩ X′S is finite unless S>(X∗) = ∅ for all X∗ ∈ X′S.

Proof. The case ∥Ai − X∥2 ≤ 1 ∀i = 1, . . . , N, X ∈ X′S corresponds to S>(X∗) = ∅ for all X∗ ∈ X′S and is excluded by the assumption. Let X∗ ∈ X′S be an arbitrary point. If S>(X∗) ≠ ∅ then, being a Weber problem with the Euclidean metric, problem (12) has a strictly convex objective function unless all its demand points Ai, i ∈ S>(X∗) are collinear. In that case, the problem has exactly one solution.

If the demand points are collinear, the solution coincides with one of the demand points Ai′, i′ ∈ S>(X∗), or all points of some line segment Ai′Ai′′, i′ ∈ S>(X∗), i′′ ∈ S>(X∗), are solutions. The border Gk is formed by arcs; thus, it has a finite number of intersections with the line segment. □

The algorithms proposed in the next section are based on the lemmas above. Algorithms for both the constrained and the unconstrained Weber problem with the Euclidean metric are well investigated, see [12,13,29]. We use these algorithms as subroutines in our algorithm.

 

3. Algorithm description

Our algorithm starts the local search procedure from the initial point calculated by the Weiszfeld procedure as the solution of the unconstrained Weber problem with the Euclidean metric (5). If this solution X∗ satisfies X∗ ∈ R0 (i.e. ∥X∗ − Ai∥2 ≥ 1, i = 1, . . . , N) then, in accordance with Lemma 2.2, X∗ is the solution of problem (4). Otherwise, the algorithm continues the search from the point X∗.

Having solved problem (12) with the constraint X ∈ R(X∗), we obtain a new solution X∗ or a set of solutions. If the unique solution, or every point of the solution set, belongs to the border of the union of regions Uq′ then, in accordance with Lemma 2.6, we have the optimal solution.

If the unique solution X∗ is not a border point of the region R(X∗), or the solution set does not contain any border points of R(X∗), then, due to the convexity of the objective function, the solution is final and the algorithm stops.

If the solution X∗ lies on the borderline of the region R(X∗), or the solution set contains border points, then we must solve the constrained Weber problem for the regions containing X∗. If there are better solutions, we continue with the best of them. Otherwise, we stop.

Since the objective function is convex, we can use any local search procedure. The following heuristic provides a significant speed-up. First, the value of the objective function is calculated at the circle intersection points of the region R(X∗) (i.e. its angular points), where X∗ is the solution of the unconstrained Weber problem (5). The intersection X∗∗ with the best result is chosen as the initial point for the further search. The local search procedure then continues with the neighbor intersection points (i.e. the intersection points which are the ends of the arcs starting from X∗∗). When the local search stops at some intersection X∗∗, our algorithm checks whether this point is a local minimum in each of its neighbor regions. If it is not the local (and global) minimum, the search continues by solving the constrained Weber problem as described above.
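The following Python sketch illustrates this heuristic under simplifying assumptions of ours: the objective of (4) is evaluated at the unit-circle intersection points, and the search repeatedly moves to a better intersection lying on a common circle with the current one (a coarser neighborhood than the arc-adjacency described above). It covers only the local search stage, not the optimality checks of Algorithm 3.3.

```python
import numpy as np
from itertools import combinations

def intersection_local_search(demand_points, weights, tol=1e-12):
    """Descent over unit-circle intersection points for the objective of problem (4)."""
    A = np.asarray(demand_points, dtype=float)
    w = np.asarray(weights, dtype=float)

    def f(x):                                    # objective of problem (4)
        return float(np.sum(w * np.maximum(1.0, np.linalg.norm(A - x, axis=1))))

    pts = []                                     # pairwise intersections of the unit circles
    for i, j in combinations(range(len(A)), 2):
        v = A[j] - A[i]
        d = float(np.linalg.norm(v))
        if 0.0 < d <= 2.0:
            mid = (A[i] + A[j]) / 2.0
            h = np.sqrt(max(1.0 - (d / 2.0) ** 2, 0.0))
            u = np.array([-v[1], v[0]]) / d
            pts.extend([mid + h * u, mid - h * u])
    if not pts:
        return None, None                        # all unit discs are pairwise disjoint

    # two intersection points are treated as neighbors if they lie on a common circle
    on = [frozenset(k for k in range(len(A))
                    if abs(np.linalg.norm(p - A[k]) - 1.0) < 1e-9) for p in pts]
    best = min(range(len(pts)), key=lambda k: f(pts[k]))
    improved = True
    while improved:
        improved = False
        for k in range(len(pts)):
            if on[k] & on[best] and f(pts[k]) < f(pts[best]) - tol:
                best, improved = k, True
    return pts[best], f(pts[best])
```

On the data of the numerical example in Section 4, this sketch returns the intersection of the unit circles around A2 and A3 with objective value ≈ 26.2096, matching the result obtained there.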

If a temporary solution X∗∗ is an intersection point, the algorithm checks whether this point is a local minimum in each of the regions it joins. An angular point of a convex region is the minimum point of the function in this region if all possible directional derivatives at it are non-negative. However, our regions can be non-convex.

Let us denote by P(Rk) the convex polygon whose vertices coincide with the angular points of the region Rk. Then the region

ϱ(Rk) = P(Rk) ∩ Rk

is convex.

Let us take two rays l1 and l2 with initial point X∗∗ and an angle ϕ ∈ (0, π) between them, such that all points of the region ϱ(Rk) are situated between l1 and l2 and both l1 and l2 are tangent to the borderline of the region ϱ(Rk). All possible directions from X∗∗ into the region ϱ(Rk) lie between l1 and l2.

From the convexity of the region ϱ(Rk) and of the objective function of (12), if the directional derivatives of the objective of (12) at X∗∗ in the directions l1 and l2 are both non-negative, then X∗∗ is the minimum point of (12) in ϱ(Rk). Since ϱ(Rk) ⊂ Rk, this point X∗∗ is then the minimum point in Rk.
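To make the test concrete, here is a sketch with our own naming: s_gt stands for the index set S_>(·) of the region Rk under consideration, and the directional derivative of the objective of (12) at a point x along a direction u (for x ≠ Ai, i ∈ s_gt) is Σ_{i∈s_gt} wi (x − Ai)·u / ∥x − Ai∥2.

```python
import numpy as np

def directional_derivative(x, u, demand_points, weights, s_gt):
    """Directional derivative along u of sum_{i in s_gt} w_i * ||x - A_i||_2 (objective of (12))."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    A = np.asarray(demand_points, dtype=float)
    return sum(weights[i] * float(np.dot(x - A[i], u)) / float(np.linalg.norm(x - A[i]))
               for i in s_gt)

def is_region_minimum(x, l1, l2, demand_points, weights, s_gt):
    """x minimizes (12) over the convex part of R_k if both extreme directions l1, l2
    give non-negative directional derivatives."""
    return (directional_derivative(x, l1, demand_points, weights, s_gt) >= 0.0 and
            directional_derivative(x, l2, demand_points, weights, s_gt) >= 0.0)
```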

If X∗∗ is a point of local minimum for all the regions it joins, then X∗∗ is the solution of problem (4) and solving the constrained Weber problem (12), (13) is not needed. Experiments on randomly generated problems and on rescaled problems from [24] show that solving the constrained Weber problem is not needed in most cases.

In our algorithm, the regions Rk are enumerated as follows. The index k is an array of N digits, one digit for each of the demand points. The ith digit is set to 1 if ∥X − Ai∥ < 1 for all internal points X of the region Rk; if ∥X − Ai∥ > 1, the ith digit is set to 0. For example, region R6 (see Fig. 1) in the new notation is R11100. With this method of enumeration, it is not necessary to enumerate all the regions at the first steps of the algorithm.

An analogous method of enumeration is used for the intersection points Dj. The index j contains N digits. If ∥Dj − Ai∥ > 1, the ith digit is set to 0; if ∥Dj − Ai∥ < 1, the ith digit is set to 1; if ∥Dj − Ai∥ = 1, the ith digit is set to 2. For example, the angular point D1 (see Fig. 1) of regions R1, R2, R5 and R6 is denoted in the proposed notation as D12200 (it is an internal point of the circle with center at A1, a border point of the circles with centers at A2 and A3, and it is situated outside the circles with centers at A4 and A5).

With this notation, it is easy to determine the region or regions containing an arbitrary point X∗. We use the following algorithm (here, k is an array of digits).

Note that the region R0, see (7), in this notation is R000...0.

Algorithm 3.1. Determine the region index

The algorithm above returns a set (an array) Rarray of region indexes k such that X ∈ Rk. Steps 1 to 1.4 form an array of digits describing the distance from the given point to each of the demand points: digits 0, 1 and 2 mean distance greater than 1, less than 1 and equal to 1, respectively. In Steps 3 to 3.5, the array Rarray of indexes is formed. Initially, it contains one element coinciding with the array k formed in Steps 1 to 1.4. For each demand point at distance exactly 1 from the given point (digit 2 in array k), the array Rarray is duplicated: digit 2 is replaced by digit 0 in the first copy of the initial array Rarray and by digit 1 in its second copy. Thus, the array Rarray contains 2^e1 indexes, where e1 is the number of demand points at distance exactly 1 from the point X.
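A Python sketch in the spirit of Algorithm 3.1 as just described; here the region indexes are represented as digit strings, and the tolerance eps as well as all names are ours.

```python
import numpy as np

def region_indexes(x, demand_points, eps=1e-9):
    """Indexes k of all regions R_k containing the point x (cf. Algorithm 3.1)."""
    x = np.asarray(x, dtype=float)
    k = []
    for a in np.asarray(demand_points, dtype=float):
        d = float(np.linalg.norm(x - a))
        k.append('2' if abs(d - 1.0) < eps else ('1' if d < 1.0 else '0'))

    r_array = ['']
    for digit in k:                       # every digit 2 is expanded into both 0 and 1
        if digit == '2':
            r_array = [idx + '0' for idx in r_array] + [idx + '1' for idx in r_array]
        else:
            r_array = [idx + digit for idx in r_array]
    return r_array                        # 2**e1 indexes, e1 = number of digits 2
```

For the intersection point D0221100 of the numerical example in Section 4, this sketch returns the four region indexes 0001100, 0011100, 0101100 and 0111100.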

For any intersection point Dj, the index j is known and we can start this algorithm for such point from Step 2 assuming k = j.

To determine the set of the neighbor intersection points for a given intersection point Dj, we use the following algorithm.

Algorithm 3.2. Form a list of neighbor angular points

In Steps 2 to 2.4, all known intersection points are scanned. For the indexes of the intersection points, the notation from Algorithm 3.1 is used: digit 2 in the kth position of the index means that the distance from the intersection point to the kth demand point is equal to 1. In Steps 2.2 to 2.2.5, the search for digits 2 in the indexes is performed.
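A sketch following this description, with our own names; for simplicity it returns every intersection point that shares a generating circle (a digit 2 in the same position) with Dj, whereas the original algorithm walks only to the adjacent intersections along the arcs.

```python
def neighbor_intersections(j, all_indexes):
    """Indexes of the intersection points neighboring D_j (cf. Algorithm 3.2).

    The index notation of Algorithm 3.1 is used: digit '2' in position i means
    that the point lies on the unit circle centred at A_i."""
    circles_j = {i for i, digit in enumerate(j) if digit == '2'}
    neighbors = []
    for idx in all_indexes:
        if idx == j:
            continue
        if any(idx[i] == '2' for i in circles_j):   # at least one common circle
            neighbors.append(idx)
    return neighbors
```

For instance, with the indexes of Section 4, neighbor_intersections('1212100', ...) includes '1210200', '1112200', '2012100' and '2211100', in agreement with the first iteration of Step 9.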

Our algorithm for solving problem (4) is organized as follows.

Algorithm 3.3. Solving the location problem (4)

 

4. Numerical example

Let us solve the problem shown in Fig. 2

FIGURE 2. Example problem scheme and its objective function graph.

Here, N = 7, A1 = (0, 0.25), A2 = (0.25, 0), A3 = (0.25, 0.75), A4 = (1.35, 0.25), A5 = (1, 0.77), A6 = (3.45, 0.2), A7 = (3.55, 0.4), w1 = w6 = 1, w2 = 9, w3 = 4, w4 = 3, w5 = w7 = 2.

The result of the Weiszfeld procedure at Step 1 of Algorithm 3.3 is

At Step 2, this point is not in R0000000 since ∥A1 − X∗∥ < 1. Thus, the algorithm goes on.

At Step 3, from ∥A1 − X∗∥ < 1, ∥A2 − X∗∥ < 1, ∥A3 − X∗∥ < 1, ∥A5 − X∗∥ < 1, we have R(X∗) = R1110100.

At Step 4, our algorithm forms a set Dall of all 22 intersection points.

At Step 5, the set of angular points (intersections) of region R1110100 is

After Step 6 and three iterations in Steps 7 to 7.2, we have

At Step 8, a boolean variable bfound is set to 1 and our algorithm starts the iterations (Step 9).

At Step 9.1, bfound is reset to 0. Algorithm 3.2 returns the list of the neighbor intersections for D1212100 :

At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for these intersections except D1210200 and D1112200, and after these iterations we have

At Step 9.3, the algorithm adds Dneighbour to the set Dchecked and we have Dchecked = {D1212100, D1210200, D1112200, D2012100, D2211100}.

Step 9 is then repeated.

At the second iteration of Step 9.1, bfound is reset to 0. Algorithm 3.2 returns the list of the neighbor intersections for D2211100:

At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for these intersections except D2012100 and D1212100, and after two iterations we have

At Step 9.3, Dchecked = {D1212100, D1210200, D1112200, D2012100, D2211100,D0221100, D2121100}, Step 9 is then repeated.

At the third iteration of Step 9.1, bfound is reset to 0. Algorithm 3.2 returns the list of the neighbor intersections for D0221100:

At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for the intersections D0201200 and D0022100, and after these two iterations we have no improvement of X∗∗ = D0221100 and F∗∗ = 26.209559.

Thus, bfound = 0 and the iterations of Step 9 finish.

At Step 10, the list of the regions joined by X∗∗ = D0221100 is {R0001100, R0011100, R0111100, R0101100}.

The algorithm sets Ltosearch = ∅.

In region R0001100, the direction l1 is a ray on the line connecting X∗∗ and D0201200 (d7 in Fig. 3), l2 is a ray on the line connecting X∗∗ and D0022100 (d2 in Fig. 3).

FIGURE 3. Neighbor regions and directions for the directional derivative calculations (Steps 12 to 12.3 of Algorithm 3.3).

In region R0011100, the direction l1 is a ray on the line tangent to the circle with center in A3 (d1 in Fig. 3), l2 is a ray on the line connecting X∗∗ and D2211100 (d4 in Fig. 3).

In region R0111100, the direction l1 is a ray on the line tangent to the circle with center in A2 (d3 in Fig. 3), l2 is a ray on the line tangent to the circle with center in A3 (d6 in Fig. 3).

In region R0101100, the direction l1 is a ray on the line tangent to the circle with center in A2 (d8 in Fig. 3), l2 is a ray on the line connecting X∗∗ and D2121100 (d5 in Fig. 3).

In Steps 12 to 12.2, our algorithm calculates all directional derivatives

All values are positive (Step 12.2).

Thus, Ltosearch = ∅ and iterations in Steps 13 to 13.2 are not performed.

The resulting point is X∗∗ = (1.177025, 0.375000); the value of the objective function is F∗∗ = 26.209559.
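As a quick independent check (a sketch with our own variable names), evaluating the objective of (4) at the intersection D0221100 of the unit circles centred at A2 and A3 reproduces the reported value:

```python
import numpy as np

A = np.array([(0.0, 0.25), (0.25, 0.0), (0.25, 0.75), (1.35, 0.25),
              (1.0, 0.77), (3.45, 0.2), (3.55, 0.4)])
w = np.array([1.0, 9.0, 4.0, 3.0, 2.0, 1.0, 2.0])

# X** = D0221100: intersection of the unit circles centred at A2 and A3
x_star = np.array([0.25 + np.sqrt(1.0 - 0.375 ** 2), 0.375])    # ~ (1.177025, 0.375000)
f_star = float(np.sum(w * np.maximum(1.0, np.linalg.norm(A - x_star, axis=1))))
print(x_star, round(f_star, 6))                                 # ~ 26.209559, the reported F**
```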

 

5. Conclusion

The location problems for systems with a minimum transportation cost (initial fare) can be formulated as problems with the special metric

dP(X, Y) = max{1, ∥X − Y∥}.

The proposed algorithm is able to solve such problems. The implemented local search heuristic reduces the computational complexity to that of solving a few constrained and one unconstrained Weber problem with the Euclidean metric. However, the computational complexity of the proposed algorithm is a subject of further research.

References

  1. R.G. Brown, Advanced Mathematics: Precalculus with discrete Mathematics and data analysis, (A.M.Gleason, ed.), Evanston, Illinois: McDougal Littell, 1997.
  2. R. Chen, Noniterative Solution of Some Fermat-Weber Location Problems, Advances in Operations Research (2011), Article ID 379505, Published online. 10 pages doi:10.1155/2011/379505, http://downloads.hindawi.com/journals/aor/2011/379505.pdf.
  3. M.M. Deza and E. Deza, Encyclopedia of Distances, Springer-Verlag, Berlin, Heidelberg, 2009.
  4. Z. Drezner, A. Mehrez and G.O. Wesolowsky, The facility location problem with limited distances, Transportation Science, 25 (1991), 183-187. https://doi.org/10.1287/trsc.25.3.183
  5. Z. Drezner and G.O. Wesolowsky, A maximin location problem with maximum distance constraints, AIIE Transact., 12 (1980), 249-252. https://doi.org/10.1080/05695558008974513
  6. Z. Drezner, K. Klamroth, A. Schobel and G.O. Wesolowsky, The Weber problem, in Z. Drezner and H.W. Hamacher (editors), Facility Location: Applications and Theory, Springer-Verlag, 2002, 1-36.
  7. Z. Drezner, C. Scott and J.S. Song, The central warehouse location problem revisited, IMA Journal of Management Mathematics, 14 (2003), 321-336. https://doi.org/10.1093/imaman/14.4.321
  8. Z. Drezner and M. Hamacher, Facility location: applications and theory, Springer-Verlag, Berlin, Heidelberg, 2004.
  9. S. Gordon, H. Greenspan, J. Goldberger, Applying the Information Bottleneck Principle to Unsupervised Clustering of Discrete and Continuous Image Representations, Computer Vision. Proceedings. Ninth IEEE International Conference on, Vol.1 (2003), 370-377.
  10. R.Z. Farahani and M. Hekmatfar editors, Facility Location Concepts, Models, Algorithms and Case Studies, Springer-Verlag Berlin Heidelberg, 2009.
  11. I.F. Fernandes, D. Aloise, D.J. Aloise, P. Hansen and L. Liberti, On the Weber facility location problem with limited distances and side constraints, Optimization Letters, issue of 22 August 2012, 1-18, published online, doi:10.1007/s11590-012-0538-9.
  12. P. Hansen, D. Peeters and J.F. Thisse, Constrained location and the Weber-Rawls problem, North-Holland Mathematics Studies, 59 (1981) 147-166. https://doi.org/10.1016/S0304-0208(08)73463-7
  13. H. Idrissi, O. Lefebvre and C. Michelot, A primal-dual algorithm for a constrained Fermat-Weber problem involving mixed norms, Revue francaise d'automatique, d'informatique et de recherche operationnelle. Recherche Operationnelle, 22 (1988), 313-330.
  14. L.A. Kazakovtsev, Wireless coverage optimization based on data provided by built-in measurement tools, WASJ, 22, Special Volume on Techniques and Technologies (2013), 8-15.
  15. A.N. Antamoshkin and L.A. Kazakovtsev, Random search algorithm for the p-median problem, Informatica (Ljubljana), 37 (2013), 267-278.
  16. L.A. Kazakovtsev, Adaptation of the probability changing method for Weber problem with an arbitrary metric, Facta Universitatis, (Nis) Ser. Math. Inform., 27 (2012), 289-254.
  17. L.A. Kazakovtsev, Algorithm for Constrained Weber Problem with feasible region bounded by arcs, Facta Universitatis, (Nis) Ser. Math. Inform., 28 (2013), 271-284.
  18. K. Klamroth, Single-facility location problems with barriers, Springer-Verlag, Berlin, Heidelberg, 2002.
  19. K. Liao, D. Guo, A Clustering-Based Approach to the Capacitated Facility Location Problem, Transactions in GIS, 12 (2008), 323-339. https://doi.org/10.1111/j.1467-9671.2008.01105.x
  20. H. Minkowski, Gesammelte Abhandlungen, zweiter Band, Chelsea Publishing, 2001.
  21. J. Perreur and J.F. Thisse, Central metric and optimal location, J. Regional Science, 14 (1974), 411-421. https://doi.org/10.1111/j.1467-9787.1974.tb00463.x
  22. I.P. Stanimirovic, Successive computation of some efficient locations of the Weber problem with barriers, J. Appl. Math. Comput., 42 (2013), 193-211. DOI 10.1007/s12190-012-0637-x
  23. P.S. Stanimirovic, M. Ciric, L.A. Kazakovtsev and I.A. Osinuga, Single-facility Weber location problem based on the Lift metric, Facta Universitatis, (Nis) Ser. Math. Inform., 27 (2012), 31-46.
  24. E. Taillard, Location problems, web resource available at http://mistic.heig-vd.ch/taillard/problemes.dir/location.html
  25. E.D. Taillard, Heuristic Methods for Large Centroid Clustering Problems, Journal of Heuristics, 9 (2003), 51-73. https://doi.org/10.1023/A:1021841728075
  26. A. Vimal, S.R. Valluri, K. Karlapalem, An Experiment with Distance Measures for Clustering, International Conference on Management of Data COMAD 2008, Mumbai, India, 241-244 (2008)
  27. K. Voevodski, M.F. Balcan, H. Roglin, S.H. Teng, Y. Xia, Min-sum Clustering of Protein Sequences with Limited Distance Information, Proceedings of the First International Conference on Similarity-based Pattern Recognition (SIMBAD'11), Venice, Italy (2011), 192-206.
  28. E. Weiszfeld, Sur le point pour lequel la somme des distances de n points donnés est minimum, Tohoku Mathematical Journal, 43 (1937), 335-386.
  29. G. Wesolowsky, The Weber problem: History and perspectives, Location Science, 1 (1993), 5-23.
  30. Y. Ying, P. Li, Distance Metric Learning with Eigenvalue Optimization, Journal of Machine Learning Research, 13 (2012), 1-26.
