### An algorithm for the minimum of the concave knapsack problem

#### Abstract

This paper addresses the problem of resource allocation among activities where the cost of each is described by a concave function. There is a single linear constraint (limited resource) and each activity has an upper and lower bound (maximum and minimum resource allocations). The objective is to minimise the sum of the functions.
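In symbols (our notation, not taken from the paper itself), the problem described above can be stated as:

```latex
\min_{x} \; \sum_{i=1}^{n} f_i(x_i)
\quad \text{subject to} \quad
\sum_{i=1}^{n} x_i = R,
\qquad
l_i \le x_i \le u_i, \quad i = 1, \dots, n,
```

where $R$ is the available resource, $l_i$ and $u_i$ are the lower and upper bounds on activity $i$, and each $f_i$ is concave and nondecreasing.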

The problem with convex functions is well studied and, since a local minimum is also global, it was tamed early by Luss and Gupta [Luss H & Gupta SK, 1975, Allocation of effort resources among competing activities, Operations Research, 23, pp. 360–366], and by Bitran and Hax [Bitran GR & Hax AC, 1979, On the solution of convex knapsack problems with bounded variables, pp. 357–367 in Prekopa A (Ed.), Survey of Mathematical Programming]. In contrast, the minimisation of a sum of concave functions has received less attention, and where it has been studied the emphasis has often been on a nonseparable quadratic objective function and on the complexity of finding a true local minimiser (e.g. Moré and Vavasis [Moré JJ & Vavasis SS, 1991, On the solution of concave knapsack problems, Mathematical Programming, 49, pp. 397–411]).

We are concerned with the computational problem of finding the global optimum for a (separable) sum of nondecreasing general concave functions, and the approach is via the Kuhn-Tucker necessary conditions. These are improved by using the result that a minimiser must be an extreme point, which means that all but (at most) one variable are at an upper or lower bound. (Moré and Vavasis base their CKP algorithm on this property.) The improved necessary conditions form the basis of the method of greatest differences (GVA), our algorithm to improve a feasible solution. A greedy heuristic to produce a first feasible solution is also proposed.
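The extreme-point property can be illustrated by brute force on small instances: at an optimum at most one variable lies strictly between its bounds, so one can enumerate which variable (if any) is free and which bound each of the others sits at; the equality constraint then fixes the free variable. The sketch below is an illustration of that property only (it is not the GVA itself, and the function and variable names are ours):

```python
from itertools import product

def concave_knapsack_brute(funcs, lower, upper, R, tol=1e-9):
    """Globally minimise sum f_i(x_i) subject to sum x_i = R and
    l_i <= x_i <= u_i, exploiting the extreme-point property:
    all but at most one variable sit at a bound at the optimum.
    Intended only for small n (the enumeration is exponential)."""
    n = len(funcs)
    best_val, best_x = float("inf"), None
    # 'free' is the index of the single interior variable, or None.
    for free in list(range(n)) + [None]:
        fixed = [i for i in range(n) if i != free]
        # Each fixed variable sits at its lower or upper bound.
        for choice in product(*[(lower[i], upper[i]) for i in fixed]):
            x = [0.0] * n
            for i, v in zip(fixed, choice):
                x[i] = v
            slack = R - sum(x[i] for i in fixed)
            if free is None:
                if abs(slack) > tol:       # constraint not met exactly
                    continue
            else:
                if not (lower[free] - tol <= slack <= upper[free] + tol):
                    continue               # free variable out of bounds
                x[free] = slack
            val = sum(f(xi) for f, xi in zip(funcs, x))
            if val < best_val:
                best_val, best_x = val, x
    return best_val, best_x
```

For example, with $f_1(t) = \sqrt{t}$, $f_2(t) = \sqrt{2t}$, bounds $[0, 4]$ and $R = 4$, the minimum is attained at the extreme point $x = (4, 0)$: concavity pushes the resource onto as few activities as possible.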

Using four groups of 10 instances, each drawn from the four classes of concave functions of Luss and Gupta, one thousand different runs for incremental values of the resource were made for both the CKP and our GVA. While the CKP often found a globally suboptimal answer for functions which intersect, the GVA found the correct answer in all 16 000 runs. This is no guarantee that a method based on necessary conditions will always find the global minimum, but the GVA is numerically promising and it masters a class of problems that the CKP does not. The greedy first feasible solution was found to be optimal more frequently (64% to 49%, and 73% to 40%) than that proposed by Moré and Vavasis.

The GVA does not depend on the kind of function, requiring only that the functions be nondecreasing and concave. With minor alterations it can be used for the maximisation of a sum of nondecreasing convex functions. It is much faster than dynamic programming on problems with up to 10 functions and should remain superior on larger problems.


#### Full Text:

DOI: https://doi.org/10.5784/21-1-16


ISSN 2224-0004 (online); ISSN 0259-191X (print)
