Rivista di matematica per le scienze economiche e sociali - Anno 10°, Fascicolo 1°-2°.
ON SOLVING A LINEAR PROGRAM WITH ONE QUADRATIC CONSTRAINT
L. MARTEIN, Department of Statistics and Applied Mathematics, University of Pisa, Italy.
S. SCHAIBLE, Graduate School of Management, Riverside, California.

Final version received 25-7-1988.

A solution procedure for linear programs with one convex quadratic constraint is suggested. The method finds an optimal solution in finitely many iterations.

Keywords: fractional programming, quadratic programming.
Introduction

There are various decision problems that give rise to a linear program with one additional quadratic constraint. Such a problem arises, for instance, when in a linear model the optimization is further restricted by a nonlinear constraint which can be sufficiently well approximated by a quadratic one. On the other hand, an additional quadratic constraint may also arise in a natural way. This is the case in portfolio theory, for example: an efficient portfolio can be determined by maximizing the expected return subject to a bounded variance of the return [6, 10], and the variance is a quadratic function of the decision variables of the portfolio selection problem. Furthermore, in stochastic linear programming the chance-constraint approach may give rise to a linear program with one additional quadratic constraint [3]. Again the quadratic function originates in the variance of a stochastic linear term; in this case it is the stochastic constraint of the original linear program. Moreover, it was shown in [8, 9], when defining the dual of a quadratic-linear fractional program, that such a dual reduces to a linear program with one quadratic constraint. In fact, it was the analysis of fractional programs and their duals which motivated the research in this paper. We point out that the problem under consideration can also be viewed as the reciprocal of a quadratic program [4], in which the quadratic constraint has traded places with the objective function.
Our solution procedure is parametric in nature: the quadratic constraint $Q(x) \le 0$ is relaxed to $Q(x) \le \xi$, where $\xi > 0$ is a parameter. There are other methods for our problem that are parametric as well; however, the parameter introduced in such algorithms is used differently, see for example [11].
1. Some theoretical results

Consider the problem

$$P: \max c^T x, \quad x \in R$$

$$R \triangleq \{x \in \mathbb{R}^n : Ax \le b,\ Q(x) \le 0,\ x \ge 0\}$$

where $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, $A$ is a real $m \times n$ matrix and $Q(x)$ is a strictly convex quadratic function, that is

$$Q(x) \triangleq \tfrac{1}{2}\, x^T Q x + q^T x + q_0,$$

where $Q$ is a symmetric positive definite matrix of order $n$. The aim of the paper is to establish a sequential method for solving problem $P$ in a finite number of iterations. For this purpose we first state some theoretical results. Consider the related linear program
R":Az<_b,z>O}.
PL:maxcTz, zERL~{z6
We will assume that degeneracy does not occur. We introduce the following notation: $S$ is the set of optimal solutions of problem $P$, $S_L$ the set of optimal solutions of the linear problem $P_L$, and

$$K \triangleq \{x \in \mathbb{R}^n : Q(x) \le 0\}, \qquad K(\xi) \triangleq \{x \in \mathbb{R}^n : Q(x) \le \xi\}, \quad \xi > 0.$$
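For concreteness, the following minimal sketch encodes the data $R$, $K$ and $K(\xi)$ in Python with numpy; the numbers are those of the example in Section 6, with the box constraints written as rows of $A$ (an assumption of this illustration, not part of the general formulation):

```python
import numpy as np

c  = np.array([1.0, 2.0])                                 # objective
A  = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b  = np.array([8., 7., -3., -2.])
Qm = 2.0 * np.eye(2)                                      # the matrix Q (positive definite)
q  = np.zeros(2)
q0 = -25.0                                                # so Q(x) = x1^2 + x2^2 - 25

def Qfun(x):
    """The strictly convex quadratic constraint function Q(x)."""
    return 0.5 * x @ Qm @ x + q @ x + q0

def in_R(x, xi=0.0, tol=1e-12):
    """Membership test for R_L ∩ K(xi); xi = 0 gives the feasible region R."""
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol) and Qfun(x) <= xi + tol)
```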
The following theorem states some important properties, which can be easily proved.

THEOREM 1.1
i) We have either $R = \emptyset$ or $S \ne \emptyset$.
ii) If $S_L \cap K \ne \emptyset$, then $S_L \cap K = S$.
iii) Assume $R \ne \emptyset$. If $S_L = \emptyset$ or $S_L \cap K = \emptyset$, then problem $P$ has a unique optimal solution $\bar{x}$ which is binding at the quadratic constraint, i.e. $Q(\bar{x}) = 0$.

Consider now the parametric problem

$$P(\xi): \max_{x \in R_L \cap K(\xi)} c^T x$$
and let $\hat{x}$ be an optimal solution of $P(\xi)$ with $Q(\hat{x}) = \xi$. The Karush-Kuhn-Tucker conditions applied at $\hat{x}$ establish the existence of multipliers $\lambda \in \mathbb{R}_+^{m+n}$, $\mu \in \mathbb{R}_+$, with $(\lambda, \mu) \ne 0$, such that:

$$c = \sum_{i=1}^{m+n} \lambda_i a^{(i)} + \mu \nabla Q(\hat{x}) \tag{1.2a}$$

$$\lambda_i \left(a^{(i)T} \hat{x} - b_i\right) = 0, \quad i = 1, \dots, m \tag{1.2b}$$

$$\lambda_{m+j}\, \hat{x}_j = 0, \quad j = 1, \dots, n \tag{1.2c}$$

$$\mu \left(Q(\hat{x}) - \xi\right) = 0 \tag{1.2d}$$

$$A\hat{x} \le b, \quad Q(\hat{x}) \le \xi, \quad \hat{x} \ge 0 \tag{1.2e}$$
where $a^{(i)}$, $i = 1, \dots, m$, and $a^{(m+j)}$, $j = 1, \dots, n$, denote the transpose of the $i$-th row of $A$ and the transpose of the $j$-th row of $-I_n$, respectively. The following theorem holds:

THEOREM 1.2. Let $\hat{x}$ be an optimal solution of $P(\xi)$, with $Q(\hat{x}) = \xi$. Then there exist $\bar\lambda \in \mathbb{R}_+^{m+n}$, $\bar\mu > 0$ satisfying (1.2), if condition i) or ii) holds:
i) $\hat{x} \notin S_L$;
ii) $S_L = \{\hat{x}\}$.

Proof. Suppose that $\hat{x} \notin S_L$. If conditions (1.2) were true for $\mu = 0$, then $\hat{x}$ would satisfy the Karush-Kuhn-Tucker conditions for the linear problem $P_L$, so that $\hat{x} \in S_L$; but this contradicts our assumption. On the other hand, if $\hat{x}$ is the unique solution of $P_L$, then $\hat{x}$ is necessarily a vertex of $R_L$ and we have

$$c = \sum_{j \in J} \bar\lambda_j a^{(j)}, \quad \bar\lambda_j > 0 \ \ \forall j \in J \tag{1.3}$$
where $J$ is the set of indices associated with the active constraints at $\hat{x}$. With respect to the binding constraints at $\hat{x}$, (1.2a) reduces to

$$c = \sum_{j \in J} \lambda_j a^{(j)} + \mu \nabla Q(\hat{x}). \tag{1.4}$$
Let us note that (1.4) is true for $\lambda_j = \bar\lambda_j$ and $\mu = 0$; we will show that there exist $\lambda_j \ge 0$, $j \in J$, and $\mu > 0$ satisfying (1.4). Since the vectors $a^{(j)}$, $j \in J$, are a basis of $\mathbb{R}^n$, there exist $\alpha_j$, $j \in J$, such that $\nabla Q(\hat{x}) = \sum_{j \in J} \alpha_j a^{(j)}$. Thus we have

$$c = \sum_{j \in J} (\lambda_j + \mu\,\alpha_j)\, a^{(j)} \tag{1.5a}$$

$$\bar\lambda_j = \lambda_j + \mu\,\alpha_j, \quad j \in J. \tag{1.5b}$$
Set $J_1 = \{j \in J : \alpha_j > 0\}$; the following cases arise:
I) $J_1 = \emptyset$; then (1.4) is satisfied for any $\mu > 0$ and $\lambda_j = \bar\lambda_j - \mu\,\alpha_j$, $j \in J$.
II) $J_1 \ne \emptyset$; set

$$\bar\mu = \min_{j \in J_1} \bar\lambda_j/\alpha_j = \bar\lambda_k/\alpha_k. \tag{1.6}$$

Then (1.4) is satisfied for $\mu = \bar\mu$ and $\lambda_j = \bar\lambda_j - \bar\mu\,\alpha_j$, $j \in J$. This completes the proof.
REMARK 1.1. Let us note that, when the linear problem $P_L$ has alternate optimal solutions, it can happen that the only Lagrange multiplier $\mu$ satisfying (1.2a) is zero, so that Theorem 1.2 does not hold. This special case will be analyzed in Section 5.

REMARK 1.2. Taking into account Theorem 1.2, condition (1.2a) can be equivalently rewritten in the form

$$\lambda_0\, c = \nabla Q(\hat{x}) + \sum_{i=1}^{m+n} \lambda_i a^{(i)}, \quad \lambda_0 > 0. \tag{1.7}$$

Let us note that (1.7) can also be obtained by introducing the reciprocal of problem $P$. We say that the problem

$$P_2(\beta): \quad G(\beta) \triangleq \min\, \{g(x) : f(x) \ge \beta,\ x \in S\}$$

is the reciprocal of the problem

$$P_1(\alpha): \quad F(\alpha) \triangleq \max\, \{f(x) : g(x) \le \alpha,\ x \in S\}.$$
2. Basic theoretical results for the algorithm

Consider the linear problem $P_L$. Suppose that $\bar{x}$ is the unique solution of $P_L$ (the other case will be discussed in Section 5). If $\bar{x}$ satisfies the quadratic constraint, then $\bar{x}$ is optimal for problem $P$. Otherwise we consider the parametric problem $P(\xi)$, with $\xi \in [0, \bar\xi]$, $\bar\xi = Q(\bar{x})$. The idea of the sequential method that we are going to describe is to generate a finite sequence $\hat{x}^{(k)}$, where $\hat{x}^{(k)}$ is an optimal solution of $P(\xi_k)$ (see Fig. 1). An optimal solution corresponding to $\xi_k = 0$ is then an optimal solution of $P$. With this aim in mind, consider problem $P(\xi_k)$, $\xi_k \in [0, \bar\xi]$, and its optimal solution $\hat{x}^{(k)}$.
Let $Bx = \hat{b}$ be the system of linear equations corresponding to the set of constraints which are binding at $\hat{x}^{(k)}$, i.e. $B\hat{x}^{(k)} = \hat{b}$, where $B$ is a $k \times n$ real matrix of rank $k$. We suppose $k < n$; the case $k = n$ will be analyzed in Remark 3.1. From the Karush-Kuhn-Tucker conditions applied to problem $P(\xi_k)$, taking into account Theorem 1.2 and Remark 1.1, we have

$$Qx + B^T\lambda = \lambda_0\, c - q \tag{2.1a}$$

$$Bx = \hat{b} \tag{2.1b}$$

that is

$$H \begin{pmatrix} x \\ \lambda \end{pmatrix} = \lambda_0 \begin{pmatrix} c \\ 0 \end{pmatrix} + \begin{pmatrix} -q \\ \hat{b} \end{pmatrix} \qquad \text{where } H \triangleq \begin{pmatrix} Q & B^T \\ B & 0 \end{pmatrix}. \tag{2.2}$$

Since $B$ has full rank, the matrix $H$ is nonsingular, so that, from (2.2), we obtain

$$\begin{pmatrix} x \\ \lambda \end{pmatrix} = \lambda_0\, u + v, \qquad u = \begin{pmatrix} \hat{u} \\ \tilde{u} \end{pmatrix} = H^{-1}\begin{pmatrix} c \\ 0 \end{pmatrix}, \quad v = \begin{pmatrix} \hat{v} \\ \tilde{v} \end{pmatrix} = H^{-1}\begin{pmatrix} -q \\ \hat{b} \end{pmatrix},$$

that is, componentwise,

$$x = \lambda_0\, \hat{u} + \hat{v} \tag{2.3a}$$

$$\lambda = \lambda_0\, \tilde{u} + \tilde{v}. \tag{2.3b}$$
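For illustration, a minimal numerical sketch of (2.2)-(2.3) in Python with numpy follows; it simply re-solves the bordered system for $u$ and $v$, whereas the procedures of Section 3 update $H^{-1}$ incrementally (the function name and the dense solve are choices of this sketch, not of the paper):

```python
import numpy as np

def uv_from_active(Qm, q, c, B, b_hat):
    """Solve (2.2): u = H^{-1}(c, 0), v = H^{-1}(-q, b_hat), H = [[Q, B^T], [B, 0]]."""
    n, k = Qm.shape[0], B.shape[0]
    H = np.zeros((n + k, n + k))
    H[:n, :n] = Qm
    H[:n, n:] = B.T
    H[n:, :n] = B
    u = np.linalg.solve(H, np.concatenate([c, np.zeros(k)]))
    v = np.linalg.solve(H, np.concatenate([-q, b_hat]))
    # split into the x-part (u_hat, v_hat) and the multiplier part (u_til, v_til), cf. (2.3)
    return (u[:n], u[n:]), (v[:n], v[n:])
```

With the data of Section 6 and the active rows $x_1 \le 8$, $x_2 \le 7$, this returns $\hat{u} = (0,0)$, $\tilde{u} = (1,2)$, $\hat{v} = (8,7)$, $\tilde{v} = (-16,-14)$, in agreement with the first iteration of the example.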
Consider now the quadratic constraint in the parametric form

$$\tfrac{1}{2}\, x^T Q x + q^T x + q_0 = \xi, \quad \xi \in [0, \xi_k] \tag{2.4}$$

and substitute (2.3a) in (2.4); we obtain the following second-order equation:

$$\tfrac{1}{2}\,\alpha\lambda_0^2 + \beta\lambda_0 + \gamma - \xi = 0 \tag{2.5}$$

where $\alpha = \hat{u}^T Q \hat{u}$, $\beta = \hat{u}^T Q \hat{v} + q^T \hat{u}$, $\gamma = \tfrac{1}{2}\,\hat{v}^T Q \hat{v} + q^T \hat{v} + q_0$. Set (1)

$$\lambda_0(\xi) = \frac{-\beta + \sqrt{\beta^2 - 2\alpha\gamma + 2\alpha\xi}}{\alpha}. \tag{2.6}$$
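A direct transcription of (2.5)-(2.6) follows, assuming $\alpha > 0$ so that (2.6) is well defined ($\alpha = 0$ occurs when $\hat{u} = 0$, in which case $x(\lambda_0)$ does not depend on $\lambda_0$ at all):

```python
import numpy as np

def quad_coeffs(Qm, q, q0, u_hat, v_hat):
    """Coefficients alpha, beta, gamma of (2.5) for Q(x(lambda_0)) = xi."""
    alpha = u_hat @ Qm @ u_hat
    beta  = u_hat @ Qm @ v_hat + q @ u_hat
    gamma = 0.5 * v_hat @ Qm @ v_hat + q @ v_hat + q0
    return alpha, beta, gamma

def lam0_of_xi(alpha, beta, gamma, xi):
    """Positive root (2.6) of (1/2)*alpha*l^2 + beta*l + gamma - xi = 0."""
    return (-beta + np.sqrt(beta**2 - 2.0 * alpha * gamma + 2.0 * alpha * xi)) / alpha
```

In particular, the value $\tilde\lambda_0$ of Theorem 2.2 iii) below is obtained as `lam0_of_xi(alpha, beta, gamma, 0.0)`.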
We are interested in decreasing the value of $\xi$, starting from $\xi_k$, in such a way that $x(\xi) = \lambda_0(\xi)\,\hat{u} + \hat{v}$ is the optimal solution of problem $P(\xi)$. To this end, let us note that $\lambda_0(\xi)$ is an increasing function, so that $\xi < \xi_k$ implies $\lambda_0(\xi) < \hat\lambda_0 \triangleq \lambda_0(\xi_k)$. Then $\lambda_0$ plays the role of a parameter in (2.3), and we can study the stability of the solution of problem $P(\xi_k)$ with respect to $\lambda_0$. In other words, we want to find the values of $\lambda_0 \in [0, \hat\lambda_0]$ which not only guarantee the nonnegativity of $x(\lambda_0)$ and $\lambda(\lambda_0)$ in (2.3), but also the feasibility of $x(\lambda_0)$ with respect to the constraints $\bar{B}x \le \bar{b}$ which are nonactive at $\hat{x}^{(k)}$. Set

$$\bar{u} \triangleq \bar{B}\hat{u}, \qquad \bar{v} \triangleq \bar{b} - \bar{B}\hat{v},$$

$$J_1 = \{j : \hat{v}_j < 0\}, \qquad J_2 = \{j : \tilde{v}_j < 0\}, \qquad J_3 = \{j : \bar{v}_j < 0\}.$$

The following theorem holds:

THEOREM 2.1. The vector $x(\lambda_0)$ is the optimal solution of the problem $P(\xi)$, $\xi = Q(x(\lambda_0))$, for any $\lambda_0 \in [\lambda_0^*, \hat\lambda_0]$, where

$$\lambda_0^* = \max\{\lambda_{01}, \lambda_{02}, \lambda_{03}\}; \tag{2.7a}$$

$$\lambda_{01} = \begin{cases} \max_{j \in J_1} \left(-\hat{v}_j/\hat{u}_j\right) & \text{if } J_1 \ne \emptyset, \\ 0 & \text{otherwise;} \end{cases} \tag{2.7b}$$

$$\lambda_{02} = \begin{cases} \max_{j \in J_2} \left(-\tilde{v}_j/\tilde{u}_j\right) & \text{if } J_2 \ne \emptyset, \\ 0 & \text{otherwise;} \end{cases} \tag{2.7c}$$

$$\lambda_{03} = \begin{cases} \max_{j \in J_3} \left(\bar{v}_j/\bar{u}_j\right) & \text{if } J_3 \ne \emptyset, \\ 0 & \text{otherwise.} \end{cases} \tag{2.7d}$$
(1) Let us note that equation (2.5) must have, for $\xi = \xi_k$, a positive root, since problem $P(\xi_k)$ has optimal solutions; on the other hand, (2.1a) also collapses to the Karush-Kuhn-Tucker conditions applied to the problem $\min c^T x$, $x \in \{x \in \mathbb{R}^n : Bx = \hat{b},\ Q(x) \le \xi_k\}$, so that (2.5) has a negative root as well.
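The breakpoints (2.7) admit a direct transcription; the sketch below assumes exact data and uses no tolerances, which a practical implementation would need:

```python
import numpy as np

def breakpoints(u_hat, v_hat, u_til, v_til, B_bar, b_bar):
    """lambda_01, lambda_02, lambda_03 and lambda_0^* of (2.7)."""
    u_bar = B_bar @ u_hat
    v_bar = b_bar - B_bar @ v_hat
    l01 = max((-v_hat[j] / u_hat[j] for j in range(len(v_hat)) if v_hat[j] < 0), default=0.0)
    l02 = max((-v_til[j] / u_til[j] for j in range(len(v_til)) if v_til[j] < 0), default=0.0)
    l03 = max(( v_bar[j] / u_bar[j] for j in range(len(v_bar)) if v_bar[j] < 0), default=0.0)
    return l01, l02, l03, max(l01, l02, l03)
```

Note that the divisions are safe: $\hat{v}_j < 0$ forces $\hat{u}_j > 0$ (and analogously for the other two index sets), since $x(\hat\lambda_0)$ and $\lambda(\hat\lambda_0)$ are feasible.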
Proof. From the Karush-Kuhn-Tucker conditions applied to problem $P(\xi)$, taking into account (2.3), $x(\lambda_0)$ is optimal for $P(\xi)$, $\xi = Q(x(\lambda_0))$, if i), ii), iii) hold:

i) $x(\lambda_0) = \lambda_0\hat{u} + \hat{v} \ge 0$;
ii) $\lambda(\lambda_0) = \lambda_0\tilde{u} + \tilde{v} \ge 0$;
iii) $\bar{B}x(\lambda_0) = \lambda_0\bar{B}\hat{u} + \bar{B}\hat{v} \le \bar{b}$, that is $\lambda_0\bar{u} \le \bar{v}$.

Now we show that i) holds for any $\lambda_0 \in [\lambda_{01}, \hat\lambda_0]$. If $\hat{u}_j \ge 0$ and $\hat{v}_j \ge 0$, then $x_j(\lambda_0) \ge 0$ for any $\lambda_0 \ge 0$. If $\hat{u}_j > 0$ and $\hat{v}_j < 0$, then $x_j(\lambda_0) \ge 0$ is true for any $\lambda_0 \ge -\hat{v}_j/\hat{u}_j$; since $\hat{x}^{(k)}$ is optimal for $P(\xi_k)$, we have $\hat\lambda_0 \ge -\hat{v}_j/\hat{u}_j$, and thus $x_j(\lambda_0) \ge 0$ for any $\lambda_0 \in [-\hat{v}_j/\hat{u}_j, \hat\lambda_0]$. Consider now the case $\hat{u}_j < 0$; since $x(\hat\lambda_0) \ge 0$, necessarily $\hat{v}_j > 0$, so that $x_j(\lambda_0) \ge 0$ for any $\lambda_0 \le \hat\lambda_0$. As a consequence, i) is satisfied for any $\lambda_0 \in [\lambda_{01}, \hat\lambda_0]$. In a similar way we can prove that ii) and iii) hold for any $\lambda_0 \in [\lambda_{02}, \hat\lambda_0]$ and for any $\lambda_0 \in [\lambda_{03}, \hat\lambda_0]$, respectively. Obviously, all the conditions i), ii) and iii) are satisfied for any $\lambda_0 \in [\lambda_0^*, \hat\lambda_0]$. This completes the proof.

We have just seen that, starting from the optimal solution $\hat{x}^{(k)}$ of problem $P(\xi_k)$, we find a new optimal solution $\hat{x}^{(k+1)} = x(\lambda_0^*)$ of problem $P(\xi_{k+1})$, $\xi_{k+1} = Q(\hat{x}^{(k+1)})$. If $\xi_{k+1} > 0$, we must still decrease the value of the parameter $\xi$ in order to reach the value zero. This will be analyzed in the next section. Some special cases are studied in the following theorem.

THEOREM 2.2.
Let $\lambda_0^*$ be the value defined by (2.7a).
i) If $\lambda_0^* = 0$ and $Q(x(0)) > 0$, then the feasible region of problem $P$ is empty.
ii) If $Q(x(\lambda_0^*)) = 0$, then $x(\lambda_0^*)$ is an optimal solution of problem $P$.
iii) If $Q(x(\lambda_0^*)) < 0$, then $x(\tilde\lambda_0)$ is the optimal solution of problem $P$, where $\tilde\lambda_0$ is the positive root of the equation $Q(x(\lambda_0)) = 0$.
Proof. i) From the Karush-Kuhn-Tucker conditions (1.2), and taking into account (1.7), $\lambda_0^* = 0$ implies that $x(0)$ is the optimal solution of the problem

$$\min Q(x), \quad x \in R_L.$$

Since $Q(x(0)) > 0$, the feasible region of problem $P$ is empty.

ii) Obvious.

iii) Since $Q(x(\hat\lambda_0)) > 0$, the optimal solution of problem $P$ is binding at the quadratic constraint (see Theorem 1.1 iii)); on the other hand, $Q(x(\lambda_0^*)) < 0$ implies the existence of $\tilde\lambda_0 \in [\lambda_0^*, \hat\lambda_0]$ such that $Q(x(\tilde\lambda_0)) = 0$; then $\tilde\lambda_0$ is necessarily the positive root of the equation $Q(x(\lambda_0)) = 0$. This completes the proof.
3. Adding and deleting a constraint

Consider again the optimal solution $x(\lambda_0^*) = \hat{x}^{(k+1)}$ of the problem $P(\xi_{k+1})$, $\xi_{k+1} = Q(\hat{x}^{(k+1)})$, and suppose that $\xi_{k+1} > 0$. Since $\hat{x}^{(k+1)}$ is not optimal for $P$, we must generate a new optimal solution of $P(\xi)$ corresponding to a value of $\xi$ lower than $\xi_{k+1}$. Let us note that, for $\lambda_0 = \lambda_0^*$, one of the components of the vectors $x(\lambda_0)$, $\lambda(\lambda_0)$, $\bar{B}x(\lambda_0) - \bar{b}$ becomes zero, so that the set of active constraints changes with respect to the parameter $\xi$. More precisely, if $\lambda_0^* = \lambda_{01}$ or $\lambda_0^* = \lambda_{02}$, an active constraint must be deleted, while if $\lambda_0^* = \lambda_{03}$ a new constraint must be added to the set of active constraints. Thus we are interested in updating system (2.3) in order to find the new value of $\lambda_0^*$ and repeat the procedure.
Adding a constraint. Suppose that $\alpha^T x = \beta$ becomes an active constraint; then system (2.1) can be updated in the following way:

$$\begin{cases} Qx + B^T\lambda + \alpha\,\tilde\mu = \lambda_0\, c - q \\ Bx = \hat{b} \\ \alpha^T x = \beta \end{cases} \tag{3.1}$$

where $\tilde\mu$ is the multiplier associated with $\alpha^T x = \beta$. System (3.1) can be rewritten in the following way:

$$H' \begin{pmatrix} x \\ \lambda \\ \tilde\mu \end{pmatrix} = \lambda_0 \begin{pmatrix} c \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} -q \\ \hat{b} \\ \beta \end{pmatrix} \tag{3.2}$$

where

$$H' \triangleq \begin{pmatrix} H & h \\ h^T & 0 \end{pmatrix}, \qquad h^T = (\alpha^T, 0).$$

It is easy to show that the inverse of $H'$ can be obtained from $H^{-1}$ by the following formula:

$$(H')^{-1} = \begin{pmatrix} H^{-1} - \dfrac{(H^{-1}h)(h^T H^{-1})}{h^T H^{-1} h} & \dfrac{H^{-1}h}{h^T H^{-1} h} \\[3mm] \dfrac{h^T H^{-1}}{h^T H^{-1} h} & -\dfrac{1}{h^T H^{-1} h} \end{pmatrix}. \tag{3.3}$$

By means of the new inverse it is then easy to update (2.3).
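A sketch of the update (3.3) in numpy (it exploits the symmetry of $H$, so that $h^T H^{-1} = (H^{-1}h)^T$, and assumes $h^T H^{-1} h \ne 0$):

```python
import numpy as np

def add_constraint_inverse(H_inv, alpha):
    """Grow H^{-1} when the row h^T = (alpha^T, 0) is appended to H, cf. (3.3)."""
    m = H_inv.shape[0]
    h = np.concatenate([alpha, np.zeros(m - len(alpha))])  # pad alpha with zeros
    Hh = H_inv @ h                                         # H^{-1} h
    d = h @ Hh                                             # h^T H^{-1} h (assumed nonzero)
    new = np.zeros((m + 1, m + 1))
    new[:m, :m] = H_inv - np.outer(Hh, Hh) / d
    new[:m, m] = Hh / d
    new[m, :m] = Hh / d
    new[m, m] = -1.0 / d
    return new
```

For instance, in the third iteration of the example of Section 6, `add_constraint_inverse(np.eye(2) / 2, np.array([-1.0, 0.0]))` reproduces the $3 \times 3$ inverse displayed there.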
Deleting a constraint. Suppose that the constraint $\alpha^T x = \beta$ must be deleted from the set of the active ones; we can assume, without loss of generality, that the matrix and the right-hand side in (2.1b) are of the form

$$B = \begin{pmatrix} \tilde{B} \\ \alpha^T \end{pmatrix}, \qquad \hat{b} = \begin{pmatrix} \tilde{b} \\ \beta \end{pmatrix},$$

so that system (2.1) reduces to

$$Qx + \tilde{B}^T\lambda = \lambda_0\, c - q \tag{3.4a}$$

$$\tilde{B}x = \tilde{b}, \tag{3.4b}$$

that is, to a system of the form (2.2) with the bordered matrix $\tilde{H} \triangleq \begin{pmatrix} Q & \tilde{B}^T \\ \tilde{B} & 0 \end{pmatrix}$. It is easy to show that the inverse of $\tilde{H}$ can be obtained from

$$H^{-1} = \begin{pmatrix} C & d \\ d^T & \delta \end{pmatrix}$$

by the following formula:

$$\tilde{H}^{-1} = C - \frac{1}{\delta}\, d\, d^T. \tag{3.5}$$
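The corresponding sketch (the constraint to delete is assumed to have been permuted to the last position, and $\delta \ne 0$):

```python
import numpy as np

def delete_last_constraint_inverse(H_inv):
    """Shrink H^{-1} when the last active row is removed, cf. (3.5)."""
    C = H_inv[:-1, :-1]
    d = H_inv[:-1, -1]
    delta = H_inv[-1, -1]                 # assumed nonzero
    return C - np.outer(d, d) / delta
```

Applied to the $4 \times 4$ inverse of the first iteration of Section 6 (after permuting the constraint $x_1 \le 8$ to the last position), this yields the $3 \times 3$ inverse displayed there.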
REMARK 3.1. When, in system (2.1), $B$ is a square matrix of order $n$, (2.3) reduces to

$$x = \hat{v} \tag{3.6a}$$

$$\lambda = \lambda_0\,\tilde{u} + \tilde{v} \tag{3.6b}$$

where $\hat{v} = B^{-1}\hat{b}$, $\tilde{u} = (B^T)^{-1}c$, $\tilde{v} = -(B^T)^{-1}q - (BQ^{-1}B^T)^{-1}\hat{b}$, so that, starting from the optimal solution $\bar{x} = B^{-1}\hat{b}$ of the linear problem $P_L$, we must only choose, by means of (2.7c), the constraint that must be deleted. Let us note that when in (2.7c) $\lambda_{02} = 0$, the feasible region of problem $P$ is empty, since, from (1.7), $\bar{x}$ is the optimal solution of the problem $\min\{Q(x) : x \in R_L\}$.
4. The Algorithm

The theoretical results established in the previous sections allow us to suggest the following algorithm for solving problem $P$ when the linear problem $P_L$ has a unique solution.

STEP 0 (not iterative). Solve problem $P_L$ and let $\bar{x}$ be its unique solution. If $Q(\bar{x}) \le 0$, STOP: $\bar{x}$ is the optimal solution of $P$; otherwise set $k = 0$, $\hat{x}^{(0)} = \bar{x}$ and go to Step 1.

STEP 1. Let $B$ ($\bar{B}$) be the matrix associated with the active (nonactive) constraints at $\hat{x}^{(k)}$. Calculate $\lambda_{01}$, $\lambda_{02}$, $\lambda_{03}$ and $\lambda_0^*$. If $\lambda_0^* = 0$ go to Step 7; otherwise $\hat{x}^{(k+1)} = x(\lambda_0^*)$ becomes the current solution; go to Step 2.

STEP 2. If $Q(\hat{x}^{(k+1)}) = 0$, STOP: $\hat{x}^{(k+1)}$ is the optimal solution of $P$. If $Q(\hat{x}^{(k+1)}) < 0$, go to Step 6; otherwise go to Step 3.

STEP 3. If $\lambda_0^* \ne \lambda_{01}$ go to Step 4; otherwise a nonnegativity constraint is deleted (see procedure «deleting a constraint»); set $k = k + 1$ and go to Step 1.

STEP 4. If $\lambda_0^* \ne \lambda_{02}$ go to Step 5; otherwise the constraint corresponding to the multiplier which becomes zero is deleted (see procedure «deleting a constraint»); set $k = k + 1$ and go to Step 1.

STEP 5. A new constraint is added (see procedure «adding a constraint»); set $k = k + 1$ and go to Step 1.

STEP 6. Calculate $\tilde\lambda_0$, the positive root of $Q(x(\lambda_0)) = 0$, and $x(\tilde\lambda_0)$; STOP: $x(\tilde\lambda_0)$ is the optimal solution of $P$.

STEP 7. If $Q(x(0)) > 0$, STOP: the feasible region of $P$ is empty. If $Q(x(0)) = 0$, STOP: $x(0)$ is the optimal solution of $P$; otherwise go to Step 6.
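To fix ideas, here is a condensed end-to-end sketch of the algorithm. It departs from the paper's bookkeeping in two ways, both assumptions of this illustration: all constraints (including the nonnegativity rows) are kept as rows of a single system $Mx \le r$, so that the cases $\lambda_{01}$ and $\lambda_{03}$ both appear as "add a constraint"; and the bordered system is re-solved at each step instead of updating $H^{-1}$ by (3.3) and (3.5). No degeneracy handling beyond a crude tolerance is included.

```python
import numpy as np

def Qfun(Qm, q, q0, x):
    return 0.5 * x @ Qm @ x + q @ x + q0

def solve_qclp(c, Qm, q, q0, M, r, active, tol=1e-10):
    """Sequential method of Section 4 for max c^T x s.t. Mx <= r, Q(x) <= 0.

    `active` holds the indices of the rows of M binding at the unique
    optimum of the relaxed linear program (Step 0 is assumed done)."""
    while True:
        B, bh = M[active], r[active]
        n, k = Qm.shape[0], len(active)
        H = np.zeros((n + k, n + k))
        H[:n, :n], H[:n, n:], H[n:, :n] = Qm, B.T, B
        u = np.linalg.solve(H, np.concatenate([c, np.zeros(k)]))
        v = np.linalg.solve(H, np.concatenate([-q, bh]))
        uh, ut, vh, vt = u[:n], u[n:], v[:n], v[n:]
        nonact = [i for i in range(len(r)) if i not in active]
        ub, vb = M[nonact] @ uh, r[nonact] - M[nonact] @ vh
        # breakpoints (2.7): a vanishing multiplier vs. a row becoming tight
        l2, j2 = 0.0, None
        for j in range(k):
            if vt[j] < -tol and -vt[j] / ut[j] > l2:
                l2, j2 = -vt[j] / ut[j], j
        l3, j3 = 0.0, None
        for j in range(len(nonact)):
            if vb[j] < -tol and vb[j] / ub[j] > l3:
                l3, j3 = vb[j] / ub[j], j
        lam = max(l2, l3)
        x = lam * uh + vh
        Qx = Qfun(Qm, q, q0, x)
        if abs(Qx) <= tol:
            return x                      # Theorem 2.2 ii)
        if Qx < 0.0:                      # Theorem 2.2 iii): positive root of (2.5) at xi = 0
            a = uh @ Qm @ uh              # nonzero here, since Q(x(lam)) varies with lam
            b = uh @ Qm @ vh + q @ uh
            g = 0.5 * vh @ Qm @ vh + q @ vh + q0
            lam = (-b + np.sqrt(b * b - 2.0 * a * g)) / a
            return lam * uh + vh
        if lam <= tol:
            raise ValueError("feasible region of P is empty (Theorem 2.2 i))")
        if l2 >= l3:                      # delete the constraint whose multiplier vanished
            active = [i for t, i in enumerate(active) if t != j2]
        else:                             # add the row that became tight
            active = active + [nonact[j3]]

# The example of Section 6:
c  = np.array([1.0, 2.0])
Qm = 2.0 * np.eye(2); q = np.zeros(2); q0 = -25.0
M  = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
r  = np.array([8., 7., -3., -2.])
print(solve_qclp(c, Qm, q, q0, M, r, active=[0, 1]))   # -> [3. 4.]
```

On the example of Section 6 this run reproduces the iterates $\bar{x} = (8,7) \to (7/2, 7) \to (3, 6) \to (3, 2) \to (3, 4)$.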
5. Special Cases

Let us note that the sequential method described in Section 4 is based on the following assumption:

1) problem $P_L$ has a unique optimal solution $\bar{x}$.

Such an assumption implies, by Theorem 1.2, the following one:

2) the Lagrange multiplier associated with the quadratic constraint in problem $P(\bar\xi)$, $\bar\xi = Q(\bar{x})$, is strictly positive.

Thus, in order to establish a sequential method for solving problem $P$ in the general case, we must analyse what happens when 1) or 2) does not hold. When the objective function of the linear problem $P_L$ is not bounded from above, we first solve the problem

$$\max c^T x, \quad Q(x) \le 0 \tag{5.1}$$

whose optimal solution $x^*$ exists since $K$ is compact, and then the linear problem

$$\max c^T x, \quad c^T x \le c^T x^*, \ x \in R_L \tag{5.2}$$

which has alternate optimal solutions. In such a case and, in general, when the linear problem $P_L$ has alternate optimal solutions, it can happen that the only Lagrange multiplier $\mu$ satisfying (1.2a) is zero, so that we cannot start our procedure. In this case the idea is to perturb the objective function in such a way that the new linear problem has a unique optimal solution, and to apply the sequential method of Section 4 until we can restore the original objective function. More precisely, let $\bar{x}$ be an optimal solution (not unique) of $P_L$ and consider the problem

$$P_\varepsilon: \max_{x \in R}\, (c + \varepsilon c')^T x \tag{5.3}$$

where (2) $c'$ is such that the linear problem $\max_{x \in R_L} (c + \varepsilon c')^T x$ has the unique solution $\bar{x}$; (2.3) becomes

$$x = \lambda_0(\hat{u} + \varepsilon\hat{u}') + \hat{v} \tag{5.4a}$$

$$\lambda = \lambda_0(\tilde{u} + \varepsilon\tilde{u}') + \tilde{v} \tag{5.4b}$$

where $u' = (\hat{u}', \tilde{u}') = H^{-1}(c', 0)$.

(2) A suitable choice for $c'$ may be $c' = \sum_i a^{(i)}$, where $a^{(i)}$ denotes the gradient of the $i$-th linear constraint binding at $\bar{x}$.
Let us note that (2.7) cannot be applied, since the coefficients of $\lambda_0$ depend on $\varepsilon$; for this reason we consider the following sets of indices:

$$J_1' = \{j : \hat{u}_j = 0,\ \hat{u}_j' > 0,\ \hat{v}_j < 0\};$$

$$J_2' = \{j : \tilde{u}_j = 0,\ \tilde{u}_j' > 0,\ \tilde{v}_j < 0\};$$

$$J_3' = \{j : \bar{u}_j = 0,\ \bar{u}_j' > 0,\ \bar{v}_j < 0\}$$

where $\bar{u}' \triangleq \bar{B}\hat{u}'$. Theorem 2.1 can be reformulated in the following way:

THEOREM 5.1. The vector $x(\lambda_0)$ is the optimal solution of the problem $P_\varepsilon(\xi)$, $\xi = Q(x(\lambda_0))$, for any $\lambda_0 \in [\tilde\lambda_0/\varepsilon, \hat\lambda_0/\varepsilon]$, where

$$\tilde\lambda_0 = \max\{\lambda_{01}', \lambda_{02}', \lambda_{03}'\}; \tag{5.5a}$$

$$\lambda_{01}' = \begin{cases} \max_{j \in J_1'} \left(-\hat{v}_j/\hat{u}_j'\right) & \text{if } J_1' \ne \emptyset, \\ 0 & \text{otherwise;} \end{cases} \tag{5.5b}$$

$$\lambda_{02}' = \begin{cases} \max_{j \in J_2'} \left(-\tilde{v}_j/\tilde{u}_j'\right) & \text{if } J_2' \ne \emptyset, \\ 0 & \text{otherwise;} \end{cases} \tag{5.5c}$$

$$\lambda_{03}' = \begin{cases} \max_{j \in J_3'} \left(\bar{v}_j/\bar{u}_j'\right) & \text{if } J_3' \ne \emptyset, \\ 0 & \text{otherwise.} \end{cases} \tag{5.5d}$$
Proof. It is sufficient to note, taking into account the arbitrariness of $\varepsilon$, that a maximum of the form

$$\max_j \frac{-\hat{v}_j}{\hat{u}_j + \varepsilon\hat{u}_j'}$$

is attained, for $\varepsilon$ arbitrarily small, at an index $j$ with $\hat{u}_j = 0$.

COROLLARY 5.1. If $\tilde\lambda_0 = 0$, then $x(\lambda_0)$ is the optimal solution of problem $P(\xi)$, $\xi = Q(x(\lambda_0))$, for any $\lambda_0 \in [\lambda_0^*, \hat\lambda_0]$.
Proof. The statement follows immediately from Theorem 5.1 and Theorem 2.1.
REMARK 5.1. If $\tilde\lambda_0 > 0$, we must add or delete a constraint according to whether $\tilde\lambda_0 = \lambda_{03}'$ or not (see Section 3). As a consequence of Corollary 5.1, if $\tilde\lambda_0 = 0$, then we can set $\varepsilon = 0$ and restore problem $P(\xi)$.
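A sketch of the perturbation device, with the choice of $c'$ suggested in footnote (2) (the value of `eps` and the tolerance are arbitrary choices of this illustration):

```python
import numpy as np

def perturbed_objective(c, M, r, x_bar, eps=1e-3, tol=1e-10):
    """Return c + eps*c', where c' is the sum of the gradients of the rows
    of M binding at x_bar, cf. footnote (2) of Section 5."""
    binding = np.abs(M @ x_bar - r) <= tol
    return c + eps * M[binding].sum(axis=0)
```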
6. A numerical example

We now propose a simple example in order to clarify the theoretical and computational aspects of the sequential method proposed in Section 4. Consider the problem

$$P: \quad \max x_1 + 2x_2, \qquad 3 \le x_1 \le 8, \quad 2 \le x_2 \le 7, \qquad Q(x) \triangleq x_1^2 + x_2^2 - 25 \le 0.$$

The related linear problem

$$P_L: \quad \max x_1 + 2x_2, \qquad 3 \le x_1 \le 8, \quad 2 \le x_2 \le 7$$

has the unique solution $\bar{x} = (8, 7)$, which is not optimal for $P$ since $Q(\bar{x}) > 0$. The constraints which are active at $\bar{x}$ are $x_1 \le 8$ and $x_2 \le 7$, so that we have
$$H = \begin{pmatrix} 2 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}, \qquad H^{-1} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & -2 & 0 \\ 0 & 1 & 0 & -2 \end{pmatrix}.$$

Hence (2.3) turns out to be

$$\begin{cases} x_1 = 8 \\ x_2 = 7 \end{cases} \qquad \begin{cases} \lambda_1 = \lambda_0 - 16 \\ \lambda_2 = 2\lambda_0 - 14. \end{cases}$$

Since $x(\lambda_0) = (8, 7)$ is independent of $\lambda_0$, we have $J_1 = J_3 = \emptyset$, so that $\lambda_0^* = \lambda_{02} = \max\{16, \tfrac{14}{2}\} = 16$; as a consequence, the constraint $x_1 \le 8$, associated with the multiplier $\lambda_1$, must be deleted.
We apply the procedure «deleting a constraint» and find the new inverse

$$(H')^{-1} = \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & -2 \end{pmatrix};$$

(2.3) becomes

$$\begin{cases} x_1 = \tfrac{1}{2}\lambda_0 \\ x_2 = 7 \end{cases} \qquad \lambda_2 = 2\lambda_0 - 14.$$

With respect to the nonactive constraints we have

$$\bar{B} = \begin{pmatrix} 1 & 0 \\ -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \bar{b} = \begin{pmatrix} 8 \\ -3 \\ -2 \end{pmatrix}, \qquad \bar{u} = \begin{pmatrix} 1/2 \\ -1/2 \\ 0 \end{pmatrix}, \qquad \bar{v} = \begin{pmatrix} 8 \\ -3 \\ 5 \end{pmatrix},$$

so that $\lambda_{01} = 0$, $\lambda_{02} = 7$, $\lambda_{03} = 6$, and $\lambda_0^* = \lambda_{02} = 7$ implies that the current solution is $\hat{x}^{(1)} = (7/2, 7)$. Since $Q(\hat{x}^{(1)}) > 0$, the value of $\lambda_0$ must be decreased; the constraint $x_2 \le 7$, associated with the multiplier $\lambda_2$, must be deleted.
The new inverse is

$$(H')^{-1} = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix};$$

(2.3) becomes

$$\begin{cases} x_1 = \tfrac{1}{2}\lambda_0 \\ x_2 = \lambda_0. \end{cases}$$

Regarding the nonactive constraints we have

$$\bar{B} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \bar{b} = \begin{pmatrix} 8 \\ 7 \\ -3 \\ -2 \end{pmatrix}, \qquad \bar{u} = \begin{pmatrix} 1/2 \\ 1 \\ -1/2 \\ -1 \end{pmatrix}, \qquad \bar{v} = \begin{pmatrix} 8 \\ 7 \\ -3 \\ -2 \end{pmatrix},$$
so that $\lambda_{01} = 0$, $\lambda_{02} = 0$, $\lambda_{03} = 6$, and $\lambda_0^* = \lambda_{03} = 6$ gives the solution $\hat{x}^{(2)} = (3, 6)$. Since $Q(\hat{x}^{(2)}) > 0$, the value of $\lambda_0$ must be decreased; the constraint $-x_1 \le -3$ must be added. We apply the procedure «adding a constraint» and we find the new inverse

$$(H')^{-1} = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 1/2 & 0 \\ -1 & 0 & -2 \end{pmatrix},$$

and (2.3) becomes
$$\begin{cases} x_1 = 3 \\ x_2 = \lambda_0 \end{cases} \qquad \lambda_1 = -\lambda_0 + 6.$$
With regard to the nonactive constraints we have

$$\bar{B} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & -1 \end{pmatrix}, \qquad \bar{b} = \begin{pmatrix} 8 \\ 7 \\ -2 \end{pmatrix}, \qquad \bar{u} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}, \qquad \bar{v} = \begin{pmatrix} 5 \\ 7 \\ -2 \end{pmatrix},$$

so that $\lambda_{01} = 0$, $\lambda_{02} = 0$, $\lambda_{03} = 2$, and $\lambda_0^* = \lambda_{03} = 2$ implies that the current solution is $\hat{x}^{(3)} = (3, 2)$. Since $Q(\hat{x}^{(3)}) < 0$, by Theorem 2.2 iii) we calculate the positive root of $Q(x(\lambda_0)) = \lambda_0^2 - 16 = 0$, so that $\hat{x}^{(4)} = (3, 4)$ is the optimal solution of problem $P$. Figure 1 outlines the finite sequence of optimal level solutions (3) generated in solving the problem by means of the sequential method.
(3) Let $\hat{x}^{(k)}$ be the optimal solution of problem $P(\xi_k)$; we refer to $\hat{x}^{(k)}$ as an optimal level solution.
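The result can be cross-checked independently (a sketch; scipy's SLSQP solver is used here only for verification and is not part of the paper's method):

```python
import numpy as np
from scipy.optimize import minimize

res = minimize(lambda x: -(x[0] + 2.0 * x[1]),        # maximize x1 + 2*x2
               x0=np.array([3.0, 3.0]),               # a feasible starting point
               bounds=[(3.0, 8.0), (2.0, 7.0)],
               constraints=[{"type": "ineq",          # 25 - x1^2 - x2^2 >= 0, i.e. Q(x) <= 0
                             "fun": lambda x: 25.0 - x[0]**2 - x[1]**2}])
print(res.x)   # approximately [3. 4.], in agreement with x_hat^(4)
```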
[Figure 1: plot in the $(x_1, x_2)$-plane showing the feasible region and the iterates, among them $\hat{x}^{(1)} = (7/2, 7)$ and $\hat{x}^{(3)} = (3, 2)$.]
Figure 1. Optimal trajectory $\bar{x} \to \hat{x}^{(1)} \to \hat{x}^{(2)} \to \hat{x}^{(3)} \to \hat{x}^{(4)}$.

REFERENCES

[1] A. CAMBINI: A sequential method for convex quadratic programming, Dept. of Mathematics, University of Pisa, paper A-16, 1975.

[2] A. CAMBINI, P. CARRARESI, F. GIANNESSI: Sequential method and decomposition in mathematical programs, Dept. of Mathematics, University of Pisa, paper A-38, 1976.

[3] A. CHARNES, W. W. COOPER: Deterministic equivalents for optimizing and satisficing under chance constraints, Operations Research, 11, 1963, pp. 18-39.

[4] P. FAVATI, M. PAPPALARDO: On the reciprocal vector optimization problems, Journal of Optimization Theory and Applications, 47, 1985, pp. 181-193.

[5] A. GEOFFRION: Stochastic programming with aspiration or fractile criteria, Management Science, 13, 1967, pp. 672-679.

[6] H. MARKOWITZ: Portfolio Selection: Efficient Diversification of Investments, John Wiley & Sons, Inc., New York, 1959.

[7] L. MARTEIN: A sequential algorithm for the minimum of a strictly convex quadratic form under nonnegativity constraints, Dept. of Mathematics, University of Pisa, paper A-44, 1977.

[8] S. SCHAIBLE: Parameter-free convex equivalent and dual programs of fractional programming problems, Zeitschrift für Operations Research, 18, 1974, pp. 187-196.

[9] S. SCHAIBLE: Duality in fractional programming: a unified approach, Operations Research, 24, 1976, pp. 452-462.

[10] W. F. SHARPE: Portfolio Theory and Capital Markets, McGraw-Hill, New York, 1970.

[11] C. VAN DE PANNE: Programming with a quadratic constraint, Management Science, 12, 1966, pp. 798-815.

[12] M. VOLPATO: Sul carattere ottimale delle politiche estremanti nei problemi di estremo vincolato, Studi e Modelli di Ricerca Operativa, UTET, Torino, 1971, pp. 1043-1053.

On the solution of a linear program with a quadratic constraint.

SUMMARY. A solution procedure for a linear program with one convex quadratic constraint is proposed. The method reaches an optimal solution in a finite number of iterations.