WORKING PAPER
ALFRED P. SLOAN SCHOOL OF MANAGEMENT
NETWORK FLOWS Ravindra K. Ahuja Thomas L. Magnanti James B. Orlin
Sloan W.P. No. 2059-88
August 1988 Revised: December, 1988
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139
NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139

* On leave from Indian Institute of Technology, Kanpur - 208016, INDIA
NETWORK FLOWS

OVERVIEW

1. Introduction
1.1 Applications
1.2 Complexity Analysis
1.3 Notation and Definitions
1.4 Network Representations
1.5 Search Algorithms
1.6 Developing Polynomial Time Algorithms

2. Basic Properties of Network Flows
2.1 Flow Decomposition Properties and Optimality Conditions
2.2 Cycle Free and Spanning Tree Solutions
2.3 Networks, Linear and Integer Programming
2.4 Network Transformations

3. Shortest Paths
3.1 Dijkstra's Algorithm
3.2 Dial's Implementation
3.3 R-Heap Implementation
3.4 Label Correcting Algorithms
3.5 All Pairs Shortest Path Algorithm

4. Maximum Flows
4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
4.2 Decreasing the Number of Augmentations
4.3 Shortest Augmenting Path Algorithm
4.4 Preflow-Push Algorithms
4.5 Excess-Scaling Algorithm

5. Minimum Cost Flows
5.1 Duality and Optimality Conditions
5.2 Relationship to Shortest Path and Maximum Flow Problems
5.3 Negative Cycle Algorithm
5.4 Successive Shortest Path Algorithm
5.5 Primal-Dual and Out-of-Kilter Algorithms
5.6 Network Simplex Algorithm
5.7 Right-Hand-Side Scaling Algorithm
5.8 Cost Scaling Algorithm
5.9 Double Scaling Algorithm
5.10 Sensitivity Analysis
5.11 Assignment Problem

6. Reference Notes

References
NETWORK FLOWS

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for the specialists, but also serves as an introduction and summary to the non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussions to the problems listed above. Some important generalizations of these problems, such as (i) the generalized network flows, (ii) the multicommodity flows, and (iii) the network design, will not be covered in our survey. We, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms quantitatively; (ii) graph notation and various ways to represent networks; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.
1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, we consider some models requiring solution techniques that we will not describe in this chapter. For the purposes of this discussion, we will consider four different types of networks arising in practice:
•
Physical networks (Streets, railbeds, pipelines, wires)
•
Route networks
•
Space-time networks (Scheduling networks)
•
Derived networks (Through problem transformations)
These four categories are not exhaustive and overlap
in coverage.
Nevertheless,
they provide a useful taxonomy for summarizing a variety of applications.
Network flow models are
also used for several purposes:
•
Descriptive modeling (answering "what is?" questions)
•
Predictive modeling (answering "what will be?" questions)
•
Normative modeling (answering "what should be?" questions, that is, performing optimization)
We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.
The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

Minimize  Σ_{(i,j) ∈ A} c_ij x_ij    (1.1a)

subject to

Σ_{j : (i,j) ∈ A} x_ij − Σ_{j : (j,i) ∈ A} x_ji = b(i),  for all i ∈ N,    (1.1b)

l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.    (1.1c)

We refer to the vector x = (x_ij) as the flow in the network. The constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node. We henceforth refer to this constraint as the mass balance constraint. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. The flow bounds might model physical capacities, contractual obligations or simply operating ranges of interest. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.
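As a small computational illustration of the formulation (1.1), the sketch below checks the mass balance constraints (1.1b) and the flow bound constraints (1.1c) for a candidate flow; the network, bounds, and flow values are our own illustrative numbers, not an example from the text.

```python
# Check the mass balance (1.1b) and flow bound (1.1c) constraints for a
# candidate flow on a small illustrative network.
# Arcs are stored as (tail, head) -> (cost, lower bound, capacity).

arcs = {
    (1, 2): (2, 0, 4),
    (1, 3): (2, 0, 2),
    (2, 3): (1, 0, 2),
    (2, 4): (3, 0, 3),
    (3, 4): (1, 0, 5),
}
b = {1: 4, 2: 0, 3: 0, 4: -4}          # supplies (+) and demands (-)
x = {(1, 2): 3, (1, 3): 1, (2, 3): 1, (2, 4): 2, (3, 4): 2}

def is_feasible(arcs, b, x):
    # Flow bounds: l_ij <= x_ij <= u_ij for every arc.
    for (i, j), (cost, low, cap) in arcs.items():
        if not (low <= x[i, j] <= cap):
            return False
    # Mass balance: outflow minus inflow equals b(i) at every node.
    for node in b:
        out = sum(x[i, j] for (i, j) in arcs if i == node)
        into = sum(x[i, j] for (i, j) in arcs if j == node)
        if out - into != b[node]:
            return False
    return True

cost = sum(arcs[a][0] * x[a] for a in arcs)   # objective value (1.1a)
print(is_feasible(x=x, b=b, arcs=arcs), cost)
```

Note that Σ_i b(i) = 0 in this example, which, as observed below, is necessary for any feasible flow to exist.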
In matrix notation, we represent the minimum cost flow problem as

minimize { cx : Nx = b and l ≤ x ≤ u },    (1.2)

in terms of a node-arc incidence matrix N. The matrix N has one row for each node of the network and one column for each arc. We let N_ij represent the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, which is a column vector of size n whose entries are all zeros except for the j-th entry, which is 1. Note that each flow variable x_ij appears in two mass balance equations, as an outflow from node i with a +1 coefficient and as an inflow to node j with a -1 coefficient. Therefore the column N_ij corresponding to arc (i, j) is N_ij = e_i − e_j. Figure 1.1 gives an example of the node-arc incidence matrix. The matrix N has very special structure: only 2m out of its nm total entries are nonzero, all its nonzero entries are +1 or -1, and each column has exactly one +1 and one -1. Later, in Sections 2.2 and 2.3, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives

Σ_{i ∈ N} b(i) = 0,

or equivalently,

Σ_{i ∈ N : b(i) > 0} b(i) = − Σ_{i ∈ N : b(i) < 0} b(i).

Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If the total supply does equal the total demand, then summing all the mass balance equations gives the zero equation 0x = 0; equivalently, any one equation is equal to minus the sum of all the other equations, and hence is redundant.

The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows.
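The special structure of the node-arc incidence matrix N is easy to verify computationally. The sketch below, on a small network of our own choosing, builds N column by column as e_i − e_j and confirms that each column has exactly one +1 and one -1, so only 2m of the nm entries are nonzero:

```python
# Build the node-arc incidence matrix N of a small directed network and
# verify its special structure: each column N_ij equals e_i - e_j, i.e.,
# one +1 (tail) and one -1 (head), so only 2m of the n*m entries are nonzero.

nodes = [1, 2, 3, 4]
arcs = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]

n, m = len(nodes), len(arcs)
N = [[0] * m for _ in nodes]
for col, (i, j) in enumerate(arcs):
    N[i - 1][col] = +1   # outflow from node i
    N[j - 1][col] = -1   # inflow into node j

nonzeros = sum(1 for row in N for entry in row if entry != 0)
assert nonzeros == 2 * m
for col in range(m):
    column = [N[row][col] for row in range(n)]
    assert column.count(+1) == 1 and column.count(-1) == 1
print(nonzeros, n * m)
```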
Figure 1.1. An example network and its node-arc incidence matrix.

The Assignment Problem. The assignment problem is defined on a network G = (N1 ∪ N2, A) whose arc set A ⊆ N1 × N2 represents possible person-to-object assignments, with a cost c_ij associated with each element (i, j) in A. The objective is to assign each person to exactly one object in a way that minimizes the cost of the assignment. The assignment problem is a minimum cost flow problem on the network G = (N1 ∪ N2, A) with b(i) = 1 for all i ∈ N1 and b(i) = -1 for all i ∈ N2 (we set l_ij = 0 and u_ij = 1 for all (i, j) ∈ A).

Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing, or designing, a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows and measure the performance of any design, as well as a predictive model for measuring the effect of any change in the system. We can then use these models to answer a variety of "what if" planning questions.
The following type of equilibrium network flow model permits us to answer these types of questions. Each link of the network has an associated delay function that specifies how long it takes to traverse the link. The time to do so depends upon traffic conditions; the more traffic that flows on the link, the longer is the travel time to traverse it. Now also suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these route choices affect each other; if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This situation leads to the following equilibrium problem with an embedded set of network optimization problems (shortest path problems): is there a flow pattern in the network with the property that no user can unilaterally change his or her choice of origin to destination path (that is, while all other users continue to use their specified paths in the equilibrium solution) to reduce his or her travel time? Operations researchers have developed a set of sophisticated models for this problem setting, as well as related theory (concerning, for example, existence and uniqueness of equilibrium solutions) and algorithms for computing equilibrium solutions. Used in the mode of "what if" scenario analysis, these models permit analysts to answer the types of questions we posed previously. These models are actively used in practice. Indeed, the Urban Mass Transit Authority in the United States requires that communities perform a network equilibrium impact analysis as part of the process for obtaining federal funds for highway construction or improvement.

Similar types of models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Energy Systems (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy. The basic equilibrium model of electrical networks is another example. In this setting, Ohm's Law serves as the analog of the congestion function for the traffic equilibrium problem, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated circuit (VLSI circuit). In this setting the nodes of the network correspond to electrical components and the links correspond to wires that connect these components. Numerous network planning problems arise in this problem context. For example, how can we lay out, or design, the smallest possible integrated circuit that makes the necessary connections between its components and maintains the necessary separations between the wires (to avoid electrical interference)?
Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The traditional operations research transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center incurs costs based upon some physical network, in this case the transportation network. Rather than solving the problem directly on the physical network, we preprocess the data and construct transportation routes. Consequently, an arc connecting a supply point and a retail center might correspond to a complex four leg distribution channel with legs (i) from a plant (by truck) to a rail station, (ii) from the rail station to a rail head elsewhere in the system, (iii) from the rail head (by truck) to a distribution center, and (iv) from the distribution center (on a local delivery truck) to the final customer (or in some cases just to the distribution center). If we assign to the arc the composite distribution cost of all the intermediary legs, as well as the distribution capacity for this route, this problem becomes a classic network transportation model: find the flows from plants to customers that minimize overall costs. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs by 20%, while improving customer service as well.

Many related problems arise in this type of problem setting, for instance, the design issue of deciding upon the location of the distribution centers. It is possible to address this type of decision problem using integer programming methodology for choosing the distribution sites and network flows to cost out (or optimize flows) for any given choice of sites; using this approach, a noted study conducted several years ago permitted Hunt Wesson Foods Corporation to save over $1 million annually.
One special case of the transportation problem merits note: the assignment problem that we introduced previously in this section. This problem has numerous applications, particularly in problem contexts such as machine scheduling. In this application context, we would identify the supply points with jobs to be performed, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.
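For intuition, the tiny sketch below enumerates every one-to-one assignment of three jobs to three machines and keeps the cheapest; the cost matrix is our own illustration. (Realistic instances are solved by the network flow algorithms discussed later, not by enumeration, whose cost grows factorially in the number of jobs.)

```python
from itertools import permutations

# Cost c[i][j] of completing job i on machine j (illustrative numbers).
c = [[4, 2, 8],
     [4, 3, 7],
     [3, 1, 6]]

# Enumerate every one-to-one assignment of jobs to machines and keep the
# cheapest; a permutation p assigns job i to machine p[i].
best_cost, best_assignment = min(
    (sum(c[i][p[i]] for i in range(3)), p)
    for p in permutations(range(3))
)
print(best_cost, best_assignment)
```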
Space Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network" with several nodes representing a particular facility (a machine, a warehouse, an airport) but at different points in time.

Figure 1.2, which represents a core planning model in production planning, the economic lot size problem, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can meet the demand by drawing upon inventory I_t from the previous period. The network representing this problem has T + 1 nodes: one node t = 1, 2, ..., T represents each of the planning periods, and one node 0 represents the "source" of all production. The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t to be carried from period t to period t + 1. The mass balance equation for each period node t models the basic accounting equation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand (assuming zero beginning and zero final inventory over the entire planning period) must be produced in some period t = 1, 2, ..., T. Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem (for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point). If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.

Figure 1.2. Network flow model of the economic lot size problem.
One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), we incur a fixed cost F_t, no matter how much or how little we produce, as well as a per unit production cost c_t in period t. In addition, we may incur a per unit inventory cost h_t for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for the problem is concave. As we indicate in Section 2.2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. This problem's spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.
The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T + 1, and for every pair of nodes i and j of the periods with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods i to j - 1. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 with the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem.
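This construction translates directly into code. The sketch below, with illustrative demands and costs of our own (F for the fixed production cost, c for the unit production cost, h for the unit holding cost), builds the arc lengths of G' and finds the shortest path from node 1 to node T + 1 by a one-pass dynamic program over the acyclic network:

```python
# Solve the fixed-charge economic lot size problem as a shortest path on
# the auxiliary network G' with nodes 1 .. T+1.  Arc (i, j) means
# "produce in period i to cover the demand of periods i .. j-1".
# All numbers below are illustrative.

T = 4
d = [0, 3, 2, 4, 1]          # d[t]: demand in period t (1-indexed)
F = [0, 10, 10, 10, 10]      # fixed production cost if we produce in t
c = [0, 1, 1, 1, 1]          # per unit production cost in period t
h = [0, 1, 1, 1, 1]          # per unit cost of holding from t to t+1

def arc_length(i, j):
    """Cost of arc (i, j): produce in period i for periods i .. j-1."""
    qty = sum(d[i:j])
    cost = F[i] + c[i] * qty
    # Units demanded in period t > i are held over periods i .. t-1.
    for t in range(i + 1, j):
        cost += sum(h[i:t]) * d[t]
    return cost

# Shortest path over the acyclic network G': dist[j] is the cheapest
# production plan covering the demand of periods 1 .. j-1.
INF = float("inf")
dist = [INF] * (T + 2)
dist[1] = 0
for j in range(2, T + 2):
    dist[j] = min(dist[i] + arc_length(i, j) for i in range(1, j))
print(dist[T + 1])           # cost of an optimal production schedule
```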
Many enhancements of the model are possible. For example, (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the facility might be producing several products that are linked by common production costs or by changeover costs (for example, we may need to change dies in an automobile stamping plant when making different types of fenders), or that share common limited production facilities. In most cases, the enhanced models are quite difficult to solve (they are NP-hard), though the embedded network structure often proves to be useful in designing either heuristic or optimization methods.
Another classical network flow scheduling problem is the airline scheduling problem used to identify a flight schedule for an airline. In this application setting, each node represents both a geographical location (e.g., an airport) and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M. to wait for a later flight, or to wait overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no external supply or demand) will specify a set of flight plans (a circulation of airplanes through the network). A flow that maximizes revenue will prescribe a schedule for an airline's fleet of planes. The same type of network representation arises in many other dynamic scheduling applications.
Derived Networks

This category is a "grab bag" of specialized applications and illustrates that sometimes network flow problems arise in surprising ways from problems that on the surface might not appear to involve networks. The following examples illustrate this point.
Single Duty Crew Scheduling. Figure 1.3 illustrates a number of possible duties for the drivers of a bus company.

Figure 1.3. Possible duties for bus company drivers, by time period and duty number.

In this formulation the binary variable x_j indicates whether (x_j = 1) or not (x_j = 0) we select the j-th duty; the matrix A represents the matrix of duties, and b is a column vector whose components are all 1's. Observe that the ones in each column of A occur in consecutive rows because each duty contains a single work shift (no split shifts or work breaks). We show that this problem is a shortest path problem. To make this identification, we perform the following operations: in the system of equations Ax = b, subtract each equation from the equation below it, and then add a redundant equation, equal to minus the sum of all the equations, below the last equation of the system. This transformation does not change the solution to the system. Because of the structure of A, each column in the revised system will have a single +1 (corresponding to the first hour of the duty in the j-th column of A) and a single -1 (corresponding to the row of A, or the added row, that lies just below the last +1 in the column of A). Moreover, the revised right hand side vector will have a +1 in row 1 and a -1 in the last (the appended) row. Therefore, the problem is to ship one unit of flow from node 1 to node 9 at minimum cost in the network given in Figure 1.4, which is an instance of the shortest path problem.

Figure 1.4. Shortest path formulation of the single duty scheduling problem.
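The row-subtraction transformation is easy to see in code. The sketch below uses an illustrative duty matrix of our own (not the data of Figure 1.3): it subtracts each row of the system from the row below it, appends the redundant row, and checks that every column of the result has a single +1 and a single -1, i.e., that the revised system is a node-arc incidence matrix:

```python
# Convert an interval ("consecutive ones") duty matrix into a node-arc
# incidence matrix: subtract each equation from the one below it, then
# append a row equal to minus the sum of the revised rows.
# The duty matrix below is an illustrative example, not Figure 1.3.

A = [[1, 0, 0, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 0],
     [0, 1, 1, 0]]
rows, cols = len(A), len(A[0])

# Row i of the revised system is A[i] - A[i-1] (with A[-1] taken as 0).
revised = [[A[i][j] - (A[i - 1][j] if i > 0 else 0) for j in range(cols)]
           for i in range(rows)]
# Appended redundant row: minus the column sums of the revised rows
# (the sums telescope, so this row equals -A[rows-1]).
revised.append([-sum(revised[i][j] for i in range(rows)) for j in range(cols)])

# Each column now has exactly one +1 and one -1: an incidence matrix.
for j in range(cols):
    column = [revised[i][j] for i in range(rows + 1)]
    assert column.count(+1) == 1 and column.count(-1) == 1
print(revised)
```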
instead of requiring a single driver to be on duty in each period, to
network
be on duty
each period, the same
flow problem, but in
demands) could be
minimum
in
cost
Critical Path
arbitrary.
this case the right
we
specify a
transformation would produce a
hand side
coefficients (supply
Therefore, the transformed problem
network flow problem, rather than a shortest
p)ath
would be
and
a general
problem.
Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house and complete the framing before beginning to install either electrical or plumbing fixtures.
This type of application can be formulated mathematically as follows. Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor a set of specified precedence constraints and complete the overall project as quickly as possible. If we represent the jobs by nodes, then the precedence constraints can be represented by arcs, thereby giving us a network. The precedence constraints imply that for each arc (i, j) in the network, job j cannot start until job i has been completed. For convenience of notation, we add two dummy jobs, both with zero processing time: a "start" job 0 that must be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until we have completed all other jobs. Let G = (N, A) represent the network corresponding to the augmented project. Then we wish to solve the following optimization problem:

minimize  s_{J+1} − s_0

subject to

s_j ≥ s_i + t_i,  for each arc (i, j) ∈ A.

On the surface, this problem, which is a linear program, seems to bear no resemblance to network optimization. Note, however, that if we move the variable s_i to the left hand side of the constraint, then each constraint contains exactly two variables, one with a plus one coefficient and one with a minus one coefficient. The linear programming dual of this problem has a familiar structure. If we associate a dual variable x_ij with each arc (i, j), then the dual of this problem is

maximize  Σ_{(i,j) ∈ A} t_i x_ij

subject to

Σ_{j : (i,j) ∈ A} x_ij − Σ_{j : (j,i) ∈ A} x_ji = 1 if i = 0, −1 if i = J + 1, and 0 otherwise, for all i ∈ N,

x_ij ≥ 0,  for all (i, j) ∈ A.
This problem requires us to determine the longest path in the network G from node 0 to node J + 1, with t_i as the arc length of arc (i, j). This longest path has the following interpretation: it is the longest sequence of jobs needed to fulfill the specified precedence conditions. Since delaying any job in this sequence must necessarily delay the completion of the overall project, this path has become known as the critical path, and the problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that require managerial attention in order to complete the project as quickly as possible.

Researchers and practitioners have enhanced this basic model in several ways. For example, if resources are available for expediting individual jobs, we could consider the most efficient use of these resources to complete the overall project as quickly as possible. Certain versions of this problem can be formulated as minimum cost flow problems.
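Because the precedence network is acyclic, the longest path (and hence the minimum project duration) can be computed in a single pass in topological order. The sketch below uses an illustrative five-job project of our own, with dummy start job 0 and completion job J + 1 = 6:

```python
# Critical path via a longest-path computation on an acyclic precedence
# network.  Jobs 0 and 6 are the dummy start and completion jobs; the
# durations and precedences below are illustrative.

t = {0: 0, 1: 3, 2: 2, 3: 4, 4: 1, 5: 2, 6: 0}   # processing times t_i
arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6)]

# Earliest start times: s_j = max over arcs (i, j) of s_i + t_i.
# The arc list is already sorted so that every tail is finalized before
# it is used, i.e., the nodes are processed in topological order.
s = {j: 0 for j in t}
for (i, j) in arcs:
    s[j] = max(s[j], s[i] + t[i])
print(s[6])   # minimum project duration s_{J+1} - s_0
```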
The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block that lies immediately above it; restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost for extracting the block) and we wish to extract blocks to maximize overall revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j ≤ y_i (or, y_j − y_i ≤ 0) whenever we need to mine block i before block j, and (ii) an objective function specifying that we wish to maximize the total revenue Σ_j r_j y_j, summed over all blocks j.

The dual of the linear programming version of this problem (obtained by replacing the constraints y_j = 0 or 1 with 0 ≤ y_j ≤ 1) will be a network flow problem with a node for each block, a variable for each precedence constraint, and the revenue r_j as the demand at node j. This network will also have a dummy "collection node" 0 with demand equal to minus the sum of the r_j's, and an arc connecting it to each node j (that is, block j); this arc corresponds to the upper bound constraint y_j ≤ 1 in the original linear program. The dual problem is one of finding a network flow that minimizes the sum of the flows on the arcs incident to node 0.

The critical path scheduling problem and the open pit mining problem illustrate one way that network flow problems arise indirectly. Whenever two variables in a linear program are related by a precedence condition, the variable corresponding to this precedence constraint in the dual linear program will have a network flow structure. If the only constraints in the problem are precedence constraints, the dual linear program will be a network flow problem.
Matrix Rounding of Census Information

The Census Bureau uses census information for a wide variety of purposes. By law, the Bureau has an obligation to protect the source of its information and not disclose statistics that can be attributed to any particular individual. It can attempt to do so by rounding the census information contained in any table. Consider, for example, the data shown in Figure 1.6(a). Since the upper leftmost entry in this table is a 1, the tabulated information might disclose information about a particular individual. We might disguise the information in this table as follows: round each entry in the table, including the row and column sums, either up or down to the nearest multiple of three, say, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries adds to a rounded version of the overall sum. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

The problem can be cast as finding a feasible flow in a network and can be solved by an application of the maximum flow algorithm. The network contains a node for each row and one node for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j) for each entry in the table: the flow on this arc should be the ij-th entry in the prescribed table, rounded either up or down. In addition, we add a supersource s to the network, connected to each row node i: the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t with an arc connecting each column node j to this node; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; the flow on this arc must be the sum of all the entries in the original table, rounded up or down. Figure 1.7 illustrates the network flow problem corresponding to the census data specified in Figure 1.6.

[Figure 1.6. (a) Table of census data: income (less than $10,000; $10,000 - $30,000; $30,000 - $50,000; more than $50,000) against time in service (hours), with row, column, and overall totals. (b) A rounded version of the data.]

If we rescale all the flows, measuring them in integral units of the rounding base (multiples of 3 in our example), then the flow on each arc must be integral, at one of two consecutive integral values. The formulation of a more general version of this problem, corresponding to tables with more than two dimensions, will not be a network flow problem. Nevertheless, these problems have an imbedded network structure (corresponding to 2-dimensional "cuts" in the table) that we can exploit in devising algorithms to find rounded versions of the tables.
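To make the construction concrete, the following sketch (ours, not from the paper) builds the arc list of the rounding network described above. Each arc carries the lower and upper bounds obtained by rounding the corresponding quantity down and up to the nearest multiple of the rounding base; the node labels and the name `rounding_network` are our own.

```python
def rounding_network(table, base):
    """Return a list of arcs (tail, head, lo, hi) for the rounding network.

    Nodes: 's', 't', ('r', i) for each row i, ('c', j) for each column j.
    Any feasible flow respecting the [lo, hi] bounds yields a valid rounding.
    """
    def bounds(v):
        lo = (v // base) * base              # round down to a multiple of base
        hi = lo if lo == v else lo + base    # round up (lo == hi if already a multiple)
        return lo, hi

    rows, cols = len(table), len(table[0])
    arcs = []
    for i in range(rows):                    # one arc per table entry
        for j in range(cols):
            lo, hi = bounds(table[i][j])
            arcs.append((('r', i), ('c', j), lo, hi))
    for i in range(rows):                    # supersource to row nodes: row sums
        lo, hi = bounds(sum(table[i]))
        arcs.append(('s', ('r', i), lo, hi))
    for j in range(cols):                    # column nodes to supersink: column sums
        lo, hi = bounds(sum(row[j] for row in table))
        arcs.append((('c', j), 't', lo, hi))
    lo, hi = bounds(sum(map(sum, table)))    # t-to-s arc carries the overall sum
    arcs.append(('t', 's', lo, hi))
    return arcs
```

For a 2 x 2 table this produces the 4 entry arcs, 2 row arcs, 2 column arcs, and the single t-to-s arc of the circulation described above.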
1.2 Complexity Analysis

There are three basic approaches for measuring the performance of an algorithm: empirical analysis, worst-case analysis, and average-case analysis. Empirical analysis typically measures the computational time of an algorithm using statistical sampling on a distribution (or several distributions) of problem instances. The major objective of empirical analysis is to estimate how algorithms behave in practice. Worst-case analysis aims to provide upper bounds on the number of steps that a given algorithm can take on any problem instance. Therefore, this type of analysis provides performance guarantees. The objective of average-case analysis is to estimate the expected number of steps taken by an algorithm. Average-case analysis differs from empirical analysis because it provides rigorous mathematical proofs of average-case performance, rather than statistical estimates.

Each of these three performance measures has its relative merits, and is appropriate for certain purposes. Nevertheless, this chapter will focus primarily on worst-case analysis, and only secondarily on empirical behavior. Researchers have designed many of the algorithms described in this chapter specifically to improve worst-case complexity while simultaneously maintaining good empirical behavior. Thus, for the algorithms we present, worst-case analysis is the primary measure of performance.
Worst-Case Analysis

For worst-case analysis, we bound the running time of network algorithms in terms of several basic problem parameters: the number of nodes (n), the number of arcs (m), and upper bounds C and U on the cost coefficients and the arc capacities. Whenever C (or U) appears in the complexity analysis, we assume that each cost (or capacity) is integer valued. As an example of a worst-case result within this chapter, we will prove that the number of steps for the label correcting algorithm to solve the shortest path problem is less than pnm steps for some sufficiently large constant p.

To avoid the need to compute or mention the constant p, researchers typically use a "big O" notation, replacing the expression "the label correcting algorithm requires pnm steps for some constant p" with the equivalent expression "the running time of the label correcting algorithm is O(nm)." The O( ) notation avoids the need to state a specific constant; instead, this notation indicates only the dominant terms of the running time. By dominant, we mean the term that would dominate all other terms for sufficiently large values of n and m. Therefore, the time bounds are called asymptotic running times. For example, if the actual running time is 10nm^2 + 2^100 nm, then we would state that the running time is O(nm^2), assuming that m ≥ n. Observe that this bound indicates that the 10nm^2 term is dominant, even though for most practical values of n and m the 2^100 nm term would dominate the actual running time. Although ignoring the constant terms may have this undesirable feature, researchers have widely adopted the O( ) notation for several reasons:
1. Ignoring the constants greatly simplifies the analysis. Consequently, the use of the O( ) notation typically has permitted analysts to avoid the prohibitively difficult analysis required to compute the leading constants, which, in turn, has led to a flourishing of research on the worst-case performance of algorithms.

2. Estimating the constants correctly is fundamentally difficult. The least value of the constants is not determined solely by the algorithm; it is also highly sensitive to the choice of the computer language, and even to the choice of the computer.

3. For all of the algorithms that we present, the constant terms are relatively small integers for all the terms in the complexity bound.

4. For large practical problems, the constant factors do not contribute nearly as much to the running time as do the factors involving n, m, C or U.
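The dominance claim in the example above (10nm^2 + 2^100 nm) is easy to check numerically; the following throwaway script is ours, not the paper's:

```python
# t(n, m) = 10*n*m**2 + 2**100 * n * m from the example above.

def term_dominant(n, m):
    return 10 * n * m**2          # asymptotically dominant term

def term_constant(n, m):
    return 2**100 * n * m         # huge constant, asymptotically dominated

# For every practical problem size the constant-heavy term is larger ...
assert term_constant(10**3, 10**4) > term_dominant(10**3, 10**4)

# ... yet once m exceeds 2**100 / 10, the O(nm^2) term takes over.
assert term_dominant(10, 2**98) > term_constant(10, 2**98)
```

This is exactly the "undesirable feature" of asymptotic bounds: the crossover point can lie far beyond any practical problem size.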
Counting Steps

The running time of a network algorithm is determined by counting the number of steps it performs. The counting of steps relies on a number of assumptions, most of which are quite appropriate for most of today's computers.

A1.1 The computer carries out instructions sequentially, with at most one instruction being executed at a time.

A1.2 Each comparison and basic arithmetic operation counts as one step.

By invoking A1.1, we are adhering to a sequential model of computations; we will not discuss parallel implementations of network flow algorithms. A1.2 implicitly assumes that the only operations to be counted are comparisons and arithmetic operations. In fact, even by counting all other computer operations, we would obtain the same asymptotic worst-case results for the algorithms that we present. Our assumption that each operation, be it an addition or division, takes equal time, is justified in part by the fact that the O( ) notation ignores differences in running times of at most a constant factor, which is the time difference between an addition and a multiplication on essentially all modern computers.

On the other hand, the assumption that each arithmetic operation takes one step may lead us to underestimate the asymptotic running time of arithmetic operations involving very large numbers on real computers since, in practice, a computer must store large numbers in several words of its memory. Therefore, to perform each operation on very large numbers, a computer must access a number of words of data and thus takes more than a constant number of steps. To avoid this systematic underestimation of the running time, in comparing two running times, we will typically assume that both C and U are polynomially bounded in n, i.e., C = O(n^k) and U = O(n^k), for some constant k. This assumption, known as the similarity assumption, is quite reasonable in practice. For example, if we were to restrict costs to be less than 100n^3, we would allow costs to be as large as 100,000,000,000 for networks with 1000 nodes.
Polynomial-Time Algorithms

An algorithm is said to be a polynomial-time algorithm if its running time is bounded by a polynomial function of the input length. The input length of a problem is the number of bits needed to represent that problem. For a network problem, the input length is a low order polynomial function of n, m, log C and log U (e.g., it is O((n + m)(log n + log C + log U))). Consequently, researchers refer to a network algorithm as a polynomial-time algorithm if its running time is bounded by a polynomial function in n, m, log C and log U. For example, the running time of one of the polynomial-time maximum flow algorithms we consider is O(nm + n^2 log U). Other instances of polynomial-time bounds are O(n^2 m) and O(n log n).

An algorithm is said to be a strongly polynomial-time algorithm if its running time is bounded by a polynomial function in only n and m, and does not involve log C or log U. The maximum flow algorithm alluded to above is therefore a polynomial-time algorithm, but not a strongly polynomial-time algorithm. The interest in strongly polynomial-time algorithms is primarily theoretical. In particular, if we invoke the similarity assumption, all polynomial-time algorithms are strongly polynomial-time because log C = O(log n) and log U = O(log n).

An algorithm is said to be an exponential-time algorithm if its running time grows as a function that cannot be polynomially bounded. Some examples of exponential time bounds are O(nC), O(2^n), O(n!) and O(n^log n). (Observe that nC cannot be bounded by a polynomial function of n and log C.) An algorithm is said to be a pseudopolynomial-time algorithm if its running time is polynomially bounded in n, m, C and U. The class of pseudopolynomial-time algorithms is an important subclass of exponential-time algorithms. Some instances of pseudopolynomial-time bounds are O(m + nC) and O(mC). For problems that satisfy the similarity assumption, pseudopolynomial-time algorithms become polynomial-time algorithms, but the algorithms will not be attractive if C and U are high degree polynomials in n.

There are two major reasons for preferring polynomial-time algorithms to exponential-time algorithms. First, any polynomial-time algorithm is asymptotically superior to any exponential-time algorithm; even in extreme cases this is true. For example, a running time of n^k, for any fixed k, is smaller than 2^(cn), for any fixed c > 0, once n is sufficiently large (though in such extreme cases n must be astronomically large before the polynomial bound wins). Figure 1.8 illustrates the asymptotic superiority of polynomial-time algorithms. The second reason is more pragmatic. Much practical experience has shown that, as a rule, polynomial-time algorithms perform better than exponential time algorithms. Moreover, the polynomials in practice are typically of a small degree.
1.3 Notation and Definitions

We consider a directed network G = (N, A) with node set N and arc set A, and let n = |N| and m = |A|. We associate with each arc (i, j) ∈ A a cost c_ij and a capacity u_ij. We assume throughout that u_ij ≥ 0 for each (i, j) ∈ A. Frequently, we distinguish two special nodes in a graph: the source s and the sink t.

An arc (i, j) has two end points, node i and node j. The arc (i, j) is incident to nodes i and j. We refer to node i as the tail and node j as the head of arc (i, j), and say that the arc (i, j) emanates from node i. The arc (i, j) is an outgoing arc of node i and an incoming arc of node j. The arc adjacency list A(i) is defined as the set of arcs emanating from node i, i.e., A(i) = {(i, j) : (i, j) ∈ A, j ∈ N}. The degree of a node is the number of incoming and outgoing arcs incident to that node.
A directed path in G = (N, A) is a sequence of distinct nodes and arcs i_1, (i_1, i_2), i_2, (i_2, i_3), i_3, ..., (i_{r-1}, i_r), i_r satisfying the property that (i_k, i_{k+1}) ∈ A for each k = 1, ..., r-1. An undirected path is defined similarly except that for any two consecutive nodes i_k and i_{k+1} on the path, the path contains either arc (i_k, i_{k+1}) or arc (i_{k+1}, i_k). We refer to the nodes i_2, i_3, ..., i_{r-1} as the internal nodes of the path. A directed cycle is a directed path together with the arc (i_r, i_1), and an undirected cycle is an undirected path together with the arc (i_r, i_1) or (i_1, i_r).

We shall often use the terminology path to designate either a directed or an undirected path, whichever is appropriate from context. If any ambiguity might arise, we shall explicitly state directed or undirected path. For simplicity of notation, we shall often refer to a path as a sequence of nodes i_1 - i_2 - ... - i_k when its arcs are apparent from the problem context. Alternatively, we shall sometimes refer to a path as a set (or sequence) of arcs without mention of the nodes. We shall use similar conventions for representing cycles.
A graph G' = (N', A') is a subgraph of G = (N, A) if N' ⊆ N and A' ⊆ A. A graph G' = (N', A') is a spanning subgraph of G = (N, A) if N' = N and A' ⊆ A. Two nodes i and j are said to be connected if the graph contains at least one undirected path from i to j. A graph is said to be connected if all pairs of nodes are connected; otherwise, it is disconnected. In this chapter, we always assume that the graph G is connected. A graph G = (N, A) is called a bipartite graph if its node set N can be partitioned into two subsets N_1 and N_2 so that for each arc (i, j) in A, i ∈ N_1 and j ∈ N_2.

We refer to any set Q ⊆ A with the property that the graph G' = (N, A-Q) is disconnected, and no superset of Q has this property, as a cutset of G. A cutset partitions the graph into two sets of nodes, X and N-X. We shall alternatively represent the cutset Q as the node partition (X, N-X).

A graph is acyclic if it contains no cycle. A tree is a connected acyclic graph. A subtree of a tree T is a connected subgraph of T. A node in a tree with degree equal to one is called a leaf node. A spanning tree of G = (N, A) is a spanning subgraph of G that is also a tree. Arcs belonging to a spanning tree T are called tree arcs, and arcs not belonging to T are called nontree arcs. A spanning tree of G = (N, A) has exactly n-1 tree arcs.

Each tree has at least two leaf nodes. A spanning tree contains a unique path between any two nodes. The addition of any nontree arc to a spanning tree creates exactly one cycle; removing any arc in this cycle again creates a spanning tree. Removing any tree-arc creates two subtrees. Arcs whose end points belong to the two different subtrees of a spanning tree created by deleting a tree-arc constitute a cutset. If any arc belonging to this cutset is added to the two subtrees, the resulting graph is again a spanning tree.

In this chapter, we assume that logarithms are of base 2 unless we state otherwise. We represent the logarithm of any number b by log b.
1.4 Network Representations

The complexity of a network algorithm depends not only on the algorithm, but also upon the manner used to represent the network within a computer and the storage scheme used for maintaining and updating the intermediate results. The running time of an algorithm (either worst-case or empirical) can often be improved by representing the network more cleverly and by using improved data structures. In this section, we discuss some popular ways of representing a network.

In Section 1.1, we have already described the node-arc incidence matrix representation of a network. This scheme requires nm words to store a network, of which only 2m words have nonzero values. Clearly, this network representation is not space efficient. Another popular way to represent a network is the node-node adjacency matrix representation. This representation stores an n x n matrix I with the property that the element I_ij = 1 if arc (i, j) ∈ A, and I_ij = 0 otherwise. The arc costs and capacities are also stored in n x n matrices. This representation is adequate for very dense networks, but is not attractive for storing a sparse network.

The forward star and reverse star representations are probably the most popular ways to represent networks, both sparse and dense. (These representations are also known as incidence list representations in the computer science literature.) The forward star representation numbers the arcs in a certain order: we first number the arcs emanating from node 1, then the arcs emanating from node 2, and so on. Arcs emanating from the same node can be numbered arbitrarily. We then sequentially store the (tail, head) and the cost of arcs in this order. We also maintain a pointer with each node i, denoted by point(i), that indicates the smallest number in the arc list of an arc emanating from node i. Hence the outgoing arcs of node i are stored at positions point(i) to (point(i+1) - 1) in the arc list. If point(i) > point(i+1) - 1, then node i has no outgoing arc. For consistency, we set point(1) = 1 and point(n+1) = m+1. Figure 1.9(b) specifies the forward star representation of the network given in Figure 1.9(a).

[Figure 1.9. (a) A network example. (b) The forward star representation: arc number, (tail, head), cost. (c) The reverse star representation.]

The forward star representation allows us to determine efficiently the set of outgoing arcs at any node. To determine, simultaneously, the set of incoming arcs at any node efficiently, we need an additional data structure known as the reverse star representation. Starting from a forward star representation, we can create a reverse star representation as follows. We examine the nodes j = 1 to n in order and sequentially store the (tail, head) and the cost of the incoming arcs of node j. We also maintain a reverse pointer with each node i, denoted by rpoint(i), which denotes the first position in these arrays that contains information about an incoming arc at node i. For consistency, we set rpoint(1) = 1 and rpoint(n+1) = m+1. The incoming arcs at node i are thus stored at positions rpoint(i) to (rpoint(i+1) - 1). This data structure gives us the representation shown in Figure 1.9(c).

Observe that by storing both the forward and reverse star representations, we will maintain a significant amount of duplicate information. We can avoid this duplication by storing arc numbers in the reverse star instead of the (tail, head) and the cost of the arcs. For example, arc (1, 2) has arc number 1 and arc (3, 2) has arc number 4 in the forward star representation. So instead of storing the (tail, head) and cost of arcs, we can simply store the arc numbers; once we know the arc numbers, we can always retrieve the associated information from the forward star representation. We store the arc numbers in an m-array trace. Figure 1.9(d) gives the complete trace array.
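The pointer arithmetic above translates directly into code. Here is a small sketch of our own (0-indexed, unlike the paper's 1-indexed arrays); the names `forward_star` and `reverse_star` are ours:

```python
def forward_star(n, arcs):
    """arcs: list of (tail, head, cost) with nodes numbered 0..n-1.
    Returns (point, sorted_arcs): outgoing arcs of node i occupy positions
    point[i] .. point[i+1]-1 of sorted_arcs."""
    sorted_arcs = sorted(arcs, key=lambda a: a[0])   # group arcs by tail node
    point = [0] * (n + 1)
    for tail, _, _ in sorted_arcs:
        point[tail + 1] += 1
    for i in range(n):                               # prefix sums give positions
        point[i + 1] += point[i]
    return point, sorted_arcs

def reverse_star(n, point, sorted_arcs):
    """Build rpoint and the trace array: the incoming arcs of node j are the
    forward-star arc numbers trace[rpoint[j] .. rpoint[j+1]-1]."""
    rpoint = [0] * (n + 1)
    for _, head, _ in sorted_arcs:
        rpoint[head + 1] += 1
    for j in range(n):
        rpoint[j + 1] += rpoint[j]
    trace = [0] * len(sorted_arcs)
    pos = rpoint[:]                                  # next free slot per node
    for arc_no, (_, head, _) in enumerate(sorted_arcs):
        trace[pos[head]] = arc_no
        pos[head] += 1
    return rpoint, trace

point, sarcs = forward_star(4, [(2, 0, 7), (0, 1, 2), (0, 2, 3), (1, 2, 1)])
rpoint, trace = reverse_star(4, point, sarcs)
# every arc number stored for node 2 really points at an incoming arc of node 2
assert all(sarcs[t][1] == 2 for t in trace[rpoint[2]:rpoint[3]])
```

Storing only arc numbers in `trace`, rather than copies of (tail, head, cost), is exactly the duplication-avoiding trick described above.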
1.5 Search Algorithms

Search algorithms are fundamental graph techniques; different variants of search lie at the heart of many network algorithms. In this section, we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.

Search algorithms attempt to find all nodes in a network that satisfy a particular property. For purposes of illustration, let us suppose that we wish to find all the nodes in a graph G = (N, A) that are reachable through directed paths from a distinguished node s, called the source. At every point in the search procedure, all nodes in the network are in one of two states: marked or unmarked. The marked nodes are known to be reachable from the source, and the status of unmarked nodes is yet to be determined. We call an arc (i, j) admissible if node i is marked and node j is unmarked, and inadmissible otherwise. Initially, only the source node is marked. Subsequently, by examining admissible arcs, the search algorithm will mark more nodes. Whenever the procedure marks a new node j by examining an admissible arc (i, j), we say that node i is a predecessor of node j, i.e., pred(j) = i. The algorithm terminates when the graph contains no admissible arcs. The following algorithm summarizes the basic iterative steps.
26
SEARCH;
algorithm
begin
unmark
all
mark node LIST
nodes
N;
in
s;
:= {s);
do
while LIST *
begin
node
select a if
node
i
i
in LIST;
incident to an admissible arc
is
(i, j)
then
begin
mark node pred(j) :=
i;
add node
j
j;
to LIST;
end else delete
node
i
from LIST;
end; end;
When this algorithm terminates, it has marked all nodes in G that are reachable from s via a directed path. The predecessor indices define a tree consisting of the marked nodes.
We use the following data structure to identify admissible arcs. The same data structure is also used in the maximum flow and minimum cost flow algorithms discussed in later sections. We maintain with each node i the list A(i) of arcs emanating from it. Arcs in each list can be arranged arbitrarily. Each node has a current arc (i, j), which is the current candidate for being examined next. Initially, the current arc of node i is the first arc in A(i). The search algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm reaches the end of the arc list, it declares that the node has no admissible arc.

It is easy to show that the search algorithm runs in O(m + n) = O(m) time. Each iteration of the while loop either finds an admissible arc or does not. In the former case, the algorithm marks a new node and adds it to LIST, and in the latter case it deletes a marked node from LIST. Since the algorithm marks any node at most once, it executes the while loop at most 2n times. Now consider the effort spent in identifying the admissible arcs. For each node i, we scan the arcs in A(i) at most once. Therefore, the search algorithm examines a total of Σ_{i ∈ N} |A(i)| = m arcs, and thus terminates in O(m) time.
The algorithm, as described, does not specify the order for examining and adding nodes to LIST. Different rules give rise to different search techniques. If the set LIST is maintained as a queue, i.e., nodes are always selected from the front and added to the rear, then the search algorithm selects the marked nodes in the first-in, first-out order. This kind of search amounts to visiting the nodes in order of increasing distance from s; therefore, this version of search is called a breadth-first search. It marks nodes in nondecreasing order of their distance from s, with the distance from s to i measured as the minimum number of arcs in a directed path from s to i.

Another popular method is to maintain the set LIST as a stack, i.e., nodes are always selected from the front and added to the front; in this instance, the search algorithm selects the marked nodes in the last-in, first-out order. This algorithm performs a deep probe, creating a path as long as possible, and backs up one node to initiate a new probe when it can mark no new nodes from the tip of the path. Hence, this version of search is called a depth-first search.
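The generic search, specialized to these two orders, can be sketched as follows. This is our own compact rendition: it marks every admissible neighbor of i in one pass instead of maintaining a current arc, which does not change the O(m) bound.

```python
from collections import deque

def search(n, adj, s, order='bfs'):
    """adj[i] = list of heads j for arcs (i, j); nodes are 0..n-1.
    Returns the predecessor array: pred[s] = s, pred[j] = -1 if j is
    unreachable from s.  LIST is a deque: popping from the left gives
    breadth-first search, popping from the right gives depth-first search."""
    pred = [-1] * n          # -1 means "unmarked"
    pred[s] = s
    LIST = deque([s])
    while LIST:
        i = LIST.popleft() if order == 'bfs' else LIST.pop()
        for j in adj[i]:
            if pred[j] == -1:            # arc (i, j) is admissible
                pred[j] = i
                LIST.append(j)
    return pred
```

With `order='bfs'` the nodes are marked in nondecreasing distance from s; with `order='dfs'` the deque behaves as a stack and the search probes deeply before backing up.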
1.6 Developing Polynomial-Time Algorithms

Researchers frequently employ two important approaches to obtain polynomial-time algorithms for network flow problems: the geometric improvement (or linear convergence) approach, and the scaling approach. In this section, we briefly outline the basic ideas underlying these two approaches. We will assume, as usual, that all data are integral and that algorithms maintain integer solutions at intermediate stages of computations.
Geometric Improvement Approach

The geometric improvement approach shows that an algorithm runs in polynomial time if at every iteration it makes an improvement proportional to the difference between the objective function values of the current and optimum solutions. Let H be an upper bound on the difference in objective function values between any two feasible solutions. For most network problems, H is a function of n, m, C, and U. For instance, in the maximum flow problem H = mU, and in the minimum cost flow problem H = mCU.
Lemma 1.1. Suppose z^k is the objective function value of some solution of a minimization problem at the k-th iteration of an algorithm and z* is the minimum objective function value. Further, suppose that the algorithm guarantees that

	(z^k - z^{k+1}) ≥ α (z^k - z*)	(1.3)

(i.e., the improvement at iteration k+1 is at least α times the total possible improvement) for some constant α with 0 < α < 1. Then the algorithm terminates in O((log H)/α) iterations.
Proof. The quantity (z^k - z*) represents the total possible improvement in the objective function value after the k-th iteration. Consider a consecutive sequence of 2/α iterations starting from iteration k. If in each iteration, the algorithm improves the objective function value by at least α(z^k - z*)/2 units, then the algorithm would determine an optimum solution within these 2/α iterations. On the other hand, if at some iteration q in this sequence, the algorithm improves the objective function value by no more than α(z^k - z*)/2 units, then (1.3) implies that

	α(z^k - z*)/2 ≥ z^q - z^{q+1} ≥ α(z^q - z*),

so that (z^q - z*) ≤ (z^k - z*)/2; therefore, the algorithm must have reduced the total possible improvement (z^k - z*) by a factor of 2 within these 2/α iterations. Since H is the maximum possible improvement and every objective function value is an integer, the algorithm must terminate within O((log H)/α) iterations.
We have stated this result for minimization versions of optimization problems. A similar result applies to maximization versions of optimization problems.

The geometric improvement approach might be summarized by the statement "network algorithms that have a geometric convergence rate are polynomial-time algorithms." In order to develop polynomial-time algorithms using this approach, we can look for local improvement techniques that lead to large (i.e., fixed percentage) improvements in the objective function. The maximum augmenting path algorithm for the maximum flow problem and the maximum improvement algorithm for the minimum cost flow problem are two examples of this approach. (See Sections 4.2 and 5.3.)
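Lemma 1.1 is easy to probe numerically. The following small experiment (ours, not part of the paper) models the worst case in which every iteration closes exactly the guaranteed fraction α of the remaining optimality gap:

```python
import math

def iterations_to_optimum(H, alpha):
    """Iterate until the remaining gap drops below 1; since objective
    values are integral, the solution is then optimal."""
    gap, count = float(H), 0
    while gap >= 1.0:
        gap -= alpha * gap        # improvement of alpha * (z^k - z*)
        count += 1
    return count

H, alpha = 10**6, 0.05
# Lemma 1.1 predicts termination within O((log H)/alpha) iterations.
assert iterations_to_optimum(H, alpha) <= math.ceil(math.log(H) / alpha) + 1
```

The gap shrinks by a factor (1 - α) per iteration, and since -ln(1 - α) ≥ α, at most (ln H)/α + 1 iterations are needed, matching the lemma's bound.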
Scaling Approach

Researchers have extensively used an approach called scaling to derive polynomial-time algorithms for a wide variety of network and combinatorial optimization problems. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 presents an example of a bit-scaling algorithm for the assignment problem. Sections 4 and 5, using more refined versions of scaling, describe polynomial-time algorithms for the maximum flow and minimum cost flow problems.
Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P_1, P_2, P_3, ..., P_K: the problem P_1 approximates data to the first bit, the problem P_2 approximates data to the second bit, and each successive problem is a better approximation until P_K = P. Further, for each k = 2, ..., K, the optimum solution of problem P_{k-1} serves as the starting solution for problem P_k. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.
For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem P_k would consider the capacity of each arc as the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. This manner of defining arc capacities easily implies the following observation.

Observation. The capacity of an arc in P_k is twice that in P_{k-1} plus 0 or 1.
30
100
<=^
(a)
PI
(b)
:
P2
100
010
P3:
(c)
Figure
(a)
(b) (c)
1.10.
Example of a
Network with
bit-scaling technique.
arc capacities.
Network with binary expansion of The problems Pj, P2, and P3.
arc capacities.
31
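The capacity definition behind the observation can be sketched in a few lines (our code, not the paper's): in P_k each capacity is the k leading bits of its K-bit binary expansion. We use K = ceil(log2(U + 1)) bits, which agrees with K = ⌈log U⌉ for capacities that are not exact powers of two.

```python
import math

def scaled_problems(caps):
    """Return the capacity vectors of P_1, ..., P_K for the given capacities."""
    U = max(caps)
    K = max(1, math.ceil(math.log2(U + 1)))          # bits needed to write U
    # P_k keeps the k leading bits, i.e. shifts away the K-k trailing bits
    return [[c >> (K - k) for c in caps] for k in range(1, K + 1)]

P = scaled_problems([4, 5, 7, 1])
assert P[-1] == [4, 5, 7, 1]                         # P_K is the original problem
# Observation: each capacity in P_k is twice that in P_{k-1}, plus 0 or 1.
for prev, cur in zip(P, P[1:]):
    assert all(c in (2 * p, 2 * p + 1) for p, c in zip(prev, cur))
```

Appending one more bit either doubles a capacity (new bit 0) or doubles it and adds 1 (new bit 1), which is exactly the observation.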
The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
    obtain an optimum solution of P_1;
    for k := 2 to K do
    begin
        reoptimize using the optimum solution of P_{k-1}
        to obtain an optimum solution of P_k;
    end;
end;

This approach is very robust; variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. This approach works well for these applications, in part, because of the following reasons. (i) The problem P_1 is generally easy to solve. (ii) The optimal solution of problem P_{k-1} is an excellent starting solution for problem P_k, since P_{k-1} and P_k are quite similar; hence, the optimum solution of P_{k-1} can be easily reoptimized to obtain an optimum solution of P_k. (iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let v_k denote the maximum flow value for problem P_k and let x_k denote an arc flow corresponding to v_k. In the problem P_k, the capacity of an arc is twice its capacity in P_{k-1} plus 0 or 1. If we multiply the optimum flow x_{k-1} for P_{k-1} by 2, we obtain a feasible flow for P_k. Moreover, v_k - 2v_{k-1} ≤ m, because multiplying the flow x_{k-1} by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we increase the maximum flow from source to sink by at most 1). In general, it is easier to reoptimize such a maximum flow problem. For example, the classical labeling algorithm as discussed in Section 4.1 would perform the reoptimization in at most m augmentations, taking O(m^2) time. Therefore, the scaling version of the labeling algorithm runs in O(m^2 log U) time, whereas the non-scaling version runs in O(nmU) time. The former time bound is polynomial and the latter bound is only pseudopolynomial. Thus this simple scaling algorithm improves the running time dramatically.
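The reoptimization argument above can be made runnable. The sketch below is ours and deliberately naive: `augment_to_max` is a simple labeling (augmenting path) method, not the paper's implementation, and `scaling_max_flow` solves P_1, then for each k doubles the previous flow and augments.

```python
from collections import deque

def augment_to_max(cap, flow, s, t):
    """Augment a feasible flow (dict arc -> value) until no augmenting
    path remains in the residual network of the capacities cap."""
    nodes = {i for arc in cap for i in arc}

    def residual(i, j):
        return cap.get((i, j), 0) - flow.get((i, j), 0) + flow.get((j, i), 0)

    while True:
        pred, Q = {s: s}, deque([s])          # breadth-first labeling from s
        while Q and t not in pred:
            i = Q.popleft()
            for j in nodes:
                if j not in pred and residual(i, j) > 0:
                    pred[j] = i
                    Q.append(j)
        if t not in pred:
            return flow                        # no augmenting path: flow is maximum
        path, j = [], t
        while j != s:                          # recover the s-t path
            path.append((pred[j], j))
            j = pred[j]
        delta = min(residual(i, j) for i, j in path)
        for i, j in path:                      # push delta, cancelling reverse flow first
            back = min(flow.get((j, i), 0), delta)
            flow[(j, i)] = flow.get((j, i), 0) - back
            flow[(i, j)] = flow.get((i, j), 0) + delta - back

def scaling_max_flow(cap, s, t, K):
    """Bit-scaling: solve P_1, ..., P_K, warm-starting each P_k with
    twice the optimum flow of P_{k-1} (feasible by the observation)."""
    flow = {}
    for k in range(1, K + 1):
        cap_k = {a: c >> (K - k) for a, c in cap.items()}
        flow = {a: 2 * f for a, f in flow.items()}
        augment_to_max(cap_k, flow, s, t)
    return flow
```

On a small example with largest capacity 4 (so K = 3), the final problem P_3 is the original one, and the warm-started augmentations recover its maximum flow.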
2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs, as in our formulation in Section 1.1, or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions). Consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several important connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.
2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages. Therefore, as the first step in our discussion, we will find it worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the paths P and cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, which are defined for every directed path p in P and every directed cycle q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation by defining some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q and 0 otherwise. Then

	x_ij = Σ_{p ∈ P} δ_ij(p) h(p) + Σ_{q ∈ Q} δ_ij(q) f(q).
If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows and that the path flow vector h and cycle flow vector f constitute a path and cycle flow representation of the flow. Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.
Theorem 2.1: Flow Decomposition Property (Directed Case). Every directed path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.
Proof. In the light of our previous observations, we need to establish only the converse assertions. We give an algorithmic proof to show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i_0 is a supply node. Then some arc (i_0, i_1) carries a positive flow. If i_1 is a demand node, then we stop; otherwise the mass balance constraint (1.1b) of node i_1 implies that some other arc (i_1, i_2) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node. Note that one of these cases will occur within n steps. In the former case we obtain a directed path p from the supply node i_0 to some demand node i_k consisting solely of arcs with positive flow, and in the latter case we obtain a directed cycle q. If we obtain a directed path, we let h(p) = min [b(i_0), -b(i_k), min {x_ij : (i, j) ∈ p}], and redefine b(i_0) = b(i_0) - h(p), b(i_k) = b(i_k) + h(p), and x_ij = x_ij - h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij = x_ij - f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select a transhipment node with at least one outgoing arc with positive flow as the starting node, and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of the flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero; and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) paths and cycles, of which there are at most m cycles.
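The peeling argument in this proof translates directly into code. The following Python sketch is our own illustration, not the paper's: it assumes nodes are numbered 1..n and that b and x satisfy the mass balance constraints (1.1b), and it returns the identified paths and cycles together with their flow values.

```python
# Sketch of the constructive proof of Theorem 2.1: repeatedly trace arcs with
# positive flow from a supply node until a demand node or a repeated node is
# found, then peel off a path flow h(p) or a cycle flow f(q).
def decompose(n, b, x):
    """n: node count (nodes 1..n); b: dict node -> supply (+) / demand (-);
    x: dict (i, j) -> nonnegative arc flow obeying mass balance.
    Returns (paths, cycles): lists of (node_list, flow_value)."""
    b, x = dict(b), dict(x)              # work on copies
    out = {i: [] for i in range(1, n + 1)}
    for (i, j) in x:
        out[i].append(j)
    paths, cycles = [], []

    def positive_out(i):                 # some arc (i, j) with x_ij > 0
        return next((j for j in out[i] if x[i, j] > 0), None)

    while True:
        # start from a supply node if one remains, else any node with flow out
        start = next((i for i in b if b[i] > 0), None)
        if start is None:
            start = next((i for (i, j) in x if x[i, j] > 0), None)
        if start is None:
            return paths, cycles         # x has been reduced to zero
        trace, seen = [start], {start}
        while True:
            j = positive_out(trace[-1])
            if b.get(j, 0) < 0 and b.get(start, 0) > 0:
                trace.append(j)          # reached a demand node: peel a path
                h = min(b[start], -b[j],
                        min(x[u, v] for u, v in zip(trace, trace[1:])))
                b[start] -= h; b[j] += h
                for u, v in zip(trace, trace[1:]):
                    x[u, v] -= h
                paths.append((trace, h))
                break
            if j in seen:                # closed a directed cycle: peel it
                cyc = trace[trace.index(j):] + [j]
                f = min(x[u, v] for u, v in zip(cyc, cyc[1:]))
                for u, v in zip(cyc, cyc[1:]):
                    x[u, v] -= f
                cycles.append((cyc, f))
                break
            trace.append(j); seen.add(j)

b = {1: 2, 2: 0, 3: -2, 4: 0}
x = {(1, 2): 2, (2, 3): 2, (2, 4): 1, (4, 2): 1}
print(decompose(4, b, x))   # ([([1, 2, 3], 2)], [([2, 4, 2], 1)])
```

On this small instance the procedure finds one path carrying the 2 units of supply and one cycle of value 1, consistent with bound C2.2.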
It is possible to state the decomposition property in a somewhat more general form that permits arc flows x_ij to be negative. In this setting, even though the underlying network is directed, the paths and cycles can be undirected, and can contain arcs with negative flows. Each undirected path p, which has an orientation from its initial to its final node, has forward arcs and backward arcs, which are defined as arcs along and opposite to the path's orientation. A path flow will be defined on p as a flow with value h(p) on each forward arc and -h(p) on each backward arc. We define a cycle flow in the same way. In this more general setting, our representation using the notation δ_ij(p) and δ_ij(q) is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be -1 if arc (i, j) is a backward arc of the path or cycle.
Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof.
This proof is similar to that of Theorem 2.1. The major modification is that we extend the path (i_1, ..., i_{k-1}) at some node i_{k-1} by adding an arc (i_{k-1}, i_k) with positive flow or an arc (i_k, i_{k-1}) with negative flow. The other steps can be modified accordingly.
The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way and to show how we can build one solution from another by a sequence of simple operations.

We need the concept of augmenting cycles with respect to a flow x. A cycle q with flow f(q) > 0 is called an augmenting cycle with respect to a flow x if

    0 ≤ x_ij + δ_ij(q) f(q) ≤ u_ij, for each arc (i, j) ∈ q.
In other words, the flow remains feasible if some positive amount of flow (namely f(q)) is augmented around the cycle q. The augmenting cycle thus represents a change in the flow x. We define the cost of an augmenting cycle q as c(q) = Σ_{(i, j) ∈ A} c_ij δ_ij(q). The cost of an augmenting cycle represents the change in the cost of a feasible solution if we augment along the cycle with one unit of flow; hence, the change in flow cost for augmenting around cycle q with flow f(q) is c(q) f(q).

Suppose that x and y are any two feasible solutions of a network flow problem, i.e., Nx = b, 0 ≤ x ≤ u and Ny = b, 0 ≤ y ≤ u. Then the difference vector z = y - x satisfies the homogeneous equations Nz = Ny - Nx = 0. Consequently, flow decomposition implies that z can be represented as cycle flows, i.e., we can find at most r ≤ m cycle flows f(q_1), f(q_2), ..., f(q_r) satisfying the property that for each arc (i, j) ∈ A,

    z_ij = δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r).

Since y = x + z, for each arc (i, j) ∈ A we have

    0 ≤ y_ij = x_ij + δ_ij(q_1) f(q_1) + δ_ij(q_2) f(q_2) + ... + δ_ij(q_r) f(q_r) ≤ u_ij.

Now, by condition C2.4 of the flow decomposition property, arc (i, j) is either a forward arc on each cycle q_1, q_2, ..., q_r that contains it or a backward arc on each cycle q_1, q_2, ..., q_r that contains it. Therefore, the terms δ_ij(q_k) f(q_k) all have the same sign, and consequently 0 ≤ x_ij + δ_ij(q_k) f(q_k) ≤ u_ij for each cycle q_k and each arc (i, j). That is, each cycle q_k is an augmenting cycle with respect to the flow x: if we add any one of these cycle flows to x, the resulting solution remains feasible on each arc (i, j).
We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on the augmenting cycles.
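The definition of an augmenting cycle can be checked mechanically: a cycle flow is augmenting exactly when every adjusted arc flow stays within its bounds. The following small sketch (our own notation and data, not the paper's) encodes δ_ij(q) as +1 on forward arcs and -1 on backward arcs and tests the condition above.

```python
# Check whether a cycle q with flow value f is augmenting with respect to x:
# 0 <= x_ij + delta_ij(q) * f <= u_ij must hold on every arc of the cycle.
def is_augmenting(x, u, delta, f):
    """x, u, delta: dicts keyed by arc (i, j); f: positive cycle flow value."""
    return all(0 <= x[a] + delta[a] * f <= u[a] for a in delta)

x     = {(1, 2): 3, (3, 2): 1, (3, 1): 2}
u     = {(1, 2): 5, (3, 2): 4, (3, 1): 4}
delta = {(1, 2): 1, (3, 2): -1, (3, 1): 1}   # cycle 1-2-3-1, arc (3, 2) backward
print(is_augmenting(x, u, delta, 1))   # True
print(is_augmenting(x, u, delta, 2))   # False: flow on (3, 2) would drop below 0
```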
The augmenting cycle property permits us to formulate optimality conditions for characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is an optimum solution of the minimum cost flow problem, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* - x can be decomposed into at most m augmenting cycles with respect to x, and the sum of the costs of these cycles equals cx* - cx. If cx* < cx, then one of these cycles must have a negative cost. Further, if every augmenting cycle in the decomposition of x* - x has a nonnegative cost, then cx* - cx ≥ 0. Since x* is an optimum flow, cx* = cx and x is also an optimum flow. We have thus obtained the following result.

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.
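Theorem 2.4 suggests a concrete optimality test: form the residual network of x (a forward arc of cost c_ij wherever x_ij < u_ij, a backward arc of cost -c_ij wherever x_ij > 0) and search it for a negative cost cycle, for example with Bellman-Ford relaxations. The sketch below, including the residual construction and the tiny example network, is our own illustration of this idea, not the paper's algorithm.

```python
# Negative cost augmenting cycles correspond to negative cycles in the
# residual network of x; Bellman-Ford detects them.
def residual_arcs(arcs, x):
    """arcs: dict (i, j) -> (cost, capacity); x: dict (i, j) -> flow.
    Yields residual arcs (i, j, cost)."""
    for (i, j), (c, u) in arcs.items():
        if x[i, j] < u:        # room to push more flow forward
            yield (i, j, c)
        if x[i, j] > 0:        # room to cancel flow: backward arc, cost -c
            yield (j, i, -c)

def has_negative_cycle(nodes, arcs, x):
    """A label that still improves after n-1 rounds => negative cycle."""
    res = list(residual_arcs(arcs, x))
    d = {v: 0 for v in nodes}  # all labels 0: acts as a virtual source
    for _ in range(len(nodes) - 1):
        for i, j, c in res:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
    return any(d[i] + c < d[j] for i, j, c in res)

nodes = [1, 2, 3]
arcs = {(1, 2): (2, 4), (2, 3): (3, 4), (3, 1): (-6, 4)}
x0 = {(1, 2): 0, (2, 3): 0, (3, 1): 0}   # zero flow: cycle 1-2-3-1 costs -1
x1 = {(1, 2): 4, (2, 3): 4, (3, 1): 4}   # cycle saturated: no residual cycle
print(has_negative_cycle(nodes, arcs, x0))   # True
print(has_negative_cycle(nodes, arcs, x1))   # False
```

By Theorem 2.4, x1 (the saturated cycle flow) passes the test and is optimal for its supply/demand data, while x0 does not.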
2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution to the network flow problem

    minimize { cx : Nx = b and l ≤ x ≤ u }

and that l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1. In the example, arc flows and costs are given beside each arc.
[Figure 2.1. Improving flow around a cycle.]

Let us assume for the time being that all arcs are uncapacitated. The network in this figure contains flow around an undirected cycle. Note that adding a given amount of flow θ to all the arcs pointing in a clockwise direction and subtracting this flow from all arcs pointing in the counterclockwise direction preserves the mass balance at each node. Also, note that the per unit incremental cost for this flow change is the sum of the costs of the clockwise arcs minus the sum of the costs of the counterclockwise arcs, i.e.,

    Per unit change in cost = Δ = $2 + $1 + $3 - $4 - $3 = -$1.

Let us refer to this incremental cost Δ as the cycle cost and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ. Consequently, to minimize cost in our example, we set θ as large as possible while preserving nonnegativity of all arc flows, i.e., 3 - θ ≥ 0 and 4 - θ ≥ 0, or θ ≤ 3; that is, we set θ = 3. Note that in the new solution (at θ = 3), we no longer have positive flow on all arcs in the cycle.

Similarly, if the cycle cost were positive (i.e., if we were to change c_12 from 2 to 4), then we would decrease θ as much as possible (i.e., 5 + θ ≥ 0, 2 + θ ≥ 0, and 4 + θ ≥ 0, or θ ≥ -2) and again find a lower cost solution with the flow on at least one arc in the cycle at value zero. We can restate this observation in another way: to preserve nonnegativity of all flows, we must select θ in the interval -2 ≤ θ ≤ 3. Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = -2, at which point one arc in the cycle has a flow value of zero.
We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, we are indifferent to all solutions in the interval -2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, e.g., of 6 units on all arcs, then the range of flows that preserves feasibility (i.e., mass balances, lower and upper bounds on flows) is again an interval, in this case -2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = -2 or θ = 1. At these values of θ, the solution is cycle free; that is, for some arc on the cycle, either the flow is zero (at its lower bound) or the flow is at its upper bound (x_12 = 6 at θ = 1).

In general, our prior observations apply to any cycle in a network. Some additional notation will be helpful in encapsulating these observations. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it. We will also say that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x has the "cycle free property" if the network contains no cycle made up entirely of free arcs. Therefore, given any initial flow, we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.
Note that the lower bound assumption imposed upon the objective value is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large in a negative cost cycle, or arbitrarily small (negative) in a positive cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.
It is useful to interpret the cycle free property in another way. Suppose the network is connected (i.e., there is an undirected path connecting every two pairs of nodes). Then, either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) No superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying properties (i) through (iii) as a spanning tree of the network, and to any feasible solution x for the network together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S. We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree. In this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound; in this case, we will say that the spanning tree is degenerate.
[Figure 2.2. Converting a cycle free solution to a spanning tree solution: (a) an example network with arc flows and capacities represented as (x_ij, u_ij); (b) a cycle free solution; (c) a spanning tree solution.]
When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result of network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.
We might note that the spanning tree property is valid for concave cost versions of the flow problem as well, i.e., those versions where the objective function is a concave function of the flow vector x. This extended version of the spanning tree property is valid because if the incremental cost of a cycle is negative at some point, then it remains negative (by concavity) as we augment positive amounts of flow around the cycle. Hence, we can increase flow in a negative cost cycle until the flow on at least one arc reaches its lower or upper bound.
2.3 Networks, Linear and Integer Programming

The cycle free property and the spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear and integer programming. This positioning may, to a large extent, account for the emergence of network flow theory as a cornerstone of mathematical programming.
Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, nodes that are incident to only one arc of the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that the leaf node is row 1 and its incident arc is column 1, then row 1 has only a single nonzero entry, a +1 or a -1, which lies on the diagonal of the node-arc incidence matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but row and column 1 of the node-arc incidence matrix for the spanning tree, we can now assume that row 2 has a +1 or -1 element on the diagonal and zeros to the right of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n-1 rows form a lower triangular matrix L. Figure 2.3 shows the resulting lower triangular form (actually, one of several possibilities) for the spanning tree in Figure 2.2(c).
Now further suppose that the supply/demand vector b and the lower and upper bound vectors l and u have all integer components. Then the right hand side of the system (2.1) is an integer vector, since each of its components is the sum of components of b and of arc lower or upper bounds, and every entry of the matrix is 0, +1, or -1. The first equation in (2.1) implies that x_1 is integral, since the first diagonal element equals +1 or -1 and the right hand side is integral. If we now move x_1 to the right of the equality in (2.1), the right hand side remains integral, and we can solve for x_2 from the second equation; continuing this forward substitution by successively solving for one variable at a time shows that all the remaining components of x are integral as well.

This argument shows that for problems with integral data, every spanning tree solution is integral. Since the spanning tree property ensures that network flow problems always have spanning tree solutions, we have established the following fundamental result.
Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.
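The forward substitution argument can be illustrated numerically. In the sketch below (our own toy system, not the paper's Figure 2.3), the matrix plays the role of the rearranged triangular incidence matrix: its diagonal entries are +1 or -1, so with integer data every solved variable is an integer.

```python
# Forward substitution on a lower triangular system L x = b whose diagonal
# entries are +1 or -1.  Each step divides by +1 or -1, so x stays integral.
def forward_substitute(L, b):
    """L: lower triangular list-of-lists with L[i][i] in {+1, -1}; b: ints."""
    n = len(b)
    x = [0] * n
    for i in range(n):
        # move the already-solved terms to the right hand side, then divide
        rhs = b[i] - sum(L[i][j] * x[j] for j in range(i))
        x[i] = rhs // L[i][i]          # exact division: diagonal is +1 or -1
    return x

# A node-arc incidence matrix of a spanning tree, rearranged leaf-first,
# has exactly this triangular +1/-1 structure.  Small integer example:
L = [[ 1,  0,  0],
     [-1, -1,  0],
     [ 0,  1,  1]]
b = [4, -7, 2]
print(forward_substitute(L, b))   # [4, 3, -1] -- integral, as Theorem 2.8 predicts
```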
Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave.
Relationship to Linear Programming

The network flow problem with the objective function cx is a linear program which, as the last result shows, always has an integer solution. Network flow problems are distinguished as the most important large class of problems with this property. Linear programs, or generalizations with concave cost objective functions, also satisfy another well-known property: they always have, in the parlance of convex analysis, extreme point solutions; that is, solutions x with the property that x cannot be expressed as a weighted combination of two other feasible solutions y and z, i.e., as x = αy + (1-α)z for some weight 0 < α < 1. Since, as we have seen, network flow problems always have cycle free solutions, we might expect to discover that extreme point solutions and cycle free solutions are closely related, and indeed they are, as shown by the next result.
Theorem 2.9: Extreme Point Property. For network flow problems, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution. Consequently, if the objective value of the network optimization problem minimize { cx : Nx = b, l ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution.
Proof. With the background developed already, this result is easy to establish. First, if x is not a cycle free solution, then it cannot be an extreme point, since by perturbing the flow by a small amount θ and by a small amount -θ around a cycle with free arcs, as in our discussion of Figure 2.1, we can define two feasible solutions y and z with the property that x = (1/2)y + (1/2)z.

Conversely, suppose that x is not an extreme point; then it can be represented as x = αy + (1-α)z with 0 < α < 1 and y ≠ z. Let x', y' and z' be the components of these vectors for which y and z differ, i.e., y_ij is not equal to z_ij, and let N' denote the submatrix of N corresponding to these arcs. For each of these arcs, l_ij ≤ y_ij < x_ij < z_ij ≤ u_ij or l_ij ≤ z_ij < x_ij < y_ij ≤ u_ij. Moreover, N'(z' - y') = 0, which implies, by flow decomposition, that the network contains an undirected cycle with y_ij not equal to z_ij for any arc on the cycle. But by definition of the components x', y' and z', this cycle contains only free arcs in the solution x. Therefore, x is not a cycle free solution.
In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns B of the constraint matrix of a linear program corresponding to variables strictly between their lower and upper bounds are linearly independent. We can extend B to a basis of the constraint matrix by adding a maximal number of columns. Just as cycle free solutions for network flow problems correspond to extreme points, spanning tree solutions correspond to basic solutions.

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.
Let us now make one final connection between networks and linear and integer programming: namely, between bases and the integrality property. Consider a linear program of the form Ax = b, suppose that N = [B, M] for some basis B and that x = (x^B, x^M) is a compatible partitioning of x, and suppose that we eliminate the redundant row so that B is a nonsingular matrix. Then Bx^B = b - Mx^M, or x^B = B^{-1}(b - Mx^M). Also, by Cramer's rule from linear algebra, it is possible to find each component of x^B as sums and multiples of components of b' = b - Mx^M, divided by det(B), the determinant of B. Therefore, if the determinant of B equals +1 or -1, then x^B is an integer vector whenever x^M and b are composed of all integers. In particular, if all bases have determinants +1 or -1, then every basic feasible solution x is an integer vector whenever the problem data A, b, l and u are all integers.

Let us call a basis B unimodular if its determinant is +1 or -1, and call a matrix A totally unimodular if all of its square submatrices have determinant equal to either 0, +1, or -1. How are these notions related to network flows and the integrality property? Since bases of N correspond to spanning trees, the triangularity property shows that the determinant of any basis (excluding the redundant row now) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or -1. Consequently, a node-arc incidence matrix is unimodular. Even more, it is totally unimodular. For let S be any square submatrix of N. If S is singular, it has determinant 0. Otherwise, it must correspond to a cycle free solution, which is a spanning tree on each of its connected components. But then it is easy to see that the determinant of S is the product of the determinants of these spanning trees and, therefore, it must be equal to +1 or -1. (An induction argument, using an expansion of determinants by minors, provides an alternate proof of this totally unimodular property.)

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.
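Theorem 2.11 can be checked by brute force on a small instance. The following sketch (our own example network and helper functions, not the paper's) enumerates every square submatrix of a 4-node node-arc incidence matrix and verifies that each determinant is 0, +1, or -1.

```python
# Brute-force verification of total unimodularity for one small
# node-arc incidence matrix (rows = nodes, columns = arcs).
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

# Arcs (1,2), (1,3), (2,3), (3,4), (4,2); +1 at the tail, -1 at the head.
N = [[ 1,  1,  0,  0,  0],
     [-1,  0,  1,  0, -1],
     [ 0, -1, -1,  1,  0],
     [ 0,  0,  0, -1,  1]]

def totally_unimodular(N):
    rows, cols = len(N), len(N[0])
    for k in range(1, min(rows, cols) + 1):
        for R in combinations(range(rows), k):
            for C in combinations(range(cols), k):
                if det([[N[i][j] for j in C] for i in R]) not in (-1, 0, 1):
                    return False
    return True

print(totally_unimodular(N))   # True
```

A matrix without the one-(+1)-one-(-1) column structure, such as [[1, 1], [-1, 1]], fails the same test (its determinant is 2), which shows the check is not vacuous.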
2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x'_ij + l_ij in the problem formulation. As measured by the new variable x'_ij, the flow on arc (i, j) will have a lower bound of 0. This transformation has a simple network interpretation: we begin by sending l_ij units of flow on the arc and then measure the incremental flow above l_ij.
[Figure 2.4. Removing a nonzero lower bound: arc (i, j) with data (c_ij, l_ij, u_ij) becomes an arc with data (c_ij, 0, u_ij - l_ij), while the supplies change from b(i) and b(j) to b(i) - l_ij and b(j) + l_ij.]

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, we can remove the capacity, making the arc uncapacitated, using the following ideas. The capacity constraint of arc (i, j) can be written as x_ij + s_ij = u_ij, if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by -1, we obtain

    -x_ij - s_ij = -u_ij.    (2.2)

This transformation is tantamount to turning the slack variable into an additional node k with equation (2.2) as the mass balance constraint for that node. Observe that the variable x_ij now appears in three mass balance constraints and s_ij in only one. By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints: in one with a positive sign and in the other with a negative sign. These algebraic manipulations correspond to the following network transformation.

[Figure 2.5. Removing arc capacities: arc (i, j) with data (c_ij, u_ij) is replaced by arcs (i, k) with data (c_ij, ∞) and (j, k) with data (0, ∞), where the new node k has supply -u_ij and node j has supply b(j) + u_ij.]

In the network context, this transformation implies the following. If x_ij is the flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x'_ik = x_ij and x'_jk = u_ij - x_ij; both the flows x and x' have the same cost. Likewise, a flow x'_ik, x'_jk in the transformed network yields a flow x_ij = x'_ik of the same cost in the original network. Further, since x'_ik and x'_jk are both nonnegative, x_ij = x'_ik ≤ u_ij. Consequently, this transformation is valid.
T3. (Arc Reversal). Let u_ij represent the capacity of the arc (i, j), or an upper bound on the arc's flow if the arc is uncapacitated. This transformation is a change in variable: we replace x_ij by u_ij - x_ji in the problem formulation. Doing so replaces the arc (i, j), which has an associated cost c_ij, by the arc (j, i) with cost -c_ij. Therefore, this transformation permits us to remove arcs with negative costs. It has the following network interpretation: we first send the "full capacity" flow of u_ij units on the arc, changing b(i) to b(i) - u_ij and b(j) to b(j) + u_ij; the new variable x_ji then measures the amount of flow that we "remove" from the full capacity flow.

[Figure 2.6. An example of arc reversal.]

T4. (Node Splitting). This transformation splits each node i into two nodes i and i', and replaces each original arc (i, j) by an arc (i', j) of the same cost and capacity. It also adds an arc (i, i') of zero cost for each node i. Figure 2.7 illustrates the resulting network when we carry out the node splitting transformation for all the nodes of a network.
[Figure 2.7. (a) The original network. (b) The transformed network.]

We shall see the usefulness of this transformation in Section 5.11 when we use it to reduce a shortest path problem with arbitrary arc lengths to an assignment problem. This transformation is also used in practice for representing node activities in the standard "arc flow" form of the network flow problem: we simply associate the cost or capacity for the throughput of node i with the new throughput arc (i, i').
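Transformations T1 and T3 are simple enough to express as data manipulations. The sketch below uses our own dictionary representation of arcs as (cost, lower bound, capacity) triples; it illustrates the bookkeeping on supplies and bounds and is not a standard interface.

```python
# T1 (removing nonzero lower bounds) and T3 (arc reversal) as operations on
# arc data; b maps nodes to supplies and is updated in place.
def remove_lower_bound(b, arc, data):
    """T1: pre-ship l_ij units along (i, j), then measure flow above l_ij."""
    i, j = arc
    c, l, u = data            # cost, lower bound, capacity
    b[i] -= l                 # the pre-shipped flow leaves node i ...
    b[j] += l                 # ... and arrives at node j
    return (c, 0, u - l)      # new bounds: 0 <= x'_ij <= u_ij - l_ij

def reverse_arc(b, arc, data):
    """T3: substitute x_ij = u_ij - x_ji; the reversed arc gets cost -c_ij."""
    i, j = arc
    c, l, u = data
    assert l == 0             # apply T1 first if the lower bound is nonzero
    b[i] -= u                 # "full capacity" flow shipped on (i, j)
    b[j] += u
    return ((j, i), (-c, 0, u))

b = {1: 5, 2: -5}
print(remove_lower_bound(b, (1, 2), (3, 2, 7)), b)   # (3, 0, 5) {1: 3, 2: -3}
print(reverse_arc(b, (1, 2), (3, 0, 5)), b)          # ((2, 1), (-3, 0, 5)) {1: -2, 2: 2}
```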
3. SHORTEST PATHS

Shortest path problems are the most fundamental and also the most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when trying to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems such as vehicle routing and network design often call for the solution of a large number of shortest path problems as subroutines. Consequently, designing and testing efficient algorithms for the shortest path problem has been a major area of research in network optimization.
Researchers have studied several different (directed) shortest path models. The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding shortest paths from every node to every other node; and (iv) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path).
In this section, we discuss problem types (i), (ii) and (iii). The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Label correcting methods consider all labels as temporary until the final step, when they all become permanent. We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown the label correcting methods to be modestly more efficient.
Dijkstra's algorithm is the most popular label setting method. In this section, we first discuss a simple implementation of this algorithm that achieves a time bound of O(n^2). We then describe two more sophisticated implementations that achieve improved running times in practice and in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another implementation that performs very well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.
3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A. Let A(i) represent the set of arcs emanating from node i ∈ N, and let C = max { c_ij : (i, j) ∈ A }. In this section, we assume that arc lengths are integer numbers, and in this section as well as in Sections 3.2 and 3.3, we further assume that arc lengths are nonnegative. We suppose that node s is a specially designated node, and assume without any loss of generality that the network G contains a directed path from s to every other node. We can ensure this condition by adding an artificial arc (s, j), with a suitably large arc length, for each node j. We invoke this connectivity assumption throughout this section.
Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans the arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following algorithmic representation is a basic implementation of Dijkstra's algorithm.
algorithm DIJKSTRA;
begin
    P := {s}; T := N - {s};
    d(s) := 0 and pred(s) := 0;
    d(j) := c_sj and pred(j) := s if (s, j) ∈ A, and d(j) := ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P := P ∪ {i}; T := T - {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then d(j) := d(i) + c_ij and pred(j) := i;
    end;
end;
The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to node i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.
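For readers who prefer executable code, the pseudocode above translates almost line for line into Python. The adjacency representation, the O(n) node selection by scanning T, and the path-tracing helper below are our own illustrative choices, not part of the paper.

```python
# Direct transcription of the DIJKSTRA pseudocode, plus predecessor tracing.
import math

def dijkstra(nodes, arcs, s):
    """arcs: dict (i, j) -> nonnegative length c_ij. Returns (d, pred)."""
    d = {j: math.inf for j in nodes}
    pred = {j: None for j in nodes}
    d[s] = 0
    T = set(nodes)
    while T:
        i = min(T, key=lambda j: d[j])    # node selection: O(n) scan of T
        T.remove(i)                        # the label of i becomes permanent
        for (u, v), c in arcs.items():     # distance update (filter stands in for A(i))
            if u == i and d[i] + c < d[v]:
                d[v] = d[i] + c
                pred[v] = i
    return d, pred

def trace_path(pred, j):
    """Follow predecessor indices back from j to the source."""
    path = [j]
    while pred[path[-1]] is not None:
        path.append(pred[path[-1]])
    return path[::-1]

nodes = [1, 2, 3, 4]
arcs = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (3, 4): 3, (2, 4): 7}
d, pred = dijkstra(nodes, arcs, 1)
print(d[4], trace_path(pred, 4))   # 6 [1, 2, 3, 4]
```

Scanning every arc in each iteration is wasteful compared with storing the adjacency lists A(i) explicitly, but it keeps the transcription short; the node selection step is the O(n) scan whose cost the complexity analysis below accounts for.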
To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of the path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T - {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) = d(i) + c_ij updates the labels of nodes in T - {i}.
The computational time for this algorithm can be split into the time required by its two basic operations--selecting nodes and updating distances. In an iteration, the algorithm requires O(n) time to identify the node i with minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n²) time for selecting nodes and O(Σ_{i ∈ N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n²) time.
Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, they have, using clever data structures, suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for most all choices of the parameters n, m, and C.)
3.2 Dial's Implementation
The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question. Instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with smallest temporary label d(i), and while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node since arc lengths are nonnegative. FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network and, hence, nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum. One by one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update the distance labels of adjacent nodes. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket.
By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket very efficiently; in fact, in O(1) time, i.e., a time bounded by some constant. One implementation uses a data structure known as a doubly linked list. In this data structure, we order the content of each bucket arbitrarily, storing two pointers for each entry: one pointer to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select easily the topmost node from the list, add a bottommost node, or delete a node. Now, as we relabel nodes and decrease any node's temporary distance label, we move it from a higher index bucket to a lower index bucket; this transfer requires O(1) time. Consequently, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.
FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.

This fact follows by noting that (i) d(k) ≤ d(i) for each k ∈ P (by FACT 3.1), and (ii) for each finitely labeled node j in T, d(j) = d(k) + c_kj for some k ∈ P (by the property of distance updates). Hence, d(j) ≤ d(i) + C. In other words, all finite temporary labels are bracketed from below by d(i) and from above by d(i) + C. Consequently, C+1 buckets suffice to store nodes with finite temporary distance labels. We need not store the nodes with infinite temporary distance labels in any of the buckets--we can add them to a bucket when they first receive a finite distance label.

Dial's algorithm uses C+1 buckets numbered 0, 1, 2, ..., C, which can be viewed as arranged in a circle as in Figure 3.1. This implementation stores a temporarily labeled node j with distance label d(j) in the bucket d(j) mod (C+1). Consequently, during the entire execution of the algorithm, bucket k stores temporarily labeled nodes with distance labels k, k+(C+1), k+2(C+1), and so forth; however, because of FACT 3.2, at any point in time this bucket will hold only nodes with the same distance labels. This storage scheme also implies that if bucket k contains a node with minimum distance label, then buckets k+1, k+2, ..., C, 0, 1, 2, ..., k-1 store nodes in increasing values of the distance labels.
Figure 3.1. Bucket arrangement in Dial's algorithm.

Dial's algorithm examines the buckets sequentially, in a wrap around fashion, to identify the first nonempty bucket. In the next iteration, it reexamines the buckets starting at the place where it left off earlier. A potential disadvantage of this scheme, as compared to the original algorithm, is that C may be very large, necessitating large storage and increased computational time. In addition, the algorithm may wrap around as many as n-1 times, resulting in a large computation time. The algorithm, however, typically does not encounter these difficulties in practice. For most applications, C is not very large, and the number of passes through all of the buckets is much less than n.

Dial's algorithm, however, is not attractive theoretically. The algorithm runs in O(m + nC) time, which is not even polynomial time. Rather, it is pseudopolynomial time. For example, if C = n^4, then the algorithm runs in O(n^5) time, and if C = 2^n the algorithm takes exponential time in the worst case.
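A minimal Python sketch of Dial's implementation follows; the concrete representation (sets as buckets, an explicit permanent flag) is our own choice, not the paper's. It uses the C+1 circular buckets and the d(j) mod (C+1) indexing rule described above, and relies on FACT 3.2 to guarantee that the forward wrap-around scan always meets the buckets in order of increasing labels.

```python
INF = float('inf')

def dial(n, adj, s, C):
    """Dial's implementation sketch with C+1 circular buckets.
    adj[i]: list of (j, c_ij) with 0 <= c_ij <= C; s: source node."""
    d = [INF] * n
    d[s] = 0
    nbuckets = C + 1
    buckets = [set() for _ in range(nbuckets)]
    buckets[0].add(s)
    permanent = [False] * n
    idx, labeled = 0, 1              # scan position, nodes held in buckets
    while labeled:
        while not buckets[idx % nbuckets]:   # wrap-around bucket scan
            idx += 1
        i = buckets[idx % nbuckets].pop()    # minimum temporary label
        labeled -= 1
        permanent[i] = True
        for j, c in adj[i]:
            if not permanent[j] and d[i] + c < d[j]:
                if d[j] < INF:               # move j to a lower bucket
                    buckets[d[j] % nbuckets].discard(j)
                    labeled -= 1
                d[j] = d[i] + c
                buckets[d[j] % nbuckets].add(j)
                labeled += 1
    return d
```

By FACT 3.2, every finite temporary label lies in the window [d(i), d(i) + C], so the modular bucket index never causes two different labels to collide in the same bucket at the same time.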
The search for the theoretically fastest implementations of Dijkstra's algorithm has led researchers to develop several new data structures for sparse networks. In the next section, we consider an implementation using a data structure called a redistributive heap (R-heap) that runs in O(m + n log nC) time. The discussion of this implementation is of a more advanced nature than the previous sections and the reader can skip it without any loss of continuity.
3.3 R-Heap Implementation
Our first O(n²) implementation of Dijkstra's algorithm and then Dial's implementation represent two extremes. The first implementation considers all the temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets needed by a factor of k. But in order to find the smallest distance label, we need to search all of the elements in the smallest index nonempty bucket. Indeed, if k is arbitrarily large, we need only one bucket, and the resulting algorithm reduces to Dijkstra's original implementation. Using a width of 100, say, for each bucket reduces the number of buckets, but requires us to search through the lowest numbered bucket to find the node with minimum temporary label. If we could devise a variable width scheme, with a width of one for the lowest numbered bucket, we could conceivably retain the advantages of both the wide bucket and narrow bucket approaches.

The R-heap algorithm we consider next uses variable length widths and changes the ranges dynamically. In the version of redistributive heaps that we present, the widths of the buckets are 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only 1 + ⌈log nC⌉. Moreover, we dynamically modify the ranges of the buckets and we reallocate nodes with temporary distance labels in a way that stores the minimum distance label in a bucket whose width is 1. In this way, as in the previous algorithm, we avoid the need to search the entire bucket to find the minimum. In fact, the running time of this version of the R-heap algorithm is O(m + n log nC). We now describe an R-heap in more detail.
For a given shortest path problem, the R-heap consists of 1 + ⌈log nC⌉ buckets. The buckets are numbered as 0, 1, 2, ..., K = ⌈log nC⌉. We represent the range of bucket k by range(k), which is a (possibly empty) closed interval of integers. We store a temporary node i in bucket k if d(i) ∈ range(k). We do not store permanent nodes. The nodes in bucket k are denoted by the set CONTENT(k). The algorithm will change the ranges of the buckets dynamically, and each time it changes the ranges, it redistributes the nodes in the buckets.
Initially, the buckets have the following ranges:

range(0) = [0];
range(1) = [1];
range(2) = [2 .. 3];
range(3) = [4 .. 7];
range(4) = [8 .. 15];
...
range(K) = [2^(K-1) .. 2^K - 1].

These ranges will change dynamically; however, the widths of the buckets will not increase beyond their initial widths. Suppose for example that the initial minimum distance label is determined to be in the range [8 .. 15]. We could verify this fact by verifying that buckets 0 through 3 are empty and bucket 4 is nonempty. At this point, we know that no temporary distance label will ever again be less than 8, and hence buckets 0 to 3 will never be needed again. Rather than leaving these buckets idle, we can redistribute the range of bucket 4 (whose width is 8) to the previous buckets (whose combined width is 8), resulting in the ranges [8], [9], [10 .. 11], [12 .. 15]. Thus, each of the elements of bucket 4 moves to a lower indexed bucket. Essentially, we have replaced the node selection step (i.e., finding a node with smallest temporary distance label) by a sequence of redistribution steps in which nodes constantly shift to lower indexed buckets. Roughly speaking, the redistribution time is O(n log nC) time in total, since each node can be shifted at most K = 1 + ⌈log nC⌉ times. Eventually, the minimum temporary label is in a bucket with width one, and the algorithm selects it in an additional O(1) time.

Actually, we would carry out these operations a bit differently. Suppose for example that the minimum label of the elements of bucket 4 in the redistribute step is 11. Then rather than redistributing the entire range [8 .. 15], it makes sense to first find the minimum temporary label in the bucket and redistribute only the subrange [11 .. 15]. In this case the resulting ranges of buckets 0 to 4 would be [11], [12], [13 .. 14], [15], ∅, and we are guaranteed that the minimum temporary label is stored in bucket 0, whose width is 1.

To reiterate, we do not carry out the actual node selection step until the minimum nonempty bucket has width one. If the minimum nonempty bucket is bucket k, whose width is greater than 1, we redistribute the range of bucket k into buckets 0 to k-1, and then we reassign the content of bucket k to buckets 0 to k-1. The redistribution time is O(n log nC) and the running time of the algorithm is O(m + n log nC).
We now illustrate R-heaps on the shortest path example given in Figure 3.2. In the figure, the number beside each arc indicates its length.

Figure 3.2. The shortest path example.

For this problem, C = 20, nC = 120, and K = ⌈log 120⌉ = 7. Figure 3.3 specifies the starting solution of Dijkstra's algorithm and the initial R-heap.
Node i:      1     2     3     4     5     6
Label d(i):  0    13     0    15    20     ∞

Buckets:     0     1      2       3        4         5          6           7
Ranges:     [0]   [1]  [2 .. 3] [4 .. 7] [8 .. 15] [16 .. 31] [32 .. 63] [64 .. 127]
CONTENT:    {3}    ∅      ∅       ∅      {2, 4}      {5}         ∅          {6}

Figure 3.3. The initial R-heap.
To select the node with the smallest distance label, we scan the buckets 0, 1, 2, ..., K to find the first nonempty bucket. In our example, bucket 0 is nonempty. Since bucket 0 has width 1, every node in this bucket has the same (minimum) distance label. So, the algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3,5) to change the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket, which is bucket 5. It isn't. Since its distance label has decreased, node 5 should move to a lower index bucket. So we sequentially scan the buckets from right to left, starting at bucket 5, to identify the first bucket whose range contains the number 9, which is bucket 4. Node 5 moves from bucket 5 to bucket 4. Figure 3.4 shows the new R-heap.
Figure 3.4. The new R-heap.

In the next node selection step, we again scan the buckets from left to right. Buckets 0 through 3 are empty and bucket 4, whose width is greater than 1, is the first nonempty bucket. The minimum distance label of the nodes in bucket 4 is d(5) = 9, so we redistribute the useful range [9 .. 15] of bucket 4 over the buckets 0 through 3 and reinsert the nodes of bucket 4 into these buckets:

CONTENT(0) = {5}, CONTENT(1) = ∅, CONTENT(2) = ∅, CONTENT(3) = {2, 4}, CONTENT(4) = ∅.

This redistribution necessarily empties bucket 4 and moves the node with the smallest distance label to bucket 0.

We are now in a position to outline the general algorithm and analyze its complexity. Suppose that j ∈ CONTENT(k) and that d(j) decreases. If the modified d(j) ∉ range(k), then we sequentially scan lower numbered buckets from right to left and add the node to the appropriate bucket. Overall, this operation takes O(m + nK) time. The term m reflects the number of distance updates, and the term nK arises because every time a node moves, it moves to a lower indexed bucket; since there are K+1 buckets, a node can move at most K times. Therefore, O(nK) is a bound on the total node movements.

Next we consider the node selection step. Node selection begins by scanning the buckets from left to right to identify the first nonempty bucket, say bucket k. This operation takes O(K) time per iteration and O(nK) time in total. If k = 0 or k = 1, then any node in the selected bucket has the minimum distance label. If k ≥ 2, then we redistribute the "useful" range of bucket k into the buckets 0, 1, ..., k-1 and reinsert its content into those buckets. If the range of bucket k is [l .. u] and the smallest distance label of a node in the bucket is d_min, then the useful range of the bucket is [d_min .. u].

The algorithm redistributes the useful range in the following manner: we assign the first integer to bucket 0, the next integer to bucket 1, the next two integers to bucket 2, the next four integers to bucket 3, and so on. Since bucket k has width ≤ 2^(k-1), and since the widths of the buckets 0, 1, ..., k-1 can be as large as 1, 1, 2, ..., 2^(k-2), for a total potential width of 2^(k-1), we can redistribute the useful range of bucket k over the buckets 0, 1, ..., k-1 in the manner described. This redistribution of ranges and the subsequent reinsertions of nodes empties bucket k and moves the nodes with the smallest distance labels to bucket 0.

Whenever we examine a node in the nonempty bucket with the smallest index, we move it to a lower indexed bucket; each node can move at most K times, so all the nodes can move a total of at most nK times. Thus, the node selection steps take O(nK) total time. Since K = ⌈log nC⌉, the algorithm runs in O(m + n log nC) time. We now summarize our discussion.
Theorem 3.1. The R-heap implementation of Dijkstra's algorithm solves the shortest path problem in O(m + n log nC) time.

This algorithm requires 1 + ⌈log nC⌉ buckets. FACT 3.2 permits us to reduce the number of buckets to 1 + ⌈log C⌉. This refined implementation of the algorithm runs in O(m + n log C) time. For problems that satisfy the similarity assumption (see Section 1.2), this bound becomes O(m + n log n). Using substantially more sophisticated data structures, it is possible to reduce this bound further to O(m + n √log n), which is a linear time algorithm for all but the sparsest classes of shortest path problems.
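The following Python class sketches a monotone priority queue in the spirit of the redistributive heap; it is a standard radix-heap variant (our own implementation choice, not the paper's exact data structure) in which the bucket index of a key is the bit position at which it differs from the last extracted minimum, reproducing the widths 1, 1, 2, 4, 8, ... On extraction from a wide bucket, the bucket minimum becomes the new reference point and every entry of that bucket is redistributed to a strictly lower indexed bucket, so the minimum always ends up in bucket 0.

```python
class RadixHeap:
    """Monotone priority queue sketch: keys pushed after an extraction
    must be >= the last extracted minimum, as in Dijkstra's algorithm."""

    def __init__(self, max_key):
        self.nbuckets = max(1, max_key).bit_length() + 2
        self.buckets = [[] for _ in range(self.nbuckets)]
        self.last = 0      # last extracted minimum; keys never go below it
        self.size = 0

    def _index(self, key):
        # highest differing bit from 'last'; 0 iff key == last
        return (key ^ self.last).bit_length()

    def push(self, key, item):
        assert key >= self.last, "keys must be nondecreasing"
        self.buckets[self._index(key)].append((key, item))
        self.size += 1

    def pop(self):
        # find the first nonempty bucket
        i = next(b for b in range(self.nbuckets) if self.buckets[b])
        if i > 0:
            # redistribute: the bucket minimum becomes the new 'last';
            # every entry of bucket i moves to a lower indexed bucket
            self.last = min(e[0] for e in self.buckets[i])
            for entry in self.buckets[i]:
                self.buckets[self._index(entry[0])].append(entry)
            self.buckets[i].clear()
        self.size -= 1
        return self.buckets[0].pop()   # the minimum now sits in bucket 0
```

Each entry moves only to lower indexed buckets over its lifetime, so with K + 1 = O(log nC) buckets the total redistribution work is O(n log nC), matching the analysis above. (Decrease-key is omitted here; a Dijkstra built on this sketch would use lazy deletion of outdated entries.)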
3.4 Label Correcting Algorithms
Label correcting algorithms, as the name implies, maintain tentative distance labels for nodes and correct the labels at every iteration. Unlike label setting algorithms, these algorithms maintain all distance labels as temporary until the end, when they all become permanent simultaneously. The label correcting algorithms are conceptually more general than the label setting algorithms and are applicable to more general situations, for example, to networks containing negative length arcs. To produce shortest paths, these algorithms typically require that the network does not contain any negative directed cycle, i.e., a directed cycle whose arc lengths sum to a negative value. Most label correcting algorithms have the capability to detect the presence of negative cycles.
Label correcting algorithms can be viewed as a procedure for solving the following recursive equations:

d(s) = 0,    (3.1)

d(j) = min {d(i) + c_ij : i ∈ N}, for each j ∈ N - {s}.    (3.2)

As usual, d(j) denotes the length of a shortest path from the source node to node j. These equations are known as Bellman's equations and represent necessary conditions for optimality of the shortest path problem. These conditions are also sufficient if every cycle in the network has a positive length. We will prove an alternate version of these conditions which is more suitable from the viewpoint of label correcting algorithms.

Theorem 3.2. Let d(i) for i ∈ N be a set of labels. If d(s) = 0 and if in addition the labels satisfy the following conditions, then they represent the shortest path lengths from the source node:

C3.1. d(i) is the length of some path from the source node to node i; and

C3.2. d(j) ≤ d(i) + c_ij for all (i, j) ∈ A.

Proof. Since d(i) is the length of some path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels d(i) satisfy C3.2, then they are also lower bounds on the shortest path lengths, which implies the conclusion of the theorem. Consider any directed path P from the source to node j, consisting of the nodes s = i1 - i2 - i3 - ... - ik = j. Condition C3.2 implies that d(i2) ≤ d(i1) + c_{i1 i2} = c_{i1 i2}, d(i3) ≤ d(i2) + c_{i2 i3}, ..., d(ik) ≤ d(i(k-1)) + c_{i(k-1) ik}. Adding these inequalities yields d(j) = d(ik) ≤ Σ_{(i,j) ∈ P} c_ij, the length of the path P. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path from s to j.
We now note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose that the network did contain a negative cycle W and that some labels d(i) satisfied C3.2. Then d(i) - d(j) + c_ij ≥ 0 for each (i, j) ∈ W. These inequalities imply that Σ_{(i,j) ∈ W} (d(i) - d(j) + c_ij) = Σ_{(i,j) ∈ W} c_ij ≥ 0, since the labels d(i) cancel out in the summation. This conclusion contradicts our assumption that W is a negative cycle.
Conditions C3.1 in Theorem 3.2 correspond to primal feasibility for the linear programming formulation of the shortest path problem. Conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting algorithms as methods that always maintain primal feasibility and try to achieve dual feasibility. The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. At any point in the algorithm, the label d(i) is either ∞, indicating that we have yet to discover a path from the source to node i, or it is the length of some path from the source to node i. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i, of length d(i), together with the arc (i, j), is a shorter path to node j than the current path of length d(j).
algorithm LABEL CORRECTING;
begin
    d(s) := 0 and pred(s) := 0;
    d(j) := ∞ for each j ∈ N - {s};
    while some arc (i, j) satisfies d(j) > d(i) + c_ij do
    begin
        d(j) := d(i) + c_ij;
        pred(j) := i;
    end;
end;
The correctness of the label correcting algorithm follows from Theorem 3.2. At termination, the labels d(i) satisfy d(j) ≤ d(i) + c_ij for all (i, j) ∈ A, and hence represent the shortest path lengths. We now note that this algorithm is finite if there are no negative cost cycles and if the data are integral. Since each d(j) is bounded from above by nC and from below by -nC, the algorithm updates d(j) at most 2nC times. Thus, when all data are integral, the number of distance updates is O(n²C), and hence the algorithm runs in pseudopolynomial time.
A nice feature of this label correcting algorithm is its flexibility: we can select the arcs that do not satisfy the conditions C3.2 in any order and still assure finite convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, the label correcting algorithm does not necessarily run in polynomial time. Indeed, if we start with pathological instances of the problem and make a poor choice of arcs at every iteration, then the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial time, these instances do have exponentially large values of C.)
To obtain a polynomial time bound for the algorithm, we can organize the computations carefully in the following manner. Arrange the arcs in A in some (possibly arbitrary) order. Now make passes through A. In each pass, scan the arcs in A in order and check the condition d(j) > d(i) + c_ij; if the arc satisfies this condition, then update d(j) = d(i) + c_ij. Terminate the algorithm if no distance label changes during an entire pass. We call this algorithm the modified label correcting algorithm.
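As a sketch, the pass-based scheme can be written in a few lines of Python (the arc-list representation and names are ours): after at most n-1 label-changing passes, one extra pass that still changes a label certifies a negative cycle.

```python
INF = float('inf')

def modified_label_correcting(n, arcs, s):
    """Pass-based sketch of the modified label correcting algorithm.
    arcs: list of (i, j, c_ij); nodes are 0..n-1, source s."""
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    for _ in range(n - 1):
        changed = False
        for i, j, c in arcs:               # one pass through the arc list
            if d[i] + c < d[j]:
                d[j], pred[j], changed = d[i] + c, i, True
        if not changed:                    # conditions C3.2 hold; done
            break
    else:
        # all n-1 passes changed some label: an n-th pass that still
        # improves a label certifies a negative cycle
        if any(d[i] + c < d[j] for i, j, c in arcs):
            raise ValueError("negative cycle detected")
    return d, pred
```

On a network with no negative cycle the loop exits early once a full pass changes nothing, exactly as in the termination rule stated above.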
Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.

Proof. We show that the algorithm performs at most n-1 passes through the arc list. Since each pass requires O(1) computations for each arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of the shortest path from the source to node j consisting of r or fewer arcs. Further, let D^r(j) represent the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N, and for each r = 1, ..., n-1.

We perform induction on the value of r. Suppose D^(r-1)(j) ≤ d^(r-1)(j) for each j ∈ N. The provisions of the modified labeling algorithm imply that D^r(j) ≤ min {D^(r-1)(j), min_{i ≠ j} {D^(r-1)(i) + c_ij}}. Next note that the shortest path to node j containing no more than r arcs either (i) has no more than r-1 arcs, or (ii) contains exactly r arcs. In case (i), d^r(j) = d^(r-1)(j), and in case (ii), d^r(j) = min_{i ≠ j} {d^(r-1)(i) + c_ij}. Consequently, d^r(j) = min {d^(r-1)(j), min_{i ≠ j} {d^(r-1)(i) + c_ij}} ≥ min {D^(r-1)(j), min_{i ≠ j} {D^(r-1)(i) + c_ij}} ≥ D^r(j); the first inequality follows from the induction hypothesis. Hence, D^r(j) ≤ d^r(j) for all j ∈ N. Finally, if the network does not contain any negative cycle, then the shortest path from the source to any node consists of at most n-1 arcs. Therefore, after at most n-1 passes, the algorithm terminates with the shortest path lengths.

The modified label correcting algorithm is also capable of detecting the presence of negative cycles in the network. If the algorithm does not update any distance label during an entire pass, up to the (n-1)-th pass, then it has a set of labels d(j) satisfying C3.2. In this case, the algorithm terminates with the shortest path distances and the network does not contain any negative cycle. On the other hand, if when we make one more pass, in the n-th pass, the distance label of some node i changes, then the network contains a directed walk (a path together with a cycle that have one or more nodes in common) from the source node to node i, of length greater than n-1 arcs, that has smaller distance than all paths from the source to node i. This situation cannot occur unless the network contains a negative cost cycle.

Practical Improvements. As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes so that all arcs with the same tail node appear consecutively on the list. Thus, while scanning the arcs, we consider one node i at a time, scanning arcs in A(i) and testing the optimality conditions. Now suppose that the algorithm does not change the distance label of a node i during one pass through the arc list. Then, during the next pass, d(j) ≤ d(i) + c_ij for every (i, j) ∈ A(i), and the algorithm need not test these conditions. To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in the first-in, first-out order to assure that it performs passes through the arc list A and, consequently, terminates in O(nm) time. The following procedure is a formal description of this further modification of the modified label correcting method.
algorithm MODIFIED LABEL CORRECTING;
begin
    d(s) := 0 and pred(s) := 0;
    d(j) := ∞ for each j ∈ N - {s};
    LIST := {s};
    while LIST ≠ ∅ do
    begin
        select the first element i of LIST;
        delete i from LIST;
        for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then
            begin
                d(j) := d(i) + c_ij;
                pred(j) := i;
                if j ∉ LIST then add j to the end of LIST;
            end;
    end;
end;
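A Python sketch of this FIFO variant (adjacency lists and names are our own) maintains the node list as a deque together with a membership flag:

```python
from collections import deque

INF = float('inf')

def fifo_label_correcting(n, adj, s):
    """FIFO-list sketch of MODIFIED LABEL CORRECTING: only rescan A(i)
    when d(i) has changed since the last scan. adj[i]: list of
    (j, c_ij); assumes the network has no negative cycle."""
    d = [INF] * n
    pred = [None] * n
    d[s] = 0
    LIST = deque([s])
    on_list = [False] * n
    on_list[s] = True
    while LIST:
        i = LIST.popleft()            # select and delete the first element
        on_list[i] = False
        for j, c in adj[i]:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
                pred[j] = i
                if not on_list[j]:    # add j to the end of LIST
                    LIST.append(j)
                    on_list[j] = True
    return d, pred
```

The flag array replaces the membership test "j ∉ LIST" of the pseudocode with an O(1) check, so each scan of A(i) remains proportional to |A(i)|.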
Another modification of this algorithm sacrifices its polynomial time behavior in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether it has already appeared in the LIST. If yes, then we add i to the beginning of LIST; otherwise, we add it to the end of LIST. This heuristic rule has the following plausible justification. If the node i has previously appeared on the LIST, then some nodes may have it as a predecessor. It is advantageous to update the distances of these nodes immediately, rather than update them from other nodes and then update them again when we consider node i. Empirical studies indicate that with this change alone, the algorithm is several times faster for many reasonable problem classes. Though this change makes the algorithm very attractive in practice, the worst-case running time of the algorithm is exponential. Indeed, this version of the label correcting algorithm is the fastest algorithm in practice for finding the shortest path from a single source to all nodes in non-dense networks. (For the problem of finding a shortest path from a single source node to a single sink, certain variants of the label setting algorithm are more efficient in practice.)
3.5 All Pairs Shortest Path Algorithm
In certain applications of the shortest path problem, we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms to solve this problem. The first algorithm combines the modified label correcting algorithm and Dijkstra's algorithm, and is well suited for sparse graphs. The second algorithm is based on dynamic programming, and is better suited for dense graphs.
If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source node once. If the network contains arcs with negative arc lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute the shortest path distances from s to all other nodes. The algorithm either terminates with the shortest path distances d(j) or indicates the presence of a negative cycle. In the former case, we define the new length of the arc (i, j) as ĉ_ij = c_ij + d(i) - d(j) for each (i, j) ∈ A. Condition C3.2 implies that ĉ_ij ≥ 0 for all (i, j) ∈ A. Further, note that for any path P from node k to node l, Σ_{(i,j) ∈ P} ĉ_ij = Σ_{(i,j) ∈ P} c_ij + d(k) - d(l), since the intermediate labels d(j) cancel out in the summation. This transformation thus changes the length of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n-1 additional times to determine shortest path distances between all pairs of nodes in the transformed network. We then obtain the shortest path distance between nodes k and l in the original network by adding d(l) - d(k) to the corresponding shortest path distance in the transformed network. This approach requires O(nm) time to solve the first shortest path problem, and if the network contains no negative cost cycle, the method takes an extra O(n S(n,m,C)) time to compute the remaining shortest path distances. In this expression, S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths. For the R-heap implementation of Dijkstra's algorithm we considered previously, S(n,m,C) = m + n log nC.
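The arc transformation step is easy to state in code. The sketch below (function names are ours) computes the nonnegative reduced lengths ĉ_ij = c_ij + d(i) - d(j) from the labels d produced by the label correcting run, and maps a transformed distance back to the original network:

```python
def reweight(arcs, d):
    """Given shortest path labels d(.) from a node s that reaches every
    node, return arcs with the nonnegative reduced lengths
    c'_ij = c_ij + d(i) - d(j). arcs: list of (i, j, c_ij)."""
    new_arcs = []
    for i, j, c in arcs:
        cr = c + d[i] - d[j]
        assert cr >= 0            # guaranteed by condition C3.2
        new_arcs.append((i, j, cr))
    return new_arcs

def restore(dist_t, d, k, l):
    """Map a k-to-l distance in the transformed network back to the
    original network: d_kl = d'_kl + d(l) - d(k)."""
    return dist_t + d[l] - d[k]
```

Because every k-to-l path changes in length by the same constant d(k) - d(l), shortest paths in the transformed network are shortest in the original one, which is exactly why the n-1 subsequent Dijkstra runs are valid.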
Another way to solve the all pairs shortest path problem is by dynamic programming. The approach we present is known as Floyd's algorithm. We define the variables d^r(i, j) as follows:

d^r(i, j) = the length of a shortest path from node i to node j subject to the condition that the path uses only the nodes 1, 2, ..., r-1 (and i and j) as internal nodes.

Let d(i, j) denote the actual shortest path distance. To compute d^(r+1)(i, j), we first observe that a shortest path from node i to node j that passes through the nodes 1, 2, ..., r either (i) does not pass through the node r, in which case d^(r+1)(i, j) = d^r(i, j), or (ii) does pass through the node r, in which case d^(r+1)(i, j) = d^r(i, r) + d^r(r, j). Thus we have

d^1(i, j) = c_ij, and

d^(r+1)(i, j) = min {d^r(i, j), d^r(i, r) + d^r(r, j)}.

We assume that c_ij = ∞ for all node pairs (i, j) ∉ A. It is possible to solve the previous equations recursively for increasing values of r, and by varying the node pairs over N × N for a fixed value of r. The following procedure is a formal description of this algorithm.
algorithm ALL PAIRS SHORTEST PATHS;
begin
    for all node pairs (i, j) ∈ N × N do d(i, j) : = ∞ and pred(i, j) : = 0;
    for each (i, j) ∈ A do d(i, j) : = c_ij and pred(i, j) : = i;
    for each r : = 1 to n do
        for each (i, j) ∈ N × N do
            if d(i, r) + d(r, j) < d(i, j) then
            begin
                d(i, j) : = d(i, r) + d(r, j);
                if i = j and d(i, j) < 0 then the network contains a negative cycle, STOP;
                pred(i, j) : = pred(r, j);
            end;
end;
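The procedure above translates almost line for line into executable code. The following sketch is not from the text: the encoding of ∞ as a float and the 0-based node numbering are assumptions. It raises an error when d(i, i) < 0 signals a negative cycle.

```python
INF = float('inf')

def floyd(n, arcs):
    """Floyd's algorithm, following the procedure in the text.

    n    -- number of nodes, labeled 0..n-1
    arcs -- dict mapping (i, j) to the arc length c_ij
    Returns (d, pred); raises ValueError when a negative cycle is
    detected (d(i, i) < 0 for some node i).
    """
    d = [[INF] * n for _ in range(n)]
    pred = [[None] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for (i, j), c in arcs.items():
        d[i][j] = c
        pred[i][j] = i
    for r in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][r] + d[r][j] < d[i][j]:
                    d[i][j] = d[i][r] + d[r][j]
                    pred[i][j] = pred[r][j]
                    if i == j and d[i][j] < 0:
                        raise ValueError("negative cycle")
    return d, pred
```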
Floyd's algorithm uses predecessor indices, pred(i, j), for each node pair (i, j). The index pred(i, j) denotes the last node prior to node j in the tentative shortest path from node i to node j. The algorithm maintains the property that for each finite d(i, j), the network contains a path from node i to node j of length d(i, j). This path can be obtained by tracing the predecessor indices.

This algorithm performs n iterations, and in each iteration it performs O(1) computations for each node pair. Consequently, it runs in O(n³) time. The algorithm either terminates with the shortest path distances or stops when d(i, i) < 0 for some node i. In the latter case, for some node r ≠ i, d(i, r) + d(r, i) < 0. Hence, the union of the tentative shortest paths from node i to node r and from node r to node i contains a negative cycle. This cycle can be obtained by using the predecessor indices.

Floyd's algorithm is in many respects similar to the modified label correcting algorithm. This relationship becomes more transparent from the following theorem.

Theorem 3.4. If d(i, j) for (i, j) ∈ N × N satisfy the following conditions, then they represent the shortest path distances:

(i) d(i, i) = 0 for all i;

(ii) d(i, j) is the length of some path from node i to node j;

(iii) d(i, j) ≤ d(i, r) + c_rj for all i, r, and j.

Proof. For fixed i, this theorem is a consequence of Theorem 3.2.
4. MAXIMUM FLOWS

An important characteristic of a network is its capacity to carry flow. What, given capacities on the arcs, is the maximum flow that can be sent between any two nodes? The resolution of this question determines the "best" use of arc capacities and establishes a reference point against which to compare other ways of using the network. Moreover, the solution of the maximum flow problem with capacity data chosen judiciously establishes other performance measures for a network. For example, what is the minimum number of nodes whose removal from the network destroys all paths joining a particular pair of nodes? Or, what is the maximum number of node disjoint paths that join this pair of nodes? These and similar reliability measures indicate the robustness of the network to failure of its components.

In this section, we discuss several algorithms for computing the maximum flow between two nodes in a network. We begin by introducing a basic labeling algorithm for solving the maximum flow problem. The validity of these algorithms rests upon the celebrated max-flow min-cut theorem of network flows. This remarkable theorem has a number of surprising implications in machine and vehicle scheduling, communication systems planning and several other application domains. We then consider improved versions of the basic labeling algorithm with better theoretical performance guarantees. In particular, we describe preflow-push algorithms that have recently emerged as the most powerful techniques for solving the maximum flow problem, both theoretically and computationally.

We consider a capacitated network G = (N, A) with a nonnegative integer capacity u_ij for any arc (i, j) ∈ A. The source s and sink t are two distinguished nodes of the network. We assume that for every arc (i, j) in A, (j, i) is also in A. There is no loss of generality in making this assumption since we allow zero capacity arcs. We also assume without any loss of generality that all arc capacities are finite (since we can set the capacity of any uncapacitated arc equal to the sum of the capacities of all capacitated arcs). Let U = max {u_ij : (i, j) ∈ A}. As earlier, the arc adjacency list A(i) = {(i, k) : (i, k) ∈ A} designates the arcs emanating from node i. In the maximum flow problem, we wish to find the maximum flow from the source node s to the sink node t. Formally, the problem is to determine a flow x that satisfies
    Maximize v                                                               (4.1a)

subject to

    Σ_{ j : (i, j) ∈ A } x_ij  −  Σ_{ j : (j, i) ∈ A } x_ji  =  { v,   if i = s,
                                                                { 0,   if i ≠ s, t,     for all i ∈ N,     (4.1b)
                                                                { −v,  if i = t,

    0 ≤ x_ij ≤ u_ij,   for each (i, j) ∈ A.                                  (4.1c)
It is possible to relax the integrality assumption on arc capacities for some algorithms, though this assumption is necessary for others. Algorithms whose complexity bounds involve U assume integrality of the data. Note, however, that rational arc capacities can always be transformed to integer arc capacities by appropriately scaling the data. Thus, the integrality assumption is not a restrictive assumption in practice.

The concept of residual network is crucial to the algorithms we consider. Given a flow x, the residual capacity, r_ij, of any arc (i, j) ∈ A represents the maximum additional flow that can be sent from node i to node j using the arcs (i, j) and (j, i). The residual capacity has two components: (i) u_ij − x_ij, the unused capacity of arc (i, j), and (ii) the current flow x_ji on arc (j, i), which can be cancelled to increase flow to node j. Consequently, r_ij = u_ij − x_ij + x_ji. We call the network consisting of the arcs with positive residual capacities the residual network (with respect to the flow x), and represent it as G(x). Figure 4.1 illustrates an example of a residual network.
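As a small illustration of the definition (the dict encoding is an assumption of this sketch, not the text's notation):

```python
def residual_capacities(u, x):
    """Residual capacities r_ij = u_ij - x_ij + x_ji for a flow x.

    u, x -- dicts keyed by arc (i, j); as in the text, the arc set is
    assumed symmetric, i.e. (j, i) is present (possibly with zero
    capacity) whenever (i, j) is.
    """
    return {(i, j): u[i, j] - x[i, j] + x[j, i] for (i, j) in u}
```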
4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem

One of the simplest and most intuitive algorithms for solving the maximum flow problem is the augmenting path algorithm due to Ford and Fulkerson. The algorithm proceeds by identifying directed paths from the source to the sink in the residual network and augmenting flows on these paths, until the residual network contains no such path. The following high-level (and flexible) description of the algorithm summarizes the basic iterative steps, without specifying any particular algorithmic strategy for how to determine augmenting paths.
algorithm AUGMENTING PATH;
begin
    x : = 0;
    while there is a path P from s to t in G(x) do
    begin
        Δ : = min { r_ij : (i, j) ∈ P };
        augment Δ units of flow along P and update G(x);
    end;
end;
For each arc (i, j) ∈ P, augmenting Δ units of flow along P decreases r_ij by Δ and increases r_ji by Δ. We now discuss this algorithm in more detail. First, we need a method to identify a directed path from the source to the sink in the residual network, or to show that the network contains no such path. Second, we need to show that the algorithm terminates finitely. Finally, we must establish that the algorithm terminates with a maximum flow. The last result follows from the proof of the max-flow min-cut theorem.

A directed path from the source to the sink in the residual network is also called an augmenting path. The residual capacity of an augmenting path is the minimum residual capacity of any arc on the path. The definition of the residual capacity implies that an additional flow of Δ in arc (i, j) of the residual network corresponds to (i) an increase in x_ij by Δ in the original network, or (ii) a decrease in x_ji by Δ in the original network, or (iii) a convex combination of (i) and (ii). For our purposes, it is easier to work directly with residual capacities and to compute the flows only when the algorithm terminates.

The labeling algorithm performs a search of the residual network to find a directed path from s to t. It does so by fanning out from the source node s to find a directed tree containing nodes that are reachable from the source along a directed path in the residual network. At any step, we refer to the nodes in the tree as labeled and those not in the tree as unlabeled. The algorithm selects a labeled node and scans its arc adjacency list (in the residual network) to label more unlabeled nodes. Eventually, the sink becomes labeled and the algorithm sends the maximum possible flow on the path from s to t. It then erases the labels and repeats this process. The algorithm terminates when it has scanned all labeled nodes and the sink remains unlabeled. The following algorithmic description specifies the steps of the labeling algorithm in detail.
Figure 4.1 Example of a residual network. (a) Network with arc capacities; node 1 is the source and node 4 is the sink. (Arcs not shown have zero capacities.) (b) Network with a flow x. (c) The residual network with residual arc capacities.
The algorithm maintains a predecessor index, pred(i), for each labeled node i, indicating the node that caused node i to be labeled. The predecessor indices allow us to trace back along the path from a node to the source.
algorithm LABELING;
begin
    loop
        pred(j) : = 0 for each j ∈ N;
        L : = {s};
        while L ≠ ∅ and t is unlabeled do
        begin
            select a node i ∈ L and delete it from L;
            for each (i, j) ∈ A(i) do
                if j is unlabeled and r_ij > 0 then
                begin
                    pred(j) : = i;
                    mark j as labeled and add this node to L;
                end;
        end;
        if t is labeled then
        begin
            use the predecessor labels to trace back to obtain the augmenting path P from s to t;
            Δ : = min { r_ij : (i, j) ∈ P };
            augment Δ units of flow along P;
            erase all labels and go to loop;
        end
        else quit the loop;
    end; {loop}
end;
The final residual capacities r can be used to obtain the arc flows as follows. Since r_ij = u_ij − x_ij + x_ji, the arc flows satisfy x_ij − x_ji = u_ij − r_ij. Hence, if u_ij > r_ij, we can set x_ij = u_ij − r_ij and x_ji = 0; otherwise we set x_ij = 0 and x_ji = r_ij − u_ij.
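A compact executable sketch of the labeling algorithm, including the flow recovery rule just described, might look as follows. It is illustrative only: the breadth first search used to label nodes, the dict-based network encoding, and the function name are choices of this sketch, and the text's symmetric arc set convention is retained.

```python
from collections import deque, defaultdict

def max_flow_labeling(u, s, t):
    """Labeling (augmenting path) algorithm, working directly on
    residual capacities; arc flows are recovered at the end.

    u -- capacity dict keyed by arc (i, j); (j, i) is assumed present
    (possibly with zero capacity) whenever (i, j) is.
    Returns (v, x): the flow value and the arc flows.
    """
    r = dict(u)                      # residual capacities; x = 0 initially
    adj = defaultdict(list)
    for (i, j) in u:
        adj[i].append(j)
    v = 0
    while True:
        # Fan out from s, labeling nodes reachable in G(x).
        pred = {s: None}
        queue = deque([s])
        while queue and t not in pred:
            i = queue.popleft()
            for j in adj[i]:
                if j not in pred and r[i, j] > 0:
                    pred[j] = i
                    queue.append(j)
        if t not in pred:            # sink remains unlabeled: flow is maximum
            break
        # Trace back the augmenting path P and its residual capacity.
        path, j = [], t
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        delta = min(r[i, j] for (i, j) in path)
        for (i, j) in path:          # augment delta units along P
            r[i, j] -= delta
            r[j, i] += delta
        v += delta
    # Flow recovery: x_ij = u_ij - r_ij when positive, else 0.
    x = {(i, j): max(u[i, j] - r[i, j], 0) for (i, j) in u}
    return v, x
```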
In order to show that the algorithm obtains a maximum flow, we introduce some new definitions and notation. Recall from Section 1.3 that a set Q ⊆ A is a cutset if the subnetwork G' = (N, A − Q) is disconnected and no subset of Q has this property. A cutset partitions the node set N into two subsets. A cutset is called an s-t cutset if the source and the sink nodes are contained in different subsets of nodes S and S̄ = N − S; S is the set of nodes connected to s. Conversely, any partition of the node set as S and S̄ with s ∈ S and t ∈ S̄ defines an s-t cutset. Consequently, we alternatively designate an s-t cutset as (S, S̄). An arc (i, j) with i ∈ S and j ∈ S̄ is called a forward arc, and an arc (i, j) with i ∈ S̄ and j ∈ S is called a backward arc in the cutset (S, S̄).

Let x be a flow vector satisfying the flow conservation and capacity constraints of (4.1). For this flow vector x, let v be the amount of flow leaving the source. We refer to v as the value of the flow. The flow x determines the net flow across an s-t cutset (S, S̄) as

    F_x(S, S̄)  =  Σ_{ i ∈ S } Σ_{ j ∈ S̄ } x_ij  −  Σ_{ i ∈ S̄ } Σ_{ j ∈ S } x_ij.     (4.2)

We define the capacity C(S, S̄) of an s-t cutset (S, S̄) as

    C(S, S̄)  =  Σ_{ i ∈ S } Σ_{ j ∈ S̄ } u_ij.     (4.3)
We claim that the flow across any s-t cutset equals the value of the flow and does not exceed the cutset capacity. Adding the flow conservation constraints (4.1b) for nodes in S, and noting that when nodes i and j both belong to S, x_ij in the equation for node j cancels −x_ij in the equation for node i, we obtain

    v  =  Σ_{ i ∈ S } Σ_{ j ∈ S̄ } x_ij  −  Σ_{ i ∈ S̄ } Σ_{ j ∈ S } x_ij  =  F_x(S, S̄).     (4.4)

Substituting x_ij ≤ u_ij in the first summation and x_ij ≥ 0 in the second summation shows that

    F_x(S, S̄)  ≤  Σ_{ i ∈ S } Σ_{ j ∈ S̄ } u_ij  =  C(S, S̄).     (4.5)
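Definitions (4.2)-(4.5) can be checked numerically. The following helper is an illustration, not part of the text; it computes both quantities for a given source-side node set S:

```python
def cut_flow_and_capacity(u, x, S):
    """Net flow F_x(S, S-bar) and capacity C(S, S-bar) of an s-t
    cutset, following (4.2) and (4.3); S is the set of nodes on the
    source side, and u, x are dicts keyed by arc (i, j)."""
    Fx = sum(x[i, j] for (i, j) in x if i in S and j not in S) \
       - sum(x[i, j] for (i, j) in x if i not in S and j in S)
    C = sum(u[i, j] for (i, j) in u if i in S and j not in S)
    return Fx, C
```

For a feasible flow, F_x(S, S̄) is the same for every s-t cutset (it equals the flow value v), while the capacities may differ, consistent with (4.4) and (4.5).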
This result is the weak duality property of the maximum flow problem when viewed as a linear program. Like most weak duality results, it is the "easy" half of duality theory. The more substantive strong duality property asserts that (4.5) holds as an equality for some choice of x and some choice of an s-t cutset. This strong duality property is the max-flow min-cut theorem.

Theorem 4.1. (Max-Flow Min-Cut Theorem) The maximum value of flow from s to t equals the minimum capacity of all s-t cutsets.

Proof. Let x denote the maximum flow vector and v denote the maximum flow value. (Linear programming theory, or our subsequent algorithmic developments, guarantee that the problem always has a maximum flow as long as some cutset has finite capacity.) Define S as the set of labeled nodes in the residual network G(x) when we apply the labeling algorithm with the initial flow x. Let S̄ = N − S. Clearly, since x is a maximum flow, s ∈ S and t ∈ S̄. Adding the flow conservation equations for nodes in S, we obtain (4.4). Note that nodes in S̄ cannot be labeled from the nodes in S; hence r_ij = 0 for each forward arc (i, j) in the cutset (S, S̄). Since r_ij = u_ij − x_ij + x_ji, the conditions x_ij ≤ u_ij and x_ji ≥ 0 imply that x_ij = u_ij for each forward arc in the cutset (S, S̄) and x_ij = 0 for each backward arc in the cutset. Making these substitutions in (4.4) yields

    v  =  F_x(S, S̄)  =  Σ_{ i ∈ S } Σ_{ j ∈ S̄ } u_ij  =  C(S, S̄).     (4.6)

But we observed earlier that v is a lower bound on the capacity of any s-t cutset. Consequently, the cutset (S, S̄) is a minimum capacity s-t cutset and its capacity equals the maximum flow value. We thus have established the theorem.

The proof of this theorem not only establishes the max-flow min-cut property, but the same argument shows that when the labeling algorithm terminates, it has at hand both the maximum flow value (and a maximum flow vector) and a minimum capacity s-t cutset. But does the labeling algorithm terminate finitely? Each labeling iteration of the algorithm scans any node at most once, inspecting each arc in A(i). Consequently, each labeling iteration scans each arc at most once and requires O(m) computations. If all arc capacities are integral and bounded by a finite number U, then the capacity of the cutset (s, N − {s}) is at most nU. Since the labeling algorithm increases the flow value by at least one unit in any iteration, it terminates within nU iterations.
This bound on the number of iterations is not entirely satisfactory for large values of U; if U = 2ⁿ, the bound is exponential in the number of nodes. Moreover, the algorithm can indeed perform that many iterations. In addition, if the capacities are irrational, the algorithm may not terminate: although the successive flow values converge, they may not converge to the maximum flow value. Thus if the method is to be effective, we must select the augmenting paths carefully. Several refinements of the algorithm, including those we consider in Sections 4.2 - 4.4, overcome this difficulty and obtain an optimum flow even if the capacities are irrational; moreover, the max-flow min-cut theorem (and our proof of Theorem 4.1) is true even if the data are irrational.

A second drawback of the labeling algorithm is its "forgetfulness". At each iteration, the algorithm generates node labels that contain information about augmenting paths from the source to other nodes. The implementation we have described erases the labels as it proceeds from one iteration to the next, even though much of this information may be valid in the next residual network. Erasing the labels therefore destroys potentially useful information. Ideally, we should retain a label when it can be used profitably in later computations.

4.2 Decreasing the Number of Augmentations

The bound of nU on the number of augmentations in the labeling algorithm is not satisfactory from a theoretical perspective. Furthermore, without further modifications, the augmenting path algorithm may take Ω(nU) augmentations, as the example given in Figure 4.2 illustrates.

Flow decomposition shows that, in principle, augmenting path algorithms should be able to find a maximum flow in no more than m augmentations. For suppose that x is an optimum flow and y is any initial flow (possibly zero). By the flow decomposition property, it is possible to obtain x from y by a sequence of at most m augmentations on augmenting paths from s to t plus flows around augmenting cycles. If we define x' as the flow vector obtained from y by applying only the augmenting paths, then x' is also a maximum flow (flows around cycles do not change the flow value). This result shows that it is, in theory, possible to find a maximum flow using at most m augmentations. Unfortunately, to apply this flow decomposition argument, we need to know a maximum flow. No algorithm developed in the literature comes close to achieving this theoretical bound. Nevertheless, it is possible to improve considerably on the bound of O(nU) augmentations of the basic labeling algorithm.
Figure 4.2 A pathological example for the labeling algorithm. (a) The input network with arc capacities. (b) After augmenting along the path s-a-b-t; arc flow is indicated beside the arc capacity. (c) After augmenting along the path s-b-a-t. After 2×10⁶ augmentations, alternately along s-a-b-t and s-b-a-t, the flow is maximum.
One natural specialization of the augmenting path algorithm is to augment flow along a "shortest path" from the source to the sink, defined as a path consisting of the least number of arcs. If we augment flow along a shortest path, then the length of any shortest path either stays the same or increases. Moreover, within m augmentations, the length of the shortest path is guaranteed to increase. (We will prove these results in the next section.) Since no path contains more than n−1 arcs, this rule guarantees that the number of augmentations is at most (n−1)m.

An alternative is to augment flow along a path of maximum residual capacity. This specialization also leads to improved complexity. Let v be any flow value and v* be the maximum flow value. By flow decomposition, the network contains at most m augmenting paths whose residual capacities sum to (v* − v). Thus the maximum capacity augmenting path has residual capacity at least (v* − v)/m. Now consider a sequence of 2m consecutive maximum capacity augmentations, starting with flow value v. At least one of these augmentations must augment the flow by an amount (v* − v)/2m or less, for otherwise we would have a maximum flow. Thus, after 2m or fewer maximum capacity augmentations, the algorithm reduces the residual capacity of a maximum capacity augmenting path by a factor of at least two. Since this capacity is initially at most U and must be at least 1 until the flow is maximum, after O(m log U) maximum capacity augmentations the flow must be maximum. (Note that we are essentially repeating the argument used to establish the geometric improvement approach discussed in Section 1.6.)

In the following section, we consider another algorithm for reducing the number of augmentations.
4.3 Shortest Augmenting Path Algorithm

A natural approach to augmenting along shortest paths is to successively look for shortest paths by performing a breadth first search in the residual network. If the labeling algorithm maintains the set L of labeled nodes as a queue, then by examining the labeled nodes in a first-in, first-out order, it would obtain a shortest path in the residual network. Each of these iterations would take O(m) steps both in the worst case and in practice, and (by our subsequent observations) the resulting computation time would be O(nm²). Unfortunately, this computation time is excessive. We can improve this running time by exploiting the fact that the minimum distance from any node i to the sink node t is monotonically nondecreasing over all augmentations. By fully exploiting this property, we can reduce the average time per augmentation to O(n).
The Algorithm

The concept of distance labels will prove to be an important construct in the maximum flow algorithms that we discuss in this section and in Sections 4.4 and 4.5. A distance function d : N → Z⁺ with respect to the residual capacities r_ij is a function from the set of nodes to the nonnegative integers. We say that a distance function is valid if it satisfies the following two conditions:

C4.1. d(t) = 0;

C4.2. d(i) ≤ d(j) + 1 for every arc (i, j) ∈ A with r_ij > 0.

We refer to d(i) as the distance label of node i and to condition C4.2 as the validity condition. It is easy to demonstrate that d(i) is a lower bound on the length of the shortest directed path from node i to node t in the residual network. Let i = i₁ − i₂ − i₃ − ... − iₖ − t be any path of length k in the residual network from node i to t. Then, from C4.2, we have d(i) = d(i₁) ≤ d(i₂) + 1, d(i₂) ≤ d(i₃) + 1, ..., d(iₖ) ≤ d(t) + 1 = 1. These inequalities imply that d(i) ≤ k for any path of length k in the residual network and, hence, any shortest path from node i to t contains at least d(i) arcs. If for each node i the distance label d(i) equals the length of the shortest path from i to t in the residual network, then we call the distance labels exact. For example, in Figure 4.1(c), d = (0, 0, 0, 0) is a valid distance label, though d = (3, 1, 2, 0) represents the exact distance labels.

We now define some additional notation. An arc (i, j) in the residual network is admissible if it satisfies d(i) = d(j) + 1. Other arcs are inadmissible. A path from s to t consisting entirely of admissible arcs is an admissible path. The algorithm we describe next repeatedly augments flow along admissible paths. For any admissible path of length k, d(s) = k. Since d(s) is a lower bound on the length of any path from the source to the sink, the algorithm augments flows along shortest paths in the residual network. Thus, we refer to the algorithm as the shortest augmenting path algorithm.

Whenever we augment along a path, each of the distance labels for nodes in the path is exact. However, for other nodes in the network it is not necessary to maintain exact distances; it suffices to have valid distances, which are lower bounds on the exact distances. There is no particular urgency to compute these distances exactly. By allowing the distance label of node i to be less than the distance from i to t, we maintain flexibility in the algorithm, without incurring any significant cost.
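Conditions C4.1 and C4.2 are easy to verify mechanically. The following check is illustrative (the dict encodings are assumptions, not the text's notation):

```python
def is_valid_distance_labeling(d, r, t):
    """Check conditions C4.1 and C4.2 for distance labels d with
    respect to residual capacities r (a dict over arcs (i, j))."""
    if d[t] != 0:                      # C4.1
        return False
    # C4.2: d(i) <= d(j) + 1 on every residual arc with r_ij > 0
    return all(d[i] <= d[j] + 1 for (i, j), rij in r.items() if rij > 0)
```

On a residual network that is a directed path 1 → 2 → 3 → 4 with t = 4, both the all-zero labels and the exact labels (3, 2, 1, 0) are valid, mirroring the remark about Figure 4.1(c).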
We can compute the initial distance labels by performing a backward breadth first search of the residual network, starting at the sink node. The algorithm generates an admissible path by adding admissible arcs, one at a time, as follows. It maintains a path from the source node to some node i*, called the current node, consisting entirely of admissible arcs. We call this path a partial admissible path and store it using predecessor indices, i.e., pred(j) = i for each arc (i, j) on the path. The algorithm performs one of two steps at the current node: advance or retreat. The advance step identifies some admissible arc (i*, j*) emanating from node i*, adds it to the partial admissible path, and designates j* as the new current node. If no admissible arc emanates from node i*, the algorithm performs the retreat step. This step increases the distance label of node i* so that at least one admissible arc emanates from node i* (we refer to this step as a relabel operation). Increasing d(i*) makes the arc (pred(i*), i*) inadmissible (assuming i* ≠ s). Consequently, we delete (pred(i*), i*) from the partial admissible path, and node pred(i*) becomes the new current node. Whenever the partial admissible path becomes an admissible path (i.e., contains node t), the algorithm makes a maximum possible augmentation on this path and begins again with the source as the current node. The algorithm terminates when d(s) ≥ n, indicating that the network contains no augmenting path from the source to the sink. We next describe the algorithm formally.

algorithm SHORTEST AUGMENTING PATH;
begin
    perform a backward breadth first search of the residual network
        from the sink node to obtain the distance labels d(i);
    x : = 0 and i* : = s;
    while d(s) < n do
    begin
        if i* has an admissible arc then ADVANCE(i*)
        else RETREAT(i*);
        if i* = t then AUGMENT and set i* : = s;
    end;
end;

procedure ADVANCE(i*);
begin
    let (i*, j*) be an admissible arc in A(i*);
    pred(j*) : = i* and i* : = j*;
end;
procedure RETREAT(i*);
begin
    d(i*) : = min { d(j) + 1 : (i*, j) ∈ A(i*) and r_i*j > 0 };
    if i* ≠ s then i* : = pred(i*);
end;

procedure AUGMENT;
begin
    using the predecessor indices, identify an augmenting path P from the source to the sink;
    Δ : = min { r_ij : (i, j) ∈ P };
    augment Δ units of flow along path P;
end;
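The advance/retreat/augment organization above, together with the current-arc lists described next, can be sketched as follows. This is an illustrative implementation, not the paper's: the 0-based node numbering, the dict encoding, and the symmetric arc set convention are assumptions, and it returns only the flow value.

```python
from collections import deque

def shortest_augmenting_path(n, u, s, t):
    """Shortest augmenting path algorithm with distance labels and
    current-arc lists.  n -- number of nodes 0..n-1; u -- capacity
    dict with (j, i) present whenever (i, j) is."""
    r = dict(u)
    adj = {i: [] for i in range(n)}
    for (i, j) in u:
        adj[i].append(j)
    # Initial labels: backward breadth first search from the sink.
    d = [n] * n
    d[t] = 0
    queue = deque([t])
    while queue:
        j = queue.popleft()
        for i in adj[j]:             # arc (i, j) exists by symmetry
            if d[i] == n and r[i, j] > 0:
                d[i] = d[j] + 1
                queue.append(i)
    current = [0] * n                # current-arc position in each list A(i)
    pred = [None] * n
    i, v = s, 0
    while d[s] < n:
        # scan A(i) from the current arc for an admissible arc
        found = False
        while current[i] < len(adj[i]):
            j = adj[i][current[i]]
            if r[i, j] > 0 and d[i] == d[j] + 1:
                found = True
                break
            current[i] += 1
        if found:                    # ADVANCE
            pred[j] = i
            i = j
            if i == t:               # AUGMENT along the admissible path
                delta, k = float('inf'), t
                while k != s:
                    delta = min(delta, r[pred[k], k])
                    k = pred[k]
                k = t
                while k != s:
                    r[pred[k], k] -= delta
                    r[k, pred[k]] += delta
                    k = pred[k]
                v += delta
                i = s
        else:                        # RETREAT (relabel)
            labels = [d[j] + 1 for j in adj[i] if r[i, j] > 0]
            d[i] = min(labels) if labels else n
            current[i] = 0           # current arc becomes the first arc again
            if i != s:
                i = pred[i]
    return v
```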
We use the following data structure to select an admissible arc emanating from a node. We maintain the list A(i) of arcs emanating from each node i. Arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current-arc (i, j), which is the current candidate for the next advance step. Initially, the current-arc of node i is the first arc in its arc list. The algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the arc list the current arc. When the algorithm has examined all arcs in A(i), it updates the distance label of node i, and the current arc once again becomes the first arc in its arc list. In our subsequent discussion we shall always implicitly assume that the algorithms select admissible arcs using this technique.
Correctness of the Algorithm

We first show that the shortest augmenting path algorithm correctly solves the maximum flow problem.

Lemma 4.1. The shortest augmenting path algorithm maintains valid distance labels at each step. Moreover, each relabel step strictly increases the distance label of a node.

Proof. We show that the algorithm maintains valid distance labels at every step by performing induction on the number of augment and relabel steps. Initially, the algorithm constructs valid distance labels. Assume, inductively, that the distance function is valid prior to a step, i.e., it satisfies the validity condition C4.2. We need to check whether these conditions remain valid (i) after an augment step (when the residual graph changes), and (ii) after a relabel step.

(i) A flow augmentation on arc (i, j) might delete this arc from the residual network, but this modification to the residual network does not affect the validity of the distance function for this arc. Augmentation on arc (i, j) might, however, create an additional arc (j, i) with r_ji > 0 and, therefore, also create an additional condition d(j) ≤ d(i) + 1 that needs to be satisfied. The distance labels satisfy this validity condition, though, since d(i) = d(j) + 1 by the admissibility property of the augmenting path.

(ii) The algorithm performs a relabel step at node i when the current arc reaches the end of the arc list A(i). Observe that if an arc (i, j) is inadmissible at some stage, then it remains inadmissible until d(i) increases, because of our inductive hypothesis that the distance labels are nondecreasing. Thus, when the current arc reaches the end of the arc list, no arc (i, j) ∈ A(i) satisfies d(i) = d(j) + 1 and r_ij > 0. Hence, d(i) < min { d(j) + 1 : (i, j) ∈ A(i) and r_ij > 0 } = d'(i), thereby establishing the second part of the lemma. Finally, the choice for changing d(i) ensures that the condition d(i) ≤ d(j) + 1 remains valid for all (i, j) in the residual network; in addition, since d(i) increases, the conditions d(k) ≤ d(i) + 1 remain valid for all arcs (k, i) in the residual network.

Theorem 4.2. The shortest augmenting path algorithm correctly computes a maximum flow.
Proof. The algorithm terminates when d(s) ≥ n. Since d(s) is a lower bound on the length of the shortest augmenting path from s to t, this condition implies that the network contains no augmenting path from the source to the sink, which is the termination criterion for the generic augmenting path algorithm.

At termination of the algorithm, we can obtain a minimum s-t cutset as follows. For 0 ≤ k < n, let α_k denote the number of nodes with distance label equal to k. Note that α_k* must be zero for some k* ≤ n − 1, since Σ_{ k = 0 }^{ n−1 } α_k ≤ n − 1. (Recall that d(s) ≥ n.) Let S = { i ∈ N : d(i) > k* } and S̄ = N − S. When d(s) ≥ n and the algorithm terminates, s ∈ S and t ∈ S̄, and both the sets S and S̄ are nonempty. Consider the s-t cutset (S, S̄). By construction, d(i) > d(j) + 1 for each arc (i, j) ∈ (S, S̄). The validity condition C4.2 then implies that r_ij = 0 for each (i, j) ∈ (S, S̄). Hence, (S, S̄) is a minimum s-t cutset and the current flow is maximum.
Complexity of the Algorithm

We next show that the algorithm computes a maximum flow in O(n²m) time.

Lemma 4.2. (a) Each distance label increases at most n times. Consequently, the total number of relabel steps is at most n². (b) The number of augment steps is at most nm/2.

Proof. Each relabel step at node i increases d(i) by at least one. After the algorithm has relabeled node i at most n times, d(i) ≥ n. From this point on, the algorithm never selects node i again during an advance step, since for every node k in the current path, d(k) ≤ d(s) < n. Thus the algorithm relabels a node at most n times, and the total number of relabel steps is at most n².

Each augment step saturates at least one arc, i.e., decreases its residual capacity to zero. Suppose that the arc (i, j) becomes saturated at some iteration, at which point d(i) = d(j) + 1. Then no more flow can be sent on (i, j) until flow is sent back from node j to node i, at which point d'(j) = d'(i) + 1 ≥ d(i) + 1 = d(j) + 2. Hence, between two consecutive saturations of arc (i, j), d(i) increases by at least 2 units. Consequently, any arc (i, j) can become saturated at most n/2 times, and the total number of arc saturations is no more than nm/2.
Theorem 4.3. The shortest augmenting path algorithm runs in O(n²m) time.

Proof. The algorithm performs O(nm) flow augmentations, and each augmentation takes O(n) time, resulting in O(n²m) total effort in the augmentation steps. Each advance step increases the length of the partial admissible path by one, and each retreat step decreases its length by one; since each partial admissible path has length at most n, the algorithm requires at most O(n² + n²m) advance and retreat steps. The first term comes from the number of retreat (relabel) steps, and the second term from the number of augmentations, which is bounded by nm/2 by the previous lemma. For each node i, the algorithm performs the relabel operation O(n) times, each execution requiring O(|A(i)|) time. The total time spent in all relabel operations is Σ_{ i ∈ N } n |A(i)| = O(nm). Finally, we consider the time spent in identifying admissible arcs. The time taken to identify the admissible arc of node i is O(1) plus the time spent in scanning arcs in A(i). After having performed |A(i)| such scannings, the algorithm reaches the end of the arc list and relabels node i. Thus the total time spent in all scannings is O(Σ_{ i ∈ N } n |A(i)|) = O(nm). The combination of these time bounds establishes the theorem.
The proof of Theorem 4.3 also suggests an alternative termination condition for the shortest augmenting path algorithm. The termination criterion of d(s) ≥ n is satisfactory for a worst-case analysis, but may not be efficient in practice. Researchers have observed empirically that the algorithm spends too much time in relabeling, a major portion of which is done after it has already found a maximum flow. The algorithm can be improved by detecting the presence of a minimum cutset prior to performing these relabeling operations. We can do so by maintaining the number of nodes α_k with distance label equal to k, for 0 ≤ k < n. The algorithm updates this array after every relabel operation and terminates whenever it first finds a gap in the α array, i.e., α_k* = 0 for some k* < n. As we have seen in the proof of Theorem 4.2, S = { i : d(i) > k* } then identifies a minimum cutset (S, S̄).

The idea of augmenting flows along shortest paths is intuitively appealing and easy to implement in practice. The resulting algorithms identify at most O(nm) augmenting paths, and this bound is tight, i.e., on particular examples these algorithms perform Ω(nm) augmentations. The only way to improve the running time of the shortest augmenting path algorithm is to perform fewer computations per augmentation. The use of a sophisticated data structure, called dynamic trees, reduces the average time for each augmentation from O(n) to O(log n). This implementation of the maximum flow algorithm runs in O(nm log n) time, and obtaining further improvements appears quite difficult except in very dense networks. These implementations with sophisticated data structures appear to be primarily of theoretical interest, however, because maintaining the data structures requires substantial overhead that tends to increase rather than reduce the computational times in practice. A detailed discussion of dynamic trees is beyond the scope of this chapter.

Potential Functions and an Alternate Proof of Lemma 4.2(b)
powerful method for proving computational time bounds
is
to use potential
Potential function techniques are general purpose techniques for proving the
complexity of an algorithm by analyzing the effects of different steps on an appropriately •defined function.
The use
of potential functions enables us to define an "accounting"
relationship between the occurrences of various steps of an algorithm that can be used to
84
bound on
obtain a
the steps that might be difficult to obtain using other arguments.
Rather than formally introducing potential functions, we illustrate the technique by showing that the number of augmentations in the shortest augmenting path algorithm is O(nm). Suppose that in the shortest augmenting path algorithm we kept track of the number of admissible arcs in the residual network. Let F(k) denote the number of admissible arcs at the end of the k-th step; for the purpose of this argument, we count a step either as an augmentation or as a relabel operation. Let the algorithm perform K steps before it terminates. Clearly, F(0) <= m and F(K) >= 0. Each augmentation decreases the residual capacity of at least one arc to zero and hence reduces F by at least one unit. Each relabeling of a node i creates as many as |A(i)| new admissible arcs, and increases F by the same amount. Since the algorithm relabels any node at most n times (as a consequence of Lemma 4.1), this increase in F is at most nm over all relabelings. Since the initial value of F is at most m more than its terminal value, the total decrease in F due to all augmentations is at most m + nm. Thus the number of augmentations is at most m + nm = O(nm).

This argument is fairly representative of the potential function argument. Our objective was to bound the number of augmentations. We did so by defining a potential function that decreases whenever the algorithm performs an augmentation. The potential increases only when the algorithm relabels distances, and thus we can bound the number of augmentations using known bounds on the number of relabels. In general, we bound the number of steps of one type in terms of known bounds on the number of steps of other types.
4.4 Preflow-Push Algorithms

Augmenting path algorithms send flow by augmenting along a path. This basic step further decomposes into the more elementary operation of sending flow along an arc. Thus sending a flow of D units along a path of k arcs decomposes into k basic operations of sending a flow of D units along an arc of the path. We shall refer to each of these basic operations as a push.

A path augmentation has one advantage over a single push: it maintains conservation of flow at all nodes. In fact, the push-based algorithms such as those we develop in this and the following sections necessarily violate conservation of flow. Rather, these algorithms permit the flow into a node to exceed the flow out of this node. We will refer to any such flows as preflows. The two basic operations of the generic preflow-push methods are (i) pushing the flow on an admissible arc, and (ii) updating a distance label, as in the augmenting path algorithm described in the last section. (We define the distance labels and admissible arcs as in the previous section.)

Preflow-push algorithms have several advantages over augmentation based algorithms. First, they are more general and more flexible. Second, they can push flow closer to the sink before identifying augmenting paths. Third, they are better suited for distributed or parallel computation. Fourth, the best preflow-push algorithms currently outperform the best augmenting path algorithms in theory as well as in practice.
The Generic Algorithm

A preflow x is a function x: A -> R that satisfies (4.1c) and the following relaxation of (4.1b):

    sum {x_ji : (j, i) in A} - sum {x_ij : (i, j) in A} >= 0,  for all i in N - {s, t}.

The preflow-push algorithms maintain a preflow at each intermediate stage. For a given preflow x, we define the excess for each node i in N - {s, t} as

    e(i) = sum {x_ji : (j, i) in A} - sum {x_ij : (i, j) in A}.

We refer to a node with positive excess as an active node. We adopt the convention that the source and sink nodes are never active. The preflow-push algorithms perform all operations using only local information.
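As a small concrete illustration, the excess of each node under a given preflow can be computed directly from this definition (a minimal Python sketch; the dictionary-based arc representation and the example numbers are our own, not part of the text):

```python
def excesses(n, preflow):
    """Compute e(i) = (flow into i) - (flow out of i) for nodes 0..n-1.

    preflow: dict mapping an arc (i, j) to its flow x_ij.
    """
    e = [0] * n
    for (i, j), x_ij in preflow.items():
        e[j] += x_ij   # x_ij enters node j
        e[i] -= x_ij   # x_ij leaves node i
    return e

# Example: source 0 pushes 2 units to node 1 and 3 units to node 2,
# and node 1 forwards 1 unit to node 3.
print(excesses(4, {(0, 1): 2, (0, 2): 3, (1, 3): 1}))  # [-5, 1, 3, 1]
```

With node 0 as the source and node 3 as the sink, nodes 1 and 2 carry positive excess and would be the active nodes under the convention above.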
that
>
to the current distance labels.
of the
We
termination), the network contains at
with
choose some active node and to send
algorithms,
as
new
labels,
As If
in the shortest aug;menting
the
then
it
path
method cannot send excess increases the distance label
admissible arc. The algorithm terminates
no active nodes.
The preflow-push algorithm uses
the
86
procedure PREPROCESS;
begin
    x := 0;
    perform a backward breadth-first search of the residual network, starting at node t, to determine initial distance labels d(i);
    x_sj := u_sj for each arc (s, j) in A(s) and d(s) := n;
end;

procedure PUSH/RELABEL(i);
begin
    if the network contains an admissible arc (i, j) then
        push delta := min {e(i), r_ij} units of flow from node i to node j
    else replace d(i) by min {d(j) + 1 : (i, j) in A(i) and r_ij > 0};
end;
A push of delta units of flow from node i to node j decreases both e(i) and r_ij by delta units and increases both e(j) and r_ji by delta units. We say that a push of delta units of flow on arc (i, j) is saturating if delta = r_ij and nonsaturating otherwise. We refer to the process of increasing the distance label of a node as a relabel operation. The purpose of the relabel operation is to create at least one admissible arc on which the algorithm can perform further pushes.
The following generic version of the preflow-push algorithm combines the subroutines just described.

algorithm PREFLOW-PUSH;
begin
    PREPROCESS;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;
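The generic algorithm above can be sketched concretely as follows (a minimal Python rendering on an adjacency matrix; the graph representation, the "select any active node" policy, and the example network in the test are our own illustrative choices, not part of the original pseudocode):

```python
from collections import deque

def preflow_push(n, cap, s, t):
    """Generic preflow-push max-flow sketch.
    n: number of nodes 0..n-1; cap[i][j]: arc capacities; s, t: source, sink."""
    flow = [[0] * n for _ in range(n)]
    e = [0] * n                                # node excesses
    # PREPROCESS: distance labels via backward BFS from t in the residual network
    d = [0] * n
    seen, q = {t}, deque([t])
    while q:
        j = q.popleft()
        for i in range(n):
            if i not in seen and cap[i][j] - flow[i][j] > 0:
                d[i] = d[j] + 1
                seen.add(i)
                q.append(i)
    d[s] = n
    for j in range(n):                         # saturate the arcs leaving the source
        if cap[s][j] > 0:
            flow[s][j] = cap[s][j]
            flow[j][s] = -cap[s][j]            # skew symmetry encodes residual arcs
            e[j] += cap[s][j]
    active = [i for i in range(n) if i not in (s, t) and e[i] > 0]
    while active:
        i = active[-1]                         # select an active node
        pushed = False
        for j in range(n):
            r = cap[i][j] - flow[i][j]
            if r > 0 and d[i] == d[j] + 1:     # admissible arc: PUSH
                delta = min(e[i], r)
                flow[i][j] += delta
                flow[j][i] -= delta
                e[i] -= delta
                e[j] += delta
                if j not in (s, t) and j not in active:
                    active.append(j)
                pushed = True
                break
        if not pushed:                         # no admissible arc: RELABEL
            d[i] = min(d[j] + 1 for j in range(n) if cap[i][j] - flow[i][j] > 0)
        if e[i] == 0:
            active.remove(i)
    return sum(flow[s][j] for j in range(n))   # net flow out of the source
```

The relabel step never minimizes over an empty set, since a node with positive excess always has a residual arc back toward the source (Lemma 4.3 below).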
It might be instructive to visualize the generic preflow-push algorithm in terms of a physical network: arcs represent flexible water pipes, nodes represent joints, and the distance function measures how far nodes are above the ground; in this network, we wish to send water from the source to the sink. In addition, we visualize flow in an admissible arc as water flowing downhill. Initially, we move the source node upward, and water flows to its neighbors. In general, water flows downhill towards the sink; however, occasionally flow becomes trapped locally at a node that has no downhill neighbors. At this point, we move the node upward, and again water flows downhill towards the sink. Eventually, no more flow can reach the sink. As we continue to move nodes upwards, the remaining excess flow eventually flows back towards the source. The algorithm terminates when all the water flows either into the sink or into the source.
Figure 4.3 illustrates the push/relabel steps applied to the example given in Figure 4.1(a). Figure 4.3(a) specifies the preflow determined by the preprocess step. Suppose the select step examines node 2. Since arc (2, 4) has residual capacity r24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value delta = min{2, 1} = 1 units. The push reduces the excess of node 2 to 1. Arc (2, 4) is deleted from the residual network and arc (4, 2) is added to the residual network. Since node 2 is still active, it can be selected again for further pushes. The arcs (2, 3) and (2, 1) have positive residual capacities, but they do not satisfy the distance condition. Hence, the algorithm performs a relabel operation and gives node 2 a new distance d'(2) = min {d(3) + 1, d(1) + 1} = min{2, 5} = 2.

The preprocessing step accomplishes several important tasks. First, it gives each node adjacent to node s a positive excess, so that the algorithm can begin by selecting some node with positive excess. Second, since the preprocessing step saturates all arcs incident to node s, none of these arcs is admissible and setting d(s) = n will satisfy the validity condition C4.2. Third, since d(s) = n is a lower bound on the length of any shortest path from s to t, the residual network contains no path from s to t. Since distances in d are nondecreasing, we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there never is any need to push flow from s again.

In the push/relabel(i) step, we identify an admissible arc in A(i) using the same data structure we used in the shortest augmenting path algorithm. We maintain with each node i a current arc (i, j), which is the current candidate for the push operation. We choose the current arc by sequentially scanning the arc list. We have seen earlier that scanning the arc lists takes O(nm) total time, if the algorithm relabels each node O(n) times.
[Figure 4.3. An illustration of the push and relabel steps: (a) the residual network after the preprocessing step; (b) after the execution of step PUSH(2); (c) after the execution of step RELABEL(2).]

Assuming that the generic preflow-push algorithm terminates, we can easily show that it finds a maximum flow. The algorithm terminates when the excess resides either at the source or at the sink, implying that the current preflow is a flow. Since d(s) = n, the residual network contains no path from the source to the sink. This condition is the termination criterion of the augmenting path algorithm, and thus the total flow on arcs directed into the sink is the maximum flow value.
Complexity of the Algorithm

We now analyze the complexity of the algorithm. We begin by establishing one important result: that distance labels are always valid and do not increase too many times. The first of these conclusions follows from Lemma 4.1, because as in the shortest augmenting path algorithm, the preflow-push algorithm pushes flow only on admissible arcs and relabels a node only when no admissible arc emanates from it. The second conclusion follows from the following lemma.

Lemma 4.3. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from i to s in the residual network.

Proof. By the flow decomposition theory, any preflow x can be decomposed with respect to the original network G into nonnegative flows along (i) paths from the source s to t, (ii) paths from the source s to active nodes, and (iii) flows around directed cycles. Let i be an active node relative to the preflow x in G. Then the flow decomposition of x must contain a path P from s to i, since paths from s to t and flows around cycles do not contribute to the excess at node i. The residual network then contains the reversal of P (P with the orientation of each arc reversed), and hence a directed path from i to s.

This lemma implies that during a relabel step, the algorithm does not minimize over an empty set.

Lemma 4.4. For each node i in N, d(i) < 2n.

Proof. The last time the algorithm relabeled node i, the node had a positive excess, and hence the residual network contained a path of length at most n-1 from node i to node s. The fact that d(s) = n and condition C4.2 imply that d(i) <= d(s) + n - 1 < 2n.

Lemma 4.5. (a) Each distance label increases at most 2n times. Consequently, the total number of relabel steps is at most 2n^2. (b) The number of saturating pushes is at most nm.

Proof. The proof is very similar to that of Lemma 4.2.
Lemma 4.6. The number of nonsaturating pushes is O(n^2 m).

Proof. We prove the lemma using an argument based on potential functions. Consider the potential function F = sum {d(i) : i in I}, where I denotes the set of active nodes. Since |I| < n and d(i) < 2n for all i in I, the initial value of F (after the preprocessing step) is at most 2n^2. At termination, F is zero. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by e >= 1 units. This operation increases F by at most e units. Since the total increase in d(i) for each node i throughout the running time of the algorithm is bounded by 2n, the total increase in F due to increases in distance labels is bounded by 2n^2.

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs a saturating or a nonsaturating push. A saturating push on arc (i, j) might create a new excess at node j, thereby increasing the number of active nodes by 1, and increasing F by d(j), which may be as much as 2n per saturating push, and hence 2n^2 m over all saturating pushes. Next note that a nonsaturating push on arc (i, j) does not increase |I|. The nonsaturating push will decrease F by d(i) since i becomes inactive, but it simultaneously increases F by d(j) = d(i) - 1 if the push causes node j to become active. If node j was active before the push, then F decreases by an amount d(i). The net decrease in F is at least 1 unit per nonsaturating push.

We summarize these facts. The initial value of F is at most 2n^2 and the maximum possible increase in F is 2n^2 + 2n^2 m. Each nonsaturating push decreases F by at least one unit and F always remains nonnegative. Hence, the nonsaturating pushes can occur at most 2n^2 + 2n^2 + 2n^2 m = O(n^2 m) times, proving the lemma.

Finally, we indicate how the algorithm keeps track of active nodes for the push/relabel steps. The algorithm maintains a set S of active nodes. It adds to S nodes that become active following a push and are not already in S, and deletes from S nodes that become inactive following a nonsaturating push. Several data structures (for example, doubly linked lists) are available for storing S so that the algorithm can add, delete, or select elements from it in O(1) time. Consequently, it is easy to implement the preflow-push algorithm in O(n^2 m) time. We have thus established the following theorem:

Theorem 4.4. The generic preflow-push algorithm runs in O(n^2 m) time.
Specialization of the Generic Algorithm

The running time of the generic preflow-push algorithm is comparable to the bound of the shortest augmenting path algorithm. However, the preflow-push algorithm has several nice features, in particular, its flexibility and its potential for further improvements. By specifying different rules for selecting nodes for the push/relabel operations, we can derive many different algorithms from the generic version. For example, suppose that we always select an active node with the highest distance label for the push/relabel step. Let h* = max {d(i) : e(i) > 0, i in N} at some point of the algorithm. Then nodes with distance label h* push flow to nodes with distance label h*-1, these nodes, in turn, push flow to nodes with distance label h*-2, and so on; the excess moves up and then gradually comes down. Note that if the algorithm relabels no node during n consecutive node examinations, then all excess reaches the sink and the algorithm terminates. Since the algorithm requires O(n^2) relabel operations, we immediately obtain a bound of O(n^3) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, this algorithm performs O(n^3) nonsaturating pushes.
To implement this algorithm, we maintain the lists LIST(r) = {i in N : e(i) > 0 and d(i) = r}, and a variable level which is an upper bound on the highest index r for which LIST(r) is nonempty. We can store these lists as doubly linked lists so that adding, deleting, or selecting an element takes O(1) time. To identify the highest indexed nonempty list, we start at LIST(level) and sequentially scan the lower indexed lists. We leave it as an exercise to show that the overall effort needed to scan the lists is bounded by n plus the total increase in the distance labels, which is O(n^2). The following theorem is now evident.

Theorem 4.5. The preflow-push algorithm that always pushes flow from an active node with the highest distance label runs in O(n^3) time.

The O(n^3) bound for the highest label preflow-push algorithm is straightforward to obtain and can be improved. Researchers have shown, using a more clever analysis, that the highest label preflow-push algorithm in fact runs in O(n^2 sqrt(m)) time. We will next describe another implementation of the generic preflow-push algorithm that dramatically reduces the number of nonsaturating pushes, from O(n^2 m) to O(n^2 log U). Recall that U represents the largest arc capacity in the network. We refer to this algorithm as the excess-scaling algorithm since it is based on scaling the node excesses.

4.5 Excess-Scaling Algorithm

The generic preflow-push algorithm allows flows at each intermediate step to violate the mass balance equations. By pushing flows from active nodes, the algorithm attempts to satisfy the mass balance equations. The function e_max = max {e(i) : i is an active node} is one measure of the infeasibility of a preflow. Note, though, that during the execution of the generic algorithm, we would observe no particular pattern in e_max, except that e_max eventually decreases to value 0. In this section, we develop an excess-scaling technique that systematically reduces e_max to 0.

The excess-scaling algorithm is based on the following ideas. Let D denote an upper bound on e_max; we refer to this bound as the excess-dominator. The excess-scaling algorithm pushes flow from nodes whose excess is at least D/2 >= e_max/2. This choice assures that during nonsaturating pushes the algorithm sends relatively large excess closer to the sink. Pushes carrying small amounts of flow are of little benefit and can cause bottlenecks that retard the algorithm's progress.

The algorithm also does not allow the maximum excess to increase beyond D. This algorithmic strategy may prove to be useful for the following reason. Suppose several nodes send flow to a single node j, creating a very large excess. It is likely that node j could not send the accumulated flow closer to the sink, and the algorithm would then need to increase the distance label of node j and return much of its excess back toward the source. Thus, pushing too much flow to any node is likely to be a wasted effort. The excess-scaling algorithm has the following algorithmic description.
algorithm EXCESS-SCALING;
begin
    PREPROCESS;
    K := ceil(log U);
    for k := K down to 0 do
    begin  (D-scaling phase)
        D := 2^k;
        while the network contains a node i with e(i) > D/2 do
            perform push/relabel(i) while ensuring that no node excess exceeds D;
    end;
end;
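One possible concrete rendering of this algorithm is the following Python sketch (adjacency-matrix representation; the initial labeling d(i) = 0 with d(s) = n, which is valid once the source arcs are saturated, and the example network in the test are our own simplifications, not part of the original pseudocode):

```python
def excess_scaling_max_flow(n, cap, s, t):
    """Excess-scaling preflow-push sketch for integral capacities."""
    flow = [[0] * n for _ in range(n)]
    e = [0] * n
    d = [0] * n
    d[s] = n                                   # valid labels after saturating source arcs
    for j in range(n):
        if cap[s][j] > 0:
            flow[s][j] = cap[s][j]
            flow[j][s] = -cap[s][j]
            e[j] += cap[s][j]
    U = max((cap[i][j] for i in range(n) for j in range(n)), default=1)
    Delta = 1 << max(U - 1, 0).bit_length()    # smallest power of two >= U
    while Delta >= 1:                          # the Delta-scaling phases
        while True:
            # among nodes with excess more than Delta/2, pick one with
            # minimum distance label (the node selection rule below)
            cand = [i for i in range(n) if i not in (s, t) and e[i] > Delta / 2]
            if not cand:
                break                          # start a new scaling phase
            i = min(cand, key=lambda v: d[v])
            for j in range(n):
                r = cap[i][j] - flow[i][j]
                if r > 0 and d[i] == d[j] + 1:             # admissible arc: PUSH
                    room = float('inf') if j in (s, t) else Delta - e[j]
                    delta = min(e[i], r, room)             # never exceed Delta at j
                    flow[i][j] += delta; flow[j][i] -= delta
                    e[i] -= delta; e[j] += delta
                    break
            else:                                          # no admissible arc: RELABEL
                d[i] = min(d[j] + 1 for j in range(n)
                           if cap[i][j] - flow[i][j] > 0)
        Delta //= 2
    return sum(flow[s][j] for j in range(n))
```

Because the selected node has the minimum distance label among large-excess nodes, the node receiving a push has excess at most Delta/2, so the pushed amount min {e(i), r_ij, Delta - e(j)} is always positive (this is the content of Lemma 4.7 below).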
The algorithm performs a number of scaling phases with the value of the excess-dominator D decreasing from phase to phase. We refer to a specific scaling phase with a certain value of D as the D-scaling phase. Initially, D = 2^ceil(log U), where the logarithm has base 2. Thus, U <= D < 2U. During the D-scaling phase, D/2 < e_max <= D, and e_max may vary up and down during the phase. When e_max <= D/2, a new scaling phase begins. After the algorithm has performed ceil(log U) + 1 scaling phases, e_max decreases to value 0 and we obtain the maximum flow.

The excess-scaling algorithm uses the same step push/relabel(i) as in the generic preflow-push algorithm, but with one slight difference: instead of pushing delta = min {e(i), r_ij} units of flow, it pushes delta = min {e(i), r_ij, D - e(j)} units. This change will ensure that the algorithm permits no excess to exceed D. The algorithm uses the following node selection rule to guarantee that no node excess exceeds D.

Selection Rule. Among all nodes with excess of more than D/2, select a node with minimum distance label (breaking ties arbitrarily).
Lemma 4.7. The algorithm satisfies the following two conditions:

C4.3. Each nonsaturating push sends at least D/2 units of flow.

C4.4. No excess ever exceeds D.

Proof. For every push on arc (i, j), we have e(i) > D/2 and e(j) <= D/2, since node i is a node with smallest distance label among nodes whose excess is more than D/2, and d(j) = d(i) - 1 < d(i) since arc (i, j) is admissible. Hence, by sending min {e(i), r_ij, D - e(j)} >= min {D/2, r_ij} units of flow, we ensure that a nonsaturating push carries at least D/2 units of flow. Further, the push operation increases only e(j). Let e'(j) be the excess at node j after the push. Then e'(j) = e(j) + min {e(i), r_ij, D - e(j)} <= e(j) + D - e(j) <= D. All node excesses thus remain less than or equal to D.

Lemma 4.8. The excess-scaling algorithm performs O(n^2) nonsaturating pushes per scaling phase and O(n^2 log U) pushes in total.

Proof. Consider the potential function F = sum {e(i) d(i)/D : i in N}. Since the algorithm has O(log U) scaling phases, the second assertion is a consequence of the first. The value of F at the beginning of the D-scaling phase is bounded by 2n^2, because e(i) is bounded by D and d(i) is bounded by 2n. During the push/relabel(i) step, one of the following two cases must apply:

Case 1. The algorithm is unable to find an admissible arc along which it can push flow. In this case the distance label of node i increases by e >= 1 units. This relabeling operation increases F by at most e units, because e(i) <= D. Since for each node i the total increase in d(i) throughout the running of the algorithm is bounded by 2n (by Lemma 4.4), the total increase in F due to the relabeling of nodes is bounded by 2n^2 in the D-scaling phase (actually, the increase in F due to node relabelings is at most 2n^2 over all scaling phases).

Case 2. The algorithm is able to identify an arc on which it can push flow, and so it performs either a saturating or a nonsaturating push. In either case, F decreases. A nonsaturating push on arc (i, j) sends at least D/2 units of flow from node i to node j, and since d(j) = d(i) - 1, after this operation F decreases by at least 1/2 units. Since the value of F at the beginning of a D-scaling phase is at most 2n^2 and the increases during this scaling phase sum to at most 2n^2 (from Case 1), the number of nonsaturating pushes in the phase is bounded by 8n^2.
95
lemma
This
we have
algorithm since
bound
implies a
0(nm
of
already seen that
+ n^ log U)
we have
—
easy,
if
we
use a scheme similar
e(i)
r
> A/2 and
d(i)
=
which LlST(r)
for
one used
to the
r),
is
and a variable
nonempty.
We
level
label.
We
which
is
maintain the
a lower
show
that the overall effort
needed
With
this observation,
we
to this
minimum
this identification is
to scan the lists
is
LIST(r) =
lists
bound on
{i
€
N
:
the smallest index
nonempty
identify the lowest indexed
of pushes performed by the algorithm plus
operation.
Up
time.
preflow-push method in Section
in the
and sequentially scanning the higher indexed
at LIST(level)
exercise to
more than A/2. Making
excess
node with the highest distance
4.4 to find a
0(nm)
require
as saturating
ignored the method needed to identify a node with the
among nodes with
distance label
— such
other operations
all
pushes, relabel operations and finding admissible arcs point,
for the excess-scaling
We
lists.
list
starting
leave as an
bounded by the number
0(n log U) and, hence,
is
not a bottleneck
can summarize our discussion by the following
result.
Theorem 4.6. The preflow-push algorithm with excess-scaling runs in O(nm + n^2 log U) time.
Networks with Lower Bounds on Flows

To conclude this section, we show how to solve maximum flow problems with nonnegative lower bounds on flows. Let l_ij >= 0 denote the lower bound for flow on any arc (i, j) in A. Although the maximum flow problem with zero lower bounds always has a feasible solution, the problem with nonnegative lower bounds could be infeasible. We can, however, determine the feasibility of this problem by solving a maximum flow problem with zero lower bounds as follows. Set x_ij = l_ij for each arc (i, j) in A. This choice gives us a pseudoflow with e(i) representing the excess or deficit of any node i in N. (We refer the reader to Section 5.4 for the definition of a pseudoflow with both excesses and deficits.) We introduce a super source, node s*, and a super sink, node t*. For each node i with e(i) > 0, we add an arc (s*, i) with capacity e(i), and for each node i with e(i) < 0, we add an arc (i, t*) with capacity -e(i). We then solve a maximum flow problem from s* to t*. Let x* denote the maximum flow and v* denote the maximum flow value in the transformed network. If v* = sum {e(i) : i in N, e(i) > 0}, then the original problem is feasible and choosing the flow on each arc (i, j) as x*_ij + l_ij is a feasible flow; otherwise, the problem is infeasible.
Once we have found a feasible flow, we apply any of the maximum flow algorithms with only one change: define the residual capacity of an arc (i, j) as r_ij = (u_ij - x_ij) + (x_ji - l_ji). The first and second terms in this expression denote, respectively, the residual capacity for increasing flow on arc (i, j) and for decreasing flow on arc (j, i). It is possible to establish the optimality of the solution generated by the algorithm by generalizing the max-flow min-cut theorem to accommodate situations with lower bounds. These observations show that it is possible to solve the maximum flow problem with nonnegative lower bounds by two applications of the maximum flow algorithms we have already discussed.
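The feasibility construction above can be sketched in code as follows (a Python illustration; the Edmonds-Karp subroutine is our own choice of max-flow solver, and the uncapacitated return arc (t, s), which lets the s-t flow value float freely during the feasibility check, is a standard detail the text leaves implicit):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow (our choice of subroutine for this sketch)."""
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for a shortest augmenting path
            i = q.popleft()
            for j in range(n):
                if parent[j] == -1 and cap[i][j] - flow[i][j] > 0:
                    parent[j] = i
                    q.append(j)
        if parent[t] == -1:
            return total
        delta = float('inf')                   # bottleneck residual capacity
        j = t
        while j != s:
            delta = min(delta, cap[parent[j]][j] - flow[parent[j]][j])
            j = parent[j]
        j = t
        while j != s:
            flow[parent[j]][j] += delta
            flow[j][parent[j]] -= delta
            j = parent[j]
        total += delta

def feasible_flow_exists(n, s, t, arcs):
    """Feasibility of an s-t flow with lower bounds.
    arcs: dict (i, j) -> (l_ij, u_ij).  Set x_ij = l_ij, compute the node
    imbalances e(i), attach a super source s* and super sink t*, and test
    whether the max flow saturates all arcs leaving s*."""
    work = dict(arcs)
    big = sum(u for (_, u) in arcs.values()) + 1
    work[(t, s)] = (0, big)            # assumed return arc, as noted above
    N = n + 2
    s_star, t_star = n, n + 1
    cap = [[0] * N for _ in range(N)]
    e = [0] * n
    for (i, j), (l, u) in work.items():
        cap[i][j] += u - l             # shifted flow x' = x - l lives in [0, u - l]
        e[j] += l                      # lower bound l creates excess at j ...
        e[i] -= l                      # ... and an equal deficit at i
    supply = 0
    for i in range(n):
        if e[i] > 0:
            cap[s_star][i] = e[i]
            supply += e[i]
        elif e[i] < 0:
            cap[i][t_star] = -e[i]
    return max_flow(N, cap, s_star, t_star) == supply
```

For example, lower bounds {(0,1): 1, (1,2): 1} on a path 0-1-2 are feasible, while forcing two units through (1, 2) when at most one unit can enter node 1 is not.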
5. MINIMUM COST FLOWS

In this section, we consider algorithmic approaches for the minimum cost flow problem. We consider the following node-arc formulation of the problem.

Minimize  sum {c_ij x_ij : (i, j) in A}                                   (5.1a)

subject to

    sum {x_ij : (i, j) in A} - sum {x_ji : (j, i) in A} = b(i), for all i in N,   (5.1b)

    0 <= x_ij <= u_ij, for each (i, j) in A.                              (5.1c)

We assume that the lower bounds l_ij on arc flows are all zero and that arc costs are nonnegative. Let C = max {c_ij : (i, j) in A} and U = max [max {|b(i)| : i in N}, max {u_ij : (i, j) in A}]. The transformations T1 and T3 in Section 2.4 imply that these assumptions do not impose any loss of generality. We also remind the reader of our blanket assumption that all data (cost, supply/demand and capacity) are integral.

We assume that the minimum cost flow problem satisfies the following two conditions.

A5.1. Feasibility Assumption. We assume that sum {b(i) : i in N} = 0 and that the minimum cost flow problem has a feasible solution. We can ascertain the feasibility of the minimum cost flow problem by solving a maximum flow problem as follows. Introduce a super source node s* and a super sink node t*. For each node i with b(i) > 0, add an arc (s*, i) with capacity b(i), and for each node i with b(i) < 0, add an arc (i, t*) with capacity -b(i). Now solve a maximum flow problem from s* to t*. If the maximum flow value equals sum {b(i) : i in N, b(i) > 0}, then the minimum cost flow problem is feasible; otherwise, it is infeasible.

A5.2. Connectedness Assumption. We assume that the network G contains an uncapacitated directed path (i.e., each arc in the path has infinite capacity) between every pair of nodes. We impose this condition, if necessary, by adding artificial arcs (1, j) and (j, 1) for each j in N and assigning a large cost and a very large capacity to each of these arcs. No such arc would appear in a minimum cost solution unless the problem contains no feasible solution without artificial arcs.
Our algorithms rely on the concept of residual networks. The residual network G(x) corresponding to a flow x is defined as follows: We replace each arc (i, j) in A by two arcs (i, j) and (j, i). The arc (i, j) has cost c_ij and residual capacity r_ij = u_ij - x_ij, and the arc (j, i) has cost -c_ij and residual capacity r_ji = x_ij. The residual network consists only of arcs with positive residual capacity.

The concept of residual networks poses some notational difficulties. For example, if the original network contains both the arcs (i, j) and (j, i), then the residual network may contain two arcs from node i to node j and/or two arcs from node j to node i with possibly different costs. Our notation for arcs assumes that at most one arc joins one node to any other node. By using more complex notation, we can easily treat this more general case. However, rather than changing our notation, we will assume that parallel arcs never arise (or, by inserting extra nodes on parallel arcs, we can produce a network without any parallel arcs).

Observe that any directed cycle in the residual network G(x) is an augmenting cycle with respect to the flow x, and vice-versa (see Section 2.1 for the definition of an augmenting cycle). This equivalence implies the following alternate statement of Theorem 2.4.

Theorem 5.1. A feasible flow x is an optimum flow if and only if the residual network G(x) contains no negative cost directed cycle.
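Theorem 5.1 suggests a direct optimality test: build the residual network and look for a negative cost directed cycle, for example with Bellman-Ford label corrections (a Python sketch; the arc-dictionary format, the all-zero initial labels acting as an artificial root, and the n-pass cycle detection are our own illustrative choices):

```python
def is_optimal_flow(n, arcs, x):
    """Theorem 5.1 check: a feasible flow x is optimal iff the residual
    network G(x) contains no negative cost directed cycle.
    arcs: dict (i, j) -> (cost, capacity); x: dict (i, j) -> flow (default 0)."""
    # Residual arcs: (i, j) with cost c when x_ij < u_ij, and (j, i) with
    # cost -c when x_ij > 0.
    res = []
    for (i, j), (c, u) in arcs.items():
        if x.get((i, j), 0) < u:
            res.append((i, j, c))
        if x.get((i, j), 0) > 0:
            res.append((j, i, -c))
    # Bellman-Ford passes; starting all labels at 0 is equivalent to adding
    # an artificial root joined to every node by a zero cost arc.
    dist = [0] * n
    for _ in range(n):
        updated = False
        for (i, j, c) in res:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                updated = True
        if not updated:
            return True        # labels converged: no negative cycle, x optimal
    return False               # an update on the n-th pass signals a negative cycle
```

For instance, with arcs (0,1) and (1,2) of cost 2 and arc (0,2) of cost 5 (all of capacity 2), routing one unit on the direct arc (0,2) leaves the negative residual cycle 0-1-2-0 of cost 2 + 2 - 5 = -1, so that flow is not optimal, while routing the unit along 0-1-2 is.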
5.1 Duality and Optimality Conditions

As we have seen in Section 1.2, due to its special structure, the minimum cost flow problem has a number of important theoretical properties. The linear programming dual of this problem inherits many of these properties. Moreover, the minimum cost flow problem and its dual have, from a linear programming point of view, rather simple complementary slackness conditions. In this section, we formally state the linear programming dual problem and derive the complementary slackness conditions.
We consider the minimum cost flow problem (5.1), assuming that u_ij > 0 for each arc (i, j) in A. It is possible to show that this assumption imposes no loss of generality. We associate a dual variable pi(i) with the mass balance constraint of node i in (5.1b). Since one of the constraints in (5.1b) is redundant, we can set one of these dual variables to an arbitrary value. We, therefore, assume that pi(1) = 0. Further, we associate a dual variable delta_ij with the upper bound constraint of arc (i, j) in (5.1c). The dual problem to (5.1) is:

Maximize  sum {b(i) pi(i) : i in N} - sum {u_ij delta_ij : (i, j) in A}   (5.2a)

subject to

    pi(i) - pi(j) - delta_ij <= c_ij, for all (i, j) in A,                (5.2b)

    delta_ij >= 0, for all (i, j) in A,                                   (5.2c)

and the pi(i) are unrestricted.

The complementary slackness conditions for this primal-dual pair are:

    x_ij > 0  implies  pi(i) - pi(j) - delta_ij = c_ij,                   (5.3)

    delta_ij > 0  implies  x_ij = u_ij.                                   (5.4)

These conditions are equivalent to the following optimality conditions:

    x_ij = 0  implies  pi(i) - pi(j) <= c_ij,                             (5.5)

    0 < x_ij < u_ij  implies  pi(i) - pi(j) = c_ij,                       (5.6)

    x_ij = u_ij  implies  pi(i) - pi(j) >= c_ij.                          (5.7)

To see this equivalence, suppose that 0 < x_ij < u_ij for some arc (i, j). The condition (5.3) implies that

    pi(i) - pi(j) - delta_ij = c_ij.                                      (5.8)

Since x_ij < u_ij, (5.4) implies that delta_ij = 0; substituting this result in (5.8) yields (5.6). Whenever x_ij = u_ij > 0 for some arc (i, j), (5.3) implies that pi(i) - pi(j) - delta_ij = c_ij.
Substituting delta_ij >= 0 in this equation gives (5.7). Finally, if x_ij = 0 < u_ij for some arc (i, j), then (5.4) implies that delta_ij = 0, and substituting this result in (5.2b) gives (5.5).

We define the reduced cost of an arc (i, j) as c'_ij = c_ij - pi(i) + pi(j). The conditions (5.5)-(5.7) imply that a pair x, pi of flows and node potentials is optimal if it satisfies the following conditions:

C5.1 (Primal feasibility) x is feasible.
C5.2 If c'_ij > 0, then x_ij = 0.
C5.3 If c'_ij = 0, then 0 <= x_ij <= u_ij.
C5.4 If c'_ij < 0, then x_ij = u_ij.

Observe that the condition C5.3 follows from the conditions C5.1, C5.2 and C5.4; however, we retain it for the sake of completeness.
terms of the residual network, simplify C5.5
(Primal feasibility) x
is
C5.6
(E>ual feasibility)
t
Note note that
Cj;
residual network C5.6.
A
and
>
would contain
satisf)'ing
Theorem
5.1.
C5.5 and C5.6. Let
X
C5.6 implies that
(i,j)€
-t^ Cjj
(i, j)
subsumes for some arc
arc
(j, i)
if
with Cj;
in the residual
C5.2, C5.3, (i,
Cjj
= -
and
<
j)
Xjj
and
network G(x).
in the original Cjj.
<
But then
Uj;
for
To
C5.4.
network, then the
Cjj
some
see this result,
<
0,
(i, j)
contradicting
in A.
.
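Conditions C5.2-C5.4 are easy to verify mechanically for a candidate pair of flows and potentials. The following sketch (not from the paper; the data layout and names are illustrative) checks them for flows, capacities and costs stored in dictionaries keyed by arc; mass balance (C5.1) would be checked separately against the supplies b(i).

```python
def reduced_cost(c, pi, i, j):
    """Reduced cost of arc (i, j): c̄ij = cij − π(i) + π(j)."""
    return c[i, j] - pi[i] + pi[j]

def satisfies_optimality(arcs, u, c, x, pi):
    """Check conditions C5.2-C5.4 (plus the flow bounds) for every arc."""
    for (i, j) in arcs:
        cbar = reduced_cost(c, pi, i, j)
        if cbar > 0 and x[i, j] != 0:
            return False                 # C5.2 violated
        if cbar < 0 and x[i, j] != u[i, j]:
            return False                 # C5.4 violated
        if not 0 <= x[i, j] <= u[i, j]:
            return False                 # flow bound violated
    return True
```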
It is easy to establish the equivalence between these optimality conditions and the condition stated in Theorem 5.1. Consider any pair x, π of flows and node potentials satisfying C5.5 and C5.6. Let W be any directed cycle in the residual network. Condition C5.6 implies that Σ (i,j)∈W c̄ij ≥ 0. Further,

    Σ (i,j)∈W c̄ij = Σ (i,j)∈W cij + Σ (i,j)∈W (−π(i) + π(j)) = Σ (i,j)∈W cij.

Hence, the residual network contains no negative cost cycle. To see the converse, suppose that x is feasible and G(x) does not contain a negative cycle. Then in the residual network the shortest distances from node 1, with respect to the arc lengths cij, are well defined. Let d(i) denote the shortest distance from node 1 to node i. The shortest path optimality condition C3.2 implies that d(j) ≤ d(i) + cij for all (i, j) in G(x). Let π = −d. Then 0 ≤ cij + d(i) − d(j) = cij − π(i) + π(j) = c̄ij for all (i, j) in G(x). Hence, the pair x, π satisfies C5.5 and C5.6.

5.2. Relationship to Shortest Path and Maximum Flow Problems

The minimum cost flow problem generalizes both the shortest path and maximum flow problems. The shortest path problem from node s to all other nodes can be formulated as a minimum cost flow problem by setting b(s) = (n − 1), b(i) = −1 for all i ≠ s, and uij = ∞ for each (i, j) ∈ A (in fact, setting uij equal to any integer greater than (n − 1) will suffice if we wish to maintain finite capacities). Similarly, the maximum flow problem from node s to node t can be transformed to the minimum cost flow problem by introducing an additional arc (t, s) with cts = −1 and uts = ∞ (in fact, uts = m · max {uij : (i, j) ∈ A} would suffice), and setting cij = 0 for each arc (i, j) ∈ A. Thus, algorithms for the minimum cost flow problem solve both the shortest path and maximum flow problems as special cases.

Conversely, algorithms for the shortest path and maximum flow problems are of great use in solving the minimum cost flow problem. Indeed, many of the algorithms for the minimum cost flow problem use shortest path and/or maximum flow algorithms, either explicitly or implicitly, as subroutines. Consequently, improved algorithms for these two problems have led to improved algorithms for the minimum cost flow problem. This relationship will be more transparent when we discuss algorithms for the minimum cost flow problem.
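The shortest path reduction just described is a one-liner in code. The sketch below (illustrative; it uses the finite capacity n − 1 in place of ∞, as the text permits) builds the supply vector and capacities for a given source s:

```python
def shortest_path_as_min_cost_flow(nodes, arcs, s):
    """Formulate single-source shortest paths as a minimum cost flow:
    b(s) = n − 1, b(i) = −1 for i ≠ s, and every capacity set to n − 1
    (any integer capacity of at least the total supply also works)."""
    n = len(nodes)
    b = {i: n - 1 if i == s else -1 for i in nodes}
    u = {a: n - 1 for a in arcs}
    return b, u
```

Arc costs are unchanged by the reduction; one unit of the n − 1 supply units reaches each other node along a cheapest route, so the optimal flow cost equals the sum of the shortest path distances.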
We have already shown in Section 5.1 how to obtain an optimum dual solution from an optimum primal solution by solving a single shortest path problem. We now show how to obtain an optimal primal solution from an optimal dual solution by solving a single maximum flow problem. Suppose that π is an optimal dual solution and c̄ is the vector of reduced costs. We define the cost-residual network G* = (N, A*) as follows. The nodes in G* have the same supply/demand as the nodes in G. Any arc (i, j) ∈ A* has an upper bound uij* as well as a lower bound lij*, defined as follows:

(i) For each (i, j) in A with c̄ij > 0, A* contains an arc (i, j) with uij* = lij* = 0.

(ii) For each (i, j) in A with c̄ij < 0, A* contains an arc (i, j) with uij* = lij* = uij.

(iii) For each (i, j) in A with c̄ij = 0, A* contains an arc (i, j) with uij* = uij and lij* = 0.

The lower and upper bounds on arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If c̄ij > 0 for some (i, j) ∈ A, then condition C5.2 dictates that xij = 0 in the optimum flow. Similarly, if c̄ij < 0 for some (i, j) ∈ A, then C5.4 implies that the flow on arc (i, j) must be at the arc's upper bound in the optimum flow. If c̄ij = 0, then any flow value will satisfy the condition C5.3.

Now the problem is reduced to finding a feasible flow in the cost-residual network that satisfies the lower and upper bound restrictions of arcs and, at the same time, meets the supply/demand constraints of the nodes. We first eliminate the lower bounds of arcs as described in Section 2.4 and then transform this problem to a maximum flow problem as described just after assumption A5.1. Let x* denote the maximum flow in the transformed network. Then x* + l* is an optimum solution of the minimum cost flow problem in G.
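The three cases defining G* translate directly into code. This sketch (illustrative names) computes the lower and upper bounds lij*, uij* from an optimal set of potentials:

```python
def cost_residual_bounds(arcs, u, c, pi):
    """Bounds of the cost-residual network G*:
    c̄ij > 0 → l* = u* = 0;  c̄ij < 0 → l* = u* = uij;  c̄ij = 0 → l* = 0, u* = uij."""
    lo, hi = {}, {}
    for (i, j) in arcs:
        cbar = c[i, j] - pi[i] + pi[j]
        if cbar > 0:
            lo[i, j] = hi[i, j] = 0           # C5.2: flow must be zero
        elif cbar < 0:
            lo[i, j] = hi[i, j] = u[i, j]     # C5.4: flow must be at capacity
        else:
            lo[i, j], hi[i, j] = 0, u[i, j]   # C5.3: any flow value is fine
    return lo, hi
```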
5.3. Negative Cycle Algorithm

Operations researchers, computer scientists, electrical engineers and many others have extensively studied the minimum cost flow problem and have proposed a number of different algorithms to solve this problem. Notable examples are the negative cycle, successive shortest path, primal-dual, out-of-kilter, primal simplex and scaling-based algorithms. In this and the following sections, we discuss most of these important algorithms for the minimum cost flow problem and point out relationships between them.

We first consider the negative cycle algorithm. The negative cycle algorithm maintains a primal feasible solution x and strives to attain dual feasibility. It does so by identifying negative cost directed cycles in the residual network G(x) and augmenting flows in these cycles. The algorithm terminates when the residual network contains no negative cost cycle. Theorem 5.1 implies that when the algorithm terminates, it has found a minimum cost flow.
algorithm NEGATIVE CYCLE;
begin
    establish a feasible flow x in the network;
    while G(x) contains a negative cycle do
    begin
        use some algorithm to identify a negative cycle W;
        δ := min {rij : (i, j) ∈ W};
        augment δ units of flow along the cycle W and update G(x);
    end;
end;
A feasible flow in the network can be found by solving a maximum flow problem, as explained just after assumption A5.1. One algorithm for identifying a negative cost cycle is the label correcting algorithm for the shortest path problem, described in Section 3.4, which requires O(nm) time to identify a negative cycle. Every iteration reduces the flow cost by at least one unit. Since mCU is an upper bound on an initial flow cost and zero is a lower bound on the optimum flow cost, the algorithm terminates after at most O(mCU) iterations and requires O(nm²CU) time in total.
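A minimal executable sketch of the negative cycle algorithm follows. It is an illustration under stated assumptions, not the paper's implementation: it detects a negative cycle in G(x) with a Bellman-Ford style label correcting pass, and it assumes no antiparallel arc pairs so that each residual arc maps back to a unique original arc.

```python
def cancel_negative_cycles(n, arcs, x):
    """Negative cycle algorithm on nodes 1..n.  `arcs` maps (i, j) to
    (capacity, cost); `x` is a feasible starting flow (modified in place)."""

    def residual():
        res = {}                                   # residual arc -> (capacity, cost)
        for (i, j), (u, c) in arcs.items():
            if x[i, j] < u:
                res[i, j] = (u - x[i, j], c)
            if x[i, j] > 0:
                res[j, i] = (x[i, j], -c)
        return res

    def negative_cycle(res):
        """Bellman-Ford from a virtual source (all labels start at 0);
        a relaxation in pass n exposes a negative cycle via pred pointers."""
        dist = {v: 0 for v in range(1, n + 1)}
        pred = {}
        for _ in range(n):
            relaxed = None
            for (i, j), (_, c) in res.items():
                if dist[i] + c < dist[j]:
                    dist[j] = dist[i] + c
                    pred[j] = i
                    relaxed = j
        if relaxed is None:
            return None
        v = relaxed
        for _ in range(n):                         # walk back to land on the cycle
            v = pred[v]
        cycle, w = [v], pred[v]
        while w != v:
            cycle.append(w)
            w = pred[w]
        cycle.reverse()
        return cycle

    while True:
        res = residual()
        cyc = negative_cycle(res)
        if cyc is None:
            return x                               # no negative cycle: x is optimal
        pairs = list(zip(cyc, cyc[1:] + cyc[:1]))
        delta = min(res[a][0] for a in pairs)      # bottleneck residual capacity
        for (i, j) in pairs:
            if (i, j) in arcs:
                x[i, j] += delta                   # forward residual arc
            else:
                x[j, i] -= delta                   # reverse residual arc
```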
This algorithm can be improved in the following three ways (which we briefly summarize):

(i) Identifying a negative cost cycle in much less than O(nm) time. The simplex algorithm (to be discussed later) nearly achieves this objective. It maintains a tree solution and node potentials that enable it to identify a negative cost cycle in O(m) effort. However, due to degeneracy, the simplex algorithm cannot necessarily send a positive amount of flow along this cycle.

(ii) Identifying a negative cost cycle with maximum improvement in the objective function value. The improvement in the objective function due to the augmentation along a cycle W is (min {rij : (i, j) ∈ W}) · (−Σ (i,j)∈W cij). Let x be some flow and x* be an optimum flow. The augmenting cycle theorem (Theorem 2.3) implies that x* equals x plus the flow on at most m augmenting cycles with respect to x. Further, the improvements in cost due to flow augmentations on these augmenting cycles sum to cx − cx*. Consequently, at least one augmenting cycle with respect to x must decrease the objective function by at least (cx − cx*)/m. Hence, if the algorithm always augments flow along a cycle with maximum improvement, then Lemma 1.1 implies that the algorithm would obtain an optimum flow within O(m log mCU) iterations. Finding a maximum improvement cycle is a difficult problem, but a modest variation of this approach yields a polynomial time algorithm for the minimum cost flow problem.

(iii) Identifying a negative cost cycle with minimum mean cost. We define the mean cost of a cycle as its cost divided by the number of arcs it contains. It is possible to identify a cycle whose mean cost is as small as possible in O(nm) time. Recently, researchers have shown that if the negative cycle algorithm always augments the flow along a minimum mean cycle, then from one iteration to the next, the minimum mean cycle value is nondecreasing; moreover, its absolute value decreases by a factor of 1 − (1/n) within m iterations. Since the mean cost of the minimum mean (negative) cycle is bounded from below by −C and bounded from above by −1/n, Lemma 1.1 implies that this algorithm will terminate in O(nm log nC) iterations.
5.4. Successive Shortest Path Algorithm

The negative cycle algorithm maintains primal feasibility of the solution at every step and attempts to achieve dual feasibility. In contrast, the successive shortest path algorithm maintains dual feasibility of the solution at every step and strives to attain primal feasibility. It maintains a solution x that satisfies the nonnegativity and capacity constraints, but violates the supply/demand constraints of the nodes. At each step, the algorithm selects a node i with extra supply and a node j with unfulfilled demand and sends flow from i to j along a shortest path in the residual network. The algorithm terminates when the current solution satisfies all the supply/demand constraints.

A pseudoflow is a function x : A → R satisfying only the capacity and nonnegativity constraints. For any pseudoflow x, we define the imbalance of node i as

    e(i) = b(i) + Σ {j : (j,i)∈A} xji − Σ {j : (i,j)∈A} xij,  for all i ∈ N.

If e(i) > 0 for some node i, then e(i) is called the excess of node i; if e(i) < 0, then −e(i) is called the deficit. A node i with e(i) = 0 is called balanced. Let S and T denote the sets of excess and deficit nodes respectively. The residual network corresponding to a pseudoflow is defined in the same way that we define the residual network for a flow.
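The imbalance e(i) is a direct transcription of the definition above; a small illustrative helper:

```python
def imbalances(nodes, b, x):
    """e(i) = b(i) + (flow into i) − (flow out of i); e(i) > 0 marks an
    excess node, e(i) < 0 a deficit node, and e(i) = 0 a balanced node."""
    e = {i: b[i] for i in nodes}
    for (i, j), flow in x.items():
        e[i] -= flow      # flow out of i
        e[j] += flow      # flow into j
    return e
```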
The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs c̄ij. Observe that for any directed path P from a node k to a node l,

    Σ (i,j)∈P c̄ij = Σ (i,j)∈P cij − π(k) + π(l).

Hence, the node potentials change all path lengths between a specific pair of nodes by a constant amount, and the shortest path with respect to cij is the same as the shortest path with respect to c̄ij. The correctness of the successive shortest path algorithm rests on the following result.
Lemma 5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 with respect to the node potentials π. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node l in G(x). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.
Proof. Since x satisfies the dual feasibility conditions with respect to the node potentials π, we have c̄ij ≥ 0 for every arc (i, j) in G(x). Let d(v) denote the shortest path distances from node k to any node v in G(x) with respect to the arc lengths c̄ij. We claim that x also satisfies the dual feasibility conditions with respect to the potentials π' = π − d. The shortest path optimality conditions (i.e., C3.2) imply that

    d(j) ≤ d(i) + c̄ij,  for all (i, j) in G(x).

Substituting c̄ij = cij − π(i) + π(j) in these conditions and using π'(i) = π(i) − d(i) yields

    c̄ij' = cij − π'(i) + π'(j) ≥ 0,  for all (i, j) in G(x).

Hence, x satisfies C5.6 with respect to the node potentials π'. Next note that c̄ij' = 0 for every arc (i, j) on the shortest path P from node k to node l, since d(j) = d(i) + c̄ij for every such arc.

We are now in a position to prove the lemma. Augmenting flow along any arc in P maintains the dual feasibility condition C5.6 for this arc. Augmenting flow on an arc (i, j) may add its reversal (j, i) to the residual network. But since c̄ij' = 0 for each arc (i, j) ∈ P, c̄ji' = 0, and so arc (j, i) also satisfies C5.6.

The node potentials play a very important role in this algorithm. Besides using them to prove the correctness of the algorithm, we use them to ensure that the arc lengths are nonnegative, thus enabling us to solve the shortest path subproblems more efficiently. The following formal statement of the successive shortest path algorithm summarizes the steps of this method.
algorithm SUCCESSIVE SHORTEST PATH;
begin
    x := 0 and π := 0;
    compute imbalances e(i) and initialize the sets S and T;
    while S ≠ ∅ do
    begin
        select a node k ∈ S and a node l ∈ T;
        determine shortest path distances d(j) from node k to all other nodes
            in G(x) with respect to the reduced costs c̄ij;
        let P denote a shortest path from k to l;
        update π := π − d;
        δ := min [e(k), −e(l), min {rij : (i, j) ∈ P}];
        augment δ units of flow along the path P;
        update x, S and T;
    end;
end;
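The pseudocode above can be realized compactly. The sketch below is an illustration under the stated assumptions (nonnegative arc costs, supplies summing to zero, and every deficit node reachable from every excess node); it runs Dijkstra on the reduced costs and updates the potentials by π := π − d after each shortest path computation, exactly as in the algorithm statement:

```python
import heapq

def successive_shortest_paths(n, arcs, b):
    """Min cost flow on nodes 1..n by successive shortest paths.
    `arcs` maps (i, j) to (capacity, cost); `b` gives supplies/demands."""
    INF = float("inf")
    x = {a: 0 for a in arcs}
    pi = {v: 0 for v in range(1, n + 1)}
    e = dict(b)                                    # imbalances; x = 0 initially

    def residual():
        res = {}
        for (i, j), (u, c) in arcs.items():
            if x[i, j] < u:
                res[i, j] = (u - x[i, j], c)
            if x[i, j] > 0:
                res[j, i] = (x[i, j], -c)
        return res

    while any(v > 0 for v in e.values()):
        k = next(v for v in e if e[v] > 0)         # an excess node
        res = residual()
        adj = {}
        for (i, j), (_, c) in res.items():
            adj.setdefault(i, []).append((j, c))
        d = {v: INF for v in range(1, n + 1)}      # Dijkstra on c̄ = c − π(i) + π(j)
        pred = {}
        d[k] = 0
        heap = [(0, k)]
        while heap:
            dv, v = heapq.heappop(heap)
            if dv > d[v]:
                continue
            for w, c in adj.get(v, []):
                nd = dv + c - pi[v] + pi[w]
                if nd < d[w]:
                    d[w], pred[w] = nd, v
                    heapq.heappush(heap, (nd, w))
        t = next(v for v in e if e[v] < 0 and d[v] < INF)   # a deficit node
        for v in d:
            if d[v] < INF:
                pi[v] -= d[v]                      # π := π − d keeps c̄ ≥ 0 (Lemma 5.1)
        path, v = [], t                            # recover the path k -> t
        while v != k:
            path.append((pred[v], v))
            v = pred[v]
        delta = min(e[k], -e[t], min(res[a][0] for a in path))
        for (i, j) in path:
            if (i, j) in arcs:
                x[i, j] += delta
            else:
                x[j, i] -= delta
        e[k] -= delta
        e[t] += delta
    return x
```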
To initialize the algorithm, we set x = 0, which is a feasible pseudoflow and satisfies C5.6 with respect to the node potentials π = 0 since, by assumption, all arc lengths are nonnegative. Also, if S ≠ ∅, then T ≠ ∅, because the sum of all excesses always equals the sum of all deficits. Further, the connectedness assumption implies that the residual network G(x) contains a directed path from node k to node l. Each iteration of this algorithm solves a shortest path problem with nonnegative arc lengths and reduces the supply of some node by at least one unit. Consequently, if U is an upper bound on the largest supply of any node, the algorithm terminates in at most nU iterations. Since the arc lengths c̄ij are nonnegative, the shortest path problem at each iteration can be solved using Dijkstra's algorithm. So the overall complexity of this algorithm is O(nU · S(n, m, C)), where S(n, m, C) is the time taken by Dijkstra's algorithm. Currently, the best strongly polynomial-time bound to implement Dijkstra's algorithm is O(m + n log n) and the best (weakly) polynomial time bound is O(min {m log log C, m + n √(log C)}). The successive shortest path algorithm is pseudopolynomial time since it is polynomial in n, m and the largest supply U. The algorithm is, however, polynomial time for the assignment problem, a special case of the minimum cost flow problem for which U = 1. In Section 5.7, we will develop a polynomial time algorithm for the minimum cost flow problem using the successive shortest path algorithm in conjunction with scaling.
5.5. Primal-Dual and Out-of-Kilter Algorithms

The primal-dual algorithm is very similar to the successive shortest path algorithm, except that instead of sending flow on only one path during an iteration, it might send flow along many paths. To explain the primal-dual algorithm, we transform the minimum cost flow problem into a single-source and single-sink problem (possibly by adding nodes and arcs as in the assumption A5.1). At every iteration, the primal-dual algorithm solves a shortest path problem from the source to update the node potentials (i.e., as before, each π(j) becomes π(j) − d(j)) and then solves a maximum flow problem to send the maximum possible flow from the source to the sink using only arcs with zero reduced cost. The algorithm guarantees that the excess of some node strictly decreases at each iteration, and also assures that the node potential of the sink strictly decreases. The latter observation follows from the fact that after we have solved the maximum flow problem, the network contains no path from the source to the sink in the residual network consisting entirely of arcs with zero reduced costs; consequently, in the next iteration d(t) ≥ 1. These observations give a bound of min {nU, nC} on the number of iterations, since the magnitude of each node potential is bounded by nC. This bound is better than that of the successive shortest path algorithm, but, of course, the algorithm incurs the additional expense of solving a maximum flow problem at each iteration. Thus, the algorithm has an overall complexity of O(min {nU · S(n, m, C), nC · M(n, m, U)}), where S(n, m, C) and M(n, m, U) respectively denote the solution times of shortest path and maximum flow algorithms.

The successive shortest path and primal-dual algorithms maintain a solution that satisfies the dual feasibility conditions and the flow bound constraints, but that violates the mass balance constraints. These algorithms iteratively modify the flow and potentials so that the flow at each step comes closer to satisfying the mass balance constraints. However, we could just as well have violated other constraints at intermediate steps. The out-of-kilter algorithm satisfies only the mass balance constraints and may violate the dual feasibility conditions and the flow bound restrictions. The basic idea is to drive the flow on an arc (i, j) to uij if c̄ij < 0, drive the flow to zero if c̄ij > 0, and to permit any flow between 0 and uij if c̄ij = 0. The kilter number, represented by kij, of an arc (i, j) is defined as the minimum increase or decrease in the flow necessary to satisfy its flow bound constraint and dual feasibility condition. For example, for an arc (i, j) with c̄ij > 0, kij = |xij|, and for an arc (i, j) with c̄ij < 0, kij = |uij − xij|. An arc with kij = 0 is said to be in-kilter. At each iteration, the out-of-kilter algorithm reduces the kilter number of at least one arc; it terminates when all arcs are in-kilter. Suppose the kilter number of an arc (i, j) would decrease by increasing flow on the arc. Then the algorithm would obtain a shortest path P from node j to node i in the residual network and augment at least one unit of flow in the cycle P ∪ {(i, j)}. The proof of the correctness of this algorithm is similar to, but more detailed than, that of the successive shortest path algorithm.
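The kilter number definition can be captured in a few lines (an illustrative helper; `cbar` denotes the reduced cost c̄ij of the arc):

```python
def kilter_number(x, u, cbar):
    """Minimum flow change to bring an arc in-kilter: the target flow is
    0 when c̄ > 0, u when c̄ < 0, and any value in [0, u] when c̄ = 0."""
    if cbar > 0:
        return abs(x)          # flow should be driven to zero
    if cbar < 0:
        return abs(u - x)      # flow should be driven to capacity
    return max(0, x - u, -x)   # distance of x from the interval [0, u]
```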
5.6. Network Simplex Algorithm

The network simplex algorithm for the minimum cost flow problem is a specialization of the bounded variable primal simplex algorithm for linear programming. The special structure of the minimum cost flow problem offers several benefits, particularly, streamlining of the simplex computations and eliminating the need to explicitly maintain the simplex tableau. The tree structure of the basis (see Section 2.3) permits the algorithm to achieve these efficiencies. The advances made in the last two decades for maintaining and updating the tree structure efficiently have substantially improved the speed of the algorithm. Through extensive empirical testing, researchers have also improved the performance of the simplex algorithm by developing various heuristic rules for identifying entering variables. Though no version of the primal network simplex algorithm is known to run in polynomial time, its best implementations are empirically comparable to or better than other minimum cost flow algorithms.

In this section, we describe the network simplex algorithm in detail. We first define the concept of a basis structure and describe a data structure to store and to manipulate the basis, which is a spanning tree. We then show how to compute arc flows and node potentials for any basis structure. We next discuss how to perform various simplex operations such as the selection of entering arcs, leaving arcs and pivots using the tree data structure. Finally, we show how to guarantee the finiteness of the network simplex algorithm.
The network simplex algorithm maintains a basic feasible solution at each stage. A basic solution of the minimum cost flow problem is defined by a triple (B, L, U); B, L and U partition the arc set A. The set B denotes the set of basic arcs, i.e., arcs of a spanning tree, and L and U respectively denote the sets of nonbasic arcs at their lower and upper bounds. We refer to the triple (B, L, U) as a basis structure. A basis structure (B, L, U) is called feasible if, by setting xij = 0 for each (i, j) ∈ L and setting xij = uij for each (i, j) ∈ U, the problem has a feasible solution satisfying (5.1b) and (5.1c). A feasible basis structure (B, L, U) is called an optimum basis structure if it is possible to obtain a set of node potentials π so that the reduced costs defined by c̄ij = cij − π(i) + π(j) satisfy the following optimality conditions:

    c̄ij = 0,  for each (i, j) ∈ B,    (5.9)

    c̄ij ≥ 0,  for each (i, j) ∈ L,    (5.10)

    c̄ij ≤ 0,  for each (i, j) ∈ U.    (5.11)

These optimality conditions have a nice economic interpretation. We shall see a little later that if π(1) = 0, then equations (5.9) imply that −π(j) denotes the length of the tree path in B from node 1 to node j. Then, c̄ij = cij − π(i) + π(j) for a nonbasic arc (i, j) in L denotes the change in the cost of flow achieved by sending one unit of flow through the tree path from node 1 to node i, through the arc (i, j), and then returning the flow along the tree path from node j to node 1. The condition (5.10) implies that this circulation of flow is not profitable for any nonbasic arc in L. The condition (5.11) has a similar interpretation.

The network simplex algorithm maintains a feasible basis structure at each iteration and successively improves the basis structure until it becomes an optimum basis structure. The following algorithmic description specifies the essential steps of the procedure.
algorithm NETWORK SIMPLEX;
begin
    determine an initial basic feasible flow x and the corresponding basis structure (B, L, U);
    compute node potentials for this basis structure;
    while some arc violates the optimality conditions do
    begin
        select an entering arc (k, l) violating the optimality conditions;
        add arc (k, l) to the spanning tree corresponding to the basis, forming a cycle,
            and augment the maximum possible flow in this cycle;
        determine the leaving arc (p, q);
        perform a basis exchange and update node potentials;
    end;
end;

In the following discussion, we describe the various steps performed by the network simplex algorithm in greater detail.

Obtaining an Initial Basis Structure
Our connectedness assumption A5.2 provides one way of obtaining an initial basic feasible solution. We have assumed that for every node j ∈ N − {1}, the network contains arcs (1, j) and (j, 1) with sufficiently large costs and capacities. The initial basis B includes the arc (1, j) with flow −b(j) if b(j) ≤ 0 and the arc (j, 1) with flow b(j) if b(j) > 0. The set L consists of the remaining arcs, and the set U is empty. The node potentials for this basis are easily computed using (5.9), as we will see later.
Maintaining the Tree Structure

The specialized network simplex algorithm is possible because of the spanning tree property of the basis. The algorithm requires the tree to be represented so that the simplex algorithm can perform operations efficiently and update the representation quickly when the basis changes. We next describe one such tree representation.

We consider the tree as "hanging" from a specially designated node, called the root. We assume that node 1 is the root node. See Figure 5.1 for an example of the tree. We associate three indices with each node i in the tree: a predecessor index, pred(i); a depth index, depth(i); and a thread index, thread(i). Each node i has a unique path connecting it to the root. The predecessor index stores the first node in that path (other than node i) and the depth index stores the number of arcs in the path. For the root node these indices are zero. Figure 5.1 shows an example of these indices. Note that by iteratively using the predecessor indices, we can enumerate the path from any node to the root node. We say that pred(i) is the predecessor of node i and node i is a successor of node pred(i). The descendants of a node i consist of the node i itself, its successors, successors of its successors, and so on. For example, in Figure 5.1 the node set {5, 6, 7, 8, 9} contains the descendants of node 5. A node with no successors is called a leaf node. In Figure 5.1, nodes 4, 7, 8, and 9 are leaf nodes.

The thread indices define a traversal of the tree, a sequence of nodes that walks or threads its way through the nodes of the tree, starting at the root and visiting nodes in a "top to bottom" and "left to right" order, and then finally returning to the root. For our example, this sequence would read 1-2-5-6-8-9-7-3-4-1 (see the dotted lines in Figure 5.1). For each node i, thread(i) specifies the next node in the traversal visited after node i. The thread indices can be formed by performing a depth first search of the tree as described in Section 1.5 and setting the thread of a node to be the node encountered after the node itself in this depth first search. This traversal satisfies the following two properties: (i) the predecessor of each node appears in the sequence before the node itself; and (ii) the descendants of any node are consecutive elements in the traversal.

The thread indices provide a particularly convenient means for visiting (or finding) all descendants of a node i: we simply follow the thread from node i, recording the nodes visited, until we encounter a node whose depth is not larger than the depth of node i. For example, starting at node 5, we visit nodes 6, 8, 9, and 7 in order, which are the descendants of node 5, and then visit node 3. Since node 3's depth equals that of node 5, we know that we have left the "descendant tree" lying below node 5. As we will see, finding the descendant tree of a node efficiently adds significantly to the efficiency of the simplex method.
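The descendant-enumeration rule just described is easy to state in code. Since Figure 5.1 itself is not reproduced here, the index tables below are an assumption: a tree chosen to be consistent with the traversal 1-2-5-6-8-9-7-3-4-1 and the descendant and leaf sets mentioned in the text.

```python
def descendants(i, thread, depth):
    """Follow the thread from node i, collecting nodes while their depth
    exceeds depth(i); the first node at depth ≤ depth(i) lies outside the
    descendant tree of node i."""
    desc = [i]
    j = thread[i]
    while depth[j] > depth[i]:
        desc.append(j)
        j = thread[j]
    return desc

# Hypothetical indices consistent with the traversal 1-2-5-6-8-9-7-3-4-1.
pred   = {1: 0, 2: 1, 5: 2, 3: 2, 6: 5, 7: 5, 8: 6, 9: 6, 4: 3}
depth  = {1: 0, 2: 1, 5: 2, 3: 2, 6: 3, 7: 3, 8: 4, 9: 4, 4: 3}
thread = {1: 2, 2: 5, 5: 6, 6: 8, 8: 9, 9: 7, 7: 3, 3: 4, 4: 1}
```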
Computing Node Potentials and Flows for a Given Basis Structure

The simplex method has two basic steps: (i) determining the node potentials of a given basis structure; and (ii) computing the arc flows for a given basis structure. We now describe how to perform these steps efficiently using the tree indices.

We first consider the problem of computing node potentials π for a given basis structure (B, L, U). We assume that π(1) = 0. Note that the value of one node potential can be set arbitrarily since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that c̄ij = 0 for each arc (i, j) in B. These conditions can alternatively be stated as

    π(j) = π(i) − cij,  for every arc (i, j) ∈ B.    (5.12)

The basic idea is to start at node 1 and fan out along the tree arcs using the thread indices to compute other node potentials. The traversal assures that whenever this fanning out procedure visits node j, it has already evaluated the potential of its predecessor, say node i; hence, the procedure can compute π(j) using (5.12). The thread indices allow us to compute all node potentials in O(n) time using the following method.
procedure COMPUTE POTENTIALS;
begin
    π(1) := 0;
    j := thread(1);
    while j ≠ 1 do
    begin
        i := pred(j);
        if (i, j) ∈ A then π(j) := π(i) − cij;
        if (j, i) ∈ A then π(j) := π(i) + cji;
        j := thread(j);
    end;
end;
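A direct transcription of COMPUTE POTENTIALS (illustrative; the basic arcs and their costs sit in a dict keyed by their orientation in A):

```python
def compute_potentials(pred, thread, cost):
    """Fan out along the thread from the root, applying (5.12):
    π(j) = π(i) − cij for a tree arc (i, j), and π(j) = π(i) + cji
    for a tree arc oriented (j, i), where i = pred(j)."""
    pi = {1: 0}
    j = thread[1]
    while j != 1:
        i = pred[j]
        if (i, j) in cost:
            pi[j] = pi[i] - cost[i, j]
        else:
            pi[j] = pi[i] + cost[j, i]
        j = thread[j]
    return pi
```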
A similar procedure will permit us to compute flows on the basic arcs for a given basis structure (B, L, U). We proceed, however, in the reverse order: start at the leaf nodes and move in toward the root using the predecessor indices, while computing flows on arcs encountered along the way. The following procedure accomplishes this task.
procedure COMPUTE FLOWS;
begin
    e(i) := b(i) for all i ∈ N;
    let T be the basis tree;
    for each (i, j) ∈ U do
        set xij := uij, subtract uij from e(i) and add uij to e(j);
    while T ≠ {1} do
    begin
        select a leaf node j in the subtree T;
        i := pred(j);
        if (i, j) ∈ T then xij := −e(j);
        else xji := e(j);
        add e(j) to e(i);
        delete node j and the arc incident to it from T;
    end;
end;
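The procedure transcribes directly; the sketch below (illustrative names and data layout) uses the reverse thread order so that each node is examined after all of its descendants:

```python
def compute_flows(nodes, b, tree_arcs, pred, thread, upper_arcs=()):
    """Flows on the basic arcs for a basis structure (B, L, U).  `tree_arcs`
    holds the arcs of B as oriented in A; `upper_arcs` holds ((i, j), uij)
    pairs for the nonbasic arcs fixed at their upper bound."""
    e = {i: b[i] for i in nodes}
    x = {}
    for (i, j), u in upper_arcs:                  # arcs in U carry uij units
        x[i, j] = u
        e[i] -= u
        e[j] += u
    order = []                                    # thread order, root excluded
    j = thread[1]
    while j != 1:
        order.append(j)
        j = thread[j]
    for j in reversed(order):                     # each node after its descendants
        i = pred[j]
        if (i, j) in tree_arcs:
            x[i, j] = -e[j]
        else:
            x[j, i] = e[j]
        e[i] += e[j]                              # pass the adjusted imbalance up
    return x
```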
One way of identifying leaf nodes in T is to select nodes in the reverse order of the thread indices. A simple procedure completes this task in O(n) time: push all the nodes into a stack in order of their appearance on the thread, and then take them out from the top one at a time. Note that in the thread traversal, each node appears prior to its descendants. Hence, the reverse thread traversal examines each node after examining its descendants.

Now consider the steps of the method. The arcs in the set U must carry flow equal to their capacity. Thus, we set xij = uij for these arcs. This assignment creates an additional demand of uij units at node i and makes the same amount available at node j, which explains the initial adjustments in the supply/demand of the nodes. The manner for updating e(j) implies that each e(j) represents the sum of the adjusted supply/demand of the nodes in the subtree hanging from node j. Since this subtree is connected to the rest of the tree only by the arc (i, j) (or (j, i)), this arc must carry −e(j) (or e(j)) units of flow to satisfy the adjusted supply/demand of the nodes in the subtree.

The procedure Compute Flows essentially solves the system of equations Bx = b, in which B represents the columns in the node-arc incidence matrix N corresponding to the spanning tree T. Since B is a lower triangular matrix (see Theorem 2.6 in Section 2.3), it is possible to solve these equations by forward substitution, which is precisely what the algorithm does. Similarly, the procedure Compute Potentials solves the system of equations πB = c by back substitution.

Entering Arc

Two types of arcs are eligible to enter the basis: any nonbasic arc at its lower bound with a negative reduced cost, or any nonbasic arc at its upper bound with a positive reduced cost. These arcs violate condition (5.10) or (5.11). The method used for selecting an entering arc among these eligible arcs has a major effect on the performance of the simplex algorithm. An implementation that selects an arc that violates the optimality condition the most, i.e., has the largest value of |c̄ij| among such arcs, might require the fewest number of iterations in practice, but must examine each arc at each iteration, which is very time consuming. On the other hand, examining the arc list cyclically and selecting the first arc that violates the optimality condition would quickly find the entering arc, but might require a relatively large number of iterations due to the poor arc choice. One of the most successful implementations uses a candidate list approach that strikes an effective compromise between these two strategies. This approach also offers sufficient flexibility for fine tuning to special problem classes.
The algorithm maintains a candidate list of arcs violating the optimality conditions, selecting arcs from this list in a two-phase procedure consisting of major iterations and minor iterations. In a major iteration, we construct the candidate list. We examine arcs emanating from nodes, one node at a time, adding to the candidate list the arcs emanating from node i (if any) that violate the optimality condition. We repeat this selection process for nodes i+1, i+2, ..., until either the list has reached its maximum allowable size or we have examined all nodes. The next major iteration begins with the node where the previous major iteration ended; that is, the algorithm examines nodes cyclically as it adds arcs emanating from them to the candidate list.

Once the algorithm has formed the candidate list in a major iteration, it performs minor iterations, scanning all candidate arcs and choosing a nonbasic arc from this list that violates the optimality condition the most to enter the basis. As we scan the arcs, we update the candidate list by removing those arcs that no longer violate the optimality conditions. Once the list becomes empty or we have reached a specified limit on the number of minor iterations to be performed at each major iteration, we rebuild the list with another major iteration.
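To make the two-phase scheme concrete, the following Python sketch implements a candidate list pivot rule. The function names, the data layout (arcs keyed by their tail node), and the size limits are our own illustrative choices, not the implementation the text describes.

```python
# Sketch of the candidate list pivot strategy (data layout and names are ours).
# An arc violates the optimality conditions if it is in L with a negative
# reduced cost or in U with a positive reduced cost.

def violation(arc, status, red_cost):
    c = red_cost[arc]
    if status[arc] == 'L' and c < 0:
        return -c
    if status[arc] == 'U' and c > 0:
        return c
    return 0

def build_candidate_list(out_arcs, status, red_cost, start, n, max_size):
    """Major iteration: scan nodes cyclically from `start`, collecting
    violating arcs until the list is full or every node has been examined.
    Returns the list and the node at which the next major iteration resumes."""
    cand, i = [], start
    for _ in range(n):
        cand += [a for a in out_arcs[i] if violation(a, status, red_cost) > 0]
        i = (i + 1) % n
        if len(cand) >= max_size:
            break
    return cand, i

def next_entering_arc(cand, status, red_cost):
    """Minor iteration: drop arcs that no longer violate the conditions,
    then pick the arc with the largest violation to enter the basis."""
    cand[:] = [a for a in cand if violation(a, status, red_cost) > 0]
    return max(cand, key=lambda a: violation(a, status, red_cost)) if cand else None
```

In a full simplex implementation, `next_entering_arc` would be called once per pivot until the list empties or the minor-iteration limit is reached, at which point `build_candidate_list` rebuilds the list starting from the returned node.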
Leaving Arc

Suppose we select the arc (k, l) as the entering arc. The addition of this arc to the basis B forms exactly one (undirected) cycle W, which is sometimes referred to as the pivot cycle. We define the orientation of W as the same as that of (k, l) if (k, l) ∈ L, and opposite to the orientation of (k, l) if (k, l) ∈ U. Let W̄ and W̲, respectively, denote the sets of arcs in W along and opposite to the cycle's orientation. Sending additional flow around the pivot cycle W in the direction of its orientation strictly decreases the cost of the current solution. We change the flow as much as possible until one of the arcs in the cycle W reaches its lower or upper bound; this arc leaves the basis. The maximum flow change δij on an arc (i, j) ∈ W that satisfies the flow bound constraints is δij = uij − xij if (i, j) ∈ W̄, and δij = xij if (i, j) ∈ W̲. We send δ = min {δij : (i, j) ∈ W} units of flow around W, and select an arc (p, q) with δpq = δ as the leaving arc.
The crucial operation in this step is to identify the cycle W. If P(i) denotes the unique path in the basis from any node i to the root node, then this cycle consists of the arcs {(k, l)} ∪ ((P(k) ∪ P(l)) − (P(k) ∩ P(l))). In other words, W consists of the arc (k, l) and the disjoint portions of P(k) and P(l). Using predecessor indices alone permits us to identify the cycle W as follows. Start at node k and, using the predecessor indices, trace the path from this node to the root and label all the nodes in this path. Repeat the same operation for node l until we encounter a node already labeled, say node w. Node w, which we might refer to as the apex, is the first common ancestor of nodes k and l. The cycle W contains the portions of the paths P(k) and P(l) up to node w, along with the arc (k, l). This method is efficient, but it has the drawback of backtracking along some arcs that are not in W, namely, those in the portion of the path P(k) lying between the apex w and the root. The simultaneous use of depth and predecessor indices, as indicated in the following procedure, eliminates this extra work.
procedure IDENTIFY CYCLE;
begin
    i := k and j := l;
    while i ≠ j do
    begin
        if depth(i) > depth(j) then i := pred(i)
        else if depth(j) > depth(i) then j := pred(j)
        else i := pred(i) and j := pred(j);
    end;
    w := i;
end;
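The same logic carries over directly to code. Below is a small Python rendering of the apex computation (a sketch of ours, with pred and depth stored as arrays indexed by node):

```python
def identify_apex(k, l, pred, depth):
    """Find the first common ancestor (the apex w) of nodes k and l by
    repeatedly advancing whichever of the two walks is deeper; when the
    depths tie, advance both. Mirrors procedure IDENTIFY CYCLE."""
    i, j = k, l
    while i != j:
        if depth[i] > depth[j]:
            i = pred[i]
        elif depth[j] > depth[i]:
            j = pred[j]
        else:
            i, j = pred[i], pred[j]
    return i
```

For example, on the tree with pred = [0, 0, 0, 1, 1, 2] and depth = [0, 1, 1, 2, 2, 2] (root 0), the apex of nodes 3 and 4 is node 1, while the apex of nodes 3 and 5 is the root.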
A simple modification of this procedure permits it to determine the flow δ that can be augmented along W as it determines the first common ancestor w of nodes k and l. Using predecessor indices to again traverse the cycle W, the algorithm can then update the flows on arcs. The entire flow change operation takes O(n) time in the worst case, but typically examines only a small subset of the nodes.
Basis Exchange

In the terminology of the simplex method, a basis exchange is a pivot operation. If δ = 0, then the pivot is said to be degenerate; otherwise it is nondegenerate. A basis is called degenerate if the flow on some basic arc equals its lower or upper bound, and nondegenerate otherwise. Observe that a degenerate pivot occurs only in a degenerate basis.
Each time the method exchanges an entering arc (k, l) for a leaving arc (p, q), it must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when δ = ukl, the basis does not change. In this instance, the arc (k, l) merely moves from the set L to the set U, or vice versa. If the leaving arc differs from the entering arc, then more extensive changes are needed. In this instance, the arc (p, q) becomes a nonbasic arc at its lower or upper bound depending upon whether xpq = 0 or xpq = upq. Adding the arc (k, l) to and deleting the arc (p, q) from the previous basis yields a new basis that is again a spanning tree. The node potentials also change and can be updated as follows. The deletion of the arc (p, q) from the previous basis partitions the set of nodes into two subtrees: one, T1, containing the root node, and the other, T2, not containing the root node. Note that the subtree T2 hangs from node p or node q.
The arc (k, l) has one endpoint in T1 and the other in T2. As is easy to verify, the conditions π(1) = 0 and c̄ij = cij − π(i) + π(j) = 0 for all arcs in the new basis imply that the potentials of nodes in the subtree T1 remain unchanged, and the potentials of nodes in the subtree T2 change by a constant amount. If k ∈ T1 and l ∈ T2, then all the node potentials in T2 change by −c̄kl; if l ∈ T1 and k ∈ T2, they change by the amount c̄kl. The following method, using the thread and depth indices, updates the node potentials quickly.
procedure UPDATE POTENTIALS;
begin
    if q ∈ T2 then y := q else y := p;
    if k ∈ T1 then change := −c̄kl else change := c̄kl;
    π(y) := π(y) + change;
    z := thread(y);
    while depth(z) > depth(y) do
    begin
        π(z) := π(z) + change;
        z := thread(z);
    end;
end;
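In code, the subtree update is a single thread traversal. This Python sketch (ours) assumes the thread index visits the descendants of a node contiguously, as the procedure above requires:

```python
def update_potentials(y, change, pi, thread, depth):
    """Add `change` to the potential of node y and of every node in the
    subtree hanging at y (the subtree T2). The thread traversal leaves the
    subtree exactly when it reaches a node no deeper than y."""
    pi[y] += change
    z = thread[y]
    while depth[z] > depth[y]:
        pi[z] += change
        z = thread[z]
```

On a five-node tree threaded in preorder as 0, 1, 2, 3, 4 (with 2 and 3 hanging below 1), updating the subtree at node 1 changes the potentials of nodes 1, 2 and 3 only.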
The final step in the basis exchange is to update the various tree indices. This step is rather involved and we refer the reader to the reference material cited in Section 6.4 for the details. We do note, however, that it is possible to update the tree indices in O(n) time.
Termination

The network simplex algorithm, as just described, moves from one basis structure to another until it obtains a basis structure that satisfies the optimality conditions (5.9)-(5.11). It is easy to show that the algorithm terminates in a finite number of steps if each pivot operation is nondegenerate. Recall that | c̄kl | represents the net decrease in the cost per unit flow sent around the cycle W. During a nondegenerate pivot (in which δ > 0), the new basis structure has a cost that is δ | c̄kl | units lower than the previous basis structure. Since there are a finite number of basis structures, and each basis structure has a unique associated cost, the network simplex algorithm will terminate finitely assuming nondegeneracy. Degenerate pivots, however, pose theoretical difficulties that we address next.
Strongly Feasible Bases

The network simplex algorithm does not necessarily terminate in a finite number of iterations unless we impose an additional restriction on the choice of entering and leaving arcs. Researchers have constructed very small network examples for which poor choices lead to cycling, i.e., an infinite repetitive sequence of degenerate pivots. Degeneracy in network problems is not only a theoretical issue, but also a practical one. Computational studies have shown that as many as 90% of the pivot operations in common networks can be degenerate. As we show next, by maintaining a special type of basis, called a strongly feasible basis, the simplex algorithm terminates finitely; moreover, it runs faster in practice as well.

Let (B, L, U) be a basis structure of the minimum cost flow problem with integral data. As earlier, we conceive of a basis tree as a tree hanging from the root node. The tree arcs either are upward pointing (towards the root) or are downward pointing (away from the root). We say that a basis structure (B, L, U) is strongly feasible if we can send a positive amount of flow from any node in the tree to the root along arcs in the tree without violating any of the flow bounds. See Figure 5.2 for an example of a strongly feasible basis. Observe that this definition implies that no upward pointing arc can be at its upper bound and no downward pointing arc can be at its lower bound.

The perturbation technique is a well-known method for avoiding cycling in the simplex algorithm for linear programming. This technique slightly perturbs the right-hand-side vector so that every feasible basis is nondegenerate and so that it is easy to convert an optimum solution of the perturbed problem to an optimum solution of the original problem. We show that a particular perturbation technique for the network simplex method is equivalent to the combinatorial rule known as the strongly feasible basis technique.
The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b+ε. We say that ε = (ε1, ε2, ..., εn) is a feasible perturbation if it satisfies the following conditions:

(i) εi > 0 for all i = 2, 3, ..., n;

(ii) ε2 + ε3 + ... + εn < 1; and

(iii) ε1 = −(ε2 + ε3 + ... + εn).
One possible choice for a feasible perturbation is εi = 1/n for i = 2, ..., n (and thus ε1 = −(n−1)/n). Another choice is εi = α^i for i = 2, ..., n, with α chosen as a very small positive number. The perturbation changes the flow on the basic arcs. The justification we gave for the procedure Compute-Flows, earlier in this section, implies that the perturbation of b by ε changes the flow on basic arcs in the following manner:
1. If (i, j) is a downward pointing arc of tree B and D(j) is the set of descendants of node j, then the perturbation decreases the flow in arc (i, j) by Σ_{k∈D(j)} εk. Since 0 < Σ_{k∈D(j)} εk < 1, the resulting flow is nonintegral and thus nonzero.

2. If (i, j) is an upward pointing arc of tree B and D(i) is the set of descendants of node i, then the perturbation increases the flow in arc (i, j) by Σ_{k∈D(i)} εk. Since 0 < Σ_{k∈D(i)} εk < 1, the resulting flow is nonintegral and thus nonzero.
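These two observations are easy to check numerically. The sketch below (our own illustration, not part of the original development) computes, with exact rational arithmetic, the amount Σ_{k∈D(j)} εk for every tree arc when εi = 1/n, and confirms that it lies strictly between 0 and 1:

```python
from fractions import Fraction

def subtree_eps_sums(parent, eps):
    """parent[j] gives the tree predecessor of node j (nodes numbered so that
    parent[j] < j, with node 0 as the root). Returns s with s[j] equal to the
    sum of eps[k] over the descendants k of j (j itself included): the
    magnitude of the perturbation's flow change on the arc joining j to its
    parent."""
    s = list(eps)
    for j in range(len(parent) - 1, 0, -1):
        s[parent[j]] += s[j]   # fold each subtree total into its parent
    return s

# Tree on 4 nodes: node 1 hangs from the root 0; nodes 2 and 3 hang from 1.
parent = [0, 0, 1, 1]
n = 4
eps = [0] + [Fraction(1, n)] * (n - 1)   # the choice eps_i = 1/n
s = subtree_eps_sums(parent, eps)
```

Here s[1] = 3/4 and s[2] = s[3] = 1/4: every tree arc's flow changes by a nonintegral amount strictly between 0 and 1, so every perturbed basic flow is nonzero, as items 1 and 2 claim.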
Theorem 5.2. For any basis structure (B, L, U) of the minimum cost flow problem, the following statements are equivalent:

(i) (B, L, U) is strongly feasible.

(ii) No upward pointing arc of the basis is at its upper bound and no downward pointing arc of the basis is at its lower bound.

(iii) (B, L, U) is feasible if we replace b by b+ε, for any feasible perturbation ε.

(iv) (B, L, U) is feasible if we replace b by b+ε, for the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n).

Proof. (i) ⇒ (ii). Suppose an upward pointing arc (i, j) is at its upper bound. Then node i cannot send any flow to the root, violating the definition of a strongly feasible basis. For the same reason, no downward pointing arc can be at its lower bound.
(ii) ⇒ (iii). Suppose that (ii) is true. As noted earlier, the perturbation increases the flow on an upward pointing arc by an amount strictly between 0 and 1. Since the flow on an upward pointing arc is integral and strictly less than its (integral) upper bound, the perturbed solution remains feasible. Similar reasoning shows that after we have perturbed the problem, the downward pointing arcs also remain feasible.
(iii) ⇒ (iv). Follows directly because ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n) is a feasible perturbation.

(iv) ⇒ (i). Consider the feasible basis structure (B, L, U) of the perturbed problem. Each arc in the basis B has a positive nonintegral flow. Consider the same basis tree for the original problem. If we remove the perturbation (i.e., replace b + ε by b), the flows on the downward pointing arcs increase, the flows on the upward pointing arcs decrease, and the resulting flows are integral. Consequently, xij > 0 for downward pointing arcs, xij < uij for upward pointing arcs, and (B, L, U) is strongly feasible for the original problem.
This theorem shows that maintaining a strongly feasible basis is equivalent to applying the ordinary simplex algorithm to the perturbed problem. This result implies that both approaches obtain exactly the same sequence of basis structures if they use the same rule to select the entering arcs. As a corollary, this equivalence shows that any implementation of the simplex algorithm that maintains a strongly feasible basis performs at most nmCU pivots. To establish this result, consider the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n). With this perturbation, the flow on every arc is a multiple of 1/n. Consequently, every pivot operation augments at least 1/n units of flow and therefore decreases the objective function value by at least 1/n units. Since mCU is an upper bound on the objective function value of the starting solution and zero is a lower bound on the minimum objective function value, the algorithm will terminate in at most nmCU iterations. Therefore, any implementation of the simplex algorithm that maintains a strongly feasible basis runs in pseudopolynomial time.
We can thus maintain strong feasibility by perturbing b by a suitable perturbation ε. However, there is no need to actually perform the perturbation. Instead, we can maintain strong feasibility using a "combinatorial rule" that is equivalent to applying the original simplex method after we have imposed the perturbation. Even though this rule permits degenerate pivots, it is guaranteed to converge. Figure 5.2 will illustrate our discussion of this method.
Combinatorial Version of Perturbation

The network simplex algorithm starts with a strongly feasible basis. The method described earlier to construct the initial basis always gives such a basis. The algorithm also selects the leaving arc in a degenerate pivot carefully so that the next basis is strongly feasible. Suppose that the entering arc (k, l) is at its lower bound and the apex w is the first common ancestor of nodes k and l. Let W be the cycle formed by adding arc (k, l) to the basis tree. We define the orientation of the cycle as the same as that of arc (k, l). After updating the flow, the algorithm identifies the blocking arcs, i.e., those arcs (i, j) in the cycle W that satisfy δij = δ. If the blocking arc is unique, then it leaves the basis. If the cycle contains more than one blocking arc, then the next basis will be degenerate; i.e., some basic arcs will be at their lower or upper bounds. In this case, the algorithm selects the leaving arc in accordance with the following rule:
Combinatorial Pivot Rule. When introducing an arc into the basis for the network simplex method, select the leaving arc as the last blocking arc, say arc (p, q), encountered in traversing the pivot cycle W along its orientation starting at the apex w.

We next show that this rule guarantees that the next basis is strongly feasible. To do so, we show that in this basis every node in the cycle W can send positive flow to the root node. Notice that since the previous basis was strongly feasible, every node could send positive flow to the root node. Let W1 be the segment of the cycle W between the apex w and arc (p, q) when we traverse the cycle along its orientation. Further, let W2 = W − W1 − {(p, q)}. Define the orientation of the segments W1 and W2 to be compatible with the orientation of W. See Figure 5.2 for an illustration of the segments W1 and W2. Since arc (p, q) is the last blocking arc in W, no arc in W2 is blocking, and every node contained in the segment W2 can send positive flow to the root along the orientation of W2 via node w.
Now consider the nodes contained in the segment W1. If the current pivot was a nondegenerate pivot, then the pivot augmented a positive amount of flow along the arcs in W1; hence, every node in the segment W1 can augment flow back to the root opposite to the orientation of W1, via node w. If the current pivot was a degenerate pivot, then W1 must be contained in the segment of W between node w and node k, because, by the property of strong feasibility, every node on the path from node l to node w could send a positive amount of flow to the root before the pivot and, thus, no arc on this path can be a blocking arc in a degenerate pivot. Now observe that before the pivot, every node in W1 could send positive flow to the root and, therefore, since the pivot does not change flow values, every node in W1 must be able to send positive flow to the root after the pivot as well. This conclusion completes the proof that the next basis is strongly feasible.
We now study the effect of the basis change on node potentials during a degenerate pivot. Since the entering arc (k, l) is at its lower bound, c̄kl < 0. The leaving arc lies on the path from node k to node w. Hence, node k is contained in the subtree T2, and after the pivot the potentials of all nodes in T2 change by the amount c̄kl < 0. Consequently, a degenerate pivot strictly decreases the sum of all node potentials (which by our prior assumptions is integral). Since the sum of all node potentials is bounded from below, the number of successive degenerate pivots is finite.

So far we have assumed that the entering arc is at its lower bound. If the entering arc (k, l) is at its upper bound, then we define the orientation of the cycle W as opposite to the orientation of arc (k, l). The criterion used to select the leaving arc remains unchanged: the leaving arc is the last blocking arc encountered in traversing W along its orientation starting at the apex w. In this case, node l is contained in the subtree T2 and, thus, after the pivot the potentials of all nodes in T2 change by the amount −c̄kl < 0; consequently, the pivot again strictly decreases the sum of the node potentials.
Complexity Results

The strongly feasible basis technique implies some nice theoretical results about the network simplex algorithm implemented using Dantzig's pivot rule, i.e., pivoting in the arc that most violates the optimality conditions (that is, the arc (k, l) with the largest value of | c̄kl | among all arcs that violate the optimality conditions). This technique also yields polynomial time simplex algorithms for the shortest path and assignment problems.

We have already shown that any version of the network simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots. Using Dantzig's pivot rule and geometric improvement arguments, we can reduce the number of pivots to O(nmU log H), with H defined as H = mCU. As earlier, we consider the perturbed problem with the perturbation ε = (−(n−1)/n, 1/n, 1/n, ..., 1/n).
Let z^k denote the objective function value of the perturbed minimum cost flow problem at the k-th iteration of the simplex algorithm, x denote the current flow, and (B, L, U) denote the current basis structure. Let Δ > 0 denote the maximum violation of the optimality condition of any nonbasic arc. If the algorithm next pivots in a nonbasic arc corresponding to the maximum violation, then the objective function value decreases by at least Δ/n units. Hence,

z^k − z^(k+1) ≥ Δ/n.    (5.13)

We now need an upper bound on the total possible improvement in the objective function after the k-th iteration. It is easy to show that
[Figure 5.2 appears here, showing a basis tree with flows and capacities on its arcs and the entering arc marked.]

Figure 5.2. A strongly feasible basis. The figure shows the flows and capacities represented as (xij, uij). The entering arc is (9, 10); the blocking arcs are (2, 3) and (7, 5); and the leaving arc is (7, 5). This pivot is a degenerate pivot. The segments W1 and W2 are as shown.
Σ_{(i,j)∈A} cij xij = Σ_{(i,j)∈A} c̄ij xij + Σ_{i∈N} π(i) b(i).

Since the rightmost term in this expression is a constant for fixed values of the node potentials, the total improvement with respect to the objective function Σ_{(i,j)∈A} cij xij is equal to the total improvement with respect to the objective function Σ_{(i,j)∈A} c̄ij xij. Further, the total improvement in the objective function Σ_{(i,j)∈A} c̄ij xij is bounded by the total improvement in the following relaxed problem:

minimize Σ_{(i,j)∈A} c̄ij xij,    (5.14a)

subject to

0 ≤ xij ≤ uij,  for all (i, j) ∈ A.    (5.14b)
For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting xij = uij for all arcs (i, j) ∈ L with c̄ij < 0, by setting xij = 0 for all arcs (i, j) ∈ U with c̄ij > 0, and by leaving the flow on the basic arcs unchanged. This readjustment of flow decreases the objective function by at most mΔU. We have thus shown that

z^k − z* ≤ mΔU.    (5.15)

Combining (5.13) and (5.15) we obtain

z^k − z^(k+1) ≥ (z^k − z*)/(nmU).

By Lemma 1.1, if H = mCU, the network simplex algorithm terminates in O(nmU log H) iterations. We summarize our discussion as follows.

Theorem 5.3. The network simplex algorithm that maintains a strongly feasible basis and uses Dantzig's pivot rule performs O(nmU log H) pivots.
This result gives polynomial time bounds for the shortest path and assignment problems, since both can be formulated as minimum cost flow problems with U = n and U = 1, respectively. In fact, it is possible to modify the algorithm and use the previous arguments to show that the simplex algorithm solves these problems in O(n^2 log C) pivots and runs in O(nm log C) total time. These results can be found in the references cited in Section 6.4.
5.7 Right-Hand-Side Scaling Algorithm

Scaling techniques are among the most effective algorithmic strategies for designing polynomial time algorithms for the minimum cost flow problem. In this section, we describe an algorithm based on a right-hand-side scaling (RHS-scaling) technique. The next two sections present polynomial time algorithms based upon cost scaling, and simultaneous right-hand-side and cost scaling.

The RHS-scaling algorithm is an improved version of the successive shortest path algorithm. The inherent drawback of the successive shortest path algorithm is that augmentations may carry relatively small amounts of flow, resulting in a fairly large number of augmentations in the worst case. The RHS-scaling algorithm guarantees that each augmentation carries a sufficiently large amount of flow and thereby reduces the number of augmentations substantially. We shall illustrate RHS-scaling on the uncapacitated minimum cost flow problem, i.e., a problem with uij = ∞ for each arc (i, j) ∈ A. The algorithm can be applied to the capacitated minimum cost flow problem after it has been converted into an uncapacitated problem (as described in Section 2.4).

The algorithm uses the pseudoflow x and the imbalances e(i) as defined in Section 5.4. It performs a number of scaling phases. Much as we did in the excess scaling algorithm for the maximum flow problem, we let Δ be the least power of 2 satisfying either (i) e(i) < 2Δ for all i, or (ii) e(i) > −2Δ for all i, but not necessarily both. Initially, Δ = 2^⌈log U⌉. This definition implies that the sum of the excesses (whose magnitude is equal to the sum of the deficits) is bounded by 2nΔ. Let S(Δ) = { i : e(i) ≥ Δ } and let T(Δ) = { j : e(j) ≤ −Δ }. Then at the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. In the Δ-scaling phase, we perform a number of augmentations, each from a node k ∈ S(Δ) to a node l ∈ T(Δ), and each of these augmentations carries Δ units of flow. The definition of Δ implies that within n augmentations the algorithm will decrease Δ by a factor of at least 2. At this point, we begin a new scaling phase. Hence, within O(log U) scaling phases, Δ < 1. By the integrality of the data, all imbalances are now zero and the algorithm has found an optimum flow.

The driving force behind this scaling technique is an invariant property (which we will prove later) that each arc flow in the Δ-scaling phase is a multiple of Δ. This flow invariant property and the connectedness assumption (A5.2) ensure that we can always send Δ units of flow from a node in S(Δ) to a node in T(Δ). The following algorithmic description is a formal statement of the RHS-scaling algorithm.
algorithm RHS-SCALING;
begin
    x := 0, e := b;
    let π be the shortest path distances in G(0);
    Δ := 2^⌈log U⌉;
    while the network contains a node with nonzero imbalance do
    begin
        S(Δ) := { i ∈ N : e(i) ≥ Δ };
        T(Δ) := { i ∈ N : e(i) ≤ −Δ };
        while S(Δ) ≠ ∅ and T(Δ) ≠ ∅ do
        begin
            select a node k ∈ S(Δ) and a node l ∈ T(Δ);
            determine the shortest path distances d from node k to all other nodes in the residual network G(x) with respect to the reduced costs;
            let P denote the shortest path from node k to node l;
            update π := π − d;
            augment Δ units of flow along the path P;
            update x, S(Δ) and T(Δ);
        end;
        Δ := Δ/2;
    end;
end;
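To make the statement concrete, here is a compact executable sketch in Python. It is our own illustration, not the paper's implementation: it assumes nonnegative arc costs (so zero initial potentials are valid), integral supplies, an uncapacitated network in which every augmentation is possible (assumption A5.2), and it simply scans all arcs inside Dijkstra's algorithm rather than using adjacency lists.

```python
import heapq

def dijkstra_residual(n, arc_keys, cost, flow, pi, s):
    """Shortest path distances from s in G(x), using the reduced costs
    c_ij - pi(i) + pi(j), which the algorithm keeps nonnegative."""
    INF = float('inf')
    d, pred = [INF] * n, [None] * n
    d[s] = 0
    heap = [(0, s)]
    while heap:
        dist, u = heapq.heappop(heap)
        if dist > d[u]:
            continue
        for (i, j) in arc_keys:
            if i == u:                          # forward residual arc u -> j
                rc = cost[i, j] - pi[u] + pi[j]
                if dist + rc < d[j]:
                    d[j], pred[j] = dist + rc, (u, (i, j), +1)
                    heapq.heappush(heap, (d[j], j))
            elif j == u and flow[i, j] > 0:     # backward residual arc u -> i
                rc = -cost[i, j] - pi[u] + pi[i]
                if dist + rc < d[i]:
                    d[i], pred[i] = dist + rc, (u, (i, j), -1)
                    heapq.heappush(heap, (d[i], i))
    return d, pred

def rhs_scaling_mcf(n, arcs, b):
    """RHS-scaling for the uncapacitated problem: arcs is a list of
    (i, j, cost); b[i] > 0 is a supply, b[i] < 0 a demand, sum(b) == 0."""
    cost = {(i, j): c for (i, j, c) in arcs}
    arc_keys = list(cost)
    flow = {a: 0 for a in arc_keys}
    pi, e = [0] * n, list(b)
    delta = 1
    while delta < max(abs(v) for v in b):
        delta *= 2                              # delta = 2**ceil(log U)
    while any(e):
        while True:
            S = [i for i in range(n) if e[i] >= delta]
            T = [i for i in range(n) if e[i] <= -delta]
            if not S or not T:
                break
            k, l = S[0], T[0]
            d, pred = dijkstra_residual(n, arc_keys, cost, flow, pi, k)
            pi = [pi[i] - d[i] if d[i] < float('inf') else pi[i]
                  for i in range(n)]
            v = l
            while v != k:                       # push delta units along P
                u, arc, sign = pred[v]
                flow[arc] += sign * delta
                v = u
            e[k] -= delta
            e[l] += delta
        delta //= 2
    return flow
```

On a three-node example with arcs (0,1) and (1,2) of cost 1, arc (0,2) of cost 3, and b = (2, 0, -2), the sketch routes both units along the cheap path 0-1-2.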
The RHS-scaling algorithm correctly solves the problem because during the Δ-scaling phase, it is always able to send Δ units of flow on the shortest path from a node k ∈ S(Δ) to a node l ∈ T(Δ). This fact follows from the following result.
Lemma 5.2. The residual capacities of arcs in the residual network are always integer multiples of Δ.

Proof. We use induction on the number of augmentations and scaling phases. The initial residual capacities are a multiple of Δ because they are either 0 or ∞. Each augmentation changes the residual capacities by 0 or Δ units and preserves the inductive hypothesis. A decrease in the scale factor by a factor of 2 also preserves the inductive hypothesis. This result implies the conclusion of the lemma.
Let S(n, m, C) denote the time required to solve a shortest path problem on a network with nonnegative arc lengths.
Theorem 5.4. The RHS-scaling algorithm correctly computes a minimum cost flow, performs O(n log U) augmentations, and consequently solves the minimum cost flow problem in O(n log U · S(n, m, C)) time.
Proof. The RHS-scaling algorithm is a special case of the successive shortest path algorithm and thus terminates with a minimum cost flow. We show that the algorithm performs at most n augmentations per scaling phase. Since the algorithm requires 1+⌈log U⌉ scaling phases, this fact would imply the conclusion of the theorem. At the beginning of the Δ-scaling phase, either S(2Δ) = ∅ or T(2Δ) = ∅. We consider the case when S(2Δ) = ∅; a similar proof applies when T(2Δ) = ∅. At the beginning of the scaling phase, | S(Δ) | ≤ n and e(i) < 2Δ for each node i ∈ S(Δ). Each augmentation starts at a node in S(Δ), ends at a node with a deficit, carries Δ units of flow, and therefore decreases | S(Δ) | by one. Consequently, each scaling phase can perform at most n augmentations.
Applying the scaling algorithm directly to the capacitated minimum cost flow problem introduces some subtlety, because Lemma 5.2 does not apply for this situation. The inductive hypothesis fails to be true initially, since the residual capacities are 0 or uij. As we noted previously, one method of solving the capacitated minimum cost flow problem is to first transform the capacitated problem to an uncapacitated one using the technique described in Section 2.4. We then apply the RHS-scaling algorithm on the transformed network. The transformed network contains n+m nodes, and each scaling phase performs at most n+m augmentations. The shortest path problem on the transformed network can be solved (using some clever techniques) in S(n, m, C) time. Consequently, the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log U · S(n, m, C)) time.
A recently developed modest variation of the RHS-scaling algorithm solves the capacitated minimum cost flow problem in O(m log n (m + n log n)) time. This method is currently the best strongly polynomial-time algorithm for solving the minimum cost flow problem.
5.8 Cost Scaling Algorithm

We now describe a cost scaling algorithm for the minimum cost flow problem. This algorithm can be viewed as a generalization of the preflow-push algorithm for the maximum flow problem. The algorithm relies on the concept of approximate optimality. A flow x is said to be ε-optimal for some ε > 0 if x together with some node potentials π satisfy the following conditions:

C5.7 (Primal feasibility) x is feasible.

C5.8 (ε-Dual feasibility) c̄ij ≥ −ε for each arc (i, j) in the residual network G(x).

We refer to these conditions as the ε-optimality conditions.
These conditions are a relaxation of the original optimality conditions, and they reduce to C5.5 and C5.6 when ε is 0. The ε-optimality conditions permit −ε ≤ c̄ij < 0 for an arc (i, j) at its lower bound and 0 < c̄ij ≤ ε for an arc (i, j) at its upper bound, which is a relaxation of the usual optimality conditions. The following facts are useful for analysing the cost scaling algorithm.
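The condition C5.8 is straightforward to verify mechanically. The following Python check is our own helper for a capacitated instance, sometimes convenient when experimenting with these methods:

```python
def is_eps_optimal(arc_keys, x, u, cost, pi, eps):
    """Check the eps-dual feasibility condition C5.8: every arc of the
    residual network G(x) must have reduced cost >= -eps. A forward residual
    arc (i, j) exists when x < u; a backward residual arc (j, i), whose
    reduced cost has the opposite sign, exists when x > 0."""
    for (i, j) in arc_keys:
        rc = cost[i, j] - pi[i] + pi[j]
        if x[i, j] < u[i, j] and rc < -eps:
            return False
        if x[i, j] > 0 and -rc < -eps:
            return False
    return True
```

For a single arc (0, 1) of cost 2 and capacity 5 with zero potentials, the empty flow is ε-optimal for every ε ≥ 0, while the saturating flow is ε-optimal only once ε reaches 2.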
Lemma 5.3. Any feasible flow is ε-optimal for ε ≥ C. Any ε-optimal feasible flow for ε < 1/n is an optimum flow.

Proof. Clearly, any feasible flow with zero node potentials satisfies C5.8 for ε ≥ C. Now consider an ε-optimal flow with ε < 1/n. The ε-dual feasibility conditions imply that for any directed cycle W in the residual network, Σ_{(i,j)∈W} cij = Σ_{(i,j)∈W} c̄ij ≥ −nε > −1. Since all arc costs are integral, this result implies that Σ_{(i,j)∈W} cij ≥ 0. Hence, the residual network contains no negative cost cycle and, from Theorem 5.1, the flow is optimum.

The cost scaling algorithm treats ε as a parameter and iteratively obtains ε-optimal flows for successively smaller values of ε. Initially ε = C, and finally ε < 1/n. The algorithm performs cost scaling phases by repeatedly applying an Improve-Approximation procedure that transforms an ε-optimal flow into an ε/2-optimal flow. After 1+⌈log nC⌉ cost scaling phases, ε < 1/n and the algorithm terminates with an optimum flow. More formally, we can state the algorithm as follows.
algorithm COST SCALING;
begin
    π := 0 and ε := C;
    let x be any feasible flow;
    while ε ≥ 1/n do
    begin
        IMPROVE-APPROXIMATION-I(ε, x, π);
        ε := ε/2;
    end;
    x is an optimum flow for the minimum cost flow problem;
end;
The Improve-Approximation procedure transforms an ε-optimal flow into an ε/2-optimal flow. It does so by (i) first converting an ε-optimal flow into a 0-optimal pseudoflow (a pseudoflow x is called ε-optimal if it satisfies the ε-dual feasibility conditions C5.8), and then (ii) gradually converting the pseudoflow into a flow while always maintaining the ε/2-dual feasibility conditions. We call a node i with e(i) > 0 active, and call an arc (i, j) in the residual network admissible if −ε/2 ≤ c̄ij < 0. The basic operations are selecting active nodes and pushing flows on admissible arcs. We shall see later that pushing flows on admissible arcs preserves the ε/2-dual feasibility conditions. The Improve-Approximation procedure uses the following subroutine.
procedure PUSH/RELABEL(i);
begin
    if G(x) contains an admissible arc (i, j) then
        push δ := min { e(i), rij } units of flow from node i to node j
    else π(i) := π(i) + ε/2 + min { c̄ij : (i, j) ∈ A(i) and rij > 0 };
end;
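The push/relabel step can be sketched in Python as follows. The explicit residual capacity dictionary and the return values are our own choices for illustration, and the sketch assumes the node has at least one outgoing residual arc:

```python
def push_relabel(i, e, r, cost, pi, eps):
    """One PUSH/RELABEL step at an active node i. r[(i, j)] is the residual
    capacity and cost[(i, j)] the cost of residual arc (i, j); the reduced
    cost is cost - pi(i) + pi(j), and an arc is admissible when its reduced
    cost lies in [-eps/2, 0)."""
    out = [a for a in r if a[0] == i and r[a] > 0]
    for (_, j) in out:
        rc = cost[i, j] - pi[i] + pi[j]
        if -eps / 2 <= rc < 0:                 # admissible arc: push
            d = min(e[i], r[i, j])
            r[i, j] -= d
            r[(j, i)] = r.get((j, i), 0) + d   # grow the reverse residual arc
            cost[(j, i)] = -cost[i, j]
            e[i] -= d
            e[j] += d
            return ('push', d)
    # no admissible arc out of i: relabel
    inc = eps / 2 + min(cost[i, j] - pi[i] + pi[j] for (_, j) in out)
    pi[i] += inc
    return ('relabel', inc)
```

Starting from a zero-reduced-cost arc, the first call relabels node i (creating an admissible arc) and the second call pushes flow along it, which matches the intended alternation of the two operations.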
Recall that rij denotes the residual capacity of an arc (i, j) in G(x). As in our earlier discussion of preflow-push algorithms for the maximum flow problem, if δ = rij, then we refer to the push as saturating; otherwise it is nonsaturating. We also refer to the updating of the potential of a node as a relabel operation. The purpose of a relabel operation is to create new admissible arcs. Moreover, we use the same data structure as used in the maximum flow algorithms to identify admissible arcs. For each node i, we maintain a current arc (i, j), which is the current candidate for pushing flow out of node i. The current arc is found by sequentially scanning the arc list A(i). The following generic version of the Improve-Approximation procedure summarizes its essential operations.
procedure IMPROVE-APPROXIMATION-I(ε, x, π);
begin
    for each arc (i, j) ∈ A do
        if c̄ij > 0 then xij := 0
        else if c̄ij < 0 then xij := uij;
    compute the node imbalances;
    while the network contains an active node do
    begin
        select an active node i;
        PUSH/RELABEL(i);
    end;
end;
The correctness of this procedure rests on the following result.

Lemma 5.4. The Improve-Approximation procedure always maintains ε/2-optimality of the pseudoflow, and at termination yields an ε/2-optimal flow.

Proof. This proof is similar to that of Lemma 4.1. At the beginning of the procedure, the algorithm adjusts the flows on arcs to obtain an ε/2-optimal pseudoflow (in fact, it is a 0-optimal pseudoflow).
it
use induction on the number of push/relabel steps to
is
show
algorithm preserves £/2-optimality of the pseudoflow. Pushing flow on arc
add
its
reversal
admissibility),
(j,
Cjj
and the condition C5.8
>
algorithm relabels node
i
when
Cj;
rule for increasing potentials, after
>
0) units, the
network. But since -e/2 S
to the residual
i)
^
we
residual network.
throughout and,
Ji(i)
satisfied for
for every arc increaise
reduced cost of every arc
addition, increasing
is
(i, j)
Jt(i)
with
maintains the condition
(i, j)
Cj;
that the
(i, j)
c
>
in the residual network.
>
still
{
Cj:
:
(i, j)
satisfies
cj^ t -e/2
might
(by the criteria of
any value of
by e/2 + min rj:
<
a 0-optiCTiaI
e A(i)
Cj;
0.
By our and
^ -e/2.
for all arc (k,i)
The
fjj
In
in the
Therefore, the procedure preserves e/2-optimality of the pseudoflow
at termination, yields
an e/2-optimal flow.
132
We next analyze the complexity of the Improve-Approximation procedure. We will show that the complexity of the generic version is O(n²m) and then describe a specialized version running in time O(n³). These time bounds are comparable to those of the preflow-push algorithms for the maximum flow problem.
Lemma 5.5. No node potential increases more than 3n times during an execution of the Improve-Approximation procedure.

Proof. Let x be the current ε/2-optimal pseudoflow and x' be the ε-optimal flow at the end of the previous cost scaling phase. Let π and π' be the node potentials corresponding to the pseudoflow x and the flow x', respectively. It is possible to show, using a variation of the flow decomposition properties discussed in Section 2.1, that for every node v with positive imbalance in x there exists a node w with negative imbalance in x and a path P satisfying the properties that (i) P is an augmenting path with respect to x, and (ii) its reversal P̄ is an augmenting path with respect to x'. This fact in terms of the residual networks implies that there exists a sequence of nodes v = v₀, v₁, ..., vₗ = w with the property that P = v₀ - v₁ - ... - vₗ is a path in G(x) and its reversal P̄ = vₗ - vₗ₋₁ - ... - v₀ is a path in G(x'). Applying the ε/2-optimality conditions to the arcs of P in G(x) (where l denotes the number of arcs in P), we obtain

    π(v) ≤ π(w) + l(ε/2) + Σ_{(i,j) ∈ P} c_ij.                        (5.16)

Applying the ε-optimality conditions to the arcs of P̄ in G(x'), we obtain

    π'(w) ≤ π'(v) + lε + Σ_{(j,i) ∈ P̄} c_ji = π'(v) + lε - Σ_{(i,j) ∈ P} c_ij.   (5.17)

Combining (5.16) and (5.17) gives

    π(v) ≤ π'(v) + (π(w) - π'(w)) + (3/2)lε.                          (5.18)

Now we use the facts that (i) π(w) = π'(w) (the potential of a node with a negative imbalance does not change because the algorithm never selects such a node for push/relabel), (ii) l ≤ n, and (iii) each increase in potential increases π(v) by at least ε/2 units. The lemma is now immediate.
Lemma 5.6. The Improve-Approximation procedure performs O(nm) saturating pushes.

Proof. This proof is similar to that of Lemma 4.5 and essentially amounts to showing that between two consecutive saturations of an arc (i, j), the potentials of both the nodes i and j increase at least once. Since any node potential increases O(n) times, the algorithm also saturates any arc O(n) times, resulting in O(nm) total saturating pushes.
To bound the number of nonsaturating pushes, we need one more result. We define the admissible network as the network consisting solely of admissible arcs. The following result is crucial to analyze the complexity of the cost scaling algorithms.

Lemma 5.7. The admissible network is acyclic throughout the cost scaling algorithms.

Proof. We establish this result by an induction argument applied to the number of pushes and relabels. The result is true at the beginning of each cost scaling phase because the pseudoflow is 0-optimal and the network contains no admissible arc. We always push flow on an arc (i, j) with c̄_ij < 0; hence, if the algorithm adds its reversal (j, i) to the residual network, then c̄_ji > 0. Thus pushes do not create new admissible arcs and preserve the inductive hypothesis. A relabel operation at node i may create new admissible arcs (i, j), but it also deletes all admissible arcs (k, i). The latter result is true because c̄_ki ≥ -ε/2 before a relabel operation, and c̄_ki ≥ 0 after the relabel operation, since the relabel operation increases π(i) by at least ε/2 units. Therefore the algorithm can create no directed cycles.
Lemma 5.8. The Improve-Approximation procedure performs O(n²m) nonsaturating pushes.

Proof (Sketch). Let g(i) be the number of nodes that are reachable from node i in the admissible network and let the potential function F = Σ_{i active} g(i). The proof amounts to showing that a relabel operation or a saturating push can increase F by at most n units, and that each nonsaturating push decreases F by at least 1 unit. Since, by Lemmas 5.5 and 5.6, the algorithm performs 3n² relabel operations and O(nm) saturating pushes, these observations yield a bound of O(n²m) on the number of nonsaturating pushes.

As in the maximum flow algorithm, the bottleneck operation in the Improve-Approximation procedure is the nonsaturating pushes, which take O(n²m) time. The algorithm takes O(nm) time to perform saturating pushes, and the same time to scan arcs while identifying admissible arcs. Since the cost scaling algorithm calls Improve-Approximation 1+⌈log nC⌉ times, we obtain the following result.
Theorem 5.8. The generic cost scaling algorithm runs in O(n²m log nC) time.

The cost scaling algorithm illustrates an important connection between the maximum flow and the minimum cost flow problems. Solving an Improve-Approximation problem is very similar to solving a maximum flow problem. Just as in the generic preflow-push algorithm for the maximum flow problem, the bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining nodes in some specific order, or using clever data structures. We describe one such improvement, called the wave algorithm.

The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm uses the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered so that for each arc (i, j) in the network, i < j. It is possible to determine this ordering, called a topological ordering of nodes, in O(m) time. Observe that pushes do not change the admissible network since they do not create new admissible arcs. The relabel operations, however, may create new admissible arcs and consequently may affect the topological ordering of nodes.

The wave algorithm examines each node in the topological order and, if the node is active, then it performs a push/relabel step. When examined in this order, active nodes push flow to higher numbered nodes, which in turn push flow to even higher numbered nodes, and so on. A relabel operation changes the numbering of nodes and the topological order, and thus the method again starts to examine the nodes according to the topological order. However, if within n consecutive node examinations the algorithm performs no relabel operation, then all active nodes have discharged their excesses and the algorithm obtains a flow. Since the algorithm requires O(n²) relabel operations, we immediately obtain a bound of O(n³) on the number of node examinations. Each node examination entails at most one nonsaturating push. Consequently, the wave algorithm performs O(n³) nonsaturating pushes per Improve-Approximation.
We now describe a procedure for obtaining a topological order of nodes after each relabel operation. An initial topological ordering is determined using an O(m) algorithm. Suppose that while examining node i, the algorithm relabels it. Note that after the relabel operation at node i, the network contains no incoming admissible arc at node i (see the proof of Lemma 5.7). We then move node i from its present position in the topological order to the first position. Notice that this altered ordering is a topological ordering of the new admissible network. This result follows from the facts that (i) node i has no incoming admissible arc; (ii) node i precedes node j in the order for each outgoing admissible arc (i, j); and (iii) the rest of the admissible network does not change, and so the previous order is still valid. Thus the algorithm maintains an ordered set of nodes (possibly as a doubly linked list) and examines nodes in this order. Whenever it relabels a node i, the algorithm moves node i to the first place in this order and again examines nodes in order starting at node i.

We have established the following result.

Theorem 5.9. The cost scaling approach using the wave algorithm as a subroutine solves the minimum cost flow problem in O(n³ log nC) time.
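The move-to-front reordering after a relabel is simple enough to demonstrate directly. The sketch below (our own illustrative helper names, not from the paper) checks that moving the relabeled node, which has lost all of its incoming admissible arcs, to the front of the old order yields a valid topological order of the new admissible network.

```python
def move_to_front(order, i):
    """After relabeling node i, node i has no incoming admissible arc,
    so moving it to the front of the current topological order yields a
    topological order of the new admissible network."""
    return [i] + [k for k in order if k != i]

def is_topological(order, admissible_arcs):
    """Check that every admissible arc goes forward in the order."""
    pos = {node: p for p, node in enumerate(order)}
    return all(pos[i] < pos[j] for (i, j) in admissible_arcs)
```

For example, if the admissible network {(1, 2), (2, 3)} with order [1, 2, 3, 4] changes, after relabeling node 3, to {(1, 2), (3, 4)}, the altered order [3, 1, 2, 4] remains topological.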
Double Scaling Algorithm

The double scaling approach combines ideas from both the RHS-scaling and cost scaling algorithms and obtains an improvement not obtained by either algorithm alone. For the sake of simplicity, we shall describe the double scaling algorithm on the uncapacitated transportation network G = (N₁ ∪ N₂, A), with N₁ and N₂ as the sets of supply and demand nodes respectively. A capacitated minimum cost flow problem can be solved by first transforming the problem into an uncapacitated transportation problem (as described in Section 2.4) and then applying the double scaling algorithm.

The double scaling algorithm is the same as the cost scaling algorithm discussed in the previous section except that it uses a more efficient version of the Improve-Approximation procedure. The Improve-Approximation procedure in the previous section relied on a "pseudoflow-push" method. A natural alternative would be to try an augmenting path based method. This approach would send flow from a node with excess to a node with deficit over an admissible path, i.e., a path in which each arc is admissible. A natural implementation of this approach would result in O(nm) augmentations, since each augmentation would saturate at least one arc and, by Lemma 5.6, the algorithm requires O(nm) arc saturations. Thus, this approach does not seem to improve the O(n²m) bound of the generic Improve-Approximation procedure.

We can, however, use ideas from the RHS-scaling algorithm to reduce the number of augmentations to O(n log U) for an uncapacitated problem by ensuring that each augmentation carries sufficiently large flow. This approach gives us an algorithm that does cost scaling in the outer loop and within each cost scaling phase performs a number of RHS-scaling phases; hence, this algorithm is called the double scaling algorithm. The advantage of the double scaling algorithm, contrasted with solving a shortest path problem in the RHS-scaling algorithm, is that the double scaling algorithm identifies an augmenting path in O(n) time on average over a sequence of n augmentations. In fact, the double scaling algorithm appears to be similar to the shortest augmenting path algorithm for the maximum flow problem; this algorithm also requires O(n) time on average to find each augmenting path.

The double scaling algorithm uses the following Improve-Approximation procedure.
procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
    set x := 0 and compute node imbalances;
    π(j) := π(j) + ε, for all j ∈ N₂;
    Δ := 2^⌈log U⌉;
    while the network contains an active node do
    begin
        S(Δ) := { i ∈ N₁ ∪ N₂ : e(i) ≥ Δ };
        while S(Δ) ≠ ∅ do
        begin {RHS-scaling phase}
            select a node k in S(Δ) and delete it from S(Δ);
            determine an admissible path P from node k to some node l with e(l) < 0;
            augment Δ units of flow on P and update x;
        end;
        Δ := Δ/2;
    end;
end;
We shall describe a method to determine admissible paths after first commenting on the correctness of this procedure. First, observe that c̄_ij ≥ -ε for all (i, j) ∈ A at the beginning of the procedure and, by adding ε to π(j) for all j ∈ N₂, we obtain an ε/2-optimal (in fact, a 0-optimal) pseudoflow. The procedure always augments flow on admissible arcs and, from Lemma 5.4, this choice preserves the ε/2-optimality of the pseudoflow. Thus, at the termination of the procedure, we obtain an ε/2-optimal flow. Further, as in the RHS-scaling algorithm, the procedure maintains the invariant property that all residual capacities are integer multiples of Δ and thus each augmentation can carry Δ units of flow.
The algorithm identifies an admissible path P by gradually building the path. We maintain a partial admissible path P using predecessor indices, i.e., if (u, v) ∈ P then pred(v) = u. At any point in the algorithm, we perform one of the following two steps, whichever is applicable, at the last node of P, say node i, terminating when the last node has a deficit.

advance(i). If the residual network contains an admissible arc (i, j), then add (i, j) to P. If e(j) < 0, then stop.

retreat(i). If the residual network does not contain an admissible arc (i, j), then update π(i) to π(i) + ε/2 + min { c̄_ij : (i, j) ∈ A(i) and r_ij > 0 }. If P has at least one arc, then delete (pred(i), i) from P.

The retreat step relabels (increases the potential of) node i for the purpose of creating new admissible arcs emanating from this node; in the process, the arc (pred(i), i) becomes inadmissible. Hence, we delete this arc from P. The proof of Lemma 5.4 implies that increasing the node potential maintains ε/2-optimality of the pseudoflow.
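The advance/retreat loop can be sketched as follows. This is a toy Python rendering under simplifying assumptions (dictionary-based residual network, a node list standing in for predecessor indices); the function and parameter names are ours, not the paper's.

```python
def find_admissible_path(source, e, residual, cost, pi, eps):
    """Build an admissible path from an excess node to a deficit node by
    advance/retreat steps.  residual: {i: {j: r_ij}}, cost: {(i, j): c_ij},
    e: node imbalances.  Returns the node sequence of the path P."""
    rc = lambda i, j: cost[(i, j)] - pi[i] + pi[j]   # reduced cost
    path = [source]                        # partial admissible path P
    while e[path[-1]] >= 0:                # stop when last node has a deficit
        i = path[-1]
        adm = [j for j in residual[i] if residual[i][j] > 0 and rc(i, j) < 0]
        if adm:                            # advance(i)
            path.append(adm[0])
        else:                              # retreat(i): relabel node i ...
            pi[i] += eps / 2 + min(rc(i, j) for j in residual[i]
                                   if residual[i][j] > 0)
            if len(path) > 1:              # ... which makes (pred(i), i)
                path.pop()                 # inadmissible, so delete it from P
    return path
```

On a single-arc instance with reduced cost 0, the procedure first retreats (relabeling the excess node by ε/2) and then advances along the now-admissible arc to the deficit node.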
We next consider the complexity of this implementation of the Improve-Approximation procedure. Each execution of the procedure performs 1+⌈log U⌉ RHS-scaling phases. At the beginning of the Δ-scaling phase, S(2Δ) = ∅, i.e., e(i) < 2Δ for each node i ∈ S(Δ). During the Δ-scaling phase, the algorithm augments Δ units of flow from a node k in S(Δ) to a node l with e(l) < 0. This operation reduces the excess at node k to a value less than Δ and ensures that the excess at node l, if there is any, is less than Δ. Consequently, each augmentation deletes a node from S(Δ) and after at most n augmentations, the method begins a new scaling phase. The algorithm thus performs a total of O(n log U) augmentations.

We next count the number of advance steps. Each advance step adds an arc to the partial admissible path, and a retreat step deletes an arc from the partial admissible path. Thus, there are two types of advance steps: (i) those that add arcs to an admissible path on which the algorithm later performs an augmentation; and (ii) those that are later cancelled by a retreat step. Since the admissible network is acyclic (by Lemma 5.7), after at most n advance steps of the first type, the algorithm will discover an admissible path and will perform an augmentation. Since the algorithm requires a total of O(n log U) augmentations, the number of advance steps of the first type is at most O(n² log U). The algorithm performs at most O(n²) advance steps of the second type, because each retreat step increases a node potential, and by Lemma 5.5, node potentials increase O(n²) times. The total number of advance steps, therefore, is O(n² log U).
The amount of time needed to identify admissible arcs is O( Σ_{i=1}^{n} |A(i)| · n ) = O(nm), since between two consecutive potential increases of a node i, the algorithm examines |A(i)| arcs for testing admissibility. We have therefore established the following result.

Theorem 5.10. The double scaling algorithm solves the uncapacitated transportation problem in O((nm + n² log U) log nC) time.
To solve the capacitated minimum cost flow problem, we first transform it into an uncapacitated transportation problem and then apply the double scaling algorithm. We leave it as an exercise for the reader to show that the transformation permits us to use the double scaling algorithm to solve the capacitated minimum cost flow problem in O(nm log U log nC) time. The references describe further modest improvements of the algorithm using more sophisticated data structures. For problems that satisfy the similarity assumption, a variant of this algorithm is currently the fastest polynomial-time algorithm for most classes of the minimum cost flow problem.
5.10 Sensitivity Analysis

The purpose of sensitivity analysis is to determine changes in the optimum solution of a minimum cost flow problem resulting from changes in the data (supply/demand vector, capacity or cost of any arc). Traditionally, researchers and practitioners have conducted this sensitivity analysis using the primal simplex or dual simplex algorithms. There is, however, a conceptual drawback to this approach. The simplex based approach maintains a basis tree at every iteration and conducts sensitivity analysis by determining changes in the basis tree precipitated by changes in the data. The basis in the simplex algorithm is often degenerate, though, and consequently changes in the basis tree do not necessarily translate into changes in the solution. Therefore, the simplex based approach does not give information about the changes in the solution as the data changes; instead, it tells us about the changes in the basis tree.
We present another approach for performing sensitivity analysis. This approach does not share the drawback we have just mentioned. For simplicity, we limit our discussion to a unit change of only a particular type. In a sense, however, this discussion is quite general: it is possible to reduce more complex changes to a sequence of the simple changes we consider. We show that the sensitivity analysis for the minimum cost flow problem essentially reduces to solving shortest path or maximum flow problems.

Let x* denote an optimum solution of a minimum cost flow problem. Let π* be the corresponding node potentials and c̄_ij = c_ij - π*(i) + π*(j) denote the reduced costs. Further, let d(k, l) denote the shortest distance from node k to node l in the residual network with respect to the arc lengths c̄_ij. At optimality, the reduced costs c̄_ij of all arcs in the residual network are nonnegative. Hence, we can compute d(k, l) for all pairs of nodes k and l by solving n single-source shortest path problems with nonnegative arc lengths. Since for any directed path P from node k to node l, Σ_{(i,j) ∈ P} c̄_ij = Σ_{(i,j) ∈ P} c_ij - π*(k) + π*(l), d(k, l) equals the shortest distance from node k to node l in the residual network with respect to the original arc lengths, plus (π*(l) - π*(k)).
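Because the reduced-cost arc lengths are nonnegative at optimality, each of these single-source problems can be solved with Dijkstra's algorithm. A minimal sketch, assuming the residual network and its reduced costs are already available as a dictionary:

```python
import heapq

def residual_distances(nodes, residual_arcs, source):
    """Dijkstra on the residual network with reduced-cost arc lengths.
    residual_arcs: {(i, j): reduced_cost}, all nonnegative at optimality.
    Returns d(source, .) for every node."""
    adj = {i: [] for i in nodes}
    for (i, j), rc in residual_arcs.items():
        assert rc >= 0          # guaranteed by the optimality conditions
        adj[i].append((j, rc))
    dist = {i: float('inf') for i in nodes}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue            # stale heap entry
        for j, rc in adj[i]:
            if d + rc < dist[j]:
                dist[j] = d + rc
                heapq.heappush(heap, (dist[j], j))
    return dist
```

Running this once from each node yields the full table d(k, l) used throughout this section.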
Supply/Demand Sensitivity Analysis

We first study the change in the supply/demand vector. Suppose that the supply/demand of a node k becomes b(k) + 1 and the supply/demand of another node l becomes b(l) - 1. (Recall from Section 1.1 that feasibility of the minimum cost flow problem dictates that Σ_{i ∈ N} b(i) = 0; hence, we must change the supply/demand values of two nodes by equal magnitudes, and must increase one value and decrease the other.) Then x* is a pseudoflow for the modified problem; moreover, this vector satisfies the dual feasibility conditions C5.6. Augmenting one unit of flow from node k to node l along the shortest path in the residual network G(x*) converts this pseudoflow into a flow. This augmentation changes the objective function value by d(k, l) units. Lemma 5.1 implies that this flow is optimum for the modified minimum cost flow problem.
Arc Capacity Sensitivity Analysis

We next consider a change in an arc capacity. Suppose that the capacity of an arc (p, q) increases by one unit. The flow x* is feasible for the modified problem. In addition, if c̄_pq ≥ 0, it satisfies the optimality conditions C5.2 - C5.4; hence, it is an optimum flow for the modified problem. If c̄_pq < 0, then condition C5.4 dictates that flow on the arc must equal its capacity. We satisfy this requirement by increasing the flow on the arc (p, q) by one unit, which produces a pseudoflow with an excess of one unit at node q and a deficit of one unit at node p. We convert the pseudoflow into a flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network, which changes the objective function value by an amount c̄_pq + d(q, p). This flow is optimum from our observations concerning supply/demand sensitivity analysis.

When the capacity of the arc (p, q) decreases by one unit and the flow on the arc is strictly less than its capacity, then x* remains feasible, and hence optimum, for the modified problem. However, if the flow on the arc is at its capacity, we decrease the flow by one unit and augment one unit of flow from node p to node q along the shortest path in the residual network. This augmentation changes the objective function value by an amount -c̄_pq + d(p, q).
The preceding discussion shows how to determine changes in the optimum solution value due to unit changes of any two supply/demand values, or a unit change in any arc capacity, by solving n single-source shortest path problems. We can, however, obtain useful upper bounds on these changes by solving only two shortest path problems. This observation uses the fact that d(k, l) ≤ d(k, 1) + d(1, l) for all pairs of nodes k and l. Consequently, we need to determine shortest path distances from node 1 to all other nodes, and from all other nodes to node 1, to compute upper bounds on all d(k, l). Recent empirical studies have suggested that these upper bounds are very close to the actual values; often the upper bounds and the actual values are equal, and usually they are within 5% of each other.
Cost Sensitivity Analysis

Finally, we discuss changes in arc costs, which we assume are integral. Suppose that the cost of an arc (p, q) increases by one unit. This change increases the reduced cost of arc (p, q) by one unit as well. If c̄_pq < 0 before the change, then after the change c̄_pq ≤ 0. Similarly, if c̄_pq > 0 before the change, then c̄_pq ≥ 1 > 0 after the change. In both cases, we preserve the optimality conditions. However, if c̄_pq = 0 before the change and x*_pq > 0, then after the change c̄_pq = 1 > 0 and the solution violates the condition C5.2.

To satisfy the optimality condition of the arc, we must either reduce the flow on arc (p, q) to zero, or change the potentials so that the reduced cost of arc (p, q) becomes zero. We first try to reroute the flow x*_pq from node p to node q without violating any of the optimality conditions. We do so by solving a maximum flow problem defined as follows: (i) the flow on the arc (p, q) is set to zero, thus creating an excess of x*_pq at node p and a deficit of x*_pq at node q; (ii) node p is the source node and node q is the sink node; and (iii) we send a maximum of x*_pq units from the source to the sink. We permit the maximum flow algorithm, however, to change flows only on arcs with zero reduced costs, since otherwise it would generate a solution that violates the conditions C5.2 and C5.4. Let v° denote the flow sent from node p to node q, and x° denote the resulting arc flow. If v° = x*_pq, then x° denotes a minimum cost flow of the modified problem. In this case, the optimal objective function values of the original and modified problems are the same.

On the other hand, if v° < x*_pq, then the maximum flow algorithm yields an s-t cut (X, N - X) with the properties that p ∈ X, q ∈ N - X, and every forward arc in the cutset with zero reduced cost is capacitated. It is easy to verify that decreasing the node potential of every node in N - X by one unit maintains the optimality conditions and, furthermore, decreases the reduced cost of arc (p, q) to zero. Consequently, we can set the flow on arc (p, q) equal to x*_pq - v° and obtain a feasible minimum cost flow. In this case, the objective function value of the modified problem is x*_pq - v° units more than that of the original problem.

5.11 Assignment Problem

The assignment problem is one of the best-known and most intensively studied special cases of the minimum cost flow problem. As already indicated in Section 1.1, this problem is defined by a set N₁, say of persons, a set N₂, say of objects (|N₁| = |N₂| = n), a collection of node pairs A ⊆ N₁ × N₂ representing possible person-to-object assignments, and a cost c_ij (possibly negative) associated with each element (i, j) in A. The objective is to assign each person to exactly one object, choosing the assignment with
minimum possible cost. The problem can be formulated as the following linear program:

Minimize  Σ_{(i,j) ∈ A} c_ij x_ij                              (5.18a)

subject to

    Σ_{j : (i,j) ∈ A} x_ij = 1, for all i ∈ N₁,                (5.18b)

    Σ_{i : (i,j) ∈ A} x_ij = 1, for all j ∈ N₂,                (5.18c)

    x_ij ≥ 0, for all (i, j) ∈ A.                              (5.18d)
The assignment problem is a minimum cost flow problem defined on a network G with node set N = N₁ ∪ N₂, arc set A, arc costs c_ij, and supply/demand specified as b(i) = 1 if i ∈ N₁ and b(i) = -1 if i ∈ N₂. The network G has 2n nodes and m = |A| arcs. The assignment problem is also known as the bipartite matching problem.

We use the following notation. A 0-1 solution x of (5.18) is an assignment. If x_ij = 1, then i is assigned to j and j is assigned to i. A 0-1 solution x satisfying Σ_{j : (i,j) ∈ A} x_ij ≤ 1 for all i ∈ N₁ and Σ_{i : (i,j) ∈ A} x_ij ≤ 1 for all j ∈ N₂ is called a partial assignment. Associated with any partial assignment x is an index set X defined as X = { (i, j) ∈ A : x_ij = 1 }. A node not assigned to any other node is unassigned.
Researchers have suggested numerous algorithms for solving the assignment problem. Several of these algorithms apply, either explicitly or implicitly, the successive shortest path algorithm for the minimum cost flow problem. These algorithms typically select the initial node potentials with the following values: π(i) = 0 for all i ∈ N₁ and π(j) = min { c_ij : (i, j) ∈ A } for all j ∈ N₂. All reduced costs defined by these node potentials are nonnegative. The successive shortest path algorithm solves the assignment problem as a sequence of n shortest path problems with nonnegative arc lengths, and consequently runs in O(n S(n,m,C)) time. (Note that S(n,m,C) is the time required to solve a shortest path problem with nonnegative arc lengths.)
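The initial potentials above are cheap to compute. The following sketch sets each object's potential to the smallest cost of an incident arc and checks that the resulting reduced costs are nonnegative; the sign convention for the bipartite reduced cost shown here is a simplifying assumption, and the function name is ours.

```python
def initial_potentials(costs):
    """Initial node potentials for the assignment problem: zero for each
    person, and for each object the smallest cost over its incident arcs.
    All reduced costs c_ij - pi(j) are then nonnegative.
    costs: {(person, object): c_ij}."""
    pi = {}
    for (i, j), c in costs.items():
        pi[j] = min(c, pi.get(j, c))          # pi(j) = min { c_ij : (i, j) in A }
    reduced = {(i, j): c - pi[j] for (i, j), c in costs.items()}
    return pi, reduced
```

With nonnegative reduced costs established, each subsequent shortest path computation can use Dijkstra's algorithm.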
The relaxation approach is another popular approach, which is also closely related to the successive shortest path algorithm. The relaxation algorithm removes, or relaxes, the constraint (5.18c), thus allowing any object to be assigned to more than one person. This relaxed problem is easy to solve: assign each person i to an object j with the smallest c_ij value. As a result, some objects may be overassigned and other objects may be unassigned. The algorithm gradually builds a feasible assignment by identifying shortest paths from overassigned objects to unassigned objects and augmenting flows on these paths. The algorithm solves at most n shortest path problems. Because this approach always maintains the optimality conditions, it can solve the shortest path problems by implementations of Dijkstra's algorithm. Consequently, this algorithm also runs in O(n S(n,m,C)) time.
One well known solution procedure for the assignment problem, the Hungarian method, is essentially the primal-dual variant of the successive shortest path algorithm. The network simplex algorithm, with provisions for maintaining a strongly feasible basis, is another solution procedure for the assignment problem. This approach is fairly efficient in practice; moreover, some implementations of it provide polynomial time bounds. For problems that satisfy the similarity assumption, however, a cost scaling algorithm provides the best-known time bound for the assignment problem. Since these algorithms are special cases of other algorithms we have described earlier, we will not specify their details. Rather, in this section, we will discuss a different type of algorithm based upon the notion of an auction. Before doing so, we show another intimate connection between the assignment problem and the shortest path problem.
Assignments and Shortest Paths

We have seen that by solving a sequence of shortest path problems, we can solve any assignment problem. Interestingly, we can also use any algorithm for the assignment problem to solve the shortest path problem with arbitrary arc lengths. To do so, we apply the assignment algorithm twice. The first application determines if the network contains a negative cycle; and, if it doesn't, the second application identifies a shortest path. Both the applications use the node splitting transformation described in Section 2.4.

The node splitting transformation replaces each node i by two nodes i and i', replaces each arc (i, j) by an arc (i, j'), and adds an (artificial) zero cost arc (i, i'). We first note that the transformed network always has a feasible solution with cost zero:
namely, the assignment containing all of the artificial arcs (i, i'). We next show that the optimal value of the assignment problem is negative if and only if the original network has a negative cost cycle.

First, suppose that the original network contains a negative cost cycle, j₁ - j₂ - j₃ - ... - jₖ - j₁. Then the assignment { (j₁, j₂'), (j₂, j₃'), ..., (jₖ, j₁') }, together with the artificial arcs (i, i') for every other node i, is a feasible assignment whose cost equals the (negative) cost of the cycle. Therefore, the cost of the optimal assignment must be negative.

Conversely, suppose that the cost of an optimal assignment is negative. This solution must contain at least one arc of the form (i, j') with i ≠ j; consequently, the assignment must contain one or more sets of arcs of the form PA = { (j₁, j₂'), (j₂, j₃'), ..., (jₖ, j₁') }. The cost of each such "partial" assignment is nonpositive, because the optimal assignment can be no more expensive than the assignment obtained by replacing PA with the artificial arcs (j₁, j₁'), (j₂, j₂'), ..., (jₖ, jₖ'), which have zero cost. Since the optimal assignment cost is negative, some partial assignment PA must have negative cost. But then, by construction of the transformed network, the cycle j₁ - j₂ - ... - jₖ - j₁ is a negative cost cycle in the original network.
Figure 5.3. (a) The original network. (b) The transformed network.
If the original network contains no negative cost cycle, then we can obtain a shortest path between a specific pair of nodes, say from node 1 to node n, as follows. We consider the transformed network as described earlier and delete the nodes 1' and n and the arcs incident to these nodes. See Figure 5.3 for an example of this transformation. Now observe that each path from node 1 to node n in the original network has a corresponding assignment of the same cost in the transformed network, and the converse is also true. For example, the path 1-2-5 in Figure 5.3(a) has the corresponding assignment {(1, 2'), (2, 5'), (3, 3'), (4, 4')} in Figure 5.3(b), and an assignment {(1, 2'), (2, 4'), (4, 5'), (3, 3')} in Figure 5.3(b) has the corresponding path 1-2-4-5 in Figure 5.3(a). Consequently, an optimum assignment in the transformed network gives a shortest path in the original network.
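The node splitting transformation is mechanical enough to sketch directly. The helper names below are ours; the brute-force solver enumerates all assignments and is meant only to verify the negative-cycle equivalence on tiny instances, not to stand in for a real assignment algorithm.

```python
from itertools import permutations

def split_transform(nodes, arcs):
    """Node splitting (Section 2.4): node i becomes the pair i, i'; each
    arc (i, j) of cost c becomes (i, j'); artificial arcs (i, i') cost 0.
    arcs: iterable of (i, j, cost) triples."""
    cost = {(i, str(j) + "'"): c for (i, j, c) in arcs}
    for i in nodes:
        cost[(i, str(i) + "'")] = 0          # artificial zero cost arc
    return cost

def optimal_assignment_cost(nodes, cost):
    """Brute-force optimal assignment cost (tiny instances only)."""
    best = float('inf')
    for perm in permutations(nodes):
        if all((i, str(j) + "'") in cost for i, j in zip(nodes, perm)):
            best = min(best, sum(cost[i, str(j) + "'"]
                                 for i, j in zip(nodes, perm)))
    return best
```

On a two-node network whose only cycle has cost -1, the optimal assignment cost is -1; raising the cycle cost above zero makes the all-artificial assignment (cost 0) optimal, matching the equivalence shown above.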
The Auction Algorithm

We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then incorporate scaling to make the algorithm polynomial time. This scaling algorithm is an instance of the bit-scaling algorithm described in Section 1.6. To describe the auction algorithm, we consider the maximization version of the assignment problem, since this version appears more natural for interpreting the algorithm.

Suppose n persons want to buy n cars that are to be sold by auction. Each person i is interested in a subset A(i) of cars, and has a nonnegative utility u_ij for car j, for each (i, j) ∈ A(i). The objective is to find an assignment with maximum utility. We can reduce this problem to (5.18) by setting c_ij = -u_ij. Let C = max { |u_ij| : (i, j) ∈ A }. At each stage of the algorithm, there is an asking price for car j, represented by price(j). For a given set of asking prices, the marginal utility of person i for buying car j is u_ij - price(j). At each iteration, an unassigned person bids on a car that has the highest marginal utility. We assume that all utilities and prices are measured in dollars.

We associate with each person i a number value(i), which is an upper bound on that person's highest marginal utility, i.e., value(i) ≥ max { u_ij - price(j) : (i, j) ∈ A(i) }. We call a bid (i, j) admissible if value(i) = u_ij - price(j) and inadmissible otherwise. The algorithm requires every bid in the auction to be admissible. If person i is unassigned and has no admissible bid, then value(i) is too high and we decrease this value to max { u_ij - price(j) : (i, j) ∈ A(i) }.
So the algorithm proceeds by persons bidding on cars. If a person i makes a bid on car j, then the price of car j goes up by $1; therefore, subsequent bids are of higher value. Also, person i is assigned to car j. The person k who was the previous bidder for car j, if there was one, becomes unassigned. Subsequently, person k must bid on another car. As the auction proceeds, the prices of cars increase and hence the marginal values to the persons decrease. The auction stops when each person is assigned a car. We now describe this bidding procedure algorithmically. The procedure starts with some valid choices for value(i) and price(j). For example, we can set price(j) = 0 for each car j and value(i) = max { u_ij : (i, j) ∈ A(i) } for each person i. Although this initialization is sufficient for the pseudopolynomial time version, the polynomial time version requires a more clever initialization. At termination, the procedure yields an almost optimum assignment x°.
procedure BIDDING(u, x°, value, price);
begin
    let the initial assignment be a null assignment;
    while some person is unassigned do
    begin
        select an unassigned person i;
        if some bid (i, j) is admissible then
        begin
            assign person i to car j;
            price(j) := price(j) + 1;
            if person k was already assigned to car j, then person k becomes unassigned;
        end
        else update value(i) := max {u_ij - price(j) : (i, j) in A(i)};
    end;
    let x° be the current assignment;
end;
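The Bidding procedure above can be sketched in executable form. The following Python fragment follows the pseudocode directly; the graph representation (a dictionary mapping each person i to the utilities u_ij over A(i)) and all identifier names are our own, and the simple initialization value(i) = max {u_ij} is the pseudopolynomial-time variant described in the text.

```python
def bidding(utility, n_cars):
    """Sketch of the auction's Bidding procedure.

    utility: dict mapping person i -> {car j: u_ij} over A(i).
    Returns (assignment, value, price); the assignment's total
    utility is within n of the optimum, as shown in the text.
    """
    price = {j: 0 for j in range(n_cars)}
    # valid initial values: value(i) = max {u_ij : (i, j) in A(i)}
    value = {i: max(utility[i].values()) for i in utility}
    assigned_car = {}               # person -> car
    assigned_person = {}            # car -> person
    unassigned = list(utility)
    while unassigned:
        i = unassigned.pop()
        # an admissible bid satisfies value(i) = u_ij - price(j)
        car = next((j for j, u in utility[i].items()
                    if value[i] == u - price[j]), None)
        if car is None:
            # value(i) is too high; lower it to the best marginal utility
            value[i] = max(u - price[j] for j, u in utility[i].items())
            unassigned.append(i)
        else:
            price[car] += 1         # the bid raises the asking price by $1
            k = assigned_person.get(car)
            if k is not None:       # the previous bidder becomes unassigned
                del assigned_car[k]
                unassigned.append(k)
            assigned_person[car] = i
            assigned_car[i] = car
    return assigned_car, value, price
```

On a two-person, two-car instance the procedure terminates with a complete assignment whose utility is within n = 2 dollars of the optimum.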
We now show that this procedure gives an assignment whose utility is within $n of the optimum utility. Let x° denote a partial assignment at some point during the execution of the auction algorithm and x* denote an optimum assignment. Recall that value(i) is always an upper bound on the highest marginal utility of person i, i.e., value(i) >= u_ij - price(j) for all (i, j) in A(i). Consequently,
    Σ_{(i,j) in x*} u_ij  <=  Σ_{i in N1} value(i)  +  Σ_{j in N2} price(j).        (5.19)

The partial assignment x° also satisfies the condition

    u_ij = value(i) + price(j) - 1,  for all (i, j) in x°,                          (5.20)

because at the time of bidding value(i) = u_ij - price(j) and immediately after the bid, price(j) goes up by $1. Let UB(x°) be defined as follows.

    UB(x°) = Σ_{(i,j) in x°} u_ij  +  Σ_{i in N1°} value(i),                        (5.21)

with N1° denoting the unassigned persons in N1. Using (5.20) in (5.21) and observing that unassigned cars in N2 have zero prices, we obtain

    UB(x°) >= Σ_{i in N1} value(i)  +  Σ_{j in N2} price(j)  -  n.                  (5.22)

Combining (5.19) and (5.22) yields

    UB(x°) >= Σ_{(i,j) in x*} u_ij  -  n.                                           (5.23)
As we show in our discussion to follow, the algorithm can change the node values and prices at most a finite number of times. Since the algorithm will either modify a node value or a node price whenever x° is not an assignment, within a finite number of steps the method must terminate with a complete assignment x°. Then UB(x°) represents the utility of this assignment (since N1° is empty). Hence, the utility of the assignment x° is at most $n less than the maximum utility.

It is easy to modify the method, however, to obtain an optimum assignment. Suppose we multiply all utilities u_ij by (n+1) before applying the Bidding procedure. Since all utilities are now multiples of (n+1), two assignments with distinct total utility will differ by at least (n+1) units. The procedure yields an assignment that is within n units of the optimum value and, hence, must be optimal.

We next discuss the complexity of the Bidding procedure as applied to the assignment problem with all utilities multiplied by (n+1). In this modified problem, the largest utility is C' = (n+1)C. We first show that the value of any person decreases O(nC') times.
Since all utilities are nonnegative, (5.23) implies UB(x°) >= -n. Substituting this inequality in (5.21) yields

    Σ_{i in N1°} value(i)  >=  -n(C' + 1).

Since value(i) decreases by at least one unit each time it changes, this inequality shows that the value of any person decreases at most O(nC') times. Since decreasing the value of a person i once takes O(|A(i)|) time, the total time needed to update the values of all persons is O(nC' Σ_{i in N1} |A(i)|) = O(nmC').

We next examine the number of iterations performed by the procedure. Each iteration either decreases the value of a person i or assigns the person to some car j. By our previous arguments, the values change O(n²C') times in total. Further, since value(i) > u_ij - price(j) after person i has been assigned to car j and the price of car j increases by one unit, a person can be assigned at most |A(i)| times between two consecutive decreases in value(i). This observation gives us a bound of O(nmC') on the total number of times all bidders become assigned. As can be shown, using the "current arc" data structure permits us to locate admissible bids in O(nmC') time. Since C' = nC, we have established the following result.

Theorem 5.8. The auction algorithm solves the assignment problem in O(n²mC) time.

The auction algorithm is potentially very slow because it can increase prices (and thus decrease values) in small increments of $1, and the final prices can be as large as n²C (the values as small as -n²C). Using a scaling technique in the auction algorithm ensures that the prices and values do not change too many times. As in the bit-scaling technique described in Section 1.6, we decompose the original problem into a sequence of O(log nC) assignment problems and solve each problem by the auction algorithm. We use the optimum prices and values of a problem as a starting solution of the subsequent problem and show that the prices and values change only O(n) times per scaling phase. Thus, we solve each problem in O(nm) time and solve the original problem in O(nm log nC) time.
The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = ⌈log (n+1)C⌉ assignment problems P_1, P_2, ..., P_K. The problem P_k is an assignment problem in which the utility of arc (i, j) is the k leading bits in the binary representation of u_ij, assuming (by adding leading zeros if necessary) that each u_ij is K bits long. In other words, the problem P_k has the arc utilities u_ij^k = ⌊u_ij / 2^(K-k)⌋. Note that in the problem P_1, all utilities are 0 or 1, and subsequently u_ij^(k+1) = 2 u_ij^k + {0 or 1}, depending upon whether the newly added bit is 0 or 1. The scaling algorithm works as follows:
algorithm ASSIGNMENT;
begin
    multiply all u_ij by (n+1);
    K := ⌈log (n+1)C⌉;
    price(j) := 0 for each car j;
    value(i) := 0 for each person i;
    for k := 1 to K do
    begin
        let u_ij^k := ⌊u_ij / 2^(K-k)⌋ for each (i, j) in A;
        price(j) := 2 price(j) for each car j;
        value(i) := 2 value(i) + 1 for each person i;
        BIDDING(u^k, x°, value, price);
    end;
end;
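A few lines of Python illustrate the scaled utilities u_ij^k = ⌊u_ij / 2^(K-k)⌋ and the bit-by-bit recurrence u^(k+1) = 2u^k + (next bit). The sample value 22 = (10110)_2 and the function name are ours, chosen only to make the recurrence visible.

```python
def scaled_utilities(u, K):
    """Return the sequence u^1, ..., u^K with u^k = floor(u / 2**(K - k))."""
    return [u >> (K - k) for k in range(1, K + 1)]

u = 0b10110                      # a single utility, 22, written with K = 5 bits
seq = scaled_utilities(u, 5)
# u^1 keeps only the leading bit; each phase appends the next bit:
assert seq == [1, 2, 5, 11, 22]
for k in range(4):
    bit = (u >> (3 - k)) & 1     # the (k+2)-nd leading bit of u
    assert seq[k + 1] == 2 * seq[k] + bit
```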
The assignment algorithm performs a number of cost scaling phases. In the k-th scaling phase, it obtains a near-optimum solution of the problem with the utilities u_ij^k. It is easy to verify that before the algorithm invokes the Bidding procedure, the prices and values satisfy value(i) >= max {u_ij^k - price(j) : (i, j) in A(i)}, for each person i. The Bidding procedure maintains these conditions throughout its execution. In the last scaling phase, the algorithm solves the assignment problem with the original utilities and obtains an optimum solution of the original problem. Observe that in each scaling phase, the algorithm starts with a null assignment; the purpose of each scaling phase is to obtain good prices and values for the subsequent scaling phase.

We next discuss the complexity of this assignment algorithm. The crucial result is that the prices and values change only O(n) times during each execution of the Bidding procedure.
We define the reduced utility of an arc (i, j) in the k-th scaling phase as

    ū_ij = u_ij^k - price(j) - value(i).

In this expression, price(j) and value(i) have the values computed just before calling the Bidding procedure. For any assignment x, we have

    Σ_{(i,j) in x} ū_ij  =  Σ_{(i,j) in x} u_ij^k  -  Σ_{j in N2} price(j)  -  Σ_{i in N1} value(i).
Consequently, for a given set of prices and values, the reduced utility of an assignment differs from the utility of that assignment by a constant amount. Therefore, an assignment that maximizes the reduced utility also maximizes the utility. Since value(i) >= u_ij^k - price(j) for each (i, j) in A(i), we have

    ū_ij <= 0,  for all (i, j) in A.                                                (5.24)

Now consider the reduced utilities of arcs in the assignment x^(k-1) (the final assignment at the end of the (k-1)-st scaling phase). The equality (5.20) implies that

    u_ij^(k-1) - price'(j) - value'(i) = -1,  for all (i, j) in x^(k-1),            (5.25)

where price'(j) and value'(i) are the corresponding values at the end of the (k-1)-st scaling phase. Before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) = 2 value'(i) + 1, and u_ij^k = 2 u_ij^(k-1) + (0 or 1). Substituting these relationships in (5.25), we
find that the reduced utilities ū_ij of arcs in x^(k-1) are either -2 or -3. Hence, the optimum reduced utility is at least -3n. If x° is some partial assignment in the k-th scaling phase, then (5.23) implies that UB(x°) >= -4n. Using this result and (5.24) in (5.21) yields

    Σ_{i in N1°} value(i)  >=  -4n.                                                 (5.26)

Hence, for any person i, value(i) decreases O(n) times. Using this result in the proof of Theorem 5.8, we observe that the Bidding procedure would terminate in O(nm) time. The assignment algorithm applies the Bidding procedure O(log nC) times and, consequently, runs in O(nm log nC) time. We summarize our discussion.
Theorem 5.9. The scaling version of the auction algorithm solves the assignment problem in O(√n m log nC) time.

The scaling version of the auction algorithm can be further improved to run in O(√n m log nC) time. This improvement is based on the following implication of (5.26). If we prohibit a person i from bidding when value(i) <= -4√n, then by (5.26) the number of unassigned persons is at most √n. Hence, the auction algorithm spends most of its effort on the last few persons. For example, if n = 10,000, then the auction algorithm would assign the first 99% of the persons in 1% of the overall running time and would assign the remaining 1% of the persons in the remaining 99% of the time. We therefore terminate the execution of the auction algorithm when it has assigned all but ⌈√n⌉ persons and use successive shortest path algorithms to assign these remaining persons. It so happens that the shortest paths have length O(n) and thus Dial's algorithm, as described in Section 3.2, will find these shortest paths in O(m) time. Hence, the algorithm takes O(√n m) time to assign the first n - ⌈√n⌉ persons and O(⌈√n⌉ m) time to assign the remaining ⌈√n⌉ persons. This version of the auction algorithm solves a scaling phase in O(√n m) time and its overall running time is O(√n m log nC). If we invoke the similarity assumption, then this version of the algorithm currently has the best known time bound for solving the assignment problem.
6. Reference Notes

In this section, we present reference notes on topics covered in the text. This discussion has three objectives: (i) to review important theoretical contributions on each topic, (ii) to point out inter-relationships among different algorithms, and (iii) to comment on the empirical aspects of the algorithms.

6.1 Introduction
The study of network flow models predates the development of linear programming techniques. The first studies in this problem domain, conducted by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947], considered the transportation problem, a special case of the minimum cost flow problem. These studies provided some insight into the problem structure and yielded incomplete algorithms. Interest in network problems grew with the advent of the simplex algorithm by Dantzig in 1947. Dantzig [1951] specialized the simplex algorithm for the transportation problem. He noted the triangularity of the basis and the integrality of the optimum solution. Orden [1956] generalized this work by specializing the simplex algorithm for the uncapacitated minimum cost flow problem. The network simplex algorithm for the capacitated minimum cost flow problem followed from the development of the bounded variable simplex method for linear programming by Dantzig [1955]. The book by Dantzig [1962] contains a thorough description of these contributions along with historical perspectives.
During the 1950's, researchers began to exhibit increasing interest in the minimum cost flow problem as well as its special cases (the shortest path problem, the maximum flow problem and the assignment problem), mainly because of their important applications. Soon researchers developed special purpose algorithms to solve these problems. Dantzig, Ford and Fulkerson pioneered those efforts. Whereas Dantzig focused on the primal simplex based algorithms, Ford and Fulkerson developed primal-dual type combinatorial algorithms to solve these problems. Their book, Ford and Fulkerson [1962], presents a thorough discussion of the early research conducted by them and by others. It also covers the development of flow decomposition theory, which is credited to Ford and Fulkerson.

Since these pioneering works, network flow problems and their generalizations have emerged as major research topics in operations research; this research is documented in thousands of papers and many text and reference books. We shall be surveying many important research papers in the following sections. Several important books summarize developments in the field and serve as a guide to the literature:
Ford and Fulkerson [1962] (Flows in Networks), Berge and Ghouila-Houri [1962] (Programming, Games and Transportation Networks), Iri [1969] (Network Flows, Transportation and Scheduling), Hu [1969] (Integer Programming and Network Flows), Frank and Frisch [1971] (Communication, Transmission and Transportation Networks), Potts and Oliver [1972] (Flows in Transportation Networks), Christophides [1975] (Graph Theory: An Algorithmic Approach), Murty [1976] (Linear and Combinatorial Programming), Lawler [1976] (Combinatorial Optimization: Networks and Matroids), Bazaraa and Jarvis [1978] (Linear Programming and Network Flows), Minieka [1978] (Optimization Algorithms for Networks and Graphs), Kennington and Helgason [1980] (Algorithms for Network Programming), Jensen and Barnes [1980] (Network Flow Programming), Phillips and Garcia-Diaz [1981] (Fundamentals of Network Analysis), Swamy and Thulasiraman [1981] (Graphs, Networks and Algorithms), Papadimitriou and Steiglitz [1982] (Combinatorial Optimization: Algorithms and Complexity), Smith [1982] (Network Optimization Practice), Syslo, Deo, and Kowalik [1983] (Discrete Optimization Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and Minoux [1984] (Graphs and Algorithms), Rockafellar [1984] (Network Flows and Monotropic Optimization), and Derigs [1988] (Programming in Networks and Graphs).

As an additional source of references, the reader might consult the bibliography on network optimization prepared by Golden and Magnanti [1977] and the extensive set of references on integer programming compiled by researchers at the University of Bonn (Kastning [1976], Hausmann [1978], and Von Randow [1982, 1985]).
Since the applications of network flow models are so pervasive, no single source provides a comprehensive account of network flow models and their impact on practice. Several researchers have prepared general surveys of selected application areas.
Notable among these is the paper by Glover and Klingman [1976] on the applications of minimum cost flow and generalized minimum cost flow problems. A number of books written in special problem domains also contain valuable insight about the range of applications of network flow models. Examples in this category are the paper by Bodin, Golden, Assad and Ball [1983] on vehicle routing and scheduling problems, books on communication networks by Bertsekas and Gallager [1987] and on transportation planning by Sheffi [1985], as well as a collection of survey articles on facility location edited by Francis and Mirchandani [1988]. Golden [1988] has described the census rounding application given in Section 1.1.
General references on data structures serve as a useful backdrop for the algorithms presented in this chapter. The book by Aho, Hopcroft and Ullman [1974] is an excellent reference for simple data structures such as arrays, linked lists, doubly linked lists, queues, stacks, binary heaps or d-heaps. The book by Tarjan [1983] is another useful source of references for these topics as well as for more complex data structures such as dynamic trees.

We have mentioned the "similarity assumption" throughout the chapter. Gabow [1985] coined this term in his paper on scaling algorithms for combinatorial optimization problems. This important paper, which contains scaling algorithms for several network problems, greatly helped in popularizing scaling techniques.
6.2 Shortest Path Problem

The shortest path problem and its generalizations have a voluminous research literature. As a guide to these results, we refer the reader to the extensive bibliographies compiled by Gallo, Pallottino, Ruggen and Starchi [1982] and Deo and Pang [1984]. This section, which summarizes some of this literature, focuses especially on issues of computational complexity.
Label Setting Algorithms
The first label setting algorithm was suggested by Dijkstra [1959], and independently by Dantzig [1960] and Whiting and Hillier [1960]. The original implementation of Dijkstra's algorithm runs in O(n²) time, which is the optimal running time for fully dense networks (those with m = Ω(n²)), since any algorithm must examine every arc. However, improved running times are possible for sparse networks. The following table summarizes various implementations of Dijkstra's algorithm that have been designed to improve the running time in the worst case or in practice. In the table, d = [2 + m/n] represents the average degree of a node in the network plus 2.

[Table summarizing implementations of Dijkstra's algorithm not reproduced here.]
Boas, Kaas and Zijlstra [1977] suggested a data structure whose analysis depends upon the largest key D stored in a heap. The initialization of this algorithm takes O(D) time and each heap operation takes O(log log D) time. When Dijkstra's algorithm is implemented using this data structure, it runs in O(nC + m log log C) time. Johnson [1982] suggested an improvement of this data structure and used it to implement Dijkstra's algorithm in O(m log log C) time.

The best strongly polynomial-time algorithm to date is due to Fredman and Tarjan [1984], who use a Fibonacci heap data structure. The Fibonacci heap is an ingenious, but somewhat complex, data structure that takes an average of O(log n) time for each node selection (and the subsequent deletion) step and an average of O(1) time for each distance update. Consequently, this data structure implements Dijkstra's algorithm in O(m + n log n) time.
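For concreteness, a binary-heap implementation of Dijkstra's algorithm of the kind compared in these references can be sketched in a few lines of Python. The `heapq` module provides the heap, and lazy deletion of stale entries stands in for the decrease-key operation; the names and graph encoding are ours.

```python
import heapq

def dijkstra(adj, s):
    """Binary-heap Dijkstra: adj[u] = [(v, length), ...], nonnegative lengths.
    Runs in O(m log n) time, with stale heap entries skipped on removal."""
    dist = {s: 0}
    final = set()
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in final:
            continue                    # stale entry left by lazy deletion
        final.add(u)                    # u's label is now permanent
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```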
Dial [1969] suggested his implementation of Dijkstra's algorithm because of its encouraging empirical performance. This algorithm was independently discovered by Wagner [1976]. Dial, Glover, Karney and Klingman [1979] have proposed an improved version of Dial's algorithm, which runs better in practice. Though Dial's algorithm is only pseudopolynomial-time, its successors have had improved worst-case behavior. Denardo and Fox [1979] suggest several such improvements. Observe that if w = max [1, min{c_ij : (i, j) in A}], then we can use buckets of width w in Dial's algorithm, hence reducing the number of buckets from 1+C to 1+(C/w). The correctness of this observation follows from the fact that if d* is the current minimum temporary distance label, then the algorithm will modify no other temporary distance label in the range [d*, d* + w - 1], since each arc has length at least w. Then, using a multiple level bucket scheme, Denardo and Fox implemented the shortest path algorithm in O(max{k C^(1/k), m log (k+1), nk(1 + C^(1/k)/w)}) time for any choice of k. Choosing k = log C yields a time bound of O(m log log C + n log C). Depending on n, m and C, other choices might lead to a modestly better time bound.
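The width-w observation can be sketched as follows. This Python fragment is ours, not Dial's code: it assumes positive integer arc lengths and, for brevity, allocates a flat array of buckets instead of reusing 1 + C/w of them cyclically, trading memory for simplicity.

```python
def dial(adj, s, C):
    """Dial's implementation with buckets of width w = max(1, min arc length).
    adj[u] = [(v, length), ...] with positive integer lengths at most C."""
    lengths = [l for arcs in adj.values() for _, l in arcs] or [1]
    w = max(1, min(lengths))
    INF = float('inf')
    dist = {u: INF for u in adj}
    dist[s] = 0
    n_buckets = C * len(adj) // w + 2       # labels never exceed (n-1)C
    buckets = [[] for _ in range(n_buckets)]
    buckets[0].append(s)
    final = set()
    for b in range(n_buckets):
        for u in buckets[b]:
            # every label in the lowest nonempty bucket is permanent,
            # since all arcs have length at least w
            if u in final or dist[u] // w != b:
                continue                    # stale entry from an old label
            final.add(u)
            for v, l in adj[u]:
                if dist[u] + l < dist[v]:
                    dist[v] = dist[u] + l
                    buckets[dist[v] // w].append(v)
    return dist
```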
Johnson [1977b] proposed a related bucket scheme with exponentially growing widths and obtained a running time of O((m + n log C) log log C). This data structure is the same as the R-heap data structure described in Section 3.3, except that it performs binary search over O(log C) buckets to insert nodes into buckets during the redistribution of ranges and the distance updates. The R-heap implementation replaces the binary search by a sequential search and improves the running time by a factor of O(log log C). Ahuja, Mehlhorn, Orlin and Tarjan [1988] suggested the R-heap implementation and its further improvements, as described next.
The R-heap implementation described in Section 3.3 uses a single level bucket system. A two-level bucket system improves further on the R-heap implementation of Dijkstra's algorithm. The two-level data structure consists of K (big) buckets, each bucket being further subdivided into L (small) subbuckets. During redistribution, the two-level bucket system redistributes the range of a subbucket over all of its previous buckets. This approach permits the selection of a much larger width of buckets, thus reducing the number of buckets. By using K = L = 2 log C/log log C, this two-level bucket system version of Dijkstra's algorithm runs in O(m + n log C/log log C) time. Incorporating a generalization of the Fibonacci heap data structure in the two-level bucket system with appropriate choices of K and L further reduces the time bound to O(m + n √(log C)). If we invoke the similarity assumption, this approach currently gives the fastest worst-case implementation of Dijkstra's algorithm for all classes of graphs except very sparse ones, for which the algorithm of Johnson [1982] appears more attractive. The Fibonacci heap version of the two-level R-heap is very complex, however, and so it is unlikely that this algorithm would perform well in practice.
Label Correcting Algorithms

Ford [1956] suggested, in skeleton form, the first label correcting algorithm for the shortest path problem. Subsequently, several other researchers, including Ford and Fulkerson [1962] and Moore [1957], studied the theoretical properties of the algorithm. Bellman's [1958] algorithm can also be regarded as a label correcting algorithm. Though specific implementations of label correcting algorithms run in O(nm) time, the most general form is nonpolynomial-time, as shown by Edmonds [1970].

Researchers have exploited the flexibility inherent in the generic label correcting algorithm to obtain algorithms that are very efficient in practice. The modification that adds a node to the LIST (see the description of the Modified Label Correcting Algorithm given in Section 3.4) at the front if the algorithm has previously examined the node earlier and at the end otherwise, is probably the most popular. This modification was conveyed to Pollack and Wiebenson [1960] by D'Esopo, and later refined and tested by Pape [1974]. We shall subsequently refer to this algorithm as D'Esopo and Pape's algorithm. A FORTRAN listing of this algorithm can be found in Pape [1980]. Though this modified label correcting algorithm has excellent computational behavior, in the worst case it runs in exponential time, as shown by Kershenbaum [1981].
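A deque-based sketch of the front/back LIST rule described above may be useful. The encoding and names are ours; as noted, this discipline is fast in practice but exponential in the worst case.

```python
from collections import deque

def desopo_pape(adj, s):
    """Label correcting with the D'Esopo and Pape rule: re-add a node at the
    front of LIST if it has been examined before, at the back otherwise.
    adj[u] = [(v, length), ...]; no negative cycles assumed."""
    dist = {s: 0}
    examined = set()
    in_list = {s}
    LIST = deque([s])
    while LIST:
        u = LIST.popleft()
        in_list.discard(u)
        examined.add(u)
        for v, w in adj.get(u, []):
            if dist[u] + w < dist.get(v, float('inf')):
                dist[v] = dist[u] + w       # correct v's label
                if v not in in_list:
                    if v in examined:
                        LIST.appendleft(v)  # seen before: front of LIST
                    else:
                        LIST.append(v)      # first time: back of LIST
                    in_list.add(v)
    return dist
```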
Glover, Klingman and Phillips [1985] proposed a generalization of the FIFO label correcting algorithm, called the partitioning shortest path (PSP) algorithm. For general networks, the PSP algorithm runs in O(nm) time, while for networks with nonnegative arc lengths it runs in O(n²) time and has excellent computational behavior. Other variants of the label correcting algorithms and their computational attributes can be found in Glover, Klingman, Phillips and Schneider [1985].
Researchers have been interested in developing polynomial-time primal simplex algorithms for the shortest path problem. Dial, Glover, Karney and Klingman [1979] and Zadeh [1979] showed that Dantzig's pivot rule (i.e., pivoting in the arc with the largest violation of the optimality condition) for the shortest path problem starting from an artificial basis leads to Dijkstra's algorithm. Thus, the number of pivots is O(n) if all arc costs are nonnegative. Primal simplex algorithms for the shortest path problem with arbitrary arc lengths are not that efficient. Akgul [1985a] developed a simplex algorithm for the shortest path problem that performs O(n²) pivots. Using simple data structures, Akgul's algorithm runs in O(n³) time, which can be reduced to O(nm + n² log n) using the Fibonacci heap data structure. Goldfarb, Hao and Kai [1986] described another simplex algorithm for the shortest path problem: the number of pivots and running times for this algorithm are comparable to those of Akgul's algorithm. Orlin [1985] showed that the simplex algorithm with Dantzig's pivot rule solves the shortest path problem in O(n² log nC) pivots. Ahuja and Orlin [1988] recently discovered a scaling variation of this approach that performs O(n² log C) pivots and runs in O(nm log C) time. This algorithm uses simple data structures, uses very natural pricing strategies, and also permits partial pricing.
All Pair Shortest Path Algorithms
Most algorithms that solve the all pair shortest path problem involve matrix manipulation. The first such algorithm appears to be a part of the folklore; Lawler [1976] describes this algorithm in his textbook. The complexity of this algorithm is O(n³ log n), which can be improved slightly by using more sophisticated matrix multiplication procedures. The algorithm we have presented is due to Floyd [1962] and is based on a theorem by Warshall [1962]. This algorithm runs in O(n³) time and is also capable of detecting the presence of negative cycles. Dantzig [1967] devised another procedure requiring exactly the same order of calculations. The bibliography by Deo and Pang [1984] contains references for several other all pair shortest path algorithms.
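The Floyd-Warshall algorithm referred to above admits a compact sketch. The dictionary encoding of arcs and the negative-cycle test via a negative diagonal entry are our choices, not part of the original presentations.

```python
def floyd_warshall(n, arcs):
    """Floyd-Warshall in O(n^3); returns (dist, has_negative_cycle).
    arcs: dict {(i, j): length} on nodes 0..n-1."""
    INF = float('inf')
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), l in arcs.items():
        d[i][j] = min(d[i][j], l)
    for k in range(n):                  # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # a negative diagonal entry witnesses a negative cycle through that node
    negative_cycle = any(d[i][i] < 0 for i in range(n))
    return d, negative_cycle
```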
From a worst-case complexity point of view, however, it might be desirable to solve the all pair shortest path problem as a sequence of single source shortest path problems. As pointed out in the text, this approach takes O(nm) time to construct an equivalent problem with nonnegative arc lengths and takes O(n S(n,m,C)) time to solve the n shortest path problems (recall that S(n,m,C) is the time needed to solve a shortest path problem with nonnegative arc lengths). For very dense networks, the algorithm by Fredman [1976] is faster than this approach in the worst case.
Computational Results

Researchers have extensively tested shortest path algorithms on a variety of network classes. The studies due to Gilsinn and Witzgall [1973], Pape [1974], Kelton and Law [1978], Van Vliet [1978], Dial, Glover, Karney and Klingman [1979], Denardo and Fox [1979], Imai and Iri [1984], Glover, Klingman, Phillips and Schneider [1985] and Gallo and Pallottino [1988] are representative of these contributions.

Unlike the worst-case results, the computational performance of an algorithm depends upon many factors: for example, the manner in which the program is written; the language, compiler and the computer used; and the distribution of networks on which the algorithm is tested. Hence, the results of computational studies are only suggestive, rather than conclusive. The results of these studies also depend greatly upon the density of the network. These studies generally suggest that Dial's algorithm is the best label setting algorithm for the shortest path problem. It is faster than the original O(n²) implementation, the binary heap, d-heap or the Fibonacci heap implementation of Dijkstra's algorithm for all network classes tested by these researchers. Denardo and Fox [1979] also find that Dial's algorithm is faster than their two-level bucket implementation for all of their test problems; however, extrapolating the results, they observe that their implementation would be faster for very large shortest path problems. Researchers have not yet tested the R-heap implementation and so at this moment no comparison with Dial's algorithm is available.
Among the label correcting algorithms, the algorithms by D'Esopo and Pape and by Glover, Klingman, Phillips and Schneider [1985] are the two fastest. The study by Glover et al. finds that their algorithm is superior to D'Esopo and Pape's algorithm. Other researchers have also compared label setting algorithms with label correcting algorithms. Studies generally suggest that, for very dense networks, label setting algorithms are superior and, for sparse networks, label correcting algorithms perform better.

Kelton and Law [1978] have conducted a computational study of several all pair shortest path algorithms. This study indicates that Dantzig's [1967] algorithm with a modification due to Tabourier [1973] is faster (up to two times) than the Floyd-Warshall algorithm described in Section 3.5. This study also finds that matrix manipulation algorithms are faster than a successive application of a single-source shortest path algorithm for very dense networks, but slower for sparse networks.
6.3 Maximum Flow Problem

The maximum flow problem is distinguished by the long succession of research contributions that have improved upon the worst-case complexity of algorithms; some, but not all, of these improvements have produced improvements in practice.

Several researchers (Dantzig and Fulkerson [1956], Ford and Fulkerson [1956], and Elias, Feinstein and Shannon [1956]) independently established the max-flow min-cut theorem. Fulkerson and Dantzig [1955] solved the maximum flow problem by specializing the primal simplex algorithm, whereas Ford and Fulkerson [1956] and Elias et al. [1956] solved it by augmenting path algorithms. Since then, researchers have developed a number of algorithms for this problem; Table 6.2 summarizes the running times of some of these algorithms. In the table, n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities. The algorithms whose time bounds involve U assume integral capacities; the bounds specified for the other algorithms apply to problems with arbitrary rational or real capacities.
#    Discoverers                                    Running Time

1    Edmonds and Karp [1972]                        O(nm²)
2    Dinic [1970]                                   O(n²m)
3    Karzanov [1974]                                O(n³)
4    Cherkasky [1977]                               O(n² √m)
5    Malhotra, Kumar and Maheshwari [1978]          O(n³)
6    Galil [1980]                                   O(n^(5/3) m^(2/3))
7    Galil and Naamad [1980]; Shiloach [1978]       O(nm log² n)
8    Shiloach and Vishkin [1982]                    O(n³)
9    Sleator and Tarjan [1983]                      O(nm log n)
10   Tarjan [1984]                                  O(n³)
11   Gabow [1985]                                   O(nm log U)
12   Goldberg [1985]                                O(n³)
13   Goldberg and Tarjan [1986]                     O(nm log (n²/m))
14   Bertsekas [1986]                               O(n³)
15   Cheriyan and Maheshwari [1987]                 O(n² √m)
16   Ahuja and Orlin [1987]                         O(nm + n² log U)
17   Ahuja, Orlin and Tarjan [1988]                 (a) O(nm + n² √(log U))
                                                    (b) O(nm + n² log U / log log U)
                                                    (c) O(nm log((n/m) √(log U) + 2))

Table 6.2. Running times of maximum flow algorithms.
Ford and Fulkerson [1956] observed that the labeling algorithm can perform as many as O(nU) augmentations for networks with integer arc capacities. They also showed that for arbitrary irrational arc capacities, the labeling algorithm can perform an infinite sequence of augmentations and might converge to a value different from the maximum flow value. Edmonds and Karp [1972] suggested two specializations of the labeling algorithm, both with improved computational complexity. They showed that if the algorithm augments flow along a shortest path (i.e., one containing the smallest possible number of arcs) in the residual network, then the algorithm performs O(nm) augmentations. A breadth first search of the network will determine a shortest augmenting path; consequently, this version of the labeling algorithm runs in O(nm²) time. Edmonds and Karp's second idea was to augment flow along a path with maximum residual capacity. They proved that this algorithm performs O(m log U) augmentations. Tarjan [1986] has shown how to determine a path with maximum residual capacity in O(m) time on average; hence, this version of the labeling algorithm runs in O(m² log U) time.
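The first Edmonds and Karp specialization, breadth first search for a fewest-arc augmenting path, can be sketched as follows. The adjacency construction and names are ours, and the sketch favors clarity over the bookkeeping of a tuned O(nm²) implementation.

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Shortest augmenting path max flow: BFS finds a fewest-arc augmenting
    path in the residual network; cap is a dict {(u, v): capacity}."""
    res = dict(cap)
    for (u, v) in cap:
        res.setdefault((v, u), 0)       # reverse residual arcs
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, []).append(v)
    flow = 0
    while True:
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:      # BFS in the residual network
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in pred and res[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return flow                 # no augmenting path remains
        path = []
        v = t
        while pred[v] is not None:      # recover the BFS path
            path.append((pred[v], v))
            v = pred[v]
        delta = min(res[e] for e in path)   # bottleneck residual capacity
        for (u, v) in path:
            res[(u, v)] -= delta
            res[(v, u)] += delta
        flow += delta
```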
Dinic [1970] independently introduced the concept of shortest path networks, called layered networks, for solving the maximum flow problem.  A layered network G' = (N', A') is a subgraph of the residual network that contains only those nodes and arcs that lie on at least one shortest path from the source to the sink.  The nodes in a layered network can be partitioned into layers of nodes N1, N2, ..., so that for every arc (i, j) in the layered network, i ∈ Nk and j ∈ Nk+1 for some k (i.e., arcs in the layered network connect nodes in adjacent layers).  A blocking flow in a layered network G' is a flow that blocks flow augmentations in the sense that G' contains no directed path with positive residual capacity from the source node to the sink node.  Dinic showed how to construct, in a total of O(nm) time, a blocking flow in a layered network by performing at most m augmentations.  His algorithm constructs layered networks and establishes blocking flows in these networks.  Dinic showed that after each blocking flow iteration, the length of the layered network increases and after at most n iterations, the source is disconnected from the sink in the residual network.  Consequently, his algorithm runs in O(n^2 m) time.
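The layering step can be illustrated as follows; this is a sketch under our own dict-of-dicts residual representation, not Dinic's original formulation.

```python
from collections import deque

def layered_network(residual, source, sink):
    """Build Dinic-style layers: assign each node its breadth-first
    distance from the source in the residual network, then keep only
    arcs (i, j) with positive residual capacity that go from layer k
    to layer k+1.  `residual` maps each node to a dict
    {neighbor: residual capacity}."""
    layer = {source: 0}
    queue = deque([source])
    while queue:
        i = queue.popleft()
        for j, cap in residual[i].items():
            if cap > 0 and j not in layer:
                layer[j] = layer[i] + 1
                queue.append(j)
    if sink not in layer:
        return None                      # source and sink disconnected
    # Keep only arcs that connect adjacent layers.
    return {i: {j: cap for j, cap in arcs.items()
                if cap > 0 and layer.get(j, -1) == layer[i] + 1}
            for i, arcs in residual.items() if i in layer}
```

Arcs within a layer or going backward are discarded, so every remaining source-to-sink path is a shortest path in the residual network.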
The shortest augmenting path algorithm presented in Section 4.3 achieves the same time bound as Dinic's algorithm, but instead of constructing layered networks it maintains distance labels.  Goldberg [1985] introduced distance labels in the context of his preflow push algorithm.  Distance labels offer several advantages: they are simpler to understand than layered networks, are easier to manipulate, and have led to more efficient algorithms.  Orlin and Ahuja [1987] developed the distance label based augmenting path algorithm given in Section 4.3.  They also showed that this algorithm is equivalent both to Edmonds and Karp's algorithm and to Dinic's algorithm in the sense that all three algorithms enumerate the same augmenting paths in the same sequence.  The algorithms differ only in the manner in which they obtain these augmenting paths.
Several researchers have contributed improvements to the computational complexity of maximum flow algorithms by developing more efficient algorithms to establish blocking flows in layered networks.  Karzanov [1974] introduced the concept of preflows in a layered network.  (See the technical report of Even [1976] for a comprehensive description of this algorithm and the paper by Tarjan [1984] for a simplified version.)  Karzanov showed that an implementation that maintains preflows and pushes flows from nodes with excesses constructs a blocking flow in O(n^2) time.  Malhotra, Kumar and Maheshwari [1978] present a conceptually simple maximum flow algorithm that runs in O(n^3) time.  Cherkassky [1977] and Galil [1980] presented further improvements of Karzanov's algorithm.
The search for more efficient maximum flow algorithms has stimulated researchers to develop new data structures for implementing Dinic's algorithm.  The first such data structures were suggested independently by Shiloach [1978] and Galil and Naamad [1980].  Dinic's algorithm (or the shortest augmenting path algorithm described in Section 4.3) takes O(n) time on average to identify an augmenting path and, during the augmentation, it saturates some arcs in this path.  If we delete the saturated arcs from this path, we obtain a set of path fragments.  The basic idea is to store these path fragments using some data structure, for example, 2-3 trees (see Aho, Hopcroft and Ullman [1974] for a discussion of 2-3 trees) and use them to identify augmenting paths quickly.  Shiloach [1978] and Galil and Naamad [1980] showed how to augment flows through path fragments in a way that finds a blocking flow in O(m (log n)^2) time.  Hence, their implementation of Dinic's algorithm runs in O(nm (log n)^2) time.  Sleator and Tarjan [1983] improved this approach by using a data structure called dynamic trees to store and update path fragments.  Sleator and Tarjan's algorithm establishes a blocking flow in O(m log n) time and thereby yields an O(nm log n) time bound for Dinic's algorithm.
Gabow [1985] obtained a similar time bound by applying a bit scaling approach to the maximum flow problem.  As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's capacity.  During a scaling phase, the initial flow value differs from the maximum flow value by at most m units, and so the shortest augmenting path algorithm (and also Dinic's algorithm) performs at most m augmentations.  Consequently, each scaling phase takes O(nm) time and the algorithm runs in O(nm log U) time.  If we invoke the similarity assumption, this time bound is comparable to that of Sleator and Tarjan's algorithm, but the scaling algorithm is much simpler to implement.  Orlin and Ahuja [1987] have presented a variation of Gabow's algorithm achieving the same time bound.
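The phase structure of such a bit-scaling approach can be illustrated with the capacities alone; the helper below is our own hypothetical framing, not Gabow's implementation.

```python
def scaled_capacities(u, bits):
    """Capacity of one arc in each bit-scaling phase: phase k uses the
    k most significant bits of u, so u_k = u >> (bits - k); each phase
    doubles the previous phase's capacity and adds in the next bit."""
    return [u >> (bits - k) for k in range(1, bits + 1)]
```

Because u_k equals 2 u_{k-1} plus the next bit, doubling the phase-(k-1) flow yields a flow within m units of the phase-k maximum, which is why each phase needs at most m augmentations.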
Goldberg and Tarjan [1986] developed the generic preflow push algorithm and the highest-label preflow push algorithm.  Previously, Goldberg [1985] had shown that the FIFO version of the algorithm, which pushes flow from active nodes in the first-in-first-out order, runs in O(n^3) time.  (This algorithm maintains a queue of active nodes; at each iteration, it selects a node from the front of the queue, performs a push/relabel step at this node, and adds the newly active nodes to the rear of the queue.)  Using a dynamic tree data structure, Goldberg and Tarjan [1986] improved the running time of the FIFO preflow push algorithm to O(nm log (n^2/m)).  This algorithm currently gives the best strongly polynomial-time bound for solving the maximum flow problem.
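The queue discipline described above can be sketched as follows; this is a compact illustration of FIFO push/relabel on an adjacency-matrix residual network, not Goldberg's implementation.

```python
from collections import deque

def fifo_preflow_push(cap, s, t, n):
    """FIFO preflow push sketch: saturate the source's arcs, then
    repeatedly take an active node from the front of a queue and
    push/relabel at it until no node other than s and t has excess.
    `cap` is an n x n residual-capacity matrix (list of lists)."""
    excess = [0] * n
    dist = [0] * n
    dist[s] = n                          # standard initial distance label
    active = deque()
    for j in range(n):                   # saturate arcs leaving the source
        if cap[s][j] > 0:
            excess[j] += cap[s][j]
            cap[j][s] += cap[s][j]
            cap[s][j] = 0
            if j != t:
                active.append(j)
    while active:
        i = active.popleft()
        # Push along admissible arcs: residual arcs with dist[i] == dist[j] + 1.
        for j in range(n):
            if excess[i] == 0:
                break
            if cap[i][j] > 0 and dist[i] == dist[j] + 1:
                delta = min(excess[i], cap[i][j])
                cap[i][j] -= delta
                cap[j][i] += delta
                excess[i] -= delta
                excess[j] += delta
                if j not in (s, t) and excess[j] == delta:
                    active.append(j)     # j has just become active
        if excess[i] > 0:
            # Relabel: one more than the smallest label over residual arcs.
            dist[i] = 1 + min(dist[j] for j in range(n) if cap[i][j] > 0)
            active.append(i)             # i is still active
    return excess[t]
```

The returned excess at the sink is the maximum flow value; a highest-label variant would replace the queue with selection by largest distance label.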
Bertsekas [1986] obtained another maximum flow algorithm by specializing his minimum cost flow algorithm; this algorithm closely resembles Goldberg's FIFO preflow push algorithm.  Recently, Cheriyan and Maheshwari [1987] showed that Goldberg and Tarjan's highest-label preflow push algorithm actually performs O(n^2 √m) nonsaturating pushes and hence runs in O(n^2 √m) time.
Ahuja and Orlin [1987] improved the Goldberg and Tarjan algorithm using the excess-scaling technique to obtain an O(nm + n^2 log U) time bound.  If we invoke the similarity assumption, this algorithm improves Goldberg and Tarjan's O(nm log (n^2/m)) algorithm by a factor of log n for networks that are both non-sparse and nondense.  Further, this algorithm does not use any complex data structures.  Scaling excesses by a factor of log U/log log U and pushing flow from a node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n^2 log U/log log U).  Ahuja, Orlin and Tarjan [1988] also obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n^2 √(log U)).

The use of dynamic trees improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms.  For example, the O(nm + n^2 √(log U)) algorithm improves to O(nm log ((n/m) √(log U) + 2)), as shown in Ahuja, Orlin and Tarjan [1988].  Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees.  Although this conjecture is true for all known preflow push algorithms, it is still open for the general case.
Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time.  Recently, Goldfarb and Hao [1988] developed such an algorithm.  This algorithm is essentially based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink.  As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n^2 m) time.  Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.
Researchers have also investigated the following special cases of the maximum flow problem: (i) unit capacity networks (i.e., U = 1); (ii) unit capacity simple networks (i.e., U = 1, and every node in the network, except the source and sink, has one incoming arc or one outgoing arc); (iii) bipartite networks; and (iv) planar networks.  Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time.  Thus, these problems are easier to solve than are problems with large capacities.  Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time.  Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm.  Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching.  Fernandez-Baca and Martel [1987] have generalized these ideas for networks with small integer capacities.
Versions of the maximum flow algorithms run considerably faster on bipartite networks G = (N1 ∪ N2, A) if |N1| << |N2| (or |N2| << |N1|).  Let n1 = |N1|, n2 = |N2| and n = n1 + n2, and suppose that n1 ≤ n2.  Gusfield, Martel and Fernandez-Baca [1985] obtained the first such results by showing how the running times of Karzanov's and Malhotra et al.'s algorithms reduce from O(n^3) to O(n1^2 n2) and O(n1^2 n2 + nm), respectively.  Ahuja, Orlin, Stein and Tarjan [1988] improved upon these ideas by showing that it is possible to substitute n1 for n in the time bounds for all preflow push algorithms to obtain new time bounds for bipartite networks.  This result implies that the FIFO preflow push algorithm and the original excess scaling algorithm, respectively, solve the maximum flow problem on bipartite networks in O(n1 m + n1^3) and O(n1 m + n1^2 log U) time.

It is possible to solve the maximum flow problem on planar networks much more efficiently than on general networks.  (A network is called planar if it can be drawn in a two-dimensional plane so that arcs intersect one another only at the nodes.)  A planar network has at most 6n arcs; hence, the running times of the maximum flow algorithms on planar networks appear more attractive.  Specialized solution techniques, which have even better running times, are quite different than those for general networks.  Some important references for planar maximum flow algorithms are Itai and Shiloach [1979], Johnson and Venkatesan [1982] and Hassin and Johnson [1985].
Researchers have also investigated whether the worst-case bounds of the maximum flow algorithms are tight, i.e., whether the algorithms achieve their worst-case bounds for some families of networks.  Zadeh [1972] showed that the bound of the Edmonds and Karp algorithm is tight when m = n^2.  Even and Tarjan [1975] noted that the same examples imply that the bound of Dinic's algorithm is tight when m = n^2.  Baratz [1977] showed that the bound on Karzanov's algorithm is tight.  Galil [1981] constructed an interesting class of examples and showed that the algorithms of Edmonds and Karp, Dinic, Karzanov, Cherkassky, Galil and Malhotra et al. achieve their worst-case bounds on those examples.
Other researchers have made some progress in constructing worst-case examples for preflow push algorithms.  Martel [1987] showed that the FIFO preflow push algorithm can take Ω(nm) time to solve a class of unit capacity networks.  Cheriyan and Maheshwari [1987] have shown that the bound of O(n^2 √m) for the highest-label preflow push algorithm and the bound of O(n^2 m) for the generic preflow push algorithm are tight.  Cheriyan [1988] has also constructed a family of examples to show that the bound O(n^3) for the FIFO preflow push algorithm is tight.  The research community has not established similar results for other preflow push algorithms, especially for the excess-scaling algorithms.  It is worth mentioning, however, that these known worst-case examples are quite artificial and are not likely to arise in practice.
Several computational studies have assessed the empirical behavior of maximum flow algorithms.  The studies performed by Hamacher [1979], Cheung [1980], Glover, Klingman, Mote and Whitman [1979, 1984], Imai [1983] and Goldfarb and Grigoriadis [1986] are noteworthy.  These studies were conducted prior to the development of algorithms that use distance labels.  These studies rank Edmonds and Karp's, Dinic's and Karzanov's algorithms in increasing order of performance for most classes of networks.  Dinic's algorithm is competitive with Karzanov's algorithm for sparse networks, but slower for dense networks.  Imai [1983] noted that Galil and Naamad's [1980] implementation of Dinic's algorithm, using sophisticated data structures, is slower than the original Dinic's algorithm.  Sleator and Tarjan [1983] reported a similar finding; they observed that their implementation of Dinic's algorithm using the dynamic tree data structure is slower than the original Dinic's algorithm by a constant factor.  Hence, the sophisticated data structures improve only the worst-case performance of algorithms, but are not useful empirically.  Researchers have also tested the Malhotra et al. algorithm and the primal simplex algorithm due to Fulkerson and Dantzig [1955] and found these algorithms to be slower than Dinic's algorithm for most classes of networks.
A number of researchers are currently evaluating the computational performance of preflow push algorithms.  Derigs and Meier [1988], Grigoriadis [1988], and Ahuja, Kodialam and Orlin [1988] have found that the preflow push algorithms are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms for most classes of networks.  Among all nonscaling preflow push algorithms, the highest-label preflow push algorithm runs the fastest.  The excess-scaling algorithm and its variations have not been tested thoroughly.  We do not anticipate that dynamic tree implementations of preflow push algorithms would be useful in practice; in this case, as in others, their contribution has been to improve the worst-case performances of algorithms.
Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal flow problem; and (ii) the maximum dynamic flow problem.  In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes.  Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems.  Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm.  These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.
In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc.  The objective is to send the maximum possible flow from the source node to the sink node within a given time period T.  Ford and Fulkerson [1958] first showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem.  (Ford and Fulkerson [1962] give a nice treatment of this problem.)  Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4  Minimum Cost Flow Problem

The minimum cost flow problem has a rich history.  The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947].  Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming.  He observed the spanning tree property of the basis and the integrality property of the optimum solution.  Later, his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem.  Dantzig's book [1962] discusses these topics.
Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as the primal-dual algorithms.  Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem.  Jewell [1958], Iri [1960] and Busacker and Gowen [1961] independently discovered the successive shortest path algorithm.  These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths.  Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm.  The negative cycle algorithm is credited to Klein [1967].  Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).
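The node potential device admits a one-line statement.  Writing π(i) for the potential of node i, the reduced cost of arc (i, j) is (the brief derivation sketch here is ours, using the convention standard in this literature):

```latex
c^{\pi}_{ij} \;=\; c_{ij} - \pi(i) + \pi(j).
```

If d(i) denotes the shortest path distance from the source to node i with respect to the current arc lengths, then setting π(i) = -d(i) gives c^π_ij = c_ij + d(i) - d(j) ≥ 0 for every arc of the residual network, by the shortest path optimality condition d(j) ≤ d(i) + c_ij.  Hence each subsequent shortest path computation can run on nonnegative arc lengths, i.e., with Dijkstra's algorithm.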
Each of these algorithms performs iterations that can (apparently) not be polynomially bounded.  Zadeh [1973a] describes one example on which each of several algorithms performs an exponential number of iterations: the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm (which augments flow along a most negative cycle), the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm.  Zadeh [1973b] has also described more pathological examples for network algorithms.

The fact that one example is bad for many network algorithms suggests an inter-relationship among the algorithms.  The insightful paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned are indeed equivalent in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule.  All these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths.  Further, these algorithms obtain shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.
The network simplex algorithm and its practical implementations have been most popular with operations researchers.  Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm.  The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm.  Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover and Klingman [1979] subsequently discovered improved data structures.  The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable.  These studies show that the choice of the pricing strategy has a significant effect on both solution time and the number of pivots required to solve minimum cost flow problems.  The candidate list strategy we described is due to Mulvey [1978a].  Goldfarb and Reid [1977], Bradley, Brown and Graves [1983], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983] and Grigoriadis [1986] have described other strategies that have been effective in practice.  It appears that the best pricing strategy depends both upon the network structure and the network size.
Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977] and Grigoriadis [1986]).  Thus, degeneracy is both a computational and a theoretical issue.  The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts.  Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots.  On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm.  Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential.  This phenomenon is known as stalling.  Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling.  One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary, but fixed, manner.  The algorithm then examines the arcs in a wrap-around fashion, each iteration starting at the place where it left off earlier, and introduces the first eligible arc into the basis.  Cunningham showed that this rule admits at most nm consecutive degenerate pivots.  Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.
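The LRC rule lends itself to a small sketch; the arc list, eligibility predicate, and function name below are our own hypothetical framing of the wrap-around scan.

```python
def lrc_next_entering(arcs, start, is_eligible):
    """LRC (Least Recently Considered) rule sketch: scan the arcs in a
    fixed order, wrapping around, beginning where the previous pivot
    left off; return the first eligible arc together with the position
    at which the next scan should resume."""
    m = len(arcs)
    for step in range(m):
        k = (start + step) % m
        if is_eligible(arcs[k]):
            return arcs[k], (k + 1) % m
    return None, start                   # no eligible arc: basis optimal
```

Resuming each scan where the previous one stopped is exactly what limits the rule to at most nm consecutive degenerate pivots.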
Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases.  The only polynomial-time simplex algorithm for the minimum cost flow problem is a dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n^3 log n) pivots for the uncapacitated minimum cost flow problem.  Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open.  However, researchers have developed such algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial et al. [1979], Zadeh [1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986] and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b] and Ahuja and Orlin [1988] for the assignment problem.
The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalization.  For the minimum cost flow problem, this algorithm maintains a pseudoflow satisfying the optimality conditions.  The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes.  In the latter case, it resets the flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; however, this flow assignment might change the excesses and deficits at nodes.  The algorithm operates so that each change in the node potentials increases the dual objective function value, and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution.  This relaxation algorithm has exhibited nice empirical behavior.  Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem (with integer data).  Bertsekas and Tseng [1985] extended this approach for the minimum cost flow problem with real data, and for the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem).

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes.  The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment, and capacitated or uncapacitated transportation and minimum cost flow problems.  Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms.  Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] have reported on extensive studies of the dual simplex algorithm.  The primal simplex algorithm has been a subject of more rigorous investigation; studies conducted by Glover, Karney, Klingman and Napier [1974], Glover, Karney and Klingman [1974], Bradley, Brown and Graves [1977], Mulvey [1978b], Grigoriadis and Hsu [1979] and Grigoriadis [1986] are noteworthy.  Bertsekas and Tseng [1988] have presented computational results for the relaxation algorithm.
In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the primal-dual algorithm, the out-of-kilter algorithm, the dual simplex algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times.  By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect that the primal simplex algorithm should outperform the other algorithms.  All the computational studies have verified this expectation, and until very recently the primal simplex algorithm has been a clear winner for almost all classes of network problems.  Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm.  However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm.  At this time, it appears that the relaxation algorithm of Bertsekas and Tseng, and the primal simplex algorithm due to Grigoriadis are the two fastest algorithms for solving the minimum cost flow problem in practice.

Computer codes for some minimum cost flow problems are available in the public domain.  These include the primal simplex codes RNET and NETFLOW developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980], respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].
Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem.  Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs, and does not involve terms containing logarithms of C or U.  The table given in Figure 6.3 summarizes these theoretical developments in solving the minimum cost flow problem.  The table reports running times for networks with n nodes and m arcs, m' of which are capacitated.  It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U.  The term S() is the running time for the shortest path problem and the term M() represents the corresponding running time to solve a maximum flow problem.
Polynomial-Time Combinatorial Algorithms

#    Discoverers                                Running Time
1    Edmonds and Karp [1972]                    O((n + m') log U S(n, m, C))
2    Rock [1980]                                O((n + m') log U S(n, m, C))
3    Rock [1980]                                O(n log C M(n, m, U))
4    Bland and Jensen [1985]                    O(n log C M(n, m, U))
5    Goldberg and Tarjan [1988a]                O(nm log (n^2/m) log nC)
6    Bertsekas and Eckstein [1988]              O(n^3 log nC)
7    Goldberg and Tarjan [1987]                 O(n^3 log nC)
8    Gabow and Tarjan [1987]                    O(nm log n log U log nC)
9    Goldberg and Tarjan [1987, 1988b]          O(nm log n log nC)
10   Ahuja, Goldberg, Orlin and Tarjan [1988]   O(nm (log U/log log U) log nC) and
                                                O(nm log log U log nC)

Strongly Polynomial-Time Combinatorial Algorithms

#    Discoverers                                Running Time
For the sake of comparing the polynomial and strongly polynomial-time algorithms, we invoke the similarity assumption.  For problems that satisfy the similarity assumption, the best bounds for the shortest path and maximum flow problems are:

Polynomial-Time Bounds                            Discoverers
S(n, m, C) = min (m log log C, m + n √(log C))    Johnson [1982], and Ahuja, Mehlhorn, Orlin and Tarjan [1988]
M(n, m, U) = nm log ((n/m) √(log U) + 2)          Ahuja, Orlin and Tarjan [1987]

Strongly Polynomial-Time Bounds                   Discoverers
S(n, m) = m + n log n                             Fredman and Tarjan [1984]
M(n, m) = nm log (n^2/m)                          Goldberg and Tarjan [1986]
Using capacity and right-hand-side scaling, Edmonds and Karp [1972] developed the first (weakly) polynomial-time algorithm for the minimum cost flow problem.  The RHS-scaling algorithm presented in Section 5.7, a variant of the Edmonds-Karp algorithm, was suggested by Orlin [1988].  The scaling technique did not initially capture the interest of many researchers, since they regarded it as having little practical utility.  However, researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance.  Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem, one using capacity scaling and the other using cost scaling.  The cost scaling algorithm reduces the minimum cost flow problem to a sequence of O(n log C) maximum flow problems.  Bland and Jensen [1985] independently discovered a similar cost scaling algorithm.
The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5.8 use the concept of approximate optimality, introduced independently by Bertsekas [1979] and Tardos [1985].  Bertsekas [1986] developed the first pseudoflow push algorithm.  This algorithm was pseudopolynomial-time.  Goldberg and Tarjan [1987] used a scaling technique on a variant of this algorithm to obtain the generic pseudoflow push algorithm described in Section 5.8.  Tarjan [1984] proposed a wave algorithm for the maximum flow problem.  The wave algorithm for the minimum cost flow problem described in Section 5.8, which was developed independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988], relies upon similar ideas.

Using a dynamic tree data structure in the generic pseudoflow push algorithm, Goldberg and Tarjan [1987] obtained a computational time bound of O(nm log n log nC).  They also showed that the minimum cost flow problem can be solved using O(n log nC) blocking flow computations.  (The description of Dinic's algorithm in Section 6.3 contains the definition of a blocking flow.)  Using both finger tree (see Mehlhorn [1984]) and dynamic tree data structures, Goldberg and Tarjan [1988a] obtained an O(nm log (n^2/m) log nC) bound for the wave algorithm.
These algorithms, except the wave algorithm, require sophisticated data structures that impose a very high computational overhead.  Although the wave algorithm is very practical, its worst-case running time is not very attractive.  This situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum cost flow algorithms without using any complex data structures.  The first success in this direction was due to Gabow and Tarjan [1987], who developed a triple scaling algorithm running in O(nm log n log U log nC) time.  The second success was due to Ahuja, Goldberg, Orlin and Tarjan [1988], who developed the double scaling algorithm.  The double scaling algorithm, as described in Section 5.9, runs in O(nm log U log nC) time.  Scaling costs by an appropriately larger factor improves the algorithm to O(nm (log U/log log U) log nC), and a dynamic tree implementation improves the bound further to O(nm log log U log nC).  For problems satisfying the similarity assumption, the double scaling algorithm is faster than all other algorithms for all network topologies except for very dense networks; in these instances, algorithms by Goldberg and Tarjan appear more attractive.
Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms.  Both algorithms are based on the negative cycle algorithm due to Klein [1967].  Goldberg and Tarjan [1988b] showed that if the negative cycle algorithm always augments flow along a minimum mean cycle (a cycle W for which (Σ(i,j)∈W c_ij)/|W| is minimum), then it is strongly polynomial-time.  Goldberg and Tarjan described an implementation of this approach running in time O(nm (log n) min {log nC, m log n}).  Barahona and Tardos [1987], analyzing an algorithm suggested by Weintraub [1974], showed that if the negative cycle algorithm augments flow along a cycle with maximum improvement in the objective function, then it performs O(m log (mCU)) iterations.  Since identifying a cycle with maximum improvement is difficult (i.e., NP-hard), they describe a method (based upon solving an auxiliary assignment problem) to determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle.  Their algorithm runs in O(m^2 log (mCU) S(n, m, C)) time.
Edmonds and Karp the
minimum
O)
[1972]
proposed the
and
strongly polynomial-time algorithm.
range from
1
to 20,
motivated primarily by
(Indeed, in practice, the terms log
and are sublinear
in n.)
run on
that can
they might, at
a
C and
log
U
typically
Strongly polynomial-time algorithms are
network flow algorithms (ii)
polynomial-time algorithm for
was
This desire
two reasons:
and
first
also highlighted the desire to develop a
theoretically attractive for at least
data,
Their algorithm runs
time.
cost flow problem,
theoretical considerations.
cycle.
real
(i)
they might provide, in principle,
valued data as well as integer valued
more fundamental
underlying complexity in solving a problem;
i.e.,
level, identify the
are problems
source of the
more
difficult or
equally difficult to solve as the values of the tmderlying data becomes increasingly larger?
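The minimum mean of a directed cycle, the quantity Σ{(i,j) ∈ W} c_ij / |W| minimized by this cancelling rule, can itself be computed in O(nm) time by Karp's classical dynamic program. The following sketch is purely illustrative (it is Karp's subroutine, not the cancelling algorithm of Goldberg and Tarjan) and assumes every node is reachable from node 0:

```python
def min_mean_cycle(n, arcs):
    """Karp's algorithm: returns the minimum mean cost of a directed
    cycle, i.e. min over cycles W of sum(c_ij) / |W|, or None if the
    graph is acyclic.  arcs: list of (i, j, cost) with nodes 0..n-1.
    Assumption: every node is reachable from node 0."""
    INF = float('inf')
    # d[k][v] = minimum cost of a walk with exactly k arcs from node 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for i, j, c in arcs:
            if d[k - 1][i] < INF and d[k - 1][i] + c < d[k][j]:
                d[k][j] = d[k - 1][i] + c
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp's characterization: mu* = min_v max_k (d_n(v)-d_k(v))/(n-k)
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = worst if best is None else min(best, worst)
    return best
```

The O(nm) table-filling loop dominates, which is why the mean can be recomputed at every cancelling iteration without sophisticated data structures.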
The first strongly polynomial-time minimum cost flow algorithm is due to Tardos [1985]. Several researchers including Orlin [1984], Fujishige [1986], Galil and Tardos [1986], and Orlin [1988] provided subsequent improvements in running time. Goldberg and Tarjan [1988a] obtained another strongly polynomial time algorithm by slightly modifying their pseudoflow push algorithm. Goldberg and Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimum mean cycles is also strongly polynomial time. Currently, the fastest strongly polynomial-time algorithm for the minimum cost flow problem is due to Orlin [1988]. This algorithm solves the minimum cost flow problem as a sequence of O(min(m log U, m log n)) shortest path problems. For very sparse networks, the worst-case running time of this algorithm is nearly as low as that of the best weakly polynomial-time algorithm, even for problems that satisfy the similarity assumption.
Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. Kapoor and Vaidya [1986] have shown that Karmarkar's [1984] algorithm, when applied to the minimum cost flow problem, performs O(n^2.5 mK) operations, where K = log n + log C + log U. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in O(n^2.5 √m K) time. Asymptotically, these time bounds are worse than that of the double scaling algorithm.
At this time, the research community has yet to develop sufficient evidence to fully assess the computational worth of scaling and interior point linear programming algorithms for the minimum cost flow problem. According to the folklore, even though scaling algorithms might provide the best worst-case bounds on running times, they are not as efficient as the non-scaling algorithms in practice. Boyd and Orlin [1986] have obtained contradictory results. Testing the right-hand-side scaling algorithm for the minimum cost flow problem, they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. We believe that when implemented with appropriate speed-up techniques, scaling algorithms have the potential to be competitive with the best other algorithms.
6.5 Assignment Problem

The assignment problem has been a popular research topic. The primary emphasis in the literature has been on the development of empirically efficient algorithms rather than on algorithms with improved worst-case complexity. Although the research community has developed several different algorithms for the assignment problem, many of these algorithms share common features.
The successive shortest path algorithm, described in Section 5.4 for the minimum cost flow problem, appears to lie at the heart of many assignment algorithms. This algorithm is implicit in the first assignment algorithm, due to Kuhn [1955] and known as the Hungarian method, and is explicit in the papers by Tomizava [1971] and Edmonds and Karp [1972]. When applied to an assignment problem on the network G = (N1 ∪ N2, A), the successive shortest path algorithm operates as follows. To use this approach, we first transform the assignment problem into a minimum cost flow problem by adding a source node s and a sink node t, and introducing arcs (s,i) for all i ∈ N1, and (j,t) for all j ∈ N2; these arcs have zero cost and unit capacity. The algorithm successively obtains a shortest path from s to t with respect to the linear programming reduced costs, updates the node potentials, and augments one unit of flow along the shortest path. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths and runs in O(nS(n,m,C)) time, where S(n,m,C) is the time needed to solve a shortest path problem. For a naive implementation of Dijkstra's algorithm, S(n,m,C) is O(n^2), and for a Fibonacci heap implementation it is O(m + n log n). For problems satisfying the similarity assumption, S(n,m,C) is min{m log log C, m + n √(log C)}.

The fact that the assignment problem can be solved as a sequence of n shortest path problems with arbitrary arc lengths follows from the works of Jewell [1958], Iri [1960] and Busaker and Gowen [1961] on the minimum cost flow problem. However, Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that working with reduced costs leads to shortest path problems with nonnegative arc lengths. Weintraub and Barahona [1979] worked out the details of the Edmonds-Karp algorithm for the assignment problem. The more recent threshold assignment algorithm by Glover, Glover and Klingman [1986] is also a successive shortest path algorithm; it integrates their threshold shortest path algorithm (see Glover, Glover and Klingman [1984]) with the flow augmentation process. Carraresi and Sodini [1986] also suggested a similar threshold assignment algorithm.
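To make the structure of this approach concrete, the following sketch implements the successive shortest path scheme for a dense n x n cost matrix in O(n^3) time, maintaining node potentials u and v so that all reduced costs stay nonnegative. It follows a standard textbook formulation rather than any particular implementation cited above:

```python
INF = float('inf')

def assignment(cost):
    """Successive shortest path sketch for a dense n x n cost matrix.
    Potentials u, v keep reduced costs cost[i][j] - u[i] - v[j]
    nonnegative; each outer pass adds one row to the matching via a
    shortest augmenting path (a Dijkstra-like scan over columns).
    Returns (total_cost, match) with match[row] = column."""
    n = len(cost)
    u = [0.0] * (n + 1)
    v = [0.0] * (n + 1)
    p = [0] * (n + 1)      # p[j] = row currently matched to column j (1-based)
    way = [0] * (n + 1)    # predecessor column on the augmenting path
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:                          # shortest path search
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]  # reduced cost
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):           # update the node potentials
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:                   # reached an unmatched column
                break
        while j0:                            # augment one unit of flow
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1
    total = sum(cost[p[j] - 1][j - 1] for j in range(1, n + 1))
    return total, match
```

Each outer iteration grows the matching by one augmenting shortest path, mirroring the n shortest path computations discussed above.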
Hoffman and Markowitz [1963] pointed out the transformation of a shortest path problem to an assignment problem.

Kuhn's [1955] Hungarian method is the primal-dual version of the successive shortest path algorithm. After solving a shortest path problem and updating the node potentials, the Hungarian method solves a (particularly simple) maximum flow problem to send the maximum possible flow from the source node s to the sink node t using arcs with zero reduced cost. Whereas the successive shortest path algorithm augments flow along one path in an iteration, the Hungarian method augments flow along all the shortest paths from the source node to the sink node. If we use the labeling algorithm to solve the resulting maximum flow problems, then these applications take a total of O(nm) time overall, since there are n augmentations and each augmentation takes O(m) time. Consequently, the Hungarian method, too, runs in O(nm + nS(n,m,C)) = O(nS(n,m,C)) time. (For some time after the development of the Hungarian method as described by Kuhn, the research community considered it to be an O(n^4) method. Lawler [1976] described an O(n^3) implementation of the method. Subsequently, many researchers realized that the Hungarian method in fact runs in O(nS(n,m,C)) time.) Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method.
The relaxation approach for the assignment problem is due to Dinic and Kronrod [1969], Hung and Rom [1980] and Engquist [1982]. This approach is closely related to the successive shortest path algorithm. Both approaches start with an infeasible assignment and gradually make it feasible. The major difference is in the nature of the infeasibility. The successive shortest path algorithm maintains a solution with unassigned persons and objects, and with no person or object overassigned. Throughout the relaxation algorithm, every person is assigned, but objects may be overassigned or unassigned. Both the algorithms maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. The algorithms of Dinic and Kronrod [1969] and Engquist [1982] are essentially the same as the one we just described, but the shortest path computations are somewhat disguised in the paper of Dinic and Kronrod [1969]. The algorithm of Hung and Rom [1980] maintains a strongly feasible basis rooted at an overassigned node and, after each augmentation, reoptimizes over the previous basis to obtain another strongly feasible basis. All of these algorithms run in O(nS(n,m,C)) time.
Another algorithm worth mentioning is due to Balinski and Gomory [1964]. This is a primal algorithm that maintains a feasible assignment and gradually converts it into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. Derigs [1985] notes that the shortest path computations underlie this method, and that it runs in O(nS(n,m,C)) time.
Researchers have also studied primal simplex algorithms for the assignment problem. The basis of the assignment problem is highly degenerate; of its 2n-1 variables, only n are nonzero. Probably because of this excessive degeneracy, the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr, Glover and Klingman [1977a] devised the strongly feasible basis technique. These authors developed the details of the network simplex algorithm when implemented to maintain a strongly feasible basis for the assignment problem; they also reported encouraging computational results. Subsequent research focused on developing polynomial-time simplex algorithms. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^3) pivots. Hung [1983] describes a pivot rule that performs at most O(n^3) consecutive degenerate pivots and at most O(n log nC) nondegenerate pivots. Hence, his algorithm performs O(n^3 log nC) pivots. Akgul [1985b] suggested another primal simplex algorithm performing O(n^2) pivots. This algorithm essentially amounts to solving n shortest path problems and runs in O(nS(n,m,C)) time.
Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the network simplex algorithm and showed that for the assignment problem this rule requires O(n^2 log nC) pivots; a naive implementation of the algorithm runs in O(n^2 m log nC) time. Ahuja and Orlin [1988] described a scaling version of Dantzig's pivot rule that performs O(n^2 log C) pivots and can be implemented to run in O(nm log C) time using simple data structures. The algorithm essentially consists of pivoting in any arc with sufficiently large reduced cost. The algorithm defines the term "sufficiently large" iteratively; initially, this threshold value equals C, and within O(n^2) pivots it is halved.

Balinski [1985] developed the signature method, which is a dual simplex algorithm for the assignment problem. (Although his basic algorithm maintains a dual feasible basis, it is not a dual simplex algorithm in the traditional sense because it does not necessarily increase the dual objective at every iteration; some variants of this algorithm do have this property.) Balinski's algorithm performs O(n^2) pivots and runs in O(n^3) time. Goldfarb [1985] described some implementations of Balinski's algorithm that run in O(n^3) time using simple data structures and in O(nm + n^2 log n) time using Fibonacci heaps.

The auction algorithm is due to Bertsekas and uses basic ideas originally suggested in Bertsekas [1979]. Bertsekas and Eckstein [1988] described a more recent version of the auction algorithm. Our presentation of the auction algorithm and its analysis is somewhat different than the one given by Bertsekas and Eckstein [1988]. For example, the algorithm we have presented increases the prices of the objects by one unit at a time, whereas the algorithm by Bertsekas and Eckstein increases prices by the maximum amount that preserves ε-optimality of the solution. Bertsekas [1981] has presented another algorithm for the assignment problem which is in fact a specialization of his relaxation algorithm for the minimum cost flow problem (see Bertsekas [1985]).

Currently, the best strongly polynomial-time bound to solve the assignment problem is O(nm + n^2 log n), which is achieved by many assignment algorithms. Scaling algorithms can do better for problems that satisfy the similarity assumption. Gabow [1985], using bit-scaling of costs, developed the first scaling algorithm for the assignment problem. His algorithm performs O(log C) scaling phases and solves each phase in O(n^(3/4) m) time, thereby achieving an O(n^(3/4) m log C) time bound. Using the concept of ε-optimality, Gabow and Tarjan [1987] developed another scaling algorithm running in time O(n^(1/2) m log nC). Observe that the generic pseudoflow push algorithm for the minimum cost flow problem described in Section 5.8 solves the assignment problem in O(nm log nC) time, since every push is a saturating push. Bertsekas and Eckstein [1988] showed that the scaling version of the auction algorithm runs in O(nm log nC) time. Section 5.11 has presented a modified version of this algorithm, due to Orlin and Ahuja [1988]. They also improved the time bound of the auction algorithm to O(n^(1/2) m log nC). This time bound is comparable to that of Gabow and Tarjan's algorithm, but the two algorithms would probably have different computational attributes. For problems satisfying the similarity assumption, these two algorithms achieve the best time bound to solve the assignment problem without using any sophisticated data structure.
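A minimal sketch of the auction idea for a dense minimum cost assignment problem follows. It is an illustrative simplification, not Bertsekas' implementation: each unassigned person bids on its best object, raising that object's price by the bid increment plus ε, and with integer costs and ε < 1/n the final assignment is optimal.

```python
def auction_assignment(cost):
    """Illustrative auction sketch for a dense n x n min-cost assignment
    with integer costs.  Persons bid using values -cost[i][j]; choosing
    eps < 1/n guarantees optimality of the final assignment."""
    n = len(cost)
    eps = 1.0 / (n + 1)
    price = [0.0] * n
    owner = [None] * n          # owner[j] = person currently holding object j
    assigned = [None] * n       # assigned[i] = object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # best and second-best net value of an object for person i
        best_j, best, second = 0, -float('inf'), -float('inf')
        for j in range(n):
            net = -cost[i][j] - price[j]
            if net > best:
                best_j, second, best = j, best, net
            elif net > second:
                second = net
        if second == -float('inf'):
            second = best                      # degenerate case n = 1
        price[best_j] += best - second + eps   # raise the winning price
        if owner[best_j] is not None:          # outbid the previous holder
            assigned[owner[best_j]] = None
            unassigned.append(owner[best_j])
        owner[best_j] = i
        assigned[i] = best_j
    total = sum(cost[i][assigned[i]] for i in range(n))
    return total, assigned
```

The bid increment "best minus second-best plus ε" is exactly what preserves ε-optimality of the price-assignment pair between iterations.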
As mentioned previously, most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. Over the years, many computational studies have compared one algorithm with a few other algorithms. Some representative computational studies are those conducted by Barr, Glover and Klingman [1977a] on the network simplex method, by McGinnis [1983] and Carpento, Martello and Toth [1982, 1988] on the primal-dual method, by Engquist [1982] on the relaxation methods, and by Glover et al. [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. Since no paper has compared all of these algorithms, it is difficult to assess their computational merits. Nevertheless, results to date seem to justify the following observations about the algorithms' relative performance. The primal simplex algorithm is slower than the primal-dual, relaxation and successive shortest path algorithms. Among the latter three approaches, the successive shortest path algorithms due to Glover et al. [1986] and Jonker and Volgenant [1988] appear to be the fastest. Bertsekas and Eckstein [1988] found that the scaling version of the auction algorithm is competitive with Jonker and Volgenant's algorithm. Carpento, Martello and Toth [1988] present several FORTRAN implementations of assignment algorithms for dense and sparse cases.
6.6 Other Topics

Our discussion in this paper has featured single commodity network flow problems with linear costs. Several other generic topics in the broader problem domain of network optimization are of considerable theoretical and practical interest. In particular, four other topics deserve mention: (i) generalized network flows; (ii) convex cost flows; (iii) multicommodity flows; and (iv) network design. We shall now discuss these topics briefly.
Generalized Network Flows

The flow problems we have considered in this chapter assume that arcs conserve flows, i.e., the flow entering an arc equals the flow leaving the arc. In models of generalized network flows, arcs do not necessarily conserve flow. If x_ij units of flow enter an arc (i,j), then r_ij x_ij units "arrive" at node j; r_ij is a nonnegative flow multiplier associated with the arc. If 0 < r_ij < 1, then the arc is lossy, and if 1 < r_ij < ∞, then the arc is gainy. In the conventional flow networks, r_ij = 1 for all arcs. Generalized network flows arise in many application contexts. For example, the multiplier might model pressure losses in a water resource network or losses incurred in the transportation of perishable goods.
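As a small illustration of lossy and gainy arcs, the following sketch (our own, not from the literature) propagates flow along a path of generalized arcs, with capacities applying to the flow entering each arc:

```python
def push_along_path(x, path):
    """Send x units into a path of generalized arcs.
    path: list of (capacity, multiplier) pairs; each capacity limits the
    flow *entering* that arc.  Returns the amount arriving at the end."""
    for cap, mult in path:
        x = min(x, cap)     # capacity restricts the entering flow
        x *= mult           # lossy if mult < 1, gainy if mult > 1
    return x
```

With all multipliers equal to 1 this reduces to ordinary flow conservation along the path.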
Researchers have studied several generalized network flow problems. An extension of the conventional maximum flow problem is the generalized maximum flow problem, which either maximizes the flow out of a source node or maximizes the flow into a sink node (these two objectives are different!). The source version of the problem can be stated as the following linear program.
Maximize v_s  (6.1a)

subject to

Σ{j: (i,j) ∈ A} x_ij − Σ{j: (j,i) ∈ A} r_ji x_ji = { v_s, if i = s; 0, if i ≠ s, t; −v_t, if i = t }, for all i ∈ N,  (6.1b)

0 ≤ x_ij ≤ u_ij, for all (i,j) ∈ A.

Note that the capacity restrictions apply to the flows entering the arcs. Further, note that v_s is not necessarily equal to v_t, because of flow losses and gains within arcs.
The generalized maximum flow problem has many similarities with the minimum cost flow problem. Extended versions of the successive shortest path algorithm, the negative cycle algorithm, and the primal-dual algorithm for the minimum cost flow problem apply to the generalized maximum flow problem. The paper by Truemper [1977] surveys these approaches. These algorithms, however, are not pseudopolynomial-time, mainly because the optimal arc flows and node potentials might be fractional. The recent paper by Goldberg, Plotkin and Tardos [1986] describes the first polynomial-time combinatorial algorithms for the generalized maximum flow problem.

In the generalized minimum cost flow problem, which is an extension of the ordinary minimum cost flow problem, we wish to determine the minimum cost flow in a generalized network satisfying the specified supply/demand requirements of nodes. There are three main approaches to solve this problem. The first approach, due to Jewell [1982], is essentially a primal-dual algorithm. The second approach is the primal simplex algorithm studied by Elam, Glover and Klingman [1979], among others. Elam et al. find their implementation to be very efficient in practice; they find that it is about 2 to 3 times slower than their implementations for the ordinary minimum cost flow algorithm. The third approach, due to Bertsekas and Tseng [1988b], generalizes their minimum cost flow relaxation algorithm for the generalized minimum cost flow problem.
Convex Cost Flows

We shall restrict this brief discussion to convex cost flow problems with separable cost functions, i.e., the objective function can be written in the form Σ{(i,j) ∈ A} C_ij(x_ij). Problems containing nonconvex nonseparable cost terms, such as x_12 x_13, are substantially more difficult to solve and continue to pose a significant challenge for the mathematical programming community. Even problems with nonseparable, but convex, objective functions are more difficult to solve; typically, analysts rely on general nonlinear programming techniques to solve these problems. The separable convex cost flow problem has the following formulation:
Minimize Σ{(i,j) ∈ A} C_ij(x_ij)  (6.2a)

subject to

Σ{j: (i,j) ∈ A} x_ij − Σ{j: (j,i) ∈ A} x_ji = b(i), for all i ∈ N,  (6.2b)

0 ≤ x_ij ≤ u_ij, for all (i,j) ∈ A.  (6.2c)

In this formulation, C_ij(x_ij) is a convex function for each (i,j) ∈ A. The research community has focused on two classes of separable convex cost flow problems: (i) each C_ij(x_ij) is a piecewise linear function; (ii) each C_ij(x_ij) is a continuously differentiable function. Solution techniques used to solve the two classes of problems are quite different.

There is a well-known technique for transforming a linear program with piecewise linear convex functions to a standard linear program (see, e.g., Bradley, Hax and Magnanti [1977]). This transformation reduces the convex cost flow problem to a minimum cost flow problem: it introduces one arc for each linear segment in the cost functions, thus increasing the problem size. However, it is possible to carry out this transformation implicitly and therefore modify many minimum cost flow algorithms, such as the successive shortest path algorithm, negative cycle algorithm, primal-dual and out-of-kilter algorithms, to solve convex cost flow problems without increasing the problem size. The paper by Ahuja, Batra, and Gupta [1984] illustrates this technique and suggests a pseudopolynomial time algorithm.

Observe that it is possible to use a piecewise linear function, with linear segments chosen (if necessary) with sufficiently small size, to approximate a convex function of one variable to any desired degree of accuracy. More elaborate alternatives are possible. For example, if we knew the optimal solution to a separable convex problem a priori (which of course, we don't), then we could solve the problem exactly using a linear approximation for any arc (i,j) with only three breakpoints: at 0, u_ij and the optimal flow on the arc. Any other breakpoint in the linear approximation would be irrelevant, and adding other points would be computationally wasteful. This observation has prompted researchers to devise adaptive approximations that iteratively revise the linear approximation based upon the solution to a previous, coarser, approximation. (See Meyer [1979] for an example of this approach.) If we were interested in only integer solutions, then we could choose the breakpoints of the linear approximation at the set of integer values, and therefore solve the problem in pseudopolynomial time.
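As a small illustration of the piecewise linear transformation, the following sketch (the helper name is ours) expands a single arc with a convex integer cost function into parallel unit-capacity arcs, one per linear segment. Convexity guarantees nondecreasing marginal costs, so a minimum cost flow code automatically fills the cheapest segments first:

```python
def piecewise_arcs(C, u):
    """Expand one arc with convex integer cost function C (a callable)
    and capacity u into parallel unit-capacity arcs, one per unit
    segment.  Each arc carries the marginal cost C(k) - C(k-1), which
    is nondecreasing in k because C is convex."""
    return [(1, C(k) - C(k - 1)) for k in range(1, u + 1)]  # (capacity, cost)
```

Carrying out this expansion implicitly, rather than materializing the parallel arcs, is exactly what avoids the increase in problem size discussed above.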
Researchers have suggested other solution strategies, using ideas from nonlinear programming, for solving the general separable convex cost flow problem. Some important references on this topic are Ali, Helgason and Kennington [1978], Kennington and Helgason [1980], Meyer and Kao [1981], Dembo and Klincewicz [1981], Klincewicz [1983], Rockafellar [1984], Florian [1986], and Bertsekas, Hosein and Tseng [1987].

Some versions of the convex cost flow problem can be solved in polynomial time. Minoux [1984] has devised a polynomial-time algorithm for one of its special cases, the minimum quadratic cost flow problem. Minoux [1986] has also developed a polynomial-time algorithm to obtain an integer optimum solution of the convex cost flow problem.
Multicommodity Flows

Multicommodity flow problems arise when several commodities use the same underlying network, but share common arc capacities. In this section, we state a linear programming formulation of the multicommodity minimum cost flow problem and point the reader to contributions to this problem and its specializations. Suppose that the problem contains r distinct commodities numbered 1 through r. Let b^k denote the supply/demand vector of commodity k. Then the multicommodity minimum cost flow problem can be formulated as follows:

Minimize Σ{k = 1 to r} Σ{(i,j) ∈ A} c_ij^k x_ij^k  (6.3a)

subject to

Σ{j: (i,j) ∈ A} x_ij^k − Σ{j: (j,i) ∈ A} x_ji^k = b_i^k, for all i and k,  (6.3b)

Σ{k = 1 to r} x_ij^k ≤ u_ij, for all (i,j) ∈ A,  (6.3c)

0 ≤ x_ij^k ≤ u_ij^k, for all (i,j) and k.  (6.3d)

In this formulation, x_ij^k and c_ij^k represent the amount of flow and the unit cost of flow for commodity k on arc (i,j). As indicated by the "bundle constraints" (6.3c), the total flow on any arc cannot exceed its capacity. Further, as captured by (6.3d), the model contains additional capacity restrictions on the flow of each commodity on each arc.
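The bundle constraints (6.3c) are easy to state in code. The following small checker (an illustration of ours, using hypothetical dictionary-based data structures) verifies that the total flow of all commodities on each arc respects the shared capacity:

```python
def bundle_feasible(flows, u):
    """Check the bundle constraints (6.3c).
    flows: dict commodity -> dict arc -> flow amount
    u:     dict arc -> shared arc capacity
    Returns True iff, on every arc, the total flow summed over all
    commodities does not exceed the arc's capacity."""
    total = {}
    for xk in flows.values():
        for arc, val in xk.items():
            total[arc] = total.get(arc, 0) + val
    return all(total.get(a, 0) <= cap for a, cap in u.items())
```

Dropping this check is what lets the problem decompose by commodity, as noted below.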
Observe that if the multicommodity flow problem does not contain bundle constraints, then it decomposes into r single commodity minimum cost flow problems, one for each commodity. With the presence of the bundle constraints (6.3c), the essential problem is to distribute the capacity of each arc to individual commodities in a way that minimizes overall flow costs.

We first consider some special cases. The multicommodity maximum flow problem is a special instance of (6.3). In this problem, every commodity k has a source node and a sink node, represented respectively by s^k and t^k. The objective is to maximize the sum of the flows that can be sent from s^k to t^k for all k. Hu [1963] showed how to solve the two-commodity maximum flow problem on an undirected network in pseudopolynomial time by a labeling algorithm. Rothfarb, Shein and Frisch [1968] showed how to solve the multicommodity maximum flow problem with a common source or a common sink by a single application of any maximum flow algorithm. Ford and Fulkerson [1958] solved the general multicommodity maximum flow problem using a column generation algorithm. Dantzig and Wolfe [1960] subsequently generalized this decomposition approach to linear programming.
Researchers have proposed three basic approaches for solving the general multicommodity minimum cost flow problem: price-directive decomposition, resource-directive decomposition and partitioning methods. We refer the reader to the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these methods. The book by Kennington and Helgason [1980] describes the details of a primal simplex decomposition algorithm for the multicommodity minimum cost flow problem. Unfortunately, algorithmic developments on the multicommodity minimum cost flow problem have not progressed at nearly the pace of the progress made on the single commodity minimum cost flow problem. Although specialized primal simplex software can solve the single commodity problem 10 to 100 times faster than general purpose linear programming systems, the algorithms developed for the multicommodity minimum cost flow problem generally solve these problems only about 3 times faster than the general purpose software (see Ali et al. [1984]).
Network Design

We have focused on solution methods for finding optimal routings in a network; that is, on analysis rather than synthesis. The design problem is of considerable importance in practice and has generated an extensive literature of its own. Many design problems can be stated as fixed cost network flow problems: (some) arcs have an associated fixed cost which is incurred whenever the arc carries any flow. These network design models contain 0-1 variables y_ij that indicate whether or not an arc is included in the network. Typically, these models involve multicommodity flows. The design decisions y_ij and routing decisions x_ij^k are related by "forcing" constraints of the form

Σ{k = 1 to r} x_ij^k ≤ u_ij y_ij, for all (i,j) ∈ A,

which replace the bundle constraints (6.3c) in the multicommodity flow problem (6.3). These constraints force the flow x_ij^k of each commodity k on arc (i,j) to be zero if the arc is not included in the network design; if the arc is included, the constraint on arc (i,j) restricts the total flow to be at most the arc's design capacity u_ij. Many modelling enhancements are possible; for example, some applications restrict the underlying network topology (for instance, in some applications the network must be a tree; in other applications, the network might need alternate paths to ensure reliable operations). Also, many different objective functions arise in practice. One of the most popular is

Minimize Σ{k = 1 to r} Σ{(i,j) ∈ A} c_ij^k x_ij^k + Σ{(i,j) ∈ A} F_ij y_ij

which models commodity dependent per unit routing costs c_ij^k as well as fixed costs F_ij for the design arcs.
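A small sketch (ours, with hypothetical dictionary-based data structures) of how the forcing constraints and the fixed charge objective interact:

```python
def forcing_feasible(x, y, u):
    """Check the forcing constraints sum_k x_ij^k <= u_ij * y_ij:
    a closed arc (y = 0) must carry no flow; an open arc carries at
    most its design capacity.  x: dict commodity -> dict arc -> flow."""
    total = {}
    for xk in x.values():
        for arc, val in xk.items():
            total[arc] = total.get(arc, 0) + val
    return all(t <= u[a] * y.get(a, 0) for a, t in total.items())

def design_cost(x, y, c, F):
    """Fixed charge objective: commodity dependent routing costs
    c[k][arc] plus fixed costs F[arc] for every arc opened (y = 1)."""
    routing = sum(c[k][a] * v for k, xk in x.items() for a, v in xk.items())
    return routing + sum(F[a] for a, opened in y.items() if opened)
```

The fixed cost is paid for every opened arc whether or not it is saturated, which is what makes these models genuinely combinatorial.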
Usually, network design problems require solution techniques from integer programming and other types of solution methods from combinatorial optimization. These solution methods include dynamic programming, dual ascent procedures, optimization-based heuristics, and integer programming decomposition (Lagrangian relaxation, Benders decomposition), as well as emerging ideas from the field of polyhedral combinatorics. Magnanti and Wong [1984] and Minoux [1985, 1987] have described the broad range of applicability of network design models and summarize solution methods for these problems, as well as many references from the network design literature. Nemhauser and Wolsey [1988] discuss many underlying methods from integer programming and combinatorial optimization.
Acknowledgments

We are grateful to Michel Goemans, Hershel Safer, Lawrence Wolsey, Richard Wong and Robert Tarjan for a careful reading of the manuscript and many useful suggestions. We are particularly grateful to William Cunningham for many valuable and detailed comments.

The research of the first and third authors was supported in part by the Presidential Young Investigator Grant 8451517-ECS of the National Science Foundation, by Grant AFOSR-88-0088 from the Air Force Office of Scientific Research, and by Grants from Analog Devices, Apple Computer, Inc., and Prime Computer.
References Aashtiani, H.A., and T. L. Magnanti.
Flow Algorithms. Technical Report
Implementing Prin\al-E>ual Network
1976.
OR
055-76,
Operations Research Center, M.I.T.,
Cambridge, MA.
Aho, A.V.
,
J.E.
Hop>croft,
and
Ullman. 1974. The Design and Analysis
J.D.
Addison-Wesley, Reading,
Algorithms.
Ahuja, R. K.,
L. Batra,
J.
and
S.
A.V.
K. Gupta.
Goldberg,
1984.
Minimum-Cost Rows by Double of
Management,
M.I.T.,
Ahuja, R.K., K. Mehlhom,
J.B.
A
Parametric Algorithm for the
Euro. ].of Oper. Res. 16, 222-25
and R.E. Tarjan. 1988.
Finding
Working Paper No. 2047-88, Sloan School
Scaling.
MA.
Cambridge,
Ahuja, R.K., M. Kodialam, and
Orlin,
J.B.
Computer
MA.
Convex Cost Network Flow and Related Problems. Ahuja, R.K.,
of
1988.
Orlin.
J.B.
and
Orlin,
Personal Communication.
1988.
R.E. Tarjan.
Faster Algorithms for
the Shortest Path Problem. Technical Report No. 193, Operations Research Center, M.I.T.,
Cambridge, MA.
Ahuja, R.K., and
J.B.
Flow Problem.
Working Paper
Cambridge, MA. 1987. To appear Ahuja, R.K., and
J.B.
Bipartite
J.B.
Orlin,
J.B.
Maximum Flow
Maximum M.I.T.,
Improved Primal Simplex Algorithms
for the
in Oper. Res.
and
Cost Flow Problems. To appear.
R.E. Tarjan.
1988.
Improved Algorithms
for
and
Problem.
R.E. Tarjan.
Working Paper
1988.
Improved Time Bounds
1966-87, Sloan School of
for the
Management,
Cambridge, MA.
Akgul, M. of
for the
Management,
Minimum
Orlin, C. Stein,
and Simple Algorithm
Network Flow Problen«. To appear.
Ahuja, R.K.,
M.I.T.,
Fast
1905-87, Sloan School of
1988.
Orlin.
Shortest Path, Assignment and
Ahuja, R.K.,
A
1987.
Orlin.
1985a.
Shortest Path and Simplex Method.
Research Report, Department
Computer Science and Operations Research, North Carolina
Raleigh, N.C.
State University,
191
Akgul, M.
A
1985b.
Genuinely Polynomial Primal Simplex Algorithm for the Research Report, Department of Computer Science and
Assignment Problem.
Operations Research, North Carolina State University, Raleigh, N.C. D. Bamett, K. Farhangian,
Ali,I.,
Wong.
J.
Kennington,
B. Patty, B. Shetty, B.
McCarl and
P.
Multicommodity Network Problems: Applications and Computations.
1984.
LIE. Trans. 16,127-134.
Ali, A.
I.,
R. V. Helgason,
A
Flow Problem:
and
J.
L.
Kennington.
State-of-the-Art Survey.
Methodist University,
The Convex Cost Netwrork
1978.
Technical Report
OREM
78001, Southern
Texeis.
Armstrong, R.D., D. Klingman, and D. Whitman. 1980. Implementation and Analysis of a Variant of the Dual Method for the Capacitated Transshipment Problem. Euro. J. Oper. Res. 4, 403-420.

Assad, A. 1978. Multicommodity Network Flows - A Survey. Networks 8, 37-91.

Balinski, M.L. 1985. Signature Methods for the Assignment Problem. Oper. Res. 33, 527-536.
Balinski, M.L., and R.E. Gomory. 1964. A Primal Method for the Assignment and Transportation Problems. Man. Sci. 10, 578-593.

Barahona, F., and E. Tardos. 1987. Note on Weintraub's Minimum Cost Flow Algorithm. Research Report, Dept. of Mathematics, M.I.T., Cambridge, MA.

Baratz, A.E. 1977. Construction and Analysis of a Network Flow Problem Which Forces Karzanov Algorithm to O(n³) Running Time. Technical Report TM-83, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Barr, R., F. Glover, and D. Klingman. 1977a. The Alternating Path Basis Algorithm for the Assignment Problem. Math. Prog. 12, 1-13.

Barr, R., F. Glover, and D. Klingman. 1977b. A Network Augmenting Path Basis Algorithm for Transshipment Problems. Proceedings of the International Symposium on External Methods and System Analysis.
Barr, R., F. Glover, and D. Klingman. 1978. Generalized Alternating Path Algorithm for Transportation Problems. Euro. J. Oper. Res. 2, 137-144.

Barr, R., F. Glover, and D. Klingman. 1979. Enhancement of Spanning Tree Labeling Procedures for Network Optimization. INFOR 17, 16-34.

Bazaraa, M., and J.J. Jarvis. 1978. Linear Programming and Network Flows. John Wiley & Sons.
Bellman, R. 1958. On a Routing Problem. Quart. Appl. Math. 16, 87-90.

Berge, C., and A. Ghouila-Houri. 1962. Programming, Games and Transportation Networks. John Wiley & Sons.

Bertsekas, D.P. 1979. A Distributed Algorithm for the Assignment Problem. Working Paper, Laboratory for Information Decision Systems, M.I.T., Cambridge, MA.

Bertsekas, D.P. 1981. A New Algorithm for the Assignment Problem. Math. Prog. 21, 152-171.
Bertsekas, D.P. 1985. A Unified Framework for Primal-Dual Methods in Minimum Cost Network Flow Problems. Math. Prog. 32, 125-145.

Bertsekas, D.P. 1986. Distributed Relaxation Methods for Linear Network Flow Problems. Proc. of 25th IEEE Conference on Decision and Control, Athens, Greece.

Bertsekas, D.P. 1987. The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem. Report LIDS-P-1653, Laboratory for Information Decision Systems, M.I.T., Cambridge, MA. Also in Annals of Operations Research 14, 105-123.

Bertsekas, D., and J. Eckstein. 1988. Dual Coordinate Step Methods for Linear Network Flow Problems. To appear in Math. Prog., Series B.

Bertsekas, D.P., and R. Gallager. 1987. Data Networks. Prentice-Hall.

Bertsekas, D.P., P.A. Hosein, and P. Tseng. 1987. Relaxation Methods for Network Flow Problems with Convex Arc Costs. SIAM J. of Control and Optimization 25, 1219-1243.
Bertsekas, D.P., and P. Tseng. 1988a. The Relax Codes for Linear Minimum Cost Network Flow Problems. In B. Simeone et al. (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13, 125-190.

Bertsekas, D.P., and P. Tseng. 1988b. Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems. Oper. Res. 36, 93-114.

Bland, R.G., and D.L. Jensen. 1985. On the Computational Behavior of a Polynomial-Time Network Flow Algorithm. Technical Report 661, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, N.Y.
Boas, P. Van Emde, R. Kaas, and E. Zijlstra. 1977. Design and Implementation of an Efficient Priority Queue. Math. Sys. Theory 10, 99-127.

Bodin, L.D., B.L. Golden, A.A. Assad, and M.O. Ball. 1983. Routing and Scheduling of Vehicles and Crews. Comp. and Oper. Res. 10, 65-211.

Boyd, A., and J.B. Orlin. 1986. Personal Communication.

Bradley, G., G. Brown, and G. Graves. 1977. Design and Implementation of Large Scale Primal Transshipment Algorithms. Man. Sci. 24, 1-38.

Bradley, S.P., A.C. Hax, and T.L. Magnanti. 1977. Applied Mathematical Programming. Addison-Wesley.
Busaker, R.G., and P.J. Gowen. 1961. A Procedure for Determining a Family of Minimal-Cost Network Flow Patterns. O.R.O. Technical Report No. 15, Operational Research Office, Johns Hopkins University, Baltimore, MD.

Carpento, G., S. Martello, and P. Toth. 1988. Algorithms and Codes for the Assignment Problem. In B. Simeone et al. (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13, 193-224.

Carraresi, P., and C. Sodini. 1986. An Efficient Algorithm for the Bipartite Matching Problem. Eur. J. Oper. Res. 23, 86-93.

Cheriyan, J. 1988. Parametrized Worst Case Networks for Preflow Push Algorithms. Technical Report, Computer Science Group, Tata Institute of Fundamental Research, Bombay, India.
Cheriyan, J., and S.N. Maheshwari. 1987. Analysis of Preflow Push Algorithms for Maximum Network Flow. Technical Report, Dept. of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India.

Cherkasky, R.V. 1977. Algorithm for Construction of Maximal Flow in Networks with Complexity of O(V²√E) Operations. Mathematical Methods of Solution of Economical Problems 7, 112-125 (in Russian).

Cheung, T. 1980. Computational Comparison of Eight Methods for the Maximum Network Flow Problem. ACM Trans. on Math. Software 6, 1-16.

Christophides, N. 1975. Graph Theory: An Algorithmic Approach. Academic Press.

Cunningham, W.H. 1976. A Network Simplex Method. Math. Prog. 11, 105-116.

Cunningham, W.H. 1979. Theoretical Properties of the Network Simplex Method. Math. of Oper. Res. 4, 196-208.

Dantzig, G.B. 1951. Application of the Simplex Method to a Transportation Problem. In T.C. Koopmans (ed.), Activity Analysis of Production and Allocation, John Wiley & Sons, Inc., 359-373.
Dantzig, G.B. 1955. Upper Bounds, Secondary Constraints, and Block Triangularity in Linear Programming. Econometrica 23, 174-183.

Dantzig, G.B. 1960. On the Shortest Route through a Network. Man. Sci. 6, 187-190.

Dantzig, G.B. 1962. Linear Programming and Extensions. Princeton University Press, Princeton, NJ.

Dantzig, G.B. 1967. All Shortest Routes in a Graph. In P. Rosenthiel (ed.), Theory of Graphs, Gordon and Breach, NY, 91-92.

Dantzig, G.B., and D.R. Fulkerson. 1956. On the Max-Flow Min-Cut Theorem of Networks. In H.W. Kuhn and A.W. Tucker (eds.), Linear Inequalities and Related Systems, Annals of Mathematics Study 38, Princeton University Press, 215-221.

Dantzig, G.B., and P. Wolfe. 1960. Decomposition Principle for Linear Programs. Oper. Res. 8, 101-111.
Dembo, R.S., and J.G. Klincewicz. 1981. A Scaled Reduced Gradient Algorithm for Network Flow Problems with Convex Separable Costs. Math. Prog. Study 15, 125-147.

Denardo, E.V., and B.L. Fox. 1979. Shortest-Route Methods: 1. Reaching, Pruning and Buckets. Oper. Res. 27, 161-186.

Deo, N., and C. Pang. 1984. Shortest Path Algorithms: Taxonomy and Annotation. Networks 14, 275-323.

Derigs, U. 1985. The Shortest Augmenting Path Method for Solving Assignment Problems: Motivation and Computational Experience. Annals of Operations Research 4, 57-102.

Derigs, U. 1988. Programming in Networks and Graphs. Lecture Notes in Economics and Mathematical Systems, Vol. 300, Springer-Verlag.

Derigs, U., and W. Meier. 1988. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. Technical Report, University of Bayreuth, West Germany.

Dial, R. 1969. Algorithm 360: Shortest Path Forest with Topological Ordering. Comm. ACM 12, 632-633.
Dial, R., F. Glover, D. Karney, and D. Klingman. 1979. A Computational Analysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Networks 9, 215-248.

Dijkstra, E. 1959. A Note on Two Problems in Connexion with Graphs. Numerische Mathematik 1, 269-271.

Dinic, E.A. 1970. Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation. Soviet Math. Dokl. 11, 1277-1280.

Dinic, E.A., and M.A. Kronrod. 1969. An Algorithm for Solution of the Assignment Problem. Soviet Maths. Doklady 10, 1324-1326.

Edmonds, J. 1970. Exponential Growth of the Simplex Method for the Shortest Path Problem. Unpublished paper, University of Waterloo, Ontario, Canada.
Edmonds, J., and R.M. Karp. 1972. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. J. ACM 19, 248-264.

Elam, J., F. Glover, and D. Klingman. 1979. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks. Math. of Oper. Res. 4, 39-59.

Elias, P., A. Feinstein, and C.E. Shannon. 1956. Note on Maximum Flow Through a Network. IRE Trans. on Infor. Theory IT-2, 117-119.

Engquist, M. 1982. A Successive Shortest Path Algorithm for the Assignment Problem. INFOR 20, 370-384.

Even, S. 1976. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. Technical Report TM-80, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Even, S. 1979. Graph Algorithms. Computer Science Press, Maryland.

Even, S., and R.E. Tarjan. 1975. Network Flow and Testing Graph Connectivity. SIAM J. Comput. 4, 507-518.
Fernandez-Baca, D., and C.U. Martel. 1987. On the Efficiency of Maximum Flow Algorithms on Networks with Small Integer Capacities. Research Report, Department of Computer Science, Iowa State University, Ames, IA. To appear in Algorithmica.

Florian, M. 1986. Nonlinear Cost Network Models in Transportation Analysis. Math. Prog. Study 26, 167-196.

Floyd, R.W. 1962. Algorithm 97: Shortest Path. Comm. ACM 5, 345.
Ford, L.R., Jr. 1956. Network Flow Theory. Report P-923, Rand Corp., Santa Monica, CA.

Ford, L.R., Jr., and D.R. Fulkerson. 1956. Maximal Flow through a Network. Canad. J. Math. 8, 399-404.

Ford, L.R., Jr., and D.R. Fulkerson. 1956. Solving the Transportation Problem. Man. Sci. 3, 24-32.

Ford, L.R., Jr., and D.R. Fulkerson. 1957. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. Naval Res. Logist. Quart. 4, 47-54.

Ford, L.R., Jr., and D.R. Fulkerson. 1958. Constructing Maximal Dynamic Flows from Static Flows. Oper. Res. 6, 419-433.

Ford, L.R., and D.R. Fulkerson. 1958. A Suggested Computation for Maximal Multicommodity Network Flow. Man. Sci. 5, 97-101.

Ford, L.R., Jr., and D.R. Fulkerson. 1962. Flows in Networks. Princeton University Press, Princeton, NJ.
Francis, R., and P. Mirchandani (eds.). 1988. Discrete Location Theory. John Wiley & Sons. To appear.

Frank, H., and I.T. Frisch. 1971. Communication, Transmission, and Transportation Networks. Addison-Wesley.

Fredman, M.L. 1976. New Bounds on the Complexity of the Shortest Path Problem. SIAM J. of Computing 5, 83-89.

Fredman, M.L., and R.E. Tarjan. 1984. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. 25th Annual IEEE Symp. on Found. of Comp. Sci., 338-346. Also in J. of ACM 34 (1987), 596-615.

Fujishige, S. 1986. An O(m³ log n) Capacity-Rounding Algorithm for the Minimum Cost Circulation Problem: A Dual Framework of Tardos' Algorithm. Math. Prog. 35, 298-309.
Fulkerson, D.R. 1961. An Out-of-Kilter Method for Minimal Cost Flow Problems. SIAM J. Appl. Math. 9, 18-27.

Fulkerson, D.R., and G.B. Dantzig. 1955. Computation of Maximum Flow in Networks. Naval Res. Log. Quart. 2, 277-283.

Gabow, H.N. 1985. Scaling Algorithms for Network Problems. J. of Comput. Sys. Sci. 31, 148-168.

Gabow, H.N., and R.E. Tarjan. 1987. Faster Scaling Algorithms for Network Problems. SIAM J. Comput. (submitted).
Galil, Z. 1980. An O(V^(5/3) E^(2/3)) Algorithm for the Maximum Flow Problem. Acta Informatica 14, 221-242.

Galil, Z. 1981. On the Theoretical Efficiency of Various Network Flow Algorithms. Theoretical Comp. Sci. 14, 103-111.

Galil, Z., and A. Naamad. 1980. An O(VE log² V) Algorithm for the Maximum Flow Problem. J. of Comput. Sys. Sci. 21, 203-217.

Galil, Z., and E. Tardos. 1986. An O(n²(m + n log n) log n) Minimum Cost Flow Algorithm. Proc. 27th Annual Symp. on the Found. of Comp. Sci., 136-146.

Gallo, G., and S. Pallottino. 1988. Shortest Path Algorithms. In B. Simeone, P. Toth, G. Gallo, F. Maffioli, and S. Pallottino (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13, 3-79.

Gallo, G., S. Pallottino, C. Ruggen, and G. Starchi. 1982. Shortest Paths: A Bibliography. Sofmat Document 81-P1-4-SOFMAT-27, Rome, Italy.

Gavish, B., P. Schweitzer, and E. Shlifer. 1977. The Zero Pivot Phenomenon in Transportation Problems and Its Computational Implications. Math. Prog. 12, 226-240.

Gibby, D., F. Glover, D. Klingman, and M. Mead. 1983. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Oper. Res. Letters 2, 199-202.

Gilsinn, J., and C. Witzgall. 1973. A Performance Comparison of Labeling Algorithms for Calculating Shortest Path Trees. Technical Note 772, National Bureau of Standards, Washington, D.C.
Glover, F., R. Glover, and D. Klingman. 1984. The Threshold Shortest Path Algorithm. Networks 14, No. 1.

Glover, F., R. Glover, and D. Klingman. 1986. Threshold Assignment Algorithm. Math. Prog. Study 26, 12-37.

Glover, F., D. Karney, and D. Klingman. 1974. Implementation and Computational Comparisons of Primal, Dual and Primal-Dual Computer Codes for Minimum Cost Network Flow Problem. Networks 4, 191-212.
Glover, F., D. Karney, D. Klingman, and A. Napier. 1974. A Computational Study on Start Procedures, Basis Change Criteria, and Solution Algorithms for Transportation Problem. Man. Sci. 20, 793-813.

Glover, F., and D. Klingman. 1976. Network Applications in Industry and Government. AIIE Transactions 9, 363-376.

Glover, F., D. Klingman, J. Mote, and D. Whitman. 1979. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. Applications of Management Science 3, 109-175.

Glover, F., D. Klingman, J. Mote, and D. Whitman. 1984. A Primal Simplex Variant for the Maximum Flow Problem. Naval Res. Logis. Quart. 31, 41-61.

Glover, F., D. Klingman, and N. Phillips. 1985. A New Polynomially Bounded Shortest Path Algorithm. Oper. Res. 33, 65-73.

Glover, F., D. Klingman, N. Phillips, and R.F. Schneider. 1985. New Polynomial Shortest Path Algorithms and Their Computational Attributes. Man. Sci. 31, 1106-1128.

Glover, F., D. Klingman, and J. Stutz. 1974. Augmented Threaded Index Method for Network Optimization. INFOR 12, 293-298.
Goldberg, A.V. 1985. A New Max-Flow Algorithm. Technical Report MIT/LCS/TM-291, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Goldberg, A.V., S.A. Plotkin, and E. Tardos. 1988. Combinatorial Algorithms for the Generalized Circulation Problem. Research Report, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Goldberg, A.V., and R.E. Tarjan. 1986. A New Approach to the Maximum Flow Problem. Proc. 18th ACM Symp. on the Theory of Comput., 136-146. To appear in J. ACM.

Goldberg, A.V., and R.E. Tarjan. 1987. Solving Minimum Cost Flow Problem by Successive Approximation. Proc. 19th ACM Symp. on the Theory of Comp., 7-18.
Goldberg, A.V., and R.E. Tarjan. 1988a. Solving Minimum Cost Flow Problem by Successive Approximation. (A revision of Goldberg and Tarjan [1987].) To appear in Math. Oper. Res.

Goldberg, A.V., and R.E. Tarjan. 1988b. Finding Minimum-Cost Circulations by Canceling Negative Cycles. Proc. 20th ACM Symp. on the Theory of Comp., 388-397.

Golden, B. 1988. Controlled Rounding of Tabular Data for the Census Bureau: An Application of LP and Networks. Seminar given at the Operations Research Center, M.I.T., Cambridge, MA.

Golden, B.L., and T.L. Magnanti. 1977. Deterministic Network Optimization: A Bibliography. Networks 7, 149-183.

Goldfarb, D. 1985. Efficient Dual Simplex Algorithms for the Assignment Problem. Math. Prog. 33, 187-203.
Goldfarb, D., and M.D. Grigoriadis. 1986. A Computational Comparison of the Dinic and Network Simplex Methods for Maximum Flow. In B. Simeone et al. (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13, 83-124.

Goldfarb, D., J. Hao, and S. Kai. 1986. Efficient Shortest Path Simplex Algorithms. Research Report, Department of Operations Research and Industrial Engineering, Columbia University, New York, NY.

Goldfarb, D., J. Hao, and S. Kai. 1987. Anti-Stalling Pivot Rules for the Network Simplex Algorithm. Research Report, Department of Operations Research and Industrial Engineering, Columbia University, New York, NY.

Goldfarb, D., and J. Hao. 1988. A Primal Simplex Algorithm that Solves the Maximum Flow Problem in At Most nm Pivots and O(n²m) Time. Technical Report, Department of Operations Research and Industrial Engineering, Columbia University, New York, NY.
Goldfarb, D., and J.K. Reid. 1977. A Practicable Steepest Edge Simplex Algorithm. Math. Prog. 12, 361-371.

Gomory, R.E., and T.C. Hu. 1961. Multi-Terminal Network Flows. J. of SIAM 9, 551-570.

Gondran, M., and M. Minoux. 1984. Graphs and Algorithms. Wiley-Interscience.
Grigoriadis, M.D. 1986. An Efficient Implementation of the Network Simplex Method. Math. Prog. Study 26, 83-111.

Grigoriadis, M.D. 1988. Personal Communication.

Grigoriadis, M.D., and T. Hsu. 1979. The Rutgers Minimum Cost Network Flow Subroutines. SIGMAP Bulletin of the ACM 26, 17-18.

Gusfield, D. 1987. Very Simple Algorithms and Programs for All Pairs Network Flow Analysis. Research Report No. CSE-87-1, Dept. of Computer Science and Engineering, University of California, Davis, CA.

Gusfield, D., C. Martel, and D. Fernandez-Baca. 1985. Fast Algorithms for Bipartite Network Flow. Technical Report No. YALEU/DCS/TR-356, Yale University, New Haven, CT.
Hamacher, H. 1979. Numerical Investigations on the Maximal Flow Algorithm of Karzanov. Computing 22, 17-29.

Hassin, R., and D.B. Johnson. 1985. An O(n log² n) Algorithm for Maximum Flow in Undirected Planar Networks. SIAM J. Comput. 14, 612-624.

Hausman, D. 1978. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 160. Springer-Verlag.
Helgason, R.V., and J.L. Kennington. 1977. An Efficient Procedure for Implementing a Dual-Simplex Network Flow Algorithm. AIIE Trans. 9, 63-68.

Hitchcock, F.L. 1941. The Distribution of a Product from Several Sources to Numerous Facilities. J. Math. Phys. 20, 224-230.

Hoffman, A.J., and H.M. Markowitz. 1963. A Note on Shortest Path, Assignment, and Transportation Problems. Naval Res. Log. Quart. 10, 375-379.

Hopcroft, J.E., and R.M. Karp. 1973. An n^(5/2) Algorithm for Maximum Matching in Bipartite Graphs. SIAM J. of Comp. 2, 225-231.

Hu, T.C. 1963. Multicommodity Network Flows. Oper. Res. 11, 344-360.
Hu, T.C. 1969. Integer Programming and Network Flows. Addison-Wesley.

Hung, M.S. 1983. A Polynomial Simplex Method for the Assignment Problem. Oper. Res. 31, 595-600.

Hung, M.S., and W.O. Rom. 1980. Solving the Assignment Problem by Relaxation. Oper. Res. 28, 969-982.

Imai, H. 1983. On the Practical Efficiency of Various Maximum Flow Algorithms. J. Oper. Res. Soc. Japan 26, 61-82.

Imai, H., and M. Iri. 1984. Practical Efficiencies of Existing Shortest-Path Algorithms and a New Bucket Algorithm. J. of the Oper. Res. Soc. Japan 27, 43-58.

Iri, M. 1960. A New Method of Solving Transportation-Network Problems. J. Oper. Res. Soc. Japan 3, 27-87.
Iri, M. 1969. Network Flows, Transportation and Scheduling. Academic Press.

Itai, A., and Y. Shiloach. 1979. Maximum Flow in Planar Networks. SIAM J. Comput. 8, 135-150.

Jensen, P.A., and W. Barnes. 1980. Network Flow Programming. John Wiley & Sons.

Jewell, W.S. 1958. Optimal Flow Through Networks. Interim Technical Report No. 8, Operations Research Center, M.I.T., Cambridge, MA.

Jewell, W.S. 1962. Optimal Flow Through Networks with Gains. Oper. Res. 10, 476-499.
Johnson, D.B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks. J. ACM 24, 1-13.

Johnson, D.B. 1977b. Efficient Special Purpose Priority Queues. Proc. 15th Annual Allerton Conference on Comm., Control and Computing, 1-7.

Johnson, D.B. 1982. A Priority Queue in Which Initialization and Queue Operations Take O(log log D) Time. Math. Sys. Theory 15, 295-309.
Johnson, D.B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks in O(n^(3/2) log n) Time. In Proc. of the 20th Annual Allerton Conference on Comm., Control, and Computing, Univ. of Illinois, Urbana-Champaign, IL.

Johnson, E.L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-624.

Jonker, R., and T. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.
Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.

Kantorovich, L.V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University, 68 pp. Translated in Man. Sci. 6 (1960), 366-422.

Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Comp., 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.
Karzanov, A.V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems, Vol. 128. Springer-Verlag.

Kelton, W.D., and A.M. Law. 1978. A Mean-time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J.L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.

Kennington, J.L., and R.V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.
Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.

Klincewicz, J.G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.
Koopmans, T.C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, DC. Also reprinted as supplement to Econometrica 17 (1949).

Kuhn, H.W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Log. Quart. 2, 83-97.

Lawler, E.L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.

Magnanti, T.L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-214.
Magnanti, T.L., and R.T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Trans. Sci. 18, 1-56.

Malhotra, V.M., M.P. Kumar, and S.N. Maheshwari. 1978. An O(|V|³) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C.V. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report, Dept. of Electrical and Computer Engineering, University of California, Davis, CA.

McGinnis, L.F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer-Verlag.
Meyer, R.R. 1979. Two Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R.R., and C.Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.

Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. Oper. Res. 18, 377-387.

Minoux, M. 1985. Network Synthesis and Optimum Network Design Problems: Models, Solution Methods and Applications. Technical Report, Laboratoire MASI, Universite Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987. Network Synthesis and Dynamic Network Optimization. Annals of Discrete Mathematics 31, 283-324.
Minty, G.J. 1960. Monotone Networks. Proc. Roy. Soc. London 257 Series A, 194-212.

Moore, E.F. 1957. The Shortest Path through a Maze. In Proceedings of the International Symposium on the Theory of Switching Part II; The Annals of the Computation Laboratory of Harvard University 30, Harvard University Press, 285-292.
Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.

Mulvey, J. 1978b. Testing a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K.G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G.L., and L.A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transshipment Problem. Man. Sci. 2, 276-285.
Orlin, J.B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.

Orlin, J.B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report No. 1615-84, Sloan School of Management, M.I.T., Cambridge, MA.

Orlin, J.B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J.B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. 20th ACM Symp. on the Theory of Comp., 377-387.

Orlin, J.B., and R.K. Ahuja. 1987. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Orlin, J.B., and R.K. Ahuja. 1988. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper No. OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.
Papadimitriou, C.H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1974. Implementation and Efficiency of Moore-Algorithms for the Shortest Route Problem. Math. Prog. 7, 212-222.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.
Phillips, D.T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.

Pollack, M., and W. Wiebenson. 1960. Solutions of the Shortest-Route Problem - A Review. Oper. Res. 8, 224-230.

Potts, R.B., and R.M. Oliver. 1972. Flows in Transportation Networks. Academic Press.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In U. Pape (ed.), Discrete Structures and Algorithms. Carl Hansen, Munich, 181-191.
Rockafellar, R.T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.

Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Unpublished Ph.D. Dissertation, Carleton University, Ottawa, Canada.

Rothfarb, B., N.P. Shein, and I.T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.

Sheffi, Y. 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall.
Shiloach, Y. 1978. An O(nI log²(I)) Maximum Flow Algorithm. Technical Report STAN-CS-78-702, Computer Science Dept., Stanford University, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n² log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D.D., and R.E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Smith, D.K. 1982. Network Optimisation Practice: A Computational Guide. John Wiley & Sons.
Srinivasan, V., and G.L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for Primal Transportation Algorithm. J. ACM 20, 194-213.

Swamy, M.N.S., and K. Thulsiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M.M., N. Deo, and J.S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.
Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Disc. Math. 4, 83-87.

Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R.E. 1983. Data Structures and Network Algorithms. SIAM, Philadelphia, PA.
Tarjan, R.E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.

Tarjan, R.E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R.E. 1987. Personal Communication.

Tarjan, R.E. 1988. Personal Communication.

Tomizava, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.
Truemper, K. 1977. On Max Flow with Gains and Pure Min-Cost Flows. SIAM J. Appl. Math. 32, 450-456.

Vaidya, P. 1987. An Algorithm for Linear Programming which Requires O(((m+n)n² + (m+n)^1.5 n)L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Comp., 29-38.

Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.
Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Vol. 197. Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Vol. 243. Springer-Verlag.

Wagner, R.A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.

Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.
Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Departmente de Industrias Report No. 2, Universidad de Chile-Sede Occidente, Chile.
Whiting, P.D., and J.A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J.W.J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.

Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.