EE 220 Economic Operation
Economic Operation of Power System
• One of the objectives of power system planners and operators is to minimize the cost of operating a power system
• A power system is composed of several components:
  – Generators
  – Transmission lines
  – Transformers
  – Capacitors, inductors
  – Other devices such as breakers, synchronous condensers, etc.
Economic Operation of Power System
• However, generator units make the major contribution to operating costs since fuel is needed
  – Fuel can be oil, coal, uranium, or natural gas
  – Fuel prices are volatile and dictated by market forces
  – Although hydro plants are cheaper to run, their availability is inferior to that of plants that use conventional fuel
Objectives of Economic Operation Study
• Optimize certain controllable power system variables to achieve a desired objective
• The common objectives are:
  – Minimize generator operating cost
  – Minimize copper loss (I²R)
  – Optimize transmission/distribution configuration (advanced)
Controllable Variables
• Generator active power output
• Generator reactive power output
• Transmission/distribution configuration through breaker configuration
• Status of power system components: ON/OFF
Optimization
• Refers to choosing the best element from some set of available alternatives [1]
• In the simplest case, this means solving problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set
• Although a “brute-force” method can be employed to find the optimal solution to a problem, a large-scale, realistic system has hundreds of variables that must be taken into consideration, making exhaustive search impractical
Optimization Techniques
• Conventional Optimization Methods
  – Unconstrained Optimization
  – Nonlinear Programming (NLP)
  – Linear Programming (LP)
  – Quadratic Programming (QP)
  – Generalized Reduced Gradient Method
  – Newton Method
  – Network Flow Programming (NFP)
  – Mixed Integer Programming (MIP)
  – Interior Point Programming
Optimization Techniques
• Intelligent Methods
  – Neural Networks (NN)
  – Evolutionary Programming (EP)
  – Particle Swarm Optimization (PSO)
Optimization Techniques
• Optimization with Uncertainties
  – Probabilistic Optimization
  – Fuzzy Set
  – Analytic Hierarchy Process
Conventional Optimization Methods
• Unconstrained Optimization
  – Serves as the basis for constrained optimization formulations
  – No constraints, i.e. transmission limits, generator limits
  – Approaches: gradient method, line search, Lagrange multiplier method
Conventional Optimization Methods
• Linear Programming
  – Linearization of nonlinear equations is necessary
  – Has two major components:
    • (1) Objective
    • (2) Constraints
  – Has reliable convergence
  – Very easy to formulate once linearization is performed
  – Very fast
  – However, due to the linearization, some nonlinear properties, i.e. line losses, introduce approximation inaccuracies
  – Its solution/precision is nevertheless generally acceptable for most applications
  – Trivia: The Philippine Wholesale Electricity Spot Market uses an LP solution to optimize schedules and derive prices
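As a minimal sketch of the two LP components (objective and constraints), the snippet below solves a tiny two-variable LP by enumerating the vertices of the feasible region; an LP optimum, when one exists, is attained at a vertex. The particular objective and constraints are invented for illustration and have nothing to do with WESM's actual formulation.

```python
from itertools import combinations

def solve_lp_2d(c_obj, constraints, tol=1e-9):
    """Maximize c_obj . (x, y) subject to a*x + b*y <= c for each (a, b, c)
    in constraints, by enumerating intersections of constraint boundaries."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:        # parallel boundaries: no unique intersection
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + tol for a, b, c in constraints):
            value = c_obj[0] * x + c_obj[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

# Illustrative problem: maximize 3x + 2y
# subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
constraints = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]
print(solve_lp_2d((3, 2), constraints))   # optimum at vertex (3, 1), value 11
```

Production LP solvers use the simplex or interior point methods rather than vertex enumeration, but the structure of the problem is the same.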
Conventional Optimization Methods
• Nonlinear Programming
  – Directly handles nonlinear equations in the problem solution
  – However, it requires a good approximation of a starting point (this aids in finding the global extreme points)
  – More accurate than LP since little or no information is lost
  – Generally slower than LP
  – More complicated to formulate
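To illustrate why NLP methods need a good starting point, the sketch below runs plain gradient descent on a nonconvex function with two local minima; different starting points converge to different extreme points. The function and step size are illustrative choices, not from the slides.

```python
def gradient_descent(grad, x0, lr=0.01, steps=1000):
    """Plain fixed-step gradient descent on a one-variable function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Nonconvex example: f(x) = x**4 - 3*x**2 + x has two local minima.
grad_f = lambda x: 4 * x**3 - 6 * x + 1

left  = gradient_descent(grad_f, x0=-1.0)   # converges to the minimum near -1.3
right = gradient_descent(grad_f, x0=+1.0)   # converges to the minimum near +1.1
print(left, right)
```

Only one of the two minima is the global one, so a poor starting guess can trap the method in an inferior solution.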
Conventional Optimization Methods
• Interior Point Programming
  – Can handle linear and nonlinear equations
  – Accuracy is greater than LP
  – Although it is harder to formulate
Intelligent Methods
• Non-traditional methods of finding an optimal solution
• Usually simulate a natural event or phenomenon
• Example of a natural event: evolution, which was used as a pattern for evolutionary algorithms (mutation, reproduction, selection, etc.)
• Usually used for academic work and research, since commercial systems utilize conventional methods
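The mutation-and-selection pattern mentioned above can be sketched with a minimal (1+1) evolution strategy: one parent, one mutated child per generation, and the better of the two survives. The cost function, mutation size, and generation count below are illustrative assumptions.

```python
import random

def one_plus_one_es(cost, x0, sigma=0.5, generations=300, seed=42):
    """Minimal (1+1) evolution strategy: one parent produces one mutated
    child per generation (mutation); the better survives (selection)."""
    rng = random.Random(seed)
    parent, parent_cost = x0, cost(x0)
    for _ in range(generations):
        child = parent + rng.gauss(0.0, sigma)   # mutation
        child_cost = cost(child)
        if child_cost < parent_cost:             # selection
            parent, parent_cost = child, child_cost
    return parent, parent_cost

# Illustrative cost with its minimum at x = 3.
best_x, best_cost = one_plus_one_es(lambda x: (x - 3.0) ** 2, x0=0.0)
print(best_x, best_cost)
```

Unlike the gradient-based methods, this needs only cost evaluations, no derivatives, which is part of the appeal of intelligent methods in research settings.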
Optimization with Uncertainties
• Several events/parameters are probabilistic/uncertain in nature
• For power systems: real-time demand is uncertain but can be forecasted
• This type of optimization considers several parameters as probabilistic inputs to determine a solution
• Probabilistic inputs are usually modeled using probability distribution functions (PDFs), i.e. the normal curve
• Usually helpful when analyzing possibilities and uncertainties
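A minimal Monte Carlo sketch of the idea: model demand with a normal PDF and estimate the probability that it exceeds available capacity. The mean, standard deviation, and capacity figures below are assumed values for illustration only.

```python
import random

def prob_demand_exceeds(capacity, mean, sd, samples=20000, seed=7):
    """Monte Carlo estimate of P(demand > capacity) when forecast demand
    is modeled as a normal PDF with the given (assumed) mean and sd."""
    rng = random.Random(seed)
    exceed = sum(rng.gauss(mean, sd) > capacity for _ in range(samples))
    return exceed / samples

# Assumed forecast: mean 100 MW, standard deviation 10 MW; capacity 110 MW.
p = prob_demand_exceeds(capacity=110, mean=100, sd=10)
print(p)   # close to the analytic normal tail probability of about 0.159
```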
Unconstrained Optimization
• An extreme point of a function f(X) defines either a maximum or a minimum of the function f
• A point X0 = (x1, …, xj, …, xn) is a maximum if
      f(X0 + h) ≤ f(X0)
  for all h = (h1, …, hj, …, hn) such that |hj| is sufficiently small for all j
• A point X0 = (x1, …, xj, …, xn) is a minimum if
      f(X0 + h) ≥ f(X0)
[Figure: a curve f(x) on the interval (a, b) with extreme points marked at x1, …, x6]
Necessary and Sufficient Conditions for Extrema
• Assume that the first and second partial derivatives of f(X) are continuous at every X
• A necessary condition for X0 to be an extreme point of f(X) is that
      ∇f(X0) = 0
• A sufficient condition for a stationary point X0 to be an extremum is that the Hessian matrix H = ∇²f(X) evaluated at X0 is
  – Positive definite when X0 is a minimum point
  – Negative definite when X0 is a maximum point
Example
Consider the function f(x1, x2, x3) = x1 + 2x3 + x2x3 − x1² − x2² − x3²
• The necessary condition:
      ∇f(X0) = 0
• Taking the partial derivatives:
      ∂f/∂x1 = 1 − 2x1 = 0
      ∂f/∂x2 = x3 − 2x2 = 0
      ∂f/∂x3 = 2 + x2 − 2x3 = 0
• Solving 3 equations in 3 unknowns:
      X0 = (1/2, 2/3, 4/3)
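As a quick numerical check of this example, the sketch below evaluates the gradient at X0 and tests the (constant) Hessian for negative definiteness by applying Sylvester's criterion to −H, which confirms that X0 is a maximum.

```python
def gradient(x1, x2, x3):
    """Gradient of f(x1, x2, x3) = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2."""
    return (1 - 2 * x1, x3 - 2 * x2, 2 + x2 - 2 * x3)

def is_positive_definite_3x3(m):
    """Sylvester's criterion: all leading principal minors are positive."""
    d1 = m[0][0]
    d2 = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    d3 = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return d1 > 0 and d2 > 0 and d3 > 0

x0 = (0.5, 2 / 3, 4 / 3)
print(gradient(*x0))                     # all components ~ 0: stationary point

# The Hessian of f is constant; H is negative definite iff -H is positive definite.
H = [[-2, 0, 0], [0, -2, 1], [0, 1, -2]]
neg_H = [[-v for v in row] for row in H]
print(is_positive_definite_3x3(neg_H))   # True, so X0 is a maximum
```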
Lagrange Method
• A method that provides a strategy for finding the maximum/minimum of a function subject to constraints
• Named after Joseph-Louis Lagrange, an Italian-born mathematician

Lagrange Equation
      L(x, λ) = f(x) + λ(g(x) − c)
• where λ is a Lagrange multiplier
Lagrange Method
• f(x): a function of the controllable variables, i.e. a cost function that depends on the amount of controllable output product
• g(x): a function representing a set of constraints, i.e. the maximum amount a controllable variable can deliver
Karush–Kuhn–Tucker (KKT) Optimality Conditions
• Necessary conditions for the solution of a nonlinear optimization problem to be optimal
• Named after Harold W. Kuhn and Albert W. Tucker, who published the conditions in 1951; William Karush had derived them earlier in his 1939 master's thesis
Karush–Kuhn–Tucker (KKT) Optimality Conditions

1. Stationarity:
      ∂L(x, λ, μ)/∂xi = 0,          i = 1, …, N

2. Equality constraints:
      gi(x) = 0,                    i = 1, …, Ng

3. Inequality constraints:
      hi(x) ≤ 0,                    i = 1, …, Nh

4. Complementary slackness:
      μi · hi(x) = 0,  μi ≥ 0,      i = 1, …, Nh
Karush–Kuhn–Tucker (KKT) Optimality Conditions
• Condition 1: the partial derivatives of the Lagrange function must equal zero at the optimum
• Conditions 2 and 3: restatements of the constraint conditions
• Condition 4: the complementary slackness condition; the slack (or excess) variable is equal to zero at the optimal condition
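The four conditions can be checked mechanically for a candidate point. The sketch below does so for a tiny illustrative problem (minimize x² subject to x ≥ 1, written as h(x) = 1 − x ≤ 0); the problem is invented here to keep the check readable.

```python
def kkt_check(x, mu, tol=1e-9):
    """Check the KKT conditions for the illustrative problem:
         minimize f(x) = x**2  subject to  h(x) = 1 - x <= 0  (i.e. x >= 1).
       Lagrangian: L(x, mu) = x**2 + mu * (1 - x)."""
    h = 1 - x
    stationarity = abs(2 * x - mu) < tol   # Condition 1: dL/dx = 2x - mu = 0
    feasibility  = h <= tol                # Condition 3: h(x) <= 0
    slackness    = abs(mu * h) < tol       # Condition 4: mu * h(x) = 0
    sign         = mu >= -tol              # Condition 4: mu >= 0
    return stationarity and feasibility and slackness and sign

print(kkt_check(x=1.0, mu=2.0))   # True: constraint active, multiplier positive
print(kkt_check(x=0.0, mu=0.0))   # False: x = 0 violates x >= 1
```

Here the constraint is binding at the optimum, so its multiplier is positive and the slackness product is zero, exactly as Condition 4 requires.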
Example
Consider two thermal plants feeding a given power demand. The fuel cost of each is related to its output power as follows:
      F1 = 4P1 + 0.01P1²
      F2 = 2P2 + 0.03P2²
The objective is to minimize the total cost of operation while satisfying the equality constraint
      P1 + P2 = PD
Example
The modified cost is
      L(P1, P2, λ) = 4P1 + 2P2 + 0.01P1² + 0.03P2² + λ(PD − P1 − P2)
The necessary conditions for minimization are
      ∂L(P1, P2, λ)/∂P1 = 4 + 0.02P1 − λ = 0
      ∂L(P1, P2, λ)/∂P2 = 2 + 0.06P2 − λ = 0
      ∂L(P1, P2, λ)/∂λ = PD − P1 − P2 = 0
Example
For this simple system we can eliminate λ and P2 to obtain a single equation in P1:
      0.08P1 + 2 − 0.06PD = 0

      PD      P1      P2      λ
      50      12.5    37.5    4.25
      100     50      50      5
      200     125     75      6.5
      250     162.5   87.5    7.25
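The elimination above can be wrapped in a short sketch that reproduces the table; the equation solved is the single equation in P1 derived on this slide.

```python
def dispatch(pd):
    """Economic dispatch of the two-plant example: solve
       0.08*P1 + 2 - 0.06*PD = 0 for P1, then back out P2 and lambda."""
    p1 = (0.06 * pd - 2) / 0.08
    p2 = pd - p1              # equality constraint P1 + P2 = PD
    lam = 4 + 0.02 * p1       # incremental cost, equal for both plants at optimum
    return p1, p2, lam

for pd in (50, 100, 200, 250):
    print(pd, dispatch(pd))
```

Note that λ, the common incremental cost, rises with demand, which is the classical equal-incremental-cost dispatch result.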