ARTIFICIAL INTELLIGENCE AND NEURAL NETWORK APPLICATIONS IN POWER SYSTEMS
Document By SANTOSH BHARADWAJ REDDY Email:
[email protected]
ABSTRACT:
The electric power industry is currently undergoing an unprecedented reform, ascribable to one of the most exciting and potentially profitable recent developments: the increasing usage of artificial intelligence techniques. The artificial neural network approach has attracted a number of applications, especially in the field of power systems, since it is a model-free estimator. Neural networks provide solutions to very complex and nonlinear problems. Nonlinear problems, like load forecasting, cannot be solved with standard algorithms but can be solved with a neural network with remarkable accuracy. Modern interconnected power systems often consist of thousands of pieces of equipment, each of which may have an effect on the security of the system. Neural networks have shown great promise for their ability to quickly and accurately predict system security when trained with data collected from a small subset of system variables.
The intention of this paper is to give an overview of the application of artificial intelligence and neural network (NN) techniques in power systems to prognosticate the load on a power plant and contingencies in case of any unexpected outage. In this paper we present the key concepts of artificial neural networks, their history, their imitation of the brain's architecture and neurons, and finally the applications (load forecasting and contingency analysis). The applications of artificial intelligence in the areas of load forecasting by error control with the backpropagation learning algorithm, and contingency analysis based on a quality index, are perspicuously explained.
INTRODUCTION:
Modern power systems are required to generate and supply high quality electric energy to customers. To achieve this requirement, computers have been applied to power system planning, monitoring and control. Power system application programs for analysing system behaviour are stored in computers. In the planning stage of a power system, system analysis programs are executed repeatedly. Engineers adjust and modify the program input data according to their experience and heuristic knowledge about the system until satisfactory plans are determined. For sophisticated approaches to system planning, methodologies and techniques that incorporate the practical knowledge of planning engineers into programs, in addition to the numerical analysis programs, need to be developed.
In the area of power system monitoring and control, computer based Energy Management Systems are now widely used in energy control centers. The abnormal modes of system operation may be caused by network faults, active and reactive power imbalances, or frequency deviations. An unplanned operation may lead to a malfunction or a complete system blackout. Under these emergency situations, power systems are restored back to the normal state according to decisions made by experienced power system operation engineers. There is also a need to develop fast and efficient methods for the prediction of abnormal power system behaviour.
Artificial intelligence (AI) has provided techniques for encoding and reasoning with declarative knowledge. The advent of neural networks (NNs), in addition, provides neural network modules which can be executed in an online environment. These new techniques supplement conventional computing techniques and methods for solving problems of power system planning, operation and control.

Areas of Applications:
Possible applications of artificial intelligence in power system planning and operation were investigated by power utilities and researchers. In the last decade, many artificial intelligence systems and expert systems have been built for solving problems in different areas within the field of power systems. These areas are summarized below.
System planning: Transmission planning and design, Generation expansion, Distribution planning.
System Analysis: Load flow, Transient stability.
Operational Planning: Unit commitment, Maintenance scheduling, Load forecasting.
System Operation & Monitoring: Alarm processing, Fault diagnosis, Substation monitoring, System and network restoration, Load shedding, Voltage / reactive power control, Contingency selection, Network switching, Voltage collapse.

1. Artificial Neural Networks
1.1 What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
The major breakthrough in the field of ANNs occurred with the invention of the backpropagation algorithm, which enabled the design and learning techniques of multilayered neural networks. Since then the development of ANNs, and the areas of application in which they are applied, have been thriving.

1.3 Biological inspiration:
The brain is principally composed of a very large number (circa 10,000,000,000) of neurons, massively interconnected (with an average of several thousand interconnects per neuron, although this varies enormously). Each neuron is a specialized cell which can propagate an electrochemical signal. The neuron has a branching input structure (the dendrites), a cell body, and a branching output structure (the axon). The axons of one cell connect to the dendrites of another via a synapse. When a neuron is activated, it fires an electrochemical signal along the axon. This signal crosses the synapses to other neurons, which may in turn fire. A neuron fires only if the total signal received at the cell body from the dendrites exceeds a certain level (the firing threshold).
To capture the essence of biological neural systems, an artificial neuron is defined as follows: It receives a number of inputs (either from original data, or from the output of other neurons in the neural network). Each input comes via a connection that has a strength (or weight); these weights correspond to synaptic efficacy in a biological neuron. Each neuron also has a single threshold value. The weighted sum of the inputs is formed, and the threshold subtracted, to compose the activation of the neuron (also known as the post-synaptic potential, or PSP, of the neuron). The activation signal is passed through an activation function (also known as a transfer function) to produce the output of the neuron.
If a network is to be of any use, there must be inputs (which carry the values of variables of interest in the outside world) and outputs (which form predictions, or control signals). The input, hidden and output neurons need to be connected together.
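As a concrete illustration of the artificial neuron just described, the short Python sketch below forms the weighted sum of the inputs, subtracts the threshold to obtain the activation (PSP), and passes it through a sigmoid transfer function. The weights, threshold and input values are arbitrary illustrative numbers, not taken from this paper.

    import math

    def neuron_output(inputs, weights, threshold):
        # Weighted sum of inputs minus the threshold gives the activation (PSP).
        activation = sum(x * w for x, w in zip(inputs, weights)) - threshold
        # Sigmoid transfer function maps the activation to the neuron output.
        return 1.0 / (1.0 + math.exp(-activation))

    # Example: three inputs with illustrative weights and a threshold of 0.5.
    print(neuron_output([0.2, 0.7, 1.0], [0.4, -0.1, 0.6], 0.5))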
1.4 Neural networks versus conventional computers
Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Neural networks process information in a similar way to the human brain. Neural networks learn by example; they cannot be programmed to perform a specific task. On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted to a high level language program and then into machine code that the computer can understand. Neural networks and conventional algorithmic computers are not in competition but complement each other. Even more, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
1.5 Features
1. High computational rates due to the massive parallelism.
2. Fault tolerance.
3. Training: the network adapts itself based on the information received from the environment.
4. Programmed rules are not necessary.
5. Primitive computational elements.
1.6 The Learning Process
Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process global information may be required.
Unsupervised learning uses no external teacher and is based only upon local information. It is also referred to as self-organisation.

1.7 Transfer Function
The behaviour of an ANN depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
Linear (or ramp)
Threshold
Sigmoid
For linear units, the output activity is proportional to the total weighted output. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
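The three categories of transfer function can be written down directly. The following small Python sketch (illustrative only; the slope, threshold level and test input are arbitrary assumptions) implements the linear, threshold and sigmoid units described above.

    import math

    def linear(x, slope=1.0):
        # Output proportional to the total weighted input.
        return slope * x

    def threshold(x, theta=0.0):
        # Output is one of two levels, depending on whether the input exceeds theta.
        return 1.0 if x > theta else 0.0

    def sigmoid(x):
        # Output varies continuously but not linearly with the input.
        return 1.0 / (1.0 + math.exp(-x))

    for f in (linear, threshold, sigmoid):
        print(f.__name__, f(0.8))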
2. APPLICATIONS
App1: Power Systems Load Forecasting
A common and popular problem that has an important role in the economic, financial, development, expansion and planning activities of power systems is load forecasting. Generally, most of the papers and projects in this area are categorized into three groups:
- Short-term load forecasting (STLF) over an interval ranging from an hour to a week, which is important for various applications such as unit commitment, economic dispatch, energy purchase scheduling and real-time control.
- Mid-term load forecasting (MTLF), with a range from one month to five years, used to purchase enough fuel for power plants after electricity tariffs are calculated.
- Long-term load forecasting (LTLF), covering from 5 to 20 years or more, used by planning engineers and economists to determine the type and the size of generating plants that minimize both fixed and variable costs.
A lot of studies have been done on short-term load forecasting with different methods. These methods may be classified as follows: regression models, Kalman filtering, Box & Jenkins models, expert systems, fuzzy inference, neuro-fuzzy models and chaos time series analysis. Some of these methods have major limitations, such as neglecting some forecasting attribute conditions, difficulty in finding a functional relationship between all attribute variables and the instantaneous load demand, difficulty in upgrading the set of rules that govern an expert system, and inability to adjust themselves to rapid nonlinear system-load changes. NNs can be used to solve these problems. Most of the projects using NNs have successfully considered many factors, such as weather conditions, holidays, weekends and special sport match days, in the forecasting model. This is because of the learning ability of NNs with many input factors.
2.1.1 Overview of STLF Techniques:-
A wide variety of techniques/algorithms for STLF have been reported in the literature. These procedures typically make use of two basic models: peak load models and load shape models.
Standard Load Concept (Load Shape Model):- The load forecasting is divided into two general parts: the peak load model and the load shape model. The former deals with daily or weekly peak load modelling and the latter describes the load as a discrete time series over the forecasting intervals.
Standard Load:- The standard load curve is produced once a day. It needs rescaling over time. The standard load characterizes the base load. It is calculated by using historical load data. The standard load calculation can be divided into two parts. The first one makes an average using all common days in the same period; the holidays are included with Saturdays and Mondays. The second part investigates the particular characteristic of each day of the week, separately. For this a simple weighted moving average is made.
Residual/Deviation Load:- The residual load is used to represent the most recent variation of the load. This value contains information for the last 3 hours. Autoregressive and exponential smoothing are the most common methods used to calculate the deviation of the load value.
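A minimal sketch of the two calculations described above, assuming hourly load values grouped by weekday: the standard load is taken as a weighted moving average over the most recent occurrences of the same weekday, and the residual/deviation load is tracked with simple exponential smoothing. The weights, smoothing factor and data are illustrative assumptions, not values given in this paper.

    def standard_load(history, weights=(0.5, 0.3, 0.2)):
        # history: list of 24-hour load profiles for the same weekday, newest first.
        # Weighted moving average over the most recent profiles, hour by hour.
        return [sum(w * day[h] for w, day in zip(weights, history))
                for h in range(24)]

    def residual_load(observed, standard, alpha=0.4):
        # Exponential smoothing of the deviation between observed and standard load
        # over the most recent hours (e.g. the last 3 hours).
        deviation = 0.0
        for obs, std in zip(observed, standard):
            deviation = alpha * (obs - std) + (1 - alpha) * deviation
        return deviation

    # Illustrative data: three past Tuesdays (flat profiles) and the last 3 observed hours.
    past_tuesdays = [[100.0] * 24, [95.0] * 24, [90.0] * 24]
    std = standard_load(past_tuesdays)
    print(std[0], residual_load([104.0, 106.0, 103.0], std[:3]))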
2.1.2 Artificial Neural Network based short term load forecasting:
The development of an ANN based STLF model is divided into two processes, the "learning phase" and the "recall phase". In the learning phase, the neurons are trained using historical input and output data, and adjustable weights are gradually optimized to minimize the difference between the computed and desired output. The ANN allows outputs to be calculated based on some form of experience, rather than on understanding the connection between input and output (or cause and effect). In the recall phase the new input data is applied to the network and its outputs are computed and evaluated for testing purposes.
In the ANN based STLF model, a layered ANN structure (input layer, hidden layer, output layer) is used. In this method the weights are calculated by a learning process using error propagation in parallel distributed processing. The STLF problem is formulated with the past data as the input data and the latest data as the desired output instances for training the network. An initial input data set is presented to ANNSTLF, which adjusts the weight values for a minimum error. Following this, a new input data set is presented and the weight values are adjusted in accordance. The process finishes when the difference between the target output and the found output for all the input sets is close to zero; overloading of the network is avoided by terminating learning once a performance level is reached.
The feedforward Multilayer Perceptron (MLP) neural network model is used for implementing the STLF model (ANNSTLF). Fig. 5 shows a MLP with a single hidden layer. The advantage of this model is that it is able to learn highly non-linear mappings. The MLP model is trained by the standard backpropagation training algorithm developed by Rumelhart.
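A compact sketch of the learning and recall phases described above, assuming (for illustration only) an MLP with 24 inputs (one per hour of the previous day), one hidden layer and 24 outputs, trained by gradient descent on synthetic data. The layer sizes, learning rate and data are assumptions, not the ANNSTLF configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 24, 12, 24          # assumed sizes, one unit per hour
    W1 = rng.normal(0, 0.01, (n_in, n_hid))  # small initial weights
    W2 = rng.normal(0, 0.01, (n_hid, n_out))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x):
        h = sigmoid(x @ W1)                  # hidden layer activations
        return h, sigmoid(h @ W2)            # network output (forecast load profile)

    # Synthetic "yesterday -> today" load profiles, scaled into (0, 1).
    X = rng.uniform(0.2, 0.8, (100, n_in))
    Y = np.clip(X + rng.normal(0, 0.02, X.shape), 0, 1)

    lr = 0.5
    for _ in range(2000):                    # learning phase: adjust the weights
        h, y = forward(X)
        err = Y - y
        d2 = err * y * (1 - y)               # output-layer delta (sigmoid derivative)
        d1 = (d2 @ W2.T) * h * (1 - h)       # hidden-layer delta, backpropagated
        W2 += lr * h.T @ d2 / len(X)
        W1 += lr * X.T @ d1 / len(X)

    _, forecast = forward(X[:1])             # recall phase: apply new input data
    print(np.abs(forecast - Y[:1]).mean())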
2.1.3 Multilayer Perceptron and its application in load forecasting:-
The multilayer Perceptron and the associated backpropagation algorithm proposed a sound method to train networks having more than two layers of neurons. The learning rule is known as backpropagation, which is a gradient descent technique with backward error (gradient) propagation, as depicted in Fig. 6. The backpropagation network in essence learns a mapping from a set of input patterns (e.g. extracted features) to a set of output patterns (e.g. class information). This network can be designed and trained to accomplish a wide variety of mappings; a multilayer network is capable of approximating arbitrary mappings given a net of adequate size. This ability comes from the nodes in the hidden layer or layers of the network, which learn to respond to features found in the input pattern. The features recognized or extracted by the hidden units (nodes) correspond to the correlation of activity among different input units. As the network is trained with different examples, the network has the ability to generalize over similar features found in different patterns. The hidden units (nodes) must be trained to extract a sufficient set of general features applicable to the task.

Error Backpropagation:-
The backpropagation (or backprop) algorithm is a generalization of the Widrow-Hoff error correction rule. In the Widrow-Hoff technique an error, which is the difference between what the output is and what it is supposed to be, is formed, and the synaptic strength is changed in proportion to the error times the input signal, in a direction which reduces the error. The direction of change in weights is such that the error will reduce in the direction of the gradient (the direction of most rapid change of the error). This type of learning is also called gradient search. In the case of multilayer networks, the problem is much more difficult. The name backpropagation comes from the fact that the error (gradient) of the hidden units is derived by propagating backward the errors associated with the output units, since the target values for the hidden units are not given; it is defined so as to obtain the values of the desired output at the hidden layer.
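As a worked illustration of the Widrow-Hoff rule underlying backpropagation, the sketch below changes each weight in proportion to the error times the input signal; the learning rate and the example numbers are assumptions chosen only for illustration.

    def widrow_hoff_update(weights, inputs, target, output, eta=0.1):
        # The error is the difference between the desired and the actual output;
        # each weight moves in proportion to (error x input), reducing the error.
        error = target - output
        return [w + eta * error * x for w, x in zip(weights, inputs)]

    print(widrow_hoff_update([0.2, -0.4], [1.0, 0.5], target=1.0, output=0.3))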
Choice of activation function:-
The most common activation function used in the multilayer perceptron is the sigmoid. The equation of the sigmoid function is written as
f(x) = 1 / (1 + e^(-x)).

2.1.4 The Application of ANN to STLF & Results:-
The ANNSTLF implements a multilayer feedforward neural network which was trained by using the backpropagation training algorithm with the sigmoid activation function described above. Naturally, 24 hourly data points lead to 24 input nodes in the MLP model; here 2 hidden layers are considered. MSEB data for the period Oct 94 to June 95, i.e. 35 weeks, was utilized for the development and implementation of the network software. The backpropagation algorithm with the MLP model of Artificial Neural Network (ANN) is developed for the problem of Short Term Load Forecasting (STLF) with a lead time of at least 24 hours. The best performance was obtained for the load forecasting for Tuesday, which gives maximum and average percentage errors of 2.00% and 0.20% respectively. This comes very close to the precision obtained by the human forecaster. The tuning of the gain and momentum terms and the selection of the weight and threshold values play a key role in the convergence of the network. High values of the weights lead to divergence, and generally small values of the order of 10^-2 yield better results.
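The remarks on tuning can be made concrete with a small sketch of a backpropagation weight update that includes a momentum term and the small random initialization of the order of 10^-2 mentioned above. The learning rate (gain), momentum coefficient and array shapes are illustrative assumptions rather than the values used in the study.

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(0.0, 1e-2, (24, 12))    # small initial weights, order of 10^-2
    velocity = np.zeros_like(W)

    def update(W, velocity, grad, gain=0.2, momentum=0.8):
        # Gradient step scaled by the gain, plus a momentum term that reuses
        # part of the previous update to smooth and speed up convergence.
        velocity = momentum * velocity - gain * grad
        return W + velocity, velocity

    grad = rng.normal(0.0, 1e-3, W.shape)  # placeholder gradient for illustration
    W, velocity = update(W, velocity, grad)
    print(float(np.abs(W).mean()))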
App2: Power System Contingency Analysis
Contingency analysis and risk assessment are important tasks for the safe operation of electrical energy networks. During the steady state study of an electrical network, any one of the possible contingencies can have either no effect, a serious effect, or even fatal results for the network safety, depending on a given network operating state. Load flow analysis can be used as a crisp technique for contingency risk assessment. However, performing the necessary load flow analysis studies at run time is a tedious and time consuming operation. An alternative solution is the off-line training and the run-time application of artificial neural networks. This article aims at describing how artificial neural networks can be used to bypass the traditional load flow cycle, resulting in significantly faster computation times for online contingency analysis. A discussion of the efficiency of the proposed techniques is also included.

2.2.1 What is contingency in a power system?
A power system contingency is defined as a disturbance that can occur in the network and can result in the possible loss of parts of the network, like buses, lines, transformers, or power units, in any of the network areas. Load flow analysis is an adequate means for studying the effect of a possible contingency on a given operating point of the network. It is often the case that experienced power flow engineers, involved in the operation of a given system, can effectively guess the effect of a contingency without the support of numerical computations. This intuition of the operators is useful in supporting the initial selection of a list of possible contingencies, which will then be analysed using the technique described here.
2.2.2 System Architecture
A suitable way of studying the effects of contingencies on an electrical network is through the definition of representative operating points and the creation of a relevant database, in which parameters relating to these operating points are stored as they have been measured directly through network snapshots. Once a number of operating points is simulated, a list of contingencies to be studied is formed. Each contingency is applied on all operating points found in the database, and then a power flow solution is attempted on the contingency-case network. According to the results of the power flow solution, the contingency applied on the specific operating point can be ranked as "innocent", "violating", or "diverging / serious". The pre-contingency operating point parameters, various operating point indices and metrics, the contingency and the power flow result are then stored in a table per contingency. This contingency table constitutes a set of features and tuples that can be considered as suitable neural network input layer data elements, if selected in any combination and after being statistically normalized. The power flow solution classifying any contingency for any operating point is the output layer value of the neural network.
Neural network training is computer intensive work that needs, however, to be done only once. As soon as the neural network is trained for a contingency, the predictions about the effects of that contingency on any operating point can easily be deduced. The efficiency of the predictions depends on various factors, such as the quality and the quantity of the training features, and the type, complexity and connectivity of the neural network.
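To make the architecture concrete, the sketch below is an assumption about how such a per-contingency classifier might look, not the authors' exact configuration: it maps a vector of normalized operating-point features to one of the three ranks produced by the power flow solution, using the same kind of MLP discussed earlier, here via scikit-learn for brevity.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Illustrative, randomly generated operating-point features (already normalized),
    # e.g. load level, aggregate reactive generation, voltage stability index, margin index.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    # Ranks from the power flow solution: 0 = innocent, 1 = violating, 2 = diverging/serious.
    y = rng.integers(0, 3, size=300)

    # One small MLP per contingency; the hidden layer size is an assumption.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    # Run-time use: classify a new operating point without running a power flow.
    print(clf.predict(X[:3]))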
2.2.4 Neural Network Input Feature Selection
A wide range of electrical network parameters can be used for describing the network state. Some of them can be the network load level expressed as a percentage of the maximal network load, the number of lines, the cumulative rating of all lines, the cumulative reactive generation, reactive load, active load, active generation, apparent power, etc. In recent references in the bibliography there are more elaborate aggregates that yield better results when applied, such as the active power margin index (expressed as the fraction of the flowing aggregate apparent power over the aggregate MVA line transmission limits) and the voltage stability index. The voltage stability index is computed as "the sensitivities of the total reactive power generation to a known reactive power consumption", known as 'reactive power dispatch coefficients'.
Neural networks can be trained with any number of input features. The neural network training process can selectively overweight the most salient features and underweight the least significant ones. However, the selection procedure is time consuming for the training of the neural network, and after the training is complete it is not always obvious which of the input nodes are of greater importance. Furthermore, the least important input layer nodes may add noise to the neural network training process. Bearing this in mind, a pre-selection of the neural network input nodes is of great use. This can be achieved through the use of statistical methods. The statistical methods that apply in the procedure of the selection of features are used in classification theory. The classification of a set of training examples by two features into two classes is considered to be better when the sub-populations look different. The simplest test proposed is the test of separating two classes using just the means, where M1 and M2 are the vectors of feature means for class 1 and class 2, and C1^-1 and C2^-1 are the inverses of the covariance matrices for class 1 and class 2 respectively. A feature selection test from means and variances is also proposed, in which A and B are values of the same feature measured for classes 1 and 2, n1 and n2 are the corresponding numbers of cases, and sig is a significance level. In [4] a further measure for filtering features separating two classes is also proposed.
For reasons of simplicity, a combination of bus and line losses only has been considered as a constituent element of a contingency under study. The four most salient features found were the aggregate reactive power generation, the voltage stability index, the aggregate MVA power flow and the real power margin index. This set of selected features has been used for the training and testing of the neural networks subsequently built.
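The kind of means-and-variances separability screening described above can be sketched as follows. Since the paper's exact formula is not reproduced in this text, the score below is a generic Fisher-style ratio (squared difference of class means over the sum of class variances), offered only as an illustrative stand-in.

    import numpy as np

    def separability_score(a, b):
        # a, b: values of one candidate feature for class 1 and class 2.
        # Larger scores mean the two sub-populations look more different.
        a, b = np.asarray(a, float), np.asarray(b, float)
        return (a.mean() - b.mean()) ** 2 / (a.var(ddof=1) + b.var(ddof=1))

    innocent = [0.10, 0.12, 0.09, 0.11, 0.13]   # feature values, class "innocent"
    serious  = [0.30, 0.28, 0.35, 0.31, 0.27]   # feature values, class "non-innocent"
    print(separability_score(innocent, serious))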
2.2.5 Quality index
The quality index is a qualitative measure of the classification power of the neural network. It is an index that has been calculated for all simulations and applies the idea that, within the three classes of contingency states, the major difference can be considered to occur between the two categories of contingencies: "innocent" and "non-innocent" possible contingencies. In order to compute this "quality index" (QA) the following formula has been used, where ai,j is the i-th element of the j-th column of the confusion matrix Ai,j. The confusion matrix Ai,j is a matrix of frequencies. For each element ai,j of the matrix, the i index refers to predicted values, while the j index refers to real values. The values range from one to three, denoting the three possible contingency cases: one in case of non-convergence / a potentially serious contingency, two in case of MVA and voltage violations, and three in case of an innocent contingency.
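The quality index is computed from the confusion matrix described above. Because the exact formula is not reproduced in this text, the sketch below shows one plausible reading (the fraction of correctly classified cases, i.e. the normalized trace of the 3x3 confusion matrix), purely as an illustration.

    import numpy as np

    def quality_index(confusion):
        # confusion[i][j]: frequency of cases predicted as class i whose real class is j.
        # Classes: 1 = non-convergence/serious, 2 = MVA/voltage violations, 3 = innocent.
        A = np.asarray(confusion, float)
        return np.trace(A) / A.sum()   # illustrative QA: share of correct classifications

    A = [[40,  5,  2],
         [ 6, 30,  4],
         [ 1,  3, 50]]
    print(quality_index(A))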
2.2.6 Result
Seven ANNs have been trained for predicting the severity of contingencies for the network of the island of Crete. The testing set performance of these ANNs was in the range of 57% to 96%; this performance did not seem to be affected by the split between the training and testing cases, as for both a 70-30 and an 85-15 split the results were similar. Sensitivity analysis in terms of the ANN architecture demonstrated that the number of hidden nodes seems to have a serious effect on the performance of the network, suggesting the use of more complex ANNs.
This technique involves a tedious training phase, where a set of neural networks is created, corresponding to a given set of possible contingencies. The resulting set of ANNs demonstrates satisfactory predictive power in classifying the contingencies correctly at run time. The run time performance of the system is very good in terms of computational time and resource requirements.
The promising results of this study suggest the application of similar techniques in other areas of security assessment of power systems and other industrial processes.
4. BIBLIOGRAPHY:
Neural networks:
1. Patrick K. Simpson, Artificial Neural Systems, Pergamon Press, Elmsford, N.Y., 1990.
2. "Special Issue on Neural Networks I: Theory and Modeling," Proceedings of the IEEE, September 1990.
3. Jacques de Villiers and Etienne Barnard, "Backpropagation Neural Nets with One and Two Hidden Layers," IEEE Transactions on Neural Networks, Volume 4, January 1993, pages 136-144.
4. "Special Issue on Neural Networks II: Analysis, Techniques, and Applications," Proceedings of the IEEE, October 1990.
Load forecasting:
D.C. Park, et al., "Electric Load Forecasting Using an Artificial Neural Network," IEEE Transactions on Power Systems, Volume 6, Number 2, May 1991, pages 442-449.
5. Duane D. Highley and Theodore J. Hilmes, "Load Forecasting by ANN," IEEE Computer Applications in Power Systems.
Contingency analysis:
6. Mitchell T. M., Machine Learning, McGraw-Hill Series in Computer Science, 1997, p. 81.
7. Grainger J. J. and W. D. Stevenson, Jr., Power System Analysis, McGraw-Hill, 1994, chap. 9.
8. Wehenkel L. A., Automatic Learning Techniques in Power Systems, Kluwer Academic Publ., 1998, p. 210.
3. Keywords:
Artificial neural networks, contingency analysis, load forecasting, applications of ANN in power systems, artificial intelligence training and testing.