Appendix 1  Vectors, Tensors and Matrices
Cartesian co-ordinates in three dimensions

In our study of dynamics we have come across three types of physical quantity. The first type requires only a single number for its definition; this is a scalar. The second requires three numbers and is a vector. The third needs nine numbers for a complete definition. All three can be considered to be tensors of different rank or order. A tensor of the zeroth rank is the scalar. A tensor of the first rank is a vector and may be written in several ways. A three-dimensional Cartesian vector is

    V = x i + y j + z k                                        (A1.1)

where i, j and k are the respective unit vectors. In matrix form we have

    V = (i j k)(x y z)^T = (e)^T (V)                           (A1.2)
It is common practice to refer to a vector simply by its components, where it is understood that all vectors in an equation are referred to the same basis (e). It is convenient to change from (x, y, z) to (x_1, x_2, x_3), so that we may write

    V = x_i,   i ranging from 1 to 3                           (A1.3)
This tensor is said to be of rank 1 because only one index is needed. A dyad is defined by the following expression

    A(B . C) = (A B) . C = D . C                               (A1.4)

where D = A B is the dyad and A, B and C are vectors. In three dimensions a dyad may be written

    A B = (A_1 e_1 + A_2 e_2 + A_3 e_3)(B_1 e_1 + B_2 e_2 + B_3 e_3)    (A1.5)

which expands to

    A B = (e)^T [ A_1 B_1   A_1 B_2   A_1 B_3
                  A_2 B_1   A_2 B_2   A_2 B_3
                  A_3 B_1   A_3 B_2   A_3 B_3 ] (e)            (A1.6)

The square matrix is the matrix representation of the dyad and its elements can be written

    D_ij = A_i B_j                                             (A1.7)
Thus the dyad is a tensor of rank 2, as it requires two indices to define its elements. The sum of two or more dyads is termed a dyadic. The majority of rank 2 tensors encountered in physics are either symmetric or anti-symmetric. For a symmetric tensor D_ij = D_ji, and thus there are only six independent elements. For an anti-symmetric tensor D_ij = -D_ji and, because this implies that D_ii = 0, there are only three independent elements; this is similar to a vector.

The process of outer multiplication of two tensors is defined typically by

    A_ij B_lm = C_ijlm                                         (A1.8)

where C is a tensor of rank 4. If both tensors A and B are of rank 2 then the elements are

    C_ijlm = A_ij B_lm                                         (A1.9)

Thus, if the indices range from 1 to 3, then C will have 3^4 = 81 elements. We now make j = l and sum over all values of j (or l) to obtain

    C_im = sum_j A_ij B_jm                                     (A1.10)

Further, we could omit the summation sign if it is assumed that summation is over the repeated index. This is known as Einstein's summation convention. Thus in compact form

    C_im = A_ij B_jm                                           (A1.11)
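The outer product and contraction of equations (A1.8) to (A1.11) are easy to verify numerically. The sketch below uses plain Python lists with illustrative matrices (the values are not from the text), and indices running 0..2 rather than 1..3:

```python
# Outer multiplication of two rank-2 tensors followed by a contraction
# (inner multiplication), as in equations (A1.8)-(A1.11).
# A minimal sketch with plain Python lists; illustrative values.

N = 3
A = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]
B = [[0, 1, 1], [2, 0, 1], [1, 1, 0]]

# Outer product: C_ijlm = A_ij * B_lm, a rank-4 tensor with 3^4 = 81 elements.
C_outer = [[[[A[i][j] * B[l][m] for m in range(N)]
             for l in range(N)]
            for j in range(N)]
           for i in range(N)]

# Contraction j = l, summing over the repeated index:
# C_im = A_ij B_jm, which is ordinary matrix multiplication.
C_inner = [[sum(A[i][j] * B[j][m] for j in range(N))
            for m in range(N)]
           for i in range(N)]

print(C_inner)  # -> [[4, 1, 3], [5, 3, 1], [1, 3, 2]]
```

The contraction step is exactly the matrix product of the two square matrices, which is the point made below about inner multiplication.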
The process of making two suffixes the same is known as contraction, and outer multiplication followed by a contraction is called inner multiplication. In the case of two rank 2 tensors the process is identical to the matrix multiplication of two square matrices.

If we consider two tensors of the first rank (vectors) then outer multiplication is

    C_ij = A_i B_j                                             (A1.12)

and these can be thought of as the components of a square matrix. In matrix notation,

    [C] = (A)(B)^T                                             (A1.13)

If we now perform a contraction

    C = A_i B_i  ( = sum_i A_i B_i )                           (A1.14)

we have inner multiplication, which in matrix notation is

    C = (A)^T (B)                                              (A1.15)

and this is the scalar product.
Alternatively, because (e) . (e)^T = [I], the identity matrix, we may write

    C = (A)^T (e) . (e)^T (B) = (A)^T (B)                      (A1.16)
The vector product of two vectors is written

    C = A x B                                                  (A1.17)

and is defined as

    C = A B sin(α) e                                           (A1.18)

where α is the smallest angle between A and B, and e is a unit vector normal to both A and B in a sense given by the right hand rule. In matrix notation it can be demonstrated that

    C = (e)^T (C),  where  (C) = ( (A_2 B_3 - A_3 B_2)
                                   (A_3 B_1 - A_1 B_3)
                                   (A_1 B_2 - A_2 B_1) )       (A1.19)
The square matrix, in this book, is denoted by [A]^x so that equation (A1.19) may be written

    C = (e)^T [A]^x (B)                                        (A1.20)

or, since (e) . (e)^T = [I], the unit matrix,

    C = (e)^T [A]^x (e) . (e)^T (B) = A^x . B                  (A1.21)
where (e)^T [A]^x (e) is a tensor operator of rank 2.

In tensor notation it can be shown that the vector product is given by

    C_i = ε_ijk A_j B_k                                        (A1.22)

where ε_ijk is the alternating tensor, defined as

    ε_ijk = +1  if (i j k) is a cyclic permutation of (1 2 3)
          = -1  if (i j k) is an anti-cyclic permutation of (1 2 3)
          =  0  otherwise                                      (A1.23)

Equation (A1.22) may be written

    C_i = (ε_ijk A_j) B_k                                      (A1.24)
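The alternating tensor lends itself to a direct numerical check. The following sketch (illustrative values, not from the text) implements ε_ijk and the sum C_i = ε_ijk A_j B_k of equation (A1.22):

```python
# The alternating tensor ε_ijk of (A1.23) and the vector product
# C_i = ε_ijk A_j B_k of (A1.22), with indices 0..2 standing in for 1..3.

def epsilon(i, j, k):
    """Alternating tensor: +1 cyclic, -1 anti-cyclic, 0 otherwise."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def cross(A, B):
    # Sum over the repeated indices j and k (Einstein convention).
    return [sum(epsilon(i, j, k) * A[j] * B[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

A = [1, 2, 3]
B = [4, 5, 6]
print(cross(A, B))  # -> [-3, 6, -3], i.e. (A_2B_3 - A_3B_2, ...)
```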
Now let us define the tensor

    T_ik = ε_ijk A_j                                           (A1.25)

If we change the order of i and k then, because of the definition of the alternating tensor, T_ki = -T_ik; therefore T is anti-symmetric.
The elements are then

    T_12 = ε_112 A_1 + ε_122 A_2 + ε_132 A_3 = -A_3 = -T_21
    T_13 = ε_113 A_1 + ε_123 A_2 + ε_133 A_3 = +A_2 = -T_31
    T_23 = ε_213 A_1 + ε_223 A_2 + ε_233 A_3 = -A_1 = -T_32

and the diagonal terms are all zero. These three equations may be written in matrix form as

    [T] = [   0   -A_3   A_2
            A_3     0   -A_1
           -A_2   A_1     0  ]                                 (A1.26)

which is the expected result.

In summary, the vector product of two vectors A and B may be written

    C = A x B,
    (e)^T (C) = (e)^T [A]^x (e) . (e)^T (B),
    (C) = [A]^x (B)

and

    C_i = ε_ijk A_j B_k   (summing over j and k)
        = T_ik B_k        (summing over k)
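The matrix form (A1.26) can be checked numerically; a minimal sketch with an illustrative vector pair (values not from the text):

```python
# The anti-symmetric matrix of equation (A1.26), built from a vector A,
# reproduces the vector product: [A]^x (B) = A x B.

def skew(A):
    """Matrix form of T_ik = ε_ijk A_j (equation A1.26)."""
    a1, a2, a3 = A
    return [[0.0, -a3,  a2],
            [ a3, 0.0, -a1],
            [-a2,  a1, 0.0]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

A = [1.0, 2.0, 3.0]
B = [4.0, 5.0, 6.0]
print(matvec(skew(A), B))  # -> [-3.0, 6.0, -3.0], which equals A x B
```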
Transformation of co-ordinates

We shall consider the transformation of three-dimensional Cartesian co-ordinates due to a rotation of the axes about the origin. In fact, mathematical texts define tensors by the way in which they transform. For example, a second-order tensor is defined as a multi-directional quantity which transforms from one set of co-ordinate axes to another according to the rule

    A'_mn = a_mi a_nj A_ij

The original set of coordinates will be designated x_1, x_2, x_3 and the associated unit vectors will be e_1, e_2, e_3. In these terms a position vector will be

    V = x_1 e_1 + x_2 e_2 + x_3 e_3 = x_i e_i                  (A1.27)

Using a primed set of coordinates the same vector will be

    V = x'_1 e'_1 + x'_2 e'_2 + x'_3 e'_3 = x'_i e'_i          (A1.28)

The primed unit vectors are related to the original unit vectors by

    e'_1 = l e_1 + m e_2 + n e_3                               (A1.29)

where l, m and n are the direction cosines between the primed unit vector in the x'_1 direction and those in the original set. We shall now adopt the following notation
    e'_1 = a_11 e_1 + a_12 e_2 + a_13 e_3 = a_1j e_j           (A1.30)

with similar expressions for the other two unit vectors. Using the summation convention,

    e'_i = a_ij e_j                                            (A1.31)
In matrix form (A1.32) and the inv erse transfo rm, , is such that
bll
bl2
b13
b2l
b22
b2
(A1.33)
b33
It is seen that s the d irection cosine of the an gle between an whilst is the direction cosine o f the angle between an e,’; b31. Therefore is the transp ose of au, that is aji. The transformation tensor a, is such that it inverse is its transpose, in m atrix form [A][AIT 1. Such a transform ation is said to be orthog onal. ow eG
4-
(A1.34)
so prem ultiplying both sides by
gives (A1.35) (A1.36)
It should be noted that

    x'_i = a_ij x_j

is equivalent to the previous equation, as only the arrangement of indices is significant. In matrix notation

    V = (e)^T (x) = (e')^T (x'),   and   (e') = [a] (e)

therefore

    (e)^T (x) = (e)^T [a]^T (x')                               (A1.37)

Premultiplying each side by (e) gives

    (x) = [a]^T (x')

and, inverting, we obtain

    (x') = [a] (x)                                             (A1.38)

The square of the magnitude of a vector is
    x^2 = (x)^T (x) = (x')^T (x') = (x)^T [a]^T [a] (x)        (A1.39)

and because (x) is arbitrary it follows that

    [a]^T [a] = [I]                                            (A1.40)
where [a]^T = [a]^(-1). In tensor notation this equation is

    a_ij a_il = δ_jl                                           (A1.41)

where δ_jl is the Kronecker delta, defined to be 1 when j = l and 0 otherwise. Because a_ij a_il = a_il a_ij, equation (A1.41) yields six relationships between the nine elements a_ij, and this implies that only three independent constants are required to define the transformation. These three constants are not arbitrary if they are to relate to proper rotations; for example, they must all lie between -1 and +1. Another condition which has to be met is that the triple scalar product of the unit vectors must be unity, as this represents the volume of a unit cube:

    e_1 . (e_2 x e_3) = e'_1 . (e'_2 x e'_3) = 1               (A1.42)

since e'_1 = a_11 e_1 + a_12 e_2 + a_13 e_3 etc.

We can use the well-known determinant form for the triple product and write

    det [a] = 1                                                (A1.43)
The above argument only holds if the original set of axes and the transformed set are both right handed (or both left handed). If the handedness were changed, say by the direction of the z' axis being reversed, then the bottom row of the determinant would all be of opposite sign, so the value of the determinant would be -1. It is interesting to note that no way of formally defining a left- or right-handed system has been devised; it is only the difference between the two that is recognized. In general, vectors which require the use of the right hand rule to define their sense transform differently when changing from right- to left-handed systems. Such vectors are called axial vectors, or pseudo vectors, in contrast to polar vectors. Examples of polar vectors are position, displacement, velocity, acceleration and force. Examples of axial vectors are angular velocity and moment of force. It can be demonstrated that the vector product of a polar vector and an axial vector is a polar vector. Another interesting point is that the vector of a 3 x 3 anti-symmetric tensor is an axial vector. This point does not affect any of the arguments in this book because we are always dealing with right-handed systems, and pure rotation does not change the handedness of the axes. However, if the reader wishes to delve deeper into relativistic mechanics this distinction is of some importance.
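The orthogonality condition (A1.40) and the determinant condition (A1.43) can be verified for a concrete rotation. The sketch below uses a rotation about the x_3 axis, an illustrative choice not taken from the text:

```python
# Numerical check of (A1.40) and (A1.43): for a rotation matrix [a],
# [a]^T [a] = [I] and det[a] = +1.  Here [a] rotates by an angle theta
# about the x_3 axis.

import math

theta = 0.7  # arbitrary rotation angle, radians
c, s = math.cos(theta), math.sin(theta)
a = [[  c,   s, 0.0],
     [ -s,   c, 0.0],
     [0.0, 0.0, 1.0]]

# [a]^T [a] should be the unit matrix
ata = [[sum(a[k][i] * a[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]

# det[a], expanded along the first row
det = (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
     - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
     + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

print(det)  # close to 1.0, as required for a proper rotation
```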
Diagonalization of a second-order tensor

We shall consider a second-order symmetric Cartesian tensor, which may represent moment of inertia, stress or strain. Let this tensor be T and the matrix of its elements be [T]. The transformation tensor is a and its matrix is [A]. The transformed tensor is

    [T'] = [A]^T [T] [A]                                       (A1.44)

Let us now assume that the transformed matrix is diagonal, so that

    [T'] = [ λ_1   0    0
              0   λ_2   0
              0    0   λ_3 ]                                   (A1.45)
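Equation (A1.44) can be exercised numerically. The sketch below (illustrative tensor and rotation, not from the text) transforms a symmetric tensor and checks that the transform preserves symmetry and the trace:

```python
# Transforming a symmetric tensor by [T'] = [A]^T [T] [A] (equation A1.44),
# using a rotation about x_3 as the transformation matrix.

import math

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(P):
    return [[P[j][i] for j in range(3)] for i in range(3)]

T = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 1.0]]   # an illustrative symmetric tensor

th = 0.4
c, s = math.cos(th), math.sin(th)
A = [[  c,   s, 0.0],
     [ -s,   c, 0.0],
     [0.0, 0.0, 1.0]]

Tp = matmul(transpose(A), matmul(T, A))   # [T'] = [A]^T [T] [A]

trace = sum(Tp[i][i] for i in range(3))
print(round(trace, 6))  # -> 6.0, the trace of [T] (an invariant)
```

Symmetry and the trace survive the transformation because [A] is orthogonal; the principal axes are the particular choice of [A] that makes [T'] diagonal.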
If this dyad acts on a vector (C) the result is

    (C') = [T'](C) = ( λ_1 C_1
                       λ_2 C_2
                       λ_3 C_3 )                               (A1.46)

Thus if the vector is wholly in the x'_1 direction the transformed vector would still be in the x'_1 direction, but multiplied by λ_1. Therefore the vectors C_1 i', C_2 j' and C_3 k' form a unique set of orthogonal axes which are known as the principal axes. From the point of view of the original set of axes, if a vector lies along any one of the principal axes then its direction will remain unaltered. Such a vector is called an eigenvector. In symbol form

    T . C = λ C                                                (A1.47)

or

    [T](C) = λ(C)                                              (A1.48)
Rearranging equation (Al.48) gives
([Tl where
UllHC)
(0)
is the unit m atrix. In detail (A 1.49)
(T33 (0). This expand s to three ho mog eneous equations which have the trivial solution of The theory of linear equations states that for a non-trivial solution the determinant of the square matrix has to be zero. That is,
    | T_11 - λ    T_12        T_13
      T_21        T_22 - λ    T_23
      T_31        T_32        T_33 - λ |  =  0                 (A1.50)
This leads to a cubic in λ, thus yielding the three roots which are known as the eigenvalues. Associated with each eigenvalue is an eigenvector, all of which can be shown to be mutually orthogonal. The eigenvectors only define a direction because their magnitudes are arbitrary.

Let us consider a special case for which T_12 = T_13 = 0. In this case, for a vector (C) = (1 0 0)^T the product [T](C) yields the vector (T_11 0 0)^T, which is in the same direction as (C). Therefore the x_1 direction is a principal axis and the x_2, x_3 plane is a plane of symmetry. Equation (A1.50) now becomes

    (T_11 - λ)[(T_22 - λ)(T_33 - λ) - T_23^2]  =  0            (A1.51)
In general a symmetric tensor, when referred to its principal co-ordinates, takes the form

    [T] = [ λ_1   0    0
             0   λ_2   0
             0    0   λ_3 ]                                    (A1.52)

and when it operates on an arbitrary vector (C) the result is

    [T](C) = ( λ_1 C_1
               λ_2 C_2
               λ_3 C_3 )                                       (A1.53)

Let us now consider the case of degeneracy, with λ_2 = λ_3. It is easily seen that if (C) lies in the x_2, x_3 plane, that is (C) = (0 C_2 C_3)^T, then

    [T](C) = λ_2 (0 C_2 C_3)^T = λ_2 (C)                       (A1.54)

from which we see that the vector remains in the x_2, x_3 plane and is in the same direction. This also implies that the x_2 and x_3 axes can lie anywhere in the plane normal to the x_1 axis. This would be true if the x_1 axis is an axis of symmetry.

If the eigenvalues are triply degenerate, that is they are all equal, then any arbitrary vector will have its direction unaltered, from which it follows that all axes are principal axes.

The orthogonality of the eigenvectors is readily proved by reference to equation (A1.48). Each eigenvector will satisfy this equation with the appropriate eigenvalue, thus

    [T](C)_1 = λ_1 (C)_1                                       (A1.55)
and

    [T](C)_2 = λ_2 (C)_2                                       (A1.56)

We premultiply equation (A1.55) by (C)_2^T and equation (A1.56) by (C)_1^T to obtain the scalars

    (C)_2^T [T] (C)_1 = λ_1 (C)_2^T (C)_1                      (A1.57)
and

    (C)_1^T [T] (C)_2 = λ_2 (C)_1^T (C)_2                      (A1.58)
Transposing both sides of the last equation, remembering that [T] is symmetrical, gives

    (C)_2^T [T] (C)_1 = λ_2 (C)_2^T (C)_1                      (A1.59)

and subtracting equation (A1.59) from (A1.57) gives

    0 = (λ_1 - λ_2) (C)_2^T (C)_1                              (A1.60)

so when λ_1 ≠ λ_2 we have that (C)_2^T (C)_1 = 0; that is, the vectors are orthogonal.
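The orthogonality result (A1.60) can be confirmed with a concrete symmetric tensor whose eigenpairs are known in closed form (an illustrative matrix, not from the text):

```python
# Verifying eigenvector orthogonality for a symmetric tensor, as in
# equations (A1.55)-(A1.60).  The eigenpairs of this matrix are
# λ = 1 with (1, -1, 0), λ = 3 with (1, 1, 0), λ = 5 with (0, 0, 1).

T = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 0.0, 5.0]]

eigenpairs = [(1.0, [1.0, -1.0, 0.0]),
              (3.0, [1.0,  1.0, 0.0]),
              (5.0, [0.0,  0.0, 1.0])]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

# Each pair satisfies [T](C) = λ(C), equation (A1.48)
for lam, C in eigenpairs:
    assert matvec(T, C) == [lam * c for c in C]

# Eigenvectors with distinct eigenvalues are orthogonal, equation (A1.60)
print(dot(eigenpairs[0][1], eigenpairs[1][1]))  # -> 0.0
```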