Spence, Insel, Friedberg: The textbook for Elementary Linear Algebra.
Elementary Linear Algebra: A Matrix Approach, 2nd Edition No part of any book may be reproduced or transmitted by any means without the publisher’s prior permission. Use (other than qualified fair use) in violation of the law or Terms of Service is prohibited. Violators will be prosecuted to the full extent of the law.
Library of Congress Cataloging-in-Publication Data on File.
Editorial Director, Computer Science, Engineering, and Advanced Mathematics: Marcia J. Horton
Senior Editor: Holly Stark
Editorial Assistant: Jennifer Lonschein
Senior Managing Editor/Production Editor: Scott Disanno
Art Director: John Christiana
Cover Designer: Tamara Newman
Art Editor: Thomas Benfatti
Manufacturing Manager: Alexis Heydt-Long
Manufacturing Buyer: Lisa McDowell
Senior Marketing Manager: Tim Galligan
© 2008 Pearson Education, Inc., Upper Saddle River, NJ 07458
All rights reserved. No part of this book may be reproduced in any form or by any means, without permission in writing from the publisher.
Pearson Prentice Hall™ is a trademark of Pearson Education, Inc. MATLAB is a registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
ISBN: 0-13-187141-2

Pearson Education Ltd., London
Pearson Education Australia Pty. Ltd., Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Inc., Toronto
Pearson Educación de México, S.A. de C.V.
Pearson Education-Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Pearson Education, Inc., Upper Saddle River, New Jersey
To our families: Linda, Stephen, and Alison; Barbara, Thomas, and Sara; Ruth Ann, Rachel, Jessica, and Jeremy
PREFACE

In response to concern that the first course in linear algebra was not meeting the needs of the students who take it, the Linear Algebra Curriculum Study Group was formed in January 1990. With the support of the National Science Foundation, it sponsored a workshop on the undergraduate linear algebra curriculum at the College of William and Mary in August of that year. Participants at the workshop included representatives from mathematics departments and other disciplines whose students study linear algebra. Its recommendations,¹ published in January 1993, have inspired a number of textbooks, including this one. In its core syllabus, abstract vector spaces are regarded as a supplementary topic. What distinguishes this book from many others is its complete development of the concepts of linear algebra in R^n before the introduction of abstract vector spaces. We agree with the following statement by the Linear Algebra Curriculum Study Group:
Furthermore, an overemphasis on abstraction may overwhelm beginning students to the point where they leave the course with little understanding or mastery of the basic concepts they may need in later courses and their careers.
Although we believe that the first linear algebra course is a natural one in which to introduce mathematical theory and proofs, this can be accomplished without discussion of abstract vector spaces. In addition, if abstract vector spaces are included in the first linear algebra course, we believe that they can be taught more effectively after a thorough presentation of the essential topics in the more familiar context of R^n. Nevertheless, we have written this book so that it is possible to teach concepts in the context of abstract vector spaces immediately after the corresponding topics are discussed in R^n. (Suggestions for doing so are included in the sample course descriptions on page xiv.) Although there is no use of calculus until the introduction of vector spaces in Chapter 7, the material is aimed at students who have the mathematical maturity obtained by having taken one year of calculus. The core topics can be comfortably covered within one semester, but there is adequate material for a two-quarter course.
PEDAGOGICAL APPROACH

This text is written for a matrix-oriented course, as recommended by the Linear Algebra Curriculum Study Group. In our experience, such a course results in greater understanding of the concepts of linear algebra and serves the needs of students in many disciplines. It begins with the study of matrices, vectors, and systems of linear equations and gradually leads to more complicated concepts and general principles, such as linear independence, subspaces, and bases. As mentioned, this text develops all the core content of linear algebra in R^n before introducing abstract vector spaces. This provides students additional opportunities to visualize concepts in the familiar context of the Euclidean plane and 3-space before encountering the abstraction of vector spaces. Our approach is based on an early introduction of the rank of a matrix. This concept is then encountered in other contexts throughout the book. For example, the
¹ David Carlson, Charles R. Johnson, David C. Lay, and A. Duane Porter, "The Linear Algebra Curriculum Study Group Recommendations for the First Course in Linear Algebra," The College Mathematics Journal, 24 (1993), pp. 41-46.
rank of a matrix is used initially to check if solutions of a system of linear equations exist and are unique. Later, it is used to test if sets are linearly independent or are spanning sets for R^n. Then, in Chapter 2, it is used to determine whether linear transformations are one-to-one or onto. Even though a course taught from this book may devote less time to the study of abstract mathematics than a more traditional course, we have found that it still serves as an excellent prerequisite to abstract algebra and an abstract second course in linear algebra, such as one using our text Linear Algebra.
TECHNOLOGY

The Linear Algebra Curriculum Study Group recommends that technology be used in the first course in linear algebra. In our experience, the use of technology, whether through computer software or supercalculators, greatly enhances a course taught from this book by freeing students from tedious computations and enabling them to concentrate on conceptual understanding. Most sections include exercises designed to be worked by means of MATLAB or similar technology. Additional technology exercises are found at the end of each chapter. For MATLAB users, our website contains data files and M-files that can be downloaded. For the convenience of those wishing to use technology, we have added an appendix (Appendix D) with an introduction to MATLAB. It provides sufficient background to prepare students to perform the calculations required for this book and to work the technology exercises.
EXAMPLES AND PRACTICE PROBLEMS

Our examples motivate and illustrate definitions and theoretical results. These are written to be understood by students so that an instructor need not discuss each example in class. In fact, in our own teaching, we almost never discuss examples from the text, but rather present similar examples, leaving the text examples to be read by students. Many examples are accompanied by similar practice problems that enable students to test their understanding of the material in the text. Complete solutions of these practice problems are included within the text in order to help prepare students to work the exercises in each section.
EXERCISES

We felt that the first edition of this book had an ample number of computational exercises. Yet some of our conscientious students asked us for more. Therefore we have significantly increased the number of computational exercises in this edition. As in the first edition, except for the specially marked exercises that use technology, all our computational exercises are designed so that the calculations involve "nice" numbers. This permits students to concentrate on the linear algebra concepts rather than the computations. Most sections include approximately 20 true/false exercises that are designed to test a student's understanding of the conceptual ideas in each section. In response to reviewer suggestions, we have moved these to follow the straightforward computational exercises so that students can gain confidence by working computational exercises before encountering the true/false exercises. Answers to all true/false questions are given in the book, so students will know if they have misunderstood some concept.
For a proof-oriented course, we have included a significant number of accessible exercises requiring proofs. These are ordered according to difficulty. Finally, each chapter ends with a set of review exercises that provide practice with all the main topics in each chapter. These exercises are designed for students to use as preparation for a chapter examination. As noted, there are also technology exercises at the end of each chapter and in the exercise sets of most sections. For a list of suggested exercises for each section, see our website.
APPLICATIONS

This book includes a wide variety of applications, and they are given when the necessary prerequisites have been introduced. There are applications to economics (the Leontief input-output model), electrical networks (current flow through an electrical network), population change (the Leslie matrix), traffic flow, scheduling ((0, 1)-matrices), anthropology, Google searches, counting problems (difference equations), predator-prey models and harmonic motion (systems of differential equations), least-squares approximation, computer graphics, principal component analysis, and music (applying trigonometric polynomials). These applications are entirely independent and so may be taught according to the interests of the instructor and students. Although we do not assume that any particular applications will be covered, we believe that the core material is greatly enhanced when its use is demonstrated through applications.
ORGANIZATION OF THE TEXT

In Chapter 1, students are introduced to matrices, vectors, and systems of linear equations. The study of linear combinations of vectors leads naturally to the span of a set of vectors and the concepts of linear dependence and independence. The rank and nullity of a matrix are introduced early and used to determine when a system of equations has solutions, and how many solutions there are. In Chapter 2, we introduce matrix multiplication, along with matrix inverses and linear transformations. The relationship between linear transformations and matrices is emphasized so that matrix techniques can be used to answer questions such as whether a linear transformation is one-to-one or onto. Because we use determinants mainly in the context of eigenvalues, we provide a short but complete treatment in Chapter 3. The important topics of subspaces, bases, and dimension are covered in Chapter 4. We follow these concepts with a discussion of coordinate systems and matrix representations of linear transformations. Although Chapters 3 and 4 can be interchanged, we prefer to cover determinants before subspaces so that there is no delay between the discussions of change of coordinates and diagonalization in Chapter 5. Chapter 5 contains the important results about eigenvalues, eigenvectors, and diagonalization of matrices and linear operators. With the introduction of orthogonality in Chapter 6, we are able to emphasize the geometry of vectors, matrices, and linear transformations. The important applications of orthogonal projections and least-squares lines illustrate the usefulness of these ideas. We continue with a discussion of orthogonal matrices and operators and the study of symmetric matrices. The chapter concludes with the singular value decomposition of a matrix and applications to principal component analysis and computer graphics. Chapter 7 introduces abstract vector spaces. Because of the careful foundation that has been developed in Euclidean spaces, most of the concepts in earlier chapters, such as span, linear independence, and subspace, are easily generalized.
We focus mainly on function and matrix spaces. For example, a nice application in this context arises when Lagrange interpolating polynomials are used to find a basis for a polynomial space. Differential operators and integrals are now explored as special cases of linear transformations. Finally, inner product spaces are introduced, and the Gram-Schmidt process is applied to produce the Legendre polynomials. Least-squares theory and trigonometric polynomials are used to explore periodic motion in the setting of musical notes. Although we prefer to introduce abstract vector spaces only after the complete development of all the core topics in R^n, the book is written to allow abstract vector spaces to be introduced earlier, alongside the corresponding topics in R^n. For details, see the sample syllabi that follow. We have included a number of optional topics, for example, the LU, QR, and singular value decompositions of a matrix, the spectral decomposition of a symmetric matrix, Lagrange interpolating polynomials, block multiplication, the Moore-Penrose generalized inverse, and quadratic forms. In addition, there are many applications of the subject matter throughout the book.
SAMPLE SYLLABI

The following list shows the sections of this book that cover the Linear Algebra Curriculum Study Group's core syllabus:
Linear Algebra Curriculum Study Group's Core Syllabus

Matrix Addition and Multiplication (4 days: Sections 1.1, 1.2, 2.1, and 2.5)
Systems of Linear Equations (5 days: Sections 1.3, 1.4, 2.3, 2.4, 2.6)
Determinants (2 days: Sections 3.1 and 3.2)
Properties of R^n (10 days: Sections 1.2, 1.6, 1.7, 4.1, 4.2, 4.3, 2.7, 2.8, 6.1, and 6.2)
Eigenvalues and Eigenvectors (4 days: Sections 5.1, 5.2, 5.3, and 6.5)
More on Orthogonality (3 days: Sections 6.2, 6.3, and 6.4)

Suggested Syllabus for a One-Semester 3-Credit Course

Chapter 1: Sections 1.1, 1.2, 1.3, 1.4, 1.6, 1.7
Chapter 2: Sections 2.1, 2.3, 2.4, 2.7, 2.8
Chapter 3: Sections 3.1, 3.2 (omitting optional material)
Chapter 4: Sections 4.1, 4.2, 4.3, 4.4
Chapter 5: Sections 5.1, 5.2, 5.3
Chapter 6: Sections 6.1, 6.2, 6.3, 6.4, 6.5
Supplementary material selected from Sections 1.5, 2.2, 2.6, 5.5, 6.6, 6.7, 7.1, 7.2, 7.3, 7.5

Suggested Syllabus for a One-Semester 3-Credit Course that Integrates Abstract Vector Spaces with R^n

Chapter 1: Sections 1.1, 1.2, 1.3, 1.4, 1.6, 1.7
Chapter 2: Sections 2.1, 2.3, 2.4, 2.7, 2.8
Chapter 3: Sections 3.1, 3.2 (omitting optional material)
Chapter 4: Sections 4.1, 7.1, 7.2, 4.2, 4.3, 4.4, 7.3
Chapter 5: Sections 5.1, 5.2, 7.4, 5.3
Chapter 6: Sections 6.1, 6.2, 6.3, 6.4, 7.5, 6.5

Suggested Syllabus for a Two-Quarter Sequence of 3-Credit Courses

Chapter 1: Sections 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7
Chapter 2: Sections 2.1, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8
SUPPLEMENTS

A Student Solutions Manual (ISBN 0-13-239734-X) is available for purchase by students. It includes solutions of all the true/false exercises and over half of the odd-numbered exercises. An Instructor's Solutions Manual that includes solutions of all the exercises is available from the publisher.
ACKNOWLEDGMENTS

This second edition has been greatly improved by the comments of the following reviewers: Adam Avilez (Mesa Community College), Roe Goodman (Rutgers University), Chungwu Ho (Evergreen Valley College), Steve Kaliszewski (Arizona State University), Noah Rhee (University of Missouri-Kansas City), and Edward Soares (College of the Holy Cross). We especially appreciate the many detailed comments provided by Jane Day (San Jose State University). In addition, we are indebted to Development Editor Paul Trow, who provided significant editorial and mathematical suggestions about several chapters of our manuscript, and to W. R. Winfrey (Concord College) for writing the introductory application for each chapter. Finally, we would like to express our appreciation to Senior Editor Holly Stark and Senior Managing Editor Scott Disanno of Pearson Prentice Hall for their help during the writing and production of this book. To find the latest information about this book, consult our home page on the World Wide Web. We encourage comments, which can be sent to us by e-mail or ordinary post. Our home page and e-mail addresses are as follows:
home page: http://www.cas.ilstu.edu/math/matrix
e-mail: [email protected]

Lawrence E. Spence
Arnold J. Insel
Stephen H. Friedberg
TO THE STUDENT

Linear algebra is concerned with vectors and matrices, and with special functions called linear transformations that are defined on vectors. Since vectors arise in a wide variety of settings, linear algebra can be applied to many disciplines. In fact, it is one of the most important tools of applied mathematics. In this book, we present applications of linear algebra to economics, physics, biology, statistics, computer graphics, and other fields. Like most areas of mathematics, linear algebra has its own terminology and notation. To be successful in your study of linear algebra, you must be able to solve problems and communicate ideas with its language and symbolism. Developing these abilities requires more than just attending lectures or
• Prepare regularly for each class. You cannot expect to learn to play the piano merely by attending lessons once a week; long and careful practice between lessons is necessary. So it is with linear algebra. At the least, you should study the material presented in class. Often, however, the current lesson builds on previous material that must be reviewed. Usually, there are exercises to work that deal with the material presented in class. In addition, you should prepare for the next class by reading ahead. Each new lesson usually introduces several important concepts or definitions that must be learned in order for subsequent sections to be understood. As a result, falling behind in your study by even a single day prevents you from understanding the new material that follows. Many of the concepts of linear algebra are deep; to fully understand these ideas requires time. It is simply not possible to absorb an entire chapter's worth of material in a single day. To be successful, you must learn the new material as it arises and not wait to study until you are less busy or until an exam is imminent.

• Ask questions of yourself and others. Mathematics is not a spectator sport. It can be learned only by the interaction of study and questioning. Certain natural questions arise when a new topic is introduced: What purpose does it serve? How is the new topic related to previous ones? What examples illustrate the topic? For a new theorem, one might also ask: Why is it true? How does it relate to previous theorems? Why is it useful? Be sure you don't accept something as true unless you believe it. If you are not convinced that a statement is correct, you should ask for further details.

• Review often. As you attempt to understand new material, you may become aware that you have forgotten some previous material or that you haven't understood it well enough.
By relearning such material, you not only gain a deeper understanding of previous topics, but you also enable yourself to learn new ideas more quickly and more deeply. When a new concept is introduced, search for related concepts or results and write them on paper. This enables you to see more easily the connections between new ideas and previous ones. Moreover, expressing ideas in writing helps you to learn, because you must think carefully about the material as you write. A good test of your understanding of a section of the textbook is to ask yourself if, with the book closed, you can explain in writing what the section is about. If not, you will benefit from reading the section again. We hope that your study of linear algebra is successful and that you take from the subject concepts and techniques that are useful in future courses and in your career.
INTRODUCTION

[Figures: Ideal Edge and Real Edge, each with a graph of its intensity profile below]
For computers to process digital images, whether satellite photos or x-rays, there is a need to recognize the edges of objects. Image edges, which are rapid changes or discontinuities in image intensity, reflect a boundary between dissimilar regions in an image and thus are important basic characteristics of an image. They often indicate the physical extent of objects in the image or a boundary between light and shadow on a single surface or other regions of interest. The lowermost two figures at the left indicate the changes in image intensity of the ideal and real edges above, when moving from right to left. We see that real intensities can change rapidly, but not instantaneously. In principle, the edge may be found by looking for very large changes over small distances. However, a digital image is discrete rather than continuous: it is a matrix of nonnegative entries that provide numerical descriptions of the shades of gray for the pixels in the image, where the entries vary from 0 for a white pixel to 1 for a black pixel. An analysis must be done using the discrete analog of the derivative to measure the rate of change of image intensity in two directions.
The Sobel matrices,

    S1 = [ -1  0  1 ]          S2 = [  1   2   1 ]
         [ -2  0  2 ]   and         [  0   0   0 ]
         [ -1  0  1 ]               [ -1  -2  -1 ],

provide a method for measuring these intensity changes. Apply the Sobel matrices S1 and S2 in turn to the 3 × 3 subimage centered on each pixel in the original image. The results are the changes of intensity near the pixel in the horizontal and the vertical directions, respectively. The ordered pair of numbers that are obtained is a vector in the plane that provides
the direction and magnitude of the intensity change at the pixel. This vector may be thought of as the discrete analog of the gradient vector of a function of two variables studied in calculus. Replace each of the original pixel values by the lengths of these vectors, and choose an appropriate threshold value. The final image, called the thresholded image, is obtained by changing to black every pixel for which the length of the vector is greater than the threshold value, and changing to white all the other pixels. (See the images below.)

[Figures: Original Image and Thresholded Image]

Notice how the edges are emphasized in the thresholded image. In regions where image intensity is constant, these vectors have length zero, and hence the corresponding regions appear white in the thresholded image. Likewise, a rapid change in image intensity, which occurs at an edge of an object, results in a relatively dark colored boundary in the thresholded image.
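The procedure just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the text: the sample image, threshold value, and function names are ours, and border pixels, which lack a full 3 × 3 neighborhood, are simply left white.

```python
import math

# The Sobel matrices: S1 measures horizontal intensity change,
# S2 measures vertical intensity change.
S1 = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]

S2 = [[ 1,  2,  1],
      [ 0,  0,  0],
      [-1, -2, -1]]

def apply_kernel(image, kernel, i, j):
    """Sum of entrywise products of the kernel with the 3x3
    subimage centered on pixel (i, j)."""
    return sum(kernel[r][c] * image[i - 1 + r][j - 1 + c]
               for r in range(3) for c in range(3))

def threshold_image(image, threshold):
    """Return the thresholded image: 1 (black) where the gradient
    vector is longer than the threshold, 0 (white) elsewhere."""
    m, n = len(image), len(image[0])
    out = [[0] * n for _ in range(m)]
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            dx = apply_kernel(image, S1, i, j)  # horizontal change
            dy = apply_kernel(image, S2, i, j)  # vertical change
            # Length of the gradient vector (dx, dy).
            if math.hypot(dx, dy) > threshold:
                out[i][j] = 1
    return out

# A 4x4 image with a vertical edge: left half white (0), right half black (1).
img = [[0, 0, 1, 1]] * 4
print(threshold_image(img, 1.0))
```

Running this marks the interior pixels along the vertical edge as black, while the constant regions (gradient vectors of length zero) stay white, exactly as the passage above describes.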
CHAPTER 1

MATRICES, VECTORS, AND SYSTEMS OF LINEAR EQUATIONS

The most common use of linear algebra is to solve systems of linear equations, which arise in applications to such diverse disciplines as physics, biology, economics, engineering, and sociology. In this chapter, we describe the most efficient algorithm for solving systems of linear equations, Gaussian elimination. This algorithm, or some variation of it, is used by most mathematics software (such as MATLAB). We can write systems of linear equations compactly, using arrays called matrices and vectors. More importantly, the arithmetic properties of these arrays enable us to compute solutions of such systems or to determine if no solutions exist. This chapter begins by developing the basic properties of matrices and vectors. In Sections 1.3 and 1.4, we begin our study of systems of linear equations. In Sections 1.6 and 1.7, we introduce two other important concepts of vectors, namely, generating sets and linear independence, which provide information about the existence and uniqueness of solutions of a system of linear equations.
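The Gaussian elimination mentioned above can be illustrated with a short sketch. This pure-Python version with partial pivoting is our own illustration, not the presentation the text develops in later sections, and it assumes the system has a unique solution.

```python
def gaussian_elimination(a, b):
    """Solve the n x n system a x = b by Gaussian elimination with
    partial pivoting, followed by back substitution. Assumes the
    system has a unique solution."""
    n = len(a)
    # Work on an augmented copy [a | b] so the inputs are not modified.
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n):
        # Partial pivoting: swap up the row with the largest entry
        # in column k to avoid dividing by a small (or zero) pivot.
        p = max(range(k, n), key=lambda r: abs(m[r][k]))
        m[k], m[p] = m[p], m[k]
        # Eliminate column k below the pivot.
        for r in range(k + 1, n):
            factor = m[r][k] / m[k][k]
            for c in range(k, n + 1):
                m[r][c] -= factor * m[k][c]
    # Back substitution on the resulting triangular system.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(m[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (m[k][n] - s) / m[k][k]
    return x

# The system  x + y = 3,  2x - y = 0  has the solution x = 1, y = 2.
print(gaussian_elimination([[1, 1], [2, -1]], [3, 0]))  # [1.0, 2.0]
```

Production software (MATLAB's backslash operator, for instance) uses refinements of this same idea.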
1.1 MATRICES AND VECTORS
Many types of numerical data are best displayed in two-dimensional arrays, such as tables. For example, suppose that a company owns two bookstores, each of which sells newspapers, magazines, and books. Assume that the sales (in hundreds of dollars) of the two bookstores for the months of July and August are represented by the following tables:

                  July                                 August
                 Store                                 Store
                 1     2                               1     2
    Newspapers   6     8        and      Newspapers    7     9
    Magazines   15    20                 Magazines    18    31
    Books       45    64                 Books        52    68

The first column of the July table shows that store 1 sold $1500 worth of magazines and $4500 worth of books during July. We can represent the information on July sales more simply as

        [  6   8 ]
        [ 15  20 ]
        [ 45  64 ].
Such a rectangular array of real numbers is called a matrix.¹ It is customary to refer to real numbers as scalars (originally from the word scale) when working with a matrix. We denote the set of real numbers by R.

Definitions. A matrix (plural, matrices) is a rectangular array of scalars. If the matrix has m rows and n columns, we say that the size of the matrix is m by n, written m × n. The matrix is square if m = n. The scalar in the ith row and jth column is called the (i, j)-entry of the matrix.
If A is a matrix, we denote its (i ,j)-entry by aij . We say that two matrices A and
B are eq ual if they have the same size and have equal corresponding entries: that is, a;1 = bij for all i and j. Symbolically, we write A = 8.
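The definitions above translate directly into code if a matrix is modeled as a list of rows. The following Python sketch is ours, not the book's; it checks size and entrywise equality for the July and August inventory matrices:

```python
def size(A):
    """Return the size (m, n) of a matrix stored as a list of m rows."""
    return (len(A), len(A[0]))

def equal(A, B):
    """Two matrices are equal iff they have the same size and equal
    corresponding entries, exactly as in the definition above."""
    if size(A) != size(B):
        return False
    return all(A[i][j] == B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

# The July and August inventory matrices from the text:
B = [[6, 8], [15, 20], [45, 64]]
C = [[7, 9], [18, 31], [52, 68]]
print(size(B))      # (3, 2)
print(equal(B, C))  # False, since b12 = 8 but c12 = 9
```

Note that `equal` refuses to compare matrices of different sizes before it ever looks at an entry, mirroring the order of the two conditions in the definition.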
In our bookstore example, the July and August sales are contained in the matrices

        [ 6   8]              [ 7   9]
    B = [15  20]   and    C = [18  31] .
        [45  64]              [52  68]

Note that b12 = 8 and c12 = 9, so B ≠ C. Both B and C are 3 × 2 matrices. Because of the context in which these matrices arise, they are called inventory matrices. Other examples of matrices have sizes 2 × 3, 3 × 1, and 1 × 4.

Practice Problem 1 ▸ Let

    A = [·  2]
        [·  3] .

(a) What is the (1, 2)-entry of A?   (b) What is a22?

Sometimes we are interested in only a part of the information contained in a matrix. For example, suppose that we are interested in only magazine and book sales in July. Then the relevant information is contained in the last two rows of B; that is, in the matrix E defined by
    E = [15  20]
        [45  64] .

E is called a submatrix of B. In general, a submatrix of a matrix M is obtained by deleting from M entire rows, entire columns, or both. (It is permissible, when forming a submatrix of M, to delete none of the rows or none of the columns of M.) As another example, if we delete the first row and the second column of B, we obtain the submatrix

    [15]
    [45] .
¹ James Joseph Sylvester (1814–1897) coined the term matrix in the 1850s.
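Forming a submatrix amounts to keeping some rows and columns and deleting the rest. A Python sketch (the helper name and 0-based indexing are our choices, not the book's):

```python
def submatrix(M, keep_rows, keep_cols):
    """Return the submatrix of M obtained by keeping the listed rows and
    columns (equivalently, deleting the others). Indices are 0-based."""
    return [[M[i][j] for j in keep_cols] for i in keep_rows]

B = [[6, 8], [15, 20], [45, 64]]   # July sales matrix from the text

# Keep the last two rows (magazine and book sales): the matrix E.
E = submatrix(B, [1, 2], [0, 1])
print(E)                           # [[15, 20], [45, 64]]

# Delete the first row and the second column of B.
print(submatrix(B, [1, 2], [0]))   # [[15], [45]]
```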
MATRIX SUMS AND SCALAR MULTIPLICATION

Matrices are more than convenient devices for storing information. Their usefulness lies in their arithmetic. As an example, suppose that we want to know the total numbers of newspapers, magazines, and books sold by both stores during July and August. It is natural to form one matrix whose entries are the sums of the corresponding entries of the matrices B and C, namely,

                   Store
                  1     2
    Newspapers  [13    17]
    Magazines   [33    51] .
    Books       [97   132]

If A and B are m × n matrices, the sum of A and B, denoted by A + B, is the m × n matrix obtained by adding the corresponding entries of A and B; that is, A + B is the m × n matrix whose (i, j)-entry is aij + bij. Notice that the matrices A and B must have the same size for their sum to be defined.

Suppose that in our bookstore example, July sales were to double in all categories. Then the new matrix of July sales would be

    [12   16]
    [30   40] .
    [90  128]

We denote this matrix by 2B. Let A be an m × n matrix and c be a scalar. The scalar multiple cA is the m × n matrix whose entries are c times the corresponding entries of A; that is, cA is the m × n matrix whose (i, j)-entry is c·aij. Note that 1A = A. We denote the matrix (−1)A by −A and the matrix 0A by 0. We call the m × n matrix 0 in which each entry is 0 the m × n zero matrix.
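Matrix addition and scalar multiplication are both entrywise, so each is a one-line list comprehension in code. This Python sketch (helper names ours) recomputes the combined sales and the doubled July sales shown above:

```python
def add(A, B):
    """Matrix sum: defined only when A and B have the same size;
    the (i, j)-entry of the result is a_ij + b_ij."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scale(c, A):
    """Scalar multiple cA: each entry of A is multiplied by c."""
    return [[c * x for x in row] for row in A]

B = [[6, 8], [15, 20], [45, 64]]   # July
C = [[7, 9], [18, 31], [52, 68]]   # August

print(add(B, C))    # [[13, 17], [33, 51], [97, 132]] -- combined sales
print(scale(2, B))  # [[12, 16], [30, 40], [90, 128]] -- doubled July sales
```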
Example 1. Compute the matrices A + B, 3A, −A, and 3A + 4B, where

    A = [3   4  2]          B = [−4   1  0]
        [2  −3  0]   and        [ 5  −6  4] .

Solution. We have

    A + B = [−1   5  2]      3A = [9  12  6]      −A = [−3  −4  −2]
            [ 7  −9  4] ,         [6  −9  0] ,         [−2   3   0] ,

and

    3A + 4B = [9  12  6] + [−16    4   0] = [−7   16   6]
              [6  −9  0]   [ 20  −24  16]   [26  −33  16] .
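The arithmetic in Example 1 can be spot-checked mechanically with the same entrywise definitions; a Python sketch (helper names ours):

```python
def add(A, B):
    """Entrywise matrix sum (sizes assumed equal)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    """Entrywise scalar multiple cA."""
    return [[c * x for x in row] for row in A]

A = [[3, 4, 2], [2, -3, 0]]
B = [[-4, 1, 0], [5, -6, 4]]

print(add(A, B))                      # [[-1, 5, 2], [7, -9, 4]]
print(scale(3, A))                    # [[9, 12, 6], [6, -9, 0]]
print(scale(-1, A))                   # [[-3, -4, -2], [-2, 3, 0]]
print(add(scale(3, A), scale(4, B)))  # [[-7, 16, 6], [26, -33, 16]]
```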
Just as we have defined addition of matrices, we can also define subtraction. For any matrices A and B of the same size, we define A − B to be the matrix obtained by subtracting each entry of B from the corresponding entry of A. Thus the (i, j)-entry of A − B is aij − bij. Notice that A − A = 0 for all matrices A.
If, as in Example 1, we have

    A = [3   4  2]        B = [−4   1  0]               [0  0  0]
        [2  −3  0] ,          [ 5  −6  4] ,   and   0 = [0  0  0] ,

then

    −B = [ 4  −1   0]       A − B = [ 7  3   2]               A − 0 = [3   4  2]
         [−5   6  −4] ,             [−3  3  −4] ,   and               [2  −3  0] .

Practice Problem 2 ▸ Let

    A = [·  ·  ·]          B = [·  ·  ·]
        [·  ·  ·]   and        [·  ·  ·] .

Compute the following matrices:   (a) A − B   (b) 2A   (c) A + 3B
We have now defined the operations of matrix addition and scalar multiplication. The power of linear algebra lies in the natural relations between these operations, which are described in our first theorem.
THEOREM 1.1 (Properties of Matrix Addition and Scalar Multiplication). Let A, B, and C be m × n matrices, and let s and t be any scalars. Then

(a) A + B = B + A.               (commutative law of matrix addition)
(b) (A + B) + C = A + (B + C).   (associative law of matrix addition)
(c) A + 0 = A.
(d) A + (−A) = 0.
(e) (st)A = s(tA).
(f) s(A + B) = sA + sB.
(g) (s + t)A = sA + tA.

PROOF. We prove parts (b) and (f). The rest are left as exercises.

(b) The matrices on each side of the equation are m × n matrices. We must show that each entry of (A + B) + C is the same as the corresponding entry of A + (B + C). Consider the (i, j)-entries. By the definition of matrix addition, the (i, j)-entry of (A + B) + C is the sum of the (i, j)-entry of A + B, which is aij + bij, and the (i, j)-entry of C, which is cij. Therefore this sum equals (aij + bij) + cij. Similarly, the (i, j)-entry of A + (B + C) is aij + (bij + cij). Because the associative law holds for addition of scalars, (aij + bij) + cij = aij + (bij + cij). Therefore the (i, j)-entry of (A + B) + C equals the (i, j)-entry of A + (B + C), proving (b).

(f) The matrices on each side of the equation are m × n matrices. As in the proof of (b), we consider the (i, j)-entries of each matrix. The (i, j)-entry of s(A + B) is defined to be the product of s and the (i, j)-entry of A + B, which is aij + bij. This product equals s(aij + bij). The (i, j)-entry of sA + sB is the sum of the (i, j)-entry of sA, which is s·aij, and the (i, j)-entry of sB, which is s·bij. This sum is s·aij + s·bij. Since s(aij + bij) = s·aij + s·bij, (f) is proved. ∎
Because of the associative law of matrix addition, sums of three or more matrices can be written unambiguously without parentheses. Thus we may write A + B + C instead of either (A + B) + C or A + (B + C).
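Because each identity in Theorem 1.1 holds entry by entry, a numerical spot-check is easy to run (it illustrates, but of course does not prove, the theorem). A Python sketch using the matrices of Example 1 (helper names ours):

```python
def add(A, B):
    """Entrywise matrix sum (sizes assumed equal)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    """Entrywise scalar multiple cA."""
    return [[c * x for x in row] for row in A]

A = [[3, 4, 2], [2, -3, 0]]
B = [[-4, 1, 0], [5, -6, 4]]
C = [[1, 1, 1], [2, 2, 2]]   # any third 2 x 3 matrix will do
s, t = 5, -2

assert add(A, B) == add(B, A)                                # (a)
assert add(add(A, B), C) == add(A, add(B, C))                # (b)
assert scale(s * t, A) == scale(s, scale(t, A))              # (e)
assert scale(s, add(A, B)) == add(scale(s, A), scale(s, B))  # (f)
print("Theorem 1.1 spot-checks passed")
```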
MATRIX TRANSPOSES

In the bookstore example, we could have recorded the information about July sales in the following form:

               Newspapers   Magazines   Books
    Store 1         6           15        45
    Store 2         8           20        64

This representation produces the matrix

    [6  15  45]
    [8  20  64] .

Compare this with

        [ 6   8]
    B = [15  20] .
        [45  64]

The rows of the first matrix are the columns of B, and the columns of the first matrix are the rows of B. This new matrix is called the transpose of B. In general, the transpose of an m × n matrix A is the n × m matrix, denoted by A^T, whose (i, j)-entry is the (j, i)-entry of A. The matrix C in our bookstore example and its transpose are

        [ 7   9]
    C = [18  31]           C^T = [7  18  52]
        [52  68]   and           [9  31  68] .

Practice Problem 3 ▸ Let

    A = [·  ·  ·]          B = [·  ·  ·]
        [·  ·  ·]   and        [·  ·  ·] .

Compute the following matrices:   (a) A^T   (b) (3B)^T   (c) (A + B)^T
The following theorem shows that the transpose preserves the operations of matrix addition and scalar multiplication:
THEOREM 1.2 (Properties of the Transpose). Let A and B be m × n matrices, and let s be any scalar. Then

(a) (A + B)^T = A^T + B^T.
(b) (sA)^T = sA^T.
(c) (A^T)^T = A.

PROOF. We prove part (a). The rest are left as exercises.

(a) The matrices on each side of the equation are n × m matrices. So we show that the (i, j)-entry of (A + B)^T equals the (i, j)-entry of A^T + B^T. By the definition of transpose, the (i, j)-entry of (A + B)^T equals the (j, i)-entry of A + B, which is aji + bji. On the other hand, the (i, j)-entry of A^T + B^T equals the sum of the (i, j)-entry of A^T and the (i, j)-entry of B^T, that is, aji + bji. Because the (i, j)-entries of (A + B)^T and A^T + B^T are equal, (a) is proved. ∎
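The transpose and the identities of Theorem 1.2 can be spot-checked the same way. In this Python sketch (helper names ours), `transpose` swaps the roles of rows and columns, so the (i, j)-entry of the result is the (j, i)-entry of the input:

```python
def transpose(A):
    """The (i, j)-entry of A^T is the (j, i)-entry of A."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def add(A, B):
    """Entrywise matrix sum (sizes assumed equal)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

B = [[6, 8], [15, 20], [45, 64]]   # July sales matrix
C = [[7, 9], [18, 31], [52, 68]]   # August sales matrix

print(transpose(B))   # [[6, 15, 45], [8, 20, 64]]

# Theorem 1.2(a): (B + C)^T = B^T + C^T
assert transpose(add(B, C)) == add(transpose(B), transpose(C))
# Theorem 1.2(c): (B^T)^T = B
assert transpose(transpose(B)) == B
```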
VECTORS

A matrix that has exactly one row is called a row vector, and a matrix that has exactly one column is called a column vector. The term vector is used to refer to either a row vector or a column vector. The entries of a vector are called components. In this book, we normally work with column vectors, and we denote the set of all column vectors with n components by R^n. We write vectors as boldface lowercase letters such as u and v, and denote the ith component of the vector u by ui. For example, if

    u = [ ·]
        [−4] ,

then u2 = −4. Occasionally, we identify a vector u in R^n with an n-tuple, (u1, u2, …, un).

Because vectors are special types of matrices, we can add them and multiply them by scalars. In this context, we call the two arithmetic operations on vectors vector addition and scalar multiplication. These operations satisfy the properties listed in Theorem 1.1. In particular, the vector in R^n with all zero components is denoted by 0 and is called the zero vector. It satisfies u + 0 = u and 0u = 0 for every u in R^n.
Example 2. Let

    u = [·]          v = [·]
        [·]   and        [·] .

Then

    u + v = [·]              u − v = [·]
            [·]   and                [·] .
For a given matrix, it is often advantageous to consider its rows and columns as vectors. For example, for the matrix

    [2  4   3]
    [0  ·  −2] ,

the rows are [2 4 3] and [0 · −2], and the columns are

    [2]      [4]            [ 3]
    [0] ,    [·] ,   and    [−2] .

Because the columns of a matrix play a more important role than the rows, we introduce a special notation. When a capital letter denotes a matrix, we use the corresponding lowercase letter in boldface with a subscript j to represent the jth column of that matrix. So if A is an m × n matrix, its jth column is

         [a1j]
    aj = [a2j]
         [ ⋮ ]
         [amj] .
GEOMETRY OF VECTORS

For many applications,² it is useful to represent vectors geometrically as directed line segments, or arrows. For example, if v = [a; b] is a vector in R², we can represent v as an arrow from the origin to the point (a, b) in the xy-plane, as shown in Figure 1.1.

Figure 1.1 A vector in R²

² The importance of vectors in physics was recognized late in the nineteenth century. The algebra of vectors, developed by Oliver Heaviside (1850–1925) and Josiah Willard Gibbs (1839–1903), won out over the algebra of quaternions to become the language of physicists.
Example 3 (Velocity Vectors). A boat cruises in still water toward the northeast at 20 miles per hour. The velocity u of the boat is a vector that points in the direction of the boat's motion and whose length is 20, the boat's speed. If the positive y-axis represents north and the positive x-axis represents east, the boat's direction makes an angle of 45° with the x-axis. (See Figure 1.2.) We can compute the components of u = [u1; u2] by using trigonometry:

    u1 = 20 cos 45° = 10√2    and    u2 = 20 sin 45° = 10√2.

Therefore u = [10√2; 10√2], where the units are in miles per hour.
VECTOR ADDITION AND THE PARALLELOGRAM LAW

We can represent vector addition graphically, using arrows, by a result called the parallelogram law.³ To add nonzero vectors u and v, first form a parallelogram with adjacent sides u and v. Then the sum u + v is the arrow along the diagonal of the parallelogram, ending at the point (a + c, b + d), as shown in Figure 1.3.

Figure 1.3 The parallelogram law of vector addition

Velocities can be combined by adding the vectors that represent them.
Example 4. Imagine that the boat from the previous example is now cruising on a river, which flows to the east at 7 miles per hour. As before, the bow of the boat points toward the northeast, and its speed relative to the water is 20 miles per hour. In this case, the vector u = [10√2; 10√2], which we calculated in the previous example, represents the boat's velocity (in miles per hour) relative to the river. To find the velocity of the boat relative to the shore, we must add a vector v, representing the velocity of the river, to the vector u. Since the river flows toward the east at 7 miles per hour, its velocity vector is v = [7; 0]. We can represent the sum of the vectors u and v by using the parallelogram law, as shown in Figure 1.4. The velocity of the boat relative to the shore (in miles per hour) is the vector

    u + v = [10√2 + 7]
            [10√2    ] .

³ A justification of the parallelogram law by Heron of Alexandria (first century C.E.) appears in his Mechanics.
Figure 1.4 The boat velocity u, the water velocity v, and their sum

To find the speed of the boat, we use the Pythagorean theorem, which tells us that the length of a vector with endpoint (p, q) is √(p² + q²). Using the fact that the components of u + v are p = 10√2 + 7 and q = 10√2, respectively, it follows that the speed of the boat is

    √(p² + q²) ≈ 25.44 mph.
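Examples 3 and 4 and the speed computation can be reproduced with a few lines of arithmetic; this Python sketch follows the text's numbers:

```python
import math

speed_boat = 20.0                    # mph, heading northeast (45 degrees)
u = (speed_boat * math.cos(math.radians(45)),
     speed_boat * math.sin(math.radians(45)))   # (10*sqrt(2), 10*sqrt(2))

v = (7.0, 0.0)                       # river current: 7 mph due east

w = (u[0] + v[0], u[1] + v[1])       # velocity relative to the shore

speed = math.hypot(w[0], w[1])       # sqrt(p^2 + q^2), Pythagorean theorem
print(round(speed, 2))               # 25.44
```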
SCALAR MULTIPLICATION

We can also represent scalar multiplication graphically, using arrows. If v = [a; b] is a vector and c is a positive scalar, the scalar multiple cv is a vector that points in the same direction as v and whose length is c times the length of v. This is shown in Figure 1.5(a). If c is negative, cv points in the opposite direction from v and has length |c| times the length of v. This is shown in Figure 1.5(b). We call two vectors parallel if one of them is a scalar multiple of the other.

Figure 1.5 Scalar multiplication of vectors: in (a), c > 0 and cv runs from the origin to (ca, cb); in (b), c < 0.
VECTORS IN R³

If we identify R³ with the set of all ordered triples, then the same geometric ideas that hold in R² are also true in R³. We may depict a vector v = [a; b; c] in R³ as an arrow emanating from the origin of the xyz-coordinate system, with the point (a, b, c) as its endpoint. (See Figure 1.6(a).) As is the case in R², we can view two nonzero vectors in R³ as adjacent sides of a parallelogram, and we can represent their addition by using the parallelogram law. (See Figure 1.6(b).) In real life, motion takes place in 3-dimensional space, and we can depict quantities such as velocities and forces as vectors in R³.
EXERCISES

In Exercises 1–12, compute the indicated matrices, where

    A = [·  4  −2]          B = [·  ·  ·]
        [·  0   3]   and        [·  ·  ·] .

1. 4A          2. −A           3. 4A − 2B      4. 3A + 2B
5. (2B)^T      6. A^T + 2B^T   7. A + B        8. (A + 2B)^T
9. A^T         10. A − B       11. −(B^T)      12. (−B)^T

In Exercises 13–24, compute the indicated matrices, if possible, where

    A = [·  ·  ·]          B = [·  ·]
        [·  ·  ·]   and        [·  ·]
                               [·  ·] .

13. −A          14. 3B          15. (−2)A       16. (2B)^T
17. A − B       18. A − B^T     19. A^T − B     20. 3A + 2B^T
21. (A + B)^T   22. (4A)^T      23. B − A^T     24. (B^T − A)^T

In Exercises 25–28, assume that A = [·].

25. Determine a12.    26. Determine a21.
27. Determine a1.     28. Determine a2.

In Exercises 29–32, assume that C = [·].

29. Determine c1.     30. Determine c2.
31. Determine the first row of C.    32. Determine the second row of C.

33. An airplane is flying with a ground speed of 300 mph […]. (See Figure 1.7.)

Figure 1.7 A view of the airplane from above

35. A pilot keeps her airplane pointed in a northeastward direction while maintaining an airspeed (speed relative to the surrounding air) of 300 mph. A wind from the west blows eastward at 50 mph.
(a) Find the velocity (in mph) of the airplane relative to the ground.
(b) What is the speed (in mph) of the airplane relative to the ground?

36. Suppose that in a medical study of 20 people, for each i, 1 ≤ i ≤ 20, the 3 × 1 vector ui is defined so that its components respectively represent the blood pressure, pulse rate, and cholesterol reading of the ith person. Provide an interpretation of the vector (1/20)(u1 + u2 + ⋯ + u20).

In Exercises 37–56, determine whether the statements are true or false.

37. Matrices must be of the same size for their sum to be defined.
38. The transpose of a sum of two matrices is the sum of the transposed matrices.
39. Every vector is a matrix.
40. A scalar multiple of the zero matrix is the zero scalar.
41. The transpose of a matrix is a matrix of the same size.
42. A submatrix of a matrix may be a vector.
43. If B is a 3 × 4 matrix, then its rows are 4 × 1 vectors.
44. The (3, 4)-entry of a matrix lies in column 3 and row 4.
45. In a zero matrix, every entry is 0.
46. An m × n matrix has m + n entries.
47. If v and w are vectors such that v = −3w, then v and w are parallel.
48. If A and B are any m × n matrices, then A − B = A + (−1)B.
49. The (i, j)-entry of A^T equals the (j, i)-entry of A.
50. If A = [·] and B = [·], then A = B.
51. In any matrix A, the sum of the entries of 3A equals three times the sum of the entries of A.
52. Matrix addition is commutative.
53. Matrix addition is associative.
54. For any m × n matrices A and B and any scalars c and d, (cA + dB)^T = cA^T + dB^T.
55. If A is a matrix, then cA is the same size as A for every scalar c.
56. If A is a matrix for which the sum A + A^T is defined, then A is a square matrix.

57. Let A and B be matrices of the same size.
(a) Prove that the jth column of A + B is aj + bj.
(b) Prove that for any scalar c, the jth column of cA is c·aj.
58. For any m × n matrix A, prove that 0A = 0, the m × n zero matrix.
59. For any m × n matrix A, prove that 1A = A.
60. Prove Theorem 1.1(a).    61. Prove Theorem 1.1(c).    62. Prove Theorem 1.1(d).
63. Prove Theorem 1.1(e).    64. Prove Theorem 1.1(g).
65. Prove Theorem 1.2(b).    66. Prove Theorem 1.2(c).

A square matrix A is called a diagonal matrix if aij = 0 whenever i ≠ j. Exercises 67–70 are concerned with diagonal matrices.

67. Prove that a square zero matrix is a diagonal matrix.
68. Prove that if B is a diagonal matrix, then cB is a diagonal matrix for any scalar c.
69. Prove that if B is a diagonal matrix, then B^T is a diagonal matrix.
70. Prove that if B and C are diagonal matrices of the same size, then B + C is a diagonal matrix.

A (square) matrix A is said to be symmetric if A = A^T. Exercises 71–78 are concerned with symmetric matrices.

71. Give examples of 2 × 2 and 3 × 3 symmetric matrices.
72. Prove that the (i, j)-entry of a symmetric matrix equals the (j, i)-entry.
73. Prove that a square zero matrix is symmetric.
74. Prove that if B is a symmetric matrix, then so is cB for any scalar c.
75. Prove that if B is a square matrix, then B + B^T is symmetric.
76. Prove that if B and C are n × n symmetric matrices, then so is B + C.
77. Is a square submatrix of a symmetric matrix necessarily a symmetric matrix? Justify your answer.
78. Prove that a diagonal matrix is symmetric.

A (square) matrix A is called skew-symmetric if A^T = −A. Exercises 79–81 are concerned with skew-symmetric matrices.

79. What must be true about the (i, i)-entries of a skew-symmetric matrix? Justify your answer.
80. Give an example of a nonzero 2 × 2 skew-symmetric matrix B. Now show that every 2 × 2 skew-symmetric matrix is a scalar multiple of B.
81. Show that every 3 × 3 matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix.

82.* The trace of an n × n matrix A, written trace(A), is defined to be the sum

    trace(A) = a11 + a22 + ⋯ + ann.

Prove that, for any n × n matrices A and B and scalar c, the following statements are true:
(a) trace(A + B) = trace(A) + trace(B).
(b) trace(cA) = c · trace(A).
(c) trace(A^T) = trace(A).

83. Probability vectors are vectors whose components are nonnegative and have a sum of 1. Show that if p and q are probability vectors and a and b are nonnegative scalars with a + b = 1, then ap + bq is a probability vector.

* This exercise is used in Section […].
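The special matrices introduced in Exercises 67–82 — diagonal, symmetric, and the trace — have direct computational analogues. A Python sketch of these predicates (the names and test matrices are ours, not the book's):

```python
def is_diagonal(A):
    """A square matrix is diagonal if a_ij = 0 whenever i != j."""
    return all(A[i][j] == 0
               for i in range(len(A)) for j in range(len(A))
               if i != j)

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def is_symmetric(A):
    return A == transpose(A)            # A = A^T

def trace(A):
    return sum(A[i][i] for i in range(len(A)))   # a11 + a22 + ... + ann

D = [[2, 0], [0, -5]]
S = [[1, 7], [7, 3]]
print(is_diagonal(D), is_symmetric(D))   # True True  (Exercise 78 in action)
print(is_symmetric(S), is_diagonal(S))   # True False
print(trace(S))                          # 4
```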
In the following exercise, use either a calculator with matrix capabilities or computer software such as MATLAB to solve the problem:

84. Consider the matrices

    A = [·]   and   B = [·] .

(a) Compute A + 2B.
(b) Compute A − B.
(c) Compute A^T + B^T.
SOLUTIONS TO THE PRACTICE PROBLEMS

1. (a) The (1, 2)-entry of A is 2.
   (b) The (2, 2)-entry of A is 3.

2. (a) A − B = [·]   (b) 2A = [·]   (c) A + 3B = [·]

3. (a) A^T = [·]   (b) (3B)^T = [·]   (c) (A + B)^T = [·]
1.2 LINEAR COMBINATIONS, MATRIX–VECTOR PRODUCTS, AND SPECIAL MATRICES

In this section, we explore some applications involving matrix operations and introduce the product of a matrix and a vector.

Suppose that 20 students are enrolled in a linear algebra course, in which two tests, a quiz, and a final exam are given. Let

    u = [u1; u2; …; u20],

where ui denotes the score of the ith student on the first test. Likewise, define vectors v, w, and z similarly for the second test, the quiz, and the final exam, respectively. Assume that the instructor computes a student's course average by counting each test score twice as much as a quiz score, and the final exam score three times as much as a test score. Thus the weights for the two tests, the quiz, and the final exam are, respectively, 2/11, 2/11, 1/11, and 6/11 (the weights must sum to one). Now consider the vector

    y = (2/11)u + (2/11)v + (1/11)w + (6/11)z.

The first component y1 represents the first student's course average, the second component y2 represents the second student's course average, and so on. Notice that y is a sum of scalar multiples of u, v, w, and z. This form of vector sum is so important that it merits its own definition.
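Each course average is the same weighted sum applied to one student's four scores, so y can be computed componentwise. A Python sketch with three hypothetical students (the scores below are made up for illustration):

```python
# One score vector per assessment; component i belongs to student i.
u = [80, 90, 70]   # test 1
v = [85, 95, 65]   # test 2
w = [88, 92, 60]   # quiz
z = [90, 85, 75]   # final exam

weights = (2/11, 2/11, 1/11, 6/11)   # must sum to 1
assert abs(sum(weights) - 1.0) < 1e-12

# y = (2/11)u + (2/11)v + (1/11)w + (6/11)z, computed componentwise.
y = [weights[0]*ui + weights[1]*vi + weights[2]*wi + weights[3]*zi
     for ui, vi, wi, zi in zip(u, v, w, z)]

print([round(a, 1) for a in y])      # [87.1, 88.4, 70.9]
```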
Definitions. A linear combination of vectors u1, u2, …, uk is a vector of the form

    c1u1 + c2u2 + ⋯ + ckuk,

where c1, c2, …, ck are scalars. These scalars are called the coefficients of the linear combination.

Note that a linear combination of one vector is simply a scalar multiple of that vector. In the previous example, the vector y of the students' course averages is a linear combination of the vectors u, v, w, and z; the coefficients are the weights. Indeed, any weighted average produces a linear combination of the scores. Notice that

    [·; ·] = −3[·; ·] + 4[·; ·] + 1[·; ·].

Thus [·; ·] is a linear combination of the three vectors on the right, with coefficients −3, 4, and 1. We can also write

    [·; ·] = 1[·; ·] + 2[·; ·] − 1[·; ·].

This equation expresses the same vector as a linear combination of the same three vectors, but now the coefficients are 1, 2, and −1. So the set of coefficients that expresses one vector as a linear combination of others need not be unique.
Example 1. (a) To determine whether [4; −1] is a linear combination of [2; 3] and [3; 1], we seek scalars x1 and x2 such that

    x1 [2] + x2 [3] = [ 4]
       [3]      [1]   [−1] .

That is, we seek a solution of the system of equations

    2x1 + 3x2 =  4
    3x1 +  x2 = −1.

Because these equations represent nonparallel lines in the plane, there is exactly one solution, namely, x1 = −1 and x2 = 2. Therefore [4; −1] is a (unique) linear
combination of the vectors [2; 3] and [3; 1], namely,

    [ 4]      [2]     [3]
    [−1] = −1 [3] + 2 [1] .

(See Figure 1.8.)

Figure 1.8 The vector [4; −1] is a linear combination of [2; 3] and [3; 1].
(b) To determine whether [−4; −2] is a linear combination of [6; 3] and [2; 1], we perform a similar computation and produce the system of equations

    6x1 + 2x2 = −4
    3x1 +  x2 = −2.

Since the first equation is twice the second, we need only solve 3x1 + x2 = −2. This equation represents a line in the plane, and the coordinates of any point on the line give a solution. For example, we can let x1 = −2 and x2 = 4. In this case, we have

    [−4]      [6]     [2]
    [−2] = −2 [3] + 4 [1] .

There are infinitely many solutions. (See Figure 1.9.)

Figure 1.9 The vector [−4; −2] is a linear combination of [6; 3] and [2; 1].
(c) To determine whether [3; 4] is a linear combination of [3; 2] and [6; 4], we must solve the system of equations

    3x1 + 6x2 = 3
    2x1 + 4x2 = 4.

If we add −2/3 times the first equation to the second, we obtain 0 = 2, an equation with no solutions. Indeed, the two original equations represent parallel lines in the plane, so the original system has no solutions. We conclude that [3; 4] is not a linear combination of [3; 2] and [6; 4]. (See Figure 1.10.)

Figure 1.10 The vector [3; 4] is not a linear combination of [3; 2] and [6; 4].
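The three outcomes in parts (a)–(c) — a unique solution, infinitely many, or none — can be detected from the 2 × 2 system itself: the determinant of the coefficient matrix is zero exactly when the two vectors are parallel. A Python sketch (the helper and its return convention are ours, not the book's):

```python
def combo_coefficients(u, v, b, eps=1e-12):
    """Try to solve c1*u + c2*v = b for 2-component vectors.
    Returns (c1, c2), or the string 'many'/'none' in the degenerate cases."""
    det = u[0] * v[1] - u[1] * v[0]   # 0 exactly when u and v are parallel
    if abs(det) > eps:                # nonparallel: a unique solution
        c1 = (b[0] * v[1] - b[1] * v[0]) / det
        c2 = (u[0] * b[1] - u[1] * b[0]) / det
        return (c1, c2)
    # Parallel columns: either a whole line of solutions, or none at all.
    consistent = abs(u[0] * b[1] - u[1] * b[0]) < eps and \
                 abs(v[0] * b[1] - v[1] * b[0]) < eps
    return "many" if consistent else "none"

print(combo_coefficients((2, 3), (3, 1), (4, -1)))   # (-1.0, 2.0)  part (a)
print(combo_coefficients((6, 3), (2, 1), (-4, -2)))  # many         part (b)
print(combo_coefficients((3, 2), (6, 4), (3, 4)))    # none         part (c)
```

The closed-form expressions for c1 and c2 are Cramer's rule for a 2 × 2 system, which the book develops in a later chapter.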
Example 2.
Given vectors u1, u2, and u3, show that the sum of any two linear combinations of these vectors is also a linear combination of these vectors.

Solution. Suppose that w and z are linear combinations of u1, u2, and u3. Then we may write

    w = a·u1 + b·u2 + c·u3    and    z = a′·u1 + b′·u2 + c′·u3,

where a, b, c, a′, b′, c′ are scalars. So

    w + z = (a + a′)u1 + (b + b′)u2 + (c + c′)u3,

which is also a linear combination of u1, u2, and u3.
STANDARD VECTORS

We can write any vector [a; b] in R² as a linear combination of the two vectors [1; 0] and [0; 1] as follows:

    [a]     [1]     [0]
    [b] = a [0] + b [1] .
The vectors [1; 0] and [0; 1] are called the standard vectors of R². Similarly, we can write any vector [a; b; c] in R³ as a linear combination of three vectors, as follows:

    [a]     [1]     [0]     [0]
    [b] = a [0] + b [1] + c [0] .
    [c]     [0]     [0]     [1]

The vectors [1; 0; 0], [0; 1; 0], and [0; 0; 1] are called the standard vectors of R³. In general, we define the standard vectors of R^n by

    e1 = [1; 0; …; 0],   e2 = [0; 1; …; 0],   …,   en = [0; 0; …; 1].

(See Figure 1.11.)

Figure 1.11 The standard vectors of R² and of R³
From the preceding equations, it is easy to see that every vector in R^n is a linear combination of the standard vectors of R^n. In fact, for any vector v in R^n,

    v = v1e1 + v2e2 + ⋯ + vnen.

(See Figure 1.13.)

Now let u and v be nonparallel vectors, and let w be any vector in R². Begin with the endpoint of w and create a parallelogram with sides au and bv, so that w is its diagonal. It follows that w = au + bv; that is, w is a linear combination of the vectors u and v. (See Figure 1.12.)

Figure 1.12 The vector w is a linear combination of the nonparallel vectors u and v.

More generally, the following statement is true:

    If u and v are any nonparallel vectors in R², then every vector in R² is a linear combination of u and v.
Figure 1.13 The vector v as a linear combination of the standard vectors, in R² and in R³
Practice Problem 1 ▸ Let w = [·; ·] and S = { [·; ·], [·; ·] }.

(a) Without doing any calculations, explain why w can be written as a linear combination of the vectors in S.
(b) Express w as a linear combination of the vectors in S.

Suppose that a garden supply store sells three mixtures of grass seed. The deluxe mixture is 80% bluegrass and 20% rye, the standard mixture is 60% bluegrass and 40% rye, and the economy mixture is 40% bluegrass and 60% rye. One way to record this information is with the following 2 × 3 matrix:

            deluxe   standard   economy
    B =    [  .80      .60        .40  ]    bluegrass
           [  .20      .40        .60  ]    rye

A customer wants to purchase a blend of grass seed containing 5 lb of bluegrass and 3 lb of rye. There are two natural questions that arise:

1. Is it possible to combine the three mixtures of seed into a blend that has exactly the desired amounts of bluegrass and rye, with no surplus of either?
2. If so, how much of each mixture should the store clerk add to the blend?
Let x1, x2, and x3 denote the numbers of pounds of the deluxe, standard, and economy mixtures, respectively, to be used in the blend. Then we have

    .80x1 + .60x2 + .40x3 = 5
    .20x1 + .40x2 + .60x3 = 3.

This is a system of two linear equations in three unknowns. Finding a solution of this system is equivalent to answering our second question. The technique for solving general systems is explored in great detail in Sections 1.3 and 1.4. Using matrix notation, we may rewrite these equations in the form
1.2 Linear Combinations, Matrix-Vector Products, and Special Matrices
Now we use matrix operations to rewrite this matrix equation, using the columns of B:

x₁ [.80] + x₂ [.60] + x₃ [.40]  =  [5]
   [.20]      [.40]      [.60]     [3] .

Thus we can rephrase the first question as follows: Is [5; 3] a linear combination of the columns [.80; .20], [.60; .40], and [.40; .60] of B? The result in the box on page 17 provides an affirmative answer. Because no two of the three vectors are parallel, [5; 3] is a linear combination of any pair of these vectors.
MATRIX-VECTOR PRODUCTS

For the matrix B and the vector x of the preceding example, define

Bx = [ .80  .60  .40 ] [x₁]        [.80]      [.60]      [.40]
     [ .20  .40  .60 ] [x₂]  = x₁ [.20] + x₂ [.40] + x₃ [.60] .
                       [x₃]

This definition provides another way to state the first question in the preceding example: Does the vector [5; 3] equal Bx for some vector x? Notice that for the matrix-vector product to make sense, the number of columns of B must equal the number of components in x. The general definition of a matrix-vector product is given next.

Definition. Let A be an m × n matrix and v be an n × 1 vector. We define the matrix-vector product of A and v, denoted by Av, to be the linear combination of the columns of A whose coefficients are the corresponding components of v. That is,

Av = v₁a₁ + v₂a₂ + ··· + vₙaₙ.
As we have noted, for Av to exist, the number of columns of A must equal the number of components of v. For example, if A has two columns a₁ and a₂ and v has components v₁ and v₂, then Av is the vector v₁a₁ + v₂a₂. [The numerical example given here is illegible.]
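As a quick illustration (not from the book), the definition can be turned into a short routine that builds Av column by column; the function name and the reuse of the grass-seed matrix B are our own choices:

```python
# Sketch: compute Av as the linear combination of the columns of A whose
# coefficients are the components of v (the definition in the text).

def matvec_by_columns(A, v):
    """Return Av, formed as v[0]*a1 + v[1]*a2 + ..., where aj is column j of A."""
    m, n = len(A), len(A[0])
    assert n == len(v), "number of columns of A must equal number of components of v"
    result = [0.0] * m
    for j in range(n):            # walk the columns of A
        for i in range(m):
            result[i] += v[j] * A[i][j]
    return result

B = [[0.80, 0.60, 0.40],
     [0.20, 0.40, 0.60]]
print(matvec_by_columns(B, [60, 50, 30]))   # ≈ [90, 50] lb of bluegrass and rye
```

The same routine reproduces the seed-stock computation that appears later in this section.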
Returning to the preceding garden supply store example, suppose that the store has 140 lb of seed in stock: 60 lb of the deluxe mixture, 50 lb of the standard mixture, and 30 lb of the economy mixture. We let

v = [60]
    [50]
    [30]

represent this information. Now the matrix-vector product

Bv = [ .80  .60  .40 ] [60]        [.80]      [.60]      [.40]   [90]  bluegrass
     [ .20  .40  .60 ] [50]  = 60 [.20] + 50 [.40] + 30 [.60] = [50]  rye
                       [30]

gives the number of pounds of each type of seed (lb) contained in the 140 pounds of seed that the garden supply store has in stock. For example, there are 90 pounds of bluegrass because 90 = .80(60) + .60(50) + .40(30).

There is another approach to computing the matrix-vector product that relies more on the entries of A than on its columns. Consider the following example:
Av = [a₁₁  a₁₂] [v₁]  = v₁ [a₁₁] + v₂ [a₁₂]  = [a₁₁v₁ + a₁₂v₂]
     [a₂₁  a₂₂] [v₂]       [a₂₁]      [a₂₂]    [a₂₁v₁ + a₂₂v₂] .

Notice that the first component of the vector Av is the sum of products of the corresponding entries of the first row of A and the components of v. Likewise, the second component of Av is the sum of products of the corresponding entries of the second row of A and the components of v. With this approach to computing a matrix-vector product, we can omit the intermediate step in the preceding illustration. For example, suppose

A = [2  3  1]        and        v = [-1]
    [1 -2  3]                       [ 1]
                                    [ 3] .

Then

Av = [2  3  1] [-1]    [(2)(-1) + (3)(1) + (1)(3) ]   [4]
     [1 -2  3] [ 1]  = [(1)(-1) + (-2)(1) + (3)(3)] = [6] .
               [ 3]
In general, you can use this technique to compute Av when A is an m × n matrix and v is a vector in Rⁿ. In this case, the ith component of Av is

[aᵢ₁  aᵢ₂  ···  aᵢₙ] [v₁]
                     [v₂]  = aᵢ₁v₁ + aᵢ₂v₂ + ··· + aᵢₙvₙ,
                     [⋮ ]
                     [vₙ]

which is the matrix-vector product of the ith row of A and v. The computation of all the components of the matrix-vector product Av is given by

Av = [a₁₁v₁ + a₁₂v₂ + ··· + a₁ₙvₙ]
     [a₂₁v₁ + a₂₂v₂ + ··· + a₂ₙvₙ]
     [             ⋮             ]
     [aₘ₁v₁ + aₘ₂v₂ + ··· + aₘₙvₙ] .
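The row-wise rule above can be sketched in a few lines of code (an illustration of the rule, not the book's own procedure), applied to the 2 × 3 example just computed:

```python
# Sketch: the ith component of Av is the dot product of row i of A with v.

def matvec_by_rows(A, v):
    """Return Av, computing each component as a_i1*v1 + ... + a_in*vn."""
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

A = [[2, 3, 1],
     [1, -2, 3]]
v = [-1, 1, 3]
print(matvec_by_rows(A, v))   # [4, 6]
```

Because both the column-wise and row-wise computations produce Av, they always agree; the row-wise form simply skips the intermediate linear combination.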
A sociologist is interested in studying the population changes within a metropolitan area as people move between the city and suburbs. From empirical evidence, she has discovered that in any given year, 15% of those living in the city will move to the suburbs and 3% of those living in the suburbs will move to the city. For simplicity, we assume that the metropolitan population remains stable. This information may be represented by the following matrix:

                  From
              City   Suburbs
To  City    [ .85     .03  ]  = A
    Suburbs [ .15     .97  ]

Notice that the entries of A are nonnegative and that the entries of each column sum to 1. Such a matrix is called a stochastic matrix. Suppose that there are now 500 thousand people living in the city and 700 thousand people living in the suburbs. The sociologist would like to know how many people will be living in each of the two areas next year. Figure 1.14 describes the changes of population from one year to the next. It follows that the number of people (in thousands) who will be living in the city next year is (.85)(500) + (.03)(700) = 446 thousand, and the number of people living in the suburbs is (.15)(500) + (.97)(700) = 754 thousand. If we let p represent the vector of current populations of the city and suburbs, we have

p = [500]
    [700] .
Figure 1.14 Movement between the city and suburbs.
We can find the populations in the next year by computing the matrix-vector product:

Ap = [ .85  .03 ] [500]  = [(.85)(500) + (.03)(700)]  = [446]
     [ .15  .97 ] [700]    [(.15)(500) + (.97)(700)]    [754] .

In other words, Ap is the vector of populations in the next year. If we want to determine the populations in two years, we can repeat this procedure by multiplying A by the vector Ap. That is, in two years, the vector of populations is A(Ap).
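This iteration is easy to sketch numerically (a hedged illustration; the helper name `step` is our own):

```python
# Multiplying by the stochastic matrix A advances the city/suburb
# populations (in thousands) by one year.

A = [[0.85, 0.03],
     [0.15, 0.97]]

def step(A, p):
    """One year of migration: returns Ap."""
    return [sum(a * x for a, x in zip(row, p)) for row in A]

p = [500, 700]                      # current populations: city, suburbs
next_year = step(A, p)              # ≈ [446, 754]
in_two_years = step(A, next_year)   # A(Ap)
print(next_year, in_two_years)
```

Note that each application of `step` preserves the total population of 1200 thousand, as it must, since the columns of A sum to 1.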
IDENTITY MATRICES

Suppose we let I₂ = [1 0; 0 1] and let v = [v₁; v₂] be any vector in R². Then

I₂v = v₁ [1] + v₂ [0]  = [v₁]  = v.
         [0]      [1]    [v₂]

So multiplication by I₂ leaves every vector v in R² unchanged. The same property holds in a more general context.

Definition. For each positive integer n, the n × n identity matrix Iₙ is the n × n matrix whose respective columns are the standard vectors e₁, e₂, ..., eₙ in Rⁿ.

For example,

I₂ = [1 0]        and        I₃ = [1 0 0]
     [0 1]                        [0 1 0]
                                  [0 0 1] .

Because the columns of Iₙ are the standard vectors of Rⁿ, it follows easily that Iₙv = v for any v in Rⁿ.
ROTATION MATRICES
Consider a point P₀ = (x₀, y₀) in R² with polar coordinates (r, α), where r ≥ 0 and α is the angle between the segment OP₀ and the positive x-axis. (See Figure 1.15.) Then x₀ = r cos α and y₀ = r sin α. Suppose that OP₀ is rotated by an angle θ to the segment OP₁, where P₁ = (x₁, y₁). Then (r, α + θ) represents the polar coordinates for P₁, and hence

x₁ = r cos(α + θ)
   = r(cos α cos θ − sin α sin θ)
   = (r cos α) cos θ − (r sin α) sin θ
   = x₀ cos θ − y₀ sin θ.

Similarly, y₁ = x₀ sin θ + y₀ cos θ. We can express these equations as a matrix equation by using a matrix-vector product. If we define A_θ by

A_θ = [cos θ  −sin θ]
      [sin θ   cos θ] ,

then

A_θ [x₀]  = [cos θ  −sin θ] [x₀]  = [x₀ cos θ − y₀ sin θ]  = [x₁]
    [y₀]    [sin θ   cos θ] [y₀]    [x₀ sin θ + y₀ cos θ]    [y₁] .

Figure 1.15 Rotation of a vector through the angle θ.

We call A_θ the θ-rotation matrix, or more simply, a rotation matrix. For any vector u, the vector A_θ u is the vector obtained by rotating u by an angle θ, where the rotation is counterclockwise if θ > 0 and clockwise if θ < 0.
To rotate the vector [3; 4] by 30°, we compute A₃₀° [3; 4]; that is,

[cos 30°  −sin 30°] [3]   [(√3/2)(3) − (1/2)(4)]       [3√3 − 4]
[sin 30°   cos 30°] [4] = [(1/2)(3) + (√3/2)(4)]  = ½ [3 + 4√3] .

Thus when [3; 4] is rotated by 30°, the resulting vector is ½ [3√3 − 4; 3 + 4√3].

It is interesting to observe that the 0°-rotation matrix A₀°, which leaves a vector unchanged, is given by A₀° = I₂. This is quite reasonable because multiplication by I₂ also leaves vectors unchanged.
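A quick numerical check of the 30° example (ours, not the book's; the exact components ½(3√3 − 4) and ½(3 + 4√3) come from the computation above):

```python
# Rotating u = [3, 4] counterclockwise by 30 degrees with the rotation matrix.

import math

def rotation_matrix(theta_degrees):
    t = math.radians(theta_degrees)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

u = [3, 4]
rotated = matvec(rotation_matrix(30), u)
# components should equal (3*sqrt(3) - 4)/2 and (3 + 4*sqrt(3))/2
expected = [(3 * math.sqrt(3) - 4) / 2, (3 + 4 * math.sqrt(3)) / 2]
print(rotated, expected)
```

Since rotation preserves length, the rotated vector still has length 5, which makes a handy sanity check.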
Besides rotations, other geometric transformations (such as reflections and projections) can be described as matrix-vector products. Examples are found in the exercises.

PROPERTIES OF MATRIX-VECTOR PRODUCTS

It is useful to note that the columns of a matrix can be represented as matrix-vector products of the matrix with the standard vectors. For example, if A is a matrix with columns a₁ and a₂, then

Ae₁ = 1·a₁ + 0·a₂ = a₁        and        Ae₂ = 0·a₁ + 1·a₂ = a₂.

The general result is stated as (d) of Theorem 1.3. For any m × n matrix A, A0 = 0′, where 0 is the n × 1 zero vector and 0′ is the m × 1 zero vector. This is easily seen, since the matrix-vector product A0 is a sum of products of columns of A and zeros. Similarly, for the m × n zero matrix O, Ov = 0′ for any n × 1 vector v. (See (f) and (g) of Theorem 1.3.)
THEOREM 1.3 (Properties of Matrix-Vector Products)

Let A and B be m × n matrices, and let u and v be vectors in Rⁿ. Then
(a) A(u + v) = Au + Av.
(b) A(cu) = c(Au) = (cA)u for every scalar c.
(c) (A + B)u = Au + Bu.
(d) Aeⱼ = aⱼ for j = 1, 2, ..., n, where eⱼ is the jth standard vector in Rⁿ.
(e) If B is an m × n matrix such that Bw = Aw for all w in Rⁿ, then B = A.
(f) A0 is the m × 1 zero vector.
(g) If O is the m × n zero matrix, then Ov is the m × 1 zero vector.
(h) Iₙv = v.
PROOF We prove part (a) and leave the rest for the exercises.
(a) Because the ith component of u + v is uᵢ + vᵢ, we have

A(u + v) = (u₁ + v₁)a₁ + (u₂ + v₂)a₂ + ··· + (uₙ + vₙ)aₙ
         = (u₁a₁ + u₂a₂ + ··· + uₙaₙ) + (v₁a₁ + v₂a₂ + ··· + vₙaₙ)
         = Au + Av.
It follows by repeated applications of Theorem 1.3(a) and (b) that the matrix-vector product of A and a linear combination of u₁, u₂, ..., u_k yields a linear combination of the vectors Au₁, Au₂, ..., Au_k. That is:

For any m × n matrix A, any scalars c₁, c₂, ..., c_k, and any vectors u₁, u₂, ..., u_k in Rⁿ,

A(c₁u₁ + c₂u₂ + ··· + c_k u_k) = c₁Au₁ + c₂Au₂ + ··· + c_k Au_k.
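The boxed identity can be spot-checked numerically (a single illustration, of course, not a proof; the vectors and scalars here are our own choices):

```python
# Check A(c1*u1 + c2*u2) = c1*(A u1) + c2*(A u2) for one concrete instance.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def lincomb(coeffs, vectors):
    """Return c1*v1 + c2*v2 + ... componentwise."""
    n = len(vectors[0])
    return [sum(c * vec[i] for c, vec in zip(coeffs, vectors)) for i in range(n)]

A = [[2, 3, 1],
     [1, -2, 3]]
u1, u2 = [1, 0, 2], [-1, 4, 5]
c1, c2 = 3, -2

left = matvec(A, lincomb([c1, c2], [u1, u2]))
right = lincomb([c1, c2], [matvec(A, u1), matvec(A, u2)])
print(left, right)   # the two sides agree
```

With these integer inputs both sides evaluate to [-18, 9] exactly.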
EXERCISES

In Exercises 1-16, compute the matrix-vector products. [The matrices and vectors are illegible.]

In Exercises 17-28, an angle θ and a vector u are given. Write the corresponding rotation matrix, and compute the vector found by rotating u by the angle θ. Draw a sketch and simplify your answers. [The angles and vectors are illegible.]

In Exercises 29-44, a vector u and a set S are given. If possible, write u as a linear combination of the vectors in S. [The vectors and sets are illegible.]

In Exercises 45-64, determine whether the statements are true or false.

45. A linear combination of vectors is a sum of scalar multiples of the vectors.
46. The coefficients in a linear combination can always be chosen to be positive scalars.
47. Every vector in R² can be written as a linear combination of the standard vectors of R².
48. Every vector in R² is a linear combination of any two nonparallel vectors.
49. The zero vector is a linear combination of any nonempty set of vectors.
50. The matrix-vector product of a 2 × 3 matrix and a 3 × 1 vector is a 3 × 1 vector.
51. The matrix-vector product of a 2 × 3 matrix and a 3 × 1 vector equals a linear combination of the rows of the matrix.
52. The product of a matrix and a standard vector equals a standard vector.
53. The rotation matrix A₁₈₀° equals −I₂.
54. The matrix-vector product of an m × n matrix and a vector yields a vector in Rⁿ.
55. Every vector in R² is a linear combination of two parallel vectors.
56. Every vector v in Rⁿ can be written as a linear combination of the standard vectors, using the components of v as the coefficients of the linear combination.
57. A vector with exactly one nonzero component is called a standard vector.
58. If A is an m × n matrix, u is a vector in Rⁿ, and c is a scalar, then A(cu) = c(Au).
59. If A is an m × n matrix, then the only vector u in Rⁿ such that Au = 0 is u = 0.
60. For any vector u in R², A_θ u is the vector obtained by rotating u by the angle θ.
61. If θ > 0, then A_θ u is the vector obtained by rotating u by a clockwise rotation of the angle θ.
62. If A is an m × n matrix and u and v are vectors in Rⁿ such that Au = Av, then u = v.
63. The matrix-vector product of an m × n matrix A and a vector u in Rⁿ equals u₁a₁ + u₂a₂ + ··· + uₙaₙ.
64. A matrix having nonnegative entries such that the sum of the entries in each column is 1 is called a stochastic matrix.

65. Use a matrix-vector product to show that if θ = 0°, then A_θ v = v for all v in R².
66. Use a matrix-vector product to show that if θ = 180°, then A_θ v = −v for all v in R².
67. Use matrix-vector products to show that, for any angles θ and β and any vector v in R², A_θ(A_β v) = A_{θ+β} v.
68. Compute A₋θ(A_θ u) and A_θ(A₋θ u) for any vector u in R² and any angle θ.
69. Suppose that in a metropolitan area there are 400 thousand people living in the city and 300 thousand people living in the suburbs. Use the stochastic matrix in Example 3 to determine (a) the number of people living in the city and suburbs after one year; (b) the number of people living in the city and suburbs after two years.
70. Let A and u be the matrix and vector shown. [Entries illegible.] Represent Au as a linear combination of the columns of A.

In Exercises 71-74, let A = [−1 0; 0 1] and u = [x; y].
71. Show that Au is the reflection of u about the y-axis.
72. Prove that A(Au) = u.
73. Modify the matrix A to obtain a matrix B such that Bu is the reflection of u about the x-axis.
74. Let C denote the rotation matrix that corresponds to θ = 180°.
(a) Find C.
(b) Use the matrix B in Exercise 73 to show that A(Cu) = C(Au) = Bu and B(Cu) = C(Bu) = Au.
(c) Interpret these equations in terms of reflections and rotations.

In Exercises 75-79, let A = [1 0; 0 0] and u = [x; y].

75. Show that Au is the projection of u on the x-axis.
76. Prove that A(Au) = Au.
77. Show that if v is any vector whose endpoint lies on the x-axis, then Av = v.
78. Modify the matrix A to obtain a matrix B so that Bu is the projection of u on the y-axis.
79. Let C denote the rotation matrix that corresponds to θ = 180°. (See Exercise 74(a).)
(a) Prove that A(Cu) = C(Au).
(b) Interpret the result in (a) geometrically.

80. Let u₁ and u₂ be vectors in Rⁿ. Prove that the sum of two linear combinations of these vectors is also a linear combination of these vectors.
81. Let u₁ and u₂ be vectors in Rⁿ. Let v and w be linear combinations of u₁ and u₂. Prove that any linear combination of v and w is also a linear combination of u₁ and u₂.
82. Let u₁ and u₂ be vectors in Rⁿ. Prove that a scalar multiple of a linear combination of these vectors is also a linear combination of these vectors.
83. Prove (b) of Theorem 1.3.
84. Prove (c) of Theorem 1.3.
85. Prove (d) of Theorem 1.3.
86. Prove (e) of Theorem 1.3.
87. Prove (f) of Theorem 1.3.
88. Prove (g) of Theorem 1.3.
89. Prove (h) of Theorem 1.3.

In Exercises 90 and 91, use either a calculator with matrix capabilities or computer software such as MATLAB to solve each problem.
1.3 Systems of Linear Equations
90. In reference to Exercise 69, determine the number of people living in the city and suburbs after 10 years.
91. For the matrices A and B and the vectors u and v shown [entries illegible],
(a) compute Au;
(b) compute B(u + v);
(c) compute (A + B)v;
(d) compute A(Bv).
SOLUTIONS TO THE PRACTICE PROBLEMS

1. (a) The vectors in S are nonparallel vectors in R², so by the result in the box on page 17, w can be written as a linear combination of the vectors in S.
(b) To express w as a linear combination of the vectors in S, we must find scalars x₁ and x₂ such that the linear combination with these coefficients equals w. Using elementary algebra, we see that such scalars exist. [The remainder of the computation is illegible.]
A linear equation in the variables (unknowns) x₁, x₂, ..., xₙ is an equation that can be written in the form

a₁x₁ + a₂x₂ + ··· + aₙxₙ = b,

where a₁, a₂, ..., aₙ, and b are real numbers. The scalars a₁, a₂, ..., aₙ are called the coefficients, and b is called the constant term of the equation. For example, 3x₁ − 7x₂ + x₃ = 19 is a linear equation in the variables x₁, x₂, and x₃, with coefficients 3, −7, and 1, and constant term 19. The equation 8x₂ − 12x₅ = 4x₁ − 9x₃ + 6 is also a linear equation because it can be written as

−4x₁ + 8x₂ + 9x₃ − 12x₅ = 6.

On the other hand, the equations

2x₁ − 7x₂ + x₃² = −3        and        4√x₁ − 3x₂ = 15

are not linear equations because they contain terms involving a product of variables, a square of a variable, or a square root of a variable. A system of linear equations is a set of m linear equations in the same n variables, where m and n are positive integers. We can write such a system in the
form
= b,
a,,x, + a12x2 + · · · + a,,x, a21X1
+ 022X2 + · ·· + a2,X, = b2
0 111 1X1
+ a,2X2 + · ·· + a,,x,
= h111 ,
where aᵢⱼ denotes the coefficient of xⱼ in equation i. For example, on page 18 we obtained the following system of 2 linear equations in the variables x₁, x₂, and x₃:

.80x₁ + .60x₂ + .40x₃ = 5
.20x₁ + .40x₂ + .60x₃ = 3        (1)
A solution of a system of linear equations in the variables x₁, x₂, ..., xₙ is a vector

[s₁]
[s₂]
[⋮ ]
[sₙ]

in Rⁿ such that every equation in the system is satisfied when each xᵢ is replaced by sᵢ. For example, [2; 5; 1] is a solution of system (1) because

.80(2) + .60(5) + .40(1) = 5        and        .20(2) + .40(5) + .60(1) = 3.

The set of all solutions of a system of linear equations is called the solution set of that system.
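Checking a candidate solution amounts to substituting its components into every equation, which is easy to sketch in code (an illustration of the definition; the function name and tolerance are our own choices):

```python
# Verify whether As = b, i.e. whether s satisfies every equation of the system.

def is_solution(A, b, s, tol=1e-9):
    for row, b_i in zip(A, b):
        if abs(sum(a * x for a, x in zip(row, s)) - b_i) > tol:
            return False
    return True

# System (1): the grass-seed system from Section 1.2.
A = [[0.80, 0.60, 0.40],
     [0.20, 0.40, 0.60]]
b = [5, 3]
print(is_solution(A, b, [2, 5, 1]))   # True
print(is_solution(A, b, [1, 1, 1]))   # False: the first equation gives 1.8, not 5
```

A small tolerance is used because the decimal coefficients are not exactly representable in floating point.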
Practice Problem 1 ► Determine whether (a) u and (b) v [entries illegible] are solutions of the system of linear equations

x₁ + 5x₃ − x₄ = 7
2x₁ − x₂ + 6x₃ = −8.
SYSTEMS OF 2 LINEAR EQUATIONS IN 2 VARIABLES

A linear equation in two variables x and y has the form ax + by = c. When at least one of a and b is nonzero, this is the equation of a line in the xy-plane. Thus a system of 2 linear equations in the variables x and y consists of a pair of equations, each of which describes a line in the plane:

a₁x + b₁y = c₁        is the equation of line L₁,
a₂x + b₂y = c₂        is the equation of line L₂.

Geometrically, a solution of such a system corresponds to a point lying on both of the lines L₁ and L₂. There are three different situations that can arise. If the lines are different and parallel, then they have no point in common. In this case, the system of equations has no solution. (See Figure 1.16.) If the lines are different but not parallel, then the two lines have a unique point of intersection. In this case, the system of equations has exactly one solution. (See Figure 1.17.)
Figure 1.16 L₁ and L₂ are parallel: no solution.
Figure 1.17 L₁ and L₂ are different but not parallel: exactly one solution.
Finally, if the two lines coincide, then every point on L₁ and L₂ satisfies both of the equations in the system, and so every point on L₁ and L₂ is a solution of the system. In this case, there are infinitely many solutions. (See Figure 1.18.) As we will soon see, no matter how many equations and variables a system has, there are exactly three possibilities for its solution set.

Every system of linear equations has no solution, exactly one solution, or infinitely many solutions.

Figure 1.18 L₁ and L₂ are the same: infinitely many solutions.
A system of linear equations that has one or more solutions is called consistent; otherwise, the system is called inconsistent. Figures 1.17 and 1.18 show consistent systems, while Figure 1.16 shows an inconsistent system.
ELEMENTARY ROW OPERATIONS

To find the solution set of a system of linear equations or determine that the system is inconsistent, we replace it by one with the same solutions that is more easily solved. Two systems of linear equations that have exactly the same solutions are called equivalent. Now we present a procedure for creating a simpler, equivalent system. It is based on an important technique for solving a system of linear equations taught in high school algebra classes. To illustrate this procedure, we solve the following system of three linear equations in the variables x₁, x₂, and x₃:

x₁ − 2x₂ −  x₃ = 3
3x₁ − 6x₂ − 5x₃ = 3        (2)
2x₁ −  x₂ +  x₃ = 0

We begin the simplification by eliminating x₁ from every equation but the first. To do so, we add appropriate multiples of the first equation to the second and third equations so that the coefficient of x₁ becomes 0 in these equations. Adding −3 times the first equation to the second makes the coefficient of x₁ equal 0 in the result:

−3x₁ + 6x₂ + 3x₃ = −9        (−3 times equation 1)
 3x₁ − 6x₂ − 5x₃ =  3        (equation 2)
            −2x₃ = −6        (sum)
Likewise, adding −2 times the first equation to the third makes the coefficient of x₁ 0 in the new third equation:

−2x₁ + 4x₂ + 2x₃ = −6        (−2 times equation 1)
 2x₁ −  x₂ +  x₃ =  0        (equation 3)
       3x₂ + 3x₃ = −6        (sum)

We now replace equation 2 with −2x₃ = −6, and equation 3 with 3x₂ + 3x₃ = −6, to transform system (2) into the following system:

x₁ − 2x₂ −  x₃ = 3
          −2x₃ = −6
     3x₂ + 3x₃ = −6

In this case, the calculation that makes the coefficient of x₁ equal 0 in the new second equation also makes the coefficient of x₂ equal 0. (This does not always happen, as you can see from the new third equation.) If we now interchange the second and third equations in this system, we obtain the following system:

x₁ − 2x₂ −  x₃ = 3
     3x₂ + 3x₃ = −6        (3)
          −2x₃ = −6

We can now solve the third equation for x₃ by multiplying both sides by −½ (or equivalently, dividing both sides by −2). This produces

x₁ − 2x₂ −  x₃ = 3
     3x₂ + 3x₃ = −6
            x₃ = 3.

By adding appropriate multiples of the third equation to the first and second, we can eliminate x₃ from every equation but the third. If we add the third equation to the first and add −3 times the third equation to the second, we obtain

x₁ − 2x₂ = 6
     3x₂ = −15
      x₃ = 3.

Now solve for x₂ by multiplying the second equation by ⅓. The result is

x₁ − 2x₂ = 6
      x₂ = −5
      x₃ = 3.

Finally, adding 2 times the second equation to the first produces the very simple system

x₁ = −4
x₂ = −5        (4)
x₃ = 3,
whose solution is obvious. You should check that replacing x₁ by −4, x₂ by −5, and x₃ by 3 makes each equation in system (2) true, so that [−4; −5; 3] is a solution of system (2). Indeed, it is the only solution, as we will soon show.
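The suggested check is a one-liner per equation; this sketch (ours, not the book's) substitutes the solution into system (2) and confirms that every residual is zero:

```python
# Substitute the solution found above into each equation of system (2)
# and compute the residual (left-hand side minus right-hand side).

A = [[1, -2, -1],
     [3, -6, -5],
     [2, -1,  1]]
b = [3, 3, 0]
s = [-4, -5, 3]

residuals = [sum(a * x for a, x in zip(row, s)) - b_i
             for row, b_i in zip(A, b)]
print(residuals)   # [0, 0, 0]
```

Because all the entries are integers, the residuals are exactly zero, with no floating-point tolerance needed.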
In each step just presented, the names of the variables played no essential role. All of the operations that we performed on the system of equations can also be performed on matrices. In fact, we can express the original system

x₁ − 2x₂ −  x₃ = 3
3x₁ − 6x₂ − 5x₃ = 3        (2)
2x₁ −  x₂ +  x₃ = 0

as the matrix equation Ax = b, where

A = [1 −2 −1]        x = [x₁]        b = [3]
    [3 −6 −5] ,          [x₂] ,  and     [3]
    [2 −1  1]            [x₃]            [0] .
Note that the columns of A contain the coefficients of x₁, x₂, and x₃ from system (2). For this reason, A is called the coefficient matrix (or the matrix of coefficients) of system (2). All the information that is needed to find the solution set of this system is contained in the matrix

[1 −2 −1  3]
[3 −6 −5  3]
[2 −1  1  0] ,

which is called the augmented matrix of the system. This matrix is formed by augmenting the coefficient matrix A to include the vector b. We denote the augmented matrix by [A b]. If A is an m × n matrix, then a vector u in Rⁿ is a solution of Ax = b if and only if Au = b. Thus [−4; −5; 3] is a solution of system (2) because

[1 −2 −1] [−4]   [3]
[3 −6 −5] [−5] = [3] .
[2 −1  1] [ 3]   [0]
Example 1

For the system of linear equations

 x₁       + 5x₃ − x₄ =  7
2x₁ − x₂ + 6x₃       = −8,

the coefficient matrix and the augmented matrix are

[1  0  5 −1]        and        [1  0  5 −1   7]
[2 −1  6  0]                   [2 −1  6  0  −8] ,

respectively. Note that the variable x₂ is missing from the first equation and x₄ is missing from the second equation in the system (that is, the coefficients of x₂ in the first equation and of x₄ in the second equation are 0). As a result, the (1, 2)- and (2, 4)-entries of the coefficient and augmented matrices of the system are 0.
In solving system (2), we performed three types of operations: interchanging the position of two equations in a system, multiplying an equation in the system by a
nonzero scalar, and adding a multiple of one equation in the system to another. The analogous operations that can be performed on the augmented matrix of the system are given in the following definition.

Definition. Any one of the following three operations performed on a matrix is called an elementary row operation:
1. Interchange any two rows of the matrix. (interchange operation)
2. Multiply every entry of some row of the matrix by the same nonzero scalar. (scaling operation)
3. Add a multiple of one row of the matrix to another row. (row addition operation)

To denote how an elementary row operation changes a matrix A into a matrix B, we use the following notation:
1. A —(rᵢ ↔ rⱼ)→ B indicates that row i and row j are interchanged.
2. A —(crᵢ → rᵢ)→ B indicates that the entries of row i are multiplied by the nonzero scalar c.
3. A —(crᵢ + rⱼ → rⱼ)→ B indicates that c times row i is added to row j.
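The three operations translate directly into small list manipulations; this sketch (our own helper names, with 0-based row indices rather than the text's 1-based numbering) implements each one on a copy of the matrix:

```python
# The three elementary row operations, each returning a new matrix.

def interchange(M, i, j):
    """Swap rows i and j."""
    M = [row[:] for row in M]          # work on a copy
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, c):
    """Multiply every entry of row i by the nonzero scalar c."""
    assert c != 0, "scaling must use a nonzero scalar"
    M = [row[:] for row in M]
    M[i] = [c * x for x in M[i]]
    return M

def row_add(M, i, j, c):
    """Add c times row i to row j."""
    M = [row[:] for row in M]
    M[j] = [x + c * y for x, y in zip(M[j], M[i])]
    return M

M = [[1, 2], [3, 4]]
print(row_add(M, 0, 1, -3))   # [[1, 2], [0, -2]]
```

Returning a fresh matrix rather than mutating in place makes it easy to chain operations and to reverse them, mirroring the reversibility discussed below.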
Let A and B be the matrices shown. [The entries of A and B and of the intermediate matrices are illegible.] The following sequence of elementary row operations transforms A into B: the row addition −2r₁ + r₂ → r₂, the row addition −3r₁ + r₃ → r₃, an interchange of two rows, and a final row addition involving r₃ and r₂.

We may perform several elementary row operations in succession, indicating the operations by stacking the individual labels above a single arrow. These operations are performed in top-to-bottom order. In the previous example, we could indicate how to transform the second matrix into the fourth matrix of the example by stacking the labels of the two intervening operations above a single arrow.
Every elementary row operation can be reversed. That is, if we perform an elementary row operation on a matrix A to produce a new matrix B, then we can perform an elementary row operation of the same kind on B to obtain A. If, for example, we obtain B by interchanging two rows of A, then interchanging the same rows of B yields A. Also, if we obtain B by multiplying some row of A by the nonzero constant c, then multiplying the same row of B by 1/c yields A. Finally, if we obtain B by adding c times row i of A to row j, then adding -c times row i of B to row j results in A.
Suppose that we perform an elementary row operation on an augmented matrix [A b] to obtain a new matrix [A' b']. The reversibility of the elementary row operations assures us that the solutions of Ax = b are the same as those of A'x = b'. Thus performing an elementary row operation on the augmented matrix of a system of linear equations does not change the solution set. That is, each elementary row operation produces the augmented matrix of an equivalent system of linear equations. We assume this result throughout the rest of Chapter 1; it is proved in Section 2.3. Thus, because the system of linear equations (2) is equivalent to system (4), there is only one solution of system (2).
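The reversibility just described is easy to check mechanically. The following sketch (the function names are our own, not the text's) applies each kind of elementary row operation and then an operation of the same kind, and verifies that the original matrix is recovered.

```python
# Each of the three elementary row operations, and a check that each one is
# undone by an operation of the same kind.  The matrix and constant c below
# are arbitrary choices for the demonstration.
from fractions import Fraction

def interchange(M, i, j):
    """Interchange rows i and j."""
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, c):
    """Multiply row i by the nonzero constant c."""
    M = [row[:] for row in M]
    M[i] = [c * x for x in M[i]]
    return M

def add_multiple(M, i, j, c):
    """Add c times row i to row j."""
    M = [row[:] for row in M]
    M[j] = [a + c * b for a, b in zip(M[j], M[i])]
    return M

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
c = Fraction(5)

assert interchange(interchange(A, 0, 1), 0, 1) == A     # same rows again
assert scale(scale(A, 0, c), 0, 1 / c) == A             # scale by 1/c
assert add_multiple(add_multiple(A, 0, 1, c), 0, 1, -c) == A  # add -c times
```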
REDUCED ROW ECHELON FORM
We can use elementary row operations to simplify any system of linear equations until it is easy to see what the solution is. First, we represent the system by its augmented matrix, and then use elementary row operations to transform the augmented matrix into a matrix having a special form, which we call a reduced row echelon form. The system of linear equations whose augmented matrix has this form is equivalent to the original system and is easily solved. We now define this special form of matrix. In the following discussion, we call a row of a matrix a zero row if all its entries are 0 and a nonzero row otherwise. We call the leftmost nonzero entry of a nonzero row its leading entry.
Definitions  A matrix is said to be in row echelon form if it satisfies the following three conditions:
1. Each nonzero row lies above every zero row.
2. The leading entry of a nonzero row lies in a column to the right of the column containing the leading entry of any preceding row.
3. If a column contains the leading entry of some row, then all entries of that column below the leading entry are 0.5
If a matrix also satisfies the following two additional conditions, we say that it is in reduced row echelon form:6
4. If a column contains the leading entry of some row, then all the other entries of that column are 0.
5. The leading entry of each nonzero row is 1.

5 Condition 3 is a direct consequence of condition 2. We include it in this definition for emphasis, as is usually done when defining the row echelon form.
6 Inexpensive calculators are available that can compute the reduced row echelon form of a matrix. On such a calculator, or in computer software, the reduced row echelon form is usually obtained by using the command rref.
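Conditions 1-5 translate directly into a mechanical check. The following sketch (the function names are our own) tests conditions 1-3 for row echelon form and conditions 4-5 for reduced row echelon form.

```python
# A direct transcription of conditions 1-5 above.  Row echelon form is
# conditions 1-3; reduced row echelon form adds conditions 4 and 5.

def leading_index(row):
    """Column index of the leading entry, or None for a zero row."""
    for j, x in enumerate(row):
        if x != 0:
            return j
    return None

def is_row_echelon(M):
    leads = [leading_index(r) for r in M]
    for i in range(len(M) - 1):
        if leads[i] is None and leads[i + 1] is not None:
            return False          # condition 1: nonzero rows above zero rows
        if leads[i] is not None and leads[i + 1] is not None \
                and leads[i + 1] <= leads[i]:
            return False          # condition 2: leading entries move right
    # condition 3 (zeros below each leading entry) follows from 1 and 2
    return True

def is_reduced_row_echelon(M):
    if not is_row_echelon(M):
        return False
    for i, row in enumerate(M):
        j = leading_index(row)
        if j is None:
            continue
        if row[j] != 1:
            return False          # condition 5: each leading entry is 1
        if any(M[k][j] != 0 for k in range(len(M)) if k != i):
            return False          # condition 4: only nonzero in its column
    return True

M = [[1, 0, 2], [0, 1, 3], [0, 0, 0]]
assert is_row_echelon(M) and is_reduced_row_echelon(M)
N = [[1, 7, -3], [0, 0, 2]]       # row echelon, but leading entry 2 is not 1
assert is_row_echelon(N) and not is_reduced_row_echelon(N)
```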
34
CHAPTER 1 Matrices, Vectors, and Systems of Linear Equations
A matrix having either of the forms that follow is in reduced row echelon form. In these diagrams, * denotes an arbitrary entry (that may or may not be 0).

[1 * 0 0 *]        [0 1 * 0 * 0]
[0 0 1 0 *]        [0 0 0 1 * 0]
[0 0 0 1 *]        [0 0 0 0 0 1]
                   [0 0 0 0 0 0]
Notice that the leading entries (which must be 1's by condition 5) form a pattern suggestive of a flight of stairs. Moreover, these leading entries of 1 are the only nonzero entries in their columns. Also, each nonzero row precedes all of the zero rows.
Example 3

The following matrices are not in reduced row echelon form:

    [1 0 6 0]            [1 7 -3 2 6]
A = [0 0 0 1]        B = [0 0  1 4 9]
    [0 1 4 0]            [0 0  0 2 3]
                         [0 0  0 0 0]
Matrix A fails to be in reduced row echelon form because the leading entry of the third row does not lie to the right of the leading entry of the second row. Notice, however, that the matrix obtained by interchanging the second and third rows of A is in reduced row echelon form. Matrix B is not in reduced row echelon form for two reasons. The leading entry of the third row is not 1, and the leading entries in the second and third rows are not the only nonzero entries in their columns. That is, the third column of B contains the first nonzero entry in row 2, but the (2, 3)-entry of B is not the only nonzero entry in column 3. Notice, however, that although B is not in reduced row echelon form, B is in row echelon form.
A system of linear equations can be easily solved if its augmented matrix is in reduced row echelon form. For example, the system

x1 = -4
x2 = -5
x3 =  3

has a solution that is immediately evident. If a system of equations has infinitely many solutions, then obtaining the solution is somewhat more complicated. Consider, for example, the system of linear equations

x1 - 3x2 + 2x4 = 7
       x3 + 6x4 = 9
             x5 = 2
              0 = 0.        (5)
The augmented matrix of this system is

[1 -3 0 2 0 7]
[0  0 1 6 0 9]
[0  0 0 0 1 2]
[0  0 0 0 0 0],

which is in reduced row echelon form.
Since the equation 0 = 0 in system (5) provides no useful information, we can disregard it. System (5) is consistent, but it is not possible to find a unique value for each variable because the system has infinitely many solutions. Instead, we can solve for some of the variables, called basic variables, in terms of the others, called the free variables. The basic variables correspond to the leading entries of the augmented matrix. In system (5), for example, the basic variables are x1, x3, and x5 because the leading entries of the augmented matrix are in columns 1, 3, and 5, respectively. The free variables are x2 and x4. We can easily solve for the basic variables in terms of the free variables by moving the free variables and their coefficients from the left side of each equation to the right. The resulting equations

x1 = 7 + 3x2 - 2x4
x2      free
x3 = 9       - 6x4
x4      free
x5 = 2
provide a general solution of system (5). This means that for every choice of values of the free variables, these equations give the corresponding values of x1, x3, and x5 in one solution of the system, and furthermore, every solution of the system has this form for some values of the free variables. For example, choosing x2 = 0 and x4 = 0 gives the solution

[x1]   [7]
[x2]   [0]
[x3] = [9] ,
[x4]   [0]
[x5]   [2]

whereas choosing x2 = -2 and x4 = 1 yields the solution

[x1]   [-1]
[x2]   [-2]
[x3] = [ 3] .
[x4]   [ 1]
[x5]   [ 2]

The general solution can also be written in vector form as

[x1]   [7]        [3]        [-2]
[x2]   [0]        [1]        [ 0]
[x3] = [9] + x2 · [0] + x4 · [-6] .
[x4]   [0]        [0]        [ 1]
[x5]   [2]        [0]        [ 0]

In this form, it is apparent that every solution of the system is the sum of the vector [7, 0, 9, 0, 2] and a linear combination of the vectors [3, 1, 0, 0, 0] and [-2, 0, -6, 1, 0], with the coefficients being the free variables x2 and x4, respectively.
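The recipe just illustrated — reading a particular solution and one direction vector per free variable off an augmented matrix in reduced row echelon form — can be sketched as follows (the function name is our own; exact arithmetic is assumed). Applied to the augmented matrix of system (5), it reproduces the three vectors displayed above.

```python
# Read off the vector form of the general solution from an augmented matrix
# [R | c] that is already in reduced row echelon form.
from fractions import Fraction as F

def general_solution(aug):
    """Return (particular solution, {free column: direction vector})."""
    n = len(aug[0]) - 1                       # number of variables
    pivots = {}                               # basic-variable column -> row
    for i, row in enumerate(aug):
        for j in range(n):
            if row[j] != 0:
                pivots[j] = i
                break
        else:
            if row[n] != 0:
                raise ValueError("inconsistent system")
    free = [j for j in range(n) if j not in pivots]
    particular = [aug[pivots[j]][n] if j in pivots else F(0) for j in range(n)]
    directions = {}
    for f in free:
        v = [F(0)] * n
        v[f] = F(1)
        for j, i in pivots.items():
            v[j] = -aug[i][f]                 # move free term to the right side
        directions[f] = v
    return particular, directions

# The augmented matrix of system (5):
aug = [[F(1), F(-3), F(0), F(2), F(0), F(7)],
       [F(0), F(0),  F(1), F(6), F(0), F(9)],
       [F(0), F(0),  F(0), F(0), F(1), F(2)],
       [F(0), F(0),  F(0), F(0), F(0), F(0)]]
p, d = general_solution(aug)
assert p == [7, 0, 9, 0, 2]
assert d[1] == [3, 1, 0, 0, 0]                # coefficients of x2
assert d[3] == [-2, 0, -6, 1, 0]              # coefficients of x4
```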
Example 4
Find a general solution of the system of linear equations

x1      + 2x4 = 7
   x2   - 3x4 = 8
      x3 + 6x4 = 9.
Solution  Since the augmented matrix of this system is in reduced row echelon form, we can obtain the general solution by solving for the basic variables in terms of the other variables. In this case, the basic variables are x1, x2, and x3, and so we solve for x1, x2, and x3 in terms of x4. The resulting general solution is

x1 = 7 - 2x4
x2 = 8 + 3x4
x3 = 9 - 6x4
x4      free.

We can write the general solution in vector form as

[x1]   [7]        [-2]
[x2]   [8]        [ 3]
[x3] = [9] + x4 · [-6] .
[x4]   [0]        [ 1]
There is one other case to consider. Suppose that the augmented matrix of a system of linear equations contains a row in which the only nonzero entry is in the last column, for example,

[1 0 -3 5]
[0 1  2 4]
[0 0  0 1]
[0 0  0 0].

The system of linear equations corresponding to this matrix is

x1        - 3x3 = 5
      x2  + 2x3 = 4
0x1 + 0x2 + 0x3 = 1
0x1 + 0x2 + 0x3 = 0.

Clearly, there are no values of the variables that satisfy the third equation. Because a solution of the system must satisfy every equation in the system, it follows that this system of equations is inconsistent. More generally, the following statement is true: Whenever an augmented matrix contains a row in which the only nonzero entry lies in the last column, the corresponding system of linear equations has no solution. It is not usually obvious whether or not a system of linear equations is consistent. However, this is apparent after calculating the reduced row echelon form of its augmented matrix.

Practice Problem 2
The augmented matrix of a system of linear equations has

[1 0 -4 3 0]
[0 1 -2 0 0]
[0 0  0 0 1]

as its reduced row echelon form. Determine whether this system of linear equations is consistent and, if so, find its general solution.
SOLVING SYSTEMS OF LINEAR EQUATIONS
So far, we have learned the following facts:
1. A system of linear equations can be represented by its augmented matrix, and any elementary row operations performed on that matrix do not change the solutions of the system.
2. A system of linear equations whose augmented matrix is in reduced row echelon form is easily solved.
Two questions remain. Is it always possible to transform the augmented matrix of a system of linear equations into a reduced row echelon form by a sequence of elementary row operations? Is that form unique? The first question is answered in Section 1.4, where an algorithm is given that transforms any matrix into one in reduced row echelon form. The second question is also important. If there were different reduced row echelon forms of the same matrix (depending on what sequence of elementary row operations is used), then there could be different solutions of the same system of linear equations. Fortunately, the following important theorem assures us that there is only one reduced row echelon form for any matrix. It is proved in Appendix E.
THEOREM 1.4
Every matrix can be transformed into one and only one matrix in reduced row echelon form by means of a sequence of elementary row operations.

In fact, Section 1.4 describes an explicit procedure for performing this transformation. If there is a sequence of elementary row operations that transforms a matrix A into a matrix R in reduced row echelon form, then we call R the reduced row echelon form of A. Using the reduced row echelon form of the augmented matrix of a system of linear equations Ax = b, we can solve the system as follows:
Procedure for Solving a System of Linear Equations
1. Write the augmented matrix [A b] of the system.
2. Find the reduced row echelon form [R c] of [A b].
3. If [R c] contains a row in which the only nonzero entry lies in the last column, then Ax = b has no solution. Otherwise, the system has at least one solution. Write the system of linear equations corresponding to the matrix [R c], and solve this system for the basic variables in terms of the free variables to obtain a general solution of Ax = b.

Example 5
Solve the following system of linear equations:

 x1 + 2x2 -  x3 + 2x4 +  x5 = 2
-x1 - 2x2 +  x3 + 2x4 + 3x5 = 6
2x1 + 4x2 - 3x3 + 2x4       = 3
-3x1 - 6x2 + 2x3       + 3x5 = 9
Solution  The augmented matrix of this system is

[ 1  2 -1  2  1  2]
[-1 -2  1  2  3  6]
[ 2  4 -3  2  0  3]
[-3 -6  2  0  3  9].
In Section 1.4, we show that the reduced row echelon form of this matrix is

[1 2 0 0 -1 -5]
[0 0 1 0  0 -3]
[0 0 0 1  1  2]
[0 0 0 0  0  0].
Because there is no row in this matrix in which the only nonzero entry lies in the last column, the original system is consistent. This matrix corresponds to the system of linear equations

x1 + 2x2       - x5 = -5
          x3        = -3
              x4 + x5 =  2.
In this system, the basic variables are x1, x3, and x4, and the free variables are x2 and x5. When we solve for the basic variables in terms of the free variables, we obtain the following general solution:

x1 = -5 - 2x2 + x5
x2       free
x3 = -3
x4 =  2       - x5
x5       free

This is the general solution of the original system of linear equations.
Practice Problem 3
The augmented matrix of a system of linear equations has

[0 1 -3 0  2 4]
[0 0  0 1 -1 5]
[0 0  0 0  0 0]

as its reduced row echelon form. Write the corresponding system of linear equations, and determine if it is consistent. If so, find its general solution, and write the general solution in vector form.
EXERCISES

In Exercises 1-6, write (a) the coefficient matrix and (b) the augmented matrix of the given system.

7. Interchange rows 1 and 3.
8. Multiply row 1 by -3.
9. Add 2 times row 1 to row 2.
10. Interchange rows 1 and 2.
11. Multiply row 3 by 4.
12. Add -3 times row 3 to row 2.
13. Add 4 times row 2 to row 3.
14. Add 2 times row 1 to row 3.

In Exercises 15-22, perform the indicated elementary row operation on the matrix

[matrix illegible in this copy]

15. Multiply row 1 by -2.
16. Multiply row 2 by [illegible].
17. Add -2 times row 1 to row 3.
18. Add 3 times row 1 to row 4.
19. Interchange rows 2 and 3.
20. Interchange rows 2 and 4.
21. Add -2 times row 2 to row 4.
22. Add 2 times row 2 to row 1.

In Exercises 23-30, determine whether the given vector is a solution of the system

x1 - 4x2      + 3x4 =  6
          x3 - 2x4 = -3.

[vectors for Exercises 23-30 illegible in this copy]

In Exercises 31-38, determine whether the given vector is a solution of the system

 x1 - 2x2 +  x3 + x4 +  7x5 = 1
 x1 - 2x2 + 2x3      + 10x5 = 2
2x1 - 4x2       + 4x4 + 8x5 = 0.

[vectors for Exercises 31-38 illegible in this copy]
In Exercises 39-54, the reduced row echelon form of the augmented matrix of a system of linear equations is given. Determine whether this system of linear equations is consistent and, if so, find its general solution. In addition, in Exercises 47-54, write the solution in vector form.

[the matrices for Exercises 39-54 are illegible in this copy]
55. Suppose that the general solution of a system of m linear equations in n variables contains k free variables. How many basic variables does it have? Explain your answer.
56. Suppose that R is a matrix in reduced row echelon form. If row 4 of R is nonzero and has its leading entry in column 5, describe column 5.
In Exercises 57-76, determine whether the following statements are true or false.

57. Every system of linear equations has at least one solution.
58. Some systems of linear equations have exactly two solutions.
59. If a matrix A can be transformed into a matrix B by an elementary row operation, then B can be transformed into A by an elementary row operation.
60. If a matrix is in row echelon form, then the leading entry of each nonzero row must be 1.
61. If a matrix is in reduced row echelon form, then the leading entry of each nonzero row is 1.
62. Every matrix can be transformed into one in reduced row echelon form by a sequence of elementary row operations.
63. Every matrix can be transformed into a unique matrix in row echelon form by a sequence of elementary row operations.
64. Every matrix can be transformed into a unique matrix in reduced row echelon form by a sequence of elementary row operations.
65. Performing an elementary row operation on the augmented matrix of a system of linear equations produces the augmented matrix of an equivalent system of linear equations.
66. If the reduced row echelon form of the augmented matrix of a system of linear equations contains a zero row, then the system is consistent.
67. If the only nonzero entry in some row of an augmented matrix of a system of linear equations lies in the last column, then the system is inconsistent.
68. A system of linear equations is called consistent if it has one or more solutions.
69. If A is the coefficient matrix of a system of m linear equations in n variables, then A is an n x m matrix.
70. The augmented matrix of a system of linear equations contains one more column than the coefficient matrix.
71. If the reduced row echelon form of the augmented matrix of a consistent system of m linear equations in n variables contains k nonzero rows, then its general solution contains k basic variables.
72. A system of linear equations Ax = b has the same solutions as the system of linear equations Rx = c, where [R c] is the reduced row echelon form of [A b].
73. Multiplying every entry of some row of a matrix by a scalar is an elementary row operation.
74. Every solution of a consistent system of linear equations can be obtained by substituting appropriate values for the free variables in its general solution.
75. If a system of linear equations has more variables than equations, then it must have infinitely many solutions.
76. If A is an m x n matrix, then a solution of the system Ax = b is a vector u in R^n such that Au = b.
77.7 Let [A b] be the augmented matrix of a system of linear equations. Prove that if its reduced row echelon form is [R c], then R is the reduced row echelon form of A.
78. Prove that if R is the reduced row echelon form of a matrix A, then [R 0] is the reduced row echelon form of [A 0].
79. Prove that for any m x n matrix A, the equation Ax = 0 is consistent, where 0 is the zero vector in R^m.
80. Let A be an m x n matrix whose reduced row echelon form contains no zero rows. Prove that Ax = b is consistent for every b in R^m.
81. In a matrix in reduced row echelon form, there are three types of entries: The leading entries of nonzero rows are required to be 1s, certain other entries are required to be 0s, and the remaining entries are arbitrary. Suppose that these arbitrary entries are denoted by asterisks. For example,

[0 1 * 0 0 * *]
[0 0 0 1 0 * *]
[0 0 0 0 1 * *]

is a possible reduced row echelon form for a 3 x 7 matrix. How many different such forms for a reduced row echelon matrix are possible if the matrix is 2 x 3?
82. Repeat Exercise 81 for a 2 x 4 matrix.
83. Suppose that B is obtained by one elementary row operation performed on matrix A. Prove that the same type of elementary row operation (namely, an interchange, scaling, or row addition operation) that transforms A into B also transforms B into A.
84. Show that if an equation in a system of linear equations is multiplied by 0, the resulting system need not be equivalent to the original one.
85. Let S denote the following system of linear equations:

a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3

Show that if the second equation of S is multiplied by a nonzero scalar c, then the resulting system is equivalent to S.
86. Let S be the system of linear equations in Exercise 85. Show that if k times the first equation of S is added to the third equation, then the resulting system is equivalent to S.
SOLUTIONS TO THE PRACTICE PROBLEMS
1. (a) Since 2(-2) - 3 + 6(2) = 5, u is not a solution of the second equation in the given system of equations. Therefore u is not a solution of the system. Another method for solving this problem is to represent the given system as a matrix equation Ax = b. Because Au ≠ b, u is not a solution of the given system.
7 This exercise is used in Section 1.6 (on page 70).
(b) Since 5 + 5(1) - 3 = 7 and 2(5) - 8 + 6(1) = 8, v satisfies both of the equations in the given system. Hence v is a solution of the system. Alternatively, using the matrix equation Ax = b, we see that v is a solution because Av = b.
2. In the given matrix, the only nonzero entry in the third row lies in the last column. Hence the system of linear equations corresponding to this matrix is not consistent.
3. The corresponding system of linear equations is

x2 - 3x3 + 2x5 = 4
      x4 -  x5 = 5.

Since the given matrix contains no row whose only nonzero entry lies in the last column, this system is consistent. The general solution of this system is

x1       free
x2 = 4 + 3x3 - 2x5
x3       free
x4 = 5 + x5
x5       free.

Note that x1, which is not a basic variable, is therefore a free variable. The general solution in vector form is

[x1]   [0]        [1]        [0]        [ 0]
[x2]   [4]        [0]        [3]        [-2]
[x3] = [0] + x1 · [0] + x3 · [1] + x5 · [ 0] .
[x4]   [5]        [0]        [0]        [ 1]
[x5]   [0]        [0]        [0]        [ 1]
1.4 GAUSSIAN ELIMINATION

In Section 1.3, we learned how to solve a system of linear equations for which the augmented matrix is in reduced row echelon form. In this section, we describe a procedure that can be used to transform any matrix into this form.
Suppose that R is the reduced row echelon form of a matrix A. Recall that the first nonzero entry in a nonzero row of R is called the leading entry of that row. The positions that contain the leading entries of the nonzero rows of R are called the pivot positions of A, and a column of A that contains some pivot position of A is called a pivot column of A. For example, later in this section we show that the reduced row echelon form of

    [ 1  2 -1  2  1  2]
A = [-1 -2  1  2  3  6]
    [ 2  4 -3  2  0  3]
    [-3 -6  2  0  3  9]

is

    [1 2 0 0 -1 -5]
R = [0 0 1 0  0 -3]
    [0 0 0 1  1  2]
    [0 0 0 0  0  0].

Here the first three rows of R are its nonzero rows, and so A has three pivot positions. The first pivot position is row 1, column 1 because the leading entry in the first row of R lies in column 1. The second pivot position is row 2, column 3 because the leading entry in the second row of R lies in column 3. Finally, the third pivot position is row 3, column 4 because the leading entry in the third row of R lies in column 4. Hence the pivot columns of A are columns 1, 3, and 4. (See Figure 1.19.) The pivot positions and pivot columns are easily determined from the reduced row echelon form of a matrix. However, we need a method to locate the pivot positions so we can compute the reduced row echelon form. The algorithm that we use to obtain the reduced row echelon form of a matrix is called Gaussian elimination.8

8 This method is named after Carl Friedrich Gauss (1777-1855), whom many consider to be the greatest mathematician of all time. Gauss described this procedure and used it to determine the orbit of the asteroid Pallas. However, a similar method for solving systems of linear equations was known to the Chinese around 250 B.C.
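Reading pivot positions off a reduced row echelon form, as in the example above, can be sketched as follows (the function name is our own):

```python
# The pivot positions are the positions of the leading entries of the
# nonzero rows of the reduced row echelon form R; the pivot columns are the
# columns that contain them.

def pivot_positions(R):
    """(row, column) pairs, 0-based, of the leading entry of each nonzero row."""
    positions = []
    for i, row in enumerate(R):
        for j, x in enumerate(row):
            if x != 0:
                positions.append((i, j))
                break
    return positions

R = [[1, 2, 0, 0, -1, -5],
     [0, 0, 1, 0,  0, -3],
     [0, 0, 0, 1,  1,  2],
     [0, 0, 0, 0,  0,  0]]
assert pivot_positions(R) == [(0, 0), (1, 2), (2, 3)]
# so the pivot columns are columns 1, 3, and 4 (1-based), as stated above
```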
    [(1)  2   0    0   -1  -5]
R = [ 0   0  (1)   0    0  -3]
    [ 0   0   0   (1)   1   2]
    [ 0   0   0    0    0   0]

Figure 1.19  The pivot positions of the matrix R. The circled entries mark the first, second, and third pivot positions; columns 1, 3, and 4 are the pivot columns.
algorithm locates the pivot positions and then makes certain entries of the matrix 0 by means of elementary row operations. We assume that the matrix is nonzero because the reduced row echelon form of a zero matrix is the same zero matrix. Our procedure can be used to find the reduced row echelon form of any nonzero matrix. To illustrate the algorithm, we find the reduced row echelon form of the matrix

    [0 0  2 -4 -5  2]
A = [0 1 -1  1  3  1]
    [0 6  0 -6  5 16].
Step 1. Determine the leftmost nonzero column. This is a pivot column, and the topmost position in this column is a pivot position.
Since the second column of A is the leftmost nonzero column, it is the first pivot column. The topmost position in this column lies in row 1, and so the first pivot position is the row 1, column 2 position.

    [0 (0)  2 -4 -5  2]
A = [0  1  -1  1  3  1]      (the circled entry occupies the pivot position;
    [0  6   0 -6  5 16]       column 2 is the first pivot column)
Step 2. In the pivot column, choose any nonzero9 entry in a row that is not above the pivot row, and perform the appropriate row interchange to bring this entry into the pivot position.
Because the entry in the pivot position is 0, we must perform a row interchange. We must select a nonzero entry in the pivot column. Suppose that we select the entry 1. By interchanging rows 1 and 2, we bring this entry into the pivot position.

r1 <-> r2    [0 1 -1  1  3  1]
             [0 0  2 -4 -5  2]
             [0 6  0 -6  5 16]

9 When performing calculations by hand, it may be advantageous to choose an entry of the pivot column that is ±1, if possible, in order to simplify subsequent calculations.
User: Thomas McVaney
Page 5 of 10
Elementary Linear Algebra: A Matrix Approach, 2nd Edition, page: 43 No part of any book may be reproduced or transmitted by any means without the publisher’s prior permission. Use (other than qualified fair use) in violation of the law or Terms of Service is prohibited. Violators will be prosecuted to the full extent of the law.
1.4 Gaussian Elimination
43
Step 3. Add an appropriate multiple of the row containing the pivot position to each lower row in order to change each entry below the pivot position into 0.
In step 3, we must add multiples of row 1 of the matrix produced in step 2 to rows 2 and 3 so that the pivot column entries in rows 2 and 3 are changed to 0. In this case, the entry in row 2 is already 0, so we need only change the row 3 entry. Thus we add -6 times row 1 to row 3. This calculation is usually done mentally, but we show it here for the sake of completeness.

(-6 times row 1)    0 -6  6  -6 -18  -6
(row 3)             0  6  0  -6   5  16
(new row 3)         0  0  6 -12 -13  10

The effect of this row operation is to transform the previous matrix into

-6r1 + r3 -> r3    [0 1 -1   1   3  1]
                   [0 0  2  -4  -5  2]
                   [0 0  6 -12 -13 10]
During steps 1-4 of the algorithm, we can ignore certain rows of the matrix. We depict such rows by shading them. At the beginning of the algorithm, no rows are ignored.
Step 4. Ignore the row containing the pivot position and all rows above it. If there is a nonzero row that is not ignored, repeat steps 1-4 on the submatrix that remains.
We are now finished with row 1, so we repeat steps 1-4 on the submatrix below row 1:

[0 0 2  -4  -5  2]
[0 0 6 -12 -13 10]
The leftmost nonzero column of this submatrix is column 3, so column 3 becomes the second pivot column. Since the topmost position in this column of the submatrix lies in row 2 of the entire matrix, the second pivot position is the row 2, column 3 position. Because the entry in the current pivot position is nonzero, no row interchange is required in step 2. So we continue to step 3, where we must add an appropriate multiple of row 2 to row 3 in order to create a 0 in row 3, column 3. The addition of -3 times row 2 to row 3 is shown as follows:

(-3 times row 2)    0  0 -6  12  15  -6
(row 3)             0  0  6 -12 -13  10
(new row 3)         0  0  0   0   2   4

The new matrix is

-3r2 + r3 -> r3    [0 1 -1  1  3  1]
                   [0 0  2 -4 -5  2]
                   [0 0  0  0  2  4]
We are now finished with row 2, so we repeat steps 1-4 on the submatrix below row 2, which consists of a single row:

[0 0 0 0 2 4]

At this stage, column 5 is the leftmost nonzero column of the submatrix. So column 5 is the third pivot column, and the row 3, column 5 position is the next pivot position. Because the entry in the pivot position is nonzero, no row interchange is needed in step 2. Moreover, because there are no rows below the row containing the present pivot position, no operations are required in step 3. Since there are no nonzero rows below row 3, steps 1-4 are now complete and the matrix is in row echelon form. The next two steps transform a matrix in row echelon form into a matrix in reduced row echelon form. Unlike steps 1-4, which started at the top of the matrix and worked down, steps 5 and 6 start at the last nonzero row of the matrix and work up.
Step 5. If the leading entry of the row is not 1, perform the appropriate scaling operation to make it 1. Then add an appropriate multiple of this row to every preceding row to change each entry above the pivot position into 0.
We start by applying step 5 to the last nonzero row of the matrix, which is row 3. Since the leading entry (the (3, 5)-entry) is not 1, we multiply the third row by 1/2 to make the leading entry 1. This produces the matrix

(1/2)r3 -> r3    [0 1 -1  1  3  1]
                 [0 0  2 -4 -5  2]
                 [0 0  0  0  1  2]

Now we add appropriate multiples of the third row to every preceding row to change each entry above the leading entry into 0. The resulting matrix is

-3r3 + r1 -> r1    [0 1 -1  1  0 -5]
 5r3 + r2 -> r2    [0 0  2 -4  0 12]
                   [0 0  0  0  1  2]
Step 6. If step 5 was performed on the first row, stop. Otherwise, repeat step 5 on the preceding row.
Since we just performed step 5 using the third row, we now repeat step 5 using the second row. To make the leading entry in this row a 1, we must multiply row 2 by 1/2.

(1/2)r2 -> r2    [0 1 -1  1  0 -5]
                 [0 0  1 -2  0  6]
                 [0 0  0  0  1  2]

Now we must change the entry above the leading entry in row 2 to 0. The resulting matrix is

r2 + r1 -> r1    [0 1  0 -1  0  1]
                 [0 0  1 -2  0  6]
                 [0 0  0  0  1  2]
We just performed step 5 on the second row, so we repeat it on the first row. This time the leading entry in the row is already 1, so no scaling operation is needed in step 5. Moreover, because there are no rows above this row, no other operations are needed. We see that the preceding matrix is in reduced row echelon form.
Steps 1-4 of the preceding algorithm are called the forward pass. The forward pass transforms the original matrix into a matrix in row echelon form. Steps 5 and 6 of the algorithm are called the backward pass. The backward pass further transforms the matrix into reduced row echelon form.
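The forward and backward passes just described can be sketched as a single routine. This is our own minimal implementation of steps 1-6, not the text's code; it uses exact Fraction arithmetic and, for step 2, simply takes the first nonzero entry at or below the pivot row. Applied to the matrix A of this section, it reproduces the reduced row echelon form found above.

```python
# Gaussian elimination: forward pass (steps 1-4) to row echelon form,
# backward pass (steps 5-6) to reduced row echelon form.
from fractions import Fraction

def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_positions = []
    top = 0
    # Forward pass: find each pivot column, interchange a nonzero entry into
    # the pivot position, and clear the entries below it.
    for j in range(cols):
        pivot = next((i for i in range(top, rows) if M[i][j] != 0), None)
        if pivot is None:
            continue                          # not a pivot column
        M[top], M[pivot] = M[pivot], M[top]   # step 2: row interchange
        for i in range(top + 1, rows):        # step 3: zero out below
            c = M[i][j] / M[top][j]
            M[i] = [a - c * b for a, b in zip(M[i], M[top])]
        pivot_positions.append((top, j))
        top += 1
        if top == rows:
            break
    # Backward pass: make each leading entry 1, then clear entries above it.
    for i, j in reversed(pivot_positions):
        M[i] = [x / M[i][j] for x in M[i]]    # step 5: scaling
        for k in range(i):
            c = M[k][j]
            M[k] = [a - c * b for a, b in zip(M[k], M[i])]
    return M

A = [[0, 0,  2, -4, -5,  2],
     [0, 1, -1,  1,  3,  1],
     [0, 6,  0, -6,  5, 16]]
assert rref(A) == [[0, 1, 0, -1, 0, 1],
                   [0, 0, 1, -2, 0, 6],
                   [0, 0, 0,  0, 1, 2]]
```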
We apply the Gaussian elimination algorithm to transform the augmented matrix of the system in Example 5 of Section 1.3 into one in reduced row echelon form.
The first pivot position is in row 1, column 1. Since this entry is nonzero, we add appropriate multiples of row 1 to the other rows to change the entries below the pivot position to 0.

 r1 + r2 -> r2     [1 2 -1  2  1  2]
-2r1 + r3 -> r3    [0 0  0  4  4  8]
 3r1 + r4 -> r4    [0 0 -1 -2 -2 -1]
                   [0 0 -1  6  6 15]

The second pivot position is row 2, column 3. Since the entry in this position is presently 0, we interchange rows 2 and 3.

r2 <-> r3          [1 2 -1  2  1  2]
                   [0 0 -1 -2 -2 -1]
                   [0 0  0  4  4  8]
                   [0 0 -1  6  6 15]
Page 8 of 10
Elementary Linear Algebra: A Matrix Approach, 2nd Edition, page: 46 No part of any book may be reproduced or transmitted by any means without the publisher’s prior permission. Use (other than qualified fair use) in violation of the law or Terms of Service is prohibited. Violators will be prosecuted to the full extent of the law.
46
CHAPTER 1 Matrices, Vectors, and Systems of Linear Equations
- I
Next we add - 1 times row 2 to row 4.
2
-!]
-2 -2
- 1 0
0
4
4
8
8
16
- I - 1 0
The third pivot postllon is in row 3, column 4. Since this e ntry is nonzero, we add - 2 t imes row 3 to row 4.
0 2
At th is stage, step 4 is complete, so we continue by performing step 5 on the third row. First we multiply row 3 by ~ ·
- 2 I 0
Then we add 2 times row 3 to row 2 and - 2 times row 3 to row I.
- 1 0 - 1 0
rz - ..I
2r:t+ rz - 2r3 + ra
- 2 I 0
0
I
0 0 - 1 0 I 0
Now we must perfonn step 5 using row 2. This requires that we multiply row 2 by -I.
I
- 1 0 I
0 0
0
0
[o~l ~ ~ ~
Then we add row 2 to row I.
0
0
I
0 0 0
-
=~] 2
0
~ =~] I
2
0
0
Performing step 5 with row 1 produces no changes, so this matrix is the reduced row echelon form of the augmented matrix of the given system. This matrix corresponds to the system of linear equations

x1 + 2x2       - x5 = -5
          x3        = -3        (6)
               x4 + x5 =  2.

As we saw in Example 5 of Section 1.3, the general solution of this system is

x1 = -5 - 2x2 + x5
x2 free
x3 = -3
x4 = 2 - x5
x5 free.
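The general solution can be checked numerically. The matrix below is the augmented matrix of Example 1 as reconstructed above; `solution` is an illustrative helper that builds a solution vector from chosen values of the free variables x2 and x5.

```python
import numpy as np

def solution(x2, x5):
    """Build a solution of system (6) from the free variables x2 and x5."""
    return np.array([-5 - 2 * x2 + x5, x2, -3.0, 2 - x5, x5])

# Coefficient matrix and right-hand side of the original system of Example 1
A = np.array([[1.0, 2, -1, 2, 1],
              [-1, -2, 1, 2, 3],
              [2, 4, -3, 2, 0],
              [-3, -6, 2, 0, 3]])
b = np.array([2.0, 6, 3, 9])
```

Every choice of x2 and x5 should satisfy A x = b, since system (6) is equivalent to the original system.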
THE RANK AND NULLITY OF A MATRIX

We now associate with a matrix two important numbers that are easy to determine from its reduced row echelon form. If the matrix is the augmented matrix of a system of linear equations, these numbers provide significant information about the solutions of the corresponding system of linear equations.

Definitions. The rank of an m x n matrix A, denoted by rank A, is defined to be the number of nonzero rows in the reduced row echelon form of A. The nullity of A, denoted by nullity A, is defined to be n - rank A.
Example 2
For the matrix

[ 1  2 -1  2  1   2
 -1 -2  1  2  3   6
  2  4 -3  2  0   3
 -3 -6  2  0  3   9 ]

in Example 1, the reduced row echelon form is

[ 1  2  0  0 -1  -5
  0  0  1  0  0  -3
  0  0  0  1  1   2
  0  0  0  0  0   0 ].

Since the reduced row echelon form has three nonzero rows, the rank of the matrix is 3. The nullity of the matrix, found by subtracting its rank from the number of columns, is 6 - 3 = 3.
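The rank and nullity of Example 2 can be checked numerically. `numpy.linalg.matrix_rank` computes the rank by a singular-value computation rather than by row reduction, but for a small integer matrix like this one the result agrees with the count of nonzero rows in the reduced row echelon form.

```python
import numpy as np

# Rank = number of nonzero rows in the reduced row echelon form;
# nullity = (number of columns) - rank.
A = np.array([[1, 2, -1, 2, 1, 2],
              [-1, -2, 1, 2, 3, 6],
              [2, 4, -3, 2, 0, 3],
              [-3, -6, 2, 0, 3, 9]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
```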
Example 3

The reduced row echelon form of a matrix B with five columns is found to have two nonzero rows. Since the latter matrix has two nonzero rows, the rank of B is 2. The nullity of B is 5 - 2 = 3.

Practice Problem 2 ▸
Find the rank and nullity of the matrix
A = [ 1  0 -1  0  1  0 -1
      0  1 -2  0  6  0  0
      0  0  0  1  2  0  0
      0  0  0  0  0  0  0 ]
In Examples 2 and 3, note that each nonzero row in the reduced row echelon form of the given matrix contains exactly one pivot position, so the number of nonzero rows equals the number of pivot positions. Consequently, we can restate the definition of rank as follows:

The rank of a matrix equals the number of pivot columns in the matrix. The nullity of a matrix equals the number of nonpivot columns in the matrix.
It follows that in a matrix in reduced row echelon form with rank k, the standard vectors e1, e2, ..., ek must appear, in order, among the columns of the matrix. In Example 2, for instance, the rank of the matrix is 3, and columns 1, 3, and 4 of the reduced row echelon form are e1, e2, and e3, respectively. Thus, if an m x n matrix has rank n, then its reduced row echelon form must be precisely [e1 e2 ... en]. In the special case of an n x n (square) matrix, this is the n x n identity matrix; therefore we have the following useful result:

If an n x n matrix has rank n, then its reduced row echelon form is I_n.
Consider the system of linear equations in Example 1 as a matrix equation Ax = b. Our method for solving the system is to find the reduced row echelon form [R c] of the augmented matrix [A b]. The system Rx = c is then equivalent to Ax = b, and we can easily solve Rx = c because [R c] is in reduced row echelon form. Note that each basic variable of the system Ax = b corresponds to the leading entry of exactly one nonzero row of [R c], so the number of basic variables equals the number of nonzero rows, which is the rank of A. Also, if n is the number of columns of A, then the number of free variables of Ax = b equals n minus the number of basic variables. By the previous remark, the number of free variables equals n - rank A, which is the nullity of A. In general:
If Ax = b is the matrix form of a consistent system of linear equations, then
(a) the number of basic variables in a general solution of the system equals the rank of A;
(b) the number of free variables in a general solution of the system equals the nullity of A.

Thus a consistent system of linear equations has a unique solution if and only if the nullity of its coefficient matrix equals 0. Equivalently, a consistent system of linear equations has infinitely many solutions if and only if the nullity of its coefficient matrix is positive.
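These counts can be illustrated with the coefficient matrix of Example 1, whose general solution above has three basic variables (x1, x3, x4) and two free variables (x2, x5); the variable names in this sketch are illustrative.

```python
import numpy as np

# Coefficient matrix (4 x 5) of the consistent system in Example 1
A = np.array([[1, 2, -1, 2, 1],
              [-1, -2, 1, 2, 3],
              [2, 4, -3, 2, 0],
              [-3, -6, 2, 0, 3]])
rank = np.linalg.matrix_rank(A)      # number of basic variables
nullity = A.shape[1] - rank          # number of free variables
```

Since the nullity is positive, the consistent system has infinitely many solutions, in agreement with the general solution found earlier.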
The original system of linear equations in Example 1 is a system of 4 equations in 5 variables. However, it is equivalent to system (6), the system of 3 equations in 5 variables corresponding to the reduced row echelon form of its augmented matrix. In Example 1, the fourth equation in the original system is redundant because it is a linear combination of the first three equations. Specifically, it is the sum of -3 times the first equation, 2 times the second equation, and the third equation. The other three equations are nonredundant. In general, the rank of the augmented matrix [A b] tells us the number of nonredundant equations in the system Ax = b.
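The claimed combination can be verified directly. The rows below are the equations of Example 1 written as coefficient-and-constant vectors (the augmented matrix as reconstructed above).

```python
import numpy as np

eq1 = np.array([1, 2, -1, 2, 1, 2])
eq2 = np.array([-1, -2, 1, 2, 3, 6])
eq3 = np.array([2, 4, -3, 2, 0, 3])
eq4 = np.array([-3, -6, 2, 0, 3, 9])

# The fourth equation is -3 times the first, plus 2 times the second,
# plus the third.
combo = -3 * eq1 + 2 * eq2 + eq3
```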
Example 4

Consider the following system of linear equations:

x1 + x2 + x3 = 1
x1      + 3x3 = -2 + s
x1 - x2 + r x3 = 3

(a) For what values of r and s is this system of linear equations inconsistent?
(b) For what values of r and s does this system of linear equations have infinitely many solutions?
(c) For what values of r and s does this system of linear equations have a unique solution?

Solution. Apply the Gaussian elimination algorithm to the augmented matrix of the given system to transform the matrix into one in row echelon form:

[ 1  1  1      1
  1  0  3     -2 + s
  1 -1  r      3 ]

-r1 + r2 -> r2, -r1 + r3 -> r3:

[ 1  1  1      1
  0 -1  2     -3 + s
  0 -2  r - 1  2 ]

-2r2 + r3 -> r3:

[ 1  1  1      1
  0 -1  2     -3 + s
  0  0  r - 5  8 - 2s ]

(a) The original system is inconsistent whenever there is a row whose only nonzero entry lies in the last column. Only the third row could have this form. Thus the original system is inconsistent whenever r - 5 = 0 and 8 - 2s ≠ 0; that is, when r = 5 and s ≠ 4.
(b) The original system has infinitely many solutions whenever the system is consistent and there is a free variable in the general solution. In order to have a free variable, we must have r - 5 = 0, and in order for the system also to be consistent, we must have 8 - 2s = 0. Thus the original system has infinitely many solutions if r = 5 and s = 4.
(c) Let A denote the 3 x 3 coefficient matrix of the system. For the system to have a unique solution, there must be three basic variables, and so the rank of A must be 3. Since deleting the last column of the immediately preceding matrix gives a row echelon form of A, the rank of A is 3 precisely when r - 5 ≠ 0; that is, r ≠ 5.
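The case analysis of Example 4 can be mimicked numerically by comparing ranks, as the next sketch shows. It uses the system of Example 4 as reconstructed from the row reduction; the classification labels are illustrative, and exact arithmetic would be needed for values of r and s very near the boundary cases.

```python
import numpy as np

def classify(r, s):
    """Classify the Example 4 system for the given r and s."""
    A = np.array([[1.0, 1, 1],
                  [1, 0, 3],
                  [1, -1, r]])
    b = np.array([1.0, -2 + s, 3])
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_Ab:
        return "inconsistent"
    return "unique" if rank_A == 3 else "infinitely many"
```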
Practice Problem 3 ▸

Consider the following system of linear equations:

x1 + 3x2 = 1 + s
x1 + r x2 = 5

(a) For what values of r and s is this system of linear equations inconsistent?
(b) For what values of r and s does this system of linear equations have infinitely many solutions?
(c) For what values of r and s does this system of linear equations have a unique solution? ◂

The following theorem provides several conditions that are equivalent¹⁰ to the existence of solutions for a system of linear equations.
THEOREM 1.5 (Test for Consistency)

The following conditions are equivalent:

(a) The matrix equation Ax = b is consistent.
(b) The vector b is a linear combination of the columns of A.
(c) The reduced row echelon form¹¹ of the augmented matrix [A b] has no row of the form [0 0 ... 0 d], where d ≠ 0.

PROOF Let A be an m x n matrix with columns a1, a2, ..., an, and let b be in R^m. By the definition of a matrix-vector product, there exists a vector v with components v1, v2, ..., vn in R^n such that Av = b if and only if

v1 a1 + v2 a2 + ... + vn an = b.

Thus Ax = b is consistent if and only if b is a linear combination of the columns of A. So (a) is equivalent to (b).

Finally, we prove that (a) is equivalent to (c). Let [R c] be the reduced row echelon form of the augmented matrix [A b]. If statement (c) is false, then the system of linear equations corresponding to Rx = c contains the equation

0x1 + 0x2 + ... + 0xn = d,

where d ≠ 0. Since this equation has no solutions, Rx = c is inconsistent. On the other hand, if statement (c) is true, then we can solve every equation in the system of linear equations corresponding to Rx = c for some basic variable. This gives a solution of Rx = c, which is also a solution of Ax = b. ∎
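Theorem 1.5, restated in terms of rank (compare Exercise 86), gives a one-line consistency test; the function name below is illustrative.

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b is consistent exactly when appending b to A does not
    increase the rank (cf. Theorem 1.5 and Exercise 86)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)
```

For instance, [1 1; 2 2]x = (3, 6) is consistent (the second equation is twice the first), while [1 1; 2 2]x = (3, 5) is not.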
TECHNOLOGICAL CONSIDERATIONS*

Gaussian elimination is the most efficient procedure for reducing a matrix to its reduced row echelon form. Nevertheless, it requires many tedious computations. In fact, the number of arithmetic operations required to obtain the reduced row echelon form of an n x (n + 1) matrix is typically on the order of (2/3)n³ + (3/2)n² - (7/6)n. We can

¹⁰ Statements are called equivalent (or logically equivalent) if, under every circumstance, they are all true or they are all false. Whether any one of the statements in Theorem 1.5 is true or false depends on the particular matrix A and vector b being considered.
¹¹ Theorem 1.5 remains true if (c) is changed as follows: Every row echelon form of [A b] has no row in which the only nonzero entry lies in the last column.
* The remainder of this section may be omitted without loss of continuity.
easily program this algorithm on a computer or programmable calculator and thus obtain the reduced row echelon form of a matrix. However, computers and calculators can store only a finite number of decimal places, so they can introduce small errors, called roundoff errors, in their calculations. Usually, these errors are insignificant. But when we use an algorithm with many steps, such as Gaussian elimination, the errors can accumulate and significantly affect the result. The following example illustrates the potential pitfalls of roundoff error. Although the calculations in the example are performed on the TI-85 calculator, the same types of issues arise when any computer or calculator is used to solve a system of linear equations.

For a certain 3 x 6 matrix A, the TI-85 calculator gives a reduced row echelon form whose (1, 3)-entry is -1E-14 and whose (1, 5)-entry is -.999999999999. (The notation aEb represents a x 10^b.) However, by exact hand calculations, we find that the third, fifth, and sixth columns should contain different entries in the affected positions. On the TI-85 calculator, numbers are stored with 14 digits. Thus a number containing more than 14 significant digits is not stored exactly in the calculator. In subsequent calculations with that number, roundoff errors can accumulate to such a degree that the final result of the calculation is highly inaccurate. In our calculation of the reduced row echelon form of the matrix A, roundoff errors have affected the (1, 3)-entry, the (1, 5)-entry, and the (2, 6)-entry. In this instance, none of the affected entries is greatly changed, and it is reasonable to expect that the true entries in these positions should be 0, -1, and 0, respectively. But can we be absolutely sure? (We will learn a way of checking whether these entries are 0, -1, and 0 in Section 2.3.)

It is not always so obvious that roundoff errors have occurred. Consider the system of linear equations
k x1 + (k - 1)x2 = 1
(k + 1)x1 + k x2 = 2.

By subtracting the first equation from the second, this system is easily solved, and the solution is x1 = 2 - k and x2 = k - 1. But for sufficiently large values of k, roundoff errors can cause problems. For example, with k = 4,935,937, the TI-85 calculator gives the reduced row echelon form of the augmented matrix
[ 4935937  4935936  1
  4935938  4935937  2 ]

as

[ 1  .999999797404  0
  0  0              1 ].
Since the last row of the reduced row echelon form has its only nonzero entry in the last column, we would incorrectly deduce from this that the original system is inconsistent! The analysis of roundoff errors and related matters is a serious mathematical subject that is inappropriate for this book. (It is studied in the branch of mathematics called numerical analysis.) We encourage the use of technology whenever possible to perform the tedious calculations associated with matrices (such as those required to obtain the reduced row echelon form of a matrix). Nevertheless, a certain amount of skepticism is healthy when technology is used. Just because the calculations are performed with a calculator or computer is no guarantee that the result is correct. In this book, however, the examples and exercises usually involve simple numbers (often one- or two-digit integers) and small matrices, so there is little chance of serious errors resulting from the use of technology.
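Two small Python experiments illustrate the same phenomena. The specific numbers are chosen for illustration, with the second mirroring the k = 4,935,937 example in 32-bit arithmetic (the TI-85 stores 14 decimal digits; 32-bit floats store about 7, so the cancellation happens sooner).

```python
import numpy as np

# (1) Tiny representation errors: 0.1, 0.2, and 0.3 have no exact binary
# form, so an entry that should be exactly 0 can come out as a number
# on the order of 1e-17 instead.
residue = 0.1 + 0.2 - 0.3

# (2) Catastrophic cancellation: the determinant of [[k, k-1], [k+1, k]]
# is k^2 - (k-1)(k+1) = 1 exactly, but in 32-bit arithmetic k^2 and
# (k-1)(k+1) round to the same stored number, so the difference is lost.
k = 4_935_937
det64 = np.float64(k) * np.float64(k) - np.float64(k - 1) * np.float64(k + 1)
det32 = np.float32(k) * np.float32(k) - np.float32(k - 1) * np.float32(k + 1)
```

In 64-bit arithmetic the products are still represented exactly, so `det64` is exactly 1; in 32-bit arithmetic the computed determinant is wrong.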
EXERCISES

In Exercises 1-16, determine whether the given system is consistent, and if so, find its general solution.

1. 2x1 …

In Exercises 27-34, determine the values of r and s for which the given system of linear equations has (a) no solutions, (b) exactly one solution, and (c) infinitely many solutions.

27. x1 + r x2 = 5
    3x1 + 6x2 = s

28. -x1 + 4x2 = s
    2x1 + r x2 = 6
29. x1 + 2x2 = s
    -4x1 + r x2 = 8

30. x1 + r x2 = -3
    2x1 + 5x2 = s

31. -x1 - 9x2 = s
    4x1 + r x2 = -2

32. x1 + 3x2 = s
    -3x1 + r x2 = -8

33. -x1 + r x2 = 5
    3x1 + 6x2 = s

34. 2x1 - x2 = 3
    4x1 + r x2 = s
In Exercises 35-42, find the rank and nullity of the given matrix.

43. A mining company operates three mines that each produce three grades of ore. The daily yield of each mine is shown in the following table:

Daily Yield     Mine 1   Mine 2   Mine 3
High-grade      1 ton    1 ton    2 tons
Medium-grade    1 ton    2 tons   2 tons
Low-grade       2 tons   1 ton    0 tons

(a) Can the company supply exactly 80 tons of high-grade, 100 tons of medium-grade, and 40 tons of low-grade ore? If so, how many days should each mine operate to fill this order?
(b) Can the company supply exactly 40 tons of high-grade, 100 tons of medium-grade, and 80 tons of low-grade ore? If so, how many days should each mine operate to fill this order?

44. A company makes three types of fertilizer. The first type contains 10% nitrogen and 3% phosphates by weight, the second contains 8% nitrogen and 6% phosphates, and the third contains 6% nitrogen and 1% phosphates.
(a) Can the company mix these three types of fertilizers to supply exactly 600 pounds of fertilizer containing 7.5% nitrogen and 5% phosphates? If so, how?
(b) Can the company mix these three types of fertilizers to supply exactly 600 pounds of fertilizer containing 9% nitrogen and 3.5% phosphates? If so, how?

45. A patient needs to consume exactly 660 mg of magnesium, 820 IU of vitamin D, and 750 mcg of folate per day. Three food supplements can be mixed to provide these nutrients. The amounts of the three nutrients provided by each of the supplements are given in the following table:

Food Supplement   Magnesium (mg)   Vitamin D (IU)   Folate (mcg)
1                 10               15               36
2                 10               20               44
3                 15               15               42

(a) What is the maximum amount of supplement 3 that can be used to provide exactly the required amounts of the three nutrients?
(b) Can the three supplements be mixed to provide exactly 720 mg of magnesium, 800 IU of vitamin D, and 750 mcg of folate? If so, how?

46. Three grades of crude oil are to be blended to obtain 100 barrels of oil costing $35 per barrel and containing 50 gm of sulfur per barrel. The cost and sulfur content of the three grades of oil are given in the following table:

Grade   Cost per barrel   Sulfur per barrel
A       …                 …
B       $32               62 gm
C       $24               94 gm

(a) Find the amounts of each grade to be blended that use the least oil of grade C.
(b) Find the amounts of each grade to be blended that use the most oil of grade C.

47. Find a polynomial function f(x) = ax² + bx + c whose graph passes through the points (-1, 14), (1, 4), and (3, 10).

48. Find a polynomial function f(x) = ax² + bx + c whose graph passes through the points (-2, -33), (2, -1), and (3, -8).

49. Find a polynomial function f(x) = ax³ + bx² + cx + d whose graph passes through the points (-2, 32), (-1, 13), (2, 4), and (3, 17).
50. Find a polynomial function f(x) = ax³ + bx² + cx + d whose graph passes through the points (-2, 12), (-1, -9), (1, -3), and (3, 27).

51. If the third pivot position of a matrix A is in column j, what can be said about column j of the reduced row echelon form of A? Explain your answer.

52. Suppose that the fourth pivot po…
In Exercises 53-72, determine whether the statements are true or false.

53. A column of a matrix A is a pivot column if the corresponding column in the reduced row echelon form of A contains the leading entry of some nonzero row.
54. There is a unique sequence of elementary row operations that transforms a matrix into its reduced row echelon form.
55. When the forward pass of Gaussian elimination is complete, the original matrix has been transformed into one in row echelon form.
56. No scaling operations are …
57. …
58. …
59. … a 5 x 8 matrix with rank 3 and nullity 2.
60. If a system of m linear equations in n variables is equivalent to a system of p linear equations in q variables, then m = p.
61. If a system of m linear equations in n variables is equivalent to a system of p linear equations in q variables, then n = q.
62. The equation Ax = b is consistent if and only if b is a linear combination of the columns of A.
63. If the equation Ax = b is inconsistent, then the rank of [A b] is greater than the rank of A.
64. If the reduced row echelon form of [A b] contains a zero row, then Ax = b must have infinitely many solutions.
65. If the reduced row echelon form of [A b] contains a zero row, then Ax = b must be consistent.
66. If some column of matrix A is a pivot column, then the corresponding column in the reduced row echelon form of A is a standard vector.
67. If A is a matrix with rank k, then the vectors e1, e2, ..., ek appear as columns of the reduced row echelon form of A.
68. The sum of the rank and nullity of a matrix equals the number of rows in the matrix.
69. Suppose that the pivot rows of a matrix A are rows 1, 2, ..., k, and row k + 1 becomes zero when applying the Gaussian elimination algorithm. Then row k + 1 must equal some linear combination of rows 1, 2, ..., k.
70. The third pivot position in a matrix lies in row 3.
71. The third pivot position in a matrix lies in column 3.
72. If R is an n x n matrix in reduced row echelon form that has rank n, then R = I_n.
73. Describe an m x n matrix with rank 0.
74. What is the smallest possible rank of a 4 x 7 matrix? Explain your answer.
75. What is the largest possible rank of a 4 x 7 matrix? Explain your answer.
76. What is the largest possible rank of a 7 x 4 matrix? Explain your answer.
77. What is the smallest possible nullity of a 4 x 7 matrix? Explain your answer.
78. What is the smallest possible nullity of a 7 x 4 matrix? Explain your answer.
79. What is the largest possible rank of an m x n matrix? Explain your answer.
80. What is the smallest possible nullity of an m x n matrix? Explain your answer.
81. Let A be a 4 x 3 matrix. Is it possible that Ax = b is consistent for every b in R⁴? Explain your answer.
82. Let A be an m x n matrix and b be a vector in R^m. What must be true about the rank of A if Ax = b has a unique solution? Justify your answer.
83. A system of linear equations is called underdetermined if it has fewer equations than variables. What can be said about the number of solutions of an underdetermined system?
84. A system of linear equations is called overdetermined if it has more equations than variables. Give examples of overdetermined systems that have (a) no solutions, (b) exactly one solution, and (c) infinitely many solutions.
85. Prove that if A is an m x n matrix with rank m, then Ax = b is consistent for every b in R^m.
86. Prove that a matrix equation Ax = b is consistent if and only if the ranks of A and [A b] are equal.
87. Let u be a solution of Ax = 0, where A is an m x n matrix. Must cu be a solution of Ax = 0 for every scalar c? Justify your answer.
88. Let u and v be solutions of Ax = 0, where A is an m x n matrix. Must u + v be a solution of Ax = 0? Justify your answer.
89. Let u and v be solutions of Ax = b, where A is an m x n matrix and b is a vector in R^m. Prove that u - v is a solution of Ax = 0.
90. Let u be a solution of Ax = b and v be a solution of Ax = 0, where A is an m x n matrix and b is a vector in R^m. Prove that u + v is a solution of Ax = b.
91. Let A be an m x n matrix and b be a vector in R^m such that Ax = b is consistent. Prove that Ax = cb is consistent for every scalar c.
92. Let A be an m x n matrix and b1 and b2 be vectors in R^m such that both Ax = b1 and Ax = b2 are consistent. Prove that Ax = b1 + b2 is consistent.
93. Let u and v be solutions of Ax = b, where A is …
In Exercises 94-99, use either a calculator with matrix capabilities or computer software such as MATLAB to solve each problem. In Exercises 94-96, use Gaussian elimination on the augmented matrix of the system of linear equations to test for consistency, and to find the general solution. In Exercises 97-99, find the rank and the nullity of the matrix.
SOLUTIONS TO THE PRACTICE PROBLEMS

1. Apply the Gaussian elimination algorithm to the augmented matrix of the given system to transform it into reduced row echelon form. The final matrix corresponds to the following system of linear equations:

x1 - 3x3 + x5 = 2
x2 + x3 - 2x5 = -1
x4 - 4x5 = -5

The general solution of this system is

x1 = 2 + 3x3 - x5
x2 = -1 - x3 + 2x5
x3 free
x4 = -5 + 4x5
x5 free.

2. The matrix A is in reduced row echelon form. Furthermore, it has 3 nonzero rows, and hence the rank of A is 3. Since A has 7 columns, its nullity is 7 - 3 = 4.

3. Apply the Gaussian elimination algorithm to the augmented matrix of the given system to transform the matrix into one in row echelon form:

[ 1  3  1 + s ]    -r1 + r2 -> r2:    [ 1  3      1 + s ]
[ 1  r  5     ]                       [ 0  r - 3  4 - s ]

(a) The original system is inconsistent whenever there is a row whose only nonzero entry lies in the last column. Only the second row could have this form. Thus the original system is inconsistent whenever r - 3 = 0 and 4 - s ≠ 0; that is, when r = 3 and s ≠ 4.
(b) The original system has infinitely many solutions whenever the system is consistent and there is a free variable in the general solution. In order to have a free variable, we must have r - 3 = 0, and in order for the
system also to be consistent, we must have 4 - s = 0. Thus the original system has infinitely many solutions if r = 3 and s = 4.
(c) Let A denote the coefficient matrix of the system. For the system to have a unique solution, there must be two basic variables, so the rank of A must be 2. Since deleting the last column of the preceding matrix gives a row echelon form of A, the rank of A is 2 precisely when r - 3 ≠ 0; that is, when r ≠ 3.
1.5* APPLICATIONS OF SYSTEMS OF LINEAR EQUATIONS

Systems of linear equations arise in many applications of mathematics. In this section, we present two such applications.
THE LEONTIEF INPUT-OUTPUT MODEL

In a modern industrialized country, there are hundreds of different industries that supply goods and services needed for production. These industries are often mutually dependent. The agricultural industry, for instance, requires farm machinery to plant and harvest crops, whereas the makers of farm machinery need food produced by the agricultural industry. Because of this interdependency, events in one industry, such as a strike by factory workers, can significantly affect many other industries. To better understand these complex interactions, economic planners use mathematical models of the economy, the most important of which was developed by the Russian-born economist Wassily Leontief.

While a student in Berlin in the 1920s, Leontief developed a mathematical model, called the input-output model, for analyzing an economy. After arriving in the United States in 1931 to be a professor of economics at Harvard University, Leontief began to collect the data that would enable him to implement his ideas. Finally, after the end of World War II, he succeeded in extracting from government statistics the data necessary to create a model of the U.S. economy. This model proved to be highly accurate in predicting the behavior of the postwar U.S. economy and earned Leontief the 1973 Nobel Prize for Economics.

Leontief's model of the U.S. economy combined approximately 500 industries into 42 sectors that provide products and services, such as the electrical machinery sector. To illustrate Leontief's theory, which can be applied to the economy of any country or region, we next show how to construct a general input-output model. Suppose that an economy is divided into n sectors and that sector i produces some commodity or service S_i (i = 1, 2, ..., n). Usually, we measure amounts of commodities and services in common monetary units and hold costs fixed so that we can compare diverse sectors. For example,
the output of the steel industry could be measured in millions of dollars worth of steel produced. For each i and j, let c_ij denote the amount of S_i needed to produce one unit of S_j. Then the n x n matrix C whose (i, j)-entry is c_ij is called the input-output matrix (or the consumption matrix) for the economy.

To illustrate these ideas with a very simple example, consider an economy that is divided into three sectors: agriculture, manufacturing, and services. (Of course, a model of any real economy, such as Leontief's original model, will involve many more sectors and much larger matrices.) Suppose that each dollar's worth of agricultural output requires inputs of $0.10 from the agricultural sector, $0.20 from the
* This section can be omitted without loss of continuity.
manufacturing sector, and $0.30 from the services sector; each dollar's worth of manufacturing output requires inputs of $0.20 from the agricultural sector, $0.40 from the manufacturing sector, and $0.10 from the services sector; and each dollar's worth of services output requires inputs of $0.10 from the agricultural sector, $0.20 from the manufacturing sector, and $0.10 from the services sector. From this information, we can form the following input-output matrix:

         Ag.  Man. Svcs.
C =   [  .1   .2   .1  ]   Agriculture
      [  .2   .4   .2  ]   Manufacturing
      [  .3   .1   .1  ]   Services
Note that the (i, j)-entry of the matrix represents the amount of input from sector i needed to produce a dollar's worth of output from sector j.
Now let x1, x2, and x3 denote the total output of the agriculture, manufacturing, and services sectors, respectively. Since x1 dollars' worth of agricultural products are being produced, the first column of the input-output matrix shows that an input of .1x1 is required from the agriculture sector, an input of .2x1 is required from the manufacturing sector, and an input of .3x1 is required from the services sector. Similar statements apply to the manufacturing and services sectors. Figure 1.20 shows the total amount of money flowing among the three sectors.
Note that in Figure 1.20 the three arcs leaving the agriculture sector give the total amount of agricultural output that is used as inputs for all three sectors. The sum of the labels on these three arcs, .1x1 + .2x2 + .1x3, represents the amount of agricultural output that is consumed during the production process. Similar statements apply to the other two sectors. So the vector Cx gives the amount of each sector's output that is consumed during the production process, and x − Cx is the economy's net production.
Figure 1.20 The flow of money among the sectors
For an economy with n × n input-output matrix C, the gross production necessary to satisfy exactly a demand d is a solution of (In − C)x = d.
Example 2
For the economy in Example 1, determine the gross production needed to meet a consumer demand for $90 million of agriculture, $80 million of manufacturing, and $60 million of services.

Solution We must solve the matrix equation (I3 − C)x = d, where C is the input-output matrix and

    d = [ 90 ]
        [ 80 ]
        [ 60 ]

is the demand vector. Since

    I3 − C = [ 1 0 0 ]   [ .1 .2 .1 ]   [  .9  −.2  −.1 ]
             [ 0 1 0 ] − [ .2 .4 .2 ] = [ −.2   .6  −.2 ],
             [ 0 0 1 ]   [ .3 .1 .1 ]   [ −.3  −.1   .9 ]

the augmented matrix of the system to be solved is

    [  .9  −.2  −.1   90 ]
    [ −.2   .6  −.2   80 ].
    [ −.3  −.1   .9   60 ]

Thus the solution of (I3 − C)x = d is

    x = [ 170 ]
        [ 240 ],
        [ 150 ]

so the gross production needed to meet the demand is $170 million of agriculture, $240 million of manufacturing, and $150 million of services.
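The computation in Example 2 can be checked numerically. The sketch below assumes NumPy is available (the book itself suggests matrix-capable software such as MATLAB); it solves (I3 − C)x = d for the three-sector economy of this section.

```python
import numpy as np

# Input-output (consumption) matrix of the three-sector economy:
# column j lists the inputs needed per dollar of output from sector j
# (agriculture, manufacturing, services).
C = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.4, 0.2],
              [0.3, 0.1, 0.1]])

# Consumer demand, in millions of dollars.
d = np.array([90.0, 80.0, 60.0])

# Gross production x satisfies (I3 - C) x = d.
x = np.linalg.solve(np.eye(3) - C, d)
print(np.round(x))  # [170. 240. 150.]
```

Computing C @ x from the result recovers the amounts consumed internally during production, since Cx = x − d.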
Practice Problem 1
An island's economy is divided into three sectors: tourism, transportation, and services. Suppose that each dollar's worth of tourism output requires inputs of $0.30 from the tourism sector, $0.10 from the transportation sector, and $0.30 from the services sector; each dollar's worth of transportation output requires inputs of $0.20 from the tourism sector, $0.40 from the transportation sector, and $0.20 from the services sector; and each dollar's worth of services output requires inputs of $0.05 from the tourism sector, $0.05 from the transportation sector, and $0.15 from the services sector.
(a) Write the input-output matrix for this economy.
(b) If the gross production for this economy is $10 million of tourism, $15 million of transportation, and $20 million of services, how much input from the tourism sector is required by the services sector?
(c) If the gross production for this economy is $10 million of tourism, $15 million of transportation, and $20 million of services, what is the total value of the inputs consumed by each sector during the production process?
CHAPTER 1 Matrices, Vectors, and Systems of Linear Equations
(d) If the total outputs of the tourism, transportation, and services sectors are $70 million, $50 million, and $60 million, respectively, what is the net production of each sector?
(e) What gross production is required to satisfy exactly a demand for $30 million of tourism, $50 million of transportation, and $40 million of services?
CURRENT FLOW IN ELECTRICAL CIRCUITS
When a battery is connected in an electrical circuit, a current flows through the circuit. If the current passes through a resistor (a device that creates resistance to the flow of electricity), a drop in voltage occurs. These voltage drops obey Ohm's law,12 which states that V = RI, where V is the voltage drop across the resistor (measured in volts), R is the resistance (measured in ohms), and I is the current (measured in amperes).
Figure 1.21 A simple electrical circuit (a 20-volt battery with resistors of 3 ohms and 2 ohms)

Figure 1.21 shows a simple electrical circuit consisting of a 20-volt battery and two resistors with resistances of 3 ohms and 2 ohms. The current flows in the direction of the arrows. If the value of I is positive, then the flow is from the positive terminal of the battery (indicated by the longer side of the battery) to the negative terminal (indicated by the shorter side). In order to determine the value of I, we must utilize Kirchhoff's13 voltage law.
Kirchhoff's Voltage Law
In a closed path within an electrical circuit, the sum of the voltage drops in any one direction equals the sum of the voltage sources in the same direction.

In the circuit shown in Figure 1.21, there are two voltage drops, one of 3I and the other of 2I. Their sum equals 20, the voltage supplied by the single battery. Hence

    3I + 2I = 20
         5I = 20
          I = 4.

Thus the current flow through the circuit is 4 amperes.
Practice Problem 2
Determine the current through the following electrical circuit:

[circuit diagram: a 52-volt battery and a 20-volt battery with resistors of 3 ohms and 5 ohms]
12 Georg Simon Ohm (1787–1854) was a German physicist whose pamphlet Die galvanische Kette mathematisch bearbeitet greatly influenced the development of the theory of electricity.
13 Gustav Robert Kirchhoff (1824–1887) was a German physicist who made significant contributions to the fields of electricity and electromagnetic radiation.
Figure 1.22 An electrical circuit (a 30-volt battery; resistors of 1 ohm, 1 ohm, 4 ohms, and 2 ohms; nodes C, D, E, F; junctions A and B; branch currents I1, I2, and I3)
A more complicated circuit is shown in Figure 1.22. Here the junctions at A and B (indicated by the dots) create three branches in the circuit, each with its own current flow. Starting at B and applying Kirchhoff's voltage law to the closed path BDCAB, we obtain 1I1 + 1I1 − 4I2 = 0; that is,

    2I1 − 4I2 = 0.     (7)
Note that, since we are proceeding around the closed path in a clockwise direction, the flow from A to B is opposite to the direction indicated for I2. Thus the voltage drop at the 4-ohm resistor is 4(−I2). Moreover, because there is no voltage source in this closed path, the sum of the three voltage drops is 0. Similarly, from the closed path BAEFB, we obtain the equation

    4I2 + 2I3 = 30,     (8)
and from the closed path BDCAEFB, we obtain the equation

    2I1 + 2I3 = 30.     (9)

Note that, in this case, equation (9) is the sum of equations (7) and (8). Since equation (9) provides no information not already given by equations (7) and (8), we can discard it. A similar situation occurs in all of the networks that we consider, so we may ignore any closed paths that contain only currents that are accounted for in other equations obtained from Kirchhoff's voltage law.
At this point, we have two equations (7) and (8) in three variables, so another equation is required if we are to obtain a unique solution for I1, I2, and I3. This equation is provided by another of Kirchhoff's laws.
Kirchhoff's Current Law
The current flow into any junction equals the current flow out of the junction.
In the context of Figure 1.22, Kirchhoff's current law states that the flow into junction A, which is I1 + I2, equals the current flow out of A, which is I3. Hence we obtain the equation I1 + I2 = I3, or

    I1 + I2 − I3 = 0.     (10)

Notice that the current law also applies at junction B, where it yields the equation I3 = I1 + I2. However, since this equation is equivalent to equation (10), we can ignore it. In general, if the current law is applied at each junction, then any one of the resulting equations is redundant and can be ignored.
Thus a system of equations that determines the current flows in the circuit in Figure 1.22 is

    2I1 − 4I2       =  0
          4I2 + 2I3 = 30
    I1 +  I2  −  I3 =  0.
Solving this system by Gaussian elimination, we see that I1 = 6, I2 = 3, and I3 = 9, and thus the branch currents are 6 amperes, 3 amperes, and 9 amperes, respectively.
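The same mechanical approach works here: equations (7), (8), and (10) form a square linear system, so any linear-system solver finds the branch currents. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Coefficient matrix for the branch currents of Figure 1.22:
#   2*I1 - 4*I2        =  0   (voltage law, closed path BDCAB)
#          4*I2 + 2*I3 = 30   (voltage law, closed path BAEFB)
#   I1  +  I2  -  I3   =  0   (current law at junction A)
A = np.array([[2.0, -4.0,  0.0],
              [0.0,  4.0,  2.0],
              [1.0,  1.0, -1.0]])
b = np.array([0.0, 30.0, 0.0])

currents = np.linalg.solve(A, b)
print(np.round(currents))  # [6. 3. 9.]  (branch currents in amperes)
```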
Practice Problem 3
Determine the currents in each branch of the following electrical circuit:

[circuit diagram: a 33-volt battery and an 8-volt battery; resistors of 2 ohms, 6 ohms, 1 ohm, and 3 ohms; branch currents I1, I2, and I3]
EXERCISES
In Exercises 1–6, determine whether the statements are true or false.
1. The (i, j)-entry of the input-output matrix represents the amount of input from sector i needed to produce one unit of output from sector j.
2. For an economy with n × n input-output matrix C, the gross production necessary to satisfy exactly a demand d is a solution of (In − C)x = d.
3. If C is the input-output matrix for an economy with gross production vector x, then Cx is the net production vector.
4. In any closed path within an electrical circuit, the algebraic sum of all the voltage drops in the same direction equals 0.
5. At every junction in an electrical circuit, the current flow into the junction equals the current flow out of the junction.
6. The voltage drop at each resistor in an electrical network equals the product of the resistance and the amount of current through the resistor.

In Exercises 7–16, suppose that an economy is divided into four sectors (agriculture, manufacturing, services, and entertainment) with the following input-output matrix:

         Ag.   Man.  Svcs.  Ent.
  C = [   ?    .11   .15     ?  ]   Agriculture
      [  .20   .08   .24   .07  ]   Manufacturing
      [  .18   .16   .06   .22  ]   Services
      [  .09   .07   .12   .05  ]   Entertainment

(the two entries marked ? are illegible in this copy)
7. What amount of input from the services sector is needed for a gross production of $50 million by the entertainment sector?
8. What amount of input from the manufacturing sector is needed for a gross production of $100 million by the agriculture sector?
9. Which sector is least dependent on services?
10. Which sector is most dependent on services?
11. On which sector is agriculture least dependent?
12. On which sector is agriculture most dependent?
13. If the gross production for this economy is $30 million of agriculture, $40 million of manufacturing, $30 million of services, and $20 million of entertainment, what is the total value of the inputs from each sector consumed during the production process?
14. If the gross production for this economy is $20 million of agriculture, $30 million of manufacturing, $20 million of services, and $10 million of entertainment, what is the total value of the inputs from each sector consumed during the production process?
15. If the gross production for this economy is $30 million of agriculture, $40 million of manufacturing, $30 million of services, and $20 million of entertainment, what is the net production of each sector?
16. If the gross production for this economy is $20 million of agriculture, $30 million of manufacturing, $20 million of services, and $10 million of entertainment, what is the net production of each sector?
17. The input-output matrix for an economy producing transportation, food, and oil follows:

[3 × 3 input-output matrix with rows and columns labeled Transportation, Food, and Oil; the entries are illegible in this copy]

(a) What is the net production corresponding to a gross production of $40 million of transportation, $30 million of food, and $35 million of oil?
(b) What gross production is required to satisfy exactly a demand for $32 million of transportation, $48 million of food, and $24 million of oil?

18. The input-output matrix for an economy with sectors of metals, nonmetals, and services follows:

[3 × 3 input-output matrix with rows and columns labeled Metals, Nonmetals, and Services; the entries are illegible in this copy]

(a) What is the net production corresponding to a gross production of $50 million of metals, $60 million of nonmetals, and $40 million of services?
(b) What gross production is required to satisfy exactly a demand for $120 million of metals, $180 million of nonmetals, and $150 million of services?

19. Suppose that a nation's energy production is divided into two sectors: electricity and oil. Each dollar's worth of electricity output requires $0.10 of electricity input and $0.30 of oil input, and each dollar's worth of oil output requires $0.40 of electricity input and $0.20 of oil input.
(a) Write the input-output matrix for this economy.
(b) What is the net production corresponding to a gross production of $60 million of electricity and $50 million of oil?
(c) What gross production is needed to satisfy exactly a demand for $60 million of electricity and $72 million of oil?

20. Suppose that an economy is divided into two sectors: nongovernment and government. Each dollar's worth of nongovernment output requires $0.10 in nongovernment input and $0.10 in government input, and each dollar's worth of government output requires $0.20 in nongovernment input and $0.70 in government input.
(a) Write the input-output matrix for this economy.
(b) What is the net production corresponding to a gross production of $20 million in nongovernment and $30 million in government?
(c) What gross production is needed to satisfy exactly a demand for $45 million in nongovernment and $50 million in government?

21. Consider an economy that is divided into three sectors: finance, goods, and services. Suppose that each dollar's worth of financial output requires inputs of $0.10 from the finance sector, $0.20 from the goods sector, and $0.20 from the services sector; each dollar's worth of goods requires inputs of $0.10 from the finance sector, $0.40 from the goods sector, and $0.20 from the services sector; and each dollar's worth of services requires inputs of $0.15 from the finance sector, $0.10 from the goods sector, and $0.30 from the services sector.
(a) What is the net production corresponding to a gross production of $70 million of finance, $50 million of goods, and $60 million of services?
(b) What is the gross production corresponding to a net production of $40 million of finance, $50 million of goods, and $30 million of services?
(c) What gross production is needed to satisfy exactly a demand for $40 million of finance, $36 million of goods, and $44 million of services?

22. Consider an economy that is divided into three sectors: agriculture, manufacturing, and services. Suppose that each dollar's worth of agricultural output requires inputs of $0.10 from the agricultural sector, $0.15 from the manufacturing sector, and $0.30 from the services sector; each dollar's worth of manufacturing output requires inputs of $0.20 from the agricultural sector, $0.25 from the manufacturing sector, and $0.10 from the services sector; and each dollar's worth of services output requires inputs of $0.20 from the agricultural sector, $0.35 from the manufacturing sector, and $0.10 from the services sector.
(a) What is the net production corresponding to a gross production of $40 million of agriculture, $50 million of manufacturing, and $30 million of services?
(b) What gross production is needed to satisfy exactly a demand for $90 million of agriculture, $72 million of manufacturing, and $96 million of services?

23. Let C be the input-output matrix for an economy, x be the gross production vector, d be the demand vector, and p be the vector whose components are the unit prices of the products or services produced by each sector. Economists call the vector v = p − CTp the value-added vector. Show that pTd = vTx. (The single entry in the 1 × 1 matrix pTd represents the gross domestic product of the economy.) Hint: Compute pTx in two different ways. First, replace p by v + CTp, and then replace x by Cx + d.

24. Suppose that the columns of the input-output matrix

         Ag.  Min.  Tex.
  C = [  .1    .2     ?  ]   Agriculture
      [  .2    .4     ?  ]   Minerals
      [  .3    .1    .1  ]   Textiles
measure the amount (in tons) of each input needed to produce one ton of output from each sector. Let p1, p2, and p3 denote the prices per ton of agricultural products, minerals, and textiles, respectively.
(a) Interpret the vector CTp, where

    p = [ p1 ]
        [ p2 ]
        [ p3 ].

(b) Interpret p − CTp.

In Exercises 25–29, determine the currents in each branch of the given circuit.

25. [circuit diagram]

26. [circuit diagram]

27. [circuit diagram]

28. [circuit diagram]

29. [circuit diagram]

30. In the following electrical network, determine the value of v that makes I2 = 0:

[circuit diagram]

In the following exercise, use either a calculator with matrix capabilities or computer software such as MATLAB to solve the problem:

31. Let C be a 6 × 6 input-output matrix and

    d = [ 150, 100, 200, 125, 300, 180 ]T

(the entries of C are illegible in this copy), where C is the input-output matrix for an economy that has been divided into six sectors, and d is the net production for this economy (where units are in millions of dollars). Find the gross production vector required to produce d.
SOLUTIONS TO THE PRACTICE PROBLEMS

1. (a) The input-output matrix is as follows:

         Tour.  Tran.  Svcs.
  C = [   .3     .2    .05  ]   Tourism
      [   .1     .4    .05  ]   Transportation
      [   .3     .2    .15  ]   Services

(b) Each dollar's worth of output from the services sector requires an input of $0.05 from the tourism sector. Hence a gross output of $20 million from the services sector requires an input of 20($0.05) = $1 million from the tourism sector.

(c) The total value of the inputs consumed by each sector during the production process is given by

    Cx = [ .3  .2  .05 ] [ 10 ]   [ 7 ]
         [ .1  .4  .05 ] [ 15 ] = [ 8 ].
         [ .3  .2  .15 ] [ 20 ]   [ 9 ]

Hence, during the production process, $7 million in inputs is consumed by the tourism sector, $8 million by the transportation sector, and $9 million by the services sector.

(d) The gross production vector is

    x = [ 70 ]
        [ 50 ],
        [ 60 ]

and so the net production vector is

    x − Cx = [ 70 ]   [ 34 ]   [ 36 ]
             [ 50 ] − [ 30 ] = [ 20 ].
             [ 60 ]   [ 40 ]   [ 20 ]

Hence the net productions of the tourism, transportation, and services sectors are $36 million, $20 million, and $20 million, respectively.

(e) To meet a demand

    d = [ 30 ]
        [ 50 ],
        [ 40 ]

the gross production vector must be a solution of the equation (I3 − C)x = d. The augmented matrix of the system is

    [  .7  −.2  −.05   30 ]
    [ −.1   .6  −.05   50 ],
    [ −.3  −.2   .85   40 ]

and so the gross production vector is

    x = [  80 ]
        [ 105 ].
        [ 100 ]

Thus the gross productions of the tourism, transportation, and services sectors are $80 million, $105 million, and $100 million, respectively.

2. The algebraic sum of the voltage drops around the circuit is 5I + 3I = 8I. Since the current flow from the 20-volt battery is in the direction opposite to I, the algebraic sum of the voltage sources around the circuit is 52 + (−20) = 32. Hence we have the equation 8I = 32, so I = 4 amperes.

3. There are two junctions in the given circuit, namely, A and B in the following figure:

[circuit diagram: a 33-volt battery and an 8-volt battery; resistors of 2 ohms, 6 ohms, 1 ohm, and 3 ohms; nodes C, D, E, F; branch currents I1, I2, and I3]

Applying Kirchhoff's current law to junction A or junction B gives I1 = I2 + I3. Applying Kirchhoff's voltage law to the closed path ABDCA yields

    1I2 + 6I1 + 2I1 = 33.

Similarly, from the closed path AEFBA, we obtain

    3I3 + 1(−I2) = 8.

Hence the system of equations describing the current flows is

    I1 −  I2 −  I3 =  0
    8I1 +  I2      = 33
        −  I2 + 3I3 =  8.

Solving this system gives I1 = 4, I2 = 1, and I3 = 3. Hence the branch currents are 4 amperes, 1 ampere, and 3 amperes, respectively.
1.6 THE SPAN OF A SET OF VECTORS
In Section 1.2, we defined a linear combination of vectors u1, u2, ..., uk in Rn to be a vector of the form c1u1 + c2u2 + ··· + ckuk, where c1, c2, ..., ck are scalars. For a given set S = {u1, u2, ..., uk} of vectors from Rn, we often need to find the set of all the linear combinations of u1, u2, ..., uk. For example, if A is an n × p matrix, then the set of vectors v in Rn such that Ax = v is consistent is precisely the set of all the linear combinations of the columns of A. We now define a term for such a set of linear combinations.

Definition For a nonempty set S = {u1, u2, ..., uk} of vectors in Rn, we define the span of S to be the set of all linear combinations of u1, u2, ..., uk in Rn. This set is denoted by Span S or Span {u1, u2, ..., uk}.
A linear combination of a single vector is just a multiple of that vector. So if u is in S, then every multiple of u is in Span S. Thus the span of {u} consists of all multiples of u. In particular, the span of {0} is {0}. Note, however, that if S contains even one nonzero vector, then Span S contains infinitely many vectors. Other examples of the span of a set follow.
Example 1
Describe the spans of the following subsets of R2:

    S1 = { [1; −1] },    S2 = { [1; −1], [−2; 2] },

    S3 = { [1; −1], w },    and    S4 = { [1; −1], [−2; 2], w },

where w is a vector that is not parallel to [1; −1] (its entries are illegible in this copy).
Solution The span of S1 consists of all linear combinations of the vectors in S1. Since a linear combination of a single vector is just a multiple of that vector, the span of S1 consists of all multiples of [1; −1]; that is, all vectors of the form [c; −c] for some scalar c. These vectors all lie along the line with equation y = −x, as pictured in Figure 1.23.

Figure 1.23 The span of S1

The span of S2 consists of all linear combinations of the vectors [1; −1] and [−2; 2]. Such vectors have the form

    a[1; −1] + b[−2; 2] = a[1; −1] − 2b[1; −1] = (a − 2b)[1; −1],

where a and b are arbitrary scalars. Taking c = a − 2b, we see that these are the same vectors as those in the span of S1. Hence Span S2 = Span S1. (See Figure 1.24.)

Figure 1.24 The span of S2

The span of S3 consists of all linear combinations of the two vectors in S3. These two vectors are not parallel. Hence an arbitrary vector v in R2 is a linear combination of these two vectors, as we learned in Section 1.2, so every vector in R2 is a linear combination of the vectors in S3. It follows that the span of S3 is R2.
Finally, since every vector in R2 is a linear combination of the two nonparallel vectors in S3, every vector in R2 is also a linear combination of the vectors in S4. Therefore the span of S4 is again R2.
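The conclusions of Example 1 can be restated in terms of rank: a set of vectors spans all of R2 exactly when the matrix having those vectors as columns has rank 2. In the sketch below (NumPy assumed), the vector [1, 1] is a hypothetical stand-in for the illegible nonparallel vector; any vector not parallel to [1, −1] behaves the same way.

```python
import numpy as np

u = np.array([1.0, -1.0])
w = np.array([-2.0, 2.0])  # w = -2u, so it is parallel to u
t = np.array([1.0, 1.0])   # hypothetical vector not parallel to u

# Span{u, w} is only the line y = -x: the matrix [u w] has rank 1.
rank_S2 = np.linalg.matrix_rank(np.column_stack([u, w]))

# Adjoining a nonparallel vector raises the rank to 2, so the span is R^2.
rank_S4 = np.linalg.matrix_rank(np.column_stack([u, w, t]))

print(rank_S2, rank_S4)  # 1 2
```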
Example 2
For the standard vectors

    e1 = [1; 0; 0],   e2 = [0; 1; 0],   e3 = [0; 0; 1]
Figure 1.25 Span {e1, e2} is the xy-plane

Figure 1.26 The span of {u, v}, where u and v are nonparallel vectors in R3
in R3, we see that the span of {e1, e2} is the set of vectors of the form

    a e1 + b e2 = [a; b; 0],

where a and b are scalars. Thus Span {e1, e2} is the set of vectors in the xy-plane of R3. (See Figure 1.25.) More generally, if u and v are nonparallel vectors in R3, then the span of {u, v} is a plane through the origin. (See Figure 1.26.) Furthermore, Span {e3} is the set of vectors that lie along the z-axis in R3. (See Figure 1.25.)
From the preceding examples, we see that saying "v belongs to the span of S = {u1, u2, ..., uk}" means exactly the same as saying "v equals some linear combination of the vectors u1, u2, ..., uk." So our comment at the beginning of the section can be rephrased as follows:

Let S = {u1, u2, ..., uk} be a set of vectors from Rn, and let A be the matrix whose columns are u1, u2, ..., uk. Then a vector v from Rn is in the span of S (that is, v is a linear combination of u1, u2, ..., uk) if and only if the equation Ax = v is consistent.
Example 3
Is v or w a vector in the span of S? If so, express it as a linear combination of the vectors in S. (Here S consists of three vectors in R4; their entries, and those of v and w, are illegible in this copy.)
Solution Let A be the matrix whose columns are the vectors in S. The vector v belongs to the span of S if and only if Ax = v is consistent. Since the reduced row echelon form of [A v] is

    [ 1  0  3   1 ]
    [ 0  1  2  −2 ]
    [ 0  0  0   0 ],
    [ 0  0  0   0 ]

Ax = v is consistent by Theorem 1.5. Hence v belongs to the span of S. To express v as a linear combination of the vectors in S, we need to find the actual solution of Ax = v. Using the reduced row echelon form of [A v], we see that the general solution of this equation is

    x1 =  1 − 3x3
    x2 = −2 − 2x3
    x3 free.

For example, by taking x3 = 0, we find that v equals the linear combination of the vectors in S with coefficients 1, −2, and 0.
In the same manner, w belongs to the span of S if and only if Ax = w is consistent. Because the reduced row echelon form of [A w] contains a row whose only nonzero entry lies in the last column, Theorem 1.5 shows that Ax = w is not consistent. Thus w does not belong to the span of S.
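The consistency test used in Example 3 is easy to automate: v lies in the span of the columns of A exactly when appending v as an extra column does not increase the rank. The vectors below are hypothetical placeholders (the example's own vectors are not legible in this copy); the function itself is just the Theorem 1.5 criterion restated in terms of rank.

```python
import numpy as np

def in_span(vectors, v):
    """True if v is a linear combination of the given vectors, i.e.,
    if Ax = v is consistent, where A has the vectors as its columns."""
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(np.column_stack([A, v]))
                == np.linalg.matrix_rank(A))

# Hypothetical vectors chosen for illustration; their span is the
# plane z = x + y in R^3.
S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]

print(in_span(S, np.array([2.0, 3.0, 5.0])))  # True: 2*S[0] + 3*S[1]
print(in_span(S, np.array([1.0, 1.0, 0.0])))  # False: 0 is not 1 + 1
```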
Practice Problem 1
Are u and v in the span of S? (The vectors u and v and the two vectors of S are illegible in this copy.)
In our examples so far, we started with a subset S of Rn and described the set V = Span S. In other problems, we might need to do the opposite: Start with a set V and find a set of vectors S for which Span S = V. If V is a set of vectors from Rn and Span S = V, then we say that S is a generating set for V, or that S generates V. Because every set is contained in its span, a generating set for V is necessarily contained in V.
Example 4
Let S be a set consisting of four vectors in R3 (their entries are illegible in this copy). Show that Span S = R3.
Solution Because S is contained in R3, it follows that Span S is contained in R3. Thus, in order to show that Span S = R3, we need only show that an arbitrary vector v in R3 belongs to Span S. Hence we must show that Ax = v is consistent for every v, where A is the 3 × 4 matrix whose columns are the vectors in S.
Let [R c] be the reduced row echelon form of [A v]. No matter what v is, R is the reduced row echelon form of A by Exercise 77 of Section 1.3. Since R has no zero row, there can be no row in [R c] in which the only nonzero entry lies in the last column. Thus Ax = v is consistent by Theorem 1.5, and so v belongs to Span S. Since v is an arbitrary vector in R3, it follows that Span S = R3.
The following theorem guarantees that the technique used in Example 4 can be applied to test whether or not any subset of Rm is a generating set for Rm:
THEOREM 1.6
The following statements about an m × n matrix A are equivalent:
(a) The span of the columns of A is Rm.
(b) The equation Ax = b has at least one solution (that is, Ax = b is consistent) for each b in Rm.
(c) The rank of A is m, the number of rows of A.
(d) The reduced row echelon form of A has no zero rows.
(e) There is a pivot position in each row of A.

PROOF Since, by Theorem 1.5, the equation Ax = b is consistent precisely when b equals a linear combination of the columns of A, statements (a) and
(b) are equivalent. Also, because A is an m x n matrix, statements (c) and (d) are equivalent. The details of these arguments are left to the reader. We now prove that statements (b) and (c) are equivalent.

First, let R denote the reduced row echelon form of A, and let e_m be the standard vector in R^m. There is a sequence of elementary row operations that transforms A into R. Since each of these elementary row operations is reversible, there is also a sequence of elementary row operations that transforms R into A. Apply the latter sequence of operations to [R e_m] to obtain a matrix [A d] for some d in R^m. Then the system Ax = d is equivalent to the system Rx = e_m. If (b) is true, then Ax = d, and hence Rx = e_m, must be consistent. But then Theorem 1.5 implies that the last row of R cannot be a zero row, for otherwise [R e_m] would have a row in which the only nonzero entry lies in the last column. Because R is in reduced row echelon form, R must therefore have no zero rows. It follows that the rank of A is m, establishing (c).

Conversely, assume that (c) is true, and let [R c] denote the reduced row echelon form of [A b]. Since A has rank m, R has no zero rows. Hence [R c] has no row whose only nonzero entry is in the last column. Therefore, by Theorem 1.5, Ax = b is consistent for every b.
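Theorem 1.6 translates directly into a computation: the columns of A generate R^m exactly when rank A = m, the number of rows. The following is a minimal sketch in Python using NumPy's rank routine; the function name and the example matrix are our own illustrations, not data from the text.

```python
import numpy as np

def columns_span_Rm(A):
    """Theorem 1.6: the span of the columns of A is R^m
    iff rank(A) equals m, the number of rows of A."""
    A = np.asarray(A, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[0]

# A 3 x 4 matrix whose reduced row echelon form has no zero row:
# its columns generate R^3.
A = [[1, 0, 0, 2],
     [0, 1, 0, 3],
     [0, 0, 1, 4]]
print(columns_span_Rm(A))  # True
```

Any of the equivalent conditions in the theorem could be tested instead; the rank comparison is simply the most direct one to compute.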
Practice Problem 2
Is the given set S of four vectors in R^3 a generating set for R^3?
MAKING A GENERATING SET SMALLER

In Example 3, we found that v is in the span of S by solving Ax = v, a system of 4 equations in 3 variables. If S contained only 2 vectors, the corresponding system would consist of 4 equations in 2 variables. In general, it is easier to check whether a vector is in the span of a set with fewer vectors than it is to check whether the vector is in the span of a set with more vectors. The following theorem establishes a useful property that enables us to reduce the size of a generating set in certain cases.
THEOREM 1.7
Let S = {u1, u2, ..., uk} be a set of vectors from R^n, and let v be a vector in R^n. Then Span {u1, u2, ..., uk, v} = Span {u1, u2, ..., uk} if and only if v belongs to the span of S.

PROOF  Suppose that v is in the span of S. Then v = a1u1 + a2u2 + ... + ak uk for some scalars a1, a2, ..., ak. If w is in Span {u1, u2, ..., uk, v}, then w can be written w = c1u1 + c2u2 + ... + ck uk + bv for some scalars c1, c2, ..., ck, b. By substituting a1u1 + a2u2 + ... + ak uk for v in the preceding equation, we can write w as a linear combination of the vectors u1, u2, ..., uk. So the span of {u1, u2, ..., uk, v} is contained in the span of {u1, u2, ..., uk}. On the other hand, any vector in Span {u1, u2, ..., uk} can be written as a linear combination
of the vectors u1, u2, ..., uk, v in which the coefficient of v is 0; so the span of {u1, u2, ..., uk} is also contained in the span of {u1, u2, ..., uk, v}. It follows that the two spans are equal.

Conversely, suppose that v does not belong to the span of S. Note that v is in the span of {u1, u2, ..., uk, v} because v = 0u1 + 0u2 + ... + 0uk + 1v. Hence Span {u1, u2, ..., uk} != Span {u1, u2, ..., uk, v}, because the second set contains v but the first does not.
Theorem 1.7 provides a method for reducing the size of a generating set: if one of the vectors in S is a linear combination of the others, it can be removed from S to obtain a smaller set having the same span as S. For instance, for the set S of three vectors in Example 3, one of the vectors is a linear combination of the other two. Hence the span of S is the same as the span of the smaller set obtained by deleting that vector.
Practice Problem 3
For the set S in Practice Problem 2, find a subset with the fewest vectors such that Span S = R^3.
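Theorem 1.7 suggests an algorithm for shrinking a generating set: repeatedly discard any vector that lies in the span of the remaining ones, since its removal leaves the span unchanged. The sketch below uses the fact that a vector is redundant exactly when deleting its column does not lower the rank; the function name and the example vectors are our own, not the text's.

```python
import numpy as np

def shrink_generating_set(vectors):
    """Remove vectors that are linear combinations of the rest.
    Theorem 1.7 guarantees the span is unchanged at each step."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    target_rank = np.linalg.matrix_rank(np.column_stack(vecs))
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            rest = vecs[:i] + vecs[i + 1:]
            # v_i is redundant iff the remaining vectors still have full rank.
            if rest and np.linalg.matrix_rank(np.column_stack(rest)) == target_rank:
                del vecs[i]
                changed = True
                break
    return vecs

# The third vector is the sum of the first two, so one vector can be removed.
smaller = shrink_generating_set([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
print(len(smaller))  # 2
```

The loop stops when every remaining vector is needed, which is exactly the situation Section 1.7 studies under the name linear independence.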
EXERCISES

In Exercises 1-8, determine whether the given vector is in the span of the indicated set of two vectors.

In Exercises 9-16, determine whether the given vector is in the span of the indicated set of three vectors.

In Exercises 17-20, determine the values of r for which v is in the span of S, for the indicated set S and vector v.
In Exercises 21-28, a set of vectors in R^n is given. Determine whether this set is a generating set for R^n.
In Exercises 45-64, determine whether the statements are true or false.

45. Let S = {u1, u2, ..., uk} be a nonempty set of vectors in R^n. A vector v belongs to the span of S if and only if v = c1u1 + c2u2 + ... + ck uk for some scalars c1, c2, ..., ck.

46. The span of {0} is {0}.

47. If A = [u1 u2 ... uk] and the matrix equation Ax = v is inconsistent, then v does not belong to the span of {u1, u2, ..., uk}.

48. If A is an m x n matrix, then Ax = b is consistent for every b in R^m if and only if the rank of A is n.

49. Let S = {u1, u2, ..., uk} be a subset of R^n. Then the span of S is R^n if and only if the rank of [u1 u2 ... uk] is n.

50. Every finite subset of R^n is contained in its span.
In Exercises 29-36, an m x n matrix A is given. Determine whether the equation Ax = b is consistent for every b in R^m.

In Exercises 37-44, a set S of vectors in R^n is given. Find a subset of S with the same span as S that is as small as possible.
51. If S1 and S2 are finite subsets of R^n such that S1 is contained in Span S2, then Span S1 is contained in Span S2.

52. If S1 and S2 are finite subsets of R^n having equal spans, then S1 = S2.

53. If S1 and S2 are finite subsets of R^n having equal spans, then S1 and S2 contain the same number of vectors.

54. Let S be a nonempty set of vectors in R^n, and let v be in R^n. The spans of S and S U {v} are equal if and only if v is in S.

55. The span of a set of two nonparallel vectors in R^2 is R^2.

56. The span of any finite nonempty subset of R^n contains the zero vector.

57. If v belongs to the span of S, so does cv for every scalar c.

58. If u and v belong to the span of S, so does u + v.

59. The span of {v} consists of every multiple of v.

60. If S is a generating set for R^m that contains k vectors, then k >= m.

61. If A is an m x n matrix whose reduced row echelon form contains no zero rows, then the columns of A form a generating set for R^m.

62. If the columns of an n x n matrix A form a generating set for R^n, then the reduced row echelon form of A is I_n.

63. If A is an m x n matrix such that Ax = b is inconsistent for some b in R^m, then rank A < m.

64. If S1 is contained in a finite set S2 and S1 is a generating set for R^m, then S2 is also a generating set for R^m.
65. Let u1 and u2 be the given vectors in R^2.
(a) How many vectors are in {u1, u2}?
(b) How many vectors are in the span of {u1, u2}?

66. Give three different generating sets for the set of vectors that lie in the xy-plane of R^3.

67. Let A be an m x n matrix with m > n. Explain why Ax = b is inconsistent for some b in R^m.

68. What can be said about the number of vectors in a generating set for R^m? Explain your answer.

69. Let S1 and S2 be finite subsets of R^n such that S1 is contained in S2. Prove that if S1 is a generating set for R^n, then so is S2.

70. Let u and v be any vectors in R^n. Prove that the spans of {u, v} and {u + v, u - v} are equal.

71. Let u1, u2, ..., uk be vectors in R^n and c1, c2, ..., ck be nonzero scalars. Prove that Span {u1, u2, ..., uk} = Span {c1u1, c2u2, ..., ck uk}.

72. Let u1, u2, ..., uk be vectors in R^n and c be a scalar. Prove that the span of {u1, u2, ..., uk} is equal to the span of {u1 + cu2, u2, ..., uk}.

73. Let R be the reduced row echelon form of an m x n matrix A. Is the span of the columns of R equal to the span of the columns of A? Justify your answer.

74. Let S1 and S2 be finite subsets of R^n such that S1 is contained in S2. Use only the definition of span to prove that Span S1 is contained in Span S2.

75. Let S be a finite subset of R^n. Prove that if u and v are in the span of S, then so is u + cv for any scalar c.

76. Let V be the span of a finite subset of R^n. Show that either V = {0} or V contains infinitely many vectors.

77. Let B be a matrix obtained from A by performing a single elementary row operation on A. Prove that the span of the rows of B equals the span of the rows of A. Hint: Use Exercises 71 and 72.

78. Prove that every linear combination of the rows of A can be written as a linear combination of the rows of the reduced row echelon form of A. Hint: Use Exercise 77.

In Exercises 79-82, use either a calculator with matrix capabilities or computer software such as MATLAB to determine whether each given vector is in the span of the given set.
SOLUTIONS TO THE PRACTICE PROBLEMS

1. Let A be the matrix whose columns are the vectors of S. Then u is in the span of S if and only if Ax = u is consistent. Because the reduced row echelon form of [A u] is I3, this system is inconsistent. Hence u is not in the span of S. On the other hand, the reduced row echelon form of [A v] shows that Ax = v is consistent, and so v is in the span of S.

2. Let A be the matrix whose columns are the vectors in S. The reduced row echelon form of A has a pivot position in each row, so the rank of A is 3, and S is a generating set for R^3 by Theorem 1.6.

3. From the reduced row echelon form of A in Practice Problem 2, we see that the last column of A is a linear combination of the first three columns. Thus the last vector can be removed from S without changing its span, and the remaining set S' of three vectors is a subset of S that is a generating set for R^3. Moreover, this set is the smallest generating set possible, because removing any vector from S' leaves a set of only 2 vectors. Since the matrix whose columns are the vectors in such a set is a 3 x 2 matrix, it cannot have rank 3, and so the set cannot be a generating set for R^3 by Theorem 1.6.
1.7 LINEAR DEPENDENCE AND LINEAR INDEPENDENCE
In Section 1.6, we saw that it is possible to reduce the size of a generating set if some vector in the generating set is a linear combination of the others. In fact, by Theorem 1.7, this vector can be removed without affecting the span. In this section, we consider the problem of recognizing when a generating set cannot be made smaller.

Consider, for example, the set S = {u1, u2, u3, u4}. In this case, the reader should check that u4 is not a linear combination of the vectors u1, u2, and u3. However, this does not mean that we cannot find a smaller set having the same span as S, because it is possible that one of u1, u2, and u3 might be a linear combination of the other vectors in S. In fact, this is precisely the situation, because u3 = 5u1 - 3u2 + 0u4. Thus checking whether one of the vectors in a generating set is a linear combination of the others could require us to solve many systems of linear equations. Fortunately, a better method is available.

In the preceding example, in order that we do not have to guess which of u1, u2, u3, and u4 can be expressed as a linear combination of the others, let us formulate the problem differently. Note that because u3 = 5u1 - 3u2 + 0u4, we must have

-5u1 + 3u2 + u3 - 0u4 = 0.

Thus, instead of trying to write some u_i as a linear combination of the others, we can try to write 0 as a linear combination of u1, u2, u3, and u4. Of course, this is always possible if we take each coefficient in the linear combination to be 0. But if there is a linear combination of u1, u2, u3, and u4 that equals 0 in which not all of the coefficients are 0, then we can express one of the u_i's as a linear combination of the others. In this case, the equation -5u1 + 3u2 + u3 - 0u4 = 0 enables us to express any one of u1, u2, and u3 (but not u4) as a linear combination of the others. For example, since -5u1 + 3u2 + u3 - 0u4 = 0, we have

-5u1 = -3u2 - u3 + 0u4
u1 = (3/5)u2 + (1/5)u3 + 0u4.

We see that at least one of the vectors depends on (is a linear combination of) the others. This idea motivates the following definitions.

Definitions.  A set of k vectors {u1, u2, ..., uk} in R^n is called linearly dependent if there exist scalars c1, c2, ..., ck, not all 0, such that

c1u1 + c2u2 + ... + ck uk = 0.

In this case, we also say that the vectors u1, u2, ..., uk are linearly dependent.
A set of k vectors {u1, u2, ..., uk} is called linearly independent if the only scalars c1, c2, ..., ck such that

c1u1 + c2u2 + ... + ck uk = 0

are c1 = c2 = ... = ck = 0. In this case, we also say that the vectors u1, u2, ..., uk are linearly independent. Note that a set is linearly independent if and only if it is not linearly dependent.
Example 1

Show that the given sets S1 and S2 are linearly dependent.

Solution  For S1 = {u1, u2, u3}, the equation c1u1 + c2u2 + c3u3 = 0 is true with c1 = 2, c2 = -1, and c3 = 1. Since not all the coefficients in the preceding linear combination are 0, S1 is linearly dependent. Because a similar linear combination of the vectors in S2 equals 0, and at least one of its coefficients is nonzero, S2 is also linearly dependent.
As Example 1 suggests, any finite subset S = {0, u1, u2, ..., uk} of R^n that contains the zero vector is linearly dependent, because

1(0) + 0u1 + 0u2 + ... + 0uk = 0

is a linear combination of the vectors in S in which at least one coefficient is nonzero.
CAUTION
While the equation

0u1 + 0u2 + ... + 0uk = 0

is true, it tells us nothing about the linear independence or dependence of the set S1 in Example 1. A similar statement is true for any set of vectors {u1, u2, ..., uk}. For a set of vectors to be linearly dependent, the equation

c1u1 + c2u2 + ... + ck uk = 0

must be satisfied with at least one nonzero coefficient.
Since the equation c1u1 + c2u2 + ... + ck uk = 0 can be written as the matrix-vector product Ax = 0, we have the following useful observation:

The set {u1, u2, ..., uk} is linearly dependent if and only if there exists a nonzero solution of Ax = 0, where A = [u1 u2 ... uk].
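This observation is easy to mechanize: form A from the vectors and ask whether Ax = 0 has a nonzero solution, that is, whether the null space of A is nontrivial. A sketch using SymPy's exact nullspace() method follows; the function name and sample vectors are our own illustrations.

```python
from sympy import Matrix

def is_linearly_dependent(vectors):
    """The columns of A = [u1 u2 ... uk] are linearly dependent
    iff Ax = 0 has a nonzero solution, i.e. iff nullspace(A) is nontrivial."""
    A = Matrix([list(v) for v in vectors]).T  # the vectors become columns
    return len(A.nullspace()) > 0

print(is_linearly_dependent([[1, 0], [2, 0]]))  # True: one vector is a multiple of the other
print(is_linearly_dependent([[1, 0], [0, 1]]))  # False: the standard vectors are independent
```

Because SymPy works in exact rational arithmetic, this test avoids the floating-point tolerance questions that a numerical rank computation raises.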
Example 2

Determine whether the given set S of four vectors in R^3 is linearly dependent or linearly independent.

Solution  We must determine whether Ax = 0 has a nonzero solution, where A is the 3 x 4 matrix whose columns are the vectors in S. We row reduce the augmented matrix [A 0] of Ax = 0,
and its reduced row echelon form is

[1  0   2  0  0]
[0  1  -1  0  0]
[0  0   0  1  0].
Hence the general solution of this system is

x1 = -2x3
x2 =   x3
x3 free
x4 = 0.

Because the general solution of Ax = 0 contains a free variable, this system of linear equations has infinitely many solutions, and we can obtain a nonzero solution by choosing any nonzero value of the free variable. Taking x3 = 1, for instance, we see that
the vector with x1 = -2, x2 = 1, x3 = 1, and x4 = 0 is a nonzero solution of Ax = 0. Thus S is a linearly dependent subset of R^3, since -2u1 + u2 + u3 + 0u4 = 0 is a representation of 0 as a linear combination of the vectors in S.
Example 3

Determine whether the given set S of three vectors is linearly dependent or linearly independent.

Solution  As in Example 2, we must check whether Ax = 0 has a nonzero solution, where A is the matrix whose columns are the vectors in S. There is a way to do this without actually solving Ax = 0 (as we did in Example 2). Note that the system Ax = 0 has nonzero solutions if and only if its general solution contains a free variable. Since the reduced row echelon form of A is
[1 0 0]
[0 1 0]
[0 0 1],

the rank of A is 3, and the nullity of A is 3 - 3 = 0. Thus the general solution of Ax = 0 has no free variables. So Ax = 0 has no nonzero solutions, and hence S is linearly independent.
In Example 3, we showed that a particular set S is linearly independent without actually solving a system of linear equations. Our next theorem shows that a similar technique can be used for any set whatsoever. Note the relationship between this theorem and Theorem 1.6.
THEOREM 1.8
The following statements about an m x n matrix A are equivalent:

(a) The columns of A are linearly independent.
(b) The equation Ax = b has at most one solution for each b in R^m.
(c) The nullity of A is zero.
(d) The rank of A is n, the number of columns of A.
(e) The columns of the reduced row echelon form of A are distinct standard vectors in R^m.
(f) The only solution of Ax = 0 is 0.
(g) There is a pivot position in each column of A.

PROOF  We have already noted that (a) and (f) are equivalent, and clearly (f) and (g) are equivalent. To complete the proof, we show that (b) implies (c), (c) implies (d), (d) implies (e), (e) implies (f), and (f) implies (b).

(b) implies (c)  Since 0 is a solution of Ax = 0, (b) implies that Ax = 0 has no nonzero solutions. Thus the general solution of Ax = 0 has no free variables. Since the number of free variables is the nullity of A, we see that the nullity of A is zero.

(c) implies (d)  Because rank A + nullity A = n, (d) follows immediately from (c).

(d) implies (e)  If the rank of A is n, then every column of A is a pivot column, and therefore the reduced row echelon form of A consists entirely of standard vectors. These are necessarily distinct because each column contains the first nonzero entry in some row.

(e) implies (f)  Let R be the reduced row echelon form of A. If the columns of R are distinct standard vectors in R^m, then R = [e1 e2 ... en]. Clearly, the only solution of Rx = 0 is 0, and since Ax = 0 is equivalent to Rx = 0, it follows that the only solution of Ax = 0 is 0.

(f) implies (b)  Let b be any vector in R^m. To show that Ax = b has at most one solution, we assume that u and v are both solutions of Ax = b and prove that u = v. Since u and v are solutions of Ax = b, we have

A(u - v) = Au - Av = b - b = 0.

So u - v is a solution of Ax = 0. Thus (f) implies that u - v = 0; that is, u = v. It follows that Ax = b has at most one solution.
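Statement (d) of Theorem 1.8 gives the quickest machine test for independence of columns: compare the rank with the number of columns. A minimal NumPy sketch follows; the example matrices are illustrative assumptions, not data from the text.

```python
import numpy as np

def columns_independent(A):
    """Theorem 1.8(d): the columns of A are linearly independent
    iff rank(A) = n, the number of columns (equivalently, nullity 0)."""
    A = np.asarray(A, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(columns_independent([[1, 0], [0, 1], [1, 1]]))  # True:  rank 2 = 2 columns
print(columns_independent([[1, 2], [2, 4], [3, 6]]))  # False: rank 1 < 2 columns
```

Note the symmetry with Theorem 1.6: generating R^m compares the rank against the number of rows, while independence compares it against the number of columns.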
Practice Problem 1
Is some vector in the given set a linear combination of the others?

The equation Ax = b is called homogeneous if b = 0. As Examples 2 and 3 illustrate, in checking whether a subset is linearly independent, we are led to a homogeneous equation. Note that, unlike an arbitrary equation, a homogeneous equation must be consistent, because 0 is a solution of Ax = 0. As a result, the important question concerning a homogeneous equation is not whether it has solutions, but whether 0 is the only solution. If not, then the system has infinitely many solutions. For example, the general solution of a homogeneous system of linear equations with more variables than equations must have free variables. Hence a homogeneous system of linear equations with more variables than equations has infinitely many solutions. According
to Theorem 1.8, the number of solutions of Ax = 0 determines the linear dependence or independence of the columns of A. In order to investigate some other properties of the homogeneous equation Ax = 0, let us consider this equation for a 3 x 5 matrix A. Since the reduced row echelon form of [A 0] is

[1 -4  0   7  -8  0]
[0  0  1  -4   5  0]
[0  0  0   0   0  0],

the general solution of Ax = 0 is

x1 = 4x2 - 7x4 + 8x5
x2 free
x3 = 4x4 - 5x5
x4 free
x5 free.

Expressing the solutions of Ax = 0 in vector form yields

x = x2 (4, 1, 0, 0, 0) + x4 (-7, 0, 4, 1, 0) + x5 (8, 0, -5, 0, 1),

where each 5-tuple denotes a column vector. Thus the solution set of Ax = 0 is the span of

S = { (4, 1, 0, 0, 0), (-7, 0, 4, 1, 0), (8, 0, -5, 0, 1) }.
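The vectors appearing in a vector form of the general solution are exactly a basis of the null space of A, which SymPy's nullspace() computes directly. Here is a sketch applied to the 3 x 5 reduced row echelon form worked out above (as we have reconstructed it):

```python
from sympy import Matrix, zeros

# RREF from the example above; rank 2, so the nullity is 5 - 2 = 3.
R = Matrix([[1, -4, 0,  7, -8],
            [0,  0, 1, -4,  5],
            [0,  0, 0,  0,  0]])

basis = R.nullspace()          # the vectors appearing in the vector form
print(len(basis))              # 3 free variables -> 3 basis vectors
assert all(R * v == zeros(3, 1) for v in basis)
```

Each basis vector corresponds to setting one free variable to 1 and the others to 0, which is the same bookkeeping the hand computation performs.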
In a similar manner, for a matrix A, we can express any solution of Ax = 0 as a linear combination of vectors in which the coefficients are the free variables in the general solution. We call such a representation a vector form of the general solution of Ax = 0. The solution set of this equation equals the span of the set of vectors that appear in a vector form of its general solution. For the preceding set S, we see from the vector form above that the only linear combination of vectors in S equal to 0 is the one in which all of the coefficients are zero. So S is linearly independent. More generally, the following result is true:
When a vector form of the general solution of Ax = 0 is obtained by the method described in Section 1.3, the vectors that appear in the vector form are linearly independent.

Practice Problem 2
Determine a vector form for the general solution of

 x1 - 3x2 +  x3 -  x4 -   x5 = 0
2x1 - 6x2 +  x3 - 3x4 -  9x5 = 0
-2x1 + 6x2 + 3x3 + 2x4 + 11x5 = 0.
LINEARLY DEPENDENT AND LINEARLY INDEPENDENT SETS

The following result provides a useful characterization of linearly dependent sets. In Section 2.3, we develop a simple method for implementing Theorem 1.9 to write one of the vectors in a linearly dependent set as a linear combination of the preceding vectors.
THEOREM 1.9
Vectors u1, u2, ..., uk in R^n are linearly dependent if and only if u1 = 0 or there exists an i >= 2 such that u_i is a linear combination of the preceding vectors u1, u2, ..., u_{i-1}.

PROOF  Suppose first that the vectors u1, u2, ..., uk in R^n are linearly dependent. If u1 = 0, then we are finished; so suppose u1 != 0. There exist scalars c1, c2, ..., ck, not all zero, such that

c1u1 + c2u2 + ... + ck uk = 0.

Let i denote the largest index such that c_i != 0. Note that i >= 2, for otherwise the preceding equation would reduce to c1u1 = 0, which is false because c1 != 0 and u1 != 0. Hence the preceding equation becomes

c1u1 + c2u2 + ... + c_i u_i = 0,

where c_i != 0. Solving this equation for u_i, we obtain

u_i = (-c1/c_i) u1 - (c2/c_i) u2 - ... - (c_{i-1}/c_i) u_{i-1}.

Thus u_i is a linear combination of u1, u2, ..., u_{i-1}. We leave the proof of the converse as an exercise.
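Theorem 1.9 can be read as a left-to-right scan: walk through the list and stop at the first vector that is 0 or lies in the span of its predecessors. The sketch below implements that scan using the fact that u_i belongs to Span {u1, ..., u_{i-1}} exactly when appending u_i as a column does not raise the rank; the function name and sample data are our own.

```python
import numpy as np

def first_dependent_vector(vectors):
    """Return the 0-based index of the first vector that is 0 or a linear
    combination of its predecessors (Theorem 1.9), or None if the list
    is linearly independent."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    if not np.any(vecs[0]):
        return 0  # u1 = 0
    for i in range(1, len(vecs)):
        prev_rank = np.linalg.matrix_rank(np.column_stack(vecs[:i]))
        with_ui = np.linalg.matrix_rank(np.column_stack(vecs[:i + 1]))
        if with_ui == prev_rank:
            return i  # u_i lies in the span of u1, ..., u_{i-1}
    return None

print(first_dependent_vector([[1, 0], [0, 1], [1, 1]]))  # 2
print(first_dependent_vector([[1, 0], [0, 1]]))          # None
```

This is also the idea behind the Section 2.3 method the text alludes to: the first dependent vector is the first non-pivot column encountered during row reduction.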
The following properties relate to linearly dependent and linearly independent sets.

Properties of Linearly Dependent and Independent Sets

1. A set consisting of a single nonzero vector is linearly independent, but {0} is linearly dependent.

2. A set of two vectors {u1, u2} is linearly dependent if and only if u1 = 0 or u2 is in the span of {u1}; that is, if and only if u1 = 0 or u2 is a multiple of u1. Hence a set of two vectors is linearly dependent if and only if one of the vectors is a multiple of the other.

3. Let S = {u1, u2, ..., uk} be a linearly independent subset of R^n, and let v be in R^n. Then v does not belong to the span of S if and only if {u1, u2, ..., uk, v} is linearly independent.

4. Every subset of R^n containing more than n vectors must be linearly dependent.

5. If S is a subset of R^n and no vector can be removed from S without changing its span, then S is linearly independent.
CAUTION
The result mentioned in item 2 is valid only for sets containing two vectors. For example, in R^3, the set {e1, e2, e1 + e2} is linearly dependent, but no vector in the set is a multiple of another.

Properties 1, 2, and 5 follow from Theorem 1.9. For a justification of property 3, observe that by Theorem 1.9, u1 != 0, and for i >= 2, no u_i is in the span of {u1, u2, ..., u_{i-1}}. If v does not belong to the span of S, the vectors u1, u2, ..., uk, v are also linearly independent by Theorem 1.9. Conversely, if the vectors u1, u2, ..., uk, v are linearly independent, then v is not a linear combination of u1, u2, ..., uk by Theorem 1.9. So v does not belong to the span of S. (See Figure 1.27 for the case k = 2.)
Figure 1.27  Linearly independent and linearly dependent sets of 3 vectors: when v lies in Span {u1, u2}, the set {u1, u2, v} is linearly dependent; when v lies outside Span {u1, u2}, the set {u1, u2, v} is linearly independent.
To justify property 4, consider a set {u1, u2, ..., uk} of k vectors from R^n, where k > n. The n x k matrix [u1 u2 ... uk] cannot have rank k because it has only n rows. Thus the set {u1, u2, ..., uk} is linearly dependent by Theorem 1.8. However, the next example shows that subsets of R^n containing n or fewer vectors may be either linearly dependent or linearly independent.
Example 4

Determine by inspection whether the given sets S1, S2, S3, and S4 are linearly dependent or linearly independent.

Solution  Since S1 contains the zero vector, it is linearly dependent. To determine whether S2, a set of two vectors, is linearly dependent or linearly independent, we need only check whether either of the vectors in S2 is a multiple of the other. Because one vector in S2 is a scalar multiple of the other, we see that S2 is linearly dependent.
To see whether S3 is linearly independent, consider the subset S = {u1, u2} consisting of its first two vectors. Because S is a set of two vectors, neither of which is a multiple of the other, S is linearly independent. Vectors in the span of S are linear combinations of the vectors in S, and therefore must have 0 as their third component. Since v has a nonzero third component, it does not belong to the span of S. So by property 3 in the preceding list, S3 = {u1, u2, v} is linearly independent. Finally, the set S4 is linearly dependent by property 4 because it is a set of 4 vectors from R^3.

Practice Problem 3
Determine by inspection whether the given sets are linearly dependent or linearly independent.
In this chapter, we introduced matrices and vectors and learned some of their fundamental properties. Since we can write a system of linear equations as an equation involving a matrix and vectors, we can use these arrays to solve any system of linear equations. It is surprising that the number of solutions of the equation Ax = b is related both to the simple concept of the rank of a matrix and also to the more complex concepts of generating sets and linearly independent sets. Yet this is exactly the case, as Theorems 1.6 and 1.8 show. To conclude this chapter, we present the following table, which summarizes the relationships among the ideas that were established in Sections 1.6 and 1.7. We assume that A is an m x n matrix with reduced row echelon form R. Properties listed in the same row of the table are equivalent.
The rank of A | The number of solutions of Ax = b | The columns of A | The reduced row echelon form R of A
rank A = m | Ax = b has at least one solution for every b in R^m. | The columns of A are a generating set for R^m. | Every row of R contains a pivot position.
rank A = n | Ax = b has at most one solution for every b in R^m. | The columns of A are linearly independent. | Every column of R contains a pivot position.
EXERCISES

In Exercises 1-12, determine by inspection whether the given sets are linearly dependent.

In Exercises 13-22, a set S is given. Determine by inspection whether S is linearly dependent or linearly independent.

In Exercises 31-38, a linearly dependent set S is given. Write some vector in S as a linear combination of the others.
[The exercise sets preceding Exercise 51 are not legible in this copy.]
In Exercises 51–62, write the vector form of the general solution of the given system of linear equations. [The systems themselves are not legible in this copy.]
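The vector form asked for in Exercises 51–62 writes every solution of Ax = 0 as a linear combination of fixed vectors whose coefficients are the free variables. Whatever vectors one obtains can be checked mechanically: A times any combination of them must be 0. A small sketch with a made-up equation (ours, not one of the text's exercises):

```python
def matvec(A, x):
    """Compute Ax for A stored as a list of rows."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

# Made-up single equation: x1 - 2*x2 + 3*x3 = 0.
A = [[1, -2, 3]]

# Vector form of the general solution: x = x2*u1 + x3*u2, with x2 and x3 free.
u1 = [2, 1, 0]
u2 = [-3, 0, 1]

# Each basic solution, and every combination of them, must satisfy Ax = 0.
assert matvec(A, u1) == [0]
assert matvec(A, u2) == [0]
s, t = 5, -2
combo = [s * a + t * b for a, b in zip(u1, u2)]
assert matvec(A, combo) == [0]
print("vector form verified")
```

As Exercise 70 below notes, the vectors produced by the method of Section 1.3 are also linearly independent, so no such vector is redundant.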
In Exercises 63–82, determine whether the statements are true or false.
63. If S is linearly independent, then no vector in S is a linear combination of the others.
64. If the only solution of Ax = 0 is 0, then the rows of A are linearly independent.
65. If the nullity of A is 0, then the columns of A are linearly dependent.
66. If the columns of the reduced row echelon form of A are distinct standard vectors, then the only solution of Ax = 0 is 0.
67. If A is an m × n matrix with rank A = n, then the columns of A are linearly independent.
68. A homogeneous equation is always consistent.
69. A homogeneous equation always has infinitely many solutions.
70. If a vector form of the general solution of Ax = 0 is obtained by the method described in Section 1.3, then the vectors that appear in the vector form are linearly independent.
71. For any vector v, {v} is linearly dependent.
72. A set of vectors in R^n is linearly dependent if and only if one of the vectors is a multiple of one of the others.
73. If a subset of R^n is linearly dependent, then it must contain at least n vectors.
74. If the columns of a 3 × 4 matrix are distinct, then they are linearly dependent.
75. For the system of linear equations Ax = b to be homogeneous, b must equal 0.
76. If a subset of R^n contains more than n vectors, then it is linearly dependent.
77. If every column of an m × n matrix A contains a pivot position, then the matrix equation Ax = b is consistent for every b in R^m.
78. If every row of an m × n matrix A contains a pivot position, then the matrix equation Ax = b is consistent for every b in R^m.
79. If c1u1 + c2u2 + ··· + ck uk = 0 for c1 = c2 = ··· = ck = 0, then {u1, u2, ..., uk} is linearly independent.
80. Any subset of R^n that contains 0 is linearly dependent.
81. The set of standard vectors in R^n is linearly independent.
82. The largest number of linearly independent vectors in R^n is n.
83. Find a 2 × 2 matrix A such that 0 is the only solution of Ax = 0.
84. Find a 2 × 2 matrix A such that Ax = 0 has infinitely many solutions.
85. Find an example of linearly independent subsets {u1, u2} and {v} of R^3 such that {u1, u2, v} is linearly dependent.
86. Let {u1, u2, ..., uk} be a linearly independent set of vectors in R^n, and let v be a vector in R^n such that v = c1u1 + c2u2 + ··· + ck uk for some scalars c1, c2, ..., ck, with c1 ≠ 0. Prove that {v, u2, ..., uk} is linearly independent.
87. Let u and v be distinct vectors in R^n. Prove that the set {u, v} is linearly independent if and only if the set {u + v, u − v} is linearly independent.
88. Let u, v, and w be distinct vectors in R^n. Prove that {u, v, w} is linearly independent if and only if the set {u + v, u + w, v + w} is linearly independent.
89. Prove that if {u1, u2, ..., uk} is a linearly independent subset of R^n and c1, c2, ..., ck are nonzero scalars, then {c1u1, c2u2, ..., ck uk} is also linearly independent.
90. Complete the proof of Theorem 1.9 by showing that if u1 = 0 or ui is in the span of {u1, u2, ..., u(i−1)} for some i ≥ 2, then {u1, u2, ..., uk} is linearly dependent. Hint: Separately consider the case in which u1 = 0 and the case in which ui is in the span of {u1, u2, ..., u(i−1)}.
91.* Prove that any nonempty subset of a linearly independent subset of R^n is linearly independent.
92. Prove that if S1 is a linearly dependent subset of R^n that is contained in a finite set S2, then S2 is linearly dependent.
93. Let S = {u1, u2, ..., uk} be a nonempty set of vectors from R^n. Prove that if S is linearly independent, then every vector in Span S can be written as c1u1 + c2u2 + ··· + ck uk for unique scalars c1, c2, ..., ck.
94. State and prove the converse of Exercise 93.
95. Let S = {u1, u2, ..., uk} be a nonempty subset of R^n and A be an m × n matrix. Prove that if S is linearly dependent and S' = {Au1, Au2, ..., Auk} contains k distinct vectors, then S' is linearly dependent.
96. Give an example to show that the preceding exercise is false if linearly dependent is changed to linearly independent.
97. Let S = {u1, u2, ..., uk} be a nonempty subset of R^n and A be an m × n matrix with rank n. Prove that if S is a linearly independent set, then the set {Au1, Au2, ..., Auk} is also linearly independent.
98. Let A and B be m × n matrices such that B can be obtained by performing a single elementary row operation on A. Prove that if the rows of A are linearly independent, then the rows of B are also linearly independent.
99. Prove that if a matrix is in reduced row echelon form, then its nonzero rows are linearly independent.
100. Prove that the rows of an m × n matrix A are linearly independent if and only if the rank of A is m. Hint: Use Exercises 98 and 99.
In Exercises 101–104, use either a calculator with matrix capabilities or computer software such as MATLAB to determine whether each given set is linearly dependent. In the case that the set is linearly dependent, write some vector in the set as a linear combination of the others. [The sets for Exercises 101–102 are not legible in this copy.]
* This exercise is used in Section 7.3 (on page 514).
Chapter 1 Review Exercises
103.–104. [The vectors for these exercises are not legible in this copy.]
SOLUTIONS TO THE PRACTICE PROBLEMS
----------------------
1. Let A be the matrix whose columns are the vectors in S. Since the reduced row echelon form of A is I4, the columns of A are linearly independent by Theorem 1.8. Thus S is linearly independent, and so no vector in S is a linear combination of the others.
2. The augmented matrix of the given system is row reduced, and the general solution is read from its reduced row echelon form. To obtain the vector form, we express the general solution as a linear combination of vectors in which the coefficients are the free variables. [The matrices and the resulting general solution are not legible in this copy.]
3. By property 1 on page 81, S1 is linearly independent. By property 2 on page 81, the first two vectors in S2 are linearly dependent; therefore S2 is linearly dependent by Theorem 1.9. By property 2 on page 81, S3 is linearly independent. By property 4 on page 81, S4 is linearly dependent.
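Practice Problem 1 rests on Theorem 1.8: the columns of A are linearly independent exactly when every column of the reduced row echelon form contains a pivot, that is, when rank A equals the number of columns. A small pure-Python sketch of that test (the function names are ours, not the text's):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via forward Gaussian elimination in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        p = next((i for i in range(r, m) if M[i][c] != 0), None)
        if p is None:
            continue                      # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, m):         # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    """True iff the vectors are linearly independent (rank = number of vectors)."""
    A = [list(row) for row in zip(*vectors)]   # the vectors become columns of A
    return rank(A) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # standard vectors of R^3
print(independent([(1, 2, 3), (2, 4, 6)]))             # second is twice the first
```

The first call reports an independent set (the standard vectors), the second a dependent one, matching the reasoning in the practice problems.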
REVIEW EXERCISES
In Exercises 1–17, determine whether the statements are true or false.
1. If B is a 3 × 4 matrix, then its columns are 1 × 3 vectors.
2. Any scalar multiple of a vector v in R^n is a linear combination of v.
3. If a vector v lies in the span of a finite subset S of R^n, then v is a linear combination of the vectors in S.
4. The matrix–vector product of an m × n matrix A and a vector in R^n is a linear combination of the columns of A.
5. The rank of the coefficient matrix of a consistent system of linear equations is equal to the number of basic variables in the general solution of the system.
6. The nullity of the coefficient matrix of a consistent system of linear equations is equal to the number of free variables in the general solution of the system.
7. Every matrix can be transformed into one and only one matrix in reduced row echelon form by means of a sequence of elementary row operations.
8. If the last row of the reduced row echelon form of an augmented matrix of a system of linear equations has only one nonzero entry, then the system is inconsistent.
9. If the last row of the reduced row echelon form of an augmented matrix of a system of linear equations has only zero entries, then the system has infinitely many solutions.
10. The zero vector of R^n lies in the span of any finite subset of R^n.
11. If the rank of an m × n matrix A is m, then the rows of A are linearly independent.
12. The set of columns of an m × n matrix A is a generating set for R^m if and only if the rank of A is m.
13. If the columns of an m × n matrix are linearly dependent, then the rank of the matrix is less than m.
14. If S is a linearly independent subset of R^n and v is a vector in R^n such that S ∪ {v} is linearly dependent, then v is in the span of S.
15. A subset of R^n containing more than n vectors must be linearly dependent.
16. A subset of R^n containing fewer than n vectors must be linearly independent.
17. A linearly dependent subset of R^n must contain more than n vectors.
18. Determine whether each of the following phrases is a misuse of terminology. If so, explain what is wrong with each one:
(a) an inconsistent matrix
(b) the solution of a matrix
(c) equivalent matrices
(d) the nullity of a system of linear equations
(e) the span of a matrix
(f) a generating set for a system of linear equations
(g) a homogeneous matrix
(h) a linearly independent matrix
19. (a) If A is an m × n matrix with rank n, what can be said about the number of solutions of Ax = b for every b in R^m?
(b) If A is an m × n matrix with rank m, what can be said about the number of solutions of Ax = b for every b in R^m?
In Exercises 20–27, use the following matrices to compute the given expression, or give a reason why the expression is not defined. [The matrices A, B, C, and D, and Exercise 20, are not legible in this copy.]
21. A + B
22. BC
23. AD^T
24. 2A − 3B
25. A^T D^T
26. A^T − B
27. C^T − 2D
28. A boat is traveling on a river in a southwesterly direction, parallel to the riverbank, at 10 mph. At the same time, a passenger is walking from the southeast side of the boat to the northwest at 2 mph. Find the velocity and the speed of the passenger with respect to the riverbank.
29. A supermarket chain has 10 stores. For each i such that 1 ≤ i ≤ 10, the 4 × 1 vector vi is defined so that its respective components represent the total value of sales in produce, meats, dairy, and processed foods at store i during January of last year. Provide an interpretation of the vector (0.1)(v1 + v2 + ··· + v10).
In Exercises 30–33, compute the matrix–vector products. [The matrices and vectors are not fully legible in this copy; Exercises 32 and 33 involve the rotation matrices A_45° and A_−30°.]
34. Suppose that v1 and v2 are the given vectors in R^3. [Their entries are not legible in this copy.] Represent 3v1 − 4v2 as the product of a 3 × 2 matrix and a vector in R^2.
In Exercises 35–38, determine whether the given vector v is in the span of S. If so, write v as a linear combination of the vectors in S. [The vectors for Exercises 35–38 are not legible in this copy.]
In Exercises 39–44, determine whether the given system is consistent, and if so, find its general solution. [Only Exercise 40 is legible in this copy:]
40. −2x1 + 4x2 + 2x3 = 7
     2x1 −  x2 − 4x3 = 2
In Exercises 45–48, find the rank and nullity of the given matrix.
45. [1  2  −3  0  1]
46. [The matrix is not legible in this copy.]
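Exercises 32 and 33 involve the rotation matrix A_θ, defined earlier in the chapter by A_θ = [[cos θ, −sin θ], [sin θ, cos θ]]. The sketch below (ours, with made-up inputs, since the exercise entries are not legible) computes a matrix–vector product directly from its definition as a linear combination of the columns:

```python
import math

def rotation(theta_degrees):
    """The 2x2 rotation matrix A_theta (counterclockwise by theta degrees)."""
    t = math.radians(theta_degrees)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matvec(A, x):
    """Ax, i.e. the linear combination x_1*a_1 + ... + x_n*a_n of A's columns."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Rotating e1 by 90 degrees should give e2 (up to floating-point roundoff).
v = matvec(rotation(90), [1, 0])
print([round(c, 10) for c in v])
```

The same `matvec` helper answers Exercise 34 as well: 3v1 − 4v2 is the product of the 3 × 2 matrix [v1 v2] with the coefficient vector (3, −4).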
47.–48. [The matrices for these exercises are not legible in this copy.]
49. A company that ships fruit has three kinds of fruit packs. The first pack consists of 10 oranges and 10 grapefruit; the second pack consists of 10 oranges, 15 grapefruit, and 10 apples; and the third pack consists of 5 oranges, 10 grapefruit, and 5 apples. How many of each pack can be made from a stock of 500 oranges, 750 grapefruit, and 300 apples?
In Exercises 50–53, a set of vectors in R^n is given. Determine whether the set is a generating set for R^n. [The sets are not legible in this copy.]
In Exercises 54–59, an m × n matrix A is given. Determine whether the equation Ax = b is consistent for every b in R^m. [The matrices are not legible in this copy.]
In Exercises 60–63, determine whether the given set is linearly dependent or linearly independent. [The sets are not legible in this copy.]
In Exercises 64–67, a linearly dependent set S is given. Write some vector in S as a linear combination of the others. [The sets are not legible in this copy.]
In Exercises 68–71, write the vector form of the general solution of the given system of linear equations. [The systems are not legible in this copy.]
72. Let A be an m × n matrix, let b be a vector in R^m, and suppose that v is a solution of Ax = b.
(a) Prove that if w is a solution of Ax = 0, then v + w is a solution of Ax = b.
(b) Prove that for any solution u to Ax = b, there is a solution w to Ax = 0 such that u = v + w.
73. Suppose that w1 and w2 are linear combinations of vectors v1 and v2 in R^n such that w1 and w2 are linearly independent. Prove that v1 and v2 are linearly independent.
74. Let A be an m × n matrix with reduced row echelon form R. Describe the reduced row echelon form of each of the following matrices:
(a) [A 0]
(b) [a1 a2 ··· ak] for k < n
(c) cA, where c is a nonzero scalar
(d) [Im A]
(e) [A cA], where c is any scalar
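Exercise 72 says the solution set of a consistent system Ax = b is one particular solution v translated by solutions of the homogeneous system Ax = 0. A concrete check of both parts, with a made-up A and b (ours, not the text's):

```python
def matvec(A, x):
    """Compute Ax for A stored as a list of rows."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 1], [2, 2]]   # rank 1, so Ax = 0 has nontrivial solutions
b = [3, 6]
v = [3, 0]             # a particular solution of Ax = b
w = [-1, 1]            # a solution of Ax = 0

# Part (a): v + w also solves Ax = b.
assert matvec(A, [vi + wi for vi, wi in zip(v, w)]) == b

# Part (b): any other solution u of Ax = b differs from v by a homogeneous solution.
u = [1, 2]
assert matvec(A, u) == b
assert matvec(A, [ui - vi for ui, vi in zip(u, v)]) == [0, 0]
print("both parts verified on this example")
```

A single example does not prove the exercise, of course; it only illustrates the structure the proof must establish.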
CHAPTER 1 MATLAB EXERCISES
For the following exercises, use MATLAB (or comparable software) or a calculator with matrix capabilities. The MATLAB functions in Tables 0.1, 0.2, 0.3, 0.4, and 0.5 of Appendix O may be useful.
1. Let A be the given matrix with columns a1, a2, a3, a4. [Its entries are not fully legible in this copy.] Use the matrix–vector product of A and a vector to compute each of the following linear combinations of the columns of A:
(a) 1.5a1 − 2.2a2 + 2.7a3 + 4a4
(b) 2a1 + 2.1a2 − 1.1a4
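Exercise 1 exploits the fact that a linear combination of the columns of A is the single matrix–vector product Ac, where c holds the coefficients. A plain-Python sketch (the matrix below is a hypothetical stand-in, since A's entries are not fully legible):

```python
def matvec(A, c):
    """A*c for A stored as a list of rows."""
    return [sum(a * cj for a, cj in zip(row, c)) for row in A]

def column(A, j):
    """The j-th column of A as a list."""
    return [row[j] for row in A]

# Hypothetical 3x4 stand-in for the exercise's matrix (columns a1, ..., a4).
A = [[3.2, 6.1, -1.7, 1.5],
     [4.3, 2.4, 4.1, 2.0],
     [5.1, 4.2, 6.1, -1.4]]

c = [1.5, -2.2, 2.7, 4.0]   # coefficients from part (a)

# One matrix-vector product...
lhs = matvec(A, c)

# ...equals the column-by-column combination 1.5*a1 - 2.2*a2 + 2.7*a3 + 4*a4.
rhs = [0.0] * len(A)
for cj, col in zip(c, (column(A, j) for j in range(4))):
    rhs = [r + cj * x for r, x in zip(rhs, col)]

assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
print("A*c matches the explicit linear combination")
```

In MATLAB the same computation is simply `A * c` with `c` a column vector.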