Mathematics 44, section 1 -- Linear Algebra

Discussion 2 -- Bases in Vector Spaces

February 5, 1999

Background

Over the last week we have introduced the notions of linear dependence and linear independence for collections of vectors in vector spaces. We have also seen that linearly independent sets are the most "economical" ones to use for making linear combinations and spanning subspaces. The reason is that E being linearly dependent is equivalent to L(E) = L(E - {A}) for some A in E; such an A is, in effect, redundant, since every vector in L(E) can already be made without it. This motivates the following new idea:
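
To see this redundancy numerically, here is a minimal Python sketch, assuming NumPy is available. It takes a dependent set in R3 (the third vector is the sum of the first two, an example chosen purely for illustration) and compares ranks to check that removing the redundant vector does not shrink the span.

  import numpy as np

  # A dependent set in R3: A3 = A1 + A2, so A3 is redundant.
  A1 = np.array([1.0, 0.0, 0.0])
  A2 = np.array([0.0, 1.0, 0.0])
  A3 = A1 + A2

  E       = np.vstack([A1, A2, A3])    # rows span L(E)
  E_minus = np.vstack([A1, A2])        # rows span L(E - {A3})

  # The rank of each matrix is the dimension of its row space.  Equal ranks,
  # together with L(E - {A3}) being contained in L(E), give L(E) = L(E - {A3}).
  print(np.linalg.matrix_rank(E))        # 2
  print(np.linalg.matrix_rank(E_minus))  # 2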

Definition. Let V be a vector space. A subset E of V is called a basis for V if

  1. E spans V -- in other words L(E) = V, and
  2. E is linearly independent -- in other words, if c1A1 + ... + cnAn = 0 with ci in R and Ai in E, then all the ci = 0.

For example, in V = R3, the set E = {(1,0,0), (0,1,0), (0,0,1)} (called the standard basis) is a basis. First, every vector (c1,c2,c3) in R3 can be written as

(c1,c2,c3) = c1(1,0,0) + c2(0,1,0) + c3(0,0,1)

This shows that R3 = L(E). Second, E is linearly independent, since if

c1(1,0,0) + c2(0,1,0) + c3(0,0,1) = (c1,c2,c3) = (0,0,0)

then c1 = c2 = c3 = 0. Today we want to begin studying bases for vector spaces.
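
As a computational counterpart to this example, here is a minimal Python sketch, again assuming NumPy. It checks both basis conditions at once: three vectors in R3 form a basis exactly when the matrix having them as rows has rank 3, and solving a linear system expresses any chosen vector (the one below is arbitrary) as a combination of the basis vectors.

  import numpy as np

  # Rows are the standard basis vectors of R3.
  E = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

  # Rank 3 gives linear independence, and three independent vectors in R3 span it.
  print(np.linalg.matrix_rank(E) == 3)   # True

  # Spanning in action: solve E^T c = v for the coefficients c1, c2, c3 with
  # c1*(1,0,0) + c2*(0,1,0) + c3*(0,0,1) = v.
  v = np.array([2.0, -1.0, 5.0])          # an arbitrary example vector
  print(np.linalg.solve(E.T, v))          # [ 2. -1.  5.]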

Discussion Questions

Assignment

Group write-ups due Wednesday, February 10.