Vector spaces

Before defining a vector field, it is helpful to be precise about what is meant by a vector. A vector space (or linear space) is defined as a set, $ {V}$, that is closed under two algebraic operations called vector addition and scalar multiplication and satisfies several axioms, which will be given shortly. The vector space used in this section is $ {\mathbb{R}}^n$, in which the scalars are real numbers, and a vector is represented as a sequence of $ n$ real numbers. Scalar multiplication multiplies each component of the vector by the scalar value. Vector addition forms a new vector by adding each component of two vectors.
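For concreteness, these two componentwise operations can be sketched in Python (a minimal illustration; the function names `vec_add` and `scalar_mul` are hypothetical, not part of the text):

```python
def vec_add(v, w):
    """Vector addition in R^n: add the corresponding components."""
    assert len(v) == len(w), "vectors must have the same dimension"
    return [a + b for a, b in zip(v, w)]

def scalar_mul(alpha, v):
    """Scalar multiplication in R^n: multiply each component by alpha."""
    return [alpha * a for a in v]

# Example in R^3:
print(vec_add([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
print(scalar_mul(2.0, [1.0, 2.0, 3.0]))           # [2.0, 4.0, 6.0]
```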

A vector space $ {V}$ can be defined over any field $ {\mathbb{F}}$ (recall the definition from Section 4.4.1). The field $ {\mathbb{F}}$ represents the scalars, and $ {V}$ represents the vectors. The concepts presented below generalize the familiar case of the vector space $ {\mathbb{R}}^n$. In this case, $ {V}= {\mathbb{R}}^n$ and $ {\mathbb{F}}= {\mathbb{R}}$. In the definitions that follow, you may make these substitutions, if desired. We will not develop vector spaces that are more general than this; the definitions are nevertheless given in terms of $ {V}$ and $ {\mathbb{F}}$ to clearly separate scalars from vectors. The vector addition is denoted by $ +$, and the scalar multiplication is denoted by $ \cdot$. These operations must satisfy the following axioms (a good exercise is to verify these for the case of $ {\mathbb{R}}^n$ treated as a vector space over the field $ {\mathbb{R}}$):

  1. (Commutative Group Under Vector Addition) The set $ {V}$ is a commutative group with respect to vector addition, $ +$.
  2. (Associativity of Scalar Multiplication) For any $ v \in {V}$ and any $ \alpha,\beta \in {\mathbb{F}}$, $ \alpha(\beta v) = (\alpha \beta) v$.
  3. (Distributivity of Scalar Sums) For any $ v \in {V}$ and any $ \alpha,\beta \in {\mathbb{F}}$, $ (\alpha + \beta) v = \alpha v + \beta v$.
  4. (Distributivity of Vector Sums) For any $ v,w \in {V}$ and any $ \alpha \in {\mathbb{F}}$, $ \alpha (v+w) = \alpha v + \alpha w$.
  5. (Scalar Multiplication Identity) For any $ v \in {V}$, $ 1 v = v$ for the multiplicative identity $ 1 \in {\mathbb{F}}$.
The first axiom allows vectors to be added in any order. The rest of the axioms require that the scalar multiplication interacts with vectors in the way that we would expect from the familiar vector space $ {\mathbb{R}}^n$ over $ {\mathbb{R}}$.
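As a sanity check, the axioms can be spot-checked numerically for $ {\mathbb{R}}^n$ over $ {\mathbb{R}}$. The Python sketch below (hypothetical helper names; random sampling rather than a proof) tests axioms 2 through 5 and the commutativity of vector addition:

```python
import random

def vec_add(v, w):
    return [a + b for a, b in zip(v, w)]

def scalar_mul(alpha, v):
    return [alpha * a for a in v]

def check_axioms(n, trials=100):
    """Spot-check the vector space axioms for R^n with random vectors and scalars."""
    # A small tolerance absorbs floating-point rounding differences.
    close = lambda x, y: all(abs(p - q) < 1e-9 for p, q in zip(x, y))
    for _ in range(trials):
        v = [random.uniform(-10, 10) for _ in range(n)]
        w = [random.uniform(-10, 10) for _ in range(n)]
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        assert close(vec_add(v, w), vec_add(w, v))                   # addition commutes
        assert close(scalar_mul(a, scalar_mul(b, v)),
                     scalar_mul(a * b, v))                           # axiom 2
        assert close(scalar_mul(a + b, v),
                     vec_add(scalar_mul(a, v), scalar_mul(b, v)))    # axiom 3
        assert close(scalar_mul(a, vec_add(v, w)),
                     vec_add(scalar_mul(a, v), scalar_mul(a, w)))    # axiom 4
        assert scalar_mul(1.0, v) == v                               # axiom 5
    return True

print(check_axioms(3))  # True
```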

A basis of a vector space $ {V}$ is defined as a set $ v_1, \ldots, v_n$ of vectors for which every $ v \in {V}$ can be uniquely written as a linear combination:

$\displaystyle v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n ,$ (8.7)

for some $ \alpha_1,\ldots,\alpha_n \in {\mathbb{F}}$. This means that every vector has a unique representation as a linear combination of basis elements. In the case of $ {\mathbb{R}}^3$, a familiar basis is $ [0\;\;0\;\;1]$, $ [ 0 \;\; 1 \;\; 0 ]$, and $ [1\;\;0\;\;0]$. All vectors can be expressed as a linear combination of these three. Remember that a basis is not necessarily unique. From linear algebra, recall that any three linearly independent vectors can be used as a basis for $ {\mathbb{R}}^3$. In general, a basis must consist of linearly independent vectors that span $ {V}$. Even though a basis is not necessarily unique, the number of vectors in a basis is the same for every basis of the same vector space. This number, $ n$, is called the dimension of the vector space. Thus, we can call $ {\mathbb{R}}^n$ an $ n$-dimensional vector space over $ {\mathbb{R}}$.

Example 8.4 (The Vector Space $ {\mathbb{R}}^n$ Over $ {\mathbb{R}}$)   As indicated already, $ {\mathbb{R}}^n$ can be considered as a vector space. A natural basis is the set of $ n$ unit vectors constructed as follows: for each $ i \in \{1,\ldots,n\}$, let $ x_i = 1$ and $ x_j = 0$ for all $ j \not= i$. Since there are $ n$ basis vectors, $ {\mathbb{R}}^n$ is an $ n$-dimensional vector space. The basis is not unique: any set of $ n$ linearly independent vectors may be used, as is familiar from linear algebra, where nonsingular $ n \times n$ matrices transform coordinates between bases. $ \blacksquare$
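To illustrate the uniqueness of the representation in (8.7), the coefficients $ \alpha_1,\ldots,\alpha_n$ of a vector with respect to a given basis can be computed by solving a linear system whose columns are the basis vectors. The Python sketch below (hypothetical names; Gauss-Jordan elimination with partial pivoting, assuming the basis vectors are linearly independent) does this for a nonstandard basis of $ {\mathbb{R}}^3$:

```python
def basis_coordinates(basis, v):
    """Return the unique alphas with v = sum_i alpha_i * basis[i].

    Builds the augmented matrix whose columns are the basis vectors and
    reduces it by Gauss-Jordan elimination with partial pivoting.
    """
    n = len(v)
    A = [[basis[j][i] for j in range(n)] + [v[i]] for i in range(n)]
    for col in range(n):
        # Swap in the row with the largest pivot for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# A nonstandard (but linearly independent) basis of R^3:
B = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]
print(basis_coordinates(B, [2.0, 3.0, 3.0]))  # [1.0, 2.0, 1.0]
```

Here $ [2\;\;3\;\;3] = 1 \cdot [1\;\;1\;\;0] + 2 \cdot [0\;\;1\;\;1] + 1 \cdot [1\;\;0\;\;1]$, and no other combination of these three basis vectors produces the same vector.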

To illustrate the power of these general vector space definitions, consider the following example.

Example 8.5 (A Vector Space of Functions)   The set of all continuous, real-valued functions $ f : [0,1] \rightarrow {\mathbb{R}}$, for which

$\displaystyle \int_0^1 f(x) dx$ (8.8)

is finite, forms a vector space over $ {\mathbb{R}}$. It is straightforward to verify that the vector space axioms are satisfied. For example, if two such functions $ f_1$ and $ f_2$ are added, the integral remains finite, and $ f_1 + f_2 = f_2 + f_1$; all of the group axioms are satisfied with respect to addition. Any function $ f$ for which (8.8) is finite can be multiplied by a scalar in $ {\mathbb{R}}$, and the integral remains finite. The axioms that involve scalar multiplication can also be verified.

It turns out that this vector space is infinite-dimensional. One way to see this is to restrict the functions to the set of all those for which the Taylor series exists and converges to the function (these are called analytic functions). Each such function can be expressed via its Taylor series as a power series, which is in effect a polynomial that may have infinitely many terms. The set of all monomials, $ 1$, $ x$, $ x^2$, $ x^3$, and so on, then represents a basis. Every analytic function can be considered as an infinite vector of coefficients; each coefficient is multiplied by one of the monomials to produce the function. This provides a simple example of a function space; with some additional definitions, this leads to a Hilbert space, which is crucial in functional analysis, a subject that characterizes spaces of functions [836,838]. $ \blacksquare$
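As a rough numerical illustration of the monomial viewpoint, a Python sketch (hypothetical names; the infinite coefficient vector is truncated to finitely many terms, so this only approximates the function) treats the Taylor coefficients $ \alpha_k = 1/k!$ of $ e^x$ as a coefficient vector over the monomial basis:

```python
import math

def eval_in_monomial_basis(coeffs, x):
    """Interpret (alpha_0, alpha_1, ...) as the truncated expansion
    alpha_0 * 1 + alpha_1 * x + alpha_2 * x^2 + ... evaluated at x."""
    return sum(a * x**k for k, a in enumerate(coeffs))

# Taylor coefficients of exp, truncated to 12 terms: alpha_k = 1/k!
coeffs = [1.0 / math.factorial(k) for k in range(12)]
for x in (0.0, 0.5, 1.0):
    print(x, eval_in_monomial_basis(coeffs, x), math.exp(x))
```

With 12 coefficients the truncation already matches $ e^x$ on $ [0,1]$ to better than $ 10^{-7}$; representing the function exactly would require the full infinite coefficient vector, which reflects the infinite dimension of the space.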

The remainder of this chapter considers only finite-dimensional vector spaces over $ {\mathbb{R}}$. It is important, however, to keep in mind the basic properties of vector spaces that have been provided.

Steven M LaValle 2012-04-20