Intersection and sum of subspaces of a linear space. Dimension, basis, coordinates

30.07.2023

Definition. A linear space over a number field K is a set R of elements, which we will call vectors and denote a, b, and so on, if:

From these axioms it follows that:

Linear spans

Definition. The linear span of a family of vectors is the set of all possible linear combinations of them in the linear space L.

It is easy to check that the linear span is a linear subspace of L.

The linear span is also called the subspace spanned by the vectors, or generated by the vectors of the family. It can also be defined as the intersection of all subspaces of L containing all vectors of the family. The rank of a family of vectors is the dimension of its linear span.

The first characteristic property of a basis: its linear span coincides with all of L.

Subspaces

Definition. A linear (vector) subspace is a non-empty subset K of a linear space L such that K is itself a linear space with respect to the operations of addition and multiplication by a scalar defined in L. The set of all subspaces of L is denoted Lat(L). For a subset to be a subspace it is necessary and sufficient that it be closed under both operations, i.e. that x + y ∈ K and αx ∈ K for all x, y ∈ K and every scalar α.

The last two conditions are equivalent to the following single one: αx + βy ∈ K for all x, y ∈ K and all scalars α, β.

In particular, the space consisting of the zero element alone is a subspace of any space, and any space is a subspace of itself. Subspaces that do not coincide with these two are called proper (non-trivial).

Subspace Properties

In functional analysis, for infinite-dimensional spaces, an important role is played by closed subspaces.

Linear dependence of vectors

Definition. A family of vectors is called linearly independent if no non-trivial linear combination of them equals zero, that is, from

α_1 e_1 + α_2 e_2 + … + α_n e_n = 0

it follows that all α_i = 0. Otherwise it is called linearly dependent. Linear independence of a family means that the null vector is represented as a linear combination of the family's elements in a unique way (with all coefficients zero). Then any other vector has either a single representation or none at all: indeed, comparing two representations of the same vector and subtracting one from the other, we would obtain a non-trivial representation of the null vector.

This implies the second characteristic property of a basis: its elements are linearly independent. The definition by these two properties is equivalent to the original definition of a basis.

Note that a family of vectors is linearly independent if and only if it forms a basis of its linear span.

A family is clearly linearly dependent if it contains the zero vector or two identical vectors.

Lemma 1. A family of vectors is linearly dependent if and only if at least one of the vectors is a linear combination of the others.

Proof. If α_1 e_1 + … + α_n e_n = 0 is a non-trivial combination and, say, α_k ≠ 0, then e_k = -(α_1/α_k) e_1 - … - (α_n/α_k) e_n (the term with index k omitted), i.e. e_k is a linear combination of the others.

Conversely, if e_k = β_1 e_1 + … + β_n e_n (a combination of the other vectors), then β_1 e_1 + … - e_k + … + β_n e_n = 0 is a non-trivial linear combination equal to zero, so the family is linearly dependent.

Lemma 2. If the family e_1, …, e_n is linearly independent and the family e_1, …, e_n, x is linearly dependent, then x is a linear combination of e_1, …, e_n.

Proof. There is a non-trivial combination α_1 e_1 + … + α_n e_n + βx = 0 in which not all coefficients are zero. Then necessarily β ≠ 0, since otherwise we would get a non-trivial dependence between e_1, …, e_n. Therefore x = -(α_1/β) e_1 - … - (α_n/β) e_n.

A linear (vector) space is a set V of arbitrary elements, called vectors, on which the operations of vector addition and multiplication of a vector by a number are defined: to any two vectors \mathbf{u} and \mathbf{v} there corresponds a vector \mathbf{u}+\mathbf{v}, called the sum of \mathbf{u} and \mathbf{v}, and to any vector \mathbf{v} and any number \lambda from the field of real numbers \mathbb{R} there corresponds a vector \lambda\mathbf{v}, called the product of the vector \mathbf{v} by the number \lambda, so that the following conditions hold:


1. \mathbf{u}+\mathbf{v}=\mathbf{v}+\mathbf{u}~\forall \mathbf{u},\mathbf{v}\in V (commutativity of addition);
2. \mathbf{u}+(\mathbf{v}+\mathbf{w})=(\mathbf{u}+\mathbf{v})+\mathbf{w}~\forall \mathbf{u},\mathbf{v},\mathbf{w}\in V (associativity of addition);
3. there is an element \mathbf{o}\in V, called the zero vector, such that \mathbf{v}+\mathbf{o}=\mathbf{v}~\forall \mathbf{v}\in V;
4. for each vector \mathbf{v} there is a vector (-\mathbf{v}), called the opposite of \mathbf{v}, such that \mathbf{v}+(-\mathbf{v})=\mathbf{o};
5. \lambda(\mathbf{u}+\mathbf{v})=\lambda\mathbf{u}+\lambda\mathbf{v}~\forall \mathbf{u},\mathbf{v}\in V,~\forall \lambda\in\mathbb{R};
6. (\lambda+\mu)\mathbf{v}=\lambda\mathbf{v}+\mu\mathbf{v}~\forall \mathbf{v}\in V,~\forall \lambda,\mu\in\mathbb{R};
7. \lambda(\mu\mathbf{v})=(\lambda\mu)\mathbf{v}~\forall \mathbf{v}\in V,~\forall \lambda,\mu\in\mathbb{R};
8. 1\cdot\mathbf{v}=\mathbf{v}~\forall \mathbf{v}\in V.


Conditions 1-8 are called the axioms of a linear space. The equality sign placed between vectors means that the left- and right-hand sides represent the same element of the set V; such vectors are called equal.


In the definition of a linear space the operation of multiplying a vector by a number is introduced for real numbers. Such a space is called a linear space over the field of real numbers, or, briefly, a real linear space. If instead of the field \mathbb{R} of real numbers we take the field \mathbb{C} of complex numbers, we obtain a linear space over the field of complex numbers, or, briefly, a complex linear space. The field \mathbb{Q} of rational numbers can also be chosen as the number field, and in that case we obtain a linear space over the field of rational numbers. In what follows, unless stated otherwise, real linear spaces are considered. In some cases we speak of a space for brevity, omitting the word linear, since all the spaces considered below are linear.

Remarks 8.1


1. Axioms 1-4 show that a linear space is a commutative group with respect to the operation of addition.


2. Axioms 5 and 6 express the distributivity of multiplication of a vector by a number with respect to addition of vectors (axiom 5) and with respect to addition of numbers (axiom 6). Axiom 7, sometimes called the law of associativity of multiplication by a number, expresses the connection between two different operations: multiplication of a vector by a number and multiplication of numbers. The property defined by axiom 8 is called the unitarity of the operation of multiplying a vector by a number.


3. A linear space is a non-empty set, since it necessarily contains a zero vector.


4. The operations of adding vectors and multiplying a vector by a number are called linear operations on vectors.


5. The difference of the vectors \mathbf{u} and \mathbf{v} is the sum of the vector \mathbf{u} and the opposite vector (-\mathbf{v}); it is denoted \mathbf{u}-\mathbf{v}=\mathbf{u}+(-\mathbf{v}).


6. Two non-zero vectors \mathbf{u} and \mathbf{v} are called collinear (proportional) if there exists a number \lambda such that \mathbf{v}=\lambda\mathbf{u}. The notion of collinearity extends to any finite number of vectors. The zero vector \mathbf{o} is considered collinear with any vector.

Consequences of the axioms of a linear space

1. In a linear space there is a unique zero vector.


2. In a linear space, for any vector \mathbf{v}\in V there is a unique opposite vector (-\mathbf{v})\in V.


3. The product of any vector of the space by the number zero equals the zero vector: 0\cdot\mathbf{v}=\mathbf{o}~\forall \mathbf{v}\in V.


4. The product of the zero vector by any number equals the zero vector: \lambda\cdot\mathbf{o}=\mathbf{o} for any number \lambda.


5. The vector opposite to a given vector equals the product of that vector by the number (-1): (-\mathbf{v})=(-1)\mathbf{v}~\forall \mathbf{v}\in V.


6. In expressions of the form \mathbf{a}+\mathbf{b}+\ldots+\mathbf{z} (the sum of a finite number of vectors) or \alpha\cdot\beta\cdot\ldots\cdot\omega\cdot\mathbf{v} (the product of a vector by a finite number of factors) the brackets may be placed in any order or omitted altogether.


Let us prove, for example, the first two properties. Uniqueness of the zero vector. If \mathbf{o} and \mathbf{o}' are two zero vectors, then by axiom 3 we obtain two equalities: \mathbf{o}'+\mathbf{o}=\mathbf{o}' and \mathbf{o}+\mathbf{o}'=\mathbf{o}, whose left-hand sides are equal by axiom 1. Therefore the right-hand sides are equal as well, i.e. \mathbf{o}=\mathbf{o}'. Uniqueness of the opposite vector. If a vector \mathbf{v}\in V has two opposite vectors (-\mathbf{v}) and (-\mathbf{v})', then by axioms 2, 3, 4 we obtain their equality:


(-\mathbf{v})'=(-\mathbf{v})'+\underbrace{\mathbf{v}+(-\mathbf{v})}_{\mathbf{o}}= \underbrace{(-\mathbf{v})'+\mathbf{v}}_{\mathbf{o}}+(-\mathbf{v})=(-\mathbf{v}).


The remaining properties are proved similarly.

Examples of Linear Spaces

1. Denote by \{\mathbf{o}\} the set containing a single zero vector, with the operations \mathbf{o}+\mathbf{o}=\mathbf{o} and \lambda\mathbf{o}=\mathbf{o}. Axioms 1-8 are satisfied for these operations. Therefore the set \{\mathbf{o}\} is a linear space over any number field. This linear space is called the null space.


2. Denote by V_1,\,V_2,\,V_3 the sets of vectors (directed segments) on a line, in a plane and in space, respectively, with the usual operations of adding vectors and multiplying vectors by a number. The fulfillment of axioms 1-8 of a linear space follows from elementary geometry. Therefore the sets V_1,\,V_2,\,V_3 are real linear spaces. Instead of free vectors we can consider the corresponding sets of radius vectors. For example, the set of vectors in a plane that have a common origin, i.e. are laid off from one fixed point of the plane, is a real linear space. The set of radius vectors of unit length does not form a linear space, since for any of these vectors the sum \mathbf{v}+\mathbf{v} does not belong to the set under consideration.


3. Denote by \mathbb{R}^n the set of column matrices of size n\times1 with the operations of matrix addition and multiplication of a matrix by a number. Axioms 1-8 of a linear space are satisfied for this set. The zero vector in this set is the zero column o=\begin{pmatrix}0&\cdots&0\end{pmatrix}^T. Therefore the set \mathbb{R}^n is a real linear space. Similarly, the set \mathbb{C}^n of columns of size n\times1 with complex entries is a complex linear space. The set of column matrices with non-negative real entries, on the contrary, is not a linear space, since it does not contain opposite vectors.


4. Denote by \{Ax=o\} the set of solutions of a homogeneous system Ax=o of linear algebraic equations in n unknowns (where A is the real matrix of the system), considered as a set of columns of size n\times1 with the operations of matrix addition and multiplication of a matrix by a number. Note that these operations are indeed defined on the set \{Ax=o\}: property 1 of the solutions of a homogeneous system (see Section 5.5) implies that the sum of two solutions of a homogeneous system and the product of a solution by a number are again solutions of the homogeneous system, i.e. belong to the set \{Ax=o\}. The axioms of a linear space for columns are satisfied (see point 3 of the examples of linear spaces). Therefore the set of solutions of a homogeneous system is a real linear space.


The set \{Ax=b\} of solutions of an inhomogeneous system Ax=b,~b\ne o, on the contrary, is not a linear space, if only because it does not contain the zero element (x=o is not a solution of the inhomogeneous system).


5. Denote by M_{m\times n} the set of matrices of size m\times n with the operations of matrix addition and multiplication of a matrix by a number. Axioms 1-8 of a linear space are satisfied for this set. The zero vector is the zero matrix O of the corresponding size. Therefore the set M_{m\times n} is a linear space.


6. Denote by P(\mathbb{C}) the set of polynomials in one variable with complex coefficients. The operations of adding polynomials and of multiplying a polynomial by a number (the number being regarded as a polynomial of degree zero) are defined and satisfy axioms 1-8 (in particular, the zero vector is the polynomial identically equal to zero). Therefore the set P(\mathbb{C}) is a linear space over the field of complex numbers. The set P(\mathbb{R}) of polynomials with real coefficients is also a linear space (but, of course, over the field of real numbers). The set P_n(\mathbb{R}) of polynomials of degree at most n with real coefficients is also a real linear space. Note that the operation of addition is defined on this set, since the degree of a sum of polynomials does not exceed the degrees of the summands.


The set of polynomials of degree exactly n is not a linear space, since the sum of such polynomials may turn out to be a polynomial of lower degree, which does not belong to the set under consideration. The set of all polynomials of degree at most n with positive coefficients is also not a linear space, since multiplying such a polynomial by a negative number gives a polynomial that does not belong to this set.


7. Denote by C(\mathbb{R}) the set of real functions defined and continuous on \mathbb{R}. The sum (f+g) of functions f,g and the product \lambda f of a function f by a real number \lambda are defined by the equalities:


(f+g)(x)=f(x)+g(x),\quad (\lambda f)(x)=\lambda\cdot f(x) for all x\in \mathbb{R}.


These operations are indeed defined on C(\mathbb{R}), since the sum of continuous functions and the product of a continuous function by a number are again continuous functions, i.e. elements of C(\mathbb{R}). Let us check the fulfillment of the linear space axioms. The commutativity of addition of real numbers implies the equality f(x)+g(x)=g(x)+f(x) for any x\in \mathbb{R}. Therefore f+g=g+f, i.e. axiom 1 is satisfied. Axiom 2 follows similarly from the associativity of addition. The zero vector is the function o(x) identically equal to zero, which is, of course, continuous. For any function f the equality f(x)+o(x)=f(x) holds, i.e. axiom 3 is valid. The opposite vector for a vector f is the function (-f)(x)=-f(x); then f+(-f)=o (axiom 4 holds). Axioms 5 and 6 follow from the distributivity of the operations of addition and multiplication of real numbers, and axiom 7 from the associativity of multiplication of numbers. The last axiom holds since multiplication by one does not change a function: 1\cdot f(x)=f(x) for any x\in \mathbb{R}, i.e. 1\cdot f=f. Thus the set C(\mathbb{R}) with the introduced operations is a real linear space. Similarly it can be proved that C^1(\mathbb{R}), C^2(\mathbb{R}), \ldots, C^m(\mathbb{R}), the sets of functions having continuous derivatives of the first, second, ..., m-th orders respectively, are also linear spaces.


Denote by T_{\omega}(\mathbb{R}) the set of trigonometric binomials (with frequency \omega\ne0) with real coefficients, i.e. the set of functions of the form f(t)=a\sin\omega t+b\cos\omega t, where a\in \mathbb{R},~b\in \mathbb{R}. The sum of such binomials and the product of a binomial by a real number are again trigonometric binomials. The linear space axioms hold for the set under consideration (since T_{\omega}(\mathbb{R})\subset C(\mathbb{R})). Therefore the set T_{\omega}(\mathbb{R}) with the usual operations of addition and multiplication of functions is a real linear space. The zero element is the binomial o(t)=0\cdot\sin\omega t+0\cdot\cos\omega t, identically equal to zero.


The set of real functions defined and monotone on \mathbb{R} is not a linear space, since the difference of two monotone functions may turn out to be a non-monotone function.


8. Denote by \mathbb{R}^X the set of real functions defined on a set X, with the operations


(f+g)(x)=f(x)+g(x),\quad (\lambda f)(x)=\lambda\cdot f(x)\quad \forall x\in X.


It is a real linear space (the proof is the same as in the previous example). Here the set X can be chosen arbitrarily. In particular, if X=\{1,2,\ldots,n\}, then f(X) is an ordered set of numbers f_1,f_2,\ldots,f_n, where f_i=f(i),~i=1,\ldots,n. Such a set can be regarded as a column matrix of size n\times1, i.e. the set \mathbb{R}^{\{1,2,\ldots,n\}} coincides with the set \mathbb{R}^n (see point 3 of the examples of linear spaces). If X=\mathbb{N} (recall that \mathbb{N} is the set of natural numbers), then we obtain the linear space \mathbb{R}^{\mathbb{N}} of numerical sequences \{f(i)\}_{i=1}^{\infty}. In particular, the set of convergent numerical sequences also forms a linear space, since the sum of two convergent sequences converges, and multiplying all terms of a convergent sequence by a number again gives a convergent sequence. On the contrary, the set of divergent sequences is not a linear space, since, for example, the sum of two divergent sequences can have a limit.


9. Denote by \mathbb{R}^{+} the set of positive real numbers, in which the sum a\oplus b and the product \lambda\ast a (the notation in this example differs from the usual one) are defined by the equalities a\oplus b=ab,~ \lambda\ast a=a^{\lambda}; in other words, the sum of elements is understood as the product of numbers, and the multiplication of an element by a number as exponentiation. Both operations are indeed defined on the set \mathbb{R}^{+}, since the product of positive numbers is a positive number and any real power of a positive number is a positive number. Let us check the validity of the axioms. The equalities


a\oplus b=ab=ba=b\oplus a,\quad a\oplus(b\oplus c)=a(bc)=(ab)c=(a\oplus b)\oplus c


show that axioms 1 and 2 are satisfied. The zero vector of this set is the number one, since a\oplus1=a\cdot1=a, i.e. o=1. The opposite of a is \frac{1}{a}, which is defined since a\ne0. Indeed, a\oplus\frac{1}{a}=a\cdot\frac{1}{a}=1=o. Let us check the fulfillment of axioms 5, 6, 7, 8:


\begin{gathered} \mathsf{5)}\quad \lambda\ast(a\oplus b)=(a\cdot b)^{\lambda}= a^{\lambda}\cdot b^{\lambda} = \lambda\ast a\oplus \lambda\ast b;\\ \mathsf{6)}\quad (\lambda+\mu)\ast a=a^{\lambda+\mu}=a^{\lambda}\cdot a^{\mu}=\lambda\ast a\oplus\mu\ast a;\\ \mathsf{7)}\quad \lambda\ast(\mu\ast a)=(a^{\mu})^{\lambda}=a^{\lambda\mu}=(\lambda\cdot\mu)\ast a;\\ \mathsf{8)}\quad 1\ast a=a^1=a. \end{gathered}


All the axioms are fulfilled. Therefore the set under consideration is a real linear space.
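To make the axiom check concrete, here is a small illustrative sketch (not from the source; the class name and samples are chosen only for the example) that encodes the operations a ⊕ b = ab and λ ∗ a = a^λ on positive numbers and verifies several axioms numerically:

```python
import math

class PosVector:
    """Element of R^+ with 'addition' a (+) b = a*b and 'scaling' lam * a = a**lam."""
    def __init__(self, value):
        assert value > 0, "elements of R^+ must be positive"
        self.value = value

    def add(self, other):          # a (+) b = a * b
        return PosVector(self.value * other.value)

    def scale(self, lam):          # lam (*) a = a ** lam
        return PosVector(self.value ** lam)

a, b = PosVector(2.0), PosVector(5.0)
zero = PosVector(1.0)                         # the "zero vector" o = 1
opp_a = a.scale(-1.0)                         # opposite element 1/a

assert math.isclose(a.add(b).value, b.add(a).value)          # axiom 1
assert math.isclose(a.add(zero).value, a.value)              # axiom 3
assert math.isclose(a.add(opp_a).value, zero.value)          # axiom 4
assert math.isclose(a.add(b).scale(3.0).value,               # axiom 5
                    a.scale(3.0).add(b.scale(3.0)).value)
assert math.isclose(a.scale(2.0 + 3.0).value,                # axiom 6
                    a.scale(2.0).add(a.scale(3.0)).value)
print("axioms hold on these samples")
```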

10. Let V be a real linear space. Consider the set of linear scalar functions defined on V, i.e. functions f\colon V\to \mathbb{R} taking real values and satisfying the conditions:


f(\mathbf{u}+\mathbf{v})=f(\mathbf{u})+f(\mathbf{v})~~\forall \mathbf{u},\mathbf{v}\in V (additivity);


f(\lambda\mathbf{v})=\lambda\cdot f(\mathbf{v})~~\forall \mathbf{v}\in V,~\forall \lambda\in \mathbb{R} (homogeneity).


Linear operations on linear functions are defined in the same way as in point 8 of the examples of linear spaces. The sum f+g and the product \lambda\cdot f are defined by the equalities:


(f+g)(\mathbf{v})=f(\mathbf{v})+g(\mathbf{v})\quad \forall \mathbf{v}\in V;\qquad (\lambda f)(\mathbf{v})=\lambda\cdot f(\mathbf{v})\quad \forall \mathbf{v}\in V,~\forall \lambda\in \mathbb{R}.


The fulfillment of the linear space axioms is verified in the same way as in point 8. Therefore the set of linear functions defined on the linear space V is a linear space. This space is called the dual of the space V and is denoted V^{\ast}. Its elements are called covectors.


For example, the set of linear forms in n variables, considered as the set of scalar functions of a vector argument, is the linear space dual to the space \mathbb{R}^n.


A linear space is a set L on which the operations of addition and multiplication by a number are defined, i.e. to each pair of elements a, b ∈ L there corresponds some c ∈ L called their sum, and to any element a ∈ L and any number λ ∈ R there corresponds b ∈ L called the product of λ by a. The elements of a linear space are called vectors. The operations of addition and multiplication by a number satisfy the following axioms.

Addition axioms: ∀ a, b, c ∈ L

a + b = b + a - commutativity

(a + b) + c = a + (b + c) - associativity

There is an element of the space, called the null vector and denoted 0, which together with any a from L gives the same element a, i.e. ∃ 0 ∈ L: ∀ a ∈ L 0 + a = a.

For every a from L there exists an opposite element, denoted -a, such that (-a) + a = 0

(∀ a ∈ L ∃ (-a) ∈ L: (-a) + a = 0)

Consequences of the axioms of addition:

1. The null vector is unique, i.e. if for at least one a ∈ L it holds that b + a = a, then b = 0.

2. For any vector a ∈ L the opposite element is unique, i.e. b + a = 0 ⇒ b = (-a)

Multiplication axioms: ∀ λ, μ ∈ R ∀ a, b ∈ L

λ(μa) = (λμ)a

λ(a + b) = λa + λb - distributivity (over vectors)

(λ + μ)a = λa + μa - distributivity (over numbers)

1·a = a

Consequences of the axioms of multiplication: ∀ a ∈ L ∀ λ ∈ R

λ·0 = 0

0·a = 0

(-a) = (-1)·a

2.1 Examples of linear spaces


1. The space K^n of columns of height n. The elements of this space are columns containing n real numbers, with the operations of componentwise addition and componentwise multiplication by a number. The null vector in this space is the column consisting of n zeros.

2. Ordinary vectors in three-dimensional space R^3, with addition "by the parallelogram rule" and multiplication by a number (stretching). It is assumed that the beginnings of all vectors are at the origin; the null vector is the vector whose end is also at the origin.

3. A polynomial of degree n in one variable is a function

P_n(x) = α_n x^n + α_{n-1} x^{n-1} + … + α_1 x + α_0, where α_n ≠ 0.

The set of polynomials of degree at most n, with the usual operations of addition and multiplication by a number, forms a linear space. Note that the set of polynomials of degree exactly n does not form a linear space. The point is that the sum of two polynomials of degree, say, 3 may turn out to be a polynomial of degree 2 (for example, (x^3 + 3) + (-x^3 - 2x^2 + 7) = -2x^2 + 10 is a polynomial of degree 2). However, the operation of adding polynomials can lower the degree but not raise it, so the set of polynomials of degree at most n is closed under addition (the sum of two polynomials of degree at most n is always a polynomial of degree at most n) and forms a linear space.
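A quick numerical illustration of this closure argument (a sketch, not part of the source) uses numpy's polynomial class to add the two cubics from the example and confirm that the degree drops while the sum stays in the set of polynomials of degree at most 3:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Coefficients are listed from the constant term upward: 3 + 0*x + 0*x^2 + 1*x^3
p = Polynomial([3, 0, 0, 1])        # x^3 + 3
q = Polynomial([7, 0, -2, -1])      # -x^3 - 2x^2 + 7

s = p + q                           # sum of two degree-3 polynomials
print(s)                            # 10 - 2*x^2, the degree dropped to 2
assert s.degree() <= max(p.degree(), q.degree())   # still of degree <= 3
```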

2.2 Dimension, basis, coordinates.


A linear combination of vectors (e_1, e_2, …, e_n) is an expression of the form α_1 e_1 + α_2 e_2 + … + α_n e_n. So a linear combination is simply a sum of vectors with numerical coefficients. If all the coefficients α_i equal 0, the linear combination is called trivial.

A system of vectors (e_1, e_2, …, e_n) is called linearly dependent if there exists a non-trivial linear combination of these vectors equal to 0. In other words, if there are n numbers α_i ∈ R, not all equal to zero, such that the linear combination of the vectors with these coefficients equals the null vector: α_1 e_1 + α_2 e_2 + … + α_n e_n = 0.

Otherwise, the vectors are called linearly independent. In other words, the vectors are called linearly independent if from α_1 e_1 + α_2 e_2 + … + α_n e_n = 0 it follows that α_1 = α_2 = … = α_n = 0, i.e. if the only linear combination of these vectors equal to the null vector is the trivial one.

A decomposition of a vector a with respect to a system of vectors (e_i) is a representation of a as a linear combination of the vectors (e_i). In other words, to decompose the vector a over the vectors (e_i) means to find numbers α_i such that

a = α_1 e_1 + α_2 e_2 + … + α_k e_k.

Note that the definition of independence can be restated as follows: the vectors are independent if and only if the only decomposition of 0 over them is the trivial one.
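In coordinates this criterion can be checked mechanically: a family of columns is independent exactly when the rank of the matrix formed from them equals the number of vectors. A small sketch (the vectors are chosen only for illustration):

```python
import numpy as np

def is_independent(*vectors):
    """Columns are linearly independent iff rank == number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

e1, e2, e3 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 0.])
print(is_independent(e1, e2))        # True
print(is_independent(e1, e2, e3))    # False: e3 = e1 + e2 is a non-trivial combination
```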

A linear space is called finite-dimensional if there is an integer n such that every independent system of vectors in this space contains at most n elements.

The dimension of a finite-dimensional linear space L is the maximum possible number of linearly independent vectors (denoted dim L). In other words, a linear space is called n-dimensional if:

1. there is an independent system in space, consisting of n vectors;

2. any system consisting of n +1 vectors is linearly dependent.

A basis of a linear space L_n is any independent system of vectors whose number of elements equals the dimension of the space.

Theorem 1. Any independent system of vectors can be completed to a basis. That is, if the system (e_1, e_2, …, e_n) ⊂ L_k is independent and contains fewer vectors than the dimension of the space (n < k), then there exist vectors f_1, f_2, …, f_{k-n} in L_k such that the combined system of vectors (e_1, e_2, …, e_n, f_1, f_2, …, f_{k-n}) is independent, contains k vectors and therefore forms a basis of L_k. ▄ Thus, in any linear space there are many (in fact, infinitely many) bases.

A system of vectors is called complete if any a ∈ L can be decomposed over the vectors of the system (the decomposition need not be unique).

In contrast, the decomposition of any vector with respect to an independent system is always unique (but does not always exist). That is:

Theorem 2. The decomposition of any vector with respect to a basis of a linear space always exists and is unique. That is, a basis is an independent and complete system. The coefficients α_i of the expansion of a vector with respect to the basis (e_i) are called the coordinates of the vector in the basis (e_i). ▄

All coordinates of the zero vector are equal to 0 in any basis.
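Finding the coordinates of a vector in a given basis amounts to solving one linear system: if the basis vectors are written as the columns of a matrix B, the coordinate column α satisfies Bα = a. A sketch with an arbitrarily chosen basis of R^3 (the numbers are assumptions for illustration only):

```python
import numpy as np

# Basis vectors of R^3 written as columns (an arbitrary example basis)
B = np.column_stack([[1., 1., 0.],
                     [0., 1., 1.],
                     [1., 0., 1.]])
a = np.array([2., 3., 1.])

alpha = np.linalg.solve(B, a)        # coordinates of a in the basis B
print(alpha)                         # unique, because B is invertible
assert np.allclose(B @ alpha, a)     # decomposition a = sum_i alpha_i * e_i
```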

2.3 Examples

1. The space R^3 is the three-dimensional space of "directed segments" known from the school course, with the usual operations of addition "by the parallelogram rule" and multiplication by a number. The standard basis is formed by three mutually perpendicular vectors directed along the three coordinate axes; they are denoted by the letters i, j and k.

2. The space K^n of columns of height n has dimension n. The standard basis in the space of columns is formed by the vectors e_1, …, e_n, the columns that have a one in the i-th position and zeros in the remaining positions.

Indeed, it is easy to see that any column expands over this system of vectors in a unique way: the expansion coefficients of any column are simply equal to the corresponding entries of that column.

3. The space of polynomials of degree at most n has dimension n+1. The standard basis in this space is (1, x, x^2, …, x^n). Indeed, from the definition of a polynomial of degree at most n it is obvious that any such polynomial can be uniquely represented as a linear combination of these vectors, and the coefficients of the linear combination are simply the coefficients of the polynomial (if the degree k of the polynomial is less than n, then the last n-k coefficients are 0).

2.4 Isomorphism of linear spaces


Let (e_1, …, e_n) be a basis in L_n. Then to every a ∈ L_n there corresponds, one-to-one, a set of n numbers, the coordinates of the vector a in the basis (e_i). Therefore, to each a ∈ L_n one can assign, one-to-one, a vector from the column space K^n, namely the column formed from the coordinates of the vector a. Under this correspondence the basis (e_i) is associated with the standard basis of K^n.

It is easy to check that addition of vectors in L_n leads to addition of the corresponding coordinates in the basis (e_i); this means that under our correspondence the sum of vectors in L_n is matched with the sum of the corresponding columns in K^n; a similar rule holds for multiplication by a number.
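As an illustration of this correspondence (a sketch with polynomials of degree at most 2 and the standard basis 1, x, x^2; the helper coords is introduced only for the example), adding two polynomials and then taking coordinates gives the same column as adding the coordinate columns:

```python
import numpy as np
from numpy.polynomial import Polynomial

def coords(p, n=2):
    """Coordinate column of a polynomial of degree <= n in the basis 1, x, ..., x^n."""
    c = np.zeros(n + 1)
    c[:len(p.coef)] = p.coef
    return c

p = Polynomial([1., 2., 0.])      # 1 + 2x
q = Polynomial([4., 0., 3.])      # 4 + 3x^2

lhs = coords(p + q)               # coordinates of the sum
rhs = coords(p) + coords(q)       # sum of the coordinate columns
assert np.allclose(lhs, rhs)      # the coordinate map preserves addition
```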

A one-to-one correspondence between the elements of two spaces that preserves the operations introduced in these spaces is called an isomorphism. Isomorphism, like equality, is a transitive property: if the space L_n is isomorphic to K^n, and the space K^n is isomorphic to some space M_n, then L_n is isomorphic to M_n.

Theorem 3. Any linear space of dimension n is isomorphic to K^n; therefore, by transitivity, all linear spaces of dimension n are isomorphic to each other. ▄

Isomorphic objects from the point of view of mathematics are in essence only different “embodiments” (realizations) of one object, and any fact proved for some space is also true for any other space isomorphic to the first one.

2.5 Subspaces

A subspace of a space L is a subset M ⊆ L closed under the operations of addition and multiplication by a number, i.e. for all x, y ∈ M and any number λ:

x + y ∈ M, λx ∈ M.

Obviously, 0 ∈ M whenever M is a subspace of L, i.e. the null vector belongs to any subspace.

Every subspace of a linear space is itself a linear space. The set {0} is a subspace (all the axioms of a linear space are satisfied if the space consists of a single element, the null vector).

Each linear space contains two trivial subspaces: the space itself and the null subspace {0}; all other subspaces are called non-trivial.

The intersection of two subspaces is a subspace. The union of two subspaces is, generally speaking, not a subspace; for example, the union of two lines passing through the origin does not contain the sum of vectors belonging to different lines (such a sum lies between the lines).

Let f_1, f_2, …, f_n ∈ L_k. Then the set of all linear combinations of these vectors, i.e. the set of all vectors of the form

a = α_1 f_1 + α_2 f_2 + … + α_n f_n,

forms a subspace G(f_1, f_2, …, f_n) (of dimension at most n), which is called the linear span of the vectors (f_1, f_2, …, f_n).

Theorem 4. The basis of any subspace can be completed to a basis of the entire space. That is, let M_n ⊂ L_k be a subspace of dimension n and (f_1, f_2, …, f_n) a basis in M_n. Then in L_k there exists a set of vectors g_1, g_2, …, g_{k-n} such that the system of vectors (f_1, f_2, …, f_n, g_1, g_2, …, g_{k-n}) is linearly independent and contains k elements, and therefore forms a basis of L_k. ▄

2.6 Examples of subspaces.


1. In R^3, any plane passing through the origin forms a two-dimensional subspace, and any straight line passing through the origin forms a one-dimensional subspace (planes and lines that do not contain 0 cannot be subspaces); R^3 has no other non-trivial subspaces.

2. In the space K^3 of columns, the columns whose third coordinate is 0 form a subspace, obviously isomorphic to the space K^2 of columns of height 2.

3. In the space P_n of polynomials of degree at most n, the polynomials of degree at most 2 form a three-dimensional subspace (they have three coefficients).

4. In the three-dimensional space P_2 of polynomials of degree at most 2, the polynomials vanishing at a given point x_0 form a two-dimensional subspace (prove it!).

5. Task. In the space K^4 the set M consists of the columns whose coordinates satisfy the condition a_1 - 2a_2 + a_3 = 0 (*). Prove that M is a three-dimensional subspace of K^4.

Solution. Let us prove that M is a subspace. Indeed, let a ∈ M and b ∈ M, so that a_1 - 2a_2 + a_3 = 0 and b_1 - 2b_2 + b_3 = 0. By the rule of vector addition, (a + b)_i = a_i + b_i. It follows that if condition (*) holds for the vectors a and b, then it also holds for a + b. It is also clear that if condition (*) holds for a column a, then it holds for the column λa. Finally, the null vector belongs to the set M. Thus it is proved that M is a subspace. Let us prove that it is three-dimensional. Note that, by condition (*), any vector a ∈ M has coordinates (a_1, a_2, 2a_2 - a_1, a_4) (**). Take, for example, m_1 = (1, 0, -1, 0)^T, m_2 = (0, 1, 2, 0)^T and h_4 = (0, 0, 0, 1)^T. Let us show that the system of vectors (m_1, m_2, h_4) forms a basis in M. Obviously, any vector a from M (see (**)) expands over the system (m_1, m_2, h_4); for this it suffices to take the coordinates of the vector as the expansion coefficients: λ_1 = a_1, λ_2 = a_2, λ_4 = a_4. In particular, the only linear combination of the vectors m_1, m_2, h_4 equal to the null vector is the combination with zero coefficients: λ_1 = 0, λ_2 = 0, λ_4 = 0. From the uniqueness of the expansion of the null vector it follows that (m_1, m_2, h_4) is an independent system of vectors. And since every a ∈ M expands over the system (m_1, m_2, h_4), this system is complete. A complete and independent system forms a basis in the subspace M. Since this basis contains three vectors, M is a three-dimensional subspace of K^4.
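The dimension count in this task can be double-checked numerically. The sketch below assumes that the garbled condition (*) reads a_1 - 2a_2 + a_3 = 0 (the sign of the middle term was lost in the source) and verifies that the three columns above lie in M and span a three-dimensional subspace of K^4:

```python
import numpy as np

# Assumed condition (*): a1 - 2*a2 + a3 = 0, written as a row vector acting on columns
condition = np.array([1., -2., 1., 0.])

m1 = np.array([1., 0., -1., 0.])
m2 = np.array([0., 1.,  2., 0.])
h4 = np.array([0., 0.,  0., 1.])

basis = np.column_stack([m1, m2, h4])
assert np.allclose(condition @ basis, 0)      # every basis vector satisfies (*)
assert np.linalg.matrix_rank(basis) == 3      # they are independent, so dim M = 3
# dim M = 4 - (rank of the single constraint) = 4 - 1 = 3, confirming the answer
```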

http://matworld.ru/linear-algebra/linear-space/linear-subspace.php

Let L and M be two subspaces of the space R.

The sum L + M is the set of vectors x + y, where x ∈ L and y ∈ M. Obviously, any linear combination of vectors from L + M belongs to L + M, hence L + M is a subspace of the space R (it may coincide with the space R itself).

The intersection L ∩ M of the subspaces L and M is the set of vectors that belong to both subspaces L and M simultaneously (it may consist of the null vector only).

Theorem 6.1. The sum of the dimensions of arbitrary subspaces L and M of a finite-dimensional linear space R equals the dimension of the sum of these subspaces plus the dimension of their intersection:

dim L + dim M = dim(L + M) + dim(L ∩ M).

Proof. Denote F = L + M and G = L ∩ M, and let G be a g-dimensional subspace. Choose a basis e_1, …, e_g in it. Since G ⊂ L and G ⊂ M, this basis of G can be completed to a basis e_1, …, e_g, f_1, …, f_l of L and to a basis e_1, …, e_g, h_1, …, h_m of M. Let us show that the vectors

e_1, …, e_g, f_1, …, f_l, h_1, …, h_m    (6.10)

form a basis of F = L + M. First, they are linearly independent. Indeed, suppose some linear combination of them equals zero. Then the vector v = γ_1 h_1 + … + γ_m h_m equals minus the remaining part of the combination, which lies in L, so v belongs to the subspace G = L ∩ M. On the other hand, the vector v can be represented as a linear combination of the basis vectors of the subspace G: v = δ_1 e_1 + … + δ_g e_g. Comparing the two expressions for v and using the linear independence of the basis of the subspace M, we get γ_1 = … = γ_m = 0 and δ_1 = … = δ_g = 0; then, due to the linear independence of the basis of the subspace L, all the remaining coefficients vanish as well, so the vectors (6.10) are linearly independent. But any vector z from F (by the definition of the sum of subspaces) can be represented as a sum x + y, where x ∈ L, y ∈ M. In turn, x is represented as a linear combination of the vectors e_1, …, e_g, f_1, …, f_l, and y as a linear combination of the vectors e_1, …, e_g, h_1, …, h_m. Hence the vectors (6.10) generate the subspace F. We conclude that the vectors (6.10) form a basis of F = L + M.

Comparing the bases of the subspaces L and M with the basis (6.10) of the subspace F = L + M, we have: dim L = g + l, dim M = g + m, dim(L + M) = g + l + m. Hence:



dim L + dim M − dim(L ∩ M) = dim(L + M).
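A quick numerical check of this formula (an illustrative sketch, not part of the source): take two coordinate planes in R^3 that intersect in a line and compare the dimensions of L, M, L + M and L ∩ M computed via matrix ranks.

```python
import numpy as np

L = np.column_stack([[1., 0., 0.], [0., 1., 0.]])   # basis of the xy-plane, dim L = 2
M = np.column_stack([[1., 0., 0.], [0., 0., 1.]])   # basis of the xz-plane, dim M = 2

dim_L = np.linalg.matrix_rank(L)
dim_M = np.linalg.matrix_rank(M)
dim_sum = np.linalg.matrix_rank(np.hstack([L, M]))  # dim(L + M)

# A vector lies in L ∩ M iff L a = M b for some a, b; the solution space of
# [L | -M](a, b) = 0 has the same dimension as L ∩ M when the columns are bases.
dim_int = L.shape[1] + M.shape[1] - np.linalg.matrix_rank(np.hstack([L, -M]))

print(dim_L, dim_M, dim_sum, dim_int)        # 2 2 3 1
assert dim_L + dim_M == dim_sum + dim_int    # Grassmann's formula
```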

2. Eigenvectors and eigenvalues of a linear operator.

http://www.studfiles.ru/preview/6144691/page:4/

A vector X ≠ 0 is called an eigenvector of a linear operator with matrix A if there is a number λ such that AX = λX.

In this case, the number λ is called an eigenvalue of the operator (of the matrix A) corresponding to the vector X.

In other words, an eigenvector is a vector that under the action of the linear operator is transformed into a collinear vector, i.e. is simply multiplied by a number. Vectors that are not eigenvectors are transformed in a more complicated way.

We write the definition of an eigenvector as a system of equations AX = λX. Moving all the terms to the left-hand side, the system can be written in matrix form as

(A − λE)X = O

The resulting system always has the zero solution X = O. Systems in which all free terms are zero are called homogeneous. If the matrix of such a system is square and its determinant is non-zero, then by Cramer's formulas we always obtain the unique zero solution. It can be proved that the system has non-zero solutions if and only if the determinant of this matrix equals zero, i.e.

|A − λE| = 0

This equation in the unknown λ is called the characteristic equation of the matrix A (of the linear operator); its left-hand side is the characteristic polynomial.

It can be proved that the characteristic polynomial of a linear operator does not depend on the choice of basis.

For example, let us find the eigenvalues and eigenvectors of the linear operator given by the matrix A.

To do this, we compose the characteristic equation |A − λE| = (1 − λ)^2 − 36 = 1 − 2λ + λ^2 − 36 = λ^2 − 2λ − 35 = 0; D = 4 + 140 = 144; the eigenvalues are λ_1 = (2 − 12)/2 = −5 and λ_2 = (2 + 12)/2 = 7.

To find the eigenvectors, we solve two systems of equations

(A + 5E) X = O

(A - 7E) X = O

For the first of them, after row reduction the system reduces to the single equation x_1 + (2/3)x_2 = 0, whence x_2 = c, x_1 = -(2/3)c, i.e. X^(1) = (-(2/3)c; c).



For the second of them, the system reduces to the equation x_1 - (2/3)x_2 = 0, whence x_2 = c_1, x_1 = (2/3)c_1, i.e. X^(2) = ((2/3)c_1; c_1).

Thus, the eigenvectors of this linear operator are all vectors of the form (-(2/3)c; c) with eigenvalue -5 and all vectors of the form ((2/3)c_1; c_1) with eigenvalue 7.

It can be proved that the matrix of the operator A in a basis consisting of its eigenvectors is diagonal and has the form A* = diag(λ_1, …, λ_n), where the λ_i are the eigenvalues of this matrix.

The converse is also true: if the matrix A in some basis is diagonal, then all vectors of this basis will be eigenvectors of this matrix.

It can also be proved that if a linear operator has n pairwise distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and the matrix of this operator in the corresponding basis has a diagonal form.

Let us illustrate this with the previous example. Take arbitrary non-zero values c and c_1 such that the vectors X^(1) and X^(2) are linearly independent, i.e. form a basis. For example, let c = c_1 = 3; then X^(1) = (-2; 3) and X^(2) = (2; 3). The determinant composed of their coordinates is non-zero (its absolute value is 12), so these vectors are indeed linearly independent, and in this new basis the matrix A takes the diagonal form A* = diag(-5; 7).

To verify this, one can use the formula A* = C^(-1)AC, where C is the matrix whose columns are the coordinates of the eigenvectors X^(1), X^(2); computing C^(-1) and multiplying out confirms the diagonal form.
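The matrix of this worked example is not reproduced in the source; its characteristic polynomial (1 − λ)^2 − 36 together with the eigenvector ratios is consistent with A = [[1, 4], [9, 1]]. The following sketch (an assumption-based reconstruction, not the source's own computation) verifies the eigenvalues −5 and 7 and the diagonalization A* = C^(-1)AC:

```python
import numpy as np

A = np.array([[1., 4.],
              [9., 1.]])           # reconstructed matrix; an assumption, see the lead-in

eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))        # approximately [-5.  7.]

# Basis of eigenvectors as in the text: X1 = (-2, 3), X2 = (2, 3) (c = c1 = 3)
C = np.column_stack([[-2., 3.], [2., 3.]])
A_star = np.linalg.inv(C) @ A @ C
print(np.round(A_star, 10))        # diag(-5, 7): diagonal in the eigenvector basis
```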


EXAMINATION TICKET No. 11

1. Transition to a new basis in a linear space. Transition matrix.

http://www.studfiles.ru/preview/6144772/page:3/

Transition to a new basis

Let there be two bases in the space R: the old e_1, e_2, …, e_n and the new e_1*, e_2*, …, e_n*. Any vector of the new basis can be represented as a linear combination of the vectors of the old basis:

The transition from the old basis to the new one can be specified by a transition matrix A.

Note that the coefficients of the expansions of the new basis vectors in the old basis form the columns, not the rows, of this matrix.

The matrix A is non-singular, since otherwise its columns (and hence the basis vectors) would be linearly dependent. Therefore it has an inverse matrix A^(-1).

Let the vector X have coordinates (x_1, x_2, …, x_n) relative to the old basis and coordinates (x_1*, x_2*, …, x_n*) relative to the new basis, i.e. X = x_1 e_1 + x_2 e_2 + … + x_n e_n = x_1* e_1* + x_2* e_2* + … + x_n* e_n*.

Substitute into this equation the expressions for e_1*, e_2*, …, e_n* from the previous system:

x_1 e_1 + x_2 e_2 + … + x_n e_n = x_1*(a_11 e_1 + a_12 e_2 + … + a_1n e_n) + x_2*(a_21 e_1 + a_22 e_2 + … + a_2n e_n) + … + x_n*(a_n1 e_1 + a_n2 e_2 + … + a_nn e_n)

0 = e_1(x_1* a_11 + x_2* a_21 + … + x_n* a_n1 - x_1) + e_2(x_1* a_12 + x_2* a_22 + … + x_n* a_n2 - x_2) + … + e_n(x_1* a_1n + x_2* a_2n + … + x_n* a_nn - x_n)

Due to the linear independence of the vectors e_1, e_2, …, e_n, all the coefficients multiplying them in the last equation must equal zero. Hence:

or in matrix form X = A X*.

Multiplying both sides by A^(-1), we get X* = A^(-1) X.

For example, let the vectors a_1 = (1, 1, 0), a_2 = (1, -1, 1), a_3 = (-3, 5, -6) and b = (4; -4; 5) be given in the basis e_1, e_2, e_3. Show that the vectors a_1, a_2, a_3 also form a basis and express the vector b in this basis.

Let us show that the vectors a_1, a_2, a_3 are linearly independent. For this we check that the rank of the matrix composed of them equals three.

Note that this matrix is nothing other than the transition matrix A. Indeed, the connection between the bases e_1, e_2, e_3 and a_1, a_2, a_3 is expressed by the system of expansions of a_1, a_2, a_3 over e_1, e_2, e_3.

Calculate A^(-1). The determinant of the matrix is Δ = 6 + 0 - 3 - 0 - 5 + 6 = 4 ≠ 0, so A^(-1) exists, and the coordinates of b in the new basis are found from X* = A^(-1)X.

That is, in the basis a_1, a_2, a_3 the vector b = (0.5; 2; -0.5).
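A numerical cross-check of this example (a sketch; the transition matrix is built from the given vectors a_1, a_2, a_3 as columns): solving C x* = b reproduces the coordinates (0.5; 2; -0.5).

```python
import numpy as np

# Columns of C are the coordinates of a1, a2, a3 in the old basis e1, e2, e3
C = np.column_stack([[1., 1., 0.],
                     [1., -1., 1.],
                     [-3., 5., -6.]])
b = np.array([4., -4., 5.])

assert abs(np.linalg.det(C)) > 1e-12        # det = 4, so a1, a2, a3 form a basis
b_new = np.linalg.solve(C, b)               # coordinates of b in the basis a1, a2, a3
print(b_new)                                # [ 0.5  2.  -0.5]
```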

2 Vector length and angle between vectors in Euclidean space.

http://mathhelpplanet.com/static.php?p=evklidovy-prostranstva

Let L and M be subspaces of a linear space.

The intersection of the subspaces L and M is the set of vectors each of which belongs to L and to M simultaneously, i.e. the intersection of subspaces is defined as the usual intersection of the two sets; it is denoted L ∩ M.

The algebraic sum of the subspaces L and M is the set of vectors of the form x + y, where x ∈ L, y ∈ M. The algebraic sum (or simply the sum) of subspaces is denoted L + M.

The representation of a vector v in the form v = x + y, where x ∈ L, y ∈ M, is called a decomposition of the vector v over the subspaces L and M.

Remarks 8.8

1. The intersection of subspaces is a subspace. Therefore, the concepts of dimension, basis, etc. apply to intersections.

2. The sum of subspaces is a subspace. Therefore, the concepts of dimension, basis, etc. apply to sums.

Indeed, it is necessary to show that the linear operations are closed in the set L + M. Let two vectors u and v belong to the sum L + M, i.e. each of them decomposes over the subspaces:

u = u_1 + u_2, v = v_1 + v_2, where u_1, v_1 ∈ L and u_2, v_2 ∈ M.

Let us find the sum: u + v = (u_1 + v_1) + (u_2 + v_2). Since u_1 + v_1 ∈ L and u_2 + v_2 ∈ M, we have u + v ∈ L + M, so the set is closed with respect to addition. Let us find the product: λu = λu_1 + λu_2. Since λu_1 ∈ L and λu_2 ∈ M, we have λu ∈ L + M, so the set is closed with respect to multiplication by a number. Thus L + M is a linear subspace.

3. The operation of intersection is defined on the set of all subspaces of a linear space. It is commutative and associative. The intersection of any family of subspaces of V is a linear subspace, and in such an expression the brackets can be placed arbitrarily or omitted altogether.

4. The minimal linear subspace containing a subset S of a finite-dimensional linear space is the intersection of all subspaces containing S. If S is empty, this intersection coincides with the zero subspace {o}, since {o} is a subspace containing S and is contained in every subspace. If S is itself a linear subspace, then the indicated intersection coincides with S, since S is contained in each of the intersected subspaces (and is one of them).

Minimality property of the linear span: the linear span span(S) of any subset S of a finite-dimensional linear space is the minimal linear subspace containing S.

Indeed, denote by W the intersection of all subspaces containing S. We must prove the equality of the two sets: span(S) = W. Since span(S) is a subspace containing S (see point 6 of Remarks 8.7), it is one of the intersected subspaces, hence W ⊆ span(S). Let us prove the inclusion span(S) ⊆ W. An arbitrary element of span(S) has the form of a linear combination of vectors of S. Any subspace containing S contains all these vectors and every linear combination of them (see point 7 of Remarks 8.7), in particular this element. Therefore the element belongs to every subspace containing S and hence to their intersection W. Thus span(S) ⊆ W, and the equality follows from the two inclusions.

5. The operation of adding subspaces is defined on the set of all subspaces of a linear space. It is commutative and associative. Therefore, in the sums of a finite number of subspaces, brackets can be placed arbitrarily or not at all.

6. One can also define the union of the subspaces L and M as the set of vectors each of which belongs to L or to M (or to both subspaces). However, the union of subspaces is, generally speaking, not a subspace (it is a subspace only under the additional condition L ⊆ M or M ⊆ L).

7. The sum of subspaces coincides with the linear span of their union: L + M = span(L ∪ M). Indeed, the inclusion L + M ⊆ span(L ∪ M) follows from the definition: any element of L + M has the form x + y with x ∈ L, y ∈ M, i.e. it is a linear combination of two vectors from the set L ∪ M. Let us prove the opposite inclusion. Any element of span(L ∪ M) has the form α_1 v_1 + … + α_k v_k, where v_i ∈ L ∪ M. We split this sum into two, assigning to the first sum all the terms with v_i ∈ L; the remaining terms make up the second sum.

The first sum is some vector x ∈ L, the second sum is some vector y ∈ M. Hence the element equals x + y, i.e. it belongs to L + M. The two inclusions obtained show the equality of the considered sets.

Theorem 8.4 (on the dimension of the sum of subspaces). If L and M are subspaces of a finite-dimensional linear space, then the dimension of the sum of the subspaces equals the sum of their dimensions minus the dimension of their intersection (Grassmann's formula):

dim(L + M) = dim L + dim M − dim(L ∩ M).
Indeed, let e_1, …, e_g be a basis of the intersection L ∩ M. Complete it by an ordered set of vectors f_1, …, f_l to a basis of the subspace L, and by an ordered set of vectors h_1, …, h_m to a basis of the subspace M. Such a completion is possible by Theorem 8.2. From these three sets of vectors compose the ordered set e_1, …, e_g, f_1, …, f_l, h_1, …, h_m. Let us show that these vectors are generators of the space L + M. Indeed, any vector of this space is a sum of a vector of L and a vector of M and can therefore be represented as a linear combination of the vectors of this ordered set.

Hence L + M is spanned by them. Let us prove that these generators are linearly independent and therefore form a basis of the space L + M. Compose a linear combination of these vectors and equate it to the zero vector:

α_1 e_1 + … + α_g e_g + β_1 f_1 + … + β_l f_l + γ_1 h_1 + … + γ_m h_m = o.   (8.14)

Denote the sum of the first two groups of terms by u; this is some vector from L. Denote the last sum by w; this is some vector from M. Equality (8.14) means that the vector w = -u also belongs to the space L. Hence w ∈ L ∩ M. Expanding this vector in the basis e_1, …, e_g of the intersection, we find w = δ_1 e_1 + … + δ_g e_g. Taking into account the expansion of w in (8.14), we obtain

δ_1 e_1 + … + δ_g e_g - γ_1 h_1 - … - γ_m h_m = o.

The last equality can be regarded as an expansion of the zero vector in the basis of the subspace M. All coefficients of this expansion are zero: δ_1 = … = δ_g = 0 and γ_1 = … = γ_m = 0. Substituting into (8.14), we get α_1 e_1 + … + α_g e_g + β_1 f_1 + … + β_l f_l = o. This is possible only in the trivial case α_1 = … = α_g = 0, β_1 = … = β_l = 0, since the system of vectors e_1, …, e_g, f_1, …, f_l is linearly independent (it is a basis of the subspace L). Thus equality (8.14) holds only in the trivial case, when all the coefficients are zero simultaneously. Therefore the collection of vectors e_1, …, e_g, f_1, …, f_l, h_1, …, h_m is linearly independent, i.e. it is a basis of the space L + M. Let us compute the dimension of the sum of subspaces:

dim(L + M) = g + l + m = (g + l) + (g + m) - g = dim L + dim M - dim(L ∩ M).
Q.E.D.

Example 8.6. In the space of radius vectors with a common origin at the point O the following subspaces are given: L_1, L_2, L_3, the three sets of radius vectors lying on lines ℓ_1, ℓ_2, ℓ_3 that intersect at the point O; and P_1, P_2, the two sets of radius vectors lying in intersecting planes π_1 and π_2 respectively. The line ℓ_1 lies in the plane π_1, the line ℓ_2 lies in the plane π_2, and the planes π_1 and π_2 intersect along the line ℓ_3 (Fig. 8.2). Find the sums and intersections of each two of the indicated five subspaces.

Solution. Let us find the sum L_1 + L_2. Adding two vectors belonging to ℓ_1 and ℓ_2 respectively, we obtain a vector of the plane containing these two lines. On the other hand, any radius vector belonging to that plane (see Fig. 8.2) can be represented as the sum of its projections onto the lines ℓ_1 and ℓ_2. Hence any radius vector of the plane decomposes over the subspaces L_1 and L_2, i.e. L_1 + L_2 is the set of radius vectors of the plane passing through ℓ_1 and ℓ_2. Similarly, L_1 + L_3 = P_1 and L_2 + L_3 = P_2, since ℓ_3 lies in both planes.

Let us find the sum of a line subspace and a plane subspace not containing it, say L_1 + P_2. Any vector of the space decomposes over the subspaces L_1 and P_2: through the end of a radius vector we draw a line parallel to ℓ_1 (see Fig. 8.2), i.e. we construct the projection of the vector onto the plane π_2, and the remaining summand lies on ℓ_1. Hence L_1 + P_2 coincides with the whole space; similarly for L_2 + P_1 and for P_1 + P_2. The remaining sums are found directly: when one subspace contains the other, the sum equals the larger subspace (for example, L_1 + P_1 = P_1 and L_3 + P_2 = P_2).

Using Theorem 8.4 we can check, for example, the dimension of L_1 + P_2: substituting dim L_1 = 1, dim P_2 = 2 and dim(L_1 ∩ P_2) = 0 into Grassmann's formula, we obtain dim(L_1 + P_2) = 1 + 2 - 0 = 3, which is to be expected, since L_1 + P_2 is the whole space.

The intersections of the subspaces are read off Fig. 8.2 as intersections of the corresponding geometric figures: a line subspace contained in a plane subspace intersects it in itself (for example, L_1 ∩ P_1 = L_1), the two planes intersect in P_1 ∩ P_2 = L_3, and all the remaining pairs intersect in the zero subspace {o}, where o is the zero radius vector.

Direct sum of subspaces. Criteria for a direct sum.


