
Course Log / Time Table
This is a record of the course's actual progression, including the exercise sheets.
Unless otherwise stated, all numerical citations refer to the book (see below).
Week 
Events 
Content 
35 (29.8.)* 
Tue 8–10 & Wed 8–10 & Thu 12–14 U20
Section 1.1 to 1.5 [p. 1–68]
We introduced systems of linear equations, row echelon form,
Gaussian elimination via elementary row operations and some examples,
including a traffic flow example (see p. 17/18). In the exercise
classes, more examples will be discussed. We did matrix algebra,
including inverses, and by the end of the week, we finished Chapter 1
(omitting most of Section 1.6).
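The elimination steps can be sketched in a few lines of Python (a minimal illustration of the idea, not the book's algorithm; it assumes a square system with a unique solution):

```python
def gaussian_elimination(A, b):
    """Solve Ax = b via elementary row operations and back substitution.

    A is an n-by-n list of rows, b a list of length n; assumes a
    unique solution exists. A sketch only, not the book's pseudocode.
    """
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot entry.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate below the pivot to reach row echelon form.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the echelon form.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

For instance, `gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])` solves 2x+y=5, x+3y=10 and returns `[1.0, 3.0]`.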

 Wed 14–16 U20 [S1] 
Exercises: Sheet 1 [pdf] (for two weeks) 
 Tue 14–16 U81 [S2] 

36 (5.9.) 
Wed 14–16 U20 & Thu 14–16 U26 [S1] 
Exercises: See previous week. 
 Tue 14–16 U81 & Fri 12–14 U49 [S2] 

37 (12.9.) 
Tue 8–10 & Wed 8–10 & Thu 12–14 U20 
Section 2.1 and 2.2 [p. 84–98]
We introduced determinants by way of cofactor expansion.
Book and lecture notation differ:
Given a matrix A, I use A_{ij}
for the minor obtained from A by deleting
row i and column j.
For some reason, the book introduces new symbols here
(M_{ij}) and uses A_{ij} for
the cofactors instead; as determinants are not really overemphasized
throughout the course, I went for the more straightforward notation. 
You might wish to work out a proof of Laplace's Expansion Theorem 2.1.1.
A proof which fits the way determinants are introduced in the book can
be found
here,
but be aware that its level is a little above our textbook.
Another way of introducing determinants is via
Leibniz's formula;
for this, some elementary knowledge on permutation groups is necessary.
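Expansion along the first row is short to write down in code (an illustration using the lecture's minor notation A_{ij}, except that indices are 0-based here; practical software computes determinants differently):

```python
def minor(A, i, j):
    """The minor A_ij of the lecture's notation: delete row i and
    column j of A (0-based indices, unlike the 1-based ones in class)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # Sign pattern (-1)^(i+j) with i = 0, since we expand along row 0.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))
```

For example, `det([[1, 2], [3, 4]])` gives 1·4 − 2·3 = −2.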
Section 3.1 to most of 3.4 [p. 110–141]
We discussed vector space axiomatics and derived some basic properties.
The idea is to begin with just a coefficient field, an (unstructured) set of
what will be called vectors, and some operations, and then to define
the structure or behaviour of this conglomerate of objects by only a
few defining properties (also called axioms). The whole theory of general
vector spaces is then derived from these axioms (and from similar axioms or defining properties for fields,
sets, or operations). Although the book refers to the most common fields
like the reals or the complex numbers, you should be aware that there are
other important ones, like the rationals or also the binary field;
you might want to read more in
Wikipedia about fields. 
Please be aware that you now deal with addition,
multiplication, neutral elements, and inverses in different object spaces.
In practically all cases, we will use the same symbol for what are
nonetheless different things: 0 will denote the neutral element with respect
to addition of both vectors and members of the coefficient field, + will
denote addition of vectors as well as of coefficients, etc.
One of the easiest and most common ways to obtain
a vector space from just a field F and an arbitrary
coordinate set S is to take the set V of all functions
from S to F, usually denoted by F^{S},
and then to define addition and scalar multiplication pointwise as follows:
For f,g in V,
define a new function f+g in V
by (f+g)(x):=f(x)+g(x) for all x in S,
and for f in V and α in F,
define α·f in V
by (α·f)(x):=α·f(x) for all x in S.
We get, for example, the vector space of all real m×n matrices
by taking the reals for F and the set
{1,...,m} × {1,...,n} of all pairs (i,j)
with i from {1,...,m} and
j from {1,...,n} for S.
We also get good old 3space this way, by taking S={1,2,3},
since triples of reals can very well be considered as functions
from {1,2,3} to the reals.
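The F^S construction can be made concrete in Python by representing a vector, that is, a function from S to F, as a dict (the names below are purely illustrative):

```python
# Vectors of F^S are functions from the coordinate set S to the field F
# (here F is the reals); we represent such a function as a dict.
S = {1, 2, 3}

def add(f, g):
    """Pointwise addition: (f+g)(x) := f(x) + g(x) for all x in S."""
    return {x: f[x] + g[x] for x in S}

def scale(alpha, f):
    """Pointwise scalar multiplication: (alpha*f)(x) := alpha * f(x)."""
    return {x: alpha * f[x] for x in S}

# Good old 3-space: the triple (1, 2, 3) viewed as a function on {1,2,3}.
f = {1: 1.0, 2: 2.0, 3: 3.0}
g = {1: 0.5, 2: 0.5, 3: 0.5}
```

Here `add(f, g)` is the triple (1.5, 2.5, 3.5), exactly as in coordinate-wise vector addition.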
Practically all general results in Section 3 relating
bases and spanning sets follow from the following statement:
Given a vector space V over a field F,
a linearly independent family a_{1},...,a_{m},
and a spanning family b_{1},...,b_{k},
then a_{1},...,a_{m} can be extended to a basis
using only members from b_{1},...,b_{k}.
I have put this theorem and its proof on next week's exercise sheet.
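One natural proof strategy is greedy: run through the spanning family and keep each b_i that is not already in the span of the vectors collected so far. This can be sketched numerically with numpy (an illustration only; the helper name and the rank-based independence test are mine, not the book's or the exercise sheet's):

```python
import numpy as np

def extend_to_basis(independent, spanning, tol=1e-10):
    """Extend a linearly independent family to a basis, using only
    members of the spanning family (vectors are 1-d numpy arrays)."""
    basis = list(independent)
    for b in spanning:
        # Keep b only if it is independent of the vectors chosen so far,
        # i.e. adding it as a new column raises the rank by one.
        candidate = np.column_stack(basis + [b])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(basis) + 1:
            basis.append(b)
    return basis
```

Starting from the single vector e_1 in 3-space and the spanning family (1,1,0), (0,1,0), (0,0,1), this keeps (1,1,0), skips (0,1,0) (already in the span), and keeps (0,0,1), yielding a basis of three vectors.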

 Wed 14–16 U20 [S1] 
Exercises: Sheet 2 [pdf] 
 Tue 14–16 U81 [S2] 

38 (19.9.) 
Tue 8–10 & Thu 12–14 U20 
Section 3.4 to 3.6 [p. 138–165]
I have left out Section 3.5 in the lecture,
as it is mostly a specialization of what happens in Chapter 4.
Section 3.6 is about three vector spaces generated by matrices:
The row space, the column space, and the null space.
The main results here are that the sum of the dimensions of row space
and null space is equal to the width of the matrix, and
that the dimensions of row space and column space are equal.
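Both statements are easy to check numerically (a small numpy illustration; the matrix below is an arbitrary example of mine, and the nullity is read off from the singular values rather than from a null-space basis):

```python
import numpy as np

# A 3x4 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1., 2., 3., 4.],
              [0., 1., 1., 0.],
              [1., 3., 4., 4.]])

rank = np.linalg.matrix_rank(A)        # dimension of the row space
col_rank = np.linalg.matrix_rank(A.T)  # dimension of the column space
# Nullity = number of columns minus the number of nonzero singular values.
singular_values = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - int(np.sum(singular_values > 1e-10))
# rank == col_rank, and rank + nullity == 4, the width of the matrix.
```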
Sections 4.1 to 4.3 [p. 166–197]
You might be interested in the CG & animation example [p. 182–185].
The interesting point here is that a translation
by some vector a (that is, the function assigning x+a
to every x) is not linear in general (why?). However, it can be modelled
as a linear transformation if you go one dimension higher, introducing
an additional component which is constantly 1. This is the reason why
3D CG is a matter of 4×4 matrices rather than of
3×3 matrices.
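A short numpy illustration of the trick (the function name is mine):

```python
import numpy as np

def translation_matrix(a):
    """4x4 matrix acting on homogeneous coordinates (x, 1) as x -> x + a."""
    T = np.eye(4)
    T[:3, 3] = a   # the translation vector goes into the last column
    return T

# The point (1, 2, 3) with the extra constant-1 component appended.
x = np.array([1.0, 2.0, 3.0, 1.0])
y = translation_matrix([10.0, 0.0, -1.0]) @ x   # -> (11, 2, 2, 1)
```

Multiplying by the 4×4 matrix adds a to the first three components and leaves the constant 1 untouched, so the non-linear map x → x+a has become matrix multiplication one dimension higher.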

 Wed 14–16 U20 & Thu 14–16 U26 [S1] 
Exercises: Sheet 3 [pdf] 
 Tue 14–16 U81 & Fri 12–14 U49 [S2] 

39 (26.9.) 
Tue 8–10 & Thu 12–14 U20 
Sections 5.4, 5.1, 5.6 [p. 233–240, p. 198–213, p. 259–269]
We introduced inner products and norms and showed
how inner products induce a norm. We treated Section 5.4
and specialized to obtain most of the results in Section 5.1.
We then jumped forward to 5.6, as this will make life easier
in the remaining sections 5.2, 5.3, and 5.5.
The Gram–Schmidt process from Section 5.6 has a number of
important applications, among them basis extension theorems for
orthonormal sets.
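The process itself fits in a few lines of numpy (a sketch for the standard inner product on real n-space; it assumes the input family is linearly independent, so no remainder vector is zero):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent family into an orthonormal one
    spanning the same subspace (standard inner product)."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the vectors built so far ...
        w = v - sum(np.dot(v, u) * u for u in basis)
        # ... and normalize the remainder.
        basis.append(w / np.linalg.norm(w))
    return basis
```

Applied to (1,1,0) and (1,0,1), for instance, it returns two unit vectors with inner product zero.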

 Wed 14–16 U20 [S1] 
Exercises: Sheet 4 [pdf] 
 Tue 14–16 U81 [S2] 

40 (3.10.) 
Tue 8–10 & Thu 12–14 U20 
Sections 5.2, 5.3, 5.5 [p. 214–232, p. 241–259]
We have finished 5.2 and 5.3 (and most of the remaining parts of 5.6).
Although Section 5.5 has not been treated explicitly in the lectures,
most of the arguments have been explained in the treatment of the other
sections (for example, 5.5.1 has been proved in connection with Theorem 5.6.1).

 Tue 16–18 U26 & Wed 14–16 U20 [S1] 
Exercises: Sheet 5 [pdf] 
 Tue 14–16 U81 & Thu 10–12 U110 [S2] 

41 (10.10.) 
Tue 8–10 & Thu 12–14 U20 
Sections 6.1 and 6.3 [p. ...  ...]
We have considered Sections 6.1 and 6.3, with the following simplified definition:
A vector from real n-space is a distribution if its entries are nonnegative
and sum up to 1. A matrix is stochastic if its columns are distributions.
A Markov process is a pair (A,x^{(0)}), where
A is a stochastic n×n matrix and x^{(0)}
is a distribution from n-space, the initial status of the process.
The status x^{(t)} at time t is then
A^{t}·x^{(0)}.
Its jth entry is interpreted as the probability that
the process has status j at time t.
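The status x^{(t)} = A^{t}·x^{(0)} is easy to compute with numpy (the 2-state chain below is a made-up example, not one from the book):

```python
import numpy as np

def status(A, x0, t):
    """x^(t) = A^t x^(0): the distribution of the process at time t."""
    return np.linalg.matrix_power(A, t) @ x0

# A made-up 2-state chain; each column sums to 1, so A is stochastic.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])
x0 = np.array([1.0, 0.0])   # start in state 0 with probability 1
```

Because every column of A is a distribution, multiplying by A preserves the entry sum, so each status x^{(t)} is again a distribution.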

 Tue 16–18 U26 & Wed 14–16 U20 [S1] 
Exercises: Sheet 6 [pdf] 
 Tue 14–16 U81 & Thu 10–12 U110 [S2] 

*: The date (dd.mm.) indicates the Monday of the respective week.
Contents
This is an introductory course to linear algebra.
Topics are: Matrices, Systems of Equations, Determinants,
Vector Spaces, Linear Transformations, Orthogonality,
Eigenvalues, and numerical linear algebra.
Literature
The course is covered by the material of the book
Linear Algebra with Applications by
Steven J. Leon, 8th edition, Prentice Hall (2010).
Further books are in the MM505 slot in IMADA's library.
