Next: Determinants Up: Matrices and determinants Previous: The matrix formalism

Rules for matrix calculations

Matrices can be multiplied by a number, and they can be added to, subtracted from, and multiplied with each other. These operations obey the following rules:

Definition (D 2.4.5) An $(m\times n)$ matrix A is multiplied by a (real) number $\lambda$ by multiplying each of its elements by $\lambda$:

\( \lambda\,\mbox{\textit{\textbf{A}}} = \left( \begin{array}{cccc}
\lambda A_{11} & \lambda A_{12} & \ldots & \lambda A_{1n} \\
\lambda A_{21} & \lambda A_{22} & \ldots & \lambda A_{2n} \\
\vdots & \vdots & & \vdots \\
\lambda A_{m1} & \lambda A_{m2} & \ldots & \lambda A_{mn}
\end{array} \right) \).
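As an illustrative sketch (not part of the original text), this rule can be written out with matrices modelled as nested Python lists; the function name `scalar_mul` is my own choice:

```python
# Scalar multiplication of an (m x n) matrix: every element is
# multiplied by the number lam, as in the definition above.
def scalar_mul(lam, A):
    return [[lam * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
print(scalar_mul(2, A))  # [[2, 4, 6], [8, 10, 12]]
```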

Definition (D 2.4.5) Let $A_{ik}$ and $B_{ik}$ be the general elements of the matrices A and B. Moreover, A and B must be of the same size, i.e. they must have the same number of rows and the same number of columns. Then the sum and the difference $\mbox{\textit{\textbf{A}}}\pm\mbox{\textit{\textbf{B}}}$ are defined by

\( \mbox{\textit{\textbf{C}}}=\mbox{\textit{\textbf{A}}}\pm\mbox{\textit{\textbf{B}}}=
\left( \begin{array}{cccc}
A_{11}\pm B_{11} & A_{12}\pm B_{12} & \ldots & A_{1n}\pm B_{1n} \\
A_{21}\pm B_{21} & A_{22}\pm B_{22} & \ldots & A_{2n}\pm B_{2n} \\
\vdots & \vdots & & \vdots \\
A_{m1}\pm B_{m1} & A_{m2}\pm B_{m2} & \ldots & A_{mn}\pm B_{mn}
\end{array} \right), \)

i.e. the element $C_{ik}$ of C is equal to the sum or difference of the elements $A_{ik}$ and $B_{ik}$ of A and B for any pair of $i,k$: $C_{ik}=A_{ik}\pm B_{ik}$.
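A minimal sketch of this element-wise rule, again with nested lists (the names `mat_add` and `mat_sub` are assumptions of mine, not the text's):

```python
# Sum and difference of two matrices of the same size:
# C_ik = A_ik + B_ik (or A_ik - B_ik), element by element.
def mat_add(A, B):
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("A and B must have the same numbers of rows and columns")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return mat_add(A, [[-b for b in row] for row in B])

print(mat_add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[6, 8], [10, 12]]
```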

The definition of matrix multiplication looks more complicated at first sight, but it corresponds exactly to what is written in full in the formulae (2.2.1) to (2.2.4) of Section 2.2. The multiplication of two matrices is defined only if the number $n_{(lema)}$ of columns of the left matrix (lema) is the same as the number $m_{(rima)}$ of rows of the right matrix (rima). The numbers $m_{(lema)}$ of rows of the left matrix and $n_{(rima)}$ of columns of the right matrix are free.

We first define the product of a matrix A with a column a:

Definition (D 2.4.5) The multiplication of an ($m\times n$) matrix A with an ($n\times 1$) column a is only possible if the number $n$ of columns of the matrix is the same as the length of the column a. The result is the matrix product d = A$\,$a which is a column of length $m$. The $i$-th element $d_i$ of d is

\begin{displaymath}d_i=A_{i1}\,a_1+A_{i2}\,a_2+\ldots+
A_{ik}\,a_k+\ldots+A_{in}\,a_n=\sum_{j=1}^{n} A_{ij}\,a_j.
\end{displaymath} (2.4.1)

For $i=1,\,2,\,3$ and $n=3$ this is the same procedure as in equations (2.2.1) and (2.2.2), where the coefficients $x_k$ in equation (2.2.1) and $\tilde{x}_k$ in equation (2.2.2) are replaced by $a_k$ here, and the left sides $(\tilde{x}_i$ and $\tilde{\tilde{x_i}})$ are replaced by $d_i$. The terms $a_i$ and $b_i$ of equations (2.2.1) and (2.2.2) are not represented in equation (2.4.1).

Written as a matrix equation this is

\begin{displaymath}
\left( \begin{array}{c} d_1 \\ d_2 \\ \vdots \\ d_i \\ \vdots \\ d_m \end{array} \right)=
\left( \begin{array}{cccccc}
A_{11} & A_{12} & \ldots & A_{1k} & \ldots & A_{1n} \\
A_{21} & A_{22} & \ldots & A_{2k} & \ldots & A_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{i1} & A_{i2} & \ldots & A_{ik} & \ldots & A_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{m1} & A_{m2} & \ldots & A_{mk} & \ldots & A_{mn}
\end{array} \right)
\left( \begin{array}{c} a_1 \\ a_2 \\ \vdots \\ a_k \\ \vdots \\ a_n \end{array} \right).
\end{displaymath}
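A short sketch of equation (2.4.1) in code (the helper name `mat_vec` is mine); the example matrix is the matrix A used in the examples further below, which merely permutes the coefficients:

```python
# Product d = A a of an (m x n) matrix with a column of length n,
# following equation (2.4.1): d_i = sum_j A_ij * a_j.
def mat_vec(A, a):
    n = len(A[0])
    if len(a) != n:
        raise ValueError("column length must equal the number of columns of A")
    return [sum(A[i][j] * a[j] for j in range(n)) for i in range(len(A))]

A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
print(mat_vec(A, [1, 2, 3]))  # [2, 3, 1]: this A permutes the coefficients
```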

In an analogous way one defines the multiplication of a row matrix with a general matrix.

Definition (D 2.4.5) The multiplication of a $(1\times m)$ row a$^{\mbox{\footnotesize T}}$ with an $(m\times n)$ matrix A is only possible if the length $m$ of the row, i.e. its number of `columns', is the same as the number $m$ of rows of the matrix. The result is the matrix product d$^{\mbox{\footnotesize T}}$ = a$^{\mbox{\footnotesize T}}$A, which is a row of length $n$. The $i$-th element $d_i$ of d$^{\mbox{\footnotesize T}}$ is

\( d_i=a_1\,A_{1i}+a_2\,A_{2i}+\ldots+a_k\,A_{ki}+
\ldots+a_m\,A_{mi} \).

Written as a matrix equation this is

\( \left( d_1\ d_2\ \ldots\ d_i\ \ldots\ d_n \right)=\left( a_1\ a_2\ \ldots\ a_k\ \ldots\ a_m \right)
\left( \begin{array}{cccccc}
A_{11} & A_{12} & \ldots & A_{1i} & \ldots & A_{1n} \\
A_{21} & A_{22} & \ldots & A_{2i} & \ldots & A_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{k1} & A_{k2} & \ldots & A_{ki} & \ldots & A_{kn} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{m1} & A_{m2} & \ldots & A_{mi} & \ldots & A_{mn}
\end{array} \right). \)
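The row-times-matrix rule can be sketched analogously (the helper name `vec_mat` is my own); note that the result differs from multiplying the same matrix with the same coefficients as a column:

```python
# Product d^T = a^T A of a (1 x m) row with an (m x n) matrix:
# d_i = a_1*A_1i + a_2*A_2i + ... + a_m*A_mi, giving a row of length n.
def vec_mat(aT, A):
    m = len(A)
    if len(aT) != m:
        raise ValueError("row length must equal the number of rows of A")
    n = len(A[0])
    return [sum(aT[k] * A[k][i] for k in range(m)) for i in range(n)]

A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
print(vec_mat([1, 2, 3], A))  # [3, 1, 2]
```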

The multiplication of two general matrices (neither of them a single row or a single column) combines the multiplications already defined: of a matrix with a column (matrix on the left, column on the right) and of a row with a matrix (row on the left, matrix on the right). Remember: the number of columns of the left matrix must be the same as the number of rows of the right matrix.

Definition (D 2.4.5) The matrix product C = A$\,$B, or

\( \left( \begin{array}{cccccc}
C_{11} & C_{12} & \ldots & C_{1k} & \ldots & C_{1n} \\
C_{21} & C_{22} & \ldots & C_{2k} & \ldots & C_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
C_{i1} & C_{i2} & \ldots & C_{ik} & \ldots & C_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
C_{m1} & C_{m2} & \ldots & C_{mk} & \ldots & C_{mn}
\end{array} \right) =
\left( \begin{array}{cccccc}
A_{11} & A_{12} & \ldots & A_{1j} & \ldots & A_{1r} \\
A_{21} & A_{22} & \ldots & A_{2j} & \ldots & A_{2r} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{i1} & A_{i2} & \ldots & A_{ij} & \ldots & A_{ir} \\
\vdots & \vdots & & \vdots & & \vdots \\
A_{m1} & A_{m2} & \ldots & A_{mj} & \ldots & A_{mr}
\end{array} \right)
\left( \begin{array}{cccccc}
B_{11} & B_{12} & \ldots & B_{1k} & \ldots & B_{1n} \\
B_{21} & B_{22} & \ldots & B_{2k} & \ldots & B_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
B_{j1} & B_{j2} & \ldots & B_{jk} & \ldots & B_{jn} \\
\vdots & \vdots & & \vdots & & \vdots \\
B_{r1} & B_{r2} & \ldots & B_{rk} & \ldots & B_{rn}
\end{array} \right) \)

is defined by \( C_{ik}=A_{i1}\,B_{1k}+A_{i2}\,B_{2k}+\ldots+
A_{ij}\,B_{jk}+\ldots+A_{ir}\,B_{rk}. \)

Examples.

If \( \mbox{\textit{\textbf{A}}}= \left( \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\
1 & 0 & 0 \end{array} \right)\) and \( \mbox{\textit{\textbf{B}}}=\left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\
0 & 0 & 1 \end{array} \right)\),

then \( \mbox{\textit{\textbf{C}}} = \mbox{\textit{\textbf{A}}}\,\mbox{\textit{\textbf{B}}} =
\left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0
\end{array} \right) \). On the

other hand, \( \mbox{\textit{\textbf{D}}}=\mbox{\textit{\textbf{B}}}\,\mbox{\textit{\textbf{A}}} = \left(
\begin{array}{rrr} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right) \).

Obviously, C$\ne$D, i.e. matrix multiplication is not always commutative. However, it is associative, e.g., (AB)D = A(BD), as the reader may verify by performing the indicated multiplications. One may also verify that matrix multiplication is distributive, i.e.
(A + B)C = AC + BC.
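A short sketch that reproduces this example and checks non-commutativity, associativity and distributivity numerically (the helper names `mat_mul` and `mat_add` are my own, not the text's):

```python
def mat_mul(A, B):
    # C_ik = sum_j A_ij * B_jk; columns of A must match rows of B.
    r = len(B)
    if len(A[0]) != r:
        raise ValueError("number of columns of A must equal number of rows of B")
    return [[sum(A[i][j] * B[j][k] for j in range(r)) for k in range(len(B[0]))]
            for i in range(len(A))]

def mat_add(A, B):
    # Element-wise sum, as defined earlier in this section.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
B = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
C = mat_mul(A, B)   # [[1, 0, 0], [0, 0, 1], [0, 1, 0]], as in the text
D = mat_mul(B, A)   # [[0, 0, 1], [0, 1, 0], [1, 0, 0]]

print(C != D)                                                  # True: not commutative
print(mat_mul(mat_mul(A, B), D) == mat_mul(A, mat_mul(B, D)))  # True: associative
print(mat_mul(mat_add(A, B), C) ==
      mat_add(mat_mul(A, C), mat_mul(B, C)))                   # True: distributive
```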

In `index notation' (where A is an $(m\times r)$ matrix and B an $(r\times n)$ matrix) the matrix product is

\begin{displaymath}
C_{ik}=\sum_{j=1}^{r} A_{ij}\, B_{jk},\ \ i=1,\ldots,m;
\ k=1,\ldots, n.
\end{displaymath} (2.4.2)

Remarks.

  1. The matrix A has the same number $r$ of columns as B has rows; C inherits its number $m$ of rows from A and its number $n$ of columns from B.
  2. A comparison with equation (2.2.4) shows that exactly the same construction occurs in the matrix product when describing consecutive mappings by matrix-column pairs. Also the product of the matrix B with the column a will be recognized. It is for this reason that the matrix formalism has been introduced. Affine mappings (also isometries and crystallographic symmetry operations) in point space are described by matrix-column pairs, see Sections 2.2 and 4.1.
  3. The `power notation' is used in the same way for the matrix product of a square matrix with itself as for numbers: AA = A$^2$; AAA = A$^3$, etc.
  4. Using the formulae of this section one confirms equations (2.2.5) to (2.2.8).
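The power notation of remark 3 can be sketched in the same spirit; the observation that the example matrix A satisfies AAA = A$^3$ = unit matrix (it is a threefold permutation) is my own addition, not the text's:

```python
from functools import reduce

def mat_mul(A, B):
    # C_ik = sum_j A_ij * B_jk, as in equation (2.4.2).
    r = len(B)
    return [[sum(A[i][j] * B[j][k] for j in range(r)) for k in range(len(B[0]))]
            for i in range(len(A))]

def mat_pow(A, p):
    # Power notation for a square matrix: AA = A^2, AAA = A^3, ..., p >= 1.
    return reduce(mat_mul, [A] * p)

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(mat_pow(A, 3) == I)  # True: applying this permutation three times restores the order
```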



Copyright © 2002 International Union of Crystallography
