If \( AB = BA \) for two square matrices \( A \) and \( B \), prove by mathematical induction that \( (AB)^n = A^n B^n \) for every positive integer \( n \).
Let \( A \) and \( B \) be square matrices of the same order such that \( AB = BA \). We prove by induction on the positive integer \( n \) that \( (AB)^n = A^n B^n \).
Base case (n = 1): For \( n = 1 \), \( (AB)^1 = AB = A^1 B^1 \). Hence the result is true for \( n = 1 \).
Induction hypothesis: Assume that the result holds for some \( k \in \mathbb{N} \), that is, \( (AB)^k = A^k B^k \).
Induction step (n = k + 1): Consider
\[ (AB)^{k+1} = (AB)^k (AB). \]
Using the induction hypothesis,
\[ (AB)^{k+1} = A^k B^k AB. \]
Since \( AB = BA \), the matrix \( A \) commutes with \( B \), and a short induction extends this to every power of \( B \): if \( A B^m = B^m A \), then \( A B^{m+1} = B^m A B = B^m B A = B^{m+1} A \). Hence
\[ A^k B^k A B = A^k A B^k B \]
because \( B^k A = A B^k \). Thus
\[ (AB)^{k+1} = A^{k+1} B^{k+1}. \]
This proves that if the statement holds for \( n = k \), it also holds for \( n = k + 1 \).
Conclusion: By the principle of mathematical induction, \( (AB)^n = A^n B^n \) for all positive integers \( n \), whenever \( AB = BA \).
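As a sanity check, separate from the proof itself, the identity can be verified numerically. The sketch below uses numpy and chooses \( B \) as a polynomial in \( A \), which guarantees \( AB = BA \); the matrix size and the range of \( n \) are arbitrary choices for illustration.

```python
import numpy as np

# Numerical sanity check (not a proof): take B to be a polynomial in A,
# so that A and B commute, and compare (AB)^n with A^n B^n.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)
B = A @ A + 2.0 * A + np.eye(3)          # any polynomial in A commutes with A

assert np.allclose(A @ B, B @ A)         # AB = BA
for n in range(1, 6):
    lhs = np.linalg.matrix_power(A @ B, n)
    rhs = np.linalg.matrix_power(A, n) @ np.linalg.matrix_power(B, n)
    assert np.allclose(lhs, rhs)         # (AB)^n = A^n B^n
```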
Find \( x, y, z \) if
\[ A = \begin{bmatrix} 0 & 2y & z \\ x & y & -z \\ x & -y & z \end{bmatrix} \]
satisfies \( A' = A^{-1} \), where \( A' \) is the transpose of \( A \).
The condition \( A' = A^{-1} \) means that \( A \) is an orthogonal matrix, i.e. \( AA' = I = A'A \). Hence the column vectors of \( A \) form an orthonormal set.
The columns of \( A \) are
\[ C_1 = \begin{bmatrix} 0 \\ x \\ x \end{bmatrix}, \quad C_2 = \begin{bmatrix} 2y \\ y \\ -y \end{bmatrix}, \quad C_3 = \begin{bmatrix} z \\ -z \\ z \end{bmatrix}. \]
Unit length conditions:
\( C_1 \cdot C_1 = 0^2 + x^2 + x^2 = 2x^2 = 1 \Rightarrow x^2 = \dfrac{1}{2} \Rightarrow x = \pm \dfrac{1}{\sqrt{2}}. \)
\( C_2 \cdot C_2 = (2y)^2 + y^2 + (-y)^2 = 6y^2 = 1 \Rightarrow y^2 = \dfrac{1}{6} \Rightarrow y = \pm \dfrac{1}{\sqrt{6}}. \)
\( C_3 \cdot C_3 = z^2 + (-z)^2 + z^2 = 3z^2 = 1 \Rightarrow z^2 = \dfrac{1}{3} \Rightarrow z = \pm \dfrac{1}{\sqrt{3}}. \)
Orthogonality conditions: The columns are perpendicular for every choice of \( x, y, z \): \( C_1 \cdot C_2 = 0(2y) + xy - xy = 0 \), \( C_1 \cdot C_3 = 0(z) - xz + xz = 0 \), and \( C_2 \cdot C_3 = 2yz - yz - yz = 0 \). Combined with the unit-length conditions above, the columns form an orthonormal set, hence \( A' = A^{-1} \).
Therefore, the required values are
\[ x = \pm \dfrac{1}{\sqrt{2}}, \quad y = \pm \dfrac{1}{\sqrt{6}}, \quad z = \pm \dfrac{1}{\sqrt{3}}, \]
where each sign may be chosen independently, since the orthogonality conditions hold identically.
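A quick numerical check of the conclusion, using numpy; one sign combination is shown, but any of the eight works.

```python
import numpy as np

# Check A A' = I for one choice of signs; orthogonality of the columns
# holds for all x, y, z, so every sign combination gives A' = A^{-1}.
x, y, z = 1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)
A = np.array([[0, 2 * y,  z],
              [x,     y, -z],
              [x,    -y,  z]])
print(np.allclose(A @ A.T, np.eye(3)))   # True
```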
If possible, using elementary row transformations, find the inverse of the following matrices:
(i) \( A_1 = \begin{bmatrix} 2 & -1 & 3 \\ -5 & 3 & 1 \\ -3 & 2 & 3 \end{bmatrix} \)
(ii) \( A_2 = \begin{bmatrix} 2 & 3 & -3 \\ -1 & -2 & 2 \\ 1 & 1 & -1 \end{bmatrix} \)
(iii) \( A_3 = \begin{bmatrix} 2 & 0 & -1 \\ 5 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \).
To find the inverse of a matrix by row transformations, form the augmented matrix \( [A \mid I] \) and perform elementary row operations until the left side becomes \( I \). The right side then gives \( A^{-1} \).
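For reference, here is a minimal numpy sketch of this procedure; the function name inverse_by_row_ops is our own, and the partial pivoting is a numerical-stability detail that the hand computations below do not need.

```python
import numpy as np

def inverse_by_row_ops(A):
    """Reduce [A | I] to [I | A^{-1}] by elementary row operations.

    Returns None when no nonzero pivot can be found, i.e. when A is singular.
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])     # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # pick a pivot row
        if np.isclose(M[pivot, col], 0.0):
            return None                                 # singular: no inverse
        M[[col, pivot]] = M[[pivot, col]]               # swap rows
        M[col] /= M[col, col]                           # scale pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]              # clear the column
    return M[:, n:]                                     # right block is A^{-1}
```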
(i) Inverse of \( A_1 \)
Start with
\[ [A_1 \mid I] = \left[ \begin{array}{ccc|ccc} 2 & -1 & 3 & 1 & 0 & 0 \\ -5 & 3 & 1 & 0 & 1 & 0 \\ -3 & 2 & 3 & 0 & 0 & 1 \end{array} \right]. \]
By a sequence of row operations (such as \( R_2 \leftarrow 2R_2 + 5R_1 \), \( R_3 \leftarrow 2R_3 + 3R_1 \), and further simplifications), the left block can be reduced to the identity matrix. After complete reduction we obtain
\[ [I \mid A_1^{-1}] = \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & -7 & -9 & 10 \\ 0 & 1 & 0 & -12 & -15 & 17 \\ 0 & 0 & 1 & 1 & 1 & -1 \end{array} \right]. \]
Thus
\[ A_1^{-1} = \begin{bmatrix} -7 & -9 & 10 \\ -12 & -15 & 17 \\ 1 & 1 & -1 \end{bmatrix}. \]
(ii) Inverse of \( A_2 \)
Form \( [A_2 \mid I] \) and proceed as before. Here the left block develops a zero row: \( R_1 \leftarrow R_1 + 2R_2 \) and \( R_3 \leftarrow R_3 + R_2 \) both produce the row \( (0, -1, 1) \), so \( R_1 \leftarrow R_1 - R_3 \) then gives a zero row. Hence \( \det(A_2) = 0 \), so \( A_2 \) is singular and its inverse does not exist.
(iii) Inverse of \( A_3 \)
Start with
\[ [A_3 \mid I] = \left[ \begin{array}{ccc|ccc} 2 & 0 & -1 & 1 & 0 & 0 \\ 5 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 3 & 0 & 0 & 1 \end{array} \right]. \]
Applying suitable row operations (for example, make the first entry in the first column 1 by \( R_1 \leftarrow \tfrac{1}{2} R_1 \), eliminate the other entries in the first column, and proceed similarly for other columns), we eventually obtain
\[ [I \mid A_3^{-1}] = \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 3 & -1 & 1 \\ 0 & 1 & 0 & -15 & 6 & -5 \\ 0 & 0 & 1 & 5 & -2 & 2 \end{array} \right]. \]
Thus
\[ A_3^{-1} = \begin{bmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{bmatrix}. \]
Therefore:
\( A_1^{-1} = \begin{bmatrix} -7 & -9 & 10 \\ -12 & -15 & 17 \\ 1 & 1 & -1 \end{bmatrix}, \quad A_2^{-1} \) does not exist, and \( A_3^{-1} = \begin{bmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{bmatrix}. \)
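These results are easy to confirm numerically; the check below multiplies each claimed inverse back against its matrix and tests \( \det(A_2) = 0 \) (numpy again, for illustration).

```python
import numpy as np

A1 = np.array([[ 2, -1,  3], [-5,  3, 1], [-3, 2,  3]])
A2 = np.array([[ 2,  3, -3], [-1, -2, 2], [ 1, 1, -1]])
A3 = np.array([[ 2,  0, -1], [ 5,  1, 0], [ 0, 1,  3]])

A1_inv = np.array([[ -7,  -9, 10], [-12, -15, 17], [1,  1, -1]])
A3_inv = np.array([[  3,  -1,  1], [-15,   6, -5], [5, -2,  2]])

print(np.allclose(A1 @ A1_inv, np.eye(3)))   # True
print(np.isclose(np.linalg.det(A2), 0.0))    # True: A2 is singular
print(np.allclose(A3 @ A3_inv, np.eye(3)))   # True
```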
Express the matrix
\[ A = \begin{bmatrix} 2 & 3 & 1 \\ 1 & -1 & 2 \\ 4 & 1 & 2 \end{bmatrix} \]
as the sum of a symmetric and a skew-symmetric matrix.
Any square matrix \( A \) can be expressed uniquely as the sum of a symmetric matrix \( S \) and a skew-symmetric matrix \( K \), given by
\[ S = \dfrac{1}{2}(A + A'), \qquad K = \dfrac{1}{2}(A - A'). \]
Here
\[ A = \begin{bmatrix} 2 & 3 & 1 \\ 1 & -1 & 2 \\ 4 & 1 & 2 \end{bmatrix}, \quad A' = \begin{bmatrix} 2 & 1 & 4 \\ 3 & -1 & 1 \\ 1 & 2 & 2 \end{bmatrix}. \]
Compute
\[ A + A' = \begin{bmatrix} 4 & 4 & 5 \\ 4 & -2 & 3 \\ 5 & 3 & 4 \end{bmatrix}, \quad S = \dfrac{1}{2}(A + A') = \begin{bmatrix} 2 & 2 & \tfrac{5}{2} \\ 2 & -1 & \tfrac{3}{2} \\ \tfrac{5}{2} & \tfrac{3}{2} & 2 \end{bmatrix}. \]
Similarly,
\[ A - A' = \begin{bmatrix} 0 & 2 & -3 \\ -2 & 0 & 1 \\ 3 & -1 & 0 \end{bmatrix}, \quad K = \dfrac{1}{2}(A - A') = \begin{bmatrix} 0 & 1 & -\tfrac{3}{2} \\ -1 & 0 & \tfrac{1}{2} \\ \tfrac{3}{2} & -\tfrac{1}{2} & 0 \end{bmatrix}. \]
Matrix \( S \) is symmetric since \( S' = S \), and matrix \( K \) is skew-symmetric since \( K' = -K \).
Finally,
\[ S + K = \begin{bmatrix} 2 & 2 & \tfrac{5}{2} \\ 2 & -1 & \tfrac{3}{2} \\ \tfrac{5}{2} & \tfrac{3}{2} & 2 \end{bmatrix} + \begin{bmatrix} 0 & 1 & -\tfrac{3}{2} \\ -1 & 0 & \tfrac{1}{2} \\ \tfrac{3}{2} & -\tfrac{1}{2} & 0 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 1 \\ 1 & -1 & 2 \\ 4 & 1 & 2 \end{bmatrix} = A. \]
Thus \( A \) has been expressed as the sum of a symmetric matrix and a skew-symmetric matrix:
\[ A = \underbrace{\begin{bmatrix} 2 & 2 & \tfrac{5}{2} \\ 2 & -1 & \tfrac{3}{2} \\ \tfrac{5}{2} & \tfrac{3}{2} & 2 \end{bmatrix}}_{\text{symmetric}} + \underbrace{\begin{bmatrix} 0 & 1 & -\tfrac{3}{2} \\ -1 & 0 & \tfrac{1}{2} \\ \tfrac{3}{2} & -\tfrac{1}{2} & 0 \end{bmatrix}}_{\text{skew-symmetric}}. \]
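The decomposition is equally easy to check numerically; the numpy sketch below recomputes \( S \) and \( K \) from the formulae and verifies the three defining properties.

```python
import numpy as np

A = np.array([[2, 3, 1], [1, -1, 2], [4, 1, 2]], dtype=float)
S = (A + A.T) / 2    # symmetric part
K = (A - A.T) / 2    # skew-symmetric part

print(np.allclose(S, S.T))    # True: S' = S
print(np.allclose(K, -K.T))   # True: K' = -K
print(np.allclose(S + K, A))  # True: A = S + K
```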