Advanced Algebra Midterm


\(\large\textcolor{blue}{\mbox{Advanced Algebra } \small \mathbf{Midterm}}\ \ \ \ \ \ _\textcolor{blue}{2022.4.21}\)

Problem 1\(\ \small \mbox{Quaternions}\)

The quaternions are widely applied in computer graphics and computer games, and can also be used to simplify calculations that involve angles or rotations. They are defined as \(a+b i+c j+d k\), where \(a, b, c, d \in \mathbb{R}\) and \(i^{2}=j^{2}=k^{2}=i j k=-1\). (In particular, the multiplication is defined as \[ (a+b i+c j+d k)\left(a^{\prime}+b^{\prime} i+c^{\prime} j+d^{\prime} k\right)=\left(a a^{\prime}-b b^{\prime}-c c^{\prime}-d d^{\prime}\right)+\left(a b^{\prime}+b a^{\prime}+c d^{\prime}-d c^{\prime}\right) i\\+\left(a c^{\prime}+c a^{\prime}+d b^{\prime}-b d^{\prime}\right) j+\left(a d^{\prime}+d a^{\prime}+b c^{\prime}-c b^{\prime}\right) k \] You don't really need to know this formula for this problem though.)
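(For readers who want to experiment: here is a direct transcription of this multiplication rule into Python. Representing a quaternion as a tuple \((a,b,c,d)\) is my own convention, not part of the problem.)

```python
# Quaternion p = a + bi + cj + dk represented as the tuple (a, b, c, d).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,     # real part
            a*f + b*e + c*h - d*g,     # i part
            a*g + c*e + d*f - b*h,     # j part
            a*h + d*e + b*g - c*f)     # k part

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(qmul(i, j), k) == minus_one           # ijk = -1
```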

\((1)\) Recall that we can formulate complex numbers as matrices. Similarly, let us try this for quaternions. Consider matrices of the form \[ a\left[\begin{array}{cccc}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right]+b\left[\begin{array}{cccc}0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0\end{array}\right]+c\left[\begin{array}{cccc}0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0\end{array}\right]+d\left[\begin{array}{cccc}0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\end{array}\right] \] where \(a, b, c, d \in \mathbb{R}\). Show that this gives a model of the quaternions as well, i.e., they satisfy \(i^{2}=j^{2}=k^{2}=i j k=-1\).

\((2)\) You may interpret \(\left[\begin{array}{l}a \\ b \\ c \\ d\end{array}\right] \in \mathbb{R}^{4}\) as the quaternion \(a+b i+c j+d k\). Then for any quaternion \(q\), multiplying \(q\) to \(a+b i+c j+d k\) from the left gives a linear map \(\mathbb{R}^{4} \rightarrow \mathbb{R}^{4}:(a+b i+c j+d k) \mapsto q(a+b i+c j+d k)\). If \(q=r+x i+y j+z k\), what is the matrix \(L_{q}\) for this linear map? What is the matrix \(R_{q}\) for the linear map if we multiply \(q\) from the right? Do we have \(L_{q} R_{q}=R_{q} L_{q}\) ?

\((3)\) The conjugate of a quaternion \(q=a+b i+c j+d k\) is \(\bar{q}=a-b i-c j-d k\). Show that for a unit quaternion \(q\) (i.e., \(q \bar{q}=1\) ), the matrix \(L_{q} R_{\bar{q}}\) has block form \(\left[\begin{array}{cc}1 & 0 \\ 0 & Q\end{array}\right]\) where \(Q\) is an orthogonal matrix. (Hint: The block form is easy. To see that \(Q\) is orthogonal, an easy way is to show that \(\left(L_{q} R_{\bar{q}}\right)^{T}\left(L_{q} R_{\bar{q}}\right)=I\) by understanding the meaning of the matrices involved.) (In particular, if you interpret a 3D vector \((x, y, z)\) as the quaternion \(v=0+x i+y j+z k\), then the quaternion multiplication \(q v \bar{q}\) corresponds to some rotation of \(v\). This quaternion interpretation is currently one of the best ways to compute 3D rotations in real life. Many of your games with 3D graphics depend on this.)


\((1)\) Just calculate it: square each of the three non-identity matrices, and multiply them together, to check \(i^{2}=j^{2}=k^{2}=ijk=-I\).
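(A quick numerical sanity check of these identities, with the four basis matrices copied from the statement:)

```python
import numpy as np

I4 = np.eye(4)
i = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
j = np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float)
k = np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], float)

# i^2 = j^2 = k^2 = ijk = -I in this matrix model
for M in (i @ i, j @ j, k @ k, i @ j @ k):
    assert np.allclose(M, -I4)
```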

\((2)\) \(L_{q}=\begin{pmatrix}r&-x&-y&-z\\x&r&-z&y\\y&z&r&-x\\z&-y&x&r\end{pmatrix},R_{q}=\begin{pmatrix}r&-x&-y&-z\\x&r&z&-y\\y&-z&r&x\\z&y&-x&r\end{pmatrix}\)

\(R_qL_q=\begin{pmatrix}-x^2-y^2-z^2+r^2&-2rx&-2ry&-2rz\\2rx&-x^2+r^2+y^2+z^2&-2xy&-2xz\\2yr&-2yx&-y^2+z^2+r^2+x^2&-2zy\\2zr&2zx&-2zy&-z^2+r^2+x^2+y^2\end{pmatrix}\)

\(L_qR_q=\begin{pmatrix}-x^2-y^2-z^2+r^2&-2rx&-2ry&-2rz\\2rx&-x^2+r^2+y^2+z^2&-2xy&-2xz\\2yr&-2yx&-y^2+z^2+r^2+x^2&-2zy\\2zr&2zx&-2zy&-z^2+r^2+x^2+y^2\end{pmatrix}\)

So \(L_qR_q=R_qL_q\). This is also obvious from the underlying linear maps: since quaternion multiplication is associative, both \(L_qR_q\) and \(R_qL_q\) represent the same map \(\mathbb{R}^{4} \rightarrow \mathbb{R}^{4}:(a+b i+c j+d k) \mapsto q(a+b i+c j+d k)q\)
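(As a numerical sketch, one can build \(L_q\) and \(R_q\) from the formulas above for a random \(q\) and confirm they commute:)

```python
import numpy as np

def Lq(r, x, y, z):
    return np.array([[r, -x, -y, -z], [x, r, -z, y],
                     [y, z, r, -x], [z, -y, x, r]], float)

def Rq(r, x, y, z):
    return np.array([[r, -x, -y, -z], [x, r, z, -y],
                     [y, -z, r, x], [z, y, -x, r]], float)

q = np.random.randn(4)
assert np.allclose(Lq(*q) @ Rq(*q), Rq(*q) @ Lq(*q))  # L_q R_q = R_q L_q
```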

\((3)\) Calculate that \(q\bar{q}=1\Longleftrightarrow a^2+b^2+c^2+d^2=1\). \[ \begin{gathered} L_qR_{\bar{q}}=\begin{pmatrix}a&-b&-c&-d\\b&a&-d&c\\c&d&a&-b\\d&-c&b&a\end{pmatrix}\begin{pmatrix}a&b&c&d\\-b&a&-d&c\\-c&d&a&-b\\-d&-c&b&a\end{pmatrix}\\=\begin{pmatrix}1&0&0&0\\0&a^2+b^2-c^2-d^2&2(bc-ad)&2(ac+bd)\\0&2(ad+bc)&a^2+c^2-b^2-d^2&2(cd-ab)\\0&2(bd-ac)&2(ab-cd)&a^2+d^2-b^2-c^2\end{pmatrix} \end{gathered} \] Judging directly whether this matrix is orthogonal is hard. So instead compute \[ \left(L_{q} R_{\bar{q}}\right)^{T}\left(L_{q} R_{\bar{q}}\right)=R_{\bar{q}}^TL_q^TL_qR_{\bar{q}} \] A short calculation (note \(L_q^T=L_{\bar q}\), and \(L_{\bar q}L_q\) represents multiplication by \(\bar qq=1\)) gives \(L_q^TL_q=I_4\) and \(R_{\bar{q}}^TR_{\bar{q}}=I_4\), so \(\left(L_{q} R_{\bar{q}}\right)^{T}\left(L_{q} R_{\bar{q}}\right)=I_4\). Since \(L_qR_{\bar q}\) has the block form \(\begin{pmatrix}1&0\\0&Q\end{pmatrix}\) computed above, this forces \(Q^TQ=I_3\), i.e. \(Q\) is orthogonal.
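(A numerical check with a random unit quaternion, redefining the \(L_q\), \(R_q\) helpers so the snippet runs on its own:)

```python
import numpy as np

def Lq(r, x, y, z):
    return np.array([[r, -x, -y, -z], [x, r, -z, y],
                     [y, z, r, -x], [z, -y, x, r]], float)

def Rq(r, x, y, z):
    return np.array([[r, -x, -y, -z], [x, r, z, -y],
                     [y, -z, r, x], [z, y, -x, r]], float)

q = np.random.randn(4)
q /= np.linalg.norm(q)                      # random unit quaternion
qbar = q * np.array([1, -1, -1, -1])        # conjugate
M = Lq(*q) @ Rq(*qbar)
assert np.allclose(M[0], [1, 0, 0, 0]) and np.allclose(M[:, 0], [1, 0, 0, 0])
Q = M[1:, 1:]
assert np.allclose(Q.T @ Q, np.eye(3))      # Q is orthogonal
```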

Problem 2\(\ \small \mbox{Drazin Inverse and Differential Equation}\)

Given an unknown vector-valued function \(\boldsymbol{v}(t)\), we know how to solve \(\boldsymbol{v}^{\prime}=A \boldsymbol{v}\) for a constant matrix \(A\). But what if we have \(A \boldsymbol{v}^{\prime}+B \boldsymbol{v}=\mathbf{0}\) for constant matrices? If \(A\) is invertible, we can reorganize this into \(\boldsymbol{v}^{\prime}=-A^{-1} B \boldsymbol{v}\) and solve it easily. But what if \(A\) is not invertible?

Here we introduce the Drazin inverse of a matrix. Recall that for any matrix \(A\), according to the ultimate decomposition, \(A=X\left[\begin{array}{cc}A_{R} & \\ & A_{N}\end{array}\right] X^{-1}\) where \(A_{R}\) is invertible and \(A_{N}\) is nilpotent. Then we define \(A^{(\mathrm{D})}=X\left[\begin{array}{ll}A_{R}^{-1} & \\ & 0\end{array}\right] X^{-1}\) as the Drazin inverse of \(A\).

\((1)\) Show that if \(\left[\begin{array}{ll}R & \\ & N\end{array}\right]=X\left[\begin{array}{cc}R^{\prime} & \\ & N^{\prime}\end{array}\right] X^{-1}\) where \(R, R^{\prime}\) are invertible and \(N, N^{\prime}\) are nilpotent, then \(\left[\begin{array}{cc}R^{-1} & \\ & 0\end{array}\right]=X\left[\begin{array}{cc}\left(R^{\prime}\right)^{-1} & \\ & 0\end{array}\right] X^{-1}\). (This shows that the Drazin inverse of \(A\) is unique and does not depend on the choice of invertible-nilpotent decomposition \(A=X\left[\begin{array}{cc}A_{R} & \\ & A_{N}\end{array}\right] X^{-1}\).)

\((2)\) Show that \(A A^{(\mathrm{D})}=A^{(\mathrm{D})} A, A^{(\mathrm{D})} A A^{(\mathrm{D})}=A^{(\mathrm{D})}\), and \(A^{(\mathrm{D})} A^{k+1}=A^{k}\) where \(k\) is the smallest integer such that \(\operatorname{Ker}\left(A^{k}\right)=\operatorname{Ker}\left(A^{k+1}\right)\).

\((3)\) Calculate \(\left(\boldsymbol{a} \boldsymbol{b}^{*}\right)^{(\mathrm{D})}\) for non-zero vectors \(\boldsymbol{a}, \boldsymbol{b} \in \mathbb{C}^{n}\). (Hint: Harder if you use brute force calculation with the definition. Easier if you can guess it out right, and prove that you have the right guess.)

\((4)\) For fixed \(A\), show that we can find a polynomial \(p(x)\) such that \(A^{(\mathrm{D})}=p(A)\). (Again, note that for different \(A\) this polynomial may be different.) (Hint: Suppose \(A_{R}^{-1}=q\left(A_{R}\right)\) for a polynomial \(q(x)\).)

\((5)\) If \(A B=B A\), show that \(\mathrm{e}^{-A^{(\mathrm{D})} B t} A A^{(\mathrm{D})} \boldsymbol{v}_{0}\) is a solution to \(A \boldsymbol{v}^{\prime}+B \boldsymbol{v}=\mathbf{0}\) for any constant vector \(\boldsymbol{v}_{0}\). (This is one of the results in a paper by Campbell, Meyer and Rose in \(1976\).)


\((1)\) Set \(X=\begin{bmatrix}A& B\\C&D\end{bmatrix}\), so the equation can be rewritten as \[ \left[\begin{array}{ll}R & \\ & N\end{array}\right]=X\left[\begin{array}{cc}R^{\prime} & \\ & N^{\prime}\end{array}\right] X^{-1}\ \Longrightarrow\ \begin{bmatrix}R&\\&N\end{bmatrix}\begin{bmatrix}A& B\\C&D\end{bmatrix}=\begin{bmatrix}A& B\\C&D\end{bmatrix}\begin{bmatrix}R'&\\&N'\end{bmatrix} \] For each block we have \(RA=AR',\ RB=BN',\ NC=CR',\ ND=DN'\). Use induction to prove

\(R^nB=BN'^{n}\): if \(R^{k-1}B=BN'^{k-1}\) holds for \(n=k-1\), then \(R^{k}B=RBN'^{k-1}=BN'^{k}\).

And \(N'\) is a nilpotent matrix, so after finitely many steps this becomes \(R^{n}B=BN'^{n}=O\), while \(R\) is invertible.

So \(B=O\); the same argument applied to \(NC=CR'\) gives \(C=O\), so \(X=\begin{bmatrix}A&\\&D\end{bmatrix}\). Since \(RA=AR'\), also \(R^{-1}A=A(R')^{-1}\), hence \[ \begin{bmatrix}R^{-1}& O\\O&O\end{bmatrix}\begin{bmatrix}A& O\\O&D\end{bmatrix}=\begin{bmatrix}R^{-1}A& O\\O&O\end{bmatrix}=\begin{bmatrix}A(R')^{-1}& O\\O&O\end{bmatrix}=\begin{bmatrix}A& O\\O&D\end{bmatrix}\begin{bmatrix}(R')^{-1}& O\\O&O\end{bmatrix} \] So \(\left[\begin{array}{cc}R^{-1} & \\ & 0\end{array}\right]=X\left[\begin{array}{cc}\left(R^{\prime}\right)^{-1} & \\ & 0\end{array}\right] X^{-1}\), so the Drazin inverse is unique.

\((2)\) \(AA^{(D)}=X\left[\begin{array}{cc}A_{R} & \\ & A_{N}\end{array}\right] X^{-1}X\left[\begin{array}{ll}A_{R}^{-1} & \\ & 0\end{array}\right] X^{-1}=X\begin{bmatrix}I&O\\O&O\end{bmatrix}X^{-1}\)

\(A^{(D)}A=X\left[\begin{array}{ll}A_{R}^{-1} & \\ & 0\end{array}\right] X^{-1}X\left[\begin{array}{cc}A_{R} & \\ & A_{N}\end{array}\right] X^{-1}=X\begin{bmatrix}I&O\\O&O\end{bmatrix}X^{-1}=AA^{(D)}\)

\(A^{(D)}AA^{(D)}=X\begin{bmatrix}I&O\\O&O\end{bmatrix}X^{-1}X\left[\begin{array}{ll}A_{R}^{-1} & \\ & 0\end{array}\right] X^{-1}=X\left[\begin{array}{ll}A_{R}^{-1} & \\ & 0\end{array}\right] X^{-1}=A^{(D)}\)

By induction, \(A^{(D)}A^{k+1}=X\left[\begin{array}{ll}A_{R}^{-1} & \\ & O\end{array}\right] X^{-1}X\left[\begin{array}{cc}A_{R}^{k+1} & \\ & A_{N}^{k+1}\end{array}\right] X^{-1}=X\begin{bmatrix}A_R^{k}&\\&O\end{bmatrix}X^{-1}\)

And \(A^{k}=X\left[\begin{array}{cc}A_{R}^k & \\ & A_{N}^k\end{array}\right] X^{-1}\), so \(\ker{(A^{k})}=\ker{(A^{k+1})}\) implies \(\ker{(A_N^k)}=\ker{(A_{N}^{k+1})}\).

Let \(m\) be the smallest number with \(A_N^m=O\), so \(k\leq m\). Once consecutive kernels agree they stay equal: if \(A_N^{k+2}\vec{x}=\vec{0}\), then \(A_N^{k+1}(A_N\vec{x})=\vec{0}\), hence \(A_N^{k}(A_N\vec{x})=\vec{0}\), i.e. \(A_N^{k+1}\vec{x}=\vec{0}\). Therefore

\(\ker{(A_{N}^{k})}=\ker{(A_{N}^{k+1})}=\cdots =\ker{(A_{N}^{m})}\), which is the whole space.

So \(A_N^{k}=O\), and therefore \(A^{(\mathrm{D})} A^{k+1}=X\begin{bmatrix}A_R^{k}&\\&O\end{bmatrix}X^{-1}=A^{k}\)
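(A toy numerical check of these identities. The matrix below is my own example: it is idempotent, so its invertible-nilpotent decomposition is \(X\operatorname{diag}(1,0)X^{-1}\), giving \(A^{(D)}=A\) with index \(k=1\).)

```python
import numpy as np

A = np.array([[1., 1.], [0., 0.]])   # A @ A == A, so A^(D) = A and k = 1
AD = A
assert np.allclose(A @ AD, AD @ A)                        # A A^D = A^D A
assert np.allclose(AD @ A @ AD, AD)                       # A^D A A^D = A^D
assert np.allclose(AD @ np.linalg.matrix_power(A, 2), A)  # A^D A^{k+1} = A^k
```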

\((3)\) I wasn't able to solve this question.

\((4)\) Since \(A_R\) is invertible, \(0\) is not an eigenvalue of \(A_R\), so the constant term of its minimal polynomial is non-zero: write the minimal polynomial as \(m(x)=x\,m_1(x)+a\) with \(a\neq 0\). From \(m(A_R)=O\) we get \(m_1(A_R)A_R=-aI\).

Define \(q(x)=-\dfrac{m_1(x)}{a}\); this is a polynomial with \(A_R^{-1}=q(A_R)\). Also let

\(k\) be the smallest integer such that \(\operatorname{Ker}\left(A^{k}\right)=\operatorname{Ker}\left(A^{k+1}\right)\). According to \(\small(2)\), we have

\(A_N^k=O,\ A^k=X\begin{bmatrix}A_R^k&O\\O&O\end{bmatrix}X^{-1}\), and \(q(A)^{k+1}=X\begin{pmatrix}(A_{R}^{-1})^{k+1}&O\\O&q(A_N)^{k+1}\end{pmatrix}X^{-1}\). Multiplying,

\(A^{k}q(A)^{k+1}=X\begin{bmatrix}A_R^k&O\\O&O\end{bmatrix}X^{-1}X\begin{pmatrix}(A_{R}^{-1})^{k+1}&O\\O&q(A_N)^{k+1}\end{pmatrix}X^{-1}=X\begin{bmatrix}A_R^{-1}&O\\O&O\end{bmatrix}X^{-1}=A^{(D)}\)

So \(p(x)=x^{k}q(x)^{k+1}\) works.

\((5)\) First, \(A^{(D)}\) and \(B\) commute. Let \(B^{\prime}=X^{-1}BX\), so that \[ \begin{pmatrix}A_{R}&O\\O&A_N\end{pmatrix}B^{\prime}=B^{\prime}\begin{pmatrix}A_{R}&O\\O&A_N\end{pmatrix} \] As in question \(\small(1)\), the off-diagonal blocks of \(B'\) vanish (the same invertible-vs-nilpotent argument), so \(B^{\prime}=\begin{pmatrix}B_1&O\\O&B_4\end{pmatrix}\) with \(B_1A_R=A_RB_1\).

Then \(B_1A_R^{-1}=A_R^{-1}B_1\), so \(\begin{pmatrix}A_{R}^{-1}&O\\O&O\end{pmatrix}B^{\prime}=B^{\prime}\begin{pmatrix}A_{R}^{-1}&O\\O&O\end{pmatrix}\), i.e. \(A^{(D)}B=BA^{(D)}\).

Secondly, because \(AB=BA\) and \(AA^{(D)}=A^{(D)}A\), we can prove that \((A^{(D)}Bt)^{i}A=A(A^{(D)}Bt)^{i}\):

the case \(i=0\) is trivial, and if it holds for \(i=k\), then \((A^{(D)}Bt)^{k+1}A=A^{(D)}Bt(A^{(D)}Bt)^{k}A=A^{(D)}BtA(A^{(D)}Bt)^{k}=A(A^{(D)}Bt)^{k+1}\).

By induction \((A^{(D)}Bt)^{i}A=A(A^{(D)}Bt)^{i}\) holds for all \(i\), and the same induction gives

\((A^{(D)}Bt)^{i}A^{(D)}=A^{(D)}(A^{(D)}Bt)^{i}\). Now compute, writing \(\mathrm{e}^{-A^{(D)}Bt}=\displaystyle\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}\): \[ \begin{gathered} A \boldsymbol{v}^{\prime}=-AA^{(D)}Be^{-A^{(D)}Bt}AA^{(D)}\boldsymbol{v}_{0}=-AA^{(D)}B\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}AA^{(D)}\boldsymbol{v}_{0}\\ =-AA^{(D)}BA\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}A^{(D)}\boldsymbol{v}_{0}=-AA^{(D)}ABA^{(D)}\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}\boldsymbol{v}_{0}\\ =-BA\boxed{A^{(D)}AA^{(D)}}\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}\boldsymbol{v}_{0}=-BA\boxed{A^{(D)}}\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}\boldsymbol{v}_{0}\\ =-BA\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}A^{(D)}\boldsymbol{v}_{0}=-B\sum_{i=0}^{+\infty}\dfrac{(-1)^{i}}{i!}(A^{(D)}Bt)^{i}AA^{(D)}\boldsymbol{v}_{0}=-B\boldsymbol {v} \end{gathered} \] So \(\mathrm{e}^{-A^{(\mathrm{D})} B t} A A^{(\mathrm{D})} \boldsymbol{v}_{0}\) is a solution to \(A \boldsymbol{v}^{\prime}+B \boldsymbol{v}=\mathbf{0}\)
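(A quick numerical check on a singular example of my own choosing: \(A=\operatorname{diag}(1,0)\) is not invertible, \(B=I\) commutes with it, and here \(A^{(D)}=A\).)

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([1., 0.])          # singular
B = np.eye(2)                  # commutes with A
AD = np.diag([1., 0.])         # Drazin inverse of A (A is idempotent)
v0 = np.array([2., 3.])

def v(t):                      # proposed solution e^{-A^D B t} A A^D v0
    return expm(-AD @ B * t) @ (A @ AD @ v0)

def vprime(t):                 # its derivative: -A^D B v(t)
    return -AD @ B @ v(t)

for t in (0.0, 0.5, 2.0):
    assert np.allclose(A @ vprime(t) + B @ v(t), 0)   # A v' + B v = 0
```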

Problem 3\(\ \small \mbox{Sherman-Morrison-Woodbury Formula}\)

The famous Sherman-Morrison-Woodbury formula states that, for any \(m \times m\) invertible matrix \(X, n \times n\) invertible matrix \(Y, m \times n\) matrix \(A\) and \(n \times m\) matrix \(B\), we have \((X-A Y B)^{-1}=X^{-1}+X^{-1} A\left(Y^{-1}-B X^{-1} A\right)^{-1} B X^{-1}\). This can be proven using block eliminations on \(\left[\begin{array}{cc}X & -A \\ B & Y\end{array}\right]\). Well, I always find that proof annoying. So let us not do that, and try to find some alternative proofs. To simplify, you can easily see that it is enough to establish the special case \(\left(I_{m}-A B\right)^{-1}=I_{m}+A\left(I_{n}-B A\right)^{-1} B\).

Consider the function \(f(x)=(1-x)^{-1}\). This function has a Taylor expansion \(f(x)=1+x+x^{2}+\ldots\) for all \(|x|<1\) over the complex numbers. (This is also the sum of the geometric series, which you should have learned about in high school.)

\((1)\) For any \(m \times n\) matrix \(A\) and \(n \times m\) matrix \(B\), suppose \(I_{m}-A B\) is invertible and all eigenvalues of \(AB\) have absolute value less than 1. Write \(\left(I_{m}-A B\right)^{-1}\) as the sum of a series of matrices.

\((2)\) Using the above idea, deduce the formula \(\left(I_{m}-A B\right)^{-1}=I_{m}+A\left(I_{n}-B A\right)^{-1} B\), when \(I_{m}-A B\) and \(I_{n}-B A\) are invertible and all eigenvalues of \(A B\) and \(B A\) have absolute value less than \(1\). (Btw, note that \(A B\) and \(B A\) always have the same non-zero eigenvalues. You don't need this fact though.)

\((3)\) Oops, unfortunately, the method above does not always work. It has many annoying requirements on eigenvalues. Let us now forget about Taylor expansion. In general, prove that \(A p(B A)=p(A B) A\) for all polynomials \(p(x)\). (Hint: try \(p(x)=x\) first.)

\((4)\) Show that for any function \(f(x)\) and any two square matrices \(X, Y\), we can find a polynomial \(p(x)\) such that \(p(X)=f(X)\) and \(p(Y)=f(Y)\) simultaneously. (Hint: block matrix.)

\((5)\) Show that \(A f(B A)=f(A B) A\) as long as \(f(A B)\) and \(f(B A)\) are defined.

\((6)\) Verify that \(f(A B)=I_{m}+A f(B A) B\), using the identity above.


\((1)\) Let \(S=I_m+AB+(AB)^2+\cdots\). All eigenvalues of \(AB\) have absolute value less than \(1\), so the series converges.

And \((I_m-AB)S=(I_m+AB+(AB)^2+\cdots)-(AB+(AB)^2+\cdots)=I_m\), so

\(S=(I_m-AB)^{-1}=I_m+AB+(AB)^2+\cdots=\displaystyle \sum_{i=0}^{+\infty}(AB)^{i}\)
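(A numerical sketch of the truncated series; the rescaling of \(A\) below is just my way of guaranteeing that the spectral radius of \(AB\) is \(1/2\), so the series converges:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 3))
A /= 2 * np.max(np.abs(np.linalg.eigvals(A @ B)))   # force spectral radius 1/2

S = np.eye(3)
term = np.eye(3)
for _ in range(100):            # partial sums of I + AB + (AB)^2 + ...
    term = term @ (A @ B)
    S += term
assert np.allclose(S, np.linalg.inv(np.eye(3) - A @ B))
```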

\((2)\) \(RHS=I_m+\displaystyle A\sum_{i=0}^{+\infty}(BA)^{i}B=I_m+\sum_{i=0}^{+\infty}A(BA)^{i}B=I_m+\sum_{i=0}^{+\infty}(AB)^{i+1}=\sum_{i=0}^{+\infty}(AB)^{i}=LHS\), since \(A(BA)^{i}B=(AB)^{i+1}\).

\((3)\) Write \(p(x)=\displaystyle \sum_{i=0}^{n}a_ix^{i}\); by linearity it suffices to prove the claim for each monomial \(x^{n}\).

The case \(n=0\) is trivial. If \(n=k\) holds, then \(A(BA)^{k+1}=A(BA)^{k}(BA)=(AB)^{k}ABA=(AB)^{k+1}A\), so \(n=k+1\) holds. By induction \(A(BA)^{n}=(AB)^{n}A\) for every monomial; adding them together with the coefficients \(a_i\),

\(A p(B A)=p(A B) A\)

\((4)\) Construct the block matrix \(M=\begin{pmatrix}X&O\\O&Y\end{pmatrix}\). For any fixed matrix \(M\) we can find a polynomial \(p(x)\) such that \(p(M)=f(M)\): for example, reduce the Taylor series of \(f\) modulo the minimal polynomial of \(M\).

Then \(p\left(\begin{pmatrix}X&O\\O&Y\end{pmatrix}\right)=f\left(\begin{pmatrix}X&O\\O&Y\end{pmatrix}\right)\), and both sides are block diagonal, so \(p(X)=f(X)\) and \(p(Y)=f(Y)\) simultaneously.

\((5)\) Fix \(A\) and \(B\). According to the last question, we can find a single polynomial \(p(x)\) with \(p(AB)=f(AB)\) and \(p(BA)=f(BA)\) simultaneously. Then, according to question \(\small (3)\), whenever \(f(A B)\) and \(f(B A)\) are defined, \[ A f(B A)=Ap(BA)=p(AB)A=f(A B) A \] \((6)\) Take \(f(x)=(1-x)^{-1}\), so \(f(AB)=(I_m-AB)^{-1}\) and \((I_m-AB)f(AB)=I_m\), i.e. \(f(AB)=I_m+ABf(AB)\). Using the identity from \(\small(5)\), \[ I_m+Af(BA)B=I_m+f(AB)AB=I_m+ABf(AB)=f(AB) \] So we have proved the equality \(\left(I_{m}-A B\right)^{-1}=I_{m}+A\left(I_{n}-B A\right)^{-1} B\)
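(And a direct numerical confirmation of the final identity for random rectangular \(A\), \(B\); no eigenvalue restriction is needed here beyond invertibility of \(I_m-AB\) and \(I_n-BA\), which holds for generic random matrices:)

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.inv(np.eye(m) - A @ B)                       # (I_m - AB)^{-1}
rhs = np.eye(m) + A @ np.linalg.inv(np.eye(n) - B @ A) @ B   # I_m + A (I_n - BA)^{-1} B
assert np.allclose(lhs, rhs)
```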

Problem 4\(\ \small \mbox{Equations of Matrices}\)

Let \(N\) be the \(n \times n\) nilpotent Jordan block.

\((1)\) Show that the solutions to the Sylvester equation \(N X-X N=0\) are exactly the polynomials of \(N\).

\((2)\) Suppose \(Y=\mathrm{e}^{N}\). Show that \(Y, Y-I,(Y-I)^{2}, \ldots,(Y-I)^{n-1}\) are linearly independent in the space of matrices, and they span the space of matrices made of polynomials of \(N\). (Consequently, \(N\) is a polynomial of \(Y\).)

\((3)\) Find all solutions \(X\) to the matrix equation \(\mathrm{e}^{X}=\mathrm{e}^{N}\). (Make sure to consider COMPLEX matrices \(X\).)

\((4)\) Find real matrices \(A, B\) such that \(A B \neq B A\) but \(\mathrm{e}^{A}=\mathrm{e}^{B}\). (Hint: for complex \(1 \times 1\) matrices, try to find \(x \neq y \in \mathbb{C}\) such that \(\mathrm{e}^{x}=\mathrm{e}^{y}\).)

\((5)\) Prove that there is no solution \(X\) to the equation \(\sin (X)=\left[\begin{array}{cc}1 & 1996 \\ 0 & 1\end{array}\right]\). (This is a Putnam competition problem. I'm sure you know the year of the competition....)


\((1)\) Let \(N=\left[\begin{array}{ccccc}0 & & & & \\ 1 & 0 & & & \\ & \ddots & \ddots & & \\ & & \ddots & \ddots & \\ & & & 1 & 0\end{array}\right]\) (taking the \(1\)s on the subdiagonal), \(X=\left[\begin{array}{cccc}a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n n}\end{array}\right]\)

\(NX=\left[\begin{array}{cccc}0&0&\cdots&0\\a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & & \vdots \\ a_{(n-1) 1} & a_{(n-1) 2} & \cdots & a_{(n-1) n}\end{array}\right],\quad XN=\left[\begin{array}{ccccc}a_{12} & a_{13} & \cdots & a_{1 n}&0 \\ a_{22} & a_{23} & \cdots & a_{2 n}&0 \\ \vdots & \vdots & & \vdots&\vdots \\ a_{n 2} & a_{n 3} & \cdots & a_{n n}&0\end{array}\right],\)

Comparing \(NX=XN\) entry by entry 'kills' the entries above the diagonal, one diagonal at a time: first \(\{a_{12},a_{13},\cdots,a_{1n}\}\),

next \(\{a_{23},a_{24},\cdots ,a_{2n}\},\{a_{34},a_{35},\cdots,a_{3n}\},\cdots,\{a_{(n-1)n}\}\); it also forces constancy along diagonals, \(a_{ij}=a_{(i+k)(j+k)}\).

So \(X=\left[\begin{array}{cccc}a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{11} & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ a_{n 1} & \cdots &a_{21}& a_{11}\end{array}\right]=a_{11}I+a_{21}N+a_{31}N^{2}+\cdots +a_{n1}N^{n-1}=p(N)\)

\((2)\) Using the Taylor series, \(Y=e^{N}=\displaystyle \sum_{i=0}^{+\infty}\dfrac{N^{i}}{i!}=\sum_{i=0}^{n-1}\dfrac{N^{i}}{i!}=I+\sum_{i=1}^{n-1}\dfrac{N^{i}}{i!}\) (since \(N^{n}=O\)).

\((Y-I)^{k}=\displaystyle \Big(\sum_{i=1}^{n-1}\dfrac{N^{i}}{i!}\Big)^{k}\), whose lowest-order term in \(N\) is exactly \(N^{k}\). So for \(\{Y, Y-I,(Y-I)^{2}, \ldots,(Y-I)^{n-1}\}\)

the lowest powers of \(N\) are exactly \(0,1,2,\ldots,n-1\), and the powers \(N^{k}\) are obviously independent.

So they are linearly independent. The dimension of the space \(V\) of polynomials of \(N\) is \(n\), which equals the number of these matrices, so they span the whole space \(V\). Also \(N\in V\), so \(N\) can be written as a linear combination of this basis; each basis element is a polynomial of \(Y\). So \(N=p(e^{N})\) for some polynomial \(p\).
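(The consequence \(N=p(e^{N})\) can even be made explicit: since \(Y-I\) is nilpotent, the logarithm series terminates and is itself such a polynomial. A numerical sketch, using the superdiagonal convention for \(N\):)

```python
import numpy as np
from scipy.linalg import expm

n = 5
N = np.diag(np.ones(n - 1), k=1)     # nilpotent Jordan block
Y = expm(N)

# log(Y) = sum_{k=1}^{n-1} (-1)^{k+1} (Y - I)^k / k, finite since (Y - I)^n = 0
P = np.zeros((n, n))
term = np.eye(n)
for k in range(1, n):
    term = term @ (Y - np.eye(n))
    P += (-1) ** (k + 1) / k * term
assert np.allclose(P, N)             # recovers N as a polynomial in e^N
```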

\((3)\) By Jordan factorization, every complex matrix can be written as \[ A=PJP^{-1} \] Obviously one solution to \(e^{X}=e^{N}\) is \(X=N\); to produce more, what matters is finding complex matrices \(A_{I}\) with \(e^{A_{I}}=I\) that we can add to \(N\).

From \(e^{A_{I}}=Pe^{J_{I}}P^{-1}=I\) we get \(e^{J_{I}}=I\). If a Jordan block \(J_{k}=\begin{bmatrix}\lambda&1&0&\cdots&0\\0&\lambda&1&\cdots&0\\0&0&\lambda&\cdots&0\\\vdots&\vdots&\vdots &\ddots&1\\0&0&0&\cdots&\lambda\end{bmatrix}\) of size \(>1\) existed in \(J_{I}\),

it would produce \(\left.\dfrac{\mathrm{d} e^{x}}{\mathrm{d}x}\right|_{x=\lambda}=e^{\lambda}\neq 0\) on the superdiagonal of \(e^{J_{k}}\), causing a contradiction.

So \(J_{I}\) is a diagonal matrix, \(J_{I}=\mbox{diag}(z_1,z_2,\cdots,z_{n})\), and \(e^{z_i}=1\Longrightarrow z_{i}=2k\pi \mathrm{i},\ k\in \mathbb{Z}\)

To add such an \(A_{I}\) to \(N\), we need \(e^{N+A_{I}}=e^{N}e^{A_{I}}=e^{N}\), and \(e^{A+B}=e^{A}e^{B}\) needs \(AB=BA\).

But by \(\small(1)\), a matrix commuting with \(N\) is a polynomial of \(N\), i.e. a scalar plus a nilpotent; since \(A_{I}=P\,\mbox{diag}(2k_j\pi\mathrm{i})\,P^{-1}\) is diagonalizable, its nilpotent part must vanish, so \(A_{I}\) is a scalar matrix.

Hence only \(A_{I}=2k\pi\mathrm{i}\,I,\ k\in \mathbb{Z}\), can be added to \(N\).

So one part of the solutions is (I can't prove that these matrices contain all the solutions) \[ \{\,N+2k\pi\mathrm{i}\,I,\ k\in \mathbb{Z}\,\} \] \((4)\) Because of the rotation \(e^{x+y\mathrm{i}}=e^{x+(y+2k\pi)\mathrm{i}}\), we can easily see that \[ \Large e^{\large \begin{pmatrix}0&-2k\pi\\2k\pi&0\end{pmatrix}}\normalsize =I_2,\ k\in \mathbb{Z} \] But the 'standard complex' matrices \(aI+bJ=\begin{pmatrix}a&-b\\b&a\end{pmatrix}\) all commute with one another. So let's try other matrices representing rotations whose exponential gives \(I\), and the answer is the quaternions.

Quaternions are non-commutative: for example, \((b_1i+c_1j)(b_2i+c_2j)=-b_1b_2-c_1c_2+(b_1c_2-c_1b_2)k\)

\(\neq (b_2i+c_2j)(b_1i+c_1j)=-b_1b_2-c_1c_2+(b_2c_1-c_2b_1)k\) when \(b_1c_2\neq b_2c_1\); and in the matrix model of Problem 1, \(e^{2\pi i}=e^{2\pi j}=I_4\): \[ \begin{gathered} A=2\pi i=\begin{pmatrix}0&-2\pi&0&0\\2\pi &0&0&0\\0&0&0&-2\pi\\0&0&2\pi&0\end{pmatrix},B=2\pi j=\begin{pmatrix}0&0&-2\pi&0\\0&0&0&2\pi\\2\pi&0&0&0\\0&-2\pi&0&0\end{pmatrix}\\ AB=4\pi^2\begin{pmatrix}0&0&0&-1\\0&0&-1&0\\0&1&0&0\\1&0&0&0\end{pmatrix}\neq BA=4\pi^2\begin{pmatrix}0&0&0&1\\0&0&1&0\\0&-1&0&0\\-1&0&0&0\end{pmatrix}\\ \mbox{But}\ \ \ e^{A}=e^{B}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}=I_4 \end{gathered} \]
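(A short numerical confirmation with SciPy:)

```python
import numpy as np
from scipy.linalg import expm

t = 2 * np.pi
A = t * np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
B = t * np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float)

assert not np.allclose(A @ B, B @ A)                # AB != BA
assert np.allclose(expm(A), np.eye(4))              # e^A = I
assert np.allclose(expm(B), np.eye(4))              # e^B = I, so e^A = e^B
```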

\((5)\) Since \(f(x)=\sin x\) and \(g(x)=\cos x\) are entire, they define matrix functions, and the identity \(f(x)^2+g(x)^2=1\) carries over to matrices.

So \(\sin^2X+\cos ^2X=I\); substituting \(\sin X=\left[\begin{array}{cc}1 & 1996 \\ 0 & 1\end{array}\right]\), we get \(\cos^2 X=\left[\begin{array}{cc}0 & -3992 \\ 0 & 0\end{array}\right]\)

That is to say, \(Y=\cos X\) satisfies \(Y^{4}=(\cos^{2}X)^{2}=O\), so \(Y\) is nilpotent. But \(Y\) is a \(2\times 2\) matrix, so \(Y^{2}\) must already be \(O\),

which contradicts \(\cos^{2}X\neq O\).

Problem 5\(\ \small \mbox{Newton's Method}\)

As we have seen in class, \(\operatorname{sign}(X)\) is useful to solve certain Sylvester equations. Here we aim to find a way to approximate \(\operatorname{sign}(X)\). Given a matrix \(A\) with no purely imaginary eigenvalue, set \(X_{0}=A\), and set \(X_{n+1}=\dfrac{1}{2}\left(X_{n}+X_{n}^{-1}\right)\). (As a side note, a complex number \(z\) is purely imaginary if its real part is zero. In particular, 0 is a purely imaginary number as well. So, if a matrix has no purely imaginary eigenvalue, then it is invertible.)

\((1)\) Show that if \(X_{n}\) has no purely imaginary eigenvalue, then \(X_{n+1}\) has no purely imaginary eigenvalue. (So our inductive definition makes sense.)

\((2)\) If \(A\) is \(1 \times 1\), and it is not purely imaginary, show that \(X_{n}\) indeed converges to \(\operatorname{sign}(A)\). (This question has little to do with linear algebra...) (Hint: \(\dfrac{f(x)-1}{f(x)+1}=\left(\dfrac{x-1}{x+1}\right)^{2}\) where \(f(x)=\dfrac{1}{2}\left(x+\dfrac{1}{x}\right)\).)

\((3)\) If \(A\) is diagonalizable and has no purely imaginary eigenvalue, show that \(X_{n}\) indeed converges to \(\operatorname{sign}(A)\). (Not part of this problem. But diagonalizable matrices are dense, so you can imagine that this is true in general.)

\((4)\) Suppose \(A\) is an \(n \times n\) Jordan block with eigenvalue \(1\). Show that \(X_{n-1}=I\).


\((1)\) Use the Jordan form \(X_{n}=XJX^{-1}\): \[ J=\begin{pmatrix}J_{1}\\&\ddots\\&&J_{s}\end{pmatrix},\quad J_{i}=\begin{bmatrix}\lambda_i&1&0&\cdots&0\\0&\lambda_i&1&\cdots&0\\0&0&\lambda_i&\ddots&0\\\vdots&\vdots&\vdots &\ddots&1\\0&0&0&\cdots&\lambda_i\end{bmatrix} \] with \(\mathbf{Re}(\lambda_{i})\neq 0\). So, blockwise, \(X_n^{-1}=XJ^{-1}X^{-1}=X\begin{bmatrix}\lambda_i^{-1}&a_1&a_2&\cdots&a_{n-1}\\0&\lambda_i^{-1}&a_1&\cdots&a_{n-2}\\0&0&\lambda_i^{-1}&\cdots&a_{n-3}\\\vdots&\vdots&\vdots &\ddots&a_1\\0&0&0&\cdots&\lambda_i^{-1}\end{bmatrix}X^{-1}\)

and \(\dfrac{X_n+X_n^{-1}}{2}=\dfrac{1}{2}X\begin{bmatrix}\lambda_i+\lambda_i^{-1}&1+a_1&a_2&\cdots&a_{n-1}\\0&\lambda_i+\lambda_i^{-1}&1+a_1&\cdots&a_{n-2}\\0&0&\lambda_i+\lambda_i^{-1}&\cdots&a_{n-3}\\\vdots&\vdots&\vdots &\ddots&1+a_1\\0&0&0&\cdots&\lambda_i+\lambda_i^{-1}\end{bmatrix}X^{-1}\)

So the eigenvalues of \(X_{n+1}\) are \(\dfrac{1}{2}(\lambda_i+\lambda_{i}^{-1})\); writing \(\lambda_i=x_i+y_i\mathrm{i}\), the real part is \[ \frac{1}{2}\left(x_i+\dfrac{x_i}{x_i^2+y_i^2}\right)=\frac{x_i}{2}\left(1+\dfrac{1}{x_i^2+y_i^2}\right)\neq 0 \] So \(X_{n+1}\) has no purely imaginary eigenvalue

\((2)\) Set \(f(x)=\dfrac{1}{2}\left(x+\dfrac{1}{x}\right)\). We have the recursion formula \(\dfrac{f(x)-1}{f(x)+1}=\left(\dfrac{x-1}{x+1}\right)^{2}\)

Write \(A = x+y\mathrm{i}\). If \(\mathbf{Re}(A)=x>0\): set \(a_n=\dfrac{X_{n}-1}{X_{n}+1}\), so that \(a_n=a_{n-1}^{2}\).

By induction we easily get \(a_n=a_0^{2^{n}}=\left(\dfrac{x-1+y\mathrm{i}}{x+1+y\mathrm{i}}\right)^{2^{n}}\), and the modulus of \(\dfrac{x-1+y\mathrm{i}}{x+1+y\mathrm{i}}\)

is \(\sqrt{\dfrac{(x-1)^{2}+y^2}{(x+1)^2+y^2}}=\sqrt{1-\dfrac{4x}{(x+1)^2+y^2}}<1\), so letting \(n\to+\infty\), \(\lim\limits_{n\to +\infty}a_n=0\) and \(X_{n}\to 1\).

And if \(x<0\), use instead \(b_n=\dfrac{X_n+1}{X_n-1}\), which also satisfies \(b_{n+1}=b_{n}^2\), and the modulus

\(\left|\dfrac{x+1+y\mathrm{i}}{x-1+y\mathrm{i}}\right|=\sqrt{1+\dfrac{4x}{(x-1)^{2}+y^2}}<1\); so letting \(n\to +\infty\), \(\lim\limits_{n\to +\infty}b_n=0\) and \(X_{n}\to -1\).

In conclusion, \(\lim\limits_{n\to +\infty}X_n=\begin{cases}1&\mathbf{Re}(A)>0\\-1&\mathbf{Re}(A)<0\end{cases}=\mbox{sign(A)}\)

\((3)\) Actually, for every Jordan block \(J_{i}=\begin{bmatrix}\lambda_i&1&0&\cdots&0\\0&\lambda_i&1&\cdots&0\\0&0&\lambda_i&\ddots&0\\\vdots&\vdots&\vdots &\ddots&1\\0&0&0&\cdots&\lambda_i\end{bmatrix}\) the diagonal elements

follow the same scalar iteration: \(\lambda_{i,n+1}=\dfrac{1}{2}(\lambda_{i,n}+\lambda_{i,n}^{-1})\). According to question \(\small (2)\),

\(\lim\limits_{n\to +\infty}\lambda_{i,n}=\mbox{sign}(\lambda_{i})\), so the diagonal elements of \(\lim\limits_{n\to +\infty}X_{n}\) are \(\mbox{sign}(\lambda_i)\). Consider first \(\mathbf{Re}(\lambda_i)>0\).

Then \(\mbox{sign}(\lambda_i)=1\), and (each \(X_n\) being a polynomial of \(J\)) we may write the limit as \(\lim\limits_{n\to +\infty}X_n=I+\displaystyle \sum_{i=1}^{n-1}b_i(J-\lambda I)^{i}\). Taking the limit in the original equation \(X_{n+1}=\dfrac{1}{2}\left(X_{n}+X_{n}^{-1}\right)\) gives \(M=\dfrac{1}{2}\left(M+M^{-1}\right)\) for \(M=I+\displaystyle \sum_{i=1}^{n-1}b_i(J-\lambda I)^{i}\).

So \(M^{2}=I\); writing \(K=J-\lambda I\) (nilpotent) and \(S=\displaystyle\sum_{i=1}^{n-1}b_iK^{i}\), this reads \(2S+S^{2}=O\).

We have proved that \(I,K,K^{2},\cdots,K^{n-1}\) are linearly independent, so we can compare the coefficients of each power of \(K\):

the coefficient of \(K^{1}\) gives \(2b_1=0\); inductively, if \(b_1=\cdots=b_{i-1}=0\), then the coefficient of \(K^{i}\) in \(S^{2}\) only involves products \(b_jb_{i-j}\) with \(0<j<i\), which all vanish, so \(2b_i=0\).

Hence every \(b_i=0\), and on this block \(\lim\limits_{n\to +\infty}X_{n}=I=\mbox{sign}(A)\); the case \(\mathbf{Re}(\lambda_i)<0\) is the same.

Every matrix can be brought to Jordan form, and the argument above works for every Jordan block.

So the iteration \(X_{n+1}=\dfrac{1}{2}(X_n+X_{n}^{-1})\) gives \(\mbox{sign}(A)\) whenever \(A\) has no purely imaginary eigenvalues.

And also, solving the equation \(AX+XB=C\) by calculating \(\mbox{sign}\begin{pmatrix}A&-C\\O&B\end{pmatrix}\) can be done with this iteration.
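(A numerical sketch of the iteration on a diagonalizable test matrix of my own making, with \(\operatorname{sign}(A)\) computed from the eigen-decomposition for comparison:)

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((4, 4))                  # generic, hence invertible
D = np.diag([2.0, 0.5, -1.5, -3.0])              # real spectrum, none imaginary
A = V @ D @ np.linalg.inv(V)

X = A.copy()
for _ in range(30):                              # X_{n+1} = (X_n + X_n^{-1}) / 2
    X = 0.5 * (X + np.linalg.inv(X))

signA = V @ np.diag(np.sign(np.diag(D))) @ np.linalg.inv(V)
assert np.allclose(X, signA)
```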

\((4)\) \(A=I+J,\ J=\begin{bmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\0&0&0&\ddots&0\\\vdots&\vdots&\vdots &\ddots&1\\0&0&0&\cdots&0\end{bmatrix}\). Use the Taylor series and note that \(J^{n}=O\):

\(X_0=I+J,X_1=\dfrac{1}{2}(I+J+(I+J)^{-1})=\dfrac{1}{2}(I+J+(I-J+J^2+\cdots+(-1)^{n-1}J^{n-1}))\)

\(=I+\dfrac{1}{2}J^2+\cdots\), so every step 'kills' at least the lowest remaining power of \(J\). If after \(k\) steps the result is \(X_{k}=I+a_{k+1}J^{k+1}+\cdots+a_{n-1}J^{n-1}\), write \(M=a_{k+1}J^{k+1}+\cdots+a_{n-1}J^{n-1}\); then

\(X_{k+1}=\dfrac{1}{2}(X_k+X_k^{-1})=\dfrac{1}{2}\big((I+M)+(I-M+M^{2}-\cdots)\big)=I+\dfrac{1}{2}(M^{2}-M^{3}+\cdots)\)

and \(M^{2}\) starts at \(J^{2k+2}\), so the coefficient of \(J^{k+1}\) is eliminated. By induction, \(X_{n-1}=I+a_{n}J^{n}+\cdots=I\)
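(Numerically, the iteration indeed reaches the identity within \(n-1\) steps:)

```python
import numpy as np

n = 6
A = np.eye(n) + np.diag(np.ones(n - 1), k=1)   # Jordan block with eigenvalue 1
X = A.copy()
for _ in range(n - 1):                          # n - 1 Newton steps
    X = 0.5 * (X + np.linalg.inv(X))
assert np.allclose(X, np.eye(n))                # X_{n-1} = I
```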

Problem 6\(\ \small \mbox{Real Möbius transformations}\)

A Möbius transformation is a function \(f: x \mapsto \dfrac{a x+b}{c x+d}\). For example, scaling \(x \mapsto 2 x\), addition \(x \mapsto x+1\), inversion \(x \mapsto \dfrac{1}{x}\) are all Möbius transformations. We may further define \(f(\infty)=\dfrac{a}{c}\) and \(f\left(-\dfrac{d}{c}\right)=\infty\), as you can tell by taking limits. So Möbius transformations are acting on the space \(\mathbb{R} \cup\{\infty\}\). In some sense, you are imagining the real line where \(\infty\) and \(-\infty\) are treated as the same thing (i.e., they are glued together), and you see that \(\mathbb{R} \cup\{\infty\}\) is in fact a big circle. (You can skip these materials in this parenthesis. They are irrelevant but maybe of interest: geometrically, consider the circle \(C\) on the xy-plane with center \((0,1 / 2)\) and radius \(1 / 2\). Then imagine that the point \((0,1)\) on the circle has a ray gun attached. For each point of the circle, the ray gun can shoot that point, and then go through it and intersect with the \(x\)-axis somewhere. This would give a 1-to-1 correspondence between the real line \(\mathbb{R}\) and \(C-\{(0,1)\}\). Now think of \(\infty\) as the point \((0,1)\), and you have thus built a 1-to-1 correspondence between \(\mathbb{R} \cup\{\infty\}\) and the circle \(C\). The Möbius transformations act on \(\mathbb{R} \cup\{\infty\}\), so under this correspondence you can imagine them as acting on the circle \(C\). They usually stretch some portions of \(C\) while shrinking some other portions. This corresponds to looking at this circle from various perspectives. For example, if you look at the circle from the left, then since the left portion of the circle is closer to your eyes, it appears to be larger, while the right portion of the circle appears to be smaller as it is further away from your eyes. For a more concrete example, consider \(x \mapsto 2 x\). This means you are moving your perspective to be closer to \((0,0)\) on the circle, so that points near \((0,0)\) are more spread out. Everyone is now repelled away from the origin and attracted towards the infinity point \((0,1)\).)

For any 2 by 2 invertible real matrix \(A=\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\), we can build a corresponding Möbius transformation \(f_{A}: \mathbb{R} \cup\{\infty\} \rightarrow \mathbb{R} \cup\{\infty\}\) such that \(f_{A}(x)=\dfrac{a x+b}{c x+d}\). Obviously all Möbius transformations arise in this way.

\((1)\) Show that \(f_{A} \circ f_{B}=f_{A B}\) for any \(A, B\). In particular, if we simply write \(A\) to represent the function \(f_{A}\), then the composition of \(A\) and \(B\) is the same as the matrix multiplication of \(A\) and \(B\). (You may also check \(f_{A^{-1}}\) is the inverse function of \(f_{A}\) and so on. This is not required though.)

\((2)\) Show that for any \(k \in \mathbb{R}-\{0\}, f_{A}=f_{k A}\). And conversely, if \(f_{A}=f_{B}\), then \(A=k B\) for some complex constant \(k\). (So WLOG, to study a Möbius transformation \(f_{A}\), you can always scale A appropriately and assume that \(\operatorname{det}(A)=1\).)

\((3)\) Interpret a vector \(\left[\begin{array}{l}x \\ y\end{array}\right]\) as a ratio \(\dfrac{x}{y}\). Then show that, under this interpretation, \(A\left[\begin{array}{l}x \\ y\end{array}\right]\) is interpreted exactly as the ratio \(f_{A}\left(\dfrac{x}{y}\right)\). (As a result, some literature writes \(A\) and \(f_{A}\) interchangeably and \(\left[\begin{array}{l}x \\ y\end{array}\right]\) and \(\dfrac{x}{y}\) interchangeably.)

\((4)\) Show that there are only four kinds of Möbius transformations. There is a kind where \(f_{A}\) has two fixed points in \(\mathbb{R} \cup\{\infty\}\) (a typical example is \(x \mapsto 2 x\) where 0 and \(\infty\) are the fixed points), a kind where \(f_{A}\) has only one fixed point in \(\mathbb{R} \cup\{\infty\}\) (a typical example is \(x \mapsto x+1\) where \(\infty\) is the only fixed point), and a kind without any fixed point in \(\mathbb{R} \cup\{\infty\}\) (a typical example is \(x \mapsto-\dfrac{1}{x}\) ). Finally, there is a kind where everyone is fixed, i.e., the identity function \(x \mapsto x\).


\((1)\) set \(A=\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix},B=\begin{pmatrix}b_1&b_2\\b_3&b_4\end{pmatrix}\) \(f_A\circ f_B(x)=f_A(\dfrac{b_1x+b_2}{b_3x+b_4})=\dfrac{a_1\dfrac{b_1x+b_2}{b_3x+b_4}+a_2}{a_3\dfrac{b_1x+b_2}{b_3x+b_4}+a_4}\)

\(=\dfrac{(a_1b_1+a_2b_3)x+(a_1b_2+a_2b_4)}{(a_3b_1+a_4b_3)x+(a_3b_2+a_4b_4)}=f_{AB}(x)=f_{\begin{pmatrix}a_1b_1+a_2b_3&a_1b_2+a_2b_4\\a_3b_1+a_4b_3&a_3b_2+a_4b_4\end{pmatrix}}(x)\)

Check \(A^{-1}=\dfrac{1}{a_1a_4-a_2a_3}\begin{pmatrix}a_4&-a_2\\-a_3&a_1\end{pmatrix}\) so \(f_{A^{-1}}(x)=\dfrac{a_4x-a_2}{-a_3x+a_1}\) \((a_1a_4\neq a_2a_3)\)

\(x\stackrel{f_{A}}{\longmapsto}\dfrac{a_1x+a_2}{a_3x+a_4}\stackrel{f_{A^{-1}}}{\longmapsto} \dfrac{a_4\dfrac{a_1x+a_2}{a_3x+a_4}-a_2}{-a_3\dfrac{a_1x+a_2}{a_3x+a_4}+a_1}=\dfrac{(a_1a_4-a_2a_3)x}{a_1a_4-a_2a_3}=x\) \(f_{A^{-1}}\) is the inverse of \(f_{A}\)
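(A quick numerical spot-check of \(f_A\circ f_B=f_{AB}\) at a few sample points:)

```python
import numpy as np

def f(A, x):                                   # the Mobius transformation f_A
    a, b, c, d = np.ravel(A)
    return (a * x + b) / (c * x + d)

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
for x in (-2.0, 0.3, 1.7):
    assert np.isclose(f(A, f(B, x)), f(A @ B, x))
```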

\((2)\) \(A=\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix},f_{kA}(x)=\dfrac{ka_1x+ka_2}{ka_3x+ka_4}=\dfrac{a_1x+a_2}{a_3x+a_4}=f_{A}(x)\) if \(k\neq 0\)

If \(B=\begin{pmatrix}b_1&b_2\\b_3&b_4\end{pmatrix}\) and \(f_{A}=f_{B}\), then \(\dfrac{a_1x+a_2}{a_3x+a_4}=\dfrac{b_1x+b_2}{b_3x+b_4}\) for all \(x\); cross-multiplying gives \(a_1b_3=a_3b_1\), \(a_2b_4=b_2a_4\), and \(a_1b_4+a_2b_3=a_4b_1+a_3b_2\).

Construct \(k=\dfrac{a_1b_4-a_2b_3}{b_1b_4-b_2b_3}+\mathrm{i}\,\dfrac{a_1b_2-a_2b_1}{b_1b_4-b_2b_3}\), viewed as the matrix \(\begin{pmatrix}\dfrac{a_1b_4-a_2b_3}{b_1b_4-b_2b_3}&\dfrac{a_2b_1-a_1b_2}{b_1b_4-b_2b_3}\\\dfrac{a_1b_2-a_2b_1}{b_1b_4-b_2b_3}&\dfrac{a_1b_4-a_2b_3}{b_1b_4-b_2b_3}\end{pmatrix}\) in the standard matrix model of \(\mathbb{C}\).

After a long calculation using the relations above, \(A=kB=\begin{pmatrix}\dfrac{a_1b_4-a_2b_3}{b_1b_4-b_2b_3}&\dfrac{a_2b_1-a_1b_2}{b_1b_4-b_2b_3}\\\dfrac{a_1b_2-a_2b_1}{b_1b_4-b_2b_3}&\dfrac{a_1b_4-a_2b_3}{b_1b_4-b_2b_3}\end{pmatrix}B\) indeed holds (the relations in fact force the imaginary part of \(k\) to vanish).

\((3)\) \(A\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}a_1&a_2\\a_3&a_4\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}a_1x+a_2y\\a_3x+a_4y\end{bmatrix}\), which can be interpreted as \(\dfrac{a_1x+a_2y}{a_3x+a_4y}\)

and \(f_{A}(\dfrac{x}{y})=\dfrac{a_1\dfrac{x}{y}+a_2}{a_3\dfrac{x}{y}+a_4}=\dfrac{a_1x+a_2y}{a_3x+a_4y}\) they are the same

\((4)\) For any \(2\times 2\) invertible matrix \(A\), use the Jordan form \(A=XJX^{-1}\). After scaling so that \(\det(J)=1\), there are four possible types: \[ J=\begin{pmatrix}1&0\\0&1\end{pmatrix},\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix},\begin{pmatrix}z&0\\0&\bar{z}\end{pmatrix},\begin{pmatrix}1&1\\0&1\end{pmatrix}\ \ \mbox{where} \ \lambda_1\lambda_2=|z|=1,\ \lambda_i\in\mathbb{R},\ z\notin\mathbb{R} \] By \(\small(3)\), fixed points of \(f_A\) correspond to real eigenvectors of \(A\): \(A\begin{bmatrix}x\\y\end{bmatrix}=\mu\begin{bmatrix}x\\y\end{bmatrix}\) gives \(f_A\left(\dfrac{x}{y}\right)=\dfrac{x}{y}\). So the four types have fixed-point sets \(\mathbb{R}\cup\{\infty\}\), \(\{0,\infty\}\), \(\varnothing\), and \(\{\infty\}\) respectively (in the coordinates where \(A=J\)).
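(In the original coordinates, the fixed points of \(f_A\) solve \(cx^{2}+(d-a)x-b=0\), so the kind can be read off from a discriminant; a small classification sketch:)

```python
import numpy as np

def kind(A):
    """Classify f_A by its fixed points in R u {inf}: finite fixed points
    solve c x^2 + (d - a) x - b = 0, and infinity is fixed iff c == 0."""
    a, b, c, d = np.ravel(A).astype(float)
    if c == 0:
        if a == d:
            return "identity" if b == 0 else "one fixed point (inf only)"
        return "two fixed points"            # inf and b / (d - a)
    disc = (d - a) ** 2 + 4 * b * c
    if disc > 0:
        return "two fixed points"
    return "one fixed point" if disc == 0 else "no fixed point"

print(kind([[2, 0], [0, 1]]))    # x -> 2x      : two fixed points
print(kind([[1, 1], [0, 1]]))    # x -> x + 1   : one fixed point (inf only)
print(kind([[0, -1], [1, 0]]))   # x -> -1/x    : no fixed point
print(kind([[1, 0], [0, 1]]))    # identity
```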

