Coordinate matrix of a vector in terms of a basis

I am currently taking a linear algebra class and working through Hoffman's textbook. One of the exercises I am unsure about is:

Find the coordinate matrix of the vector $\alpha=(1,0,1)$ in the basis of $\mathbb{C}^3$ consisting of the vectors $(2i,1,0),(2,-1,1),(0,1+i,1-i)$, in that order.

As I understand it, we want to take the vector $\alpha=(1,0,1)$, currently expressed in the standard basis $\mathbb{B}=\{(1,0,0),(0,1,0),(0,0,1)\}$, and write it in terms of the new basis $\mathbb{B}'=\{(2i,1,0),(2,-1,1),(0,1+i,1-i)\}$. That is, we want to determine the matrix $P$ that satisfies $[\alpha]_{\mathbb{B}'}=P[\alpha]_{\mathbb{B}}$. There aren't many worked computations in Hoffman's textbook, but from what I recall of the lecture on change of basis, we write the vectors of $\mathbb{B}'$ as the columns of a matrix,

$$Q= \left[ \begin{array}{ccc} 2i & 2 & 0 \\ 1 & -1 & 1+i \\ 0 & 1 & 1-i \\ \end{array} \right]$$

By inverting this matrix $Q$ we find that,

$$Q^{-1}= \left[ \begin{array}{ccc} \frac{1-i}{2} & -i & -1 \\ \frac{-i}{2} & -1 & i \\ \frac{i-1}{4} & \frac{1+i}{2} & 1 \\ \end{array} \right]$$

So, we then have that the coordinates $(x_{1}',x_{2}',x_{3}')$ of the vector $\alpha = (x_{1},x_{2},x_{3})$ in terms of the basis $\mathbb{B}'$ are given by

$$\left[ \begin{array}{c} x_{1}' \\ x_{2}' \\ x_{3}' \\ \end{array} \right] = \left[ \begin{array}{ccc} \frac{1-i}{2} & -i & -1 \\ \frac{-i}{2} & -1 & i \\ \frac{i-1}{4} & \frac{1+i}{2} & 1 \\ \end{array} \right] \left[ \begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ \end{array} \right]$$

So, substituting $\alpha=(1,0,1)$ for $(x_{1}, x_{2}, x_{3})$ yields the coordinates $(x_{1}',x_{2}',x_{3}')$ in terms of the basis $\mathbb{B}'$; that is, $P=Q^{-1}$. Carrying out the multiplication, we obtain $[\alpha]_{\mathbb{B}'}=\left(\frac{-1-i}{2},\frac{i}{2},\frac{3+i}{4}\right)$.
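As a quick numerical sanity check of the computation above (assuming NumPy is available), we can solve $Q\,[\alpha]_{\mathbb{B}'}=\alpha$ directly rather than inverting $Q$ by hand:

```python
import numpy as np

# Columns of Q are the basis B' vectors (2i,1,0), (2,-1,1), (0,1+i,1-i).
Q = np.array([[2j, 2, 0],
              [1, -1, 1 + 1j],
              [0, 1, 1 - 1j]], dtype=complex)

alpha = np.array([1, 0, 1], dtype=complex)

# The coordinates of alpha in B' solve Q @ coords = alpha,
# i.e. coords = Q^{-1} alpha.
coords = np.linalg.solve(Q, alpha)

expected = np.array([(-1 - 1j) / 2, 1j / 2, (3 + 1j) / 4])
print(np.allclose(coords, expected))   # True

# Reconstructing alpha from the B' coordinates recovers the original vector.
print(np.allclose(Q @ coords, alpha))  # True
```

Using `np.linalg.solve` also avoids the numerical cost and round-off of forming $Q^{-1}$ explicitly, though for a hand computation the inverse is fine.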

Have I done this correctly? Even if I have, I am unsure why this procedure for determining the matrix $P$ such that $[\alpha]_{\mathbb{B}'}=P[\alpha]_{\mathbb{B}}$ works. Is there a way to understand this more intuitively than memorizing a change-of-basis procedure? (And if I have made a mistake, please explain where.)

Thank you.

1 Answer

(I'll do it for dimension $n$ because the difficulty is the same)

Suppose you write $\{e_1,\ldots,e_n\}$ for the canonical basis (could be any basis, actually), and let $\{f_1,\ldots,f_n\}$ be another basis. You take a vector $x=\sum_{j=1}^n x_j\, e_j$ and you want to write it in the other basis.

You are given the vectors $\{f_1,\ldots,f_n\}$ in terms of the canonical basis, which means you have coefficients $\{p_{jk}\}$ such that $$ f_k=\sum_{j=1}^n p_{jk}e_j,\ \ \ k=1,\ldots,n. $$ Here you can think of $P=(p_{jk})$ as the matrix that has the coefficients of the $f_k$ in its columns. In a similar way we have coefficients $\{q_{hj}\}$ such that $$ e_j=\sum_{h=1}^n q_{hj}f_h,\ \ \ j=1,\ldots,n. $$ Combining the two expressions we get $$ f_k=\sum_{j=1}^n\sum_{h=1}^n p_{jk}q_{hj}f_h=\sum_{h=1}^n\sum_{j=1}^n q_{hj}p_{jk}f_h =\sum_{h=1}^n(QP)_{hk}f_h. $$ By the uniqueness of the coefficients of a vector in a basis, $(QP)_{hk}$ is $1$ when $h=k$ and $0$ otherwise, i.e. $QP=I$. So $Q$ is the inverse matrix of $P$.
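The relation $QP=I$ can be checked concretely on the basis from the question (a sketch assuming NumPy): the columns of $P$ are the $f_k$, and each column $j$ of $Q=P^{-1}$ holds the coefficients $q_{hj}$ expressing $e_j$ in the $f$-basis.

```python
import numpy as np

# P has the new basis vectors f_k in its columns (the question's B').
P = np.array([[2j, 2, 0],
              [1, -1, 1 + 1j],
              [0, 1, 1 - 1j]], dtype=complex)

Q = np.linalg.inv(P)  # columns of Q express the e_j in the f-basis

# QP = I (and PQ = I), as derived above.
print(np.allclose(Q @ P, np.eye(3)))  # True

# Column j of Q gives coefficients q_{hj} with e_j = sum_h q_{hj} f_h;
# the f_h are the columns of P, so the sum is just P @ Q[:, j].
for j in range(3):
    e_j = np.zeros(3, dtype=complex)
    e_j[j] = 1
    assert np.allclose(P @ Q[:, j], e_j)
```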

Now $$ x=\sum_{j=1}^n x_j\, e_j=\sum_{j=1}^n x_j\sum_{h=1}^n q_{hj}f_h =\sum_{h=1}^n \sum_{j=1}^n q_{hj}x_j\,f_h =\sum_{h=1}^n (QX)_h\, f_h. $$ In other words, the coefficients of the vector $x$ in the basis $\{f_1,\ldots,f_n\}$ are given by $P^{-1}X$, where $P$ is the matrix with the entries of the $f_k$ in its columns, and $X$ is the column of entries of $x$ in the canonical basis.
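This holds for any basis, not just the one in the question. A minimal sketch (assuming NumPy; the basis and vector here are randomly generated for illustration) that reassembles $x$ from its $f$-basis coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random complex matrix is invertible with probability 1, so its
# columns form a basis {f_1, ..., f_n} of C^n.
P = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Entries X of a vector x in the canonical basis.
x = rng.normal(size=n) + 1j * rng.normal(size=n)

# Coefficients of x in the f-basis: P^{-1} X.
coords = np.linalg.solve(P, x)

# Reassemble x as sum_h coords[h] * f_h and compare.
x_rebuilt = sum(coords[h] * P[:, h] for h in range(n))
assert np.allclose(x_rebuilt, x)
```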
