maliaseth0
2022-02-01
Answered

Writing a composite transformation as a matrix multiplication

I have two matrices, P and Q as follows:

$P=\left(\begin{array}{cc}\frac{1}{2}& \frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}& -\frac{1}{2}\end{array}\right)$

$Q=\left(\begin{array}{cc}-\frac{1}{2}& \frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}& \frac{1}{2}\end{array}\right)$


fumanchuecf

Answered 2022-02-02
Author has **21** answers

Step 1

Matrices work like functions. If you have functions $f:A\to B$ and $g:B\to C$, and an element $x\in A$, we first apply f, then g, getting $g\left(f\left(x\right)\right)=(g\circ f)\left(x\right)$. Notice that the order reverses: we applied f first, then g, but the composite is written $g\circ f$. It is the same for matrices.
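To make the ordering concrete, here is a small Python sketch (the matrices are illustrative choices, not from the problem) showing that composing functions and multiplying the corresponding matrices both put the second operation on the left:

```python
import numpy as np

# Two linear maps on R^2, written as matrices acting on column vectors.
M = np.array([[0.0, -1.0], [1.0, 0.0]])   # f: rotate by 90 degrees
K = np.array([[2.0, 0.0], [0.0, 2.0]])    # g: scale by 2

def f(x):
    return M @ x

def g(x):
    return K @ x

x = np.array([1.0, 0.0])

# Applying f first and then g matches multiplying by (K M), not (M K).
assert np.allclose(g(f(x)), (K @ M) @ x)
```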

Extra info for you: the matrix of rotation by an angle θ is given by

$\left(\begin{array}{cc}\mathrm{cos}\theta & -\mathrm{sin}\theta \\ \mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right)$

Careful, though: comparing with this form, P and Q have $+\frac{\sqrt{3}}{2}$ in the top-right entry and determinant $-1$, so they are not rotations but reflections: P reflects across the line at 30 deg and Q across the line at 60 deg. Their composite QP (P applied first) is then a rotation by 60 deg.
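A quick numerical check (a NumPy sketch) of how P and Q relate to the rotation form:

```python
import numpy as np

s = np.sqrt(3) / 2
P = np.array([[0.5, s], [s, -0.5]])
Q = np.array([[-0.5, s], [s, 0.5]])

def rotation(theta):
    # Standard 2x2 rotation matrix [[cos, -sin], [sin, cos]].
    c, sn = np.cos(theta), np.sin(theta)
    return np.array([[c, -sn], [sn, c]])

# Both determinants are -1, so P and Q are reflections, not rotations.
assert np.isclose(np.linalg.det(P), -1.0)
assert np.isclose(np.linalg.det(Q), -1.0)

# The composite Q P (P applied first) is a rotation by 60 degrees.
assert np.allclose(Q @ P, rotation(np.pi / 3))
```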


lilwhitelieyc

Answered 2022-02-03
Author has **10** answers

Step 1

Here's the deal: operating on a point p by a transformation T is written T(p). That is, the point operated on goes to the right of the function being applied. If you wanted to then take the result,

$q=T\left(p\right)$

and apply a transformation S, you'd write S(q). Substituting, you'd find that this was

$S\left(T\left(p\right)\right)$

Many important transformations can be written in the form of multiplication by a matrix, and one might conjecture that the definition of matrix multiplication was made so that

$T\left(p\right)=Mp$

where M is a matrix, would work out nicely. Given that, if$T\left(p\right)=Mp$ and $S\left(q\right)=Kq$ , you get that

$S\left(T\left(p\right)\right)=K\left(Mp\right)=\left(KM\right)p$

So yes, either in applying transformations OR in multiplying matrices for transformations (at least when points are represented by column vectors), you write the matrix for the first operation to the left of the point, the matrix for the next operation to the left of that, and so on.
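The step $K(Mp)=(KM)p$ is just associativity of matrix multiplication, which a short NumPy sketch (with arbitrary matrices) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2))   # matrix for the first transformation T
K = rng.standard_normal((2, 2))   # matrix for the second transformation S
p = rng.standard_normal(2)        # a point, as a column vector

# Associativity: applying T then S equals multiplying once by (K M).
assert np.allclose(K @ (M @ p), (K @ M) @ p)
```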

Foley and van Dam, in their Computer Graphics text (first edition), used ROW vectors for point-coordinates; in that situation, the matrices go on the right, and you can read operations/matrices from right to left, which seems nice. Unfortunately, if you do that, then operations on covectors (like the normal vector to a surface) end up reading left-to-right, so you don't really win. You also, in using row vectors, go against the conventions widely used in math courses, which just messes up your students. That's why in later editions (I confess I'm a co-author) we went with column vectors.
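The two conventions are related by transposition, as this illustrative NumPy sketch shows: with row vectors the transposed matrices stack on the right and read left to right, giving the same coordinates.

```python
import numpy as np

M = np.array([[0.0, -1.0], [1.0, 0.0]])   # first operation
K = np.array([[2.0, 0.0], [0.0, 3.0]])    # second operation
p_col = np.array([[1.0], [2.0]])          # column-vector convention
p_row = p_col.T                           # row-vector convention

# Column convention: matrices stack on the LEFT, read right to left.
col_result = (K @ M) @ p_col

# Row convention: transposes stack on the RIGHT, read left to right.
row_result = p_row @ M.T @ K.T

# Both conventions produce the same point.
assert np.allclose(row_result, col_result.T)
```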


asked 2021-09-13

Assume that A is row equivalent to B. Find bases for Nul A and Col A.

asked 2021-06-13

For the matrix A below, find a nonzero vector in Nul A and a nonzero vector in Col A.

$A=\left[\begin{array}{cccc}2& 3& 5& -9\\ -8& -9& -11& 21\\ 4& -3& -17& 27\end{array}\right]$

Find a nonzero vector in Nul A.

One such vector is $x=\left[\begin{array}{c}-3\\ 5\\ 0\\ 1\end{array}\right]$

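A SymPy sketch for checking work on this kind of problem: compute a null-space basis from the reduced row-echelon form and verify each vector.

```python
from sympy import Matrix

A = Matrix([[2, 3, 5, -9],
            [-8, -9, -11, 21],
            [4, -3, -17, 27]])

# A basis for Nul A, computed from the reduced row-echelon form.
basis = A.nullspace()

# Every basis vector must satisfy A v = 0.
for v in basis:
    assert A * v == Matrix([0, 0, 0])

# A nonzero vector in Col A is simply any column of A, e.g. the first.
col_vec = A.col(0)
```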

asked 2021-09-18

Find an explicit description of Nul A by listing vectors that span the null space.

asked 2022-05-15

Which transformations does this matrix perform?

Choose from the following options:

1) rotate and scaling

2) translate and scaling

3) scaling and rotate

4) not 1,2,3

$\left[\begin{array}{ccc}1& 0& 1\\ 0& 0& 1\\ 0& 0& 1\end{array}\right]$
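As a quick check (a NumPy sketch, reading the matrix as a 2D transformation in homogeneous coordinates), one can apply it to sample points:

```python
import numpy as np

T = np.array([[1, 0, 1],
              [0, 0, 1],
              [0, 0, 1]])

# Apply T to the 2D point (x, y) via homogeneous coordinates (x, y, 1).
def apply(T, x, y):
    xh, yh, w = T @ np.array([x, y, 1])
    return xh / w, yh / w

# Every point maps to (x + 1, 1): the plane collapses onto the line y = 1.
# No combination of rotation, scaling, and translation (all invertible)
# can do that, which suggests option 4.
assert apply(T, 0, 0) == (1, 1)
assert apply(T, 2, 5) == (3, 1)
```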

do this matrix transformation options:

1) rotate and scaling

2) translate and scaling

3) scaling and rotate

4) not 1,2,3

$\left[\begin{array}{ccc}1& 0& 1\\ 0& 0& 1\\ 0& 0& 1\end{array}\right]$

asked 2022-07-08

Let $\beta $ and ${\beta}^{\prime}$ be bases for the finite dimensional vector space $V$ of dimension n over the field $\mathbb{F}$, and let $Q=[{I}_{V}{]}_{{\beta}^{\prime}}^{\beta}$, where ${I}_{V}$ is the identity operator on $V$. I just recently proved that $Q[x{]}_{{\beta}^{\prime}}=[x{]}_{\beta}$ for every $x\in V$ (twas rather simple), which suggests the title of "basis transformation" matrix or "coordinate transformation" matrix for the matrix $Q$. I am now wondering whether the converse holds.

Let ${V}^{\prime}$ denote $V$ with its elements written with respect to the basis ${\beta}^{\prime}$, and suppose that $Q\in {M}_{n}(\mathbb{F})$ is such that $Q[x{]}_{{\beta}^{\prime}}=[x{]}_{\beta}$ for every $x$. Since $\mathcal{L}({V}^{\prime},V)$ is isomorphic to the matrix algebra ${M}_{n}(\mathbb{F})$ by sending a linear operator to its matrix representation, given $Q$ there exists a linear operator $T\in \mathcal{L}({V}^{\prime},V)$ such that $Q=[T{]}_{{\beta}^{\prime}}^{\beta}$. Hence $[T{]}_{{\beta}^{\prime}}^{\beta}[x{]}_{{\beta}^{\prime}}=[x{]}_{\beta}$, or $[T(x){]}_{\beta}=[x{]}_{\beta}$...

I want to say that this implies $T={I}_{V}$, but I can't clearly see what lemma I need in order to make that conclusion.
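For intuition, here is a concrete NumPy sketch of the forward direction in $V=\mathbb{R}^2$ (the bases are illustrative choices): the matrix Q whose columns are the $\beta'$-basis vectors written in $\beta$-coordinates converts $\beta'$-coordinates into $\beta$-coordinates.

```python
import numpy as np

# Illustrative bases: beta is the standard basis of R^2,
# beta' = {(1, 1), (1, -1)}.
B_prime = np.array([[1.0, 1.0],
                    [1.0, -1.0]])   # columns: beta' vectors in beta coords

# Q = [I]_{beta'}^{beta}: its columns are the beta'-basis vectors
# expressed in beta coordinates, so here Q is just B_prime.
Q = B_prime

x = np.array([3.0, 5.0])            # [x]_beta (standard coordinates)
x_bp = np.linalg.solve(B_prime, x)  # [x]_{beta'}

# Q converts beta'-coordinates back to beta-coordinates.
assert np.allclose(Q @ x_bp, x)
```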


asked 2022-07-16

Let $T:{M}_{2x2}\to {\mathbb{R}}^{3}$ have matrix $[T{]}_{B,A}=\left[\begin{array}{cccc}1& 2& 0& 1\\ 0& 1& 1& 0\\ 1& 1& -1& -1\end{array}\right]$ relative to $A=\{\left[\begin{array}{cc}2& 0\\ 0& 0\end{array}\right],\left[\begin{array}{cc}0& 3\\ 0& 0\end{array}\right],\left[\begin{array}{cc}0& 0\\ 5& 0\end{array}\right],\left[\begin{array}{cc}0& 0\\ 0& 6\end{array}\right]\}$ and $\beta =\{(1,1,1),(1,2,3),(1,4,9)\}$. Find the matrix of T relative to the bases ${A}^{\prime}=\{\left[\begin{array}{cc}1& 0\\ 0& 0\end{array}\right],\left[\begin{array}{cc}0& 4\\ 0& 0\end{array}\right],\left[\begin{array}{cc}0& 0\\ 2& 0\end{array}\right],\left[\begin{array}{cc}0& 0\\ 0& 7\end{array}\right]\}$ and ${\beta}^{\prime}=\{(1,1,1),(1,0,0),(1,1,0)\}$
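One standard recipe for this (sketched here in SymPy, under the reading that the domain bases scale the same elementary matrices, so that basis conversion is diagonal) is $[T]_{{\beta}',A'} = C_{\beta\to\beta'}\,[T]_{\beta,A}\,C_{A'\to A}$:

```python
from sympy import Matrix, Rational

T = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 1, -1, -1]])   # [T] relative to A (domain) and beta (codomain)

# Domain: A and A' scale the same elementary 2x2 matrices, so converting
# A'-coordinates to A-coordinates is diagonal:
# E11 = (1/2)A1, 4E12 = (4/3)A2, 2E21 = (2/5)A3, 7E22 = (7/6)A4.
C_domain = Matrix.diag(Rational(1, 2), Rational(4, 3),
                       Rational(2, 5), Rational(7, 6))

# Codomain: columns are the basis vectors of beta and beta' in R^3.
B = Matrix([[1, 1, 1], [1, 2, 4], [1, 3, 9]])      # beta
B_new = Matrix([[1, 1, 1], [1, 0, 1], [1, 0, 0]])  # beta'

# Convert A'-coords to A-coords, apply [T], then convert the
# beta-coordinate output to beta'-coordinates.
T_new = B_new.inv() * B * T * C_domain
```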

asked 2022-01-20

Two welders worked a total of 47 h on a project. One welder made $37/h, while the other made $39/h. If the gross earnings of the two welders were $1781 for the job, how many hours did each welder work? Use a row-echelon matrix.
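The system is $x+y=47$ and $37x+39y=1781$; a SymPy sketch reduces the augmented matrix to reduced row-echelon form and reads off the hours:

```python
from sympy import Matrix

# Augmented matrix for the system
#   x + y = 47         (total hours)
#   37x + 39y = 1781   (total earnings)
aug = Matrix([[1, 1, 47],
              [37, 39, 1781]])

rref, _ = aug.rref()
x, y = rref[0, 2], rref[1, 2]

# The $37/h welder worked 26 h, the $39/h welder 21 h.
assert (x, y) == (26, 21)
assert 37 * x + 39 * y == 1781
```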