Show that A^\nu A_\nu has the same meaning as A^\mu A_\mu.
Writing the summation explicitly, since the index is a dummy (summed-over) index whose name is irrelevant:
A^\nu A_\nu = \sum_{\nu = 0}^3 A^\nu A_\nu = \sum_{\mu = 0}^3 A^\mu A_\mu = A^\mu A_\mu
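As a quick numerical sanity check (a plain-Python sketch with an arbitrary sample 4-vector), the contraction gives the same number no matter what we call the summation index:

```python
# Minkowski metric with signature (-, +, +, +), as a nested list
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

A_up = [2.0, 1.0, -3.0, 0.5]  # arbitrary contravariant components A^mu

# Lower the index: A_mu = eta_{mu nu} A^nu
A_down = [sum(eta[m][n] * A_up[n] for n in range(4)) for m in range(4)]

# The contraction: the label of the summed index (nu vs. mu) changes nothing
s_nu = sum(A_up[nu] * A_down[nu] for nu in range(4))
s_mu = sum(A_up[mu] * A_down[mu] for mu in range(4))
print(s_nu, s_mu)  # -4 + 1 + 9 + 0.25 = 6.25 both times
```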
Write an expression that undoes the effect of Eq. 5.20. In other words, how do we "go backwards"?
Equation 5.20 is:
A_\mu = \eta_{\mu \nu} A^\nu
where:
\eta_{\mu\nu} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
Since the matrix is diagonal, its inverse is the diagonal matrix whose entries are the reciprocals of the corresponding entries. Here every diagonal entry is \pm 1, so each entry is its own reciprocal and the inverse is the matrix itself:
\eta^{\mu\nu} = \eta_{\mu\nu}
That is, we go from a contravariant vector to a covariant vector using \eta_{\mu\nu}, and from a covariant vector back to a contravariant one using \eta^{\mu\nu}:
\eta^{\mu \nu} A_\nu = \eta^{\mu \nu} \eta_{\nu \alpha} A^\alpha = \delta^\mu_\alpha A^\alpha = A^\mu
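The two facts used here, that \eta is its own inverse and that lowering then raising an index recovers the original vector, can be checked numerically (a plain-Python sketch with an arbitrary sample 4-vector):

```python
# Minkowski metric (diagonal, entries +/-1); lowered and raised versions coincide
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

def matmul(X, Y):
    """4x4 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# eta^{mu nu} eta_{nu alpha} = delta^mu_alpha: eta is its own inverse
print(matmul(eta, eta) == identity)  # True

# Lowering the index and then raising it recovers the original components
A_up = [2.0, 1.0, -3.0, 0.5]
A_down = [sum(eta[m][n] * A_up[n] for n in range(4)) for m in range(4)]
A_up_again = [sum(eta[m][n] * A_down[n] for n in range(4)) for m in range(4)]
print(A_up_again == A_up)  # True
```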
Given the transformation equation (Eq. 6.3) for the contravariant components of a 4-vector A^\nu, where L^\mu_\nu is a Lorentz transformation matrix, show that the Lorentz transformation for A’s covariant components is
\left(A^\prime \right)_\mu = M_\mu^\nu A_\nu
where
M = \eta L \eta
Eq. 6.3 is:
\left(A^\prime \right)^\mu = L^\mu_\nu A^\nu
The covariant components (A^\prime)_\mu in the primed frame are obtained by “lowering the index” of the contravariant components (A^\prime)^\alpha using the metric tensor \eta_{\mu\alpha}:
(A^\prime)_\mu = \eta_{\mu\alpha} (A^\prime)^\alpha
Let’s substitute the transformation for the contravariant components.
We know from Eq. 6.3 that (A^\prime)^\alpha = L^\alpha_\beta A^\beta. Substituting this into our equation:
(A^\prime)_\mu = \eta_{\mu\alpha} (L^\alpha_\beta A^\beta)
We can express the original contravariant vector in terms of its covariant version.
Our goal is to have an equation that transforms A_\nu to (A^\prime)_\mu. Right now we have an A^\beta.
We can “raise the index” on the original covariant vector A_\nu to get A^\beta using the inverse metric tensor \eta^{\beta\nu} (for the Minkowski metric, \eta^{-1} = \eta, so \eta^{\beta\nu} = \eta_{\beta\nu}):
A^\beta = \eta^{\beta\nu} A_\nu
We then substitute this back into our main equation:
(A^\prime)_\mu = \eta_{\mu\alpha} L^\alpha_\beta (\eta^{\beta\nu} A_\nu)
Finally, we group the terms to identify the new transformation matrix. We may regroup freely, since the expression is just a sum of products of numbers:
(A^\prime)_\mu = \left( \eta_{\mu\alpha} L^\alpha_\beta \eta^{\beta\nu} \right) A_\nu
This is exactly in the form we want: (A^\prime)_\mu = M_\mu^\nu A_\nu.
By comparing the two equations, we can identify the components of the matrix M:
M_\mu^\nu = \eta_{\mu\alpha} L^\alpha_\beta \eta^{\beta\nu}
This expression in index notation is precisely what the matrix equation M = \eta L \eta represents.
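That the index formula and the matrix product are the same thing can be checked component by component (a plain-Python sketch; v = 0.6 is just an arbitrary sample speed, in units where c = 1):

```python
from math import sqrt

v = 0.6
g = 1 / sqrt(1 - v**2)  # gamma

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
L = [[g, -g*v, 0, 0], [-g*v, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(X, Y):
    """4x4 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Matrix form: M = eta L eta
M_matrix = matmul(matmul(eta, L), eta)

# Index form: M_mu^nu = eta_{mu alpha} L^alpha_beta eta^{beta nu},
# summing over the repeated indices alpha and beta
M_index = [[sum(eta[mu][a] * L[a][b] * eta[b][nu]
                for a in range(4) for b in range(4))
            for nu in range(4)]
           for mu in range(4)]

print(M_matrix == M_index)  # True
```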
Now, let's show explicitly that the matrix product \eta L \eta gives this matrix M.
Let's define our matrices, working in units where c = 1, and set \gamma = \frac{1}{\sqrt{1-v^2}}:
\begin{aligned} L & = \begin{bmatrix} \gamma & -\gamma v & 0 & 0 \\ -\gamma v & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\[10pt] \eta & = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned}
The first step is to calculate \eta L:
\eta L = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} \gamma & -\gamma v & 0 & 0 \\ -\gamma v & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} = \begin{bmatrix} -\gamma & \gamma v & 0 & 0 \\ -\gamma v & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}
The second step is to calculate (\eta L) \eta:
(\eta L) \eta = \begin{bmatrix} -\gamma & \gamma v & 0 & 0 \\ -\gamma v & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} = \begin{bmatrix} \gamma & \gamma v & 0 & 0 \\ \gamma v & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}
The final matrix is:
\eta L \eta = \begin{bmatrix} \frac{1}{\sqrt{1-v^2}} & \frac{v}{\sqrt{1-v^2}} & 0 & 0 \\ \frac{v}{\sqrt{1-v^2}} & \frac{1}{\sqrt{1-v^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} = M
which is precisely the matrix M, the original Lorentz transformation L with the sign of v reversed.
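The claim that M is simply L with v \to -v can also be verified numerically (a plain-Python sketch; v = 0.6 is an arbitrary sample speed):

```python
from math import sqrt

def boost(v):
    """Lorentz boost matrix along x, in units where c = 1."""
    g = 1 / sqrt(1 - v**2)
    return [[g, -g*v, 0, 0], [-g*v, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(X, Y):
    """4x4 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

v = 0.6
M = matmul(matmul(eta, boost(v)), eta)
print(M == boost(-v))  # True: eta L(v) eta = L(-v)
```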
This completes the proof. We have shown that the covariant components A_\nu transform according to (A^\prime)_\mu = M_\mu^\nu A_\nu, where the transformation matrix is given by M = \eta L \eta.