Understanding Coordinate Systems and Equations

Recent questions in Alternate coordinate systems
Linear algebra · Answered question
Iyana Jackson 2022-09-02

How to show that the billiard flow is invariant with respect to the area form $\sin(\alpha)\, d\alpha \wedge dt$
Consider a plane billiard table $D \subset \mathbb{R}^2$ (i.e. a bounded open connected set) with smooth boundary $\gamma$ being a closed curve. Next, let $M$ denote the space of unit tangent vectors $(x, v)$ with $x$ on $\gamma$ and $v$ being a unit vector pointing inwards. We then define the billiard map
$$T : M \to M.$$
To understand the map $T$, we consider a point mass traveling from $x$ in direction $v$. Let $x_1$ be the first point on $\gamma$ that this point mass intersects, and suppose that $v_1$ is the new direction of the mass upon incidence. Then $T$ maps $(x, v)$ to $(x_1, v_1)$.
We now introduce an alternate ''coordinate system'' describing $M$. Parametrize $\gamma$ by arc length $t$ and fix a point $(x, v) \in M$. We can find $t$ such that $x = \gamma(t)$, and let $\alpha \in (0, \pi)$ be the angle between the tangent line at $x$ and $v$. The tuple $(t, \alpha)$ uniquely determines the point $(x, v)$ in $M$, and thus offers an alternative description of this space.
My question is as follows: I want to show that the area form given by
$$\omega := \sin \alpha \, d\alpha \wedge dt$$
is invariant under $T$.
I found a proof of this invariance property in S. Tabachnikov's Geometry and Billiards, but I'm having some trouble understanding a critical part of the proof.
If anyone can explain the proof to me (or provide another proof), I would highly appreciate it. An intuitive explanation is also welcome, but I am looking for a rigorous proof if possible. We restate this theorem formally below and provide the proof as given by Tabachnikov.
Theorem 3.1. The area form $\omega = \sin \alpha \, d\alpha \wedge dt$ is $T$-invariant.
Proof. Define $f(t, t_1)$ to be the distance between $\gamma(t)$ and $\gamma(t_1)$. The partial derivative $\partial f / \partial t_1$ is the projection of the gradient of the distance $|\gamma(t) - \gamma(t_1)|$ on the curve at the point $\gamma(t_1)$. This gradient is the unit vector from $\gamma(t)$ to $\gamma(t_1)$, and it makes angle $\alpha_1$ with the curve; hence $\partial f / \partial t_1 = \cos \alpha_1$. Likewise, $\partial f / \partial t = -\cos \alpha$. Therefore,
$$df = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial t_1}\, dt_1 = -\cos \alpha \, dt + \cos \alpha_1 \, dt_1,$$
and hence
$$0 = d^2 f = \sin \alpha \, d\alpha \wedge dt - \sin \alpha_1 \, d\alpha_1 \wedge dt_1.$$
This means that $\omega$ is a $T$-invariant form.
The above proof is copied directly from the book. I have the following questions about his method:
Is the domain of $f$ the set $M \times M$?
In the proof, are we specifically considering $(t, \alpha)$ and $(t_1, \alpha_1)$ such that $T(t, \alpha) = (t_1, \alpha_1)$?
I am having a hard time understanding how the author obtains $\partial f / \partial t_1 = \cos \alpha_1$ and $\partial f / \partial t = -\cos \alpha$. The explanation given feels mostly heuristic; how could I go about constructing a rigorous proof?
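As a sanity check on those two partial-derivative claims (not a proof), one can verify them numerically on a concrete table. The sketch below is my own, not from the book: it uses a circle of radius $R$ in arc-length parametrization and compares central finite differences of $f$ against the cosines of the measured angles.

```python
import numpy as np

R = 2.0  # circle radius (arbitrary choice for this check)

def gamma(t):
    # Arc-length parametrization of a circle of radius R.
    return np.array([R * np.cos(t / R), R * np.sin(t / R)])

def tangent(t):
    # Unit tangent vector (derivative of gamma with respect to arc length).
    return np.array([-np.sin(t / R), np.cos(t / R)])

def f(t, t1):
    # Chord length between gamma(t) and gamma(t1).
    return np.linalg.norm(gamma(t1) - gamma(t))

t, t1 = 0.3, 2.1
u = (gamma(t1) - gamma(t)) / f(t, t1)   # unit vector from gamma(t) to gamma(t1)
cos_alpha = np.dot(tangent(t), u)       # angle between chord and tangent at t
cos_alpha1 = np.dot(tangent(t1), u)     # angle between chord and tangent at t1

eps = 1e-6  # central finite differences
df_dt = (f(t + eps, t1) - f(t - eps, t1)) / (2 * eps)
df_dt1 = (f(t, t1 + eps) - f(t, t1 - eps)) / (2 * eps)

print(df_dt, -cos_alpha)    # should agree: df/dt  = -cos(alpha)
print(df_dt1, cos_alpha1)   # should agree: df/dt1 =  cos(alpha1)
```

This is exactly the heuristic made concrete: the gradient of the chord length with respect to one endpoint is the unit chord vector, so differentiating along the curve picks out its tangential component.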

Linear algebra · Answered question
ngombangouh 2022-09-01

Showing that $\det A = \det B \cdot \det C$ when $B$, $C$ are the restrictions of $A$ onto a subspace
I am a bit unsure about one approach that is mentioned to prove this determinant result.
Here is the quote from Pages 100-101 of Finite-Dimensional Vector Spaces by Halmos:
Here is another useful fact about determinants. If $M$ is a subspace invariant under $A$, if $B$ is the transformation $A$ considered on $M$ only, and if $C$ is the quotient transformation $A / M$, then
$$\det A = \det B \cdot \det C.$$
This multiplicative relation holds if, in particular, A is the direct sum of two transformations B and C. The proof can be based directly on the definition of determinants, or, alternatively, on the expansion obtained in the preceding paragraph.
What I am confused about is how you can use the definition of determinants to conclude this result.
In this book, the determinant of a linear transformation $A$ on an $n$-dimensional vector space $V$ is defined as the scalar $\delta$ such that $w(Ax_1, \ldots, Ax_n) = \delta \cdot w(x_1, \ldots, x_n)$ for all alternating $n$-linear forms $w$ and all vectors $x_1, \ldots, x_n$ in $V$.
It is then shown that, by fixing a coordinate system (or basis) and letting $\alpha_{ij}$ be the entries of the matrix of the linear transformation in that coordinate system, the determinant of the linear transformation $A$ in that coordinate system is:
$$\det A = \sum_{\pi} (\operatorname{sgn} \pi)\, \alpha_{\pi(1),1} \cdots \alpha_{\pi(n),n}$$
where the summation runs over all permutations $\pi \in S_n$.
I have been able to use the expression involving the coordinates to show this result, but I am not sure about how this would be done directly from the definition. I have tried looking at defining other alternating forms and using their product to show this, but I was not able to make much use of that approach.
Are there any suggestions for proving this result directly from the definition?
Edit: I would like to add that part of my confusion may stem from the fact that $A$, $B$, and $C$ are linear transformations on different vector spaces, and I am not sure how the definition can be used in this situation.
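For intuition (this is the coordinate argument, not the basis-free one the question asks for): if a basis of $M$ is extended to a basis of $V$, the matrix of $A$ is block upper triangular, with $B$ in the top-left block and the matrix of $C$ in the bottom-right. A quick numerical illustration, with all dimensions and blocks chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2  # dim V = 5, invariant subspace M of dimension 2

# If M = span(e_1, ..., e_k) is A-invariant, the matrix of A in this
# basis is block upper triangular:
B = rng.standard_normal((k, k))          # A restricted to M
C = rng.standard_normal((n - k, n - k))  # the quotient transformation A/M
X = rng.standard_normal((k, n - k))      # arbitrary coupling block
A = np.block([[B, X],
              [np.zeros((n - k, k)), C]])

print(np.linalg.det(A), np.linalg.det(B) * np.linalg.det(C))
```

The two printed values agree (up to floating point), matching $\det A = \det B \cdot \det C$; note the coupling block $X$ plays no role.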

Linear algebra · Open question
Malcolm Gregory 2022-09-01

Solving an Energy Minimization Problem, as Outlined in a Paper
The paper fairly succinctly boils the problem down to the following equation:
$$\frac{1}{2} x^T (M + h^2 L)\, x - h^2 x^T J d + x^T b$$
Where:
- the system consists of $m$ nodes and $s$ springs,
- $h$ is the timestep (constant),
- $x$ is a column vector of length $3m$ (or a $3m \times 1$ matrix) representing the node positions,
- $(M + h^2 L)$ is a matrix of size $3m \times 3m$ (which, in this case, is symmetric, positive semi-definite, and does not change after being initialized),
- $J$ is a matrix of size $3m \times 3s$,
- $d$ is a column vector of size $3s$ representing the spring directions (the expression above is equation 14 in section 4 of the paper), and
- $b$ is a column vector of length $3m$, which represents the xyz components of the external forces acting on each node.
I am at a stage where I can generate all of the vectors and matrices in code from an arbitrary mesh of nodes and links, but I am having trouble figuring out how to solve for $d$ and $x$. I have an initial guess for $x$ (as outlined towards the end of both sections 3 and 4), but I don't have much experience solving entire systems like this.
My main question has two parts:
- Is there some generic procedure one would use to go about solving such a system?
- If not, could some resources on the topic be suggested? Even some search terms more specific than "energy minimization" would be of great use!
Ultimately I just don't know where to start, and the paper seems to simply say "starting with an initial guess for x, first we compute the optimal d", but I don't know what "compute the optimal d" entails.
Similarly, I wouldn't know where to start with computing the optimal values for x. I assume this would involve differentiating w.r.t. x and finding the values when the derivative equals zero but again, I have no idea how to apply this to entire matrices of values, as opposed to single values.
I am aware that each term in the above equation should evaluate to a scalar value, but I don't know how I would be able to obtain a value for each row/cell in the column vectors x and d.
NOTE: I believe there are one or two minor errors in the paper, such as $d \in \mathbb{R}^{2s}$, which, I believe, should be $d \in \mathbb{R}^{3s}$, since we are dealing with springs in 3 dimensions.
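Objectives of this shape are commonly attacked by block coordinate descent (alternating minimization): with $x$ fixed, the optimal $d$ decouples per spring (each $d_i$ is the rest length times the unit vector along the current spring); with $d$ fixed, the objective is quadratic in $x$, so setting its gradient to zero gives the linear system $(M + h^2 L)\,x = h^2 J d - b$. The sketch below is an illustration under my own assumptions about how $L$ and $J$ are assembled from stiffness-weighted incidence vectors; it is not a transcription of the paper, and the names (`springs`, `energy`, the constants) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
m, h = 4, 0.01                      # 4 nodes, constant timestep
springs = [(0, 1), (1, 2), (2, 3)]  # (i, j) node-index pairs
s = len(springs)
k = np.full(s, 100.0)               # spring stiffnesses (assumed)
r = np.full(s, 1.0)                 # spring rest lengths (assumed)
I3 = np.eye(3)

# Assemble M, L, J (assumption: unit masses; L and J built from
# stiffness-weighted incidence vectors, one per spring).
M = np.eye(3 * m)
L = np.zeros((3 * m, 3 * m))
J = np.zeros((3 * m, 3 * s))
for idx, (i, j) in enumerate(springs):
    a = np.zeros(m); a[i], a[j] = 1.0, -1.0   # incidence vector
    e = np.zeros(s); e[idx] = 1.0
    L += k[idx] * np.kron(np.outer(a, a), I3)
    J += k[idx] * np.kron(np.outer(a, e), I3)

b = 0.1 * rng.standard_normal(3 * m)  # external-force term
x = rng.standard_normal(3 * m)        # initial guess for positions

def energy(x, d):
    return 0.5 * x @ (M + h**2 * L) @ x - h**2 * x @ J @ d + x @ b

system = M + h**2 * L  # constant, so in practice factor it once
energies = []
for _ in range(20):
    # Local step: with x fixed, the optimal d_i is the rest length
    # times the unit vector along the current spring.
    d = np.zeros(3 * s)
    for idx, (i, j) in enumerate(springs):
        u = x[3*i:3*i+3] - x[3*j:3*j+3]
        d[3*idx:3*idx+3] = r[idx] * u / np.linalg.norm(u)
    # Global step: with d fixed, minimize the quadratic in x.
    x = np.linalg.solve(system, h**2 * J @ d - b)
    energies.append(energy(x, d))
```

Each step minimizes the objective over one block of variables, so the recorded energies are non-increasing; that monotone decrease is also a useful debugging check on your own implementation. Useful search terms: "block coordinate descent", "alternating minimization", "local-global solver".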

Coordinate system examples come up in college geometry and in the work of architects and 3D designers, who deal with Euclidean space and related objects. The solutions and answers presented below also draw on linear algebra for various calculations. Look through the list of questions: they contain coordinate system equations that can help you work out how to solve your own problem. Start with the given coordinates, establish the positions of the existing points, and adapt your task accordingly by learning from the answers provided.