Boost Your Understanding of Linear Equation Forms

Recent questions in Forms of linear equations
Linear algebraAnswered question
Ryan Reynolds 2022-05-27

I would like to solve the following coupled system of linear PDEs by separation of variables, where $a$ and $b$ are constants:
$$u_t = \frac{b-a}{a+b}\,u + (b+a)^2\,v + \frac{\partial^2 u}{\partial x^2}$$
$$v_t = -\frac{2b}{a+b}\,u - (b+a)^2\,v + d\,\frac{\partial^2 v}{\partial x^2}$$
Or, in matrix form:
$$w_t = A w + D w_{xx}$$
where $w = \begin{pmatrix} u \\ v \end{pmatrix}$, $A = \begin{pmatrix} \frac{b-a}{a+b} & (b+a)^2 \\ -\frac{2b}{a+b} & -(b+a)^2 \end{pmatrix}$, $D = \begin{pmatrix} 1 & 0 \\ 0 & d \end{pmatrix}$.
Since the system is linear, one should be able to apply the method of separation of variables. I've gone through the exercise of separating variables for single-variable models in one or more spatial dimensions many times. However, I've never seen a completely worked example with two coupled equations. My intuition tells me to start with:
$$w = \begin{pmatrix} u(x,t) \\ v(x,t) \end{pmatrix} = \begin{pmatrix} \phi_1(x)\,h_1(t) \\ \phi_2(x)\,h_2(t) \end{pmatrix}$$
Differentiating with respect to $x$ and $t$, I obtain:
$$\begin{pmatrix} \phi_1(x)\,h_1'(t) \\ \phi_2(x)\,h_2'(t) \end{pmatrix} = A \begin{pmatrix} \phi_1(x)\,h_1(t) \\ \phi_2(x)\,h_2(t) \end{pmatrix} + D \begin{pmatrix} \phi_1''(x)\,h_1(t) \\ \phi_2''(x)\,h_2(t) \end{pmatrix}$$
But at this point I get stuck, because expanding this seems to make the problem more difficult.
Textbooks/papers in mathematical biology solve this problem, which is the linearized form of a reaction-diffusion model, by immediately assuming particular solutions of the form:
$$w = \begin{pmatrix} u(x,t) \\ v(x,t) \end{pmatrix} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} \cos(kx)\,e^{\lambda t},$$ where $k$ is the wavenumber and $\lambda$ is the eigenvalue.
I find this really unsatisfying, because it seems to come out of nowhere.
My questions are:
1. Is there a general method for separating variables in coupled linear PDEs or PDEs written in vector form?
2. Is there a book/paper/tutorial that I can use to help me work this out in all of its details?
3. Is there some deeper theory that I need to first understand?
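One way to see what the plane-wave ansatz buys you: substituting $w = \alpha\cos(kx)e^{\lambda t}$ into $w_t = Aw + Dw_{xx}$ turns the PDE into the algebraic eigenvalue problem $(A - k^2 D)\alpha = \lambda\alpha$. A minimal numerical sketch of that reduction (the parameter values for $a$, $b$, $d$ are arbitrary illustrations, not from the question):

```python
import numpy as np

# Illustrative parameter values (chosen for this sketch, not from the question)
a, b, d = 0.2, 1.3, 10.0

# Jacobian A and diffusion matrix D of the linearized system
A = np.array([[(b - a) / (a + b), (a + b) ** 2],
              [-2 * b / (a + b), -((a + b) ** 2)]])
D = np.diag([1.0, d])

def growth_rates(k):
    """Eigenvalues lambda of A - k^2 D.

    Substituting w = alpha * cos(k x) * exp(lambda t) into
    w_t = A w + D w_xx gives lambda * alpha = (A - k^2 D) alpha,
    so each wavenumber k yields an ordinary eigenvalue problem.
    """
    return np.linalg.eigvals(A - k ** 2 * D)

for k in (0.0, 0.5, 1.0):
    print(f"k = {k}: lambda = {growth_rates(k)}")
```

The general solution is then a superposition of these modes over the wavenumbers $k$ allowed by the boundary conditions, which is why the textbook ansatz is not really pulled out of thin air: it is one Fourier mode of the separated solution.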

Linear algebraAnswered question
lurtzslikgtgjd 2022-05-17

I am trying to show that the functions $t^3$ and $|t|^3$ are linearly independent on the whole real line. To do this, I try to prove it by contradiction. So assume that they are dependent. Then there must exist constants $a$ and $b$, not both zero, such that $a t^3 + b|t|^3 = 0$ for all $t \in (-\infty, \infty)$. Now pick two points $x$ and $y$ in this interval and assume without loss of generality that $x < 0$, $y \geq 0$. Now form the simultaneous linear equations
$$a x^3 + b|x|^3 = 0$$
$$a y^3 + b|y|^3 = 0,$$ viz.
$$\begin{pmatrix} x^3 & |x|^3 \\ y^3 & |y|^3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Now here's my problem. If I look at the determinant of the coefficient matrix of this system of linear equations, namely $x^3|y|^3 - y^3|x|^3$, and note that $x < 0$ and $y > 0$, I have that the determinant is non-zero, which implies that the only solution is $a = b = 0$, i.e. the functions $t^3$ and $|t|^3$ are linearly independent. However, what happens if indeed $y = 0$? Then the determinant of the matrix is $0$ and I have a problem.
Is there something that I am not getting from the definition of linear independence?
The definition (I hope I state this correctly) is: if $f$ and $g$ are two functions such that the only solution to $af + bg = 0$ for all $t$ in an interval $I$ is $a = b = 0$, then the two functions are linearly independent.
But what happens if my functions pass through the origin, like the ones above? Then I've just shown that there exists a $t$ in an interval containing zero at which both functions vanish, viz. at that point any $a$ and $b$ satisfy $af + bg = 0$.
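The determinant argument in the question can be checked numerically. A minimal sketch (the sample points are arbitrary illustrations, not from the question) showing that the determinant $x^3|y|^3 - y^3|x|^3$ is nonzero when $x < 0$ and $y > 0$, but vanishes when $y = 0$:

```python
def f(t):
    return t ** 3

def g(t):
    return abs(t) ** 3

def det(x, y):
    """Determinant x^3*|y|^3 - y^3*|x|^3 of the coefficient matrix
    [[f(x), g(x)], [f(y), g(y)]] of the two-point system."""
    return f(x) * g(y) - f(y) * g(x)

# x < 0, y > 0: the determinant is nonzero, forcing a = b = 0
print(det(-1.0, 2.0))  # -16.0

# y = 0: the second row of the matrix is all zeros, so the
# determinant vanishes and this pair of points alone cannot
# force a = b = 0 -- one must pick a different y > 0 instead
print(det(-1.0, 0.0))  # 0.0
```

This illustrates the resolution: linear independence only requires that *some* choice of points yields a nonzero determinant, not every choice, so the degenerate pair with $y = 0$ is simply an uninformative selection.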

Students dealing with post-secondary algebra will know that worked examples of linear equations in standard form are quite hard to find, let alone solutions to the algebraic problems built on them. Luckily, you will find suitable answers to your standard-form tasks by turning to the provided solutions. Even though several online calculators exist, college professors will require a verbal or written explanation, which is where the provided forms-of-linear-equations answers help you see the reasoning and avoid trouble.