
Deanna Gregory


Answered question

2022-09-30

In a longer derivation I ran into the following quantity:
$\nabla\left[\nabla\cdot\left(\mathbf{r}_0\, e^{i\mathbf{k}\cdot\mathbf{r}}\right)\right]$
(i.e., the gradient of the divergence), where $\mathbf{k}$ is a vector of constants and $\mathbf{r}$ is a position vector.
Can someone explain how to calculate this? I am hoping it gives:
$\nabla\left[\nabla\cdot\left(\mathbf{r}_0\, e^{i\mathbf{k}\cdot\mathbf{r}}\right)\right] = -\mathbf{k}\left(\mathbf{k}\cdot\mathbf{r}_0\right)$

Answer & Explanation

Emmanuel Russo


Beginner · 2022-10-01 · Added 9 answers

Summing over repeated indices, the divergence is $r_{0i}\,\partial_i\, e^{i k_j r_j} = r_{0i}\, i k_i\, e^{i k_j r_j} = i\left(\mathbf{k}\cdot\mathbf{r}_0\right) e^{i k_j r_j}$. Applying $\partial_l$ pulls down another $i k_l$ factor, so the gradient is $-\mathbf{k}\left(\mathbf{k}\cdot\mathbf{r}_0\right) e^{i\mathbf{k}\cdot\mathbf{r}}$. Your desired result drops the exponential, which I suspect is a typo.
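The index-notation result above can be sanity-checked numerically with central finite differences; a minimal sketch, where the sample values of $\mathbf{k}$, $\mathbf{r}_0$, and the evaluation point are my own arbitrary choices:

```python
import numpy as np

# Arbitrary sample values (any constants would do).
k  = np.array([0.3, -1.2, 0.7])   # constant wavevector
r0 = np.array([1.5, 0.2, -0.4])   # constant vector r_0
p  = np.array([0.1, 0.5, -0.3])   # evaluation point r
h  = 1e-6                         # finite-difference step

def F(r):
    """The vector field r_0 * exp(i k . r)."""
    return r0 * np.exp(1j * (k @ r))

# Central-difference divergence: sum_i dF_i/dx_i at point p.
div = sum(
    (F(p + h * e)[i] - F(p - h * e)[i]) / (2 * h)
    for i, e in enumerate(np.eye(3))
)

# Closed form from the answer: i (k . r0) exp(i k . r).
closed_form = 1j * (k @ r0) * np.exp(1j * (k @ p))
assert abs(div - closed_form) < 1e-6
```

The same trick (differencing the divergence once more) confirms the full gradient-of-divergence expression, exponential factor included.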
ter3k4w8x


Beginner · 2022-10-02 · Added 4 answers

$\nabla\left[\nabla\cdot\left(\mathbf{r}_0\, e^{i\mathbf{k}\cdot\mathbf{r}}\right)\right] = \nabla\Big[\underbrace{\left(\nabla\cdot\mathbf{r}_0\right)}_{=\,0} e^{i\mathbf{k}\cdot\mathbf{r}} + \mathbf{r}_0\cdot\nabla\left(e^{i\mathbf{k}\cdot\mathbf{r}}\right)\Big] = \nabla\left[i\left(\mathbf{r}_0\cdot\mathbf{k}\right) e^{i\mathbf{k}\cdot\mathbf{r}}\right] = i\left(\mathbf{r}_0\cdot\mathbf{k}\right)\nabla\left(e^{i\mathbf{k}\cdot\mathbf{r}}\right) = i\left(\mathbf{r}_0\cdot\mathbf{k}\right)\left(i\mathbf{k}\, e^{i\mathbf{k}\cdot\mathbf{r}}\right) = -\mathbf{k}\left(\mathbf{k}\cdot\mathbf{r}_0\right) e^{i\mathbf{k}\cdot\mathbf{r}}$
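The whole chain of steps can also be verified symbolically; a minimal sketch using SymPy, where the component names (`a1..a3` for $\mathbf{r}_0$, `k1..k3` for $\mathbf{k}$) are my own:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)  # components of k
a1, a2, a3 = sp.symbols('a1 a2 a3', real=True)  # components of r_0

r  = sp.Matrix([x, y, z])
k  = sp.Matrix([k1, k2, k3])
r0 = sp.Matrix([a1, a2, a3])

phase = sp.exp(sp.I * k.dot(r))
F = r0 * phase  # the vector field r_0 exp(i k . r)

# Divergence, then gradient of the resulting scalar.
div = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
grad_div = sp.Matrix([sp.diff(div, v) for v in (x, y, z)])

# Expected closed form: -k (k . r_0) exp(i k . r).
expected = -k * k.dot(r0) * phase
assert sp.simplify(grad_div - expected) == sp.zeros(3, 1)
```

Since $\mathbf{r}_0$ is constant, the $\left(\nabla\cdot\mathbf{r}_0\right)$ term vanishes identically, which is why the product rule reduces to a single term in the derivation above.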
