Let $f, g$ be $L^1(\mathbb{R})$ functions

Erin Lozano 2022-06-26 Answered
Let $f, g$ be $L^1(\mathbb{R})$ functions with Lebesgue measure. Define $f_t(x) = f(x/t)/t$. Prove that $f_t * g$ converges to $a\,g$ in $L^1$ as $t \to 0^+$, where $a = \int_{\mathbb{R}} f(x)\,dx$.

My approach in brief: since $f, g \in L^1$, by the Tonelli–Fubini theorem we can show
$$\int_{\mathbb{R}} f_t * g \, dx = \left( \int_{\mathbb{R}} f(x)\,dx \right) \int_{\mathbb{R}} g(x)\,dx,$$
hence
$$\int_{\mathbb{R}} \left( f_t * g(x) - \Big( \int_{\mathbb{R}} f(x)\,dx \Big)\, g(x) \right) dx = 0.$$
Therefore,
$$\int_{\mathbb{R}} \big| f_t * g(x) - a\, g(x) \big| \, dx = 0.$$
I feel something is wrong with my approach. Please correct me and give me some hints for solving this simple question.
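For what it's worth, the gap can be isolated in one line: $\int_{\mathbb{R}} h\,dx = 0$ does not imply $\int_{\mathbb{R}} |h|\,dx = 0$, since cancellation can hide mass. A minimal counterexample:

```latex
% \int h = 0 does not force \int |h| = 0:
h(x) = x \cdot \mathbf{1}_{[-1,1]}(x)
\quad\Longrightarrow\quad
\int_{\mathbb{R}} h(x)\,dx = 0
\quad\text{but}\quad
\int_{\mathbb{R}} |h(x)|\,dx = 1.
```

So the Fubini computation only shows that $f_t * g - a\,g$ has integral zero, not that its $L^1$ norm vanishes.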

Answers (2)

Paxton James
Answered 2022-06-27 Author has 25 answers
We have (using $\int_{\mathbb{R}} f_t(y)\,dy = a$)
$$q(t) := \int_{\mathbb{R}} \big| f_t * g(x) - a\,g(x) \big|\,dx \le \int_{\mathbb{R}} |f_t(y)| \left( \int_{\mathbb{R}} |g(x-y) - g(x)|\,dx \right) dy =: \int_{\mathbb{R}} |f_t(y)|\,\varphi(y)\,dy.$$
Suppose that $g$ is the indicator function of an interval $[\alpha, \beta]$; then $\varphi(y) = 2\min(|y|, \beta - \alpha)$, hence
$$q(t) \le 2 \int_{\mathbb{R}} |f(s)| \min(t|s|, \beta - \alpha)\,ds,$$
which converges to $0$ by the monotone convergence theorem as $t \to 0^+$.
The general result follows by density of the space of step functions in $L^1(\mathbb{R})$.
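The density step can be sketched as follows, using Young's inequality $\|f_t * u\|_1 \le \|f_t\|_1 \|u\|_1 = \|f\|_1 \|u\|_1$:

```latex
% Given \varepsilon > 0, pick a step function h with \|g - h\|_1 < \varepsilon. Then
\|f_t * g - a g\|_1
\le \|f_t * (g - h)\|_1 + \|f_t * h - a h\|_1 + |a|\,\|h - g\|_1
\le \|f\|_1\,\varepsilon + \|f_t * h - a h\|_1 + |a|\,\varepsilon,
% and the middle term tends to 0 by the interval case and linearity, so
\limsup_{t \to 0^+} \|f_t * g - a g\|_1 \le (\|f\|_1 + |a|)\,\varepsilon .
```

Since $\varepsilon > 0$ was arbitrary, $q(t) \to 0$.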
April Bush
Answered 2022-06-28 Author has 6 answers


You might be interested in

asked 2022-07-09
I'm studying Pavliotis' Stochastic Processes and I am having trouble with one of the exercises.
Specifically, the first exercise of chapter two is to prove that the Markov property in the sense of the immediate future being independent of the past given the present, i.e. $P(X_{n+1} \mid X_1, \ldots, X_n) = P(X_{n+1} \mid X_n)$, implies that arbitrary futures are independent of the past given the present, i.e. $P(X_{n+m} \mid X_1, \ldots, X_n) = P(X_{n+m} \mid X_n)$.
Conceptually I imagine a proof by induction along the lines of: assume the equality holds for some $m$; then
$$\begin{aligned}
P(X_{n+m+1} \mid X_1, \ldots, X_n)
&= \int P(X_{n+m+1}, X_{n+m} = x \mid X_1, \ldots, X_n)\,dx \\
&= \int P(X_{n+m+1} \mid X_{n+m} = x)\, P(X_{n+m} = x \mid X_1, \ldots, X_n)\,dx \\
&= \int P(X_{n+m+1} \mid X_{n+m} = x)\, P(X_{n+m} = x \mid X_n)\,dx \\
&= \int P(X_{n+m+1}, X_{n+m} = x \mid X_n)\,dx \\
&= P(X_{n+m+1} \mid X_n).
\end{aligned}$$
However, this is based on intuition and my experience with non-measure-theoretic probability theory. I cannot justify these steps when I think of $P$ as a measure and $P(X = x)$ as meaning $P(X^{-1}(\{x\}))$. In fact, I have had trouble finding a definition of conditional probability that is at all helpful. Most authors seem to provide the abstract definition in terms of conditional expectation with respect to a $\sigma$-algebra, but I haven't found any resources that show how to work with this definition.
So my question is: how (if at all) are these steps, particularly the first two equalities, justified from a measure-theoretic perspective?
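One way to make the induction rigorous is to phrase it with conditional expectations rather than densities; a sketch, writing $\mathcal{F}_n := \sigma(X_1, \ldots, X_n)$ and taking $A$ measurable:

```latex
\begin{align*}
P(X_{n+m+1} \in A \mid \mathcal{F}_n)
  &= E\big[\, P(X_{n+m+1} \in A \mid \mathcal{F}_{n+m}) \,\big|\, \mathcal{F}_n \big]
     && \text{tower property, } \mathcal{F}_n \subseteq \mathcal{F}_{n+m} \\
  &= E\big[\, P(X_{n+m+1} \in A \mid X_{n+m}) \,\big|\, \mathcal{F}_n \big]
     && \text{Markov property.}
\end{align*}
```

The inner term is a measurable function of $X_{n+m}$, so the induction hypothesis applies to its conditional expectation given $\mathcal{F}_n$; the "$\int \cdots\,dx$" steps above are exactly the tower property in disguise.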
asked 2022-06-11
I am trying to understand the Borel–Cantelli lemma; however, I cannot understand what exactly "infinitely often" means. Can someone please explain "infinitely often" with a simple example?
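For reference, "infinitely often" has a precise set-theoretic meaning:

```latex
% \omega \in \limsup_n A_n iff \omega lies in infinitely many of the A_n:
\limsup_{n \to \infty} A_n
:= \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k
= \{\, \omega : \omega \in A_k \text{ for infinitely many } k \,\}.
```

For example, if $A_n$ is the event that the $n$-th toss of a fair coin lands heads, then $\{A_n \text{ i.o.}\}$ is the event that heads occurs infinitely many times; since the $A_n$ are independent and $\sum_n P(A_n) = \infty$, the second Borel–Cantelli lemma gives $P(A_n \text{ i.o.}) = 1$.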
asked 2022-05-12
We can show the following: if $E$ is a normed $\mathbb{R}$-vector space, $x : [0, \infty) \to E$ is càdlàg, $B \subseteq E \setminus \{0\}$, $\tau_0 := 0$ and
$$\tau_n := \inf \underbrace{\{\, t > \tau_{n-1} : \Delta x(t) \in B \,\}}_{=:\, I_n}$$
for $n \in \mathbb{N}$, then

1. $\tau_1 \in (0, \infty]$;
2. if $n \in \mathbb{N}$, then either $I_n = \emptyset$ and hence $\tau_n = \infty$, or $\tau_n \in I_n$.

Now if $X$ is any $E$-valued càdlàg Lévy process on a filtered probability space $(\Omega, \mathcal{A}, (\mathcal{F}_t)_{t \ge 0}, P)$, we can similarly define $I_n$ and $\tau_n$ with $x$ replaced by $X$.
$(\Delta X_t)_{t \ge 0}$ is clearly $\mathcal{F}$-adapted, and $(X_{t_1 + s} - X_{t_1})_{s \ge 0}$ is a Lévy process with respect to the filtration $(\mathcal{F}_{t_1 + s})_{s \ge 0}$ with the same distribution as $X$ for all $t_1 > 0$.

Are we able to show that $\tau_1$ is $\mathcal{A}$-measurable? Or are we even able to show that $\tau_1$ is measurable with respect to the right-continuous filtration $\mathcal{F}_{t^+} := \bigcap_{\varepsilon > 0} \mathcal{F}_{t + \varepsilon}$? The latter is equivalent to showing that $\{\tau_1 < t\} \in \mathcal{F}_t$ for all $t > 0$. Can we show this?
asked 2022-05-13
Let $\{f_k\}_{k=1}^{\infty}$ be a sequence of $S$-measurable real-valued functions on a measure space $(M, S)$. Then the functions
$$\limsup_{k \to \infty} f_k(x) \quad \text{and} \quad \liminf_{k \to \infty} f_k(x)$$
are also $S$-measurable.
I have already proven this statement, and my question is about something else. The script says that the reader may use the hint:
For a sequence $\{a_k\}_{k=1}^{\infty}$ of real numbers the following is true:
$$\limsup_{k \to \infty} a_k = \lim_{k \to \infty} \lim_{m \to \infty} \max\{a_k, a_{k+1}, \ldots, a_m\}.$$
Now, I have not used this in my proof, but I can assure you that my proof is airtight nonetheless. Anyway, can someone point out why this holds? I have not seen this equality before and did not know that one can represent $\limsup$ and $\liminf$ like that. I know that for the set of limit points $H$ we can say that $\limsup_k a_k = \max H$, and correspondingly for $\liminf$ and $\min H$. I do not even see how this property could be useful for our proof.
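As for why the hint's identity holds, a two-step sketch:

```latex
% Step 1: for fixed k, the partial maxima increase to the tail supremum:
\lim_{m \to \infty} \max\{a_k, a_{k+1}, \ldots, a_m\} = \sup_{j \ge k} a_j .
% Step 2: the tail suprema b_k := \sup_{j \ge k} a_j are nonincreasing in k, so
\lim_{k \to \infty} b_k = \inf_{k \ge 1} \sup_{j \ge k} a_j = \limsup_{k \to \infty} a_k .
```

The outer limit exists in $[-\infty, \infty]$ by monotonicity, which is exactly what makes this representation convenient for measurability proofs: it writes $\limsup$ as a countable combination of maxima.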
asked 2022-07-13
I measured something $N$ times using different measurement techniques. Each measurement technique $i$ has a known variance $\sigma_i^2$. So every measurement is $x_i = \hat{x} + \epsilon_i$, where $\hat{x}$ is the true value and $\epsilon_i$ is drawn from a normal distribution with mean $0$ and variance $\sigma_i^2$.
Ok, I have all these measurements. Now I want to know the variance of my measurements taken together, taking into account the fact that I know the variance of all the measurement techniques. What's the best way to do this?
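Under the Gaussian model above, the standard approach is the inverse-variance weighted mean: weight each $x_i$ by $1/\sigma_i^2$, and the combined estimate has variance $1/\sum_i (1/\sigma_i^2)$. A minimal sketch (the function name is illustrative):

```python
def combine(measurements, variances):
    """Inverse-variance weighted mean of independent measurements.

    Assumes each measurement x_i = true value + Gaussian noise with
    known variance variances[i]. Returns (estimate, variance of estimate).
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    # Weighted mean: more precise techniques (smaller variance) count more.
    mean = sum(w * x for w, x in zip(weights, measurements)) / total
    # Variance of the combined estimate: 1 / sum of inverse variances.
    var = 1.0 / total
    return mean, var
```

For equal variances this reduces to the ordinary sample mean with variance $\sigma^2 / N$, which is a quick sanity check on the formula.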
asked 2022-05-25
Let $\mu$ be a finite premeasure on an algebra $S$, and $\mu^*$ the induced outer measure. Show that a subset $E$ of $X$ is $\mu^*$-measurable if and only if for each $\varepsilon > 0$ there is a set $A \in S_\delta$, $A \subseteq E$, such that $\mu^*(E \setminus A) < \varepsilon$.
Primarily my question is: is this even true? This seems like it's stated backward. I'm a noob, so maybe I have this wrong, but since $S_\delta$ is the set of all countable intersections of sets in $S$, I would have thought that we were looking for a set with $E \subseteq A$. When I try to do the proof, that seems like the direction that pops out naturally. Also, we claim "There is a $G_\delta$ set $G$ containing $E$ for which ..."
But when I look up the errata, this is not mentioned in it.
If the answer is "the book stated it correctly", then I'd appreciate a hint. I've tried using a cover $\{E_k\}$ of $E$ with $\sum_k \mu(E_k) < \mu^*(E) + \varepsilon$. Intersecting these seems like a dead end. I could union them, prove the theorem for $E \subseteq B$, and then try to use $B$ to construct $A$. But any way that I can see of doing that, I lose the property that $A \in S_\delta$.
asked 2022-06-16
Let $f_j \in L^p \cap L^1_{loc}$ and $g \in L^q$, where $p, q$ are conjugate exponents and $\|f_j\|_p < M$ for all $j$. I have that $\int_S f_j \to 0$ for all measurable $S \subseteq A$, where $A$ is a compact subset of $\mathbb{R}$, and I want to show $\int_A f_j g \to 0$.
Since $A$ is bounded, I would like to use $g \in L^q$ as if it gave $g \in L^\infty$ and write down something like $|\int_A f_j g| \le \|g\|_{L^\infty} \int |f_j|$. But this is false. I can only use the tools that I know if I control $|f_j|$, but I've already come up with counterexamples showing we have no control over that, so considering $|f_j g|$ is useless. Another idea was to use Cauchy–Schwarz, but we need not have $\int f_j^2 \to 0$.
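One standard route, sketched under the assumption $1 < p < \infty$ (so $q < \infty$ and simple functions are dense in $L^q(A)$):

```latex
% For simple s = \sum_i c_i \mathbf{1}_{S_i} with measurable S_i \subseteq A:
\int_A f_j\, s = \sum_i c_i \int_{S_i} f_j \longrightarrow 0 .
% For general g \in L^q, pick simple s with \|g - s\|_{L^q(A)} < \varepsilon; by Hölder,
\Big| \int_A f_j (g - s) \Big| \le \|f_j\|_p\, \|g - s\|_{L^q(A)} \le M \varepsilon ,
% so \limsup_j \big| \int_A f_j g \big| \le M \varepsilon for every \varepsilon > 0.
```

The uniform bound $\|f_j\|_p < M$ is what lets the approximation error be controlled independently of $j$.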