Max Macias
2022-08-14
Answered

A light-year (ly) is the distance that light travels in one year. The speed of light is $3.00\cdot {10}^{8}$ m/s. How many miles are there in 1.00 ly? ($1.00$ mi $=1.609$ km, and one year is 365.25 days.) Show all the unit conversion factors you use and the full calculation. Express your calculations and final answer using powers of ten.


Kyle George

Answered 2022-08-15
Author has **22** answers

$1\text{ light-year}=3.00\times {10}^{8}\,\frac{\text{m}}{\text{s}}\times 3600\,\frac{\text{s}}{\text{hr}}\times 24\,\frac{\text{hr}}{\text{day}}\times 365.25\,\frac{\text{days}}{\text{yr}}=9.467\times {10}^{15}\text{ m}$

$1\text{ mi}=1.609\text{ km}=1.609\times {10}^{3}\text{ m}\Rightarrow 1\text{ m}=\frac{1\text{ mi}}{1.609\times {10}^{3}}$

$\Rightarrow 1\text{ light-year}=9.467\times {10}^{15}\text{ m}\times \frac{1\text{ mi}}{1.609\times {10}^{3}\text{ m}}=5.88\times {10}^{12}\text{ mi}$
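The arithmetic above can be sanity-checked numerically; this is a quick sketch, not part of the original solution:

```python
# Sanity check of the light-year -> miles conversion.
c = 3.00e8                               # speed of light, m/s (value given in the problem)
seconds_per_year = 3600 * 24 * 365.25    # (s/hr) * (hr/day) * (days/yr)
meters_per_mile = 1.609e3                # 1.00 mi = 1.609 km = 1.609e3 m

light_year_m = c * seconds_per_year      # distance light travels in one year, in meters
light_year_mi = light_year_m / meters_per_mile

print(f"{light_year_m:.3e} m")           # 9.467e+15 m
print(f"{light_year_mi:.2e} mi")         # 5.88e+12 mi
```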

asked 2022-07-02

Let ${f}_{n}$ and $f$ be functions in ${L}_{1}({\mathbb{R}}^{n})$ such that $\underset{n\to \mathrm{\infty}}{lim}{f}_{n}=f$ almost everywhere in ${\mathbb{R}}^{n}$ and ${\int}_{{\mathbb{R}}^{n}}|{f}_{n}|\,dx\to {\int}_{{\mathbb{R}}^{n}}|f|\,dx$. Show that for every Lebesgue-measurable set $A\subset {\mathbb{R}}^{n}$,

${\int}_{A}{f}_{n}\text{}dx\to {\int}_{A}f\text{}dx.$

How can I solve this problem?

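For what it's worth, one standard route here is Scheffé's lemma: almost-everywhere convergence together with convergence of the $L^1$ norms implies $\|f_n-f\|_1\to 0$. A sketch of the key estimate, assuming that lemma is available:

```latex
% Scheffe's lemma gives \int |f_n - f| dx -> 0, and then for every
% measurable A the claim follows from the triangle inequality:
\[
  \left|\int_A f_n\,dx-\int_A f\,dx\right|
  \le\int_A|f_n-f|\,dx
  \le\int_{\mathbb{R}^n}|f_n-f|\,dx
  \longrightarrow 0.
\]
```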

asked 2022-07-07

Given: Let $X$ be a sample from $P\in \mathcal{P}$, ${\delta}_{0}(X)$ be a decision rule (which may be randomized) in a problem with ${\mathbb{R}}^{k}$ as the action space, and $T$ be a sufficient statistic for $P\in \mathcal{P}$. For any Borel $A\subset {\mathbb{R}}^{k}$, define

${\delta}_{1}(T,A)=E[{\delta}_{0}(X,A)|T]$

Let $L(P,a)$ be a loss function. Show that

$\int L(P,a)d{\delta}_{1}(X,a)=E[\int L(P,a)d{\delta}_{0}(X,a)|T]$

My idea for proving this is to show that it holds for a simple function $L$ and then extend it to non-negative functions $L$ using the conditional Monotone Convergence Theorem. But I can't show the equality even for a simple function $L$. That is, if we take $L=\sum _{i=1}^{n}{c}_{i}{\mathbb{1}}_{{A}_{i}}$, how can I show that the given result holds?

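For the simple-function step, the definition of $\delta_1$ plus linearity of conditional expectation seems to be all that is needed. A sketch (writing the left-hand integral against $\delta_1(T,\cdot)$, which is how $\delta_1$ is defined above):

```latex
% For L = \sum_i c_i 1_{A_i} in the action variable a:
\[
  \int L(P,a)\,d\delta_1(T,a)
  =\sum_{i=1}^{n}c_i\,\delta_1(T,A_i)
  =\sum_{i=1}^{n}c_i\,E[\delta_0(X,A_i)\mid T]
  =E\Bigl[\sum_{i=1}^{n}c_i\,\delta_0(X,A_i)\,\Big|\,T\Bigr]
  =E\Bigl[\int L(P,a)\,d\delta_0(X,a)\,\Big|\,T\Bigr].
\]
```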

asked 2021-02-26

Juan makes a measurement in a chemistry laboratory and records the result in his lab report. The standard deviation of students' lab measurements is

asked 2022-07-04

Let $({E}_{\alpha}{)}_{\alpha \in I}$ be a family of sets, where $I$ is an index set (for example, $[0,1]$). Can we define the corresponding liminf, for example,

$\underset{\alpha \searrow 0}{\liminf}\,{E}_{\alpha}=\bigcup _{r>0}\bigcap _{0<\alpha <r}{E}_{\alpha}.$

My desired result is the following continuous-parameter version of Fatou's lemma: let $\mu$ be a finite measure and each ${E}_{\alpha}$ be measurable; then

$\mu (\underset{\alpha \searrow 0}{\liminf}\,{E}_{\alpha})\le \underset{\alpha \searrow 0}{\liminf}\,\mu ({E}_{\alpha}).$

I am not sure whether these make sense. Any comments are welcome! Many thanks!

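Along any fixed sequence $a_n\searrow 0$, the countable analogue does hold by continuity from below; a sketch. The continuous-parameter statement additionally needs the uncountable intersections $\bigcap_{0<\alpha<r}E_\alpha$ to be measurable, which is not automatic:

```latex
% The sets B_n = \cap_{k >= n} E_{a_k} increase with n, and B_n \subset E_{a_n}:
\[
  \mu\Bigl(\bigcup_{n}\bigcap_{k\ge n}E_{a_k}\Bigr)
  =\lim_{n\to\infty}\mu\Bigl(\bigcap_{k\ge n}E_{a_k}\Bigr)
  \le\liminf_{n\to\infty}\mu(E_{a_n}).
\]
```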

asked 2022-04-03

Suppose the current measurements made on a conductor wire follow a normal distribution with mean 10 milliamperes and variance 4 (milliamperes)$^2$.

a) What is the probability that the value of a measurement is less than 9 milliamperes?

b) What is the probability that the value of a measurement is greater than 13 milliamperes?

c) What is the probability that the value of a current measurement is between 9 and 11 milliamperes?

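These probabilities can be read off standard normal tables or computed numerically; a sketch (variance 4 means $\sigma = 2$ mA):

```python
from math import erf, sqrt

def norm_cdf(x, mu=10.0, sigma=2.0):
    """CDF of N(mu, sigma^2), computed via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

p_a = norm_cdf(9.0)                    # P(X < 9)      ~ 0.3085
p_b = 1.0 - norm_cdf(13.0)             # P(X > 13)     ~ 0.0668
p_c = norm_cdf(11.0) - norm_cdf(9.0)   # P(9 < X < 11) ~ 0.3829
print(p_a, p_b, p_c)
```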

asked 2022-04-10

Suppose that $X$ is a real-valued random variable on the probability space $(\mathrm{\Omega},\mathcal{F},\mathbb{P})$ with cumulative distribution function ${F}_{X}(x)=\mathbb{P}[X\le x]$. Can we conclude from some measure-theoretic property that $\mathbb{P}[X\le x]=\mathbb{P}[X<x]$? Singletons certainly have measure zero under many well-known measures, but can we conclude in general that removing a finite number of points from $(-\mathrm{\infty},x]$ does not change the probability of $(-\mathrm{\infty},x]$?

I'm asking this because my reading material hasn't explicitly addressed it, and from other courses I know that $\mathbb{P}[-x\le X\le x]={F}_{X}(x)-{F}_{X}(-x)$; but arguing only from the general properties of probability measures I know of yields $\mathbb{P}[-x\le X\le x]=\mathbb{P}[X\le x]-\mathbb{P}[X<-x]$, where the right-hand side would simplify to involve only the CDF of $X$ if the finite difference of points didn't matter.

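In general the two probabilities differ exactly by the mass of the singleton; a one-line decomposition:

```latex
% Countable additivity gives
\[
  \mathbb{P}[X\le x]=\mathbb{P}[X<x]+\mathbb{P}[X=x],
\]
% so equality holds iff F_X is continuous at x. A point mass at x
% (e.g. P[X = x] = 1) shows it can fail in general.
```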

asked 2022-06-10

Assume there is a variable I want to measure, but I have only a very noisy instrument to do so, so I take multiple measurements to have a better chance of recovering the state of the variable. With each measurement, my instrument reports the result as a Gaussian distribution, with the mean being the most likely state of the variable and the standard deviation suggesting a rough region in which the state lies.

My problem is that I don't know how to combine these multiple measurements into a sensible answer. My guess is that I should be able to get a new Gaussian from these results, with the mean centered at the expected value of the state of the variable and a standard deviation reflecting how confident I am in the result.

I tried to teach myself about Gaussians and probabilities, but I just couldn't get my head around it. Please can someone help me?

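One common recipe for exactly this (a sketch of inverse-variance weighting; it assumes the measurement errors are independent) weights each reading $(\mu_i,\sigma_i)$ by its precision $1/\sigma_i^2$, which yields a fused Gaussian whose variance shrinks as readings accumulate:

```python
# Inverse-variance (precision) weighting: fuse independent Gaussian
# measurements (mean_i, stddev_i) into a single Gaussian estimate.
def fuse(measurements):
    """measurements: list of (mean, stddev) pairs; returns (mean, stddev)."""
    precisions = [1.0 / s**2 for _, s in measurements]
    total_precision = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(measurements, precisions)) / total_precision
    return mean, total_precision ** -0.5

# Two equally noisy readings of the same quantity: the fused standard
# deviation (~0.707) is smaller than either individual one (1.0).
fused_mean, fused_std = fuse([(10.2, 1.0), (9.8, 1.0)])
print(fused_mean, fused_std)
```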