A light-year (ly) is the distance that light travels in one year. The speed of light is 3.00 × 10^8 m/s. How many miles are there in 1.00 ly? (1.00 mi = 1.609 km, and one year is 365.25 days.) Show all the unit conversion factors you use and the full calculation, and express your calculations and final answer using powers of ten.

Max Macias asked 2022-08-14
Answers (1)

Kyle George
Answered 2022-08-15
A light-year is the speed of light multiplied by one year, with the year converted to seconds:

$$1\ \text{ly} = \left(3.00\times 10^{8}\ \tfrac{\text{m}}{\text{s}}\right)\left(\tfrac{3600\ \text{s}}{1\ \text{hr}}\right)\left(\tfrac{24\ \text{hr}}{1\ \text{day}}\right)\left(\tfrac{365.25\ \text{days}}{1\ \text{yr}}\right)(1\ \text{yr}) = 9.47\times 10^{15}\ \text{m}$$

The metre-to-mile conversion factor follows from $1.00\ \text{mi} = 1.609\ \text{km} = 1.609\times 10^{3}\ \text{m}$, so

$$1\ \text{ly} = 9.47\times 10^{15}\ \text{m}\times\frac{1.00\ \text{mi}}{1.609\times 10^{3}\ \text{m}} = 5.88\times 10^{12}\ \text{mi}$$
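If you want to check the arithmetic, here is a minimal Python sketch of the same conversion, using only the values given in the problem statement (the variable names are illustrative, not part of the original answer):

# Light-year to miles, using the values from the problem statement.
speed_of_light = 3.00e8                  # m/s
seconds_per_year = 3600 * 24 * 365.25    # (s/hr) * (hr/day) * (days/yr)
meters_per_mile = 1.609e3                # 1.00 mi = 1.609 km = 1.609e3 m

light_year_in_meters = speed_of_light * seconds_per_year      # ~9.47e15 m
light_year_in_miles = light_year_in_meters / meters_per_mile  # ~5.88e12 mi

print(f"1 ly = {light_year_in_meters:.2e} m = {light_year_in_miles:.2e} mi")

Running this prints 1 ly = 9.47e+15 m = 5.88e+12 mi, matching the hand calculation above.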