April Bush

2022-06-25

Properties of Finite Differences
I have seen this statement in different variants, but could not find a proof:
If the n-th order differences of equally spaced data are non-zero and constant then the data can be represented by a polynomial.
I interpret this statement more formally as follows:
Let $f:\mathbb{Z}\to\mathbb{Z}$ be any function such that the finite differences $D_r$, defined by
$D_{r+1}(x)=D_r(x+1)-D_r(x)$ for $r\ge 0$ (where $D_0:=f$ by convention),
are such that $D_{r_0}$ is constant and non-zero for some $r_0$ (and hence $D_r=0$ for all $r>r_0$). Then $f$ is a polynomial in $x$ (with rational coefficients) of degree $r_0$.
Question: Is this correct as I stated it? How does one prove that such f must be a polynomial?

trajeronls

Expert

Every polynomial of degree d can be expressed in the basis given by the Falling Factorials up to falling index d.
That is
$p(x)=a_d\,x^d+a_{d-1}\,x^{d-1}+\cdots+a_0\,x^0=b_d\,x^{\underline{d}}+b_{d-1}\,x^{\underline{d-1}}+\cdots+b_0\,x^{\underline{0}}$
Then, from the properties of the falling factorial, it is easy to demonstrate that
$\Delta^n p(0)=\sum_k (-1)^{n-k}\binom{n}{k}\,p(k)=\begin{cases}n!\,b_n & n\le d\\ 0 & d<n\end{cases}$
And vice versa: given
$\Delta^d p(x)=d!\,b_d\quad\Rightarrow\quad\Delta^{d+1}p(x)=0,$
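Both directions are easy to check numerically. A minimal Python sketch (the cubic below is my own example, not from the question): for $p(x)=2x^3-x+5$ the leading falling-factorial coefficient is $b_3=a_3=2$, so the third differences should be the constant $3!\cdot 2=12$ and the fourth differences should vanish.

```python
def p(x):
    # Sample cubic: d = 3, leading coefficient a_d = b_d = 2
    return 2 * x**3 - x + 5

def diff(seq):
    """One forward difference of a sequence of values."""
    return [b - a for a, b in zip(seq, seq[1:])]

values = [p(x) for x in range(10)]

d3 = values
for _ in range(3):          # apply Delta three times
    d3 = diff(d3)
print(d3)                   # constant 3! * 2 = 12 at every position

print(diff(d3))             # all zeros: Delta^{d+1} p = 0
```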
the Newton series assures us that
$p(a+x)=p(a)+x\,\Delta p(a)+\cdots+\frac{x^{\underline{d}}}{d!}\,\Delta^d p(a)$
So we can conclude that
if we are given a sequence of m+1 points equally spaced by h
$\left\{{x}_{0},{x}_{0}+h,{x}_{0}+2h,\dots ,{x}_{0}+mh\right\}$
and the corresponding sequence of values taken by a function f(x)
$\left\{f(x_0+kh)\right\}_{k=0}^{m}$
we can put
$f\left({x}_{0}+kh\right)=f\left(h\left(\frac{{x}_{0}}{h}+k\right)\right)=g\left(k\right)$
denoting the differences as
$\begin{cases}\Delta g(k)=g(k+1)-g(k)\\ \Delta^n g(k)=\Delta\!\left(\Delta^{n-1}g(k)\right)=\sum_{0\le j\le n}(-1)^{n-j}\binom{n}{j}\,g(k+j)\quad\big|\;0\le n+k\le m\end{cases}$
we have that the sequence in $n$ of $(-1)^n\Delta^n g(k)$ is the Binomial Transform of the sequence of the values $g(k+n)$:
$\left\{(-1)^n\Delta^n g(k)\right\}_{n=0}^{m-k}\;\underset{\text{Binomial Transform}}{\longleftrightarrow}\;\left\{g(k+n)\right\}_{n=0}^{m-k}$
and since the Binomial Transform is self-inverse, we get
$g(k+n)=\sum_{0\le j\le n}\binom{n}{j}\,\Delta^j g(k)=\sum_{0\le j\le n}\frac{\Delta^j g(k)}{j!}\,\big((n+k)-k\big)^{\underline{j}}$
which is precisely the Newton series.
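The self-inverse property is also quick to verify numerically. A hedged Python sketch (the sequence below is an arbitrary example of mine), using the involutive form $b_n=\sum_{k}(-1)^k\binom{n}{k}a_k$, which with $a_k=g(k)$ gives exactly $b_n=(-1)^n\Delta^n g(0)$:

```python
from math import comb

def binomial_transform(a):
    """Self-inverse form: b_n = sum_k (-1)^k C(n,k) a_k.
    With a_k = g(k), b_n equals (-1)^n Delta^n g(0)."""
    return [sum((-1)**k * comb(n, k) * a[k] for k in range(n + 1))
            for n in range(len(a))]

seq = [3, 1, 4, 1, 5, 9]            # arbitrary sample sequence
once = binomial_transform(seq)
twice = binomial_transform(once)
print(twice == seq)                  # True: the transform is an involution
```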
Given $n+1$ points $\{k,k+1,\dots,k+n\}$ and the respective values $g(k+j)$, the Newton series is the unique polynomial of degree at most $n$ which interpolates them.
If the differences of order $n<m$ are all equal to each other (as $k$ varies),
$\Delta^n g(k)=c\ne 0\quad\big|\;\forall k:\;0\le k\le m-n,$
then all the differences of order higher than $n$ will be null,
$\Delta^{n+q}g(k)=\Delta^q\!\left(\Delta^n g(k)\right)=\Delta^q c=0\quad\big|\;\forall k,\;q\ge 1,$
and the relevant Newton series will be the unique polynomial of degree $n$ which interpolates all the $m+1$ points.
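Putting it all together, here is a small end-to-end Python sketch (the table of values is my own example): the tabulated values have constant non-zero second differences, and the Newton forward series $g(k)=\sum_j \binom{k}{j}\Delta^j g(0)$ built from the leading differences reproduces the whole table, exhibiting $g$ as a degree-2 polynomial in $k$.

```python
from math import comb

def diff(seq):
    """One forward difference of a sequence of values."""
    return [b - a for a, b in zip(seq, seq[1:])]

table = [1, 4, 11, 22, 37, 56]      # g(k) = 2k^2 + k + 1 at k = 0..5

d1 = diff(table)                    # [3, 7, 11, 15, 19]
d2 = diff(d1)                       # [4, 4, 4, 4] -> constant, non-zero
print(d2)

# Leading differences Delta^j g(0) for j = 0, 1, 2:
lead = [table[0], d1[0], d2[0]]     # [1, 3, 4]

def newton(k):
    # Newton forward series: sum_j C(k, j) * Delta^j g(0)
    return sum(comb(k, j) * lead[j] for j in range(len(lead)))

print([newton(k) for k in range(6)])   # matches the table exactly
```

Expanding the series symbolically gives $1+3k+4\binom{k}{2}=2k^2+k+1$, the unique interpolating polynomial of degree 2.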
