Friday, August 1, 2014

Fractional Calculus: A Tale of Mystery and Intrigue


Fractional calculus is the generalization of calculus beyond integer order derivatives and integrals to arbitrary real (or, more generally, complex!) order. Historically, fractional calculus has struggled more than its traditional counterpart, as its notions are much more subtle and drastically less intuitive. In this post, we will discuss what fractional calculus is, present some of the historical challenges it has met, and give general expressions for the fractional derivative and integral. The guiding principle is to consider arbitrary integer order derivatives and integrals in the hope of finding an appropriate generalization. Some peculiarities and applications of fractional calculus within mathematics and physics will also be discussed.

Fractional Calculus: History and Foundations

Fractional calculus is the generalization of calculus to noninteger order derivatives and integrals. For example, fractional calculus aims to make sense of half derivatives and half anti-derivatives. Furthermore, it is an attempt to unify differentiation and integration into one cohesive framework.

For example, square roots of differential operators, e.g. the Laplace operator, show up in differential geometry, seismic physics, several complex variables, and relativistic quantum mechanics, among other areas.

Fractional calculus was first conceived by Leibniz, who considered half derivatives in a 1695 letter to L'Hospital, remarking that the half derivative "is an apparent paradox from which useful consequences will be drawn." Four notable names in the history of fractional calculus are Grünwald, Letnikov, Riemann and Liouville.

Grünwald's paper "About bounded derivations and their applications" (1867) and Letnikov's paper "Theory of differentiation of fractional order" (1868) gave the most intuitive definitions of the fractional derivative, based on an alternative characterization of $n$th order derivatives. Riemann's paper "An attempt at a notion of differentiation and integration" (1847) gave formal definitions for both fractional derivatives and integrals.

Why should we endeavor to generalize calculus to non-integer order even if a fractional order derivative or integral has no obvious interpretation? Because we're mathematicians and that's what we do.

Historically, derivatives and integrals were first explored for polynomials, particularly monomials, so it may be of interest to consider them before attempting to generalize derivatives and integrals to arbitrary order.

Towards the Fractional Derivative

We know that if $m,n\in\mathbb{N}_0$ and $m\ge n$, then

\begin{equation*}
 \frac{d^n}{dx^n}x^m = m(m-1)\cdots(m-n+1)x^{m-n}.
\end{equation*}

We can realize this as being equivalent to

\begin{equation*}
 \frac{d^n}{dx^n}x^m = \frac{m!}{(m-n)!}x^{m-n}.
\end{equation*}
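The equivalence of the two expressions is easy to sanity-check numerically; here is a minimal sketch using only Python's standard library (the choice $m = 7$, $n = 3$ is arbitrary):

```python
import math

# The falling factorial m(m-1)...(m-n+1) equals m!/(m-n)!
m, n = 7, 3
falling = 1
for k in range(n):
    falling *= m - k

print(falling)                                     # 210
print(math.factorial(m) // math.factorial(m - n))  # 210
```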

We might haphazardly make the ansatz that for $\nu > 0$,

\begin{equation*}
 \frac{d^{\nu}}{dx^{\nu}}x^m = \frac{m!}{(m-\nu)!}x^{m-\nu}.
\end{equation*}

However, unless $\nu$ is an integer, $(m-\nu)!$ is not defined, so we must look to a generalization of the factorial. For technical reasons (by the Bohr-Mollerup theorem, it is the unique log-convex extension satisfying the factorial recurrence), the proper generalization is the gamma function. For $z\in\mathbb{C}\setminus\mathbb{Z}_{\le 0}$, the gamma function is defined to be

\begin{equation}
 \Gamma(z) = \int_0^{\infty} t^{z-1}e^{-t}\,dt.
\end{equation}

If you recall from differential equations, this is just the Laplace transform of $f(t) = t^{z-1}$ evaluated at $s = 1$. Some properties of the gamma function follow:
  1. For $z\in\mathbb{C}\setminus\mathbb{Z}_{\le 0}$, $\Gamma(z+1) = z\Gamma(z)$,
  2. For $n\in\mathbb{N}_0$, $\Gamma(n+1) = n!$,
  3. $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$,
  4. $\Gamma$ has simple poles at the non-positive integers.
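These properties are easy to confirm numerically with Python's `math.gamma` (the sample point $z = 3.7$ is arbitrary):

```python
import math

z = 3.7  # arbitrary non-pole test point
# Property 1: the recurrence Gamma(z+1) = z * Gamma(z)
assert abs(math.gamma(z + 1) - z * math.gamma(z)) < 1e-9
# Property 2: Gamma(n+1) = n! for nonnegative integers n
assert abs(math.gamma(6) - math.factorial(5)) < 1e-9
# Property 3: Gamma(1/2) = sqrt(pi)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
# Property 4: math.gamma raises at the poles (non-positive integers)
try:
    math.gamma(-2)
except ValueError:
    print("pole at z = -2")
```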
With the gamma function at our disposal as a seemingly natural generalization of the factorial, we can write our ansatz for $\frac{d^{\nu}}{dx^{\nu}}x^m$ as

\begin{equation}
 \frac{d^{\nu}}{dx^{\nu}}x^m = \frac{\Gamma(m+1)}{\Gamma(m-\nu+1)}x^{m-\nu}. \tag{1}
\end{equation}

An interesting consequence of this is that

\begin{equation*}
 \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}} 1 = \frac{\Gamma(1)}{\Gamma\left(\frac{1}{2}\right)}x^{-\frac{1}{2}} = \frac{1}{\sqrt{\pi}}x^{-\frac{1}{2}}.
\end{equation*}

It follows that the fractional derivative of a constant need not be 0, which is a big departure from standard calculus. However, this is far from rigorous, or even satisfactory, since not every function we work with is a polynomial. We must therefore find a proper definition of the fractional derivative. The way to do this is to consider general definitions of $n$th order derivatives in the hope that a pattern emerges. This is the topic of the next section.
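Before moving on, the ansatz (1) is easy to experiment with numerically; a minimal sketch (the helper name `frac_deriv_monomial` is mine):

```python
import math

def frac_deriv_monomial(m, nu, x):
    """Order-nu derivative of x**m via the gamma-function ansatz (1)."""
    return math.gamma(m + 1) / math.gamma(m - nu + 1) * x ** (m - nu)

# nu = 1 recovers the ordinary derivative of x^3, namely 3x^2
print(frac_deriv_monomial(3, 1, 2.0))    # ≈ 12.0
# The half derivative of the constant 1 is x^{-1/2}/sqrt(pi)
print(frac_deriv_monomial(0, 0.5, 1.0))  # ≈ 0.5642
```

Note that `math.gamma` raises `ValueError` at its poles, so integer orders $\nu > m$ (where the true derivative vanishes) would need special-casing.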

The Grünwald-Letnikov Fractional Derivative

Let us consider the definition of the derivative (here $f$ is assumed to be smooth).

\begin{equation*}
 f^{(1)}(x) = \lim_{h\to 0}\frac{f(x)-f(x-h)}{h}.
\end{equation*}

The second derivative is defined to be

\begin{equation*}
 f^{(2)}(x) = \lim_{h\to 0}\frac{f^{(1)}(x)-f^{(1)}(x-h)}{h}.
\end{equation*}

However, by employing the mean value theorem, one can see that this is equivalent to the following alternative definition

\begin{equation*}
 f^{(2)}(x) = \lim_{h\to 0}\frac{f(x)-2f(x-h)+f(x-2h)}{h^2}.
\end{equation*}

Similarly, the third derivative may be written as

\begin{equation*}
 f^{(3)}(x) = \lim_{h\to 0}\frac{f(x)-3f(x-h)+3f(x-2h)-f(x-3h)}{h^3}.
\end{equation*}

Notice the appearance of the binomial coefficients. With these observations, it's not hard to see that a general alternative definition for the $n$th derivative is given by

\begin{equation}
 f^{(n)}(x) = \lim_{h\to 0}\frac{1}{h^n}\sum_{j=0}^n(-1)^j\binom{n}{j} f(x-jh),
\end{equation}

where $\binom{n}{j} = \frac{n!}{(n-j)!j!}$. We can easily see how to proceed: replace $n$ with $\nu$ and $\dbinom{n}{j}$ with $\dfrac{\Gamma(\nu+1)}{\Gamma(\nu-j+1)\Gamma(j+1)}$ so that fractional orders make sense. However, there is a problem with the upper limit of our summation. Suppose we take $\nu = \frac{1}{2}$; then

\begin{eqnarray*}
 f^{\left(\frac{1}{2}\right)}(x) &=& \lim_{h\to 0^+} \frac{1}{h^{\frac{1}{2}}}\sum_{j=0}^{\frac{1}{2}} (-1)^j\frac{\Gamma\left(\frac{1}{2}+1\right)}{\Gamma\left(\frac{1}{2}-j+1\right)\Gamma(j+1)}f(x-jh) \\
 &=& \lim_{h\to 0^+}\frac{1}{h^{\frac{1}{2}}}f(x),
\end{eqnarray*}

which diverges whenever $f(x)\neq 0$.

This is rather undesirable behavior; we would like derivatives of nice functions to be finite! The hard cutoff of $\nu$ in the upper limit is not appropriate, and we need some way to eliminate it. A seeming fix is to note that $\dbinom{n}{j} = 0$ if $j > n$, since it is impossible to choose more objects than you have. So we could just as well view $f^{(n)}$ as

\begin{equation*}
 f^{(n)}(x) = \lim_{h\to 0}\frac{1}{h^n}\sum_{j=0}^{\infty} (-1)^j\binom{n}{j}f(x-jh).
\end{equation*}

Naively, making the change $n\to\nu$, we would have that

\begin{equation*}
 f^{(\nu)}(x) = \lim_{h\to 0^+} \frac{1}{h^{\nu}}\sum_{j=0}^{\infty} (-1)^j \frac{\Gamma(\nu+1)}{\Gamma(\nu-j+1)\Gamma(j+1)}f(x-jh).
\end{equation*}

This has two limiting procedures, which is already a red flag. Another problem is that $x-jh$ tends to $-\infty$ as $j$ tends to $\infty$. Yet another is a matter of convergence: asymptotically, $\dfrac{\Gamma(\nu+1)}{\Gamma(\nu-j+1)\Gamma(j+1)}$ behaves like $\dfrac{1}{j^{1+\nu}}$, so if $f(x-jh)$ grows faster than $j^{1+\nu}$ as $j\to\infty$, the sum very well may not converge.

A derivative should be somewhat local: it should not depend on how the function behaves at $-\infty$. This, together with the red flag above, suggests combining the two limiting procedures into a single one. Ideally we would like a lower bound on $x-jh$ as $j$ tends to $\infty$. This gives us the definition

\begin{equation}
 D^{\nu}f(x) = f^{(\nu)}(x) = \lim_{N\to\infty} \frac{1}{\left(\frac{x}{N}\right)^{\nu}}\sum_{j=0}^{N-1} (-1)^j\frac{\Gamma(\nu+1)}{\Gamma(\nu-j+1)\Gamma(j+1)}f\left(x-j\frac{x}{N}\right).
\end{equation}

As $N$ tends to $\infty$, the sample points $x-j\frac{x}{N}$ sweep out $[0,x]$ without dipping below $0$, as desired; effectively we have replaced $h$ with $\frac{x}{N}$. Notice how similar this procedure is to a Riemann sum. This definition, while well motivated, is difficult to apply in practice; anyone who has computed Riemann sums by hand will not find this surprising. Because of this sampling of points throughout $[0,x]$, fractional derivatives are highly nonlocal unless they are of integer order. With a notion of fractional differentiation in hand, we can take a foray into the fractional integral.
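The truncated limit lends itself to a direct, if slow, numerical sketch (all names here are mine). The alternating coefficients $(-1)^j\binom{\nu}{j}$ from the binomial expansion are generated by a recurrence, which sidesteps evaluating the gamma function near its poles:

```python
import math

def gl_deriv(f, nu, x, N=2000):
    """Grünwald-Letnikov derivative of order nu at x, truncated at N terms."""
    h = x / N
    total = 0.0
    coeff = 1.0  # (-1)^j * C(nu, j), updated by the recurrence below
    for j in range(N):
        total += coeff * f(x - j * h)
        coeff *= (j - nu) / (j + 1)  # gives (-1)^{j+1} * C(nu, j+1)
    return total / h ** nu

# Half derivative of f(x) = x at x = 1; the closed form gives 2/sqrt(pi)
print(gl_deriv(lambda t: t, 0.5, 1.0))  # ≈ 1.128
print(2 / math.sqrt(math.pi))           # 1.1283791670955126
```

For integer $\nu$ the recurrence zeroes out after $\nu + 1$ terms and the sum collapses to the ordinary finite-difference quotient, but the convergence in $N$ is slow, roughly $O(1/N)$.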

The Riemann-Liouville Fractional Integral

With the theory of fractional derivatives partially explored, let us look into the case of fractional integration. For derivatives, we found general expressions for $n$th order derivatives to get an idea as to how to define the fractional derivative. We can do something similar for the $n$th order anti-derivatives. Let's consider a double integral for starters. By a change of integration order, we see that

\begin{eqnarray*}
 \int_0^t\int_0^{t_1}f(t_2)\,dt_2\,dt_1 &=& \int_0^t\int_{t_2}^t f(t_2)\,dt_1\,dt_2 \\
 &=& \int_0^t (t-t_2)f(t_2)\,dt_2.
\end{eqnarray*}

Repeating this procedure, we would see that

\begin{equation}
 \overbrace{\int_0^t\int_0^{t_1}\cdots\int_0^{t_{n-1}}}^{n\,\text{times}} f(t_n)\,dt_n\cdots\,dt_2 \,dt_1 = \frac{1}{(n-1)!}\int_0^t (t-t_1)^{n-1}f(t_1)\,dt_1.
\end{equation}

This can also be checked by induction (differentiating both sides). It is called Cauchy's repeated integral formula. Notice the striking similarity to Cauchy's generalized integral formula from complex analysis (sans a factor of $2\pi i$, and with repeated integration in place of repeated differentiation). With this in mind, the most natural definition for the $\nu$th order anti-derivative of $f$ is given by

\begin{equation}
 I^{\nu}f(x) = \frac{1}{\Gamma(\nu)}\int_0^x(x-t)^{\nu-1}f(t)\,dt.
\end{equation}
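This integral is straightforward to evaluate numerically. The substitution $u = (x-t)^{\nu}$ removes the integrable singularity of $(x-t)^{\nu-1}$ at $t = x$, after which a plain midpoint rule suffices (the helper name `rl_integral` is mine):

```python
import math

def rl_integral(f, nu, x, N=10000):
    """Riemann-Liouville fractional integral I^nu f(x) by midpoint rule.

    Substituting u = (x - t)^nu turns the integrand into
    f(x - u^{1/nu}) / (nu * Gamma(nu)), which is smooth on [0, x^nu].
    """
    upper = x ** nu
    du = upper / N
    total = 0.0
    for k in range(N):
        u = (k + 0.5) * du  # midpoint of the k-th subinterval
        total += f(x - u ** (1.0 / nu))
    return total * du / (nu * math.gamma(nu))

# nu = 1 recovers the ordinary integral: I^1 t^2 at x = 1 is 1/3
print(rl_integral(lambda t: t * t, 1.0, 1.0))  # ≈ 0.3333
# Half integral of the constant 1: x^{1/2}/Gamma(3/2) = 2*sqrt(x/pi)
print(rl_integral(lambda t: 1.0, 0.5, 1.0))    # ≈ 1.1284
```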

We can use the definition of the fractional integral to define "new" notions of fractional derivatives. Suppose $n\in\mathbb{N}_0$ is such that $n < \text{Re}\,\nu < n+1$, and define

\begin{eqnarray*}
 D^{\nu}_Rf(x) &=& I^{n+1-\nu}(f^{(n+1)})(x) \\
 D^{\nu}_Lf(x) &=& \frac{d^{n+1}}{dx^{n+1}} I^{n+1-\nu}f(x).
\end{eqnarray*}

These both warrant being called derivatives: subtracting the orders leaves $I^{-\nu}$, which we would like to identify with $D^{\nu}$, since integration and differentiation are inverses in some sense. However, $D^{\nu}_R$ is a bad definition, because when $m < \text{Re}\,\nu$ we have $D^{\nu}_R x^m = 0$: we differentiate $n+1 > m$ times first, which annihilates $x^m$. $D^{\nu}_L$, on the other hand, gives (skipping the computation; check it yourself!)

\begin{equation*}
 D^{\nu}_Lx^m = \frac{\Gamma(m+1)}{\Gamma(m-\nu+1)}x^{m-\nu}.
\end{equation*}

Referring back to (1), this is exactly what we expected. In fact, $D^{\nu}_L$ is equivalent to the Grünwald-Letnikov derivative $D^{\nu}$ for suitably nice functions, but has the benefit of being far easier to work with.
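For the curious, the skipped computation runs as follows, using the Beta integral $\int_0^x (x-t)^{a-1}t^{b-1}\,dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}x^{a+b-1}$ with $a = n+1-\nu$ and $b = m+1$:

\begin{equation*}
 I^{n+1-\nu}x^m = \frac{1}{\Gamma(n+1-\nu)}\int_0^x (x-t)^{n-\nu}t^m\,dt = \frac{\Gamma(m+1)}{\Gamma(m+n+2-\nu)}x^{m+n+1-\nu}.
\end{equation*}

Differentiating $n+1$ times multiplies by $(m+n+1-\nu)(m+n-\nu)\cdots(m+1-\nu)$ and lowers the exponent to $m-\nu$; that product telescopes the denominator down to $\Gamma(m-\nu+1)$, leaving exactly $D^{\nu}_L x^m = \frac{\Gamma(m+1)}{\Gamma(m-\nu+1)}x^{m-\nu}$.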