Sometimes when my mom asks me what I'm working on, she'll chide me for the tears shed over rudimentary algebra in 4th or 5th grade, and I'm reminded why "the cool parts of math" weren't available to me. In a previous post I laid some foundations for bastardizing calculus in ways that were unfathomable to my fragile, weak 10-year-old synapses that were philosophically frustrated by the notion of polynomial long division.
Consider the discrete summation of the triangular numbers:
$$\sum_{k=1}^{x} k = 1 + 2 + 3 + \cdots + x$$
and an arbitrary geometric series:
$$\sum_{k=1}^{x} r^k = r^1 + r^2 + r^3 + \cdots + r^x$$
Both of these summations have finitely many terms for whole x, so there exist closed formulae for each of them:
$$\sum_{k=1}^{x} k = \frac{x(x+1)}{2} \qquad\qquad \sum_{k=1}^{x} r^k = r\,\frac{1-r^x}{1-r}$$
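As a quick sanity check, both closed forms can be verified against brute-force sums in a few lines of Python (the variable names here are just illustrative):

```python
x, r = 10, 0.5

# Brute-force partial sums of each series
triangular = sum(k for k in range(1, x + 1))
geometric = sum(r ** k for k in range(1, x + 1))

# Compare against the closed formulae
assert triangular == x * (x + 1) // 2
assert abs(geometric - r * (1 - r ** x) / (1 - r)) < 1e-12
```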
Interestingly, whereas the domain of the discrete summations is restricted to whole numbers, we can plug any number into these formulae. In this post, we're going to ask and answer the question: does there exist a general method by which we can extend the domain of any arbitrary function like this?
$$\sum_{k=1}^{x} f(k) = \;???$$
(Spoiler, the answer is no in general, but for a broad class of functions, the answer can be yes). At the same time, we'll brush up against topics covered by the related literature on non-integer sums1,2,3 courtesy of Markus Müller and Dierk Schleicher.
Given some univariate function f(x), we want to extend the sum of f(k) from 1 to x, which is itself a function of x:
$$\sum_{k=1}^{x} f(k) = S(x)$$
Since x is whole, the plot of S(x) is a discontinuous collection of points. Our goal, then, becomes trying to find a sane, natural, and generalizable way to connect these points. Now, there's no single correct answer. We could just draw straight lines between the dots,
but there are also plenty of better methods of smooth interpolation we could reach for
and cases can be made for any curve which passes through all the points...
To define the line of best fit, we'll restrict the solution space according to properties of S(x) which we want our continuous approximation of S to satisfy.
Property the first: recursion
Thus, we observe first that S is recursively defined:
$$S(x+1) = f(1) + f(2) + \cdots + f(x) + f(x+1) = S(x) + f(x+1)$$
so with no other information about S other than S(1)=f(1) and f, we can derive all other points of S.
This also works in reverse: given S(x), we can compute S(x−1)=S(x)−f(x), allowing us to compute S(0) (which will always be zero), as well as S(−1),S(−2),...
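To make this concrete, here's a tiny sketch (my own naming) of the triangular-number sum f(k) = k, extended to zero and the negative integers purely via the recursion S(x−1) = S(x) − f(x):

```python
def f(k):
    return k  # summand for the triangular numbers

def S(x):
    """S(x) = 1 + 2 + ... + x, extended to all integers using only
    S(1) = f(1) and the recursion in both directions."""
    if x == 1:
        return f(1)
    if x > 1:
        return S(x - 1) + f(x)    # forward:  S(x) = S(x-1) + f(x)
    return S(x + 1) - f(x + 1)    # backward: S(x) = S(x+1) - f(x+1)

print([S(x) for x in range(-3, 4)])  # [3, 1, 0, 0, 1, 3, 6]
```

Note that every value, including S(0) = 0, still agrees with the closed form x(x+1)/2.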
Noting that the recursive definition has already allowed us to breach containment of the strictly natural domain of our summation indices, we might then suppose that if we could just find some value S(q) for q∈Q\Z, we could find S at every integer offset of that point as well:
Property the second: the limit
Second, though we don't yet know how to extend S near the origin, the asymptotic behavior off towards positive infinity is easier to work with. Just like the reciprocal function f(x) = 1/x, whose behavior near the origin is finicky, we can more intuitively understand the tail behavior:
$$\lim_{x\to\infty} f(x) = 0$$
Asymptotically, the reciprocal function behaves like a horizontal line, so for S(n) with some big, whole value of n, the amount f(n+1) that we add in order to get S(n+1) will be vanishingly small, and the points near S(n) will lie along a nearly horizontal line for our chosen function f:
Here in the limit, where S behaves like a constant, it makes sense to interpolate non-natural values of S(n) as a horizontal line.
However, f won't always converge to 0 as its input approaches infinity. E.g., f(x) = ln x doesn't approach zero, but its slope does decrease over time, so it gradually "flattens out." Here too, then, the amount that we add near S(n) for some arbitrarily large, whole value of n becomes approximately constant:
So, combining these two observations, how might we generalize beyond these two hand-picked functions?
General Form for Asymptotically Zero-Valued Functions
Suppose that S can be extended in the vicinity of some f at a large input n. In order to find S(x) for arbitrary x, we perform a three-step process:
1. Compute S(n), where we have a sum extension and can move forward any amount,
2. Step forward x units to compute S(n+x),
3. Step backwards n units.
For this to work, we have to be able to extend S near n which, so far, we've only established is possible for the natural log and reciprocal functions. For now, to extract a general form of this process, let's assume that our step amount x is an integer.
Computing S(n), we get $$S(n) = \sum_{k=1}^{n} f(k)$$
Advancing x units translates to the iterated application of the recursive formula: $$+\,f(n+1) + f(n+2) + \cdots + f(n+x) = \sum_{k=1}^{x} f(n+k)$$
And finally, moving backwards n units: $$-\,f(x+n) - f(x+n-1) - \cdots - f(x+2) - f(x+1) = -\sum_{k=1}^{n} f(x+k)$$
So, all together we get a new formula for S(x):
$$S(x) = \sum_{k=1}^{n} f(k) + \sum_{k=1}^{x} f(n+k) - \sum_{k=1}^{n} f(x+k)$$
Rearranging terms to group the first and third summations, which share the same bounds:

$$S(x) = \sum_{k=1}^{n} \big(f(k) - f(x+k)\big) + \sum_{k=1}^{x} f(n+k)$$
And this formula will work for any whole-valued, arbitrarily big n, meaning we also need to take the infinite limit of this definition of S:
$$S(x) = \lim_{n\to\infty}\left(\sum_{k=1}^{n} \big(f(k) - f(x+k)\big) + \sum_{k=1}^{x} f(n+k)\right)$$
However, it still only works if x is a whole number due to its usage as an upper bound in the second term. In this second term, though, we've offset the problem of fractional x by n in the limit, so hopefully we can infer a value from a well-bounded asymptotic approximation.
Let's see how this deferred evaluation of non-whole x would work for some functions whose asymptotic behavior we understand.
Suppose that f(x) → 0 as x tends towards infinity. Since we're taking the limit of the f(n+k) sums as n approaches infinity, those terms will go to zero:
$$\lim_{n\to\infty} \sum_{k=1}^{x} \underbrace{f(n+k)}_{\to\,0}$$
This implies that the whole summation tends towards zero since x is finite and the summation of finitely many zero-valued terms vanishes in the limit, problem solved!
This leaves us with $$S(x) = \sum_{k=1}^{\infty} \big(f(k) - f(x+k)\big)$$ which will work for any underlying f which tends towards zero for sufficiently large inputs, since we've removed the only part of the formula which relied on x being an integer. And we can visualize this "for real" with some Desmos handiwork like so:
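A minimal numerical sketch of the same idea (the function name and the truncation point n are my own choices): truncating the infinite sum at a large n approximates S, and for f(k) = 1/k the fractional value S(1/2) should land on the known half harmonic number 2 − 2 ln 2:

```python
def frac_sum(f, x, n=100_000):
    """Approximate S(x) = sum_{k=1}^{x} f(k) via the truncated series
    sum_{k=1}^{n} (f(k) - f(x + k)), valid when f -> 0 at infinity."""
    return sum(f(k) - f(x + k) for k in range(1, n + 1))

f = lambda k: 1 / k  # the reciprocal function from earlier

print(frac_sum(f, 3))    # ≈ 1 + 1/2 + 1/3 = 1.8333...
print(frac_sum(f, 0.5))  # ≈ 2 - 2 ln 2 = 0.6137...
```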
N.B. that in practice this only works well for p comfortably below zero: the larger p is, the slower x^p approaches zero, and the more we need to crank the n slider to pump up the "limit." And no amount of cranking works for positive values of p, since then the function no longer converges to zero. So we need some way to account for that...
General Form for Asymptotically Constant-Valued Functions
What about the case for functions which "flatten" out at some non-zero value, where f(x)→c as x→∞ ? What does it mean to converge on some constant c?
Revisiting the example of ln, is there any constant value we can find which this function converges on? No, because ln will eventually grow past it, so we need something less restrictive.
We can say S(x) flattens out if the underlying function f(x) approaches zero in the limit, but since S and f are not explicitly related in that phrasing, we first want to leverage the backwards recursive definition of S:

$$S(x+1) - S(x) = f(x+1)$$
So we can use the left hand side as our definition, claiming that S(x) flattens out if
$$\lim_{x\to\infty} \big(S(x+1) - S(x)\big) = 0$$
For simplicity/generality, we can write this as f instead of S since this holds for any function4 and note that
$$f(x+1) - f(x)$$
is called the forward difference of f at x, and is familiarly denoted Δf(x). So we can say that a function flattens out if:
$$\lim_{x\to\infty} \Delta f(x) = 0$$
Thus we recover a discrete version of the fundamental theorem of calculus:
$$\Delta S(x) = f(x+1)$$

$$f \xrightarrow{\;\sum\;} S \qquad\qquad S \xrightarrow{\;\Delta\;} f$$
This discrete derivative can be thought of as the amount of change after taking one step to the right:
$$f(x+1) = f(x) + \Delta f(x)$$
And note that this is also recursive. That is, if we have some value f(a) and we want to compute f(a+b), we simply add forward differences up to Δf(a+b−1):

$$f(a+b) = f(a) + \Delta f(a) + \Delta f(a+1) + \cdots + \Delta f(a+b-1)$$
And this whole process can be expressed as a sum of forward differences:

$$f(a+b) = f(a) + \sum_{k=0}^{b-1} \Delta f(a+k)$$
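This telescoping identity is easy to spot-check numerically (the `delta` helper is a hypothetical name of my own):

```python
def delta(f, x):
    """Forward difference: Δf(x) = f(x+1) - f(x)."""
    return f(x + 1) - f(x)

f = lambda x: x * x
a, b = 3, 5

# f(a+b) = f(a) + Δf(a) + Δf(a+1) + ... + Δf(a+b-1)
assert f(a + b) == f(a) + sum(delta(f, a + k) for k in range(b))
```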
And since this sinistral term as it appears in S sits within a sum, we'll introduce the same sum to both sides of our working forward difference equation:

$$\sum_{k_1=1}^{x} f(n+k_1) = \sum_{k_1=1}^{x} f(n) + \sum_{k_1=1}^{x} \sum_{k_2=0}^{k_1-1} \Delta f(n+k_2)$$
And note the use of subscripts on k rather than color coding. What we've done is replace the problematic, non-vanishing term from earlier with a double sum of forward differences. And since the forward difference approaches zero in the limit, and since the double summation sums finitely many terms (neither bound depends on n), this double-sum term also goes to zero.
Get vanished, bozo! So we can actually drop the whole double-sum term, leaving us with just a single summation of f(n), which we can express as x·f(n). Now it doesn't matter whether or not x is wholly valued:

$$S(x) = \lim_{n\to\infty}\left(\sum_{k=1}^{n} \big(f(k) - f(x+k)\big) + x\,f(n)\right)$$
We can see this formulation in action in Desmos again, where all we have to do is add this new xf(n) term to our definition of S, without which we would've needed n to be on the order of millions instead of hundreds to resemble a half-decent approximation even at this scale:
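Sketching this in Python instead of Desmos (again with my own naming and truncation choices), the extra x·f(n) term is the only change from before, and for f = ln the results can be checked against ln Γ(x+1) = ln x!:

```python
import math

def frac_sum_flat(f, x, n=100_000):
    """Approximate S(x) = sum_{k=1}^{x} f(k) when f "flattens out"
    (Δf -> 0), by adding the x*f(n) correction to the truncated series."""
    return x * f(n) + sum(f(k) - f(x + k) for k in range(1, n + 1))

# For f = ln, S(x) = ln(x!) = lgamma(x + 1):
print(frac_sum_flat(math.log, 3))    # ≈ ln 6 ≈ 1.7918
print(frac_sum_flat(math.log, 0.5))  # ≈ ln Γ(1.5) ≈ -0.1208
```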
Once again, though, this fails to be a faithful approximation for any n if p ≥ 1, because f(x) no longer flattens out.
Reviewing what we've accomplished so far, we can brainstorm how we might extend the sums of functions like f(x) = x^p with p > 1.
For the first class of summations, it was sufficient if f(x) converged to zero as does f(x)=1/x:
$$\lim_{x\to\infty} f(x) = 0$$
For the second class, it was sufficient if the forward difference converged to zero, meaning that the function flattened out at some point:
$$\lim_{x\to\infty} \Delta f(x) = 0$$
So what if we iterate the forward difference, defined the way we'd expect it to be, where

$$\Delta^m f(x) = \Delta^{m-1} f(x+1) - \Delta^{m-1} f(x)$$
To apply this to our formula for S, we'll first make some notational tweaks for readability. First, we'll reindex n → n−1, which is arbitrary in the limit:

$$S(x) = \lim_{n\to\infty}\left(\sum_{k=1}^{n-1} \big(f(k) - f(x+k)\big) + \sum_{k_1=0}^{x-1} f(n+k_1)\right)$$
For this derivation, let's take for example a function whose first and second forward differences do not converge to zero, but whose third-order forward difference does:
$$\lim_{x\to\infty} \Delta^3 f(x) = 0$$
to demonstrate how this form works in general for all polynomials.
And this time, since the first-order forward difference Δ¹f doesn't necessarily converge to zero, we can't disappear it. Instead, we repeat this process for Δ¹f(n+k₂), starting with the first-order forward difference formula:
And finally, this is the summation of a term that goes to zero, so the quadruple sum of this term will go to zero too since, again, we're only summing finitely many terms bounded by x. This cascades down the summation stack, so we can drop the whole quadruple sum as well:
Is this actually useful, though? Our resultant expression contains several nested, dependent sums which all still rely on x being a whole number... To make this expression tractable for non-whole x, we can observe that the terms we're summing over don't depend on the summation indices, so they're constant relative to the sums and can be factored out:
So now we have a bunch of nested sums of 1 which might look familiar to students of combinatorics. Consider just the triply-nested sum:
$$\sum_{k_1=0}^{x-1} \sum_{k_2=0}^{k_1-1} \sum_{k_3=0}^{k_2-1} 1$$
and now consider what happens to the innermost sum when the bound of the enveloping summation, k2, is 0:
$$\sum_{k_3=0}^{k_2-1} 1 = \sum_{k_3=0}^{-1} 1$$
which is a no-op, since the lower bound of 0 already exceeds the upper bound of −1! So we can skip the step when k₂ is 0 and just start at k₂ = 1:
$$\sum_{k_1=0}^{x-1} \sum_{k_2=1}^{k_1-1} \sum_{k_3=0}^{k_2-1} 1$$
Similarly, we can look at the middle summation when k1=0,1 and notice that no summing happens till k1≥2, so we can skip those indices and initialize k1 to 2:
$$\sum_{k_1=2}^{x-1} \sum_{k_2=1}^{k_1-1} \sum_{k_3=0}^{k_2-1} 1$$
So, in this way, as the index of the variable in our upper bound increases, so too does the initial value of the lower bound. To understand this programmatically, what we have here is a triply-nested loop where k₃ < k₂ < k₁:
```python
x = 10
total = 0
for k1 in range(2, x):
    for k2 in range(1, k1):
        for k3 in range(0, k2):
            total += 1
# total = 120
```
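As an aside, the same counting generalizes to any nesting depth; a small recursive helper (my own naming) evaluates the m-level nested sum of 1s:

```python
def nested_ones(x, m):
    """Evaluate the m-level nested sum: over indices
    0 <= k_m < ... < k_2 < k_1 <= x - 1, add 1 each time."""
    if m == 0:
        return 1
    return sum(nested_ones(k, m - 1) for k in range(x))

print(nested_ones(10, 3))  # 120, matching the triple loop above
```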
If we were to visualize this, we'd see how our loop indices effectively enumerate each way to choose 3 elements from x = 10, which we can express mathematically as $\binom{x}{3}$. So our nested summation is equivalent to the binomial coefficient with n = x, where k is the number of summations:
$$\underbrace{\sum_{k_1}^{x-1} \sum_{k_2}^{k_1-1} \sum_{k_3}^{k_2-1} \cdots \sum_{k_m}^{k_{m-1}-1}}_{m\ \text{summations}} 1 = \binom{x}{m}$$
where the binomial coefficient formula is derived from:

$$\binom{n}{k} = \frac{\#\text{ ways to choose from } n \text{ items w/ removal}}{\#\text{ orderings of } k \text{ items}}$$
E.g. for $\binom{x}{3}$: initially we have x items to choose from, then x−1 items, and then x−2. But if order doesn't matter for our combination (it doesn't), then we've overcounted and have to divide by the number of orders in which we could've selected one item from x, one from x−1, and then one from x−2, which is 3!:
$$\frac{x(x-1)(x-2)}{3!} = \binom{x}{3}$$
which does not require x∈Z! So, returning to our definition of S, we can replace each of the nested summations with the binomial coefficient it corresponds to:

$$S(x) = \lim_{n\to\infty}\left(\sum_{k=1}^{n-1} \big(f(k) - f(x+k)\big) + \sum_{j=1}^{m} \binom{x}{j}\, \Delta^{j-1} f(n)\right)$$
And this works for any f with a polynomial growth rate (i.e. an f for which the mth forward difference converges to zero). This means we can extend summation for these functions to fractional, irrational, and even complex inputs, provided that the limit of the mth forward difference does actually converge, as does the limit of the summation itself (e.g. summation over 1-periodic functions will not converge, womp womp).
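Putting the pieces together, here's a sketch of the full recipe (all function names are mine; the mth forward difference is computed via its standard binomial expansion, and the generalized binomial coefficient accepts non-integer x). For f(k) = k², whose third forward difference vanishes, the extension reproduces the closed form x(x+1)(2x+1)/6 even at fractional x:

```python
import math

def fwd_diff(f, x, m):
    """m-th forward difference via the binomial expansion:
    Δ^m f(x) = sum_i (-1)^(m-i) C(m, i) f(x + i)."""
    return sum((-1) ** (m - i) * math.comb(m, i) * f(x + i)
               for i in range(m + 1))

def gen_binom(x, j):
    """Generalized binomial coefficient x(x-1)...(x-j+1)/j!,
    defined for non-integer x."""
    prod = 1.0
    for i in range(j):
        prod *= (x - i)
    return prod / math.factorial(j)

def frac_sum_poly(f, x, m, n=1000):
    """Approximate S(x) = sum_{k=1}^{x} f(k) when Δ^m f -> 0."""
    head = sum(f(k) - f(x + k) for k in range(1, n))
    tail = sum(gen_binom(x, j) * fwd_diff(f, n, j - 1)
               for j in range(1, m + 1))
    return head + tail

print(frac_sum_poly(lambda k: k * k, 0.5, m=3))  # ≈ 0.25 = 0.5·1.5·2/6
```

For non-polynomial f of polynomial growth rate (say f(x) = x^1.5), the result is only exact in the n → ∞ limit, so a larger truncation point is needed.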
Footnotes
Markus Müller and Dierk Schleicher. "Fractional Sums and Euler-like Identities." arXiv, 2005. ↩
Markus Müller and Dierk Schleicher. "How to add a non-integer number of terms, and how to produce unusual infinite summations." Journal of Computational and Applied Mathematics, 2005. https://doi.org/10.1016/j.cam.2004.08.009 ↩
Markus Müller and Dierk Schleicher. "How to Add a Noninteger Number of Terms: From Axioms to New Identities." arXiv, 2010. ↩
Not quite: it wrongly identifies 1-periodic functions (those for which f(x+1) = f(x)) as "flattening out," but 1-periodic functions are unproblematic here. ↩