# 1.18 Lesson 18

```
# Lesson 18.  Numerical Integration
# ---------------------------------
#
# Maple uses some quite sophisticated methods for numerical
# approximation of integrals.  We'll look at some rather less
# sophisticated ones, to get an idea of what is behind some of
# these.
#
# We'll start with some that you may remember from Math 101: the
# Midpoint Rule, Trapezoid Rule and Simpson's Rule.
#
# Suppose we want to approximate int(f(x), x=a .. b).
# For the Midpoint Rule, we divide the interval [a,b] into n
# equal subintervals and approximate f(x) on each subinterval by
# its value at the midpoint of the subinterval.  We'll usually
# have a, b and f fixed, and look at different n's, so we write
# everything as a function of n.

>    h:= n ->(b-a)/n:
>    X:= (i,n) -> a + i*(b-a)/n:
> midrule:= n ->
>    h(n)*sum(f((X(i,n)+X(i+1,n))/2), i = 0 .. n-1);

midrule := n -> h(n)*sum(f(1/2*X(i, n) + 1/2*X(i + 1, n)), i = 0 .. n-1)
> a:= 0: b:= 1:  midrule(3);

1/3 f(1/6) + 1/3 f(1/2) + 1/3 f(5/6)
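# (A quick check, not in the original worksheet: for f(x) = x the
# approximation above evaluates to (1/6 + 1/2 + 5/6)/3 = 1/2, the
# exact value of int(x, x=0..1), as it should -- the Midpoint Rule
# is exact for polynomials of degree 1.)
> eval(subs(f = (x -> x), midrule(3)));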
# For the Trapezoid Rule, we approximate f(x) on the subinterval
# by the average of its values at the two endpoints.
> traprule:= n ->
>    h(n) * sum((f(X(i,n))+f(X(i+1,n)))/2, i=0..n-1);

traprule := n -> h(n)*sum(1/2*f(X(i, n)) + 1/2*f(X(i + 1, n)), i = 0 .. n-1)
> traprule(3);

1/6 f(0) + 1/3 f(1/3) + 1/3 f(2/3) + 1/6 f(1)
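# (Aside, a sketch not in the original worksheet: collecting the
# repeated interior endpoints shows the Trapezoid weights are
# h*(1/2, 1, ..., 1, 1/2).  An explicit form -- traprule2 is a
# hypothetical name -- should agree with traprule:)
> traprule2:= n -> h(n)*(f(a)/2 + sum(f(X(i,n)), i = 1 .. n-1) + f(b)/2):
> simplify(traprule(3) - traprule2(3));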
# Let's try a typical function f for which we know the true
# value of the integral, and look at the error (the difference
# between the true value and the approximation) for Midpoint and
# Trapezoid Rules with different values of n.
> f:= x -> 1/(x+1): J:= int(f(x), x=a..b);

J := ln(2)
> seq([nn, evalf(J-midrule(nn)), evalf(J - traprule(nn))], nn= 1 .. 5);

[1, .0264805139, -.0568528194], [2, .0074328949, -.0151861527],

[3, .0033924908, -.0068528194], [4, .0019272894, -.0038766289],

[5, .0012392949, -.0024877400]
# The errors decrease, of course, as n increases.  It turns out
# the errors in both methods are approximately proportional to
# 1/n^2.
> plot({[seq([nn, nn^2*evalf(J-midrule(nn))],
>    nn=1 .. 20)],
>       [seq([nn, nn^2*evalf(J-traprule(nn))],
>    nn=1 .. 20)]},style=point);

** Maple V Graphics **

# The theoretical result is that the error in the Midpoint Rule
# is f''(t)(b-a)^3/(24n^2) for some t between a and b, and the
# error in the Trapezoid Rule is -f''(t)(b-a)^3/(12n^2) (for a
# different t).  This assumes that f'' is continuous on the
# interval [a,b].  If not, the error might not be proportional
# to 1/n^2.
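# (Aside, not in the original worksheet: summing the
# per-subinterval errors shows that n^2*(J - midrule(n)) tends to
# (b-a)^2*(D(f)(b) - D(f)(a))/24 as n -> infinity, and
# n^2*(J - traprule(n)) tends to -(b-a)^2*(D(f)(b) - D(f)(a))/12.
# For f(x) = 1/(x+1) on [0,1] these limits are 1/32 = .03125 and
# -1/16 = -.0625, which is where the two sets of points in the
# plot level off.)
> evalf((b-a)^2*(D(f)(b) - D(f)(a))/24), evalf(-(b-a)^2*(D(f)(b) - D(f)(a))/12);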
> f:= x -> sqrt(x): J:= int(f(x), x=a..b):
> plot({[seq([nn, nn^2*evalf(J-midrule(nn))],
>    nn=1 .. 20)],
>       [seq([nn, nn^2*evalf(J-traprule(nn))],
>    nn=1 .. 20)]},style=point);

** Maple V Graphics **

# When f'' is continuous, the Midpoint Rule's error is
# approximately -1/2 times the Trapezoid Rule's error.  This
# might suggest that a combination of the two rules would cancel
# out the error (at least approximately) and produce a much
# better approximation.  The appropriate combination is
# 2/3*Midpoint + 1/3*Trapezoid.  This is called Simpson's Rule.
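# (Aside, not in the original worksheet: we can watch this
# cancellation happen exactly on f(x) = x^3, where the Midpoint
# error is exactly -1/2 times the Trapezoid error, so
# 2/3*Midpoint + 1/3*Trapezoid has error 0: the three errors
# below come out to 1/32, -1/16 and 0.  J3 is a hypothetical name.)
> f:= x -> x^3:  J3:= int(f(x), x=a..b):
> J3 - midrule(2), J3 - traprule(2), J3 - (2/3*midrule(2) + 1/3*traprule(2));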
> f:= 'f': 2/3*midrule(3)+1/3*traprule(3);

2/9 f(1/6) + 2/9 f(1/2) + 2/9 f(5/6) + 1/18 f(0) + 1/9 f(1/3) + 1/9 f(2/3)

+ 1/18 f(1)
# We'd call this the Simpson's Rule approximation for n=6
# (rather than n=3): it uses the same 7 equally-spaced points as
# the Trapezoid Rule for n=6.  We only use simpson(n) when n is
# even.
> simpson:= n ->
>    h(n) * sum(1/3*f(X(2*i,n))+4/3*f(X(2*i+1,n))+1/3*f(X(2*i+2,n)), i=0..n/2-1);

simpson := n -> h(n)

/1/2 n - 1                                                                 \
|  -----                                                                   |
|   \                                                                      |
|    )     (1/3 f(X(2 i, n)) + 4/3 f(X(2 i + 1, n)) + 1/3 f(X(2 i + 2, n)))|
|   /                                                                      |
|  -----                                                                   |
\  i = 0                                                                   /
> simpson(6);

2/9 f(1/6) + 2/9 f(1/2) + 2/9 f(5/6) + 1/18 f(0) + 1/9 f(1/3) + 1/9 f(2/3)

+ 1/18 f(1)
# The error for Simpson's Rule is -(D^4)f(t)(b-a)^5/(180n^4),
# assuming the fourth derivative (D^4)f is continuous.
> f:= x -> 1/(x+1): J:= ln(2):
> plot([seq([2*nn, (2*nn)^4*evalf(J-simpson(2*nn))], nn=1 .. 20)],style=point);

** Maple V Graphics **
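# (Aside, not in the original worksheet: summing the per-pair
# errors shows n^4*(J - simpson(n)) tends to
# -(b-a)^4*((D@@3)(f)(b) - (D@@3)(f)(a))/180, which for this f
# works out to -1/32 = -.03125, the level the points settle at.)
> evalf(-(b-a)^4*((D@@3)(f)(b) - (D@@3)(f)(a))/180);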

# To get an idea of how much better this is
# than Midpoint or Trapezoid, let's see what n would be needed
# to have an error less than 10^(-10) in absolute value.
# For the Trapezoid Rule, the error was approximately -.06/n^2.
# So n would have to be greater than
> (.06*10^10)^(1/2);

24494.89743
# For the Midpoint Rule, the error was approximately .03/n^2.
> (.03*10^10)^(1/2);

17320.50808
# For Simpson's Rule, it was approximately
# -.03/n^4.  Change that .03 to .031 for safety.
> (.031*10^10)^(1/4);

132.6906811
# Let's try it.  I'll increase Digits to
# reduce roundoff error.
> Digits:= 15:  evalf(J-simpson(134)); Digits:= 10:

-.96911*10^(-10)
# Now let's look at it from a different point of view.  The
# Midpoint and Trapezoid rules give the correct answers for
# polynomials of degree 1, but in general not for higher
# degrees.
> poly:= n -> unapply(sum(c[j]*x^j, j=0..n), x);

poly := n -> unapply(sum(c[j]*x^j, j = 0 .. n), x)
> f:=poly(5);

f := x -> c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3 + c[4]*x^4 + c[5]*x^5
> J:= f -> int(f(x), x=a..b): J(f) - midrule(1);

1/12 c[2] + 1/8 c[3] + 11/80 c[4] + 13/96 c[5]
> J(f)-traprule(1);

- 1/6 c[2] - 1/4 c[3] - 3/10 c[4] - 1/3 c[5]
# Simpson's Rule is exact for polynomials of degree 3.
> J(f)-simpson(2);

- 1/120 c[4] - 1/48 c[5]
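# (A consistency check, not in the original worksheet: for
# f(x) = x^4 the standard error formula
# -(D^4)f(t)*(b-a)^5/(180*n^4), with (D^4)f identically 24 and
# n = 2, gives -24/(180*2^4) = -1/120, matching the c[4]
# coefficient above.)
> -24/(180*2^4);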
# We could have used this to derive Simpson's Rule in another
# way.  Suppose we didn't know the coefficients, but we knew the
# general form.
> rule:= n ->
>    h(n) * sum(d[0]*f(X(2*i,n))+d[1]*f(X(2*i+1,n))+d[2]*f(X(2*i+2,n)), i=0..n/2-1);

rule := n -> h(n)

/1/2 n - 1                                                                    \
|  -----                                                                      |
|   \                                                                         |
|    )     (d[0] f(X(2 i, n)) + d[1] f(X(2 i + 1, n)) + d[2] f(X(2 i + 2, n)))|
|   /                                                                         |
|  -----                                                                      |
\  i = 0                                                                      /
> J(f)-rule(2);

c[0] + 1/2 c[1] + 1/3 c[2] + 1/4 c[3] + 1/5 c[4] + 1/6 c[5] - 1/2 d[0] c[0]

- 1/2 d[1] (c[0] + 1/2 c[1] + 1/4 c[2] + 1/8 c[3] + 1/16 c[4] + 1/32 c[5])

- 1/2 d[2] (c[0] + c[1] + c[2] + c[3] + c[4] + c[5])
> eqns:= { seq(coeff(expand("),c[jj],1), jj=0..2) };

eqns := {1/2 - 1/4 d[1] - 1/2 d[2], 1/3 - 1/8 d[1] - 1/2 d[2],

1 - 1/2 d[0] - 1/2 d[1] - 1/2 d[2]}
# I'm only doing this for j up to 2, not 3:
# there are only three parameters, d[0] to d[2], so I want three
# equations. The fact that it is exact for x^3 too is an added
# bonus.
> solve(eqns);

{d[0] = 1/3, d[2] = 1/3, d[1] = 4/3}
# This can be generalized.
> rule:= (n,k) ->
>    h(n) * sum(sum(d[j]*f(X(k*i+j,n)), j=0..k), i=0..n/k-1);

rule := (n, k) -> h(n)*sum(sum(d[j]*f(X(k*i + j, n)), j = 0 .. k), i = 0 .. n/k-1)
> f:= 'f': rule(2,2);

1/2 d[0] f(0) + 1/2 d[1] f(1/2) + 1/2 d[2] f(1)
> eqns:= k -> eval({ seq(subs(f=unapply(x^ii,x), rule(k,k)=J(f)), ii=0..k) });
Warning, `ii` is implicitly declared local

eqns := k -> local ii; eval({seq(subs(f = unapply(x^ii, x), rule(k, k) = J(f)), ii = 0 .. k)})
> eqns(2);

{1/4 d[1] + 1/2 d[2] = 1/2, 1/8 d[1] + 1/2 d[2] = 1/3,

1/2 d[0] + 1/2 d[1] + 1/2 d[2] = 1}
> eqns(3);

{1/27 d[1] + 4/27 d[2] + 1/3 d[3] = 1/3,

1/81 d[1] + 8/81 d[2] + 1/3 d[3] = 1/4,

1/9 d[1] + 2/9 d[2] + 1/3 d[3] = 1/2,

1/3 d[0] + 1/3 d[1] + 1/3 d[2] + 1/3 d[3] = 1}
> solve(");

{d[0] = 3/8, d[3] = 3/8, d[2] = 9/8, d[1] = 9/8}
> subs(", rule(3,3));

1/8 f(0) + 3/8 f(1/3) + 3/8 f(2/3) + 1/8 f(1)
> eval(subs(f= poly(4),J(f)-"));

- 1/270 c[4]
> subs(solve(eqns(4)), rule(4,4));

7/90 f(0) + 16/45 f(1/4) + 2/15 f(1/2) + 16/45 f(3/4) + 7/90 f(1)
# The rule for k=3 is exact for polynomials of degree 3 but not
# of degree 4, just like Simpson's rule.  But the k=4 rule will
# be exact for polynomials of degree 5.  In general, when k is
# even the rule will be exact for polynomials of degree k+1.
# These rules are called Newton-Cotes rules.
> for kk from 2 to 12 do
>  ncrule[kk]:= unapply(subs(solve(eqns(kk)), rule(n,kk)), n) od:
> ncrule[4](4);

7/90 f(0) + 16/45 f(1/4) + 2/15 f(1/2) + 16/45 f(3/4) + 7/90 f(1)
> eval(subs(f=poly(6),J(f)-"));

- 1/2688 c[6]
> eval(subs(f=poly(14),J(f)-ncrule[12](12)));

- 251/12899450880 c[14]
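# (Aside, not in the original worksheet: for odd k the rule is
# exact only up to degree k, so e.g. the k=5 rule already picks
# up an error term in c[6]:)
> eval(subs(f = poly(6), J(f) - ncrule[5](5)));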
# The error in the Newton-Cotes rule ncrule[k](n) is
# approximately proportional to 1/n^(k+1) if k is odd, or
# 1/n^(k+2) if k is even.  Let's look at this for k=2, 3 and 4 with
# f(x)=1/(1+x).  The k=4 rule is so accurate that I need to
# increase Digits (otherwise roundoff error will overwhelm the
# real error).
> f:= x -> 1/(1+x): Digits:= 20:
> plot({ [
> seq([2*jj,(ln(2)-ncrule[2](2*jj))*(2*jj)^4], jj=1..20)],[
> seq([3*jj,(ln(2)-ncrule[3](3*jj))*(3*jj)^4], jj=1..13)],[
> seq([4*jj,(ln(2)-ncrule[4](4*jj))*(4*jj)^6], jj=1..10)]}, style=point);

** Maple V Graphics **

# You might think that the higher the order,
# the better.  That is, if one method has error estimate A/n^p
# and a second has B/n^q with q > p, then the second will be
# more accurate when n is sufficiently large.
# This isn't necessarily true for a fixed n, however, especially
# for a function whose higher-order derivatives may grow
# rapidly.

> f:= x -> 1/(x^2 + 1/20): J(f);

2*5^(1/2)*arctan(2*5^(1/2))
> map(k -> evalf(sqrt(20)*arctan(sqrt(20))-ncrule[k](36)), [2,3,4,6,9,12]);

[.619560396912*10^(-7), .2226565484415*10^(-6), -.10063093342562*10^(-5),

.35623850744073*10^(-5), .11208195265217*10^(-5), -.5031750236487*10^(-6)]
>

```