# 3.4 Assignment 4 Solutions (Israel's)

```
# Math 210 Sec. 201
# Solutions to Assignment 4
#
# # 1.(a) Consider the integral
#
#        Int(x^(1/2), x = 0 .. 1)
#
# # with ME(n), TE(n) and SE(n) the errors for Midpoint, Trapezoid and Simpson's Rules.
# # Plot n^(3/2) ME(n), n^(3/2) TE(n), and n^(3/2) SE(n) (for even n) on the same graph.
# # What can you conclude?
#
>   f:= x -> x^(1/2): a:= 0: b:= 1: J:= int(f(x), x=a..b);\
h:= n ->(b-a)/n:\
X:= (i,n) -> a + i*(b-a)/n:\
midrule:= n ->   h(n)*sum(f((X(i,n)+X(i+1,n))/2), i = 0 .. n-1):\
traprule:= n ->   h(n) * sum((f(X(i,n))+f(X(i+1,n)))/2, i=0..n-1): \
simpson:= n -> h(n) * sum(1/3*f(X(2*i,n))+4/3*f(X(2*i+1,n))+1/3*f(X(2*i+2,n)),                    i=0..n/2-1): \
ME:= n -> J - midrule(n): TE:= n -> J - traprule(n): SE:= n -> J - simpson(n):

J := 2/3
> plot({[seq([nn, ME(nn)*nn^(3/2)], nn = 1 .. 20)], \
[seq([nn, TE(nn)*nn^(3/2)], nn = 1 .. 20)],\
[seq([2*nn, SE(2*nn)*(2*nn)^(3/2)], nn= 1 .. 10)]});
# All three appear to approach nonzero constants as n -> infinity.  Conclude that all three
# errors are of order n^(-3/2), rather than n^(-2) or n^(-4) as would happen for a smooth
# integrand.  The fact that x^(1/2) is not differentiable at x=0 is the problem.
#
# # (b) Approximately what n would be needed to make the Simpson's Rule error be less
# # than 10^(-9)?
#
#
# According to the graph,  SE(n) is approximately .08 n^(-3/2).  This would be 10^(-9)
# at the following value of n:
> (.08/10^(-9))^(2/3);

185663.5533
# So the minimum n would be approximately 186000.
#
# # (c) Transform
#
#        Int(1/(x + (x^3 + 1)^(1/2)), x = 0 .. infinity)
#
# # into a proper integral with a smooth integrand, and use Simpson's Rule to find an
# # approximation with error less than 10^(-6).
#
# The integral is improper only at infinity (the square root causes no problem for x > 0, nor
# is the denominator ever 0).  The change of variables t = 1/(x+1) maps 0 < x < infinity to
# 0 < t < 1.
> with(student, changevar):\
J:= Int(1/(x+(x^3+1)^(1/2)), x=0..infinity);\

J := Int(1/(x + (x^3 + 1)^(1/2)), x = 0 .. infinity)
> J2:= changevar(t=1/(x+1), J, t);

J2 := Int(1/(((1 - t)/t + ((1 - t)^3/t^3 + 1)^(1/2))*t^2), t = 0 .. 1)
# Some simplification must be done to see whether we have a singularity at t=0.
> simplify(J2);

Int(1/(t*(1 - t + ((1 - 3*t + 3*t^2)/t^3)^(1/2)*t)), t = 0 .. 1)
# Maple isn't simplifying this as much as we'd want, because it's reluctant to take the t^3
# out of the square root.  Since t is positive, this would be all right.  The "symbolic" option
# to "simplify" can be used here.
> simplify(", symbolic);

Int(1/(t - t^2 + (t - 3*t^2 + 3*t^3)^(1/2)), t = 0 .. 1)
# Yes, there is a problem.  The integrand has a 1/sqrt(t) singularity at 0.  A further
# transformation may smooth this out: t = s^2.
> J3:= changevar(t=s^2, ", s);

J3 := Int(2*s/(s^2 - s^4 + (s^2 - 3*s^4 + 3*s^6)^(1/2)), s = 0 .. 1)
> J4:= simplify(", symbolic);

J4 := -2*Int(1/(-s + s^3 - (1 - 3*s^2 + 3*s^4)^(1/2)), s = 0 .. 1)
> f:= s -> 2/(s - s^3 + (1 - 3*s^2 + 3*s^4)^(1/2));

f := s -> 2/(s - s^3 + (1 - 3*s^2 + 3*s^4)^(1/2))
> a:= 0: b:= 1:
> S1:= evalf(simpson(10)); S2:= evalf(simpson(20));

S1 := 2.013966056

S2 := 2.013929309
# Richardson extrapolation would indicate that the error in S2 is approximately
> Eest:= abs(S2-S1)/15;

Eest := .2449800000*10^(-5)
# a bit larger than we want.  But n=40 should do it.
> S3:= evalf(simpson(40));

S3 := 2.013926945
> Eest:= abs(S3-S2)/15;

Eest := .1576000000*10^(-6)
# and we might as well use the improved approximation
> R:= (16*S3 - S2)/15;

R := 2.013926788
# Just for comparison, here's Maple's value for the integral.
> evalf(J);

2.013926789
#
# # 2.(a) Catalan's constant is defined as
#
#
#          C = Sum((-1)^i/(2*i + 1)^2, i = 0 .. infinity)
#
#
# # By finding bounds on the tail of this series, determine the value of C with an error less
# # than 10^(-8). Compare to Maple's "Catalan". Hint: consider the terms in pairs:
#
#
#
#        C = Sum(a[2*j] + a[2*j+1], j = 0 .. infinity)
#
#
# The pair for j contains the terms for i=2*j and i=2*j+1, so we have
> Sum(1/(4*j+1)^2 - 1/(4*j+3)^2, j=0 .. infinity);

Sum(1/(4*j + 1)^2 - 1/(4*j + 3)^2, j = 0 .. infinity)
> f:= unapply(normal(1/(4*j+1)^2 - 1/(4*j+3)^2),j);

f := j -> 8*(2*j + 1)/((4*j + 1)^2*(4*j + 3)^2)
# This is a decreasing function of j, so the Integral Test estimates should apply.  Upper
# and lower bounds for the "tail", the sum from N+1 to infinity, are
> upperbd:= int(f(x), x=N .. infinity);

upperbd := 1/(2*(4*N + 1)*(4*N + 3))
> lowerbd:= int(f(x), x=N+1 .. infinity);

lowerbd := 1/(2*(4*N + 5)*(4*N + 7))
# The difference between these is
> difference:=normal(upperbd - lowerbd);

difference := 16*(N + 1)/((4*N + 1)*(4*N + 3)*(4*N + 5)*(4*N + 7))
# Our approximation for the tail will be the average of the upper and lower bounds, which
# will have error at most half of this.  For what N will the estimated error be 10^(-8)?
> fsolve(difference/2 = 10^(-8), N);

-.9999999888
# Silly of me, I didn't tell Maple that N was positive.
> fsolve(difference/2 = 10^(-8), N, N = 1 .. infinity);

145.2023119
> MyCatalan:= evalf(Sum(f(j), j = 0 .. 146) + subs(N=146, upperbd+lowerbd)/2);

MyCatalan := .9159655942
> evalf(Catalan);

.9159655942
>
# # (b) Euler's constant gamma is defined by
#
#
#        gamma = limit(Sum(1/i, i = 1 .. n) - ln(n), n = infinity)
#
#
# # By finding bounds on the tail of the convergent series
#
#
#        Sum(1/i + ln(i) - ln(i + 1), i = 1 .. infinity)
#
#
# # determine the value of gamma with an error less than 10^(-8). Compare to Maple's
# # "gamma".
#
# Note that the partial sum to i=N is
> p:= N  -> Sum(1/i, i=1..N) - ln(N+1);

p := N -> Sum(1/i, i = 1 .. N) - ln(N + 1)
> f:= x -> 1/x + ln(x) - ln(x+1);

f := x -> 1/x + ln(x) - ln(x + 1)
# This is positive and decreasing, with limit 0.
> normal(diff(f(x),x));

- 1/(x^2*(x + 1))
# So the Integral Test bounds apply.  The upper and lower bounds for the tail are
> upperbd:= int(f(x), x=N .. infinity);

upperbd := - ln(N) + ln(N + 1) N - ln(N) N + ln(N + 1) - 1
> lowerbd:= int(f(x), x=N+1 .. infinity);

lowerbd := - 2 ln(N + 1) + ln(N + 2) N - ln(N + 1) N + 2 ln(N + 2) - 1
> difference:=upperbd - lowerbd;

difference := - ln(N) + 2 ln(N + 1) N - ln(N) N + 3 ln(N + 1) - ln(N + 2) N - 2 ln(N + 2)
> fsolve(difference/2 = 10^(-8), N, N = 1 .. infinity);

4999.122055
# A bit large for my taste!  Let's try the technique from Lesson 22, based on the Trapezoid
# Rule.  We required (D@@2)(f)(t) to be decreasing.  To test this, look at (D@@3)(f)(t).
> normal((D@@3)(f)(t));

- 2*(6*t^2 + 8*t + 3)/(t^4*(t + 1)^3)
> Approx:= (f,N) -> Sum(f(n),n=1..N) \
+ Int(f(t),t=N .. infinity) - f(N)/2 \
- (D(f)(N+1)+D(f)(N-1))/24;

Approx := (f,N) -> Sum(f(n), n = 1 .. N) + Int(f(t), t = N .. infinity) - 1/2*f(N) - 1/24*D(f)(N + 1) - 1/24*D(f)(N - 1)
> ErrEst:= (f,N) -> (D(f)(N+1)-D(f)(N-1))/24;

ErrEst := (f,N) -> 1/24 D(f)(N + 1) - 1/24 D(f)(N - 1)
> ErrEst(f,N);

- 1/(24*(N + 1)^2) + 1/(24*(N + 1)) - 1/(24*(N + 2)) + 1/(24*(N - 1)^2) - 1/(24*(N - 1)) + 1/(24*N)
> fsolve(" = 10^(-8), N = 1 .. infinity);

70.39108520
> evalf(Approx(f, 71));

.577215665
> evalf(ErrEst(f,71));

.9662771958*10^(-8)
> evalf(gamma);

.5772156649
>
#
# # 3.(a) Let
#
#
#        f(x) = 1/(2*Pi) * Int(exp(x*cos(t)), t = 0 .. 2*Pi)
#
#
# # Maple can't evaluate this integral, but in fact it turns out to be a Maple function:
# # "BesselI(0,x)". Evaluate f and "BesselI(0,...)" at several points to convince yourself that
# # they are the same.
#
> f:= x -> 1/(2*Pi)*Int(exp(x*cos(t)), t=0 .. 2*Pi);

f := x -> 1/2*Int(exp(x*cos(t)), t = 0 .. 2*Pi)/Pi
> evalf([seq(f(kk) = BesselI(0,kk), kk = -1 .. 3)]);

[1.266065878 = 1.266065878, .9999999995 = 1., 1.266065878 = 1.266065878,

2.279585302 = 2.279585302, 4.880792584 = 4.880792586]
#
# # (b) Find the Taylor series of f(x) at x=0 by starting with the Taylor series of exp. Hint:
#        int(cos(t)^n, t = 0 .. 2*Pi) = 0                            if n is odd
#
#                                     = 2^(1 - n)*Pi*n!/((n/2)!)^2   if n is even
#
> exp(x*cos(t)) = Sum((x*cos(t))^n/n!, n = 0 .. infinity);

exp(x cos(t)) = Sum((x*cos(t))^n/n!, n = 0 .. infinity)
> J := 1/(2*Pi)*int(op(2,"), t = 0 .. 2*Pi);

J := 1/2*Int(Sum((x*cos(t))^n/n!, n = 0 .. infinity), t = 0 .. 2*Pi)/Pi
# It's not so simple to get Maple to interchange the integral and the sum, so I'll just do it
#  by hand.
> J:= 1/(2*Pi)*sum(Int((x*cos(t))^n/n!, t=0..2*Pi), n=0 .. infinity);

J := 1/2*Sum(Int((x*cos(t))^n/n!, t = 0 .. 2*Pi), n = 0 .. infinity)/Pi
# Maple also won't do the integral of cos(t)^n in general (although it does do it for any
# given value of n).
> int(cos(t)^n, t=0..2*Pi);

Int(cos(t)^n, t = 0 .. 2*Pi)
> seq(int(cos(t)^nn, t=0 .. 2*Pi) , nn = 0 .. 6);

2 Pi, 0, Pi, 0, 3/4 Pi, 0, 5/8 Pi
> seq(2^(1-2*kk)*Pi*(2*kk)!/(kk!)^2, kk = 0 .. 3);

2 Pi, Pi, 3/4 Pi, 5/8 Pi
> J:= expand(1/(2*Pi)*Sum(x^(2*k)/(2*k)!  * 2^(1-2*k)*Pi*(2*k)!/(k!)^2, k = 0 .. infinity));

J := Sum((x^k)^2/((k!)^2*(2^k)^2), k = 0 .. infinity)
#
# # (c) Check your answer for (b) by using "taylor" on f(x) and BesselI(0,x).
> expand(1/(2*Pi)*sum(x^(2*k)/(2*k)!  * 2^(1-2*k)*Pi*(2*k)!/(k!)^2, k = 0 .. 5));

1 + 1/4*x^2 + 1/64*x^4 + 1/2304*x^6 + 1/147456*x^8 + 1/14745600*x^10
> taylor(f(x), x=0, 11);

1 + 1/4*x^2 + 1/64*x^4 + 1/2304*x^6 + 1/147456*x^8 + 1/14745600*x^10 + O(x^11)
> taylor(BesselI(0,x), x=0, 11);

1 + 1/4*x^2 + 1/64*x^4 + 1/2304*x^6 + 1/147456*x^8 + 1/14745600*x^10 + O(x^11)
>
# # (d) Study the convergence of the Taylor series to BesselI(0,x), using the methods we
# # used for exp in Lesson 23. Are the results similar?
> P:= (n,t) -> subs(x=t, convert(taylor(BesselI(0,x), x=0, n+1), polynom));

P := (n,t) -> subs(x = t, convert(taylor(BesselI(0, x), x = 0, n + 1), polynom))
# Of course we only need to consider P(n,t) for even n, because the Taylor series has no odd
# terms.
> plot({ BesselI(0,x), seq(P(2*nn, x), nn=1 .. 5) }, x=-5 .. 5, y=0 .. 20, colour = black);
> plot({ seq(BesselI(0,x) - P(2*nn, x), nn = 1 .. 10) }, x = 0 .. 10, y = 0 .. 20, colour=black);
# A similar effect: the curves seem to march off to the right at regular intervals.  The
# horizontal distance between the curves is approximately 2/E this time.
> plot({ seq(BesselI(0,x+2*nn/E) - P(2*nn, x+2*nn/E), nn = 1 .. 20) },\
x=-1 .. 3, y = 0 .. 3, color=black);
>
# # (e) The function f(x) satisfies the differential equation x f''(x) + f'(x) - x f(x) = 0. What
# # is x S[10]''(x) + S[10]'(x) - x S[10](x), where S[10] is the Taylor polynomial of degree
# # 10?
> P(10,x);

1 + 1/4*x^2 + 1/64*x^4 + 1/2304*x^6 + 1/147456*x^8 + 1/14745600*x^10
> x*diff(P(10,x),x\$2) + diff(P(10,x),x) - x*P(10,x);

x*(1/2 + 3/16*x^2 + 5/384*x^4 + 7/18432*x^6 + 1/163840*x^8) + 1/2*x + 1/16*x^3
    + 1/384*x^5 + 1/18432*x^7 + 1/1474560*x^9
    - x*(1 + 1/4*x^2 + 1/64*x^4 + 1/2304*x^6 + 1/147456*x^8 + 1/14745600*x^10)
> expand(");

- 1/14745600*x^11
>
#
# # 4.(a) Find the Fourier series for the function f(x) defined on [-Pi, Pi] by f(x) = 1 for x >
# # 0 and -1 for x < 0.
#
# The function is odd, so there will be only sin terms, and the coefficient can be found by
# integrating on [0,Pi].
> b:= unapply( 2/Pi*int(sin(n*t), t=0 .. Pi),n);

b := n -> 2*(- cos(Pi*n)/n + 1/n)/Pi
# Maple doesn't know the general formula for cos(Pi n) when n is an integer.  Note that
# b(n) is 0 when n is even, and 4/(n*Pi) when n is odd.
> seq(b(nn), nn = 1 .. 6);

4/Pi, 0, 4/(3*Pi), 0, 4/(5*Pi), 0
> S:= Sum(4/((2*k+1)*Pi) * sin((2*k+1)*x), k = 0 .. infinity);

S := Sum(4*sin((2*k + 1)*x)/((2*k + 1)*Pi), k = 0 .. infinity)
# # (b) Produce an animation of the partial sums, so that the Gibbs phenomenon can be
# # seen. Near what points does this occur?
> PartialSum:= n -> Sum(4/((2*k+1) *Pi) * sin((2*k+1)*x), k = 0 .. n-1);

PartialSum := n -> Sum(4*sin((2*k + 1)*x)/((2*k + 1)*Pi), k = 0 .. n - 1)
> value(PartialSum(3));

4*sin(x)/Pi + 4/3*sin(3*x)/Pi + 4/5*sin(5*x)/Pi
> frame:= n -> plot(PartialSum(n), x = 0 .. 2*Pi):
> with(plots,display): display([seq(frame(nn), nn = 1 .. 8)], insequence=true);
# The Gibbs phenomenon occurs near the integer multiples of Pi, which are the points
# where f is discontinuous.
#
# # (c) Show that the ratio of the size of the "overshoot" to the size of the jump is the same
# # as it was for the example we studied in Lesson 26.
> diff(PartialSum(n), x);

Sum(4*cos((2*k + 1)*x)/Pi, k = 0 .. n - 1)
> value(");

4*sin(x*n)*cos(x*n)/(sin(x)*Pi)
# So this will be 0 when sin(n x) = 0 or cos(n x) = 0.  The highest peak seems to be the first
# one to the right of x=0, which will be at x = Pi/(2 n) where cos(n x) = 0.
> S:= subs(x=Pi/(2*n), PartialSum(n));

S := Sum(4*sin(1/2*(2*k + 1)*Pi/n)/((2*k + 1)*Pi), k = 0 .. n - 1)
# By analogy with the case done in Lesson 26, we might expect this to be a Riemann sum
# for an integral.  If I take t = (k + 1/2)/n, we are adding 2/(Pi n) sin(Pi t)/t for n equally
# spaced values of t from 1/(2 n) to 1 - 1/(2 n).  The Midpoint Rule approximation for
# int(sin(Pi t)/t, t=0 .. 1) with n intervals is as follows:
> g:= t -> sin(Pi * t)/t;

g := t -> sin(Pi*t)/t
> Mrule:= 1/n * Sum(g((k+1/2)/n), k = 0 .. n-1);

Mrule := Sum(sin(Pi*(k + 1/2)/n)*n/(k + 1/2), k = 0 .. n - 1)/n
> simplify(S/Mrule);

2/Pi
# So as n -> infinity, S should approach a limit of
> 2/Pi * int(g(t), t=0..1);

2*Si(Pi)/Pi
# Since f(x) = 1 for x > 0 and -1 for x < 0, the "overshoot" is
> " - 1;

2*Si(Pi)/Pi - 1
# and the ratio of overshoot to height of jump is
> evalf("/2);\

.0894898721
# which is the same as we found for the example in Lesson 26.
#

```