# 1.8 Lesson 8

```# Lesson 8
# ======
#
# Problem: find a cubic polynomial for which Newton's method has an attracting 2-cycle.
> restart;
> newt:= x -> x - f(x)/D(f)(x);
> dnewt:=D(newt);
# The derivative of newt simplifies to f(x)*f''(x)/f'(x)^2, which is 0 wherever
# f''(x)=0.  So the simplest way to ensure that the cycle is attracting is to
# make f''=0 at one point on the cycle.
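# (As a check, we can ask Maple to simplify dnewt symbolically before f is
# assigned; "normal" should combine the terms into f(x)*f''(x)/f'(x)^2,
# written with the D operator.)
> normal(dnewt(x));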
# Here is a cubic polynomial.
> f:= x -> x^3 + a*x^2 + b*x + c;
# We choose the points on the cycle to be 0 and 1.  The first two equations make this a
# cycle.
> eq1:= newt(0) = 1;
> eq2:= newt(1) = 0;
# The third equation is f''(0)=0.
> eq3:= (D@@2)(f)(0) = 0;
# "solve" will solve (if possible) a set of equations for a set of variables.
> s:=solve({eq1, eq2, eq3}, {a,b,c});
# Here's our polynomial.
> subs(s, f(x));
# We can use "subs" on a function definition as well as on an expression.
# However, we must apply it to "eval" of the name of the function; otherwise
# "subs" would act only on the name f, not on the definition.
> f:= subs(s, eval(f));
# Now to try a picture of this.  The "newtstep" command plots one step
# of Newton's method.
> newtstep:= x -> plot([ [x,0], [x, f(x)], [newt(x), 0], [newt(x), f(newt(x))] ]);
# We choose a starting point near, but not on, the cycle.  Some experimenting was
# necessary: 0.2 would not have worked.
> x[0]:= 0.1;
# We calculate 10 steps of Newton's method.
> for jj from 1 to 10 do x[jj]:= newt(x[jj-1]); od:
> with(plots):
# We plot these steps, together with the graph of f.
> display({ seq( newtstep(x[jj]), jj=0 .. 10),
>       plot(f(x), x = -2 .. 1.5) });
# Here are the even-numbered x[j]'s.  Note that they approach 0, at
# first slowly but then very rapidly.  The quadratic convergence to 0
# (x[j+2] approximately k x[j]^2) is a result of the fact that D(newt) is
# 0 at one of the points of the cycle.
> seq(x[2*jj], jj=0..5);
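# As a rough check of the quadratic convergence, the ratios x[2j+2]/x[2j]^2
# should settle toward a constant (within the limits of 10-digit floating
# point).
> seq(x[2*jj+2]/x[2*jj]^2, jj=0..4);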
# How big is the "basin of attraction" of this 2-cycle?
> plot({-1,1,newt(newt(x))/x}, x = -1 .. 1, -2 .. 2);
# If x[0] is between about -.12 and .14, x[2] will be closer to 0 than x[0]
# was, and the sequence x[j] will approach the 2-cycle.  Outside this
# interval, we can't be sure what will happen.
#
#
# "solve" in Several Variables
# ====================
> restart;
# Here are two polynomials in two variables.
> p1:= (x,y) -> x^4 - x*y^2 - y^3 + 1 ;
> p2:= (x,y) -> x^3 - 5* x^2 * y + y^4 -  2 ;
# Do you think Maple can find a solution for the simultaneous equations p1(x,y) = 0 and
# p2(x,y)=0?
> sol:=solve({p1(x,y)=0, p2(x,y)=0}, {x,y});
# It did find the solutions after a fashion: x is "RootOf" a complicated polynomial, and y is a
# complicated polynomial of that.  It's our bad luck that the first complicated polynomial
# can't be solved in "closed form".  But what is this polynomial?
> res:=resultant(p1(x,y),p2(x,y),y);
# This is the "resultant" of p1(x,y) and p2(x,y), considered as polynomials in y (with
# coefficients that are polynomials in x).  The main property is that if two polynomials in y
# have a common root, their resultant is 0.
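# For instance, (y-1)*(y-2) and (y-2)*(y+5) share the root y=2, so their
# resultant (as polynomials in y) is 0:
> resultant((y-1)*(y-2), (y-2)*(y+5), y);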
#
# To understand resultants, it's simplest to eliminate the distraction of the x variable, and
# start with polynomials with constant coefficients.  I'll take these same polynomials, but fix
# x=3.
> f:= unapply(p1(3,y), y);
> g:= unapply(p2(3,y), y);
# Now the resultant of f(y) and g(y) is a number.
> resultant(f(y),g(y),y);
# Here's the definition of resultant.  Suppose f(x) = a x^n + ... and
# g(x) = b x^m + ... are polynomials of degrees n and m respectively.
# Let  f(x) have roots r[1], ..., r[n] (with perhaps multiple roots) and g(x) have roots s[1], ...,
# s[m].  Then  the resultant of f and g is
> a^m * b^n * Product(Product(r[i] - s[j], i = 1 .. n), j = 1 .. m);
# Note that this will be 0 if f and g have a root in common.  Let's calculate the roots for our
# examples.
> r:= fsolve(f(y), y, complex);
> s:= fsolve(g(y), y, complex);
> product(product(r[i]-s[j], i=1..3), j=1..4);
> round(");
# Here's a shortcut to computing it.  Note that
> 'g'(x) = b * Product(x-s[j], j = 1 .. m);
# So the resultant of f and g must be
>  a^m * Product('g'(r[i]), i = 1 .. n);
# Let's try it.
> product(g(r[i]), i=1..3);
# It also works the other way: write
> 'f'(x) = a * Product(x - r[i], i = 1 .. n);
# but note that each factor (r[i] - s[j]) must be multiplied by -1, so the resultant is
> (-1)^(m*n) * b^n * Product('f'(s[j]), j = 1 .. m);
> product(f(s[j]), j=1..4);
# Actually, there is a special form of "product" for a product of some expression over the
# roots of a polynomial.  It doesn't resort to floating-point approximations: it turns out
# there are efficient ways to calculate the sum of the k-th powers of the roots of a
# polynomial, where k is any positive integer.  For example, for a monic
# polynomial the sum of the roots is minus the coefficient of x^(d-1), where d
# is the degree.
> product(f(t), t = RootOf(g(y)) );
# Let's try this for our original polynomials, which are functions of x and y, thinking of
# them as polynomials in y whose coefficients happen to involve x.
> product(p1(x,t), t = RootOf(p2(x,y), y) );
> " - res;
# If you had to compute the roots of one of the polynomials to calculate the resultant,  it
# wouldn't be very useful.  Fortunately, there is an efficient way to calculate the resultant.
# It is based on the principle that for any polynomial p, the resultant of f and g + p f is (up
# to a power of the leading coefficient a of f) the resultant of f and g.  That's because
#     (g + p f)(r[i]) = g(r[i]).
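# A quick check with p = 1: here g + f still has degree 4, and the leading
# coefficient of f is -1 with (-1)^4 = 1, so the two resultants should agree
# exactly and the difference below should be 0.
> resultant(f(y), g(y) + f(y), y) - resultant(f(y), g(y), y);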
# Here's how we could use this to calculate the resultant of our f and g by hand.
# We'd start with "long division" of g by f.
> 'g(y)' = quo(g(y),f(y), y) * 'f(y)' + rem(g(y), f(y), y);
> normal(");
# So resultant(f(y), g(y)) = resultant(f(y), -221 + 37 y + 9 y^2); the correction
# factor, a power of the leading coefficient -1 of f, is (-1)^2 = 1 here.  This
# reduces the problem from calculating the resultant of a quartic and a cubic to
# a cubic and a quadratic.
# Repeating the process, we get a quadratic and a linear polynomial, and then linear and
# constant.
> g1:= rem(g(y), f(y), y);
> resultant(f(y), g(y),y) = resultant(f(y), g1,y);
> f1:= rem(f(y), g1, y);
> resultant(f1, g1, y) = resultant(f(y), g1, y)/9^2;
> g2:= rem(g1,f1,y);
> resultant(f1,g2,y)= resultant(f1,g1,y)*(-2359/81)^(-2);

```
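The remainder-based calculation at the end of the lesson can also be sketched outside Maple.  Here is a small Python version (an illustration of the same Euclidean reduction, not Maple's actual algorithm), using exact rational arithmetic and cross-checked against the Sylvester-matrix definition of the resultant, for the same f(y) = -y^3 - 3y^2 + 82 and g(y) = y^4 - 45y + 25 (that is, p1(3,y) and p2(3,y)):

```python
from fractions import Fraction

# Polynomials are lists of coefficients, lowest degree first:
# -y^3 - 3y^2 + 82 is [82, 0, -3, -1].

def polyrem(f, g):
    """Remainder of f on division by g (deg g >= 1), exact arithmetic."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g):
        shift = len(f) - len(g)
        q = f[-1] / g[-1]
        for i, c in enumerate(g):
            f[i + shift] -= q * c
        f.pop()                          # the leading term cancels exactly
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return f

def res_euclid(f, g):
    """Resultant via repeated remainders, as in the lesson."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    if len(f) == 1:                      # res(constant a, g) = a^deg(g)
        return f[0] ** (len(g) - 1)
    if len(g) == 1:                      # res(f, constant b) = b^deg(f)
        return g[0] ** (len(f) - 1)
    n, m = len(f) - 1, len(g) - 1
    if n < m:                            # res(f,g) = (-1)^(n*m) res(g,f)
        return (-1) ** (n * m) * res_euclid(g, f)
    r = polyrem(f, g)                    # f = q*g + r, so f(s_j) = r(s_j)
    if r == [Fraction(0)]:
        return Fraction(0)               # f and g share a root
    # res(g, f) = lc(g)^(deg f - deg r) * res(g, r)
    return (-1) ** (n * m) * g[-1] ** (n - len(r) + 1) * res_euclid(g, r)

def res_sylvester(f, g):
    """Resultant as the determinant of the Sylvester matrix."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    fr = [Fraction(c) for c in reversed(f)]   # highest degree first
    gr = [Fraction(c) for c in reversed(g)]
    M = [[Fraction(0)] * i + fr + [Fraction(0)] * (m - 1 - i) for i in range(m)]
    M += [[Fraction(0)] * i + gr + [Fraction(0)] * (n - 1 - i) for i in range(n)]
    det = Fraction(1)
    for col in range(size):              # Gaussian elimination, exact pivots
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, size):
            if M[r][col] != 0:
                t = M[r][col] / M[col][col]
                for c in range(col, size):
                    M[r][c] -= t * M[col][c]
    return det

f = [82, 0, -3, -1]        # p1(3, y) = -y^3 - 3*y^2 + 82
g = [25, -45, 0, 0, 1]     # p2(3, y) = y^4 - 45*y + 25
r1 = res_euclid(f, g)
r2 = res_sylvester(f, g)
print(r1, r2)              # both methods give the same integer
```

Here `res_euclid` performs the same chain of remainders carried out at the end of the worksheet (g1, f1, g2), keeping track of the powers of the leading coefficients, and the Sylvester determinant provides an independent check.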