Representation Of Polynomials
The coefficient and point-value representations of polynomials are in a sense equivalent; that is, a polynomial in point-value form has a unique counterpart in coefficient form. In this section, we introduce the two representations and show how they can be combined to allow multiplication of two degree-bound n polynomials in Θ(n lg n) time.
Coefficient representation
A coefficient representation of a polynomial of degree-bound n is a vector of coefficients a = (a0, a1,..., an-1). In matrix equations in this chapter, we shall generally treat vectors as column vectors.
The coefficient representation is convenient for certain operations on polynomials. For example, the operation of evaluating the polynomial A(x) at a given point x0 consists of computing the value of A(x0). Evaluation takes time Θ(n) using Horner's rule:
A(x0) = a0 + x0(a1 + x0(a2 + ··· + x0(an-2 + x0 an-1) ··· )).
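Horner's rule can be sketched in a few lines of Python; the nested form above corresponds to folding the coefficients from highest to lowest:

```python
def horner_eval(a, x0):
    """Evaluate A(x) = a[0] + a[1]*x + ... + a[n-1]*x**(n-1) at x0.

    Theta(n) time: one multiplication and one addition per coefficient,
    working from the highest-order coefficient inward.
    """
    result = 0
    for coeff in reversed(a):
        result = result * x0 + coeff
    return result

# A(x) = 3 - 2x + x^2 at x = 4: 3 - 8 + 16 = 11
print(horner_eval([3, -2, 1], 4))  # -> 11
```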
Similarly, adding two polynomials represented by the coefficient vectors a = (a0, a1,..., an-1) and b = (b0, b1,..., bn-1) takes Θ(n) time: we just produce the coefficient vector c = (c0, c1,..., cn-1), where cj = aj + bj for j = 0, 1,..., n - 1.
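Coefficient-wise addition is a one-liner; a minimal sketch, assuming both inputs share the same degree-bound n:

```python
def poly_add(a, b):
    """Add two degree-bound n polynomials in coefficient form.

    c[j] = a[j] + b[j] for each j, so the running time is Theta(n).
    """
    assert len(a) == len(b), "both polynomials must share the degree-bound n"
    return [aj + bj for aj, bj in zip(a, b)]

# (1 + 2x) + (3 + 4x) = 4 + 6x
print(poly_add([1, 2], [3, 4]))  # -> [4, 6]
```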
Now, consider the multiplication of two degree-bound n polynomials A(x) and B(x) represented in coefficient form. If we use the method described by equations () and (), polynomial multiplication takes time Θ(n²), since each coefficient in the vector a must be multiplied by each coefficient in the vector b. The operation of multiplying polynomials in coefficient form seems to be considerably more difficult than that of evaluating a polynomial or adding two polynomials. The resulting coefficient vector c, given by equation (), is also called the convolution of the input vectors a and b, denoted c = a ⊗ b. Since multiplying polynomials and computing convolutions are fundamental computational problems of considerable practical importance, this chapter concentrates on efficient algorithms for them.
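The Θ(n²) coefficient-form multiplication can be sketched directly: every coefficient of a meets every coefficient of b, and the product of the x^j and x^k terms contributes to c[j + k]:

```python
def poly_mul_naive(a, b):
    """Multiply two degree-bound n polynomials in coefficient form.

    The result is the convolution c = a (*) b, where
    c[j] = sum over k of a[k] * b[j - k].
    Theta(n^2) time: the double loop pairs every a[j] with every b[k].
    """
    n = len(a)
    c = [0] * (2 * n - 1)  # the product has degree-bound 2n - 1
    for j in range(n):
        for k in range(n):
            c[j + k] += a[j] * b[k]
    return c

# (1 + x)(1 + x) = 1 + 2x + x^2
print(poly_mul_naive([1, 1], [1, 1]))  # -> [1, 2, 1]
```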
Point-value representation
A point-value representation of a polynomial A(x) of degree-bound n is a set of n point-value pairs
{(x0, y0), (x1, y1),..., (xn-1, yn-1)}
such that all of the xk are distinct and
yk = A(xk)    (30.3)
for k = 0, 1,..., n - 1. A polynomial has many different point-value representations, since any set of n distinct points x0, x1,..., xn-1 can be used as a basis for the representation.
Computing a point-value representation for a polynomial given in coefficient form is in principle straightforward, since all we have to do is select n distinct points x0, x1,..., xn-1 and then evaluate A(xk) for k = 0, 1,..., n - 1. With Horner's method, this n-point evaluation takes time Θ(n2). We shall see later that if we choose the xk cleverly, this computation can be accelerated to run in time Θ(n lg n).
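Coefficient-to-point-value conversion is then just n applications of Horner's rule; a minimal sketch:

```python
def to_point_value(a, xs):
    """Convert coefficient form to point-value form by evaluating A(x)
    at each of the n distinct points in xs.

    Each evaluation uses Horner's rule in Theta(n) time, so this
    straightforward conversion costs Theta(n^2) overall; choosing the
    points xs cleverly (the subject of this chapter) brings the cost
    down to Theta(n lg n).
    """
    def horner(x0):
        result = 0
        for coeff in reversed(a):
            result = result * x0 + coeff
        return result

    return [(x, horner(x)) for x in xs]

# A(x) = 1 + x^2 at the points 0, 1, 2
print(to_point_value([1, 0, 1], [0, 1, 2]))  # -> [(0, 1), (1, 2), (2, 5)]
```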
The inverse of evaluation, determining the coefficient form of a polynomial from a point-value representation, is called interpolation. The following theorem shows that interpolation is well defined, assuming that the degree-bound of the interpolating polynomial equals the number of given point-value pairs.
Theorem (Uniqueness of an interpolating polynomial)
For any set {(x0, y0), (x1, y1),..., (xn-1, yn-1)} of n point-value pairs such that all the xk values are distinct, there is a unique polynomial A(x) of degree-bound n such that yk = A(xk) for k = 0, 1,..., n - 1.
Proof The proof is based on the existence of the inverse of a certain matrix. Equation (30.3) is equivalent to the matrix equation
    [ 1   x0     x0^2     ···  x0^(n-1)   ] [ a0   ]   [ y0   ]
    [ 1   x1     x1^2     ···  x1^(n-1)   ] [ a1   ] = [ y1   ]    (30.4)
    [ ·    ·      ·        ·     ·        ] [  ·   ]   [  ·   ]
    [ 1   xn-1   xn-1^2   ···  xn-1^(n-1) ] [ an-1 ]   [ yn-1 ]
The matrix on the left is denoted V(x0, x1,..., xn-1) and is known as a Vandermonde matrix. This matrix has determinant

∏ 0≤j<k≤n-1 (xk - xj),

and therefore, by the theorem that an n × n matrix A is singular if and only if det(A) = 0, it is invertible (that is, nonsingular) if the xk are distinct. Thus, the coefficients aj can be solved for uniquely given the point-value representation:
a = V(x0, x1,..., xn-1)^(-1) y.
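Interpolation by this route can be sketched by building the Vandermonde system V a = y and solving it directly. The sketch below uses Gauss-Jordan elimination with exact rational arithmetic (Python's Fraction) rather than an explicit matrix inverse; the Θ(n³) elimination cost is why this chapter pursues faster methods:

```python
from fractions import Fraction

def interpolate(points):
    """Recover coefficients a from n point-value pairs (xk, yk) by
    solving the Vandermonde system V(x0,...,xn-1) a = y.

    Uses Gauss-Jordan elimination over exact rationals; Theta(n^3) time.
    Requires the xk to be distinct, which makes V nonsingular.
    """
    n = len(points)
    # Augmented matrix [V | y]; row k is (1, xk, xk^2, ..., xk^(n-1), yk).
    M = [[Fraction(x) ** j for j in range(n)] + [Fraction(y)]
         for x, y in points]
    for col in range(n):
        # Find a row with a nonzero pivot in this column; one always
        # exists when the xk are distinct (V is nonsingular).
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[r][n] / M[r][r] for r in range(n)]

# Recover A(x) = 1 + x^2 from its values at the points 0, 1, 2
coeffs = interpolate([(0, 1), (1, 2), (2, 5)])
print([int(c) for c in coeffs])  # -> [1, 0, 1]
```

In practice one would use Lagrange's formula or a library solver rather than hand-rolled elimination; the sketch only illustrates that distinct xk make the system uniquely solvable.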