x^n + a_1*x^(n-1) + ... + a_(n-1)*x + a_n = (x - r_1)*(x - r_2)*...*(x - r_n)

where the r_i are real or complex numbers.

If you want to build a polynomial that has the n solutions { r_1, ..., r_n }, just multiply out (x - r_1)*...*(x - r_n). The result will be a polynomial of the form:

x^n + a_1*x^(n-1) + ... + a_n
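This construction can be carried out directly in code. Here is a minimal Python sketch (the helper name `poly_from_roots` is mine, introduced for illustration) that multiplies out the linear factors one at a time:

```python
def poly_from_roots(roots):
    """Multiply out (x - r_1)*(x - r_2)*...*(x - r_n).

    Returns the coefficients [1, a_1, a_2, ..., a_n]
    in descending powers of x."""
    coeffs = [1]
    for r in roots:
        # Multiplying by (x - r): the x factor shifts every coefficient
        # up one degree; the -r factor subtracts r times each coefficient.
        coeffs = [(coeffs[j] if j < len(coeffs) else 0)
                  - r * (coeffs[j - 1] if j >= 1 else 0)
                  for j in range(len(coeffs) + 1)]
    return coeffs

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
print(poly_from_roots([1, 2, 3]))  # [1, -6, 11, -6]
```

For example, choosing the roots 1, 2, 3 produces x^3 - 6x^2 + 11x - 6, which indeed vanishes at each of those values.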

On the other hand, if you have a polynomial of degree n, the Fundamental Theorem of Algebra tells us that n solutions exist, but it doesn't tell us how to find them. Niels Henrik Abel and, later, Evariste Galois proved that there is no general formula in radicals for the solutions of polynomials of degree 5 or greater. Earlier, Girolamo Cardano and Lodovico Ferrari had found formulas for the cubic and the quartic equation.

In today's blog, I will discuss how the elementary symmetric polynomials emerge from the relationship between polynomials and their roots.

Definition 1: Elementary Symmetric Polynomials σ_k

The elementary symmetric polynomial σ_k is the sum of all possible k-way products from a set of n variables { r_1, r_2, ..., r_n } such that:

σ_k = r_1*...*r_k + ... + r_(n-k+1)*...*r_n

Here are some examples:

σ_1 = r_1 + r_2 + ... + r_n

σ_2 = r_1*r_2 + r_1*r_3 + ... + r_(n-1)*r_n

In each of these cases, we can see that there are C(n,k) terms in the elementary symmetric polynomial σ_k. [C(n,k) = n!/[(n-k)!*k!]. For a proof that each σ_k consists of C(n,k) terms, see here.]
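Definition 1 translates directly into code. Here is a short Python sketch (the helper name `sigma` is mine) that computes σ_k and confirms the C(n,k) term count:

```python
from itertools import combinations
from math import comb, prod

def sigma(k, rs):
    """Elementary symmetric polynomial sigma_k: the sum of all
    k-way products of distinct variables taken from rs."""
    return sum(prod(c) for c in combinations(rs, k))

rs = [1, 2, 3, 4]
print(sigma(1, rs))  # 1+2+3+4 = 10
print(sigma(2, rs))  # 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4 = 35

# Each sigma_k is a sum of exactly C(n, k) terms:
n = len(rs)
for k in range(1, n + 1):
    assert len(list(combinations(rs, k))) == comb(n, k)
```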

These polynomials are called symmetric because you can switch the values of any two of the variables and the result doesn't change.

They are called elementary because, as it turns out, every symmetric polynomial can be restated in terms of these elementary symmetric polynomials. For a proof of this important fact, see here.
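The symmetry property is easy to check numerically: every permutation of the variables gives the same value of σ_k. A small Python sketch (`sigma` is again a helper name introduced here for illustration):

```python
from itertools import combinations, permutations
from math import prod

def sigma(k, rs):
    """sigma_k: sum of all k-way products of distinct variables in rs."""
    return sum(prod(c) for c in combinations(rs, k))

rs = (2, 5, 7, 11)
# Every reordering of the variables yields the same sigma_k:
for k in range(1, len(rs) + 1):
    assert len({sigma(k, p) for p in permutations(rs)}) == 1
```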

Elementary symmetric polynomials characterize the relationship between the roots of a polynomial and the coefficients that make up the polynomial.

Lemma 1:

For any given polynomial of the form:

x^n + a_1*x^(n-1) + ... + a_(n-1)*x + a_n = 0

with roots r_1, ..., r_n, we have:

σ_k = (-1)^k * a_k

Proof:

(1) From the Fundamental Theorem of Algebra (see here), we know that there exist r_1, r_2, ..., r_n such that:

x^n + a_1*x^(n-1) + ... + a_(n-1)*x + a_n = (x - r_1)*(x - r_2)*...*(x - r_n)

(2) When we multiply out the right-hand side, each coefficient a_i collects all the products that involve exactly (n-i) x's and i of the (-r)'s. In other words, it is a sum of C(n,i) terms, since there are C(n,i) ways to choose which i roots appear [see here for a review of C(n,i)].

(3) In other words:

a_i*x^(n-i) = (-r_1)*...*(-r_i)*x^(n-i) + (-r_1)*...*(-r_(i-1))*(-r_(i+1))*x^(n-i) + ...

where each term is the product of i distinct (-r)'s and (n-i) x's.

(4) Dividing both sides by x^(n-i) gives us:

a_i = (-r_1)*...*(-r_i) + (-r_1)*...*(-r_(i-1))*(-r_(i+1)) + ...

(5) Since each term is a product of exactly i negated roots, we can pull a factor of (-1)^i out of every term:

a_i = (-r_1)*...*(-r_i) + (-r_1)*...*(-r_(i-1))*(-r_(i+1)) + ... =

= (-1)^i*(r_1*...*r_i) + (-1)^i*(r_1*...*r_(i-1)*r_(i+1)) + ... =

= (-1)^i*[ r_1*...*r_i + r_1*...*r_(i-1)*r_(i+1) + ... ]

(6) Now, based on the definition of σ_k, the bracketed sum is exactly σ_i, so we have:

a_i = (-1)^i * σ_i

which is the same as (multiplying both sides by (-1)^i and using (-1)^(2i) = 1):

σ_i = (-1)^i * a_i

QED
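Lemma 1 can also be checked numerically: expand the product of the (x - r_i) into coefficients and compare each a_k against (-1)^k * σ_k. A Python sketch (both helper names are mine, introduced for illustration):

```python
from itertools import combinations
from math import prod

def sigma(k, rs):
    """sigma_k: sum of all k-way products of distinct variables in rs."""
    return sum(prod(c) for c in combinations(rs, k))

def poly_from_roots(roots):
    """Expand (x - r_1)*...*(x - r_n) into coefficients [1, a_1, ..., a_n]."""
    coeffs = [1]
    for r in roots:
        # Multiply the running polynomial by (x - r).
        coeffs = [(coeffs[j] if j < len(coeffs) else 0)
                  - r * (coeffs[j - 1] if j >= 1 else 0)
                  for j in range(len(coeffs) + 1)]
    return coeffs

roots = [2, -1, 3, 5]
coeffs = poly_from_roots(roots)  # coeffs[k] is a_k (and coeffs[0] = 1)
for k in range(1, len(roots) + 1):
    # Lemma 1: sigma_k = (-1)^k * a_k
    assert sigma(k, roots) == (-1) ** k * coeffs[k]
print(coeffs)  # [1, -9, 21, 1, -30], i.e. x^4 - 9x^3 + 21x^2 + x - 30
```

For instance, with the roots 2, -1, 3, 5 we get σ_1 = 9 and a_1 = -9, matching σ_1 = (-1)^1 * a_1.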

References

- "Elementary Symmetric Polynomials", Wikipedia
- Harold M. Edwards, Galois Theory, Springer, 1984.