
UNIVERSITY OF TORONTO
MATHEMATICS NETWORK

Question Corner and Discussion Area


Solution to the Transcendental Equation 2^x + 3^x = 5

Asked by B. Ryan, Brebeuf College on Wednesday Jan 10, 1996:
Either I have forgotten, or have never come across, a non-Newtonian (i.e., not a numerical approximation) method to solve the following problem:
2^x + 3^x = 5.
Clearly the solution to the problem is that x = 1.

By a Newton's method calculation involving the first derivative, the solution is also easily obtained.

What exact method, not involving an approximation, is there to solve this problem?

If such a method is available, solve the problem 4^x + 5^x = 100.

solution: x = 2.5843539862708

B. Ryan

Unfortunately, the solution to almost every transcendental equation (an equation involving functions other than simple polynomials) cannot be expressed as a combination of elementary functions, even if the equation itself can be.

Thus, while the equation 2^x + 3^x = 5 happens to have a nice integer answer "1", there is no general formula for expressing the solution to the equation a^x + b^x = c as a combination of elementary functions of a, b, and c.

There are certain special classes of equations for which the solution can be expressed as elementary functions. For an obvious example, the solution to the equation a^x = b can be expressed as x = log(b)/log(a). However, such special classes of equations are the exception rather than the rule; it is provable that no expression exists in the general case.
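As an aside, here is a minimal Python sketch of the kind of Newton's method calculation mentioned in the question, applied to the general equation a^x + b^x = c. (The function name solve_exponential_sum, the tolerance, and the starting guess x0 = 1 are simply illustrative choices.)

    from math import log

    def solve_exponential_sum(a, b, c, x0=1.0, tol=1e-12, max_iter=100):
        """Newton's method for a^x + b^x = c (no closed form in general)."""
        x = x0
        for _ in range(max_iter):
            f = a**x + b**x - c                      # the equation, rewritten as f(x) = 0
            fprime = a**x * log(a) + b**x * log(b)   # derivative of a^x + b^x
            x_new = x - f / fprime                   # Newton step
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    print(solve_exponential_sum(2, 3, 5))     # 1.0, the exact solution
    print(solve_exponential_sum(4, 5, 100))   # about 2.5843539863, as quoted in the question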


Followup question by Jeyprakash Michaelraj Fernando, India on November 5, 1996:
I am wondering how to solve the equations of the type a = x sinh(b / x) (solve for x).

Even with the iteration method, is there any way by which I can make a guess for the initial value?

THANKS in ADVANCE.

For a relatively uncomplicated equation like this, a binary search is often a good way to go.

First, one should figure out the general behaviour of the function to get an idea of where (and if!) a solution would be.

If we let f(x) = x sinh(b/x), it is easy to see that f is a continuous function everywhere except at x=0. The limiting behaviour as x approaches plus or minus infinity and as x approaches 0 can be found by calculating

    x sinh(b/x) -> b  as  x -> plus or minus infinity,   and   x sinh(b/x) -> infinity  as  x -> 0.

So, if your value of a satisfies a > b, you can solve the equation as follows:

  1. Choose x=A large enough that f(x) is close to b and hence less than a. (You could either make a sophisticated estimate of how large to take x, or else just start with something like x=1 and keep doubling it until it's large enough).

  2. Choose x=B small enough (close enough to 0) that f(x) is close to infinity and hence greater than a. (Again, you could just start with x=1 and keep halving it until it's small enough).

  3. Now the intermediate value theorem guarantees that there's a solution between A and B. Let x be the halfway point (A+B)/2. Check whether f(x) is greater than or less than a. If it's greater than a, you now know there's a solution between x and A. If it's less than a, you now know there's a solution between B and x.

  4. Either way, you've cut the size of the interval in half. You can iterate this procedure, cutting the interval in half each time, until you have trapped the root within an interval of your desired accuracy.

This method is slow and inefficient compared to other methods, but is often a good one if you are having trouble finding an initial point.
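To make the procedure concrete, here is a short Python sketch of the four steps above, applied to x sinh(b/x) = a on the positive half-line. (It assumes a > b > 0 and looks only for the positive root; the function name solve_x_sinh and the sample values a = 2, b = 1 are arbitrary choices for illustration.)

    from math import sinh

    def solve_x_sinh(a, b, tol=1e-10):
        """Bisection for x*sinh(b/x) = a on x > 0, assuming a > b > 0."""
        f = lambda x: x * sinh(b / x)

        # Step 1: double A until f(A) < a (possible because f(x) -> b as x -> infinity).
        A = 1.0
        while f(A) >= a:
            A *= 2.0

        # Step 2: halve B until f(B) > a (possible because f(x) -> infinity as x -> 0+).
        B = 1.0
        while f(B) <= a:
            B /= 2.0

        # Steps 3 and 4: repeatedly halve the interval [B, A] that contains the root.
        while A - B > tol:
            mid = (A + B) / 2.0
            if f(mid) > a:
                B = mid   # f(mid) > a, same side as f(B): root lies between mid and A
            else:
                A = mid   # f(mid) < a, same side as f(A): root lies between B and mid
        return (A + B) / 2.0

    # Illustrative values only: solve x*sinh(1/x) = 2.
    print(solve_x_sinh(2.0, 1.0))   # about 0.459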

The iteration method (finding a solution to an equation of the form x=g(x) by forming the sequence x, g(x), g(g(x)), . . . ) will converge to a solution y if you start with x close enough to y and if |g'(y)| < 1. However, if |g'(y)| > 1, the iteration method will not converge to the solution y. This may be the problem you are having.
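For the equation above, one natural rearrangement is x = a/sinh(b/x), and the following Python sketch shows how it can fail: for sample values such as a = 2 and b = 1 (arbitrary choices for illustration), |g'| is larger than 1 at the root, so the iterates run off to infinity instead of converging.

    from math import sinh

    def fixed_point(g, x0, tol=1e-10, max_iter=200):
        """Iterate x -> g(x); this converges only if |g'| < 1 near the fixed point."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("iteration did not converge; |g'| may exceed 1 at the root")

    a, b = 2.0, 1.0                  # illustrative values with a > b
    g = lambda x: a / sinh(b / x)    # one rearrangement of x*sinh(b/x) = a into x = g(x)
    try:
        print(fixed_point(g, 0.5))
    except RuntimeError as err:
        print(err)                   # for these values the iterates grow without bound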

A better method is Newton's method. As long as the derivative of the function whose root you are seeking is nonzero at the solution y, and you start with x close enough to y, Newton's method will produce a sequence of numbers that converges to y, and will do so more rapidly than the bisection method. But if you're having trouble choosing an initial value that works, the bisection method can help you get close enough to the root; then you can switch to Newton's method to do the rest of the calculation more rapidly.
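Here is what that combination can look like in Python for x sinh(b/x) = a: a rough estimate (for example, from a few bisection steps) is handed to Newton's method, which then converges quickly. (Again, the function name newton_x_sinh, the sample values a = 2, b = 1, and the starting point x0 = 0.5 are only illustrative.)

    from math import sinh, cosh

    def newton_x_sinh(a, b, x0, tol=1e-12, max_iter=50):
        """Newton's method for x*sinh(b/x) = a, given a reasonable starting point x0."""
        f = lambda x: x * sinh(b / x) - a
        fprime = lambda x: sinh(b / x) - (b / x) * cosh(b / x)   # derivative of x*sinh(b/x)
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / fprime(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Illustrative values: a = 2, b = 1, starting from a rough bisection estimate of 0.5.
    print(newton_x_sinh(2.0, 1.0, x0=0.5))   # about 0.459, matching the bisection result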

Finally, note that the above analysis of the function only shows that a solution exists when a > b. In the case a ≤ b there are no solutions. You can prove this by showing that f is an increasing function when x < 0 and a decreasing function when x > 0. From this it follows that f(x) is always strictly greater than the limiting value b, so there are no solutions to the equation f(x)=a if a ≤ b.

In summary: the most important thing in problems of this type is to analyze the function's behaviour, using the tools of calculus. Figure out its limits as x approaches plus or minus infinity and as x approaches any points of discontinuity. Find where the function is increasing or decreasing. Using this, you can determine what part of the real number line a solution will lie on (if there is a solution). The bisection method can always be used to find the solution in this case; and you can use the faster Newton's Method once you're sufficiently close to the solution.

I should point out that most problems in numerical analysis are not as easy as this. For instance, you may have a function that dips below the x-axis only briefly, and is positive near both ends. Then you cannot use the simple analysis I described above to find solutions to f(x)=0, and unless you're fortunate enough to know at least one x value where f(x) is negative and at least one where f(x) is positive, you cannot use the bisection method.

In these cases you have to start with Newton's Method or something more sophisticated, and entire books can be (and have been) written on how to choose appropriate initial points, what effect small errors in calculations have on the final answer, etc. Any textbook with a title like "Numerical Analysis" should be able to explain these issues in much more depth.
