"Lecture" for Week 2

I don't have much to say this week, because I won't know any more about Egyptian and Babylonian mathematics than you will, after you do the reading; and because the topic of "course procedures" is nearly exhausted (hurray!).

Location of exercises: Be careful -- Dr. Allen has questions and exercises scattered all over the site. Unless I state the contrary, the exercises I assign will be from the third maroon link, labeled "Problems", on each chapter top page.

Schedule: Prof. Allen gave a week each to the Egyptians and the Babylonians, and then 4 weeks to the Greeks. I think we can get out of the ancient world faster than that. However, I will be away at a conference the entire 3rd week (Sept. 14-20), so I'd like to space out the homework assignments during that period. I propose this schedule:

And now a little bit of math! Some of the solution methods in the original Egyptian and Babylonian sources are called "false position" by Dr. Allen and other modern commentators. Roughly speaking, this means making an initial guess for the answer, then using some ensuing calculations to correct or improve the result.

What does "method of false position" mean today? Pages 52-53 of the (highly readable and informative) book by F. S. Acton, Numerical Methods that Work (Harper and Row, 1970), describe a "false position" algorithm for finding zeros of functions, an alternative to "Newton's method" as described in all calculus textbooks. Newton's method gets into the calculus books because it uses calculus, whereas false position requires only algebra and perhaps geometry (in the sense that the theory behind the method becomes clearer if you draw a graph). In many problems, however, the false-position method is better. The point is that Newton's method can be unstable: if the derivative of the function f is zero, or nearly zero, near the root we are seeking (a place where f(x) = 0), then the Newton algorithm, which divides by f'(x_n) in the process of finding the better approximation x_{n+1}, may send the sequence of approximations shooting off into some faraway, irrelevant region.

Suppose, on the other hand, that you have two approximations, x_{n-1} and x_n. Then you can construct the line through the two corresponding points on the graph of f and take its intersection with the horizontal axis as x_{n+1}. In other words, one constructs a secant line to the graph, rather than the tangent line of Newton's method. (Acton gives the resulting formula for x_{n+1}, which is easier for you to derive than for me to type in HTML.) If f is continuous and the two points lie on opposite sides of the horizontal axis, then a root is guaranteed to exist between them, and moving to x_{n+1} brings one closer to it. The false-position algorithm ensures this sign-change condition by using x_{n-2} instead of x_{n-1} when necessary.
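
Since the secant-line formula is easier to write in a programming language than in HTML, here is a minimal sketch of the false-position iteration in Python. This is not Acton's program; the test function x^3 - 2 and the starting bracket [1, 2] are just my own illustration.

    # A minimal sketch of the false-position (regula falsi) idea described above.
    # The test function and bracket below are illustrative, not from Acton's book.

    def false_position(f, a, b, tol=1e-10, max_iter=100):
        """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            # Intersection of the secant line through (a, f(a)), (b, f(b))
            # with the horizontal axis:
            x = b - fb * (b - a) / (fb - fa)
            fx = f(x)
            if abs(fx) < tol:
                return x
            # Keep whichever old endpoint still brackets the root, so the
            # sign-change condition always holds (unlike the plain secant method).
            if fa * fx < 0:
                b, fb = x, fx
            else:
                a, fa = x, fx
        return x

    if __name__ == "__main__":
        # Example: approximate the cube root of 2; the iterates never
        # leave the bracket [1, 2].
        print(false_position(lambda x: x**3 - 2, 1.0, 2.0))

The bracket-keeping step at the end of the loop is exactly what the last sentence above is describing: because the two retained points always straddle the root, the iteration cannot shoot off into a faraway region the way Newton's method can when f' is near zero.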

Keep this background in mind as you read about the ancient instances of "false position". Are they special cases of the modern concept, or just vaguely analogous?