Historically, evaluating e and/or Pi and generating prime numbers were some of the first tasks to be done on early digital computers. I suspect at one time or another everyone here has programmed one or more of these. I first calculated e to a thousand digits in 1961 on a computer built at Iowa State University. (Vacuum tubes, and about 100 microseconds for a 40-bit add.) Typically e is calculated simply by adding the reciprocal factorials.
In August 1986 I did a program to evaluate the constant e up to 150,000 decimal places on an 8088 based IBM PC. Using factorials up to about 37,592! It might surprise some to know it can be done never using a divisor greater than 16 bits; but that is another talk.
8086 unsigned integer division by a single-word divisor is a simple loop:
    ;BX = the divisor, CX = the number of words in the numerator, and DX = 0
    ;SI and DI point to the high-order numerator words.
    ;The working loop is just:

    divs:   lodsw           ;get the numerator word
            div     bx      ;AX=quot, DX=rem of (DX:AX)/BX
            stosw
            loop    divs    ;DIVide Step

    ;The multiple-word quotient has replaced the numerator and
    ;the remainder is in DX.
The conversion from binary to decimal involves repeatedly dividing or multiplying by ten. (One can reduce the number of divides or multiplies by first converting to base 10,000.)
An interesting challenge is doing variable-precision binary integer "long divides" where the divisor can be several hundred 16-bit words long. I restricted myself to the Intel 8088 instruction set.
I do this very much like the grade school "long division", but with the "digits" being bytes. (Think in number system base 256.) This works fine, but in grade school decimal arithmetic we guess what the next quotient digit should be. How does a program make an intelligent guess? In pure binary it is easy. In base 10 it is a bit of a struggle. In base 256 we have to do better. We would like to "guess" the next quotient byte without error.
You can get a good guess at the next quotient byte with a machine divide of the high-order word of the divisor into the high-order word pair of the numerator. (Actually, I multiply by the reciprocal of the high-order word of the divisor, because a multiply is faster than a divide. Donald Knuth did not suggest this in his famous three-volume set; it may be a mistake?) In ill-conditioned cases I am sometimes off by two.
When I multiply by the quotient byte and then subtract, I would like never to have to check whether the guess was too big or too small. Many times I thought I had it. But, sooner or later, there always turns up a "bad" example! Never having to correct with more than one add (if the quotient byte was too small) or one subtract (if it was one too large) would make me happy.
I kept tweaking and rounding; several times I thought I had it, but eventually I had to leave in code which, after a correcting add or subtract, sometimes has to do another. I have never been so frustrated, and to this day I am not sure I could not improve the next-quotient-byte guess. I suspect that if I allowed myself a few 80386 instructions I might make it happen. But I got tired.
To make sure I don't have undetected errors, I put in the equivalent of "casting out nines", except doing it on binary words: I check every divide by "casting out 65,535's". I have never been sure enough to remove that checking logic. It has been years since it caught any divide errors. The probability of a random error going undetected is roughly one in 65,535.