I've a feeling CORDIC was somehow more crucial in the land of decimal machines, such as the early scientific calculators. I could be wrong about that. Anyhow, here's a snippet from Wikipedia, the History section:

Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm. The design was introduced to Hewlett-Packard in June 1965, but not accepted. Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM) had proposed as pseudo-multiplication and pseudo-division in 1961. Meggitt's method also suggested the use of base 10 rather than base 2, as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, built by and conceptually derived from Thomas E. Osborne's prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic in December 1964. This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year.

I think there are interesting articles in HP Journal about pseudo division - several HP 9100 articles in the September 1968 edition.

Here are a couple of articles about the HP 9100 and the Green Machine:

Tom Osborne's Story in His Own Words (from The 9100 Project)

The HP 9100 Calculator: A fundamental item at The HP Memory Project

Yes, CORDIC is an old trick.

But ultimately it's the age-old question of space versus time. And maybe versus accuracy. Also, what do you have available to bootstrap from: libraries, or even hardware support?

Precompute versus compute. Lookup tables versus logic. CORDIC is tiny compared to lookup tables, and gets you as much accuracy as you want, but it is slow compared to lookups. With lookups you need smart interpolation. If you are not on space-starved hardware: lookups and interpolation, probably.
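For concreteness, here's a minimal rotation-mode CORDIC sketch in Python (my own illustration, not from any of the linked articles). A real fixed-point implementation replaces the `2.0 ** -i` multiplies with arithmetic shifts; the arctangent table and the gain K are the only precomputed constants:

```python
import math

N = 32                                           # number of iterations
ATAN = [math.atan(2.0 ** -i) for i in range(N)]  # precomputed angle table

# CORDIC gain: the micro-rotations stretch the vector by
# prod(sqrt(1 + 2^-2i)), so start with the reciprocal to land on the
# unit circle.
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for -pi/2 <= theta <= pi/2."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0            # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return y, x                                  # (sin, cos)
```

Each iteration needs only a shift, two adds, and one table lookup, which is the whole appeal on multiplier-less hardware.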

Here's one "what if …": Implementing cosine in C from scratch - Austin Z. Henley

Nice article by Austin Z. Henley, thanks. I'd forgotten that the x87 used CORDIC - must have known, because I read all of Ken Shirriff's articles, and he looks into the 8087 constant ROM here.

Small sin/cos by Gosper et al:

It's also possible to use the Minsky "circle" algorithm to compute approximations of the sine and cosine functions. Allegedly this was used in a Canadian satellite that was launched a few years ago. (Another fun fact: it was programmed in Logo.)

Which Canadian satellite?

The similarities are obvious!

Also consider this algorithm by David Mapes, invented/discovered at LLNL around the same time, also for the PDP-1, also used for graphics:

The general algorithm is (word size is 18 bits)

```
loop: y := y + x * c1
      x := x + y * c2
      display(x, y)
      c1 := c1 + 2^-18   (incrementally loops over range -1 < c1 < +1)
      c2 := c2 + 2^-18   (incrementally loops over range -1 < c2 < +1)
      repeat
```

with starting values for c1 and c2 for a circle-based variant (0 < c1 < 1 and c2 = -c1)

```
c1 = 1/8
c2 = -1/8
```
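Here's a rough, runnable transcription of that loop in Python (my own sketch; floats instead of 18-bit fixed point, and `display` replaced by collecting points):

```python
def mapes(c1=1/8, c2=-1/8, steps=1000):
    """The quoted Mapes loop with the circle-variant starting values.
    With c2 = -c1 the per-step matrix has determinant 1, so the points
    stay on a slowly drifting near-circular orbit."""
    x, y = 1.0, 0.0
    points = []
    for _ in range(steps):
        y = y + x * c1
        x = x + y * c2
        points.append((x, y))
        c1 += 2 ** -18    # sweep the coefficients, as in the original
        c2 += 2 ** -18
    return points
```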

(Compare: A Statement from David Mapes; here it is running in emulation. The real thing was probably more complex, but I didn't want to add anything that wasn't explicitly described: Graphical Fun for the PDP-1 by David Mapes [switch variations using the menu at the top])

I guess you guys will be familiar with the CUJ/Dr Dobbs article?

http://www.pennelynn.com/Documents/CUJ/HTML/92HTML/1992024C.HTM

https://www.semanticscholar.org/paper/The-CORDIC-Method-for-Faster-sin-and-cos-Bertrand/9a2a3d16d67ae996b9947d2cde3da647ea364895?p2df

I believe it's the "T-LogoQube", but I'm not sure. Information online says Logo programming was provided by Brian Silverman.

I just remembered the Sinclair Scientific calculator, which crams (slow, inaccurate) firmware into 320 words to meet a very low price point. It uses something like the Minsky method - there's an explanation of the code, a disassembly, and an in-browser emulation, all by Ken Shirriff, here:

An interesting aspect of using the Minsky algorithm

X_{new} = X_{old} − *epsilon* × Y_{old}

Y_{new} = Y_{old} + *epsilon* × X_{new}

is that this doesn't give us a perfect circle, but a flat ellipse rotated by 45°. But as *epsilon* decreases, this will approach an ideal circle. So at the first few iterations there will be a (significant) error, but with further iterations, as *epsilon* becomes smaller and smaller, this becomes less and less of an issue.
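That claim is easy to check numerically (my own sketch): sample the orbit and compare the largest and smallest radius reached. The ratio shrinks toward 1 as *epsilon* decreases:

```python
def radius_extremes(eps, steps=10000):
    """Run the Minsky update from (1, 0) and track the radius.
    The orbit is a tilted ellipse; max/min radius -> 1 as eps -> 0."""
    x, y = 1.0, 0.0
    radii = []
    for _ in range(steps):
        x = x - eps * y    # X_new = X_old - eps * Y_old
        y = y + eps * x    # Y_new = Y_old + eps * X_new (note: new x)
        radii.append((x * x + y * y) ** 0.5)
    return min(radii), max(radii)
```

With `eps = 0.5` the radius varies by roughly 30% over the orbit; with `eps = 1/64` it stays within about 1%.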

In general, the Minsky algorithm is similar to a fast shortcut solution, where we use `1.0` for *cos(θ)* and a small *epsilon* for *sin(θ)*. (For a small *θ*, that is.) However, this has a constant error, causing the "circle" to spiral out, which is corrected simply by using X_{new} instead of X_{old} in the second term:

Drawing a circle with *epsilon* = 1/8 (or `1 / (1 << 3)`), left: shortcut solution, right: Minsky circle, blue: ideal circle.
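The spiraling is easy to reproduce (again my own sketch, not from the post): the "shortcut" update reuses the old x in both terms and drifts outward, while Minsky's variant reuses the freshly updated x and stays bounded.

```python
def shortcut_step(x, y, eps):
    # Both terms use the OLD x: radius grows by sqrt(1 + eps^2) per step.
    return x - eps * y, y + eps * x

def minsky_step(x, y, eps):
    x = x - eps * y          # update x first...
    return x, y + eps * x    # ...then use the NEW x for y

eps = 1.0 / 8
x1, y1 = 1.0, 0.0            # shortcut orbit
x2, y2 = 1.0, 0.0            # Minsky orbit
for _ in range(1000):
    x1, y1 = shortcut_step(x1, y1, eps)
    x2, y2 = minsky_step(x2, y2, eps)

r_shortcut = (x1 * x1 + y1 * y1) ** 0.5   # far from 1 after 1000 steps
r_minsky = (x2 * x2 + y2 * y2) ** 0.5     # still close to 1
```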

In case you're interested in why this works, consider the determinant of the matrix, which gives us the dimensional "squeeze" of the transformation. For any conic, this must be 1.0 in order for the curve to return to its starting point:
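For reference, here's that matrix reconstructed from the update rules above (this derivation is mine, not the blog post's). Substituting X_{new} into the second equation gives Y_{new} = *epsilon*·X_{old} + (1 − *epsilon*²)·Y_{old}, so one full Minsky step is

```
( X_new )   (  1         -eps   ) ( X_old )
( Y_new ) = (  eps    1 - eps^2 ) ( Y_old )
```

and the determinant is 1·(1 − eps²) − (−eps)·eps = 1, regardless of *epsilon*.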

*(This is from an unfinished blog post on the Minskytron that I've had lying around for two years now. Also, I've enjoyed a humanist education and math isn't exactly my strong side, so bear with me…)*

As we may see, the error introduced to the "shortcut solution" by switching the terms for X just happens to cancel out the genuine error of the shortcut solution. (Meaning, over a complete turn; this doesn't automatically imply that it should give us a perfect circle.)

Edit: If I'm not mistaken, what's stored in the z-table in the CORDIC algorithm as provided in the article is really the error of the "shortcut solution" over 16 iterations.

That's why I put quotes around "circle" in a previous message. If you're interested in these aspects of the Minsky algorithm, I recommend the book "Minskys and Trinskys": Minskys & Trinskys, 3rd Edition, by Julian & Corey Ziegler Hunts, R.W. Gosper & Jack Holloway (Blurb Books). Yes, one of the co-authors is *the* Gosper.

Wow, that book (seen in preview in the browser) looks lovely! I found a webpage describing Trinskys, with links within:

Ziegler Hunts, Ziegler Hunts, Gosper, and Holloway introduce equivalents of the `d` and `e` parameters into this algorithm and use it for many of their illustrations:

```
while(true){
    x <- x - (a/2) y;
    y <- y + b x;
    x <- x - (a/2) y;
    plotPointAt(x, y);
}
```

As described above, the (two-step) integer circle algorithm is parameterized by four values: `x0`, `y0`, `d`, and `e`. How can we visualize how the behavior of the algorithm (which at times can draw unusual shapes) changes as we modify these parameters? Much like how many visualizations of the Mandelbrot set count and plot the number of iterations it takes for each complex starting point to escape, Ziegler Hunts, Ziegler Hunts, Gosper, and Holloway choose to associate each `(x0, y0, d, e)` tuple with the number of iterations it takes for the initial point `(x0, y0)` to return to its starting point.

They then plot two-dimensional slices of this four-dimensional space, such as plotting how the number of iterations varies with `x` and `y` while holding `d` and `e` constant. The results are quite striking: here's the first XY plot they plotted, using `d=1` and `e = 4 sin(pi/9)^2` (i.e. `d=1` and `p=9`):
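The period-counting itself is easy to sketch (my own code, using the plain one-step-each Minsky update `x -= d*y; y += e*x` rather than the book's two-step variant, so the resulting pictures would differ in detail):

```python
def return_time(x0, y0, d, e, max_iters=10000):
    """Iterations until the integer orbit returns to its starting point,
    or None if it doesn't within max_iters."""
    x, y = x0, y0
    for n in range(1, max_iters + 1):
        x = x - d * y
        y = y + e * x
        if (x, y) == (x0, y0):
            return n
    return None
```

For example, `d = e = 1` gives the familiar hexagonal period-6 orbit.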

Ah, memories. I worked on the HP45 calculator, which used a non-restoring decimal version of this. The calculator instruction set was specifically designed with CORDIC in mind (so shift-and-add/subtract by a variable amount was built in, as was adding just the mantissa or the exponent, etc.).

I actually attended the Spring Joint Computer Conference where Steve Walther described the algorithms. The hyperbolics were quite tricky.

Prof. Kahan (of IEEE 754 FP fame) was an advisor to HP (and also to Intel for the binary version) to ensure that results were within one least-significant position of the input (which, to be fair, is a fairly wide range when the slope of the curve is large).