Why the CORDIC algorithm lives rent-free in my head

I’ve a feeling CORDIC was somehow more crucial in the land of decimal machines, such as the early scientific calculators. I could be wrong about that. Anyhow, here’s a snippet from Wikipedia, the History section:

Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm. The design was introduced to Hewlett-Packard in June 1965, but not accepted. Still, McMillan introduced David S. Cochran (HP) to Volder’s algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM) had proposed as pseudo-multiplication and pseudo-division in 1961. Meggitt’s method also suggested the use of base 10 rather than base 2, as used by Volder’s CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, built by and conceptually derived from Thomas E. Osborne’s prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic in December 1964. This project resulted in the public demonstration of Hewlett-Packard’s first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year.

I think there are interesting articles in HP Journal about pseudo division - several HP 9100 articles in the September 1968 edition.

Here are a couple of articles about the HP 9100 and the Green Machine:
Tom Osborne’s Story in His Own Words (from The 9100 Project)
The HP 9100 Calculator: A fundamental item at The HP Memory Project

Yes, CORDIC is an old trick.

But ultimately it’s the age-old question of space versus time - and maybe versus accuracy, too. Also: what do you have available to bootstrap from - libraries, or even hardware support?

Precompute versus compute. Lookup tables versus logic. CORDIC is tiny compared to lookup tables, and gets you as much accuracy as you want. But it is slow compared to lookups. With lookups you need smart interpolation. If you’re not on space-starved hardware, it’s probably lookups and interpolation.
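For context, here’s a minimal floating-point sketch of CORDIC’s rotation mode in Python, just to show how small the stored data is (an arctangent table plus one gain constant). Real implementations use fixed-point shift-and-add; the names here are my own:

```python
import math

N = 24                                                  # iterations ~ bits of accuracy
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]       # the arctan(2^-i) table
GAIN = 1.0
for i in range(N):
    GAIN *= math.cos(ANGLES[i])                         # cumulative CORDIC gain K

def cordic_sincos(theta):
    """Return (cos(theta), sin(theta)) for |theta| <= pi/2 via shift-and-add rotations."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0                     # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * GAIN, y * GAIN                           # undo the accumulated gain
```

Each iteration buys roughly one more bit, which is exactly the space/time/accuracy dial above: the only table is those N angle constants.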

Here’s one “what if …”: Implementing cosine in C from scratch - Austin Z. Henley

Nice article by Austin Z Henley, thanks. I’d forgotten that the x87 used CORDIC - must have known, because I read all of Ken Shirriff’s articles, and he looks into the 8087 constant ROM here.

Small sin/cos by Gosper et al:
[screenshot of the small sin/cos routine]


It’s also possible to use the Minsky “circle” algorithm to compute approximations of the sine and cosine functions. Allegedly this was used in a Canadian satellite that was launched a few years ago. (Another fun fact: it was programmed in Logo.)

Which Canadian satellite?

The similarities are obvious!
Also consider this algorithm by David Mapes, invented/discovered at LLNL around the same time, also for the PDP-1, also used for graphics:

The general algorithm is (word size is 18 bits)

loop:  y := y + x * c1
       x := x + y * c2
       display(x, y)
       c1 := c1 + 2^-18   (incrementally loops over range -1 < c1 < +1)
       c2 := c2 + 2^-18   (incrementally loops over range -1 < c2 < +1)

with starting values for c1 and c2 for a circle-based variant (0 < c1 < 1 and c2 = −c1):

       c1 =  1/8
       c2 = -1/8

(Compare: A Statement from David Mapes. Here it is running in emulation - the real thing was probably more complex, but I didn’t want to add anything that wasn’t explicitly described: Graphical Fun for the PDP-1 by David Mapes [switch variations using the menu at the top].)
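Here’s a rough Python sketch of Mapes’ loop as quoted above - floating point rather than 18-bit fixed point, with the wraparound of c1 and c2 at ±1 and the (1, 0) starting point being my own guesses:

```python
# Floating-point sketch of Mapes' display loop (assumption: c1/c2 wrap from
# +1 back to -1, mimicking 18-bit two's-complement overflow).
STEP = 2.0 ** -18        # the 2^-18 increment from the pseudocode

def mapes_points(n, x=1.0, y=0.0, c1=0.125, c2=-0.125):
    pts = []
    for _ in range(n):
        y += x * c1
        x += y * c2
        pts.append((x, y))           # stands in for display(x, y)
        c1 += STEP
        if c1 >= 1.0:
            c1 -= 2.0                # wrap: -1 < c1 < +1
        c2 += STEP
        if c2 >= 1.0:
            c2 -= 2.0                # wrap: -1 < c2 < +1
    return pts
```

While c1 and c2 have opposite signs, the inner two lines are a Minsky-style near-rotation, so the orbit stays bounded; the slow sweep of the constants is what morphs the figure on screen.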

I guess you guys will be familiar with the CUJ/Dr Dobbs article?



I believe it’s the “T-LogoQube”, but I’m not sure. Information online says Logo programming was provided by Brian Silverman.


I just remembered the Sinclair Scientific calculator, which crams its (slow, inaccurate) firmware into 320 words to meet a very low price point. It uses something like the Minsky method - there’s an explanation of the code, a disassembly, and an in-browser emulation, all by Ken Shirriff, here:

An interesting aspect of using the Minsky algorithm

Xnew = Xold − epsilon × Yold
Ynew = Yold + epsilon × Xnew

is that this doesn’t give us a perfect circle, but a flat ellipse rotated by 45°. But, as epsilon decreases, this will approach an ideal circle. So, in the first few iterations there will be a (significant) error, but with further iteration, as epsilon becomes smaller and smaller, this becomes less and less of an issue.

In general, the Minsky algorithm is similar to a fast shortcut solution, where we use 1.0 for cos(θ) and a small epsilon for sin(θ). (For a small θ, that is.) However, this has a constant error, causing the “circle” to spiral out, which is corrected just by using “Xnew” instead of “Xold” in the second term:

Drawing a circle with epsilon = 1/8 (or 1 / (1 << 3)),
left: shortcut solution, right: Minsky circle, blue: ideal circle.

In case you’re interested in why this works, consider the determinant of the matrix, which gives us the dimensional “squeeze” of the transformation. For any closed conic this must be 1.0, in order for the orbit to return to its starting point:
(This is from an unfinished blog post on the Minskytron that I’ve had lying around for two years now. Also, I’ve enjoyed a humanist education and math isn’t exactly my strong side, so bear with me… :slight_smile: )

As we may see, the error introduced to the “shortcut solution” by switching the terms for X just happens to cancel out the genuine error of the shortcut solution. (Meaning, over a complete turn - this doesn’t automatically imply that it gives us a perfect circle.)
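If it helps, the determinant argument can be checked numerically: substituting Xnew into the update for Ynew collapses one Minsky step into a single 2×2 matrix whose determinant is exactly 1 for any epsilon (this little check is my own, not from the post above):

```python
# One Minsky step as a matrix: (x, y) -> (x - e*y, e*x + (1 - e*e)*y).
# det = (1)(1 - e*e) - (-e)(e) = 1 - e^2 + e^2 = 1, so no area "squeeze".
def minsky_matrix_det(e):
    a, b = 1.0, -e          # row for Xnew
    c, d = e, 1.0 - e * e   # row for Ynew (after substituting Xnew)
    return a * d - b * c

for e in (0.125, 0.25, 0.5):
    assert abs(minsky_matrix_det(e) - 1.0) < 1e-12
```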

Edit: If I’m not mistaken, what’s stored in the z-table in the CORDIC algorithm as provided in the article is really the error of the “shortcut solution” over 16 iterations.


That’s why I put quotes around circle in a previous message. If you’re interested in these aspects of the Minsky algorithm I recommend the book “Minskys and Trinskys”: Minskys & Trinskys 3rd Edition by Julian & Corey Ziegler Hunts, R.W. Gosper & Jack Holloway | Blurb Books Yes, one of the co-authors is the Gosper.


Wow that book (seen in preview in the browser) looks lovely! I found a webpage describing Trinskys, with links within:

Ziegler Hunts, Ziegler Hunts, Gosper, and Holloway introduce equivalents of the d and e parameters into this algorithm and use it for many of their illustrations:

x <- x - (a/2) y;
y <- y + b x;
x <- x - (a/2) y;
plotPointAt(x, y);

As described above, the (two-step) integer circle algorithm is parameterized by four values: x0, y0, d, and e. How can we visualize how the behavior of the algorithm – which at times can draw unusual shapes – changes as we modify these parameters?

Much like how many visualizations of the Mandelbrot set count and plot the number of iterations it takes for each complex starting point to escape, Ziegler Hunts, Ziegler Hunts, Gosper, and Holloway choose to associate each (x0, y0, d, e) tuple with the number of iterations it takes for the initial point (x0, y0) to return to its starting point.

They then plot two-dimensional slices of this four-dimensional space – such as plotting how the number of iterations varies with x0 and y0, holding d and e constant.

The results are quite striking: here’s the first XY plot they plotted, using d=1 and e = 4 sin(pi/9)^2 (i.e. d=1 and p=9):

Period 9
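Out of curiosity I checked the period-9 claim with the two-step algorithm quoted above, reading a as d and b as e (my assumption about the parameter correspondence; the authors work in integer arithmetic, this is just floats). In exact arithmetic the map’s eigenvalues are primitive ninth roots of unity, so the orbit closes after 9 steps, and floating point agrees to rounding error:

```python
import math

# Two-step integer circle algorithm with d = 1, e = 4*sin(pi/9)^2 (i.e. p = 9).
d = 1.0
e = 4.0 * math.sin(math.pi / 9.0) ** 2

x, y = 1.0, 0.0                 # arbitrary starting point
for _ in range(9):
    x -= (d / 2.0) * y
    y += e * x
    x -= (d / 2.0) * y

# After 9 iterations we are back where we started (up to rounding).
assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9
```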

Ah, memories. I worked on the HP-45 calculator, which used a non-restoring decimal version of this. The calculator instruction set was specifically designed with CORDIC in mind (so shift-and-add/subtract by a variable amount was built in, as was add just mantissa or just exponent, etc.).
I actually attended the Spring Joint Computer Conference where Steve Walter described the algorithms. The hyperbolics were quite tricky.
Prof. Kahan (of IEEE 754 FP fame) was an advisor to HP (and also to Intel for the binary version) to ensure that results were within one least-significant position of the input (which, to be fair, is a fairly wide range when the slope of the curve is large).
