To the outer planets, by IBM 7090

Space exploration of the outer planets was infeasible until 1961, when slingshot manoeuvres were invented (or discovered): they take momentum from a planet (or moon) and give it to a spacecraft. The trajectories were worked out first by slide rule, then on an IBM 7090, with a thousand lines of FORTRAN.

The first JPL gravity-assisted mission analysis study resulting from the invention was an Earth-Venus-Mercury mission initiated by Elliot Cutting in June 1964.

Stills from the video linked below:

According to this page:

It was achieved by a fundamentally new theory of space travel invented by Dr. Michael A. Minovitch in 1961 which he calls “Gravity Propelled Interplanetary Space Travel.”
When Minovitch presented it to JPL in the form of a 47-page technical paper dated August 23, 1961, it was dismissed by the head of JPL’s Trajectory Group as impossible.

Thus, the new theory of space travel that Minovitch invented during the summer of 1961 could be represented by a non-stop multiplanetary trajectory having the form P1 – P2 – P3 – ··· – PN, where P1 represents the launch planet, P2, P3, …, PN-1 represent the N-2 intermediate “gravity propulsion planets”, and PN represents the final planet or target body in the trajectory. It is achieved by applying the mathematical solution of the Restricted Three-Body Problem serially to determine the precise approach trajectory to each successive flyby planet Pi (i = 2, 3, …, N-1) such that its gravitational field will catapult the vehicle to the next planet Pi+1 in the series. It is a problem much more difficult than the classical Restricted Three-Body Problem because there are N-1 flybys instead of only one, and each flyby has to have a distance of closest approach to the center of each planet greater than the planet’s radius to avoid crashing into the planet’s surface. The theory represented one of the most elegant and sophisticated applications of analytical mechanics ever conceived, and one of the most mathematically difficult to solve.
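
(For a feel for what a single link in that P1 – P2 chain does, here is a minimal 2-D patched-conic flyby sketch in Python. Everything in it is an illustrative assumption of mine: a Jupiter-like planet on a circular orbit, a made-up arrival velocity and periapsis radius. It is not Minovitch's method, just the textbook picture of one gravity-propelled flyby.)

    import math

    # Illustrative patched-conic flyby in 2-D (all numbers are made up, roughly Jupiter-like).
    mu = 1.26687e8                 # km^3/s^2, gravitational parameter of the flyby planet
    vpx, vpy = 0.0, 13.1           # km/s, the planet's heliocentric velocity
    vix, viy = 8.0, 9.0            # km/s, spacecraft heliocentric velocity on arrival
    r_p = 200_000.0                # km, chosen periapsis radius; must exceed the planet's radius

    # Hyperbolic excess velocity in the planet's frame
    ux, uy = vix - vpx, viy - vpy
    v_inf = math.hypot(ux, uy)

    # Turn angle of the flyby hyperbola: sin(delta/2) = 1 / (1 + r_p * v_inf^2 / mu)
    delta = 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf**2 / mu))

    # Rotate the excess velocity by delta (the sign of delta picks which side we pass)
    c, s = math.cos(delta), math.sin(delta)
    ox, oy = c * ux - s * uy, s * ux + c * uy

    # Back in the heliocentric frame the flyby has changed the spacecraft's speed
    print("heliocentric speed in :", round(math.hypot(vix, viy), 2), "km/s")
    print("heliocentric speed out:", round(math.hypot(ox + vpx, oy + vpy), 2), "km/s")

In the planet's frame the speed is unchanged; rotating the excess-velocity vector is what adds (or removes) heliocentric speed, which is the momentum exchange the quote describes.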

In 1991, the U.S. National Academy of Sciences (together with NHK Japan) produced a six-part documentary series on the history of space travel entitled Space Age (WQED/Pittsburgh), narrated by Patrick Stewart. Minovitch’s invention was explained in Episode 3: The Unexpected Universe. The segment included an interview with Dr. Minovitch showing some of his original research material and trajectory computations from the 1961-1964 time period. The segment also included some excellent footage of an IBM 7090 computer in operation during the early 1960s.

The video can be seen here; the segment in question is at about the 23-minute mark. It’s better to download the mp4 and play it locally.

The 45-page PDF report can be seen here.

Having failed to have the concept investigated at JPL in 1961, I decided to investigate it myself at UCLA in January 1962 using their large IBM 7090 computer. The initial results were very promising and I was able to greatly expand the numerical investigation at UCLA on April 2, 1962 by obtaining unlimited access to the computer on a time-available basis. This led to more encouraging results. I asked JPL if I could use their IBM 7090 computers on a time-available basis starting in June 1962 to further enlarge the research, and this was arranged. This was the beginning of a very large-scale numerical research project to investigate the concept that lasted almost three years.

The entire research project was directed by myself. The amount of computer time used at UCLA totaled over 300 hours, and approximately three times that amount was used on the two JPL computers. This gravity propulsion research project was one of the most extensive non-military computational research projects in history up to that time, using approximately 1,000 hours on three of the world’s most powerful commercial computers, IBM 7090s and IBM 7094s.

The output of many hours of gravity propelled trajectory computations from UCLA’s 7090 computer was placed in many boxes and often picked up by JPL delivery trucks and taken directly to JPL, where I analyzed it and made it available to anyone at JPL who wanted to see it.

via John D. Cook’s blog post Voyager’s slingshot maneuvers

5 Likes

Thanks for this wonderful story.
I learned to program in the sixties on the University of Washington’s IBM 709, the vacuum-tube ancestor of the 7090. That launched me on a long and prosperous career in computer science. In the seventies, in graduate school, I was a systems programmer on a Prime computer in the Atmospheric Sciences department, where data from the Viking Mars landers were stored and processed. Much later, circa 2000, I returned to teach introduction to programming and other computer science at UW Extension. Completing the circle, so to speak.

2 Likes

That’s a wonderful post. I wasn’t aware of this history at all.

Just for fun, I read about the 7094 a little and decided to estimate how much computation could be done in 1000 hours (about 3.6 x 10^6 seconds) if I made very (very very…) optimistic assumptions.

First, let’s assume all the computation was done on the (much faster) 7094-II rather than the 7090. The 7094 had a 2 microsecond cycle time; the improved -II version accelerated this to 1.4 microseconds. That’s 500 x 10^3 cycles/second, or about 714 x 10^3 for the -II model.

It was a 36-bit computer, so let’s assume single-precision floating point was adequate. The 7094 took as little as 1 cycle for a very simple instruction like setting a flag in the CPU, but a minimum of 2 cycles for any instruction with an operand. This short document gives some idea of typical times. We’ll be super optimistic and say 2 cycles per floating-point operation.

We have to account for the fact that some amount of time is always spent moving data around and branching. The linked page above shows a typical multiply-and-add tight loop, which takes 8 cycles for every two floating-point operations. I know from a paper I read recently that Meta/Facebook got about 50% utilization when training a specific LLM (one that required 16,000 GPUs in parallel, running for 54 days). Obviously that was highly optimized by experts. So we’ll use 50%, giving us 4 cycles per FP operation.

So we have (3.6 x 10^6 seconds) x (714 x 10^3 cycles/second) ≈ 2.6 x 10^12 clocks. Dividing by 4 we get about 6 x 10^11 floating operations. With the optimism removed (particularly, I don’t think any of the computers were 7094-IIs in 1961) we get something more like 10^11. So a gigaflop CPU could do this in roughly two to ten minutes, and a teraflop machine would take well under a second.
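
Spelling that arithmetic out in a few lines of Python (same optimistic assumptions as above: 7094-II clock, 4 cycles per floating-point operation; the round numbers are mine):

    # Back-of-envelope check of the estimate above (all assumptions optimistic).
    seconds = 1000 * 3600                  # 1000 hours of machine time
    cycles_per_second = 1.0 / 1.4e-6       # 7094-II: 1.4 microsecond cycle time
    cycles = seconds * cycles_per_second   # ~2.6e12 cycles
    fp_ops = cycles / 4                    # 2 cycles per FP op at ~50% utilisation

    print(f"total: {fp_ops:.1e} floating-point operations")
    print(f"1 GFLOP/s CPU: {fp_ops / 1e9 / 60:.0f} minutes")
    print(f"1 TFLOP/s machine: {fp_ops / 1e12:.2f} seconds")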

1 Like

36-bit floating point, I suspect, is just barely able to handle this. Double precision was around.
Only looking at the code would tell. I suspect a lot of multiplies are in there.

Vaguely related, this is an interesting article by NASA on how accurate they need Pi to be…

tl;dr: 3.141592653589793 is good enough…

-Gordon

1 Like

Well, you never see Pi; it is always 2 * Pi in the math.
A lot of effort went into floating-point functions in the 1950s and early 60s.
Important work that never really got the credit it deserved.

Nice find - squinting at the table, I’d probably estimate 10 cycles per operation. But that’s an extremely rough estimate!

That’s a great article, especially considering the background of the article’s author. But working with astronomical distances is not very relatable.

Perhaps for present company a better analogy might be to use Commodore BASIC’s approximation of π (3.1415926525), as the quick check after the list confirms:

  • the perimeter of the moon’s orbit would be off by under a metre;
  • the perimeter of the earth would be off by a little over a centimetre.
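
Here is that quick check in Python, using the truncated constant quoted above and round mean distances (the radii are my numbers):

    import math

    c64_pi = 3.1415926525            # the Commodore approximation quoted above
    err = math.pi - c64_pi           # absolute error in pi, about 1.1e-9

    moon_orbit_r = 384_400e3         # m, mean Earth-Moon distance
    earth_r = 6_371e3                # m, mean Earth radius

    # The error in a circumference 2*pi*r is 2*r times the error in pi
    print("moon's orbit:", round(2 * moon_orbit_r * err, 2), "m")     # ~0.84 m
    print("earth       :", round(2 * earth_r * err * 100, 2), "cm")   # ~1.39 cm
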
2 Likes

That wiki page frustrates me as it has no explanation whatsoever for why the C64 pi constant is wrong. I assume that’s because we just don’t know?

Likely a straight typo. Remember, this is the computer that has a KERNAL ROM …

1 Like

I’d agree that we don’t know. But it’s notable that MS Basic already contains correct binary constants for half pi and twice pi (even though they only differ in a single bit!), and the pi constant for Commodore’s addition of π is in a completely different part of the code. (Links are to Michael Steil’s repo, which builds all known 6502 variants of MS Basic, as described in his blog post “Create your own Version of Microsoft BASIC for 6502”.)

I note that the wrong value for pi which Commodore used is an accurate binary conversion of the truncated 10-digit decimal value 3.141592653; to me, this seems the most likely provenance.
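
That is easy to test. The Python sketch below is mine, not Steil’s code, and assumes I have the packed 5-byte constant format of 6502 MS Basic right: an exponent byte with bias 128 (mantissa in [0.5, 1)), then four mantissa bytes with the sign in the top bit of the first one. Encoding true π and the truncated 3.141592653 gives two values that differ only in the last byte; whether the second one matches the bytes in the Commodore ROM is what the provenance claim comes down to.

    import math

    def mbf5(x):
        # Pack a positive number into the packed 5-byte MS Basic float format:
        # exponent byte (bias 128), then a 32-bit mantissa whose top bit is
        # replaced by the sign bit (0 for positive).
        m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= m < 1
        mant = round(m * 2**32)           # round to a 32-bit mantissa
        if mant == 2**32:                 # rounding carried into the next power of two
            mant >>= 1
            e += 1
        b = mant.to_bytes(4, "big")
        return bytes([e + 128, b[0] & 0x7F]) + b[1:]

    print("true pi     :", mbf5(math.pi).hex(" "))        # 82 49 0f da a2
    print("3.141592653 :", mbf5(3.141592653).hex(" "))    # 82 49 0f da a1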

2 Likes