Computers are here, but how did we get here?

James Burke's view of how we got computers, as shown by the BBC.

3 Likes

This is the 4th episode of the first series of Connections, a popular series on the history of science and technology. Wikipedia has this to say:

Burke examines the transition from the Middle Ages to the Renaissance from the perspective of how commercialism, climate change, and the Black Death influenced cultural development. He examines the impact of Cistercian waterpower on the Industrial Revolution, derived from Roman watermill technology such as that of the Barbegal aqueduct and mill. Also covered are the Gutenberg printing press, the Jacquard loom, and the Hollerith punch-card tabulator that led to modern computer programming.

Here and here are links to this episode in context, in both cases at the Internet Archive.

Wikipedia also notes Burke's related column in Scientific American - I think I can say the column and the television series were both important in my development.

4 Likes

I remember finding it fascinating how all "Connections" went through Great Britain. And as a child I was very naive … I mean, I didn't even realize that the people in Doctor Who spoke differently than people in my country (the USA). And I wasn't fully aware that the starship in Cosmos wasn't actually real. But James Burke's national pride? Wow, I noticed THAT!

1 Like

This series holds up well. It’s still a treasure. The other great Burke series is “The Day The Universe Changed.”

Re. how we got computers, a great mini-series on that is “The Machine That Changed The World.” It’s out of print, which is a shame, but you can find bootleg copies online. It came out in 1992, and goes from Babbage to the internet (pre-web).

https://www.youtube.com/@datassette/search?query=the%20machine%20that%20changed%20the%20world

4 Likes

This is a great series!

Something I always find interesting about the documentaries of this era: they tell almost exclusively the epic story of one kind of computer, one we would now recognize as merely a "side show" (though such machines still do much), while mostly ignoring the strand of interactive computing that has become dominant for us today, starting with Whirlwind, followed by subsequent MIT computers, DEC, Unix, etc.

(Notably, in the US, the interactive machines actually came first, with Whirlwind solving its first problems in 1949 and becoming fully operational in 1951.)

1 Like

Just as a quite literal illustration, here’s the famous “Family Tree of Computer Design”, from “A Brief Summary of Computer Development” (NSF, 1960):

According to this, we are sitting almost entirely on a leaf of a rather weak, mostly unrelated branch that splits off from the "IAS type" machines (which may be a rather generous attribution, given that Whirlwind was in fact the first binary computer project in the US and was fully operational before the IAS machine).
Also, of course, there is no mention of Colossus, which was still highly classified, with the last one (of 10 in total) just going out of service when this illustration was made.

Even more so in this earlier "Computer Tree" from "Electronic Computers Within the Ordnance Corps" by Karl Kempf, where Whirlwind not only appears unrelated to SAGE, but also appears as a child of the IAS machine, leading to ORDVAC, MANIAC, ILLIAC and JOHNNIAC, a mere episode in the history of "big iron" (there's a noticeable von Neumann bias in this):

(Images from: https://longstreet.typepad.com/thesciencebookstore/2012/06/the-family-tree-of-computer-development-part-ii.html)

2 Likes

I can't seem to find ATLAS in there. The UK must have at least a bush somewhere.
Looking at that tree, I can't think of anything new in computing hardware since then.
RISC, ASCII, multi-cores, and the cloud are all just newer versions of ideas from back then
that were limited by the hardware of that era (e.g., the IBM Stretch).

While there’s an EDSAC-LEO branch in the first tree, it is certainly US-centric.
The second diagram is really US-only, as it is about machines in the context of the US armed forces.
(Of course, the Manchester Baby beat all Turing-complete machines in these diagrams by becoming operational in 1948.)

This is how it was when I was in school in the 1980s. I did a couple of research papers on the history of computers then, and my sources went from Babbage, or Pascal's Pascaline, then to Turing, the Bombe, and Colossus, then to ENIAC, EDSAC, the Harvard Mark I, maybe some others, and would end with mainframes from IBM in the '60s, and maybe some minicomputers (no manufacturers named) from the '70s. Nothing about Whirlwind or SAGE, nothing really about DEC or Remington-Rand. I mean, the closest I'd get to any history of DEC was, ironically, if I'd find something about the history of video games outside of a school project, since they'd talk about Spacewar, which ran on a PDP-1.

This felt a bit confusing, because I was using interactive computers a lot, with 8- and 16-bit machines. Nothing I read talked about where they came from, except in the computer magazines I used to read avidly, which would say they came from Apple, Atari, Commodore, Radio Shack, etc., as if they invented the concept. Though, at some point I learned about the UK computer market, with Sinclair, Acorn, etc. I used to watch the BBC show The Computer Programme fairly regularly, which often featured the BBC Micro.

It wasn't until I was almost out of college in the early '90s that I saw documentaries that focused on the history of what's now "the main event" in computing, starting with The Machine That Changed The World, and then Cringely's documentary, Triumph of the Nerds, which came out in 1995/96. Both included coverage of Xerox PARC. As I remember, both contained the first coverage I ever saw of Doug Engelbart's 1968 "Mother of All Demos." Even so, they didn't really show how amazing it was, because one of its major points was to demonstrate collaborative computing, where groups could work on a shared knowledge base together, and more than one person could see and speak to each other through teleconferencing while working on shared documents. The coverage then was all about Engelbart's invention of the mouse, and how it interacted with a GUI, which is really trivial by comparison.

Even describing it this way kind of misses the point, because the reason Engelbart created the technology was that he wanted it to be used in a process of increasing a kind of "group intelligence," as a way of hopefully scaling beyond the limits of individual intelligence, to address problems whose complexity is too high for any individual to comprehend and address well.

What they also always said was that Engelbart’s invention was great, but that hardly anyone knew who he was. Yet, his technological work, to a significant degree, formed the basis for everything that’s followed. However, how we use it is a far cry from what he intended. This is a common refrain, whenever one looks into the research that was done then.

The interesting thing is, it wasn't just Engelbart. E.g., in the very first DECUS Proceedings from 1962 (the same one in which we also find Martin Graetz's Spacewar paper), there's a description of the World Oceanographic Data System by Ed Fredkin, which shows a screen of the PDP-1 with parametric slider controls to be operated using the light pen:

There's also another article describing the use of a color display (no idea which) and the light pen for selecting and directly manipulating data points and their eigenvalues. ITEK described their own display system used for their CAD system, which they had begun to work on together with Adams Associates in 1960. (This was probably the first commercial CAD application. And, yes, this is the same Adams Associates from whom the sine/cosine routine for Spacewar came.)

The same year, Ed Fredkin went on to found Triple-I (originally for artificial intelligence work, again on a PDP-1), which became a major player in computer animation. (And he took Ben Gurley, the PDP-1's designer, with him, who was shot in 1963 while at III.)

George Michael describes the use of a mouse, derived and improved from Engelbart's design, on a PDP-1 at Lawrence Livermore National Laboratory (LLNL) by what must have been the mid-1960s. (Apparently, there had already been various applications using a light pen for manipulation, and users were divided about 50:50 over whether the mouse or the light pen was the superior input method.)

There's a video of GENESYS, a parametric animation program for the TX-2 (much like early Flash) by Ron Baecker as part of his 1969 PhD thesis, which shows a screen with a main window for the animation state, icons depicting the animated objects, and parametric sliders, in what we would nowadays call a tabbed viewport:

Apparently, these things popped up just about everywhere, as soon as interactive computing with a visual display and an instrument for direct manipulation became available…
(Which was, of course, mostly in the MIT/DEC context. But there were similar things on other systems as well, e.g. GRAIL by Tom Ellis for the RAND tablet.)

However, these things mostly vanished in favour of timesharing, only to be reinvented for bit-mapped displays in the 1970s.

And, yes, Engelbart was about much more, esp. about content and the organization of content. (About the only legacy is probably the outline mode in MS Word.)

2 Likes

The Atlas would be in the Ferranti leaf in the Manchester branch of the first tree, so it is there even if well hidden.

The label implies that the right side of the tree is all serial machines, but most of the ones there are actually parallel as well. And it seems odd to me not to consider the "IAS type" to be derived from EDVAC.

1 Like

To me, "IAS type" appears to be some mythical, teleological origin, similar to the EDVAC in concept, but of serial type and probably favoring a large word size. So, according to this origin story, the EDVAC draft immediately progresses to reality, thus becoming the father of all parallel machines, while also begetting the "IAS type" as a sibling to the real EDVAC.

 (poly-methodological prehistory, AKA, "the ancient")
              |
              |
            ENIAC                (radix radicum)
              |
              |
         EDVAC draft             (spiritual origin)
          /       \
         /         \
        /           \
 "IAS type" –––––– EDVAC real    (teleological goals)
(bit-serial)     (bit-parallel)
    / | \           / | \
   /     WW
IAS real   .
             .
             DEC                 (mixed heresy)
              .
DP2200 ........
              .
              we are here…       (valley of tears)

:wink:

I thought the IAS was parallel (it is shaded as such in the first tree) but this text is confusing: “Although each binary word raced through the machine’s units in sequence, one bit after another, all the bits in each word were stored, and operated on, in parallel, as in ENIAC.”

If this means the IAS was bit parallel, word serial then this describes most computers up to the superscalar and out-of-order ones.

Word-serial, with drum or disk memory, I would say was normal for computers
used by the average person until after the PDP-1 and other similar machines around 1965.
Core, transistors, and disk I/O had advanced from the lab to production lines.

I guess this implies a serial bus with shift-register buffers and a parallel ALU?

I think the divide into strictly serial and strictly parallel is hard to make and to maintain, and I got confused in my fun diagram as well.
E.g., Whirlwind was a serial machine, but is listed as parallel (which tripped me up in the first place). I think some parts were upgraded to parallel logic, though. If I haven't forgotten most of what I once knew about the UNIVAC I (I wrote an emulator 20+ years ago), it had a parallel ALU, but it is listed on the serial side. I believe instruction/function decoding was mostly serial, and for any 2nd-generation computer, microcode somewhat implied a certain amount of seriality, as did hardware multiplication and division (because of shift steps and probably some sub-timing networks for that), etc. Aside from Colossus and some vector machines, I can't think, at least at the moment, of any machine that would have been truly parallel.
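The serial-vs.-parallel distinction discussed above can be made concrete with a small simulation. This is a hedged sketch in plain Python (all names here are my own, not from any particular machine): a bit-serial ALU reuses a single one-bit full adder over many clock ticks, so word width costs time rather than logic, whereas a bit-parallel machine spends one adder per bit position.

```python
# Sketch: bit-serial addition, one bit per simulated clock tick.
# A bit-serial ALU needs only one full adder; word width costs time, not logic.

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum bit, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def bit_serial_add(x, y, width=16):
    """Add two integers LSB-first, reusing one full adder for 'width' ticks."""
    carry, result = 0, 0
    for i in range(width):                 # one simulated clock tick per bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result                          # carry-out beyond 'width' is dropped
```

For example, `bit_serial_add(37, 91)` takes 16 simulated ticks to produce 128; a bit-parallel machine computes the same sum with `width` adders in a single (longer) cycle, which is one reason the serial/parallel dividing line in these trees is blurrier than it looks.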

Carry propagation is another factor, not often discussed in hardware.
You can have a parallel ALU, but with different versions of carry generation, depending on
core memory speed and logic speeds. The IBM 360 family comes to mind.
Microcode tends to imply a regular instruction decode, like the IBM 360
or an 8-bit microchip, and a von Neumann architecture.
RISC is Harvard architecture, of course. Many machines were mostly random logic,
as microcode memory only seems effective for larger order codes.
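The carry-generation trade-off mentioned above can be sketched in software (a hedged illustration of the general technique, not any particular machine's circuit): ripple carry lets the carry trickle bit by bit, so the adder's delay grows with word width, while carry-lookahead derives every carry from per-bit generate/propagate terms, trading extra logic for speed.

```python
# Sketch: ripple-carry vs. carry-lookahead carry generation.
# Both produce identical sums; they differ in how the carries are obtained.

def ripple_carry_add(x, y, width=8):
    """The carry trickles from bit 0 upward: delay grows with 'width'."""
    carry, result = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        result |= (a ^ b ^ carry) << i
        carry = (a & b) | (carry & (a ^ b))
    return result

def lookahead_carries(x, y, width=8):
    """Carries from generate/propagate: g_i = a_i AND b_i, p_i = a_i OR b_i.
    The recurrence c_{i+1} = g_i OR (p_i AND c_i) is a loop here, but in
    hardware it is expanded into flat logic so all carries settle at once."""
    g = [((x >> i) & 1) & ((y >> i) & 1) for i in range(width)]
    p = [((x >> i) & 1) | ((y >> i) & 1) for i in range(width)]
    c = [0]
    for i in range(width):
        c.append(g[i] | (p[i] & c[i]))
    return c

def lookahead_add(x, y, width=8):
    """Sum bit i is a_i XOR b_i XOR c_i, using the precomputed carries."""
    c = lookahead_carries(x, y, width)
    return sum((((x >> i) & 1) ^ ((y >> i) & 1) ^ c[i]) << i
               for i in range(width))
```

This is exactly the kind of choice that could be matched to each model's logic and memory speeds, as noted above for the IBM 360 family, without changing the architecture the programmer sees.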

Very interesting trees, especially the 2nd.
Can anybody explain which computers are on the left and why they are on the left? Different purpose, or word size, or memory technology, or…?

I guess, because it's the US armed forces, it may also be about organizational criteria?
E.g., coming back to Whirlwind as an example, we find an entirely unrelated branch starting with "AN/FSQ 7 SAGE", which are the IBM-produced Whirlwind IIs for the SAGE Direction Centers, and their smaller siblings, the "AN/FSQ 8 SAGE", for the Combat Centers.
Or, there's this branch on the lower right leading from the IBM 604 to the IBM 608, which are all calculators, but we find the IBM 609, also a calculator, on the left side. (Why?) Said branch then continues with the 610 Auto-Point, the RAMAC, and the 1401, and I can't see much similarity among those (other than, say, departmental use cases).

The IBM 701, 704, and 7090 (right branch) were scientific 36-bit machines; some maybe military.
The 702, 705, and 7080 (middle left) were commercial (or general-purpose) variable-word-length machines, the same as the UNIVAC I.
I don't know most of the computers, but that could be an explanation.