The memory employs printed-circuit wiring on a flexible sheet of resin-impregnated glass-fiber cloth. The arrays of memory elements, deposited on thin glass backing plates, are positioned on the wiring so that each magnetic-film element rests on the intersection of two perpendicular leads on the wiring sheet. When all the memory element arrays are in place on the lower half of the wiring sheet, the upper half is folded over to make the completed memory.
I was reminded of the plated-wire memory used in HP's 9100 programmable desktop machine; there's a nice explanation here.
Also noted on that site is the use of similar plated-wire memory in Univac machines, which was found to be insufficiently reliable:
This makes our UNIVAC 9400 a member of the so-called "third generation" (the first generation being tube based, the second generation using discrete transistors).
The members of the UNIVAC 9000 family were:
The UNIVAC 9200, a “powerful, card oriented electronic data processing installation” with a memory capacity of up to 16 kB.
The UNIVAC 9300 featured tape and disk drives and could be equipped with up to 32 kB of main memory (the museum has one of these systems in storage).
Finally, the UNIVAC 9400 was sold in 1969 as a “flexible tape and disk oriented computer system featuring multi programming, real time capabilities and versatile possibilities for data telecommunication”. (The picture on the left shows an “advertising brochure” for this system dating back to 1969.)
As an interesting note, all systems of this series used a magnetic thin-film memory, a then-revolutionary development which, unfortunately, turned out to be the most error-prone part of these computers. One of the main advantages over the then-conventional core memory was the non-destructive readout, which allowed for shorter memory read cycle times.
Due to the insufficient reliability of this memory system, the complete subsystem was eventually replaced by an even more modern semiconductor memory system using some of the first memory chips made by Intel. More than 1200 of these chips were necessary to implement the 256 kB of our machine.
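To make the non-destructive readout point above concrete, here's a minimal sketch of why it shortens the read cycle: a destructive read (core) erases the bit it senses, so every read cycle must include a rewrite phase, while a non-destructive read (thin film) does not. The timings are illustrative assumptions, not UNIVAC 9400 specifications.

```python
# Minimal sketch (not from the original sources) contrasting read-cycle
# timing for destructive-read core memory vs. non-destructive thin film.
# All timings below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Memory:
    access_ns: int   # time to sense the stored bit
    rewrite_ns: int  # time to restore it (0 if the read is non-destructive)

    def read_cycle_ns(self) -> int:
        # A destructive read erases the bit, so the cycle must include
        # a rewrite phase before the next access can begin.
        return self.access_ns + self.rewrite_ns

core = Memory(access_ns=300, rewrite_ns=300)      # read flips the cores; must rewrite
thin_film = Memory(access_ns=300, rewrite_ns=0)   # film state survives the read

print("core cycle:     ", core.read_cycle_ns(), "ns")
print("thin-film cycle:", thin_film.read_cycle_ns(), "ns")
```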
Construction is very simple, relatively inexpensive, and inherently reliable. Since the wire carrying the thin film is also part of the memory circuitry, the number of components in the memory structure is reduced and its electronic operation simplified.
Both machines at a glance look to be stripped-down, low-cost, 16-bit versions of the IBM 360.
The 9300 looks to be a marketing gimmick: high speed, but only for 16 bits, not 32 as on a normal 360.
Strange indeed.
There's a nice little booklet, 9000 Series Facts and Figures, which has the 9200 as being disk-capable, and the 9200 II and up being both disk- and tape-capable. Another nice brochure, Communicate… Compute… Control (with colour photos!), describes the disk drives as taking a reversible(!) cartridge and offering the equivalent of 40,000 cards online. Oh, and the disks offer a 10x throughput improvement for your application compared to cards.
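For scale, a back-of-envelope conversion of that "40,000 cards online" figure, assuming the standard 80-column punched card (the brochure doesn't state the card format):

```python
cards = 40_000
columns_per_card = 80      # standard 80-column card (assumed)
chars = cards * columns_per_card
print(f"{chars:,} characters online")   # 3,200,000 -- roughly 3 MB equivalent
```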
Balance between I/O, storage, and compute is of course an interesting challenge, and the right balance will depend on the application. The 9000 series seems to be aimed at accounting, inventory, and management information. Possibly the nature of that work is lots of data held over from previous runs, with relatively little new data being input and relatively little output.
Well, punch cards lasted into the early '80s, and so did the big iron that went with them. Paper tape and serial I/O went with the new microcomputers. That was the golden era for data processing, I think, when 16K of core ran everything.
Lincoln Lab's performance stats for the FX-1 indicate it was significantly faster than the TX-2, and yet the TX-2 seems to have been in operation longer. I don't know yet why that would be.
I had a look at the two linked articles on Bitsavers, and I noticed that the FX-1 had only 256 13-bit words of memory. There's a photograph, too, which shows it's small, and a comparison table that shows it had only 3000 transistors, about 1/10th the number in the TX-2.
But it had a clock rate that was staggering for this early date: 50 MHz! The article also mentions other cutting-edge technologies, like microstriplines for controlled impedance on critical signals, in addition to the 300 ns thin-film memory.
My guess is that the FX-1 was not intended as a production computer, but rather as a minimal testbed for high-speed computing technologies.
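That guess squares with a bit of arithmetic on the figures above (my derivation, not from the FX-1 papers): a 50 MHz clock has a 20 ns period, so even the fast 300 ns thin-film memory costs on the order of 15 clock periods per access.

```python
clock_hz = 50e6                     # FX-1 clock rate from the article
period_ns = 1e9 / clock_hz          # 20 ns per clock
memory_ns = 300                     # thin-film memory cycle from the article
print(f"clock period: {period_ns:.0f} ns")
print(f"memory access ≈ {memory_ns / period_ns:.0f} clock periods")
```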
Well yes, but that statement was also true of the TX-2.
The transistor count is a bit misleading, though, because most of the TX-2's memory was core memory, albeit driven by transistorised logic. Had the TX-2 used transistorised memory, it would have been faster.
The TX-2's X-memory (i.e. the storage for its index registers) was upgraded later, as this was something of a bottleneck. I don't have the precise date to hand, but somewhere in the mid-60s, I think (it's mentioned in a progress report for Lincoln Laboratory Division 2's "Graphics" group).
What is currently known about the FX-1 timeline? Do we know a rough date when it was turned off or removed? Or, failing that, when was it last seen alive, so to speak? And what do we know about what happened to its memory over its lifetime? From Kessler and Konkle it seems that it started life with 256 12-bit words (plus 1 parity bit per word, matching the 13-bit figure above) with a maximum capacity of 1024 such words, and that it was still in that state in early 1962. Then the Lincoln Labs Division 2 report for 15 August 1964, which is also linked from TX-2 Documentation | TX-2 Project, seems to imply that by 1964 the full 1024 words were installed … but is there additional information?
I don’t know yet about that part of the timeline. My own primary focus is on the development of the TX-2, and in particular its design and configuration at the time the surviving pieces of software (Leonard Kleinrock’s network simulator and Ivan E. Sutherland’s Sketchpad) would have run, and that’s what’s needed to get those programs to run again.
I suspect the answer could be somewhere in the LL Division 2 reports which follow the one you cited. But maybe not in the same kind of report. The Quarterly Technical Summary for “General Research” covers several divisions, while the Quarterly Technical Reports (which I don’t have for Division 2) probably give more detail.