The TX project (TX-2 - Wikipedia) birthed computer graphics (Sketchpad - Wikipedia) and DEC.
Famously the computer with a roof, but I didn't know about the calendar (conveniently next to what seems to be an uptime counter)!
Great images!
DEC's PDP-1 was very similar to the modified TX-0 that was donated to MIT. I don't think DEC ever had anything like the TX-2.
I think the TX-2 was just too complex and too expensive to be commercialized.
On the heels of the TX-0 and TX-2 transistor experiments, there was an FX-1 experimental computer with magnetic film memory. Some info here: FX-1 - Computer History Wiki
A 12-bit test computer with a 50 MHz clock. Amazing how thin film was the wave of the future that never arrived.
What happened to the TX-2 hardware after it was shut down in 1977 (p. 128 of the DEC Computer Engineering book, ISBN 9780932376008)? Does any of it survive?
The TX-2 Simulator project is looking for help. They have a curated list of things that need to be done, with clear labelling of things that people could work on as a first contribution.
There are a lot of interesting and novel things about the TX-2; among them is the fact that I/O was performed by per-device dedicated threads. This design choice also applies to the later Xerox PARC Alto computer.
The Alto's creators said they were inspired by a description Alan Kay made of the TX-2, though Alan himself seems not to remember that conversation. Another interesting TX-2 feature was SIMD execution. The 36-bit ALU could be split into various combinations: 36, 18+18, 9+9+9+9, 18+9+9, 27+9, and so on. Unless I am mistaken, we would only see that again on the 1989 Intel i860.
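To get a feel for what that splitting means, here is a minimal Python sketch (my own toy model, not TX-2-accurate; the function name and lane encoding are invented): one adder becomes several independent lanes simply by not letting carries cross lane boundaries.

```python
# Toy model of a split ALU: add two 36-bit words as independent lanes
# (e.g. (18, 18) or (9, 9, 9, 9), low lane first) by suppressing the
# carry at each lane boundary.
def split_add(a, b, lanes):
    assert sum(lanes) == 36
    result, shift = 0, 0
    for width in lanes:                        # process lanes low to high
        mask = (1 << width) - 1
        lane_sum = ((a >> shift) & mask) + ((b >> shift) & mask)
        result |= (lane_sum & mask) << shift   # drop the carry out of this lane
        shift += width
    return result

# 18+18 configuration: the low half wraps around without disturbing the high half.
print(oct(split_add(0o000001_777777, 0o000001_000001, (18, 18))))  # 0o2000000
```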
Alan Kay explained it to me[4] this way (Wes Clark[7] was the designer of the TX-2, as well as other computers):
(Also Wes Clark was an "advisor" to Parc (via Taylor), but I don't recall that he was involved in the Alto early thinking).
I would not want to try to assign the multiple program counter idea to any one source, but it was Chuck's design (and, for what it's worth, Chuck once said that he decided to go that route because of a talk I gave at Parc about "coroutining" (which is a SW way to do interleaved parallelism)).
Basically, Parc was the ultimate in team sports and everyone should get a World Series ring who was there and participated.
"Taylor" above presumably refers to Robert Taylor[5], of Xerox PARC's Computer Science Laboratory (associate manager 1970-1977, manager 1977-1983). "Chuck" refers to Charles P. Thacker, designer of the Alto[6].
Wikipedia tells us that the first Alto machines[2] were introduced on March 1, 1973. Bert Sutherland had been at Lincoln Lab and had worked on the TX-2 directly (like his brother Ivan, his Ph.D. work[3] was based on the work he did on the TX-2)[1]. Bert was also manager of the Xerox PARC research center, but this was in the period 1975-1981 (in the oral history document at the CHM[1], Bert confirms that he was hired at PARC in 1975). In other words, while Bert was very familiar with the architecture of the TX-2, he got involved with PARC too late to influence the design of the Alto.
Wikipedia indicates that the term coroutine[8] was coined in 1958 by Melvin Conway[9]. The TX-2's sequences are slightly closer to threads[10] than coroutines, as switching between tasks was preemptive (though code could prevent a sequence change by setting the "hold" bit in any instruction after which the programmer does not want a sequence change to occur). Terminology here is a bit tricky. The question of who invented these concepts depends closely on how you define the concept, rather like the invention of the computer. Certainly, though, the TX-2 pre-dates the points at which the Wikipedia articles referenced below suggest the terms thread and coroutine were introduced. Wesley Clark states[11, page 145] that the multi-sequence design of the TX-2 was an extension of the breakpoint operation found in DYSEAC[12] of the National Bureau of Standards. The DYSEAC article in Wikipedia compares the "breakpoint" to an interrupt; see also section 9 (page 7) of Digital Computer Newsletter, Vol 6 Issue 4[13].
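Just to illustrate the flavor of the hold bit (a toy model only: everything below is invented for illustration, and the real TX-2 selected sequences by priority and device readiness, not round-robin):

```python
# Toy model of multi-sequence execution with a per-instruction "hold" bit:
# normally the machine may switch sequences after each instruction, but an
# instruction with hold set keeps control in the current sequence.
class Sequence:
    def __init__(self, name, program):
        self.name, self.program, self.pc = name, program, 0

def run(sequences, steps):
    current = 0
    for _ in range(steps):
        seq = sequences[current]
        if seq.pc >= len(seq.program):            # sequence exhausted: move on
            current = (current + 1) % len(sequences)
            continue
        label, hold = seq.program[seq.pc]
        seq.pc += 1
        print(seq.name, label)
        if not hold:                              # no hold: a switch may occur
            current = (current + 1) % len(sequences)

run([Sequence("A", [("a1", False), ("a2", True), ("a3", False)]),
     Sequence("B", [("b1", False), ("b2", False)])], steps=6)
# Prints a1, b1, a2, a3, b2: the hold bit on a2 keeps a3 in the same sequence.
```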
Citations:
- [1] Sutherland, Bert Oral History
- [2] Xerox Alto - Wikipedia
- [3] On-line graphical specification of computer procedures. Ph.D. thesis, 1966
- [4] Email
- [5] Robert Taylor (computer scientist) - Wikipedia
- [6] Charles P. Thacker - Wikipedia
- [7] Wesley A. Clark - Wikipedia
- [8] Coroutine - Wikipedia
- [9] Melvin Conway - Wikipedia
- [10] Thread - Wikipedia
- [11] The Lincoln TX-2 Computer Development, Wesley A. Clark, 1957 Western Joint Computer Conference
- [12] DYSEAC - Wikipedia
- [13] Office of Naval Research, Physical Sciences Division, Digital Computer Newsletter Vol. 6, No. 4. Unclassified.
A lot of the Alto software and documentation is available online at the Computer History Museum.
Here's a blog post with an introduction:
And a "walkthrough" of the hardware and programming languages available (BCPL, Mesa, Smalltalk, and Lisp):
And finally the raw archive, in the form of various file servers that supported the Altos:
Being somewhat bewildered by all the Lincoln Lab computers, I tried to make this table and timeline: Lincoln Lab computers · Issue #4 · larsbrinkhoff/open-simh · GitHub
Maybe I missed some; please help.
I call it Linc here, because that was the initial name whilst developing at the Lab. It was renamed LINC when the project moved to the MIT campus. I also want to add Wilkes there because apparently she was the major designer of the instruction set, and as a software guy I think that's an important part of a computer.
Great - here's her Wikipedia page with links within
(No relation to the early British computer scientist Maurice Wilkes)
Is the link in the L1 the first occurrence of what became the modern carry flag?
(It seems that L1 is named after the link bit.)
An interesting thought! From the paper Lars linked:
The provision of a single binary element called the link (designated by the letter L in the block diagram, Figure 1) is of particular value. The link simplifies operations on multiple-word numbers, and is itself a simpler and more useful version of a similar device of the Whirlwind computer (2)
Ref 2 is: Mann, M.F., Rathbone, R.R., Bennett, J.B., "Whirlwind I Operation Logic," Report R-221, Digital Computer Lab., May 1, 1954.
I've had a look at R-221 and it's not what I'm used to! It's ones-complement, which introduces oddities, and the document speaks of end-around-carries, which I don't fully understand. I didn't see a mention of multi-word numbers: it's possible that such an idea would be very much a minority interest, in a machine with an adequate word size. My own 8-bit territory absolutely needs consideration of multi-word (multi-byte) operations.
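For what it's worth, here's a small sketch of why that link/carry bit is indispensable for multi-word arithmetic (little-endian byte lists; the names are my own invention):

```python
# Multi-word addition: the carry out of one word feeds the add of the next
# word. The 'carry' variable plays the role of the link bit.
def multibyte_add(a_bytes, b_bytes):
    out, carry = [], 0
    for a, b in zip(a_bytes, b_bytes):
        s = a + b + carry
        out.append(s & 0xFF)           # low 8 bits stay in this word
        carry = s >> 8                 # overflow feeds the next word's add
    return out, carry

print(multibyte_add([0xFF, 0x01], [0x01, 0x00]))  # ([0, 2], 0): 0x01FF + 1 = 0x0200
```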
However, back then 8 bits was never considered a useful size. 9 bits makes more sense to me, as two BCD digits and a flag or sign bit. Historically, 40 bits was to be the standard word size, from von Neumann in the 1940s, for general number crunching like for the A-bomb. Data processing ended up with 36 bits for 6-bit text data. Bigger numbers? They had calculators for that in the '50s.
The link is an important step for modern computing, but ones-complement still needs an end-around carry bit. Two's complement saves the need for that, and that, I think, is the most defining thing about modern computer designs. Sadly, both formats have problems with correct processing of arithmetic functions.
In ones' complement, we have to add any overflowing bit to the other (LSB) end again.
E.g., assuming a 4-bit word for simplicity,
1101 (-2)
+ 0100 ( 4)
--------
1.0001 ( 1 + overflow)
--------
0001
+ 1 (end-around carry)
--------
0010 ( 2)
also: sign bit of accumulator changed => set overflow flag
Why? Mind that we have two zeros, positive zero (0000) and negative zero (1111). Hence, as the operation overflows, indicating a traversal of zero, we must make up for this by adding 1 (thus making it across both zeros.) Two's complement kind of does this in advance by adding 1 to any negative number. (Which is a bit costlier, in case we won't need it. But we get rid of negative zero.)
Mind how this coincides with how bit-jamming works with rotate instructions on machines of this era. (And how this changed with the link/carry bit.)
PS, this also works with all negative numbers:
1101 (-2)
+ 1101 (-2)
--------
1.1010 (-5 + overflow)
--------
1010
+ 1 (end-around carry)
--------
1011 (-4)
So, the more general explanation is that, with negative zero, we start counting in the negative direction by an offset of one. (It's a biased representation.) If we add to this for a positive result, we must compensate for this, as we'll have to do in case we're adding two negative numbers, since the result now includes two of these offsets, thus overshooting the correct result by one. In both cases, adding the (unsigned) overflow to the truncated intermediate result does the trick.
(Of course, the bias is entirely on our side, since it's only a biased representation as we're referencing negative numbers to positive/unsigned zero, while the positive and negative number ranges are perfectly symmetric in ones' complement. Notably, we could apply this bias to either range.)
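Here is the above as a small Python sketch (4-bit words as in the examples; the helper names are my own):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1

def ones_comp_add(a, b):
    """Add two WIDTH-bit ones'-complement patterns with end-around carry."""
    s = a + b
    if s > MASK:                 # carry out of the word...
        s = (s & MASK) + 1       # ...is added back in at the LSB
    return s & MASK

def as_int(x):
    """Read a WIDTH-bit pattern as a ones'-complement integer."""
    return x if x <= MASK >> 1 else -((~x) & MASK)

print(as_int(ones_comp_add(0b1101, 0b0100)))   # -2 +  4 ->  2
print(as_int(ones_comp_add(0b1101, 0b1101)))   # -2 + -2 -> -4
```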
PS: For 6502 fans:
SBC on the 6502 does classic ones' complement addition with the complement of the operand, and we set a bias in the carry bit in order to either add 1 for a two's complement representation or, if it's unset, effectively subtract one (as when a previous operation didn't overflow, thus indicating what is really an underflow), as we fail to compensate for the bias. (In other words, we start with the end-around by setting this bias.) Which is why the carry works kind of in reverse with SBC.
(Moreover, this is really why the operations of the overflow flag are somewhat cryptic, since, because of the end-around, this must include two of the bits of either operand.)
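The usual way to model this in code (a sketch only: 8-bit, with decimal mode and the N/Z/V flags ignored) is A + ~M + C:

```python
# 6502-style SBC modeled as A + (ones' complement of M) + C: with C set,
# the +1 completes the two's complement; with C clear, the missing +1 is
# exactly the borrow. The carry out of bit 7 becomes the new C ("no borrow").
def sbc(a, m, c):
    s = a + ((~m) & 0xFF) + c
    return s & 0xFF, 1 if s > 0xFF else 0   # (result, carry out)

print(sbc(5, 3, 1))   # (2, 1):   5 - 3, carry out set, no borrow
print(sbc(3, 5, 1))   # (254, 0): 3 - 5 wraps; clear carry signals a borrow
```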
As we're at it, there's another oddity with ones' complement machines.
At least, it's so on the PDP-1, but I'd assume it's the same with any of its Lincoln Lab precursors, because there is a sensible reason for this. Namely, addition normalizes negative zero to unsigned zero (a simple complement from all-ones to all-zeros), while subtraction does not.
Why should this be?
Well, in principle, addition (ADD) and subtraction (SUB) are the same operation, but SUB first complements the contents of the accumulator (AC), because this is where there's already the circuitry for this.
Therefore, at the end of SUB, the operation is finalized by complementing the contents of AC again. This uses the same timing slot used for the normalization of zero by ADD. The difference being that this is a conditional operation, performed only on a negative zero result for ADD, but is performed unconditionally with SUB.
Somewhat simplified, the operations look like this:

                                     ADD                SUB
t1: complement AC                     -                  X
t2: add memory buffer                 X                  X
t3: carry step                        X                  X
t4: end-around                        X                  X
t5: complement AC              X (if AC == -0)    X (unconditionally)
---- finis -----
So, there's no slot left to normalize zero on subtraction, as this has been used already for restoring the sign of the result. (Or, the other way round, ADD repurposes the timing slot, which is required for SUB anyway, to normalize negative zero as a quality of life improvement.)
As a consequence of this, dealing with negative zero results on subtraction is left to the program, while addition is carefree.
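As a toy model of that timing sequence (my own sketch, 4-bit ones' complement as in the earlier examples, not actual PDP-1 logic):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1          # all-ones is the -0 pattern

def ones_comp_add(a, b):
    s = a + b
    return ((s & MASK) + 1) & MASK if s > MASK else s   # end-around carry

def add(ac, mb):                           # t2-t4, then...
    s = ones_comp_add(ac, mb)
    return 0 if s == MASK else s           # t5: normalize -0 (conditional)

def sub(ac, mb):                           # AC - MB
    s = ones_comp_add((~ac) & MASK, mb)    # t1: complement AC, then t2-t4
    return (~s) & MASK                     # t5: complement back (unconditional)

print(bin(add(0b1111, 0b0000)))   # 0b0:    ADD normalizes -0 away
print(bin(sub(0b1111, 0b0000)))   # 0b1111: SUB can hand -0 back to the program
```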
PS: Operations on the TX-0 were somewhat more complex, since, because of the reduced instruction set, it had to rely heavily on micro-programming. Thus operations were much more granular. So there was ADD for performing a partial addition of AC and the "live register" (really much like an XOR), and a separate instruction (CRY) for performing the carry step of the addition.
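Roughly like this (my own sketch of the partial-add/carry idea, in plain binary and iterated until the carries die out, so not TX-0-accurate):

```python
# Partial add + separate carry step: the XOR gives the sum without carries,
# and the carry step injects the carries, repeated until none remain.
def add_partial_then_carry(a, b, width=18):
    mask = (1 << width) - 1
    while b:
        a, b = (a ^ b) & mask, ((a & b) << 1) & mask   # partial add, then carry
    return a

print(add_partial_then_carry(5, 3))   # 8
```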
Nasty subtract is -(-ac + (y)).
Throwing in more hardware, like exclusive-ORs to complement before and after the main adder, would remove states T1 and T5, leaving T5 free for the -0 handling; or a -0 at the end of state T4 could clear the AC rather than loading it. (A rough sketch of the XOR idea below.)
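Something like this, I suppose (a toy 4-bit Python sketch of the XOR-around-the-adder idea, not a real hardware description):

```python
# Conditional complement via XOR: a 'subtract' line XORs AC with all-ones on
# the way into the adder and XORs the result on the way out, doing the work
# of states T1 and T5 combinationally.
def add_sub(ac, y, subtract, width=4):
    mask = (1 << width) - 1
    inv = mask if subtract else 0
    s = (ac ^ inv) + y               # complement in (only for subtract)
    if s > mask:                     # ones' complement end-around carry
        s = (s & mask) + 1
    return (s & mask) ^ inv          # complement out (only for subtract)

print(add_sub(0b0100, 0b0010, False))  # 6: 4 + 2
print(add_sub(0b0100, 0b0010, True))   # 2: 4 - 2
```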
I wonder, if the PDP-1 had come out a year or two later, whether they could have added similar hardware. I also wonder, since we are adding a bit more hardware, whether one could have made it a stack-based machine?