Fairchild Symbol Computer

At Fairchild Semiconductor R&D in Palo Alto, Gordon Moore was the manager, and he had
5-6 different projects running at R&D.
Another one that we worked on came from a decision to be in the memory systems business,
which in this case meant selling a large-scale memory with power supply, cabinet, etc.

As IBM was the biggest computer maker, the objective was to sell memory systems
to IBM customers and to compete with IBM for the memory business.

So we used our system packaging to build memory systems; that was
another project, and as I recall it was on the fringe of what we were
doing on Symbol.

stan mazor

Rex had studied mechanical engineering, as I recall, and
the development of the PCB cage with 2 rows of 100 pins
and zero-force insertion seemed very interesting.
There was a cam shaft running down the 100 pins
on each side, so you opened up the pins, slid
the large board in, and then turned the crank on
the cam shaft, which then applied pressure to all the pins.
So it was a ‘marvel’ of mechanical engineering
and gave us 200 pins for interconnections,
100 per side of the PCB. The 50 pins at the top of the PCB
accepted a more conventional connector, and we generally used those as test pins,
selecting some major and interesting signals to send out.

Now in particular we were building state-machine
logic, and the most interesting signals to bring out during debugging were the
ones telling what state the machine was in, so that at least the internal machine state
would be visible to the debugger. On the other hand we did have a problem:
when you solder the gold pins there is a residue,
and often the PCBs would have a residue on them that interfered with the
electrical signals.

So as a consequence, before inserting a PCB we cleaned the pin
surfaces with alcohol; otherwise we might dirty up the connector pins.
I think that was a major issue to overcome.

stan mazor


Fairchild Symbol Computer, S. Mazor,
IEEE Annals of the History of Computing, 31 March 2008

Abstract
Under the leadership of Gordon Moore, Fairchild Semiconductor embarked
on the design of a high-level time-sharing computer, Symbol IIR.
In the mid-1960s, the falling costs of semiconductors made hardware seem
like the logical design choice to replace common software functions.
Symbol embodied that design, but to what effect?


Thanks Stan - another interesting paper! (https://doi.org/10.1109/MAHC.2008.5)

Symbol data variables were of arbitrary size and could change type and size during program execution. The following Symbol code fragment emphasizes this point—the hardware executes three consecutive assignment statements:

x <= "a";
x <= 1.333333333333333333333333333356;
x <= "now is the time for all good men to come to the aid";

In the first case, the variable x is assigned a simple string character. Thereafter, the x value is replaced by a long number, and finally by a long string. The Symbol hardware provided for dynamic allocation and deallocation of storage for variables like x, as well as marking the data type in the symbol table during program execution. Most conventional software languages don’t allow variables to change type or size during program execution.
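For readers more used to software, here is a rough Python sketch of the same idea (my own analogy, not Symbol's hardware mechanism): a tiny symbol table that re-tags a variable's type and size on every assignment.

symbol_table = {}

def assign(name, value):
    # Record the current value along with its type tag and size,
    # roughly the way Symbol re-marked the type in its name table.
    symbol_table[name] = {
        "type": type(value).__name__,
        "size": len(value) if isinstance(value, str) else None,
        "value": value,
    }

assign("x", "a")                                     # a one-character string
assign("x", 1.333333333333333333333333333356)        # then a long number
assign("x", "now is the time for all good men to come to the aid")

print(symbol_table["x"])   # the type tag changed twice along the way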

Symbol floating-point arithmetic hardware permitted up to 99 decimal digits of mantissa, and the precision was dynamically user-determined. Many computers only offer single or double precision; on the IBM System/360 Model 44, users controlled floating-point arithmetic precision via a front panel dial.
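As a modern software analogy to the dial-your-own-precision idea (this is Python's decimal module, not Symbol's number format), the working precision can be raised or lowered at run time:

from decimal import Decimal, getcontext

getcontext().prec = 99            # ask for 99 significant decimal digits
print(Decimal(1) / Decimal(3))    # 0.333... carried out to 99 digits

getcontext().prec = 7             # drop back to a short precision
print(Decimal(1) / Decimal(3))    # 0.3333333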


The issue is whether you support nested block definitions in the
declarations. If you do, and you are using a stack, then there
are pointers within the stack, and the stack is in virtual memory;
the only remaining question is that there is a finite number of
nested declarations, and therefore you can do it.

So this isn’t a super big deal.
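A minimal Python sketch of that point (my own illustration, not Symbol's implementation): the open blocks live on a stack, each frame keeps a link back to its enclosing frame, and the depth is bounded by how deeply the source nests its blocks.

scope_stack = []                      # run-time stack of currently open blocks

def enter_block():
    # The "pointer" back to the enclosing block is just the index of the
    # frame below this one (None for the outermost block).
    parent = len(scope_stack) - 1 if scope_stack else None
    scope_stack.append({"parent": parent, "names": {}})

def exit_block():
    scope_stack.pop()                 # closing a block discards its locals

enter_block()                         # outer block
enter_block()                         # a block nested inside it
print(len(scope_stack))               # 2 -- finite, so the scheme works
exit_block()
exit_block()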

In the case of Symbol, the symbol table was
created at ‘compile’ time by the compiler
(translator), but the stack was kept at run time.

Every variable and every label is an entry in that symbol table,
and the table is completed during run time; in
particular, the type is coded in that name table,
but the type is not constant. Therefore a
variable can change type using an assignment
statement. Further, the symbol table
also had one 64-bit word to hold
the variable (data), or, if it was a large
array, that entry was a pointer
to the data (a virtual address).

So you could have:

x <= 'a';
x <= "now is the time for all good men";
x <= 3.14159 + 15/50; --this is an executable floating-point operation
x <= ["abc", "pdg", 1,750.]; --a 3-element vector whose items differ in length/type

x <= Fred; --Fred is a label within the scope of this block
go to x;

For the go to, the instruction sequencer (IS) goes to the name table,
finds that the contents of the variable x are of type label,
and then executes the ‘jump’.
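To make that concrete for software readers, here is a toy Python model (my sketch; the names and the 8-character cutoff are made up, not Symbol's actual formats) of a name-table entry that holds either a small value directly or a pointer to larger data, with a type tag that can also say "label":

name_table = {}
heap = []                                        # stands in for virtual memory

def assign(name, value):
    if isinstance(value, str) and len(value) > 8:
        heap.append(value)                       # large item: store out of line
        name_table[name] = ("POINTER", len(heap) - 1)
    else:
        name_table[name] = ("VALUE", value)      # small item: held in the entry itself

def go_to(name):
    tag, target = name_table[name]
    assert tag == "LABEL", "go to through a non-label"
    print("jump to", target)                     # the IS would transfer control here

assign("x", "a")                                 # small value, kept in the entry
assign("x", "now is the time for all good men")  # long string, kept via a pointer
name_table["x"] = ("LABEL", "Fred")              # x <= Fred;  x now holds a label
go_to("x")                                       # go to x;    jumps to Fred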


Just another amplification:
the use of a block is to put in a piece of code, and that code
can contain local variables that are symbolically referenced
in the block. But if a block is declared within the scope of
another block (the encompassing block), the variables in
the outer/higher block are also available and visible.

Now for complications, the outer block can declare
a variable named X, and then the enclosed block
can also have a ‘local’ variable named X.

So from a language-design standpoint, when you see an expression z = X + 3;

the meaning is clear: the X is either in this local block, and
if not, then you look upward through the enclosing blocks to find X.

So high-level languages let you pick up pieces of source code
that do interesting functions and have ‘local’ variable declarations,
and these can have the same names as variables that
are used in other blocks, but they are all different variables depending
upon the scope where they are declared. So if they
are not declared in the same block, then they need to be
declared (visible) in the encompassing blocks.

Another related idea is that the outermost block is
called the ‘main’ program, and any enclosed block
can also operate on these ‘global’ variables.

Of course, the Symbol computer was defined with
multiple and nested blocks in the hardware
translator and execution unit.

To wit, labels are also treated the same way, so
you can have dozens of blocks all containing the
label L1, and a go to L1 in one of those blocks
means the L1 in that block or, if it isn't there, in the enclosing blocks above.
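Here is a compact Python sketch of that lookup rule (my own illustration, not Symbol's mechanism): look in the current block first, then walk outward through the enclosing blocks, and resolve labels exactly the same way.

class Block:
    def __init__(self, parent=None):
        self.parent = parent          # link to the enclosing block, if any
        self.names = {}

    def lookup(self, name):
        block = self
        while block is not None:
            if name in block.names:
                return block.names[name]
            block = block.parent      # not here: try the next block outward
        raise NameError(name)

main = Block()                        # the outermost ("main") block
main.names["X"] = 10                  # a "global" X
main.names["L1"] = "label L1 in main"

inner = Block(parent=main)
inner.names["X"] = 99                 # a local X that shadows the outer one

print(inner.lookup("X"))              # 99 -- the local declaration wins
print(inner.lookup("L1"))             # found in the enclosing block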

stan


Symbol was delivered to Iowa State University, and they
did a lot of work on it; there are a few reports from
them. As I designed and built the FPU and ALU,
I was curious whether my hardware worked. From what
I can tell some of my ALU worked okay, but
I never knew the full scope of the debugging
and fixing that took place.

Certainly the Symbol computer was too complex
and too complicated, and my guess is that
it never worked very well or was ever fully completed.

I worked on Symbol for 3 years and then
went to Intel to co-design the 4004,
8008, 8080, and more… Some of
these chips worked, I believe.

stan


Ironically, Dave Ditzel (who co-wrote the first paper on RISC architectures with Dave Patterson) worked on it at Iowa State University.
I can only speculate that the experience with Symbol had something to do with that.
And I work for his startup now…


David Ditzel also developed CRISP as an alternative to RISC, better optimized for running C code. AT&T tried to sell that as the Hobbit processor but quickly gave up, forcing Apple and Be to redesign their Newton and BeBox, respectively.

That’s not the way it happened. ATT was designing both Aquarius and Hobbit for Apple
(who asked for changes). Apple cancelled Aquarius, and ATT then demanded that Apple completely fund Hobbit development, which was not the agreement (the agreement was that ATT would fund it and could then sell it to anyone).
Apple basically refused, went to Acorn and asked them to set up a joint development spin-out which would be able to sell the resulting chips to Apple, Acorn, and anyone else, lowering the cost to each, which Acorn was happy to do, and Apple dropped Hobbit.
ATT panicked and tried to sell it elsewhere, and got EO to use it (did they buy EO? I don’t recall), but it failed in the market, and… ARM didn’t. There is a bit more to the story, but characterizing it as ATT giving up and Apple having to redesign the Newton as a result is not the way it actually happened.


I’m sorry that I simplified the Hobbit story to the point of being misleading. As far as I know Be had working hardware prototypes using the Hobbit and were forced to redesign to use the PowerPC instead.

Alan Kay claims that the Newton group did a comparison between the ARM and the Hobbit and found the ARM technically superior. Looking at the respective datasheets I would have expected the Hobbit to be slightly faster, but there might have been problems that aren’t obvious to me.

Speaking of the Aquarius, it is another processor that looks good on paper. I had no idea AT&T was involved in this - thanks for the information!


Allen, not sure if you’ll remember me, I worked for Acorn and hung out with the ARM crowd when you came to work in Cambridge. Could you please get in touch with me?

(Edit by mods to remove private information, and PMs sent.)