Stack machines vs. RISC, a modern comparison?

A TTL-based RISC in 1978 would actually have been cheaper than the VAX, but given the slowness of the memory it might not have performed as well. A microcoded CISC used a very fast but tiny memory (the microcode ROM or RAM) for most accesses, and the slower main memory a bit less.

DRAM chips not only became larger over time, but also faster. Of course, high performance RISCs had caches in place of the microcode memory.

The only RISC minicomputer I am familiar with is the Pyramid from 1983. Cray’s supercomputers at CDC and his own companies were RISC too, though his unconventional assemblers might not make this obvious.

1 Like

Steve Furber wrote somewhere that this was really the tradeoff being made: large microcode on the chip, or large cache. Well-cached code looks like task-specific microcode. As a side effect, perhaps, it's a whole lot easier to specify, model, implement, and validate a RISC core plus cache than it is to do those things with a CISC. Same die size, similar production cost, massive difference in engineering effort. Plus, as it happens, clever compilers can make a great difference.

Ah, here, from his 1989 book VLSI RISC Architecture and Organization:

A feature of microcode is that the microcode ROM usually has very good access time, so frequently used operations will run fast if they are microcoded. One of the earliest RISC exponents (Radin, 1983) pointed out that cache memories also have good access times, and whereas microcode only contains the static set of operations chosen by the original designer, a cache can contain a dynamic set of frequently used operations selected automatically by the hardware to suit the current task.

2 Likes

Well, C is an Algol-type language; it just has different syntax and ordering of parameters, but the basic internal workings are the same. This is why AT&T's own processor designs were "stack like": conceptually they were stack machines, but they had hardware to make the stack work quickly. At least, to my limited knowledge of how they worked.

That’s an interesting question actually. I have one Intel machine in my home, two Macs, an iPad and two iPhones. I think the router is ARM as well. We also have a Nest, and I don’t know what that uses, and then various appliances.

ARM definitely outsells and out-populates Intel by a wide margin these days; there's something like 300 billion active ARMs in the world. Intel doesn't publish numbers, but there are something like 3 billion laptops and desktops in the world, and roughly 70% are Intel, so maybe 2.5 billion on the outside. That's x86-ish Intel, not all their other designs.

I’d bet the total number of “others” is probably around the same as ARM, at least the same order of magnitude. There’s some sort of moderately complex controller in my thermostat, and the furnace and other appliances all have simple controllers, so maybe a dozen total?

You could make “real” RISC at any time, and I recall one such design for a minicomputer - Raytheon perhaps? No, not them, it was one of the US industrial/aerospace companies anyway.

RISC was really aimed at addressing one issue: microcode. In the Moto 68k you had ~68,000 transistors, of which about 23,000 were running the microcode, taking up, along with the code store, something like 1/4 to 1/2 of the entire chip. And yet, as Cocke demonstrated, the vast majority of these instructions were never used. Removing all the fancy addressing modes made the otherwise identical actual processor inside much cheaper, and it ran faster because there was no overhead decoding everything. So now you can build a processor on exactly the same line that has the same internal guts with half the transistors (or less, the original MIPS was ~23k IIRC), or use the same 68k transistors but now with, for instance, a complete 32-bit ALU instead of 16-bit, and maybe more registers, and maybe a pipeline. Those will all make the processor run way faster than all those fancy instructions no one was using in the first place.

Microcode costs. Cost was meaningless to IBM, and the advantages are huge for them - it lets them completely re-engineer the actual machine while all their programs keep running. This is Intel's model to this day. But for a microprocessor designer, where cost/performance is the main goal and you don't have backward compatibility to worry about, what's the point? Moto seems to have added all this because "that's the way you design processors, just look at IBM and DEC". When I was in uni I took a course on CPU design and it basically just outright assumed microcode was the only way to make one.

Now why weren't people using all those fancy instructions? Mostly because the compilers didn't. Why didn't they? What Cocke found was that they selected the one version of any instruction that they knew ran the fastest on the lowest-end machines, because then at least it wouldn't bog down on those platforms; if that gave up some performance on the higher-end systems, well, such is life. They didn't have the memory or performance needed to put in code to figure out the best instruction for any particular machine, and doing that per-machine would have hurt cross-system compatibility (in a way), so they punted.

Today this is not a problem; our compilers do dozens of passes across the code, and they can easily take advantage of these sorts of things. So we see people trying things like VLIW and such, where more of the load is put on the compiler. The Mill is also sort of attacking that issue, but in a different way.

1 Like

The original ARM didn't use caches, but took advantage of DRAM's fast page mode (a CISC wouldn't have been able to do that as easily): sequential accesses that stay within the same DRAM row skip the row-address setup, so they run roughly twice as fast as random accesses, and the ARM's memory controller exploited that for instruction fetches. The ARM3 had an internal unified cache and got a huge performance boost out of that.

Early high-performance RISCs (MIPS R2000, Am29000, early SPARCs) couldn't fit the caches onto the same chip and had to use external SRAMs instead.

When I was in uni I took a course on CPU design and it basically just outright assumed microcode was the only way to make one.

That, I think, is due to the tools used for CPU design. For the longest time it was a #3 pencil, quad paper, and random logic. Next came ROM table lookup, then microcode - all by hand.

Only with programmable logic and PAL software did we get the tools for better hardware. Sadly, VHDL and Verilog took over as we went to mass-produced chip designs, and that was the end of classic computer designs.

My home-brew computer designs use CMOS PALs & CPLDs. Simple and easy to program: I have a small C program to generate microcode state tables, or pass text through as CUPL programming logic for the control 1508 CPLD. This makes it really easy to redesign the CPU.
Sadly I can’t update the software as fast as the hardware.
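Very roughly, such a generator might look something like this - a toy sketch only, with made-up state names, signal names, and a much simplified table layout, not the actual program:

    /* Toy sketch of a microcode state-table generator (hypothetical layout).
       Each row names a state, the control signals it asserts, and the next
       state; the table can be dumped as a ROM image or as CUPL-style
       equations for a CPLD. */
    #include <stdio.h>

    struct micro_op {
        const char *name;      /* state label                   */
        unsigned    controls;  /* control-signal bits to assert */
        unsigned    next;      /* next-state number             */
    };

    /* hypothetical control-signal bits */
    enum { PC_INC = 1u << 0, MEM_RD = 1u << 1, IR_LOAD = 1u << 2, ALU_EN = 1u << 3 };

    static const struct micro_op table[] = {
        { "FETCH",   MEM_RD | IR_LOAD, 1 },
        { "DECODE",  0,                2 },
        { "EXECUTE", ALU_EN,           3 },
        { "WRITEBK", PC_INC,           0 },
    };
    #define N_STATES (sizeof table / sizeof table[0])

    /* Dump the table as a plain ROM image: address, control word, next state. */
    static void dump_rom(void)
    {
        for (unsigned s = 0; s < N_STATES; s++)
            printf("%u: controls=%X next=%u  ; %s\n",
                   s, table[s].controls, table[s].next, table[s].name);
    }

    /* Dump one control signal as a CUPL-style sum of products over the state
       field ('#' is OR in CUPL); assumes a FIELD state = [Q1..Q0]; declaration
       elsewhere in the CUPL source. */
    static void dump_cupl(const char *signal, unsigned bit)
    {
        printf("%s =", signal);
        int first = 1;
        for (unsigned s = 0; s < N_STATES; s++) {
            if (table[s].controls & bit) {
                printf("%s state:%u", first ? "" : " #", s);
                first = 0;
            }
        }
        printf("%s;\n", first ? " 'b'0" : "");
    }

    int main(void)
    {
        dump_rom();
        dump_cupl("mem_rd",  MEM_RD);
        dump_cupl("ir_load", IR_LOAD);
        dump_cupl("alu_en",  ALU_EN);
        dump_cupl("pc_inc",  PC_INC);
        return 0;
    }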
Ben.

"I use a Mac to help me design the next Cray." (when he was told that Apple Inc. had recently bought a Cray supercomputer to help them design the next Mac)
- Seymour Cray

1 Like

DEC (or rather, Ken Olsen) was notoriously hostile to UNIX, preferring DEC’s own operating systems (just like Dave Cutler).

2 Likes

Was that because of marketing, or due to a personal gripe?
I really wonder how many people ran early UNIX because it came on mag tape.

I can't speak for Ken Olsen, but I can easily believe that the man, having founded the company, simply trusted his own tech and found something hacky thrown together by some Bell Labs nerds to be suspect.

More specifically, Unix was made to be "portable" from the very start. Not only the DEC idea that "they trusted their own tech", but the entire model of computer companies writing operating systems for their own machines, was challenged by Unix and C. And this goes back to a previous statement that the VAX was created to run C code: I think elb was being too polite and gentle in his answer. The VAX would never, ever have been made for a Unix language!

One of the things that doomed DEC was that they made as much as possible in-house. There's a book by AnnaLee Saxenian, "Regional Advantage," that dissects the difference between the computer businesses on Route 128 (and the northeast generally) and Silicon Valley. The Valley companies subcontracted, outsourced, and networked with other companies. That's how they won. The book is a little dry, of course, but I figured this was an opportunity to point out the battle between underlying philosophies here.

And ironically, I think the centralization makes working on/with DEC computers more fun. Everything DEC made was done in close cooperation, by a single entity. Hardly anything is made like that these days.

2 Likes

It feels a bit like cathedral vs bazaar… I did like VMS, felt it was a rich coherent world. Initially I stumbled with unix, a collection of small utilities seemed fragmented - but I came around to the idea of composability (and I became a big fan of awk)

4 Likes

Yes, except "Open" in this sense meant open to other companies. Rod Canion, who started Compaq, even called his book (memoir?) "Open: How Compaq Ended IBM's PC Domination and Helped Invent Modern Computing" - very humbly, lol - which, afaik, totally ignores what ESR called "Open Source", but is kinda-sorta halfway there.
As far as RISC goes, it reminds me that Chris Garcia called the 1954 DYSEAC, with 512 words of 45 bits but only 11 types of instructions, "in essence" a RISC machine. As Maury said, RISC could be made at any time.

Who isn’t? Oh yeah that’s right, all my colleagues, who are south of 40.

(shakes fist at cloud).

4 Likes

In all fairness, they hate Perl, which kind of superseded awk. :)

As I know the story, it was really Dave Cutler who detested Unix.

(I'm not so sure Ken Olsen held strong ideas about this. It took quite a while for DEC to distribute their own software, and they really were all in for open, shared software right from the beginning, operating more as a hub with things like DECUS and software distribution lists / memos. However, I think, when operating systems became a must, a more cathedral-like foundation just made sense for a company like DEC. That said, there may have been aesthetics involved as well.)

1 Like

The Wikipedia article on Olsen leads to a Computerworld interview where he compares Unix salesmen to snake oil salesmen and calls for more standardization of Unix. True, not hate as such, more like distrust; and yes, Cutler really detested Unix.

In all fairness, they hate Perl, which kind of superseded awk

Frankly, these days I will be pleasantly shocked if south-of-40 people even know Perl, and truly shocked if they know awk (or sed, etc.). Python seems to have taken over the scripting ecological niche completely.

2 Likes

I wonder if making end-user writeable microcode available would have changed things there.

HP's HPPA line of computers started out as a board of TTL that plugged into a low-end HP 3000. The performance of the prototype basically killed the super-"VAX" project that was underway.

There were (at least) 4 processor companies that came out of HP alumni (though only 2 of them were RISC):

  • Pyramid was founded by Rob Ragen-Kelley
  • Elxsi (not RISC) was founded by Len Shar
  • Ridge was founded by John Sell (who started a RISC project at HP prior to HPPA)
  • and of course, Tandem (not RISC), the most successful of the bunch.
1 Like

Some thoughts on stack machines from back in the day… https://textfiles.meulie.net/bitsaved/Books/Koopman_stackComputers.pdf

1 Like

Also available from Koopman’s own site: Stack Computers: the new wave -- an on-line book

4 Likes

You are correct: most of the really early machines were RISC, except they had mostly memory operands: no general registers, but a small number of special-purpose ones.

I think general registers are more the exception rather than the rule, thinking of the PDP-11.
A program counter and register file are more common, like on the PDP-10.
Up to the '70s, core memory was often used as the register file or for index registers.