PDP-11/HACK - breadboard computer with a J-11 microprocessor

From Brent Hilpert’s website, a minimal breadboard computer built around a J-11 (which is a two-chip module on a wide ceramic carrier)

At PDP-11/HACK, Brent says:

The J11 (more properly the DCJ11, aka “Jaws”) was one of the last gasps of the PDP-11 - a CMOS microprocessor implementation of the PDP-11/70. It’s two chips actually, mounted on a single 60-pin, over-wide, DIP ceramic carrier. The J11 was developed in the early 1980s and introduced in the PDP-11/73 in 1983/84. Although this was at the end of the heyday of the PDP-11, the J11 saw a relatively long production life, being produced till sometime into the 1990s. The unit here has a date code from 1987.

In a linked page about the J-11, Bob Supnik writes:

The J-11 (code name Jaws, which the design team never used) was DEC’s fourth and last PDP-11 microprocessor design, and the first to be done in CMOS. The project was co-developed with Harris Semiconductor. Bob Supnik was the project leader through 1981, then Dan Casaletto. Paul Rubinfeld was lead engineer on the Data chip, Gil Wolrich on the Control chip and the FPA. Keith Henry wrote the microcode. Circuit design and layout were done by Harris Semiconductor. This was the last chip project for which the physical design was outsourced.

Also linked, a page about Peter Schranz’s similar build (“He got the IC count down to 9, while increasing the memory to 128KW, by using a GAL for the glue logic and larger SRAMs. The alterations also allow it to run at 18MHz”), with a photo

via a discussion on the PiDP-11 mailing list.

6 Likes

Thanks for posting this. I did not realize until I read it that there were actual data sheets for implementations of PDP-11 processors. Looking at the data sheet for the J-11 I happened to notice the answer to a question I’ve had for a long time.

Some models of PDP-11 couldn’t handle page faults. What I mean is that depending on the instruction that causes a memory fault, it was impossible to fix up the fault and reliably restart the program. Instructions that used auto-increment or -decrement addressing were the worst offenders.

The PDP-11/45 was like this. But some other (later) models did have support for reliably restarting a program after a memory fault. I never knew how they did it. But now I do:

So the clever designers of the J-11 implemented a tracking system that recorded up to two auto-increment or -decrement operations within a single instruction (you could code, say, MOV @R4+, @R5+ and fault on the second operand). Then they exposed the tracked results through a dedicated register. In addition to mapping in the faulted page, the operating system’s page fault handler had to notice this had occurred, reach into the saved state of the faulted program, and undo the operation(s) of the faulted instruction so it could be restarted and get the right result.
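For the curious, here’s a minimal sketch of that undo step in C. It assumes the tracking register uses the MMR1-style layout described in the memory-management section of the data sheet (each byte records one modification: register number in bits <2:0>, signed change amount in bits <7:3>); the `trapframe` structure and `undo_autoincdec` name are made up for illustration:

```c
#include <stdint.h>

/* Hypothetical saved-register area for the faulting program. */
struct trapframe {
    uint16_t r[8];              /* R0..R7 (R6 = SP, R7 = PC) */
};

/*
 * Undo up to two auto-increment/-decrement side effects recorded by the
 * CPU during the faulting instruction.  'mmr1' is the 16-bit tracking
 * register: low byte = first modification, high byte = second; a zero
 * byte means nothing was recorded in that slot.
 */
static void undo_autoincdec(struct trapframe *tf, uint16_t mmr1)
{
    for (int shift = 0; shift <= 8; shift += 8) {
        uint8_t field = (mmr1 >> shift) & 0xFF;
        if (field == 0)
            continue;
        int reg = field & 0x07;         /* bits <2:0>: register number      */
        int amt = (field >> 3) & 0x1F;  /* bits <7:3>: 2's-complement delta */
        if (amt & 0x10)                 /* sign-extend the 5-bit amount     */
            amt -= 32;
        tf->r[reg] -= amt;              /* subtract to restore the old value */
    }
}
```

Once the side effects are rolled back and the missing page is mapped in, the handler can simply return to the saved PC and let the instruction run again from scratch.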

It must have been quite a mess to write the page fault handler for such a beast. Not for the faint of heart! But it could be done. Incidentally, this is why RISC architectures don’t have auto-increment addressing modes; they are useful, but not enough to be worth all this complexity.

1 Like

Almost a piece of modern art - particularly the second illustration. I wonder how the local hams would feel about the broadband noise at 18 MHz when the board is running? And, given that one would only invest all that time and effort to achieve a working board, one may presume some “useful” application might be run - if only to show success. Just don’t run financial, privacy-dependent, or cryptography-based apps with real data: interception is trivial at short-wave frequencies.

I think it would bother my shortwave radio more. This is not a phone, so for some reason I feel safe.
I wish one could have got a J-11 back then (the ’80s), and VT100s (132x25), but NO, we got the 4.77 MHz PC with TV video and CP/M, err, DOS 1.

Yeah, I agree. Too bad… I can imagine an alternate history where DEC successfully rode the PC wave and avoided bankruptcy. But note that pretty much every (proprietary) workstation company went out of business eventually (e.g. Sun, SGI). It just took them a little longer.

I bet the J11 was an expensive chipset at the time. And I have to believe that the prevalence of the 6502 was primarily because it was cheap. To me, no other explanation fits.

The thing I love the most about the J11 is that it has a debug monitor built into the microcode, which worked well in lieu of a front panel. I’m sure the breadboard J11 project made good use of this feature. (Now it just needs a PCB).

It would be fun to get UNIX v1 running on a J11, if it hasn’t been done already.

These chips are still available as NOS on eBay.

Part of the problem was that DEC’s business model was predicated on relatively expensive machines. This was needed to support their not-exactly-lean employee base. What worked during the mini-computer era stopped working during the “race to the bottom” of the commodity PC era. Look at it another way: It’s amazing DEC lasted four decades, and the Alpha was a ground-breaking microprocessor.

2 Likes

I grabbed one a few months ago, for when I retire (soon, I hope) and have infinite time for my round tuit projects (he lies to himself).

PDP-11s did rather well in the USSR. You can find what seems to be a personal computer, like the Elektronika BK-0010-01, on eBay now and then.


An English version of the machine could have been quite interesting.

1 Like

There was - the Terak workstation. They typically ran UCSD Pascal. I suspect the Elektronika to be (another) Russian copy…

-Gordon

2 Likes

I don’t think that these were related. Elektronika started to produce PDP-11-based machines in 1975 (among them the Elektronika 60, famous for Tetris).

Here’s a rather comprehensive list of machines (sadly, without dates, but a viable starting point):
“PDP-11s behind the Iron Curtain”
https://web.archive.org/web/20120325234853/http://www.village.org/pdp-11/faq.pages/Soviet11s.html

1 Like

In conversation at the Retrofest with Eduard, I think he said that “Elektronika” was a kind of brand which covered many different endeavours. I got the impression that in a command economy, the products used whatever could be got, and were often bartered or liberated from the factory, not always sold, and when sold were affordable only to institutions, not individuals. So, once a Z80 clone or a PDP-11 clone exists and is available, it will be used, rather than the computer designer carefully selecting their preferred microprocessor.

Yeah, Elektronika produced all kinds of things, not just home computers; the BK 0010 and 0011 were basically the Sinclair (or maybe Commodore?) of Soviet Russia, though, as I understand it. If you had a computer or knew someone who had a computer, it was probably one of those. Like the ZX80/81 and VIC-20, the 0010 was sold primarily with cassette storage. The 0011 was more likely (although I think it was still relatively uncommon) to have a diskette drive.

3 Likes

As I understand it, the USSR didn’t have dedicated, separate supply chains for military-grade and consumer components. Because of this, consumer goods competed with military and/or public applications, and took a decided back seat. On the other hand, the production quality of components, if available at all, was generally good, and vintage consumer goods seem not to have suffered the kind of degradation we’re used to. On yet another note, this also meant that there were none of those effects of production at scale (broad availability at diminishing costs) that we enjoyed in the West.

I’ve always marveled at the USSR’s penchant for cloning western computers. DEC was a favorite for a while. Brilliant to choose hardware that ran existing software, of course.

But the modern approach would just be to smuggle western chips. Makes me wonder why they didn’t. Was it really harder back in the day? I guess no TSMC or SMIC back then, so US export controls may have actually been effective.

The US and Canada had the high disposable income for toys like low-cost (US) color TVs, telephones, and computers in the ’70s, as well as no tariffs on said products. Would you buy an Apple II in the US at UK prices plus import tax, had Steve Wozniak lived in Poland and the 6502 been created in the USSR, and you worked as a farmer?

I guess (as always on this matter) the real problem was in R&D towards mass manufacturing. Competing with NATO countries and Japan was an uphill struggle that could only be lost, given the much smaller volume of the domestic economy and the fact that this is really about tooling, which is a much greater effort than just the engineering. Compare this YT video by Asianometry on how the GDR bet its economy on the attempt to convert the country into a digital powerhouse (and, quite obviously, lost).

The USSR had impressive architectural engineering. Compare the BESM-6, which could stand its ground against Cray supercomputers. (E.g., during the Apollo-Soyuz mission, the BESM-6 processed telemetry data in a minute, while it took NASA about 30 minutes for the same task. About 350 BESM-6s were made over the years.) Or the Elbrus architecture, which figured as an ever-lurking promise for high-performance computing in the 1990s, until it eventually vanished.

Anyways, the USSR decided to standardize on Western architectures (IBM/360, DEC PDP-8 & 11, and later VAX), which seems to have been mostly a political decision. I guess the prospect of profiting from global software efforts may have been a viable incentive. R&D already done elsewhere may have been another.
(But I have to stress that I don’t have any special insight into this.)

Yes, I’ve read that! In Pioneers of Soviet Computing, a memoir and history by Malinovsky, we read

The computing industry in the Soviet Union always lagged behind those of the United States and Europe. The reasons for this were varied, but stemmed from having almost no industrial base at the time of the 1917 Bolshevik revolution, no large-scale punched card industry, and no commercial computing industry analogous to IBM that would have helped foster competition. Also, the Soviet government did not aggressively promote computer construction before the Second World War or immediately after.

Soviet computer designers worked diligently to close this gap, given the poor economic conditions they faced in the post-war period. The heroes in Malinovsky’s account - Sergei Lebedev, Viktor Glushkov and several others, deeply believed in the Soviet government’s emphases on education and socialized progress, and devoted their lives and careers to improving the state of the art of computer technology in the communist bloc countries. According to Malinovsky, when their government decided to copy the IBM 360 system in the 1960s instead of relying on their own enormous community of scientific and engineering talent, Lebedev, Glushkov, and several of the Soviet Union’s established computer scientists fought this directive vigilantly while trying to retain faith in their political leaders. This decision remains a topic of dispute among former Soviet computer scientists. Those Soviet scientists who pursued the IBM 360-based series of computers - the well-known ES (Unified System) machines - undoubtedly had their own good reasons for following this path. Their story deserves to be told as well.

Not a bad decision to use Western architectures. The effort to write software is always underestimated. If Linux had been around back then, it would have been another story. But at the time, as everyone here knows, languages and OSes were incompatible and proprietary, and applications were hard to port.

A friend of mine, a chip designer, told me stories of how aware DEC was of the USSR’s exploits. On a MicroVAX chip he worked on, the engineers wrote in Cyrillic (Russian): “When it’s worth stealing the very best.”

And yes, we know how smart Russians are overall.