ROM-based home computers since the Atari ST?

I had to patch thousands of telemetry units in the Y2K scenario. The units didn’t have an upgrade feature, but you could send them an Intel hex string over SMS on the GSM network and they would understand it. I ended up setting one bit in the program flash area; then I had to wait 24 hours for each unit to do its nightly cycle and reset.
It wasn’t a Y2K bug, just sloppy coding :smiley: (if GPS_DATUM > 2000 then GPS_DATUM=INVALID)

The CPU was a Motorola 68000. We devised a patching scheme after that, with the code in ROM and some flash holding the deltas to be applied.
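To make the original bug concrete, the check was of roughly this shape; the C rendering, the names and the single-bit fix shown are my own illustration, not the actual 68000 firmware:

    /* Illustrative reconstruction only -- GPS_DATUM, GPS_INVALID and the
       exact comparison are guesses at the shape of the check, not the
       real firmware source. */
    #define GPS_INVALID (-1)

    int check_year(int gps_datum)
    {
        if (gps_datum > 2000)      /* sloppy: every year after 2000 rejected */
            return GPS_INVALID;
        return gps_datum;
    }

    /* One way a single flipped bit can defuse such a check: 2000 is 0x07D0,
       and setting bit 12 of that literal in the code image turns it into
       0x17D0 (6096), pushing the "invalid" threshold far into the future. */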

3 Likes

My TRS-80 Model 100/102 computers are all ROM based.

The Tandy 1100FD that I just finished upgrading with a Gotek can boot MS-DOS 3.2 from ROM and has the base DeskMate in ROM.

1 Like

Oh wait - the Psion 3 - the finest of the Psions! Handheld, excellent keyboard (considering the size) and excellent battery life. And OS on ROM, as required. Monochrome display, of course, but that’s correct. This 3mx has a backlight, and slightly worse clarity as a consequence. But it does have 2Mbyte of RAM and a 16 bit CPU.

How about the Z88, which is still being actively used (although probably not by a huge number of users)?

Psion had (briefly) the MC series - I have not personally used one, but from the press reviews at the time they looked really nice.

2 Likes

I’m afraid we have to compare the Z88 with the superficially similar NC100… and I gather the NC100 wins! (In unrelated news, I own an NC100 but not a Z88. And the NC100 has BBC Basic. Oh, so does the Z88…)
“For a start it has a bearable keyboard, which is one up on the Z88 and Psion”

But there’s room for differences of opinion on this:

I don’t think the Z88 is still in production, but the Amstrad NC-100 is very similar!

About as similar as eggs and oranges. Let’s just say the Z88 is more versatile, powerful, and interesting, but people who like colour-coded keyboards and rolodex gadgets might disagree.

I have a hazy memory that the fly-by-wire and other systems on the Shuttle were based on multiple/redundant special versions of the S/360. Mind you, this is based on memory of an article I wrote for Personal Computer World in the early 90s, so I could be getting this entirely wrong.

1 Like

I was thinking of posting about this: as the Shuttle flew up until 2011 and the computer system design dates from 1966, that’s retrocomputing right there.

1 Like

Right, so that’s for avionics. Having typed my previous reply I remembered that I’d posted my 1990 PCW article on my blog. At one point it says:

What is perhaps amazing for such a state-of-the-art vehicle is that the five computers on board are none other than IBM 360s, each with a miserly 104K of 32-bit word memory. This made sense back in the 1970s when the Shuttle project was born. The choice also makes sense from the point of view of the processor being very familiar. This may have contributed to the astonishingly low software error rate of 0.11 per thousand lines of code for the 500,000 lines of the flight software – about 100 times better than the industry average. Another factor could be NASA’s willingness to spend around $1000 per line which allowed IBM’s programmers, using the PL/1-like HAL/S language, to allocate the time to getting things right. At times, up to seven different versions of the software were being developed. The programmers also created around two million lines of code on Shuttle-specific software tools, with an error rate of 0.4 per thousand lines, and in the first operating period of 1981-1985 made some 4000 changes to the flight code.
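(For scale, those figures work out to roughly 55 defects across the 500,000 lines of flight code, and about $500 million spent on it at $1000 per line.)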

2 Likes

SPARCv8 is both open source and space-certified, so it seems like a pretty appropriate CPU for controlling space ships. Most simple processors should be fine for space, since large feature sizes are less vulnerable to solar and cosmic radiation. Miniaturizing them would work against both rad-hardness and ease of checking for back doors, though! Of course, a high-tech backdoor can be made well nigh undetectable, so there’s not much choice other than tightly controlling the supply chain.

Homebrew CPUs built from parts too commonplace to be worth backdooring en masse would be another option. Encrypt each PROM so that, if it happens to have a backdoor program baked in, that program just reads out as gibberish to the CPU.
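A minimal sketch of that PROM idea, assuming a simple per-board XOR mask; the key, the masking scheme and the names are all invented for illustration:

    #include <stdint.h>

    #define BOARD_KEY 0x5Au   /* chosen per board when the PROM is programmed */

    /* At programming time: store the scrambled image in the PROM. */
    uint8_t scramble(uint8_t plain, uint16_t addr)
    {
        return plain ^ BOARD_KEY ^ (uint8_t)addr;   /* address-dependent mask */
    }

    /* In the fetch path (a handful of XOR gates or a small CPLD): unscramble
       on every read.  A backdoor image baked into the PROM at the factory
       would not know BOARD_KEY, so it decodes to junk opcodes. */
    uint8_t fetch(const uint8_t *prom, uint16_t addr)
    {
        return prom[addr] ^ BOARD_KEY ^ (uint8_t)addr;
    }

Obviously a real scheme would want something stronger than a byte-wide XOR, but the principle is the same.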

Perhaps these are out of scope, but several DOS-based “portable” computers had their OS in ROM, like the Toshiba T1000 (and its family) and the Atari Portfolio. Another group that could be taken into account is the handhelds that ran Windows CE.

8-, 16- and perhaps even 32-bit words seem the wrong direction to go here. We have modern technology, and we’re looking for systems that are simple and easy to verify above all else, which to my mind is more easily achieved with a larger word size. 36 bits was a standard word size in the '50s and '60s in part because it was an easy way to achieve ten decimal digits of accuracy; smaller machines needed more complex code to do that. In this case, since transistors and memory are cheap, there’s no reason not to go with 64-bit, 80-bit or even larger word sizes and take advantage of the simpler code and data structures we can get as a result. Address space sizes would also be correspondingly large, making it easier to deal with large data sets without additional complexity in code.

The same applies to storage and peripherals. Berkeley FFS for example had considerable extra complexity to optimize the speed of I/O on rotating disks that had a head moving across them. Much of that can go away once we have storage devices such as flash memory with O(1) access.

On the software side, on the other hand, I’d imagine things would be a throwback to the '80s, or even earlier. Programs, and particularly operating systems, would have to be much simpler. Graphical user interfaces and all their accoutrements (such as transparent “cut and paste” for different data types) would be gone; instead we’d be using simple command line programs and displaying graphics in much simpler ways. Data formats, too, would become simpler: you can’t attack a system by exploiting a vulnerability in its JPEG parser if the system just uses straight bitmap images instead of JPEGs.
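As a purely hypothetical illustration of how little there is left to parse in a format like that:

    #include <stdint.h>

    /* Hypothetical "straight bitmap" header: three fixed-size fields and
       then raw pixel data, row by row -- no compression, no metadata,
       essentially no parser to exploit. */
    struct raw_image {
        uint32_t width;        /* pixels per row                    */
        uint32_t height;       /* number of rows                    */
        uint32_t bits_per_px;  /* e.g. 1 for mono, 8 for greyscale  */
        /* followed by width * height * bits_per_px / 8 bytes of pixels */
    };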

Note that massive but simple processing power fixes one of the largest problems with using old computers in a modern environment: they don’t have the processing power to deal with modern security protocols, which rely heavily on numerically intensive cryptographic algorithms.

Big computers are not the answer.
Well, if you stick to real computing and TEXT displays, 20 bits of address space goes a long way. Space computing has to be rugged; you can’t have a cosmic ray crashing the navigation system, and modern stuff does not cut it. 8, 12 and 16 bits were never meant for computing, just process control, like running a power plant.
IBM cut 36 bits down to 32 bits, with byte access as a string function, leaving words, half words, and 32-bit and 64-bit floating point as the normal operands, in the early 1960s; in my mind, a revised version of the IBM 7030.

I have a nice 20-bit TTL design I am working on; I am sure that, other than bitmapped displays, it would handle critical systems fine. Feel free to take the design and make a nice LSI chip for space with it.
PS: real space computers are named HAL …

Sure, but 20-bit registers and memory are kind of awkward (particularly if you’re dealing with handling text characters). And it’s very little more difficult to verify a 32-bit ALU and memory access than 20-bit.

I think I wasn’t as clear as I could have been about this in my post above, but the general idea is that you simplify the processor and the programs by removing as much complexity as possible, at the cost of making things “wider.” So only a flat address space, for example, because segment registers and the code that uses them add a lot of complexity and increase the cost of verification. Having an arithmetic word size large enough to give sufficient accuracy for all of your mathematical calculations (e.g., for navigation) also avoids code complexity; 32 bits won’t do it, and you probably need at least a 60-bit word.
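To make the segment-register point concrete, here’s a sketch; the 8086-style scheme is used only because it’s the familiar example, not something the proposed machine would have:

    #include <stdint.h>

    /* Flat addressing: the verifier only has to reason about one value. */
    uint32_t flat_addr(uint32_t reg) { return reg; }

    /* 8086-style segmentation: two registers, a shift, an add, wrap-around,
       and aliasing (many seg:offset pairs name the same physical address),
       all of which any verification effort has to cover. */
    uint32_t seg_addr(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + (uint32_t)off;
    }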

I did go wider to make it simple: five 74LS181s for the ALU rather than 16 bits. And what law states that bytes are 8 bits? 10 bits works nicely here. Niklaus Wirth has a nice 32-bit computer design; see the Oberon notes.

This topic was automatically opened after 34 hours.

I think perhaps I’m still not being clear enough about my “wider words” proposal here. This is much more along the lines of the PDP-10, or perhaps the CDC 6600, rather than the common byte-oriented architectures of the IBM 360, PDP-11, VAX, and most microprocessors.

Let me start with the simplest example of the kind of simplification I’m talking about.

Byte and “small-word” machines need to do multi-step, multi-word arithmetic to achieve sufficient precision for scientific applications (such as, I’m assuming, celestial navigation). Here’s an example of adding the first two elements of an array of 2n-bit values on an n-bit processor, using little-endian storage:

inputs      .const  $12, $34, $56, $78
output      .const  0, 0        ; reserve two words for the double-width sum

add:        load    inputs+0    ; least significant word of input 0
            add     inputs+2    ; add (without carry) to LSW of input 1
            store   output
            load    inputs+1    ; MSW of input 0
            adc     inputs+3    ; add with carry from previous operation
            store   output+1

So what I’m proposing is to make everything wide enough that you generally wouldn’t need to do multi-word operations; your code would instead look like this:

inputs      .const $3412, $7856
output      .const 0

add:        load    inputs
            add     inputs+1
            store   output

In the early days of computing, 36-bit words (10.84 decimal digits) were felt to be the minimum viable size for general scientific computing. (32 bits gives only 9.63 decimal digits.)

I’m not sure here if you’re proposing that 20 bits (6.02 decimal digits) is wide enough for most calculations or if you’re simply proposing to continue with methods such as my first example above. I’m guessing the latter; even for late '70s microcomputers, Microsoft quickly extended the 24-bit precision (7.22 digits) of their early 8-bit BASICs to 32 bits (9.63 digits).

As background, early computer manufacturers felt that 36 bits (10.84 digits) was the minimum precision necessary for scientific applications. (I am assuming that things such as celestial navigation would come under this area.) Seymour Cray, when designing the CDC 6600 for scientific calculations of the 1960s, felt that 60 bits (18.06 digits) was more appropriate.
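(As a quick check on those digit figures: decimal digits = word bits × log₁₀ 2 ≈ 0.30103 × bits, so 36 × 0.30103 ≈ 10.84, 60 × 0.30103 ≈ 18.06, and 32 × 0.30103 ≈ 9.63.)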

Well, none of course. You’ll note that in my examples above there isn’t even any such thing as “bytes.” But of course one probably does need to interoperate to some degree with the modern backdoored computers of Earth, if only to exchange email, so it seems to me likely that you might want some sort of concept of “byte” (perhaps a 16-bit one, large enough to hold a Unicode basic plane character) that makes that easier, if that’s what they’re still using in the age of the OP’s story. Bringing the concept of a byte into your own systems might introduce similar problems to the ones I’m trying to avoid above, though.

I guess the summary is, I’m proposing looking at using “retro” computer technology as the original poster suggested, but a very different kind of retro technology, more like the PDP-10 and CDC 6600 rather than the IBM 360, the PDP-11 and their spiritual successors, which include the processors used in '70s and '80s microcomputers.

I think the answer here is that there’s always been a tension between wider and narrower word sizes. Over on the anycpu forums @oldben has been developing (and developing thoughts on) 20-bit computing. Other projects over there have moved from one word width to another. Ferranti, I believe, picked up on 24 bits as being a good size for instructions, with 48 bit operations being good for some scientific computing, and 24 bit being good for a lot of realtime computing.

Of course the memory implementation and cost is one of the factors: core memory, FPGAs, both allow for fairly free choice. Nibble-wide SRAM and bit-wide DRAM also allow for fairly free choice. Other technologies are fairly 8-bit or 32-bit oriented.

It’s clear in history and on anycpu that there is no right answer. There may be sweet spots, and a given person may have a preference - they may even be able to say why they have a preference. But no-one is right.

(We’ve seen the 8 bit and the 6 bit byte, I think. We might have seen 7 bit and 9 bit bytes - I don’t have any examples to hand.)

I believe that, while smaller word sizes were generally chosen for cost efficiency, the number of bits actually used in a memory location might have been a crucial factor. With 8 or 12 bits, you probably make fairly efficient use of the memory available. With 48 bits, chances are you’re storing only small numbers or codes and wasting significant amounts of precious core memory. Personally, I found 18 bits nice, for computations and storage, but it’s still a bit short for many tasks. A sweet spot is probably in the 20-24 bit range: reasonable single precision and still some memory efficiency. (The latter, of course, is nowadays of no concern.)

Edit: On the other hand, the cost and complexity of the address logic, buffers, and ALU increase steeply with the word length. (Historically, registers used to be the most cost-intensive parts.) From this perspective, even 20 bits is already luxurious.

The PDP-11 and the IBM 1130 were more general purpose machines than the other 16-bit machines often used for control in the 1970s. 18- and 20-bit machines tended to be designed more for control than for general purpose computing. I have been assuming that aircraft and spacecraft computers would fall in this ballpark for word size. I suspect NASA has a lot of papers on possible space computer designs, and that would be the real answer.
PS: I found this site about the F-14 “Tomcat” first microprocessor from 1971 (20 bits): https://firstmicroprocessor.com/