ROM-based home computers since the Atari ST?

I hadn’t focused too much on that (yet), but I do recall thinking that I certainly wouldn’t want an OS as complex as GEM. In fact, for my more critical machines I probably wouldn’t want an OS at all: I’d build my program with whatever minimal support it needs for I/O, put it into a cartridge, and then turn the machine off, put in my cartridge, and turn it back on, just like cartridge apps used to work on the Commodore Max Machine (which had no built-in ROM of its own).

(There’s a parallel there with how we use Docker today: the sysadmin supplies only a kernel, and whoever builds the container determines everything else that goes, or doesn’t go, into it. Though what I’m talking about above seems more directly parallel to a unikernel, which I first saw implemented more than a decade ago but which, as far as I can tell, hasn’t achieved much popularity since.)

For data transfer, PROMs in memory cards with a physical “write-disable” work well, too; you need not deal with much of the complexity that comes with securing networks and the software that serves clients on them. (Particularly, with securing the software to make sure it serves only the data it should serve, and not other data on the same machine that should stay private.)

Is this then taking us back to Star Trek and the like, with their little cards they keep toting around because they apparently have never heard of networking? Perhaps they didn’t get that so wrong after all!


This is an interesting and probably important approach. Let’s optimize architectures for verification.

Regarding immutability, another approach may be mounting the OS root read-only. While this would lend itself a bit better to updates than ROMs, it’s effectively the same. (Compare recent versions of macOS, with their read-only system volume.)




BTW, another take on making computing simpler and more understandable is given in “Mu: A minimal hobbyist computing stack”:

Over the past year I’ve been working on a minimal-dependency hobbyist computing stack (everything above the processor) called Mu. The goal is to:

  1. build up infrastructure people can enjoy programming on,
  2. using as little code as possible, so that people can also hack on the underpinnings, modifying them to suit diverse desires.

Conventional stacks kinda support 1 if you squint, but they punt on 2, so it can take years to understand just one piece of infrastructure (like the C compiler). It looks like nobody understands the entire stack anymore. I’d like Mu to be a stack a single person can hold in their head all at once, and modify in radical ways.

I don’t know that I really agree with a number of his design decisions (and his use of comments like “move ebx to esp” rather than “move R5 to R4” increases confusion, for me at least), but it’s definitely an interesting approach with some worthwhile ideas in it.

(This came to me via BigEd on anycpu.org.)



Thanks everyone for the interesting ideas and thoughts! For the purposes of my SF story, my concerns about an older 1960s design are:

  1. At some point a “smaller” architecture is going to be too limited for the computing requirements of operating the relevant spacecraft systems.

  2. In this setting, there simply aren’t many space inhabitants. As such, these uses piggyback on the efforts of others, who have different goals and priorities.

In other words, there aren’t enough manned spacecraft users to support a hardware/software design effort. There are certainly a lot of unmanned spacecraft, but it’s simply accepted that these will regularly get hacked and fail sometimes. That’s fine for them. That’s not fine for manned spacecraft systems.

That said, why not a hobbyist project for some sort of reliable provable computing from the ground up? That probably makes more sense.

OTOH, one of the themes of my story’s setting is how things in the world tend to NOT be what makes more sense.

How about the B-205 computer? You will find its console-style displays in classic shows like “Lost in Space” and “Star Trek”, and in movies like “The Angry Red Planet” (1959). I tend to favor more hardware-based digital systems rather than software-based ones for a space habitat. Smaller chips can be tested both at run time (parity bit) and by replacement of a module.
Though I’m joking about the B-205, only the lack of memory would prevent it from being used in space, if a decimal computer were needed. The advantage of the older machines is that they were designed to be more error-forgiving than modern stuff.
PS: Getting a space plane to orbit, I think, should come before the computers. The USA (defence department) does not want easy space access, so we are foo-bar-ed for any kind of space development.
(A good topic, but not for this site).

Wow, there was certainly a huge miscommunication here.

My purpose in proposing a 1960s architectural design was to have a much larger and faster machine than the Atari ST series could ever hope to deliver, yet also simplify the software.

The machines I’m talking about were not slow. The CDC 6600 and its successor the 7600 were the most powerful computers in the world for a full decade (the 6600 from 1964, the 7600 from 1969 into the mid-1970s), and their successors, from the CDC STAR-100 and Cray-1 through the Cray Y-MP/832, maintained that position for another decade, through 1988, via the addition of vector processing to more or less the same architecture.

Now there are certain things they did for speed that I would not propose taking on, such as superscalar instruction execution (unless the scoreboarding system could be dramatically simplified), but I think that faster transistors in smaller packages would make up for going back to the simpler (and slower) instruction execution architectures of microprocessors from the 8080, 6800 and 6502 through the 68010.

Larger word sizes make larger memories much easier to handle. Even a 32-bit word-addressed machine has a 16 GB address space (larger even than the 68040’s 4 GB address space, and much larger than the 68000’s 16 MB address space). But I’d certainly propose a larger word size than that, at least 40 bits, and probably 64. The latter potentially gives a larger address space than any modern machine, or alternatively the opportunity to have instructions containing pointers within them to anywhere in a still very large address space.
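(To make that arithmetic explicit, here’s a quick back-of-envelope sketch in Python. The word sizes and address widths are just the ones mentioned above, not a spec for any particular machine, and it assumes word sizes that are a whole number of bytes.)

```python
# Rough address-space arithmetic for word-addressed vs. byte-addressed machines.
# Illustrative only: assumes a flat address as wide as stated and whole-byte words.

def word_addressed_bytes(addr_bits: int, word_bits: int) -> int:
    """Total bytes reachable when each address names one word of word_bits bits."""
    return (2 ** addr_bits) * (word_bits // 8)

# 32-bit word-addressed machine: 2^32 words x 4 bytes = 16 GiB
print(word_addressed_bytes(32, 32) // 2**30, "GiB")   # 16

# Compare byte-addressed micros:
print((2 ** 32) // 2**30, "GiB")   # 32-bit byte addressing (68040-style): 4 GiB
print((2 ** 24) // 2**20, "MiB")   # 24-bit byte addressing (68000-style): 16 MiB

# Larger words, still word-addressed:
print(word_addressed_bytes(40, 40) // 2**40, "TiB")   # 40-bit words: 5 TiB
print(word_addressed_bytes(64, 64))                    # 64-bit words: 2^64 x 8 bytes
```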

Sticking to CPUs with minimal parallelism and avoiding tricks beyond very basic pipelining, the larger words again give more speed: a 64-bit machine with 64-bit buses does a 64-bit addition in significantly less than a quarter the time of something like the Atari ST’s 68000, with its 16-bit bus, and in one to three instructions rather than a half dozen to a dozen.
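(As a rough illustration of the bus-width point only: the little sketch below just counts bus transfers for a memory-to-memory 64-bit add, and ignores instruction fetches, ALU time, and the extra multi-precision carry-handling instructions, all of which make the narrow-bus case even worse.)

```python
# Bus transfers needed to read two 64-bit operands from memory and write the sum back.
# Not a timing model; it only shows why the narrow bus hurts so much.

def transfers(operand_bits: int, bus_bits: int) -> int:
    """Transfers to read operand A, read operand B, and write the result."""
    per_operand = operand_bits // bus_bits
    return 3 * per_operand

print(transfers(64, 16))  # 12 transfers on a 16-bit bus (68000-style)
print(transfers(64, 64))  # 3 transfers on a full 64-bit bus
```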

This could all be done in a way that’s easier to understand and verify than a 68000, and let you write software that’s much easier to understand and verify than the Atari ST’s ROM.

It sounds like you’re talking about a ground-up novel design, rather than directly replicating/emulating an existing historical computer.

Looking at the historical specifications of the CDC 6600, I don’t see anything that would suggest to me that it would be more powerful. Because clock speed will be greatly improved in any case, that’s not really the important limiting factor. The important limiting factor would be memory space. The CDC 6600 has maximum RAM under 1MB, while the Atari ST could have up to 14MB. For many sensor tasks - such as imaging radar - 14MB might be doable but <1MB would not be adequate.

For more complex tasks, of course, 14MB will not be adequate. But that makes for interesting story consequences. Any sort of more complex sensor analysis would be done by more vulnerable computational devices. So, the characters have to make choices about how much to use them and how to make do without them when their spacecraft’s systems have been compromised.

Looking online for just what you need to be space-rated, I could not find much, other than that you can get a MIPS64 CPU space-rated. The low working voltage of modern CPUs and memory (under 3.0 volts) means a high risk of failure with radiation. Getting space-rated may mean a 4x decrease in speed and memory size: a 1.5-volt core becomes a 3-volt one, like something from, say, the year 2000. Remember, error checking of some sort needs to be done.

It’s a novel design in that it wouldn’t be a direct copy of any existing computer. However, it’s not novel in that it re-uses old architectural ideas, making relatively minor changes to them. The most novel thing about it is that these old and standard architectural ideas can be tweaked and simplified because modern hardware fabrication techniques use much faster and smaller gates.

No, the CDC has a 256 kilo-word address space, which at 60 bits per word is about a 2 megabyte address space. Going back to the older word addressing rather than using modern byte addressing is one of the key things that I’m proposing. Except, of course, that I am specifically not proposing using an 18-bit address space; if you’re fixated on that, the problem you’re concerned about is a straw man.

(And I don’t think that 14 MB is anywhere near enough. That’s hardly even two 1920x1080 full-colour screens of graphics data. I would suggest that several gigabytes is about the minimum address space you’d want if you want to avoid adding unnecessary memory management complexity.)
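(A quick back-of-envelope, assuming 24-bit colour and no row padding, to show where that figure comes from:)

```python
# Raw framebuffer size for a 1920x1080 full-colour image, 3 bytes per pixel, no padding.
frame_bytes = 1920 * 1080 * 3
print(frame_bytes / 2**20)      # about 5.9 MiB per frame
print(2 * frame_bytes / 2**20)  # about 11.9 MiB for two frames, out of a 14 MB machine
```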

That certainly is an interesting kind of problem to introduce to your story, though I think you’d have this issue anyway with just software alone, even with a system following my proposed style. Do you use your own verified software, or do you use a package that claims to produce better answers but is not so well verified?

I suppose part of what you need to think about is how “hard” you want your science fiction story to be. Using Atari STs or other computers from the '80s is certainly an interesting fantasy, but it is fantasy in that it’s not at all what an expert with modern knowledge of software and hardware vulnerabilities would suggest as a good way to reduce them. That might put off some computer professionals (though not me, I might add) who find their suspension of disbelief disturbed in the same way you and I probably find ours disturbed by, I dunno, bombs with numerical time displays that tick down and worries about which wire to cut.

You mean something like from the movie “Galaxy Quest”.
Since byte addressing is being dropped, this site has several ideas for weird-sized computing:
http://www.quadibloc.com/comp/compint.htm
“Another Use for 51 Bits” is a good example of clever thinking.
BTW, a spell-checking feature would be handy for me.

Wow, there’s a lot of reading in that! Thanks for a great link.

Yes, Would a Perfect Computer be Old-Fashioned? is thinking along some of the lines that I am, though he’s a lot more concerned with conserving memory by packing and unpacking words.

(Packed data is something about which I have grave doubts, since it introduces complexity to save RAM, and saving RAM isn’t such a great concern with modern hardware. You really get a feel for this when building clones of '70s and early '80s computers, where you just plop in a 32K×8 static RAM as the default, automatically giving you, e.g., four times the amount of memory an expanded Apple 1 had, because that’s by far the easiest way to do it using 1990s technology.)

Some Architectural Preferences is interesting, though his comments on endianness are not entirely on-target, I think. For example, he says that

Because DEC made less expensive computers than IBM, because it was identified with the independent-thinking “little guy”, and because UNIX was first developed on the PDP-11, a large number of people got used to the little-endian way of doing things, and thus this method of representing numbers was found on the 8080 (and later the 8086 and 80386) from Intel and the 6502 from MOS Technology.

I am pretty sure that’s not the reasoning behind the little-endianness of the 6502. Many of the folks who designed it came out of the Motorola 6800 project, which was big-endian, and my recollection from reading the original MOS manuals is that they made the switch purely for efficiency reasons: the little-endian format saved a clock cycle in the indexed absolute addressing modes such as LDA $1234,X and LDA $1234,Y.
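(For anyone curious how the byte order buys that cycle, here’s a sketch of the usual explanation. It’s not a cycle-exact model, just the commonly documented sequence of events for LDA absolute,X.)

```python
def lda_abs_x_cycles(base: int, x: int) -> int:
    """Sketch of the 6502's LDA absolute,X timing (not a cycle-exact emulator).

    Cycle 1: fetch opcode
    Cycle 2: fetch low address byte (it arrives first because the format is little-endian)
    Cycle 3: fetch high address byte, while the ALU adds X to the low byte in parallel
    Cycle 4: read from (high, (low + X) & 0xFF)
    Cycle 5: only if low + X carried out, re-read with high + 1 (the page-cross penalty)

    With big-endian order the low byte would arrive last, so the low-byte add
    could not be overlapped with an address-byte fetch.
    """
    low = base & 0xFF
    page_crossed = (low + x) > 0xFF
    return 5 if page_crossed else 4

print(lda_abs_x_cycles(0x1234, 0x10))  # 4 cycles, no page crossing
print(lda_abs_x_cycles(0x12F0, 0x20))  # 5 cycles, crosses into page $13
```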


Spacecraft sensors have resolution much lower than 1920x1080, and only one color at a time, and I specifically mentioned imaging radar, which has MUCH lower resolution and no color. It’s a delay-Doppler graph that takes time to produce a fuzzy, low-resolution image. There just aren’t that many radio photons returning, so there’s only so much resolution you can tease out of it.

"it’s not at all what an expert with modern knowledge of software and hardware vulnerabilities would suggest as a good way to reduce them. "

They’re not developed by experts, but rather by hobbyists. The government space agencies employ experts, of course, but they all use hardware and software purposefully designed with backdoor vulnerabilities. All of the major governments in the world by that time insist upon this in all computational devices, so anything outside of that is designed and built by obscure enthusiasts for whatever purposes. Basically, this is a world where locked down closed garden operating systems like iOS won. I came up with this background when I was getting frustrated trying to install Debian on a Samsung Galaxy Note phone. Ultimately, I concluded it was practically impossible.

It made me realize - what if the desktops die? What if the laptops die? What if the relatively open BIOS goes away so it’s no longer an option to install something like Linux on an affordable computing device? After a century of that, what would even be left? Could you still have a thriving Debian project if practically no one has any device to install it on?

I figured that retrogaming and emulators might somehow survive through the cracks, because there’s always someone looking to make a quick buck on a library of thousands of games they don’t have to develop. And the demoscene community has done incredible things exploiting the most obscure details of precisely what the hardware does and precisely what’s available in ROM.

This isn’t a world where things are designed for what’s best. It’s a world where things are designed for the benefit of a few at the expense of ordinary people. And it’s not even really a dystopian world. People are happy. I mean … how many people are unhappy with their iPhones? No, they love them. There’s no people’s uprising against their closed garden overlords.

As with most science fiction, I’m not really trying to predict what the world of 2160 would really be like. I’m just describing the modern world through a certain point of view, with tweaks. This point of view is a cynical view of the relationship between ordinary people and technology. It’s about how utterly little ordinary people care about giving up the freedom they never utilized - for the benefit of the Apples and Googles of the world. The sad state of manned spaceflight is also a reflection of how little ordinary people care about space program funding (and really, why would they care?).

Pervasively, the setting is not a projection of how I wish things would be. It’s the opposite, because ultimately it’s a background which explains why the main character fled it all. She herself doesn’t understand how all the things in the world added up to a situation which screwed her over, but as the writer who conceived of the fictional world, I do see it.

This is further in the future than I’d imagined, but makes my question about this all the more relevant: on exactly what hardware do you anticipate these folks will be running their software? Working 68000-based computers are likely to be exceedingly rare at that point, and correspondingly valuable. Consider that an Apple 1 these days goes for between a quarter and half million dollars.

That’s not to say that there isn’t still a vibrant Apple 1 community running on real hardware, but even when trying to be fairly faithful to the original, that hardware is still heavily modernized. Consider my Apple 1 clone: it has far more memory than any original ever had, in part because it’s now much harder to find and more expensive to buy 8 or 16 4K×1 RAM chips than a single 32K×8 RAM chip. (Similar considerations apply to the ROM.) I imagine as chips like the 6821 PIA I use fade away, they will be replaced by an FPGA or similar programmable logic, and probably even the 6502 eventually.

The video and keyboard system on the other side of that 6821 has already succumbed to this (I don’t know where you’d even get 1024-bit shift registers these days) and has been replaced by a modern 8-bit microcontroller emulating the video and keyboard circuitry, taking input and sending output to a serial link because that’s considerably more convenient than having to build a real keyboard.

And that’s an example of what tends to happen when you do retrocomputing with modern hardware available: bits not so important to you get replaced with more modern technology that’s more convenient (albeit still usually much simpler than what’s used in modern computers for the same purposes). Examples abound: almost everybody I know uses, most of the time, floppy disk emulation for storage of disk images in solid-state memory, rather than an actual floppy drive, for example.

Give it another hundred years, and people who want to be using machines of 1980s power seem to me unlikely to be using any original hardware from that era at all. My guess is that they’d be programming their entire systems into FPGAs or similar, and tweaking the CPU and other components in the same way hobbyists building single-board computers tweak their component selection and address decoding design now.

Indeed, but I don’t see how an emulator helps at all, since you’re running it on a backdoored system and it was vetted (and, in your scenario, probably modified) by the App Store owners to send back information about just what you’re running on it so they can come after you for IP law violations when they see you’ve been playing Lode Runner on it.¹

Really, the only way out of this that I can see is to run on your own hardware. And probably bootstrap your own FPGA production and programming tools from retro hardware before it vanishes entirely.

By the way, I hope you’re taking all this in the spirit of constructive criticism. And also just another idea for how things could go, not as me telling you how to write your book.


¹ Yes, Lode Runner is under copyright in 2160. The Taylor Swift Copyright Act of 2054 extended copyright to the lifetime of any corporation that currently owns a work, plus fifty years. (Effectively forever, since even when a corporation dies, its assets can be sold to another corporation.) And don’t be fooled by the name: Taylor Swift hated the whole idea, but due to licensing agreements she couldn’t stop Apple-Disney and AOL-Time-Warner-Pepsico-Viacom-Halliburton-Skynet-Toyota-Trader-Joe’s from slapping her name on the act.

The great thing about art, of course, is that it’s creative, and it shows us a way to look at the world. It would be a mistake to react to art as if it were a flawed documentary.

And for speculative fiction, one is always allowed one or two unexplained deviations from known limitations.

There was, for a while, a bargain device being sold off, some kind of prototype home appliance which happened to have been implemented with a large FPGA. I would think scavenged electronics like that could easily be imagined to be adopted by a counterculture.

This background to Postcards from Cutty is a great scenario, in my view. We do have hobbyists today who create CPUs, operating systems, open source tooling for FPGAs, development toolchains. It’s very fruitful.

Well in my opinion you’re allowed as many as you want. It’s an artistic decision about whether you want to be writing more towards the “hard SF” side or the fantasy side of the spectrum. But that should not be used as an excuse to shut down conversations about how this kind of thing might play out in reality.

And again, let me emphasize I’m not trying to tell him how to write his story; I’m merely trying to explore the space of ideas he’s opened up. If that needs to be split out into a separate thread, fair enough, but let’s not shut it down.

One major issue there is the programming tools, which was why I was careful to mention them in my previous post. Existing large programmable devices, if they’ve not had their lock fuses blown at manufacture, may still not be programmable for you unless you have access to the tools necessary to program them, and the platforms on which to run those tools. But that aside, yeah, that kind of scavenging is already going on in full force; it seems that a lot (perhaps almost all) of the older chips you buy off of AliExpress are pulled from old devices (and often re-marked as something else, as several threads on forum.6502.org have shown).

Yup. That’s an area of great interest to me, although it now seems that Isaac’s story background is headed in a different (although also interesting) direction, more involving re-use of already existing older software and hardware platforms, rather than components.

Most of the Atari ST enthusiast community does indeed use emulation on compromised platforms. And there are presumably some people who use refurbed antiques and true replicas. But then there are some weirdos who do full custom hardware that replicates functionality at a low level, but with more modern hardware. Sure, the major governments don’t like it, but then there are always folks who get away with breaking the rules, and there are usually countries where relevant laws are lax and/or not really enforced.

And then there’s the fact that enforcement and strictness of laws waxes and wanes with time.

Although I’ll admit I’d expect this to be more likely with the Commodore 64 platform than, say, the Atari ST. There’s just something about the SID sound that enthusiasts are crazy for, but there’s a limited supply of SID chips. Sooner or later, I think folks are going to have enough demand to create true SID chip replicas, and possibly even the VIC-II. I don’t see the same level of enthusiasm for any other vintage computing platform that I see for the C64.

Acorn’s BBC Micro failed to cross the pond, or indeed to cross the Channel, but there’s a lot of enthusiasm for it, and many recent projects taking it in some interesting directions. As you may know, it was architected to support second processors, and back in the day it did have 8-, 16- and even 32-bit second processors. But while the original 6502-based machine is very much ROM-based, the second processors had only a small boot ROM, so the whole topic doesn’t quite match this thread…

(I say failed: I mean in big numbers. It crossed both bodies of water in a small way.)

Edit: of course I agree that the C64 was a much bigger phenomenon, in far more countries, and does retain a very lively and enthusiastic scene.