A single board computer using Z180

The standard 8" floppy was about 250K (which, honestly, is a lot).

The Apple 5 1/4" was 113K (13 sector), the TRS-80 was 85K, my Atari 800 was, like, 80K. There was ye olde hole-punch trick: notch the floppy's jacket and flip it over for “double storage”.

The 360K floppy in the PC was quite a milestone for the day.

With my CP/M work, using the generic 8" floppies on a simulator, I was running into the 64-file directory limit. I believe this can be changed by reformatting the disk.

I don’t know how big the DEC floppies were, but I doubt they were much bigger than the 250K the 8" floppies were.

Well said. There’s always that feeling that you’re missing some essence of the real thing, which may also add to the frustration that comes with the system anyway. (But then you don’t know whether this frustration is somewhat justified or just an effect of the indirectness of the setup.) I guess this also describes the value of these small, modern systems, as they provide a shortcut to a close-to-real experience. However, this often falls short when it comes to the peripherals, which are an important aspect of a system as well.

It was! As was the contemporary (even a bit earlier) 1.2 MB 8" disk. However, many of the other CP/M machines (and I think even the late TRS-80s?) had double density 5 1/4" floppies with formatted capacities in the 200 kB range. My Osborne 1, for example, has a DD expansion and can put about 180 kB on a disk. Even the majority of Apple Disk II units were 16 sector at 140 kB. Not so far from the 250 kB 8" single density disks. (Though, while I don’t know if any micros used a smarter encoding on 8" disks, the DEC RX02 puts 512 kB on single density 8" disks without issue by using a more efficient encoding.)

As far as files per directory goes, I believe that is (on most CP/M systems?) a property of the BIOS. The Osborne BIOS, for example, was capable of reading and writing a wide variety of formats (I’ve bought 5 1/4" new-old-stock disks that read cleanly as an empty CP/M volume of about 140+ kB!), but it could only initialize Osborne-format disks.

I often find this to be the case in simulators, myself. For example, in PDP-11 SIMH, I find it annoying to halt the simulator, attach a different RL disk pack image, and continue the simulator … even though this takes far, far less time than taking the drive offline, waiting for it to spin down, physically changing packs, and spinning the drive back up! I think it is, as you mention, the lack of physicality of the experience.

With everything now having a single SD card slot, how do you copy SD data?

Well, precisely. Many of the hobbyist systems only support a single flash card. How, indeed, do you copy SD data?

The answer is that you download it to your “real” machine, and then upload it back onto a new card. Or you could use the host to build the card itself directly from the data.
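
For instance, with dd on a Unix-ish host (the device names here are illustrative; check yours carefully before writing, since dd will cheerfully overwrite the wrong disk):

dd if=/dev/sdX of=card.img bs=1M
dd if=card.img of=/dev/sdY bs=1M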

Playing “swap the floppy” doesn’t work very well with, say, an 8MB partition, 2 CF cards, and 64K of RAM. Even if you could utilize the entirety of RAM as a buffer, that’s 128 “swaps”.

Of course, back in the day, we didn’t “copy” hard drives, not really. Rather, we streamed them to tape, carted those over, and reloaded them. Tape offered comparable density to the hard drive itself, robustness in portability, and reasonable read and write performance.

Heck, on the Alpha Micros, we used VHS tapes and off-the-shelf VHS recorders much like the micro industry used cassettes. Later, these mostly got replaced by Exabyte drives using 8mm video tape for storage.

Mind, today, if you have a 115,200 baud serial connection (with hardware that can actually sustain 115K), that’ll download an 8M partition in under 15 minutes.
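
(For the arithmetic: at 115,200 baud with 8N1 framing, each byte takes 10 bit times, so the line moves about 11,520 bytes/s; an 8M partition is 8,388,608 ÷ 11,520 ≈ 728 seconds, a bit over 12 minutes before any protocol overhead.)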

Of course, today the use cases are far different. Rarely, if ever, are the legacy systems a source of content or data. Rather, they’re the destination. All of the work is done on modern machines, cross-assembled/compiled, written to fast media over fast interfaces, and then plugged into the devices.

But this is one of the reasons why, when I was approaching the design of a custom build, I settled on making USB connectivity a primary goal. The premise is to interface the CPU to a “super I/O” device that can bridge to modern protocols like USB and TCP/IP. Once that bridge is made, you can plug as many USB fobs as you like into an off-the-shelf hub and copy to your heart’s content, or stream over TCP to a remote destination.

It doesn’t work yet, but I have all sorts of schemes in my head incrementally marching to that goal.

I have a version of dd for Windows, so I can use that as a batch file under DOSBox. The hardware I have only has one SD card slot, but I have header pins for expansion; I plan to add two SD card slots there once this virus is over. (I am emulating a TTL computer: 64KB RAM, CPU, dual removable media, and serial I/O in the 1974-1977 time frame, so I see modern I/O as crappy designs made to save $$$ and make you buy new hardware.)
Ben

I wasn’t a big CP/M user back in the day, but I still enjoyed working with CP/M 2.2 and CP/M Plus. What I enjoyed most was writing tools to do what CP/M could not do. One of the first things (as I imagine many did) was to write tools to fix and repair broken file systems (the simplest variant being accidentally deleted files). I remember a small accounting firm which managed to destroy their whole customer database running on an MP/M system; I wrote a tool to recover it, and fortunately it worked.

But that was just one thing. I actually liked that CP/M had so many limitations, because then there were a lot of opportunities to write something better. And when I think about it, that’s what I liked in bigger systems too: finding weak spots where I could write code to do it better. So, to get back to CP/M: “pip” is limited, so I would write a tool to handle user areas.

Systems which “just work” are actually very boring… and I never particularly enjoyed playing games, etc. I liked, and still like, writing programs - but then there must be something that’s needed, something useful that can be added.

Hi all! I’ve built many Z80/Z180 SBCs over the last few years, also using FPGAs (Grant Searle’s Multicomp derivatives). My latest one is a Z180-based SC-126.

I run ZCPM3, an enhanced CP/M 3 with ZCPR3 improvements. It handles paths, aliases, named pseudo-directories, command history, etc.

Regarding CP/M 2.2’s lack of User Area handling: it actually had some rudimentary functionality to copy files between different user areas.

You had to use the [Gn] option, where “n” is the user area to get the file from, e.g.:

B:
USER 1
PIP DESTFILE.EXT=A:SRCFILE.EXT[G0]

… will copy A:SRCFILE.EXT from User 0 to B:DESTFILE.EXT in the User 1 area.
This only works if you already have PIP.COM in B: under User 1. If not, you had to go through this sequence of commands:

A:
USER 0
DDT PIP.COM
G0
USER 1
SAVE 28 PIP.COM

… and voilà, you had PIP.COM in User area 1, so you could copy files from any other user area. So, possible but not practical…
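
(To unpack the trick: DDT loads PIP.COM into the TPA at 0100h, and G0 jumps to location 0, which warm-boots CP/M while leaving the TPA contents intact. SAVE counts 256-byte pages, so SAVE 28 writes 28 × 256 bytes = 7K starting at 0100h out to a new PIP.COM, now filed under the current user area.)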

Cheers,
José Luis.

I like this kind of thing - like learning to juggle! It’s good when things are possible, but not in an obvious way.

Well, it’s simply a snapshot in the evolution of things. CP/M kind of got caught in the middle.

DOS didn’t suffer from this as much, but that’s partially because the foundations were in place from the beginning. There are quite dramatic differences between DOS 1 and DOS 2, 3, 4…

And DOS also matured in the static environment of the PC vs the chaos of CP/M systems.

All that said, I’d be curious to learn more about the “Mini Unix” and what compromises it made: an 8K floppy-based OS with 28K total RAM, while supporting 2 tasks. I don’t know if the tasks were peers, or if one was simply something like a spooler. Both the TRS-80 and FLEX had built-in facilities for spooling; it’s not as if a spooler is particularly resource intensive. Did DOS have one out of the box? I don’t recall.

But there’s something fundamental from a computer experience about lighting up the printer and letting it bang away in the background while you continued on with your work (just don’t take the floppy with the file you’re printing out of the drive…).

I can’t speak to when the change happened, but by about 1983 everybody I knew using CP/M and still using 8" diskettes was using DSDD volumes of about a megabyte. (This was no doubt why they didn’t move to 5.25" mini-floppy drives; those kinds of capacities were still a few years away on mini-floppy.)

Well, on all CP/M 2.x systems. Section 10 (p. 25) of the CP/M 2.0 Alteration Guide describes the entries in the BIOS-supplied Disk Parameter Tables that determine the directory size.
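
For reference, the table in question is the Disk Parameter Block. A sketch of the classic 8" SSSD values as I recall them (do check the guide itself before relying on these):

DPB:    DW      26      ;SPT - 128-byte sectors per track
        DB      3       ;BSH - block shift factor (1K blocks)
        DB      7       ;BLM - block mask
        DB      0       ;EXM - extent mask
        DW      242     ;DSM - highest block number
        DW      63      ;DRM - highest directory entry number
        DB      0C0H    ;AL0 - directory allocation bitmap
        DB      0       ;AL1
        DW      16      ;CKS - directory check vector size
        DW      2       ;OFF - reserved system tracks

The DRM of 63 is what gives the 64-entry directory mentioned upthread; a BIOS that allocates more directory blocks can raise it.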

This is unsurprising to me, I would just expect the transition to 5.25" to be well underway by then, with only power users using the physically and logically larger 8" disks and mechanisms. That’s 6 years into the TRS-80, the SoftCard is out, and portables like the Osborne and Kaypro have even established themselves.

I wasn’t computing yet by 1983, though, so I may have my historical wires crossed.

Well, I don’t think that the TRS-80 had much intersection with the CP/M market until the release of the Model 4, if ever. The Model III needed a third-party board or homebrew mod to get the ROM out of the low address space (and RAM in) in order to run CP/M with the TPA at the usual $100 location, and putting the TPA above 16K was not terribly compatible with a lot of software out there.

But yeah, new(-ish) CP/M machines by that point were all 5.25" drives, as was of course the Microsoft SoftCard with its own special 5.25" format.

I just realized, though, that of course all the folks I knew were on 1 MB 8" drives. They were all long-time CP/M users, and so had started with 8" drives, and facing the option of upgrading to a double-density disk controller and buying 2-4 new 5.25" drives (and new minifloppy diskettes), or just upgrading to a double-density disk controller alone and reformatting old SD 8" diskettes to a higher capacity than any 5.25" minifloppy, the choice was obvious.

(For those that are unaware, there is no such thing as a “single-density” or “double-density” diskette; the only difference between SD and DD is whether the controller uses FM or MFM. The media are exactly the same. The only media change came with the move to HD.)
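
(The encoding difference, briefly: FM writes a clock transition before every data bit, so only half the flux transitions carry data; MFM drops the clock transitions except between consecutive zero bits, so nearly every transition is data. The maximum flux transition rate the media must support is the same, which is why the same media handles double the capacity.)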

One of the features of the PC-AT, which came out in 1984, was the option to use 1.2MB 5.25" floppies. These had the same format (number of tracks and sectors per track) as the popular 1.2MB 8" floppies, which made software compatibility a little simpler. The new drives also had a 360KB mode, but if you wrote to the disk you might or might not be able to read it later in an actual 360KB drive. These drives had a new signal to tell the computer that the floppy hadn’t been removed (so no need to read the FAT again), but mixing one of these with an older drive could mess things up very subtly.

That doesn’t seem quite right. According to Herb Johnson’s page, 8" drives had 77 or 78 tracks, but the 5.25" HD drives had 80 tracks. And virtually all the 8" formats on Wikipedia have either 128 or 1024 byte sectors. The only 512-byte-sector format (which is what the AT used, isn’t it?) listed there is one for IBM’s “53FD” drive, and that’s 1.1, not 1.2 MB. (It is 15 sector, but only 74 tracks.)

You are correct. The 1.2MB (1,212,416 bytes) 8" floppy with FAT12 is 74 cylinders with 8 sectors of 1024 bytes each, while the 1.2MB (1,228,800 bytes) 5.25" floppy with FAT12 is 80 cylinders with 15 sectors of 512 bytes each. I suppose IBM was making a big deal at the time of having matched the capacity of 8" floppies, and I wrongly remembered that as matching the exact format. At the time we were mostly worried about the whole switching-between-300-and-360-RPM thing.
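
(Working the numbers, both are double-sided: 74 cylinders × 2 heads × 8 sectors × 1024 bytes = 1,212,416 bytes, while 80 × 2 × 15 × 512 = 1,228,800 bytes.)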

Nice discussion of Steve Cousins’ excellent SC126/130/131. In the middle of the thread were some very flattering comments about my ZRCC and a hint about another similar design targeting ROMWBW. That design is described in detail here:
https://www.retrobrewcomputers.org/doku.php?id=builderpages:plasmo:zrc

I called it ZRC (Z80-RAM-CPLD) because I was hoping to eliminate the CF disk (or make it optional). I planned to have high-speed serial (230400) to load ROMWBW serially, but that didn’t really work all that well, so I eventually added the CF disk and it became just like ZRCC, except the DRAM is 2 meg to run ROMWBW. DRAM is too slow to run at 22MHz, so ZRC runs at only 14.7MHz. It is a very simple design, and the magic is in the CPLD: ROM bootstrap, 6850 serial port emulation, a simple DRAM controller, a memory bank controller, and general address decoding. The 2 meg x 8 DRAM is almost free, salvaged from 72-pin SIMM memory sticks; the Z80 and CPLD are inexpensive, so ZRC’s parts cost is even lower than ZRCC’s $15. Unfortunately I can’t get away from the SMT DRAM and CPLD, so a ZRC kit is not available.

ZRC is too similar to ZRCC to be a different product, but I’m intrigued by having 2 meg of RAM that can be loaded quickly from the compact flash disk. There is also some spare capacity in the CPLD to implement additional functions. So ZRC is hardware looking for applications beyond ROMWBW.
Bill

Very nice! I like the bootstrap idea (CPLD big enough to contain a tiny bootstrap over fast serial) - is there anything to learn from how that didn’t work out?

The ROMWBW image is 512K and needs to be loaded every power cycle. It takes a minute to load a 512K Intel Hex file at 230400. Loading a binary image can cut the time down to 30 seconds, but that’s still too long to boot. The ZRC concept may still be acceptable for smaller applications.
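(For the arithmetic: at 230400 baud with 8N1 framing the link moves about 23,040 bytes/s, so the 512K binary is 524,288 ÷ 23,040 ≈ 23 seconds at best; Intel Hex roughly doubles the byte count (two ASCII characters per data byte, plus per-record overhead), hence the minute.)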
Bill

OK, that is a pretty big image! A textual format is probably good for the second-stage loader, but perhaps a third stage would be a good way to go: binary, compressed, and with a CRC or similar for an integrity check. (If you can put up a splash screen or some progress text, that can really help reduce the psychological cost of a short wait.)
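
For the integrity check, a CRC-16/CCITT over the received bytes would be cheap on a Z80; a minimal sketch off the top of my head (untested, just to give the flavour):

CRC16:  XOR     H           ;fold next byte (in A) into CRC high byte
        LD      H,A
        LD      B,8         ;eight shift/conditional-xor rounds
CRCL:   ADD     HL,HL       ;shift CRC (in HL) left one bit
        JR      NC,CRCN     ;no carry out: polynomial not applied
        LD      A,H
        XOR     10H         ;apply polynomial 1021H, high byte
        LD      H,A
        LD      A,L
        XOR     21H         ;low byte
        LD      L,A
CRCN:   DJNZ    CRCL
        RET                 ;updated CRC returned in HL

Call it once per received byte with the byte in A and the running CRC in HL, starting from 0FFFFH.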

(But of course you might not be motivated to revisit a boot-over-serial story: that would be quite understandable.)