Retrocomputing during lockdown

Here’s a photo from yesterday:

You may be able to see:
Acorn BBC Master with green-screen monitor…
running a BBC Basic frontend to…
FPGA-based Mandelbrot engine with VGA output…
hooked up to a Raspberry Pi for reprogramming using OpenOCD…
captured with a phone while on a video call with two other retrocomputists on lockdown.

You can also see my giant poster of the Mandelbrot Set. And my Raspberry Pi 4 which would normally be acting as a second processor inside the Master.

(Many of the items mentioned are @hoglet’s projects or ports or co-productions.)

Any stories or photos to share of retrocomputing undertaken in the past few weeks?

4 Likes

It’s cooled off due to other distractions, but for the past few weeks I’ve been typing in the editor from the Software Tools in Pascal book, trying to get it to work with Turbo Pascal 3.01 on a Z80 CP/M simulator.

The editor in that book is very similar in flavor to the Unix ed editor, which is a line editor.

I’m doing this because I want a better editor for CP/M than the ones I’ve tried. Out of the box, ED is just absolutely terrible. Command-line character editors are awful, at least for me, especially today. Line editors are much better. Turbo Pascal’s editor is nice – for Turbo Pascal. It’s quite clunky and fiddly to use just as an editor: you have to jump through a bunch of commands and screens just to get in and out if you’re not using the compiler.

I tried VEDIT, and it’s a TECO clone. Easy enough to arrow around and add characters and whatnot, but as soon as you want to go beyond that, you’re dumped head first into their TECOish macro language. And I, honestly, don’t wish that on anybody. Back to command-line character editing. I’d rather retype a line than move a blind pointer 10 characters over to make a change.

Lots of folks use Wordstar, but that’s pretty darn heavy for just a text editor.

I keyed most of the files in using ed on Unix so as to get used to the commands and flow. Using ed to enter the source code hasn’t been painless, but it’s not bad either. And since I’m writing my own editor, I can make tweaks if I see fit (which I will soon anyway, since the pattern syntax in this editor isn’t quite the same as the regex syntax in ed).

It’s quite the little project. The book has all the code organized into several small files. And even the way it’s structured as a Pascal program is interesting. In Pascal, you can nest procedures. Historically, I’ve rarely done that myself. Typically it was done for little helper functions, like for recursive routines, things like that.

But the editor is, when all is said and done, essentially one, very large procedure with several nested utility routines, and very few global variables.

This is all well and good, especially when you leverage #include files to manage the individual routines.

But while TP does support include files, it does not support nesting them. So I can’t trivially convert the #include statements in the raw source into the equivalent TP directive.

So, after I copied all of the files to a “diskette” for CP/M, I wrote my own mini-preprocessor to handle the #includes myself. It’s straightforward, but since includes nest, it’s also recursive (at least it’s most easily done with recursive calls). Once you run the little pre-processor you end up with a file that’s too big for TP to load. It seems to make a valiant try to compile, but as soon as you get an error (and TP stops on the first error), you get a line number into a file that the editor can’t read. So you don’t know what the error is. Which makes the turnaround kind of a pain.

But while writing the pre-processor, here’s where I ran in to an interesting limitation with TP.

First, by default, TP does not generate code that can be used recursively. I have not disassembled any of it, but I imagine it’s using a lot of static areas for local variables and whatnot rather than stack frames. That’s OK, because there are directives to selectively enable and disable support for recursive code.

However, one caveat is that when you do use recursive calls, you cannot use the var clause in the routine’s parameters.

Quick refresh:

procedure thing(a : integer); begin ...; end;

procedure thing(var a : integer); begin ...; end;

In the first instance, the a parameter is passed by value. In the second, the var keyword tells it to pass by reference, and that means that the value of a can be changed within the routine and it will change the underlying variable. Version 1 passes the value of a, version 2 passes a pointer to a.

So, presumably because of how it manages memory, you cannot pass local variables by reference into recursive routines. Fine.

Next, Pascal has the TEXT type, which is, essentially, a FILE of CHAR. TEXT is a file reference. In C, it would be akin to FILE *aFile, using stdio.

Turbo specifically disallows passing a TEXT (or any kind of FILE variable) to a routine by value. You have to use the var construct. (There are lots of sensible reasons for this.)

So perhaps you can see my conundrum.

Initially, I had something like:

procedure process_include(var in_file, out_file : text);

You can perhaps visualize that this reads in_file, scans each line, and writes it to out_file. If it finds a #include line, it simply opens the file named on that line and calls process_include again.

procedure process_include(var in_file, out_file : text);
    var
        work_file : text;
    begin
    ...
    if starts_with(line, '#include') then begin
        assign(work_file, extract_filename(line));
        reset(work_file);
        process_include(work_file, out_file);
        close(work_file);
    end;
    ...
    end;

And…that can’t work. You can’t pass the work_file without a var, and you can’t pass a var to a recursive routine.

So, I had to do my own stack to handle the files. In hindsight I might have been able to make it work with direct use of pointers and dynamic memory. But, no matter. It works.
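For what it’s worth, the explicit-stack version is only a few lines in any language. Here’s a rough sketch of the idea in Python (the `#include "file"` syntax and the helper-free structure are illustrative, not necessarily the book’s exact format):

```python
def expand_includes(path, out):
    # An explicit stack of open files replaces the recursion that
    # Turbo Pascal's no-var-parameters-with-recursion rule disallows.
    stack = [open(path)]
    while stack:
        line = stack[-1].readline()
        if line == '':                    # EOF: pop back to the includer
            stack.pop().close()
        elif line.startswith('#include'):
            name = line.split('"')[1]     # filename between the quotes
            stack.append(open(name))      # push: read the included file next
        else:
            out.write(line)
```

Popping on EOF resumes the including file exactly where it left off, which is all the recursion was doing anyway.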

I also wrote a simple more utility. CP/M was designed back in the day when you could page through files using ^S/^Q to stop and restart the screen, and ^C to abort, because the slow terminals were, well, slow. So it didn’t really need a more utility. But on modern hardware, it’s kind of necessary.
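The paging arithmetic at the heart of a more is tiny – a sketch in Python (the 23-row default assumes a 24-line terminal, leaving one row for the prompt; my CP/M version differs in the details):

```python
def pages(lines, rows=23):
    # Yield one screenful at a time; the caller prints a page,
    # then waits for a keypress before asking for the next one.
    for i in range(0, len(lines), rows):
        yield lines[i:i + rows]
```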

By this time, though, I’ve run into another thing. By default, the CP/M diskettes I’m using are limited to 64 files in the directory, and I’ve been bouncing off that limit. Very exciting when you bump into it trying to save work from TP – you’re essentially doomed at that point, because I can’t swap out a diskette in the simulator, and even on a real machine you can’t swap a diskette on the fly, since CP/M requires a warm restart every time you swap a floppy – something you can’t do from inside TP. So, when that happens, you lose work. Thankfully, I was using a modern host and modern terminal, so I just selected the text, page by page, that I wanted to keep and copy/pasted it to a safe place before I quit TP. (Oh, and pasting it back into TP? Not recommended. Not pretty.)

This sent me down the rabbit hole of making “diskette management” easier for the z80pack simulator I’m using, because it’s, honestly, a bit of a pain the way they do it now. If this were a “real” computer, then, yeah, I’d just be swapping floppies, formatting new ones, PIPing files back and forth. But with the tools I’m using from the command line, it’s awkward and a bit painful.

So, I need something a little bit higher level to manage those.

Now, z80pack out of the box comes with 4 floppies and a hard drive. I could just use the hard drive, but out of the box CP/M is pretty awful with a hard drive. No directories, the USER spaces are kind of terrible. It’s, at least with 2.2, really more of a diskette OS, so I’m trying to stay with the diskette idiom. Creating work floppies, selectively putting utilities on them, etc.

And as currently set up, it’s really a bit of a headache keeping it all straight.

So, I’m working on the meta-problem of making my CP/M “computer” a bit easier to operate.

3 Likes

Regarding calling by value vs. calling by reference: this is a phenomenon also typically encountered when porting C code to JS. In JS, simple values are passed by value (copied), while complex values are generally passed by reference. In C code, on the other hand, you typically find pointers to simple values passed around quite frequently. One way to work around this (as indicated in your comment) is to wrap a simple value in a complex structure (in Pascal, by using a record), just for the sake of moving things around. Using a dedicated stack instead, with integers as pointers into it, is actually clever and avoids messing around with the heap and dynamic memory allocation. Nice.

(Moderator’s note: this reply might shortly be moved to a new thread. Ideally, if a point arising looks like it might become a discussion, it’s better to start a ‘linked thread’ at the time of making the point.)

I am somewhat fortunate to have Dartmoor on my doorstep, so I’m still able to get out and about to walk and tend my horses up there.

At home I’ve been able to spend more time with my Ruby 816 project - this is a “retro-brew” 65c816 system (16-bit /ish/ version of the 6502). My aim is to make it a properly standalone system capable of editing, compiling and running code directly, rather than cross-compiling - mostly because that’s all we could do ‘back in the day’ … Also desperately trying to “sanitise” my retro collection (not as in “with bleach”, but to sort it out, etc.!) with a view to selling some of it, as I was in the process of selling my house before all this happened… So I need to make more effort there, but if anyone in the UK wants a Northstar Horizon and/or PDP-8a then let me know…

Hope you’re using jitsi or some other non-zoom alternative! I’ve only recently gotten into the whole video-con thing (despite running a VoIP Telco in the past where I’d host voice tele-conferences) and Jitsi just works for me - ticks all the open source, peer reviewed software boxes too…

Cheers,

-Gordon

1 Like

Sitting inside in my home office all day is a good opportunity to work on my DREAM 6800 emulator.

2 Likes

The most I’ve been up to is trying to get a gopher server and BBS up on my Pi Zero, using only the terminal.

Edit: and failing at that goal. Why does it feel like there is something insanely simple I’m not getting with gopher?

1 Like

Now, I have not set up a gopher server. But I’m pretty sure I could pound out a simple one in, well, probably anything that can accept a socket, in half an hour.

It’s, by design, very simple. But at the same time, it can have hidden complexity.

A simple example: the structure that is returned – the item type, description, path, host, and port – is the meat of it. These could be trivially (albeit tediously) maintained by hand as text files. If all you were publishing were filenames and directories, then a server is basically trivial with some simple rules: anything that’s an actual directory is, well, a directory; anything ending in .txt is a type 0 text file; and anything else is a type 9 binary file.

And…that’s it! A simple shell script connected to inetd could do the trick.
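As a sketch of those simple rules – here in Python rather than shell, with helper names of my own; the tab-separated, CRLF-terminated menu line is the format RFC 1436 specifies:

```python
import os

def item_type(path):
    # The simple rules above: real directories are menus (type 1),
    # .txt files are plain text (type 0), everything else binary (type 9).
    if os.path.isdir(path):
        return '1'
    if path.endswith('.txt'):
        return '0'
    return '9'

def menu_line(itype, display, selector, host, port=70):
    # One gopher directory entry: type char glued to the display
    # string, then selector, host, and port, tab-separated.
    return '%s%s\t%s\t%s\t%d\r\n' % (itype, display, selector, host, port)
```

A script sitting behind inetd would just emit one such line per entry, then a lone “.” to end the menu.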

Now, getting beyond that, where you have more descriptive titles than “notes.txt” – well, that’s where things get more complicated. How do you manifest that metadata, etc.? Hardly insurmountable.

If you want to maintain the directories by hand, you can adopt a simple convention, like a .gopher_index file that contains the gopher-formatted information.

1 Like

A quick Commodore BASIC implementation of “Conway’s Game of Life”, in memoriam John H. Conway:

https://www.masswerk.at/pet/?run=life.txt

1 Like

There was an online meetup yesterday, most of the day, on a single video call with maybe 15 people: it was the second virtual ABUG, for Acorn and BBC Micro users. Lots of interesting chat and some show and tell of works in progress. It looked a bit like this, but higher resolution:
[image]
(ABUGs usually take place over a long weekend, at a hotel or museum, so an all-day call was in some ways an abbreviated version.)

3 Likes

Small update on Life for the PET 2001 (see above): I optimized the loop for the screen update (featuring POKEs to the video RAM), with considerable results. What did I do? Just reuse a variable. As it was, the address used for the base address of the POKE was the last variable defined; the one I reused was the third one defined (with about 8 variables defined in between). The speed-up is considerable relative to how small that optimization is.

Meaning: we all know how important it is to define frequently used variables first, since BASIC does a sequential search on variable names, but there’s still a difference between knowing this and seeing it in action, even in a relatively small loop. Also, doing the same with fewer variables is definitely worth it. (And this is also where true spaghetti code begins… :wink: )

Hurrah - any chance you could make the playfield a torus? I think the edge effects at present prevent the spontaneous discovery of gliders and similar.

You mean, copying arrays on each frame, or handling edge cases on the edges (now we finally know why they’re called that)? I mean, this is BASIC! :slight_smile:
But I actually like how cells build up at what become borders in this implementation. And, provided you choose the right dimensions, it’s much like blinkenlights on a really slow computer. (This may have been even better before the update, when there was still more of a noticeable sweep to the screen update.)

I’m naively supposing that a MOD operation here and there would allow the edges to wrap in a natural way. If the board size is a power of two, that’s an AND operation. But, possibly, neither is available!
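Something like this, sketched in Python for clarity (the function name is mine; a BASIC version would inline it in the main loop):

```python
def live_neighbours(grid, x, y):
    # Taking coordinates modulo width/height wraps the edges, so the
    # playfield becomes a torus: the right edge abuts the left, and
    # the bottom row abuts the top.
    h, w = len(grid), len(grid[0])
    total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx or dy:                  # skip the cell itself
                total += grid[(y + dy) % h][(x + dx) % w]
    return total
```

With that in place, a glider leaving one edge re-enters from the opposite one instead of dying at the border.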

A mod operation in Commodore BASIC? Those spoiled BBC Micro kids…

(AND, however, is available. But the true concern regarding the main loop is rather whether using a subscripted integer variable for the play field is actually worth it. This means spending the time to parse the “%” on each of the 8 lookups versus saving 3 bytes of RAM per cell. As a side note, integer variables aren’t any faster in Commodore BASIC; they’re actually the same, but slower for the extra character in the variable name. The difference only shows up with subscripted variables/arrays, in the space reserved for each individual subscript.)

1 Like

I’ve got to admit I rather like the low-res, blocky look. It looks appropriate for a retrocomputing videochat.

2 Likes

A post was merged into an existing topic: (Conway’s) Life in BASIC

And one of the better ones out there, actually. To this day I’m still reasonably comfortable working in ed (it was my first editor on Unix, actually), and I’m continually surprised by how bad pretty much every microcomputer editor is in comparison.

My workbench isn’t too exciting today, since after building a PSU for my C64 last week I’ve moved back to working more on software stuff. (Although I don’t know if several days getting deep into the gory details of the Apple II floppy disk system really counts as “software.”)

There is, however, that cute little National (Panasonic) JR-100 that arrived the other day that’s waiting for me to give it a go at powering it up. It’s a 6802-based machine, which is refreshing. Unfortunately it wants a DIN-4 plug for power, of which I have none, so while I’m waiting for the ones I’ve ordered to show up I’ll probably have to open up the thing and see if I can get power into it some other way. (If you want to see what it looks like, there’s a video on YouTube of a cute demo running in an emulator.)

Today I finally implemented the code in 8bitdev for building disk images and loading them up in an emulator, so I now can edit a file and with a couple of keystrokes have the system assembled, unit tests run, a disk image built and LinApple pop up and immediately run the program, which is pretty nice. The next step there is to do the same with VICE so I can do PET/VIC-20/C64 programs as well.

2 Likes

Radio Shack came out with a similar computer to the JR-100, just after the CoCo III came out. I think it was like $59 vs. $150 for the CoCo III. 64K, BASIC, and tape I/O.

You’re probably thinking of the TRS-80 MC-10, released in 1983, two years after the JR-100 and three years before the CoCo III (but three years after the original CoCo). It used a Motorola MC6803 processor, not an MC6802; the 6803 had some extra instructions and other features.

The MC-10 had less RAM and expandability than even the JR-100, though it did have a colour display like the JR-200.

Had a bit of a retrotechnical investigation today with @hoglet: he’s been bringing HDMI video output into play from the PiTubeDirect project. When it’s ready, the Pi in your Beeb which can act as a high-performance retro-authentic second processor will also be able to act as a high-definition, high-performance text and graphics output device, obeying the usual MOVE, DRAW and PLOT commands for lines, triangles and circles. Another win for the architectural foundations of the BBC Micro, where the VDU subsystem is separable (mostly) and resolution-independent. There’s a bit of a gotcha at present for line-editing, and a bit of a puzzle about hiding the latency of expensive operations like scrolling. But puzzles can be solved!

1 Like