Playing with Fuzix on the Pico

We’ve mentioned David Given’s port of Fuzix to the Raspberry Pi Pico previously, once or twice, and the other day I thought I’d give it a spin. No extra hardware needed, although an SD Card is supported.

It’s my first taste of Fuzix, I think. I had to change one line to build, but it was otherwise plain sailing.

The Pico reboots to offer a serial device over USB, which I can connect to, although not quickly enough to catch the boot messages:

Current time is 15:52:37
Enter new time: 

 ^ ^
 n n   Fuzix 0.3.1
       Welcome to Fuzix
 m m

login: root

Welcome to FUZIX.

We get all the usual utilities: ls, rm, mv, and so on. But no awk! Not even a sed! There’s a vi-like editor, with a man page:

# man levee
levee(1)                                                FUZIX System Utilities

     levee - A Screen-Oriented Editor, based on "vi".

     levee [+address] [file ...]

     Levee  is  a  screen  oriented  editor based on the Unix editor "vi".  It
     provides a terse, powerful way to enter and edit text  (however,  if  you
     want a word-processor, you're better off with WordStar.)


I usually look around when I arrive on a new system (copy-pasted from several sessions):

# ls
# df
Filesystem       Blocks   Used   Free  %Used Mounted on
/dev/hda           2515   1918    597    76% /
# ls /usr/games
# who
root    tty01   Thu Nov 03 15:58:33 2022 
# free
         total         used         free
Mem:       160           56          104
Swap:        0            0            0
# uptime
 16:00:26 up 00:07, 1 user, load average: 0.00 0.00 0.00

We have proper unix-like subprocesses and pipelines too:

# (sleep 4 && echo done) &
# ps -ef
    0     1     1  0 15:14      ?  00:00:42 /init
    0     9     1  0 15:15    tty1 00:02:27 -sh
    0    24     9  0 15:17    tty1 00:00:00 -sh
    0    25     0  0 15:17    tty1 00:00:00 sleep
    0    26     9  0 15:17    tty1 00:00:00 ps
# done

# ps -ef
    0     1     1  0 15:14      ?  00:00:42 /init
    0     9     1  0 15:15    tty1 00:02:42 -sh
    0    24     9  0 15:17    tty1 00:00:06 echo <defunct>
    0    27     9  0 15:17    tty1 00:00:00 ps
# ps -ef
    0     1     1  0 15:14      ?  00:00:42 /init
    0     9     1  0 15:15    tty1 00:03:09 -sh
    0    28     9  0 15:17    tty1 00:00:00 ps

Just for fun:

# /usr/games/cowsay Pico
 ______
< Pico >
 ------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
# banner Fuzix!
FFFFFFF                                   !!!
F        u    u  zzzzzz     i    x    x   !!!
F        u    u      z      i     x  x    !!!
FFFFF    u    u     z       i      xx      !
F        u    u    z        i      xx
F        u    u   z         i     x  x    !!!
F         uuuu   zzzzzz     i    x    x   !!!

David included Adventure and Forth, but no other programming language. So Forth it will have to be - here’s everything I know:

# fforth
22 7 / .
3  ok
22000 7 / .                                                                                                       
3142  ok                                                                                                          

I’ll paste some programs from Rosetta Code:

# fforth

: DECOMP ( N -- )
  2
  BEGIN  2DUP DUP * >=
  WHILE  2DUP /MOD SWAP
         IF   DROP  1+ 1 OR    \ NEXT ODD NUMBER
         ELSE -ROT NIP  DUP .
         THEN
  REPEAT
  DROP . ;

2 2 5 5  ok
29 31  ok
7 11 13  ok
2 2 2 2 2 2 2 2 2 2 2 2  ok

And I can run a program from a file:

# cat factor.fth
# fforth factor.fth
60 30 20 15 12 10 6 5 4 3 2 1 
12 6 4 3 2 1 
899 31 29 1 
64 32 16 8 4 2 1 
11 1 

What would be nice is to be able to write and run a program to blink a LED, on the device itself, but that’s a step too far at present, I think.

There’s a thread here which might be useful or interesting:

Fuzix describes itself as

a fusion of various elements from the assorted UZI forks and branches beaten together into some kind of semi-coherent platform and then extended from V7 to somewhere in the SYS3 to SYS5.x world with bits of POSIX thrown in for good measure.

and of course it’s an @EtchedPixels project.


Looks cool - especially if he includes the ability to manipulate the GPIO pins. This may be the control system that I start using for most of my projects.

Do you know if he includes a C compiler? I would like to port over nano, or the SANOS editor.

NM, I just read the article - no C yet. Oh well, another project to keep my eyes on - this looks good.

Indeed, you write your application in C on a modern host, compile it to ARM there, and then get it onto the pico’s filesystem one way or another.

I don’t know if a small ARM assembler could be written to run natively on the pico - it’s a 64k address space in Fuzix, I think, although I might have that wrong.

But surely a device driver could be written for the GPIOs, so an
echo 1 > /dev/gpio/3
would do something.

Good grief man, 256K of RAM and 2MB of flash??? I should damn well hope you could fit a compiler in that, never mind an assembler. With an operating system and editor and all the utilities you would ever need.

The PDP-15 we used to use in '76 had a fraction of that memory and was completely self-hosted. You can see what a compact compiler looks like at: - and yes, it does compile itself! The editor was pretty small too because it had to share the memory with the program being edited…

If it weren’t for already having a queue of projects so long that I think it will outlive me, I’d have a go at creating something similar for the Pico myself!



Having used compilers on 8-bit systems in the distant past (e.g. Aztec C on an Apple II), yes, you’d think so…

But I do wonder… I think some folks have gotten a bit overly enthusiastic now - C compilers effectively include lint and 1000 other checks, like checking the arguments to printf and some simple array indexing, and this, while nice, just adds to the size of it all (and execution speed, but what the heck, just get a faster cpu…).

Even my old friend BCPL has suffered this to a degree - the binary of the compiler is about 48KB but it now sucks up RAM for data structures and while I can make it run in about 300KB of RAM I’m not sure I could get it to compile anything bigger than a simple hello world in less than that.

Ah well. I’m sticking to BASIC for now :wink:


Oh, an IMP for Fuzix, targeting ARM (ideally) would be excellent! There’s surely a subset of ARM which would make a readily targetable machine - maybe even a little like the PDP-15.

(But as I say, I’ve a feeling Fuzix processes live in a 16 bit address space, and if that’s true, it’s quite a constraint.)

256K RAM and 2 meg of flash in 1976 was an 8" floppy drive and a big 11" removable-platter hard disk. 16K words or 48K bytes was standard on most machines; in hindsight that was the best one could get before the advent of 16K x 1 DRAM, and just ample to bootstrap a system. Just when 16K RAMs did come out, so did the complex Intel x86s and single-chip PDP-11s, and everything went downhill from there, hardware- and software-wise. ‘Penny wise and Pound foolish’ seems to apply here: the problem was that 16-bit registers were used for addressing, when double registers had been needed since 1965.
I think part of the reason FORTRAN IV and BASIC lasted so long was that they DID NOT USE character stropping. I guess the people who designed the other languages never had to key in their own programs, or fight the limited I/O hardware of the time. Faster CPUs implied NEW I/O hardware, the real fix for slow compiles.
C is a 2+ pass compiler; the macro preprocessor never seemed to be remembered as a pass.
Not having used IMP: is it a complete self-compile, including the OS calls? Is IMP a one-pass compiler, generating a stack-based instruction set? Pascal, for example, can self-compile, but its I/O routines cannot be written in Pascal, for several reasons not listed here.

Still working on hardware with a 20-bit address space, rather than software.
What I call a RISC is a CISC design with R+, -R, R# and ABS addressing,
and about 8K of microcode using slow EPROMs and a 1us memory cycle.


The original Bell Labs UNIX v1 was written for a PDP-11 (so 64k address space + bank switching, which I’m not sure UNIX used), and had a K&R C compiler, enough to compile itself. This compiler was bootstrapped from an earlier language (“B” if I recall), so it may not be trivial to port, but the point is, this provides an existence proof that you can fit a minimal C compiler in 64k of RAM. I wonder how big Stallman’s earliest versions of GCC were. I’m pretty sure he originally developed it on a VAX (because that’s what we had at 545 Tech Square), using a “self cleanroom” port of PCC. So that begs the question: how big was PCC? You have to remember how truly small C was in its original form. And optimizations? “Forget about it.”


PCC is still around, but it has been upgraded so much that you can’t tell the original size anymore.
The 6809 OS/9 C compiler ran on OS/9 Level 1 with just 64K of total memory.
OS/9 Level 2 had an MMU, but still only a 16-bit address space. Intel got lucky with the x86,
as segments and/or bank selection never really worked with other architectures.

From what I read about GCC, it was based off a Pascal compiler that read all the source
into memory at once. That I would say is cheating, as C was not developed with 256K
DRAM chips and 386+ CPUs in mind.


Just for background, I see

The Portable C Compiler (pcc) came from Bell Labs in the 1970s…

Looks like one early version of PCC was 8000 lines of code, spread over two passes. A multi-pass compiler is a good tactic for small-memory machines, and perhaps a good tactic for other reasons too.

As a crude measure of the degree of portability actually achieved, the Interdata 8/32 C compiler has roughly 600 machine dependent lines of source out of 4600 in Pass 1, and 1000 out of 3400 in Pass 2.

Oh, and now I notice

Although the compiler is divided into two passes, this represents historical accident more than deep necessity. In fact, the compiler can optionally be loaded so that both passes operate in the same program.

I was in Stallman’s “orbit” at MIT while he was writing GCC. I heard nothing about a Pascal compiler, only PCC (circa 1985-86).

I know a C compiler could fit in 64K of memory, because there were C compilers for 8-bit computers, back in the day, which worked without bank switching, and the computers had a main memory limit of 64K. I worked with one on my Atari 8-bit. It was a bit tight getting all the files to fit on one floppy disk. It didn’t leave much for saving source and object code on the same disk.


In the 8-bit C world, I may have mentioned this before, but it still seems relevant:



Interesting - I see the MANX compiler for Apple II offers overlays - that’s another way to get more out of 64k.


The rather fascinating thing for me was to find out that the typical 8-bit compiler in fact compiled to a kind of p-code. That’s what the documentation called it. I found out several years ago that the reason for this was that most of the 8-bit compilers were based on Tiny C, which was originally written for an Intel processor. I forget which now, perhaps the 8088. The way they were implemented on platforms like the Atari, Commodore, and Apple computers was rather than compiling for the 6502, they compiled to Intel machine code, and the runtime that ran your program was really an Intel VM for the 6502, which emulated a 16-bit stack. This was one reason compiled C on the 8-bit was faster than Basic, but noticeably slower than assembly language.

The CC65 cross-compiler compiled to 6502 machine code. The way memory was organized with it, as I remember, was to have the stack go from high to low memory, and the heap go from low to high memory. I remember at the time (early 1990s) seeing that there was no check in the runtime to see if the heap and the stack collided with each other… You just had to be careful, keeping track of how much free memory you had.


The reasons 8-bit Pascal compilers use a compiler-interpreter approach are:

  • the idea and initial start of most compilers is the portable ETH Wirth Px compiler (P2 and P4) or Pascal-S
  • a compiler-interpreter makes porting and cross-compiling easy; only the interpreter is hardware (CPU and OS) dependent
  • memory is a severe constraint on 8-bit architectures, and the ‘VM’ or p-code interpreter is much more efficient in memory usage
  • a high-level language such as Pascal (and C) requires a stack-based machine, with stack frames. The 8-bitters are not designed for that; the interpreter is (the 6502 is the worst in that aspect)

The effects are easy to predict: larger programs are possible (such as the compiler itself) and slower execution. A good example is UCSD Pascal on an Apple as Apple Pascal.

The VM implemented in the interpreter is not an Intel emulator in the compilers I know. An 8080 is also not designed as a stack machine, so that would be even more inefficient. Do you have an example of a 6502 compiler generating Intel VM code?


Virtual machines made sense back then, since a virtual-machine order code could be rather compact compared to native code.
OS/9 had BASIC and Pascal as virtual machines, but the 6809 CPU could compile C with no problem.
The PDP-8 BASIC and Algol 68 were VMs.
UCSD, in their great wisdom, changed the opcode format around version 3(?), killing the only hardware version of Pascal, the Pascal MicroEngine.
A few 2901 bit-slice Pascal CPUs were designed.

Pascal, being based off of ALGOL, kept the messy DISPLAY data structure, making
it slow even with hardware support. C at that time just had static and local stack variables,
making it faster.

Let’s not forget the classic VM: threaded code, as in some versions of FORTH and FORTRAN (PDP-11).
A slow Apple computer was better than an experimental FAST computer time-shared
from someplace far away.

It does seem more likely that the p-code offers useful things like 16 bit values and a 16 bit stack pointer, as well as perhaps some rather high level operations, because why not. (The BCPL machine also offers some high level operations.) But the virtual machine might look more like an Intel microprocessor than a Motorola one, from a distance.

It’s encouraging if the 6809 has a native-code C compiler running natively in 64k. It’s perhaps an indication of the expressiveness and density of 6809 machine code. (The 6502 is a long way behind, for all that it’s my favourite.)

The term “p-code” just means “portable code.” I wondered as well whether these C compilers used p-code from Pascal, but my research told me it wasn’t. The reason it was called “p-code” seems to be the fact that so many 8-bit compilers were based off of Tiny C, all producing the same machine code (Intel), which you could build a VM against. So, theoretically, one 8-bit C compiler could produce code that would run on another Tiny-C-based VM, just as the UCSD P-machine scheme was theoretically able to do, with a different instruction set. (I say theoretically, because once you got into the realm of using machine-specific features, such as producing graphics, then that portion of the Pascal p-code became machine-specific).

So, this was not to say that the VM for these compilers is stack-based, just that it emulated a 16-bit stack, which IIRC the 8088 had.

Once I looked at the 6502’s architecture, I could see the reason the implementors would go this route. The 6502 has an 8-bit stack (so, 256 bytes), which is pretty limited for C’s architecture. So, don’t bother compiling for it. Just use the Intel machine model, with its 16-bit stack. Though, I had the thought, why not take the approach CC65 did, by compiling for the 6502, but implementing the stack in software, rather than thinking it was a choice between having to use the 6502’s stack or Intel’s? It seems to me that would be the easier approach, rather than implementing a VM.

I guess it’s possible the implementors had the idea that since C is seen as a cross-platform language, the 8-bit implementation should be that way as well, similar to what Wirth and the implementors at UCSD were shooting for with Pascal, taking some of what would be machine-specific features (such as outputting text to the screen, and taking keyboard input) and generalizing them in the VM. This way, you don’t have to create a different back-end for the compiler. You just have to make a machine-specific version of the VM, to handle those features.