Making Smalltalk On A Raspberry Pi

This installs a full Smalltalk environment on a bare-metal Raspberry Pi. The article also links to another version that runs Smalltalk-80 (GNU Smalltalk) under a regular OS.

Today, you probably don’t think much about object-oriented programming; it’s just part of the landscape. But decades ago, it was strange and obscure technology. While several languages led up to the object-oriented tools we use today, one of the most influential was Xerox PARC’s Smalltalk. [Michael Engel] took a C++ implementation of the Smalltalk VM, the bytecode for a complete Smalltalk system, and a Raspberry Pi “bare metal” library, and produced a Smalltalk workstation running on a bare Raspberry Pi — even a Pi Zero. The code is on GitHub and is admittedly a work in progress.
[…]
If you don’t want to use a spare flash card to boot into the system, there are Smalltalk-80 versions that run on normal operating systems. The tutorial in that program’s user manual may be helpful if you haven’t used Smalltalk before.


These are some of my favourite things! I always wince when the Pi is viewed as a cheap Linux machine - it is that, but it’s by no means only that. Any alternate OS or bare metal application is a showcase. And any Pi is transformed by simply swapping the SD card. I have a couple of baremetal Pi projects in regular use right next to me, both @hoglet endeavours, and also a Linux-equipped Pi used for programming an FPGA. And I have at least one SD card on standby with RISC OS Pico on it, for the joy of instant-on Basic.

The person behind this Smalltalk has an interesting motivation:

This Smalltalk port is part of a project to investigate the usefulness of a class of systems for applications in areas such as the IoT (together with my students here at NTNU). These systems – we are also looking at Plan 9, Inferno and Oberon – fill the niche between a typical RTOS (dozens of kB memory, some MIPS computing power) and full-featured Linux/BSD systems, which nowadays have significantly higher resource requirements. The Raspberry Pi port was just a stopgap; my aim is to provide ports to RISC-V based (FPGA) systems, with the vision of creating a system that a single person can understand in a reasonable amount of time – so open source for hard- and software and low code complexity are important here. One inspiration for this is Niklaus Wirth’s Project Oberon (http://www.projectoberon.com/).


Oh, the instant-on programmability. Perhaps the one thing I am saddest we have lost as the paradigm of computing has changed. “Let me wait for the machine to boot, the OS to start, the GUI to start, the IDE to start.”


My Linux laptop does boot from cold to a full desktop environment in 5 seconds, but yes, instant-on is definitely something. Or was something, for 8-bit micros… in the same period the VAX at work took fifteen minutes to boot (and new AIX systems take just as long, and HP servers also take several minutes).

How long does CP/M take to boot?

Just playing around with the hardware I have running at 1977 speeds, the SD card seems to be working at floppy speeds: ~5 seconds to boot the 16k O/S.

By definition, longer than the ROM based computers. Simply the fact that it must load stuff from the disk slows it down.

Mind, by today’s standards, 5 s is nothing. Also, much of what slows machines down at boot is loading drivers and waiting on hardware to stabilize and spool up. Back in the day, there was little more than a UART or the video chip. Plus, most older machines didn’t do anything like memory checks on start-up.

And if you think about it, back in the day, cold start time was important. Why? Because when computers crashed, they crashed hard. The only solution was to restart. Restart early and often.

So, restart times were actually part and parcel of the user experience. A far cry from today, when machines are never turned off and badly behaved programs can be dispatched by the user through means other than a hard reset.


Far cry from today when the machines are never turned off,

Depends on the machine type. For personal machines like laptops and tablets, sure, but servers seem to be constantly rebooted, which is kind of the opposite of the old times. A place I worked at had MicroVAXes that always had dozens of users and were rebooted only every few years, for computer-room maintenance.


Our experiences differ, then, as we rarely reboot our servers, and historically, where I’ve worked, we rarely rebooted them either. In fact, I distinctly recall a time when we had to reboot our pair of HP-UX servers (maybe it was a power thing, I can’t recall). Nonetheless, they were both down at the same time and had to be restarted.

We tried to restart them, but they would not come back up. It turned out that we had crossed hard NFS mounts on them: machine A mounted a filesystem from machine B, and vice versa. And since they were “hard” mounts (an NFS concept), the mounts would not complete until the other machine was available, so they were deadlocked.
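A rough sketch of that cross-mount situation as /etc/fstab entries (hostnames, export paths, and mount points here are hypothetical, not from the original story):

```
# /etc/fstab on machineA: hard-mount an export from machineB.
# A foreground "hard" mount retries forever, so boot blocks here
# until machineB's NFS server answers.
machineB:/export/data   /mnt/data   nfs   hard   0  0

# /etc/fstab on machineB: the mirror-image entry back to machineA.
machineA:/export/home   /mnt/home   nfs   hard   0  0
```

If both machines go down together, each one’s boot waits on the other’s NFS server, which never comes up — the deadlock described above. The usual mitigations are the `bg` mount option, which backgrounds the retries so boot can proceed, or `soft` mounts, which eventually time out.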

What that told us, in this context, was that we had never had both machines down at the same time before.

So, in the historical context, no, servers were not restarted. Certainly not like personal computers were back in the day.

However, in the modern day there is pressure for faster boot times in Linux, for a couple of reasons. One, folks are using it on their laptops, and Linux has only “OK” support for sleep and suspend. It can work, but as I understand it, hardware support is finicky. So those machines may well be restarted often.

Two, there is much zeal for auto-scaling and other deployment scenarios: rather than simply having a program that is started and stopped, you have entire environments, whether at the container level or on a virtual machine. Being able to fire off ten more servers for an hour’s work is a modern-day requirement, so a faster start time can affect reactivity and availability.

In the past we strove to maintain the operating state of a machine as best we could, going through great efforts to NOT have to restart it.

Today, such environments are disposable to the point that folks deploy the entire machine image when one component changes, rather than just that component. So individual instance life tends to be much shorter than in the past, but overall service availability stays high.

Indeed! Such advantages are discussed in detail in Pros and Cons of Suns:

Well, I’ve got a spare minute here, because my Sun’s editor window evaporated in front of my eyes, taking with it a day’s worth of Emacs state.

So, the question naturally arises, what’s good and bad about Suns?

This is the fifth day I’ve used a Sun. Coincidentally, it’s also the fifth time my Emacs has given up the ghost. So I think I’m getting a feel for what’s good about Suns.

One neat thing about Suns is that they really boot fast. You ought to see one boot, if you haven’t already. It’s inspiring to those of us whose LispM’s take all morning to boot.

Another nice thing about Suns is their simplicity. You know how a LispM is always jumping into that awful, hairy debugger with the confusing backtrace display, and expecting you to tell it how to proceed? Well, Suns ALWAYS know how to proceed. They dump a core file, and kill the offending process. What could be easier? If there’s a window involved, it closes right up. (Did I feel a draft?) This simplicity greatly decreases debugging time, because you immediately give up all hope of finding the problem, and just restart from the beginning whatever complex task you were up to. In fact, at this point, you can just boot. Go ahead, it’s fast!


Can we make a list of baremetal, retro-related Pi projects?
(Maybe complemented by a link to a guide on how to get started with your own baremetal Pi project?)

Yes we can - see the linked topic
Some bare-metal Pi resources for retrocomputists
