The weird world of non-C operating systems

Not a technical deep dive but lots of jumping-off links:


They seem to have forgotten assembly coding on the big computers. It was the high cost of memory that prevented high-level languages from being used for operating systems until the late 1970s. The PDP-11 instruction set and MMU hardware defined UNIX more than the C programming language did. At that time you had mixed C and assembly programming. Moving away from punched cards was also a big change for I/O compared to the older OSes. Ben.

Speed may have been an issue as well, especially since compiler optimization wasn’t as sophisticated as it is today. Even hand-optimizing compiled code could achieve a worthwhile speed-up, even in the early 2000s. When your hardware isn’t that fast, 10% faster is a lot for frequently visited OS routines. Assembler routines were not unusual for time-critical code.

(I remember a hand-optimized version of an early virtual machine for Intel Macs in 2006, when the MacBook Pro was fresh, which still achieved an advantage of about 10% over the normally compiled open source version of the same software.)

Speaking of Apple computers, the Lisa OS and classic Mac OS were written largely in Pascal, with some assembler code for time-critical routines.

Nice to see BCPL get a mention - there is one other BCPL OS not mentioned there - OS6 at Oxford Uni, and of course my Ruby BCPL OS…

(Which I’ll get round to making a proper video of one day!)

-Gordon


Since there seem to be many UK dwellers on this forum, any of you have experience with the Poplog system? Poplog - Wikipedia

I’ve heard of it, but only as I was briefly sharing a flat with an AI researcher who was at Edinburgh Uni at the time - he was using it in his research with POP, Prolog, etc… Other than that…

-Gordon

I think Liam intentionally excluded assembly coding in the article:

This is not intended to be a comprehensive list. There are too many such efforts to count, and that’s even with intentionally excluding early OSes that were partly or wholly written in assembly language.

Perhaps he should write another article about those. I’m sure it would jog a lot of people’s memories, judging by the number of comments under that article. :-)


The article alludes to this a bit, but what’s interesting to me about the Burroughs B5000 is that even though it technically had a machine language (low-level operators), it did not come with an assembler that would let programmers access them. Instead, it shipped only with high-level languages, Algol and Cobol, whose compilers produced a kind of bytecode that the B5000 executed directly in hardware. If you run Java or C# on a VM, it’s thanks to the B5000.
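
For anyone who hasn’t run into stack code before, here is a minimal sketch in C of the general idea (my own toy; the opcodes and encoding are invented, not Burroughs or Java opcodes): a compiler turns an expression into a sequence of stack operations, and something executes them, hardware on the B5000, software or a JIT on the JVM and CLR.

```c
/* Toy stack-machine interpreter. The opcodes and encoding are invented for
 * illustration only. The point is that a compiler can target stack code and
 * something else executes it: hardware on the B5000, software on the JVM/CLR. */
#include <stdio.h>

enum op { PUSH, ADD, MUL, PRINT, HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                                   /* next free slot */

    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case PUSH:  stack[sp++] = code[pc++];         break;
        case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case PRINT: printf("%d\n", stack[--sp]);      break;
        case HALT:  return;
        }
    }
}

int main(void)
{
    /* What a compiler might emit for "print (2 + 3) * 4" */
    int program[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT };
    run(program);                                 /* prints 20 */
    return 0;
}
```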

Indeed, all system programming on it had to be done in Algol. Business applications could be written in Cobol.

Alan Kay has talked about how this is his favorite system, because it was designed specifically to run high-level languages, and it was designed for stability. He has remarked about how it was very difficult to crash.

Doing a little research, I see Apple Pascal ran on its own operating system, which was derived from the UCSD p-System; Apple released it for the Apple IIe. I remember that in order to save programs from Apple Pascal, I had to boot it up and format a disk with it, because it did not use Apple DOS.


Probably warrants a thread of its own, but I did a summer’s work with Apple/UCSD Pascal in the very early ’80s. It was one of the original “write once, run anywhere” environments, and we had Apples, Teraks and some other systems in the lab all running UCSD. Where it all fell down was the graphics (or lack of them on some systems).

It probably also led to my dislike of a walled-garden IDE approach to programming, but that’s yet another thread waiting to be written!

Cheers,

-Gordon


The security model for the B5000 and successors depended on users not being able to write assembly language programs. Only valid compilers could generate files tagged as executable and the system would refuse to run any other.

The major security hole was that the world was not Burroughs-only: you could take a tape to an IBM machine, use it to set the executable bit on some file, then bring the tape back to the Burroughs machine, which would then try to run it.
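
To make the failure mode concrete, here is a rough sketch in C of that trust model (the structures are invented, not the real MCP file format): the loader trusts a flag that only blessed compilers are supposed to set, but the flag is just data on the medium, so a foreign machine can set it too.

```c
/* Sketch of a compiler-set "code file" tag and a loader that trusts it.
 * Invented structures for illustration; not how Burroughs actually stored files. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct file {
    char name[32];
    bool code_file;      /* "produced by a trusted compiler" */
};

/* Only the blessed compilers on this machine are supposed to set the tag. */
static void compiler_emit(struct file *f, const char *name)
{
    strncpy(f->name, name, sizeof f->name - 1);
    f->name[sizeof f->name - 1] = '\0';
    f->code_file = true;
}

/* The loader's whole defence is the tag itself. */
static void loader_run(const struct file *f)
{
    if (f->code_file)
        printf("running %s\n", f->name);
    else
        printf("refusing %s: not a compiler-produced code file\n", f->name);
}

int main(void)
{
    struct file good = {0}, forged = {0};

    compiler_emit(&good, "PAYROLL");   /* legitimate path */
    loader_run(&good);

    /* The hole: the same bit can be written by another system, e.g. by
     * editing the tape on a non-Burroughs machine. The loader cannot tell. */
    strcpy(forged.name, "TROJAN");
    forged.code_file = true;
    loader_run(&forged);
    return 0;
}
```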


Perhaps you could answer this. When Alan Kay has talked about the B5000, he’s said that most people in the industry didn’t like it, but he’s never said why in any detail. The gist I’ve gotten from him is that detractors would always come up with some reason for why its design constrained their practices in ways that were not good for what they were trying to accomplish. He’s contended this was because they didn’t understand how it worked. He said the system was fully documented. So, if anyone wanted to know how it worked, they could, but it was a bridge too far for them to even do that.

What’s your view about that? What was the barrier to wider adoption?

Kay has said the one industry that liked this system was banks. As I remember, this was where he talked about its system stability. They liked that.

It’s interesting you bring up system security. Were other systems more secure despite the fact that programmers could access their low-level operators using assemblers? It seems to me they would’ve been even easier to exploit, but I’m completely ignorant about what security, if any, they used. Was this a case of Burroughs overselling their system security?

The other part that would be needed to run something unauthorized is someone would have to write a Burroughs assembler, or some such. It seems to me, ironically, that Burroughs security was probably more about “security through obscurity.” If it had been more widely adopted, that would no longer be true.

Re. the executable tag, this reminds me of what Mark S. Miller has talked about with creating an OO system where low-level, late-bound objects govern system functions. He addressed the security issue, saying that he used an encrypted key system to identify authorized objects for critical system functions, and he acknowledged its weakness: if the encryption algorithm and keys were discovered, that would present an exploit. He attributed most of this vulnerability, though, to the fact that his system was running on top of insecure operating systems, which themselves created opportunities to poke around in their internals. He put this down to the reality that “the OS ship has sailed,” and the only practical choice was to create an extension on top of them; otherwise, he’d be locking his systems out of most people’s reach.
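
As a very rough illustration of the keyed-object idea (my own toy sketch, not Miller’s design or code, with rand() standing in for a cryptographically strong key source): each grant mints an unguessable key, invoking a critical function means presenting it back, and the weakness he describes corresponds to an attacker learning or guessing the key.

```c
/* Toy key-based authorization for privileged objects. All names and sizes
 * are invented; a real design would use strong randomness and constant-time
 * comparison rather than rand() and ==. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_OBJECTS 16

struct capability {
    unsigned id;         /* which privileged object this refers to */
    uint64_t key;        /* secret that proves the grant */
};

static uint64_t registry[MAX_OBJECTS];   /* key on record per object id */

/* Grant access: mint a key and remember it. */
static struct capability grant(unsigned id)
{
    /* | 1 keeps the key nonzero; 0 means "never granted" below. */
    uint64_t key = ((uint64_t)rand() << 32 | (uint64_t)rand() | 1);
    registry[id] = key;
    return (struct capability){ id, key };
}

/* Run a critical function only if the presented key matches the record. */
static void invoke(struct capability cap)
{
    if (cap.id >= MAX_OBJECTS || registry[cap.id] == 0 ||
        registry[cap.id] != cap.key)
        printf("object %u: access denied\n", cap.id);
    else
        printf("object %u: critical function runs\n", cap.id);
}

int main(void)
{
    struct capability c = grant(3);
    invoke(c);                               /* authorized */

    struct capability forged = { 3, 12345 }; /* attacker's guess */
    invoke(forged);                          /* denied */
    return 0;
}
```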

The Burroughs system was well documented, as you said, and the sources were available to clients. So it was not a case of security by obscurity.

The first computer I got to use was the university’s B6700. There was a lot of pressure to replace it or at least complement it with an IBM mainframe, which the university soon did. The issue was software compatibility.

Any system which can securely run C programs will be able to handle assembly programs just as well (unless it has special assembly instructions for manipulating caches, page tables or I/O). A separation of user and supervisor modes and some virtual memory scheme are normally enough. A proper virtual machine scheme is even better.

Speaking of which, an alternative to running a new secure system on top of a traditional operating system is to run the traditional system in a virtual machine on top of a new secure system.
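
To make the earlier point about mode separation concrete, here is a minimal sketch in C with hypothetical syscall numbers and checks: whether the user code was compiled from C or hand-written in assembly, its only path to privileged work is a trap into a kernel dispatcher that validates the request, so the source language never enters into it.

```c
/* Sketch of a syscall gate: the hardware mode switch delivers every request
 * here, and the kernel validates it regardless of what language produced the
 * calling code. Syscall numbers and checks are invented for illustration. */
#include <stdio.h>

enum { SYS_WRITE, SYS_MAP_PAGE, NUM_SYSCALLS };

static long sys_write(long fd, long buf, long len)
{
    (void)fd; (void)buf;
    return len;                       /* pretend the write succeeded */
}

static long sys_map_page(long addr, long prot, long unused)
{
    (void)prot; (void)unused;
    if (addr & 0xfff)                 /* reject unaligned/hostile arguments */
        return -1;
    return 0;
}

typedef long (*handler)(long, long, long);
static const handler table[NUM_SYSCALLS] = { sys_write, sys_map_page };

/* Conceptually, what the trap instruction lands on in supervisor mode. */
static long kernel_trap(long nr, long a0, long a1, long a2)
{
    if (nr < 0 || nr >= NUM_SYSCALLS)
        return -1;                    /* unknown request: it simply fails */
    return table[nr](a0, a1, a2);
}

int main(void)
{
    printf("%ld\n", kernel_trap(SYS_WRITE, 1, 0, 5));        /*  5: allowed  */
    printf("%ld\n", kernel_trap(SYS_MAP_PAGE, 0x123, 0, 0)); /* -1: rejected */
    printf("%ld\n", kernel_trap(99, 0, 0, 0));               /* -1: unknown  */
    return 0;
}
```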