What ever happened to ANDF?

What ever happened to the Architecture Neutral Distribution Format (ANDF)? It seemed like such a good idea at the time, and it still seems like a good idea to me. Wouldn’t it be nice if we could generate binaries that run on nearly all hardware with no modification? Java bytecode sort of revived this idea, but not quite. I was hoping whole operating systems would be written in ANDF as a means of liberating software from hardware dependence, but it was not meant to be, I suppose.


New to me! From the linked 1993 usenet post:

It is based on a compiler intermediate language in which all target dependence has been abstracted out and deferred to installation. Thus a single version of an application can be distributed in a “shrink-wrapped” format that can be installed on diverse platforms.

With ANDF, the compilation process is divided into two parts. A “producer”, which is similar to a compiler front-end, processes the source code and target-independent header files to produce the ANDF form of the application. When the software is installed at the target site, an “installer”, which is similar to a compiler back-end, combines the ANDF code with target-dependent definitions and libraries to produce an executable program or an object code library.

I wonder if some intermediate output from LLVM would do the trick. For those languages which have a suitable front end!

Because JIT techniques are so spectacular, using bytecodes as C# (CIL) and Java (JVM) do is perhaps no great disadvantage these days. Java first appeared in 1995, not so much later than ANDF. I wonder if there was any influence - Edit - see below!

This HN discussion relates to the UCSD p-system, and mentions BCPL’s virtual machine, and Smalltalk. None of those are primarily about portable applications though. Some nice links in there.

ANDF is definitely under-appreciated. If it had gained more traction, it might have gotten us to “write once, run everywhere” with adequate performance earlier than actually happened with Java. The basic ideas were fundamentally sound, and worked out to an impressive degree for that time. Unfortunately, few nowadays would be able to see that past some superficial details that seem dated to those blessed with hindsight.

Edit: ah, just found a James Gosling keynote.

… Java as a compromise between C and scripting languages, ANDF and Virtual Machines …

Like much of the OSF technology, it suffered from major NIH reactions from the existing UNIX vendors. I think only Digital embraced it. Sun went its own way (OSF could be seen as an attempt to thwart Sun), IBM was lukewarm though it did adopt some ideas, and HP couldn’t have cared less. (Though I did use a rare bird: HP OSF/1. They thought about it, but then decided it wasn’t worth breaking their existing customer base.)

In theory it would be good. But first, companies wouldn’t make money. And then there are the problems of graphics and differing hardware.
You can’t run the same software on every device: think of smartphones, or touchscreen apps on computers without a touchscreen. The future might be holograms and voice-directed or even mind-directed input.


LLVM came to my mind, too. There is some discussion about using WASM (WebAssembly), which compiles quite well from LLVM, as an intermediate format for native applications. (Personally, I’m not so sure about this, but it may be fine for all but the most performance-critical applications.)


Edit:
Compare the WebAssembly System Interface (WASI), https://github.com/WebAssembly/WASI
or this talk (transcript): WASI: a New Kind of System Interface

One problem is hardware keeps changing faster than standards. When Java came out, I think we still had 386s with 24 bit addressing. Nobody planned for a need for 64+ bit hardware.
The other factor is bit-mapped displays all seem to be pixel addressed,
setdot(345,531,green). The correct way for a portable format is fractional
addressing, but nobody uses that.
The one thing that was strange: Java claimed to be machine independent when it came out,
but the fine print said “Sun or Windows only”.
Ben.

Hi
What I thought (or hoped) would happen is that people would write a kernel and a HAL for new or existing machines, then an ANDF virtual machine/interpreter/compiler, so that the great bulk of the OS rode on top – independent of the hardware. Each machine architecture would require a certain amount of unique code, but it would be limited; the rest would be portable. That is not the way it turned out, and we still labor under hardware-dependent operating systems and applications much like we did in the 1980s. This surprises me to this day.

ANDF is a new one on me… Interesting idea, but like others “Not Invented Here”…

A cross-platform binary format is hard - Apollo and Apple did it when transitioning CPUs. Java was supposed to be the compile-once, run-anywhere solution with its bytecode, but that relies on the local architecture implementers getting it right, and the same for every platform. Interestingly, the second time Apple did it, they also provided a tool to translate the binary code on the fly… See here for some details of the Apple transitions and others:

SHAR was a distribution format used on Unix - it was an executable shell script that contained inside it a uuencoded file - usually a TAR file to be expanded locally and then compiled, although I suppose it could contain many binaries for different platforms. But if you have ever had to try to maintain source code for different Unix/Linux/etc. platforms then you’ll know just how hard it can be - even with autoconf it’s still not easy, even with the same OS running on different CPU architectures - endianness, hardware differences, etc. all make this non-trivial.
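A minimal sketch of the idea - not real shar(1) output, which also handled uuencoding, directories and size checks - is simply a script that is both the archive and the extractor:

```shell
#!/bin/sh
# Minimal shar-style self-extracting archive (illustrative sketch).
# Running this script recreates hello.c in the current directory.
echo "x - extracting hello.c"
cat > hello.c << 'SHAR_EOF'
#include <stdio.h>
int main(void) { printf("hello, world\n"); return 0; }
SHAR_EOF
echo "done"
```

The recipient just runs `sh archive.shar` - no tar or uudecode needed for plain text members, which is why shar travelled so well over mail and usenet.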

But, as part of something else I’m doing I’ve just managed to port my BCPL system to another CPU - and the port works at the binary level. This works because like Java the binary is a bytecode, so all I had to do was write the bytecode interpreter/vm for the new platform… And because I’d done it before I made it work without any changes to the BCPL source code - (well, technically I did have to make changes but the changes were to make the source code 100% identical on all platforms now and in-future too, if that ever happens).

-Gordon

1 Like

Hi
Yeah, I am familiar with the pains of trying to port legacy code even from one Un*x to another, or within Linux - even on the same machine architecture. Especially with older code, portability seems very limited. Same for Windows. And that’s not even counting moving from one architecture to another and dealing with endianness, word sizes, etc. Extremely painful, especially with older code, even with full source code and documentation.

It seems to me the rationale behind ANDF is every bit as valid today as it was 30+ years ago. If anything it would be more practical on modern hardware, since there are so many more resources like RAM and processor throughput available these days. Java bytecode would be great for portability, but aside from a few applications it doesn’t seem to have made many inroads into the OS. I would think the shell utilities at least would be written in a portable fashion, just to save labor costs every time something changes. Maybe its time has come again?

I have an old C application which, while I didn’t write it, I did hack it about extensively … some 30 years ago now. I tried to compile it recently. It failed. Very very badly. Mostly because it is K&R C and not ANSI but also because the compilers are better - for some value of better in that they do more tests, are much more fussy, warnings are errors and so on.

This is a good thing, overall, but for old code you want to resurrect? It’s a tricky call…

Shell utilities, but what about shell scripts? Bourne shell (sh), Bourne again shell (bash), dash, ksh, csh, tcsh, zsh or …

Let’s face it - it’s a mess.
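One tiny illustration of the mess: the POSIX test built-in works in every Bourne-family shell, while the `[[` form is a bash/ksh/zsh extension that is a syntax error under a strict sh such as dash.

```shell
#!/bin/sh
# Portability trap: '[[' and '==' are bash/ksh/zsh extensions,
# not POSIX. Under dash (Debian's /bin/sh) the commented line fails.
x=abc
if [ "$x" = abc ]; then          # POSIX test(1): works everywhere
    echo "POSIX test ok"
fi
# if [[ $x == a* ]]; then ...    # bashism: syntax error in strict sh
```

A script that starts `#!/bin/sh` but quietly assumes bash is one of the most common ways “portable” shell code breaks when it moves between systems.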

-Gordon

Portable binaries can be interpreted or JIT compiled, as the many examples given in this thread show. They can also be translated to native code when the application is loaded from disk into memory, or when the application is installed on the computer.

I am pretty sure the Pick operating system translates at install time, but for the Mill computer I don’t know if translation is done at install or load time.
