The Golden Age of Retrocomputers

I think what Alan Kay said was that URLs should have been objects.
(In a way they are, if you think of the HTML DOM as a declarative object definition with standard methods, but this is certainly not what Kay had in mind.)

Regarding JS, I’m a confessed fan, right from the beginning (and somewhat worried about developments where it must accumulate semantic sugar from whatever language a developer had used before – even duplicating the pragmatics of language features that had been forgotten). I think the syntax was the right choice at the time, as it was pretty accessible and in line with C and the then-modern scripting languages like Perl. (Lisp was still obscure outside of Emacs, and marrying C-like syntax and Lisp-influenced schemes was an achievement on its own.)

ViolaWWW might have been interesting in this regard, as it was influenced by HyperCard and had a rather powerful object-oriented scripting language that used a lot of bootstrapping – and scripts and applet objects could be accessed by URL.
(I think these were more self-contained than, say, a typical JS application, which usually comes with heavy dependencies on the UI. Which is also where JS started, mostly as glue for UI interaction. – In the first version, you couldn’t even create your own arrays or objects, but had to emulate them with instances of function objects.)

“…You gotta know how to hold 'em,
Know not to fold 'em,
Know when to throw away,
and know when they’ll run…”

(ugh, and this from someone who would rather listen to Touhou doujin or Vocaloid…)

Re. URLs should’ve been objects

Likely, but more to the point, he’s said that they should be real object references. IOW, they should have a connection to programming semantics, rather than just being strings that code manipulates, which are only used through APIs.

I was inspired by this notion many years ago to create a proxy class in Smalltalk that used CGI to send messages to a website. It was an exciting project, for a bit, anyway, because I was able to map Smalltalk messages to CGI messages, and it proved the point that, “You can (sort of) treat web servers like objects!” I didn’t get beyond retrieving web pages, because I could see others’ point that HTML is not friendly to the notion of objects. Not to say it couldn’t be transformed into meaningful objects. I just didn’t have the foggiest on how to go about that.
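
For anyone curious how little machinery that idea needs, here’s a rough sketch of the same trick in Python rather than Smalltalk (the endpoint layout is invented, purely for illustration; the original went over CGI):

    # A toy "remote object" proxy: attribute access becomes an HTTP call,
    # roughly the way the Smalltalk proxy turned message sends into CGI requests.
    # (Hypothetical endpoint layout, for illustration only.)
    from urllib.parse import urlencode
    from urllib.request import urlopen

    class WebProxy:
        def __init__(self, base_url):
            self._base_url = base_url.rstrip("/")

        def __getattr__(self, selector):
            # Each "message send" becomes GET <base>/<selector>?<keyword args>
            def send(**kwargs):
                query = ("?" + urlencode(kwargs)) if kwargs else ""
                with urlopen(f"{self._base_url}/{selector}{query}") as resp:
                    return resp.read().decode("utf-8", errors="replace")
            return send

    # site = WebProxy("http://example.com/cgi-bin")
    # site.getPage(name="index")   # -> GET /cgi-bin/getPage?name=index

The point is just that a “message send” to the proxy turns into a request on the wire, which is about as close to “URLs as object references” as you can fake on top of strings.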

A frustrating thing about URLs that Ted Nelson pointed out is that they’re hierarchical, like so much else in the design of computer systems. In my experience, this makes them “fragile” in the sense that if the owner of the website moves something on their server, suddenly the URL you’re using can end up referring to nothing (404 error). Why should it be like this when the document is still on the site? It seems dumb.

Making an analogy to Smalltalk, if you change the ontology of a class, that doesn’t make references to it invalid. Though, interestingly, if you change a class’s name, at least in Squeak, the designers were thoughtful enough to bring up notifications of every other class that references it, so you can make edits to update them to the new name.

The way more thoughtful web designers have dealt with this is by creating the notion of “permalinks,” which are pseudo-links that are more symbolic than directional. The site architecture sets up what I’d call a “jump table,” so that a permalink reference goes to the right webpage, regardless of where it’s located on the site; creating a happy medium between the need to manage sites and people who want reliable reference material.
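
Mechanically there’s not much to it: a lookup table plus a redirect. A minimal sketch of the idea in Python (the IDs and paths are made up):

    # A permalink "jump table": the stable ID never changes,
    # only its mapping to the current physical location does.
    PERMALINKS = {
        "post-42": "/2009/03/my-article.html",       # invented examples
        "post-43": "/archive/2009/another-one.html",
    }

    def resolve(permalink_id):
        """Return (status, location) for a permalink request."""
        target = PERMALINKS.get(permalink_id)
        if target is None:
            return 404, None
        return 301, target      # permanent redirect to wherever it lives now

    # resolve("post-42")  ->  (301, "/2009/03/my-article.html")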

Re. Lisp was still obscure

You might be interested in The Nature of Lisp by Slava Akhmechet. I got the idea for “a Lisp for the web” from it. Incidentally, I wrote a blog post on his article (Lisp: Surprise! You’re soaking in it).

He made an analogy between Lisp S-expression structure and HTML/XML structure. I had already started thinking along these lines, after learning Lisp for a while, but when I read it, it “popped”: “Yeah, this makes sense.” The sentiment I got from others, though, was like, “I HATE XML! If you’re trying to make Lisp enticing, this is NOT the way to do it!”
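
For what it’s worth, the structural correspondence really is that direct. A throwaway Python sketch, with nested tuples standing in for the S-expression (tags picked arbitrarily):

    # HTML/XML is structurally an S-expression: (tag child child ...).
    # Here a nested tuple plays the role of the S-expression.
    def render(node):
        if isinstance(node, str):
            return node
        tag, *children = node
        inner = "".join(render(child) for child in children)
        return f"<{tag}>{inner}</{tag}>"

    doc = ("html",
           ("body",
            ("p", "Surprise! You're soaking in it.")))

    print(render(doc))
    # <html><body><p>Surprise! You're soaking in it.</p></body></html>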

Oh well…

Hyper-G/HyperWave (which is now officially vintage computing, as just about all links regarding it are dead and Google doesn’t find a single valid entry for “Hyper-G”) would have addressed this with a global link database.
But I think there were reasons for its failure (it was treated as the exciting new thing after the WWW for a while). Hierarchical symbolic links, like file paths, are much easier to handle than just a list of inodes and references to them, especially for a document-based system like the WWW. In practice, you’re just duplicating what a filesystem already does – and it’s easier to maintain this with reference to a filesystem (especially in the 1990s). It’s also much easier to set up, like everything in the “/en/” path gets an English skin and the “/fr/” path the French makeover… The hurdles are just much lower with that kind of reference to a structure that mimics what is already on your machine, and there’s also no need to register anything in a higher-level system… (If you were running a bigger server with lots of virtual hosts in the 1990s, this would quickly become a nightmare.) And, as it is, there are ways to have URL rewrite rules on your server, it’s just that everyone is much too lazy… :wink:
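
And to show how low that particular hurdle is, here’s the idea behind such rewrite rules sketched in Python rather than in actual server configuration (the patterns are invented):

    # The idea behind server-side rewrite rules: map old hierarchical paths
    # onto the new layout, so documents that merely moved don't 404.
    import re

    RULES = [
        (re.compile(r"^/docs/old/(.+)$"), r"/docs/new/\1"),
        (re.compile(r"^/en/(.+)$"),       r"/\1?lang=en"),
    ]

    def rewrite(path):
        for pattern, replacement in RULES:
            if pattern.match(path):
                return pattern.sub(replacement, path)
        return path      # no rule matched, serve as-is

    # rewrite("/docs/old/manual.pdf")  ->  "/docs/new/manual.pdf"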

But we could have had both, a document-based WWW and a network of objects with permanent references. I really think that there was room for both. (However, the usual marketing hype – as in the dot-com bubble – would have produced enormous pressure to go for one and to forget about the other.)

More on Hyper-G:

Hmm. Hyper-G is reminding me a lot of Gopher, with some expanded capabilities. The “local map” feature is interesting, showing both forward and backward links. The “landscape” feature is reminiscent of an Irix workstation; I read it had a feature like this (though for mapping IT topology; famously shown in “Jurassic Park” :slight_smile: ).

Yep, that is Emulith, my Lilith emulator.
You can download it from my FTP server at ftp://ftp.dreesen.ch/Emulith
It runs on Linux, Windows, and Mac.

It is basically a reimplementation of the Lilith’s hardware and runs the original microcode.

Which works well, as the 800 was designed to look like a Selectric, so that must have caught their eye.

And decades later, the only solution we have is hoping that archive.org grabbed it at the original path.

Does anyone remember the Tank Girl web site? It not only disappeared, but none of the technology works. So one of the first wide experiments in interactive web pages… gone.

I’ve really noticed this, because I’ve written a blog. If I go back to old articles, it’s common for me to find links that are dead. That doesn’t mean the content is gone. I can usually find it again by doing a search. It’s just been moved. The URL changed, sometimes on the same site. It takes a lot of work to keep links up to date. The more articles I write, the more there is to check!

Aside from that, I disagree with the notion that once something is on the internet, it’s there forever. I’ve seen some content disappear for good. Perhaps I should use the Wayback Machine more, but my memory is that it doesn’t capture everything. I’ve only used it once or twice for content I couldn’t find anywhere else. It seems like there have been plenty of times where I’ve tried finding something that’s disappeared elsewhere, and it’s not on the Wayback Machine, either.

I find these days not only do I often use the Wayback machine to find old (or moved) content, but also I relatively often submit pages to it, to be saved for the future. It’s almost a single-click action, if you’re signed in, and it will also save linked pages which is even better. (It doesn’t always work as you’d like, depending on javascript trickery and so on. It can’t, for example, handle infinitely-scrolling content like this forum.)
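
For the scripting-inclined, you don’t even need the browser: as far as I know, a plain GET to https://web.archive.org/save/ followed by the URL asks the Wayback Machine for a capture (and I believe the authenticated Save Page Now API adds options like saving outlinks). A rough Python sketch:

    # Ask the Wayback Machine to capture a page via its "Save Page Now" URL.
    # (Minimal, unauthenticated sketch; no error handling.)
    from urllib.request import Request, urlopen

    def save_to_wayback(url):
        req = Request("https://web.archive.org/save/" + url,
                      headers={"User-Agent": "wayback-save-sketch/0.1"})
        with urlopen(req) as resp:
            # If the capture went through, the final URL should point
            # at the freshly archived copy.
            return resp.geturl()

    # print(save_to_wayback("http://example.com/"))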

But sometimes one finds that a site has vanished, and is marked at the Wayback Machine as “this content has been excluded” - presumably by robots.txt mechanism. Sometimes, if we’re very lucky, some enthusiast will have hoovered up their own private copy, and it will come to light later.

I think, though, that this is the nature of history: it’s very, very full of holes. Whether it’s Socrates or Shakespeare, not everything survives, and for anyone much less famous than that, it’s more likely to be missing than to be preserved. I suppose what’s distressing is to see those holes appearing, when it seems so avoidable.

Most of the stuff lost seems to be from free sites going offline, or hosting providers getting bought out, like in my case.
Only in a few cases has the author died.
For me, it’s all the dead PDF links under Wikipedia that can’t seem to be found any more that I grumble about. Thank God we still have Bitsavers around.
Ben.

Actually it takes more than a robots.txt file to exclude the WM. They changed their policy and now you have to jump through all kinds of hoops to stop them scraping your site. They’ve become really quite aggressive.

TBH I see a possible positive in there, in that when a site dies and the domain name is sold, I don’t want the new owner’s exclusion to apply to the old site’s content. But having said that, I don’t know what their policies are (or were).

Yes, for the most part I think the Wayback Machine is a good thing. But what I’ve sensed with Archive.org is a growing sense of self-importance. I run several web sites and am happy for most of them to be archived. But there are a couple where I want more control. Basically, I would like to have the decision about what material I have online. If I take something down, there’s usually a good reason for it, and I feel that should be my choice. It might be for privacy reasons, for example (due to changed circumstances).

The WM used to let you prevent archiving through the robots.txt file. Now, like I said, they make you jump through hoops (I actually gave up). It feels like entitled bullying.

And, to be honest, I don’t think link rot is a problem. Culture (including the web) is an organic thing. We don’t live in a museum. Not everything needs to be preserved. If you think something needs preserving, then preserve it. But automatic preservation of everything is absurd. And yes, I know there are things that will be lost because we didn’t realise they were important, but that’s life. Embrace it.

Hang on a minute, I think perhaps I do!

Regarding accessibility of documents, all kinds of things can happen. E.g., recently there had been some confusion about incongruous documentation of the PDP-1. As Al Kossow rather bitterly remarked on a mailing list, this was due to Jason Scott duplicating the Bitsavers documents on TextFiles, but flattening the hierarchy, so that the context got lost and the documentation of the PDP-1 and the quite special, one-of-a-kind beast that is the PDP-1X became mixed up, hence the confusion. On the other hand, I remember a time when Bitsavers had zipped what had been directories before into archives, so that you couldn’t link to the contained documents anymore, and TextFiles was a viable backup during that time. (Bitsavers has since restored its full hierarchy.)

Which illustrates that all kinds of things can happen.

And we now have Bitsavers mirrors as well. :slight_smile:
I find the component section quite useful, as it gives me a good history of what came when.
Ben.

I remember buying a full height 5.25" floppy disk drive and wiring it up to my 6809 machine with a Western Digital chipset, and writing the formatter with a language called PL/9. It was an editor/compiler in one and spat out images onto my FLEX9 system.

I made one mistake with the formatter - I set the ‘a0’ bit to 0, but found everyone else used 1 instead, so I had to revise it. That machine was fun for a while - it plugged into an RGB TV (40x24), had 48k of RAM and the floppy disk.

I keep thinking of building a single board 6809 system for nostalgic reasons, but never get round to it. Instead I fool around with Pico W or Arduino boards.

I would have to say, the GOLDEN AGE was when 64K DRAMs came cheap. At that time I still dreamed of a real 6809 with OS-9 Level 2 and a hard disk, rather than the CoCo 2 I had.