Story Research: Non-Internet Networking

This is actually for two separate books I’ve had on the back burner for years: one is post-apocalyptic and AI-driven, and the other is sort of ‘the apocalypse hit fantasyland #5218484 and now they’ve rebuilt enough to consider themselves a stable society.’

Two very different setups, so ‘one size fits all’ probably wouldn’t apply. I’m leaving out specifics because ‘the non-internet network’ is a good general topic of discussion.

Sneakernet? Classic guy running from A to B with a duffel bag full of stuff.
Use networks for some types of traffic between ‘core’ nodes, runners for outlying nodes, and go asynchronous for those outlying nodes (something like BBS/FidoNet/Usenet)?
Anyone here familiar enough with minitel-like systems to explain how that worked?

Going async seems like a good common baseline, since it means a node isn’t useless when it can’t instantly connect to its fellows and can still serve something to end users.

I mean, it’s my setting, so technically I can do whatever I like, especially since in both cases it’s ‘use whatever is on hand’ rather than ‘the only things working are the museum pieces’, but I’d still like input from folks who are more knowledgeable, or in some cases actually have memories of that ‘era’.

So here are the scenarios for each (which setting goes with which matters less than it would if I were after specifics rather than the ‘shape of things’, I imagine).

  1. One has a city transitioning away from courier-based mail and message running, repurposing found technology for hardline communications in a hub/spoke layout to cut down on the number of mail carriers. The biggest downside I see is that it doesn’t entirely remove the need for couriers, since it’s hub/spoke rather than a true ‘network’, and the system itself needs maintenance. On the other hand, depending on how lines are run, you get faster and more secure communication to specific points (a line straight into town hall, the bank, and other ‘important’ places where near-instant communication would be useful, etc.).

On the other hand, current notes put the hub/spoke setup as ‘stage one’, where the background problem (beyond the big plot-moving situation) is: ‘this was built on salvaged technology, and smaller feeder lines radiating out from each spoke could tap into it and keep these hubs from acting as potential gatekeepers.’

  2. The scenario for the other story involves a still-organizing settlement using a landline network for the core ‘built and established’ settlements, and radio links to outlying nodes that are part of exploratory units that are either gathering resources, acting as security, or taking the first step in expanding the core settled area, so that new settler groups can be integrated and have a communications link without having to leave their immediate safe area.

I dunno if this is even proper for this group; I’m just kinda spitballing a set of half-baked ideas I’ve had for a few years and done depressingly little with. Something feels ‘off’ about both, or maybe it’s just a personal fear that both are somehow inauthentic.

2 Likes

Could be an interesting discussion…

First off, Minitel is, I think, very much a case of dumb remote terminals. No storage or compute power at the terminal end, only at the server end. Much like running a text mode browser, or a gopher client. But with teletext-style graphics.
https://www.flickr.com/photos/16658021@N07/46006878575

Yes, store and forward makes a lot of sense when connections are intermittent, unreliable, high latency. Usenet is like this: UUCP over dialup.

I’d add packet radio too. The original shared medium network connection, IIRC, was the ALOHA radio system in Hawaii.

3 Likes

Seems useful for the tentative steps during the hub-and-spoke phase. Easy to implement, but it really doesn’t offer a whole lot beyond a ‘dumb’ access point. Superficial as all heck, but I actually like how the terminals look, with the keyboard folding up to form a protective layer over the monitor. Not sure if I’d incorporate that; I might, though, either for flavor or for a moment of ‘guy breaks in using someone else’s credentials, but the reason it’s at least partially detected is a broken case lock’.

I’d been rewatching Jason Scott’s BBS documentary to try to get a feel for the sentiment of the era.

Kinda makes sense really given how hard stringing cable on the islands would be.

1 Like

One thing about UUCP is that it’s more like flood-fill than hub and spoke. You can imagine, for an important message, sending it out in three independent directions, and re-fanning at each changeover at intermediate nodes.

Possibly you could use the power line network - or whatever remains of it - to get signals over long distances. It’s very difficult to run new wires!

A long range mesh of wifi might be possible, if you had loads of talent in rewriting firmware! Wifi can work over some kilometers with appropriate antennas. And even after the techno-apocalypse there will be very many wifi routers to be scavenged.

1 Like

Memories of me attempting to wire a dish antenna to my router to see how far out it could go…

Flood fill is like, ‘the message radiates out from wherever the newsgroup discussion is stored to all subscribing nodes’?

Your bringing up wifi does bring forward interesting ideas.

1 Like

Yes, by flood-fill I mean each message goes to every next-hop node, which then forwards to all its neighbours.
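
To make that concrete, here’s a rough sketch of the idea in Python (toy names, not any real UUCP or Usenet code): each node remembers the message IDs it has already seen, so copies stop circulating once everyone has the message.

    # Toy flood-fill sketch; class and method names are made up for illustration.
    # Each node forwards a message to every neighbour, but remembers message IDs
    # it has already handled so duplicates die out instead of looping forever.

    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbours = []   # other Node objects
            self.seen = set()      # message IDs already handled

        def receive(self, msg_id, body, sender=None):
            if msg_id in self.seen:
                return             # duplicate: drop it, don't re-forward
            self.seen.add(msg_id)
            origin = sender.name if sender else "origin"
            print(f"{self.name}: got {msg_id} from {origin}")
            for n in self.neighbours:
                if n is not sender:    # don't bounce it straight back
                    n.receive(msg_id, body, sender=self)

    # Tiny demo: a triangle of nodes; the message reaches each node exactly once.
    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbours = [b, c]
    b.neighbours = [a, c]
    c.neighbours = [a, b]
    a.receive("msg-001", "hello, network")
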

1 Like

Let’s not forget The Clacks:

https://wiki.lspace.org/mediawiki/Clacks

3 Likes

You’d really think by now I would’ve read the Discworld books, given how often they seem to pop up in discussion.

1 Like

I love the Clacks. The system got constantly improved throughout the stories, often in order to increase the bandwidth. I remember a description that was clearly classic run-length encoding. And getting more bits per transaction too, somewhat like going from binary to decimal to hex to base64.
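
For anyone who hasn’t run into it, run-length encoding is about as simple as compression gets; here’s a minimal Python sketch (nothing Clacks-specific, just the classic technique):

    # Classic run-length encoding: replace runs of a repeated symbol with
    # (count, symbol) pairs, so long runs cost almost nothing to send.

    def rle_encode(text):
        out, i = [], 0
        while i < len(text):
            j = i
            while j < len(text) and text[j] == text[i]:
                j += 1
            out.append((j - i, text[i]))
            i = j
        return out

    def rle_decode(pairs):
        return "".join(ch * count for count, ch in pairs)

    msg = "AAAABBBCCD"
    packed = rle_encode(msg)           # [(4, 'A'), (3, 'B'), (2, 'C'), (1, 'D')]
    assert rle_decode(packed) == msg
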

There’s a round-world equivalent of the Clacks, as mentioned in the article Gordon linked. Used in France, I think? I didn’t follow the link through and it’s been a while since I read about it (the roundworld system). Not sure if it was as efficient as Terry Pratchett’s system.

And of course it would be possible to run TCP/IP over the Clacks too… possibly using the RFC mentioned in the same article. If TCP/IP by pigeons works (and it does), then The Clacks will have no trouble.

1 Like

Indeed, T.P. did seem to have done his research and put a lot of real-world understanding into his books. From that Clacks page:

The real-world counterpart of the Clacks was known as the optical telegraph or Semaphore Line. Invented in the late 18th century and operated into the early 19th century before being made obsolete by electrical telegraphy, semaphore lines were used by the governments of France, Britain, and other European countries to convey vital information more rapidly than horseback riders could. Semaphore lines could only send about two words a minute and were thus much less efficient than those of Discworld. They also were not available to the general public, being used only for military and government use.

The Count of Monte Cristo by Alexandre Dumas père contains a fictional account of a successful exercise to subvert the operation of one such semaphore tower. This scene almost certainly had some input into the Discworld events along similar lines.

There’s also “The Victorian Internet”, a good (factual) read about telegraphy.

2 Likes

No, UUCP itself was purely point-to-point, with no facilities for anything other than “queue this file and/or command to be sent to a node that will be direct-dialed.” Anything passed through multiple machines was done via one of several higher-level protocols that required further co-operation from the nodes involved. (One of these protocols was an e-mail system using the classic bang paths for addressing and routing, e.g., mcvax!moskvax!kremvax!chernenko.)
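
To illustrate how a bang path encodes its route (a toy sketch, not any real rmail code): each host peels off the first component, sends the rest along to that neighbour, and the process repeats until only the user part is left.

    # Hypothetical bang-path handling; function names are invented for illustration.
    # "mcvax!moskvax!kremvax!chernenko" means: hand the message to mcvax, which
    # hands it to moskvax, which hands it to kremvax, where user chernenko reads it.

    def next_hop(bang_path):
        """Split a bang path into (neighbour to send to, remaining path)."""
        hop, _, rest = bang_path.partition("!")
        return hop, rest

    hop, rest = next_hop("mcvax!moskvax!kremvax!chernenko")
    print(hop, "->", rest)   # mcvax -> moskvax!kremvax!chernenko
    # Each intermediate host repeats the same step on the remaining path.
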

UUCP over dialup is one lower-level protocol over which the Usenet protocols (mainly RFC 850 and its update, RFC 1036) could be run, but far from the only one. In the early days X.25, ARPANET and even FidoNet were also used. By the mid-90s NNTP over TCP/IP was the most common protocol over which Usenet was run. (Like SMTP, NNTP is also still store-and-forward, allowing at least to some degree for disconnected operation.)


Looking at the original question, the main issue the societies are dealing with appears to be intermittent connectivity (and perhaps even intermittent operation of their computers). For this, an asynchronous networking system makes perfect sense, and there’s plenty of historical precedent for it, including RFC-822 email transferred in a multitude of ways, other store-and-forward email systems such as FidoNet, and of course Usenet as mentioned above.

I’d imagine that their system would quickly evolve to have a widespread “batch”-oriented store-and-forward network at the lower level, with large chunks of data (e.g., a complete message, file, picture or video) identified by hashes and (globally unique) request identifiers, and transferred via various lower-level protocols that allow the chunk level to use offers and requests to determine what needs to be transferred, with these messages indicating chunk identity and size.
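
A rough sketch of what those chunk-level messages might look like (all names and fields are invented for illustration, not taken from any existing protocol):

    # Hypothetical chunk-level messages for the batch store-and-forward layer.

    import hashlib
    import uuid
    from dataclasses import dataclass

    @dataclass
    class ChunkOffer:
        chunk_hash: str   # content hash identifying the chunk
        size: int         # bytes, so the receiver can decide whether to pull it now
        request_id: str   # globally unique id tying the offer to a request

    @dataclass
    class ChunkRequest:
        chunk_hash: str
        request_id: str

    def make_offer(data: bytes) -> ChunkOffer:
        return ChunkOffer(
            chunk_hash=hashlib.sha256(data).hexdigest(),
            size=len(data),
            request_id=str(uuid.uuid4()),
        )

    offer = make_offer(b"a complete message, file, picture or video")
    # A node queues offers and requests for whatever lower-level transport is
    # available (radio, landline, or a storage device on the next supply boat)
    # and only sends the bulky chunk itself when the far side asks for it.
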

The upper level would consist of various protocols running on top of the lower-level protocol described above, for doing things such as:

  • Email and Usenet, as described above.
  • Point-to-point file transfer: sending a request to a particular node for a file or set of files, and the node sending back in some way what was requested. (The response indicating what’s being returned might be via a different lower-level protocol than the actual return of files.)
  • File sharing search: sending a request for a file or group of files that have a particular identifier (e.g., en.wikipedia.org/wiki/Usenet) to get information about what nodes have this archived and available to send. This would also include the ability to delegate the actual collection of files to another node (if it has agreed to do this for the requesting node) that would transfer them all back to the requester.
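
Here’s roughly what such a search/delegation request might carry, riding as an ordinary chunk on the lower level (field names invented for illustration):

    # Hypothetical upper-level file-sharing search request; it would be serialised
    # and carried as an ordinary chunk by the lower-level protocol.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FileSearchRequest:
        identifier: str                     # e.g. "en.wikipedia.org/wiki/Usenet"
        reply_to: str                       # node that wants the answer
        delegate_to: Optional[str] = None   # node that agreed to collect files for us
        request_id: str = ""

    req = FileSearchRequest(
        identifier="en.wikipedia.org/wiki/Usenet",
        reply_to="showa-station",            # hypothetical node names
        delegate_to="supply-depot-relay",
        request_id="req-0001",
    )
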

So here’s how a scenario might play out. I’m browsing the local Wikipedia archive at (what’s left of) Showa Station, Antarctica. While reading this page I note that there’s an error in it, and compose a change request to fix it, as well as clicking on a few links leading to material I don’t have locally, thus generating requests for it. Done with this, I send an email to a friend, letting him know how things are going (cold!) and that I’m sending him some photos. Separately, I send the photos to him through a file transfer protocol.

Unfortunately, the radio link is down while I’m doing this, so everything I’ve done can only be queued locally. Eventually the radio comes back up, allowing me to send the Wikipedia change request, the requests for further Wikipedia pages, and the email. The photos, however, are too large to send over the slow, unreliable radio link, so all that’s sent there is identification information about them. Soon enough a reply to the Wikipedia change request comes over the radio, and an acknowledgement of the Wikipedia page requests.

Eventually the supply boat shows up, and with it a portable storage device containing more data for me, including the Wikipedia pages I requested. I copy those from the device and write receipt acknowledgements to the device, as well as writing the photos I’m sending to my friend. These are carried back to civilisation (such as it is), though the node that sent me the Wikipedia pages will probably get my acknowledgement via radio before the storage device arrives and is reconnected to the remote networks.

2 Likes

That’s mistaken. UUCP did the routing between hosts. If you did:

uucp file.dat a!b!c!file.dat

The underlying system would route the file through the “bang path” and, eventually (assuming proper cooperation by the intermediary sites), the file would arrive at system c. Each system would cooperate and move the file along.

It was not just point to point. This capability was fundamental to all of the other systems built on top of it.

On the one hand, “yes”, it was “point to point” in that you sent a file from one specific system to another; you could not implicitly “broadcast” to several destinations. But the route to that eventual system was not necessarily point to point. Routing was intrinsic to UUCP.

Later, email via UUCP supported the idea of “smarter hosts” that knew more about routing: you would send a message “directly” to a destination, and the underlying system would route it there, versus having to specify a bang path. But many addresses were bang paths from a “well-known host” that was well connected and then “manually” routed.

But that was the email system on top of UUCP.

USENET was mostly ignorant of UUCP; it just knew about newsgroups. UUCP was used underneath for moving the messages, but as a user you didn’t need to know that.

1 Like

I am pretty sure you’re misunderstanding what’s going on with that command.

The multi-bang syntax for uucp, as far as I know, was a 1990s extension to that command. (Neither the V7 nor the 4.3BSD documentation mentions allowing more than one !, and this pair of Ultrix manpages appears to be an earlier and a later (2005) manpage for uucp, where only the later one offers multi-hop routing.) And this only worked if every system was running a version of UUCP that supported this syntax, as mentioned in the “Notes” section of the Solaris 11.2 manual page for uucp:

The forwarding of files through other systems may not be compatible with the previous version of uucp. If forwarding is used, all systems in the route must have compatible versions of uucp.

The way uucp file1 foo!bar!baz!file2 worked was by using the UUCP protocol merely to transfer the file to foo (as in uucp file1 foo!tempfile), and then additionally sending foo a uucp tempfile bar!baz!file2 command to be executed to copy it to the next system, and so on down the chain. This was no different from doing the same thing yourself with uucp and uux, just a bit more convenient. In fact, the original uucp command would already do the same thing for you for remote-to-remote copies (uucp foo!file1 bar!file2, where the command is run not on foo or bar but on a third host), as described in the V7 and 4.3BSD documentation above:

Uucp generates a uucp command and sends it to the remote machine; the remote uucico executes the uucp command.

So what you’re doing there is not just the UUCP protocol any more; you’re also executing a (rather hacky) “higher-level protocol” that uses UUCP to transfer a single hop and then sends commands around to do further transfers.

But in general, even if you just generated your own uux commands to try to do this, it wouldn’t work. UUCP nodes, for what I hope are fairly obvious reasons, generally accepted no commands from remote hosts other than rmail and rnews, which implemented the higher-level e-mail and Usenet news protocols. Those programs, not UUCP, were responsible for figuring out how to further route received mail and news messages, and by the early '90s there was a good chance that a UUCP node’s mail and news would be transferred through other means (the nascent Internet) after that first UUCP hop. (This was certainly the way it worked for the UUCP node I ran at the time; except for mail to other UUCP hosts directly connected to my UUCP provider, my mail was transferred via SMTP after getting to my provider.)

1 Like

I didn’t get involved with UUCP until the ’90s, but the multi-bang path was supported on all of the common implementations I was aware of at the time, including HDB, BNU, and Taylor UUCP. But the bang path had a history back to the early ’80s in email, so I can’t say when it got into UUCP proper.

But from your V7 link:

So, they clearly had the concept in mind if not specifically laid out by example in the man page.

Of course it’s different: it’s supported by the underlying toolset, by UUCP. It’s a first-class concept. Multi-bang path support is a minor leap over the single bang path: simply record the originating host and the ultimate destination, and promote that as the file is moved from system to system.

The underlying system was, like most systems back then, quite simple. It was a queueing system, a modem dialer, and a serial handshake protocol. The modem dialer was a pain in the neck. The serial protocols were a complexity of their own. But at a high level, on top of those primitives, yeah, it’s a pretty simple system: move files from A to B, and “do something” with them when they arrive. If you want to consider adding multi-bang paths a “hack”, you may as well consider the entire thing one.

If someone were “hacking” it themselves, then all of the systems along the line would have to implement the same “hack”. Instead, they simply updated to a later version of the UUCP programs that supported it.

UUCP is a system. uucp is a program in the system. The UUCP system supported many different line protocols (typically represented by letters like ‘a’ or ‘g’). The connecting systems negotiated which line protocol they were using.

UUCP has always had a “higher level” protocol, this is what was done via the exchange of the command files that came along with the actual data files. “Here’s the file, here’s what to do with it”. The basic version passed along the final destinations of the files so uucico could park them there when it was done. But uux could execute arbitrary commands on the destination systems.

USENET is not UUCP. USENET was built on top of things like UUCP. UUCP was used for things besides USENET.

There’s always a level of trust between the systems along a UUCP network. By today’s standards, it’s pretty much a security nightmare. But those were friendlier times. All files, for all systems, were managed at the pleasure of the individual hosts doing the work.

From storing the files, to making the phone calls (if they did), to prioritizing the traffic.

And there were always rote disclaimers about what other systems may or may not allow you to do.

We used multi-hop UUCP for our own and customer systems. We’d use it to send updates and install software and such. These weren’t plugged into the “wild” or accepting calls from strangers. We had the power to do awful things to customer computers. We just didn’t.

1 Like

Though I have never had any personal experience with Fidonet, it is very relevant for retrocomputing.

2 Likes

I have been following this discussion with interest and honestly am flattered at the sudden activity. However, I find myself going glassy-eyed at a lot of it.

And that lack of comprehension bothers me, because if I don’t understand it, how do I communicate this sort of debate to a reader who probably has even less knowledge?
That said? The society envisioned is effectively picking up old gear, old technical manuals, and a few still-living accounts, and each segment of the world is trying it their own way. Which means there may be several towns connected via one standard, another using an internal system but not connected to the rest, or someone using the rail network as a backbone for their own thing (or the rail lines adopting their own standard that gets laid down with new rail).

I love the back and forth that’s gone on here, because I could totally see people arguing this both in person and via remote means (letter, email, courier, whatever).

1 Like

Well, I’m going to disagree with that characterization. If you say that multi-bang paths are “supported by the underlying toolset,” you can also say that “FTP is supported by the underlying toolset” because you could just as easily send a command to the remote system to FTP a file and copy it back to you via UUCP.

The key here is to look at the UUCP protocol itself; you’ll note that in the protocol there is no concept of paths at all, or even hostnames outside of the initial handshake: just “transfer from me to you” and vice versa.

UUCP has always had a “higher level” protocol, this is what was done via the exchange of the command files that came along with the actual data files.

I would not call those higher-level protocols part of UUCP, any more than I would call HTTP “part of TCP/IP.” Systems that support TCP/IP do not necessarily support HTTP (and more often don’t run HTTP servers than do, actually).

And note that, as I mentioned before, the higher-level protocols do not necessarily use UUCP. As in my example of mail for my system back in the early '90s, UUCP was used only between me and my immediate neighbour host; the mail protocol continued over TCP/IP after that. (And for users with logins on that system, they’d send mail using the same higher-level mail protocol but UUCP would often never be involved. Same for Usenet news.)

So, to clarify: by the 1990s UUCP bang paths were basically never used. An email message might have a header line To: moscvax!kremvax!kchernenko in it, but UUCP would never see this; that’s just internal contents of a file (that UUCP does not examine) that’s passed to another system, which gives that file to rmail and rmail interprets that as it wishes.

There’s always a level of trust between the systems along a UUCP network. By today’s standards, it’s pretty much a security nightmare.

Not really, no. There was an incredibly limited level of trust between me and my upstream (I allowed it to execute only rmail and rnews), and no trust at all between me and other UUCP nodes, which couldn’t even connect to me via UUCP.

This is why I’ve been so careful to distinguish the protocol layers.

The key here is to remember that “mail” is its own protocol, and that’s handled by passing a file with a mail message to the rmail (or similar) program. How that happens is irrelevant. It could be any of the following (there’s a tiny sketch of this layering after the list):

  • Transfer the file to a neighbour via UUCP and ask it, via UUCP, to run the rmail program on it.
  • Upload the file via XModem and somehow ask the remote system to run the rmail program on that file.
  • Put the file on a disk or tape, mail the disk or tape to someone, and ask them to run rmail on the file on the disk or tape.
  • Ask an SMTP client to read that file, contact an SMTP server, and transfer the contents of that file to the SMTP server via SMTP.
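
A tiny sketch of that layering point in Python (invented names, just for illustration): the message is a file, and the transport that carries it to rmail is a pluggable detail.

    # The mail message is just a file; how it reaches rmail is interchangeable.

    def deliver(message_file, transport):
        """Hand the message file to whatever carries it; rmail runs at the far end."""
        transport(message_file)

    def via_uucp(path):   print(f"queue {path} for a neighbour and ask it to run rmail")
    def via_xmodem(path): print(f"upload {path} over serial, ask the remote to run rmail")
    def via_postal(path): print(f"write {path} to disk or tape and mail it with instructions")
    def via_smtp(path):   print(f"open an SMTP session and stream {path} to the server")

    for carry in (via_uucp, via_xmodem, via_postal, via_smtp):
        deliver("outgoing.msg", carry)
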
1 Like

My advice would be not to worry too much about it. What’s interesting (to me) is methods, and results: especially in a future context where records might be patchy. What’s better, to create a way of communicating moderately reliably and cheaply, or to find some precise historical specification and follow that slavishly? All historical protocols were invented, at some point, by people trying to solve problems. And most of them were subsequently revised. And almost all of them fell into disuse as the situation changed.

I like to think of the times when ships were the only way to transport messages across oceans. Families stayed in touch, remote agents transacted business, people travelled for weeks or months and then spent years somewhere, possibly returning and possibly not. When it takes weeks to get information to someone, and months to have a series of exchanges, you necessarily take a different approach to trust, to delegation, to independence. It’s very much the opposite of sending an email or text and getting annoyed not to hear back within minutes!

It’s great to see discussions here where people bring their understanding and recollections and experience. No-one has to be right, everyone is wrong some of the time, and it’s better to bring something new than to keep rehashing a point.

2 Likes

Well, I think you should worry a lot about the actual mechanics of how this communication would work, though I agree you don’t need to worry so much about clearly communicating all the details to the reader.

Especially in science fiction, but even in day-to-day literature and other stories, a coherent and (certain accepted fantasies aside, such as FTL travel) realistic background helps a lot, even if the reader only ever sees parts of it, and sees it only through the story rather than directly. The presence (or not) of that will be clear to the reader whether or not he knows it exists. This is why every major television series has a “bible” recording all of the background information and every detail that comes out on the show. Many SF writers do the same for their works.

One of my favourite examples of incredibly well done research and background work is Vernor Vinge’s A Fire Upon the Deep. Though written in the early 1990s, it holds up perfectly fine today because he worked out the ideas and got them right, from the physical shipping of several different cryptographic one-time pads via different routes, to be assembled via XOR at the destination, to how his equivalent of “Usenet/Internet chat” worked. Even reading it in 2020, from the opposite side of Internet history, everything is perfectly plausible and makes perfect sense.

The design I put forward was not just a random idea, but exactly where I would start if I were thrust right now into the situation you described and had to keep communications going. Nothing in what I’ve described is particularly novel, and much of it is direct re-use of existing software and protocols we use today or have used in the past.

What’s particularly interesting, though, are the protocols that survived, and often survived almost unchanged.

The RFC-5322 email we use today is simply an extended version of RFC-733 from 1977, and most messages in that format are still valid messages today. Careful attention to protocol layering helped a lot with this: in my previous post I mention four different ways of transferring RFC-822 e-mail (as I like to call it; it’s the same thing as the other two above), and there are many more options, including the ones I proposed (implicitly) in my original post on this topic. That kind of layering lets us get maximal re-use out of existing technology while substituting other things for parts of the system that no longer work (due to environmental changes or otherwise).