What if the Internet Never Existed?

I normally like Cody’s videos, but this one just plain sat wrong with me because he seemed to ignore that BBSes and other connectivity existed. A thing they seem to be skipping over is the BBS, which was actually coded up by a couple of guys in Chicago when a snowstorm basically shut the city down; they were bored and decided ‘hey, let’s hook the computer club’s phone line up to a computer.’ One of these guys was actually involved in ARPAnet in a very tangential manner, and so had some experience connecting computers that didn’t necessarily share operating system standards (CP/M being the dominant OS at the time). It is my personal feeling that the BBS would still happen, as it was a very ‘bottom-up’ affair that sorta grew into its own thing with no real connection to ARPA, NSFnet, or anything the government was doing. BBSes ended up spawning FidoNet, which allowed BBSes to exchange information with each other, so you could get messages from one end of the country (and eventually the world) to the other with no user NEEDING to pay anything (though individual regional nodes would often try raising money for operational costs).

Their take is of a world where there was no networking outside of the government sector. I would like to see you do an alternate history where the BBS continued to hold sway.


  • What would be ‘The Internet’ as you would think of it? Not the Web, which I consider to be the advent of HTML linked documents living on the Internet (or Gopher, or any of a dozen other protocols that might or might not exist to serve in a time/space where a ‘network of networks’ isn’t a thing). This is perhaps the most important question, because we have to establish what it is exactly that no longer exists when the timelines diverge, and why this thing we know as ‘The Internet’ does not exist, so we can figure out how the rest of the dominoes may or may not fall.

  • Would University run networks still be a thing? I could see campus-centric ‘Chicago University’ or ‘NYU’ or ‘FSU’ networks that you would be issued an account for if you were a student or staff, but would Usenet develop there without NSFnet to act as a backbone?

  • Would the Intel x86 still be the standard in computing? Would there be any room for ‘home micro’ machines like the Atari ST line, Amiga, etc.? Mildly important, but more a curiosity.

  • ‘The Internet’ is way more than just ebay, reddit, youtube, and cat pictures. It’s a backbone to hang services off of so you can do remote banking, reservations, business-to-business transactions, and the like. I could see an ‘internet-like’ thing existing, but never getting into public hands, since it’s seen as a business tool, distinctly separate from DARPAnet, that had grown up for similar reasons: getting machine A to talk to machine B even though the two don’t share a common OS or interface.

  • Can anyone think of any other questions I should be asking and/or pondering that I haven’t touched on? It’s one thing to go ‘I do not know this thing,’ but a whole other matter to suddenly realize, after the fact, just how much you didn’t know you didn’t know to begin with.

I guess everyone would have settled for OSI networking, which was already the declared standard. It was pretty much thanks to CSnet (the Computer Science Network) and the NSF (the US National Science Foundation), which gave rise to NSFnet in 1985 and its spread over academic institutions, that the Internet as we know it (based on TCP/IP) came into being, right after it had become obsolete as far as standards were concerned. – Probably, there would have been a similar network anyway, but optimized in other ways.

I know nothing of this. Could wiki. In fact:

Anyone here use the darned thing?

The OSI layer model is what’s left of the OSI networking standards – from transport layer to application layer and back again …
OSI was started in 1977, published as a standard in 1984, and formally adopted by NIST (the US National Institute of Standards and Technology) in 1985. (I believe it was adopted as an ISO standard as well.) So, technically, in 1985, when TCP/IP started in earnest with NSFnet, it was already deprecated as far as standards are concerned. – Some future isn’t totally without irony… :wink:


Another alternative future: In the U.K., Donald Davies and his research group at the National Physical Laboratory (NPL) in Teddington had pretty much the same idea in 1965: packet switching, IMPs as dedicated network processors, distributed control, etc. By 1966 they had proposed a U.K.-wide network running at 1.5 megabits/sec and put up a test node running on a Honeywell computer, but the plan was eventually canceled by the British Post Office, which denied any further funding. However, one of Davies’ deputies, Roger Scantlebury, was in the US to tell the story and provide some essential input. As a result, plans for ARPA’s packet-switched network were upgraded from the original 9.6 kilobits/sec (which was what was deemed affordable at the time) to 50 kilobits/sec, and networking became much more viable. (Just imagine what starting with 1.5 MBit could have meant for networking.)

Edit: Again, as with OSI vs NSFnet, it wasn’t so much about a lack of (competing or parallel) ideas, but much more about funding and establishing critical traction and reach, which brought us the Internet as we know it.


As my initial response to the video noted, networking as a concept seemed like an inevitability, since everyone had the same problem: ‘too few REALLY expensive computers that don’t speak the same language; how do you move data across when one is in DC and the other in California… or even the Pentagon having five machines all incompatible with each other?’

I’m just looking into the actual video – and there are some obvious problems with it. One is the usual tale about packet switching and the Department of Defense being involved. Indeed, packet switching was first advertised for its redundant robustness in the case of a nuclear war and maintaining command infrastructure, but this was years before ARPANET and for voice communications. (Conceptually, Voice over IP actually came first.) As for ARPANET, packet switching was chosen for speed and cost. The alternative, circuit switching via direct lines, would have required minutes to establish a connection coast to coast, and would have required expensive infrastructure as well. Relaying packets on individual paths point to point, on the other hand, was faster and cheaper, but required some commitment in terms of computing power by the nodes involved, which was solved by dedicated network processors (the IMPs).
Second, the story jumps a bit too fast from A to Z: Especially, I’m missing the Compatible Time Sharing System (CTSS) at MIT, showing well that respective social phenomena (including security and privacy issues, something Multics was supposed to address especially) were popping up as soon as there were shared and nonvolatile (stored) resources available.
Thirdly, TCP/IP wasn’t “released to the public”, but formally contracted out by ARPA to Stanford, BBN, and University College London, expressly in order to establish a public protocol based on ARPANET technology.
Then, there had been networks before ARPANET: Famously, there were three different networks reaching into J.C.R. Licklider’s ARPA IPTO office, suggesting a unifying approach, forming a network of networks (which is, what TCP/IP is really all about). (On the other hand, another major goal of ARPANET, requesting ad-hoc computational resources from foreign nodes didn’t become a reality before the advent of cloud computing and looked much like a complete failure just until recently.)

However, communities, electronic mail, discussion boards of sorts, public documents, etc, had been around on the various timesharing systems and their associated remote networks, even before there was any wide area network of note.
I guess broadening these networks and access to these resources was inevitable, one way or the other. (E.g., CTSS went online in 1961, and by the mid-1960s there were already all sorts of plans for nationwide networks.) But there were discussions about how these networks should be configured in terms of control and topology. An alternative was centralized networks acting as an information utility service, much like cable TV, or what AOL was attempting, or what the Internet is generally gearing towards today (including broadband services in the US, according to the FCC). (I guess the French Minitel deserves an honorary mention, as well as various attempts to establish similar services outside of France. Also, compare the Japanese DoCoMo mobile network, from which we inherited emoji.) – We’ll still have to see how this may work out, eventually…


I haven’t (yet) watched the video, but…

In the UK, the universities built JANET, and one can imagine a future in which the universities extended service to technical colleges and schools and eventually libraries. Certainly one could imagine Minitel, because it actually happened! Relatively dumb terminals, offered free to every home, because it was cheaper than printing and distributing phone books.

In a corporate setting, large multinationals such as Philips and TI certainly had networks, using expensive leased lines, and I would expect they’d become connected as various joint projects cropped up. You’d have gateways between various messaging systems. (At Philips there was SERINET and I recall it was gatewayed to JANET.) The high cost of leased lines could be mitigated by selling capacity: if you have encryption and traffic shaping that should be safe.

There was uucp and usenet: I recall at INMOS there was a periodic dial up to connect and share Usenet updates, Just as a large engineering company would have a technical library, it would want information exchange.

In fact I think connectivity for end users falls out naturally from the mission of libraries. One can imagine local libraries acting as nodes on a store and forward network, and allowing dialup by local members.


UUCP/USENET and, similarly, FIDONet were pretty viable. It was occasional connectivity with store and forward capability. I used to use UUCP for USENET and Mail with a 2400 baud modem.

You could do things like:

uucp a!b!c!/file1 d!e!f!/file2

That “file1” would hop from system c, to b, to a, back through the local system, then out to d, to e, and, finally, to f. Six copies and transfers in all. Certainly not spectacularly efficient, but when you don’t have point-to-point connectivity, there’s not much you can do.
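The hop arithmetic is easy to model. Here’s a toy Python sketch of how a bang path resolves into a chain of store-and-forward copies (illustrative only: bang paths are written relative to the local host, and real uucp adds spooling, grades, retry schedules, and much more):

```python
# Toy model of UUCP bang-path routing. Each hop is a separate
# store-and-forward copy of the file; nothing here talks to a
# real uucp installation.

def hops(src_path: str, dst_path: str) -> list[str]:
    """Return the systems a file visits, in order."""
    src_systems = src_path.split("!")[:-1]   # drop the trailing file part
    dst_systems = dst_path.split("!")[:-1]
    # The file travels back along the source path, through the
    # local host, then out along the destination path.
    return list(reversed(src_systems)) + ["local"] + dst_systems

route = hops("a!b!c!/file1", "d!e!f!/file2")
print(route)           # the chain of systems the file is copied through
print(len(route) - 1)  # number of transfers (one new copy per hop)
```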

However, from another point of view, from the meta-themed “what if the internet did not exist” concept, consider that for some, today, the internet does not exist, or the internet that they know today is ceasing to exist as polities are locking down their internal networks more and more.

In a world of ubiquitous point to point connectivity, we may well in the future have drop boxes and forwarding services to hurdle the walls the local nets try to establish.

You can also see examples where you have a secure bastion as a gateway into a network, but no direct connectivity from an outside machine to an internal machine. You can see how uucp can be used today (uucp is completely viable over the internet and secure IP connections):

uucp localfile bastion!backendserver1!/ultimate/destination

Perhaps uucp will return someday.


I have now watched the video - interesting how ‘the internet’ is taken to include streaming video. Of course, for many, it does, and there’s lots of that, and it’s probably changed the world. But for me it’s not too important - the ability of people to have conversations is the important thing. And the ability to find information, and to disseminate it.


For me his ‘does not touch the civilian sector’ is the part that makes this unrealistic.

If nothing else, an ‘internet-like’ backend for business and finance would be pretty much an inevitability. It’s too valuable a tool.

There was an “internet” for business. They were (are?) called VANs – Value-Added Network. Used mostly for EDI transactions. Essentially if you wanted to get a Purchase Order from Walmart back in the day, you signed up with the appropriate VAN and downloaded them. I have no idea if VANs were integrated, by the time the company I was with started needing them, I had a foot out of the door and didn’t do the integration. (I boggled when they dumped $3K for some software to map and pull apart an (THE, they were only getting one) EDI message when I could have probably done it in Perl in an hour.)

But these services certainly existed to facilitate business traffic.


Folks have already mentioned FidoNet, UUCP, and UseNet. There was also X.25, including a service called Telenet which both had a subscription service where you could dial a local modem and then dial out from a remote modem, avoiding long distance bills, and a service that businesses could subscribe to in order to get a nationwide local modem presence.

There was also SMDS and later ATM, which unlike frame relay would potentially let users create virtual circuits “on the fly”.

Places that didn’t have a big BBS scene, perhaps because of phone tariffs or phone line quality, had much larger amateur packet radio scenes, including automatic routing protocols that aren’t widely used in the US. I suspect these would have gotten more use in the US. There might have even been enough pressure to allow commercial use that we could have seen public packet radio networks sooner.

And obviously commercial online services like CompuServe, AOL, and Prodigy would have lasted a lot longer. Maybe they would have eventually interconnected.

x86-based PCs would still dominate; they were dominant long before the Internet allowed commercial use, and most other platforms got an IP stack around the same time if not earlier, so I don’t think the Internet particularly helped it dominate.

Ultimately, though, I think we still would have ended up with an internet without THE Internet. As in, a global public packet-switched network supporting arbitrary applications.


I once pondered how something like the WWW could have evolved from UUCP, much as USENET did (sort of like how HTTP resembled anonymous FTP).

Basically, a web site was just a bunch of files with links, right? Well, UUCP could be used to store and distribute copies of entire sites. So you’d have something like an offline reader, where you middle-click on links to open the linked pages in tabs, right? But only some of them would be available locally. For unavailable tabs, a request is sent for a copy. This doesn’t necessarily have to come from the original site; it could come from a nearby cache.

Of course, this paradigm is mostly suitable for non-interactive sites which are updated periodically. But with UUCP Service Providers that are connected to multiple computers at once? A request could be sent to a huge cache and a response delivered within seconds. Eventually, USPs would be interconnected with each other, so interactive sites could be practical.
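That reader behavior can be sketched in a few lines of Python. Everything here is invented for illustration (the class, the method names, the batch-delivery step); it only shows the cache-hit-or-queue-a-request logic described above:

```python
# Hypothetical offline-web reader: links resolve from a local cache,
# and misses are queued as store-and-forward requests to a USP or a
# nearby cache. All names in this sketch are made up.

class OfflineReader:
    def __init__(self):
        self.cache = {}          # "site!/path" -> page content
        self.request_queue = []  # paths to request on the next connect

    def open_link(self, url):
        if url in self.cache:
            return self.cache[url]      # opens immediately in a tab
        self.request_queue.append(url)  # tab shows "pending"
        return None

    def batch_arrives(self, pages):
        """Pages delivered later by the USP (not necessarily the origin site)."""
        self.cache.update(pages)
        self.request_queue = [u for u in self.request_queue if u not in pages]

reader = OfflineReader()
reader.open_link("example!/index.html")          # miss: request queued
reader.batch_arrives({"example!/index.html": "<html>...</html>"})
page = reader.open_link("example!/index.html")   # now a local hit
```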


The problem is simply that you’d have to “download and update all of the internet” on a routine basis.

USENET essentially did this. The entirety of the newsgroups ecosystem (i.e. all of the messages) was replicated and duplicated all over the world, all the time. Mind, it was also purged. Dedicated sites had the storage and resources to keep messages eternally, but most folks simply couldn’t do that. So, they purged early and often.
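The “purge early and often” policy amounts to a per-group retention window. A minimal Python sketch (retention values here are invented; real news servers did this with dedicated expiry tools driven by config files):

```python
# Toy expiry pass over a news spool: an article survives only while it
# is younger than its group's retention window. Numbers are made up.
import time

RETENTION = {
    "comp.sys.amiga": 14 * 86400,    # two weeks for a text group
    "alt.binaries.misc": 2 * 86400,  # binaries are huge, purge fast
}
DEFAULT_RETENTION = 7 * 86400

def purge(articles, now=None):
    """articles: iterable of (group, arrival_time, article_id) tuples."""
    now = time.time() if now is None else now
    return [(g, t, a) for (g, t, a) in articles
            if now - t <= RETENTION.get(g, DEFAULT_RETENTION)]

now = 1_000_000_000
spool = [("comp.sys.amiga", now - 10 * 86400, "<kept@example>"),
         ("alt.binaries.misc", now - 3 * 86400, "<purged@example>")]
spool = purge(spool, now)  # only the ten-day-old text article survives
```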

But it still wouldn’t scale in the end. I look at some of the truly popular forums on the internet, and their scope and scale. Now imagine having entire copies of those on your system, downloading all the time, trying to keep track of everything. Sure, you could subscribe and filter out the bulk of it. But the beauty of the connected network is that you can dive down holes you didn’t even know existed before.

The power of the web was its instant interconnectedness. Recall the old days of Encarta and other similar encyclopedia efforts: it was nice to have the entirety all cross-linked and referenced and a click away (at worst having to wait for a CD-ROM to spin up). Having 640MB(!!) of data quickly accessible on the CD was a boon.

But it was also static.

Modems were bandwidth limited. Phone calls were expensive. Once that bandwidth got high enough, and cheap enough, then real time interconnected systems could take over.

Early on, the scaling would be very different. Like I said, it would work best with sites that were updated periodically. Basically like “news” sites.

But things would get different once you started having USPs with large caches constantly networked with each other. These USPs wouldn’t need a local copy of anything already cached on another USP they’re connected to. So, anything on the “USP Web” only needs one copy somewhere on the USP network. Ultimately, you’d end up with a large number of 24/7 connected site hosts which don’t need copies made.

So really … the end result would be similar – but with one major difference: anyone could host their own web sites. During the dial-up era, any home computer owner could host their own UUCP site and push updates for it out to their USP. Eventually, the transition to the WiFi era would mean computers would be connected 24/7. But the legacy of “anyone can host” would be a more “peer-to-peer” sort of web.


I like the idea that we could have free and full access to all the world’s information, without it necessarily being instant. At one point, at work, I used to use email gateways to get indexes of ftp sites and then to fetch files. Before that, at Uni, I used to write letters to home and to schoolfriends, and the round-trip would be a week. Before that, at school, we had a weekly turnaround for punched cards. And before that (I think) the inter-library loan service would (presumably) take at least a couple of weeks to get A Fortran Coloring Book.

A slower pace might not be a bad thing - certainly it’s not unbearable.

So a multi-hop store-and-forward which deals firstly with indexes or lookups and secondarily with content, and which takes hours or days to retrieve the desired document, that’s perfectly workable. It’s a whole lot faster than writing to a monthly magazine and getting a reply printed two or three issues down the line!