On what computing is and whether it is simple

The singular problem with this is that computing is neither simple nor understandable. Not for novices. Not for people who “don’t think like a computer”. Computers are the most pig-headed and opinionated things on the planet. And they will always be this way.

The only way to make them “simpler” and “more understandable” is to pile abstraction upon abstraction upon abstraction on top of them. That’s why a billion people can use Facebook on their phone without knowing a bit from a byte.

Because while a “computer” (i.e. a CPU) may (may) be simple, computing is not.

It’s a wide leap from getting a CPU to make an LED blink to doing what the vast majority of people want to do with computing today.

Machine language does not “reveal” computing, or make it more understandable. Especially today. We look at the herculean efforts of things like Visual 6502 trying to provide total transparency of the computing process, and it’s still not very approachable. And, moreover, it doesn’t remotely represent anything like what a modern CPU does today. The simple fact that “machine language” is actually a high-level language implemented on top of something even more primitive on modern microprocessors makes the head swim enough as it is.

This is why kids are not taught machine language. They’re taught at a far, far higher level where the underlying CPU is utterly buried (as demonstrated by the fact that their tools run on lots of different CPUs and environments). They’re taught logic, sequencing, and syntax.

I am not a modern CPU designer, but I believe that even they must work at a rather high level: “coding” new CPUs in higher-level concepts and then compiling them down into silicon for performance, versus wiring together gates and transistors. Perhaps wiring gates by hand is still done at the very tiny microcontroller level, but even there the work is high level, when it seems most every peripheral chip is actually an embedded micro-microcontroller rather than a vast array of hand-tuned gate logic.

In the end, to be successful with computing at many levels simply no longer requires intimate low level knowledge. Even the definition of what a computer is, is in flux. Our CPUs have sideline management CPUs built into them, with their own OSes. A huge amount of computing today runs in virtual environments. When that black square thing on the circuit board has 32 cores and can be “patched”, what does that even mean when talking about the CPUs of old?

So, frankly, plugging “7F”, or anything else, into a computer doesn’t really make it more understandable, or useful. It’s borderline interesting knowledge for those who want to go there, but it’s certainly not the place to start.

Is this just argument and contradiction? Can we please focus on adding new and interesting information to discussions?

@EdS I think it’s a reasonable argument to make, and one that should be addressed. (It may want to be moved to another thread if it grows beyond three or four posts.)

Simplicity is relative: things are simple or not in comparison to other things, not inherently simple in and of themselves. Thus it makes no sense to say that computing is “never simple” without explaining what you’re comparing it to, and arguing that that’s what someone else was also (or should be) comparing it to. By some standards of comparison, almost nothing in the world is simple, and it’s certainly true that average people navigate systems of massive complexity every day (e.g., going out to lunch), feeling that they’re relatively simple.

Computing is clearly understandable, since we are able to successfully construct computer systems. Certainly many systems are not completely understandable by one person, but that doesn’t mean it’s not possible to build such a system, and it also doesn’t mean that, even if a system is not completely understandable by one person, it’s not advantageous to make significantly more of it understandable to a single person.

The only way to make them “simpler” and “more understandable” is to pile abstraction upon abstraction upon abstraction on top of them.

Well, no, that’s not the only way. (In fact, using abstractions where not necessary is often a source of complexity, not simplicity, as we programmers have often seen.)

Understandability is a relationship between a person and a thing, and what’s frequently missed is that you can make the thing more understandable not only by changing the thing, but also by changing the person. I think the best way I’ve seen this idea captured is in Rich Hickey’s distinction between “simple” and “easy,” in this classic presentation.

Machine language does not “reveal” computing, or make it more understandable.

This is a simplistic (in the sense of trying to simplify too far) restatement of the argument being made here. Let’s put my response this way: I’ve found that knowing something about “what’s under the hood” has been a huge benefit in my day-to-day IT work and gives me a significant advantage (in producing reliable systems, etc.) over those who do not have this background.

I do not believe that “total transparency of the [end-to-end] computing process” is in any way the aim of Visual 6502. I think they’re just trying to capture every detail of a particular processor implementation (i.e., a narrow slice of the computing process), which is a very different thing. Some of the work they’ve done is probably useful as a component of the better end-to-end system understanding the OP seeks, but I don’t think the OP is trying to say that developers on his proposed systems are going to understand Visual 6502 inside-out, and I am certainly not saying that.

They do indeed, and this is not just for modern CPUs, or even microcontrollers. In fact, compiling a higher-level Hardware Description Language to a gate representation has been done since the early 80s even for devices of a few hundred gates or less. But when using HDLs it’s pretty much essential to understand the hardware underneath and how what you write in the HDL is being translated to that, because what the hardware does is not always what a naïve interpreter of the HDL code would think it is.

(This is different from something like SQL optimization, in that SQL written by someone who doesn’t understand DBMS implementation details will still work correctly, albeit slowly. But it’s also similar, in that even effective use of SQL in large systems requires at least some knowledge of what’s going on “under the hood.”)
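
To make the SQL comparison concrete, here is a minimal sketch in Python using the standard-library sqlite3 module (the orders table and its columns are made up purely for illustration). The same SELECT is correct with or without an index, but a peek “under the hood” with EXPLAIN QUERY PLAN shows whether the engine will scan the whole table or use an index search, which is exactly the kind of knowledge that separates a working query from an efficient one.

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(n % 1000, n * 0.5) for n in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index the query is still correct, but the plan is a full table scan.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print("no index:", row[-1])

# After adding an index, the identical query text uses an index search instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print("indexed: ", row[-1])
```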

In the end, to be successful with computing at many levels simply no longer requires intimate low level knowledge.

Well, it depends on your (or your client’s) definition of “successful.” But from my experience, I believe that someone with at least some knowledge of a computing stack from top to bottom will produce a better system at less cost than someone without this knowledge. And my experience has shown that lack of this knowledge definitely can lead to both bugs and security problems.

(I’ve split this topic off from the parent. If the discussion can veer towards retrocomputing, and introduce new and interesting ideas, that’d be great. Your moderators would much appreciate a collaborative exploration of ideas, in preference to boldly stated positions which are then attacked and defended.)

Here’s a perspective from Alan Kay, which I found a tad surprising:

Now, the difficulty here, as so often, is the meaning of words. For one person, a computer means a modern PC with a modern OS, a GUI, and the usual applications: a big pile of complexity. For another, a computer means anything that’s Turing-complete, and is therefore necessarily programmable. For another, a computer might include an embedded microcontroller running code from ROM, and for another it might include a slide rule or the Antikythera mechanism.

(We see here, I think, why it’s best not to get hung up on definitions, and to read what’s written in a spirit of empathy and generosity. Otherwise we fail to communicate. If something is ambiguous, clarify.)

Maybe a bit off topic, but “Technology Connections” recently posted a nice video (rather, two of them) on the Wurlitzer jukebox mechanism, which also poses the question of whether or not this is a computer, and what a computer is in general, especially since there is a program implemented in switches and cams, and also memory. (There are actually some parallels between the selection accumulator’s electro-mechanical memory addressing and core memory.)

Anyway, it’s certainly nice to watch (52min in total):

First part (general overview): The Computer-free Automation of a Jukebox (Electromechanics) - YouTube
Second part (on the selection accumulator, its implementation of memory, and how this encodes conditional operations): The Selection Accumulator; a Jukebox's Brain - YouTube

We can always carefully define our words. How about “computer” for anything that is Turing complete and “full computing stack” for something like what I am typing this on right now?

Both are interesting things to teach, though obviously the latter is a lot more complicated. The “NAND to Tetris” course is a good example of teaching a full computing stack from the bottom up. It starts with the NAND logic gate, shows how all the other basic gates can be built from it, and moves on to registers and other building blocks, then to an absurdly simple processor. In fact, the processor is so simple that it would be painful to program directly. So they create a virtual machine on top of it that is like a very stripped-down Java, and write a compiler for it. Finally they program a game like Tetris on that.
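
As a taste of that bottom layer, here is a tiny sketch in Python (my own illustration, not the course’s material) of the very first step: once you have NAND, every other basic gate falls out of it.

```python
def nand(a, b):
    """NAND over the bit values 0 and 1: the single primitive."""
    return 0 if (a and b) else 1

def not_(a):
    # NOT is a NAND of a signal with itself.
    return nand(a, a)

def and_(a, b):
    # AND is a negated NAND.
    return not_(nand(a, b))

def or_(a, b):
    # OR via De Morgan: NAND of the two negations.
    return nand(not_(a), not_(b))

def xor_(a, b):
    # One of the standard four-NAND constructions of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Truth tables for a quick sanity check.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
```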

The decision to make the processor so simple that they had to add an extra layer is interesting: the idea is that the two layers (a simple processor built from the ALU and other blocks, plus a virtual machine) actually require fewer pages to explain than a single layer would if it were a fancy processor built from the ALU and other blocks. The Gigatron people made the exact same decision.
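
For a feel of why that extra layer pays off, here is a toy stack machine in Python (again my own sketch, not the actual Nand to Tetris or Gigatron VM): a handful of stack operations already make arithmetic far more pleasant to express than it would be on an absurdly simple processor underneath.

```python
def run(program):
    """Interpret a list of (opcode, operand...) tuples on a value stack."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "print":
            print(stack.pop())
        else:
            raise ValueError(f"unknown op: {op}")
    return stack

# (2 + 3) * 4, expressed for the toy VM; prints 20.
run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",), ("print",)])
```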

There are two kinds of complexity: incidental and intrinsic. The latter can’t be eliminated, only shifted around between layers. But it is very hard to tell the two apart, and we need to do that before we can eliminate the incidental complexity. Studying old computers and comparing them with later ones is the best way I know of to figure this out.

Two great examples there! Yes, we could define our terms…

… and in an academic context that would be stable. My difficulty is that on a public forum, there’s every chance that different people will have different ideas in mind when they use words, and may well have no idea that these different ideas even exist. In slow and measured communication between two parties, it’s usually not too hard to reconcile our ideas. But when people jump on the use of a word and indulge in a rant, and when other people respond to the rant with a counter-rant, we lose the calm atmosphere and the possibility of learning from each other.

This is why I keep finding myself calling for calm, and asking that people de-escalate. Many people don’t need to hear the message, and some don’t seem to hear it at all.

I’d much rather enjoy discussions of retrocomputing in all aspects, than the mechanics of discussion!
