I want to understand the whole machine. Which machine?

Indeed. And I can think of two good reasons for going this route:

First, adding a video display to a machine, as opposed to using a UART chip (or even just bit-banged serial I/O), adds considerable complexity. Learning how video signals work is great fun, and not really much more complex than learning how to put together a CPU and RAM and so on, but it’s a rather different area that requires its own set of knowledge and expertise.

Second, this is actually the more “retro” way of using microcomputers. The rise of microcomputers with video systems around 1976-77 was most directly due to economic factors: video terminals generally cost well over $1000, easily doubling the price of a simple computer system. (Even used printing terminals started at $400-$600.) Homebrew “TV typewriters,” using a standard television as a display, were the first attempt to mitigate this expense; the Apple 1 was in fact a TV typewriter circuit that Woz had previously built, glued onto a processor and some memory and I/O. This eventually led to integrated video circuitry being a standard feature (even on systems that provided a monitor, such as the PET or TRS-80 Model I; the latter actually used a re-badged black and white TV) because of its cost advantages; that falling RAM prices made such systems easily capable of more sophisticated memory-mapped displays was basically just an added bonus.


I’d recommend building, or at least fully understanding, a simple 8-bit single-board computer (SBC); I’ve found that there are many things that go on in hardware design (even at a high level, such as system address decoding) that will really illuminate how machines and their instruction sets work in a way that you never really see from the software side. (I used to see designs with a lot of “wasted” space for I/O in memory maps and think, “What a waste of space”; now I see ones without that and think, “What a waste of logic gates.”)
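That trade-off between address space and logic gates can be sketched numerically. This is a toy illustration with made-up addresses, not any particular machine’s memory map: fully decoding one I/O register means comparing every address line, while partial decoding looks at only a couple of high-order bits, mirroring the device across a big “wasted” region but needing almost no gates.

```python
# Sketch (hypothetical addresses): why "wasting" address space saves logic.
# Full decoding selects a device only at one exact address and must compare
# all 16 address lines; partial decoding checks just two high-order bits.

def full_decode(addr):
    # Device selected only at exactly $8000: all 16 lines compared.
    return addr == 0x8000

def partial_decode(addr):
    # Device selected whenever A15=1 and A14=0: roughly one gate's worth
    # of logic, but the device is now mirrored through $8000-$BFFF.
    return (addr >> 14) == 0b10

mirrors = sum(partial_decode(a) for a in range(0x10000))
print(hex(mirrors))  # 0x4000 -> 16 KB of address space for one register
```

The “wasteful” version is the one a small system can actually afford to build.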

There’s no need for soldering to do this; with reasonable care (and sticking to low clock speeds) you can build such a system on a breadboard. (But use high-quality breadboards to avoid pain!) Ben Eater has an excellent series of videos on doing this; merely watching them will probably be immensely educational. Nor do you need to use that system for all future projects; you can do your initial programming on such a system and upgrade to something more sophisticated (such as a more capable pre-built SBC or an Amiga, the latter of which already hides a lot more complexity than is hidden by chips like a 6502 and its peripherals) when you find your programs are getting too big or complex for your little 8-bit system.

That said, the From Nand To Tetris approach is certainly quite valid, too, and probably as good if you want to slightly gloss over some of the hardware details (though it also does get more deeply into other hardware details, such as CPU design). It may be as much a matter of preference and previous experience as anything else; I not only really like playing with “real” hardware, but came to this with a strong theoretical background already so that the actual “hands on” experience with hardware was what I was missing.

Yes, this is quite a good idea; it’s portable, self-contained with a decent keyboard and display, has a serial port that makes loading/saving and cross-development easy, and detailed technical documentation is easily available for both the hardware and software. I’m not particularly fond of the 8080 CPU architecture (the 6800 and 6502 are both easier to learn and use, IMHO), but that could be as much my bias as reality. (I may also be biased in my liking for these machines; I owned and very much enjoyed a Model 100 back in the '80s, enough that I have recently bought one again, as well as a NEC PC-8201.)


And now some verging-on-off-topic notes on other comments here. If you want to get into the details of any of this, it may be better to branch off into a new thread for it.

The PDP-11 never had segmentation in the way the 8086 did; it was a 16-bit machine with a 16-bit address space (not at all unreasonable for the time) which later used a few bank-switching-like tricks to expand that a bit. Unlike the 8086, the architecture was never intended to support address spaces larger than 64 KB.

For the 8086, it’s clear that one of the primary design criteria was to easily be able to replace the 8080 (or, to a lesser degree, the Z80) in embedded systems. If you had an 8080-based board and software with decent (and fairly typical) design, but with memory use bursting at the seams, it required minimal and fairly easy design changes to create a new board with an 8088 and reassembled software (with very minor tweaks) that solved that problem in most cases by easily letting you split your code, data and stack into three separate segments of up to 64 KB each. I go into more technical detail on this in this Retrocomputing SE answer. The 16-byte granularity of segment starting addresses was really more of a nice hack that theoretically allowed use of a 1 MB address space while still allowing efficient use of memory (which was still expensive at the time) for such upgrades.
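The segment arithmetic behind this is simple enough to sketch. The segment register values below are made up for illustration; the shift-by-4 and 20-bit wraparound are the standard 8086 real-mode behaviour:

```python
# 8086 real-mode addressing: a 16-bit segment register is shifted left
# 4 bits (multiplied by 16) and added to a 16-bit offset, giving a
# 20-bit (1 MB) physical address. Segments can start on any 16-byte
# boundary, so separate 64 KB code/data/stack regions pack tightly.

def physical(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit address bus

# An ex-8080 program's code, data and stack, each given its own 64 KB
# (hypothetical values):
CS, DS, SS = 0x0000, 0x1000, 0x2000
print(hex(physical(CS, 0x0100)))  # 0x100
print(hex(physical(DS, 0x0000)))  # 0x10000

# Many segment:offset pairs alias the same physical address:
assert physical(0x1000, 0x0000) == physical(0x0FFF, 0x0010)
```

The aliasing in the last line is exactly the “nice hack” aspect: segments overlap freely, so memory can be packed with no holes between regions.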

(Further discussion of either of these should probably go to either the Thoughts on address space extensions thread for general discussion or a new thread for discussions of specific schemes.)

Well, I’d say lower cost microprocessors; even at $150 or more, the microprocessors of the day were a significant cost reduction over anything else available. Nor would I say that the 6502 “trailblaze[d] the home microcomputer revolution”: that was clearly already under way. One of the U.S. 1977 Trinity used a Z80 and in some markets, such as Japan, the 6502 had little impact. (The thriving Japanese microcomputer market was almost exclusively Z80- and 6800/6809-based.) Even in the U.K., until the Commodore 64 really took over, the 6502 was used mostly in mid- to high-end microcomputers, with the Z80 used in the low end.


Thanks Curt! I discovered Ben Eater’s channel last week and have been slowly making my way through some of his series. That guy has some real talent, and not just with electronics! I never would have thought that probing the mysteries of CRC32 would have me glued to my screen…

I finished chapter 5 of nand2tetris with a working CPU. I think I will probably discontinue it there. The rest of the book is software - writing an assembler, and using it to bootstrap the Jack (a bastardized Java dialect) virtual machine. Not that that’s necessarily a bad project, but I have Aho’s original “Principles of Compiler Design” I can study, and I think on the language side I’d rather get to know Forth than splodge around with curly braces and programming ideas that I already know well from C/C++.

Oh, speaking of Forth, and bottom-up construction, it’s worth looking into JONESFORTH, a single well-commented Forth source file:

A few years ago I wrote a literate FORTH compiler and tutorial called JONESFORTH. It’s a good way, I think, to understand the power and limitations of FORTH, and a good way to learn a completely different and mind-blowing programming language.


Forth is certainly a language worth learning. You might also consider looking at one of the LISP family of languages; these are not too hard to implement and provide considerable power.

For Lisp, I find this implementation interesting:

https://github.com/attila-lendvai/maru/

(Not necessarily because it is particularly powerful or comprehensive, but it’s an interesting implementation strategy).


We have discussed JONESFORTH before; I said it then, and I’ll say it now, it’s one of my favorite systems to read over from time to time. It’s really a joy to behold.


Don Lancaster was one of the first pioneers of the TV Typewriter, which appeared in the September 1973 edition of Radio-Electronics. He went on to produce the “TV Typewriter Cookbook” which is available as a pdf:

The original TVT 1 used shift registers to hold the 32 x 16 characters to be displayed, and a small PROM to hold the uppercase character set. The rest of the logic was based on TTL, which was by then starting to become affordable to hobbyists.

Later versions used Intel 2102 static 1K x 1 RAMs for the screen buffer - but since each chip stored only one bit per location, you still needed eight of them.
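The chip count falls out of simple arithmetic, which a few lines can check (the screen size and part capacity are the figures given above; the 8-bit-wide character code is an assumption, since a 6- or 7-bit code wired the same way would still need one chip per bit):

```python
# A 32 x 16 text screen needs 512 character positions; a 2102 stores
# 1024 x 1 bits, so with one chip per bit of the character code you
# wire eight 2102s in parallel for a byte-wide, 1 KB buffer (with
# room to spare, e.g. for a 64 x 16 screen).

cols, rows = 32, 16
chars = cols * rows                 # 512 character positions
chip_bits = 1024                    # Intel 2102: 1K x 1 static RAM
chips = 8                           # one chip per bit of the code
buffer_bytes = chip_bits * chips // 8
print(chars, buffer_bytes)          # 512 1024
```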

There is a lot to be learned from these early video text displays because they predated the 8-bit microprocessor era. Nowadays even a low-cost microcontroller such as an AVR or ARM M0 has sufficient on-chip resources to generate a monochrome text (or even colour graphic) display.

Wozniak is often heralded as being the genius behind the monochrome text display, but these had existed as hobby projects for a few years before the launch of the Apple 1. Wozniak’s design skill was integrating the video generation hardware into the 6502-based system at an affordable price, for the general market, and subsequently with the Apple II, which was their first real volume seller.

This approach of integrating the video hardware with an 8-bit CPU gave rise to several popular “home” machines of the very early 1980s - including those by Acorn, Sinclair and others.


Really! That seems weird, since his Apple 1 video circuit seems to be a pretty standard shift-register-based design of the type that had been around for some time and was brought to the wider public two or three years earlier by Don Lancaster, as you point out. It was not particularly powerful: the only control character was CR for a newline (you couldn’t even backspace, and clearing the screen could not be done in software but only via a hardware signal), and it was also quite slow (60 chars/sec, or half the speed of a 1200 bps modem), since after writing a character you had to wait for the shift registers to roll around before you could write the next.
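The speed figure can be sketched as back-of-the-envelope arithmetic. Both assumptions here are mine, not taken from the circuit itself: that the shift registers recirculate once per 60 Hz video field (so at most one character can be slipped in per field), and that a character costs roughly 10 bits on a 1200 bps serial line:

```python
# Why ~60 chars/sec: if the display's shift registers roll around once
# per 60 Hz video field, a new character can only be written as its
# position passes by -- one write per field (assumption).
fields_per_sec = 60
chars_per_sec = fields_per_sec * 1   # one character per recirculation

# 1200 bps modem at ~10 bits/char (start + 8 data + stop, assumption):
modem_cps = 1200 // 10
print(chars_per_sec, modem_cps)      # 60 120
```

Which lines up with the “half the speed of a 1200 bps modem” figure above.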

On the other hand, his Apple II video circuit (which was very different) was a demonstration of his genius. Not only did it do high-resolution colour graphics with an incredibly parsimonious sprinkling of 74xx series parts, but it also did the DRAM refresh for the entire system. And the ability to suppress the colour burst for clearer text (added with the Apple II+, I believe) was a nice touch.

Well, I’d say just as much from the ones during the 8-bit microprocessor era, too, at least those that didn’t use custom video generation chips. The major shift in video generation between the early and late '70s was really all about the price of RAM, not the microprocessors, and that change was occurring in video terminals too.

Although it doesn’t really address @Paganini’s question, this discussion of video output hardware to me brings to mind Sinclair’s ZX80, an extraordinarily simple and cheap home computer, arguably rather a compromised machine, but with the merit that it’s all simple TTL chips, and not many of them. See How the ZX80 Works for more - it’s so simple you can build your own.

Unlike most other computers of its era, it doesn’t have a video chip. It doesn’t have any custom chips, in fact. There is a Z80 CPU, a 4K ROM chip, two 1K x 4 bit static RAM chips, and seventeen 74 series TTL chips, and that’s it.

With only a few changes you can make it a ZX81, which has floating point, and a very slightly improved video system.

(Outside the UK you might recognise these machines as Timex 1000 or 1500)


I like LISP; at least, I like Scheme. Some 10-15 years ago I did a pretty comprehensive study of SICP. I enjoyed it a lot, but I remember thinking at the time that it was putting me even further away from the machine. Doing nand2tetris, I see that that was wrong. All those functional thinking skills from SICP made nand2tetris much simpler for me. A lot of posters on the nand2tetris forums ran into trouble because they got hung up on thinking of HDL as though it were an imperative language.

The other day I ran across this:
http://www.sunrise-ev.com/z80.htm

It just so happens that I have a fondness for Altoids and their tins, and a habit of saving them to put stuff in.

The display discussion is pretty interesting. I snagged the TV Typewriter Cookbook (thanks @monsonite!) to peruse in the future. Not that I don’t want to learn how display circuits work, but right now the idea of escaping from the complexity of custom video / sound chips by just chucking them in favor of serial I/O has an appealing freedom.


@EdS The ZX80 / ZX81 has been recreated in kit form by Tynemouth Software and is known as the Minstrel. You can buy a bare pcb for about £20 and build up your own. It uses modern ROM and RAM devices.

Grant Searle did a very comprehensive description of how the ZX80 / 81 video works, a few years ago - and this led to him reverse engineering the ZX80, which I believe was the basis of the Tynemouth Minstrel.

It’s more than 40 years since the introduction of the ZX80. It set a new price-point in computer affordability. I still have the ZX81 I built from a price-slashed kit in 1983. Too many subsequent years were spent focussing on Z80 mnemonics - this led to my complete failure to grasp the 6502. In those days - it was either one or the other.


Ding ding ding! We have a winner!

It’s been clear since FORTRAN became popular that it’s easier to write correct programs using a denotational approach (f(x) = 2x + 1) than an operational approach (LD r0,x, MUL r0,#2, …), but due I suppose to reasons of economy almost all CPUs from the 1940s to this day use a purely operational approach. Knowing how to translate between the two (and that they can be made formally equivalent) is key to really understanding what’s going on under the hood. (As is the similar thing with combinational vs. sequential logic for hardware.)
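To make the two styles concrete, here is the post’s own f(x) = 2x + 1 in both forms, with a brute-force check that they agree. The final ADD step is my guess at what the elided “…” stands for:

```python
# The same computation, stated both ways. The denotational version just
# *is* the definition f(x) = 2x + 1; the operational version says how a
# register machine gets there (a toy rendering of the instruction
# sequence in the text, with the last step assumed).

def f_denotational(x):
    return 2 * x + 1

def f_operational(x):
    r0 = x          # LD  r0, x
    r0 = r0 * 2     # MUL r0, #2
    r0 = r0 + 1     # ADD r0, #1  (assumed completion of the "...")
    return r0

# The translation is faithful exactly when the two agree everywhere:
assert all(f_denotational(x) == f_operational(x) for x in range(-100, 100))
```

Showing the two formally equivalent in general (not just on sampled inputs) is precisely the compiler-correctness problem hinted at above.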

If you got through SICP ok, you probably have what you need from that side, so your next job is really just to start playing with some simple hardware to develop the intuitions and experience you need to fill out that side of things. It doesn’t matter too much what it is, even (so long as it’s not too high-level); you just need to get deeply enough into one system that you learn the details, and then muck about with a couple of others so you learn the abstractions underlying those details.

The Membership Card systems are pretty standard single-board computers, so liking Altoids tins is as good a reason as any to choose to start with those.

It’s really just a matter of trying to cut things down into more manageable chunks so you don’t get overwhelmed while trying to learn about all this stuff. You should feel free to poke as much as you like into stuff about video; just don’t get stuck in a position where you can’t proceed to experiment with CPUs, memory and peripheral chips because you can’t get video working.

(It’s actually reasonable to reverse things and build computer-free video circuits first, if video really turns your crank. But it sounds like you’re more motivated by the computer side of things.)

The ZX81 (American version, since Brazil is 60 Hz) was my first practical computer. ZX80 clones were the actual start of the micro revolution in Brazil in 1981.

But since Don Lancaster has been mentioned in this thread, it should be noted that his Cheap Video Cookbook had done the same thing to the KIM-1 in 1978.


That seemed to be a “thing”, at least with me as well. I learned the 6502 first, and it had a lingering negative effect on my ability to become comfortable in other assembly languages. I respected all of them for their efficiency, and I could stumble through some coding exercises for the 8008, 6800, 6809, Z80, 68000, pdp-8, IBM 360 … but they just didn’t “click” with my adolescent brain when subtlety and intricacy were called for. 8080 and x86 were lost causes for me, and still are.


I even learned Z80 first, but more as an abstract exercise without a chance to get hands-on with a real machine. My first active steps were then on a 6502 machine, much to the same effect, soon forgetting most of what I had ever known about the Z80.

Maybe the best thing for learning is a Paper Computer. :wink:
There are different ones. I own a 1969 German book called “Wir bauen einen Computer” (We are building a computer) by Alexander Stüler. Aulis Verlag publisher.
Or other, general books.
I also own the vintage How Why and Wonder book (see introduction thread). But that’s not for detailed learning.
Then maybe simple computers like the ZX81, Commodore 64, etc., or embedded systems. But I think kids are much smarter nowadays.