Thoughts on Apple's IIGS

What offsets are you referring to? Things like jumps and relative calls for position independent code?

The 68000 address modes which use an address register, a displacement, and possibly a data register as an index can do the same job as the 8086 address modes, with the address register used in place of the segment registers in the original code. This was needed because the 8086 can't run normal C code (as you might find on a VAX), so its compilers had things like far pointers and other complications. In the case of QNX, its C compiler accepted @ as an alternative to * for pointer declarations, and the address would then use ES instead of DS as its segment. That allowed it to address up to 256KB (64KB code, 64KB stack, 64KB heap, and an extra 64KB for a second heap reached with @).
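For anyone who hasn't seen this style of code, here's a rough sketch of the near/far split. It uses the non-standard `near`/`far` keywords of later 16-bit DOS compilers as a stand-in for QNX's `@` notation (which I only know from descriptions like the above), so it won't build with a modern flat-memory compiler:

```c
/* Sketch only: `near` and `far` are the non-standard keywords of 16-bit
 * DOS compilers (e.g. Microsoft C, Turbo C), standing in here for QNX's
 * `@` notation; this will not compile with a modern flat-memory compiler. */
char near *heap_ptr;     /* 16-bit offset, implicitly relative to DS        */
char far  *extra_ptr;    /* full segment:offset pair, can reach the ES heap */

void copy_out(char far *dst, const char near *src, unsigned n)
{
    while (n--)
        *dst++ = *src++; /* far stores need a segment register to be loaded */
}
```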

So this "not quite C" code brought its segments with it when ported to the 68000. The 68020 had more address modes and allowed larger displacements for the existing modes.

Sounds like this is trivially fixed by making your segment pointers point to the middle of the segment, rather than the start. :-)

You could certainly use far pointers for everything, it was just less efficient. And I know when I did some early work (I was never much of a PC/DOS/Windows developer), I had data sets divided up into 64K blocks to be segment-conscious.

This is similar to the '816. Many '816 folks are 6502 programmers that treat it as a machine with some better registers but only 64K of RAM. They feel daunted by having to jump the hurdles of dealing with the bank register. But if you accept that reality, rather than continue to fight around it, then it's straightforward to use. It certainly helps having higher-level tools (notably things like the segment manager in GS/OS) to help manage memory. The only thing to be careful of, really, is that your code does not "run off the end" of a bank. That is, each of your code elements (not the entire program) has to fit within a 64KB block, which is not a daunting requirement. Other than that, you can just use long JMP, JSL, and RTL as a rule.

There are always elements that can use the utmost in optimization, but most code does not warrant it. That's why we write in high level languages in the first place.


Normal C code was 64KB code and 64KB data under PDP-11 Unix. For most problems in the 1970s that was more than ample space. Bigger problems had IBM 360s and PDP-10s. I see the lack of a stack on 6502-style CPUs as more of a problem than program size.

I'm not totally convinced this is true. Weren't overlays fairly heavily used for large CP/M programs (such as WordStar) back in the late '70s?

And certainly Intel in 1976 felt that the 64K limit of the 8080 was enough of a problem that a primary design goal of a subsequent CPU should be to run re-assembled 8080 programs almost unchanged yet give them access to much more memory. (Thus, the 8086. For a large class of 8080 programs you need only write a few lines of code to set the data and stack segment registers, perhaps move some data around, and suddenly you had more than twice the amount of directly addressable memory available.)
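For the sake of illustration: real-mode 8086 addressing computes a physical address as segment × 16 + offset, so each of CS, DS, SS, and ES selects its own 64KB window of the 1MB space. A tiny standard-C sketch, with made-up segment values:

```c
#include <stdio.h>

/* Real-mode 8086 address calculation: each segment register selects a
 * 64KB window starting at (segment * 16), and every access adds a
 * 16-bit offset to that base. */
unsigned long physical(unsigned short segment, unsigned short offset)
{
    return ((unsigned long)segment << 4) + offset;
}

int main(void)
{
    /* Hypothetical layout: code, data, stack and an extra segment in
     * four separate 64KB windows, 256KB reachable with 16-bit offsets. */
    printf("CS base %06lX\n", physical(0x1000, 0));  /* 0x10000 */
    printf("DS base %06lX\n", physical(0x2000, 0));  /* 0x20000 */
    printf("SS base %06lX\n", physical(0x3000, 0));  /* 0x30000 */
    printf("ES base %06lX\n", physical(0x4000, 0));  /* 0x40000 */
    return 0;
}
```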

It can still be true that 64K is enough for "most problems"…

I think there is a temptation when building a homebrew machine to add all sorts of capacity and capability - lots of RAM, many ports. And if it's an 8-bit machine, lots of RAM means some kind of banking or MMU, or if it's an '816, the memory is there but you need to use different habits to make full use of it.

And then, often enough, the machine is built but the software which needs all that capacity isn't written. Which is fine - hopefully the exercise was fun - but it shows that 64K is plenty. (It's always more likely, I think, to have a modest program which accesses lots of data, than it is to have put together a program which is so large.)

So, what am I saying… I'm saying hardware projects often take on a life of their own, which outstrips the corresponding OS projects, or toolchain projects, or application projects.


That has nothing to do with C or pointers, however. Pointers are an abstraction to represent memory.

Their declarations can vary, but their use is identical. You could set compiler options to change the default pointer type, so standard C code would compile against any memory model.

The 6502 has a 256-byte stack. The '816 has a 64KB stack.

64KB of stack space is a silly amount of space for a 16MB machine. While C uses "pass by value" semantics, idiomatic C is to pass pointers to routines specifically for that reason. Not so much to lessen stack space, but simply to lessen the copy operations involved in moving arguments to and from the stack. Why pass a 64-byte structure when you can pass a 2-byte pointer?
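To make that concrete, a minimal C sketch (the struct and its size are made up for the example):

```c
struct record { char name[32]; char address[32]; };   /* 64 bytes */

/* Pass by value: the whole 64-byte struct is copied onto the stack. */
int by_value(struct record r)          { return r.name[0]; }

/* Pass by pointer: only the pointer (2 bytes on a 16-bit target, more
 * with far pointers) is pushed. */
int by_pointer(const struct record *r) { return r->name[0]; }
```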

Classic 6502 Forth has two stacks; the data stack is even smaller than the call (return) stack. The return stack is the system stack, so it is limited to 128 entries.
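For illustration, a toy C model of that two-stack layout (sizes are made up; a real 6502 Forth does this with a handful of instructions around a zero-page pointer or the X register, and uses the hardware stack for returns):

```c
#include <stdint.h>

/* Toy model of the classic two-stack Forth arrangement: the hardware
 * stack (256 bytes on a 6502, so 128 two-byte return addresses) holds
 * the call/return chain, while the data stack is a separate array with
 * its own pointer. The 32-cell size below is made up. */
static uint16_t data_stack[32];
static int      dsp = 0;                 /* data stack pointer */

void push(uint16_t v) { data_stack[dsp++] = v; }
uint16_t pop(void)    { return data_stack[--dsp]; }

void forth_add(void)                     /* Forth's "+" word */
{
    uint16_t b = pop(), a = pop();
    push((uint16_t)(a + b));
}
```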

I have vastly complicated Java programs that push that barrier of 128+ call levels deep. But even that's rare.

Any program can obliterate any finite stack space, but many programs didn't use that much. There've been some papers on how a stack depth of 20 addresses can meet a large number of use cases.

UCSD Pascal leveraged the stack a lot, again with pass by value semantics. But it was not constrained by the 6502 stack as it used an internal one within its virtual machine.

Z80 has a ā€œ64Kā€ stack, but it only has 64K of RAM as well. However its stack is enough of a pain to deal with for call frames that barring a need for reentrancy, itā€™s better to not use the stack at all.

Well, it's not quite that simple, as anybody who's programmed C (or just tried to compile Perl) on non-flat memory-model machines can tell you. And of course it breaks any time you need data structures larger than 64 KB, such as for large buffers, mmap'd files, and so on.

But the largest user of space on the stack was not parameters (those were generally quite small) but local variables. C and many other languages store local variables in the stack frame because they're faster to allocate and they all get deallocated "for free" on return. Sure, you wouldn't usually copy a 1 KB data structure onto the stack, but if you needed one for the duration of a function execution you'd generally allocate it there.
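The common pattern, for illustration (buffer name and size made up):

```c
#include <stdio.h>

void format_report(void)
{
    char buffer[1024];   /* 1 KB local: allocated by moving the stack
                            pointer on entry, released "for free" on return */
    snprintf(buffer, sizeof buffer, "report body goes here");
    puts(buffer);
}
```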

Well hey, wouldn't you know, that's… exactly what happened.


Well, this didn't age quite as well… :-)
(Guess what happened: the MB Air and the Mini were the first ones to receive ARM CPUs.)


Interesting thread revival, so I went back and re-read my own contribution from about 2 years back. Since then, I've developed my own little 65c816 board and implemented a little retro-style OS on it, all written in BCPL.

And the biggest headache? Those 64KB banks of RAM and working round them.

Now I understand that there are C compilers for the '816 that will treat everything as a "far" pointer, but the overhead is significant - my own little bytecode interpreter (written in '816 assembler) spends far too much time making the memory appear linear to BCPL programs. Some might suggest I'm going about it the wrong way though.
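To give a flavour of the overhead, here is an illustrative C model of what every access through a flat 24-bit address costs (this is not my actual interpreter code, just a sketch of the idea):

```c
#include <stdint.h>

/* Illustrative model only: every access through a flat 24-bit address
 * has to be split into a bank byte and a 16-bit offset, and multi-byte
 * reads near a bank boundary need extra care. 16 banks of 64KB here
 * just to keep the sketch small; the '816 allows 256. */
static uint8_t ram[16][65536];

uint8_t load_byte(uint32_t linear)
{
    uint8_t  bank   = (uint8_t)((linear >> 16) & 0x0F);  /* would end up in DBR */
    uint16_t offset = (uint16_t)(linear & 0xFFFF);
    return ram[bank][offset];
}

uint16_t load_word(uint32_t linear)
{
    /* a 16-bit read straddling a bank boundary must be done byte by byte */
    return (uint16_t)(load_byte(linear) | (load_byte(linear + 1) << 8));
}
```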

There are still a small number of real '816 enthusiasts out there, but really the IIgs was where it was at, and the only thing that helped that go was one little bit of hardware assist for the graphics - even that was marginal (a hardware block-fill function).

(OK, the SNES came a few years later, but that was a fairly specific thing.)

Today there should be no need for banked memory (retro/vintage systems excepted), and even in the mid-'80s when the '816 was 'go' I do feel there were better CPUs - however, code and hardware legacy is a strong force to be reckoned with!

-Gordon


Gonna be blunt: a lot of my fondness for the IIGS is driven by the fact that it is an Apple product that isn't a sealed box, and that I want it pushed as far as it can go as a sort of middle finger to Jobs, for his takeover of Apple by way of the Mac division becoming a cancer that took everything's resources.

Don't get me wrong; as an Apple II fan back then (and now), if I could have afforded one at the time then I'd have been the first to sign up, but in the UK they were more or less unaffordable unless you had a really good job at the time, which as a lowly research student I didn't… That plus the exchange rate and import taxes rendered them well out of reach )-:

There were still better CPUs in 1986, but the drive for compatibility was strong (says he, looking at the Apple /// sitting on a desk behind him…)

-Gordon


Totally agree. Apple II and derivatives were great computers built by enthusiasts for hobbyists. Really captured the microcomputer spirit of the times.

Then Jobs pushed the ridiculous (IMO) Mac and nearly destroyed the company. A hermetically sealed box that required special tools to even open and was very difficult to modify in any way. In the late 1980s the Mac II tried to make amends with a decent case and NuBus, but by then it was too late. I recall someone who bought one of the initial 128KB Macs as soon as they came out, and it was pathetic. A walled garden (before there were such things) and a practically useless multi-thousand-dollar paperweight.

Apple II vs Mac: it's like two completely different companies.


I totally agree regarding the somewhat ill-fated positioning and design of the Mac. Especially if you take a look at the potential lineup: the Lisa as a serious, open business computer with both HP and PARC heritage and some Larry Tesler UI wizardry (especially if it had ever got square pixels and faster graphics, which were available but weren't shipped, plus a speed-up to what the actual production processor could do - in some respects the Mac only arrived where the Lisa had already been with OS X), a low-range Apple II, and an ambitious IIGS. However, there was also the Apple III, which nearly broke Apple's neck. It was really the Apple III's failure that necessitated axing half of the lineup and rendered the Mac's shortcomings critical.