Apple RISC 6502?

Some consider the 6502 MPU a RISC processor avant la lettre, while others strictly oppose the idea. (And, in the strict sense, they are absolutely right.) – But, as can be learned from the following account by Pete Foley, one of the earliest chip-design engineers at Apple, a RISC 6502, “potentially outperforming the 68000”, nearly became a thing!

  1. A RISC based implementation of the Apple II 6502 Processor: In mid ’85 I performed an analysis that showed a simple RISC style implementation of a 16‐bit binary compatible superset of the 8‐bit microprocessor used in the Apple II 6502, along with some judicious use of on‐chip caching, could substantially improve performance – to the point of potentially outperforming the 68000 used in the Mac, and given the simplicity of the 6502 the implementation was “doable” by a small team. This was a more direct approach than emulating 6502 compiled binaries by a different processor as was done some four years later in the Mobius project in the Advanced Technology Group (ATG). I set about completing a feasibility study that went through several revisions (Turbo‐I and Turbo‐II), which included a complete micro‐architecture design of the processor along with resource usage diagrams for every clock phase of every instruction. When the design seemed solid and I was ready to move on to an implementation, I sought the counsel and the support of my mentors in the IC Technology group (to whom I owe a huge debt of gratitude), Bob Bailey and Walt Peschke. As usual, when they felt it was time to impart some wisdom upon me, they said, “Pete, lets go for a walk”. As we walked around the local residential neighborhood in Cupertino they explained to me that marketing/sales/biz dev would have no idea what to do (how to position, etc) with such a thing and I would just end up with a black eye. Of course they were right and I stopped working on it. Their warnings were prescient, as four years later Jean-Louis Gassee was to shut down a similar project called Mobius in the Advanced Technology Group (ATG) where the ARM microprocessor was used to emulate another architecture.

(Found via HN.)


That seems more or less like what the 65816 from Western Design Center was, which was ultimately installed in the Apple IIgs in 1986 – and under-clocked, because at full speed it made the Mac look underpowered, since the IIgs had superior (color) graphics and sound capabilities.


I think the RISCy bit would have been a uniform instruction length, which would have allowed for an even higher throughput. (The variable instruction length, as in 1, 2, or 3 bytes, necessarily slows down the MPU by requiring multiple fetch and decode stages. Just think how you could fly with a single fetch and a single decode stage: effectively 2 memory cycles, like on early fixed-instruction-length machines, but pipelined.)
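To put a rough number on that fetch overhead, here is a minimal counting sketch (the three-instruction mix is purely an assumed example, not a measured workload): it tallies the bus cycles spent merely fetching a short 6502-style variable-length sequence, versus one word-wide fetch per instruction on a fixed-length design.

```python
# Assumed example mix of 6502 instructions with their encoded lengths:
variable_program = [
    ("TAX", 1),        # implied addressing: opcode only
    ("LDA #$10", 2),   # immediate: opcode + operand byte
    ("STA $1234", 3),  # absolute: opcode + 2 address bytes
]

# Variable length: one bus cycle per byte just to fetch the instruction,
# and the decoder can't know the length before reading the opcode.
variable_fetches = sum(length for _, length in variable_program)

# Fixed length: one (wider) fetch per instruction, which also makes it
# easy to overlap the fetch of instruction n+1 with decode of instruction n.
fixed_fetches = len(variable_program)

print(variable_fetches, fixed_fetches)  # → 6 3
```

Even before pipelining, the fixed-length scheme halves the fetch traffic for this particular mix; the pipelining the fixed format enables is where the real throughput gain would come from.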


That said, this is probably also why this wasn’t to succeed: at that time, motherboards still used 8-bit components, as with the original Mac or the IBM 5150, because this was what was available at scale and cost-effective. (This was what the Intel 8088 was all about!) – Now imagine transitioning to a 24-bit board architecture without a fab of your own to provide for this… It would never pay off. (But it might have been great for shuffling bitmaps around.)

Oh, I do hope not! The '816 is a very peculiar beast, and I’d hope one could do a lot better. As an implementation, it’s like a stretch version of the 6502, with a mostly-8-bit datapath. As an instruction set architecture, it offers five modes, whereas liberal use of prefix bytes, or better use of previously unused opcodes, would in my opinion be far preferable. And the 8-bit path to memory, and the imposition of 64K-sized banks for some (but not all) purposes… well, it takes some getting used to. It’s better than the 6502, in that it extends the 6502, but I can’t believe it’s the best extension one could make.
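For what a prefix-byte scheme might look like, here is a toy decoder sketch. The choice of $FB as the prefix is purely an assumption for illustration (picked from among the 6502’s unassigned opcodes); the point is that a prefix widens only the instruction it precedes, instead of the 65816’s sticky mode flags.

```python
WIDE_PREFIX = 0xFB  # assumed: one of the 6502's unassigned opcodes

def decode(stream):
    """Yield (opcode, operand_width_bits) pairs from a byte stream."""
    it = iter(stream)
    for byte in it:
        if byte == WIDE_PREFIX:
            # Prefixed: the next opcode gets a 16-bit operand.
            # No persistent state to track, unlike the '816's M/X flags.
            yield next(it), 16
        else:
            # Unprefixed: plain 8-bit 6502 behaviour.
            yield byte, 8

# LDA #imm ($A9) twice: once plain, once prefixed for a 16-bit immediate.
program = [0xA9, WIDE_PREFIX, 0xA9]
print(list(decode(program)))  # → [(169, 8), (169, 16)]
```

Because the width is decided per instruction at decode time, the assembler and the programmer never have to reason about which mode the processor happens to be in at a given point in the code.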


Agreed. I was more offering it as an example of “yup, seems to be a valid approach” than “this is what 6502 should have become!”.

Although it’s not the only 16-bittish microprocessor from that era that failed to quite break out of the 16 bit address space even when adding more address lines. (Looking at you, 8086!) That seems to have been a … failure of imagination that was in the air at the time. To be fair, some minis (looking at the PDP-11 now) arguably did even more poorly, by not allowing individual applications to break out of that 16-bit address space at all. At least they made their mistakes the better part of a decade earlier.