CPUs better with high-level languages

Other than the PDP-11, the Western Digital chipset was used for the Pascal MicroEngine.
Did anyone ever use this chip set?
This is the only other CPU I can think of from the '70s, before the new crop of 16-bit CPUs
like the 8086/Z8000/68000 or the 6809, that was better with high-level languages than the CPUs
of that era. The PDP-11 was out, but nobody (like myself) could afford one.
Ben.

1 Like

This Wikipedia article mentions machines using the Pascal MicroEngine chipset, but it doesn’t include the Terak, which we discussed here back in 2022.

1 Like

There was a WD processor, the WD16, based on it, which saw limited success in the Alpha Micro systems before they switched to the 68000. Not coincidentally, Dick Wilcox, who was one of the main designers of the WD16, founded Alpha Micro.

It was also cloned in the Soviet Union, although I’ve no idea what they did with that chipset.

1 Like

As to processors from that era better at high-level languages:

Intel 8085. It added a small number of instructions to the 8080 to help high-level language performance: division, signed maths, overflow trapping and a 16-bit decrement-and-branch. Then Intel hid them all to make you buy an 8086 8)
Despite this, I think it’s probably the first released mainstream microprocessor that tackled the problem (beating TI by 3 months).

TMS9900 - July 1976. Downsized minicomputer processor that accidentally ended up in the TI-99 series microcomputers. Very, very good for compilers, but with a few quirks.

INS807x series - end of the 1970s. A proper stack, register/memory loads and stores via indexes on the stack, including some pre/post-increment/decrement functionality. The actual maths and logic side, though, is clunky. It comes out fairly well in C code even with a not terribly optimising compiler. Not a successful CPU.

6803 in 1979 as well. Not as fancy as the 6809, but much cheaper and with some built-in I/O and similar functionality. Predecessor of the HC11. Very good for high-level languages.

To some extent I guess it depends on whether you count stuff like the IMP-16, INS8900, MicroNova, Fairchild 9440, uCOM-1600 and the like.

2 Likes

Actually - does the Intellivision count (CP1600)? A weird CPU, but also a candidate, with a slightly earlier date. It actually did end up in a low-end mass-market device but was never really available to most potential customers.

I would only count CPUs with 16-bit stack pointers. That, I think, is what separates general-purpose computers
from microcontrollers.

A bunch of them just have general-purpose registers, and the stack is a software construct. Hardwired stacks aren’t needed, so I guess all of those count in your list then.

That is true, but without a macro-assembler or a simple high-level language (FORTRAN II, perhaps)
subroutine calls are a PITA. Real index registers would give you something Algol-ish.
        LDA #RETURN          ; fetch the return address
        STA FUNCTION         ; plant it in the subroutine's first word
        JMP FUNCTION + 1     ; enter the subroutine body
        ( ARGS )
RETURN

FUNCTION 0 ,                 ; return-address slot
        ( FUNCTION CODE )
        JMP I FUNCTION       ; indirect jump back through the slot
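
For anyone who hasn’t met this PDP-8-style convention, here’s a toy Python model of it (purely illustrative, no real assembler semantics; the names and addresses are invented): the caller plants the return address in the subroutine’s first word, and the subroutine returns with an indirect jump through that word.

```python
# Toy simulation of the calling convention sketched above (PDP-8 style):
# the return address is stored in the first word of the subroutine itself,
# and the subroutine returns via an indirect jump through that word.

def make_memory(size=16):
    return [0] * size

FUNCTION = 8  # address of the subroutine's return-address slot (arbitrary)

def call(mem, pc):
    """Simulate LDA #RETURN / STA FUNCTION / JMP FUNCTION+1."""
    mem[FUNCTION] = pc + 1      # store the return address in the slot
    return FUNCTION + 1         # jump to the first real instruction

def ret(mem):
    """Simulate JMP I FUNCTION: indirect jump through the slot."""
    return mem[FUNCTION]

mem = make_memory()
pc = 3                          # pretend the call sequence ends at address 3
pc = call(mem, pc)              # enter the subroutine
assert pc == FUNCTION + 1
pc = ret(mem)                   # return through the stored address
assert pc == 4
```

The catch, of course, is that this scheme is not reentrant: a recursive or interrupt-time call overwrites the saved return address, which is exactly why a real stack (or index registers) is such a help to a compiler.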

I dabbled with the 9900 some years ago.

These are my observations: https://talk.dallasmakerspace.org/t/project-log-python-on-the-6502-c64-8080-6800-6809-and-avr/18852/269

3 Likes

How safe is using these “secret” instructions in all of the 8085-derived processors?

It’s more fun to expect that using ‘secret’ instruction xyz will explode the chip than to be safe.
This is where testing is needed on real 8085s, to see what works and with what chips.
I thought I saw a Small-C version using the extra 8085 instructions once on the web.
Ben.

The 8085 ops work 100%. They were used by everyone in the know once they got reverse-engineered (e.g. in Tandy Model 100 apps and embedded work). They were formally documented by some of the vendors, like Tundra, later on.

Several compilers support them. We know they were only killed for political reasons - just not which political reason is the true one.

1 Like

Function calls are not a PITA. Most of the processors without a hardware stack construct (eg the TMS9900) use link registers and branch/link models. The old PC ends up in a register, the PC ends up at the new address.

You end up with code on such machines like

        ; call function foo
        jsr foo              ; PC becomes foo, X becomes old PC
        ...
foo:
        stx (-s)             ; save link register
        blah
        ldx (s+)             ; restore link register
        rsr                  ; back to X
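
A quick Python sketch of that branch-and-link flow, with invented names (`CPU`, `jsr`, `rsr`) just to make the sequencing concrete: the caller’s PC lands in a link register, and only a *nested* call forces a spill to the software stack.

```python
# Minimal, purely illustrative model of a branch-and-link convention:
# "jsr" copies the old PC into link register X, and a nested call must
# first spill that register to a software stack in memory.

class CPU:
    def __init__(self):
        self.pc = 0
        self.x = 0        # link register
        self.stack = []   # software stack in memory

    def jsr(self, target):
        self.x = self.pc  # old PC lands in the link register
        self.pc = target

    def save_link(self):      # stx (-s)
        self.stack.append(self.x)

    def restore_link(self):   # ldx (s+)
        self.x = self.stack.pop()

    def rsr(self):            # return: PC comes back from X
        self.pc = self.x

cpu = CPU()
cpu.pc = 100
cpu.jsr(200)        # call foo
cpu.save_link()     # foo's prologue spills the link register
cpu.jsr(300)        # foo calls a nested routine
cpu.rsr()           # leaf routine returns straight from X, no stack traffic
cpu.restore_link()  # foo's epilogue
cpu.rsr()           # foo returns to the original caller
assert cpu.pc == 100
```

Note the nice property for leaf functions: they never touch memory at all for the return address.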

What makes the biggest difference for a compiler is the ability to reference a constant offset from an index register. So, for example, the Intellivision does have a stack, but accessing a local involves:

        movr r6,r0               ; copy SP
        addi offset,r0           ; make pointer
        mvi @r0,r0               ; get value

You do have several registers you can use this way, though, and you can track pointer values to reduce that overhead a lot; plus it’s word-based, so 16-bit maths is one op.

The TMS9900, for example, if you’ve got your stack in say r1, ends up with stuff like:

                      add @offset(r1),r0

and you can even, if I remember rightly, do stuff like:

                     mov @offset1(r1),@offset2(r1)

to implement something like a local-to-local assignment or an argument-to-local copy.
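
To make the difference concrete, here’s a toy Python model of the two styles above (memory as a list, the offsets invented): with base+offset addressing the local-to-local copy is a single memory-to-memory step, while without it the address has to be built up in a register first.

```python
# Toy illustration of why base+offset addressing matters to a compiler.
# r1 plays the frame/stack pointer, as in the TMS9900 example.

mem = [0] * 64
r1 = 32              # frame pointer (arbitrary address)
mem[r1 + 2] = 7      # local1 lives at offset 2
mem[r1 + 4] = 0      # local2 lives at offset 4

# TMS9900 style: mov @2(r1),@4(r1) -- one instruction, no scratch register
mem[r1 + 4] = mem[r1 + 2]

# Intellivision style: three instructions just to read local1 into r0
r0 = r1              # movr r6,r0  : copy SP
r0 = r0 + 2          # addi 2,r0   : make pointer
r0 = mem[r0]         # mvi @r0,r0  : load value

assert mem[r1 + 4] == 7 and r0 == 7
```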

1 Like

Maybe off topic since it’s before the 70s, but Burroughs 5000 was designed to run Algol.

Lisp machines were made for running Lisp.

Forth processors; their instruction sets are often subsets of the Forth primitives. Then again Forth is kind of a macro assembler for a stack machine, so maybe it’s not a high level language.

Maybe Lilith. I don’t know to what extent it was specialised for running Modula-2.

2 Likes

Good thought! It was influential too. From this comment on HN:

Far as old stuff, there are at least three more greats along Burrough’s style: System/38 (later called AS/400 & IBM i), Intel iAPX 432, and Flex Machine w/ Ten15. Sole survivor of those is System/38 in form of IBM i. It lived up to future-proofing, reliability, and relative security its architecture intended. Like with Burroughs (later ClearPath MCP), (slow clap) “Bravo to the designers.” :slight_smile:

iAPX 432 and System/38 here: Capability-Based Computer Systems

Flex and Ten15 here: Flex machine - Wikipedia

Of course we should not forget the Transputer, well-equipped to run Occam.
The transputer and occam: A personal story by C. A. R. Hoare

1984 slide deck from Dave (Prof David May at Bristol):
Transputer and OCCAM

1 Like

Toot - toot, tooting my own horn here.
The home-brew computer I am always revising is (now) 9/18/36 bits. It was planned from the
beginning (late 1980s) to be easy to program in a high-level language, but not blazing fast.
At that time it was an 18- or 20-bit paper machine, a concept for what I thought would be a better
machine than the PC of the day. Having a 9- or 10-bit byte gave me a good address size and ample
opcode space for decoding. I had features (other than floating point) for what I thought a
high-level language needed. Small C was to be the programming language. The FPGA card had problems
with 18 bits; that set me back years of prototyping. I am now using a CPLD hardware design.
Getting it working at all has led to many design changes.
So I do have a CPU designed for high-level languages, but what I thought were important features
in the '80s is not what languages really need for a better implementation. Virtual memory seems to be one such feature, as well as multi-tasking; the languages themselves may not need them, but an operating system requires them.

1 Like

This is a slide from my 2019 talk about Smalltalk computers where I list some other language specific computers.

4 Likes

Great slides! I should have recalled the KDF9 and the Rekursiv.

Something from fairly recently, interesting but not retro: the rather unconventional Reduceron project (github repo here, HN discussion here)

a special-purpose processor for running lazy functional programs

By providing a minimal set of features tailored to the execution of functional programs, such a custom computer could be not only fast but simple, with benefits such as fuller verification and lower energy consumption.

F-lite is a core lazy functional language, close to subsets of both Haskell and Clean.

1 Like

This paper turns out to be quite witty.

An interactive computer system or component should spend nearly all its time waiting, so that it is nearly always ready to respond to the actions of its environment. Otherwise it is the environment that has to spend too much of its time waiting for the actions of the computer, and that is impossible, intolerable, or at least undesirable; and with the advent of much cheaper computers it is increasingly unnecessary.

4 Likes