Interfacing modern MCUs to older hardware


With some hardware that needs attention, getting real peripherals is quite a challenge, and costly. With Moore's law working in my favour, I was confident I could get a modern MCU, in software, to keep up with a 30-year-old processor.

My aim was to memory-map a real-time clock in the Amiga 500 trapdoor slot to an ESP32 and give the old Amiga an NTP-synced clock. Possibly re-purpose some of the registers, create a banked register set, and add WiFi/TCP networking over the same 4-bit bus! Drooling at the possibility…

The experiment started. The first step was to emulate the memory read timing; RAM in the Amiga is 150 ns. A simple application on the ESP32 toggles a GPIO, and the interrupt routine just sets an output high. This would show me the interrupt latency - I need this for the memory READ and WRITE strobes, as the output data needs to be presented on the data pins very quickly.

In the end, the ESP32 interrupt routines are too slow! At 2.5 µs, the latency is far too long.

Off to try another CPU - let's see.

- The UniBone Linux computer for UNIBUS
- Using modern ARMs as peripherals for retro 8 bit systems

This should be an interesting journey! Is there any chance you can do the job by having a busy-waiting loop? That could - should - be a bit faster than an interrupt. After an access you may well have a bit of slack time before the next possible access, to do some work.

(We similarly found that the GHz-class Raspberry Pi is hard-pressed to react in time to a 2 MHz 6502 bus.)


The busy-wait loop is experiment #2 on the list, but I really don’t think a wait loop is feasible as the ESP32 runs FreeRTOS. I want to keep the whole thing deterministic in its timing. (With FreeRTOS I have to yield, otherwise it goes bat-poo crazy.)

Next experiment is to modify the interrupt functions and hijack the ISR vector for my own nefarious purposes


(Is this a dual-core micro? I’ve a vague recollection one can run the RTOS on just one core… which might allow for busy-waiting on the other?)


Yep, it’s a dual core… one core reserved for the RTOS/WiFi/BT things, the other for the application… that’s how I think it was configured.


I did a quick test on a wait loop… massively better - but, as I was told all through school, ‘Could do better!’ :smiley:


Ah, do I see from your blog that you’re writing in C? That does give you some headroom, if writing at assembly level is a practical proposition.


All options are open… Using C to establish a baseline - assembly can always get better, but the magnitude of the improvement is open for debate.


Down to 256 ns by just using register writes for the GPIO toggles… 200 ns to go!


Taking this off on a slight tangent…

Some ~30 years back I interfaced a modern computer of the day (a BBC Micro) to my older CPU system - a Sinclair Mk14. Those who know the Mk14 will think the keys on a ZX80 were brilliant by comparison… hence my desire to interface it. It worked well - I wrote a little assembler on the Beeb for it, and I could download code into it.

Sadly my Beeb was stolen; however, I still have the Mk14 - phew :slight_smile:

And in recent times, it’s been done with a Raspberry Pi too - that thread was started some years back, but has been recently updated.

My Mk14:

It might need some TLC…



The timing is tight… I have to do a lot of work to make an ESP32 work consistently.

My next CPU is the BeagleBone Black. It has two Programmable Real-time Units (PRUs) that run at 5 ns per instruction, one instruction per clock cycle. The caveat is the boot-up time - the Amiga will be spinning its wheels for what feels like a month before the BeagleBone has fully booted Linux.


Just a thought, but would it work for the microcontroller to fill an SRAM with code and then let the Amiga run?


I am doing something similar with an ESP32 and a modern recreation of the Sinclair QL I am working on (68SEC000 at 7.5MHz).

I have found it easier to simply put the ESP32 behind some dual port RAM in the IO area. It just runs a continuous loop updating the real time clock, uses a PWM output to run the fast interrupt timer, and all keyboard input is handled through a single byte register.

Luckily, small dual port SRAM is quite reasonable in price, and simplifies the design greatly.


I did look at dual-port RAM, but in this case the Amiga clock is on a 4-bit bus with only 4 data lines attached (for BCD-type data). I want to create a pseudo-serial scheme where I can read the same address multiple times and the ‘Modern CPU’ on the other side increments the memory location.

Much like I2C memory where you write a command and address, and then you burst read/write and the device handles the smarts of incrementing the memory location.

@Dave - I am interested in your QL project… having owned one in the early ’90s. It unfortunately was thrown away when the Microdrive broke and I couldn’t play chess anymore.


So, how would you manifest this? Is the data “clocked” in via, say, Chip Select logic that isolates the magic byte?

That is you have your MCU wired up to your CPU, then you have, what, the R/W lines, Address lines (however few there are), and the Data Bus lines from the CPU. Then you’d have a separate Chip Select signal from the address decode logic.

I’m just visualizing that the CS signal acts as an ad hoc Clock Signal to clock in (or out) the data. Because once you do your MOV DEST <- SRC instruction, where DEST is your magic address, the data bus will have the SRC data on it, the address bus will have your DEST address on it, R/W will be Write, and the decode logic will set CS HIGH (we’ll say).

A cycle or so later, the CS will be low, because the address bus has moved on to the next instruction, thus dropping CS.

Your MCU can use CS and R/W (and the address bus) to react appropriately; it just has to act fast enough before CS goes away and the data bus moves on. It’s one thing to latch the data on a CS toggle so the MCU can read it “at its leisure”, but quite another to provide data back - that data has to be on the data bus really soon after CS goes high, I imagine (I imagine CS is one of the last signals in the process).

But, in the end, it seems that the CS signal is the toggle to get work done. Save that you have to do everything within the clock life of the CPU, since it’s not going to wait.

My scheme (unimplemented) has 7 control signals, and 8 data bits, designed to be used off of GPIO ports, rather than the Address/Data bus of the CPU. An async protocol relying on handshakes. A “Get It, Got It, Good” system. My hope is to use it in a 4MHz system.

Things like, set WRITE to HIGH, put the data on the bus, set HOST DATA READY to HIGH. Co processor sees the WRITE HIGH, and the HDR HIGH, reads the data, asserts COP DATA READY. Host sees that, de-asserts HDR, Co proc sees that and de-asserts CDR. 1 byte written. If either the host or co processor stalls, they just wait for the handshakes.


On the bus I have 4 ADDR and 4 DATA lines, with RD/WR and CS lines…

I wanted to use the RD line as the clocking mechanism. The RTC has 16 addresses that give out BCD data on the date/time in the clock.

I want to re-purpose an address and use it as part of the protocol. With 0xF as the 16th address, the MCU can detect accesses to it and then do some smarts on what happens next.

e.g. Write 0x0 to address 0xF… selects bank 0.
Then a READ of 0xF will return the byte contents of bank 0… after every RD strobe, the memory address in the MCU is incremented, and after every read the next nibble is placed on the bus.