6502 floating point

I want two’s complement floating point. Simple and non-standard. The PDP-8, for example.
I had missed that BYTE article on floating point. Thank you.

There’s quite a nice explanation at
The Floating-Point Guide - Floating Point Numbers
but most of all a couple of handy links at the bottom of the page. In particular, Float Toy lets you flip bits to see how a number changes, or type a number in to see which bits represent it.

I think the floating point formats we normally see in the 8 bit world are very like the 32 bit IEEE format, it’s just that they only bother with ordinarily representable numbers from the small to the large. (They don’t do infinities or not-a-number or the unreasonably small denormalised numbers.)

So, if you understand standard single-precision floating point, you’ve got the basics. Actually doing the necessary shifting and comparing and operating byte by byte is the usual thing in 8 bit land - I don’t think anything very unexpected is happening.

Perhaps the only thing to look out for is that the working space for floating point routines will usually carry an extra byte of precision, to be rounded off at the end to improve accuracy. Oh, and there’s often an implicit 1 bit at the MSB of the mantissa, to save a bit.
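To make the implicit 1 concrete, here’s a minimal Python sketch (my own illustration, not from any particular ROM) that pulls apart a normal, non-denormalised IEEE single:

```python
import struct

def decode_ieee_single(x):
    """Split a 32-bit IEEE single into its fields and rebuild the value,
    restoring the implicit leading 1 (normal numbers only)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF      # stored with a bias of 127
    mantissa = bits & 0x7FFFFF          # 23 stored bits, implicit 1 above
    value = (1 + mantissa / 2**23) * 2 ** (exponent - 127)
    return sign, exponent, hex(mantissa), -value if sign else value

print(decode_ieee_single(6.5))   # (0, 129, '0x500000', 6.5)
```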

Oh, but hang on, it looks like the fields in IEEE format are not byte-aligned, so most likely in an 8 bit floating point format the bit allocations will be slightly different. For example, this explanation of Acorn’s BBC Basic for the 6502 shows that the 8 bits for the exponent come first, the sign bit is next, then the 31 bits of the mantissa (with the sign bit sitting in the same place as the implicit 1.)
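As a sketch of that 5-byte layout (my own reconstruction from the description above, so treat the exact details, such as the excess-128 exponent and the zero case, as assumptions):

```python
def decode_bbc_float(b):
    """Decode a 5-byte BBC-BASIC-style float: [exponent][mantissa*4].
    Assumes an excess-128 exponent, mantissa normalised to [0.5, 1),
    and the sign stored where the implicit 1 would sit."""
    exponent = b[0]
    if exponent == 0:
        return 0.0                                   # assume zero is special-cased
    sign = -1 if b[1] & 0x80 else 1
    m = int.from_bytes(b[1:5], "big") | 0x80000000   # restore the implicit 1
    return sign * (m / 2**32) * 2 ** (exponent - 128)

print(decode_bbc_float(bytes([0x81, 0x00, 0x00, 0x00, 0x00])))  # 1.0
print(decode_bbc_float(bytes([0x83, 0x20, 0x00, 0x00, 0x00])))  # 5.0
```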

See also this explanation of the Acorn format.

1 Like

You could have a look at MS BASIC.
A full, commented version of the source code is available here: https://github.com/brajeshwar/Microsoft-BASIC-for-6502-Original-Source-Code-1978/blob/master/M6502.MAC.txt

The floating-point part in isolation here (but with fewer comments): https://github.com/mist64/msbasic/blob/master/float.s
(Full distribution https://github.com/mist64/msbasic, see also this article: Microsoft BASIC for 6502 Original Source Code [1978] – pagetable.com)

The AppleSoft version, also densely commented, is available here: https://www.txbobsc.com/scsc/scdocumentor/

Mind that there is a bug in at least some versions of the multiplication routine, but it is easily fixed. Compare: Multiply bug - C64-Wiki

2 Likes

Decimal or binary floating point?

Floating point: 32-bit is binary; 40-bit is BCD, in general.

The standard MS package used 32- or 64-bit FP on 8080-like machines, and 40-bit FP on 6502 (and perhaps 6809).

A few systems (I believe Atari, TI, and MSX) used 40-bit BCD.

Interestingly, Microsoft’s 6502 Basics had a build-time option for 4 byte or 5 byte floats, and different customers chose different options:
Create your own Version of Microsoft BASIC for 6502 – pagetable.com

Acorn’s BBC Basic used 5 byte floats and seems to have been pretty quick, according to @drogon’s investigations:
Comparing BASICs with Mandelbrot…

Both MS and Acorn used binary arithmetic - BCD isn’t a great performance option on the 6502, addition and subtraction being supported well but multiplication and division coming out more difficult than in binary.

But yes, a few systems chose BCD!

2 Likes

Okay, now I have a vague memory of some microcomputer … I think it used a 6502 … where the floating point math was based on base 256 instead of base 2. The 6502 only has instructions to shift or rotate left or right one bit at a time … not the most efficient thing ever, if you’re fiddling with carries for multi-byte mantissas.

So the idea was for the exponent to represent byte shift rather than bit shift. Shifting by 8 bits at a time was quick and efficient, and incidentally easier to code.
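Here’s a tiny Python sketch of why whole-byte alignment is attractive (the function name and layout are just illustrative):

```python
def align(mantissa: bytes, digits: int) -> bytes:
    """Shift a big-endian radix-256 mantissa right by whole bytes.
    On a 6502 this is just a handful of byte moves; the equivalent
    binary alignment needs eight carry-propagating shifts per byte."""
    return (b"\x00" * digits + mantissa)[:len(mantissa)]

# Align 1.0 (mantissa 01 00 00 00) down by one radix-256 digit
# so it can be added to 256.0, which has an exponent one higher:
print(align(bytes([0x01, 0x00, 0x00, 0x00]), 1).hex())   # 00010000
```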

I want to say it was one of the UK micro computers that did this. I don’t remember, but in any case I really like the concept.

1 Like

This sounds rather intriguing. Does anyone know more about this?

EhBASIC: 79 seconds
CBM2 BASIC: 82.5 seconds
BBC Basic 4: 48.2 seconds

That’s pretty impressive, about 58% of the runtime of MS/Commodore BASIC.
(Also, well done, @drogon!)

More like well done, the boffins at Acorn.

They had a few advantages, namely running on the 65C02 rather than the 6502 that those BASICs were written for (although this version of EhBASIC has been slightly optimised for the 65C02, and that’s an ongoing process). Even so, the older BBC BASICs on the BBC Micro were still considerably faster, all things considered (about 62 seconds).

-Gordon

2 Likes

I haven’t found an example, but I think this would be called radix 256. The problem with it is that when aligning values you might lose up to 7 bits off the end, so you lose precision. So you need extra precision to make up for that. I did find this note on a big page of floating point formats:

The TI 99/4A uses radix 100 in an 8-byte storage format. 7 bytes are base-100 mantissa “digits” (equivalent to 14 decimal digits), and the exponent (a value from -64 to 63) is stored in the 8th byte along with a sign bit. The exponent is treated as a power of 100.
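Here’s a very rough Python sketch of the radix-100 idea; it’s not TI’s actual byte layout, just an illustration of digits-times-a-power-of-100:

```python
def to_radix100(x: float, ndigits: int = 7):
    """Very rough sketch: express x as base-100 digits times a power
    of 100.  Not TI's encoding; binary round-off can leak into the
    low digits because we normalise with ordinary doubles."""
    exponent = 0
    while x >= 100:
        x /= 100; exponent += 1
    while 0 < x < 1:
        x *= 100; exponent -= 1
    digits = []
    for _ in range(ndigits):
        d = int(x)
        digits.append(d)
        x = (x - d) * 100
    return digits, exponent

print(to_radix100(300.0))   # ([3, 0, 0, 0, 0, 0, 0], 1)
```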

1 Like

Well, you lose an average of 3.5 bits of mantissa, but you gain 3 bits of exponent. Okay, I don’t think you can legitimately “average” the lost bits of mantissa, but still …

I guess if it actually existed and wasn’t just some random discussion of floating point ideas, it must have been somewhat obscure.

No reason for us not to implement it anyway, though, right? I’d go with a 1-byte signed exponent (really overkill) and 7 bytes of signed mantissa, zero allowed. There’s no implicit leading 1, because that only makes sense for a binary mantissa. The various math operations seem pretty straightforward to me. The only clever bit would be altering the exponent at the end of add/subtract operations: you want to check for either a leading $00 or a leading $FF, rather than just leading zero bits.
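A quick Python sketch of that normalisation rule, assuming the format you describe (fixed 7-byte big-endian two’s-complement mantissa; all names are just for illustration):

```python
def normalise(m: bytearray, exponent: int):
    """Normalise a fixed-width, big-endian, two's-complement radix-256
    mantissa after an add/subtract.  A leading $00 byte on a positive
    value, or a leading $FF on a negative one, carries no information:
    shift left one whole byte and drop the exponent by one digit."""
    if not any(m):
        return m, 0                      # true zero: pick a canonical exponent
    while ((m[0] == 0x00 and m[1] < 0x80) or
           (m[0] == 0xFF and m[1] >= 0x80)):
        del m[0]
        m.append(0x00)                   # left shift fills with zeros
        exponent -= 1
    return m, exponent

m, e = normalise(bytearray([0x00, 0x00, 0x12, 0x34, 0, 0, 0]), 0)
print(m.hex(), e)                        # 12340000000000 -2
```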

1 Like

I think it might well be that aligning before add and subtract, and normalising afterwards, are fairly costly, so it might be worth a go! With 7 bytes of mantissa it feels like you’ve plenty of margin. (It’s 2.4 decimal digits which might be lost, although as you say, on average it’s half that. One might consider doing what calculators do, and rounding at the conclusion of a complex expression. You’d quite like the square root of a square to be an integer.)

IBM mainframe floating point had a strange format, scaling by 16 rather than 2. Internally they added a guard byte for rounding and other things.
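For reference, a minimal sketch of decoding that System/360-style 32-bit hex float (the internal guard handling is beyond this):

```python
def decode_ibm_single(bits: int) -> float:
    """Decode a 32-bit IBM hex float: a sign bit, an excess-64 exponent
    that scales by powers of 16, and a 6-hex-digit fraction in [1/16, 1)."""
    sign = -1 if bits >> 31 else 1
    exponent = ((bits >> 24) & 0x7F) - 64
    fraction = (bits & 0xFFFFFF) / 16**6
    return sign * fraction * 16**exponent

print(decode_ibm_single(0x41100000))  # 1.0
print(decode_ibm_single(0x42640000))  # 100.0
```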

1 Like

I can’t be sure if @IsaacKuo’s base 256 idea is totally original, but I’m intrigued by the performance aspect of it. Six bytes should be plenty for native precision, so if the performance boost is worth the size penalty, I see no reason not to spend some time test-coding. Oh, wait … time … sheesh.

3 Likes

I think maybe my vague memory is from a USENET discussion where someone described the TI 99/4A as base-256. Or I read it wrong. Or I remember it wrong.

RADIX-100 has some interesting advantages. The big one would be how financial numbers worked (here in the USA, at least). You could calculate with a cent being 0.01 and various important sub-cent values … and they would be accurate. On other computers, you’d have to know to use integer values and pretend there’s a “.” before the last two digits. And hope to heck you don’t have any places where 1 means 1 dollar. And if you had to do some financial calculations that involve sub-cent values then you were … uhh … well, good luck!
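A quick Python illustration of the gotcha; a radix-100 format shares decimal’s ability to hold cents exactly:

```python
from decimal import Decimal

# One cent is not exactly representable in binary floating point:
print(sum(0.01 for _ in range(100)))   # not exactly 1.0 (1.0000000000000007 here)
# In a decimal (or radix-100) format it is exact:
print(Decimal("0.01") * 100)           # 1.00
```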

For a general purpose home computer, where you couldn’t expect users to be familiar with the limitations of (binary) floating point math? I can see how the TI 99/4A route would make more sense. Especially for Texas Instruments, considering their most popular products were calculators that did precise decimal floating point math. Those calculators never surprised users with a “gotcha!” while doing basic financial calculations.

1 Like

It’s a very good point: people have expectations about how numbers work, and calculators (usually) play to those expectations, and TI might well have wanted to avoid being confusingly inconsistent.

1 Like

This is 6800, but I think it’s one of the best explanations and coverage I’ve seen, with real code.

3 Likes

Nice find! As it turns out, a couple of years later he wrote a 6502 version (which I just happen to have by me on my bookshelf…)

3 Likes

Well, I thought BCD was the only 40-bit floating point, but Mr Gates decided to use a 32-bit binary fraction when he ported BASIC from the 8080 to the 6502. Now your $2,900.00 computer has 8 digits, just like your $29.00 calculator. Real BASICs have a Microsoft easter egg in them that displays “MICROSOFT!”, like WAIT 6502 on your C64.

One thing that helped me finally put things together was figuring out what the Guard, Round, and Sticky bytes are and how they work. You can research that independent of any 6502 implementation.
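For anyone else chasing this, the rounding rule those three quantities implement looks like this in Python (a sketch, independent of any 6502 code):

```python
def round_nearest_even(kept: int, guard: int, rnd: int, sticky: int) -> int:
    """Round-to-nearest-even using the guard/round/sticky bits that
    fell off the bottom of a result during alignment or multiply."""
    if guard and (rnd or sticky or (kept & 1)):
        kept += 1       # more than half, or an exact half with an odd LSB
    return kept

print(round_nearest_even(10, 1, 0, 0))  # 10: exact half, even LSB, stays
print(round_nearest_even(11, 1, 0, 0))  # 12: exact half, odd LSB, rounds up
```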

Calc65, which you mentioned above, is my creation, so I’m happy to answer any questions if that’s helpful.

2 Likes