The last programming project from Bill Gates: Microsoft BASIC for TRS-80 Model 100


It would be interesting to know which computer was actually used for coding. I’m somewhat skeptical that it was really the TRS-80 Model 100.

For a bit of backstory, the Model 100 is based on a design by Kyocera (Kyoto Ceramics), which was licensed to NEC, Tandy, and Olivetti. The NEC model (PC-8201) was first to market, followed by the Model 100, then Olivetti’s M10, and finally Kyocera’s own Kyotronic 85 (or KC-85, not to be confused with the Robotron computer going by the same name).
While all variants share the principal architecture and code base, the NEC models (PC-8201, PC-8201A) featured a BASIC adjusted to match the commands and some command formats of NEC’s own BASIC, dubbed N82-BASIC. The code of the Olivetti M10 is based on the ROM of the Model 100, as is the code of the Kyotronic 85 (with some segments stripped.)

Notably, the license for this design was first sold to NEC, which then re-licensed the design to other manufacturers:

[T]he systems are like two peas in a pod — sisters whose “parent,” Kyoto Ceramics (Kyocera Corp.), sold the rights to them to NEC, after which NEC licensed the 100 to Radio Shack, but kept the 8201 for itself. Sibling rivalry notwithstanding, the systems have many similarities (including their $795 price tag), although there are a few surprise differences in design philosophies, which are obviously being tested in the marketplace.

(C. P. Rubenstein, NEC PC-8201A Lapsize Computer, in Computers & Electronics, Vol. 22, No. 4; April issue, 1984; p. 36)

Moreover, it seems very plausible to me that there would have been a demonstration system by Kyocera, on which all further models were based.

I once did a short write-up on the family, a few years ago, in the context of a RetroChallenge project:

So, what was the computer this ROM and the standard applications were actually coded for? There are certainly photos showing Bill Gates with the Model 100, but was this really the system where it all started? Commonly, the literature is very US-centric, thus concentrating on the Model 100 and mostly ignoring the various siblings, but the history is a bit more complex than that.


The Model 100 is a joy to use - if you’re not a fast typist. It can take a while to catch up with even a moderate touch typist. To its credit, it does have quite a large input buffer.

This machine also came out when Microsoft were dabbling with BCD floating point. Several other BASIC interpreters they wrote around that time (notably MSX BASIC) used BCD. I’d love to know the rationale behind this (if any): being Microsoft, it must have had a commercial motive. Maybe MS were concerned that the optional BCD requirements of ANSI BASIC would catch on? By the time QBasic came out, Microsoft didn’t care to include BCD floating point in the interpreter.
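The usual motivation for BCD is easy to demonstrate in any language that has both binary and decimal arithmetic; a quick Python sketch (my own illustration, not from the thread):

```python
# Binary floats cannot represent most decimal fractions exactly,
# which is exactly what hurts in money calculations.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)  # False: binary rounding error
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: exact decimal arithmetic
```

A BCD (or any decimal-radix) BASIC gives you the second behaviour for ordinary-looking literals, which is plausibly what mattered for home-finance programs.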

Didn’t DEC BASIC, which inspired MS BASIC, feature BCD math?

Regarding the keyboard, all the portable “Kyocera siblings” featured Alps keyboards (with Alps SKFL switches), which were generally nice.
(I still miss the Apple Extended Keyboard II, another classic keyboard featuring Alps switches.)

I suspect BCD math in BASIC was for ‘balance your checkbook’ type programs.
Later, with QBASIC, it was more about games you could play. QBASIC was a DOS program, so
you may not have had an easy way to program in BCD.
As a side note, processors after the 68000 dropped BCD math. Now they’ve brought it back (2010s?)
as a patented idea from IBM called densely packed decimal.
Ben.


Some might. PDP-8 BASIC didn’t, but PDP-11 Paper Tape BASIC might have. It gives pretty much the correct answer when fed this little tester:

10 LET S=0
20 LET X=0
30 FOR N=1 TO 1000
40 LET S=S+X*X
50 LET X=X+0.00123
60 NEXT N
70 PRINT S,X
80 PRINT "CORRECT RESULT: 503.54380215, 1.23"
90 END
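For comparison, here is the same loop in Python (my translation, not part of the original post), run once with binary floats and once with decimal arithmetic; the decimal version lands exactly on 503.54380215 and 1.23:

```python
from decimal import Decimal

def run(zero, step):
    # Same loop as the BASIC tester: accumulate X*X, then bump X, 1000 times.
    s, x = zero, zero
    for _ in range(1000):
        s += x * x
        x += step
    return s, x

s_f, x_f = run(0.0, 0.00123)                    # binary floating point
s_d, x_d = run(Decimal(0), Decimal("0.00123"))  # decimal (BCD-style) arithmetic

print(s_f, x_f)  # binary floats: approximately 503.54380215 and 1.23
print(s_d, x_d)  # decimal: exactly 503.54380215 and 1.23
```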

Before about 1983, none of Microsoft’s BASIC interpreters used BCD floating point. From about 1983–85, some of them did (or at least, had it as an option, like MS BASIC for Macintosh). After that, they stopped doing BCD again.

BCD is an interesting subthread in the history of programming languages. Certainly makes sense for financial applications.

As for the patent, my blood boils at something like a data-encoding scheme getting patented.


Found this interesting post about the 40-bit floating point used in BASICs: history - Why did 8-bit Basic use 40-bit floating point? - Retrocomputing Stack Exchange

In terms of volumes of calculations, financial systems have tended to use decimal representations since the very beginning. So maybe our binary floating point is the weird outlier after all.

IBM’s patent on fast BCD arithmetic in the POWER series is more about the clever internals than the bit representation. BCD has always been slower than binary floating point. Better to spend all night on your batch job than get sued for mismatches in the small change.

Part of the problem with binary floating point is the poor I/O conversion.
I guess the paper “27 Bits Are Not Enough for 8-Digit Accuracy” was never
read by anyone important.
bits / decimal digits:
- 25 / 7 (23- or 24-bit floats)
- 29 / 8
- 31 / 9
- 38 / 11 (36? 48-bit floats?)
- 48 / 14 (CDC 6600 48-bit floats; lucky, as the paper came out in 1967)
- 55 / 16 (54-bit double floats)
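If I recall the criterion correctly, figures like these come from the round-trip rule: for d decimal digits to survive conversion into a binary float and back, you need roughly ⌈d·log₂(10)⌉ + 1 significand bits. A quick sketch (my own, not from the post; note it yields 28 rather than 29 for 8 digits, so the list above may use a slightly different criterion):

```python
import math

def bits_for_digits(d):
    # Rule-of-thumb minimum significand bits so that d decimal digits
    # survive a decimal -> binary -> decimal round trip.
    return math.ceil(d * math.log2(10)) + 1

print([(d, bits_for_digits(d)) for d in (7, 8, 9, 11, 14, 16)])
# e.g. 7 digits -> 25 bits, 14 digits -> 48 bits, 16 digits -> 55 bits
```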
BCD is needed only for money calculations. The fact that the EURO is
some weird floating-point number to convert to anything screws you up.
But when you think about it, scaled numbers will work for money matters,
though not in BASIC; in something like FORTH they would.

I’m not sure I get you on the EURO € thing… It’s the same format as US Dollars - decimal with a fraction. $123.45 or €123.45. Same for GBP too: £123.45.

Maybe you’re thinking UK pre-decimal (prior to Feb 15th, 1971) where we had pounds, shillings and pence. 20 shillings to the pound, 12 pence to the shilling…

Here’s something of interest too:

-Gordon

I can’t find the site that said that about the Euro, so ignore that.
And in 1961 we had a cent sign; after ASCII we lost it. (In 2012 they stopped minting cents.)
In the UK, did one get to use 10 and 11 for pence in ASCII?

*BARBARA: Well, I don’t know how you explain the fact that a fifteen-year-old girl does not know how many shillings there are in a pound.*
*IAN: Really?*
*BARBARA: Really. She said she thought we were on the decimal system.*
*IAN: Decimal system?*

(Doctor Who, “An Unearthly Child”)

It could have been about exchange rates: all major currencies float (within windows decided by central banks) and are quoted to quite a few digits, since that matters when you are exchanging large amounts of money.

From the history - Why did 8-bit Basic use 40-bit floating point? - Retrocomputing Stack Exchange

"The floating-point routines for Microsoft BASIC were written by Monte Davidoff in 1975, originally for the Altair, which used an Intel 8080 CPU. The source code had been lost for years, until Bill Gates’ former tutor discovered a copy in 2000 that had fallen behind his file cabinet two decades before.”

The numbers 10 and 11 were just that - 2 normal digits in whatever character representation you like. It predates America by a millennium or so.

When writing it, there were symbols, but they were normal symbols. Sometimes the pound sign (£, not #) was an L overstruck with a dash, depending on the typewriter. Pennies were suffixed with a ‘d’, so 6d for 6 pennies (or 6 pence, sixpence, or a tanner), and shillings were often suffixed with /-, as in 4/-

LSD was a common name for the currency - deriving from librae, solidi, and denarii

Blame the French for having a currency where the penny was 1/240th of a pound (Avoirdupois) of silver…

Could have been worse - just look at the wizarding currency in Harry Potter…

But we were all supposed to learn our 11 and 12 times tables because of that…

I have a few old mechanical calculators that handle LSD currency too; however, they stop at whole pence and don’t handle halves or quarters (farthings).

COBOL can represent old UK currency using a PICture definition (which I guess is BCD internally)
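In a language without decimal PICture clauses, the usual workaround is scaled integers; a hypothetical Python sketch (function name my own) that stores amounts as whole old pence and splits out pounds and shillings only for display:

```python
def format_lsd(pence):
    # Keep money as an integer count of (pre-1971) pence and format it
    # as pounds/shillings/pence: 240d = £1, 12d = 1 shilling.
    pounds, rest = divmod(pence, 240)
    shillings, d = divmod(rest, 12)
    return f"£{pounds} {shillings}s {d}d"

print(format_lsd(80))    # "six and eightpence": a third of a pound
print(format_lsd(1000))
```

All arithmetic stays exact integer arithmetic; only the display needs the mixed radix.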

There are (or were?) a couple of currencies where a fifth was the base, so the main unit was divided into 5… Good thing we don’t have 7 fingers…

-Gordon


Everything was fine with the Habsburg Thaler. If you really needed a 1:1 conversion, you could always use a file… But then came the Dollar: to keep up appearances, they kept the Pillars of Hercules, but made it 0.92901462 (USD:EUR as of writing). Kids nowadays… :wink:

BTW, IBM had kind of an alternative to BCD: bi-quinary coded decimal (not to be confused with quin-binary on the 1401).
(As Colossus used bi-quinary, too, it kind of started with this, but nobody was allowed to tell.)
Notably, bi-quinary predates Bretton Woods – and thus the finely graduated volatility arising from this. :wink:

Old-style British money had the advantage that it could be divided into many more fractions, and still have whole numbers of pennies. So you could split a pound into three equal parts of ‘six and eightpence’, or six, eight, or twelve equal parts.
We still have twenty-four hours in a day, sixty minutes in an hour, three hundred and sixty degrees in a full circle, boxes of a dozen eggs, etcetera for the same sort of reasons. It’s useful to be able to divide the compass rose into sixths, eighths, and other common fractions - and this usually trumps the ‘usefulness’ of having one hundred gradians in a right angle. Most calculators can be switched to gradian mode for angles, as well as the more-often-used degree and radian modes.
But although twenty shillings in a pound, and twelve pence in a shilling were nice, I suppose decimal currency is ultimately simpler - and it fits much better with the way other countries divide up their monetary units.
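The divisibility argument is easy to check: 240 (old pence to the pound) splits evenly in far more ways than 100 does. A quick Python check (mine, not from the post):

```python
def divisors(n):
    # All whole numbers that divide n evenly, i.e. the ways
    # n pence can be split into equal whole-penny parts.
    return [k for k in range(1, n + 1) if n % k == 0]

print(len(divisors(240)), len(divisors(100)))  # 240 has 20 divisors, 100 has only 9
```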


And yet, in 1962, a character set was proposed for consideration…

The British wanted a four-bit decimal subset to include digits 0 through 9 plus 10 and 11, period (.), slash (/), minus (−) and plus (+).

See p18 of Eric Fischer’s The Evolution of Character Codes, 1874-1968 which has much of interest about character sets (we are of course well off topic)


At the risk of going more off-topic, some years back I wrote my own BASIC interpreter and included turtle graphics - wanting to make life easy for kids (and shades of the old Big Track toy), I added “clock” angles. So 15 was 90° and so on.
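That mapping is trivial to express; a Python sketch of the idea (the interpreter in question was a home-grown BASIC, so this is purely illustrative):

```python
def clock_to_degrees(minutes):
    # "Clock" angles for turtle graphics: 60 minute-marks per full turn,
    # so each mark is 6 degrees; 15 (quarter past) comes out as 90 degrees.
    return (minutes % 60) * 6

print(clock_to_degrees(15))  # 90
print(clock_to_degrees(30))  # 180
```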

Seemed like a good idea at the time, but it appears that kids have not been learning how to tell the time on a round clock for a couple of decades now.

-Gordon


You could have made 3 do that job! So many possibilities…
