Help! How many dialects of Basic - citation needed

At last night’s ABUG (Acorn-centric user group) one of our number made a claim which he later felt he had to withdraw. He thought he’d seen a relatively definitive statement of how many Basic dialects there were (or are) - I think it was between 200 and 300. He’s had a search, and I have, but we can’t find any such claim.

Anyone?

I’m also interested in unusual but useful features of Basic dialects which might be good to adopt and might not be widely known. (Inbuilt assembler, complex numbers, string slicing, extended precision arithmetic…)

For a bit of related reading matter, try these:

https://www.salon.com/2006/09/14/basic_2/

BASIC wasn’t designed to change the world. “We were thinking only of Dartmouth,” says Kurtz, its surviving co-creator. (Kemeny died in 1992.) “We needed a language that could be ‘taught’ to virtually all students (and faculty) without their having to take a course.”

“Our vision was that every student on campus should have access to a computer.”

In the past, Kemeny and Kurtz had made two unsuccessful stabs at creating computer languages for beginners: Darsimco (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). But this time they considered modifying an existing language.

An early manual stated the maximum program length as “about two feet of teletype paper.”

… it’s possible that Dijkstra was exaggerating for dramatic effect. BASIC wasn’t his only bête noire among programming languages: He also spewed bile in the direction of FORTRAN (an “infantile disorder”), PL/1 (“fatal disease”) and COBOL (“criminal offense”).

The single most influential book of the BASIC era was not a textbook–at least not officially. It was 101 BASIC Computer Games, later known as BASIC Computer Games and edited, in both versions, by David H. Ahl.

Kemeny and Kurtz were exceptionally disappointed with what others had done to their creation. In 1985 they published a book, Back to BASIC, which bemoaned the crudeness and inconsistency of Microsoft BASIC and other variants available for microcomputers. They titled the chapter on PC-based BASICs “What Went Wrong?” and called them “street BASICs,” a moniker meant to sting.

BASIC’s creators didn’t just complain about what had happened to the language. They also founded a company with a meaningful name–True BASIC–which produced a version that added new features while preserving the vision of the original Dartmouth BASIC. Unlike Microsoft BASIC, True BASIC was also designed to be the same language, no matter what computer you ran it on.

Kurtz says “We thought we could do some good by implementing the same language (literally) on the different computers. We were wrong. The different computers came up so thick and fast that we, a small company, couldn’t keep up.”

1 Like

I’d say that while there is theoretically a finite number, for all practical purposes there are infinitely many, since we can’t count them all. (Partly for historical reasons.) Especially if we include subvariants. E.g., on the BBC Micro I think there were BBC BASIC I–IV, HIBASIC III & IV and BAS128, not counting RISC OS variants. Commodore BASIC had - just a quick guess - about 7 variants. (Mind that Commodore BASIC 1 is quite different in capabilities from, say, the BASIC of the Commodore 128 in 128 mode, or of the Plus/4. Are they still the same dialect?) The TRS-80 Model 100 and its sibling, the NEC PC-8201, both come with MS BASIC and share vast portions of the same ROM code, yet are different enough from one another to be incompatible. What about extension packs, like Simons’ BASIC? And so on.
However, there may also have been significantly fewer than 200 major dialects of BASIC. It really depends on how you define “dialect”.

Maybe the best approach would be to sketch a family tree and then decide where to cut the branches for the purpose of defining dialects?

1 Like

I suppose one approach might be to look at how many independent implementations of Basic there are… it would perhaps merge all of Acorn’s flavours of BBC Basic, and all of Microsoft’s 8-bit Basics, while still counting any modern or historic re-implementations. (The original 6502 and Z80 BBC Basics were independent implementations of the same specification, with the Z80 version then licensed to Acorn, non-exclusively.)

In a way, counting things loses far too much information. Perhaps a family tree is a much better thing to study. It would still be interesting to see how big it is: much more interesting to me to see how many very different things there are, than to try to count minor variations like Commodore’s tweaks to Microsoft Basic.

There is of course a Wikipedia page, for what it’s worth, subject to deletionism and squabbling:

In a way, it’s a bit like saying, “Well, insects, who cares?” We need BASIC entomology! Linné to the rescue! 🙂

There are far too many people (like me) who’ve put together their own version. Even my BASIC has several variants which are embedded in commercial products, so it will always be… n+1

-Gordon

I must admit I’m unclear in my own mind whether I’m interested in dialects or implementations (or indeed ports). I like the idea that there’s more variety than we suppose - it’s Darwin’s tangled bank. And I very much like the idea that there are possible features which are usually overlooked.

Well, to start, there are many hundreds of dialects. You can find that many in source form on GitHub right now. If you narrow the selection - dialects from the ’80s, dialects for 8-bit machines, and so on - I think the number would remain largely the same.

But…

I’m also interested in unusual but useful features of Basic dialects which might be good to adopt and might not be widely known. (Inbuilt assembler, complex numbers, string slicing, extended precision arithmetic…)

So I’ve been writing articles on BASICs on the wiki for some time now, and have found many little differences that may be of interest. The key is where to draw the line. For instance, if memory is no limit then you could include them all - and effectively memory is no limit on a modern machine. If we instead limit ourselves to some smaller set, say ones that might run on an 8-bit machine, then we can pick and choose. In any event:

Dartmouth’s later versions added string manipulation using the CHANGE command, like CHANGE A$ TO B. This would put the ASCII codes for each character in A$ into the array B. You can then convert back with CHANGE B TO A$. There’s nothing here you can’t do with a loop, but it’s interesting. Wang used the somewhat more obvious CONVERT command for this.
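Roughly, in C (the function names are mine, and element 0 holding the character count is a convention of some dialects rather than a universal rule):

    #include <string.h>

    /* CHANGE A$ TO B: copy each character's ASCII code into an
       int array, with the length in element 0. */
    void change_to_array(const char *a, int b[]) {
        int len = (int)strlen(a);
        b[0] = len;
        for (int i = 0; i < len; i++)
            b[i + 1] = (unsigned char)a[i];
    }

    /* CHANGE B TO A$: the reverse conversion. */
    void change_to_string(const int b[], char *a) {
        int len = b[0];
        for (int i = 0; i < len; i++)
            a[i] = (char)b[i + 1];
        a[len] = '\0';
    }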

HP is the first I’ve found that used string slicing, although I cannot guarantee it was the first. It was widely copied in other 1970s dialects, including Data General, North Star, Apple, Atari and (later) Sinclair. String slicing has a major performance upside: taking a slice does not make a new string in memory, it simply makes a new pointer into the existing string. In contrast, LEFT$(A$,5) makes an entirely new string, copies 5 characters into it, and returns a pointer to that - which eats up memory as well.
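To make the cost difference concrete, here’s a minimal C sketch; the types and names are mine, not any particular interpreter’s:

    #include <stdlib.h>
    #include <string.h>

    /* A slice is just a view into the existing string: no allocation. */
    typedef struct { const char *ptr; size_t len; } slice;

    /* HP/Atari-style A$(from,to): 1-based, inclusive bounds. */
    slice slice_string(const char *s, size_t from, size_t to) {
        slice v = { s + (from - 1), to - from + 1 };
        return v;
    }

    /* MS-style LEFT$(A$,n): allocates a new string and copies n chars. */
    char *left_string(const char *s, size_t n) {
        char *out = malloc(n + 1);
        memcpy(out, s, n);
        out[n] = '\0';
        return out;
    }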

HP, however, is unique in that it optionally allowed square brackets for defining the limits. This has a significant potential advantage… In Atari, as an example, A$(1,10) makes a slice of 10 characters - but this syntax also means there is no way to define a string array! HP did not allow string arrays either; A$(1,10) and A$[1,10] were identical internally, and the brackets could also be used for numerical arrays, like B[1,10].

This represents a lost opportunity, because if you say square brackets are for slicing and round ones are for arrays, then you could write A$(1,10)[1,10], which would return the first 10 characters of element (1,10) of the A$ array. So I would STRONGLY recommend that any new dialect use this syntax.

HP and a few others optionally allowed # for “not equal”, which is actually kind of clever. Not terribly important, though it might be useful for some porting efforts on very old code.

Some Basics, like Tymshare’s SUPER BASIC and DEC’s BASIC-PLUS, offered post-statement tests like PRINT “IT IS FIVE” IF X=5, as opposed to IF X=5 THEN PRINT “IT IS FIVE”. I’m not sure if this is because they wanted to support JOSS-like syntax, or if it’s because the original IF did not allow statements, only GOTOs (Dartmouth for instance, and many small-computer versions).

Personally I don’t see a reason to use this in any modern dialect. Its syntax is somewhat more natural, IMHO - “print this if…”, in the same way that “black cat” really does make more sense in the French fashion, “cat, black” - but it makes the parser seriously more difficult, and likely the interpreter too.

And internally there are some decisions to be made as well. Curiously, after looking at all the mainline dialects, I concluded that Atari’s is the most suitable for advancement. It tokenized everything at edit time: number constants were converted to internal form, variable storage was set aside during edits, and variable references were replaced by a pointer to the value, and so on. In theory, this should have allowed it to run circles around something like MS, which left much of the code - things like constants and variable names - in its original ASCII and re-converted it at runtime. Sadly, they wrapped this great parser in the world’s worst runtime, which made it one of the slowest BASICs of its era (only TI’s was slower). The approach is by no means unique to Atari - BASIC09 worked the same way - but it is the only mainstream example from the early 8-bit era I am aware of.

The concept of its system is clearly superior, easy enough to implement on 8-bit machines, and should be the model for future dialects. It’s how my BASIC works.
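As a rough C sketch of the payoff - the “tokens ≥ 0x80 are variable indexes” convention is Atari’s, the rest is my own illustration:

    /* Values live in a table built up as lines are entered; a variable
       reference in the tokenized program is a single byte >= 0x80 whose
       low bits index this table. */
    double value_table[128];

    double fetch_variable(unsigned char token) {
        /* one masked array lookup at runtime; an interpreter that keeps
           the ASCII must re-scan the name and search a symbol list */
        return value_table[token & 0x7F];
    }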

Finally, I strongly recommend that every dialect have some sort of LABEL concept which can be used as the target for GOTO and GOSUB. While some dialects support this, and others (like Atari and Tiny BASIC) allow you to GOTO any arbitrary expression - and thus A=500:GOTO A - they all ultimately call the normal line lookup on the resulting numeric value.

My idea is slightly different: any LABEL encountered works like a variable in Atari BASIC and immediately creates an entry in a table of name:lineno:location. During parsing, branches followed by a valid label name use a separate token and, as with variables, the label name is replaced by a pointer to its entry in the table. When one of these is encountered at runtime, the pointer is followed to the location value; if it is valid, you immediately branch to that location, otherwise you use the lineno to call your line lookup routine, put the result into location, and continue as before. This lazily evaluates the line addresses for super-fast lookup, and during edits you simply set location to -1 for any entry at or above the lowest line number changed. If you support PROCEDURE or similar structures, these would use the same mechanism, thus eliminating the line-lookup overhead for GOSUBs.
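A minimal C sketch of that table; the field names, the 16-entry size and the use of -1 as the “not yet resolved” flag are my own choices for illustration:

    extern long find_line(int lineno);  /* the normal slow line lookup */

    typedef struct {
        char name[8];   /* label name                            */
        int  lineno;    /* target line number, recorded at parse */
        long location;  /* cached address of that line, or -1    */
    } label_entry;

    label_entry labels[16];             /* small fixed table, as suggested */

    /* Follow a tokenized GOTO/GOSUB: resolve lazily, then cache. */
    long branch_target(label_entry *l) {
        if (l->location < 0)
            l->location = find_line(l->lineno);  /* slow path, taken once */
        return l->location;                      /* fast path thereafter  */
    }

    /* After an edit, invalidate any cached location that may have moved. */
    void invalidate_from(int lowest_changed_line) {
        for (int i = 0; i < 16; i++)
            if (labels[i].lineno >= lowest_changed_line)
                labels[i].location = -1;
    }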

Of course, one can implement similar concepts for every line number, as BASIC XL did, or use various caches, as in TURBO BASIC and others. But these all require more memory and more logic, and might not be suitable for small machines. Besides, the author of the program presumably knows which locations are branched to most often and can create LABELs just for those, so the storage table might be quite small - perhaps 8 or 16 entries. It might be so small that you simply merge it into the variable storage as a “line number” type.

4 Likes
  • BBC BASIC’s EVAL and Sinclair BASIC’s VAL are expression evaluators, interpreting the string passed as an argument: X=1: PRINT EVAL("X+1") prints 2.
  • A few interpreters used BCD floating point. MS toyed with it at one point (in the Tandy 100, MSX and BASIC for Macintosh) and Atari BASIC used it too.
  • Locomotive BASIC had timer-based interrupts, so you could have subroutines called at regular intervals. Make the timer too short, though, and it would stop the Esc key from working. I had to retype more than a few programs lost to that one.
  • Almost no micro dialect used the Dartmouth/ANSI MAT matrix commands, but some BBC BASIC dialects can do matrix maths on multidimensional arrays.
  • Mallard BASIC (CP/M, shipped with every Amstrad PCW) had features for manipulating ISAM indexed files. (This makes up for AMSDOS on the Amstrad CPC machines lacking random access on disk files, I guess.)
1 Like

Thanks @MauryMarkowitz, @scruss, some good ideas there.

I do agree that memory should be regarded as significantly less limited now, even for 8-bit platforms, so interpreters can have more features and more machinery for performance, and space-time tradeoffs can be made in favour of time. APL, I think, shows us that if we offer higher-level constructs, we can execute those efficiently and it’s less important how fast we might (for example) perform explicit loops.

OS-9 (6809) had a very nice BASIC and Pascal. You could run the B-code or P-code for BASIC and Pascal respectively, or compile it to 6809 machine code for speed.

How many variants of BASIC were there… probably more versions per brand and model than one would care to count.
For example, the TRS-80 Model 1 had Level 1 and then Level 2; the Model 3 had the same, plus GBASIC for the high-resolution graphics card (HRGC), each differing from the previous model’s; and the list goes on as new models were built through the years. Though Bill Gates had his hand deep in that development, he wasn’t the only person working on it. In parallel, Apple, TI, IBM, Kaypro and many others were tweaking their own versions of “Basic”, and though they all had common sources or roots, the many books of the period with side-by-side listings - games, say, for the TRS-80, Apple, Commodore, TI and so on - show each variant tailored to the machine it was aimed at.
How many versions or flavours of BASIC were there? I don’t know. Load up Excel and start compiling, but you will be at it for a long time, and I can guarantee that each time you publish the list, 20 people will suggest additions, corrections and deletions based on numerous factors, a great many concerning the origin of the source code (often shared among manufacturers, or licensed).

Actually, I was surprised by how many did this when I looked. As an Atarian I always thought BCD was something unique (and dumb) in that dialect, but as I read more I found many examples. Add TI to the list, for instance, and almost all minicomputer versions.

Having thought about this in some depth, I still haven’t reached a conclusion on whether BCD is a good idea or not. Here are some of the tradeoffs:

BCD upsides:

  • fewer weird rounding errors (there’s a name for this - hazing? I can’t remember). It doesn’t eliminate rounding due to loss of precision or similar, but people expect that. They don’t expect a simple number to come back as 0.99998364, for instance.

  • much easier conversion to and from ASCII - basically a couple of shifts and there’s your character (see the sketch after these lists).

  • easier to mentally convert when examining the tokenized program, although I don’t think there’s much value to that.

BCD downsides:

  • somewhat less dense: a 4-byte mantissa plus a 1-byte exponent gets you 8 digits of accuracy in BCD, whereas the same bytes in binary (2^32 ≈ 4.3 billion values) might get you 9 or more, depending on the format.

  • less range; BCD gets to 99 million before it rolls off to the exponent, whereas in binary it’s 4.3 billion. Again, not a major issue as most numbers are small.

  • somewhat slower, although not as much as I originally thought. Addition and subtraction really don’t change (a cycle or two at the end), but multiplication is slightly slower and division takes about 20 to 25 more cycles.

    • the last two are due to the need to shift by four bits to move between decimal digits, instead of a single-bit shift. This could have been helped by a nibble-shift operation; I find it interesting that none of the mainline processors had such an opcode, so maybe I’m missing something here.
  • in my experiments, 90% of all numerical constants found in a program fit in a 16-bit int. Some of them, especially in micros where there are lots of PEEKs and POKEs to high memory addresses, are longer than 4 decimal digits. This means that the vast majority of constants in a program can be stored in a 3-byte tokenized form (token + 16-bit binary value), but doing the same in BCD would require an extra byte to store the most significant digit.

    • that said, those same experiments demonstrated that a third of all numbers are 1 or 0, which means that if you just use a separate token for each of those, you can represent a third of all the numbers in a single byte. This saves so much memory that the 16-bit storage becomes superfluous. There’s still a potential performance argument for it, because…
  • since line numbers are always stored as 16-bit binary (although some minis used BCD here, cf. Wang), you either want to store branch targets as a separate “line number constant” type in binary, or you have to convert back and forth from BCD at runtime. Neither seems perfect. One of the major reasons for Atari’s terrible runtime speed was this issue - you could put any arbitrary expression after a GOTO (some others allowed this as well, cf. Tiny BASIC), which it then evaluated, converted to binary, and then searched for. Lots of overhead when 99.9% of the time it was a constant.
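Returning to the ASCII-conversion upside: here’s a minimal C sketch, assuming packed BCD with two digits per byte:

    /* Each nibble of a packed-BCD byte is already a decimal digit,
       so conversion to ASCII is a shift, a mask and an add. */
    void bcd_byte_to_chars(unsigned char bcd, char out[2]) {
        out[0] = '0' + (bcd >> 4);    /* high nibble -> first digit  */
        out[1] = '0' + (bcd & 0x0F);  /* low nibble  -> second digit */
    }

A binary mantissa, by contrast, needs a loop of divide-by-ten steps to extract each digit, which is where its conversion cost comes from.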

Ultimately I think the performance issues of BCD are not something you can assess statically - I/O is the slow part of most programs, so if the conversion from binary to ASCII is slower by a factor of X and you perform math fewer than X times per input/output, then BCD can be faster overall in spite of its slower arithmetic. This depends entirely on the program; Super Star Trek does a bit of math but a whole lot of I/O, so I suspect it might be faster on a BCD machine. The line number issue is a concern, but you could store line numbers as BCD and use an extra byte per line (or alternatively limit line numbers to 9999), have a separate token type for them, or use some sort of caching as I proposed above, which would make it a non-issue, as the conversion would only happen once at edit time.

I should also mention the GRASS language here. It was a very interesting dialect of BASIC intended for animation. It was used to make the graphics for the rebel briefing scene near the end of Star Wars. It had matrix commands, timers, all sorts of cool concepts.

3 Likes

Decimal arithmetic is part of the ANSI Full BASIC standard: in fact, it appears that OPTION ARITHMETIC DECIMAL is expected to be the default mode. There was also a requirement for fixed-point decimal numbers, which I’ve only ever seen on embedded systems far, far from the ANSI standard.

Full BASIC has got some real head-scratchers for features, such as scheduling statements for concurrency. Even the (non-retro) Decimal BASIC, which tries to implement as much of the JIS Full BASIC standard as possible, doesn’t go near that one, but it does implement the standard graphics module (which is probably unlike any graphics command set we’re familiar with).

The Basic on Luxor’s ABC machines supports a global mode, DOUBLE or SINGLE, and also supports fixed-point arithmetic on numbers held in strings, up to 125 digits. So it offers two binary formats and one decimal!

As there’s a performance penalty for multiplication, I’d expect log and trig functions to be especially slow in a decimal Basic: we could perhaps say that decimal suits commercial applications, and binary suits scientific ones. Benchmarks in popular magazines tended toward the scientific.

But what if it was slow? BASIC or FOCAL was the four-function calculator of the time, all running on a 10 cps TTY. With time-sharing you got a whole 1/100 of a second of your very own, as BASIC was meant to be run.
(Was the IBM 360 the most common host for BASIC before the Microsoft era?)
Ben.

GRASS was not Basic-compatible, but its baby brother ZGRASS (also known as ZBASIC or SuperBASIC) was. The “Z” was because they got it to run on the Z80-based Bally Home Library Computer. It was an amazing achievement for 1978. When that failed to become a commercial success, they built a dedicated machine for the language using the same chipset.

It was sufficiently Basic-compatible that you could type in a listing from a magazine and it would work, but it differed from Basic in many regards.

2 Likes