ASCII 65 & ½ Ben's programming version

Can you program new languages (1970s) with ASCII-67?
ASCII 1965 added the lower-case letters, but left the glyphs the same.
ASCII 1967 changed ^ and ~ _ to what they are today, and made it a pseudo-international
code. You got a token £ or ø and a few accent marks to overstrike AEIU and N.
Programmers are left with FORTRAN V punched-card symbols.
Burroughs had a 64-character set and used ALGOL quite nicely with it.
ASCII never planned on video display terminals as an output device,
just TTYs, mag tape, and paper tape.
Overstruck characters can be done on a video display, but they look horrid,
as an experimental VDU proved.
I feel, after looking at the whole ASCII 7-bit mess and the number of code pages
needed just for Latin characters, that accent marks should be in the control section of the ASCII code.
The same could be said for the color-ribbon change from black to red.
Codes like WRU and ARU-WHO belong with the TARDIS. :slight_smile:
With USA ASCII-67 you can program in C, but I have no idea how TTY-33s ran under Unix with ASCII-63. Since any other national version of ASCII may remove the [] characters,
it is worthless for programming. Here is my version of ASCII for programming rather than
international use.

Algol-style symbols, a subset of the Boolean operations but complete:
and &, or ^, xor $ as bitwise operations; not and compare as logical operations.
← is assignment.

Ben’s version: ← is now a dash, or unary minus.
= is assignment or compare.

Parsing is simple, as all operators are one character in size.
{} are shift right and left, unused in Algol.
A simple programming language (recursive descent) is being planned;
once I figure out how to print the fonts under Windows 10, I
will start writing with this character set.
Since I have no ? will be used in strings *?

C and Pascal seem to be lucky in that ASCII-67 has {}'s. Without them
the languages may not have developed quite the way they did.

ASCII-67 almost has the symbols needed for programming,
but the massive number of control characters kept it from having a larger international alphabet
(Æ, for example) and a few more basic programming symbols.
New programming languages (1970s) are possible to write with ASCII-67, but the loss
of some programming symbols makes parsing harder than it need be.
Footnotes:
Now ASCII-63 (6-bit code) could have added the extra symbols; perhaps the mechanical hardware
was not ready for a 7-bit code until the late 1960s.

ITS was largely created assuming Model 33s and ASCII-63. Later Datapoint 3300 terminals were added, still using the '63 character set. The custom-made graphical terminals added around 1974 made the bold leap to ASCII-67. Now text using ← and ↑ would display as _ and ^. I.e. the left arrow was the left-shift operator in the assembler, but it was now an underscore instead. No big deal; people just knew to substitute the characters mentally, or just accept the new look.

On the other coast, WAITS was also created in the ASCII-63 era. Unlike MIT, the Stanford people stuck with ASCII-63 throughout its history until 1991, and also added Greek and math symbols to the low range otherwise used for control characters: this was called Stanford extended ASCII. (MIT also adopted Stanford ASCII, but modified to look more like ASCII-67.) When new terminal devices were added to the system, they were modified to display Stanford extended ASCII. (Not sure about Datamedia terminals.)

If not IBM (Selectric), it is Ma Bell, pushing their monopoly along with 1960s tech and the ASR-33.
Were there any programming languages that used ← as the assignment operator? That is the only real
loss I see going to ASCII-67. At the moment I am having problems printing my version of ASCII-67,
so I plan to translate the 7-bit code to an 8-bit code having similar glyphs and print that.

I have a video display (640x350) and a PS/2 keyboard being developed on an FPGA trainer board,
along with a simple 32-bit computer design from around 1977 to 1979. Right now I need to finalize the 7-bit
ROM fonts and write the software. Was ASCII the only 7-bit code around?
C has gotten too complex, so I am looking to write a simple Algol-ish language,
for cross compilation, in a C subset. One of the outputs of the compiler will be
a translation to C, so I can do most of the testing on the Windows machine.

The CPU is 90% done; just IRQs and a simple MMU need to be added.
Now the harder part: better software, now that I have finished the trial-and-error
design of the hardware.


The compiler there works across platforms: compile to Cintcode and write an interpreter for that, then/or compile to SIAL and translate that to your native code?

Development on e.g. a Linux platform is easy too.


I got a Windows printer. :frowning: I have the BCPL book, but that is for a 16-bit machine;
I have 32 bits (more reading and downloading). More and more I discover there are no bootstrappable compilers for a 32-bit CPU other than BCPL. Small C is 16 bits and the same goes for most Pascals.
Have to play with Windows fonts for a bit. I may find an interesting retro Unicode page.

While I’ve never done it, I imagine re-targeting small C or similar for a 32-bit system vs. 16 isn’t that hard… It wasn’t hard for me to make the 32-bit BCPL compiler run on my hybrid 8/16-bit hardware either - I just had to write the Cintcode interpreter (plus run-time libraries, operating system, etc., but that is all in BCPL). No MMU either.

There are still 32-bit C compilers out there - gcc, while not small, can produce code for a 32-bit system - I recently used it to produce RISC-V code for my RISC-V system (which is currently an emulator written in BCPL), and there are millions of 32-bit ARM systems out there still being supported.

I’m not sure C itself has gotten too complex - what I have seen since I first used it is just the sheer bulk of library code and templates and abstractions I’ve seen people add onto it. The big change for me was moving from K&R to ANSI. There are digraphs and trigraphs if you want in C, but personally I feel we’re better off without them.

An issue in the retro world might be the question of “self hosting” - at least that’s an issue/question for me and it’s one reason I went with BCPL - I wanted a self-hosting system because why not. My retro hardware is a 65c816 CPU with 512KB of RAM and I can run the compiler on that system.

(I can even compile the compiler with it, but it takes a very long time - the compiler has grown a lot in the years since it first started)

And there’s always BASIC :wink:

(And I’m not just having a giggle here - The first version of what’s now RISC-OS - Arthur - was written in a combination of BBC Basic and ARM assembly language)

Can you write a compiler for a new (ish) language on your own system? I’m sure you can, as I’m sure I could too, but for me it would boil down to time + effort as well as expectations. I looked long and hard for a Pascal or C system I could bludgeon to my needs on the '816 and didn’t find anything suitable. PL/M or PL/1 - there are some people playing with them today, but a couple of years back I couldn’t find much about them, or compiler sources. There are free Pascal compilers though. Also Imp77, which might be a candidate? It should be capable of self-hosting on a 32-bit system with 512KB of core… (Or it was back in the late 70’s on an Interdata 7/32 system I used then…)

Sounds like an interesting FPGA project though - do keep us posted!



Smalltalk used ← (at the ASCII 63/65 _ code point) for assignment.

Smalltalk-72, -74, -76 and -78 used their own internal character sets. As part of the effort to make Smalltalk-80 compatible with the rest of the world this was switched to ASCII, but as Xerox was among those still using ASCII 1963 and previous Smalltalks had always used the left arrow for assignment they did use code point 16r5F (Smalltalk notation for hexadecimal) for that.

When Digitalk released their Methods system with a text based GUI for the IBM PC they used “:=” for assignment even though that machine did have a left arrow in its character set. Their goal was to make it easy for Pascal programmers to move over.

Over the years the Smalltalks that used 16r5F for assignment had problems dealing with external systems that used underscores in names, and various workarounds and user preferences were created to allow “:=” to be used instead while dealing with backwards compatibility.

I remember BYTE having the Smalltalk issue.

It is amazing how much computer design got lost with the advent
of the IBM PC and the Mac. We still have not got the computer
replacing paper: replacing the TV, yes; paper, no.
(Tablets come close as a display device, but they want to play phone.)

I have gone back to the MCM6574 font, because it was a standard display
ROM. Rather than saying modern compilers are too complex, they just
are too big to self-compile for a DOS-sized machine
(256 KB, 80x24 display, 16-bit CPU, 5 MB hard drive).


I think, this is a pretty good choice.
Extra credits for the square lozenge! :slight_smile:

Regarding up arrow vs. caret and left arrow vs. underscore: The underscore has proven to be quite useful, especially on systems and with languages with support for long names/identifiers. I should really miss it. The up arrow is so suggestive of “power-of” that it would probably not lend itself easily and somewhat intuitively to another function, like XOR. (We could still use another character for this, like the lozenge/diamond, but this would defeat the original purpose. I’d say that ship has sailed.)

I like the up arrow or caret for floating-point numbers: 1.8 ^ -6. When character stropping for keywords
was around, one could have spaces in variables or numbers, like ‘is dog wet’ or 1 234 .37.
I also like the use of ^ for a record pointer as in Pascal. @ could also be used rather than e for exponents.

One of my favorite things in this issue is a bit in Sol Libes’s article about the up-and-coming 5.25" Winchester disks that may be as large as 13 megabytes.