You can also do the same thing with Windows if you can find a Windows XP or 98 install (depending on which version you need).
Your post answers that question. I was using 5 terms in the series, so even with the ‘best’ precision on a real Z80 computer, -2PI to 2PI was the range where values were reasonable.
I will try some more playing with more terms (it’s just a constant in the program) and see what I get.
Meanwhile I continue to play with PL/I on the Z80. It’s amazing what decent compilers I can get for CP/M 2.2 with minimal effort, compared to the digging I’ve had to do to get older compilers for ‘modern’ machines. As it is, I can’t seem to find a working PL/I compiler (there is one, but all the zips are corrupted).
Pretty impressive though.
And huh … I never realized the original TRS-80 font was only 6 pixels wide (5-pixel-wide characters with an extra blank column). I’ve never actually seen a Model I - just lots of Model IIIs in schools and Radio Shacks. The Model III had a different font, and I think each character occupied a space that was 8x12 pixels on them (so the pixel width would be 8 x 64, not 6 x 64).
The Windows version does not seem to support Windows Me, which sadly is pretty much the only old Windows computer I have that’s still functional.
I’m now trying to figure out Star Commander:
https://sta.c64.org/scdoc.html
It looks like I’ll be able to connect a 1541 drive to both the PC and the C128 at the same time, issuing POKE 56576, 199 on the C128 to make it “let go” of the common serial bus.
I particularly enjoyed the discussion of the baroque character generation in the original hardware.
The following is from the PDF document describing the clone:
[T]his clone is a functional replica of the original computer in the traditional hardware sense. It’s not an FPGA port or an emulator running on a Raspberry Pi and nor is it a part-for-part duplication of the original circuitry, but a complete ground-up re-design using contemporary discrete CMOS logic and memory devices, with some additional features thrown in for good measure. At the time of writing every component used in this project is a current-production part.
About the technology
It struck me that, at least in theory, organ pipes should generate quite primitive sound waves. If so, how come a church organ doesn’t sound like a chip tune, which is also built up from simple waveforms? Well, actually it will, if you remove the church. And if you connect a Commodore 64 home computer to a loudspeaker in a large hall, it will sound like an organ.
So the music on this album is not performed on a pipe organ. Instead, what you hear is the sound of one or two SID chips (controlled by a Commodore 64), enhanced by a convolution reverb to simulate church acoustics.
[…]
This album is an attempt to demonstrate that classical music can indeed be performed by a computer. But the amount of work that goes into programming the computer will never be less than the work that a traditional performer would put into studying the same piece of music.
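The idea quoted above, that a primitive waveform becomes an "organ" once you add room acoustics, is easy to sketch numerically. The following is a minimal plain-Python illustration (all names and parameters are mine, not from the album): a square wave, the kind of simple waveform a SID chip produces, convolved with a synthetic exponentially decaying noise burst standing in for a measured church impulse response.

```python
# Sketch: square wave + convolution reverb (synthetic impulse response).
import math
import random

RATE = 8000  # sample rate in Hz, kept low so the lists stay small

def square_wave(freq, seconds):
    # Bare two-level waveform, like a SID pulse channel at 50% duty cycle.
    n = int(RATE * seconds)
    period = RATE / freq
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(n)]

def church_impulse_response(seconds=0.25, decay=6.0):
    # Crude stand-in for a measured church IR: exponentially decaying noise.
    random.seed(1)
    n = int(RATE * seconds)
    return [random.uniform(-1, 1) * math.exp(-decay * i / n) for i in range(n)]

def convolve(x, h):
    # Direct O(len(x)*len(h)) convolution; real convolution reverbs use
    # FFTs for speed, but the arithmetic is the same.
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

dry = square_wave(220, 0.05)          # raw "chip tune" tone
wet = convolve(dry, church_impulse_response())  # the same tone "in a church"
```

The dry signal is just two levels; the wet one smears every transition over the reverb tail, which is exactly what the large hall does for the organ pipes.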
(The XEM1541 has a couple of jumpers to switch it between XE1541 and XM1541 mode.)
Well … I do have an old (linux) laptop with a parallel port that fits the adapter. I even have an old Windows Me desktop if that would be easier. And it costs me zero dollars to give this thing a try. So that’s what I’m going to try first. I’m not sure, but I think I need to use the following resource:
https://spiro.trikaliotis.net/opencbm
It looks rather daunting, though. If anyone has any experience getting XM1541 (or XE1541) cable working, I’d appreciate advice on it…
Regarding precision in polynomials and practical consequences, see this graph of the gravity calculations of Spacewar 3.1, using fixed point math, and the revised version of Spacewar 4, using on-the-fly conversions to floating point and back again:
Mind the devil’s staircase graph of version 3.1 for small values at the left side, which is due to the limited precision of fixed-point math.
(The formula is f(x,y) = ( √((x >> 3)^2 + (y >> 3)^2) × ((x >> 3)^2 + (y >> 3)^2) ) >> 3, here with x = y, meaning, with all the squares and shifts applied, the remaining precision is extremely limited, especially for small numbers, where it’s about 3 bits of resolution only with fixed-point 18-bit values.)
I think this is a nice illustration of the problems arising from limited precision and how results typically deviate from the ideal function graph. In the case of sine, we may expect a mirrored image, where deviation increases with the size of the input values.
(The graph is from https://www.masswerk.at/spacewar/inside/insidespacewar-pt6-gravity.html, where there’s also an extensive excursus on the particular implementation of binary arithmetic, including integer square roots and sine/cosine.)
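For the curious, the staircase is easy to reproduce. The following Python sketch is my own transcription of the formula quoted above (using math.isqrt for the integer square root, as the original code computed integer roots): whole runs of small inputs collapse onto a single output value because the shifts discard the low bits first.

```python
# Sketch of the Spacewar 3.1 fixed-point gravity term, transcribed from
# f(x,y) = ( sqrt((x>>3)^2 + (y>>3)^2) * ((x>>3)^2 + (y>>3)^2) ) >> 3.
import math

def f(x, y):
    x3, y3 = x >> 3, y >> 3      # pre-shift throws away 3 low bits
    r2 = x3 * x3 + y3 * y3
    return (math.isqrt(r2) * r2) >> 3  # integer root, then another shift

# The "devil's staircase" for x = y: long flat runs, then jumps.
steps = [f(v, v) for v in range(0, 40)]
```

Evaluating along x = y shows the effect immediately: every input below 8 maps to zero, and inputs 8 through 15 still map to zero, so the graph is flat in long steps for small values, just as in the linked plot.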
The KC85 BASIC implementation should be the same as the one for the TRS-80 Model 100.
My observation on higher order terms was based on 18-bit fixed point math. (Obviously, it doesn’t make sense for floating point arithmetic.)
It may be interesting to think of the Taylor series in terms of graphs: it’s a well-known fact that sin x approaches x for very small numbers. What could we do to extend the range where the straight-line graph closely matches the function? Make it a (shallow) curve. This is better, but our graph soon starts to bend away from the actual function graph. Let’s add a correcting term. This is much better, but now we overshoot towards the other side. Let’s add another term, and another term. With each term added, we extend the range of the close fit a bit, by bending the graph slightly towards the opposing curvature. Ideally, repeat until adding another term doesn’t contribute to the result in a given precision.
Notably, any of the higher order terms will approach zero for very small numbers, for all practical purposes, thus providing correction mostly for bigger values and extending the fitting range.
This may be absolute nonsense in mathematical terms, but this is how I comprehend it.
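That intuition can be checked numerically. The sketch below (my own, in Python; the 1e-3 tolerance is an arbitrary choice) measures, for each number of terms, how far out the partial sum stays close to the true sine. Each added term visibly extends the fitting range.

```python
# How far does a k-term sine series stay within 1e-3 of math.sin?
import math

def partial_sin(x, k):
    # First k terms of x - x^3/3! + x^5/5! - ...
    t = s = x
    for n in range(1, k):
        t = -t * x * x / ((2 * n) * (2 * n + 1))
        s += t
    return s

def fit_range(k, tol=1e-3, step=0.01):
    # Walk outward from 0 until the partial sum drifts past the tolerance.
    x = 0.0
    while abs(partial_sin(x + step, k) - math.sin(x + step)) < tol:
        x += step
    return x

ranges = [fit_range(k) for k in (1, 2, 3, 4, 5)]
```

With one term (sin x ≈ x) the fit only survives out to roughly 0.18; each extra term pushes the boundary further, which is exactly the "bend it back towards the function" picture described above.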
| angle | terms (4-byte floats) | sig. fig. | terms (5-byte floats) | sig. fig. | terms (8-byte floats) | sig. fig. | largest term |
|---|---|---|---|---|---|---|---|
| 1.6 | 8 | 6.9 | 9 | 9.6 | 12 | exact | 1.6 |
| 3 | 11 | 7.2 | 12 | 9.5 | 15 | 16.3 | 4.5 |
| 3.2 | 11 | 7.6 | 13 | 9.7 | 16 | 15.1 | 5.5 |
| 4 | 12 | 6.9 | 13 | 9.6 | 17 | 15.2 | 10.7 |
| 5 | 13 | 6.1 | 15 | 8.7 | 19 | 14.7 | 26 |
| 6 | 15 | 5.6 | 17 | 8.5 | 21 | 16 | 64.8 |
| 6.1 | 15 | 5.6 | 17 | 8.0 | 21 | 14.9 | 70.4 |
| 6.2 | 16 | 6 | 18 | 8.1 | 22 | 14.3 | 76.3 |
| 6.3 | 16 | 5.9 | 18 | 8.5 | 22 | 13.9 | 82.7 |
| 7 | 16 | 5.2 | 18 | 7.7 | 22 | 14.7 | 163.4 |
| 9 | 19 | 4.4 | 21 | 6.8 | 26 | 12.9 | 1067 |
| 11 | 22 | 3.3 | 24 | 5.5 | 29 | 12.7 | 7147 |
| 22 | 36 | -1 | 41 | 1.5 | 47 | 7.3 | 3.00E+08 |
I revised my program a bit, having realised a couple of things.
Anyhow, I found a KC85 emulator with four-byte floats, like the usual single precision, and that extends the experiment. (It supports drag-and-drop to load a program, which is very handy.)
As you see(!) it takes 8 terms to compute an accurate sine up to pi/2, but 11 terms to compute up to pi, and 16 terms to compute up to 2pi. Even then, the accuracy drops from about 7 digits to less than 6 digits of precision. You can see that the largest term when computing sin(6) is over 64, so that will knock 6 bits off the accuracy of the fractional part.
Here’s the guts of the program as revised:
100 READ A
110 GOSUB 130
120 GOTO 100
130 T=A
140 M=ABS(T)
150 S=T
160 N=2
170 T=-T*A*A/N/(N+1)
180 IF ABS(T)>M THEN M=ABS(T)
190 N=N+2
200 P=S
210 S=S+T
220 IF S<>P GOTO 170
230 D=0
240 IF S=SIN(A) GOTO 260
250 D=-LN(ABS(S-SIN(A)))/LN(10)
260 PRINT A;" "; : REM ANGLE
270 PRINT N/2;" "; : REM NUMBER OF TERMS
280 PRINT D;" "; : REM SIGNIFICANT DIGITS
290 PRINT M;" "; : REM LARGEST TERM
300 PRINT
310 RETURN
320 DATA 1.6,3,3.2,4,5,6
330 DATA 6.1,6.2,6.3,7,9,11,22
After making wooden cases for some peripherals related to retro computers, I started to wonder if I could make the Commodore 64 itself from wood. Encouraged by my woodworking experience from years ago and my history with wooden model aircraft, I saw this idea as a new challenge and decided to do it.
To be honest, I wasn’t too ambitious, but I really liked the result. Let me go through the production stages without further ado. I took more photos this time, and I will try to convey the process with short explanations under them. I shot video while I was working, but video editing is a long task, so I will compile the shots and share them later. Let’s start.
Of course, I’ve just realised that if you knew the range of input values, you could fix the number of terms to sum. And so, if you picked up a recipe which sums a certain number of terms and applied it over an extended range, you’d see poor accuracy.
Small anecdote, regarding the number of terms used in the series: The original Spacewar code for the PDP-1 uses a four-term series, while a later version for a much faster PDP with the same word length uses just a three-term series. Apparently, it was found that more algorithmic precision wasn’t worth it, provided you reduced the argument to a small value. (Since sin x approaches x for small numbers and terms like x^9/9! approach zero for small numbers in low precision, few algorithms tend to extend beyond four terms.)
What’s reported here is that accuracy of sin(x) computed with Taylor’s series (the usual power series, I suppose: odd powers of the angle divided by the odd factorials) falls off beyond pi.
That observation goes against my understanding, which is that these power series converge for all arguments. I suspected that what’s happening is that larger arguments cause some of the terms to get quite large, which in any given precision of arithmetic causes some loss of the small parts of those terms. And yet, as alternate terms are mostly cancelling in these cases, the small parts are going to wind up quite important in the final result.
I tested a small program using RISC OS PICO on a Raspberry Pi - helpfully, there are two versions of BBC Basic on this platform, one with 5 byte floats and the other with 8 byte floats. I expected the 8 byte version to do somewhat better for problematic arguments, which would indicate that it’s not the power series that’s unreliable, it’s just the limitations of the usual floating point arithmetic. You need more precision for the calculations than you wanted in the results, if you’re going to evaluate the series on large arguments.
(The usual practice is to reduce the arguments to a nice small number, by subtracting multiples of 2pi, then multiples of pi/2 with adjustments to the argument, and then perhaps halving the argument repeatedly and making use of the half-angle formula to reconstruct the final result. So, never evaluate the power series on large arguments!)
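As a rough double-precision illustration of that advice (my own sketch in Python, not a transcription of the posted BASIC), here is the raw series evaluated at x = 22 versus the same series after reducing the argument into [-pi, pi] by subtracting a multiple of 2*pi:

```python
# Raw Taylor series vs. argument reduction, in double precision.
import math

def taylor_sin(x):
    # Sum x - x^3/3! + x^5/5! - ... until adding a term changes nothing.
    t = s = x
    n = 2
    while True:
        t = -t * x * x / (n * (n + 1))
        p, s = s, s + t
        n += 2
        if s == p:
            return s

def reduced_sin(x):
    # Subtract the nearest multiple of 2*pi first, so |x| <= pi.
    x -= 2 * math.pi * round(x / (2 * math.pi))
    return taylor_sin(x)

naive_err = abs(taylor_sin(22) - math.sin(22))
reduced_err = abs(reduced_sin(22) - math.sin(22))
```

At x = 22 the largest term in the raw sum is around 3e8, so roughly 8 decimal digits of each nearby term are lost before the alternating terms cancel; reducing 22 to about -3.13 first keeps every intermediate term small and the full double precision survives.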
One thing I wanted to do was see how large the largest term would be.
Here’s my code. You’ll see I’ve used meaningful variable names throughout, and lots of helpful comments:
100 REM EXPLORE POWER SERIES FOR SIN
110 REM TEST CASES SIN OF 7, 11, AND 22
120 REM TRY 5 AND 8 BYTE FLOATS
130 REM USING 5 BYTE FLOATS:
140 REM SIN(6.2831853), 1 SIG FIG
150 REM SIN(7), 7 SIG FIG
160 REM SIN(11), 4 SIG FIG
170 REM SIN(22), 0 SIG FIG
180 REM USING 8 BYTE FLOATS:
190 REM SIN(6.2831853), 6 SIG FIG
200 REM SIN(7), 14 SIG FIG
210 REM SIN(11), 12 SIG FIG
220 REM SIN(22), 6 SIG FIG
230 A=6.2831853
240 PRINT "CALCULATING SIN OF ";A
250 T=A
260 M=ABS(T)
270 A=A*A
280 S=T
290 F=-1
300 N=2
310 REPEAT
320 T=T*A/N/(N+1)
330 N=N+2
340 PRINT T
350 IF ABS(T)>M THEN M=ABS(T)
360 P=S
370 S=S+F*T
380 F=-F
390 UNTIL S=P
400 PRINT "SIN IS ";S
410 PRINT "MAX TERM WAS ";M
It’s interesting that the blogger found that single vs double precision wasn’t making a difference - that’s something I don’t yet understand.
For my particular use case, 1541 emulation is not important. It would mainly be useful for dealing with copy-protection schemes in old games, or for GEOS. But I have an NTSC C128, which shuts me out of most old games anyway.
In contrast, the new C64 software I’m interested in (including my own) doesn’t have such copy protection schemes.
As for GEOS - I do actually have a physical copy of GEOS128, but it’s not something I have any interest in using again. I dunno … it does not have the same retrocomputing charm, to me, as the usual C64/C128 interface.
It’s a pricey but awesome FPGA device, which apart from cycle accurate drive emulation allows multiple kernel ROMs, multiple cartridge images, additional SID emulation, REU emulation, tape image access, network access and more. It really is the ultimate solution.
The next best in terms of emulation accuracy (and cost) is probably the Pi1541.
The SD2IEC is the cheapest of the lot and gets the job done for the most part. (The main issue is with broad fastloader support.) There are many variants of it on eBay, but I recommend TFW8B.
You may also want to check out the Commodore 8 Bit Buyer’s Guide.
I only saw a corner of the machine - I must have looked away for the full reveal:
It can run Fuzix, which has caught the interest of Oscar Vermeulen (of PiDP-8 and PiDP-11 fame):
Quote:
In a nutshell: 18MHz Z180 CPU with on-board MMU, 512K RAM, 512K ROM, an SD card for storage and serial ports for your terminal … Absolute top-of-the-line CP/M specifications that you would have paid a fortune for back in the day, but now shrunk down to $49 and the size of four credit cards.
But here’s where it starts to get sexy: multi-user, multi-tasking with Fuzix. Also, built in to the ROM are CP/M, the Z-System, Basic and Forth. A very fast RAM disk and a ROM disk with all essentials come as standard.
And while watching it, the thing that kept running through my head was:
“Remember, it’s only 5 MHz.”
I mean, sure, you can watch the spreadsheet recalculate, but, still, 5 MHz. Pretty impressive.
The cheap dirty WAV -> Datasette option may be a moot point, since I’m not sure where it is. I don’t even know if it works, since I never used it after I got a floppy drive. (Here in the USA, at least where I was, the entire Commodore scene was C64 floppy disk piracy and BBS’s - no one else I knew ever had a tape drive.)
So I’ll invest in some sort of SD2IEC device or alternative … I just don’t know what’s out there and whether there’s a more suitable option. Something that could connect directly to a network/computer/laptop would be even nicer (I think) than physically dealing with SD cards. But I don’t know what’s out there.
The tape cassette adapter might well work, and if it does it’s a nice solution. But beware: what works for audio is not always good enough for data, especially when dealing with pulse-width formats like Commodore’s and Sinclair’s. If you can try it without too much outlay of money or time it could be worthwhile.
(Disclaimer: I haven’t tried it myself, yet.)
It’s an SD2IEC device which connects to the cassette port for power and the IEC port acting like floppy drive 8. The cable looks too short for the C128D model, but I have a normal wedge C128 (NTSC) so that’s no problem.
At $43.90, it’s a bit pricey for an item which I’m not 100% sure is the most suitable option.
I’m only going to develop single-loader .PRG programs, and I might be able to find my old Datasette and my car cassette adapter (remember those old things, that you connected to a CD or other audio device, which let you play it through a car cassette player?). So maybe I could figure out some sort of PRG-to-WAV converter, and play the WAV on my Linux laptop while tape-loading on the C128. Super slow, of course, but dirt cheap. Since I’m doing my development and testing on the VICE emulator, I’ll only occasionally want to run things on my real C128.
Anyway, I just finished setting up a fresh Debian laptop for C64 development. I set up the VICE emulator for testing and the DASM assembler, as well as my first “Hello World” test program. Got it set up at a base address of 2049, with a simple BASIC stub so it can load/run with or without “,1”:
2020 SYS2061
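For anyone curious what such a stub actually looks like on disk, here is a hypothetical little Python helper (the function name and layout comments are mine) that emits the corresponding 14-byte .PRG: a two-byte load address, one tokenized BASIC line for 2020 SYS2061, and the end-of-program link, with the machine code starting immediately after at 2061:

```python
# Build a minimal C64 .PRG: load address + "2020 SYS2061" BASIC stub.
def basic_stub(line_no=2020, sys_addr=2061, load=0x0801):
    # Line body: line number (lo/hi), SYS token ($9E), address digits,
    # and the end-of-line zero byte.
    body = bytes([line_no & 0xFF, line_no >> 8, 0x9E])
    body += str(sys_addr).encode("ascii") + b"\x00"
    next_line = load + 2 + len(body)        # link pointer to next line
    prg = bytes([load & 0xFF, load >> 8])   # .PRG load address (2049)
    prg += bytes([next_line & 0xFF, next_line >> 8]) + body
    prg += b"\x00\x00"                      # null link = end of program
    return prg

stub = basic_stub()
```

The arithmetic checks out: 2049 plus the 12 bytes of stored BASIC text lands the first machine-code byte at 2061, which is exactly where the SYS points.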
I’ll be programming routines for NUFLI graphics mode, and using mouse input. I know that most C64 users don’t have C64 mice, but I really want to use mouse control. I’ll program options for joystick and mouse, though.
Thanks for any advice!
I can’t find any statement connecting the Workslate with APL - do you have a reference or a link?
Actually, I can’t remember where I picked that up. (I guess there were once more sources available on the Workslate than can be found now.) While it would make some sense to implement a machine which handles everything as a spreadsheet in APL, I can’t verify it. Contemporary reviews just talk about a proprietary OS [1]. So, please read the above comment with a suitable amount of salt. (I’m going to edit a suitable remark into the original comment.)
[1] Review in InfoWorld, Apr. 1984: https://books.google.at/books?id=iy4EAAAAMBAJ&lpg=PA60&ots=QJ7K9nNVeC&dq=convergent%20workslate%20apl&pg=PA60#v=onepage&q&f=false
That said, it’s still a beautiful and very peculiar little machine. For a bit of context and detail, have a look at this series of restoration videos: https://www.youtube.com/watch?v=Wr-f9BsC730
Cheers,
Andy
I did find this page setting out not one, not two, but three spreadsheets implemented in K, one of the successors of APL:
http://nsl.com/papers/spreadsheet.htm
After leaving university, I joined an Avionics company where I banged on about Transputers so much that eventually I was involved in the design and development of a multiple Transputer based Avionics product which was very successful.
(Emphasis added!)
Take for example the Convergent Technologies Workslate, where everything was a spreadsheet implemented in APL:
https://www.old-computers.com/museum/computer.asp?st=1&c=891
[Edit: Please take the previous remark regarding the Convergent Workslate with a grain of salt. While I have definitely filed the Workslate under “APL, but inaccessible” in the humble vaults of my brain, I am unable to find any documents verifying this or giving any hints at all regarding the internal software of that machine. Accordingly, this is better to be regarded as an erroneous claim.]
or the Ampere WS-1, a 1985 laptop running APL-68000 by Nippon-Shingo (production design by Kumeo Tamura, who also designed the Datsun 240z), which may have been the last of the portable APL machines:
https://www.old-computers.com/museum/computer.asp?st=1&c=66