Slightly Off-Topic: Why Were Color TVs Tiring?

Something I recall from when the change from b/w to color TV happened: color TVs were often deemed tiring, even exhausting. Viewers might even react with headaches, nausea, or other symptoms. (E.g., I recall that when my family still had a b/w set, my grandparents got a color TV. It was nice, but also exhausting to watch. After a film, you were certainly tired and not eager to watch another one.) You had to become accustomed to color TVs, and there was a transitional period. At the time, this was a commonly acknowledged fact.

But why was this so? Was it because of the shadow mask? (B/W CRTs didn’t need one and just had an even phosphor coating.) Was there more glare (maybe because of the mask)? Was it just a slightly increased instability of the picture? Was it the higher contrast produced by three electron guns? Or a slight misalignment of the guns?

And did this have any consequences for home computing? Was there a difference in habits between those who used a color TV as a monitor and those who used a b/w set or a monochrome monitor? Was there a difference in productivity when it came to writing your own programs?

I thought it had something to do with the refresh rate and the resolution.

My recollection of when I first came across a colour TV was that it was very bright and had a very large screen, compared to what was normal for me. Some people are relatively insensitive to flicker and some are more sensitive. I’m fairly sensitive and I live in a 50 Hz country, so that would be bad for me. It’s just possible that the phosphor decay time is shorter (there are, after all, three different phosphors involved) and that might have made flicker worse.

A shorter phosphor decay makes sense!

One other thing to keep in mind: color TVs were never quite as sharply focused as monochrome monitors, which is why you didn’t see a lot of 80-column screens on color TVs (I used Omnicom on the Atari 8-bit series for a bit, and it was suboptimal).

I was mostly monochrome in my early computing, but I now remember I bought a colour TV in the last week of my Philips employment, with a little discount, which had teletext and also a SCART input. That TV I later used with my Amiga, I think. So, at least I might have avoided quality problems from RF and composite. But even so, a TV isn’t built for such close viewing and as I recall the flicker could be pretty bad.

In 1970 I moved from Brazil (which was about to introduce color TVs, but hadn’t yet) to the US where color TVs were very common. I didn’t find the change tiring nor did anyone else in my family. Since my father worked with TVs for GE, he would often bring home a competitor’s model for us to watch for a couple of weeks. None of them caused any problems.

In 1975 I moved back to Brazil where color TVs were now the majority. Though in hindsight I feel that PAL/M was slightly fuzzier than the NTSC I had watched for 5 years, I didn’t really notice at the time, especially given all the noise analog transmissions used to have.

Perhaps it is just a matter of what you are used to? Young people often can’t stand to watch a whole movie in black and white so those rather nasty colorized versions are shown instead.

There’s quite a difference between 50Hz and 60Hz - I wonder if that’s a factor.

Yes, this seems to confirm that it was about 50 Hz and image stability / refresh rates. (I believe PAL/M is also 60 Hz?)

As I remember it, color monitors were much blurrier (at least, that was the common consensus), but this wasn’t so much an issue with moving images, for which TVs were optimized. (It also gave you those overdriven color edges when using them as a monitor for a home computer.)

Anecdotally, I recall some 30" color monitors where you couldn’t tell a dithered color from a solid one, and this was already in the mid-1990s. (On a Trinitron, of course, you could tell the difference. I guess this kind of blur was connected to the shadow mask and gun alignment.)

Yes, PAL/M and NTSC are nearly the same except for a tiny difference in the color subcarrier frequency (3.579545 MHz for NTSC and 3.575611 MHz for PAL/M) and the alternating lines. That made conversion between the systems very easy.

In the mid-1980s I was looking at different 14 inch CRTs in a Philips catalog, and they got very expensive if you wanted a fancy 0.2 mm dot pitch instead of the 0.3 mm used in TV sets. If your signal was limited by the bandwidth of the RF channel, it didn’t make sense to waste money on a more expensive tube, but for a monitor it made a huge difference; even so, many early models went with the cheap option. By the VGA era you had to go with a high-end CRT to be competitive.

Well, PAL has about 20% higher vertical resolution than NTSC (576 vs. 480 active lines). I also doubt that refresh rate would be an issue; that’s not usually related to clarity, except for things like transfers of 24 fps film to NTSC with 3:2 pulldown, which can produce interlacing artifacts.

The vertical resolution of the signal is based on the number of lines of output and whether the display is interlaced. For standard non-interlaced computer signals there’s generally no problem with the number of lines a particular resolution supports (usually around 400 for standard-definition 15.75 kHz signals). Of course, if you want more vertical resolution than that, you’ll have to scan the lines at a higher rate and you’ll need a monitor that supports that higher bandwidth: around 25 kHz and 31.5 kHz (VGA) were popular on ’80s and early ’90s computers.

The horizontal resolution of the image will be affected by the following. (Note that the resolution of the colour component may be lower than that of the brightness; e.g., you may be able to distinguish two adjacent points of different intensity, but not two of the same intensity but different colours.)

  • The bandwidth of the computer’s video output, i.e., how fast it can change the signal to different intensities.
  • Whether you’re connecting to the display via a baseband video signal (the composite video output) or by modulating that onto a broadband RF signal. The latter is limited to a 6 MHz channel bandwidth and may also suffer degradation due to the RF modulation and demodulation processes. (There’s a rough bandwidth-to-resolution sketch after this list.)
  • The bandwidth of the monitor’s video input circuitry (after demodulation, if that’s occurring).
  • The bandwidth of the circuitry driving the electron beam displaying the image on the picture tube.
  • The size of the shadow mask or aperture grille and the coloured phosphors on the face of the tube, when using a colour monitor.
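
To give a rough idea of what the bandwidth items above mean for resolvable detail, here’s a back-of-the-envelope sketch. The ~52.6 µs active line time and the ~4.2 MHz luma bandwidth of a broadcast channel are the usual NTSC figures, not measurements of any particular machine or set, and I’m using the common rule of thumb that one signal cycle yields one dark/light pair of vertical lines:

    # Rough estimate: how many alternating dark/light vertical lines a given
    # video bandwidth can carry across the active part of a scan line.
    ACTIVE_LINE_US = 52.6          # typical NTSC active line time, in microseconds

    def lines_across_width(bandwidth_mhz):
        # One full signal cycle = one dark/light pair = two "lines".
        return 2 * bandwidth_mhz * ACTIVE_LINE_US   # MHz * us = cycles per active line

    print(lines_across_width(4.2))   # ~442 lines: luma bandwidth inside a 6 MHz RF channel
    print(lines_across_width(6.0))   # ~631 lines: e.g. a wider baseband/monitor path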

When looking at the specs for a monitor, horizontal resolution is given in “TV lines,” which tells you how many lines can be resolved across a horizontal distance equal to the height of the screen (i.e., across 75% of the horizontal width of the screen on a 4:3 monitor).
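
For example, here’s that conversion applied to the specs that come up later in this thread (250, 450 and 600 TV lines), next to the 640 pixels of a typical 80-column mode:

    # "TV lines" are counted over a width equal to the screen height, so on a
    # 4:3 screen the figure across the full width is 4/3 as large.
    for tv_lines in (250, 450, 600):
        across_width = tv_lines * 4 / 3
        print(f"{tv_lines} TV lines -> ~{across_width:.0f} lines across the width "
              f"(an 80-column mode wants 640)")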

As an example, my Fujitsu FM-7 and NEC PC-8001 use a 640×400 resolution in 80-column mode. I had originally bought a Sony PVM-9042Q monitor without knowing much about this kind of thing, and 80-column text was only barely readable. It turns out that the 9042Q has only 250 lines of resolution, and what I really wanted was the (also easily available) 9045Q model which is almost identical except that it has 450 lines of resolution. Procuring one of those did the trick; it’s still not what I would call excellent quality, but it’s much better than the 9042Q.

I’m not sure many 9" monitors suitable for displaying 80 columns were ever made, actually. (The 14" version of this monitor has 600 line resolution.) But for me space is a huge issue; I keep seeing 14" monitors come up for sale, keep going back and measuring my work area, and keep sighing with resignation. :-( I could probably fit a 12", but I’ve not found a 12" PVM with all the features these ones have (supporting basically all television standards, RGB input, separate sync input, input pass-through, etc.) At least, not anywhere near the kind of price these ones go for (often under $40 on Yahoo Auctions).

Maybe there’s been some topic drift: “tiring” is the word used, and in a television context. As someone who is sensitive to flicker, I can assure you that 50Hz and 60Hz are worlds apart for me. We’re not talking 20%, we’re talking a factor of two. Now, of course, everyone’s biology is different, and someone who is not sensitive to flicker might not imagine there can be a problem.

Of course, there’s lots to say about video: about the timing, and the perception, and the signal degradation, about spatial and temporal frequencies, and so on.

Clarity isn’t always an issue. If you ever had a graphics card with video input, you may recall that you actually wanted to blur a TV image (slightly, at least). I guess contrast and sustain vs. refresh rate are more of an issue when it comes to eye strain, especially when “well trained” b/w users found themselves having to get accustomed to color TV.

Regarding horizontal resolution, it’s actually tricky. In b/w there is no such thing; it’s just an analog signal exciting an even phosphor coating. With color, there’s both the granularity of the colored phosphors and the pitch of the mask, but neither of them corresponds directly to pixels. (Hence you could do things like multi-sync/multi-resolution monitors.) However, while there are no actual pixels (or pels), there’s still a maximum resolution you may get from the signal. A fixed horizontal resolution, and with it pixels, only becomes a thing with digital signal generation. Then there’s the circuitry for image processing in a TV, e.g., sharpening for motion images, filtering for noise reduction, comb filters, later b/w sets filtering out high frequencies related to color encoding, etc., which may clash with the signal generated by a computer. If you combine a traditional TV with a computer, what you get is actually quite a mess (especially since there’s no notion of such a thing as horizontal resolution on the TV side). Moreover, the various color systems pose slightly different problems here.

It’s right that it eventually comes down to bandwidth, which is somewhat fixed with RF signals. Another factor is the excitability of the particular phosphor used. It takes a while for the activation of the phosphor to ramp up (and there’s also a ramp on the falling edge). The same is true for the circuitry that drives the cathode(s). There seems to have always been a trade-off between sustain/stability and reaction time, which could only be overcome by higher cycle times / higher bandwidth. As I recall it, a single vertical line seldom reached full activation (or saturation). That is also why you could get only about 40 characters from a 320-pixel line, since you had to have at least 2 adjacent pixels activated for a vertical line/edge, worth 4 vertical lines max for what were actually 8 pixels.
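
Just to restate that arithmetic (numbers as in the 320-pixel example above):

    # 320-pixel line, 8-pixel character cells -> 40 characters per line.
    # If a legible vertical stroke needs at least 2 adjacent lit pixels,
    # an 8-pixel cell can hold at most 4 distinct vertical strokes.
    line_pixels, cell_pixels, min_stroke_pixels = 320, 8, 2
    print(line_pixels // cell_pixels)         # 40 character cells
    print(cell_pixels // min_stroke_pixels)   # at most 4 vertical strokes per cell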

This is also true for TV-grade CRTs used as dedicated monitors. E.g., the VT100 used a normal, consumer-grade monochrome TV tube in non-interlaced mode (hence the visible scanlines) and, while feeding a direct signal to the tube, still had to double pixels in a dedicated circuit in order to generate sufficient saturation to cross the minimal threshold for an even, balanced image. To close the circle, a bit of blur actually helps with many of these problems. (Generally, if you have an accessible focus control, you usually wouldn’t want to dial it all the way up. Clarity isn’t always a primary target.)

It turns out to be quite difficult to take a good photo of a CRT, but here’s my effort from this morning - the 80-column text from the Acorn BBC Master as shown on the 5 inch monochrome TV is better in reality than in this photo. It’s not a clear display, but it is readable. Back in the day I bought second-hand (junk) monochrome TVs for computer and conventional purposes, maybe 20 inch models, but when I got my Beeb I bought a 12 inch portable from a friend. That was perfectly usable, but most likely I mostly used the very legible 40-character-wide teletext mode. (Turns out it was his mother’s TV and he hadn’t asked.)

Click through for a photo of the screen itself. As I say, it’s a bad photo.

That’s surprisingly clear for a consumer 5" display! (Smaller colour displays seem to tend toward lower resolutions, I guess because you’d need a finer-pitch shadow mask for the same resolution on a smaller display.)

For what it’s worth, I do happen to have a close-up of a 640-horizontal-pixel display (80-column text from an FM-7, using the monochrome composite output) on my Sony PVM-9042Q. That has a horizontal resolution of 250 TV lines, which is about 330 lines across the display.

Looking closely at the image, particularly the stepped sequence of vertical block fills (where you can see individual colours on the top and bottom edges), it seems clear that the resolution limitation here is the aperture grille, which has only three slots per character (approximately).

One wonders if it wouldn’t actually make sense to use a horizontal aperture grille, instead of a vertical one, on monitors like this. The vertical resolution of the monitor is fixed (525 lines per frame, 486 of which are visible), so one could space the aperture grille exactly for that (with 486×3 = 1458 lines of R/G/B-coloured phosphor), and then the horizontal resolution would be entirely analogue, limited by the frequency response of the electronics, beam and phosphor.

While there are some pretty visible “vertical scan lines”, I’m not sure if the most limiting factor here isn’t the reaction time of the phosphor. Where there are continuous, horizontal regions of activation, the image isn’t that bad and is pretty legible (compare the adjacent block characters), while it is really problematic where there are merely 2 adjacent pixels, like for a vertical bar as seen in an “I” or the katakana characters. Judging from the photo, I’d say full activation isn’t reached for any of those characters. (BTW, the photo is pretty good!)

Right, so I got curious about this and hauled out the FM-7 to do some experimentation. The theory I now have for the FM-7 display is:

  1. The system always generates 640×200 video.
  2. The standard glyphs are stored as a 6×7 dot matrix.
  3. The sub-CPU, when asked to print a character at a certain spot on the screen, renders it as-is in WIDTH 80 (80-column) mode and double-width in WIDTH 40 mode.

I worked this out by using the SYMBOL (x,y),"c",hscale,vscale command in FM-BASIC, which plots character c at (x,y) on the screen (x ranging from 0-639 and y ranging from 0-199) scaling it horizontally by hscale and vertically by vscale. Scale factors of 1,1; 2,1; and 8,4 look like this:

(This image was photographed on my 450-line PVM-9045Q, and under different conditions from the photos in my previous post, so don’t use it to compare resolution, just use it to confirm the character dot matrix.)

The ペペペラリルレロロ line at the top was printed using the standard (40-column) text output so you can see the standard inter-character spacing, which I can’t do with SYMBOL since that plots only one character at a time at arbitrary co-ordinates. It looks like those add one column of pixels for spacing, giving 7-pixel-wide characters, but they should be 8 pixels wide for 80 chars on a 640-pixel line. I have no idea where the extra pixel went.

Anyway, going back to my previous post’s image:

I can’t find the “I” you’re referring to, but if we have a look at the exclamation point, which is one pixel wide, and the double quote and octothorp to the right of it (the !"# character sequence), we see pretty clearly that each character is getting about three lines of horizontal resolution on the 250-line PVM-9042Q, and can see the adjacent red, green and blue phosphors in each line:

[image: close-up of the !"# sequence on the 250-line PVM-9042Q]

I think you’re right that there’s a “reaction time” thing going on here, but I don’t think it’s anything to do with the phosphor: it’s simply that the analog signal, continuously varying across the scan line, is (partially) “digitized” by the aperture grille, i.e. you see only the sections of signal where it’s not been blocked by the grille. (There are actually three signals being displayed, for R, G and B, but since the composite output is monochrome, R, G and B will all be at the same level.) So if you have several adjacent horizontal pixels lit up that fully cross the R, G and B phosphor columns in front of the grille, you’ll get a nice bright white, as you see for the horizontal lines in the octothorpe. But when only one pixel is being lit by the computer, that’s not enough to cover the full width of an aperture grille column, so you’ll see that section only partially lit up because the signal is “on” for part of it and “off” for part of it. (Actually, it is truly analogue; there will be a rise and fall time there, but I imagine that the column will simply average out the total energy value of the signal as it varies over that column.) Thus the double-quote, pattern .█..█. , ends up more or less evenly lighting all three columns.
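
Here’s a tiny toy model of that averaging idea, just to illustrate the effect. The numbers are made up but plausible: 640 computer pixels landing on roughly 330 grille columns across the width (as in the ~250-TV-line case above), no rise/fall times, and each column simply showing the mean signal level that falls on it:

    # Each grille column shows (roughly) the average of the analog signal that
    # falls on it. A single lit pixel covers only part of a column, so it comes
    # out dimmer than a 2-pixel stroke that (nearly) fills one.
    PIXELS, COLUMNS = 640, 330     # assumed: 80-column mode on a ~250-TV-line tube

    def column_levels(pixel_pattern):
        levels = []
        for c in range(COLUMNS):
            start, end = c * PIXELS / COLUMNS, (c + 1) * PIXELS / COLUMNS
            total, p = 0.0, start
            while p < end:
                step = min(int(p) + 1, end) - p        # overlap with the current pixel
                total += pixel_pattern[int(p)] * step
                p += step
            levels.append(total / (end - start))       # average level over the column
        return levels

    line = [0] * PIXELS
    line[100] = 1                  # a lone lit pixel (like the stem of the "!")
    line[200] = line[201] = 1      # a 2-pixel stroke (like one bar of the double quote)
    levels = column_levels(line)
    print(max(levels[48:56]), max(levels[100:108]))
    # -> roughly 0.44 and 0.88: the lone pixel lights its column to about half
    #    the level of the 2-pixel stroke.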

This also seems clear in the sequence of “ペラリルレロ” where there are eighteen vertical slices of the aperture grille across the six characters, and though I suspect the beam can resolve this, the aperture grille cannot:

[image: the “ペラリルレロ” sequence on the 250-line PVM-9042Q]

Here’s the 80-column character sequence on the 450-line PVM-9045Q.

I’m not sure how reliably these can be compared, because my camera doesn’t have manual controls for me to get consistent images, but if we extract the same “ペラリルレロ” sample to show it here full size:

[image: the same “ペラリルレロ” sample on the 450-line PVM-9045Q]

I count something like 35 vertical lines there, which isn’t too far off from the calculated 18*450/250 = 32.4 lines. The individual coloured columns don’t come out in that image, or any others I’ve taken, though. I don’t know if that’s an issue with the photography or if something’s different about the monitor.

Now that I’ve done all this, I’m not sure if I’ve become less confused or more confused about this. But at least I know to check for lines of horizontal resolution on colour monitors, and that 450 lines is kind of a bare minimum for 80-column work. :-)

Regarding phosphor response and brightness loss, I once wrote a blog post on this and the character generation on the VT100. (Maybe some similar mechanism is responsible for the “missing” pixel?)

It comes down to this image, composed from drawings in the VT100 Technical Manual, which seems to correspond to your photos quite well:

And, yes, it’s both about signal ramps and latency in the phosphor response, but the latter is the more decisive factor.

P.S.: Given how difficult it is to shoot a neutral photo of a CRT without adding bloom etc., your photos are pretty excellent!

P.P.S.: I recall phosphor response times being among the technical data commonly found in advertising for a given line of monitors. I didn’t understand the importance then, but given the state of monitor tech at the time, it was probably the most decisive figure for what resolutions you could get from a device. (At the time, it was often thought to be about ghosting, but that would have been more due to phosphor sustain, which wasn’t advertised at all.)

Last thoughts on this: as shown by your example of the Sony monitors, a vendor would provide a monitor with different kinds of phosphor coating (at no extra cost), so that you could choose the right one for an intended resolution. A “slower” phosphor would probably give you a more pleasant look in low to medium resolution, with less contrast on the edges and less blocky characters, whereas a faster phosphor would provide an even brightness at higher resolution, but may be too “contrasty” or even “brutal” for a 40-character display. So faster isn’t always better; it depends on the use.

I am certain it’s a refresh rate and phosphor persistence thing. This “color TV tiring” thing simply was not a thing here in the USA; we use a 60 Hz refresh rate.

However, when I got an Amiga, I switched to PAL mode for the extra resolution. Especially with interlacing, the flicker was incredible, but even without interlacing it was flickery - in contrast to NTSC mode which had no flicker whatsoever. Of course, different people have different levels of sensitivity to flicker. I am sufficiently sensitive that I can easily sense flicker in a 50Hz display (even non-interlaced). But I am willing to make sacrifices for the sake of extra resolution.

In contrast, I never noticed any flicker in the classic IBM PC monochrome display (or related Hercules graphics). Of course, the reason for this is the long persistence phosphors.

In the 1990s, when multi-sync monitors became a thing (and video RAM became the deciding factor in the trade-off between resolution, color depth, and bandwidth), I couldn’t stand anything less than about 75 or 80 Hz, preferring rates above 100 Hz. But then, phosphor sustain had also shortened significantly.

The IBM 5151 monochrome display was well known for its persistence. Have a look at this video (by LGR): LGR Oddware - IBM 5151 Monitor with MDA & Hercules Graphics - YouTube
