At DVXuser, there’s a detailed article called Sensor Artifacts and CMOS Rolling Shutter by Barry Green. He discusses, and does a very good job of showing, a phenomenon whereby the image captured by a camcorder’s imaging chip is not gathered all at once (what I’ll call “progressive,” like a frame of film behind a shutter) but may instead be collected across the chip, row by row, like a farmer harvesting corn from his field. This can create footage with unique problems. He says:
While CMOS and CCD sensors do the same basic job (gathering light and turning it into a video image), they go about it in different ways, and the differences can have very significant impact on your footage… CMOS sensors (equipped with “rolling shutters”) can exhibit skew, wobble, and partial exposure; CCD sensors are immune to those effects. And a CMOS sensor with a “global shutter” would also be immune to them, but since no current CMOS camcorders are equipped with global shutters, a camcorder buyer needs to be aware of what the implications of a rolling shutter would be.
Many of today’s new videographers won’t know anything about this, but in the early days of video, camcorders had tubes: one tube for a black-and-white camera, or three tubes for a color camera. Aside from the numerous challenges of keeping the three tubes aligned so all three of your color images had the same registration, the tubes suffered from other problems, like burn-in and smear.
Video coverage of night sports games always had low-angle shots of the players on the field, with the bright stadium lights behind them. TV tubes can burn in. So can video camera tubes, and it can happen pretty quickly. So much so that as a tube-based camcorder panned across those stadium lights, or even the scoreboard at the end of the field, you’d see little “comet trails” from the lights burnt into the tube itself.
Another issue is that tubes scanned the image from top to bottom, just like the way Mr. Green describes the CMOS chips. Let’s go back to Barry Green’s article:
In order to understand the various image artifacts that arise, the important thing to note here is that in the rolling shutter, different portions of the frame are exposed at different times than other portions. If the subject or the camera were to move during the exposure, the result would be reflected in the frame as one of the three Rolling Shutter Artifacts (Skew, Wobble, or Partial Exposure). For example, here’s a simulation of what would happen if a rolling-shutter camera were to pan horizontally during exposure…
He shows how a vertical tree, when scanned line by line, ends up looking like a diagonal tree on the display. What he fails to mention is that the problem is actually a mismatch between the imaging chip and the display; it is not a problem with the chip alone. If you record with a rolling-shutter (i.e., scanning) imager and show the result on a progressive display, the image will be distorted. But if you showed it on a scanning display, like a TV tube, the scanning of the imager would match the scanning of the display, at the same time, and there would be no distortion.
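To make the geometry concrete, here’s a little Python sketch of how a pan smears a vertical edge across a rolling readout. All the numbers (line count, frame time, pan speed) are made up for illustration:

```python
# Toy model of rolling-shutter skew (all numbers are illustrative).
# A vertical edge sits at x = 100 px while the camera pans right.
LINES = 480            # rows read out, top to bottom
FRAME_TIME = 1 / 30    # seconds to scan the whole frame
PAN_SPEED = 300        # apparent image motion, pixels per second

def skewed_x(row, edge_x=100.0):
    """Horizontal position of the edge as recorded on a given row."""
    t = (row / LINES) * FRAME_TIME   # when this row was sampled
    return edge_x + PAN_SPEED * t    # the pan has shifted the edge by then

print(skewed_x(0), skewed_x(LINES))  # top row vs bottom row: a 10 px tilt
```

Shown on a progressive display, all the rows appear at once and the edge looks diagonal; on a scanning display, each row is drawn at roughly the moment it was captured, so the tilt never becomes visible.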
To see my point, you need to understand the basic gist of how the errors occur with a rolling shutter, and then realize that they have only become problems now, with the rise of progressive displays.
You see, tubes scanned the image over time. But for decades there was really no problem, because tube displays also scanned the image onto the phosphors of the screen over time. This meant that, as the camera panned past the tree trunk, each scan line had the tree in a different place. Similarly, each scan line of the TV tube had the tree in a different place. The entire process was analogue: as the image was scanned, it was transmitted and displayed, all over time. No one watching the set saw a diagonal tree trunk like the one Mr. Green shows in his example.
Only when it became possible to freeze frames of high motion captured by a tube camera could we see a distortion that was never before evident.
But Mr. Green does not mention that the opposite is also true: progressive imaging devices (CCDs, film) capture the entire image at once, like a photograph, and those progressive images can appear distorted when displayed on scanning displays, like tube TVs.
In fact, this discussion has been around for years. Here, for example, is Adam Wilt writing about Sony’s CMOS-based V1:
Each frame is comprised of two fields, each with 243 active scanlines. One field is displayed in 1/60 of a second, then the other follows 1/60 of a second later. Since the critical flicker fusion frequency of the human visual system is roughly 60 Hz, we can look at a TV and see what looks like a solid, steady picture, even though most of the time the CRT’s faceplate is dark (LCDs, DLPs, and plasma panels work differently, but that’s another article…)
Like other CMOS camcorders, the V1’s chips use a rolling shutter, sampling scan lines sequentially instead of capturing the entire frame at once. Still frames taken from fast pans show tilted vertical lines-just like stills grabbed from a tube camera’s clips or photographs taken with still cameras using vertical focal-plane shutters. The 1x playback of such pans on CRTs look better than CCD camera pans do, because CRTs are also sequentially scanned, but all-at-once displays may make the V1’s pix look distorted.
(Of the two LCDs I use, the Panasonic BT-LH1700W scans like a CRT: the Z1 CCD pans look tilted and the V1 CMOS pans look normal. The HP L2335 buffers the image and displays it all at once: the Z1 pans look normal and the V1 pans show tilt.) Fast-moving objects can look distorted in extracted stills, though I have yet to see anything objectionable in moving video. If you’re considering a V1 for film-out, you’ll want to test this aspect of the camera’s imaging and see if it suits your needs.
Here’s an excerpt from Mark Schubin in Videography, April 2004:
Imaging tubes also precisely matched the scanning of picture tubes. Chips, on the other hand, capture entire images at once, whereas picture tubes present the top first and the bottom last. As a result, straight vertical lines can seem curved as chip cameras pan, and there can even be audio-sync issues with imaging chips. But few would trade their chip-based cameras for older tube models.
Here’s an excerpt from Mark Schubin in Videography, July 2006:
That changed a bit with the introduction of solid-state cameras. Imaging chips capture the whole image at once, rather than starting at the upper left and proceeding via scanning lines to the lower right. But picture tubes (cathode-ray tubes, CRTs) still start at the upper left and proceed to the lower right. As a result, if picture and sound are perfectly synchronized at the top left of the picture, they’ll be about 17 thousandths of a second out of sync at the bottom right.
That slight delay is relatively insignificant. Given the speed of sound in air at sea level at about 15 degrees Celsius, it’s roughly the same delay that sound would have reaching a microphone (or an ear) 24 feet away. We are accustomed to such minor sound delays. If someone were to say “Hello” from across a large living room, the delay would be comparable.
Unfortunately, given the perceptually instantaneous speed of light, we are not accustomed to picture delays, and that’s what the solid-state imagers introduced. The lower-right portion of the image is delivered later than the upper right. Fortunately, the center of the image, where a person’s mouth is likely to be, has only half the delay — and it is relatively small.
Unfortunately, neither of the complete articles is available online.
Shame on Videography!
This brings up Progressive versus Interlace
Actually, the issues I am discussing here are not directly related to progressive or interlaced imaging chips. The rolling-shutter issue comes down to how, exactly, the information from each row of pixels is read off and sent to the processing unit of the camera.
Going back to tubes: the information came off the tube as it was scanned.
That is why I call them “scanning” imagers.
CCDs dump the entire imager to a buffer, which hands it off to the processor.
Actually, at the dawn of CCDs, there were two different types. FIT sensors dumped each row of pixels to a masked neighboring row, which then dumped it to a masked “chip” before it was sent off to the processor. This captures the entire frame at once: what I’m calling “progressive.” I’ve included an image here from Sony about the design of FIT chips so you can understand how the entire chip can be read (actually, the information is “dumped” from the pixels) at once. In reality, it’s dumped to the next pixel, then next door, then off to the processor. Quite a lot of data movement going on.
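The timing difference is the whole story, and it can be sketched in a few lines of Python. This is a toy model, not real sensor code: in a frame-transfer/FIT design every row shifts into masked storage at one instant, so every row’s exposure ends at the same time, while a rolling readout staggers them:

```python
# Toy timing model (not real sensor code) contrasting the two readouts.
LINES = 480
FRAME_TIME = 1 / 30

def exposure_end(row, rolling):
    """Time (in seconds) at which a given row stops gathering light."""
    return (row / LINES) * FRAME_TIME if rolling else 0.0

fit_times = {exposure_end(r, rolling=False) for r in range(LINES)}
rolling_times = {exposure_end(r, rolling=True) for r in range(LINES)}
print(len(fit_times), len(rolling_times))  # 1 shared instant vs 480 distinct ones
```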
CMOS chips can be read any number of ways. Currently, most are designed so that the information is handed off much as it was from tubes. However, there are numerous consumer and professional CMOS cameras that can also alter how the information is read off the pixels to enable other effects, like variable frame rates. This CMOS chip image is from Molecular Expressions, which has an excellent and mind-numbingly detailed explanation of how CMOS imagers work.
One example of how malleable the data coming off these chips is: Sony’s own HDV camcorders offer the ability to change the way the CMOS imager sends pixel data to the buffer. Instead of taking the data from top to bottom of the entire chip (the rolling shutter), you can enable a high-speed mode which divides the image area into numerous sub-sections and reads the data off faster than the normal 30 frames a second. This can only continue until the chip’s buffer is full; that data is then recorded to tape in real time, providing true slow motion at reduced resolution.
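The arithmetic behind this kind of buffered slow motion is simple. Here’s a back-of-envelope Python sketch with illustrative numbers, not Sony’s actual specs for any particular camcorder:

```python
# Back-of-envelope math for buffered slow motion (illustrative numbers,
# not the specs of any real camcorder).
capture_fps = 120       # assumed high-speed burst rate into the buffer
playback_fps = 30       # real-time rate at which the buffer goes to tape
buffer_frames = 360     # assumed buffer depth in frames

burst_seconds = buffer_frames / capture_fps     # real action captured
tape_seconds = buffer_frames / playback_fps     # what ends up on tape
slowdown = capture_fps / playback_fps           # resulting slow-motion factor
print(burst_seconds, tape_seconds, slowdown)    # 3.0 12.0 4.0
```

With these assumed numbers, a 3-second burst plays back as 12 seconds of tape: true 4x slow motion, limited entirely by how deep the buffer is.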
People are using this already, and on YouTube the decreased resolution of the camcorder is still far above the resolution of the web video. Check out this video by MoeSkate86:
Even the very high-end high-speed cameras that utilize CMOS imagers use the same method. Photron’s Ultima Fastcam APX-RS offers 3,000 fps at 1024 x 1024, but if you want 10,000 fps, you only get 512 x 512. The key is that it records to its own internal RAM subsystem: currently 16GB of memory, for as much as 12 seconds at 1,000 full-resolution fps.
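As a sanity check, here’s a quick Python calculation of the record time. The bits-per-pixel figure is my assumption for raw sensor data, not a published Photron spec, but it lands in the ballpark of the quoted figure:

```python
# Sanity-checking the RAM budget. bits_per_pixel is my assumption
# (raw sensor data), not a published Photron specification.
ram_bytes = 16 * 1024**3     # 16 GB of internal memory
width = height = 1024
fps = 1000
bits_per_pixel = 10          # assumed raw bit depth

bytes_per_frame = width * height * bits_per_pixel / 8
record_seconds = ram_bytes / (bytes_per_frame * fps)
print(round(record_seconds, 1))  # roughly 13 s, near the quoted ~12 s
```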
In the end, Mr. Green cites a reference noting that this can’t be automatically corrected:
Chia-Kai Liang wrote a research paper attempting to correct for rolling-shutter issues; an abstract and examples are HERE.
His CONCLUSION is that it can’t be compensated for automatically.
Actually, there are already patents on the books for this correction. But they do not describe what you may think: not CMOS imagers, nor any of today’s leading-edge technology. They were designed to fix the image errors of scanning (rolling shutter) imagers as recorded by progressive imagers (film), i.e., kinescopes.
Kinescopes are films made of early television programs because there was no video tape. (cool link)
They literally pointed the film camera at a calibrated TV tube and filmed it. The shutter in the film camera captured the entire TV image at once, and caught the inherent image distortion caused by the scanning tube. Our eyes never saw the distortion on the tube itself, because the tube scans the image over time; it was never meant to hold the entire image at once. But the film gathered up all the lines, both even and odd fields, and then closed the shutter to catch the next frame.
The patent says, in part:
A horizontal scanning correction module generates a predistortion signal responsive to the compensated gain signal… A field scanning correction module generates a correcting wave responsive to the compensated gain signal and the first correcting signal…
Which basically means it takes each line of scanned imagery and corrects for the time that has passed between the lines. At the end, the corrected image is displayed to be recorded by another device.
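In Python terms, that line-time correction amounts to shifting each row back by the motion that occurred between the start of the frame and that row’s readout. This is a minimal sketch of the idea with hypothetical numbers, not an implementation of the patented circuit:

```python
# Minimal sketch of a line-time correction (hypothetical numbers):
# shift each scanned row back by the motion that occurred between the
# start of the frame and that row's readout.
LINES = 480
FRAME_TIME = 1 / 30     # rolling readout time for the whole frame
PAN_SPEED = 300         # estimated image motion, pixels per second

def corrected_x(row, recorded_x):
    """Undo the per-row time offset, recovering the true scene position."""
    t = (row / LINES) * FRAME_TIME
    return recorded_x - PAN_SPEED * t

# An edge recorded as a diagonal (100 px at the top, 110 px at the bottom)
# straightens back to x = 100 on every row:
print(corrected_x(0, 100.0), corrected_x(LINES, 110.0))
```

The catch, of course, is that the correction needs to know the motion; in a calibrated kinescope chain the scan timing was fixed and known, which is what made the fix practical.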
The distortion Mr. Green discusses is real. However, it is not limited to just CMOS imagers.
It is worse than he describes: tube displays are scanning, while LCD and plasma displays may be progressive (all at once) or scanning, depending on the circuitry built into the display, as Adam Wilt has already related. (Not all LCDs act the same.) Camcorders may be either scanning (CMOS) or progressive (CCD). Existing historical footage may be scanning (tube camera), progressive (film), or even a mixture (kinescope). In the end, any number of combinations can occur.
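Those combinations can be reduced to a tiny truth table. This is a simplification of the argument (real-world visibility also depends on motion speed and display timing), but it captures the mismatch rule:

```python
# The mismatch argument as a tiny truth table. A simplification:
# real-world visibility also depends on motion speed and display timing.
captures = {"tube camera": "scanning", "CMOS": "scanning",
            "CCD": "progressive", "film": "progressive"}
displays = {"CRT": "scanning", "buffered LCD": "progressive"}

# Skew shows up on pans when capture and display methods don't match.
skew_table = {(cam, disp): cmode != dmode
              for cam, cmode in captures.items()
              for disp, dmode in displays.items()}

for (cam, disp), skew in sorted(skew_table.items()):
    print(f"{cam:11s} on {disp:12s}: {'skew on pans' if skew else 'no skew'}")
```

Note that this little table agrees with the monitor observations quoted earlier: a CCD camera’s pans look tilted on a scanning display, while a CMOS camera’s pans look tilted on a buffered, all-at-once display.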
As Mark Schubin summed it up well in Videography in December of 2006:
Unfortunately, whatever reference-monitor technology is used, it will no longer be consistent with those of all of the video screens the audience uses. Despite the fact that flat-panel TV set sales exceeded those of CRTs in the third quarter in North America, there are still more than 300 million existing CRT TVs in U.S. homes. It will be a long time before they are all replaced, and, when they are, they could be replaced with LCD, plasma, DLP, or other display technologies, each with its own characteristics. As CRTs fade away, consistency between reference monitors and TV sets is fading with them.
For the future, with the overall move to progressive displays, whether it matters that you capture your images with a progressive imager is up to the end user. The examples Mr. Green links to clearly demonstrate extreme cases of scanning imagers producing distorted video on progressive displays.
Would the video be as bad if the cameras were better stabilized? No.
Can that footage be easily fixed with readily available production tools? No.
So understand your camcorder. It is a tool, one of many you can use for a production. A deep knowledge of what your tools can and can’t do will help you pick the right one for the job at hand. Lastly, know your destination before you start. If you are looking to eventually go to film (a progressive format), capturing your images with a scanning CMOS imager may not be the best choice. Mr. Green notes that the RED camera uses a scanning CMOS imager. So does the lowly Canon HV20.
As has oft been said: Knowledge is power. How knowledgeable are you?