[Written for Better Digital Photography magazine, 2004]

 

Need to know: Sensors

Without one, your digital camera wouldn’t see a thing, so how do image sensors work?

 

The one vital component that all digital cameras have in common is an image sensor. There are several different types and many different sizes, but they all do basically the same job: they convert the image that the camera ‘sees’ into an electronic signal that can be stored as digital information.

Most people now know that sensor resolution is measured in megapixels, but there are other factors that affect the quality of the final image. First and foremost is the physical size of the sensor itself. In many larger compact cameras the sensor is of a type known as 1/1.8”, which measures 7.176 x 5.319mm. Some smaller and older cameras use a 1/2.7” type sensor, which is smaller still at just 5.27 x 3.96mm, and there are many other commonly used sizes. The odd type codes (1/2.7”, 2/3” etc.) are a hold-over from the diameters of video camera tubes of the 1950s, and bear little relation to the actual dimensions of the sensor. See the table below for the actual sizes of various sensors.

Into the diminutive surface area of the sensor will be crammed anything up to five million individual photoreceptors. Obviously, the more photoreceptors you squeeze into a given area the smaller each one becomes, and this has a negative effect: the light-gathering power of each photoreceptor is reduced, and with it the dynamic range. Dynamic range is the ability to record the subtleties of tone from white to black, so a reduced dynamic range compresses the finer gradations between the extremes of highlight and shadow. A larger sensor is therefore preferable if you want to capture more subtle tones (see the rough calculation after the table below).

Sensor types and sizes:

Type         Width (mm)   Height (mm)
1/3.6"       4.00         3.00
1/3.2"       4.536        3.416
1/3"         4.80         3.60
1/2.7"       5.27         3.96
1/2"         6.40         4.80
1/1.8"       7.176        5.319
2/3"         8.80         6.60
1"           12.8         9.60
4/3"         18.00        13.50
35 mm film   36.0         24.0
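
If you like numbers, here is a rough back-of-the-envelope sketch in Python that illustrates both points: how little the type code has to do with the real dimensions, and how small the individual photoreceptors end up being. The sensor dimensions come from the table above; the five-megapixel figure is simply an illustrative assumption, not any particular camera.

import math

# Actual active area of a 1/1.8" type sensor (from the table above)
width_mm, height_mm = 7.176, 5.319

# The type code nominally describes a 1/1.8-inch video tube diameter...
nominal_diameter_mm = 25.4 / 1.8                       # roughly 14.1 mm
# ...but the real diagonal of the imaging area is much smaller
actual_diagonal_mm = math.hypot(width_mm, height_mm)   # roughly 8.9 mm
print(f"Nominal tube diameter:  {nominal_diameter_mm:.1f} mm")
print(f"Actual sensor diagonal: {actual_diagonal_mm:.1f} mm")

# Approximate pitch of each photoreceptor if 5 million of them share this area
photoreceptors = 5_000_000
pitch_um = math.sqrt((width_mm * height_mm) / photoreceptors) * 1000
print(f"Approximate photoreceptor pitch: {pitch_um:.1f} micrometres")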

 

Silicon solution

So how does the sensor actually convert incoming light into a digital image? The sensor is made up of millions of photoreceptors arranged in a grid, each one corresponding to a pixel in the final image. Inside each photoreceptor is a small area of light-sensitive silicon, and attached to it is an electronic component called a capacitor, which can store electric charge. When a photon (a single “particle” of light) strikes an atom of the silicon, it knocks loose an electron (a particle of electric charge), which is then stored in the capacitor. As more photons hit the photoreceptor, more electrons are released, building up a charge in the capacitor. The higher the charge, the brighter the signal from that spot on the sensor. Multiply this operation by several million and all the signals together make up a monochrome image. By applying a coloured filter mask over the surface of the sensor, this can be converted into a colour image.
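
For readers who like to tinker, the whole process can be sketched in a few lines of Python. The quantum efficiency and ‘full well’ capacity below are made-up illustrative figures, not the specification of any real sensor.

import random

QUANTUM_EFFICIENCY = 0.4   # invented: fraction of photons that free an electron
FULL_WELL = 20_000         # invented: maximum electrons the capacitor can hold

def expose_photosite(photons_arriving):
    """Turn incoming photons into stored charge, then into a brightness signal."""
    electrons = sum(1 for _ in range(photons_arriving)
                    if random.random() < QUANTUM_EFFICIENCY)
    electrons = min(electrons, FULL_WELL)   # the capacitor can only hold so much
    return electrons / FULL_WELL            # brightness as a 0.0 - 1.0 signal

print(expose_photosite(1_000))    # few photons: low charge, dark pixel
print(expose_photosite(60_000))   # flood of photons: the well saturates, pure white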

 

World of colour

The majority of imaging sensors see only in black and white, so to record colour, filtration is needed. Most sensors are composed of a grid of square pixels, and over each of these sits a coloured filter: red, green or blue. Because our eyes are most sensitive to green light, there are twice as many green filters as there are red or blue (an arrangement commonly known as the Bayer pattern). Light passes through the filters before hitting the photoreceptors, which, as we’ve seen, can only measure brightness.
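
Here is a quick sketch of that filter grid in Python. The repeating red/green/green/blue tile below is the common Bayer arrangement, and the ‘scene’ values are invented purely for illustration.

# One tile of the repeating filter grid: two greens for every red and blue
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def filter_colour(row, col):
    """Which coloured filter sits over the photosite at (row, col)."""
    return BAYER_TILE[row % 2][col % 2]

def mosaic(rgb_image):
    """Each photosite records only the brightness of its own filter's colour."""
    channel = {"R": 0, "G": 1, "B": 2}
    return [[pixel[channel[filter_colour(r, c)]]
             for c, pixel in enumerate(row)]
            for r, row in enumerate(rgb_image)]

# A tiny 2x2 'scene' of (R, G, B) values...
scene = [[(200, 120, 40), (10, 180, 30)],
         [(60, 60, 220), (255, 255, 255)]]
# ...becomes one plain brightness number per photosite
print(mosaic(scene))   # [[200, 180], [60, 255]]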

Software in the camera measures the brightness of the pixels behind each of these filters and uses that information to build up an accurate colour image. It does mean that roughly two thirds of the colour information in any digital photograph is interpolated (in other words, educated guesswork), but the software that does the guessing is getting better all the time. The quality of this processing software is another major factor affecting the final quality of the image, as well as the speed of the camera.
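
That ‘guesswork’ is interpolation: the two colour values each photosite did not record are estimated from neighbouring photosites that did. Here is a deliberately crude sketch of the idea in Python; real in-camera processing is far more sophisticated, and the raw readings below are invented.

# The same repeating filter layout as in the previous sketch
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def filter_colour(row, col):
    return BAYER_TILE[row % 2][col % 2]

def demosaic_pixel(raw, row, col):
    """Estimate a full (R, G, B) value for one photosite by averaging its neighbours."""
    height, width = len(raw), len(raw[0])

    def average_of(colour):
        values = [raw[r][c]
                  for r in range(max(0, row - 1), min(height, row + 2))
                  for c in range(max(0, col - 1), min(width, col + 2))
                  if filter_colour(r, c) == colour]
        return sum(values) // len(values) if values else 0

    return (average_of("R"), average_of("G"), average_of("B"))

# A small block of raw brightness readings, one number per photosite
raw = [[200, 180,  60, 190],
       [ 70, 210,  80, 220],
       [190, 150,  50, 160],
       [ 90, 230,  60, 240]]
print(demosaic_pixel(raw, 1, 1))   # interpolated full colour at row 1, column 1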

 

Foveon X3

The Foveon X3 sensor is a rather different proposition from most other sensors on the market. It exploits the light-absorbing properties of silicon itself and does away with RGB filters: because different wavelengths penetrate silicon to different depths, three stacked layers of photodetectors, with 3.4 million on each, capture the red, green and blue components at every photo site. In effect the current sensor has 10.2MP but produces a 3.4MP image. This quite radical approach has been both lauded and criticised; tests have shown an increase in colour depth and sharpness over traditional sensors. At present the X3 sensor is used only in Sigma’s SD9 and SD10 digital SLRs, with a 4.5MP compact from Polaroid due out soon.
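
Put simply, where a filtered sensor records one number per photosite and guesses the rest, the X3 records all three. A minimal sketch of the difference, using the figures quoted above (the individual readings are invented):

# Filtered (mosaic) sensor: one reading per photosite, the rest interpolated later
mosaic_site = 130                                  # e.g. a green-filtered reading

# Foveon X3: three stacked readings per photosite, blue layer nearest the surface
x3_site = {"blue": 90, "green": 130, "red": 60}

# 3.4 million photo sites x 3 layers = 10.2 million photodetectors,
# yet the output image is still 3.4 million pixels
photo_sites = 3_400_000
print(f"Photodetectors: {photo_sites * 3:,}")      # 10,200,000
print(f"Output pixels:  {photo_sites:,}")          # 3,400,000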

 

Fujifilm SuperCCD

Fujifilm turned the imaging world on its head, almost literally, when it released the SuperCCD. Now in its fourth generation, its photosites are rotated through 45° in a honeycomb-style pattern, giving each one a greater surface area with which to capture light. Fujifilm claims the sensor records more detail, which it can then use to produce a larger output size: from 3 to 6 million pixels, for example. The latest sensor is also available in an SR (Super dynamic Range) version, which has two photoreceptors at each photo site; the second, less sensitive receptor records an effectively underexposed image, preserving better highlight detail. First seen on the Fujifilm F700, the SuperCCD SR sensor will appear on several new models from Fujifilm this year.
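
The idea behind the SR design can be sketched roughly as follows. The sensitivities, clipping point and blending rule here are invented for illustration; they are not Fujifilm’s actual processing.

def sr_photosite(light, clip=1.0):
    """One SR photo site: a sensitive main receptor plus a less sensitive second one."""
    main = min(light, clip)              # main receptor clips (blows out) in bright light
    secondary = min(light * 0.25, clip)  # 'underexposed' receptor keeps highlight detail
    return main, secondary

def combine(main, secondary, clip=1.0):
    """Use the main reading normally; fall back to the secondary in blown highlights."""
    if main < clip:
        return main
    return secondary * 4                 # rescale the underexposed reading back up

for light in (0.2, 0.8, 3.0):            # shadow, midtone, bright highlight
    main, secondary = sr_photosite(light)
    print(light, "->", round(combine(main, secondary), 2))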