Notes on why I use RAW
I routinely use the RAW format (NEF) when taking natural history photos with my Nikon DSLR. This technical note explains my reasons for this and the key differences between RAW and other formats, especially JPEG. I start with a description of the most common type of digital photo sensor and the ways in which the information from this sensor can be turned into a photographic image.
Digital Photo Sensor
A digital photo sensor consists of an array of tiny photocells arranged on the surface of a thin rectangular piece of silicon. In the case of the Nikon D800, there are 7,360 photocells across the width of the sensor and 4,912 across the height, resulting in a total of 36,152,320 cells, packed into an area measuring about 36mm x 24mm - the same size as a frame of 35mm film.
Other cameras have different numbers of photocells and use different areas of silicon. A 'typical' DSLR currently has about 15 million cells in an area measuring about 24mm x 16mm (known as APS-C size), while many compact cameras fit a similar number of cells into an area of only 6.2mm x 4.6mm.
These figures show that the individual photocells have to be very small, in order to fit so many onto such a small area of silicon. The small size limits their light-gathering capability, which can seriously reduce their performance in dim light or when rendering the shadow areas of a photograph.
The photocells are not colour-sensitive but respond to all colours of visible light and also to infra-red radiation. Most cameras contain an infra-red filter, to block this type of radiation from affecting the final image. To obtain colour information, it is necessary to add colour filters over individual photocells. The most common layout of the coloured filters is known as a 'Bayer array', where each square of four photocells is covered by two green filters and one each of blue and red filters.
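As an illustrative sketch (the function and variable names here are my own, not any camera's API), the repeating RGGB pattern of a Bayer array can be generated from the row and column of each photocell:

```python
# Sketch of a Bayer (RGGB) filter layout over a small sensor patch.
# Each photocell records a single brightness value; its colour is
# determined by the filter above it.
def bayer_colour(row, col):
    """Return the filter colour over the photocell at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    else:
        return "G" if col % 2 == 0 else "B"

# A 4x4 patch: every 2x2 square holds two greens, one red, one blue.
patch = [[bayer_colour(r, c) for c in range(4)] for r in range(4)]
for line in patch:
    print(" ".join(line))
```

Printed out, the patch shows alternating `R G` and `G B` rows, so each square of four cells contains exactly two greens.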
Figure 1 - Bayer array of filters over an array of photocells (image from Wikimedia)
More green filters are used because the human eye is most sensitive to green light, which therefore contributes most to the perceived 'brightness' of an image.
Reading Photo Data from the Sensor
The exact method by which information is read from the sensor depends on the technology used for the photocells, so there are significant differences in detail between CMOS and CCD type cells.
In all cases, however, the brightness recorded by each photocell is read out as a sequence of individual values along each row in turn, until all the cells on the sensor have been recorded. This sequence of brightness values constitutes the RAW data from the sensor. The RAW data are therefore a string of several million brightness values, indexed according to the location of the cell to which each value relates. Some of these values are from green filtered cells while others are from red or blue filtered cells.
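A toy sketch of this read-out (with invented values, purely for illustration) shows how a two-dimensional sensor becomes a flat sequence of brightness values, each indexed by the location of the cell it came from:

```python
# Sketch of reading a sensor row by row into a flat RAW sequence,
# each value tagged with the (row, col) of its photocell.
sensor = [[11, 12, 13],
          [21, 22, 23]]        # toy 2x3 sensor of brightness values

raw = [((r, c), sensor[r][c])
       for r in range(len(sensor))
       for c in range(len(sensor[0]))]

print(raw[:3])  # the first row's values with their cell locations
```

In a real file the locations are implicit in the ordering, but the principle is the same: the RAW data are just an ordered list of per-cell brightness values.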
Fig. 2a - Appearance of RAW data from sensor
Fig. 2b - Enlarged view of RAW data
Digital processing of these brightness values requires that the values are digitised into a series of numbers. Most cameras use either 12-bit or 14-bit digitisation, which means that there are either 4,096 (12-bit) or 16,384 (14-bit) individual brightness levels, between the minimum and maximum levels that can be delivered by the sensor.
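The number of levels follows directly from the bit depth, as this small calculation shows:

```python
# Number of distinct brightness levels for common digitisation depths:
# each extra bit doubles the number of levels between black and white.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels")
```

This prints 4,096 levels for 12-bit and 16,384 for 14-bit, matching the figures above.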
Turning all these numbers into a colour image requires a lot of digital signal processing, which can be done either inside the camera or by a separate computer. The camera needs to be able to process the data itself, in order to display an image on the screen on the back of the camera or in an electronic viewfinder.
Processing Photo Data in the Camera
The first stage in making sense of all the numbers is known as de-mosaicing. There are several different approaches to this process but all involve complex mathematics. The aim is to obtain as high a resolution image as possible, while minimising the inevitable errors that arise when combining information from separated, differently colour-filtered locations. The biggest problems arise where there are sudden changes in brightness, such as at the edges of objects in the final image. The reconstructed image is typically very accurate in uniformly coloured areas, but shows some loss of resolution (detail and sharpness), as a result of combining data from blocks of four pixels, and may introduce coloured fringes at sharp edges (known as 'edge artefacts').
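A minimal sketch of the simplest approach, bilinear interpolation, illustrates the basic idea; real cameras use far more elaborate, edge-aware algorithms, and the layout and values here are my own invented example, not any camera's actual method:

```python
# Minimal sketch of bilinear de-mosaicing for one missing colour,
# assuming an RGGB Bayer layout: estimate the green value at a red
# or blue site by averaging the green cells around it.
def green_at(mosaic, r, c):
    """Average the values immediately above, below, left and right
    of (r, c), which carry green filters in an RGGB layout."""
    rows, cols = len(mosaic), len(mosaic[0])
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    vals = [mosaic[y][x] for y, x in neighbours
            if 0 <= y < rows and 0 <= x < cols]
    return sum(vals) / len(vals)

# Toy 4x4 mosaic of brightness readings.
mosaic = [[10, 20, 10, 20],
          [20, 30, 20, 30],
          [10, 20, 10, 20],
          [20, 30, 20, 30]]
print(green_at(mosaic, 1, 1))  # green estimate at a blue site -> 20.0
```

Averaging works well in the uniform areas mentioned above, but it is exactly this blending of neighbouring cells that blurs sharp edges and can produce the coloured fringes described.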
Fig. 3 - Image data from Fig. 2 after de-mosaicing, processed using Iris v.5.59 software
The end-product of the de-mosaicing process is a new array of coloured 'pixels' (picture elements), equal in number to the original photocell sites, with each pixel colour-coded by three numbers, representing the red, green, and blue (RGB) components of the colour. Each of these components can have 256 values (8-bit), allowing a total of 256 x 256 x 256 (16,777,216) different colours to be represented by each pixel. This is an ample number of colours to display a realistic image, as the human eye is reckoned to be able to perceive about 10 million colours.
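The total follows from three 8-bit components per pixel:

```python
# Three 8-bit components (R, G, B) per pixel:
values_per_channel = 2 ** 8                # 256 values each
total_colours = values_per_channel ** 3    # all RGB combinations
print(f"{total_colours:,}")                # 16,777,216 colours
```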
The colour image is processed further in the camera, to adjust aspects such as white balance and sharpening, and is then compressed into the JPEG format. Compression was very necessary when a 256Mbyte storage card was considered large but is less of a consideration now that multi-gigabyte cards are affordable.
The colour processing algorithms in most modern cameras are very effective at producing excellent results under a wide range of conditions but, just like the old colour print processors, there are times when they fail. It is when this happens that image-editing software can be used to improve the result.
Image Editing in a Computer
When data are transferred from the camera to the computer in the form of a JPEG image, any subsequent editing has to be performed on the JPEG data. These data have, however, already been compressed in the camera by a 'lossy' process that discards some of the original image information. There is a trade-off between the amount of compression and the quality of the image so, for best results, select the highest quality JPEG setting on the camera. Once information has been lost in the compression process, it can never be recovered and, in addition, more data are lost every time a JPEG image is re-saved by computer software.
Another problem is that there are only 256 values for each of the RGB components of the JPEG pixels and this range of values is not distributed linearly with increasing brightness. The non-linearity is applied as part of the sRGB specification, to improve the display of the most significant parts of the image.
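The sRGB transfer curve can be sketched directly from the published specification; the function below uses the standard formula, with the constants as given in the sRGB standard:

```python
# The sRGB transfer curve ('gamma encoding'), per the sRGB standard:
# a short linear segment near black, then a power-law curve. It maps
# linear light (0..1) non-linearly onto the 0..1 code range, so the
# 256 JPEG levels are not spread evenly across scene brightness.
def srgb_encode(linear):
    """Linear light (0..1) -> sRGB-encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

for lin in (0.01, 0.1, 0.5, 1.0):
    print(f"linear {lin:.2f} -> sRGB {srgb_encode(lin):.3f}")
```

Note, for example, that a mid-grey of 0.5 linear light encodes to roughly 0.735, well above the middle of the code range.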
As a result of this limitation, it is not possible to make major changes to the contrast or brightness of a JPEG image, or part of an image, without adversely affecting the overall quality of the image. I discuss this in more detail in my technical note on Colour Control in Photoshop. The same limitation means that it is not possible to extract much detail from the darker parts of the image, as very few of the digitisation levels are used to represent such areas.
These limitations can be removed if the RAW data are transferred from camera to computer. If this is done, there are up to 16,384 values for the brightness of each cell and these are distributed linearly from black to white. A RAW editor allows these levels to be adjusted before conversion to either a 16-bit (65,536 levels for each RGB component) TIFF image or an 8-bit JPEG. Because there are many more brightness levels associated with each cell, the RAW image allows detail to be extracted from dark areas of an image, in a way that is impossible after JPEG conversion.
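As a rough illustration of the shadow-detail advantage (using the simple gamma-2.2 approximation to the JPEG curve, and treating the RAW data as linear), we can count how many code values cover the darkest 1% of scene brightness in each representation:

```python
# Rough comparison: code values available for the darkest 1% of
# linear scene brightness in 14-bit linear RAW versus 8-bit JPEG.
# Assumes ideal linear RAW and a simple gamma-2.2 JPEG curve.
dark_fraction = 0.01                           # darkest 1% of light
raw_codes = int(dark_fraction * (2 ** 14))     # linear coding
jpeg_codes = int((dark_fraction ** (1 / 2.2)) * 255)
print(raw_codes, jpeg_codes)                   # 163 vs 31
```

Even though gamma encoding deliberately favours the shadows, the 14-bit linear RAW data still hold several times as many distinct levels there, which is why shadow detail can be 'pulled up' from RAW in a way that is impossible from JPEG.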
Careful editing of a RAW image, therefore, allows optimum use to be made of the dynamic range between the darkest and brightest parts of an image, which always exceeds what can be viewed on a computer screen or a paper print. In addition, the resources available in a desktop computer greatly exceed those in a digital camera and can be used to optimise the RAW to JPEG conversion process, to suit a particular image. Programs like UFRAW offer alternative conversion algorithms, which can be applied to obtain the best result from any given image.
Of course, when using RAW data, more effort is required from the user to obtain a good image. Matters such as colour balance, sharpening, etc., which are handled automatically in the camera, when it produces a JPEG image, are entirely under the control of the user and appropriate decisions have to be made. All this means that processing a RAW image can be a complex process, although there are now many tools available to allow a user to assess the visual effects of the various choices.
Figure 4 - Lapwings over Otmoor, Oxon. Image processed from RAW original
A RAW data set is sometimes described as a digital 'negative' and there are certainly some parallels to this stage in the processing of film images. Many of the great photographers of the past have commented that a large part of the art of photography is carried out in the darkroom. Techniques such as 'dodging and burning' were an important part of producing an exhibition print from a film negative.
If the original RAW image data are stored, then it is always possible to re-process these data at any time, in order to produce an image that is optimised for a specific application. In addition, it may well be possible, in the future, to take advantage of new algorithms, which will improve the de-mosaicing process from RAW data to a visible image.
Additional Notes on ISO 'Speed'
When film was the usual photographic medium, the sensitivity of a film could be influenced by the manufacturing process. The main method that was used to make more sensitive film was to increase the size of the silver halide grains in the emulsion. As a result, more sensitive, or 'faster' films usually displayed less detail, because of the larger grain size. Each manufacturer supplied a range of films for different applications, depending on whether fine detail or 'speed' was more important to the user. Various methods were devised to define the 'speed' of any given film and these were eventually standardised within the ISO system. So, a photographer could determine the exposure for a given shot by taking account of the published information on the ISO 'speed' of his film.
In a digital camera, it is not possible to change the sensor, so its sensitivity is fixed during manufacture. Advances in manufacturing techniques have increased the sensitivity of the more recent designs, which allows them to recover more detail in the darker areas of an image.
Because the output from the sensor is an electronic signal, camera manufacturers allow this signal to be amplified, so that a larger output can be provided from a given sensor signal. This has an analogous effect to increasing the 'speed' of a film, and has come to be described as 'variable ISO'. Just as a high-ISO film gives a grainier image, the amplification in a digital camera reduces image quality, because it also amplifies the differences between adjacent pixels, resulting in a 'grittier' or 'noisy' image.
If RAW data are transferred to a computer, it is equally possible to amplify these data in the computer, providing the RAW converter allows this. The result is much the same as if the ISO setting is altered in the camera, as shown in the images below. This is another aspect of extracting more shadow detail from a RAW image.
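A toy sketch (with invented sensor readings) shows why this amplification brightens the image but also makes it 'grittier': signal and noise are scaled by the same factor.

```python
import statistics

# Sketch: 'pushing' under-exposed RAW values in software, analogous
# to raising the camera ISO from 200 to 3200 (+4 stops, i.e. x16).
# Amplification scales the signal and the pixel-to-pixel variation
# (noise) by the same factor. Readings below are invented.
noisy_raw = [96, 104, 99, 101, 103, 97, 100, 102]  # toy readings
pushed = [v * 16 for v in noisy_raw]               # x16 = +4 stops

print(statistics.mean(pushed))    # mean signal: 16x brighter
print(statistics.stdev(pushed))   # noise: also 16x larger
```

The mean (the wanted signal) and the standard deviation (the noise) both grow sixteen-fold, so the signal-to-noise ratio is no better than in the dim original.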
Figure 5 - Comparison between increasing camera ISO and adjustment of RAW data:
(a) 'correct' exposure at ISO200
(b) 'correct' exposure at ISO3200
(c) under-exposed at ISO200 then amplified during RAW conversion
© Mike Flemming, June 2012