Corrections in digital imaging

1. Introduction

Digital photography offers exciting possibilities to mitigate image imperfections by digital signal processing. Such imperfections are often due to lens faults, but can also be caused by the camera or by the user. Available tools range from proprietary in-camera algorithms and raw converters to third-party software suites and math-based programming environments. While exploring internet fora, one often gets the impression that every imaging fault can be corrected. Is this true? Much depends on the meaning of “corrected” and the degree of image degradation. Most image faults discussed below can certainly be improved in post-processing. The fault becomes less distracting, and the resulting image is closer to the desired result. If “corrected” is understood to mean that the processed image is just as good as a direct capture in the absence of the defect, then only a few faults can be corrected. However, “just as good” has a subjective side to it, and a discussion is required for each of the faults. Understanding the nature of an imaging error sheds light on the restoration possibilities. The discussion will use the following definitions:

  • Sharpness: Subjective term for the level of detail in an image.
  • Resolution: Scientific measure of the level of detail in an image.
  • Contrast: A measure of the difference in brightness between dark and bright parts of the image. Global contrast addresses the entire image, whereas microcontrast refers to small-scale differences.
  • Improvement: Processing yields improvement if the resulting image is closer to the desired result.
  • Correction: Processing corrects a fault if the resulting image is indistinguishable (under normal viewing conditions) from the image that would have been obtained with an imaging system without that fault.
  • Artifact: A feature of an image which may lead to misleading conclusions about the true scene. Examples are sharpening halos, ringing, moiré, curvilinear distortion, and flare patches. Blur is not an artifact, but a matter of resolution and contrast.

2. Convolution and deconvolution

An image is blurred when the light of a single point in object space is spread out on the sensor. The blur can be intentional, but often it is unintentional. The size and the shape of the blur patch, and the distribution of light within the patch, characterize the point spread function (PSF). Mathematically, the blurred image is described as the convolution of a sharp image and the PSF.

Slightly blurred images can be improved by various routine sharpening algorithms which increase edge contrast. These methods enhance the sharpness impression, but do not increase resolution. Blurred text does not become readable with unsharp masking or high-pass sharpening. Deconvolution, however, can achieve this, acting as the inverse process of convolution. For each point in object space, deconvolution algorithms attempt to harvest the light from the corresponding blur patch in image space (the PSF), bringing everything back to a single point. Deconvolution requires implicit or explicit knowledge of the PSF, and can be an ill-posed problem that consumes a lot of processing power. Boundary effects, ringing, and noise amplification are likely side effects of deconvolution. Highlights exceeding the dynamic range of the sensor may clip many pixels in their blur region. Such areas are beyond rescue even with the best algorithms. Another challenge is discrimination between unintentionally and intentionally blurred parts of the image, when those coexist.

Despite the treacherous terrain, deconvolution offers exciting possibilities for image restoration in the digital domain, where a high bit depth, low noise, and access to linear (not gamma-encoded) data help to keep numerical errors within bounds.
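
As an illustration of the two operations, the following minimal MATLAB sketch (using functions from the Image Processing Toolbox) blurs a sharp grayscale image with a known PSF and then attempts to recover it with Lucy-Richardson deconvolution. The file name and parameter values are placeholders, not settings used for the figures in this article.

    % Forward problem: convolve a sharp image with a known PSF.
    sharp = im2double(imread('sharp_gray.png'));   % placeholder; a grayscale image for simplicity
    psf = fspecial('gaussian', 25, 3);             % example PSF: Gaussian blur, sigma = 3 pixels
    blurred = imfilter(sharp, psf, 'conv', 'symmetric');

    % Routine sharpening raises edge contrast but does not restore lost detail.
    sharpened = imsharpen(blurred, 'Radius', 2, 'Amount', 1.5);

    % Inverse problem: deconvolution tries to undo the convolution itself.
    restored = deconvlucy(blurred, psf, 30);       % 30 Lucy-Richardson iterations
    imshowpair(sharpened, restored, 'montage');    % compare sharpening with deconvolution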

3. Case descriptions

3.1 White balance

Differences in spectral transmittance result in some lenses being “colder” or “warmer” than others. By adjusting the white balance of an image, it is possible to obtain a consistent color balance for a series of images shot with different lenses. Correction is possible when the required adjustment is small, which is usually the case. The situation is different when one attempts to make different light sources look like one another. Street lights or candlelight, for example, produce light in a relatively small part of the visible spectrum, and images cannot be processed for a daylight look. Similarly, it is not possible to process a person illuminated by a colored spotlight for natural skin colors, simply because the required information is missing. Correction would require amplification of color channels carrying little energy, resulting in noise amplification.
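
For a small cast of this kind, the white balance adjustment boils down to a per-channel scaling, for instance chosen so that a known neutral patch comes out gray. A minimal MATLAB sketch, in which the file name and the patch location are assumptions:

    img = im2double(imread('capture.tif'));        % placeholder file name; ideally linear data
    patch = img(200:240, 300:340, :);              % assumed location of a neutral gray patch
    r = mean2(patch(:,:,1)); g = mean2(patch(:,:,2)); b = mean2(patch(:,:,3));
    gains = ((r + g + b) / 3) ./ [r g b];          % scale each channel so the patch becomes gray
    balanced = min(img .* reshape(gains, 1, 1, 3), 1);   % weak channels are amplified, with their noise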

3.2 Flare

Reflections off lens elements, filters, the sensor filter stack, or the lens barrel, as well as scattering by impurities in the lens glass, may lead to false light in the image. Typical manifestations are flare patches of varying colors and shapes, or a more global haze known as veiling glare, affecting global contrast and color saturation. Localized flare patches in areas of uniform color and brightness (e.g. a blue sky) can be corrected by copying parts of neighboring areas over the flare patch, but the situation is much more complicated when the flare affects areas with lots of detail and tonal variations. Correction is generally not possible without knowing beforehand what the affected areas should look like in the absence of flare.

In the case of veiling glare, when the haze is mostly uniform over the frame, the prospects are not so bad. Figures 1A and 1B show a backlit signpost photographed with lenses with good flare control and poor flare control, respectively. Focal length, aperture, exposure, and processing are the same. The veil in Fig. 1B ruins global contrast and color saturation, and results in a dull image.

Figure 1A. A backlit sign photographed with a flare-resistant lens.
Figure 1B. A backlit sign photographed with a flare-sensitive lens.

Post-processing is performed in MATLAB [1], simply by subtracting a vertical-gradient gray value and readjusting the levels. Subtraction of a constant gray value already yields a significant improvement, but the present example uses a gradient to achieve an even better result, reflecting the fact that the flare is not completely uniform but decreases somewhat towards the bottom. The corrected version in Fig. 1C has much improved global contrast and vivid colors. There is nothing above a good original though, as shadow detail is inevitably lost.

Figure 1C. The image of Fig. 1B after post processing.
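
The gradient subtraction can be sketched in a few lines of MATLAB. This is a minimal version; the file name and the gray levels of the veil at the top and bottom of the frame are placeholders to be tuned by eye, not the values used for Fig. 1C.

    img = im2double(imread('flare.jpg'));          % placeholder file name
    [h, w, ~] = size(img);
    topHaze = 0.18; bottomHaze = 0.10;             % assumed gray levels of the veil
    veil = repmat(linspace(topHaze, bottomHaze, h)', 1, w);   % vertical gradient
    corrected = max(img - veil, 0);                % subtract the veil from all channels
    corrected = imadjust(corrected, stretchlim(corrected, 0.002), []);   % readjust the levels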

3.3 Vignetting

Optical vignetting and natural illumination fall-off gradually darken the image towards its borders and corners. Since the noise power is mostly constant over the sensor, the signal-to-noise ratio (SNR) is lower in the image corners than it is in the center. It is straightforward to get rid of the dark corners, simply by applying a brightness compensation with a suitable dependence on the radial distance from the image center. However, the corner SNR does not improve as the compensation is a simple scaling operation that affects signal and noise equally.
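
A minimal MATLAB sketch of such a radial compensation is given below. The quadratic gain profile and its strength are assumptions; in practice the profile would come from a lens correction profile or a flat-field exposure.

    img = im2double(imread('vignetted.jpg'));      % placeholder file name
    [h, w, ~] = size(img);
    [x, y] = meshgrid(1:w, 1:h);
    r = hypot(x - (w+1)/2, y - (h+1)/2) / hypot(w/2, h/2);   % normalized radial distance
    strength = 0.8;                                % assumed fall-off strength
    gain = 1 + strength * r.^2;                    % example radial gain profile
    corrected = min(img .* gain, 1);               % dark corners brighten; their noise scales up too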

The reduced corner SNR is often not noticeable under normal viewing conditions, but sometimes it is. This depends on the ISO setting and the amount of vignetting. Obviously there is nothing that can be done for mechanical vignetting, when the corners receive no light at all.

Improvement of a vignetted image is illustrated below. The picture in Fig. 2A was taken under poor lighting conditions at a high ISO setting. Application of a radial brightness adjustment yields a more even illumination: Fig. 2B. Unfortunately, the adjustment also reveals that the corners are noisy and lack valid detail. In this case, the result of the treatment cannot be called a correction, because a lens with less vignetting would have resulted in corners with more detail.

Figure 2A. Photo of a tree in the Namib Desert at night. The corners are extra dark due to vignetting.
Figure 2B. Image after compensation for vignetting.

3.4 Lens aberrations

3.4.1 Distortion

Lenses with curvilinear distortion render non-radial straight lines in object space as curved lines on the sensor. Barrel, pincushion, and moustache distortion are familiar terms for different curvatures. Distortion is the only Seidel aberration that does not blur the image, and a high level of correction is possible by means of a two-dimensional resampling operation (also called rescaling or interpolation), using a resampling factor that is a function of the radial distance. The loss in resolution and contrast due to the resampling varies between negligible and moderate, depending on the amount of distortion.
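
A minimal MATLAB sketch of such a radial resampling is shown below. The polynomial mapping and its coefficients are placeholders for a real lens profile.

    img = im2double(imread('distorted.jpg'));      % placeholder file name
    [h, w, ~] = size(img);
    [xd, yd] = meshgrid((1:w) - (w+1)/2, (1:h) - (h+1)/2);   % output grid, centered on the image
    r = hypot(xd, yd) / hypot(w/2, h/2);           % normalized radius in the corrected image
    k1 = -0.08; k2 = 0.02;                         % assumed distortion coefficients
    scale = 1 + k1 * r.^2 + k2 * r.^4;             % radial resampling factor
    xs = xd .* scale + (w+1)/2;                    % source coordinates in the captured image
    ys = yd .* scale + (h+1)/2;
    corrected = zeros(size(img));
    for c = 1:size(img, 3)                         % resample each color channel
        corrected(:,:,c) = interp2(img(:,:,c), xs, ys, 'cubic', 0);
    end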

Correction of distortion is illustrated below. The raw capture in Fig. 3A was taken with a lens that renders the red edge at the top of the building in a characteristic moustache fashion. A treatment in Lightroom [2], using the Adobe profile for this lens, removes the artifact by neatly straightening the edge of the roof: Fig. 3B.

The dark edge at the bottom is the blurred image of a straight object held close to the lens. Unlike the rooftop, this object is subject to strong barrel distortion. This is because distortion, like all lens aberrations, depends on the subject distance. The distortion of this object is of course not corrected by a profile intended for infinity focus.

Figure 3A. Image as captured.
Figure 3B. Image after distortion correction.

In recent years, some lens manufacturers have relaxed the distortion requirements of their lenses. The reason is that other aberrations can be much reduced by allowing more distortion in the design. Correction is automatically performed by firmware in the camera, and some users never get to see the raw lens performance.

Image rotation, for example to level the horizon, falls in the same category as distortion correction. The impact on resolution and contrast is usually small.

3.4.2 Chromatic aberration

In dealing with chromatic aberration (CA), one should distinguish between lateral (transverse) and longitudinal CA. Lateral CA, also known as lateral color, causes the image magnification to be a function of the wavelength. The lens casts sharp images at all wavelengths, but at slightly different magnifications. Off axis, a single point in object space ends up at different positions in the image for the constituent wavelengths. The result is a drop in resolution and microcontrast, as well as visible color fringing. When the fringes are minor, say up to one pixel wide, correction is possible by a two-dimensional resampling (rescaling) operation applied separately to the individual color channels. However, since each color channel spans a wavelength interval, severe cases of lateral color will blur the image within each color channel. In that case, resampling is only a partial solution.
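
A minimal MATLAB sketch of this per-channel resampling is given below, for the simple case of a purely radial magnification difference. The relative magnifications of the red and blue channels are assumptions; in practice they would come from a lens profile or a measurement of the fringes.

    img = im2double(imread('fringed.tif'));        % placeholder file name; an RGB image
    [h, w, ~] = size(img);
    [x, y] = meshgrid((1:w) - (w+1)/2, (1:h) - (h+1)/2);
    scales = [0.9995 1 1.0006];                    % assumed magnifications of R, G, B relative to green
    corrected = img;
    for c = [1 3]                                  % resample red and blue onto the green grid
        corrected(:,:,c) = interp2(img(:,:,c), ...
            x * scales(c) + (w+1)/2, y * scales(c) + (h+1)/2, 'cubic', 0);
    end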

Longitudinal CA, also known as axial color, does not affect image magnification. Rather, it causes the position of the image behind the lens to vary with the wavelength of the light. In the plane sampled by the sensor, different wavelengths have blur disks of different sizes. Resolution and microcontrast are affected, and color fringing occurs. CA is only defined for the plane of best focus, but its cause (dispersion) may also affect the foreground or background blur of images with selective focus. Defocus color fringing can be as annoying as axial color.

Axial color and bad cases of lateral color require deconvolution, solving the inverse problem separately for each color channel while discriminating between the plane of focus and a possibly blurred background. Attempts with Lightroom (V5.4) were not very successful. The tool for lateral color just seems to desaturate the affected areas, and mixed results were obtained with the tool for axial color. In both cases, the distracting colors may disappear from the fringes, improving the overall image appreciation. However, the drop in resolution and microcontrast (which is not restricted to the fringe areas) remains.

Figure 4. Top row: A cross affected by lateral color. Bottom row: Improvements obtained in post-processing.

Lightroom’s treatment of lateral color is illustrated by Fig. 4. The top row shows a cross, placed in the top-left corner of the frame, photographed with three retrofocal wide-angle lenses at f/11. The first two lenses are rather poor designs with significant transverse CA, while the third lens is well corrected for this aberration. The situation after treatment is shown in the bottom row. The color fringing is less pronounced, but has not disappeared completely. Edge definition has not improved at all, and the treated images of the lesser lenses are still worse than the untreated image of the good lens.

3.4.3 Other aberrations

Spherical aberration, coma, astigmatism and field curvature blur the image in various ways. A single point in object space becomes a blur patch in the image, whose size and shape is generally a function of the position in the field. Chromatic variations of these aberrations further aggravate the situation. The cover glass on digital sensors also introduces aberrations, unless the lens design takes its presence into account. Slightly blurred image regions can be improved by ordinary sharpening techniques, but correction requires deconvolution, which is an exceedingly difficult task. The algorithm would need to know, or figure out, the PSF of the lens as it varies over the field, and how it varies with the object distance and wavelength.

There is speculation that manufacturers apply deconvolution in their proprietary (in-camera) algorithms, for instance to deal with the astigmatism introduced by the sensor filter stack. If true, it remains to be seen whether it works well under all conditions. Chances of success with third-party software suites are slim.

3.5 Defocus and motion blur

A focus error, or motion of the camera or the subject during exposure, yields a blurred subject. As with all types of blur, correction requires deconvolution. The restoration task is not as hopeless as it seems, because defocus blur, and some types of motion blur, can be fairly uniform over the field. They are also independent of the wavelength. The PSF may be reconstructed approximately, either by guessing or by measurement. Indeed, if the subject features bright object points in otherwise dark areas, the blur patch registered by the sensor is a direct measurement of the PSF. There are also blind algorithms, which try to figure out the PSF from scratch. Regardless of the method, it is difficult to avoid artifacts and noise in the restoration process. The following examples were processed in MATLAB [1] and BiaQIm [3]. In all cases, the PSF was approximated by a round disk of uniform intensity.
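
A minimal MATLAB sketch of this approach is given below, for a grayscale crop and a uniform disk PSF. The file name, the disk radius, the iteration count, and the noise-to-signal ratio are placeholders, not the settings used for Figs. 5C–5H.

    blurred = im2double(imread('defocused.png'));  % placeholder; a grayscale crop for simplicity
    psf = fspecial('disk', 15);                    % uniform disk PSF, assumed radius of 15 pixels
    restoredLR = deconvlucy(blurred, psf, 40);     % Lucy-Richardson, 40 iterations
    restoredW = deconvwnr(blurred, psf, 0.01);     % Wiener filter with an assumed noise-to-signal ratio
    imshowpair(restoredLR, restoredW, 'montage');  % compare the two restorations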

Figure 5A shows a center crop of a photograph of books on a shelf. This image serves as the ground truth for the restoration processes that follow. The image in Fig. 5B results from defocusing the camera lens. The PSF has a diameter of about 10 pixels, rendering the small print unreadable. An attempt to undo the blur is shown in Fig. 5C, in this case with the Lucy-Richardson algorithm from the image processing toolbox in MATLAB. Clearly the attempt is a big improvement, but there is also some ringing and noise.

Figure 5A. Subject in focus.
Figure 5B. Moderately blurred subject obtained by defocusing the lens.
Figure 5C. Deconvolution of the image of Fig. 5B with the Lucy-Richardson algorithm in MATLAB.

The image in Fig. 5D was obtained by further defocusing the lens. The diameter of the blur disk has grown to some 30 pixels, which blurs all text beyond legibility. Figures 5E through 5H give the restoration results for a few different deconvolution algorithms. The results are not as good as the result of Fig. 5C, which does not come as a surprise. The Lucy-Richardson and Landweber solutions arguably look better than the simple Fourier and Wiener filter approaches, but this comes at a price: see the processing times mentioned in the figure captions. These times are for the shown crop, not the entire image, and were measured on the same workstation. Neither the results nor the processing times should be seen as definitive characteristics of the tested algorithms. All methods have one or more input parameters, which may affect the performance and the run time. Figures 5E–5H are shown to give an idea of what typical deconvolution results can look like for a case with lots of blur. Perfect reconstruction is not feasible, but the readability of the big print can at least be restored.

Figure 5D. Blurred subject due to a generously defocused lens.
Figure 5E. Deconvolution of the image of Fig. 5D with a simple Fourier method programmed in MATLAB. (Processing time 0.25 s.)
Figure 5F. Deconvolution of the image of Fig. 5D with the Wiener filter in BiaQIm. (Processing time 8 s.)
Figure 5G. Deconvolution of the image of Fig. 5D with the Lucy-Richardson algorithm in MATLAB. (Processing time 33 s.)
Figure 5H. Deconvolution of the image of Fig. 5D with the Landweber method in BiaQIm. (Processing time 1200 s.)

Note that the shown crops are a bit smaller than the crops used in the processing, in order to get rid of boundary effects in the figures. Also note that the examples in this section concern pictures taken with an actually defocused lens. Restoration of these images is much more challenging than undoing synthetic blur.
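
The simple Fourier method amounts in essence to a division in the frequency domain. A minimal MATLAB sketch of that idea is given below (not necessarily the exact implementation behind Fig. 5E); the small regularization constant is an assumption of this sketch, and keeps the division in check where the transfer function approaches zero.

    blurred = im2double(imread('defocused.png'));  % placeholder; same grayscale crop as above
    psf = fspecial('disk', 15);                    % uniform disk PSF, assumed radius of 15 pixels
    otf = psf2otf(psf, size(blurred));             % optical transfer function at the image size
    reg = 1e-3;                                    % assumed regularization constant
    F = fft2(blurred);
    restored = real(ifft2(F .* conj(otf) ./ (abs(otf).^2 + reg)));   % regularized inverse filter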

To be sure, the amount of defocus blur makes the image of Fig. 5D a real challenge, but otherwise it is a relatively simple case. The book spines are in the same plane, perpendicular to the picture-taking direction, and a well-corrected lens was used at an aperture in the middle of its range. These conditions ensure that the blur disk is fairly uniform over the field, and allow it to be approximated by a disk of uniform intensity in the deconvolution algorithms. The following complications may arise under more general shooting conditions, where the algorithm is not only confronted with a blurred subject, but possibly also with intentional blur:

  • Object points at different distances from the lens yield blur disks with different sizes.
  • In case of a non-circular aperture, the orientation of the blur polygon is mirrored between foreground blur and background blur.
  • Optical vignetting and some aberrations cause the shape of the blur patch to vary over the field.
  • Aberrations affect the light distribution over the blur patch. Aspherical elements and diffraction may do the same via the onion-ring effect. And let us not forget the impact of dust particles on the blur disk.
  • When chromatic aberration and chromatic variation of other aberrations leave their fingerprint on the out-of-focus areas, the inverse problem has to be solved separately for the individual color channels.

A nitpicker might add that it is not possible to perfectly correct an accidentally defocused image in the first place, because defocusing also alters the perspective and the field of view. The resulting image is a blurred version of a different image than the in-focus image.

3.6 Diffraction

Diffraction causes blur, and correction thus requires deconvolution. There is certainly hope, since diffraction blur is well understood, reasonably predictable for a known f-number, and uniform over the field. Deconvolution sharpening of diffraction blur was previously the domain of offline processing, but, with ever-increasing processing power, in-camera compensation is becoming common practice. The bad news is that the downsides of diffraction still apply, and that the correction is only partial. The wavelength dependence of diffraction complicates matters, and accurate deconvolution also requires knowledge of the precise shape and orientation of the aperture. Worst of all, the deconvolution has to deal with an exceedingly large PSF, because diffraction also affects the contrast at low spatial frequencies (Fig. 6). The diffraction stars emanating from street lights in night photography illustrate that the blur area can be very large indeed. All image points radiate out in the same way, highlight or no highlight, except where the subject is pitch black. In post-processing one can hope to increase the resolution, but the reduction in contrast over larger spatial scales cannot be undone. The former requires deconvolution of the relatively small central region of the diffraction pattern, whereas the latter requires an algorithm that brings the entire diffraction star back to a single point. That is not going to happen anytime soon.

Figure 6. Diffraction-limited MTF for a circular aperture and green light.
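
A curve like the one in Fig. 6 follows from the standard closed-form expression for the diffraction-limited MTF of a circular aperture, MTF(ν) = (2/π)·[arccos(ν/νc) - (ν/νc)·sqrt(1 - (ν/νc)²)], with cutoff frequency νc = 1/(λN). A minimal MATLAB sketch, for green light and an assumed f-number:

    lambda = 0.55e-3;                              % wavelength in mm (green light)
    N = 8;                                         % assumed f-number
    nuc = 1 / (lambda * N);                        % cutoff frequency in cycles/mm
    nu = linspace(0, nuc, 500);                    % spatial frequency axis
    s = nu / nuc;
    mtf = (2/pi) * (acos(s) - s .* sqrt(1 - s.^2));   % diffraction-limited MTF, circular aperture
    plot(nu, mtf); xlabel('Spatial frequency (cycles/mm)'); ylabel('MTF');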

Figure 7 corroborates these presumptions. Panel A shows an unscaled center crop from a raw image captured with the EOS R6, shot with a 70-200 RF lens at 120 mm and f/4. It was saved from Digital Photo Professional (DPP) 4, with all possible corrections and sharpening turned off. Panel B shows the corresponding crop at f/32, but otherwise with the same settings. This crop is soft, and diffraction stars emanate from the two outdoor wall lights. Panel C shows the f/32 crop, after switching on the diffraction correction in DPP. There is a noticeable improvement, but the image quality is still far from the uncorrected f/4 image. DPP 4 also has a tool called digital lens optimizer (DLO), with a slider for an adjustable amount of compensation. Panel D shows the result for the maximal correction of 100%, which has a larger effect than the simple tool for diffraction correction (which corresponds to about 50% on the DLO scale). The legibility of the small print on the car has been improved, but there is also increased noise. Clearly, DLO is using deconvolution to combat diffraction, achieving an increase in resolution at the expense of artifacts and noise. As expected, it works with the central diffraction region and stands no chance of removing the diffraction stars.

Figure 7. A: Unscaled center crop at f/4. B: Same crop at f/32. C: f/32 crop after diffraction correction in DPP 4. D: f/32 crop with the DLO at 100%.

3.7 Aliasing

When the lens casts an image containing energy at spatial frequencies beyond the Nyquist frequency of the sensor, and the sensor lacks an anti-alias filter (AAF), aliasing occurs. The optical power associated with these high frequencies is redistributed over the frequency regime below Nyquist, adding to the valid power at those lower spatial frequencies. Aliasing affects areas with sharp edges or fine detail, i.e. wherever the image has high-frequency components. Artifacts are plentiful and include jagged slanted edges (the staircase effect) and crunchy-looking vegetation. Aliasing of a regular microscopic pattern may lead to a macroscopic pattern, known as moiré. Sensors with a Bayer color filter array (CFA) have more aliasing in the blue and red color channels than in the green channel, leading to colored moiré patterns. Color moiré is not unique to Bayer filters, however, and also depends on the demosaicing algorithm.
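
The frequency folding can be demonstrated with a one-dimensional toy example: a sinusoid above the Nyquist frequency, sampled without a low-pass filter, is indistinguishable from a sinusoid below it. A minimal MATLAB sketch with arbitrarily chosen frequencies:

    fs = 100;                                      % sampling rate in samples/mm (Nyquist = 50)
    f0 = 70;                                       % signal frequency above Nyquist, chosen as an example
    x = (0:499) / fs;                              % sample positions
    s = sin(2*pi*f0*x);                            % sampled without an anti-alias filter
    alias = -sin(2*pi*(fs - f0)*x);                % a 30 cycles/mm wave that hits the same values
    max(abs(s - alias))                            % ~0: the two are identical at the sample points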

The staircase effect is illustrated by Fig. 8. The left crop shows a window with blinds, captured on a sensor with an AAF. The middle crop shows the same scene, photographed with the same lens, on a sensor with the same resolution, but without an AAF. This sensor also has a different CFA, X-Trans instead of Bayer, which is not equally well handled by all raw converters. For comparison, the third crop shows the same X-Trans capture, but in a different raw converter. The capture without an AAF looks crisper, but the crispness comes at the expense of jagged blinds. Although the X-Trans CFA is an uncertain factor, the artifacts have all the characteristics of aliasing. Chances are slim to none that the blinds are jagged in reality, and in any case Fuji’s claim that X-Trans does away with the need for an AAF does not hold up.

Figure 8. Image crops from the X-A1 in Lightroom (left), the X-E2 in Lightroom (middle), and the X-E2 in Photo Ninja [4] (right).

Aliasing is a completely different beast from lens faults. Correction of lens aberrations requires knowledge of the point spread function of the lens, whereas correction of aliasing requires knowledge of the subject matter. Automated algorithms have no way of telling aliasing artifacts apart from valid detail, not even in theory. (“Are these stripes moiré, or is this the actual motif of the curtains?”) Available tools are limited to a symptomatic treatment of moiré. Algorithms differ between software suites, but desaturation is often included. The improvement can be tangible, but valid detail inevitably suffers in the process. Downsizing an image does not eliminate the problem, as it does with blur, because moiré typically occurs at the spatial frequencies of relevance to the smaller image.
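
A symptomatic treatment along these lines can be sketched in MATLAB by smoothing and desaturating the chrominance channels inside a user-supplied mask. The file name, the mask location, and the strength are assumptions, and real tools are considerably more sophisticated.

    img = im2double(imread('moire.jpg'));          % placeholder file name
    lab = rgb2lab(img);                            % work on chrominance, leave luminance alone
    mask = false(size(img,1), size(img,2));
    mask(100:300, 200:400) = true;                 % assumed location of the moiré pattern
    for c = 2:3                                    % the a* and b* channels
        smoothed = imgaussfilt(lab(:,:,c), 4);     % blur the chroma channel
        ch = lab(:,:,c);
        ch(mask) = 0.5 * smoothed(mask);           % desaturate and smooth inside the mask
        lab(:,:,c) = ch;
    end
    treated = lab2rgb(lab);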

Figures 9A–9C show an example of color moiré and two restoration attempts. The subject is a newspaper photograph, whose microscopic halftone raster gives rise to an aliasing pattern of green and purple stripes. Use of the moiré brush in Lightroom desaturates the stripes, but there is significant collateral damage as the algorithm also attacks valid colors. Moreover, the stripes remain visible as a pattern of alternating brightness. Neat Image [5] does a much better job at removing the patterning, in this case at the expense of a hazy softness. There is less desaturation compared with Lightroom, but in the absence of a ground truth it is not possible to tell whether the restored colors are accurate.

Figure 9A. Crop of a newspaper reproduction with an Otus 1.4/55 on an A7r. (Image courtesy of 3d-kraft.)
Figure 9B. Treatment of the image of Figure 9A with Lightroom’s moiré brush at 50%.
Figure 9C. Treatment of the image of Figure 9A with Neat Image Pro V7.6. (Restoration work by John Michael Leslie.)

4. Verdict

Signal processing can be used to deal with imperfections in digital imaging. In most of the cases discussed, the image can at least be improved. There are good reasons why manufacturers are increasingly allowing more vignetting and distortion in their lens designs. These faults are among the easiest to correct, and their presence in the design can be exploited to reduce aberrations that are much more difficult to correct after the fact. Lens specifications such as resolution, maximum aperture, or zoom range can be improved, or the lens can simply be made smaller, lighter, and cheaper to produce. There are of course limits, however. As the amount of vignetting and distortion increases, full correction is no longer possible. A low corner SNR cannot be corrected in the case of vignetting, and while the distortion itself may be correctable, the resolution will suffer from the image stretching.

One should not hesitate to use the tools at one’s disposal, and the end result is good if the user is happy. At the same time it is clear that genuine correction of many image faults is not possible. A software tool is not always a substitute for a good lens and good technique. Uncompromising image quality can be approached without digital corrections, but this comes at the expense of a premium price and a heavy camera bag. One cannot have it all.

As to aliasing, it suffices to say that an image free of artifacts is necessarily a bit soft at the pixel level.

5. References

[1] MATLAB, http://www.mathworks.com
[2] Adobe Lightroom, https://lightroom.adobe.com/
[3] P. J. Tadrous, BiaQIm image processing software, http://www.bialith.com (version 2.9 alpha, 2011).
[4] Photo Ninja, http://www.picturecode.com/
[5] Neat Image, http://www.neatimage.com/