Monday, 28 December 2015

Forensics - Femto-photography




In a photo we may marvel at the frozen movement of a bird, captured in one thousandth of a second.  High-speed cameras have existed for decades that can capture a speeding bullet in just one millionth of a second.  Take the iconic image above, captured by Harold "Doc" Edgerton in 1964.  Imagine if we could freeze not just the movement of our subject in time, but also the movement of light itself, with an infinitesimally short exposure time.

Light travels at a speed just shy of 300,000,000 meters per second.  How fast would a camera shutter have to be to truly stop light in its tracks?  Not one millionth, or one billionth, but one trillionth of a second!  I invite you to watch this amazing TED Talk by one of the discoverers of a new field in science and photography.  Here is Prof. Ramesh Raskar, on Femto-photography.


Potential in Birding

Biometrics
In the posting about Measurements From Photographs I discussed the difficulties involved in making correct size measurements from 2D images.  3D technologies would seem to be the answer and the femto-camera could be one way of going about it.  Where a bird is moving about, it may be possible with current technology to generate a 3D image through a composite of 2D grabs, as outlined HERE.  However femto-photography goes a step further.  The ability to effectively scan around corners by bouncing light off objects creates interesting options to allow an observer to generate a 3D image without having to change the position of the camera or subject.  Working at near the speed of light, such a powerful camera might potentially scan a scene in 3D using multiple exposures in a fraction of the time it would take a modern DSLR to generate a sharp 2D image.  How long before this kind of technology becomes reality in the field?

Light and Materials
The ability to study how light interacts with materials will teach us a lot.  Light is absorbed, reflected and transmitted through different anatomical structures on the surface of a bird.  Imagine being able to watch single photons of light interact with an iridescent, structural colour on a bird's feather, or watch a photon bounce around inside a bird's beak illuminating its internal structures before passing through.

For more visit the dedicated web-page HERE.

Monday, 23 November 2015

Birding Image Quality Tool - Rev. 4.0 Field Marks

For the seasoned birder in the field many an initial identification may be based on hearing a call or knowing a bird's distinctive gestalt.  But if you stop and take a critical look at any bird, and certainly if you need to identify a bird from a photograph, field marks play a big part in the identification process.  For clarity I like to consider field marks as a bird's distinctive markings and colours alone.  Sometimes size and shape (broadly morphology) are also considered a part of what defines the term field marks.  But I like to keep a bird's markings and its morphology separate for the purposes of this blog.  I cover morphology under the heading of gestalt.

In the blog I have a page devoted to the subject of field marks called A Spotlight - On Field Marks.  This year I have spent a good deal of time considering field marks for the purposes of identification from bird images.  I have concluded that there are two basic classes of field marks - The Bold and The Bland.  The crucial distinction between the two is that bold markings and colours can be appreciated in even the worst of images because they exhibit characteristics that make them stand out under pretty much all observational and photographic conditions.  Obviously bold field marks perform some vital signalling function, so it is not too surprising that even in the worst viewing or photographic conditions they hit the mark almost every time.  Take for instance the bright, contrasting fresh adult scapulars and coverts of this 1st calendar year European Turtle Dove.  Compared alongside the faded, diffuse brown gradient at the centre of the older juvenile feathers, these newer feathers create a bold impact.  The older feathers are clearly bland by comparison.  They probably form part of the bird's camouflage, and therefore not surprisingly they evade the camera just as effectively as they do the eye.  My analysis of the bold versus the bland has been consistent, whether the problem is image resolution, focus, exposure, colour accuracy, artefacts, or all of the above.




So, having recently expanded the Image Capture Quality Tool to include a tool that measures the quality of image lighting and another that measures the accuracy of colours, is it perhaps possible to generate a tool whose purpose is to gauge the overall quality of field marks captured in an image?  I believe it is, and here is my first stab.


Essentially what I have done here is build upon the three other tools with a simple analysis of the effective capture of bold markings and bland markings.  Unlike the other tools, however, I have allowed the operator to deselect anything that may not be relevant to the analysis.


Bold Field Marks and Bland Field Marks
Your subject may be all bold (such as this stunning male Moussier's Redstart on the left), in which case the analysis of bland field marks is not applicable and can be deselected from the analysis.  Or vice versa - this male Trumpeter Finch on the right, though no less stunning, consists of subtle field marks (apart perhaps from the bill, which could be considered bold).  So, one may decide to exclude bold field marks from the analysis if one so chooses.

Lighting and Colour
Lighting is critical to the accurate capture of field marks in bird images.  After all, light and shade can easily mimic the impression of a field mark to confuse the unwary (eg. HERE).  The Lighting Quality Tool captures all the key elements, from lighting quality, direction and shadows to dynamic range issues and the effect of multiple lighting sources.  Colours can be of significance in some identifications but less so in others.  So once again I have given the option to exclude colour from the analysis if it is deemed more helpful to do so.  On the other hand, where colour analysis is critical to an identification the Colour Quality Tool provides a very good measure of accuracy.

Conclusions
Broadening the Image Capture Quality Tool out to include additional critical analysis tools has allowed me to draw a line under much of the work of the last two years in this blog.  The purpose of the blog has been to work on a manual to assist birders who are interested in identifying birds from photographs.  These tools aid that effort by getting the observer to focus on those factors that confound an identification, be it a problem with how an image was captured, or the lighting, or how accurately the colours and details have been expressed in the image.

Once again, all that remains now is to provide you with the tools so you can play around with them and have a go at scoring the quality of your own bird images.  Feedback is as always much welcome and appreciated.

(Note you will have to download the file and open it in MS Excel for the tool to work properly).

In the example below image capture was very good at a score of 98%.  Particularly good for an old digiscoping camera (the classic workhorse Nikon Coolpix 4500).  Lighting wasn't bad but bright sunlight does create problems such as blooming artefacts, clipping and shadows.  So the score was lower at just 75%.  Though the colour quality looks quite good, as the image is a jpeg that has undergone a certain amount of manipulation, including manual brightness, contrast and manual white balance adjustment, colour reliability is really quite poor overall.  This is reflected in a score of just 40%.  However all is not lost.  When assessing the overall quality of the field marks capture it could be argued that accurate colour isn't all that important for this particular species.  Booted, like a lot of Old World warblers, is a fairly bland species.  So I have discounted both the bold field marks and colour quality elements of the field mark test.  Field marks are therefore scored on the basis of the bland field marks, the overall image capture quality and the overall lighting quality, yielding a pretty good score of 91% overall.
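For anyone curious how such a composite figure falls out, a simple equal-weight average of whichever parameters remain selected reproduces the 91% above.  This Python sketch is purely illustrative - the spreadsheet's actual weightings may differ, and the 100% bland field mark score is my assumption:

```python
def field_mark_score(components):
    """Average only the parameters left selected (None = deselected)."""
    selected = [s for s in components.values() if s is not None]
    return round(sum(selected) / len(selected))

# Bold field marks and colour deselected for this bland warbler.
scores = {
    "image_capture":     98,
    "lighting":          75,
    "colour":            None,
    "bold_field_marks":  None,
    "bland_field_marks": 100,   # assumed: the bland marks captured well
}
print(field_mark_score(scores))   # 91
```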

There is subjectivity in deciding whether a bird should be scored on the basis of all five field mark parameters.  At least by presenting all the data everyone can see how the score was arrived at.  I hope you find these tools of use not just for assessing your image quality, but also for drawing your attention to parameters that you might often overlook or take for granted when studying bird images.

Friday, 20 November 2015

Birding Image Quality Tool - Rev. 3.0 Colour

Having recently 'bolted on' a Lighting Quality Tool to the Image Capture Quality Tool I figured it was as good a time as any to drive on with a third tool, this time devoted to Colour Quality.


As with the Lighting Tool, the Colour Tool is in effect a summary of the findings of the Spotlight - On Colour thread and a way of drawing a line under that chapter.  And, as with the other tools, the Colour Tool attempts to provide birders with a representative, quantitative tool for analysing colour quality and accuracy in your bird images.

Before I start, of course, I have to point out once again that digital colour is only a representation of natural colour.  Of the potentially infinite array of colours produced by light, humans can only perceive a certain colour gamut.  Within this range digital cameras can only capture a much smaller gamut of colours.  Finally, within that cluster of colours we have a smaller gamut again called sRGB colour space.  Most digital imaging devices, including cameras, display screens, scanners and printers, operate in sRGB for the most part.  sRGB is also the colour gamut used by the internet.  So that is the colour space to which I have restricted myself in this blog.



The colour parameters which I have selected for the tool all narrow down the accuracy with which colours are captured and selected within this sRGB colour space so as to approximate, as closely as possible, the colours captured in nature.  After all, we can expect no more than this from our camera equipment.

Sensor Calibration
No two cameras are identical.  Due to slight variations in the way individual camera lenses, sensors, filter arrays and processors capture colours, every camera is unique.  This is our first stumbling block on the road to 'accurate colour'.  Professionals use a tool called the X-rite (formerly Gretag-Macbeth) Colorchecker Passport to get over this first hurdle.  The Colorchecker is a standard calibration tool.  Having photographed the Colorchecker in RAW at 100 ISO, the photographer uses software to assess the performance of the camera setup.  From this a special colour profile is created (called a DNG Profile) which can then be used to correct for any slight variations in the camera setup when compared with a recognised professional colour calibration standard.  This profile only needs to be created once for a given camera, lens and lighting setup.  Afterwards, any time a RAW file is opened in a RAW workflow the DNG profile can be selected and this will automatically calibrate the colours in the image to that recognised standard.  For anyone interested in bench-marking and analysing colours from their images this tool is a 'must-have'.  For more on DNG profiles see HERE.

White Balance Calibration
In theory a DNG profile should be the only calibration needed to capture colours as accurately as any camera can.  The problem is, as humans we don't see the world quite as it actually is in nature.  Sunlight is ever-changing owing to the position of the sun in the sky.  Humans can correct for this changing light using a white balance adaptation.  We also use this to correct for the unnatural colour of artificial lighting indoors.  Camera manufacturers needless to say aim to produce images which match the world as the human eye sees it, so cameras are equipped with white balance correction.  Unfortunately cameras are not as adept at this skill and frequently get this calibration wrong.  The only way to be absolutely certain the camera has corrected white balance appropriately is to use another calibration tool called a Grey Card.  White balance correction then becomes the second prerequisite for accurate colour capture.

Of course white balance correction can be closely approximated, particularly if there is some reasonably neutral grey in an image.  But this approach can be a bit 'hit and miss', particularly if it is being done by eye and particularly if the display screen used is not itself perfectly neutrally calibrated.  I have made an allowance for manual white balance correction with the Colour Quality Tool, the caveat being that one would hope the observer is exercising caution and that the correction is reasonably accurate.


The X-rite Colorchecker comes with a colour grid (shown above) also containing two neutral grey patches.  This panel flips over to reveal a large neutral grey card beneath, so the passport caters for this dual purpose of DNG profiling and white balance correction.  The white balance of this image from the Collins Bird Guide second edition was created very simply by placing the white balance eye-dropper cursor in Adobe Elements over one of the neutral grey squares and clicking the mouse.  Having also corrected the colours using the camera's DNG profile the colours of this image are now matched to a professional standard.  Note how pure and saturated the colours are and how neutrally white the pages appear.  For more on white balance see HERE.
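For the curious, that grey-card eye-dropper click essentially rescales the three colour channels so that the sampled patch comes out neutral.  A rough Python sketch of the idea (the sample values here are invented for illustration):

```python
def white_balance(pixels, grey_sample):
    """Scale each channel so the sampled grey-card patch becomes neutral.

    pixels: list of (r, g, b) tuples; grey_sample: the (r, g, b) reading
    taken from the grey card in the image.
    """
    r, g, b = grey_sample
    target = (r + g + b) / 3.0                   # neutral level for the card
    gains = (target / r, target / g, target / b)
    return [
        tuple(min(255, round(c * k)) for c, k in zip(px, gains))
        for px in pixels
    ]

# A warm cast: the grey card reads reddish at (180, 160, 140).
print(white_balance([(180, 160, 140)], (180, 160, 140)))  # [(160, 160, 160)]
```

After correction the card itself reads perfectly neutral, and every other pixel in the image receives the same channel gains.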

Image Manipulation
The next challenge is that, having corrected colours as accurately as possible, the temptation might be to start manipulating an image further to correct for slight lighting, shadow, or exposure issues.  If this is done carefully and with a great deal of attention it is certainly possible to improve an image and draw closer to accurate colour.  But it can just as easily go wrong and lead the observer away from the target objective.  Where the only manipulation of a RAW image is its DNG profile and white balance correction that is considered a perfectly acceptable manipulation, as this is the minimum correction needed to calibrate colours.

If an image on the other hand requires some additional lighting, hue or saturation manipulation to try and draw out representative colours there is a risk of deviating on quality, so the image scores lower in the Quality Tool.  If the image being measured is not a RAW file at all but a lossless image file format like a PNG file then manipulations are going to result in some clipping of image colour data, so once again the quality score is affected even more.  Lastly, if the image being manipulated is a lossy file such as a JPEG image then the impact on colour quality is greatest, so all forms of manipulation will damage colour accuracy and drive the colour quality score down to its lowest setting.  So to achieve the maximum quality score the goal should be to capture a good quality exposure that requires little or no manipulation other than colour calibration in the RAW workflow.
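The reason manipulation is so destructive in an 8-bit file is easy to demonstrate: once values are pushed past the 0-255 limits they clip, and reversing the adjustment cannot bring the lost detail back.  A toy Python example (the gain value is arbitrary):

```python
def adjust_brightness(channel, gain):
    """Apply a gain to 8-bit values; anything pushed past 255 clips."""
    return [min(255, round(v * gain)) for v in channel]

highlights = [200, 220, 240, 250]          # distinct highlight tones
brighter = adjust_brightness(highlights, 1.3)
print(brighter)                            # [255, 255, 255, 255] - detail gone
print(adjust_brightness(brighter, 1 / 1.3))  # [196, 196, 196, 196] - not recoverable
```

Four distinct tones collapse to one, and undoing the gain leaves a uniform grey where there was once highlight detail.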

Lighting Quality
Lighting has a huge impact on colour capture.  Simply take the score obtained using the Lighting Quality tool and apply it here.  The lighting tool captures everything from the quality of scene lighting to lighting direction and shadows on the subject to dynamic range and multiple lighting issues.  The best lighting conditions overall provide the best colour capture.  For more on the Lighting Quality Tool see HERE.

Sample Point Quality
Last but not least we have to consider the quality of the sampling method.  Since coming up with an effective way to sample colour (see HERE) I have completed an analysis including coming up with an effective quality control method for choosing appropriate sample points (HERE).  It simply involves using artificial defocus to test sample homogeneity.


I have also tested the effect of varying image resolution on the effectiveness of the sampling method (HERE).  So far the analysis points to a sampling method that is very robust.

Summary and Conclusions
By gathering together various elements that define accurate colour capture and presenting them here as an Image Quality Tool I can now draw a line under this chapter.  Along the way I have learnt an awful lot about colour in birds and the processes involved in human and digital imaging.  No doubt I will continue to add further insights to this thread and may update the quality tool along the way.

It is important to stress that the calibrations referred to here manipulate the image content to reveal accurate colours.  Each individual colour pixel contains an RGB value, and these values can be identified using any standard photo editing software (eg. MS Paint).  However there is one more calibration required to view your images properly.  Correct visual presentation of colour on a screen or printer depends on the quality of that device and its accurate calibration.  Obviously if you decide to bring your images to a lab to have them printed you are relying on the lab to have properly calibrated its printer.  If you decide to print them at home using a high quality inkjet printer at least you have ultimate control over its calibration.

Having calibrated and sampled the colours of an image the obvious next step is to put a name to each colour.  You could simply decide to name the colours subjectively.  However, after having gone to so much trouble to calibrate your images at every step, why leave the final stage to chance?  Through the blog I have developed the Birder's Colour Pallet as a standard reference tool for colour nomenclature in the sRGB colour space.  Using RGB values it is possible to assign a name using a scientifically repeatable methodology to any colour in your image.  The real beauty of this approach is that your display device doesn't even have to be calibrated.  The RGB values remain the same regardless of how they are displayed on your screen.  For more on the Birder's Colour Pallet see HERE.
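As a sketch of how such a nomenclature lookup might work, the nearest pallet entry can be found by simple distance in RGB space.  Note the pallet entries below are invented placeholders, not the actual values from the Birder's Colour Pallet:

```python
# Placeholder reference entries only - not the real pallet values.
PALLET = {
    "buff":       (222, 184, 135),
    "slate grey": (112, 128, 144),
    "rufous":     (168,  70,  48),
    "olive":      (128, 128,   0),
}

def name_colour(rgb):
    """Return the pallet name nearest to rgb by Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALLET, key=lambda name: dist2(PALLET[name], rgb))

print(name_colour((170, 75, 50)))   # a reddish brown lands on "rufous"
```

Because the lookup works purely on RGB values, it gives the same answer no matter how (or whether) your screen is calibrated.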


All that remains now is to provide you with the tools so you can play around with them and have a go at scoring the quality of your own bird images.  Feedback is as always much welcome and appreciated.

(Note you will have to download the file and open it in MS Excel for the tool to work properly).



Tuesday, 17 November 2015

Colour - Saturation Finally Explained

Not for the first time I stand corrected!  I have found it hard to get hold of a clear explanation for colour saturation.  Sure it's easy to visualize colour saturation when we see it illustrated graphically, as in this simple depiction below.  Saturated colour is rich and pure in appearance while desaturated colour looks washed out, fading eventually to greyscale.  But what actually is colour saturation and how is it measured by the camera?


In earlier postings I fell into the trap of assuming that colour saturation is not actually measured at all by the camera but is one of the camera pre-sets.  This statement is partially true.  Saturation, like contrast, is one of the pre-sets that is laid down as the raw image data is converted to JPEG.  It is also one of the settings that needs to be adjusted in a RAW workflow using Camera Raw or whatever you have.

It is also evident from the design of the camera sensor that there are only two elements used to measure colour.  Each photosite directly measures luminance, one of the three parameters that defines a colour.  Then we have the Bayer filter array.  In the majority of cameras the Bayer filter array is the critical element that defines colours accurately in digital images.  Without it the image would be black and white.  Over each photosite sits either a green, a blue or a red filter.  The filter blocks all of the visible spectrum apart from the region of the spectrum that corresponds to that filter.  So in effect each photosite measures light intensity over a limited region of the visible spectrum.  So, how does the camera actually decide based on this limited information what the hue and saturation for a given pixel should be?


The answer is interpolation, or more specifically de-mosaicing.  Each pixel on the final image does not correspond to a single photosite on the digital sensor but rather to a cluster of neighbouring photosites, generally two green filtered and one each of adjacent red and blue filtered photosites.  This starts with the creation of a Raw Bayer Image as illustrated below.
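The simplest possible illustration of this is the 'superpixel' approach, where one 2x2 cluster of photosites - one red, two green, one blue - is collapsed into a single RGB pixel.  Real demosaicing algorithms are far more sophisticated, but this Python sketch captures the basic idea:

```python
def superpixel(quad):
    """Collapse one 2x2 RGGB cluster - (R, G) over (G, B) - into a single
    RGB pixel by keeping R and B and averaging the two green photosites."""
    (r, g1), (g2, b) = quad
    return (r, (g1 + g2) / 2, b)

# A dull, desaturated green registers on all three filter colours:
print(superpixel(((90, 140), (150, 80))))   # (90, 145.0, 80)
```

In practice demosaicing interpolates overlapping neighbourhoods so that every photosite yields a full RGB pixel, rather than quartering the resolution as this crude version does.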


So, what is Colour Saturation Exactly?
Having been researching colour for some time I kept finding the trail going a little cold at this point in the journey.  Then, finally, I found a proper explanation for what colour saturation actually is, and it all started to fall into place.  When we look at a depiction of a saturated colour compared with a desaturated equivalent it is easy to assume that saturated simply means 'more of the same colour'.  That is kind of true, but more specifically saturation means 'purer colour'.  When we analyse the spectral distribution of a colour in nature what we find is that the colour is often made up of a range of different colour wavelengths, not just the wavelength that corresponds most to the colour we see.  We are all familiar with the concept of mixing paints to create different colours.  Colour is itself a mixture of different quantities of other coloured wavelengths.  Saturation is a measure of the purity of the most dominant wavelength.  If a colour is almost entirely made up of one wavelength of light (eg. a laser) it will appear richly saturated.  If however the colour consists of one dominant wavelength plus a lesser amount of a range of other wavelengths then that colour may still have the same dominant hue but it will appear less saturated, i.e. less pure.  What an extraordinary and somewhat counter-intuitive revelation!  And yet I kept missing this vital point while researching colour and saturation.  I finally found all of this neatly explained by the experts at Cambridge in Colour (once again) under their tutorial on Colour Perception HERE.

How do cameras measure colour: Luminance, Hue and Saturation?
So it's all finally fallen into place.  Cameras measure luminance directly at each photosite based on the number of photons collected.  Then, taking the colour information gathered from the green, red and blue filters the camera can measure both hue and saturation to a high degree of accuracy.  If for example the object being photographed is a pure, saturated green colour then only the green filtered photosites will likely register an image of it.  If however the object is a dull desaturated green then chances are its spectral signature will register to a greater or lesser extent across all three colour filtered photosites.  During interpolation this data can be combined to identify both the correct hue and saturation level for that colour.
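This relationship is easy to play with in code.  Python's standard colorsys module converts an RGB triplet into hue, saturation and value, and shows how 'diluting' a pure green across the other channels lowers its saturation without changing its hue:

```python
import colorsys

def saturation(r, g, b):
    """HSV saturation of an 8-bit RGB colour: 0 is grey, 1 is fully pure."""
    _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(s, 2)

print(saturation(0, 255, 0))      # 1.0  - a pure, laser-like green
print(saturation(120, 180, 120))  # 0.33 - same green hue, diluted, less pure
```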



Saturation pre-sets and other post-processing 
The story doesn't end there.  Most photographers would agree that digital images are not as well saturated as equivalent film or slide images.  Manufacturers leave the choice of saturation preferences up to the photographer.  Those who like more saturation in their images can select a stronger saturation pre-set if they prefer.  Different camera manufacturers are likely to put their own non-adjustable pre-sets in place depending on exposure and other factors to boost the overall quality of the images offered.  So there is probably a limit to how well we can really analyse the true saturation of colours based on digital images.  Hopefully though the overall accuracy is good enough for our purposes.

Colour management is a process that aims to maintain accurate colour from the scene to the camera or scanner, to the screen and ultimately to the printer.  Doing this properly takes a great deal of effort and starts with the proper calibration of the camera sensor.  No two cameras generate colours exactly the same way.  Sensors vary slightly and are not calibrated by the manufacturer to any globally recognised standard.  For some reason manufacturers don't think this is important.  The most well recognized tool needed to calibrate a colour sensor to a standard is the X-rite (formerly Gretag) Colorchecker Passport.  Meanwhile, in the field the human eye is constantly adjusting to changing light temperature (white balance).  The camera also tries to compensate accordingly but often fails in that regard.  Once again there are ways to correct camera white balance properly and to a recognized standard.  I have discussed these various processes in detail HERE.

In summary
So, once again I have finally jumped another stubborn hurdle on my journey of discovery.  I have found colour saturation to be one of those conundrums that didn't quite fit in to my understanding of light and of digital photography.  Having established that the key word in the definition of colour saturation is 'purity', finally it all now makes sense.  The camera can indeed measure very effectively all three parameters that go to make up what we call a colour.

Saturday, 14 November 2015

Forensics - Bit Depth Limitations

Contrast Ratios and Bit Depth
If you think about it, the only meaningful way to describe the brightness of light, other than by direct measurement, is by making comparisons between different brightness levels.  We experience and refer to this difference as contrast.  An obvious question might be, how well do humans perceive a subtle change in brightness?  Human sight is arranged to buffer or mask global and local changes in lighting.  On entering a building from outside for instance the eyes adjust almost instantaneously and imperceptibly to the large drop in brightness by widening the aperture or iris of the eye.  The indoor environment might still be perceived as just a little darker than the outside environment, perhaps still visible through windows of the room.  In reality the brightness indoors is only a small fraction of the brightness outside.  This difference can be measured with a light meter or lux meter.  Typically illuminance might be 400 lux in a bright room but as high as 100,000 lux on a bright day outside.  This represents a 250-fold difference.  We don't tend to perceive the gap as being quite so large.
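In photographic terms that gap is easiest to appreciate in stops, each stop being a doubling of brightness.  A quick calculation:

```python
import math

indoor, outdoor = 400, 100_000      # illuminance in lux, from the text above
ratio = outdoor / indoor            # the 250-fold difference
stops = math.log2(ratio)            # each stop doubles the brightness
print(f"{ratio:.0f}-fold difference, about {stops:.1f} stops")
```

So the room is roughly eight stops darker than the sunlit street, yet our adaptive vision barely registers the jump.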

While humans may not be good at detecting subtle changes in global lighting because of this adaptation, at the same time when faced with an image we can be very good at detecting very subtle brightness differences at the local level.  By placing tonal swatches beside one another and asking observers to detect a difference in brightness it should be relatively easy to assess the capabilities of human vision.  The CIE have been studying these capabilities for almost 100 years and, as a result, we know quite a lot about the capabilities and limitations of our eyesight.

The manufacturers of digital display devices obviously have a great interest in the capabilities of human vision and how to provide the clearest, sharpest and most efficient imaging technology.  Rather surprising then that there is no accepted standard covering all of this (see HERE), allowing manufacturers of TVs and other screens to exaggerate the capabilities of their products.  But surely there must be some benchmark?  What if the ability to perceive detail on a screen was a matter of life or death?  In the medical industry display devices must meet certain standards.  I found this useful reference while looking at medical imaging devices.  Using a term called "just noticeable difference" (JND) it turns out that most people can detect a minimum of 720 different shades of grey on medical displays which have an output brightness range of between 0.8 and 600 cd/m2.  So when deciding what bit depth to apply to these screens 8-bit (256 shades only) is clearly not taking enough advantage of the available contrast ratio, while 16-bit (65,536 shades) goes way beyond what is required, as humans simply cannot perceive all of these shades at once on those devices.  The optimum, most efficient bit depth for these medical devices is therefore 10-bit, or 1,024 distinct shades.  So there are just enough distinct shades to cover the range of human sight for the device without adding any unnecessary bandwidth and cost.
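The arithmetic behind these bit depth choices is straightforward - each extra bit doubles the number of available shades:

```python
# Shades available at each bit depth, compared with the ~720 just
# noticeable differences reported for high-brightness medical displays.
JND_SHADES = 720

for bits in (8, 10, 16):
    shades = 2 ** bits
    verdict = "enough" if shades >= JND_SHADES else "too few"
    print(f"{bits:>2}-bit: {shades:>6} shades ({verdict})")
```

10-bit is the first power of two past 720, which is exactly why it is the efficient choice for these displays.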

So, what about our average computer or smart phone screen?  Once again, for practical purposes the standard devices that we use every day don't currently need a very high bit depth.  For a start these screens do not have as high a brightness or contrast range as a medical screen.  Therefore the number of effective JND shades is much lower than the minimum 720 shades possible with a medical screen.  In actual fact on typical computer screens used in typical indoor settings the average person is not going to be able to discern much more than 100 distinct shades at any given time.  As illustrated below 5 or 6-bit depth is not quite enough for a typical computer screen but 8-bit is certainly plenty.  It's no coincidence then that the Internet runs on 8-bit sRGB colour space.  If ever the public developed an appetite for higher bit depth images the whole industry from screen manufacturers to computer manufacturers and Internet providers would all have to increase bandwidth to cope with the extra demand.  This obviously all adds significant cost at every stage and so it probably won't happen any time soon.

In the meantime we can still take advantage of the higher bit depth captured in our digital RAW images by playing around with brightness, contrast and other lighting settings in Camera Raw and other programs as discussed HERE and HERE.


Digital Image Encoding, 12-bit and 10-bit
As indicated, all forms of standard digital imaging technology from digital cameras and scanners to screens and printers are set up to capture and/or display images in 8-bit.  Most high end digital cameras however encode images in 10-bit or 12-bit instead of 8-bit.  Why?  The answer is gamma, that adaptation of human vision where humans perceive a greater tonal range in the shadows than in the highlights.  Digital sensors record and encode digital image data linearly.  This data is later subject to a gamma correction for sRGB colour space.  In order to ensure there is enough tonal data to accommodate gamma correction without resulting in noticeable posterisation in the shadows, as much as 3.5 bits of additional tonal data must be captured and encoded.  Hence the 10- to 12-bit encoding.  For more on gamma correction see HERE and HERE.
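The sRGB gamma curve itself is published and simple to compute.  Notice how steep it is near black: a narrow slice of linear shadow values is spread across a wide range of output codes, which is exactly why extra bits are needed before the correction is applied:

```python
def srgb_encode(linear):
    """Standard sRGB transfer function: linear light in [0, 1] to encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# The bottom 5% of linear values ends up occupying roughly the bottom
# quarter of the encoded (and ultimately 8-bit) range.
for v in (0.01, 0.05, 0.5, 0.9):
    print(f"linear {v:.2f} -> encoded {srgb_encode(v):.3f}")
```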

Wednesday, 4 November 2015

Birding Image Quality Tool - Rev. 2.0 Birds and Light

When I initiated this blog in early 2014 I started with the concept of an Image Quality Tool for birders to use to quantify the overall quality of bird images based on key image capture quality parameters including resolution, focus, exposure, colour and artefacts.  Soon after I began to explore the first of my in-depth analyses, namely on the subject of Birds and Light.    The journey has been rewarding and I think it's time to gather the main findings and put them to some use.  One way I have chosen to do this is to augment the image quality tool with another quantitative tool devoted to scene lighting quality.


Once again I have chosen a series of parameters intended to provide a good overall representation of the subject.  Here I will go into some detail on each of these parameters and direct readers to some relevant postings.

Light Quality
There are a number of things that determine the quality of light, perhaps the most important of these being the sun's position in the sky.  As we rapidly head towards winter solstice the light here is already starting to decline in quality.  As I type it is roughly 1.00pm on a very unusually balmy Irish November day.  The sun is as high in the sky as it will get and yet there is a very noticeable yellow tint to the light.  By this hour in mid-summer the sun would be much higher and the light would be harsh, crisp and strongly white.  From here until early spring there won't be too many days in the field when lighting quality will be quite as reliable as it was mid-summer.

I really enjoyed constructing this animated gif.  It represents the changing quality of light during the year.  Imagine you are lying on your back with your head pointing north and your feet south, watching the sun trail across the sky for the day.  Well, in Ireland this animation depicts what you would witness over a 24 hour period. The long days of summer bring a constancy to the quality of light that we quickly take for granted.  In mid-winter we must make do with an ever-changing palette, from a reddish dawn to a yellowish morning and evening light.  Because the sun is low in the sky the light must travel through denser atmosphere to reach us.  The atmosphere scatters the shorter blue wavelengths of sunlight (Rayleigh scattering), with the result that the light is richer in the longer yellow and red wavelengths.  Hence the changing colour of sunlight and the blue colour of the sky.
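The strength of Rayleigh scattering falls off with the fourth power of wavelength, which is easy to verify with a quick back-of-envelope calculation (Python here; the wavelengths chosen for blue and red light are just representative values).

```python
# Rayleigh scattering intensity scales as 1 / wavelength^4, so short blue
# wavelengths are scattered far more strongly than long red ones.
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative scattering strength of wavelength a versus wavelength b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue light (~450 nm) versus red light (~650 nm):
print(round(rayleigh_ratio(450, 650), 2))  # 4.35: blue scattered over 4x more
```

That factor of four or so is the whole story behind both the blue sky overhead and the warm cast of low winter sunlight.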

Light quality can affect bird images in a number of ways.  The most obvious impact is on colour.  Humans have an in-built capacity to adapt to changing light colour temperature by correcting colours to maintain colour constancy.  We call this colour balance or white balance correction.  Cameras are equipped with white balance correction, but an accurate white balance can sometimes be hard to achieve, so this is a quality parameter we need to watch for.  In the image quality tool I captured white balance under the colour parameter; in the lighting tool it makes another appearance here under the concept of lighting quality.  For more on white balance see HERE.


The other key impacts on images are the angle of sunlight and the intensity of light.  I have tended to elaborate on these aspects in my various postings on lighting environments.  For instance, in Against The Sky, the low angle of the sun in the morning can totally alter the appearance of a bird in the sky when compared to later in the day.



In On Snow And Ice, by contrast, the major issue to watch for is albedo, or surface reflectance, so light intensity is the big factor to consider in that particular lighting environment.



Overall, when we start to analyse lighting quality we begin to establish the optimum lighting quality conditions for birding and photography.  Hopefully through my analysis I have arrived at a good categorisation and weighting system for lighting quality in bird images.  For more on this subject see HERE.

Light Direction
Having analysed the overall quality of the light we are dealing with, it's now time to take a look at the direction of light.  As observers and photographers we are all acutely aware of the benefit of putting the sun to our back so that our subject is uniformly lit from the front.  Unfortunately we can't always control this.  When faced with a tricky bird identification the lighting conditions are often a very significant factor.  We might feel that judging the sun's direction should be fairly easy in a photograph, but very often it is not.  In the posting on Lighting and Shadow Direction I looked at a couple of methods to help pinpoint lighting direction with a finer degree of accuracy.


If the sun is shining and we have positioned ourselves fairly well in relation to the sun and our subject then we may find that the sun's position can be pinpointed due to the specular highlight in the bird's eye.  We may not have an exact three-dimensional direction but we can at least draw a conclusion within the two dimensions of our digital image.

Failing that, if we are lucky we might be able to establish light direction based upon surface normals.  The principle, Lambert's Cosine Law, is that diffuse reflection is at its brightest where the light strikes a surface head-on, that is, along the 'normal' to the surface.  Here, by drawing surface normals at the brightest points on the surface perimeter of this Ring-billed Gull (Larus delawarensis), I have been able to roughly establish the direction to the light source.  The analysis is consistent with the eyeball method above.  Were the bird lit from the side or rear, so that its eye were in shade, this technique would offer a workaround to establish lighting direction.
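For anyone who wants to play with the numbers, Lambert's Cosine Law is simple to express in code.  A minimal sketch in Python, where the angle is measured from the surface normal (so 0 degrees means the light strikes the surface head-on):

```python
import math

# Lambert's cosine law: diffuse reflected intensity is proportional to the
# cosine of the angle between the incoming light and the surface normal.
def lambert_intensity(angle_deg, incident=1.0):
    """Reflected intensity for light arriving angle_deg away from the normal."""
    return incident * max(0.0, math.cos(math.radians(angle_deg)))

print(lambert_intensity(0))    # light along the normal: maximum brightness
print(lambert_intensity(60))   # 60 degrees off-normal: half brightness
print(lambert_intensity(90))   # grazing light: essentially no diffuse reflection
```

This is exactly why the brightest points on the bird's perimeter point back towards the light source: those are the spots where the surface happens to face the light most directly.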


So why is lighting direction of relevance to us?  Well, if we know the lighting direction we can establish the direction in which shadows are falling.  Then when it comes to an analysis of field marks we are in a better position to analyse for potential anomalies caused by shadow.

From the perspective of the image lighting quality tool I have kept this analysis fairly simple for now.  It should be possible to establish whether a bird is front lit, back lit or side lit without having to resort to such a fine level of analysis.  But I have presented these finer tools here should a more critical analysis be justified.

Shadows 
Shadows, as just stated, regularly cause confusion during bird identification from photographs.  In fact shadows fall so consistently within recognisable topographical areas of the plumage that I think an argument can be made for a Shadow Topography.  And in fact, that is just what I have proposed in the posting of that name.




What if the day is overcast, I hear you ask?  As it happens I have spent a great deal of time teasing out this and related questions, most recently in the posting entitled Lighting and Perspective (Part 2).  Through experimentation I have been able to demonstrate how effective cloud cover is as a diffuser of light.  The simple answer to the question of light direction on a cloudy day is that light is scattered in all directions.  Therefore, from the perspective of the camera, the subject is being lit from the front at all times.  Meanwhile shadows are being cast in every direction, and they are soft and diffuse because they are diluted by the scattered light.  This is why, in terms of observation and photography, a bright overcast day will tend to trump a bright sunny day every time.


Dynamic Range
This is one of the technical aspects of photography that can sometimes be difficult to get one's head around.  Put simply, it is the range of light intensity that can be captured by an imaging system, from the brightest point to the darkest.  A good way to start thinking about this subject is to consider how well one's eyes adapt when suddenly faced with bright sunlight or total darkness.  Our eyes have an incredibly broad dynamic range, much broader in fact than any digital camera's.  And yet we have our limits.  We have evolved methods to adapt to changing lighting, some of which take time to kick in.  In a way a digital stills camera is at a bit of a disadvantage, as a still image is not dynamic.  The camera has a brief moment to capture an image; after that it can do no more with the lighting presented to it.

That said, there are techniques available that can boost the dynamic range of the camera, and indeed simple ways of adapting to the available light, just as our eyes do.  The simplest of these is of course exposure compensation.  By adjusting exposure time, aperture and/or ISO settings a photographer can peer into the brightness or the gloom and even see beyond the dynamic range of the human eye.  But the camera has merely shifted its exposure window; its overall dynamic range has not improved.  For instance, a camera may be capable of capturing the detail on the surface of the sun and also capable of peering into the dimmest corners of the universe, but it can't do both at the same time.  If it could, it would have a dynamic range far beyond that of the human visual system.
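To put some rough numbers on this, dynamic range is conveniently expressed in stops, i.e. doublings of light.  The figures below are purely illustrative, not measured values for any real sensor or eye:

```python
import math

# Dynamic range in stops: each stop represents a doubling of light intensity.
def dynamic_range_stops(brightest, darkest):
    """Number of stops between the brightest and darkest recordable levels."""
    return math.log2(brightest / darkest)

# Hypothetical figures: a camera sensor spanning a 4096:1 intensity ratio
# versus a human eye adapting across roughly a 1,000,000:1 ratio.
print(dynamic_range_stops(4096, 1))                  # 12.0 stops
print(round(dynamic_range_stops(1_000_000, 1), 1))   # ~19.9 stops
```

Exposure compensation slides that 12-stop window up or down the brightness scale; it never widens the window itself.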


Below I have attempted to combine an understanding of camera exposure and dynamic range in one graphic.


There are also techniques that allow the dynamic range of the camera to be boosted.  These are referred to as High Dynamic Range Imaging techniques or HDRI.  I have explored some of these techniques in a posting HERE.
Obviously what makes dynamic range of relevance to us is that, much like the challenging light that makes it difficult for our eyes to see properly, if a camera's dynamic range is exceeded image content suffers and the challenge of bird identification is made much greater.  So how do we establish if an image has suffered from dynamic range issues?  The answer can be found by studying the image histogram (see image above).  A histogram is simply a graphical representation of all the tonal levels in an image.  If clipping has occurred the graph will show a spike at one or both edges of the histogram.

On an exceptionally bright day, such as the early winter's day when this European Robin (Erithacus rubecula) was photographed, the harsh light exceeds the dynamic range of the camera.  In this case the brighter levels (the highlights) have come close to clipping, while there is also a considerable accumulation at the dark end of the histogram (the shadows).  In the original image on the left the middle of the histogram is quite low and flat, reflecting the high contrast of the image.  HDRI techniques can be remarkably good at restoring balance to an image, reducing the overall contrast and in doing so bringing out detail from the mid tones.  The right-hand image was created using an HDRI tool.
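The histogram check can even be automated in a crude way.  Here is a toy Python sketch; the 1% threshold and the sample tonal distribution are arbitrary assumptions, and real images would of course be read from files rather than typed in:

```python
# A rough sketch of clipping detection: if a large share of pixels piles up
# at the extreme ends of an 8-bit histogram, highlights or shadows have
# likely been clipped.  The 1% threshold is an arbitrary assumption.
def detect_clipping(pixels, threshold=0.01):
    """Return (shadows_clipped, highlights_clipped) for 8-bit pixel values."""
    n = len(pixels)
    shadows = sum(1 for p in pixels if p == 0) / n
    highlights = sum(1 for p in pixels if p == 255) / n
    return shadows > threshold, highlights > threshold

# A toy "overexposed" tonal distribution: many pixels stuck at pure white.
sample = [255] * 50 + list(range(100, 200)) + [0]
print(detect_clipping(sample))  # (False, True): highlights clipped
```

The same logic is what your eye performs when it spots a spike rammed up against either edge of the histogram.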

So, in the context of our overall lighting quality tool we are looking for evidence of high dynamic range issues.  Is the image high in contrast?  Is there evidence of loss of details in the mid tones and is there evidence of clipping?

Multiple Lighting
Without having given much thought to the consequences of Rayleigh scattering or cloud cover light diffusion one might assume that there is only one light source in the heavens.  But when we take a much closer look it turns out that an image is often made up of lighting of different sources and colours.  Take for instance a normal sunny summer's day.  In the sun the light colour temperature will be very close to perfectly white, but in the shade the light temperature is very different.  The blue sky canopy scatters light into the shadows on a sunny day and renders the shadows blue in colour.  This also has a bearing on the appearance of our subject.  I have studied the lighting qualities of different lighting environments in great detail in the posting Lighting Under The Microscope.


This dual lighting can be very frustrating.  There is nothing worse than being presented with a perfect portrait opportunity, like this mega rare Eastern Olivaceous Warbler (Iduna pallida), only to be frustrated by the lighting.  There are effectively two different white balance settings to choose from in this one image: the white balance for the shade and the white balance for the sunlit area.  This image also illustrates the dynamic range challenges created by bright sunlight.  There is some clipping and blooming (an artefact associated with highlight clipping), both evident around the top of the tail and the rear toe and claw of the left leg.


Summary and Conclusions
By gathering together various lighting aspects and presenting them here in the form of a concise image quality measurement tool, hopefully I have met my objective of summarising and drawing a line under the chapter Birds and Light.  That said I have found myself repeatedly coming back to this subject because I find it so interesting.  No doubt I will add more postings to this page but I'd like to think I have gathered enough information to provide this broad summary.

No doubt the measurement scales can do with some fine tuning but I am happy for now at least with what I have achieved here.  For those who wish to download and play around with both the Image Capture Quality and Lighting Quality Tools please follow the link below.  Feedback as always welcome and much appreciated.

DOWNLOAD Birding Image Quality Tool Rev. 2.0
(Note you will have to download the file and open in MS Excel for the tool to work properly).

Here is an example of both the Image Capture Quality Tool and Lighting Quality Tool in use, using one of my favourite images ever - a displaying and aptly named Sunbittern (Eurypyga helias) from Venezuela.


Wednesday, 28 October 2015

Time To Exhale

This blog is both a personal journey of research and exploration and also a means to an end.  The objectives set out in the Introduction have not changed.  The scope of the project is essentially contained within the first figure in that introduction.  The ultimate goal is a standalone guide and set of tools to aid in the identification of birds from photographs.

As 2015 winds to a close it is time to halt the research and consolidate the learning.  By the end of the year I hope to have published a revision of the Quick Reference Guide.  Between now and then I will be pulling together and summarising the key findings of the blog.  

There is almost no end to the realms of research that this blog could potentially expand upon.  Identification is itself a hugely broad area, undergoing constant development.  But the blog is not really about cutting edge bird identification.  It is about designing a set of tools and standards to aid bird identification from photographs.  And yes, strange as it might seem, some of these tools haven't existed up to this point.  For instance, there hasn't even been a standard method or nomenclature for properly sampling and describing colours from digital bird images.  And colour is just one of the areas I have been looking at closely.  I have cast out a number of nets in order to gather the information needed to develop the tools I need.  I think finally it may be time to start pulling back in some of these nets to examine the catch. 

All of this essentially boils down to light.  I have spent a great deal of time deconstructing light, almost to a point where it has become a bit of an obsession.  A clear understanding of light is key to bird identification from photographs and that is why I feel all this has been fundamental and therefore clearly worth the effort.  Because the camera is such a masterful tool for playing with and dissecting light I have been able to use my camera to obtain clearer answers to many of the questions I have.  I have particularly enjoyed constructing experiments that help understand this subject.  This year the posting which I think has borne the most fruit was the set of experiments looking once again at lighting and perspective (part 2).


The major reveal from the particular experiment above was that, from the perspective of the camera lens, the diffuse shadows on an overcast day all fall towards the centre of the digital image.  In outdoor photography we are used to positioning ourselves relative to the sun in order to obtain optimal lighting on our subject.  But on an overcast day sunlight is scattered very efficiently by the clouds, to the extent that the entire sky dome becomes a fairly uniform light source.  All angles should offer fairly decent lighting and the sun's position shouldn't matter a great deal.  This is why an overcast day is far superior to a bright sunny day for photography and observation generally.  While this might seem fairly logical, it's not something we often consider or analyse in our images.  These types of experiments have made me think a lot more about how light and shadow fall on a subject in a photograph and, more importantly, why light and shade work as they do.

Spotlight - On Colour
Having expended considerable effort during 2014 on the subject of digital colour reproduction, including even a sojourn into the ultraviolet realm to try and see a bit more as birds do, I thought perhaps I had figured colour out.  Not so, it seems.  Not only did the pride of my earlier exploits, the Birder's Colour Pallet, need a bit more thought and explanation in Rev. 2, but I had failed to spot a fundamental point about digital colour capture and reproduction.  I hadn't given enough thought to one of the three parameters that make up colour, namely saturation.  Cameras capture and measure the first of these parameters very well - the brightness or 'luminance' of colour.  This is possible because each photosite on a digital camera sensor measures quite effectively the actual intensity of the light falling on it.  Then, thanks to the Bayer colour filter array which sits over the sensor itself, cameras can identify quite accurately the 'hue' of each of the colours reaching the sensor, albeit within the constraints of the digital colour space we operate to.  However, what about saturation?  The sensor does not have the facility to measure this third colour axis.  Image saturation is actually one of the pre-sets applied by the camera manufacturer and takes effect during image processing.  I am very mindful of the importance of accurate colour saturation reproduction in terms of an accurate colour standard.  After all, the saturation of colours forms an integral part of colour nomenclature, both traditionally (e.g. Robert Ridgway's Color Standards and Color Nomenclature) and indeed in my own Birder's Colour Pallet.  I have finally tackled this question HERE.  For more see HERE.
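As an aside, once an image has been processed into RGB it is straightforward to compute a saturation value for any sampled colour.  Python's standard colorsys module does the RGB-to-HSV conversion, with the S channel being the saturation:

```python
import colorsys

# colorsys works on RGB values scaled to [0, 1]; HSV's S channel is saturation.
def saturation(r8, g8, b8):
    """HSV saturation of an 8-bit RGB sample, as a value in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r8 / 255, g8 / 255, b8 / 255)
    return s

print(saturation(255, 0, 0))      # pure red: fully saturated
print(saturation(255, 128, 128))  # pinkish red: partly desaturated
print(saturation(128, 128, 128))  # neutral grey: zero saturation
```

Of course this only measures the saturation the camera's processing pipeline has already baked into the file, which is precisely the point made above.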


Spotlight - On Field Marks
Field marks form the core of many a bird identification, and certainly most IDs based on photographs.  I started looking into this whole area early in 2015 and by mid-year I had a significant body of work done.  Whether rightly or wrongly, the approach that I took was to look at plumage patterns starting from the centre of the feather and working outwards.  So I started by looking at simple plumage patterns arising from shaft-streaks, which collectively form tramlines.  Then I looked at solid, diffuse and more complicated patterns associated with the broad feather centre.  Next I focused on the feather edge and tip.  I was somewhat surprised to find that this simple approach tended to encapsulate most of the vast array of plumage field marks that exist in birds.  Lastly I tidied up the set of postings with a look at bareparts patterns and also colour and field marks, in a broad sense.

As I worked through the problem it became apparent fairly quickly that there are broadly two sets of field marks in birds, the bold and the bland.  While testing the effects of different image quality parameters on bold versus bland features I began to observe consistent patterns.  Bold field marks are more robust, able to withstand a far greater level of image quality deterioration than bland field marks.  So, in effect, when we analyse bird images for field marks we need to know how 'volatile' those field marks are and we need to consider that point within the overall image quality context.

Another area that particularly interested me was the concept of false field marks.  These are false markings produced by the interplay between light, shadow and avian anatomical structures.  By their very nature, anomalies due to lighting or posture tend to be harder to detect in a still image than they would be while observing a bird in life.  Because birds are normally moving about, we judge and compensate for lighting and the momentary movement of feathers, often without even having to think about it.  But faced with a single still frame, all of a sudden that shadow or bright spot, or slightly odd posture or misaligned feather, becomes a major source of confusion.  In the end I thought it appropriate to develop a topographical nomenclature to describe some of the consistent anomalies we find in bird images, and I call this Shadow Topography.  This is not simply me making up a lexicon for the hell of it.  Before we can understand and deal with an issue we need a way to describe it.  Or, as my daughter's kindergarten teacher cleverly puts it..."name it to tame it".


Spotlight - On Forensic Image Analysis
I haven't added too many postings to this part of the blog so far this year.  Having made reasonable strides towards a forensics manual last year the postings this year tended to be more about delving that bit more deeply into one technical subject or another.  It's probably fair to say none of the postings make for exciting reading and I don't suspect that the really in-depth analysis of digital images will float many a birder's boat.

If I had to select one posting worthy of particular mention here it would be fringe artefacts while working in RAW.  I had imagined the RAW workflow as a pure, unadulterated form of image analysis, so when I started to see strange artefacts appear in files that had undergone hard restoration with Camera RAW I began to wonder whether I was imagining things.  Sure enough, I found an explanation for these artefacts.  It turns out that some RAW workflow tools leave behind artefacts when they are a little over-used, and this is something to be really mindful of, especially when the goal of working in RAW is to bring out hidden field marks.


Spotlight - On Gestalt
The gestalt page of the blog is another aspect of the journey that I have only really started to develop in 2015.  I know that there is going to be a real limit to the extent to which a bird's gestalt or jizz can be revealed by digital still images.  Most of the time, when we talk about identification of birds from images, we are referring to as little as a single digital image, so let's not kid ourselves.  That said, in defining the distinction between field marks and gestalt for the purposes of this blog, I have been clear to point out that I consider a bird's size and shape, structure or morphology as all falling within the broad definition of gestalt.  Some might include these in the definition of field marks.  The obvious question when faced with a single image is: can we take size or proportional measurements from an image which would help us identify the species in the photograph?  Most of my postings on gestalt to date have been about tackling this question, and the conclusions so far tend to be a resounding NO.  The problem, very simply, is that the real world is three-dimensional while a digital image is two-dimensional.  Whether we are trying to measure primary projection, bill-to-eye ratio, tibia-to-tarsus ratio or some other measurement or proportion, we constantly run into problems of foreshortening and/or features which are offset from one another by small angles which we cannot hope to measure.  In other words, all attempted measurements from digital images tend to be estimates at best.
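The foreshortening problem is easy to quantify.  A feature of true length L, tilted away from the image plane by an angle theta, projects to only L x cos(theta) in the image.  A quick Python sketch, using a hypothetical 30 mm primary projection:

```python
import math

# Foreshortening: a feature tilted away from the image plane by tilt_deg
# projects to true_length * cos(tilt_deg) in the two-dimensional image.
def projected_length(true_length, tilt_deg):
    """Apparent length in the image plane of a tilted linear feature."""
    return true_length * math.cos(math.radians(tilt_deg))

# A hypothetical 30 mm primary projection, tilted just 20 degrees away
# from the camera, already measures about 6% short in the image.
print(round(projected_length(30, 20), 1))  # 28.2
```

And since we almost never know the tilt angle from a single frame, we cannot correct for the error, which is exactly why such measurements remain estimates at best.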


The solutions to these problems lie in 3D modelling (eg. HERE).  Modern technology is starting to provide us with practical 3D modelling solutions.  Before too long we may well be able to judge size and proportion extremely accurately in the digital images of the future thanks to 3D photography.  But for now at least we need to be mindful of the limitations that exist with our 2D images.

This incomplete 3D model created using some clever, freely available software was made by simply feeding a number of 2D images into the software and letting it crunch the numbers.  More sophisticated forms of this type of technology may offer better solutions in the future to allow the accurate measurement of features on birds based on images captured in the field.

Spotlight - On Human Bias
This is the last of the specialist fields of enquiry that I have so far opened up on the blog.  Starting in late December 2014 into early January 2015, I opened the blogging year with a lot of cognitive science jargon and concluded with 10 tips for avoiding cognitive bias during the process of identifying a bird from digital images.  It's probably fair to say that cognitive bias can play just as big a role in the identification and assessment process as one's technical knowledge of an ID subject.  On a bad day even the most expert birder can fall foul of their own biases and be misled by a misguided trail of clues.  I guess if someone were to tell me that they had a difficult identification to pore over from a set of bird images and were wondering where to go first on my blog for some useful advice, this posting is where I would direct them.  It's about having the right mindset before engaging any identification puzzle and trying to approach it as objectively and open-mindedly as one possibly can.  Unfortunately, despite our best efforts we can never fully turn off our biases - they are a fundamental part of how we work.

Many of the biases that I have gone on to discuss are associated more with the mechanics and wiring of the human visual system than with human cognition.  As observers and identifiers, our ability to visualise and analyse the images we see is subject to the limitations of our eyes and brain.  The dress viral phenomenon created quite a storm of attention on social media for a short period in March 2015.


Those who observed the poorly exposed photograph of a dress were divided between observers who believed it was blue and black and those who were equally convinced it was white and gold.  The bias in this case seems to stem from a subset of optical illusions referred to as brightness illusions.  In a roundabout way this leads us full circle back to Birds and Light.  While I hope the blog will continue to grow and develop, I am getting the sense that it may be time to start pulling together the threads to weave the first couple of chapters of the manual.  At the end of the day, like a PhD student who just can't quite finish a thesis, I could go on and on with all of these disparate fields of study.  But the average birder is likely to want only a few simple and effective tools to approach a bird identification with a degree of knowledge and confidence to deal with the variables and challenges that might be thrown up.  Time to consolidate.


Tuesday, 13 October 2015

Colour - Resolution and Colour Sampling

Among the various aspects of this blog the subject of colour has been one of the more interesting and revealing.  Colour in birds is discussed in detail HERE.  From a distance, plumage colour patterns may appear quite uniform and homogeneous.  But take a closer look at, say, a Northern Wheatear (Oenanthe oenanthe) in the hand or through a scope and one can see that the micro structure of the feather plays an important role in how colours are actually presented to us.


Colour In Layers
The splayed barbs of the downier scapular, mantle and covert feathers add a degree of texture and complexity to a bird's plumage.  By contrast, the more tightly-knit barbs of the flight feathers appear less textured and smoother and, as a result, may appear more uniformly coloured.  Bird illustrators might resort to crosshatching or fine brush strokes to try and capture this subtle difference in texture.

Take a close look once more, for instance, at the Northern Wheatear image above.  The highly magnified scapular feathers (lower left inset) have splayed barbs, which in turn reveal a darker background beneath these feathers.  Similarly, on the throat we can see fine white feather barbs, like fine brush strokes lit against a rich, orangish background.  When we pull back and look at the bird in lower resolution (top left) we generally miss these subtle distinctions.  The scapulars appear instead as light tawny, with perhaps a hint of fine texture evident, but the dark background is not so readily apparent.  The throat appears a rather uniform pale orange, without the contrast needed to appreciate the texture between the layers of feathers in that area.  Contrast is an important element of all of this: contrast determines how the human eye resolves detail and is an essential element of what we term sharpness or acutance, as explored for instance HERE.

For those with experience of handling and describing birds from museum skins or from ringing birds this is perhaps not much of a revelation.  Those who often work up close with birds experience their colours and morphology quite differently to those who watch them solely in the field.  I imagine many field birders miss this subtle distinction between overlapping feather layers and their colours when describing the birds they observe.  I know I have.  Very often when we describe the colours of plumage in the field we are not actually describing the individual feathers.  Very often it is a combination of colours of one or more overlapping feather layers.  This should be borne in mind when it comes to comparing our field descriptions with those obtained from the text of a scientific description, or from a ringer's guide such as the Identification Guide to European Passerines by Lars Svensson or the North American equivalent, the Identification Guide to North American Birds by Peter Pyle for example.

The question is, do we need to be able to observe and capture the kind of detail only obtained by viewing a bird up this close?  Clearly, in most cases we do not.  Standard field guides and their identification plates rarely go into such minute detail and most birds can in fact be described and identified without reference to colour at all.  Moreover, there can be considerable intra-specific variation and variation due to feather moult and wear, particularly involving feather edges and tips.  Not surprisingly then subtle colour detail such as this is not commonly high up the agenda for many birders in the field.

Pixel Resolution
Resolution, in photographic terms, is the image quality parameter that best represents the distinction between that forensic 'up close and personal' plumage analysis and the generally far less exacting analysis achievable in the field.  Without a very high level of resolution we can never hope to get down to the fine feather detail needed, for example, to distinguish between the colours of layers of overlapping feathers as described above.  It is quite rare that a bird will allow such a close approach as this Northern Wheatear.  Typically the image resolution obtainable with a camera in the field is not a close match for such in-the-hand analysis.  In actual fact, most of the time the camera struggles even to come close to capturing the level of detail we can see with our field optics, though of course an image may be subject to far more rigorous scrutiny than a field observation, no matter how close and prolonged the sighting.  It could also be said that modern digiscoping and the latest crop of DSLR cameras are gaining significant ground all the time on field optics in terms of image sharpness, resolution and magnification.  And this trend is only set to continue.

While the resolution we can perceive is dependent on our eye-sight and the resolution of the screen upon which we view our images, we can of course use the zoom function in our imaging software to magnify an image to resolve fine details down to the level of individual pixels.  So when we talk about forensic analysis of detail in digital imagery what really matters is resolution in terms of total pixel count.


The image captured by photographing a bird with a small lens at close range is not the same as the image captured from further away with a longer lens.  Where we are concerned with very fine detail every pixel counts, and that means that image artefacts come into play.

Up close we can fill the frame and capture the image at much higher resolution.  We also tend to have much better control over lighting and exposure with a smaller lens.  At greater distance we have environmental factors like heat haze, dust and moisture to distort the image, and we rely on larger lenses, which generally means compromising optimal exposure and introducing noise and other artefacts.  While these are all clearly important considerations, for this posting I am keeping things rather simpler.  Rather than try to capture similar images of our subject from varying ranges, I have instead mimicked this approach by simply taking a really sharp image and gradually lowering its resolution by resizing it smaller.  

There are some similarities between this approach and photographing the subject at varying distances.  The key distinction however is that resizing requires image interpolation which alters every single pixel in the image.  Therefore a resize is an adulteration of the original image content.  On the other hand, when we simply stand further back from our subject and take a smaller image of it we are achieving the same reduction in overall pixel resolution but without that added interpolation step.  Despite all of that, starting with a sharp, well exposed image taken at close range and resizing smaller always tends to produce better, far more reliable results.
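To make the interpolation point concrete, here is a toy Python sketch of the simplest possible resize: a 2x box-filter downsample on a greyscale image.  Real resizing tools use more sophisticated filters, but the principle is the same.  Note how a sharp edge that falls inside a 2x2 block is averaged into a grey value that existed nowhere in the original:

```python
# A minimal sketch of why resizing interpolates: a 2x downsample with box
# filtering averages each 2x2 block into one new pixel, so output values
# can be blends that never existed in the original image.
def downsample_2x(image):
    """Halve a greyscale image (list of equal-length rows) by 2x2 averaging."""
    out = []
    for y in range(0, len(image) - 1, 2):
        row = []
        for x in range(0, len(image[0]) - 1, 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block // 4)
        out.append(row)
    return out

# A sharp black/white edge straddling a block becomes intermediate grey:
sharp = [[0, 0, 0, 255]] * 4
print(downsample_2x(sharp))  # [[0, 127], [0, 127]]
```

Every one of those interpolated greys is an adulteration of the original data, which is exactly why a resized image is no longer a faithful record at the pixel level.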

Resolution and Sample Homogeneity
I have discussed image colour sampling and analysis in detail elsewhere.  We can sample and identify the RGB colour value of an individual pixel using any standard image editing software.  But when it comes to sampling a patch of colour we face a problem.  The pixels across the surface of a colour swatch will tend to vary slightly, no matter how perfectly homogeneous the swatch might appear.  One need only zoom in to the pixel level to appreciate the variation occurring at that minuscule scale.  We may inadvertently sample a pixel of noise and end up with a completely spurious result.  Or, in the case of the very sharp image above, we may set out to sample the tawny feather colouration of the mantle and inadvertently sample an underlying feather, or the feather shaft instead of the barb.  The purpose of the posterizing method that I developed (HERE) is to reduce the margin for error by creating a homogeneous patch, thus eliminating that background noise within the image.  What I am particularly interested in here is the effect of the posterizing method on a really sharp image versus progressively lower resolution images.  After all, by resizing an image smaller, the interpolation step is liable to introduce a certain amount of posterization itself, filtering out much of the noise in the process.  But the fear might be that in doing so colour accuracy is compromised.
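As a rough stand-in for the posterizing step, the idea can be sketched as reducing a sampled patch to its few dominant colours and reading a representative value off that short list.  To be clear, the blog's actual workflow uses the MS Office "Cutout" effect; the frequency-count approach below is my own assumption for illustration, not that tool.

```python
# Posterizing sketch: collapse a patch of pixels to its dominant colours.
from collections import Counter

def posterize_patch(pixels, n_colours=2):
    """Return the n most common RGB values in a patch of pixels."""
    return [colour for colour, _ in Counter(pixels).most_common(n_colours)]

# Hypothetical patch: mostly tawny, some shadow, one pixel of bright noise.
patch = ([(180, 140, 90)] * 7 +   # tawny barb colour
         [(60, 45, 30)] * 4 +     # darker underlying feather
         [(250, 250, 250)])       # a single noisy pixel
print(posterize_patch(patch))  # [(180, 140, 90), (60, 45, 30)]
```

The lone noisy pixel never makes the shortlist, which is precisely why posterizing reduces the margin for error when sampling.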

I have already carried out an experiment involving colour patch homogeneity (HERE), and in doing so I think I found a pretty good quality control tool for selecting appropriate sample swatches, using defocus to test homogeneity.  Here I am testing something slightly different: whether the image resizing interpolation algorithm introduces unwanted drift among the colours of the image.
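Homogeneity can also be scored numerically rather than visually.  One simple way (an assumption on my part, not the defocus method from the earlier experiment) is the per-channel standard deviation across the patch: a score near zero means the swatch is uniform and safe to sample.

```python
# Score swatch homogeneity as the mean per-channel standard deviation.
from statistics import pstdev

def homogeneity(pixels):
    """Mean per-channel standard deviation; lower = more uniform."""
    channels = list(zip(*pixels))  # [(all R values), (all G), (all B)]
    return sum(pstdev(c) for c in channels) / 3

uniform = [(120, 90, 60)] * 9                       # perfectly flat swatch
noisy   = [(120, 90, 60)] * 8 + [(255, 255, 255)]   # one stray bright pixel
print(homogeneity(uniform) < homogeneity(noisy))  # True
```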


Once again I have taken the very sharp original image.  I have drawn five sample boxes within the image and collected each of those samples.  I have then resized the image and resampled the same boxes at progressively lower image resolutions.  This leaves me with a total of twenty-five samples ready for analysis, as displayed above (lower left).  The next step involves posterizing each sample.  Once again I have used a simple tool in the MS Office 2010 suite called "Cutout".  Taking each swatch in turn, this tool reduces its colour palette down to at most two or three sample colours to choose from.  It is quite easy to select a pixel of the appropriate colour from the limited choice.

The results are striking.  Even with the highest resolution image, with its fine textured detail, the posterizing tool has arrived at the same basic colour sample result as the same swatch taken from a much lower resolution image.  To me this strongly suggests that the interpolation tool which I have used to resize the image (MS Paint) is very good at preserving colours down to a very low resolution.


A closer look at the actual RGB numbers shows that a slight drift is evident, most noticeably towards the lower resolutions.  This is not too surprising: a resize down to 1% of the original image means that every pixel has to somehow convey the detail contained in the 100 pixels it has amalgamated and replaced!
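That drift can be put on a single number by treating each sample as a point in RGB space and measuring the straight-line (Euclidean) distance between the original swatch and its low-resolution counterpart.  The values below are invented for illustration; they are not the actual measurements from this experiment.

```python
# Quantify colour drift as Euclidean distance in RGB space.
import math

def rgb_distance(a, b):
    """Straight-line distance between two RGB triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

original = (150, 105, 70)   # hypothetical full-resolution sample
low_res  = (146, 108, 74)   # hypothetical sample after a heavy resize
print(round(rgb_distance(original, low_res), 1))  # 6.4
```

A drift of a few units out of a possible ~441 (black to white) is small, which matches the visual impression that the swatches barely change.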

Turning to the Birders Colour Pallet, which I created to put standard names to colours in sRGB colour space, note that I have intentionally allowed a fair degree of latitude between each named colour.  This is an allowance for colour drift during interpolation and other forms of image manipulation.  The result is that, for the most part, the Birders Colour Pallet key delivers the same colour result across the board in this experiment.  Only in the case of the dark mauve swatch has the drift exceeded the boundary and arrived at a neighbouring named colour (dark maroon and dark heather in two of the results).  To me these minor outliers are perfectly acceptable, and the results further reinforce this blog's colour sampling method as a valid and useful one for birders to adopt.
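The way a palette key absorbs drift can be sketched as a nearest-neighbour lookup: a sampled RGB value maps to whichever named anchor it sits closest to, so small drift changes nothing until a boundary is crossed.  The names below echo the ones mentioned above, but their RGB anchor values are invented for illustration; they are not the real Birders Colour Pallet values.

```python
# Map a sampled RGB value to the nearest named colour in a palette.
def nearest_name(rgb, palette):
    """Return the palette name whose anchor is closest in RGB space."""
    return min(palette, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(rgb, palette[name])))

palette = {"dark mauve":   (96, 64, 96),
           "dark maroon":  (96, 48, 64),
           "dark heather": (112, 96, 128)}

print(nearest_name((94, 62, 92), palette))  # small drift: still dark mauve
print(nearest_name((97, 52, 68), palette))  # over the boundary: dark maroon
```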

Conclusions
There is a lot to be said for studying plumage up close and personal.  While most field guides and indeed many field observations might give the impression that birds are clad in fairly uniform colours, the reality is plumage colouration is highly complex and subtle at the micro level.  Then again we can get by in the field without having to study birds in such fine detail.  The same can be said for field photography as a means to capture and analyse colour in birds.  What this experiment has shown is that image resolution is not a significant factor we need to be to worry about.  Whether an image is a full frame, high definition mind-bender, or a much lower resolution, simpler representation, it is still possible to analyse colour consistently using the colour sampling method and Birder's Colour Pallet.