Hooray for the Red, Green, and Blue

While sifting through the seemingly endlessly nested menus on my Nikon, I came across a setting that allows the camera to take multiple exposures (combining two or three shots into one composite image). We used to do this in the old days of film, often by accident (not all cameras integrated film wind and shutter lock, you know). There really isn’t much point to doing it on purpose, except for certain artistic expression or trying to fool people with “ghost” images. Nevertheless, once found the setting must be tried.

In my usual semi-instructive manner I decided to try it out by assembling the red, green, and blue aspects of a picture – much as our display screens do. So with the camera on tripod on a day that was still a bit too windy (causing some blur in the composite image) I took the same scene through the red, green, and blue filters:
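For the curious, the camera's trick can be sketched in software too: stack three monochrome exposures as the red, green, and blue channels of one image. Here's a minimal sketch using Pillow; the function name and the idea of feeding it three grayscale captures are mine, not anything the camera exposes.

```python
from PIL import Image

def rgb_composite(red_shot, green_shot, blue_shot):
    """Combine three grayscale exposures (taken through red, green,
    and blue filters) into a single colour image, much as the
    camera's multiple-exposure mode does."""
    r, g, b = (im.convert("L") for im in (red_shot, green_shot, blue_shot))
    return Image.merge("RGB", (r, g, b))
```

A too-strong or off-spec filter simply shows up as a weak channel in the merged result, which is exactly what happens with the blue filter below.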

And then the camera puts them together, and we can compare this to a straightforward “single shot”:

Note the composite picture has a somewhat magenta cast to it, owing to the filters not being “ideal” shades for RGB separation. But you get the idea of how it works. The camera even assembled a picture from the first two filters used, red and green:


You can see the lack of “blueness” in it, a strong indicator that the blue filter is the one that isn’t quite on spec: the RGB composite shows a similar lack, just not as severe.

In art class we learn that the primary colours are red, yellow, and blue. The secondary (or complementary or ‘opposite’) colours are orange (made of red and yellow, opposite of blue), green (made of yellow and blue, opposite of red), and purple (made of blue and red, opposite of yellow). Then in photography they throw cyan (pale blue heading towards green) and magenta (pale red heading towards purple) at us, and explain it’s because light is yellow to begin with so everything has to shift a few degrees on the colour circle. Different shades of yellow too, depending on the light (the all-important white balance). Gee, did someone mention ROYGBIV – the seven visible colours of white light? Yeah, we get in trouble picking which shade of Indigo or Violet we want to call purple. Oh, and along comes the age of colour TV & digital imaging and suddenly we’re supposed to understand that because light is yellow (especially artificial light) we have to make colours from red, green, and blue instead. And this doesn’t touch on infrared, near infrared, and ultraviolet – which aren’t visible to us but are to our cameras.

Confused yet? If you are, you’re learning.

The Infrared Zone

(The title is a play on “The Twilight Zone”. Since Rod Serling died in 1975, we’ll just have to do the best we can without him.)

I’ve written a bit about infrared digital photography before, and about how the closest you can get to it without spending a small fortune on a dedicated infrared camera is to use an infrared filter – and a lot of experimentation to get interesting results. Since what we’re really seeing is near infrared the results can be quite varied. Or random if you don’t pay attention to what you’re doing.

Here’s my first tip which can be used for more than just infrared photography: dedicate an SD card to the project, starting with making the first shot on the card the ‘white balance’ exposure so you can set custom WB suitable to the job. You can then go back and shoot more IR any time you want by skipping ahead to that first shot and telling the camera “this is white”. That’s absolutely necessary for IR shots: the camera’s normal exposure evaluation will not work through the very dense IR filter.

There are a variety of such filters, all rated in nanometers according to the cutoff wavelength above which light is allowed to pass. The standard is 720nm, which allows some visible light through. Do not expect that to mean you will be able to see through it to frame and focus, as it’s still quite effective at blocking all but the brightest light. Yes, bright light is a good idea; midday in Summer, for example. A 550nm filter will allow more visible light through, and an 850nm filter less. Which do you try first? How much can you afford to spend? Here’s an article that goes into depth on filters: Choosing an IR filter. In fact they have about all the info you’d want regarding infrared photography and it’s worth a read before you spend a dime. This article is just about my experiments.

For default purposes I’m using a 720nm. (Side note: I think manufacturers may go a bit overboard when blocking near infrared from the sensors, thus the reason so many cameras are strong on green-blue but weak on red.)

Step one: set white balance. Step out into broad daylight with your camera and shoot a ‘properly exposed’ picture of a white card through your infrared filter. Okay, that “properly exposed” part is the difficult bit. We are looking at perhaps 16 stops more exposure than the standard picture would need under the same lighting. You can only open your lens up so wide (and you don’t want to, as depth of field is very helpful for getting sharp IR – more on that later), and as you increase the ISO you get more noise (your camera may allow you to increase noise reduction at high ISO – mine does, but it’s not terribly effective). That means long exposure times are inevitable, and that means motion blur is too. Best to shoot immobile objects like landscapes on a wind-free day. You absolutely will be using a tripod.

Step two: proper exposure. This step is recursive because you need the right exposure to get the white balance correct, which you need to get the exposure correct. Some (okay, a lot of) trial and error is necessary. You can start with an educated guess by taking a meter reading without the filter and then adjusting ISO, shutter speed, and aperture to increase exposure by 8 stops to begin with. I found going above ISO 400 adds a lot of noise without much gain in exposure (i.e. you’re already into long shutter time). So we get a change-up like this:

If normal exposure is ISO 100, f8, and 1/125 sec shutter speed, you add 2 stops by going to ISO 400 and 6 more stops by slowing down to ½ second. Use this as a starting point; you may have to increase to +12 stops, which would be 8 seconds exposure. I found this works most of the time, but beware: the time of day (try to shoot midday) and reflectivity of the scene will affect exposure and you have to guess at compensation as digital cameras do not have good exposure latitude. Keep in mind not all cameras are alike; some will pass more IR than others, or should I say some will block more?
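The stop arithmetic above is easy to let the computer check. Here's a quick sketch (my own helper, nothing camera-specific): doubling the ISO or doubling the shutter time each adds one stop, so the change between two settings is just a sum of base-2 logarithms.

```python
import math

def stops_gained(iso_from, shutter_from, iso_to, shutter_to):
    """Exposure change in stops between two settings, aperture held
    constant. Each doubling of ISO or of shutter time is +1 stop."""
    return math.log2(iso_to / iso_from) + math.log2(shutter_to / shutter_from)

# The starting point above: ISO 100 @ 1/125 s -> ISO 400 @ 1/2 s
print(round(stops_gained(100, 1/125, 400, 1/2)))  # about +8 stops
# Pushing the shutter out to 8 s gives about +12 stops
print(round(stops_gained(100, 1/125, 400, 8)))
```

(Nominal shutter speeds like 1/125 aren’t exact powers of two, which is why the results come out at “about” rather than exactly 8 and 12.)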

Step three: frame and focus. The odds of you being able to see through the viewfinder (or even the LCD in bright light) to do this are around ‘not a chance’: if the light were that bright you couldn’t stand to be in it. So you can either take the filter off and put it back on for every shot (you won’t be shooting moving subjects, remember) or you can ‘guesstimate’ the scene and hope you’re right. Focus is another issue, as IR does not focus at the same distance as visible light. Many old lenses were marked with an IR focus line. Rest assured new ones aren’t, and your autofocus isn’t going to work even without the filter because of this. With the filter … forget it. The trick is to use a small enough aperture to give enough depth of field to compensate for the variation. It helps to use a wide-angle lens, which gives more depth of field for a given aperture. (Note that aperture affects other things with IR, including ‘hot spots’ the lens may have.)

Step four: post-shoot processing. Good luck. There are unlimited possibilities here, no matter which processing program you use, and the end result is entirely subjective; if it looks good to you, it’s good. I’ve got several ‘partial’ IR shots which some people might call “screwed up colour” but I quite like them.

Step five: move on or move away. You’re either going to like the results and continue, or you’re going to say “that’s not for me” and do something else. You might buy a few more different filters before you make the decision. Hopefully you won’t invest in an IR converted camera before you decide not to do it anymore.

So here are the photos. First, a series showing the initial shot and subsequent processing steps I used to get the ‘otherworldly’ effect I like:

The first change is using Auto White Balance adjustment in GIMP. The second change is moving the Hue to -130. Note that if you shoot in RAW and use Photoshop or Lightroom you’ll have much more control over the outcome. I don’t use this method because it is more work and more time-consuming, therefore anathema to my laziness.
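The Hue move can be reproduced outside GIMP too. Here's a rough Pillow equivalent (my own sketch, not anything GIMP ships): GIMP's slider runs -180 to +180 degrees, while Pillow stores hue as 0-255 for the full circle, so the maths is a simple rescale and wrap.

```python
from PIL import Image

def shift_hue(img, degrees):
    """Rotate the hue of an RGB image by the given angle, roughly
    what GIMP's Hue slider does."""
    h, s, v = img.convert("HSV").split()
    offset = round(degrees * 256 / 360)  # map degrees onto PIL's 0-255 hue
    h = h.point(lambda x: (x + offset) % 256)
    return Image.merge("HSV", (h, s, v)).convert("RGB")

# e.g. shift_hue(some_ir_image, -130) for the effect described above
```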

This next shot contradicts my own statement about taking pictures of things that aren’t going to move. What can I say? Marley is a strange dog and she stood there for 8 seconds just to have her picture taken.


This next one came out looking almost like a normal picture. Yet there’s something just a little bit off about it:


This one could be a fall scene or a hot, dry Summer (which we’ve had a lot of here and they can be dangerous):


Not the typical ‘white foliage’ IR pictures, right? Well they could be if processed differently. This is just a fast, relatively easy way to get some unusual results with infrared for anyone who wants to experiment without investing a huge amount of money and time.

Of course IR doesn’t have to be in colour. I shot a few frames in B&W that look more like what you would expect. This is a nice one, I think:


BTW if you get some unsatisfactory colour shots, desaturating them to B&W may save them:

Personally I don’t think I’ll be investing any more in infrared equipment. It’s interesting to experiment with, but in the end has too many limitations to be anything more than an occasional shot for effect.


Kodak’s Ektachrome (and Ektacolor) film was considered “professional” on the retail market. It offered some advantages over Kodachrome, such as being about 1⅓ stops faster. That may not sound like much, but keep in mind ISO 800 is only one stop faster than ISO 400. One stop may be all you need to get that shot you’d otherwise miss.

Unlike Kodachrome, Ektachrome did not have the warm and highly saturated colours that consumers love so much. Let’s face it; we like to fool ourselves. We look at reality every day and it’s usually pretty dull visually, so when we capture a moment of time we want it to look as fantastic as our memories will pretend it was. They actually tested various renditions of images on people to see which version of the same picture they preferred, and warm & saturated won.

Professional photographers are (we hope) more discerning. They want a properly exposed image to look just like what they shot. If there’s any playing with it to be done, they’ll do it themselves thank you very much. So Ektachrome tried for more realistic colour rendition. I won’t say that it achieved it, though. At least not always.

If you’ve ever seen any of the old slides you are probably now saying “what on Earth is he talking about? Ektachrome was crap!” The truth is professional grade films are picky. They don’t like to be mishandled, whether in exposure, processing, or age. Ektachrome did not age well, either before or after being shot. Store it too warm, it looks bad. Wait too long, it looks bad. Do anything wrong, it looks bad. Let the finished images sit around for a few years and they deteriorate and look bad. In fact if you want to duplicate the Ektachrome you’ve probably seen, just screw any coloured filter on the front of your lens, overexpose the image and tell people: “It’s supposed to look like that; it’s Ektachrome.”

This is the way you’ve probably seen Ektachrome:


It’s not the only film that suffers this fate either. There is quite the industry built around restoring old movies that have faded or shifted in tone. The worst ever example would be South Pacific (1958) in which Joshua Logan purposefully shot some scenes with coloured filters in an attempt to “even out” the severely changing lighting of location shooting and in some cases add a “dream-like” tone. Instead, the story goes, the processing lab tried to “fix” these scenes and made them even worse. Subsequent attempts to correct the colour haven’t been fully successful because … well who knows what the colour was supposed to be in the first place? I have both the ‘restored’ and ‘original’ versions: restored is awful, unrestored is some kind of visual torture. Nice songs though.

This is the way Ektachrome ought to look:


Okay, let’s get back on track. We’re trying to duplicate Ektachrome results, the right ones, on digital imaging. Once again it’s not as easy as just dropping the ISO to 64 because (a) you can’t, owing to cameras having a minimum ISO setting of 100, and (b) the colour is going to be wrong. Fortunately, unlike with the Panatomic-X experiment we don’t have to correct the colour for gray scale; we just have to ‘fix’ the sensor’s exaggerations.


It’s easier than you might think. I found after a few trials that the gray filter from my Neewer set not only knocked out approximately ⅔ of a stop in film speed, but also muted the more exaggerated colours while not noticeably depleting the more subtle ones (think: red shades). This gave results with only some excessive saturation, which can be turned down in post-processing (I used -20 in GIMP and feel it looks right). That flattens the tones too much, so I also turned the contrast up +10 (like compensating for a cloudy day). I also left the resolution at ‘full’ 18MP for this so that the compression which occurs when I shrink the image to useful size retains detail better (the computer has more to pick from), giving the impression of a finer-grained film. I could probably cut it down to 8MP or even 4.5MP and still be fine, plus get faster processing of the images – but leaving it high works, adds the ability to do some post-shoot digital zooming (if such would improve the picture), and allows for a large print if desired.
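For those who'd rather script it than click through GIMP, the same two moves can be approximated with Pillow's enhancers. This is my own approximation – GIMP's -100..+100 sliders don't map exactly onto Pillow's multiplicative factors, but treating -20 as a 0.8× factor and +10 as 1.1× gets you in the neighbourhood:

```python
from PIL import Image, ImageEnhance

def ektachrome_tweak(img, saturation=-20, contrast=10):
    """Approximate the post-processing described above: saturation
    down, contrast up. Values are on a GIMP-like -100..+100 scale;
    Pillow's enhancers treat a factor of 1.0 as 'unchanged'."""
    img = ImageEnhance.Color(img).enhance(1 + saturation / 100)
    img = ImageEnhance.Contrast(img).enhance(1 + contrast / 100)
    return img
```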


Now, am I apt to shoot lots of pictures like this? No, probably not. Once again chiefly because it isn’t my style. I’m lazy and don’t like having to rework every shot on the computer to get the results I want. So since they refuse to give me a ‘film selector dial’ built-in to my camera I’ll just continue to focus, ha ha, on more straightforward images.


I can’t prove to you that these images are accurate to real life, so you’ll just have to take my word for it. Or try your own experiment. As far as I’m concerned the results are uncannily correct.


Kodak’s venerable black & white films were Panatomic-X with an ASA of 50, Plus-X with an ASA of 100, and Tri-X with an ASA of 400. Yes, they all started with lower exposure indexes, but those were the speeds they ended up at and thus the ones most people would be familiar with. You can probably see the relationships in the names: add one stop to Panatomic and you get Plus, add three stops and you get Tri.

There are many people experimenting with Tri-X simulations in the digital world today. There are even ‘filters’ built-in to some software to give your images that Tri-X look with a click of a button. There seems to be an obsession these days with ‘high speed’ exposure indexes, as digital cameras go up to ISO 6400 or more. That’s +6 stops from ‘normal’ daylight speed. Where are you planning on shooting? In the bottom of a mine shaft? Of course use of such high numbers then gives you the opportunity to complain about (and try and remove) the ‘noise’, which is the digital equivalent of film’s grain. In my day we tried to avoid that.

Here’s me, the old contrarian, complaining about the opposite: cameras’ minimum ISO setting seems to be 100. This is fine, but just as higher speeds have advantages, so do lower speeds. You may find that hard to fathom because the problems of lower speeds are what drive us to use higher ones. Yet neutral density filters sell. Everything is a compromise. Just be thankful you aren’t using glass wet plates, okay?

So why would we want lower speeds? Not just for blurring motion or limiting depth of field (I will die before I use that stupid ‘b’ word). Just as high speed gives you more contrast, low speed gives you more range. In black & white we’re talking about defined gray tones. Think of it like this: let’s say Plus-X gives you ‘X’ (I wonder where I got that algebraic variable from, eh?) tones. Tri-X gives you less than ‘X’ tones, and Panatomic-X gives you more than ‘X’ tones. It’s like having more colours on your palette – only they’re all gray.

Mostly as an exercise in further exploring digital photography for myself, I set out to try and simulate Tri-X and Panatomic-X. The first was easy to achieve, and every bit as disappointing as the original film. Frankly I never liked using Tri-X because it was grainy and contrasty. It’s not my style. I’m really a boring, Kodacolor kind of guy. Nevertheless, this was an experiment for science!

There are two steps to it: determining what the difference is between how the camera shoots and how the film looks, then finding a way to make the camera output look like the film. It’s not as easy as it sounds: merely changing the camera setting to monochrome doesn’t work as it relies on what the manufacturer thinks is the right tonal quality and gradations. Slipping on a neutral density 2 filter to knock the ISO down to 50 doesn’t solve the problem either; it just lowers the EV. In fact, even in combination it comes up lacking. Why? Because your camera is biased about colours.

Black & white film (aside from specialty films) is made to be fairly even in its sensitivity to all of the visible spectrum. It doesn’t necessarily achieve this, and under certain lighting circumstances some adjustment is needed to render an image that is satisfactorily normal. If you’re familiar with shooting B&W you’re probably familiar with adding a K2 (yellow) filter to heighten contrast of the sky. In this case the bias in the camera’s sensor has to be overcome in a like manner.

I found my cameras all lean towards green. In converting the colour images to B&W (my preferred method – you’ll see why later) the various colours become gray tones based on luminosity or lightness or a combination of the two. If a colour does not reflect ‘equally’ it will slide into an incorrect tonal range depending on what conversion method is employed. Thus a colour may become lighter or darker gray than it should be. Admittedly this will not be noticed by most people, and probably doesn’t really matter. So long as you like the end result, it’s fine.

In my preliminary testing of that which I knew would happen, I shot the back end of my Xterra next to grass so I had red tail lights and green foliage to look at. Sure enough, changing to monochrome produced different results with different methods. All would have been acceptable by the end user, except where the ‘end user’ is me trying to simulate wide-tone Panatomic-X film. So it was time to apply some thinking, and some filtration.

Knowing which process turns which colour which way is a good place to start. My finding was: Lightness makes red tail lights lighter, green grass darker, yellow flowers darker; Luminosity makes red tail lights darker, green grass lighter, yellow flowers lighter. The second key to the puzzle was knowing the camera favours green. So we have to make the red tones lighter and use the luminosity function in order to get the most accurate conversion.
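Those two conversions are worth seeing in numbers. Here's a sketch of the usual formulas (the weights are the common Rec. 709-style luminosity coefficients; GIMP's exact constants may differ slightly):

```python
def lightness(r, g, b):
    """'Lightness' desaturation: average of the brightest and
    darkest channels, blind to which channel is which."""
    return (max(r, g, b) + min(r, g, b)) / 2

def luminosity(r, g, b):
    """'Luminosity' desaturation: weighted heavily toward green,
    matching eye (and sensor) sensitivity."""
    return 0.21 * r + 0.72 * g + 0.07 * b

# Red tail light (255, 0, 0): luminosity comes out darker than lightness
# Green grass   (0, 255, 0):  luminosity comes out lighter
```

Running the numbers for pure red, green, and yellow reproduces exactly the tail-light/grass/flower behaviour described above.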

Enter the orange filter. Fortunately this is not a true ‘Wratten 21’ orange as that would be too extreme I think. Just enough to shift the colour, and incidentally require 1 more stop of EV – effectively lowering the ISO to 50. (The Wratten 21 would be 2 stops.)

I have to say I spent some time adjusting the camera’s “picture type” settings as well, to see if that would help. Subtle 1 step changes didn’t really show at all – or at least I couldn’t see them. I went with as ‘neutral’ as the settings could be, with an increase in sharpness to help the definition. I also reduced the MP to 4.5 for shooting, with a further reduction for Internet display. Yes, I realize this alters what you see. So does your screen and your eyes. You’ll just have to take my word for it that it’s right.

First, the picture as-shot. It’s a kind of snap-shot thing because I wanted to get a lot of variety in the composition; it helps with determining the quality of the final result as well as aiding the computer in doing the conversion (the more monochromatic the original, the more inaccurate the change to B&W).


Next, the black-and-white version. True blacks, true whites, and many gray tones in between:


Last, the colour-corrected version. This is why I like shooting in colour all the time; if this had been monochrome but would have looked better as colour, it would be lost. As it is the filter correction for B&W is subtle enough to be eliminated and deliver an acceptable quality colour rendition as well:


Perhaps a tad magenta. That could be fixed if I wanted to spend the time working on it.

Here’s a more dramatic series, same processing order. You can see what I mean about what happens when starting with a fairly monochromatic image.


Anyway, I plan to shoot some more like this and some starting with monochrome but ‘slowed down’ with a neutral density filter to see how that compares. It may be just as good, or better, or worse.

I also plan to try simulations of Kodachrome and Ektachrome once I’ve worked out the initial settings. But probably not Agfachrome with its muddy “European” colours. 😀

I’m a photography snob, I guess

Random commentary and advice for beginning photographers.


“The Lexx”

Partly because I have the hubris to think that 50+ years of shooting pictures with literally hundreds of different cameras means I must have learned something about the subject. Possibly about the predicate as well.

Partly because I look at shots from today’s pros and wonder if they’re really trying with the pictures they present or are just posting up random shots, holding their best work in reserve for paying gigs. Maybe it’s only my opinion that they are trying to demonstrate their ability with what amount to little more than snapshots. Or maybe those shots are what are deemed great work these days. O tempora, o mores?

Heaven knows I post a lot of junk shots, but always with a purpose. How many dozen photos of the old sheds have there been? That’s because you need a consistent subject to demonstrate other variables, and that particular scene has a lot going for it in terms of resolution and colour potential – as well as being close at hand.

One of the things that makes me question my own ability is the lack of feedback on images I’ve done which I think are really top-notch. Perhaps I’m wrong in my evaluation of composition, framing, and exposure. Maybe my pictures are the amateur ones and I’m doing it all backward. It could be that I’ve learned nothing over half a century, or learned it all wrong.

Or possibly not.

Anyway, I still like them and when it comes down to evaluating your pictures you are the best (if inevitably overly self-critical as we all are) judge. Did they come out the way you wanted? Are you satisfied with the result? Yes? Then they’re fine.

Fortunately I generally keep my big mouth shut in respect to others’ works specifically. If someone asks I might try and gently guide them toward what I think is good and what could be improved. Sometimes I’ll see something and make a hopefully innocuous suggestion about a potential alternative rendering. Usually I’ll just go with commenting on what is good in it and remain silent about any perceived flaws, no matter how glaring. But I’ll give you all this bit of generalized advice gratis: the most persistent mistakes are in framing and composition.

Okay, enough of that. Here’s some random comments on photography equipment, for whatever they’re worth. Maybe you can glean something worthwhile from the chaff.

Full-frame sensors. I just read an excellent commentary on these about how people are wrapped up in the psychology of the terminology: “full” must equate to “better”, right? Nope. This is a holdover from film days, when a larger format was how you got better resolution: the silver grains on the celluloid were a given size and you could only pack so many into the space available. When it comes down to molecules, film is digital too. Consider how the image is to be displayed and you will see that even 18 MP sensors are usually ‘overboard’ for resolution. One of the nice aspects of them (as I have demonstrated previously) is the ability to do some “digital zooming” in the post-processing stage and still have a shot that can make a decent full-size print. There are even people who turn their camera’s resolution down because they don’t need/want >10 MP (yes, you can do this on many cameras).

RAW format. I can’t see why people get so hung up on this. You don’t really need it, even if you are a “pro”. It takes up a lot of memory space (not much of a problem considering how cheap that is these days) and, what’s more, a lot of time to process. For the most part, spending huge amounts of time processing RAW data to get a shot that could have been had straight out of the camera in JPEG is just ridiculous. There are times when a lot of hard work with the RAW file will render the results you want, but how often? I have seen such shots looking “too real”: they may be spectacular art, but they are necessarily in the same category as other “processed” photo art; more art than photo. Also, have you not read the complaints from photographers who realize they get caught up in processing and never know when to quit? It’s easy enough to do that when just tweaking contrast. Maybe I’m an old-school film-type snob, but you ought to be able to get the basic shot you want right out of the camera (and without in-camera tricks). Admittedly I have been known to alter reality, or at least the camera’s perception of it, to achieve this. But it still counts.

Brands. Oh pul-eeze! Are we still doing this stupid, childish bickering about product names? My quick test of Canon vs. Nikon with two different camera types showed the Nikon superior. If you want I can do another test with the same two cameras and get the opposite results: I just have to change the evaluation criteria. One of the things I have learned over five decades is that every company, regardless of what they make, has its mixture of successes and failures. Much of the evaluation criteria is purely subjective; does it work the way you want/expect it to? Yes you will see a difference in lens quality, sensor rendition, and perhaps even exposure accuracy (read the manual and you can probably make up for that), but it’s pretty rare to come across a camera these days that’s absolute junk. At least among the reputable brands; there are some low-dollar point-and-shoots out there that I wouldn’t trust to function at all, much less return a decent shot.

Zoom vs. Prime. Why is there even any discussion about “which one is better?” That is a question asked by those just starting out, and that’s the only time it is valid. For me the zoom is more suited to my shooting because I keep having to change focal lengths along the way, usually with more speed than twisting a lens on and off will allow. But I realize I’m giving up potential sharpness in doing so. Also, being able to change position for framing/composition isn’t always possible in the places where I shoot. But that’s me. It can be argued that the zoom lens is better to start with because it gives a person a chance to try out many different focal lengths in one unit. As they get better at photography in general they may see the benefit of using a fixed-length lens (and which fixed length, come to that). Just don’t decry one or the other. You may state that a particular lens within a type is better than another of the same type or the average of the category, because this can be true. Otherwise you’re saying pick-up trucks are better than economy cars, when they are meant to do completely different jobs.

In short it’s all about the way you shoot and the type of photography you do. Not what I do or what Jack does. When you’re just starting you probably don’t know how you will shoot or what type of photography you’ll do, so you’re allowed to be ignorant. But you’re not allowed to be stupid: you can read immense amounts on-line about photography. How do you sort out what to read? Poke around with the search engines for what interests you, what answers your questions about photography, and what you think you’ll do with it. You’ll find some references to cameras which you can then look up for further information. Now here’s the big advice: download and read the instruction manual before you buy it! You’ll be amazed at how helpful that can be not only in finalizing a choice of equipment but in advancing your general knowledge of photography as well; the manuals contain not only info on which control does what, but also about under what circumstances they should be used.

Lens accessories: filters, diopters, extenders, and other items. Well there is a vast array of this equipment and I can only speak to general terms. Cameras and computer software now have built-in ‘filters’ for post-processing, but it’s just not the same. You can do quite a few after-the-shot tricks, but if the data isn’t (or is) there to begin with that’s another problem. In film photography we were up against negatives that were too ‘thick’ (dark) or too ‘thin’ (light) and all of the solutions (sometimes literally; Farmer’s Reducer) to these problems were less than perfect.

UV filters, always ‘suspect’ even in the days of film, do little more than protect the front of your lens. Without expensive equipment and testing you will never know if you’ve got something blocking invisible ultraviolet or just a chunk of glass; the sensors don’t really pick it up.

CPL filters manage to cut down some glare and slightly increase contrast. Of the two I’d pick the CPL as the better investment because getting rid of that glare is one of the things you can’t do later.

There is also a plethora of coloured, half-toned, neutral density, and ‘warming’ filters. Best to leave these off your initial shopping list: a good camera has a wide range of exposure and white balance settings which will handle most situations, and you can alter over-all colour in post-processing.

Diopters get sold under all sorts of descriptions like “close-up filters”. You’ll know them by their ratings: +1, +2, +4, et cetera. They screw on the front and make it possible to get really close to your subject. Although there’s some advantage to them, there are some disadvantages too. Before you go dumping money into diopters try and see how close your camera can focus on its own. Many of them have macro settings which will do a very good job of it. Oh and there’s a decided advantage in using a long focal length lens in macro mode as it keeps you from getting physically close to the subject and thus getting in the way of the light falling on it. Maybe you won’t even want to do close-up photography, or not do it often enough to invest heavily in it.
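The ratings themselves are just optical power in dioptres, which gives a handy rule of thumb: with the main lens focused at infinity, a +D close-up lens focuses at roughly 1/D metres from the subject. A trivial sketch (my own helper function, purely for illustration):

```python
def diopter_focus_m(power):
    """Approximate subject distance, in metres, for a close-up lens
    of the given dioptre rating with the main lens set to infinity."""
    return 1.0 / power

# +1 -> 1.0 m, +2 -> 0.5 m, +4 -> 0.25 m
```

So the higher the number, the closer you get, and stacking ratings adds the powers together.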

Extension tubes and lens reversing rings are two more ways to get up close for cameras with interchangeable lenses. The reversing rings are again not being called by their correct name these days, but the description is the same: one screws on the front of the lens and allows you to turn it around and fasten it to the camera backwards. Extension tubes merely move the lens forward, altering the focusing distance. Neither uses any optics, so they will not introduce any kind of distortion from glass (which diopters may), but the tubes will decrease light noticeably and thus affect exposure. Reversing rings will eliminate the lens-camera information connection, necessitating manual exposure and focus, as will the tubes. These can only be used with cameras that have interchangeable lenses, so probably are not going to be on your list of “must haves”.

Lens extenders traditionally go behind the lens and multiply the focal length by factors of 1.5, 2.0, or 3.0X, effectively turning a 50mm lens into a 75mm, for example. They do have optics, can introduce distortion, and will cut down on light. There are others which screw on the front and can be used with any camera that takes a screw-on filter. These are not as useful, in my opinion. Some are meant to give you a wider view, others more telephoto. They will vignette the image in some cases (such as on wider lenses) and can introduce distortion. They have the advantage of being inexpensive (and sometimes cheap), fitting any camera with screw threads on the front, and maintaining the lens-camera data connection (although it may not work correctly). Worth it? In my opinion, no. Shoot first with what you’ve got and then see if you really want wider or longer lens capability; then buy the appropriate lens. If you’ve bought a ‘bridge’ camera you already have a wide-tele zoom, and one of these isn’t going to offer much advantage and may not even work right. The behind-the-lens type for DSLRs is a better option, but not a popular one, I notice. This is a bit odd, because the range of lenses on offer too often looks like the people designing the cameras have never shot with one, if you know what I mean. If you don’t … well, the explanation takes too long.
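
The arithmetic on extenders is also simple enough to sketch: the focal length multiplies by the factor, and so does the f-number, which works out to 2 × log₂(factor) stops of light lost (standard formula; the function names are my own):

```python
import math

def effective_focal_mm(focal_mm: float, factor: float) -> float:
    """A behind-the-lens extender multiplies the focal length."""
    return focal_mm * factor

def extender_light_loss_stops(factor: float) -> float:
    """The f-number scales with the factor, so light loss in stops
    is 2 * log2(factor): a 2X extender costs 2 full stops."""
    return 2 * math.log2(factor)

print(effective_focal_mm(50, 1.5))     # 75.0 mm, as in the text
print(extender_light_loss_stops(2.0))  # 2.0 stops
```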

Infrared. Here we arrive at photographica esoterica maxima. Camera manufacturers purposely block near-infrared from the sensor because our eyes don’t see it; if they didn’t, the near-IR light would intrude and be rendered as visible, giving us messed-up photos. But it’s interesting to look outside the spectrum we normally see in, so sometimes we may want to do this. It is not easy. The best way is to get a camera modified so the sensor isn’t blocked from IR. Even then there is a lot involved, including very long exposures and post-processing adjustments. There are “IR filters” sold all over which allege they can give you the experience without the hassle. No, they just exchange one hassle for a different one: they still require long exposures and a lot of post-processing to get an image, and that image is not going to look like those wonderful IR shots done by people with the right equipment and experience. I bought one of these filters to play with and am getting some acceptable, albeit quite unexpected, results after just a few dozen attempts. On the whole I wouldn’t recommend them.

Colour vs. B&W. My recommendation is to always shoot in colour. Oh, I know B&W is an art form in itself, and I have used it myself to great effect on many occasions. So why always shoot in colour? Because you can’t go home again. If you have the colour data in the image you can take it out. If it’s not there, putting it in is, well, not impossible but nearly so. You can turn colour down a bit too, for that faded look. Or take it right out. And crank up the contrast to lithography levels if you want. Whereas if the picture would look great in colour, but you shot it in B&W, you’re stuck.
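
For what it’s worth, “taking the colour out” in post is just weighted arithmetic on the RGB channels. A minimal sketch using the common ITU-R BT.601 luma weights (software may use other weightings):

```python
def to_gray(rgb):
    """Collapse an (R, G, B) pixel to a single luma value using
    the ITU-R BT.601 weights; green dominates because our eyes do."""
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray((255, 255, 255)))  # pure white stays 255
print(to_gray((255, 0, 0)))      # pure red -> a fairly dark 76
```

Going the other way, from a single gray value back to three independent colour channels, is exactly the information you can’t recover, which is the whole argument for shooting colour.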

Flash. I used to do a lot of flash photography because film had limited speed. 400 was pushing it (Tri-X actually worked better at its original 320). Cameras couldn’t see in the dark (Canon’s f0.95 lens is still one of the ‘fastest’ ever made). Tripods were needed below 1/50 second even if you were really steady; there was no built-in compensation. So I shot flash bulbs and electronic flash whenever lighting was too low for ‘natural’ shots. Flash tends to be harsh and contrasty. Today’s cameras can ‘hold themselves steady’ at quite slow speeds (I’ve never tested the limit) and have ISO 6400 built in at the push of a few buttons. Do you need flash? Maybe. There are a lot of times when the scene has its own harsh shadows and fill-in flash will save the day. There may even be times when you want the special effects that can only be created with flash (or multiple ones). For the most part the built-in flash of the modern camera will handle the job, especially once you learn how to set it for fill-in, red-eye reduction, et cetera. Buying an accessory flash can come later, if at all.

Tripod. Well I have one and I use it. It’s cheap and old and I just glued the rubber feet back on. It’s a Hakuba. No, I never heard of it either. Not sure how many decades ago I bought it. Originally one of many, I kept this one because it wasn’t the worst of ’em. Some recommendation, eh? Do you need a tripod? If you do low-light or long exposure shots, yes. If you don’t know yet, no. If you’re just trying things out – get a used one for cheap and see if it fits your needs. In fact that’s good advice for any piece of equipment; when you see what the shortcomings of it are vis-a-vis your photography you’ll know just what to look for next time. And if there are no shortcomings you’ve come out ahead.

In summation, I have a Nikon P610 ‘bridge’ type camera with an absurd zoom range and a Canon T100 DSLR which gives me huge lens options (including using old film SLR glass). The Nikon rarely lets me down at getting the photo I want, and that in itself is saying something. The Canon on the other hand is a lot of fun to play around with, and frankly its controls are more sensibly laid out. For someone who has exceeded the abilities of a point-and-shoot digital I’d recommend a ‘bridge’ type camera as a next step, because you may not need to take another. I definitely would not recommend jumping in with both feet by buying a really expensive mirrorless camera or top-end DSLR because frankly I think you’ll be both frustrated and disappointed, as well as a good deal poorer.


Loose lens, I mean ends

Just a few things I’ve done lately while experimenting without any particular purpose in mind.

First off, a demonstration of what a polarizing filter can do. The first shot is with the polarizer not blocking glare, the second with it twisted to eliminate as much as possible.


Notice it doesn’t completely eliminate the reflected clouds, but it does reduce them greatly and under different circumstances could knock out unwanted glare completely.

Another thing it can do is enhance contrast ever-so-slightly. Again, first shot without and second shot with the filter twisted to achieve maximum effect:


Next we have two B&W shots showing the difference between the Canon’s widest angle of 18mm and the 28mm Super Takumar. Camera on tripod and only the lens changed between shots:


Despite the fact both shots were at “the same” exposure (ISO 100, 1/250, f8) the Takumar is slightly darker and has better contrast. This is either a difference in how the two lenses interpret the aperture markings, or in their optical construction. On the whole I like the Takumar better.

This next one is an experiment with a so-called “infrared filter”, which is really just a very dense red filter. Although an interesting effect, it is not infrared and not the effect I was after. Some more experimentation is due here, as the camera mostly cannot manage a proper exposure or focus on its own, and even seeing through the filter in bright daylight for framing and focusing is impossible. Great if you want to take a picture of the red sun, though.


My next “experiment”, if I can get co-operation from the world at large, will be to shoot some “normal” pictures in the manner of “a roll of film where every frame matters” rather than this blasting away with dozens of minor variations.

Unless I think of something more interesting to try, that is.

Post Script: I finally shot some images in RAW format. Can’t see the reason for it, frankly. It takes too long to resolve the data into an image you could just have as a JPEG to begin with. I guess if your intent is to do something really extreme in post-processing … well, I haven’t got that many years left to waste them on RAW.

Canon T100, “roll 2”

The second roll of film, so to speak. This one turned out to be 36 exposures because I was having so much fun. No, they’re not all winners by far; it’s still largely experimental as I continue to get the feel of the camera. This time the major change was to turn off Auto White Balance and set it for daylight, as film doesn’t magically adjust for lighting conditions. On the whole there was no noticeable change, but I haven’t tried anything under artificial light. Even cloudy skies didn’t throw it off. I call that success.

If a tree falls in the back yard and a dog hears it, is the dog to blame? Probably. *LOL* Notice the colour is just the same as in previous pics with AWB turned on. Interestingly, I was expecting some change (improvement) in the effect of the polarizer filter with AWB off. Instead I saw none. I’ll admit mine is a bit old (and fogging around the edges), but it does appear that as far as digital sensors are concerned the polarizing effect is limited to reducing reflective glare; there is no noticeable contrast enhancement as found with film (in pictures of clouds, for example). Not quite as bad as the UV filter’s “as good as a piece of glass” results, but still disappointing.

Now here we play “guess the film”; I played with pink (which looks like out-of-date something-or-other), brown (which gives a sort of Ekta-film look), and gray (see below). The gray filter alters the colours to look like an older photo that’s lost some of its dye saturation. I may do more of this later, as I rather like the effect.


Reduced saturation in the green, albeit at the expense of further reduction in red rendition as well (which the sensors aren’t good at to begin with). Can be nice under certain circumstances:


Yes, I’m getting a bit artistic now as I become more comfortable with the camera. For art you need good subjects, though. Some insects obliged me. By selective cropping and some slight enhancement we get results like this:


No, I didn’t paint those zeroes on it! I do miss the extreme zoom of the Nikon P610 for shots of this type, but even so I can crop to good results as these next two pics demonstrate:


Yes, those are both the same image.

Sometimes you have to do a bit more work to get good results, like when the lighting is against you:


And of course there’s controlling depth of field. One thing still missing from these modern lenses is a focusing ring with DOF/aperture indicator like this:


One of the lenses I plan to use on this camera if the adapter ever shows up: a Vivitar 135mm f2.8 M42 mount.

Still we do the best we can with what we’ve got:


On the whole I’d say this Canon EOS Rebel T100 is giving me exactly what I was looking for, making my record of choosing “complex” digital cameras 3 for 3. I’ve already ordered a 2X ‘front element telephoto’ as well as the yet-to-arrive M42 lens adapter.

I may be coughing with every breath and facing imminent extinction, but for now I’m having fun.

Philtres Part 2

I got a little impatient and decided to do a quick test of the new filters before the adapter arrived – and before the next thunderstorm rumbled in. Herewith we have ten pictures, starting with “unfiltered” and working through the nine colours: red, orange, yellow, green, blue, purple, pink, brown, and gray. These are hand-held grab shots pointed at the sky to get a mix of white/gray clouds, blue firmament, and green leaves. You can see my fingers in some of the shots due to the hand-holding of the filters. No post-processing was done other than shrinking the size to be Internet friendly.











The red and the blue seem especially drastic and unsuitable for ‘normal’ colour use. The brown renders nicely subdued colour. The yellow looks like aged film, as does the pink to some extent. I have not yet tried combinations, or giving the camera a heart attack by using the filters with special settings other than B&W. Getting a feel for the results will take some time before it can be applied to get the desired artistic outcome.

On another note, it occurred to me that cameras could read a data file written to the SD card that would reset the camera to particular values for ISO or B&W/colour modes, like inserting a roll of film (some film canisters had codes on them that certain cameras could read to automatically set the ASA/ISO rating, for example). They could even write associated data files per picture, with alpha-numeric input from the user. I sometimes wonder if the people who design cameras think about what the user would want or need, or just put in whatever is easiest to do. Surely they have some feedback from photographers? Of course what is done for the ‘amateur’ market is the minimum within the budget, as usual.
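
Since the feature is pure speculation on my part, here’s a sketch of what such a “roll of film” file might look like and how trivially firmware could parse it; the format and key names are invented purely for illustration:

```python
# Hypothetical "roll of film" settings file a camera could read
# from the SD card on insertion. None of this is a real camera
# feature; the format and keys are invented for illustration.
ROLL_FILE = """\
iso=100
mode=bw
white_balance=daylight
"""

def parse_roll(text: str) -> dict:
    """Parse simple key=value lines, skipping blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

print(parse_roll(ROLL_FILE))
# {'iso': '100', 'mode': 'bw', 'white_balance': 'daylight'}
```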

Anyway, I’ve got more experimentation to do.