Wishing for Sony Alpha 7R - once more

Half a year ago I tried the Sony Alpha 7R for the first time. My interest in this camera stems from its 36-megapixel sensor and a compact body for a 35mm sensor size. What I thought about it then can be read in my blog from last November. At that time both body and lens were still preproduction samples. Some weeks ago I took another look at the A7R with production samples, and this time I had it for two weeks. Another reason for this second test was the availability of the Carl Zeiss Sonnar T* labeled Sony FE 35mm F2.8 ZA lens. This lens is designed and manufactured by Sony to quality parameters set by Zeiss.

The Alpha 7R and FE 35mm F2.8 lens make a great combination by size and weight. This lens also has a very clever lens hood. This is something every lens maker should copy!

About

My wish for the A7R is a camera that gives me the best handheld resolution in big exhibition prints. If I were looking for a camera to be used on a tripod I would go for something else, like a small view camera or an Alpa with the best digital back. I am comparing here to the Olympus E-M1 simply because I really wanted to know where the A7R would be of advantage.

Both the body and the 35mm lens have by now been reviewed at many sites across the web. Dpreview.com has tested both body and lens, Photozone.de has tested the lens on this body, and DxOMark.com has again tested both. Because of this I won't go into every detail and specification, and I omit here everything I was not interested in.

The A7R has also provoked some issues that have been discussed on internet forums: shutter shock, light leakage and Sony's compressed (11-bit) RAW file. Maybe there are more, but at least these have been discussed. As I was shooting handheld with the 35mm lens, I did not enter the territory where shutter shock is reported: a short tele on a tripod. I did not notice shutter shock issues. Nor did I shoot in conditions where light leakage is said to appear: long exposures. I did not notice it. A compressed and lossy RAW file is something I would rather not have. While I can bring those compression artifacts up at contrasty edges with heavy tweaking, they were not visible in normally processed (LR 5.4) prints (Epson 9900). So, I did not go digging for problems, I just shot pictures.

 

Touch and feel

As I said in my previous blog on the A7R, I liked shooting with it. Only now not as much as I remembered from half a year ago. The main reason for this is the Olympus E-M1, which I have used a lot by now. The E-M1 is so much more responsive in everything that the A7R feels sluggish after it. It all starts with turning the camera on. While the E-M1 is ready practically instantly, the A7R takes its time. This is the most notable difference, but the same trend runs through everything I did with either camera. Compared to the earlier preproduction body, the shutter release has been vastly improved. It is a lot smoother and more predictable. Where I had problems was easing the shutter button back up to the half-press position without letting focus or exposure change. It is easy and much used with the E-M1 but turned out to be almost impossible for me with the A7R.

The EVF in the E-M1 is better both in bright sunshine and in dark interiors. The first comes from the E-M1's automatic adaptation to ambient light. The A7R's EVF can be set quite bright, but not as bright as the E-M1's, and then it is too bright in lower light. Secondly, the A7R's EVF shows more noise when there is less light. The E-M1 also has a shorter viewfinder lag.

With the A7R one has to work more patiently and not try to be faster than the camera's processor allows.

 

Zebra

During my previous time with the A7R I did not realise that zebra is usable also for still photography. Zebra is adjustable in the A7R, and it makes Sony the second camera maker to my knowledge with an excellent light metering aid shown in the EVF. Compared to Olympus' orange/blue colors, zebra shows single-channel saturation better. On the other hand it is more difficult to judge smaller areas with zebra because, as the name says, it is shown as moving stripes. And then, of course, Olympus has nothing comparable to zebra for video shooting. All in all my exposures with the A7R were spot on when the RAW files were opened in Lightroom, thanks to zebra.

 

FE 35mm F2.8

This 35mm lens felt like a perfect compromise in use. Its slower speed makes it smaller than most lenses of the same focal length for 35mm cameras. It really makes a great combination with the A7R by size and overall feel in use.

Even at this moderate speed, such a small lens design leads to vignetting, as the examples below show. Vignetting is strongest at the largest aperture, and it is always visible unless the subject matter hides it. (These images were shot at the camera's automatic exposure setting, with no other tweaks in Lightroom.)

Distortion is not a regular barrel shape, but luckily it is corrected quite well by the Lightroom lens profile. For landscape work there is no need to have distortion correction on most of the time. With buildings the lens profile makes life easy.

Bokeh is average, and out-of-focus highlights show the shape of the seven-bladed diaphragm already at f/4.

Resolution from this lens and body combination is excellent for prints up to A1 size, at every aperture up to f/11, over the whole frame. Only the corners at f/2.8 may show how big the difference between center and corners actually is. The center of the image is outstandingly sharp.

 

A Lightroom or Sony RAW Problem?

When you click on the image below, you can see how Lightroom shows jaggies along some of the fence lines. The crop is 200%. The out-of-camera JPEG doesn't have this problem. The problem does not exist against a neutral, blue or green background, only against reds, oranges and yellows. Those jaggies can be seen in A1 prints, and they actually are irritating for a perfectionist like me. I can't help seeing them... People who did not know to look for them did not notice them at all. At print sizes A2 and below the jaggies are too small to be seen. Also in the detail of the image above you can see some jaggies. They can't be seen in an A1 print with the naked eye.

 

Prints and details

A1 prints piling up. The bike image is here in two parts, as it was the first try to see the look of two different materials: Canson's Infinity Baryta Photographique and Infinity Platine Fibre Rag. Both are semigloss papers, and the smoother surface of Baryta Photographique brings out the A7R's detailed images beautifully. Normally, if the aim is not to do pixel peeping on paper, I like the character of Platine better. (Image shot with iPad)

I printed quite a few images at sizes up to A1 and compared them to E-M1 prints. Side by side the A7R shows more detail if the image has lots of very small details, especially in the central area. Still, the difference over the frame is surprisingly small, and people whom I asked just "what's the difference" never identified the A7R and E-M1 prints as differing in detail. It was always something else, based on various personal preferences. With many subjects there is nothing to show any difference at all.

Theoretically, at the sensor level, the A7R has a 50% linear resolution advantage over the E-M1. For the camera/lens combination, Photozone.de figures show it (with this 35mm lens) having at best a 20% advantage over the height of a landscape image compared to the Panasonic GX1 with the Olympus 12-40mm PRO zoom at 17mm focal length. And the GX1 has an AA filter while the E-M1 has none. With the E-M1 we really end up with a very small difference, especially if we look over the width of a landscape image, not just the image center. Note: comparing MTF numbers of two camera-lens combinations is actually not this simple. Many factors affect MTF results, and Imatest MTF50 only tells about the ability to shoot a flat paper target. Real subjects have many other and diverse qualities.
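The sensor-level figure is easy to verify: pixel counts scale with sensor area, so linear resolution scales with the square root of the megapixel ratio. A quick sketch:

```python
import math

def linear_advantage(mp_a, mp_b):
    """Linear (per-axis) resolution ratio between two sensors with
    mp_a and mp_b megapixels, assuming similar aspect ratios."""
    return math.sqrt(mp_a / mp_b)

# A7R (36 MP) vs E-M1 (16 MP): sqrt(36/16) = 1.5
ratio = linear_advantage(36, 16)
print(f"{ratio:.2f}x linear, i.e. {100 * (ratio - 1):.0f}% advantage")
```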

Beyond small details (and the above-mentioned jaggies) there really are no other factors I could name that give A7R or E-M1 prints any advantage over each other.

In my earlier comparison of the Nikon D800E and Olympus E-M5 I gave some studio examples to be printed. Try them out; they are the harshest differences in favor of megapixels you can think of. With real subjects many other things matter a lot.


Am I shaky or what?

In my previous test with the A7R I was shocked to realize how short the exposure times had to be to get really sharp (pixel-level sharp) images with the A7R and a 50mm lens. Now, with the 35mm lens and the new smoother shutter actuation, my sharpness rate at 1/30 second is merely 22%. Roughly one in five! By sharp I mean here sharp at pixel level, no shakiness at 100% magnification. With the E-M1, IBIS and a 17mm focal length I can do better at 1/3 second. That's over 3 EV. Naturally I can't blame the A7R here; it's me and the camera together. Worse still, the difference in favor of the E-M1 grows when my shooting position is compromised in any way.
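The 3 EV figure checks out: the difference in stops between two shutter speeds is the base-2 logarithm of their ratio.

```python
import math

def ev_difference(slow_s, fast_s):
    """Exposure difference in stops between two shutter speeds (seconds)."""
    return math.log2(slow_s / fast_s)

# 1/3 s (E-M1 with IBIS) versus 1/30 s (A7R, this test)
print(f"{ev_difference(1 / 3, 1 / 30):.1f} EV")  # log2(10), a bit over 3.3
```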

The only way to beat this difference in usable shutter speeds would be raising ISO on the A7R. Sadly, the headroom to raise it without getting more noise is here just 1 EV, as can also be seen in the dpreview.com comparison chart linked above.

 

Making a compromise

I have been looking for more handheld megapixels, in the sense of details, from the Nikon D800E and the Sony A7R. With both I can have more megapixels, but more actual detail only under limited circumstances, for various reasons. With Sony one of those reasons is also their limited set of lenses. My reference of quality is prints at A1 size, and I have found that small gains in detail against bigger losses in usability and usable circumstances (subject, light level) are not worth having another set of cameras and lenses. There are so many things one can wish for; sadly, one may run into more problems when a wish comes true... Shooting with view cameras has been my bread and butter, and the benefits and problems of that kind of work are well known to me. Maybe I should not look anywhere else if I really want to go for big landscapes?

-p-


How Clean is a Camera Pixel?

This is the fourth and last part of my little series on pixels. In the first part (How Big is a Pixel) I said that most of the time we don't know if the color of a pixel is right just by looking at that single pixel. It is possible only if the pixel is inside an area of known subject color. Normally we need a bunch of pixels to find the good and bad ones.

A reference target like this SpyderCheckr helps in determining how accurate the colors a camera produces are, because all these colors have known values. This makes camera profiling possible. An ICC profile corrects a camera's color rendering to a very accurate level. Color management is extremely important if right colors are mandatory in your photography. Again remember: color is the only property any pixel in any image has (together with its known place in the image grid). Everything in an image comes from these color values.

What is the color of her lipstick? Our images are not about solid colors; colors change all the time because shapes are curved and light has direction. That's why almost every pixel has a unique color, at least slightly different from its neighbouring pixels. This often makes determining the right color quite difficult, even if we knew what the color of this lipstick should be. With a correctly made profile one's camera would make the lipstick color seem visually correct on the whole, even if none of the image pixels had the exactly correct value. Basically this gradual change in pixel colors comes from shape and light, but there is always one more factor influencing single camera pixels. This is noise.

Noise

In the previous blog we saw how noise limits dynamic range. The Nikon D4 had less dynamic range than the Nikon D800 at ISO 100, even though it has deeper (bigger) camera pixels. The sensor in the D4 has lots of read noise compared to the D800's sensor at ISO 100. With increasing ISO speed the D4's deeper pixels take command and the D800's pixels can't compete, because they receive fewer photons, and fewer photons means a less accurate measurement. This loss of accuracy is itself one type of noise; more comes from the electronic circuitry. Actually there are several types of noise, which all come from light's particle behaviour (photons) and the nature of camera pixels as photodiodes. A photodiode is a semiconductor which converts light into electric current. In most cameras today the semiconductor type in use is CMOS; cameras have CMOS sensors. To summarize all this very simply: every single photodiode (camera pixel) is a tiny voltage meter. That's all it does. And it is not a perfect meter, because its imperfections cause inaccuracy - noise.

Two crops from larger images, both shot with the same camera. In the left-hand image the exposure was set to fill the lightest camera pixels up; the one on the right got less exposure, less light, as determined by the camera's automatic exposure system. Both images were brought to the same brightness in Lightroom. The one which received less light has more noise, and the noise destroys the wispy clouds in the sky. At pixel level neither sky is noise free. In this case it is impossible to know which pixel is correct. The fact to know is that less light gives those tiny voltage meters fewer photons. If the difference in exposures is one f-stop, the difference in photons is double or half. One stop less exposure gives our voltage meter half the number of photons to count and turn into electricity, and naturally it can't be as accurate with half the information.
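The "half the information, half the accuracy" idea can be made concrete. Photon arrival follows Poisson statistics, so for a signal of N photons the shot noise is sqrt(N) and the signal-to-noise ratio is therefore also sqrt(N). A sketch with a purely hypothetical photon count:

```python
import math

def shot_noise_snr(photons):
    """Poisson shot noise: signal N, noise sqrt(N), so SNR = sqrt(N)."""
    return math.sqrt(photons)

full = 20000  # hypothetical photon count at full exposure
for stops_under in range(4):
    n = full / 2 ** stops_under
    print(f"-{stops_under} EV: {n:7.0f} photons, SNR {shot_noise_snr(n):.0f}")
```

Each stop of underexposure halves the photon count but lowers the SNR only by a factor of sqrt(2), which is why shadows get noisier gradually rather than collapsing all at once.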

Do you know which source of noise is the most important one lowering image quality in your images? Sadly, it is you! Unless, that is, you know how to expose right and fill the camera pixels up (but not over) every time. Only after learning to expose up does the sensor become the dominant source of noise. What we also noticed earlier was that you should always keep ISO as low as the shooting situation allows. Naturally it is always a compromise, and it is better to take sharp but slightly noisy images rather than smooth but hopelessly blurry ones.

Counting Photons

Let's take one more look at those photon numbers. Sensorgen.info has them, and out of my own interest I chose the Olympus OM-D E-M5. Its full well capacity, or Saturation (e-), is 25041 photons, each of which ideally kicks one electron moving in the circuitry. As ISO is increased, every doubling of ISO drops the saturation point, or well capacity, to half. This happens in every camera and sensor: doubling ISO, or exposing one f-stop less, leads to half the accuracy. There are slight variations from sensor to sensor, so that an ISO doubling might lead to slightly less or slightly more than half, but the principle stays. As you can see at Sensorgen.info, for the E-M5 the number of electrons (countable photons) at ISO 25600 drops to a mere 176. From 25000 to 176... Counting every single little photon, and inducing exactly one electron each time, suddenly becomes extremely important. And this 176 is for the very brightest point of your scene, leaving those camera pixels which count photons from shadow areas quite helpless: one may get a few while its neighbour has none... No wonder any E-M5 image at ISO 25600 has more noise than real color information.
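In the idealized halving model, saturation capacity at any ISO follows directly from the base value. A sketch using the Sensorgen figure for the E-M5, with base ISO 200 assumed here:

```python
def saturation_at_iso(base_capacity_e, base_iso, iso):
    """Idealized model: saturation capacity halves with every
    doubling of ISO, i.e. it scales as base_iso / iso."""
    return base_capacity_e * base_iso / iso

# E-M5: 25041 e- at base ISO (Sensorgen figure)
for iso in (200, 800, 3200, 25600):
    print(iso, round(saturation_at_iso(25041, 200, iso)))
```

The ideal model lands at about 196 electrons at ISO 25600; Sensorgen's measured 176 shows that real sensors deviate somewhat from perfect halving.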

While here, one more thing. Actually we discussed this already, but let us note once more how dynamic range does not get halved when Saturation (e-) is halved. At first, that is. Read noise is largest at the lowest ISOs but drops fast. That's why dynamic range can't be at its highest at the lowest ISO, and that's why dynamic range does not immediately drop stop by stop. Only when read noise levels out does dynamic range start to drop by a stop with every halving of photons. That's how read noise is seen indirectly.
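This behaviour falls out of the usual engineering definition of dynamic range, DR = log2(saturation / read noise). A sketch with purely illustrative numbers, where read noise (in electrons) first falls with ISO and then levels out:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops."""
    return math.log2(full_well_e / read_noise_e)

iso, full_well = 100, 40000           # hypothetical sensor
read_noise = [30, 15, 8, 4, 3, 3, 3]  # e-, illustrative only
for rn in read_noise:
    print(f"ISO {iso:>5}: DR {dynamic_range_stops(full_well, rn):.1f} stops")
    iso, full_well = iso * 2, full_well / 2
```

As long as read noise keeps falling roughly in step with the shrinking full well, DR stays nearly flat; once read noise bottoms out, DR drops a stop per ISO doubling.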

An Example on Pixel Cleanliness vs. Other Factors

This example comes from Dpreview.com. I didn't find a way to link to the exact spot I wanted to show, so I also took the screen capture below. Following the link you can check other spots and also see how noise affects various colors differently with each of these cameras.

This example shows noise levels in shadows. These are at the same print size, which is a fair comparison because sharp noise scales up or down linearly enough. At these ISO differences these four cameras have roughly equal amounts of shadow noise. This is also directly in line with the dynamic range graphs we had in the previous blog. As a side note, the Nikon D800 has less shadow noise than the Sony A7R at ISO 3200, even though they (should) have the same base sensor.

Here we see again how this kind of noise blocks details in shadows and limits that end of the dynamic range, much as read noise does. It prevents any meaningful color information from existing in these darker areas of image files. Too few photons make our little voltage meters go crazy. Every one of them must try to figure things out alone, without any help from its neighbours. As a matter of fact, for this kind of craziness there is a medication: noise reduction, which is a sort of averaging of the mistakes. It can be given at various points of the process, and it is not too harmful in small doses, but I would again recommend anyone to try to help the little guys stay sane and give them real, clean information instead.

-p-

How Deep is a Camera Pixel?

Deep? Well, I didn't want to write big... However, the fact is that pixels in a camera's sensor have size while pixels in an image file don't.

A single "pixel" from a camera sensor looks like this at the conceptual level. The image says CCD, but the idea is the same for CMOS sensors. The pixel's light-sensitive area is shown as the Dye Layer, and as you can see it is far smaller than the area taken by the whole electronic device which a "pixel" actually is. Microlenses are used to gather more of the light hitting the pixel area into the light-sensitive Dye Layer. Image source Olympus.

Pixel pitch is often used as a measure of pixel size. Pixel pitch is simply the distance between the centers of two adjacent pixels.

Image source Olympus.

This image of a slice of sensor shows clearly how pixel pitch is actually a very bad indicator of pixel size. The distance from one pixel center to another does not define the light-sensitive area at all. In practice it is true that we see the light-sensitive area growing with increasing pixel pitch when we compare sensors at the same technical level. Theoretically it should even grow faster than pixel pitch, because the electronics takes proportionally less space in a larger pixel than in a smaller one. But really, we don't know.

Fill factor describes how well light is gathered from the whole pixel area into the light-sensitive Dye Layer. The maximum theoretical value for fill factor is 100%, or 1, which is not achieved in reality. Microlenses are vital for high fill factors, and all microlenses are not equal. It also becomes more difficult to make high-quality microlenses when the pixel pitch gets very small.

Sadly, even with knowledge of pixel pitch and fill factor, we don't know how big the light-sensitive area is. We could have a sensor with a smaller light-sensitive area and better microlenses, or vice versa. That's why I would like to describe camera pixels with another measure than just "size" in the sense of area.

Image source Olympus. (Again this is CCD but the idea is the same for CMOS)

We need to look at the cross section of a pixel. Pixels can be thought of as tiny buckets which collect photons and create electrons. Again, ideally one emitted electron corresponds to each captured photon. This ability to collect photons is the best measure for the size of a pixel: it can be thought of as the depth of the photon well. More exactly, we speak about the maximum saturation capacity of a pixel. When a pixel is saturated it cannot take any more photons. The extra photons must be removed somehow; in the image above this is marked as Drain. I leave this to the engineers. For the scope of this text the important thing is to understand that with a saturated pixel the well is filled up; in the image it is pure white.

Maximum Saturation Capacity

The website Sensorgen.info lists saturation capacities for several cameras. (Hopefully they keep updating...) Their numbers are calculated from the data measured by DxOMark.com. Read more about the physics and mathematics at those two sites; I skip the details here and try to concentrate on the essential outcome. For this purpose, let's pick three cameras from Sensorgen as an example: Olympus OM-D E-M5, Nikon D4 and Nikon D800. I chose these three because the first two have the same number of pixels but different sensor sizes, and the last two have the same sensor size but different numbers of pixels.

This graph shows maximum saturation capacities (electrons) for these three cameras at various ISO settings. (ISO settings as defined by the camera makers; DxOMark's measured values are based on a different standard.) Click on the image to see it bigger.

Here we see quite dramatically the depth of the photon wells in the Nikon D4 sensor. Its pixel pitch is 7.2 microns (micrometers), while the D800 has 4.7 microns and the E-M5 3.7 microns. The other thing which is just as dramatic is the drop in saturation capacity when ISO is increased. At ISO 25600 every pixel can take just a few hundred photons before it is saturated.
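Those pitch figures can be roughly reproduced from sensor width and horizontal pixel count. The dimensions below are approximate, so the results land near, but not exactly on, the quoted numbers:

```python
def pixel_pitch_um(sensor_width_mm, pixels_wide):
    """Distance between adjacent pixel centers, in micrometers."""
    return sensor_width_mm / pixels_wide * 1000

# Approximate sensor widths and horizontal pixel counts
print(f"D4:   {pixel_pitch_um(36.0, 4928):.2f} um")
print(f"D800: {pixel_pitch_um(35.9, 7360):.2f} um")
print(f"E-M5: {pixel_pitch_um(17.3, 4608):.2f} um")
```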

Dynamic Range

Dynamic range in f-stops at various ISOs. Every f-stop change means a doubling or halving on the light intensity scale. Click on the image to see it bigger.

When looking at the end result, the photograph, we don't always see the effect of photon well depth as such, because noise may, and always will, obstruct (some of) it. Dynamic range tells about the actual ability to handle differences in luminosity from deep shadows to bright highlights. Compared to the previous graph, this graph looks partly as expected and partly surprising. At ISO 100 one might have expected the D4 to be better than the D800, but the D4 has lots of read noise compared to the very smooth D800, and this clips the D4's dynamic range. Note how the noise properties of a sensor versus its saturation capacity are calculated differently here from DxOMark's basic scores. Beyond ISO 400 the D800 starts to drop a stop per full ISO increase, while the D4 does the same only after ISO 1600, and the E-M5 after ISO 800. These could also be seen as the limits of the native sensitivities of these sensors. D4: ISO 100 - 1600; D800: ISO 100 and secondly ISO 200 - 400; E-M5: ISO 200 - 800. This also shows how different sensors are tuned for different purposes. The D800 shines at ISO 100, the D4 beyond ISO 800, and the E-M5 shows how a smaller sensor has been designed and tuned to be quite equal between ISO 200 and ISO 400, giving the same dynamic range as the D800 beyond this.

About dynamic range: it is excellent if there are more than 12 stops. For a good print you need 8 stops. At the limit 8 is enough, but then there is no headroom left for major tweaks. Slide film used to have (and still has) about 7.5 stops of DR. One should never use a higher ISO setting than needed.

Conclusion

A CMOS sensor is a programmable device. You can expect something from looking at the pixel pitch, but don't be fooled into thinking there is nothing beyond it. Careful design and programming of a sensor can give results which are right for you, if you know what you are looking for. What you need is deep but clean pixels. Next time we take another look at the cleanliness of pixels. One more thing, though...

EU!

No, I don't mean the European Union but Expose Up. I have written previously about exposing to the right, ETTR. The problem with the concept of ETTR is that it comes from the idea of the histogram. If you think about what I have written here and take a look at the saturation capacity graph, it should be quite obvious that you should aim to expose as UP as possible without saturating the sensor. That's the way to take advantage of all the depth your camera pixels have!
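A worked example of why exposing up pays off: in a linear raw file, each stop down from saturation has only half as many tonal levels as the stop above it. A sketch for a hypothetical 12-bit raw file:

```python
def levels_in_stop(bit_depth, stops_below_top):
    """Raw levels available in one stop of a linear file;
    stops_below_top = 0 is the brightest stop."""
    return 2 ** bit_depth // 2 ** (stops_below_top + 1)

for stop in range(5):
    print(f"{stop} stop(s) below saturation: {levels_in_stop(12, stop)} levels")
```

The top stop alone holds half of all 4096 levels, which is why tones placed high in the well survive later tweaking so much better.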

-p-

How Sharp is a Camera Pixel?

In my previous blog I wrote about pixels in an image file. I recommend reading it before this one; it's right below: How Big Is a Pixel?

Like I said there, the only property of a pixel in an image file is its color. Very seldom can it be said of a single pixel whether its color is right or wrong. We need a bunch of pixels to know their quality. This quality always comes from the camera. It is not possible to add more, un-captured, information to a file afterwards. You can only enhance what is already there. Losing information in post-processing, on the other hand, is quite easy. Actually you always lose information, even if you make just a small change (like an increase in brightness) to the whole image. You lose bits. Post-processing would be another story; let's now see where quality comes from for any bunch of camera pixels. Resolution, dynamic range and noise are perhaps the most common attributes of quality in camera-centered discussions. This blog looks at resolution.

Resolution

To get an idea of the things which determine resolution, I checked various camera systems. Photozone.de has reviewed lots of lenses. I chose a bunch of different sensor sizes and megapixel counts and looked for the BEST MTF50 resolution Photozone has measured for each. The lenses I was interested in were prime lenses with a standard or longer focal length for each system. These lens tests are good here because they are always relative to a certain camera body, which leads us to the pixels in the sensor.

Cameras are arranged here by increasing sensor size. While sensor size is obvious, MP is the number of megapixels, PH is the height of the sensor (Picture Height) in pixels (the shorter, landscape edge), MTF50 is the Photozone-measured resolution expressed in Line Widths per Picture Height and, finally, Lines/pixel is MTF50 divided by PH. Read about MTF50. Each MTF50 value is achieved at the best aperture for the best Photozone-tested native lens on the border of the image (not center, not corner).
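The last column is a simple ratio. For illustration, with hypothetical numbers (not taken from the Photozone table):

```python
def lines_per_pixel(mtf50_lw_ph, picture_height_px):
    """Normalize MTF50 (line widths per picture height) by pixel height."""
    return mtf50_lw_ph / picture_height_px

# Hypothetical example: a body 4000 px high measuring 3200 LW/PH
print(f"{lines_per_pixel(3200, 4000):.2f} lines/pixel")
```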


NOTE: This is not a camera comparison.

My aim was just to see which parameters are directly related to MTF50 resolution and quality per pixel. I could have chosen to mark the cameras as Brand A, B, C and so on, but I trust the reader not to draw wrong conclusions. These bodies just happen to be the ones Photozone has used for each brand or system. Remember, this would be a VERY flawed camera or lens comparison for several reasons:

- Some cameras have an AA filter, some don't. An AA filter lowers a lens's MTF values.

- They have used various RAW converters over the years. The converter affects MTF values.

- For some cameras there were several tested native lenses to choose from, and some had maybe just one. The most unfair situation is for the Sony A7R. The only tested prime so far is a wide angle, the Sony/Zeiss 35mm/2.8, which on the other hand is better than any of the tested Sony zoom lenses. The A7R is not included to show how bad Sony is, but to help show how the results depend on so many things.

- These numbers are for the best lens for each system or camera, because I was interested in what is possible. They do not show how good any system is on average.


Resolution with Increasing Megapixels

Values from the table above, arranged by increasing megapixels. Click on the image to see it bigger. The red lines, Excellent and Very Good, refer to a 50cm (20 inch) high landscape print. Even the best printers have difficulty showing differences in sharpness beyond MTF50 (LW/PH) 3000 at this or a smaller print size. For bigger prints the limit would be higher, and for smaller ones lower, of course. If you want to print big, you need the right combination of megapixels and lens quality.
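The MTF50 3000 threshold can be translated into resolution on paper: line widths per picture height divided by two gives line pairs, and dividing by print height gives line pairs per millimeter. A sketch:

```python
def print_resolution_lpmm(mtf50_lw_ph, print_height_mm):
    """Sensor-referred MTF50 (LW/PH) converted to lp/mm on a print."""
    return mtf50_lw_ph / 2 / print_height_mm

# MTF50 of 3000 LW/PH on a 50 cm (500 mm) high print
print(f"{print_resolution_lpmm(3000, 500):.0f} lp/mm")
```

That works out to 3 lp/mm on paper at this size, which is why sharpness gains beyond this point are so hard to see in the print.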

As seen, more megapixels tend to lead to more resolution, more details. Nothing new there. If lenses were equally good relative to each megapixel count, the relation would be linear for non-AA sensors. As they are not equal, lenses make a huge difference; sensor size does not. Sensor size has its effect, though: it gets harder to achieve mechanical accuracy for an interchangeable-lens camera when the sensor size is very small. As said earlier, Sony's 36MP A7R is relatively handicapped because Photozone has not tested enough of their lenses yet, so for now a wide-angle lens has to compete against the best standard or short tele lenses. It can't compete even with lots more pixels to start with, but Sony's bar will obviously rise as new lenses are tested. In absolute terms the Sony/Zeiss 35mm/2.8 breaks the line of excellency easily at its best apertures.

What I think is commonly forgotten is the importance of the lens. Megapixels get too much attention.

Resolution per Pixel with Increasing Sensor Size

Click on the image to see it bigger. This graph shows the Lines per pixel values (from the table above) against increasing sensor size.

Here we see how good the pixels are. A value of 1 would mean perfect resolution for every pixel. Reality is not perfect, but Nikon´s 85mm 1.4 G lens is nothing short of amazing. Other than that, I don´t see a bigger sensor size having any advantage in per-pixel quality. FF (24x36) DSLR is the most established system format, with more tested lenses than all the others together. Remember again: there are too many variables to draw any (other) hasty conclusions.

My take

Nothing can beat megapixels if you are after the absolute finest details. But then you must also have the best lenses, and the best lenses don´t come cheap. Adding more megapixels is easier than making better lenses and designing the best optical solutions for high density sensors. As I see it, the most common mistake people make is to forget the lens.

However I look into this matter, and I have done it many times, nothing can beat the line of Excellency as a reference for details. It is not on or off, though; the closer you get, the better. And one more thing: you must be able to achieve it yourself, time and time again, when you take a picture. It doesn´t fly from a test lab into any body and lens and attach itself to your images automatically.

-p- 

How Big Is a Pixel?

On several forums, I have seen discussions where people claim that an image from a camera with a bigger sensor always has more resolution or detail than an image from a camera with a smaller sensor, even if the two have the same number of pixels.

I have wondered where this claim comes from. If the number of pixels is the same, how can one set of pixels hold more information than the other?

RGB File

Let´s take a look at an RGB image file. In practice all image files from digital cameras are RGB files when they are in actual use. Some may be encoded differently when in storage on a memory card or computer disk, but when we see them they are usually RGB images. There are exceptions of course, like CMYK, but we can forget those here.

This is a 3000 percent enlargement of an image detail. I guess this is what is commonly meant by pixels: it is "pixelated"...


Only, these squares are actually not pixels; they are squares consisting of 30x30 identical pixels. This enlargement was made in Photoshop using Nearest Neighbour (hard edges) interpolation, which gives us this popular illusion of pixels.


What is true is that each of these squares has the same RGB value for each of its "sub-pixels", which are clones of the original pixel. This is all we know about a pixel: its color and its place in the pixel grid. There are no details inside a pixel.


This is another 3000 percent interpolation of the same image detail. This time the algorithm used was Bilinear. What it does is calculate lots of new pixels based on the original ones. There are no blocks, just smooth transitions that fill the gaps with color values interpolated between the original values. Again, there is no structure or information inside pixels, just colors and lots of invented new "information".

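The Nearest Neighbour enlargement described above can be sketched in a few lines of Python (my own minimal illustration, not what Photoshop actually runs): each pixel is simply cloned into a block.

```python
def nearest_neighbour_upscale(img, factor):
    # clone each pixel into a factor x factor block of identical pixels
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

# a hypothetical 2x2 image of RGB triplets
img = [[(255, 0, 0), (0, 0, 255)],
       [(0, 255, 0), (255, 255, 255)]]

big = nearest_neighbour_upscale(img, 30)   # now 60x60 "pixels"
# every original pixel has become a 30x30 block of clones -
# the blocky look appears, but no new information does
```

Bilinear interpolation would instead compute in-between values from neighbouring pixels; either way, nothing comes from inside a pixel.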

A pixel has no size

So, in an image file a pixel has no size. A pixel is just an item consisting of three RGB values: white is (255,255,255), black is (0,0,0) and medium gray is (127,127,127). Pure red is (255,0,0), its opposite color cyan is (0,255,255), and the reddish color in the image above is (241,156,151). This is how color is formed from light. None of these RGB values has any need for size. Any number of pixels only gets an illusion of size when the image is looked at (be it large or small), and even then the illusion can vary greatly, as the examples above show.
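As a tiny Python sketch, the pixels listed above really are nothing more than triplets of numbers plus a grid position; size appears nowhere (the two-pixel "image" here is hypothetical):

```python
# A pixel is just three numbers - nothing more.
white = (255, 255, 255)
black = (0, 0, 0)
medium_gray = (127, 127, 127)
pure_red = (255, 0, 0)
cyan = (0, 255, 255)       # the opposite of pure red
reddish = (241, 156, 151)  # the color from the image above

# A hypothetical two-pixel "image": all we store is which color
# sits at which grid position - no pixel dimensions anywhere.
image = {(0, 0): pure_red, (1, 0): reddish}
```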

Because a pixel has no size, an image from a smaller sensor is not enlarged any more than one from a bigger sensor (if they have the same number of pixels) when you look at it on screen or make a print. They have the same amount of information to start from to fill the same area. Simple as that.

With film, a smaller negative needed to be enlarged more than a bigger one (of the same film type) to make prints of the same size, simply because that is how it physically happened. Both had the same amount of information per unit of area, so the bigger negative naturally held more information, and the one that was enlarged more had less information in the final print. With film you had to use a finer-grained film (better information density) on the smaller format to overcome this problem.

A pixel has no home

Usually digital image files carry EXIF information. It tells us when the image was shot, what the camera was called, which lens was used, how many pixels it has... We can also throw this EXIF information away. What is left is basically just those RGB values. (There is also other information, but it is included only to tell software how to open and show the image on screen.) After stripping the EXIF off, the image file does not know when it was shot, or what the camera or lens was. It only knows how many pixels it has, and each pixel has its RGB value. It has totally forgotten where it came from, how big the sensor was, whether the shooter was happy or sad... This is where it all ends up.

Are pixels good or bad? 

Then, if you have two image files without EXIF information, one from a smart phone camera and the other from a DSLR, both having, say, 12 million pixels, can you tell which is which? Both shot at the same time, in very good light, with the same field of view and depth of field. Most of the time you can´t. Unless...

Unless there was a big difference in the quality of the lenses used. Then you can see which lens was softer or subpar in some other way. Also, if we shoot this comparison in less than optimal light for the smart phone, we can tell the images apart. But neither of these situations is related to the number of pixels in an image file; it is related to the quality of the information saved in camera. And in neither case do we know anything based on one pixel. One pixel is neither good nor bad. We need lots of pixels to notice a difference, quality or the lack of it. What we need is a relationship between pixels.

Two kinds of pixels

I will write more about the quality of information, which comes from neighboring pixels. Before that we must remember this: there are two very different things which are usually referred to as pixels. One is a pixel in an image file, which was discussed here, and the other is a pixel in a camera´s sensor. The latter has physical properties, like size and actually even depth.

-p- 

Live Composite - Another Hidden Gem

There seems to be a peculiar pattern emerging with Olympus.

As an example, they have the best exposure metering system in the industry, but they don´t tell photographers about it at all. It is not mentioned anywhere in Olympus manuals or advertising. They also have a very clever HDR function, which lacks just two technically minor things to be really useful for actual photography. It seems like Olympus is very good at inventing brilliant features, but then it also looks like they have no clue about how to implement them properly or how to use those features to create superb images.

Now, in the Olympus OM-D E-M10, they introduced another potentially superb feature which belongs to the same category. It is called Live Composite. It has two fatal flaws which come straight from not thinking about how photographers could use the feature. Technically there is no reason for these flaws to exist - and no reason not to correct at least one of them in a firmware update. It should be so simple and obvious, Olympus. And then again, in good Olympus tradition, Live Composite is not really mentioned in the E-M10 User´s Manual at all... Perfect! (UPDATE: This was true for the early downloadable User´s Manual, but Live Composite is now included in the User´s Manual.)

Creating movement in digital photography

Movement in pictures can be created basically in two ways: 1) one long exposure or 2) several short exposures in series. The latter method needs post-processing, of course: you stack those exposures as layers in Photoshop (or similar software) and use a suitable layer blending mode to make all layers show through. The first method is usable when it is dark enough, or (eventually) you need dark ND filters. The second method is usable in any light and can be used even without a tripod.

This image is a composite of some 30 consecutive shots. I shot those free hand (without a tripod or other support) at 1/500s, f/5.6; Olympus E-M5, continuous shooting with the Lumix 20mm f/1.7 lens. The separate pictures were stacked as layers in Photoshop and aligned with the Auto Align function. Layer opacities were set as percentiles so that every picture contributed equally to the final image. The three gulls are actually one gull flying through; I picked and masked it in from three suitable frames. Good masking technique comes in handy here, because you can select afterwards which areas or objects show movement and which stay still.
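The equal-opacity stacking described above boils down to a per-pixel average over the aligned frames. A minimal Python sketch with made-up single-channel brightness values:

```python
def average_stack(frames):
    # per-pixel mean over aligned frames; every frame contributes equally
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# one brightness channel, four pixels, three aligned frames (made-up values)
frames = [
    [100, 100, 100, 100],   # frame 1: static background
    [100, 240, 100, 100],   # frame 2: a bright gull crosses pixel 1
    [100, 100, 240, 100],   # frame 3: the gull has moved to pixel 2
]
composite = average_stack(frames)
# static pixels stay at 100; the gull leaves two faint "steps"
```

With more frames each step fades further toward the background, which is why adding shots smooths the movement.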

The difference between these methods is in the continuity of movement. The first brings total continuity, while the latter tends to show "steps", as in the image above. This steppiness can be diminished, even smoothed out, by adding more shots and using a slower shutter speed. Dark ND filters, especially the variable type, are not good if you are after the best sharpness and color rendition. And like said, long exposures have their natural limit, with the image becoming brighter all the time. This may make finding the balance between exposure and movement difficult. Consecutive shorter exposures do not have this limitation. So, there are pluses and minuses to both methods.

Live Composite

For long exposures, several Olympus bodies have a specific mode called Live Time. It is a version of Bulb mode where you set the camera to start a long exposure and, while the exposure is going on, follow how it develops on the camera monitor or on a smart phone or tablet. When the image looks ready, you stop the exposure. It is a very simple and convenient way to do long exposures. Because of the built-in smart phone connection (the camera creates a WiFi network) you can even sit inside while operating the camera.

With the E-M10, Olympus introduced a new feature which combines short exposures into a long one in-camera, without any need for post-processing. It is called Live Composite. This means you can easily choose which way of making a long exposure is best in each situation. The blending mode used for combining exposures is Lighten, so a new exposure must contain something lighter than before to make any change. If you start the exposure and there is no change in subject or light, the composite image stays the same. The same goes if the light diminishes: the composite stays as it was when the light was at its brightest.
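Per pixel, the Lighten blend simply keeps the brightest value seen in any frame, which is why a static or darkening scene leaves the composite untouched. A minimal Python sketch with made-up single-channel values:

```python
def live_composite_lighten(frames):
    # per pixel, keep the brightest value seen in any frame (Lighten blend)
    return [max(px) for px in zip(*frames)]

# one brightness channel, four pixels (made-up values)
frames = [
    [10, 10, 10, 10],    # dark base exposure
    [10, 200, 10, 10],   # a car light crosses pixel 1...
    [10, 10, 200, 10],   # ...then pixel 2; the earlier trail is kept
    [5, 5, 5, 5],        # light diminishes: composite does not change
]
composite = live_composite_lighten(frames)
```

Note how the last, darker frame contributes nothing: that is exactly the behaviour described above.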

All this means, of course, that you have to create images where adding something lighter is beneficial. Olympus brochures mention star trails. Fireworks against a darker sky is one more example, and light painting another.

Product shots with Live Composite

Below I have a video example of creating a product shot using E-M10 Live Composite, one led panel and a white reflector card. It starts with just the led panel, and I build the composite image gradually by adding lighter reflections to the subject. Live Composite is found by first going into manual exposure mode and then going past 60 seconds in exposure times. Once there, pressing the Menu button brings up a sub menu where you can choose a suitable exposure time between 1/2 s and 60 s. You can use any ISO speed (up to ISO 1600) and aperture, but naturally the combination must suit the exposure you are hoping for. After setting those and pressing the shutter button, you get into a ready state where the next press of the shutter button starts Live Composite. (The E-M10 in the video is a demo camera from Olympus Finland and its menus were set to Finnish. I only realised this when preparing this blog, and the camera had already gone back to Olympus. Sorry about that!)

 

Live Composite is saved both as a RAW file (.ORF) and a JPEG, if you set it so. Below is the final image.

This Live Composite image was converted from RAW in Lightroom 5.4 with my standard preset plus a slight darkening and adjustment in white balance.

Instead of the white reflector card I could have started from a darker starting point and added light with another led or any other light source. I could even have taken my single led panel off its stand and moved it around to light up the camera instead of using the reflector card. The possibilities are unlimited here. For product shots I recommend starting with one light and one reflector. Wild light painting fantasies are a totally different genre.

Light trails

Like Olympus suggests, shooting light trails (be they stars or fireworks or lightning or whatever) is the obvious realm for Live Composite. Here I started with a darkish base exposure, let Live Composite run and waited for some cars to come by and let their lights make trails. It doesn´t matter how long you wait, because the base exposure doesn´t change unless something gets lighter. The only limit on exposure time is the camera´s battery. Sadly Live Composite is only available in the E-M10, which doesn´t have a power grip...

Subject movement

Another obvious use for Live Composite is shooting water in motion. Here we stumble into the glaring, easy-to-correct flaw: the shortest available exposure time is 0.5 seconds. That makes shooting impossible in sunshine. In the creek images below, I had to wait for clouds to obstruct the sun or find details where trees shaded the scene, and even then I had to use aperture f/22 and ISO Low (100). With so small an aperture the E-M10 can´t produce a sharp image because of diffraction! There is no reason not to allow any shutter speed with Live Composite. With a faster shutter speed you will eventually get what you want, because there is no worry about over exposure.
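A rough back-of-the-envelope check on that diffraction claim; the numbers here are my own assumptions (green light at about 550 nm, and a roughly 17.3 mm wide Four Thirds sensor with 4608 pixels across):

```python
# Airy disk diameter vs. pixel pitch at f/22
wavelength_um = 0.55                     # green light, ~550 nm (assumption)
f_number = 22
airy_diameter_um = 2.44 * wavelength_um * f_number        # ~29.5 um

sensor_width_mm = 17.3                   # Four Thirds sensor width (assumption)
pixels_across = 4608                     # E-M10 horizontal pixel count
pixel_pitch_um = sensor_width_mm * 1000 / pixels_across   # ~3.75 um

# The diffraction blur spot spans several pixels, so fine detail is lost.
blur_in_pixels = airy_diameter_um / pixel_pitch_um        # roughly 8 pixels
```

With the blur spot covering many pixels, stopping down to f/22 throws away much of the sensor´s resolution, which is why the 0.5 s floor hurts so much in daylight.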

Light on dark and dark on light

The other flaw comes from the blending mode, Lighten. You only get what is lighter than your starting point, or than some earlier state; there will never be anything darker. Think about shooting against a medium grey wall and waiting for people to walk by. Along comes a fair-skinned person with a black shirt and white jeans. Depending on shutter speed, you get streaks of a head and white jeans; no upper torso ever passed. Or maybe the person is black, wearing a white t-shirt and black jeans. Great, your image shows a white t-shirt passing by. Think about shooting tree leaves against a slightly lighter sky: with a nice wind the end result will show no leaves, just the lighter sky...

Come on, Olympus! If you can make a Lighten mode work, you can just as well make an Averaging mode work! You have a winner here, why not make it actually usable beyond star trails... Sheesh!

Smaller hiccups

On the monitor/EVF you do not see your actual starting exposure before you press the shutter button; what you see is the image as it would look with automatic exposure. Shutter speed, aperture and exposure deviation are shown numerically. Olympus thinks you should be able to visualize how the image will look with the shown exposure deviation. Some can, of course, but this is not actually a professional camera for professional photographers. If Olympus positions this as an entry level OM-D, they should not expect users to be wizards at pre-visualizing exposures. So, that´s also stupid. There is a detour, though: you can go back to your normal manual shutter speeds and first try which combination works best.

The algorithms for blending exposures are not always what they should be; harsh transitions could be handled better. They are easily corrected in post-production, though.

Embarrassing

As you can read between the lines, and even elsewhere for that matter, I would love Live Composite with a couple of improvements. Now it is a stroke of genius, haphazardly implemented. Of course Olympus is not alone here; many other manufacturers are just as eager to add various so-so things just to make spec lists more impressive. But that is no excuse for all the frustration this latest un-productized feature will create.

-p-

Getting to grips with Olympus E-P5

When I first saw the Olympus PEN E-P5 it was love at first sight, and immediate disappointment once I had it in hand. The "grip" it has is simply awful: there is very little support, and the edge of the plastic grip is sharp against the finger tips. The E-P5 is simply no good to carry around with a wrist strap.

My almost-first comment was to ask for an accessory grip; after all, the E-P3 had nice interchangeable grips. But Olympus sees the E-P5 as being about style, and Mr. Terada from Olympus was not at all interested in destroying its design lines with such utility accessories. Also, the market segments for an E-P5 style camera and for accessory grips don´t overlap enough. One more argument against extra grips was the WiFi antenna sitting inside the E-P5´s plastic grip.

Still, I liked what the E-P5 with the VF-4 viewfinder offered and purchased one with the intention of making my own grip. It took me some time to finally do it, but I have now used my grippier E-P5 for several months.

This grip is made of Sugru, a self-setting rubber which you can shape into any form you like. It dries slowly and sticks to most surfaces you can think of. It can also be removed from hard surfaces. More at their web site.

When dried, the Sugru surface is hard and shiny, but I did some shaping afterwards with sandpaper, which led to a matte surface. You can also use sharp tools to shape dried Sugru if needed. While at it, I also flattened the sharp inner edge of the Olympus plastic.

Like said, I have used my E-P5 with this grip for several months now and there are no signs of the grip coming loose. I am not sure whether I would prefer a matte or a shiny surface, but as I got the position and shape of the grip just right the first time, I have had no need to remove it and make a new one.

At least this much Sugru over the WiFi antenna doesn´t seem to affect WiFi in any practical way. However, I have not made any comparisons with and without Sugru.

To get an idea of how much material and where I needed, I first made a (very) rough "prototype"  of Blu Tack.


The Leica T Experience

 

Leica in Finland held a meeting this week where they showed the new Leica T camera system and gave a chance to briefly try the camera. Every new Leica inevitably leads to heated arguments across photography forums, and this happened again when the Leica T was introduced a week ago.

 

Much of this fuss comes from misunderstanding what Leica is. Leica is not (just) a camera maker; it wants to be seen just as much as a maker of luxury goods. If we want to analyze a Leica camera, we must stop behaving like photographers and also think about how successful it is as a luxury good.

There are two terms in economics which I think describe at least some of Leica: Veblen effect and positionality.

Veblen good in wikipedia: ”Some types of luxury goods, such as high-end wines, jewelry, designer handbags, and luxury cars, are Veblen goods, in that decreasing their prices decreases people's preference for buying them because they are no longer perceived as exclusive or high-status products. Similarly, a price increase may increase that high status and perception of exclusivity, thereby making the good even more preferable.”

Positional good in wikipedia: ”In economics, a positional good is a product or service whose value is at least in part (if not exclusively) a function of its ranking in desirability by others, in comparison to substitutes. The extent to which a good's value depends on such a ranking is referred to as its positionality.”

Material component

Leica wants people to know that the Leica T chassis is carved out of a monolith; well, sort of: out of a solid 1.2 kg aluminum brick. I have shot in an aluminum factory several times and seen how they make aluminum profiles: a red hot aluminum bar, several meters long, is pushed with brute force through a small shaped slit (the profile cross section) in a steel plate. For some reason aluminum profile is sold everywhere without any noise being made about this quite spectacular achievement. The Leica T chassis, then, arrives at Leica after being carved out of that aluminum block by a CNC machine. It weighs 94 grams. (Hopefully the remaining 1106 grams can be re-used for another brick.) It takes 45 minutes per chassis to polish it by hand and drill all the necessary holes for screws.

Where the electronics and other body components come from, what is outsourced, and what is designed by Leica or bought as OEM components, they don´t tell. A good guess for the sensor is Sony, and looking into the EVF gives you a Sony feeling, which is different from e.g. Olympus or Fuji. Anyway, the Leica T is finished in Germany, and the finish quality says Leica.

The lenses are made in Japan; by Sony, many would guess. But again, they are made to Leica standards. Actually, many of the most coveted Leica R lenses were designed by Minolta and made at the Minolta-Leica factory in Canada. So it is not about who makes them, but about who sets the standards and controls them.

Positionality here doesn´t really come from the material component; it comes from these ideas of carving from an aluminum brick and 45 minutes of polishing by hand. It comes from design and finish. Reading camera sites shows that the Leica T seems to have everything needed to become the centerpiece of discussion. It is already loved and hated before the first camera is in the hands of the first buyer.

The Leica T body is wider and slightly taller than my Olympus PEN E-P5; actually its size is very close to a Leica M. The Leica T has a much better grip than a regular E-P5, but I have pimped mine to an even better standard. Sadly the Visoflex EVF, the only one available, was not around when this picture was taken.

Prices and value

With Leica, prices are not something that can be casually mentioned or forgotten at the end of a blog; they too are at the center of discussion whenever Leica comes up. The Leica T body costs about 1500 €. From my perspective a real camera must have a viewfinder. Leica´s EVF for the Leica T is called Visoflex and is priced at 450 €. Visoflex used to mean a reflex viewfinder for the Leica M series; here, of course, there is no mirror. No aluminum either: the Visoflex is black plastic - and the price is quite close to Sony´s EVF for the Sony RX-1.

At launch there are two dedicated lenses available: the Leica Summicron-T 23/2 Asph. and the Leica Vario-Elmar-T 18-56/3.5-5.6 Asph., priced at 1600 € and 1450 € respectively. Leica claims the Summicron-T is as good as the 50mm Apo-Summicron-M, which costs 6000 €. Ooooh, VERY high positioning... The Vario-Elmar-T has been sneered at for its lack of speed. But what if it beats most primes? Let´s see. Value is always relative to quality, usability - and buyer.

Then there is also the Leica M-Adapter-T at 300 €, which allows all Leica M lenses to be used with the Leica T. Lens information is retained through 6-bit coding. Because of the short back-focus distance there will soon be a myriad of adapters from various suppliers, enabling a myriad of lenses to be used with the Leica T.

So, what we have here is a camera which leaves the shop ready to shoot at about 3000 €, or preferably with the EVF at 3400 €. With both lenses and the EVF it is 5000 €. For an APS-C sensor camera with quite ordinary specifications. What´s the point?

The Leica T design is very form und funktion. Both dials are on the back. There is a separate video button. The shutter button has an ON/OFF/Flash-up switch around it. That´s all; everything else is set via the LCD touch screen. The EVF attaches to the hot shoe. The E-P5 looks very busy compared to the Leica T, but it is the more adjustable camera at the eye. Without the EVF, the Leica T´s icon based menu feels nicer.

Specifications and usability

Yes, judging by its specifications the Leica T is a very ordinary modern APS-C mirrorless camera. A Louis Vuitton bag is no roomier than other bags, one could argue. I don´t know the technical specifications of Jimmy Choo shoes; I have never asked, and I guess my wife has never thought about their specs at shoe stores either.

Photography discussion today is far too centered on specifications. Cameras are weighed by their lists of specifications (maybe divided by price). This is plain stupid.

For me a camera is a very utilitarian tool, weighed purely by its suitability for my aims in photography. I am not interested in specs beyond my needs. I am not interested in gaining Veblen effect or positionality with cameras. (But I do notice those in myself regarding cars or watches, which have very little utilitarian value for me.) I am not interested in the resale value of my cameras. I am only interested in the images they make possible for me.

Specifications weighed against needs lead to recognizing usability. And usability is something which can´t really be read from specifications. Much of usability cannot be listed at all, because it is a combination of many areas.

Regarding the Leica T, I like its pure, simple design and how it feels in hand. It is not too small or big, nor too light or heavy for serious use. It has two dials, as a proper camera should, but it misses a locking button to separate autofocus and exposure. There is also no in-body stabilization, and none of the present or announced future lenses are stabilized. Further, it misses a way to set exposure as precisely as with my present gear.

The Leica T has a very nice icon based menu system. You can select your own menu items and keep the rest stored out of sight. Sadly this system only works on the 3.7” touch screen, not in the EVF. This again limits the usability of the Leica T, because you need to take it off your eye to make the most common adjustments.

Of the lenses, the 23mm Summicron-T would be extremely interesting to test against Leica´s claims and the very best 35mm equivalent lenses. The Leica T has the same number of pixels as my OM-D E-M1. At base ISOs the better lens would give the sharper image; in lesser light, IS wins the game for me. Leica speaks about the Leica look in Leica T images. This is again about positionality. I want to create my own look, good or bad.

Because of its aluminum surface, the Leica T will be too cold to handle without gloves for most of the year up here in the North. An accessory leather protector or a snap-on skin is a must then.

A big bayonet you have... How about growing into a big boy some day, my dear Leica T? Suddenly the Leica M bodies seem to be showing their age... Retirement coming, maybe?

Not my cup of tea

Now, I only tried the Leica T briefly; no test images, just a first feel. For me it was a very nice camera lacking usability the way I prefer it. Many people have written that it has too few specifications for the price. Whether I agree depends purely on the lenses; my gut feeling is that I will not, on a non-personal level that is. Like said, the Leica T neither fills any of my positional needs nor raises any money-spending urges as a Veblen good, but who am I to say anything about that to anyone else...

Speaking of lenses, the Leica T will take any Leica M lens with the 6-bit coded adapter. Here a Summilux-M 35/1.4 Asph., which of course is cropped because of the APS-C sensor. I would guess the image quality leaves nothing to wish for, thanks to the non-AA sensor. About M lenses wider than 35mm I will guess nothing before seeing test results.

More here

Leica has their Leica-T site with accessory skins here. Dpreview has their first impressions and list of specs etc. here. Old Leica hand Michael Reichmann calls it the Sexiest Camera in the World here. And if you enjoy watching a German craftsman at work, you can do it here.

-p-

In Decline

CIPA (Camera & Imaging Products Association, Japan) published their latest statistics yesterday. They show how ever deeper the decline of the Japanese camera industry goes. One person asked me why I care; can´t we go on shooting as usual? Sure, our cameras keep working; neither of us is really affected.

Still, there are consequences. The first is of course inside the camera industry. A smaller, diminishing market cuts R&D. Product development slows down and the life span of cameras grows longer. We are heading back to the old days, when 10 years was not much for a camera. A smaller market is less interesting, and some companies will leave. Most Japanese camera brands are minor divisions inside bigger companies or concerns. Nikon is an exception: 98% of its profit (as of the end of 2013) comes from the camera division. The other two divisions are small, and one of them loses almost as much as the other makes. Nikon has so far been successful at cutting costs along with falling sales. Nikon is also a wealthy company after several good years; it will not be the first to fall. But some will, when the big boss says that enough is enough, that losing money in the camera business is over now.

Next in the chain is price. Camera prices will go up, because smart phones have eaten the market for cheap ones. It is better to sell less at a higher profit. This has already started, as both the average price and the positioning of new cameras climb higher.

Then there are camera magazines and camera stores. There will be fewer readers, fewer buyers. The days when every high school girl had a Canon EOS DSLR are over. There will be far fewer newcomers. Photography is on its way back to being a specialist hobby; everyone else just goes on happily snapping with their smart phones. There will also be less advertising. I would say the days of printed camera magazines will be over very soon. Specialty stores in bigger cities will live on; others will not.

The 12 year boom of photography is over! The good thing might be that from now on people discuss photographs more than cameras. Actually that will not happen; there will always be more camera enthusiasts than hobby photographers, but maybe the craziest hype will finally settle down.

-p-

This graph shows CIPA camera shipments to the whole world and to Europe for January-February, compared to the same period a year ago (red line). Total contains all digital cameras. The graph for Finland is my estimate based on discussions, and it shows shipments to dealers during the same period compared to last year.


Importing blog entries from my old site to this new site

I have imported old posts into this new Journal. While doing it, I deleted some posts from the oldest end, as I felt they had a point only at the moment of posting and had become outdated. This may disrupt the continuity, so that some posts may now seem to rely on something said earlier but now missing. Well, that's how it is now.

The other reason for deleting content was that while importing the texts was automatic, I had to import EVERY photo manually. You may call me lazy...

-p-

Market shares for system cameras in Finland?

While several sites and blogs continuously post camera market shares for various parts of the world, have you ever seen one for Finland? Of course Finland is a niche of a niche, but I live here and am interested in what goes on here.

Actually, what got this post started was a post about mirrorless camera sales being around 10% of all system camera sales in Europe and the USA at the end of 2013. Sadly, I didn't bookmark it and couldn't find it any more. I remember it being at 43rumors.com, but their search engine didn't give me back the result I was looking for... Anyway, mirrorless sales have long been at a considerably higher relative level in Finland. But how high?

Contacting People

As there are no statistics available, I had to pick up my phone and start making calls. My target group was industry insiders at companies importing cameras and at major dealerships selling them. In Finland the importers are also the ones who sell to dealers. They get their own market share figures for each month from GfK Finland, which is responsible for collecting the information and keeping the statistics. But they only get their own figures, so the actual market shares of the various brands are not directly available.

Last week was the school holiday week, which is obviously why I could not reach everyone I wanted. Some calls are still unreturned, but even so I have a clear enough picture. Finland is a small country, which keeps market shares fluctuating from month to month, and my aim was not single-percent accuracy. Also, most people are not willing to give their numbers exactly, or at all; I needed to distill some of it from plenty of general talk. But you know, when one person says this, their worst competitor says that, and a couple of others in the know say so, it all becomes pretty obvious.

The Figures

These are the Official pekkapotka.com Market Share Numbers for Finland in the category of System Cameras right now:

1. Canon 32%

1. Nikon 32%

3. Olympus 21%

4. Sony 8%

5. All the others together 7%

(Notes: 1) System camera = a camera with interchangeable lenses. 2) The percentages are not absolutely accurate; they are my best compromise that adds up to 100. 3) The figures are for the number of cameras, not value. 4) I couldn't come up with reasonable numbers for the brands combined under "the others", which is why I did not single out any brand from that group.)

It's a tie between Canon and Nikon right now. Whichever has the latest model is usually leading, and both are losing market share because of declining DSLR sales in Finland. Which leads back to where I started: the share of mirrorless is growing "fast". It is definitely over one third but not yet 40% of all system camera sales in Finland. I would say 35% is very close to the actual situation right now:

1. DSLR 65%

2. Mirrorless system cameras 35%

Actually, these two sets of numbers don't add up as nicely as I would like: if Canon and Nikon sell almost exclusively DSLRs, their combined 64% leaves only about one percent for all other DSLR brands. Either Canon and Nikon have smaller market shares than shown above, or mirrorless cameras have a bigger one.

Please Do Not

Be it this way or that, I can already hear the cries: Foul! Blasphemy! Fan boy!... ;-) But no, I am not far off.

Please don't write a comment if you have nothing to say beyond: I don't believe it! / It can't be so! / You must be wrong! As you can see, I have already written those opinions here... Other comments are welcome, of course. Like: the Finns are such techno geeks... ;-)

Please Do

If anyone has actual numbers for December and January and wants to share them to make mine more accurate, please contact me by phone or email. I would like to update the figures as needed. I will not publish anything in any way that has not been agreed on.

-p-

Choosing papers: Arches Aquarelle Rag

Besides "real" photographic images, I like to make prints with a certain graphic-arts look. For the last couple of years I have printed them mostly on Canson Infinity BFK Rives, a very high quality matte paper I have written about earlier here. With more and more images printed, I have realised that I am working with two different kinds of images: some have depth in perspective, while others actually form a surface. The subject matter of the latter is not always a surface; it is more about the way I feel the image should look, and then I tweak it accordingly.

63 1116-1775.jpg
63 0629-6033.jpg

These two images are an obvious example of the basic difference. Now, writing about printing materials is somewhat frustrating because I can't really show on the internet what the results look like on print. Anyway, the upper image is of the style I want to pop out from the paper. BFK Rives is a great material here because it lets the image breathe. There is no surface that hinders any qualities of the image from coming out or prevents your eye from going into the image. You get the colors and the perspective. (As a side note, I like to have small contradicting things in my images, as seen here in the sense of depth.) At the same time, BFK Rives has an exquisite material feel, both as a visual thing and as something you feel when you hold the print in your hands. When I started to print on Canson Infinity papers, my first choice of matte paper was their Rag Photographique because of its superb clarity and sharpness. But after a while it started to feel too clean, and I fell for the subtlety of BFK Rives.

But now, these gritty and grungy surfaces of mine have also started to feel too clean on BFK Rives. I grew to want a material with the opposite property: a sort of oneness between the material and the image. Instead of popping out, the image should cling to the material. I had seen a few images printed on Canson Infinity Arches Aquarelle Rag by various photographers and had a very good feeling about it. I tested it and found it to work just as it should.

Canson Infinity Arches Aquarelle Rag

This material has a definite surface. When you look at any image printed on it, you can't avoid noticing that surface, it is so textured. As the name implies, the material has its origin as a watercolor paper, which is also used for gouache, pen and ink, acrylics, calligraphy and so on. The original Arches Aquarelle is a traditional fine art material, just like the original BFK Rives. It is also mould-made and fulfills all modern museum and longevity standards. This version of Arches Aquarelle, sold under the Canson Infinity brand, is naturally coated for both pigment and dye printing. Don't confuse these two separate Aquarelle products. More on the technical aspects at the manufacturer's site.

Matte-CAAqua-140214002-2.jpg
Matte-CBFK-140214004-2.jpg
Matte-CRagPhoto-140214003-2.jpg
Matte-HahPrag310-140214001-2.jpg

These four lines of text are four separate images; in reality they are enlargements of a 3 cm wide area on print. From top: Canson Infinity Arches Aquarelle, BFK Rives, Rag Photographique and Hahnemühle PhotoRag. The last one is the material I used for many years before finding Canson Infinity papers. As you can see, there is no smearing or any other "watercolor" effect with Canson Infinity Arches Aquarelle Rag. With strong contrasts it is just as crisp as the other premium matte papers; only its texture shows already here.

Matte-CAAqua-140214002.jpg
Matte-CBFK-140214004.jpg
Matte-CRagPhoto-140214003.jpg
Matte-HahPrag310-140214001.jpg

The same four papers in the same order; again the crop is 3 cm wide on print. As seen, Canson Infinity Rag Photographique (third from top) is the sharpest, showing the tight, crisp grain/noise pattern of the original image file. None of this noise can be seen with the unaided eye, which is quite remarkable. Canson Infinity BFK Rives (second from top) is more subtle: sharp but not as edgy. In reality, just looking at the prints, part of the subtlety comes of course from BFK Rives having a slight texture and a less bright white. PhotoRag is a bit mushier in comparison, both here and viewed normally. When you now look at the textured Canson Infinity Arches Aquarelle on top, I hope you can see what I meant above about the material and the image becoming one.

64 0214-5055.jpg
64 0214-5056.jpg

The only difference between these two images is the angle of light. They show an 8 cm wide area of the surface of Canson Infinity Arches Aquarelle. This is why it is practically impossible to show here what an image printed on this paper actually looks and feels like; it is a multitude of impressions, depending also on the light. That's why the quality of viewing light becomes so important with textured materials. Something to remember, especially when having an exhibition... Yes, this crop is from the bottom edge of the image above. Look at that gold shining through the grit and dirt! Only on print, only on print!

-p-