Must say I am very impressed with the S5. Fuji must be doing something right with their unique "honeycomb" pattern CCD. For those interested in exactly how the Fuji image quality is different, and what the CCD is doing, I have come across an interesting explanation by a poster on dpreview that I would like to share here. (An official explanation from Fuji would help, but I have not been able to find one.) To me, at least, it helps explain that the 12mp S5 output resolution is an interpolated one, and therefore will not equal, say, the D2x's 12mp of actual (hardware) resolution. But from what I can see so far, Fuji engineers are very, very good at squeezing all the info out of the sensor.
--------------
"RAW files are always 6mp, but those 6mp are Bayer pattern and (for lack of a better analogy) rotated 45 degrees. The only way to display a 45-degree rotated matrix on a non-rotated device (your display, e.g., or a jpeg or tiff or any other bitmap file) without losing information is to pad it exactly sqrt(2) times in each direction (vertical + horizontal), such that the total image is twice the size of the original rotated image.
As you rotate, though you do not lose any information in the vertical and horizontal dimensions, you have also created some interpolated information in the diagonal dimensions, and so at a per-pixel level your image may appear softer.
When demosaicing, the raw conversion software lets you choose: either do just the rotation without losing the information on the vertical and horizontal axes, which increases the size of the resulting bitmap to 12mp, or then interpolate the image down to a size you desire, such as 6mp. The image will lose some vertical and horizontal information, but will appear sharper at a per-pixel level.
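To make the sqrt(2) padding arithmetic concrete, here is a small Python sketch. This is my own illustration, not anything from Fuji, and the 3024x2016 photosite dimensions are assumed purely for the sake of the example:

```python
import math

# Hypothetical ~6 MP Super CCD: samples sit on a 45-degree rotated
# (diagonal) lattice. Displaying them on an upright grid without
# discarding vertical/horizontal information requires padding the
# grid by sqrt(2) in each direction, doubling the total pixel count
# (half real samples, half interpolated).

diag_w, diag_h = 3024, 2016            # assumed photosite counts, ~6.1 MP
native = diag_w * diag_h

upright_w = round(diag_w * math.sqrt(2))
upright_h = round(diag_h * math.sqrt(2))
padded = upright_w * upright_h

print(f"native samples : {native / 1e6:.1f} MP")   # ~6.1 MP
print(f"padded output  : {padded / 1e6:.1f} MP")   # ~12.2 MP, i.e. 2x
```

This is why the 12mp output contains only 6mp of measured data: the factor of two comes from the geometry of the rotation, not from extra photosites.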
With the S3 and S5, if you shoot with DR more than 100%, you also get a second set of 6mp 45-degree rotated samples, exposed 2 stops less than the first set. The same demosaicing and rotation operations are performed on it, and the results are blended into the first set.
This generally does not increase resolution, but rather adds to the dynamic range of the image. It is as if you had shot several times on a tripod with different exposures, but without the spatial movement that might occur had you actually shot 2 raw images one after another, since they are acquired at the same time.
Whether the DR pixels are or are not used for resolution is an interesting question. I haven't played with S3 and S5 raw files yet (I looked a little bit at the content of an S3 file, but not enough to arrive at any conclusions).
The real answer depends on 2 things, and I don't have solid facts on those:
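As a rough Python sketch of the blending idea described above (my own illustration, not Fuji's actual algorithm; the 4x scale factor simply reflects the assumed 2-stop difference, and the clip threshold is an assumption):

```python
# Blend one "S-pixel" sample with its co-sited "R-pixel" sample.
# Values are linear sensor data normalized to [0, 1]; the R set is
# assumed 2 stops (4x) less sensitive than the S set.

def blend_dr(s_value, r_value, clip=0.98):
    r_scaled = r_value * 4.0     # bring the -2 stop exposure onto the S scale
    if s_value < clip:
        return s_value           # S pixel not clipped: trust it directly
    return r_scaled              # S pixel clipped: recover highlights from R

# Midtone: the S pixel is valid and used as-is
print(blend_dr(0.50, 0.125))     # -> 0.5
# Highlight: the S pixel is clipped, but the R pixel still has detail
print(blend_dr(1.00, 0.60))      # -> 2.4, above the S clipping point
```

A real converter would use a smooth crossover rather than a hard threshold, but the principle is the same: the darker set contributes only where the brighter set has run out of range, which is why it extends dynamic range rather than resolution.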
1. Whether the R and S pixels share the same microlens - if they do, there is almost no way one can extract extra detail from that, at least not real detail.
2. If they do not share the same microlens, there is a chance to extract some extra resolution, but this will only be achievable in the midtones - where the information from both sensors is not clipped and can provide useful input.
3. If 2 is correct, it will take quite a bit of work from raw converter software makers (quite a few issues to deal with: noise, false detail, etc.), and I am not sure if anyone actually does it."
Posted by Moshe Vainer (at dpreview.com)
-----------------------------