Good news for photographers who yearn for film-like images


dslang said:
Digital isn't the future... digital is pretty much now ;)

Apology accepted. Yes, I sounded like one.

I speak of shooting film in a more conceptual sort of way, rather than a practical one. Just as you mentioned, CDs have taken over from records, just as online MP3s will probably take over from CDs. But to truly know sound and the recording of it, a good sound technician has probably, at some point, however briefly, learnt or tried the older systems. To know your future is to know your past, no?

What I'm more interested in now is to see what will become of the accepted standards of post-editing in Photoshop. It used to be whatever you could do in a darkroom and nothing else, but one glance at today's crop of magazines proves otherwise.
To add to your point, yes. In fact, in any field it is best to know the history: how things were done in the past, how they evolved, and where the technology stands now. Only then can we appreciate how far we have come, understand the advantages and disadvantages, and see what we have retained and what we have left behind. Only then can we truly appreciate what we are doing now.

Like the title of this thread says, yes, there were people trying to make CDs sound more analog by introducing various curves and imperfections during recording. But how would using this sensor give more film-like qualities?

All I can say is that I know what I am looking for, but I don't know about the rest. Can you say what it is about film that you like that digital cannot give? I don't think anyone can give a confident answer. The grains? The tonal range? The contrast? Digital can definitely be tweaked to achieve these, with curves, introducing noise, etc. Those who are familiar with audio will find it very similar: is it the noise? The clicks and pops? The equalization? The dynamic range? The frequency response?

From a technical point of view, colour film is inferior to digital in some aspects and superior in others. Tones in film are created by dithering: grain clusters with different sensitivities, with in-between tones represented by the average density of the clusters. In digital, a pixel can represent every level the ADC can resolve, subject to whatever compression is used and the inherent ability of the file format to represent those levels.
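
To make the digital side concrete, here is a rough Python sketch (purely illustrative, not any particular camera's pipeline) of how the ADC bit depth fixes the number of representable tonal levels:

```python
# Rough sketch: how ADC bit depth limits the number of tonal levels in digital.
# The bit depths below are just examples, not any particular camera's spec.

def quantize(signal: float, bits: int) -> int:
    """Map a normalised light intensity (0.0 to 1.0) to the nearest ADC code."""
    levels = 2 ** bits                      # e.g. a 12-bit ADC gives 4096 codes
    return round(signal * (levels - 1))

for bits in (8, 12, 14):
    print(f"{bits}-bit ADC: {2 ** bits} levels, "
          f"mid-grey 0.5 -> code {quantize(0.5, bits)}")
```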

There have also been discussions on star-trail photography, where it is generally accepted that film is still the better medium because the shutter can be left open for much longer, even hours, whereas with digital, noise would already have crept in. Reciprocity failure also makes film less sensitive during long exposures, so the exposure can run even longer... an imperfection of film turned into an advantage!

Therefore, before we start to comment that one medium is better than another, let's ask ourselves: do we really understand how these media work, and can they give us what we want to achieve? It is really up to us.
 

lsisaxon said:
Tones in film are created by dithering: grain clusters with different sensitivities, with in-between tones represented by the average density of the clusters. In digital, a pixel can represent every level the ADC can resolve, subject to whatever compression is used and the inherent ability of the file format to represent those levels.
Where did you get that understanding from?
Tones in film are not created by dithering of grains, but by varying the thickness of the emulsion layer. The picture below illustrates a cross-section of a developed film:
[attached image: cross-section of a developed film]

Point A is the "shoulder", the area exposed to the most light, while point C is the "toe", where no light struck.
The thickness of the emulsion follows a characteristic known as the H & D curve (Hurter & Driffield), which also determines the optical density.
http://photography.about.com/library/glossary/bldef_characteristic.htm
Therefore, the "dithering" in film is not the same as the dithering of dots that inkjet printers do.
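
For anyone who wants a feel for the shape of a characteristic curve, here is a very rough Python sketch of an S-shaped density-versus-log-exposure relationship with a toe, a roughly straight middle and a shoulder. The constants are invented for illustration and do not describe any real film:

```python
import math

# Toy model of an H & D (characteristic) curve: density as an S-shaped
# function of log exposure, with a toe, a roughly straight middle section
# and a shoulder. All constants are invented, not from any real emulsion.
D_MIN, D_MAX, SLOPE, LOG_E_MID = 0.1, 3.0, 1.2, 0.0

def density(log_exposure: float) -> float:
    x = SLOPE * (log_exposure - LOG_E_MID)
    return D_MIN + (D_MAX - D_MIN) / (1.0 + math.exp(-x))  # logistic S-shape

for log_e in (-4, -2, 0, 2, 4):
    print(f"log E = {log_e:+d} -> density {density(log_e):.2f}")
```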
 

lsisaxon said:
Can you say what it is about film that you like that digital cannot give? I don't think anyone can give a confident answer. The grains? The tonal range? The contrast? Digital can definitely be tweaked to achieve these, with curves, introducing noise, etc.

I can give you a very confident answer why, in my preferred B&W imagery, I prefer "Film", or more accurately, the final product of such an endeavour - THE PRINT. Not the LCD images, which are really inferior "Bas***ds" of the real stuff.

It is not the tonal range.
It is not the grains.
It is not the contrast.

And yes, I agree with you that digital can definitely achieve these. Even I, with my paltry knowledge, can "create" such an effect without too much problem. But then, I do have a little more experience in B&W imagery than the average CS member.

It is not all those things you mentioned.

It is the nature of the PRINT. On silver halide paper.

When you look at a B&W inkjet print, you are seeing light reflected from the pigments.

When you look at a B&W silver halide print, you are seeing light reflected from silver halide particles.

And I just prefer that look. In fact, for certain images, I actually prefer another more archaic process. Platinum prints. And even occasionally Bromoil prints.

You might prefer the inkjet look, and that is fine. I have several good friends who only do B&W imagery in inkjet. They are happy with that look. But I am not. I just have different aesthetics.

BTW, just for interest, when was the last time you really saw a good silver halide print? Or a platinum print?

It took me a fair amount of time to understand why inkjet prints do not excite me.

Now I know. With confidence.
 

student said:
When you look at a B&W silver halide print, you are seeing light reflected from silver halide particles.

And I just prefer that look. In fact, for certain images, I actually prefer another more archaic process. Platinum prints. And even occasionally Bromoil prints.

"Silver halide" prints are free of halides after development and fixing. What you see is metallic silver, not halides (which are mostly colourless in the first place, apart from a slight yellowish tinge). The light reflecting properties of the silver pigment are used in ambrotypes; paper prints employ the light scattering/absorbtion by the pigments.

Silver halide prints predate platinum prints. The standard print medium since the days of Fox Talbot was "salt paper", which is essentially a silver chloride medium.
 

tsdh said:
Where did you get those understanding from?
Tones in film are not created by dithering of grains, but by varying the thickness of the emulsion layer. The picture below illustrates a cross section of a developed film:
IMG_0983.JPG

I don't know where you dug out this one, but it is wrong. Maybe you're confused by the term "optical density"? It has nothing to do with the thickness of the emulsion. Optical density is defined as OD=-log10(T), where T is the transmissivity of the medium.
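
To make that definition concrete, a minimal Python check (illustrative values only):

```python
import math

# Optical density as defined above: OD = -log10(T),
# where T is the fraction of light the medium transmits.
def optical_density(transmissivity: float) -> float:
    return -math.log10(transmissivity)

# A patch passing 10% of the light has density 1.0, 1% gives 2.0, and so on.
for t in (1.0, 0.1, 0.01, 0.001):
    print(f"T = {t:<5} -> OD = {optical_density(t):.1f}")
```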
 

tsdh said:
Where did you get that understanding from?
Tones in film are not created by dithering of grains, but by varying the thickness of the emulsion layer. The picture below illustrates a cross-section of a developed film:
[attached image: cross-section of a developed film]

Point A is the "shoulder", the area exposed to the most light, while point C is the "toe", where no light struck.
The thickness of the emulsion follows a characteristic known as the H & D curve (Hurter & Driffield), which also determines the optical density.
http://photography.about.com/library/glossary/bldef_characteristic.htm
Therefore, the "dithering" in film is not the same as the dithering of dots that inkjet printers do.
What you have shown may in a sense be true for B&W. In colour, you cannot have such a build-up, otherwise the other colours will not be transmitted. I have never compared digital imaging to B&W film photography.

Grain in colour film is present because of its subtractive colour nature. It is not like a TV monitor, where primary-coloured light adds up to give the colour you see. In colour transparencies, negatives and prints, the colours you see are the result of subtracting from the light, whether incident or reflected. Therefore, short of dithering, there is no other way to let the light penetrate to another layer. This doesn't apply to B&W film because only one layer is involved.
 

student said:
I can give you a very confident answer why, in my preferred B&W imagery, I prefer "Film", or more accurately, the final product of such an endeavour - THE PRINT. Not the LCD images, which are really inferior "Bas***ds" of the real stuff.

It is not the tonal range.
It is not the grains.
It is not the contrast.

And yes, I agree with you that digital can definitely achieve these. Even I, with my paltry knowledge, can "create" such an effect without too much problem. But then, I do have a little more experience in B&W imagery than the average CS member.

It is not all those things you mentioned.

It is the nature of the PRINT. On silver halide paper.

When you look at a B&W inkjet print, you are seeing light reflected from the pigments.

When you look at a B&W silver halide print, you are seeing light reflected from silver halide particles.

And I just prefer that look. In fact, for certain images, I actually prefer another more archaic process. Platinum prints. And even occasionally Bromoil prints.

You might prefer the inkjet look, and that is fine. I have several good friends who only do B&W imagery in inkjet. They are happy with that look. But I am not. I just have different aesthetics.

BTW, just for interest, when was the last time you really saw a good silver halide print? Or a platinum print?

It took me a fair amount of time to understand why inkjet prints do not excite me.

Now I know. With confidence.
I agree with you on that. Even for colour digital photography, I will still go for a Frontier print.

Printing on an inkjet, no matter how much they boast of the increased gamut and the life span of the modern Chroma inks etc., is still not so much my cup of tea. They may improve some more, but I have yet to purchase these newer inkjets, so I am still sceptical. I still prefer a chemical colour print. The pigment just looks different.

But my point is, for people who shoot colour and send it to a lab to print, why go through a film medium, send it to the lab for processing, and in the end still have the negative scanned at 6 MP before it goes to a laser(?) writer that writes the image at 300 dpi onto the colour paper, and then claim that the result is still superior to a DSLR image at 10 MP? How many of the people making that remark are doing just that? One more thing: the laser writer writes inverted RGB, so it doesn't go through the CMY stage that transparencies and negatives go through. So it would be about as accurate as what the sensor receives. Here is some information: http://www.photosig.com/articles/342/article

Let's see... the last time I did B&W was, I think, at least 15 years ago. Silver halide, but I have never seen a platinum print yet. :)
 

Ooooo, now this IS getting interesting.

Interesting to read the different technicalities.

I am NOT being sarcastic...quite the opposite.

I am reading this with interest as I have not delved so deeply into this.

I admire your depth of dedication in understanding your craft :)
 

LittleWolf said:
I don't know where you dug out this one, but it is wrong. Maybe you're confused by the term "optical density"? It has nothing to do with the thickness of the emulsion. Optical density is defined as OD=-log10(T), where T is the transmissivity of the medium.
The build-up of silver particles causes different thicknesses in the emulsion (this is after the film has been developed). The different thickness alters the transmissivity of the film (and therefore the optical density changes too).
Since you said it was wrong, could you please explain what is right, so that I can learn from you?
By the way, I dug the description out of a textbook written by the New York Institute of Photography.
 

tsdh said:
The build-up of silver particles causes different thicknesses in the emulsion (this is after the film has been developed). The different thickness alters the transmissivity of the film (and therefore the optical density changes too).
Since you said it was wrong, could you please explain what is right,

Film emulsions contain silver halide crystals embedded in a matrix (usually gelatin, which is permeable for water). During development, the Ag(I) in these grains gets reduced to metallic silver (developers are generally mild reducing agents); since neither silver halides nor metallic silver are soluble in water to any significant extent, the silver is deposited in situ at the halide grains - the final silver image therefore reflects the grain structure of the original halide crystals in the emulsion. Undeveloped Ag(I) is subsequently made water soluble by forming complexes with the fixing agent (e.g. thiosulphate) and washed out.

If the silver was deposited as a layer on the film base, it would give rise to mirror-like reflections when looking through the smooth film base. If you look at a silver film negative, you'll find that the silver image scatters the light, indicating its colloidal nature. Or just look at an enlargement from a silver negative and try to explain the observed grains ...

You frequently find emulsion thickness variations with colour slide film (which uses dye images). I'm not 100% sure about the reason, but I suspect the concentration of dyes in the gelatin affects the degree of swelling in ambient humidity.

I dug the description out of a textbook written by the New York Institute of Photography.

The conclusion is that the textbook is wrong.
 

lsisaxon said:
What you have shown may in a sense be true for B&W. In colour, you cannot have such a build-up, otherwise the other colours will not be transmitted. I have never compared digital imaging to B&W film photography.

Grain in colour film is present because of its subtractive colour nature. It is not like a TV monitor, where primary-coloured light adds up to give the colour you see. In colour transparencies, negatives and prints, the colours you see are the result of subtracting from the light, whether incident or reflected. Therefore, short of dithering, there is no other way to let the light penetrate to another layer. This doesn't apply to B&W film because only one layer is involved.
Yes, I'm talking about B&W film to simplify.
But actually, color film works on almost the same principle. There is no dithering like an inkjet printer's, but there is "dithering" in the sense of dye particle formation.
A color film generally has three light-sensitive layers: blue-sensitive, green-sensitive, and red-sensitive. The blue-sensitive layer has silver halide coupled with yellow dye, magenta dye is coupled with the green-sensitive silver halide, and cyan dye with the red-sensitive layer.
Say a magenta light hits the film. Magenta is a mix of red and blue light, so the red-sensitive and blue-sensitive layers will be activated.
During development, unlike B&W where the silver particles are fixed on the film, color negative film discards all of the remaining silver, leaving only dye built up in proportion to the activated silver halide.
Thus the magenta light leaves the film with a build-up of yellow and cyan dye, one above the other. If we look at the film, it will show green (the complementary color of magenta) due to the mix of yellow and cyan.
There are no silver particles in a fully developed color film, only dyes. The density of the dyes determines the tonality.
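
If it helps, here is a toy Python sketch of that subtractive idea, treating each dye as an ideal block filter that removes exactly one primary (real dyes are nowhere near this clean, so take it as illustration only):

```python
# Toy model of subtractive colour in a developed negative, following the
# magenta-light example above. Each dye is treated as an ideal block filter
# that removes one primary completely.

DYE_ABSORBS = {"yellow": "B", "magenta": "G", "cyan": "R"}

def filter_light(light, dyes):
    """Pass light (dict of R/G/B intensities) through a stack of ideal dyes."""
    out = dict(light)
    for dye in dyes:
        out[DYE_ABSORBS[dye]] = 0.0      # an ideal dye blocks its complement
    return out

white = {"R": 1.0, "G": 1.0, "B": 1.0}
# Magenta exposure activates the red- and blue-sensitive layers, leaving
# cyan and yellow dye in the negative; white light through that stack
# comes out green, the complement of magenta.
print(filter_light(white, ["cyan", "yellow"]))   # {'R': 0.0, 'G': 1.0, 'B': 0.0}
```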
 

LittleWolf said:
Film emulsions contain silver halide crystals embedded in a matrix (usually gelatin, which is permeable for water). During development, the Ag(I) in these grains gets reduced to metallic silver (developers are generally mild reducing agents); since neither silver halides nor metallic silver are soluble in water to any significant extent, the silver is deposited in situ at the halide grains - the final silver image therefore reflects the grain structure of the original halide crystals in the emulsion. Undeveloped Ag(I) is subsequently made water soluble by forming complexes with the fixing agent (e.g. thiosulphate) and washed out.

If the silver was deposited as a layer on the film base, it would give rise to mirror-like reflections when looking through the smooth film base. If you look at a silver film negative, you'll find that the silver image scatters the light, indicating its colloidal nature. Or just look at an enlargement from a silver negative and try to explain the observed grains ...

You frequently find emulsion thickness variations with colour slide film (which uses dye images). I'm not 100% sure about the reason, but I suspect the concentration of dyes in the gelatin affects the degree of swelling in ambient humidity.

The conclusion is that the textbook is wrong.
Care to share a supporting article that describes all of this?
(I expect a legitimate scientific document good enough to formally counter the textbook, and one that can, of course, be submitted to the textbook's writer for review.)
 

Wa, this "Good news for photographers..." thread is so technical, with so much chemistry jargon.

Don't know if it's good news or bad news, but everything seems new to me.
Do you really need to know so much to do photography? Scary!

Good night. I'm going to look for simpler good news in another section.

Cheers
 

Are we talking about chemistry and physics, or are we talking about the art of creating an image???

To our beloved fellow technogeeks: please continue the arguments, discussions and innovations. We need you to come up with better photo papers, chemicals, enlargers, machines, gadgets etc. so that we photographers can concentrate on creating beautiful images!
 

tsdh said:
Yes, I'm talking about B&W film to simplify.
But actually, color film works on almost the same principle. There is no dithering like an inkjet printer's, but there is "dithering" in the sense of dye particle formation.
A color film generally has three light-sensitive layers: blue-sensitive, green-sensitive, and red-sensitive. The blue-sensitive layer has silver halide coupled with yellow dye, magenta dye is coupled with the green-sensitive silver halide, and cyan dye with the red-sensitive layer.
Say a magenta light hits the film. Magenta is a mix of red and blue light, so the red-sensitive and blue-sensitive layers will be activated.
During development, unlike B&W where the silver particles are fixed on the film, color negative film discards all of the remaining silver, leaving only dye built up in proportion to the activated silver halide.
Thus the magenta light leaves the film with a build-up of yellow and cyan dye, one above the other. If we look at the film, it will show green (the complementary color of magenta) due to the mix of yellow and cyan.
There are no silver particles in a fully developed color film, only dyes. The density of the dyes determines the tonality.
I think you are mistaken. What the textbook has given you is a very simplified explanation. Yes, you need to change the density of each layer, but the darkening of the dyes is not continuous; the whole concoction for each layer is a mixture of high-speed and low-speed chemicals, and they are activated at different times during the exposure.

It's not really about how deep the light penetrates; it's about how many of each type of light-sensitive grain the light has activated. During the exposure, the light does not cause an immediate chemical change that reveals the other layers for the light to penetrate further. Instead, it is a latent photochemical effect: a photon releases an electron, which changes the chemical bonds. The part about the silver being bleached away is correct. However, because of the varying sizes of the silver halide particles for different densities, the dyes form "clouds" which appear as grain, and there are different-sized grains for different tones. In a sense the image is made up of these dithered dye clouds.

I can't find any articles yet, but I am sure the film manufacturers can tell you otherwise. If it were that simple, high-speed film would not need to be so grainy; everything could be continuous.
 

Here is some information, though it is from Wikipedia, which may be questionable.
http://en.wikipedia.org/wiki/Film_speed


Here is what one of the paragraphs says:


Speed is roughly related to granularity, the size of the grains of silver halide in the emulsion. Fine-grain stock, such as portrait film or those used for the intermediate stages of copying original camera negatives, is "slow", meaning that the amount of light used to expose it must be high or the shutter must be open longer. Fast films, used for shooting in poor light or for shooting fast motion, produce a grainy image. The image actually consists of a mosaic of developed and undeveloped areas of the emulsion, and each grain of silver halide develops in an all-or-nothing way. Thus, film is a threshold detector rather than a linear detector. If the subject has an edge between light and darkness and that edge falls on a grain, the result will be an area that is all light or all shadow. An accumulation of such areas breaks up the visible contours of the object, the effect known as graininess (or grain). Fast films are also relatively contrasty, for the same reason. That is, an area of the image will consist of bright areas and dark ones with few transitional areas of midtones.
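
Here is a tiny Monte Carlo sketch in Python of that threshold-detector idea; the grain counts and sensitivities are made up purely to illustrate the all-or-nothing behaviour:

```python
import random

# Toy Monte Carlo of film as a "threshold detector": each grain either
# develops fully or not at all, so a mid-grey is rendered as a mixture of
# fully dark and fully clear grains rather than a true intermediate value.
# Grain counts and thresholds are invented for illustration.

random.seed(0)

def developed_fraction(exposure: float, n_grains: int = 10_000) -> float:
    """Fraction of grains that cross their (randomly scattered) threshold."""
    hits = 0
    for _ in range(n_grains):
        threshold = random.random()        # each grain has its own sensitivity
        if exposure >= threshold:
            hits += 1                       # all-or-nothing development
    return hits / n_grains

for e in (0.1, 0.5, 0.9):
    print(f"exposure {e:.1f} -> {developed_fraction(e):.2f} of grains developed")
```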
 

photoboy said:
Wa, this ..Good news for photographers.... so technical and so many chemistry jargons.

Don't know good news or bad news. But everthing seems new to me.
Really need to know so much to do photography? Scary!

Good night. Me going to look for simpler good news in other section.

Cheers
Only for people who want to argue whether film or digital is better... Otherwise, shoot first... talk later... ;)
 

lsisaxon said:
Here is some information, though it is from Wikipedia, which may be questionable.
http://en.wikipedia.org/wiki/Film_speed


Here is what one of the paragraphs says:


Speed is roughly related to granularity, the size of the grains of silver halide in the emulsion. Fine-grain stock, such as portrait film or those used for the intermediate stages of copying original camera negatives, is "slow", meaning that the amount of light used to expose it must be high or the shutter must be open longer. Fast films, used for shooting in poor light or for shooting fast motion, produce a grainy image. The image actually consists of a mosaic of developed and undeveloped areas of the emulsion, and each grain of silver halide develops in an all-or-nothing way. Thus, film is a threshold detector rather than a linear detector. If the subject has an edge between light and darkness and that edge falls on a grain, the result will be an area that is all light or all shadow. An accumulation of such areas breaks up the visible contours of the object, the effect known as graininess (or grain). Fast films are also relatively contrasty, for the same reason. That is, an area of the image will consist of bright areas and dark ones with few transitional areas of midtones.
Here's another article, this one from Kodak. If you look at the microscopic images, you will see that the image is indeed dithered. Who is more qualified to tell us the facts than the manufacturer themselves?
http://www.kodak.com/US/en/motion/students/handbook/sensitometric6.jhtml

Here are a few lines which explain why colour negatives appear less grainy if they are overexposed.

"Because many color films are made with emulsion layers of varying graininess levels, increasing the exposure (up to a point) places more of the density in the finer grained layers, which actually reduces the overall graininess of the observed images."

These are all a good read: http://www.kodak.com/US/en/motion/students/handbook/sensitometric.jhtml
 

lsisaxon said:
Here's another article, this one from Kodak. If you look at the microscopic images, you will see that the image is indeed dithered. Who is more qualified to tell us the facts than the manufacturer themselves?
http://www.kodak.com/US/en/motion/students/handbook/sensitometric6.jhtml
You're not totally wrong; it's just a misperception.
That article from Kodak is talking about graininess; it does not mention the varying thickness of the emulsion after development. But it does indirectly indicate that the grains are not all located in the same plane, but randomly within the thickness of the emulsion.
Here are the fine-print words quoted from that article:
"Note that surface grains are in focus while grains deeper in the emulsion are out of focus. The apparent "clumping" of silver grains is actually caused by overlap of grains at different depths when viewed in two-dimensional projection"

I drew some diagrams below (simplified, B&W film, just to illustrate the logic):

The unexposed film will have all its silver halide intact in the emulsion layer.
[attached diagram: unexposed film, silver halide intact in the emulsion]


If the film is exposed to light of varying brightness, some of its silver halide will be activated.
[attached diagram: exposed film, some silver halide activated]


After development, the non-activated silver halide is dissolved away. The remaining silver particles form "clumps" in proportion to the brightness of the light.
[attached diagram: developed film, clumps of silver particles]


Those clumps of particles create different thicknesses in the remaining emulsion.
If you view the film from the top (two-dimensionally), you'll see the grains forming the image; this is what you described as "dithering", and it is what that Kodak article describes.
Film manufacturers usually use the word "density" instead of "dithering".
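
For anyone curious, here is a small Python sketch of the projection effect from that Kodak quote, with invented numbers: grains scattered at random depths overlap when projected onto one viewing plane and read as larger clumps.

```python
import random

# Sketch of "clumping by projection": grains sit at different depths in the
# emulsion, but we view their two-dimensional projection, so grains at
# different depths can overlap and look like a single larger clump.
# All numbers are invented purely to illustrate the idea.

random.seed(1)
WIDTH, DEPTH_LAYERS, N_GRAINS = 20, 5, 60

# Place grains at random (x, depth) positions in a 1-D strip of emulsion.
grains = [(random.randrange(WIDTH), random.randrange(DEPTH_LAYERS))
          for _ in range(N_GRAINS)]

# Project onto the viewing axis: count grains stacked above each x position.
projection = [0] * WIDTH
for x, _depth in grains:
    projection[x] += 1

# Positions with several overlapping grains read as a single darker "clump".
print(projection)
print("apparent clumps (>=3 overlapping grains):",
      [x for x, n in enumerate(projection) if n >= 3])
```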

For those wondering whether this kind of knowledge is necessary for a photographer, I can say: no. You don't need this knowledge to create a good photograph.
 
