ok, I know what you mean.
If you want to compare the continuity of gradation levels between film and digital head-to-head, it is hardly possible.
The closest we can get is to sample the same-size area from both media.
For example:
The Canon 1Ds Mark II's sensor, which is the same size as 35mm film (36 x 24 mm), has 4992 x 3328 pixels (monochromatic). That means each pixel is a 0.0072 mm square (7.2 micron).
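To spell out that arithmetic, here's a quick Python sketch (the 36 x 24 mm frame size is the standard full-frame figure):

```python
# Back-of-envelope pixel pitch: sensor dimension / pixel count.
sensor_w_mm, sensor_h_mm = 36.0, 24.0   # full-frame, same as 35mm film
px_w, px_h = 4992, 3328

pitch_w_um = sensor_w_mm / px_w * 1000  # mm -> micron
pitch_h_um = sensor_h_mm / px_h * 1000

print(f"pixel pitch: {pitch_w_um:.2f} x {pitch_h_um:.2f} micron")
# -> pixel pitch: 7.21 x 7.21 micron (square pixels)
```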
Then we cut a patch of B&W film at 7.2 x 7.2 micron. That patch has a volume of 7.2 x 7.2 micron x the emulsion thickness.
The size of each silver particle, according to Kodak, is between 0.2 and 2 micron. But I don't have information on how many particles fit inside that volume.
Digital with a 12-bit ADC will have a maximum of 4096 levels, with a minimum step of 1 level. Film will have a minimum step of 1 particle, and as many levels as there are particles in that volume (assuming we ignore the overlapping of particles).
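For what it's worth, here's a back-of-envelope way to estimate that particle count. The emulsion thickness (~10 micron) and the 50% packing fraction are my own guesses, not Kodak figures, so treat the result as an order-of-magnitude sketch only:

```python
import math

# Rough grain count in one "pixel" of film.
# Assumptions (NOT Kodak data): emulsion ~10 micron thick,
# spherical grains filling ~50% of the volume.
patch_w = patch_h = 7.2          # micron, one digital pixel pitch
emulsion_t = 10.0                # micron, assumed
packing = 0.5                    # volume fraction, assumed

volume = patch_w * patch_h * emulsion_t   # cubic micron

for grain_d in (2.0, 0.2):       # Kodak's quoted size range
    grain_v = math.pi / 6 * grain_d ** 3  # sphere volume
    n = packing * volume / grain_v
    print(f"grain diameter {grain_d} micron -> ~{n:,.0f} grains")
```

With these made-up numbers the count swings from roughly 60 grains (coarse 2 micron grain) to roughly 60,000 (fine 0.2 micron grain) per patch, so whether film actually beats the 4096 digital levels depends entirely on the grain size and the real emulsion thickness.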
The difference between those two media is actually more pronounced in the continuity of image shape. Let's see the picture below.
Suppose we cut out a 4 x 3 pixel patch and strike it with light shaped as shown in the first picture.
The film (second picture) will render the image shape more faithfully than digital, while the digital sensor (third picture) will produce some incorrect halftone values due to the averaging of light intensity over each pixel's area. This issue is amplified further in color because of the Bayer mask.
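To make the averaging effect concrete, here's a small simulation: a slanted edge of light falls on a 4 x 3 pixel patch, and each pixel integrates (averages) whatever light lands inside it. The edge geometry here is invented purely for illustration:

```python
import numpy as np

# 4 x 3 sensor patch hit by a slanted bright/dark edge.
PX_W, PX_H = 4, 3        # pixels
SS = 100                 # subsamples per pixel edge (fine, "film-like" grid)

# Fine grid: light(x, y) = 1 on one side of a slanted edge, 0 on the other.
y, x = np.mgrid[0:PX_H * SS, 0:PX_W * SS]
light = (y < 0.6 * x + 40).astype(float)   # arbitrary slanted edge

# Sensor output: average the fine grid inside each pixel's area.
sensor = light.reshape(PX_H, SS, PX_W, SS).mean(axis=(1, 3))

print(np.round(sensor, 2))
# Pixels the edge crosses land at fractional in-between values --
# the "incorrect halftone" produced by the averaging.
```

The same block-averaging idea extends to the Bayer case, except each pixel also sees only one of the three color channels, so demosaicing interpolation error stacks on top of the averaging error.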
Which one is better? Well, that's up to the photographer; each has its own merits.