7. The Concept of Frame
7.1 The borders of physical vision and photography
The first step in defining composition in photography is understanding that there is a frame, and that this frame affects the way we see a picture. This is why looking at a photograph feels different from standing within the scene.
Why is it that when you are standing on a cliff, looking at a vast horizon over the ocean and enjoying the sea breeze, it feels so pleasant, yet when you capture it in a photograph, you can no longer feel that vastness? Why is it that you can walk around the streets and see nothing interesting, yet a good photographer can take pictures of isolated items that look so attractive?
The way our eyes see and our minds relate directly to subjects, matters and emotions differs in intricate ways from, yet is also intricately correlated with, how we judge the photographic output. The science of our physical vision, optics, technicalities and creative vision all tie into the same photographic process of composition.
First of all, our natural physical vision holds no frame, not even a spherical cut-off. When we walk around, we see many things. When we look at an actual scene, for example from a cliff, we see the sky, the sea, and perhaps a few trees, birds and boats if we look around and notice their presence, and that is all. Now capture that scene in a photograph, print it out and look at the print itself. Is it identical? In a way, yes and no. You are now looking at a two dimensional image shrunk to a 4R sized print held at arm's length. Although your central vision is on the photograph, you can still see things around it, for example your hand, the table behind your hand, and a pretty girl walking by in front of you. The same applies to why watching a movie in an omnitheatre, in a cinema, and on your 62" widescreen display at home still feel worlds apart.
Our physical vision is frameless. It is an entire hemispherical field in which we pay attention to things in front of us and gradually become less aware of things as they sit closer and closer to the peripheries of our central vision. It is not an abrupt cut-off but a decrease in sensitivity at the peripheries: a bright light source or a large patch of color can still register without your settling your central vision in that direction, but you gradually become unaware of movement in that location, and you find you cannot decipher or describe the details in that vaguely perceptible region.
We pay the most attention to roughly 43 degrees of view, and remain broadly aware across more than 90 degrees, up to about 180 degrees. Try holding your hands in front of you and slowly moving them apart while keeping your eyes fixed ahead: there is a point beyond which your vision becomes less sensitive to detail and movement, but it never ends abruptly at a frame. Vision is also like a video that keeps moving around and collecting data into the brain, maintaining a sense of the scene in three dimensions.
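To make these angles concrete, here is a minimal Python sketch, assuming a simple pinhole model; the helper angle_of_view and the 600 mm arm's-length viewing distance are illustrative assumptions, not figures from the text:

```python
import math

def angle_of_view(extent_mm: float, distance_mm: float) -> float:
    """Angle (degrees) subtended by an extent seen from a given
    distance: 2 * atan(extent / (2 * distance)). For a lens, the
    'distance' is the focal length and the extent is the sensor size."""
    return math.degrees(2 * math.atan(extent_mm / (2 * distance_mm)))

# Diagonal field of view of a 50 mm lens on a full-frame 36 x 24 mm sensor.
diag = math.hypot(36, 24)                                # ~43.3 mm
print(f"50 mm lens, diagonal FOV: {angle_of_view(diag, 50):.0f} deg")  # ~47

# Width of a 4R print (152 x 102 mm) held at ~600 mm (arm's length):
# a thin slice of the ~180 deg hemispherical field we see naturally.
print(f"4R print at arm's length: {angle_of_view(152, 600):.0f} deg")  # ~14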
In photography, we transform sceneries into images within a frame of distinct edges, reproduced as viewable images on screen or in print, and in whatever reasonable way we look at them, they fall within our central vision.
7.2 The eventual photographic output is two dimensional within a square or rectangular frame
Whatever the camera captures within the frame is what our eyes will eventually see on screen or in print. Unlike physical vision of the actual scene, there is no segregation as strong as that between our central and peripheral vision. The eye still picks up cues on details, subjects, exposure, colors or depth of field, so that vision pays more attention to some parts than others, but usually no part is ignored totally the way the blind spots of our actual vision are.
Photography is different: there is a framing in which we capture what the camera sees at a given focal length, and we fit the components we want into the picture as a composition. This framing is two dimensional, the height and the width of the picture, and the image is captured onto the two dimensional sensor over a certain time frame.
What we see is flat, but because we can correlate recognisable subjects with their existence in real life, we can tell whether something is near or far, and how near or far, simply from an item's size, shading and the way its shadows fall. That creates the sense of depth and completes the spatial relationships within a frame.
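As a rough illustration of how size alone encodes distance on the flat frame, here is a small Python sketch of pinhole projection; the helper projected_size_mm, the 50 mm lens and the subject distances are illustrative assumptions:

```python
def projected_size_mm(real_size_mm: float, distance_mm: float,
                      focal_length_mm: float) -> float:
    """Pinhole projection: an object's image on the sensor scales
    with focal_length / distance, so equal-sized objects at
    different distances render at visibly different sizes."""
    return real_size_mm * focal_length_mm / distance_mm

# Two people of the same height (1.7 m) through a 50 mm lens:
for d_m in (5, 20):
    h = projected_size_mm(1700, d_m * 1000, 50)
    print(f"at {d_m:>2} m -> {h:.1f} mm tall on the sensor")
# at 5 m -> 17.0 mm, at 20 m -> 4.2 mm: the size ratio alone tells
# the viewer which figure stands nearer.
```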
And because there is a frame, we can decide how much to include and exclude within it.
7.3 The image circle is larger than and contains the rectangular or square frame
The image circle (covering power) refers to the circle of light that the lens can physically project onto the focal plane, which determines the physical size of the frame that is functionally allowable.
Rephrased from photonotes.org: lenses admit light through a hole that is circular, not rectangular or square. The rectangular or square framing comes from capturing that light on a rectangular or square sensor. The sensor's diagonal must be at most the diameter of the image circle, no bigger; otherwise parts of the rectangular sensor are not illuminated by the image circle and capture no image, leaving four dark, curved corners. This can arise, and even be accepted as a feature, with extremely wide angle fisheye lenses, or with a physically compatible lens whose image circle is smaller than the diagonal of the sensor.
Also, because there may be noticeable light falloff towards the edge of the image circle, where optical aberrations are more likely, the frame is cropped to a smaller, desirable area that avoids the falloff, making it smaller still than the image circle. This smaller area keeps image quality acceptable within what is known as the usable image circle, or the circle of good definition, as ascertained by the manufacturer. The threshold the manufacturer sets, weighed against how good the glass is and how well it is assembled, determines whether the circle of good definition is maintained across all lighting situations and all apertures; a poorer lens without a suitably conservative threshold will fail in poor light where a lens with a larger circle would not.
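The rule above, sensor diagonal at most the image circle diameter, can be written as a one-line check. A minimal Python sketch follows; the helper covers and the quoted circle diameters are assumptions for illustration, and real figures vary by lens line:

```python
import math

def covers(sensor_w_mm: float, sensor_h_mm: float,
           image_circle_mm: float) -> bool:
    """A lens fully illuminates a sensor only if the sensor's
    diagonal fits within the image circle's diameter."""
    return math.hypot(sensor_w_mm, sensor_h_mm) <= image_circle_mm

# Full-frame is 36 x 24 mm (~43.3 mm diagonal); APS-C about 23.6 x 15.6 mm.
print(covers(36, 24, 44))        # True: a ~44 mm full-frame image circle
print(covers(36, 24, 29))        # False: an APS-C-sized circle clips corners
print(covers(23.6, 15.6, 29))    # True: the same circle covers APS-C
```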
This relationship between the image circle diameter and the sensor's diagonal length thus becomes one of the important compatibility issues between body and lens, alongside physical compatibility and electronic compatibility.
7.4 Viewfinder coverage
We now know that our eyes' span of vision (let alone our span of concentration) is wider than what the lens's image circle captures, and that the image circle of the lens is wider than the diagonal of the rectangular sensor. The next stage in the pathway is actually a split at the mirror: the same optical axis is divided into two paths, one going to the sensor for capture, the other going up into the prism and viewfinder for us to preview and focus before the mirror flips up.
A variety of issues are covered by the different types of viewfinders; those will be discussed further under DSLR body parts and under focusing. Here, we will just talk about framing and coverage.
Normally, what we see in the optical viewfinder of a DSLR is smaller than the eventual image output, because it is heavy, bulky and expensive to produce a pentaprism large enough to show 100% of the image area at a relatively satisfactory magnification. Most entry to mid range DSLRs thus offer 85-97% coverage, while higher end models come close to 100%. Some models, such as certain Olympus DSLRs, offer more than 100% coverage, meaning the scene just outside the captured frame is also visible, somewhat like a rangefinder, but the frame in the optical viewfinder is then smaller overall, with a smaller magnification factor.
The slight difference matters: unexpected elements or vignetting may appear in that 3-10% of the frame that you never saw in the viewfinder, which makes it a composition concern for some people, assuming no cropping or cloning is to be done in post-processing.
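To see how much of the final image that shortfall can hide, here is a minimal Python sketch; it assumes coverage is quoted per linear axis (width and height), and the helper unseen_area_fraction is an illustrative name:

```python
def unseen_area_fraction(linear_coverage: float) -> float:
    """If coverage is quoted per linear axis, the previewed area is
    coverage**2, so the fraction of the captured image that was
    never previewed is 1 - coverage**2."""
    return 1 - linear_coverage ** 2

for c in (0.85, 0.95, 1.00):
    print(f"{c:.0%} linear coverage -> "
          f"{unseen_area_fraction(c):.0%} of the image unpreviewed")
# roughly 28%, 10% and 0%: even a small linear shortfall hides a
# noticeable strip around all four edges of the final frame.
```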