You can expand the blurred image components into a series, with the coefficients serving as objective parameters that specify the bokeh as precisely as you like.
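As a rough sketch of what "coefficients that specify the blur" could mean, here is one hypothetical choice of expansion: summarising a point-spread function (the out-of-focus disc) by its first few radial moments. The basis, the Gaussian test kernel, and the function name are all my own illustration, not anything from a lens maker's data sheet.

```python
import numpy as np

def radial_moments(psf, n_moments=4):
    """Summarise a point-spread function (blur kernel) by its first
    few radial moments -- one hypothetical way to turn bokeh into numbers."""
    h, w = psf.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx)          # distance from kernel centre
    total = psf.sum()
    return [float((psf * r**k).sum() / total) for k in range(n_moments)]

# A Gaussian kernel as a stand-in for a lens's out-of-focus disc.
yy, xx = np.mgrid[0:21, 0:21]
g = np.exp(-((yy - 10)**2 + (xx - 10)**2) / 18.0)
coeffs = radial_moments(g)
print(coeffs)  # coeffs[0] is 1.0 by normalisation; the rest describe the spread
```

Two kernels with the same moments would render out-of-focus highlights similarly, which is the sense in which the coefficients are "objective" -- whether anyone *likes* the result is a separate question, as the reply below points out.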
Really? That's like saying music can be digitised, which is entirely true (CDs and MP3s are both digital) -- but how does that help one compare Beethoven's Fifth Symphony with Christina Aguilera and arrive at a determination of which is better?
You need to understand that quantifying something does not make it an objective attribute that lets people agree on whether X is better than Y.
No, that's by no means clear. If A resolves 100 lpm in the centre but only 80 in the corner, whereas B resolves 90 lpm throughout the image circle, which is better depends on the application. A lens with 100 lpm that distorts like a fisheye is really bad at repro photography, compared to one with only 80 lpm but negligible distortion. The MTF also depends on the distance the lens is focused at. Things are not that easy ...
On the contrary, it's very clear. Your example doesn't change the point. The point is that there are numbers we can compare. If I produced a perfect lens which beats the competition on every criterion, no one could say that my lens is not the best.
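The "beats the competition on every criterion" claim is just Pareto dominance, and it can be checked mechanically. A minimal sketch, with entirely made-up scores (higher = better on each axis):

```python
def dominates(a, b):
    """True if lens `a` scores at least as well as `b` on every
    criterion and strictly better on at least one (higher = better)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# Hypothetical scores: (centre lpm, corner lpm, 100 - distortion %)
perfect = (100, 100, 100)
lens_a  = (100, 80, 95)   # sharp centre, weak corners
lens_b  = (90, 90, 99)    # even across the frame, low distortion

print(dominates(perfect, lens_a))  # True: better or equal everywhere
print(dominates(lens_a, lens_b))   # False
print(dominates(lens_b, lens_a))   # False: neither dominates the other
```

This also shows where both sides of the argument have a point: when one lens dominates, the numbers settle it; when neither does (as with `lens_a` and `lens_b`), the numbers only tell you what the trade-off is, and the application decides.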
The fact that no perfect lens exists in the real world doesn't change the fact that numbers can be used to determine who is better at what, and to determine it in a way that is not based on someone's opinion, preferences or adjectives.
The fact that you have to live with compromises doesn't change the fact that the numbers given help you determine who is better at what, so you can decide what you want to compromise for a given application.
Of course there is. If you compare e.g. "CZ lenses" to "Japanese lenses", you're comparing entire populations of lenses. Unless there are certain features that correlate with "CZ" or "Japanese" to a statistically significant extent, you cannot make blanket statements about certain brands.
Actually, this would be necessary, since consistency/quality control is also a performance metric when making blanket statements about lenses of certain origin.
I haven't made any blanket statements. I'm pointing out to you that most lens reviewers, even the guys who run optical bench tests, do not do so like statisticians. Even manufacturers do not run tests on hundreds of samples before they publish their data. As I said, some don't even run any tests, they just pull the figures from their optical design programs. Hence I repeat what I said:
There is nothing statistically significant in the comparisons anyone has made-- not here, not in Popular Photo, not in Sean Reid reviews, and not even in the data from Zeiss, Leica, Nikon, etc.
I know you're not happy with that, you want it to be statistically significant. But if you want things to be statistically significant -- you'll grow old waiting. I haven't seen anyone publish such data in any industry. I wouldn't sit around waiting for means and variances, not to mention confidence intervals, etc. Frankly, it's already a big deal if anyone provides "average" or "typical" data. Even car safety regulators -- when they award the safety ratings to the cars under test -- do you think they crash 30 or 50 or 100 of each car? Do you know how much that would cost in time and money, not to mention crash test dummies? With the number of new models released each year, do you think anyone can afford the time to do it in a statistically significant manner?
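To put a number on why nobody does this: the standard normal-approximation formula for sample size shows how many copies a reviewer would have to test just to pin down the *mean* of one figure. The sigma and margin below are hypothetical, purely to illustrate the scale of the problem.

```python
import math

def samples_needed(sigma, margin, z=1.96):
    """Copies required so the ~95% confidence interval on the mean
    is within +/- `margin`, given copy-to-copy spread `sigma`
    (standard normal-approximation formula: n >= (z*sigma/margin)^2)."""
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical: resolution varies copy-to-copy with sigma = 5 lpm,
# and we want the mean pinned down to +/- 2 lpm.
print(samples_needed(5, 2))  # -> 25 copies, per model, per test condition
```

Twenty-five copies per model, per focal length, per aperture, per field position -- multiplied across every new model each year -- is exactly the cost argument being made above.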
Which makes these numbers meaningless, since we photograph with real lenses, including their manufacturing tolerances, not with computer simulations.
If you trust the manufacturer that they publish typical data (i.e. one based on statistical analysis of the influence of the manufacturing tolerances), not the "best case" data. Ideally, they should also publish the "worst case" data. You find this information in the data sheets of components that cost only pennies, but I have yet to see it for photographic lenses that cost thousands of dollars.
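What "typical data based on statistical analysis of manufacturing tolerances" would look like in practice is usually a Monte Carlo tolerance study. A minimal sketch, with invented numbers (nominal performance 100, tolerance-driven degradation with a spread of 5); the percentile used as a practical "worst case" is my own choice, not an industry standard:

```python
import random

def tolerance_study(nominal, tol, n=10_000, seed=42):
    """Monte Carlo sketch of 'typical' vs near-worst-case performance:
    each simulated copy loses a random amount of performance drawn from
    the tolerance spread (all figures hypothetical)."""
    rng = random.Random(seed)
    copies = sorted(nominal - abs(rng.gauss(0, tol)) for _ in range(n))
    typical = copies[n // 2]        # the median copy
    worst_1pct = copies[n // 100]   # 1st percentile as a practical worst case
    return typical, worst_1pct

typ, worst = tolerance_study(nominal=100.0, tol=5.0)
print(round(typ, 1), round(worst, 1))  # typical copy vs a bad-sample copy
```

Component data sheets publish exactly this kind of typical/min spread; the complaint above is that photographic lens makers generally do not.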
I disagree. I understand statistics, but I also understand real-world limitations (time, money, etc., as stated above) and real-world manufacturing. Unless you have reason to distrust the manufacturing process (e.g. with FSU lenses), tolerances at world-class manufacturing companies are generally quite good.
People are not stupid. They know that any performance figures published by manufacturers are usually best-case figures, and may never be achieved in real life (handheld with camera shake, less than ideal contrast, etc).
Just because the numbers were not generated by statisticians does not make the numbers meaningless. The figures still provide a useful guide. A car that the manufacturer says can go 0-100 in 5 s must be faster than one that the manufacturer says does it in 10 s. Maybe you can't do it in 5 s, maybe only Michael Schumacher can attain 5 s, but you should still be able to beat the other car. The same goes for lenses.
If you understand modern optical design, you'll know why computed MTF graphs are good enough for comparison. Frankly, most optical departments today use more or less the same software, and once you enter a design into the software, it can tell you all the key attributes. In other words, the design determines the performance, and any design is a set of compromises -- size, cost, weight, distortion, manufacturing complexity, etc.
I don't see how anyone can publish "worst case" data. The worst case is obviously unbounded. A car can take an hour to reach 100 if you set your mind to it -- even ten hours if you want. Similarly, a lens can resolve nothing if there is zero contrast. How does one decide how bad a scenario must be before it qualifies as the "worst" case?