Will you buy a Panasonic AC120 without an EVF?


Funny that in the past, when we relied on the EVF, I couldn't bring myself to frame with the LCD. But now that I am so conditioned to the LCD, I might not miss the EVF at all. That's another 'ouch' for me ... so many choices to make. Another headache for Ryan too.

By the way, can you explain the importance of bit rate? What's the diff between shooting in 24 Mbps, 50 Mbps, etc?
 

The numbers are not so much a factor as how they are achieved. 24 Mbps is certainly very efficient and is the bitrate of 4:2:0 AVCHD. 50 Mbps is often achieved with MPEG-2 at 4:2:2 color.
To the naked eye, it is virtually impossible to tell what was shot at 24 Mbps and what was shot at 50 Mbps. The difference may be slightly visible with very fast-moving subjects, i.e. footage where almost every pixel changes from frame to frame (think: a close-up of a choppy sea surface). You might see some pixelation at 24 Mbps, but I seriously doubt it.
But the difference manifests itself in other ways. Because 24 Mbps AVCHD is more compressed than 50 Mbps MPEG-2, the GPU/CPU has to work harder to decode the predicted frames in AVCHD than in MPEG-2. So the difference in ease of editing and transcoding time is apparent.
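The practical side of that bitrate gap is easy to put numbers on. A rough sketch in Python (assuming the stated bitrate covers the whole stream, audio and overhead ignored, and decimal gigabytes as card makers quote them):

```python
# Back-of-envelope: how much footage fits on a 32 GB card at each bitrate.
# Assumes the stated bitrate is the whole stream (audio/overhead ignored)
# and decimal gigabytes, as card makers quote them.

def minutes_per_card(card_gb, bitrate_mbps):
    card_bits = card_gb * 1000**3 * 8              # GB -> bits
    seconds = card_bits / (bitrate_mbps * 1_000_000)
    return seconds / 60

for mbps in (24, 50):
    print(f"{mbps} Mbps: ~{minutes_per_card(32, mbps):.0f} min on a 32 GB card")
# 24 Mbps -> ~178 min; 50 Mbps -> ~85 min
```

So the 50 Mbps format roughly halves your record time per card, which is part of why broadcasters accept the trade.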

There is also the difference between 4:2:2 and 4:2:0 chroma subsampling in the Y'CbCr color space. 4:2:0 compression throws away most of the chroma (Cb and Cr) samples and interpolates the missing ones. That missing information becomes apparent when you try to color grade the footage: the colors flatten out fairly unsubtly and pick up a stronger tinge as you scroll the color wheel.

In some situations, 4:2:2 will produce noticeably better chromakey than 4:2:0.
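To see how much chroma each scheme actually keeps, here is a small sketch using the standard J:a:b notation (a simplification: it only counts stored samples per frame, and both Cb and Cr are subsampled equally):

```python
# Chroma samples kept per 1080p frame under each subsampling scheme.
# J:a:b notation: a = chroma samples in the first row of a Jx2 pixel
# block, b = chroma samples in the second row (J = 4 luma samples).

def chroma_fraction(a, b):
    return (a + b) / 8.0        # fraction of full-resolution chroma kept

luma = 1920 * 1080              # luma samples per 1080p frame
for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}.items():
    chroma = int(2 * luma * chroma_fraction(a, b))   # Cb plane + Cr plane
    print(f"{name}: {chroma:>9,} chroma samples per frame")
# 4:4:4 -> 4,147,200; 4:2:2 -> 2,073,600; 4:2:0 -> 1,036,800
```

4:2:0 keeps only a quarter of the chroma of 4:4:4, which is why grading and keying headroom suffers.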
 

Funny that in the past, when we relied on the EVF, I couldn't bring myself to frame with the LCD. But now that I am so conditioned to the LCD, I might not miss the EVF at all. That's another 'ouch' for me ... so many choices to make. Another headache for Ryan too.

I don't really need a viewfinder because the LCD is enough for me. Under sunlight, we can buy a hood to cover the LCD. So, yes, getting rid of the viewfinder is not an issue for me ...
 

But the difference manifests itself in other ways. Because 24 Mbps AVCHD is more compressed than 50 Mbps MPEG-2, the GPU/CPU has to work harder to decode the predicted frames in AVCHD than in MPEG-2. So the difference in ease of editing and transcoding time is apparent.
My transcoding time in Final Cut Pro X is quite long. Hmmm ... maybe a video camera that records in 50 Mbps MPEG-2 instead of AVCHD would decrease the transcoding time significantly (assuming the same processor and all else being equal)?

Thanks
 

MPEG-2 & AVCHD are different types of compression methods ... increasing the bitrate of a codec will not necessarily reduce transcoding time.

In TV & time-critical production workflows, we always try to use codecs natively supported by the editing systems to minimise render or transcoding times. The more highly compressed your footage is, the more CPU-intensive the rendering will be. The less compressed your footage is (even uncompressed), the less intensive it is for the CPU, but the more demanding it is on the storage space & transfer rate of your hard drive. That's why planning your 'lens to post' workflow is very important, depending on your type of work requirements.
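The storage end of that tradeoff is worth quantifying. A minimal sketch of the raw pixel arithmetic for uncompressed 1080p (illustrative only; real uncompressed workflows vary):

```python
# Rough data rate of uncompressed 8-bit 4:2:2 1080p25, to show the storage
# end of the CPU-vs-storage tradeoff. Pure pixel arithmetic, no container
# or audio overhead.

def data_rate_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1_000_000

uncompressed = data_rate_mbps(1920, 1080, 25, 16)   # 8-bit 4:2:2 = 16 bpp
print(f"Uncompressed: ~{uncompressed:.0f} Mbps")    # ~829 Mbps
print(f"That is ~{uncompressed / 24:.0f}x the 24 Mbps of AVCHD")
```

Nearly a gigabit per second of sustained throughput is why uncompressed work pushes you towards fast RAIDs, while AVCHD happily lives on SD cards but makes the CPU pay for it.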

Looking back at the acquisition end, our post workflow is also defined by the types of cameras/recording systems available in the market today. Ranking from the most heavily compressed to the least compressed, we roughly have the following systems:
1) H.264 / AVCHD (MPEG4)
2) Sony XDCAM EX (MPEG2)
3) Canon XF (MPEG2)
4) Sony XDCAM 422HD (MPEG2)
5) P2 (DVCProHD/AVCIntra)
6) Apple ProRES
7) Uncompressed / RAW

At this point in time, the most commonly supported, tested & proven-stable workflow for native realtime post is compressed footage from XDCAM, P2 & ProRes. (XDCAM is long-GOP based & P2/ProRes are intra based. Comparing the two, intra-based footage is less intensive for the CPU than GOP-based.) However, this is set to change as the industry & technology progress.
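The GOP-vs-intra difference can be sketched with a toy model (simplified: it ignores B-frame reordering and open GOPs, and the GOP size and frame numbers are made up for illustration):

```python
# Toy model of why long-GOP footage is heavier to scrub and edit than
# intra-only footage: jumping to an arbitrary frame in a GOP means decoding
# every frame since the last I-frame first.

def frames_to_decode(frame_index, gop_size, intra_only=False):
    if intra_only:
        return 1                              # every frame stands alone
    last_i = (frame_index // gop_size) * gop_size
    return frame_index - last_i + 1           # I-frame plus the P/B chain

print(frames_to_decode(37, 15))                    # long-GOP jump -> 8 frames
print(frames_to_decode(37, 15, intra_only=True))   # intra jump    -> 1 frame
```

Multiply that by every scrub, trim, and render pass and the CPU cost of long-GOP editing adds up quickly.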

Another consideration to think about is the compression generation loss due to rendering & transcoding. (i.e. if you take a piece of footage and transcode it 3-5 times down the road, the pixellation & noise artifacts simply start to show up.)
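Generation loss is easy to demonstrate in miniature. The sketch below is not a real codec; each "encode" pass just discards a little high-frequency detail, the way a lossy encoder's quantiser does, and the error against the original grows with every generation:

```python
# Toy illustration of generation loss: each lossy pass low-passes the
# signal slightly (a stand-in for a codec discarding detail), and the
# error versus the original accumulates across generations.
import math

def lossy_pass(samples):
    # discard a little high-frequency detail, as each lossy encode does
    return [samples[0]] + [
        (samples[i - 1] + 2 * samples[i] + samples[i + 1]) / 4
        for i in range(1, len(samples) - 1)
    ] + [samples[-1]]

def max_error_after(generations):
    original = [math.sin(i / 5) for i in range(200)]
    signal = list(original)
    for _ in range(generations):
        signal = lossy_pass(signal)
    return max(abs(a - b) for a, b in zip(original, signal))

for gen in (1, 3, 5):
    print(f"after {gen} generation(s): max error = {max_error_after(gen):.3f}")
```

With a heavily compressed start point like AVCHD, you begin the chain with less detail to spare, so the artifacts surface sooner.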

Based on my tests, AVCHD / H.264 footage can't maintain picture integrity in the multi-generation encoding situations that we normally encounter in broadcast applications. Our typical workflows involve editing, grading, then recompressing for the playout servers ... then the playout servers may need to recompress for a satellite feed, which may in turn end up in someone's post facility for re-editing. So if your original footage can't hold up the quality, the picture you get at the far end of the chain is just a blob of pixels. :)


Just a little more info for people to see the bigger picture in our industry... :)
 

Thanks DXNMedia for your detailed explanation ... really appreciate your hard work in explaining it.

Ok, for me, I do a lot of editing using the latest version of Final Cut Pro X. But I always find that the transcoding time in FCPX is not satisfactory. For the avoidance of doubt, when I say I am not satisfied with the transcoding time, I am referring to the time FCPX takes to "transcode" all the media (video, photos, sound, etc.) in the timeline into a single movie file.

Question 1:
Correct me if I am wrong, but my understanding is that whenever we import from an AVCHD video camera, FCPX will automatically convert the footage to ProRes for ease of editing. Since you listed ProRes at number 6, it seems that FCPX can edit this easily, as it is only lightly compressed. Thus, the transcoding time should be fast. So, does this mean that is the fastest FCPX can go (ignoring number 7, as RAW would eat up too much space)?

Question 2:
Since FCPX can convert from AVCHD to ProRes, can you explain a bit more about the difference between AVCHD and AVCHD 2.0? Does AVCHD 2.0 mean 4:2:2?

QUESTION 3:
Does a video camera that records in 4:2:2 show better video quality than one that records in 4:2:0?

Thanks & regards


Sorry to be very frank, but some of your questions don't quite make sense, and I don't know how to answer them in a short explanation.
But no worries, most people who attend the course I conduct face the same problems ... this usually happens when people try to jump into the technicalities without knowing the basics.

Try reading up more from these links and I'm sure you'll get the idea.

Data compression - Wikipedia, the free encyclopedia
Chroma subsampling - Wikipedia, the free encyclopedia
Transcoding - Wikipedia, the free encyclopedia
AVCHD - Wikipedia, the free encyclopedia

Once you get the idea, try answering your Q1, Q2 & Q3 yourself and I'll see if you've gotten the idea. :)
 


Alamak, this is getting too "cheem" (deep) for me already ...

As FCPX easily edits ProRes, which is not so heavily compressed (number 6 on your list), the transcoding time to produce a video file from all the media in the timeline should be fast. So my bottleneck is my computer's processor?

I just want to know if there is a way to improve the transcoding time without changing parts of my iMac.

Thanks
 

Change your camera, don't shoot AVCHD and you'll be fine ... :P
 

It will be hard for many people in this forum not to shoot AVCHD. I think Ryan wants to know the drawbacks of shooting AVCHD and what he may gain from shooting other FCP-supported formats like DVCProHD/AVC-Intra or XF/XDCAM.

Q1: Personally, the biggest advantage of shooting in formats supported natively in FCP is the joy of NOT having to transcode. This is great for ENG and long event recordings. But that advantage is largely negated once you start adding filters and layers of graphics over the native footage. In fact, it could even be more frustrating and unstable than working with ProRes footage converted from AVCHD!

Q2: AVCHD 2.0 is still 4:2:0, but AVCHD 2.0 adds support for 1080/60p and 3D.

Q3: Does 4:2:2 look better than 4:2:0? Not necessarily, owing to the limitations of most non-broadcast-standard monitors. But your options at the color grading phase are severely limited with 4:2:0.
Color sampling is explained here
Chroma subsampling - Wikipedia, the free encyclopedia
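On the chromakey point from earlier in the thread, here is a toy one-row example of why subsampled chroma hurts keying at edges (made-up Cb values, not real footage):

```python
# Why 4:2:0 hurts keying at edges: chroma subsampling averages neighbouring
# pixels, so a hard colour boundary picks up in-between chroma values that
# are neither "screen" nor "subject".

def subsample_2x(cb):
    # keep one chroma sample per pixel pair, as 4:2:x subsampling does
    return [(cb[i] + cb[i + 1]) / 2 for i in range(0, len(cb), 2)]

def upsample_2x(cb):
    # naive nearest-neighbour reconstruction at decode time
    return [v for v in cb for _ in range(2)]

edge = [30, 30, 30, 200, 200, 200, 200, 200]   # sharp screen/subject edge
print(upsample_2x(subsample_2x(edge)))
# -> [30.0, 30.0, 115.0, 115.0, 200.0, 200.0, 200.0, 200.0]
# the 115s sit between the two chroma values, so the keyer can't decide
```

With 4:2:2 the same averaging still happens, but only horizontally, leaving vertical edges twice the chroma resolution to work with.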
 
