SkierEvans wrote:Yes but that was not the questioned posed. It was how to get an interlaced file from a progressive file. How that interlaced file is dealt with in transmission or viewed on a progressive display is a totally different topic.
i don't think these papers are so much transmission- or rendition-focused. their main point, which you can learn from them, concerns the requirements of downsampling tasks, and that progressive-to-interlaced conversion should be understood as one special case of this category.
the suggested filtering has to be done before downsampling, i.e. as one of the early steps of this conversion. it can't be postponed to a later stage of the processing pipeline or left to the capabilities on the recipient's side, because after the downsampling the necessary information isn't available anymore -- just nasty artifacts, if it wasn't done very well.
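to make the order of operations concrete, here is a minimal numpy sketch of the idea: vertically low-pass the full progressive frame *first*, then split it into fields. the [0.25, 0.5, 0.25] kernel is just an illustrative choice of mine, not anything prescribed by those papers -- real converters use longer, carefully designed filters.

```python
import numpy as np

def progressive_to_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """split a progressive frame (h, w) into top/bottom fields,
    applying a vertical low-pass first to suppress interline twitter.

    note: the filter runs on the full-resolution frame, before any
    lines are discarded -- afterwards the information is gone.
    """
    # clamp-pad one line top and bottom so the filter is defined at the edges
    padded = np.pad(frame.astype(np.float64), ((1, 1), (0, 0)), mode="edge")
    # simple 3-tap vertical low-pass: 0.25 / 0.5 / 0.25
    filtered = 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
    top_field = filtered[0::2]      # even source lines
    bottom_field = filtered[1::2]   # odd source lines
    return top_field, bottom_field

# worst case for twitter: single-pixel-high alternating black/white lines
frame = np.tile(np.array([[0.0], [255.0]]), (4, 8))   # shape (8, 8)
top, bottom = progressive_to_fields(frame)
```

with the filter in place, the one-line-high detail that would flicker at field rate is softened into both fields instead of landing entirely in one of them; skip the filter and each field sees either all-black or all-white lines.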
it's a challenge that has to be handled properly during the progressive-to-interlaced conversion by your application or within your ENG camera [which nowadays most likely does very similar image refinement based on a progressive sensor readout], nothing else.
and i wouldn't overestimate the deinterlacing capabilities and quality of most consumer equipment. in this respect i share andrew kolakowski's point of view and can't work up any enthusiasm for interlaced video at all. legacy broadcast traditions often look like a pure anachronism and a horrible waste of image quality.