
Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 12:38 am
by jdelisle
I'm working with a variety of crappy sources: VHS captures and MiniDV. The video is AVI dvvideo, SD NTSC, interlaced, 29.97 fps; see way down for the full MediaInfo details.


I want to make this old video look as good as I can and produce an updated "master" to use for color grading and to keep as a pre-grading archive. Since the video will be made accessible via Youtube or local Plex (PC/mobile media center), I want to deinterlace and possibly scale it to a 1080 height, assuming I'm correct in thinking scaling can be better done through processing than when the video plays and is scaled in real-time.

At first, I thought I'd deinterlace natively in Resolve, but in my tests, VapourSynth with QTGMC deinterlacing looks so much better that I feel it's worth my time to deinterlace outside Resolve. I'm "catching" VapourSynth's output with ffmpeg, and I have a powerful CPU and an Nvidia GPU at my disposal. Storage is not a problem.

I'm looking for feedback on workflow and formats/containers.

1. Deinterlacing with VapourSynth QTGMC results in 59.94 fps output. Is that a problem?

2. Should I scale to a 1080-height frame? I suspect Resolve or VapourSynth can scale better than my computer/phone/whatever can in real time. Which scales better? Recommendations?

3. For input to Resolve, what container format and codec would you recommend? Ideally, I'd like ffmpeg to produce a losslessly compressed output which I'll use as the input to Resolve. Resolve is rather limited in this regard... I can't seem to find a supported lossless compressed input container/codec format. I've got TONS of disk space, compute, and GPU at my disposal. I've tried both H.264 and HEVC lossless compressed video in both MOV and MP4 containers, and Resolve HATES them... one looks like one of those old scrambled pay-per-view channels from the 80s/90s, another simply has no video, another just won't import, etc.

4. Resolve seems limited in setting pixel aspect ratio on custom dimension video. I cannot, for example, tell Resolve that a clip uses a SD 4:3 pixel AR on a 1440x1080 video. Do I need to address this before importing to Resolve? How can I solve this?


Code: Select all
General
Complete name                            : /mnt/HomeVideo/MiniDV.Tapes/MiniDV-47/MiniDV-47.avi
Format                                   : AVI
Format/Info                              : Audio Video Interleave
Commercial name                          : DVCPRO
Format profile                           : OpenDML
File size                                : 2.52 GiB
Duration                                 : 12 min 4 s
Overall bit rate mode                    : Constant
Overall bit rate                         : 29.8 Mb/s

Video
ID                                       : 0
Format                                   : DV
Commercial name                          : DVCPRO
Codec ID                                 : dvsd
Codec ID/Hint                            : Sony
Duration                                 : 12 min 4 s
Bit rate mode                            : Constant
Bit rate                                 : 24.4 Mb/s
Encoded bit rate                         : 28.8 Mb/s
Width                                    : 720 pixels
Height                                   : 480 pixels
Display aspect ratio                     : 4:3
Frame rate mode                          : Constant
Frame rate                               : 29.970 (30000/1001) FPS
Original frame rate                      : 29.970 (29970/1000) FPS
Standard                                 : NTSC
Color space                              : YUV
Chroma subsampling                       : 4:1:1
Bit depth                                : 8 bits
Scan type                                : Interlaced
Scan order                               : Bottom Field First
Compression mode                         : Lossy
Bits/(Pixel*Frame)                       : 2.357
Stream size                              : 2.43 GiB (97%)

Audio
ID                                       : 1
Format                                   : PCM
Format settings                          : Little / Signed
Codec ID                                 : 1
Duration                                 : 12 min 4 s
Bit rate mode                            : Constant
Bit rate                                 : 1 024 kb/s
Channel(s)                               : 2 channels
Sampling rate                            : 32.0 kHz
Bit depth                                : 16 bits
Stream size                              : 88.5 MiB (3%)
Alignment                                : Aligned on interleaves
Interleave, duration                     : 33  ms (1.00 video frame)

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 1:26 am
by MishaEngel
First deinterlace it with your preferred method (we use StaxRip with Avisynth QTGMC).
Convert the deinterlaced video with VirtualDub2 to a stream of pictures (we used highest-quality JPEG 4:4:4).
Then we upscaled (denoised and deblurred) the stream of pictures with Topaz A.I. Gigapixel to a height of 1080 pixels, leaving the aspect the same (1440x1080 in our case).
We imported the enhanced stream of pictures back into VirtualDub2 and combined it with the sound from the original video file.
Export it to your preferred codec (we used Cineform Film Scan 2 for editing).
Convert it back to interlaced (1080i50 in our case).

A.I. Gigapixel is by far the best upscaling method we have ever used; the drawback is that you need a lot of storage and a fast workstation.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 3:54 am
by Dilson Abraham
MishaEngel wrote:First deinterlace it with your preferred method (we use StaxRip with Avisynth QTGMC).
Convert the deinterlaced video with VirtualDub2 to a stream of pictures (we used highest-quality JPEG 4:4:4).
Then we upscaled (denoised and deblurred) the stream of pictures with Topaz A.I. Gigapixel to a height of 1080 pixels, leaving the aspect the same (1440x1080 in our case).
We imported the enhanced stream of pictures back into VirtualDub2 and combined it with the sound from the original video file.
Export it to your preferred codec (we used Cineform Film Scan 2 for editing).
Convert it back to interlaced (1080i50 in our case).

A.I. Gigapixel is by far the best upscaling method we have ever used; the drawback is that you need a lot of storage and a fast workstation.



Hey Misha -

Thanks for those inputs; they will come in handy for me as well, since I have quite a bit of interlaced DV and HDV material that I would like to de-interlace, de-noise, and upscale before color correcting and finishing in DR. Thanks for outlining the workflow that you use.

It's been quite a few years since I switched to using a Hack, and I've lost touch with Avisynth.
It would be a great help if you have some tips or inputs on the Avisynth scripts that you are using to get your outputs.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 4:37 am
by Bryan Worsley
MishaEngel wrote:Convert it back to interlaced (1080i50 in our case).


@jdelisle - are you intending to reinterlace though? If so, I wouldn't use QTGMC to 'double-rate' deinterlace, as it does not preserve the original field pixels, by virtue of the applied temporal gaussian blur and contra-sharpening. It does have a Source-Match/Lossless mode that can recover the source fields, but there are (quite complex) trade-offs to consider.

http://avisynth.nl/index.php/QTGMC#Source_Match_.2F_Lossless

Added to which, QTGMC is best suited to high-quality sources; with lower-quality SD material it has a tendency to thicken edges and lines and requires some tweaking to get the best possible outcome.

I'd be more inclined to use YadifMod with NNEDI3, or possibly NNEDI2, and certainly if you intend to re-interlace.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 7:37 am
by jdelisle
MishaEngel wrote:First deinterlace it with your preferred method (we use StaxRip with Avisynth QTGMC).
Convert the deinterlaced video with VirtualDub2 to a stream of pictures (we used highest-quality JPEG 4:4:4).
Then we upscaled (denoised and deblurred) the stream of pictures with Topaz A.I. Gigapixel to a height of 1080 pixels, leaving the aspect the same (1440x1080 in our case).
We imported the enhanced stream of pictures back into VirtualDub2 and combined it with the sound from the original video file.
Export it to your preferred codec (we used Cineform Film Scan 2 for editing).
Convert it back to interlaced (1080i50 in our case).

A.I. Gigapixel is by far the best upscaling method we have ever used; the drawback is that you need a lot of storage and a fast workstation.



Sounds like a great approach, I'd love to try that. Do you happen to have any scripts batch jobs etc. you might share? Even if they're just scraps, I'd take them over starting from nothing! :)

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 10:10 am
by Andrew Kolakowski
Bryan Worsley wrote:
MishaEngel wrote:Convert it back to interlaced (1080i50 in our case).


@jdelisle - are you intending to reinterlace though? If so, I wouldn't use QTGMC to 'double-rate' deinterlace, as it does not preserve the original field pixels, by virtue of the applied temporal gaussian blur and contra-sharpening. It does have a Source-Match/Lossless mode that can recover the source fields, but there are (quite complex) trade-offs to consider.

http://avisynth.nl/index.php/QTGMC#Source_Match_.2F_Lossless

Added to which, QTGMC is best suited to high-quality sources; with lower-quality SD material it has a tendency to thicken edges and lines and requires some tweaking to get the best possible outcome.

I'd be more inclined to use YadifMod with NNEDI3, or possibly NNEDI2, and certainly if you intend to re-interlace.


Yadifmod over QTGMC? Only if you don't have time for QTGMC's much slower processing. It's good, but it leaves artefacts which need to be cleaned up (there is a good filter for that, and it should always be used with YadifMod). Also, any very high-frequency detail will be blurred by YadifMod. It's also not good at all when going to double fps.

Who cares if QTGMC does not preserve the original fields in its default presets? You are producing a new progressive format, and there is nothing that "forces" you to preserve the original fields. When you later want to upscale, it's totally irrelevant to preserve the original fields. You want to produce stable new frames, and this is how QTGMC is written. It will "clean up" problems with the original fields and produce great-looking "new" progressive frames, which are free of any flickering etc. It's one of its best features, especially when you're creating progressive output for the web etc. I've even used it to "clean up" progressive sources which were badly made. There is a mode for this as well.
The only time you may want to preserve the original fields is when deinterlacing for an fps conversion without a frame size change. Then you have settings for it, as you said (very slow, but they are there). It's still not clear if you actually gain anything in the final new-fps output (I have actually never used it). Again, we are creating a totally new master, and the need for the original fields is not so obvious (it can actually lead to worse-looking results).
It's explained very well in the QTGMC manual.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 10:22 am
by vivoices
I am planning to improve and archive a few boxes of PAL miniDV cassettes 720×576i PAR 1.067 (768x576 square pixels).

Topaz AI Gigapixel really makes a very good impression in up-scaling and seemingly adding detail.
So why not let it do all the work, meaning:

1. Render an image sequence from the original interlaced fields @720x288. What application can extract the fields without any loss of information?

2. Scale up with AI Gigapixel to 1440x1080.

3. Render Cineform or DNxHR @50p video files.

Would that not result in better quality video than deinterlacing before up-scaling?
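Step 1 above is just a de-weave of alternating scan lines. A minimal sketch in plain Python (a frame modeled as a list of rows; the function and toy data are mine, not from any tool in this thread):

```python
def separate_fields(frame):
    """Split an interlaced frame (a list of scan lines) into its two fields.

    Even rows go to one field, odd rows to the other; a 576-line PAL
    frame yields two 288-line fields, i.e. the 720x288 images mentioned
    above. Which field is temporally first does not change the split.
    """
    return frame[0::2], frame[1::2]

# Toy example: a 6-line "frame" where each line is tagged with its row index.
frame = [[row] * 4 for row in range(6)]
even_field, odd_field = separate_fields(frame)
```

Any tool that does this losslessly (VirtualDub's field split, or a bob without interpolation) gives the same two half-height images.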

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 10:27 am
by Andrew Kolakowski
No - scaling by field is very unlikely to be better, even if you use a very good scaling algorithm.
Have you tried Topaz on your footage?
It will never look as good as the demos on their website.
VirtualDub should allow you to extract single fields.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 11:44 am
by Andrew Kolakowski
I had a quick look at Topaz.
End result: nothing special at all. It can create additional detail, but it may look artificial, so nothing nice at all.
I would say that for video use it's actually worse than the nnedi3 filter, which uses neural-network upscaling. Topaz is crazy slow even at SD frame sizes: a good few seconds per frame on a 4-core i7! Waste of time :D
I remember Topaz from the past with their "big claims", and not much has really changed. Amazing (fake) PR-driven examples on their website, where the reality is very different.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 1:10 pm
by MishaEngel
It works pretty well for us, at least better than all the other upscalers. AI Gigapixel is GPU-driven, so the faster your GPUs, the sooner you will see results (we use a TR1950x + 2x VEGA FE).

Rumors are that BMD is also working on AI upscaling techniques for DR; let's wait for the announcements.

The best way to test a program like AI Gigapixel is to take a 100 MPixel Hasselblad picture, scale it down to around 8 MPixel, and then upscale it again (in steps of around 1.5x, max 2x) back to 100 MPixel, so you can clearly see how good (or bad) it is. First play around a bit with the settings (noise reduction and blur removal) on your files before you batch process all of them.

Kind of crazy that a combination of open-source programs (StaxRip, Avisynth, VirtualDub2) + AI Gigapixel produces superior quality compared to the big guys.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 2:03 pm
by Bryan Worsley
Andrew Kolakowski wrote:
Bryan Worsley wrote:
MishaEngel wrote:Convert it back to interlaced (1080i50 in our case).


@jdelisle - are you intending to reinterlace though? If so, I wouldn't use QTGMC to 'double-rate' deinterlace, as it does not preserve the original field pixels, by virtue of the applied temporal gaussian blur and contra-sharpening. It does have a Source-Match/Lossless mode that can recover the source fields, but there are (quite complex) trade-offs to consider.

http://avisynth.nl/index.php/QTGMC#Source_Match_.2F_Lossless

Added to which, QTGMC is best suited to high-quality sources; with lower-quality SD material it has a tendency to thicken edges and lines and requires some tweaking to get the best possible outcome.

I'd be more inclined to use YadifMod with NNEDI3, or possibly NNEDI2, and certainly if you intend to re-interlace.


Yadifmod over QTGMC? Only if you don't have time for QTGMC's much slower processing. It's good, but it leaves artefacts which need to be cleaned up (there is a good filter for that, and it should always be used with YadifMod). Also, any very high-frequency detail will be blurred by YadifMod. It's also not good at all when going to double fps.

Who cares if QTGMC does not preserve the original fields in its default presets? You are producing a new progressive format, and there is nothing that "forces" you to preserve the original fields. When you later want to upscale, it's totally irrelevant to preserve the original fields. You want to produce stable new frames, and this is how QTGMC is written. It will "clean up" problems with the original fields and produce great-looking "new" progressive frames, which are free of any flickering etc. It's one of its best features, especially when you're creating progressive output for the web etc. I've even used it to "clean up" progressive sources which were badly made. There is a mode for this as well.
The only time you may want to preserve the original fields is when deinterlacing for an fps conversion without a frame size change. Then you have settings for it, as you said (very slow, but they are there). It's still not clear if you actually gain anything in the final new-fps output (I have actually never used it). Again, we are creating a totally new master, and the need for the original fields is not so obvious (it can actually lead to worse-looking results).
It's explained very well in the QTGMC manual.


Andrew, I know all that. I've been using QTGMC since its conception. In fact the concept came about when another Doom9 forum member and I were pondering how best to treat residual interline 'shimmer' (twitter) with the existing AVISynth deinterlacers, and Didee, the original author, came back with the idea of applying a temporal gaussian blur, realizing in the process that it made an exceptionally good deinterlacer in its own right - so giving birth to TGMC - Temporal Gauss Motion Compensated.

If you read my post you will see that I was asking jdelisle if he intends to re-interlace, in light of MishaEngel's comment:

Bryan Worsley wrote:
MishaEngel wrote:Convert it back to interlaced (1080i50 in our case).


@jdelisle - are you intending to reinterlace though? If so, I wouldn't use QTGMC to 'double-rate' deinterlace, as it does not preserve the original field pixels, by virtue of the applied temporal gaussian blur and contra-sharpening. It does have a Source-Match/Lossless mode that can recover the source fields, but there are (quite complex) trade-offs to consider.

http://avisynth.nl/index.php/QTGMC#Source_Match_.2F_Lossless

Try re-interlacing the double rate output of QTGMC (at default) and see what it looks like. But hey, what do I know.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 2:15 pm
by Andrew Kolakowski
MishaEngel wrote:The best way to test a program like AI Gigapixel is to take a 100 MPixel Hasselblad picture, scale it down to around 8 MPixel, and then upscale it again (in steps of around 1.5x, max 2x) back to 100 MPixel, so you can clearly see how good (or bad) it is. First play around a bit with the settings (noise reduction and blur removal) on your files before you batch process all of them.




This is exactly what you shouldn't do when testing Topaz! :D
This is how the AI is trained, and that's why it works well there. The AI "detects" patterns left by the downscaling algorithms, and this is why you see such good results. On real footage it's far worse and creates an artificial look.

I used real SD footage and played with all the parameters. Nothing impressive at all. I get the same (or actually less artificial) results with the nnedi3 filter.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 2:18 pm
by Andrew Kolakowski
MishaEngel wrote:Kind of crazy that a combination of opensource programs(StaxRip, Avisynth, VirtualDub2) + AI gigapixel produce superior quality compared to the big guys.


Why crazy? Those are created by people with passion and who sometimes have plenty of time. Add support from open source community which can do 1000s of tests. Nothing surprising at all. This is also exactly why x264 is so good. It involved many programmers and 1000s of people who tested it and provided feedback to them.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 2:31 pm
by MishaEngel
Andrew Kolakowski wrote:
MishaEngel wrote:The best way to test a program like AI Gigapixel is to take a 100 MPixel Hasselblad picture, scale it down to around 8 MPixel, and then upscale it again (in steps of around 1.5x, max 2x) back to 100 MPixel, so you can clearly see how good (or bad) it is. First play around a bit with the settings (noise reduction and blur removal) on your files before you batch process all of them.




This is exactly what you shouldn't do when testing Topaz! :D
This is how the AI is trained, and that's why it works well there. The AI "detects" patterns left by the downscaling algorithms, and this is why you see such good results. On real footage it's far worse and creates an artificial look.

I used real SD footage and played with all the parameters. Nothing impressive at all. I get the same (or actually less artificial) results with the nnedi3 filter.


I disagree with you.

Also, AI Gigapixel has improved a lot since its introduction (around August 2018).

The good thing is that you don't have to use it; you can stick with nnedi3. Let others find out for themselves - there is a free trial period for AI Gigapixel.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 3:02 pm
by Andrew Kolakowski
Exactly.
I tried it and see no advantage at all over nnedi3. On top of that it's way slower and needs the video as an image sequence. For me it's a pure waste of time compared to VapourSynth with nnedi3, where I can use video as the source and do it all in one go (deinterlacing as well, if needed).

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 3:15 pm
by jdelisle
MishaEngel wrote:The good thing is that you don't have to use it; you can stick with nnedi3. Let others find out for themselves - there is a free trial period for AI Gigapixel.


How are you interfacing with and automating Gigapixel? Also, at seconds per frame... a 20-minute 59.94 fps video is going to take a fair bit of processing time, even with two 1080 Tis on it. Is it quicker when processing multiple sequential images that are nearly identical (i.e. frames of video)?

Thanks

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 3:20 pm
by jdelisle
Andrew Kolakowski wrote:Exactly.
I tried it and see no advantage at all over nnedi3. On top of that it's way slower and needs the video as an image sequence. For me it's a pure waste of time compared to VapourSynth with nnedi3, where I can use video as the source and do it all in one go (deinterlacing as well, if needed).



I'm leaning towards this approach, but I'm wondering if you can address getting the output into Resolve so I can do my color grading etc.

Source is MiniDV AVI as described above from mediainfo. I'll deinterlace with QTGMC, and resize with nnedi3 to 1080 height.

- What container and codec should I output from VapourSynth?

- What should I do about square vs. 4:3 pixel AR?

Thanks!

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 4:26 pm
by Andrew Kolakowski
The script outputs uncompressed data (8-32 bit, 4:2:0 to 4:4:4, depending on the source or your settings). You choose the desired format in your host app, which can be VirtualDub, ffmpeg, or another tool which understands VapourSynth.
4:3 SD needs to be resized to 1440x1080, or to 1920x1080 with pillarboxing. HD is almost always square-pixel. To make things simpler and universal, use 1920x1080 with pillarboxing.
Use VirtualDub2, which allows you to export to quite a few formats, including native Cineform, which Resolve supports natively. Use the MOV container.
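For reference, the geometry works out like this; a small plain-Python sketch (the function and its names are mine, not from any tool in this thread):

```python
def pillarbox_geometry(dar_w=4, dar_h=3, hd_height=1080, canvas_width=1920):
    """Square-pixel width of a dar_w:dar_h picture at hd_height, plus the
    width of the black pillar needed on each side of a canvas_width canvas."""
    picture_width = hd_height * dar_w // dar_h   # 1080 * 4 / 3 = 1440
    bar = (canvas_width - picture_width) // 2    # black pillar per side
    return picture_width, bar

width, bar = pillarbox_geometry()
# 4:3 SD at 1080 high -> 1440 wide, or centred on 1920 with 240 px bars each side
```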

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 4:54 pm
by Bryan Worsley
jdelisle wrote: I'll deinterlace with QTGMC, and resize with nnedi3 to 1080 height.


OK, so you are not intending to reinterlace then ?

If you were, I was going to add that MCBob would probably be a better choice than YadifMod/NNEDI3:

http://avisynth.nl/index.php/MCBob

It was also developed by Didee, the original creator of TGMC, and incorporates many of the same elements. At the time it was considered the best AVISynth bob deinterlacer for preservation of fine detail and freedom from residual interlace artifacts. And it retains the original fields.

I used it extensively with DV material at that time.

I'm not sure it was ever ported as a VapourSynth function though.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 5:07 pm
by jdelisle
Bryan Worsley wrote:OK, so you are not intending to reinterlace then?


Correct, no TV will ever play these directly, it'll always be a PC playing them to a TV "monitor", or a mobile device, etc..

Does deinterlacing with QTGMC make sense given that?

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 5:20 pm
by Bryan Worsley
Absolutely.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 6:17 pm
by Andrew Kolakowski
jdelisle wrote:
Bryan Worsley wrote:OK, so you are not intending to reinterlace then?


Correct, no TV will ever play these directly, it'll always be a PC playing them to a TV "monitor", or a mobile device, etc..

Does deinterlacing with QTGMC make sense given that?


In this case you can use 1440x1080.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 9:20 pm
by jdelisle
Andrew Kolakowski wrote:The script outputs uncompressed data (8-32 bit, 4:2:0 to 4:4:4, depending on the source or your settings). You choose the desired format in your host app, which can be VirtualDub, ffmpeg, or another tool which understands VapourSynth.
4:3 SD needs to be resized to 1440x1080, or to 1920x1080 with pillarboxing. HD is almost always square-pixel. To make things simpler and universal, use 1920x1080 with pillarboxing.
Use VirtualDub2, which allows you to export to quite a few formats, including native Cineform, which Resolve supports natively. Use the MOV container.



Thanks for the detailed suggestions Andrew.

I've made significant progress. I'm using VapourSynth on Arch Linux, using packages from AUR to make getting all the software etc. a bit simpler.

Here's my VapourSynth script:

Code: Select all
#!/usr/bin/env python
import vapoursynth as vs
import havsfunc as haf
import edi_rpow2 as edi
core = vs.get_core()
clip = core.ffms2.Source(source='/omega/HomeVideo/MiniDV.Tapes/MiniDV-47/MiniDV-47.Tour.old.house.avi')
clip = core.fmtc.resample(clip=clip, css="420")
clip = core.fmtc.bitdepth(clip=clip, bits=8)
clip = haf.QTGMC(clip, Preset='Slower', TFF=False)
clip = edi.nnedi3_rpow2(clip=clip, rfactor=2)
clip = core.resize.Spline36(clip, 720*2, 540*2, format=vs.YUV420P8, matrix_in_s='709')
clip.set_output()


Code: Select all
vspipe --y4m ./test.py - | ffmpeg -thread_queue_size 24 -i pipe: -i /omega/HomeVideo/MiniDV.Tapes/MiniDV-47/MiniDV-47.Tour.old.house.avi -c:v libx264 -crf 17 -preset slow -tune grain -c:a copy -map 0:0 -map 1:1 /omega/HomeVideo/MiniDV.Tapes/MiniDV-47/MiniDV-47.Tour.old.house.test2.mov


The video looks terrific... but I'm a bit concerned about my container + codec choice. I can't encode Cineform from ffmpeg as far as I can tell, so for this test run I used x264, as seen in the ffmpeg command above.

My objective is to create a deinterlaced, scaled (to 1440x1080, 59.94 fps) pre-Resolve master. This will supplement the original DV AVI and be stored away for posterity, so ideally I'd like something that is both losslessly compressed and Resolve-compatible. I'm stumped!! What to do?

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 9:50 pm
by Cary Knoop
jdelisle wrote:
Code: Select all
...
clip = core.fmtc.resample (clip=clip, css="420")
...


Depending on how the 4:1:1 video was encoded, the pre-deinterlacing resample to 4:2:0 may be a problem.

Generally speaking, you want to leave the chroma subsampling alone until after deinterlacing.

Also, the input matrix is Rec.601, the output should be Rec.709, and the primaries should be converted as well.

A little nitpicking, but the conversion to the destination color space should happen before resizing.

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 9:55 pm
by jdelisle
Cary Knoop wrote:
jdelisle wrote:
Code: Select all
...
clip = core.fmtc.resample (clip=clip, css="420")
...


Depending on how the 4:1:1 video was encoded, the pre-deinterlacing resample to 4:2:0 may be a problem.

Generally speaking, you want to leave the chroma subsampling alone until after deinterlacing.


As I understand it, QTGMC is incompatible with YUV 4:1:1, but I'm no expert! Maybe there's a better way?

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 08, 2019 10:16 pm
by Andrew Kolakowski
Check the QTGMC parameters in havsfunc - I think it may support 4:1:1. If not, convert to 4:2:0 before deinterlacing with fmtc (which should support 4:1:1), precisely setting the source and destination chroma subsampling and the interlaced nature.
In ffmpeg, use DNxHR instead of H.264:

-pix_fmt yuv422p10le -c:v dnxhd -profile:v 4 -movflags write_colr

is for DNxHR HQX.

There is not much lossless that will work in Resolve. If you have Resolve Studio, you can use lossless H.264 with I-frames only. Use -tune fastdecode and also keyint=1 so it's easy to work with in Resolve.
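Assembled into full argument lists, those two suggestions might look like this; a sketch in Python (the helper and filenames are my illustration, and `-x264-params keyint=1` is one way of passing the keyint setting):

```python
def encode_args(dst, lossless=False):
    """ffmpeg arguments for the two variants described above: DNxHR HQX
    (profile 4) by default, or I-frame-only lossless H.264 for Resolve
    Studio. Input is read from a pipe, e.g. fed by vspipe."""
    args = ["ffmpeg", "-i", "pipe:"]
    if lossless:
        args += ["-c:v", "libx264", "-crf", "0",
                 "-tune", "fastdecode", "-x264-params", "keyint=1"]
    else:
        args += ["-pix_fmt", "yuv422p10le", "-c:v", "dnxhd",
                 "-profile:v", "4", "-movflags", "write_colr"]
    return args + [dst]

# e.g. subprocess.run(encode_args("out.mov"), stdin=vspipe_proc.stdout)
```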

Re: Workflow from VapourSynth to Resolve

PostPosted: Mon Apr 15, 2019 4:38 pm
by jdelisle
Andrew Kolakowski wrote:Check the QTGMC parameters in havsfunc - I think it may support 4:1:1. If not, convert to 4:2:0 before deinterlacing with fmtc (which should support 4:1:1), precisely setting the source and destination chroma subsampling and the interlaced nature.
In ffmpeg, use DNxHR instead of H.264:

-pix_fmt yuv422p10le -c:v dnxhd -profile:v 4 -movflags write_colr

is for DNxHR HQX.

There is not much lossless that will work in Resolve. If you have Resolve Studio, you can use lossless H.264 with I-frames only. Use -tune fastdecode and also keyint=1 so it's easy to work with in Resolve.



Thanks for all the feedback Andrew, and everyone else too!

I've tried DNxHD (a pain to use) and DNxHR (nice to use) in both MXF and MOV containers. The MOVs worked nicely with Resolve.

I've also tried lossless x264 with your suggested settings in MOV containers, and Resolve (studio) works nicely with them too.

I feel like I need a sanity check Andrew!

The input is SD DV AVI 29.97fps 720x480 yuv411. I'm deinterlacing and resizing with Vapoursynth, outputting 59.94fps 1440x1080 yuv422 to ffmpeg to compress/transcode, etc. I will then use that as an input to edit, color correct, etc. in Resolve Studio.

I'd like to keep this workflow visually lossless, but going truly lossless seems like it might be overkill.

The files below were produced from a 3-minute test video as an input. As you can see, they get a little ridiculous in size.. and I have 60+ hours to deal with.

Which would you recommend? From my inexperienced eye they look pretty much equal when playing them back side-by-side.. but I know they're not.

### Test video
640M MiniDV-47.SHORT.avi

### x264 output sizes
957M MiniDV-47.SHORT.x264.crf15.veryslow.mov
1254M MiniDV-47.SHORT.x264.crf12.veryslow.mov
1510M MiniDV-47.SHORT.x264.crf10.veryslow.mov
2081M MiniDV-47.SHORT.x264.crf7.veryslow.mov
3284M MiniDV-47.SHORT.x264.crf3.veryslow.mov
4083M MiniDV-47.SHORT.x264.crf1.veryslow.mov
6621M MiniDV-47.SHORT.x264.crf0.veryslow.mov

### dnxhr output sizes (LB, SQ, HQ)
1498M MiniDV-47.SHORT.dnxhrlb.mov
4701M MiniDV-47.SHORT.dnxhrsq.mov
7104M MiniDV-47.SHORT.dnxhrhq.mov

Re: Workflow from VapourSynth to Resolve

PostPosted: Tue Apr 16, 2019 8:23 am
by Andrew Kolakowski
The bitrate curve is nonlinear, so to preserve e.g. 95% quality you need, say, 80 Mbit/s, but to preserve 97% you need, say, 150 Mbit/s. The closer you get to lossless, the faster the bitrate rises, and at those levels it's all about preserving noise/high frequencies.
Anything at CRF 10 or better will already be very good.
You can export an uncompressed YUV file, then DNxHR LB and H.264 at around the same file size. Then run

ffmpeg -i "yuv_source" -i "other_source" -filter_complex psnr -f null -

to check the PSNR difference for the two. H.264 should be better. Anything above a 3 dB difference is your threshold.
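For reference, the PSNR that the filter reports is straightforward to compute; a toy sketch in plain Python (8-bit samples, so the peak value is 255; the function and sample data are mine):

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length sample
    sequences; higher means closer to the reference, inf means identical."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")   # bit-exact, i.e. lossless
    return 10 * math.log10(peak * peak / mse)

# An error of one code value on every sample gives MSE = 1,
# i.e. 10*log10(255^2), roughly 48.1 dB.
samples = [16, 128, 235, 64]
off_by_one = [s + 1 for s in samples]
```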

Re: Workflow from VapourSynth to Resolve

PostPosted: Tue Apr 16, 2019 2:48 pm
by jdelisle
Andrew Kolakowski wrote:The bitrate curve is nonlinear, so to preserve e.g. 95% quality you need, say, 80 Mbit/s, but to preserve 97% you need, say, 150 Mbit/s. The closer you get to lossless, the faster the bitrate rises, and at those levels it's all about preserving noise/high frequencies.
Anything at CRF 10 or better will already be very good.
You can export an uncompressed YUV file, then DNxHR LB and H.264 at around the same file size. Then run

ffmpeg -i "yuv_source" -i "other_source" -filter_complex psnr -f null -

to check the PSNR difference for the two. H.264 should be better. Anything above a 3 dB difference is your threshold.


Thanks very much for the tips! I'll give that a shot.

Re: Workflow from VapourSynth to Resolve

PostPosted: Sun Mar 29, 2020 4:41 am
by Dmytro Shijan
If it is still relevant for you, check this VapourSynth + QTGMC Deinterlace + Hybrid FAQ for macOS (you can process files in the same way on Windows too): viewtopic.php?f=3&t=109259

Re: Workflow from VapourSynth to Resolve

PostPosted: Sun Mar 29, 2020 6:34 am
by Uli Plank
I did a test where I used the same camera, the UMP 4.6. So as not to fall into the trap of downscaling, I used it at full size with 3 lenses: 25mm, 50mm, and 100mm. They are not really identical in color, even though they are all by Zeiss and from the same period. Plus, no lens is really exactly its stated focal length (the 25 is more like a 26, for example).
Nevertheless, using the original scaled down only to UHD, I could work pretty well with a crop to HD and another one to SD. Well, kind of: I used square pixels at 1024x576 to avoid scaling.

The results from Topaz Video Enhance AI (a recent program from them) were not really that impressive going from HD to UHD, where the difference from Resolve's Super Scale hardly justifies the rendering time, even if textures looked a bit better. When going from SD to UHD the difference was already more obvious. Not that I'd seriously recommend it; it's still very soft.

But what really impressed me was going from true SD, interlaced DVC-Pro, to HD. The amount of credible detail regained was surprising. For details, see Dmitry's thread in the post-production forum.

Re: Workflow from VapourSynth to Resolve

PostPosted: Fri Jul 17, 2020 5:20 pm
by Alek74
Hi,
How do I deinterlace 1080i using QTGMC and StaxRip?
Greetings
Alek

Re: Workflow from VapourSynth to Resolve

PostPosted: Sat Jul 01, 2023 11:25 am
by anttiryt
Uli Plank wrote:
But what really impressed me was going from true SD, interlaced DVC-Pro, to HD. The amount of credible detail regained was surprising. For details, see Dmitry's thread in the post-production forum.


That would probably be this one:

VapourSynth + QTGMC Deinterlace + Hybrid FAQ for macOS
https://forum.blackmagicdesign.com/viewtopic.php?f=3&t=109259