Findings on DNxHR/HD as intermediate (post-production) codec


stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Sep 12, 2021 11:20 am

Findings and Executive Summary

If you need to work with H.264 source footage in DaVinci Resolve (free), use the "Direct Intermediate" workflow as described in this article.

For FHD footage, use the DNxHR codec as the "intermediate" (post-production) one. If you don't suffer from a disk space shortage, use ffmpeg and transcode into the HQ profile (440 Mbps bitrate); otherwise the SQ profile (290 Mbps bitrate) should be OK too. There is a piece of independent evidence in favour of SQ's suitability from Atomos support, where a 220 Mbps bitrate is considered "very good" for representing FHD 1920x1080p 60fps.
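For planning disk space, here is a rough back-of-envelope calculation in plain shell arithmetic (treating 1 Mbps as 10^6 bits/s and ignoring the audio track and container overhead):

```shell
# Approximate storage cost per hour of FHD 59.94fps footage for the two profiles.
hq_mbps=440   # DNxHR HQ bitrate, as measured in this post
sq_mbps=290   # DNxHR SQ bitrate

# GB per hour = Mbps * 3600 s / 8 (bits -> bytes) / 1000 (MB -> GB)
hq_gb_per_hour=$(( hq_mbps * 3600 / 8 / 1000 ))
sq_gb_per_hour=$(( sq_mbps * 3600 / 8 / 1000 ))

echo "HQ: ~${hq_gb_per_hour} GB/hour, SQ: ~${sq_gb_per_hour} GB/hour"
```

So an hour of FHD rushes costs roughly 200 GB in HQ versus roughly 130 GB in SQ; that difference is the whole tradeoff in a nutshell.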

Deliver your FHD output as a DNxHR HQ MOV master; you can easily convert it to any other delivery format later. A (bonus) fast ffmpeg script to convert your master into H.264 in an MP4 container (the kind YouTube accepts), with instructions on how to fine-tune its bitrate, is at the end of this post.

For 4k UHD footage, use the same guidelines: transcode your footage into DNxHR with ffmpeg (the DNxHR codec will handle the bitrate for you automagically, depending on whether HQ or SQ profile is chosen). If your available computing resources are not up to the task of 4k, use DaVinci-produced "optimized media" for editing instead. But for delivery, you should use the "original" transcoded (large-sized) footage.

To work with H.264 source footage in DaVinci Resolve Studio (paid), just import your H.264 footage into your timeline, whether FHD or 4k UHD. Using optimized media is recommended anyway, for the sake of comfortable editing.

Unlike the free edition, Studio has many delivery options right out of the box, but you may use the "DNxHR master" strategy too, transcoding to whatever you need with ffmpeg later.

The Story

I've left the dark side and moved to DaVinci Resolve on Linux only a week ago, and just finished my first project in it. I had a few questions, did some research (my results follow), and now I'm sharing my findings with the community. So here goes the story.
Given
  1. DaVinci Resolve (free) neither encodes nor decodes H.264, and purchasing Studio is on my schedule but not right now;
  2. all my present cameras are taking footage in various containers but with H.264 codec (which is actually my single “acquisition codec”),
the only option I have is to go for a "Direct Intermediate workflow" as explained in this nice blog post from 2017. My current project footage is taken in FHD 1920x1080p 59.94fps H.264 36Mbps 4:2:0 and my quick and dirty decision for an intermediate (post-production) codec was DNxHR 4:2:2 10bit. By default, it produces a 440Mbps output bitrate. I transcoded everything to it with ffmpeg and actually, it worked well, the project was a success.

Now I started thinking about some optimizations. I took one footage file as a sample, namely TSCF4382.MOV (FHD 1920x1080p 59.94fps H.264 36Mbps 4:2:0), and compared its size to the transcoded one:
Code: Select all
 2398943488 bytes (~ 2,3 Gb) TSCF4382.MOV
27732143963 bytes (~26,0 Gb) TSCF4382_DNxHR_HQX.mov

The latter is 11.6 times bigger. So my first question: is 440 Mbps DNxHR overkill for representing a 36 Mbps H.264 video, or does it actually fit well, given that H.264 is said to compress a video stream at around a 10:1 ratio? Could you share your opinions on this subject, please?
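The observed 11.6x growth is in the same ballpark as the raw bitrate ratio; a quick check in shell integer math (keeping one decimal place by working in tenths):

```shell
# Compare the bitrate ratio to the observed 11.6x file-size growth.
src_mbps=36    # camera H.264 bitrate
dnx_mbps=440   # DNxHR HQ bitrate

ratio_x10=$(( dnx_mbps * 10 / src_mbps ))   # ratio scaled by 10 to keep one decimal
echo "bitrate ratio ~ $(( ratio_x10 / 10 )).$(( ratio_x10 % 10 ))x"
```

That gives ~12.2x; the small gap versus the measured 11.6x is plausibly down to the audio track and container overhead counting the same in both files.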

(Side note: converting 8bit footage to 10bit doesn't make any difference for editing, as explained in detail in an older topic, but neither does it hurt the transcoded file size; DNxHR HQ produced the same file size as DNxHR HQX, so from now on I experiment with 4:2:2 8bit only.)

My next idea: OK, let's assume that 440 Mbps DNxHR really is overkill for FHD 59.94. DNxHR (attn: I am speaking of the HQ profile only) does not allow you to alter the output video bitrate; no matter what you put under the -b:v option, it runs at 440 Mbps for FHD 59.94fps and about 820 Mbps for UHD 29.97fps. But we have the older DNxHD codec, which is still fine for FHD and does allow the bitrate to be adjusted, though DNxHD is picky about the bitrate you specify and accepts only the "good" ones. Let's see which bitrates my ffmpeg installation (v4.2.4) promises to accept, using a dummy command:
Code: Select all
ffmpeg -loglevel error -f lavfi -i testsrc2 -c:v dnxhd -f null - 2>&1 | grep 1920x1080p | grep yuv422p

This gives us a list of acceptable bitrates: 36 45 75 90 115 120 145 175 185 220 240 290 365 440 Mbps. Nice. I wrote a simple script to compare the output file sizes across multiple Mbps values:
Code: Select all
#!/usr/bin/bash
# Convert H.264 from camera to DNxHD for DaVinci Resolve
#
f="$1"
# two sanity checks, may be deleted
if [ -z "${f}" ] ; then echo "Filename expected" ; exit 1 ; fi
[ -f "${f}" ] || { echo "No such file" ; exit 1 ; }

g=$(basename "${f}" .MOV)
echo "Converting file ${f} to ./${g}.mov"

for br in 36 45 75 90 115 120 145 175 185 220 240 290 365 440
do
   time ffmpeg -i "${f}"          \
      -threads    4              \
      -c:v        dnxhd          \
      -profile:v  dnxhd          \
      -pix_fmt    yuv422p        \
      -colorspace bt709          \
      -r          60000/1001     \
      -b:v        "${br}M"       \
      -c:a        pcm_s16le -ar 48000 \
      -f          mov            \
      -movflags   +faststart     \
      -write_tmcd on             \
      "./${g}_DNxHD_${br}M.mov"
done

The DNxHD codec accepted all these bitrate values (no errors popped up), but here came the first finding of my research:

DNxHD (as included in my ffmpeg) actually implements only a subset of the bitrates mentioned.

For example, the three files with requested bitrates of 115, 120 and 145 Mbps are identical byte for byte: the same size of 18355908555 bytes (~18 GB), the same MD5 checksum 8528ece5fc963416b85b3f8c09f10284, and ffprobe reports them all as 290689 kb/s (actually, 290 Mbps).

The same story with the 36, 45, 75 and 90 Mbps files: they are all identical at 90 Mbps, ~5.4 GB each.

The files with 175, 185 and 220 Mbps bitrate settings also came out identical, all at 440 Mbps, ~26 GB each. At this point I stopped further measurements, because obviously everything from 175 Mbps up to 440 Mbps comes out byte-for-byte identical at 440 Mbps.
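If it helps anyone script around this, the observed snapping can be sketched as a tiny shell helper. Only the three output values are confirmed by my measurements; the exact cut-off points between the buckets are my guess from the data points above:

```shell
# Sketch: for FHD 59.94fps 4:2:2 8bit, any requested DNxHD bitrate appears to
# snap to one of three real steps (90, 290, 440 Mbps). Cut-offs are guesses.
nearest_dnxhd_fhd60_bitrate() {
  req=$1
  if   [ "$req" -le 90 ]  ; then echo 90    # measured: 36/45/75/90 -> 90
  elif [ "$req" -le 145 ] ; then echo 290   # measured: 115/120/145 -> 290
  else                           echo 440   # measured: 175/185/220 -> 440
  fi
}

nearest_dnxhd_fhd60_bitrate 120   # prints 290
```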

The second finding: when I compared the file produced with the DNxHR codec HQ profile against the DNxHD files at 440 Mbps, they were almost identical in size (a few kilobytes of difference) and the video was indistinguishable.

Conclusion
DNxHD should be considered obsolete. ffmpeg actually implements just three bitrates for FHD 60fps (90, 290 and 440 Mbps), which correspond to the "LB", "SQ" and "HQ" profiles of DNxHR, respectively.

(UPD: actually, I should have been more attentive. A post-factum check of the Wikipedia section on Avid DNxHD resolutions for FHD 60/59.94fps is pretty clear about only three available bitrates: 90, 291 and 440 Mbps. OK, I tested it myself, so now I am confident; your own live experience beats any Wikipedia anyway, right? Also note that the official Avid DNxHR specs whitepaper from 2015 says something completely different for FHD.) ;)

So for a post-production (intermediate) codec, just use DNxHR and be happy with it for both FHD and 4k UHD; you only need to select the profile that fits both your footage quality and your artistic intention. The SQ profile seems to be the best overall tradeoff for an FHD project.

If your camera footage is 8bit, transcoding into 10bit provides zero benefit: if you have just $8 in your wallet, using a bigger wallet won't turn your $8 into $10. Under the hood, DaVinci works with its own internal 32bit floating point frame-by-frame representation anyway.
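To make the wallet analogy concrete: a naive 8-to-10-bit pad just multiplies each code value by 4 (real converters scale slightly differently, e.g. x*1023/255, but the point is the same), so no new in-between levels ever appear:

```shell
# Padding an 8-bit code value to 10 bits: shift left by 2 (multiply by 4).
# 256 distinct input levels stay exactly 256 distinct output levels.
to_10bit() { echo $(( $1 << 2 )); }

lvl_a=$(to_10bit 200)     # 800
lvl_b=$(to_10bit 201)     # 804
gap=$(( lvl_b - lvl_a ))  # 4: 10-bit values 801..803 are never produced
echo "adjacent 8-bit levels land ${gap} apart in 10-bit"
```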

Bonus. Fast ffmpeg script to convert DNxHR master into H.264

Important notes.
1. I intentionally use the h264_nvenc encoder instead of the widely accepted libx264. The NVidia encoder is many times faster, and, given the identical visual quality of transcoded files, the bitrate difference (in favour of libx264) is really negligible; I noticed maybe a 3-5% difference in bitrates and corresponding file sizes. Simply put: at the same bitrate and file size, libx264 will look a tad better, but with h264_nvenc you can add 3-5% to the bitrate and file size (by decreasing the CQ value by 1.0, or say 0.5; it is a float) and get your transcoding job done 2-3 times faster. This holds even on my old NVidia 980M, let alone modern 2000 and 3000 series GPUs.
2. VBR in libx264 is controlled with the CRF setting (the option is "-crf XX": the lower XX is, the better your visual quality and the bigger the overall bitrate). With h264_nvenc, the "-cq" option has a similar effect. So while transcoding your DNxHR master into H.264 MP4, try a few different values of -cq until you achieve your desired tradeoff between bitrate (and size) and visual quality. For example, a CQ value of 10 gave me 90+ Mbps, and CQ 18.5 gave ~30 Mbps, but YMMV as it all depends on your personal intentions and the actual source video.
Code: Select all
#!/usr/bin/bash
#
# Convert a DaVinci-exported MOV master to YouTube-compliant MP4 H.264, see Google Help Center
# "Recommended upload encoding settings"
#
# Container: MP4
#     -- moov atom at the front of the file (Fast Start)
#
# Codec: H.264 (I want to use the NVidia "h264_nvenc" encoder as it is faster)
#     -- Progressive scan
#     -- High profile
#     -- 2 consecutive B frames
#     -- Closed GOP. GOP of half the frame rate.
#     -- CABAC coder.
#     -- Variable bitrate. For 1080p high frame rate (>= 48 fps) 12 Mbps is recommended, though I prefer 15
#     -- Chroma subsampling 4:2:0
#     -- Color space BT.709 (for SDR uploads; for HDR they recommend something else)
#
# Audio codec: AAC-LC
#     -- stereo or stereo + 5.1
#     -- sample rate 48 kHz or 96 kHz
#
f="$1"
if [ -z "${f}" ] ; then echo "Filename expected" ; exit 1 ; fi
[ -f "${f}" ] || { echo "No such file" ; exit 1 ; }
g=$(basename "${f}" .mov)
echo "Converting file ${f} to ./${g}.mp4"
sleep 1
#
# This is for the NVidia GPU hardware encoder
#
# cq 18 -> 33.5 Mbps, cq 10 -> 90+ Mbps
#
time ffmpeg -i "${f}"           \
   -threads     4               \
   -c:v         h264_nvenc      \
   -coder       cabac           \
   -profile:v   high            \
   -preset      slow            \
   -bf          2               \
   -strict_gop  1               \
   -rc-lookahead 8              \
   -rc          vbr_hq          \
   -2pass       1               \
   -pix_fmt     yuv420p         \
   -colorspace  bt709           \
   -b:v         0               \
   -maxrate     300M            \
   -bufsize     600M            \
   -cq          18.5            \
   -c:a         aac -ar 48000 -b:a 768k \
   -movflags    +faststart      \
   -f           mp4             \
   "./${g}.mp4"


Warmest regards,
Andreas Stesinou
Posted Sep. 12, 2021
Blackmagic DaVinci Resolve Studio 17.4.6
Blackmagic Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD

Peter Chamberlain

Blackmagic Design

  • Posts: 13947
  • Joined: Wed Aug 22, 2012 7:08 am

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Mon Sep 13, 2021 2:13 am

moved to Resolve forum
DaVinci Resolve Product Manager

Olivier MATHIEU

  • Posts: 937
  • Joined: Thu Aug 04, 2016 1:55 pm
  • Location: Paris/Grenoble, FRANCE

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Mon Sep 13, 2021 6:35 am

Thanks
Resolve Studio 18.6.x & Fusion Studio 18.6.x | MacOS 13.6.x | GUI : 3840 x 2160 | Ntw : 10Gb/s
MacbookPro M2 Max

Editor, Compositing Artist
Davinci Resolve & Fusion Certified Trainer

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Findings on usability of SQ profile

Posted Wed Sep 15, 2021 10:00 am

Peter Chamberlain wrote: moved to Resolve forum


Thank you Peter, this section is definitely the most appropriate for the topic!

BTW, I did some experimentation with out-of-camera H.264 conversions into MOV, which is consumable by the DaVinci Resolve free edition. My findings are as follows.

1. The DNxHR SQ profile (both at 4k UHD and at FHD) is well suited for converting and editing interviews and other scenes where image quality (IQ) is not mission-critical. For FHD, it converts my H.264 36 Mbps camera rolls to a 291 Mbps bitrate, which seems well enough for the purpose. My tiny camera2davinci script for ffmpeg is at the end of this comment.

2. But for 4k UHD H.264 100 Mbps (think landscape scenery with plenty of detail, or serious work where IQ is mission-critical), SQ doesn't fit quite so well. For this purpose the HQ (or HQX) profile is definitely better. Just change the profile name in the script from dnxhr_sq to dnxhr_hq and ffmpeg will handle the bitrate for you silently.

3. You may also use the Apple ProRes codec as an intermediate one, with the same caveats about the profile used and very similar results. It's just a matter of personal preference, I think: given (almost) the same bitrate and profile, your transcoded IQ will be on par with both codecs.

4. How does the IQ of your media, transcoded with an intermediate codec, affect the IQ of your deliveries? My current understanding is as follows (I may or may not be correct in these assumptions):
  • DVR takes the input (which should be of sufficient quality) and imports it into the media pool. Whatever gets into the timeline undergoes a lossless conversion to DVR's internal 32bit floating point frame sequence format and is stored in the clip cache.
  • Editing and grading are performed on DVR's internal format. The quality, codec and pixel format of the source clip no longer matter.
  • But if your source footage was 8bit and you did heavy editing and grading on it, then to preserve your results you should set the final delivery to the HQ/HQX profile and 10bit.
  • I don't know whether DVR relies on its already cached internal representation for final delivery, or whether it takes the original source footage again, decodes it from scratch and applies the editing/grading commands accumulated in your project. The latter seems the more probable scenario, but honestly I don't know how to verify this.

Everything said above is my humble, personal, subjective and highly biased opinion; also my text contains more questions than answers. Please take it with a grain of salt, YMMV.

Warmest regards, Andreas. P.S. My camera2davinci script (DNxHR SQ setting) follows. The output frame rate is hardcoded because some of my source media are shot at higher fps (e.g. 100fps) and I want everything transcoded to a uniform state. Transcoding from a lower fps to a higher one does not work. If you have some footage at 29.97 (or 30) fps and other footage at 59.94 (60) fps, and you don't want your "slow" footage used at 2x speed, I suggest setting your project timeline to the lowest value, changing the fps setting in the script, and using it.
Code: Select all
#!/usr/bin/bash
#
# Convert H.264 from camera for DaVinci Resolve

f="$1"

if [ -z "${f}" ] ; then echo "Filename expected" ; exit 1 ; fi

[ -f "${f}" ] || { echo "No such file" ; exit 1 ; }

g=$(basename "${f}" .MOV)

echo "Converting file ${f} to ./${g}_DNxHR_SQ.mov"
sync ; sleep 1

#
# transcode FHD to DNxHR SQ 8bit
#
# the bitrate is whatever the profile prescribes, and cannot be altered by options
#
time ffmpeg                     \
   -i          "${f}"           \
   -threads    4                \
   -c:v        dnxhd            \
   -profile:v  dnxhr_sq         \
   -pix_fmt    yuv422p          \
   -colorspace bt709            \
   -r          60000/1001       \
   -c:a        pcm_s16le -ar 48000 \
   -movflags   +faststart       \
   -write_tmcd on               \
   -f          mov              \
   "./${g}_DNxHR_SQ.mov"

Heikki Repo

  • Posts: 20
  • Joined: Wed Apr 20, 2016 10:48 am
  • Location: Finland

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Wed Sep 15, 2021 2:25 pm

DaVinci Resolve (free) neither encodes nor decodes H.264, and purchasing Studio is on my schedule but not right now;


Please be warned that while owning Resolve Studio allows you to decode and encode h264 video on Linux, it doesn't allow you to decode or encode any audio embedded with h264. Not really a nice thing to find out after purchasing the Resolve license, but it is what it is.

I use ffmpeg for separating audio from h264 footage before editing and encode h264 with Handbrake.

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 12:04 pm

Heikki Repo wrote:
DaVinci Resolve (free) neither encodes nor decodes H.264, and purchasing Studio is on my schedule but not right now;


Please be warned that while owning Resolve Studio allows you to decode and encode h264 video on Linux, it doesn't allow you to decode or encode any audio embedded with h264. Not really a nice thing to find out after purchasing the Resolve license, but it is what it is.

I use ffmpeg for separating audio from h264 footage before editing and encode h264 with Handbrake.

Thank you for the wisdom, now I'll be aware of this. My cameras produce MOV files with H.264 video and PCM 16bit 48 kHz audio anyway, but if I ever use some smartphone video, I'll be ready.

BTW, why don't you just transcode the audio of your source files to PCM with the same ffmpeg before editing? It looks like your workflow contains some extra steps that aren't really required. Can you share your reasons, please? Thanks!
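For what it's worth, such an audio-only conversion could look like the following (hypothetical file names; the command is built as a dry-run string here, so drop the echo and run `$cmd` directly to actually transcode):

```shell
# Hypothetical remux: keep the H.264 video untouched, convert only the audio
# to PCM so Resolve Studio on Linux can read it. File names are placeholders.
src="clip.mp4"
dst="clip_pcm.mov"

cmd="ffmpeg -i $src -c:v copy -c:a pcm_s16le -ar 48000 -f mov $dst"
echo "$cmd"   # dry run: print the command instead of executing it
```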

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Updated ffmpeg transcoder H.264 -> DNxHR HQ

Posted Sun Oct 31, 2021 12:51 pm

For the DaVinci Resolve free ed. users: here is the updated ffmpeg command line to transcode H.264 camera output into the DNxHR HQ intermediate (post-production) codec, which is consumable by the free edition of DVR.

What's new:

1. Correctly preserves colorimetric data. Actually, I've just hardcoded whatever my Fuji cameras provide; SONY cameras have different colorimetric settings, as you can see further down the thread. Yes, I could write an automated script to handle this too... if anyone cares.
2. Generates frame timestamps into data stream 2. This is useful for the preliminary sync of long chunks of video where syncing by waveform takes forever.
3. Explicitly sets the timecode metadata tag of the output file to the hh:mm:ss.00 value of the Media Creation Time.
4. Preserves (copies to the output) as much of the original video and audio metadata as possible. (Since version 4.4, ffmpeg does this by default, but mine is 4.2.something, so it does not.)
5. The audio stream from the camera (which is already PCM 16bit 48 kHz) is not re-transcoded but copied to the output as-is. (For files with an AAC audio track, you can easily add the transcoding command-line options from the previous edition.)
6. The video frame rate is hardcoded to 30000/1001 because 4k UHD 29.97 fps 4:2:0 at ~100 Mbps is the best quality my cameras are able to produce, and 60 fps delivery is not in demand anyway.

Note: at 4k UHD 29.97 fps, the output bitrate is ~870 Mbps, so be ready for file sizes to grow ~9 times compared to the original H.264.
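A quick sanity check on that ~9x figure, straight from the bitrates quoted above:

```shell
# Output vs input bitrate ratio for 4k UHD 29.97fps.
in_mbps=100    # camera H.264 bitrate (from this thread)
out_mbps=870   # DNxHR HQ at UHD 29.97fps, as measured

growth=$(( out_mbps / in_mbps ))
echo "video stream growth ~ ${growth}x"
```

That yields ~8x on the video stream alone; with container overhead and the PCM audio track, roughly 9x overall is a fair expectation.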

Warning: this is not a ready-to-go script, just a snippet of a 3-command sequence. Please tune it to your taste yourself :)
Code: Select all
tmcd=`ffprobe -hide_banner  -show_entries stream_tags=creation_time srcCameraFile.MOV 2>&1 | \
   grep -m 1 '^TAG' | \
   sed '/^TAG/ {   s/TAG:creation_time=.*T//
            s/....Z$// }' `

ffmpeg -hide_banner -fflags +genpts+igndts  -hwaccel cuda -i srcCameraFile.MOV \
   -c:v dnxhd -profile:v dnxhr_hq \
   -color_range pc -pix_fmt yuv422p \
   -color_primaries bt709 -color_trc smpte170m -colorspace smpte170m \
   -r 30000/1001 -c:a copy -movflags +faststart+use_metadata_tags \
   -map_metadata 0 -map_metadata:s:a 0:s:a -map_metadata:s:v 0:s:v \
   -write_tmcd on -timecode "${tmcd}" -vsync cfr -f mov \
   outTranscodedVideo.mov

touch -r srcCameraFile.MOV outTranscodedVideo.mov


Actually, transcoding makes the footage files so big that I found purchasing the DVR Studio license (and working with 4k UHD H.264 footage directly) to be much cheaper than purchasing and installing more HDDs and working with transcoded files. Transcoding is also a really time-consuming business.

For FullHD source footage, on the contrary, transcoding is a viable way to go.

Warmest regards,
Andreas
Last edited by stesin on Sun Oct 31, 2021 11:57 pm, edited 2 times in total.

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 1:40 pm

Andreas, do you need to parse the timecode metadata like that? Can't you just copy stream 1? Something like -map 0:1 to map stream 1 (the second stream, the timecode stream) of the source file into the second stream of the output file.
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 2:01 pm

Actually, I'm not sure it's necessary to do any mapping or manual timecode setting. Using this command:
Code: Select all
ffmpeg -hide_banner -fflags +genpts+igndts  -i TestH264.mov \
   -c:v dnxhd -profile:v dnxhr_hq \
   -color_range pc -pix_fmt yuv422p \
   -color_primaries bt709 -color_trc smpte170m -colorspace smpte170m \
   -r 30000/1001 -c:a copy -movflags +faststart+use_metadata_tags \
   -vsync cfr -f mov TestDNxHR.mov
On this source file:
Code: Select all
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'TestH264B.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 512
    compatible_brands: qt
    creation_time   : 2021-10-31T13:57:19.000000Z
    encoder         : Blackmagic Design DaVinci Resolve Studio
  Duration: 00:02:24.87, start: 0.000000, bitrate: 20744 kb/s
  Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709/bt709/unknown), 1920x1080 [SAR 1:1 DAR 16:9], 20743 kb/s, 30 fps, 30 tbr, 15360 tbn, 30720 tbc (default)
    Metadata:
      creation_time   : 2021-10-31T13:57:19.000000Z
      handler_name    : VideoHandler
      vendor_id       :
      encoder         : H.264
      timecode        : 01:55:57:18
  Stream #0:1(eng): Data: none (tmcd / 0x64636D74)
    Metadata:
      creation_time   : 2021-10-31T13:57:19.000000Z
      handler_name    : TimeCodeHandler
      timecode        : 01:55:57:18
Gives this output file:
Code: Select all
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'TestDNxHRB.mov':
  Metadata:
    minor_version   : 512
    major_brand     : qt
    compatible_brands: qt
    encoder         : Lavf58.76.100
  Duration: 00:02:24.91, start: 0.000000, bitrate: 219981 kb/s
  Stream #0:0: Video: dnxhd (DNXHR HQ) (AVdh / 0x68645641), yuv422p(pc, smpte170m/bt709/smpte170m), 1920x1080, 219980 kb/s, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 30k tbn, 30k tbc (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : FFMP
      encoder         : Lavc58.134.100 dnxhd
      timecode        : 01:55:57:18
  Stream #0:1(eng): Data: none (tmcd / 0x64636D74)
    Metadata:
      handler_name    : VideoHandler
      timecode        : 01:55:57:18
And the timecode was detected fine in Resolve (screenshot omitted).

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 3:12 pm

TheBloke wrote: Andreas, do you need to parse the timecode metadata like that? Can't you just copy stream 1? Something like -map 0:1 to map stream 1 (the second stream, the timecode stream) of the source file into the second stream of the output file.
Hi my friend!

No, I can't, and here's why. My main cameras are both Fujifilm, an X-T20 and an X-E3; technically and software-wise they are identical (hmm, almost) and produce identical videos which contain neither frame timestamps nor timecodes. Look:
Code: Select all
$ ffprobe -hide_banner TSCF4521.MOV
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'TSCF4521.MOV':
  Metadata:
    major_brand     : qt 
    minor_version   : 0
    compatible_brands: qt 
    creation_time   : 2021-09-24T20:39:22.000000Z
    original_format : Digital Camera
    original_format-eng: Digital Camera
    comment         : FUJIFILM DIGITAL CAMERA X-T20
    comment-eng     : FUJIFILM DIGITAL CAMERA X-T20
  Duration: 00:03:57.24, start: 0.000000, bitrate: 103286 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/smpte170m), 3840x2160, 101716 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      creation_time   : 2021-09-24T20:39:22.000000Z
      encoder         : AVC Coding
    Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2021-09-24T20:39:22.000000Z
$

As you see, stream 0 is video, stream 1 is audio and that's all.
So...

Warmest regards,
Andreas

UPD. I double-checked it with the command
Code: Select all
ffprobe -hide_banner -show_streams TSCF4521.MOV | grep timecode

and got the output:

timecode=N/A
Last edited by stesin on Sun Oct 31, 2021 3:29 pm, edited 1 time in total.

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 3:21 pm

TheBloke wrote: Actually I'm not sure it's necessary to do any mapping or manual timecode setting
Your source file was produced by DVR Studio, not by a camera; that's why it already has all the necessary metadata filled in, including the timecode and an already-present timestamp stream. This is a different use case from mine.

Also, your file is video-only (no audio track/stream), so the timestamp data stream goes out as stream 1. In my files (where 0 is the video and 1 is the audio) it becomes output stream 2 after transcoding.

Also, which version of ffmpeg do you use? The new (4.4.*) version is known to handle metadata well while transcoding (it silently copies everything possible from the input to the output unless explicitly told otherwise); mine is version 4.2.4-1ubuntu0.1, so it does not by default.

Thanks! Warmest regards,
Andreas

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Findings on DNxHR/HD as intermediate (post-production) codec

Posted Sun Oct 31, 2021 4:01 pm

Ahh OK, so your camera actually doesn't write timecode at all, and you're generating it from the time created field. Fair enough.

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

On proper setting the timecodes of multiple rolls & cams

Posted Sun Oct 31, 2021 6:05 pm

TheBloke wrote: Ahh OK, so your camera actually doesn't write timecode at all,
Yup, exactly. The third camera I occasionally use is a SONY RX10M4; it provides both timecode and timestamps but does not list the camera make/model in the metadata. For example:
Code: Select all
$ ffprobe -hide_banner C0001.MP4
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55e13c461f00] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55e13c461f00] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C0001.MP4':
  Metadata:
    major_brand     : XAVC
    minor_version   : 16785407
    compatible_brands: XAVCmp42iso2
    creation_time   : 2021-09-06T14:22:13.000000Z
  Duration: 00:01:24.00, start: 0.000000, bitrate: 98676 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 96217 kb/s, 100 fps, 100 tbr, 100k tbn, 200 tbc (default)
    Metadata:
      creation_time   : 2021-09-06T14:22:13.000000Z
      handler_name    : Video Media Handler
      encoder         : AVC Coding
    Stream #0:1(und): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2021-09-06T14:22:13.000000Z
      handler_name    : Sound Media Handler
    Stream #0:2(und): Data: none (rtmd / 0x646D7472), 819 kb/s (default)
    Metadata:
      creation_time   : 2021-09-06T14:22:13.000000Z
      handler_name    : Timed Metadata Media Handler
      timecode        : 00:00:00:00
Unsupported codec with id 0 for input stream 2
That was the first roll from a single session taken with the SONY. And here is the second:
Code: Select all
$ ffprobe -hide_banner C0002.MP4
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55f31abe3f00] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55f31abe3f00] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C0002.MP4':
  Metadata:
    major_brand     : XAVC
    minor_version   : 16785407
    compatible_brands: XAVCmp42iso2
    creation_time   : 2021-09-06T14:29:39.000000Z
  Duration: 00:00:27.84, start: 0.000000, bitrate: 98842 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 96212 kb/s, 100 fps, 100 tbr, 100k tbn, 200 tbc (default)
    Metadata:
      creation_time   : 2021-09-06T14:29:39.000000Z
      handler_name    : Video Media Handler
      encoder         : AVC Coding
    Stream #0:1(und): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2021-09-06T14:29:39.000000Z
      handler_name    : Sound Media Handler
    Stream #0:2(und): Data: none (rtmd / 0x646D7472), 819 kb/s (default)
    Metadata:
      creation_time   : 2021-09-06T14:29:39.000000Z
      handler_name    : Timed Metadata Media Handler
      timecode        : 00:01:24:00
Unsupported codec with id 0 for input stream 2
$
As you can see, the creation times are 14:22:13 and 14:29:39 respectively, so the difference is 7 minutes 26 seconds.

But the timecodes are 00:00:00:00 and 00:01:24:00 respectively, while the duration of the first roll is exactly 00:01:24.00. With multiple different cameras working simultaneously at the same scene, each started and stopped by hand (thus at arbitrary moments in time), this leads to some headaches.

So I think that having timecodes of multiple rolls from multiple cameras all adjusted to the same timescale is a Good Thing (tm). Also, I always synchronize the camera internal clocks with GPS time (using the respective mobile apps from camera vendors) before starting taking my footage.
TheBloke wrote:and you're generating it from the time created field. Fair enough.
Here goes another question. For some reason, my Fujis set the FileModificationDateTime tag value some 25 seconds earlier than the value of the MediaCreateDate tag, and I have no idea why.

On the contrary, SONY sets the FileModificationDateTime tag value to be later than the value of the MediaCreateDate tag by more than a minute.

So I chose the MediaCreateDate as a unified time base to count timecodes from. How good is this decision? It is at least a "fair" approach, I agree with you. It works for a preliminary rough synchronization of the rolls, and the precise sync is done by waveform later.
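For anyone wanting to script this unified time base, here is a minimal sketch of deriving a start timecode from a clip's creation time. The filename is hypothetical, and it assumes ffprobe and GNU date are available:

```shell
# Read the container creation_time tag and turn the time-of-day into an
# HH:MM:SS:FF timecode (frame field left at :00, good enough for a rough sync).
ct=$(ffprobe -v error -show_entries format_tags=creation_time -of csv=p=0 CLIP.MP4)
secs=$(date -u -d "$ct" +%s)               # creation time as a UNIX timestamp
tod=$((secs % 86400))                      # seconds since midnight (UTC)
printf '%02d:%02d:%02d:00\n' $((tod / 3600)) $(((tod % 3600) / 60)) $((tod % 60))
```

The precise sync would still be done by waveform afterwards, as described above.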

Thanks! Warmest regards,
Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline
User avatar

Marc Wielage

  • Posts: 11060
  • Joined: Fri Oct 18, 2013 2:46 am
  • Location: Hollywood, USA

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 6:05 am

For a lot of reasons, neither H.264 nor H.265 (particularly Long-GOP versions) are good for post. And I think it's compounded by compressed audio that's wrapped around those formats. H.264 is better as a distribution format, not as a camera format.

You can convert the picture files either to a 10-bit or 12-bit DNxHR or ProRes format, and then convert the audio to WAV files. You'll have a much better post experience by starting your workflow with this conversion. The files will be bigger, but playing them back within Resolve will be a lot less stressful for your system.
marc wielage, csi • VP/color & workflow • chroma | hollywood
Offline

Wouter Bouwens

  • Posts: 244
  • Joined: Sun Dec 03, 2017 7:53 pm
  • Location: Alkmaar, Netherlands

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 6:57 am

Marc Wielage wrote:For a lot of reasons, neither H.264 nor H.265 (particularly Long-GOP versions) are good for post. And I think it's compounded by compressed audio that's wrapped around those formats. H.264 is better as a distribution format, not as a camera format.

You can convert the picture files either to a 10-bit or 12-bit DNxHR or ProRes format, and then convert the audio to WAV files. You'll have a much better post experience by starting your workflow with this conversion. The files will be bigger, but playing them back within Resolve will be a lot less stressful for your system.


I am interested in this, since I have a GH5 and a Ninja V. Filming in DNxHR on the Ninja V gives huge files, but even an absolute amateur like me can notice the difference in Resolve compared to the H.265 from the GH5 internally. I can let Resolve convert the H.265 from the GH5 to DNxHR, but how can I convert the audio from the H.265 to a separate WAV file?
CPU: Intel Core I9 10850K
GPU: MSI Suprim X Geforce 3080
Motherboard: MSI Z590-A Pro
RAM: 32 GB Gskil Ripjaws 3600
SSD: Samsung EVO 970 M.2 NVME 1TB
OS: Windows 10 Home
Offline
User avatar

Uli Plank

  • Posts: 21809
  • Joined: Fri Feb 08, 2013 2:48 am
  • Location: Germany and Indonesia

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 7:34 am

Why separate? Much safer for sync to have one file. Set it under "Audio" in the Deliver page (Linear PCM).
Now that the cat #19 is out of the bag, test it as much as you can and use the subforum.

Studio 18.6.6, MacOS 13.6.6, 2017 iMac, 32 GB, Radeon Pro 580
MacBook M1 Pro, 16 GPU cores, 32 GB RAM and iPhone 15 Pro
Speed Editor, UltraStudio Monitor 3G
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 12:09 pm

Wouter Bouwens wrote:I can let Resolve convert the H.265 from the GH5 to DNxHR, but how can I convert the audio from the H.265 to a separate WAV file?
This can be easily done with FFmpeg. There are plenty of examples over the net, I can google it for you and cook the command line for that if you wish. :)

The trick is to take one input file, copy the video stream alone (intact, without any transcoding) into a new file, and write the audio stream into a separate new file, transcoded to, say, PCM 16-bit 48 kHz.

Also, you may add the timecode and the timestamps data stream to your video outfile if you wish, all with a single command line.

I didn't do that myself before but I'm pretty sure that ffmpeg is able to do this easily.
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 12:17 pm

Pass-through is very easy:
ffmpeg -i input.mov -c:v copy -an out_video.mov -vn -c:a pcm_s24le out_audio.wav
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Is H.264 *really* a "bad" post-production codec?

PostMon Nov 01, 2021 12:42 pm

Marc Wielage wrote:For a lot of reasons, neither H.264 nor H.265 (particularly Long-GOP versions) are good for post.
Dear Marc, while I somehow agree with you, would you mind clarifying some details for us, please? Thanks in advance!

First, how "long" is Long-GOP? Actually, I don't know how to measure the GOP of the camera output file (I only know how to force a certain GOP value while transcoding into H.264). But I have a gut feeling that amateur cameras produce the GOP which is "short enough" to be acceptable for the post.
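For what it's worth, the GOP length of a camera file can be measured by listing frame types with ffprobe and counting the distance between I-frames. A sketch, with a hypothetical filename (assumes ffprobe is installed):

```shell
# List each video frame's picture type (I/P/B), then report the longest
# distance between consecutive I-frames, i.e. the GOP length in frames.
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type \
  -of csv=p=0 CLIP.MP4 |
awk '/I/ { if (last) print NR - last; last = NR }' | sort -n | tail -1
```

On a 1-second GOP at 59.94 fps you would expect a result of around 60.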

Now let's limit the scope of our discussion to the use case where H.264 (or H.265) is an unavoidable acquisition codec - so we are speaking about amateur and semi-professional cameras, like Fuji X-T series, SONY A7 series and similar. We don't touch the pro cameras which are filming RAW video or DNxHR or Apple ProRes out of the box.

Using the camera-provided H.264 directly as a post-production (intermediate) codec has the following benefits:

1. You don't need to transcode it to (say) DNxHR, so you save time, disk space and effort. Also, transcoding is not completely lossless...

2. H.264 files are ~10 times smaller compared to DNxHR, so they require much less bandwidth between your disk and your RAM to be displayed. Real-world case: I took a 4K 29.97 H.264 100 Mbps 2.4 GB piece of footage and converted it to a 23 GB DNxHR file at 870 Mbps. From my HDD, VLC plays the H.264 well, but it can't play the DNxHR: the HDD bandwidth is exhausted, it can't sustain 870 Mbps. After copying the DNxHR file to an NVMe SSD, VLC played it nicely. And I hadn't even touched the editing part!

3. DVR converts timeline frames into its own internal 32-bit format anyway and stores these in the cache. So it should not be a problem.

4. Given a decent GPU, DVR will decompress H.264 on the fly transparently, so you effectively trade disk usage (plus disk-to-RAM bandwidth) for GPU utilization.

5. And you can always generate "optimized media" from your H.264 footage and do your editing on it (lowering the GPU utilization), using the original H.264 media pool content only for your final delivery.

My personal decision is to purchase Studio license and to work with H.264 directly. It's much cheaper compared to the upgrade of all my SSD and HDD disks to get 10x more capacity.
And I think it's compounded by compressed audio that's wrapped around those formats.
Sometimes yes, sometimes no. Cameras usually create PCM 16-bit 48 kHz audio streams natively. Smartphones usually create AAC audio. But transcoding AAC to PCM is as easy as launching a small ffmpeg script and waiting for a few seconds. Transcoding video streams takes many times more time, resources and effort.
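To illustrate: PCM bitrate is a simple product, and the AAC-to-PCM rewrap is a one-liner. A hedged sketch with hypothetical filenames (assumes ffmpeg is installed):

```shell
# PCM 16-bit 48 kHz stereo has a fixed bitrate: 48000 * 16 * 2 = 1,536,000 b/s,
# exactly the 1536 kb/s audio streams visible in the ffprobe dumps above.
echo "$((48000 * 16 * 2)) b/s"

# Rewrap a smartphone clip: copy the video untouched, transcode AAC audio to PCM.
# MP4 doesn't carry PCM well, hence the MOV container; filenames are hypothetical.
ffmpeg -i phone_clip.mp4 -c:v copy -c:a pcm_s16le -ar 48000 phone_clip_pcm.mov
```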
H.264 is better as a distribution format, not as a camera format.
You can convert the picture files either to a 10-bit or 12-bit DNxHR or ProRes format, and then convert the audio to WAV files. You'll have a much better post experience by starting your workflow with this conversion. The files will be bigger, but playing them back within Resolve will be a lot less stressful for your system.
I think that creating the optimized media (i.e. in DNxHR) and using it in post will actually have the same effect while saving your time and disk space, won't it?

The real benefit of pre-transcoding into DNxHR shines under the two following conditions:

a) you have a free DVR that can't decode/encode H.264/265 at all so you have no other choice

b) you work with Full HD maximal resolution AND you don't touch UHD 4K at all; otherwise, be ready to increase your disk space 10x.

Note: transcoding a rich set of your footage files may take hours; be ready to launch it overnight, and keep your eye on your electricity bills ;)

Thanks for the insights! Warmest regards,
Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline

rNeil H

  • Posts: 576
  • Joined: Tue Jun 26, 2018 9:43 pm
  • Real Name: R. Neil Haugen

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 4:13 pm

I'm looking forward to Marc's answer; that is one very experienced colorist. I'll just add my own thoughts.

"Long-GOP" refers to the nature of the format, irrespective of the number of frames for each group. May not make that much difference in use, though the longer the group expect worse playback especially with effects.

From my more limited experience I can say that long-GOP media can often be edited ok on decently new kit.

When I start adding serious process load effects like color or stabilization, long-GOP media becomes far more of an issue.

While 6k ProRes/DNxHR/Cineform media runs fine.

But different computers can have radically different capabilities for H.264/5 hardware decoding. And that is a factor for every individual user.

Sent from my SM-G960U using Tapatalk
Offline

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 7:47 pm

Long GOP is anything other than an I-frame-only format.
It can be just an IPIPIP... structure. In this case decoding a P frame requires decoding the preceding I frame as well, so it can already be treated as Long GOP. In practice GOPs are e.g. 1 to even 10 seconds long, so for 25 fps formats that means 25-250 frames. There are also open GOPs, which are even worse for decoding: to decode a frame in one GOP you may need a frame from the next GOP as well.

Your investigation is basically widely known/established knowledge.
DNxHD and DNxHR are treated as one codec in ffmpeg; they are more like different profiles of the same codec (because they actually are). DNxHR is the successor of DNxHD, which is strictly for HD and uses strict/predefined profiles based on frame size and fps. DNxHR is just quality-profile based, without any special restrictions on frame size/fps. A given profile targets a specific compression ratio, so the bitrate is calculated to a specific value depending on frame size/fps. DNxHR is strictly CBR based, so each frame has exactly the same size (DNxHR has a VBR mode as well, but it's basically not used). DNxHD is in most cases obsolete and you should really use DNxHR (unless you have some specific requirement for DNxHD). ffmpeg's DNxHR implementation is good (whereas its ProRes is not really polished and not exactly the same as the reference encoder when it comes to bitrate, peak rules etc.). That doesn't mean ProRes out of ffmpeg is bad; it's just not exactly the same as Apple's reference encoder.

There is nothing wrong with working directly on H.264/5 long-GOP files. If you're happy with decoding/scrubbing performance then there is really no reason to use an intermediate codec.
An intermediate codec produces way bigger file sizes at the price of way easier decoding/scrubbing. That's about it. No magic, no big ideas etc. :D
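As a concrete sketch of the transcoding step discussed in this thread: ffmpeg exposes DNxHR as profiles of its dnxhd encoder. Filenames here are hypothetical:

```shell
# Encode to DNxHR HQ: HQ is 8-bit 4:2:2, so force yuv422p (HQX would need
# yuv422p10le). The encoder derives the CBR bitrate itself from frame size,
# fps and the chosen profile.
ffmpeg -i IN.MP4 -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
       -c:a pcm_s16le OUT_dnxhr_hq.mov

# Expected growth at FHD: ~440 Mbps (HQ, per the opening post) vs a 36 Mbps source:
echo "$((440 / 36))x"
```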
Last edited by Andrew Kolakowski on Mon Nov 01, 2021 8:12 pm, edited 1 time in total.
Offline

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 8:02 pm

stesin wrote:Findings and Executive Summary

Now I started thinking about some optimizations. I took one footage file as a sample, namely TSCF4382.MOV (FHD 1920x1080p 59.94fps H.264 36Mbps 4:2:0), and compared its size to the transcoded one:
Code: Select all
 2398943488 bytes (~ 2,3 Gb) TSCF4382.MOV
27732143963 bytes (~26,0 Gb) TSCF4382_DNxHR_HQX.mov

The latter is 11.6 times bigger. So my first question goes: is 440 Mbps DNxHR overkill for representing a 36 Mbps H.264 video? Or maybe it just fits well, because H.264 is said to compress the video stream at around a 10x ratio? Could you share your opinions on this subject, please?


Nothing is overkill. DNxHR, being an I-frame-based codec with simpler math than H.264, needs that much bitrate to preserve the original file's quality (with some tiny loss). You have to understand this, otherwise you won't understand the key idea of compression.

ffmpeg doesn't understand all metadata in files from different manufacturers, as most often those are "custom" tags. ffmpeg understands the most common ones, but can sometimes pass through custom tags as well (even without understanding them). Another issue is that Resolve itself can't really read/set many MOV tags either, so rich metadata is not passed further in Resolve either. E.g. Assimilate Scratch will read all ARRI metadata in MOV and during export can pass it further if desired. Recent ffmpeg versions had some improvements to pass through even more metadata, especially in MOV, and now it does most of it by default.
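A quick sanity check of the quoted numbers (file sizes in bytes, from the post above):

```shell
h264=2398943488        # TSCF4382.MOV, 36 Mbps H.264 source
dnxhr=27732143963      # TSCF4382_DNxHR_HQX.mov transcode
echo "$((dnxhr / h264))x"   # integer size ratio; matches the quoted ~11.6x
```

This lines up with the bitrate ratio (~440 vs 36 Mbps), so the transcode behaves exactly as the profile promises.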
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 9:01 pm

Andrew Kolakowski wrote:Nothing is overkill.
Ok my friend, I agree. Would you mind sponsoring my upgrade from 2Tb storage to 20Tb? No, you won't? So far, so good...
DNxHR being I frame based and simpler math codec than h264 needs that much bitrate to preserve original file quality (with some tiny loss). You have to understand this
Thank you my friend. I do understand the difference.

But did you read what I was telling you before?
otherwise you won't understand key idea of compression.
Thank you again, my friend. Yes I am stupid but not to the degree you suppose.
ffmpeg doesn't understand all metadata in files from different manufactures, as most often those are "custom" tags.
Thank you, my friend. You are correct, ffmpeg does not deal with metadata well. This is a different task. ffmpeg is about transcoding, and it does this job well enough.
ffmpeg understands the most common ones, but can sometimes pass through custom tags as well (even without understanding them). Another issue is that Resolve itself can't really read/set many MOV tags either, so rich metadata is not passed further in Resolve either. E.g. Assimilate Scratch will read all ARRI metadata in MOV and during export can pass it further if desired. Recent ffmpeg versions had some improvements to pass through even more metadata, especially in MOV, and now it does most of it by default.
Thank you, my friend, for the insights you were so kind to share with us. But would you please turn your attention to the use case under discussion? We aren't speaking about ARRI or RED here. We are speaking about the use case of an H.264/H.265 acquisition codec and how to deal with it in DVR.

Warmest regards, Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline
User avatar

Uli Plank

  • Posts: 21809
  • Joined: Fri Feb 08, 2013 2:48 am
  • Location: Germany and Indonesia

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 10:14 pm

Get a Mac then. Seriously. The M1 models handle H.264/265 smoothly.
Now that the cat #19 is out of the bag, test it as much as you can and use the subforum.

Studio 18.6.6, MacOS 13.6.6, 2017 iMac, 32 GB, Radeon Pro 580
MacBook M1 Pro, 16 GPU cores, 32 GB RAM and iPhone 15 Pro
Speed Editor, UltraStudio Monitor 3G
Offline
User avatar

roger.magnusson

  • Posts: 3399
  • Joined: Wed Sep 23, 2015 4:58 pm

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 10:22 pm

So does Resolve Studio on Windows and Linux (except for H.264 clips with 422 chroma subsampling which isn't accelerated on most PC hardware). But I understand this thread is about working with the free version of Resolve and what I assume is 10-bit footage that won't work in the free version on a PC.
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 10:33 pm

Uli Plank wrote:Get a Mac then. Seriously. The M1 models handle H.264/265 smoothly.

Thank you, my friend, for your advertisement. I am using Linux, so you have missed your target.

Warmest regards,
Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 10:38 pm

roger.magnusson wrote:So does Resolve Studio on Windows and Linux (except for H.264 clips with 422 chroma subsampling which isn't accelerated on most PC hardware). But I understand this thread is about working with the free version of Resolve and what I assume is 10-bit footage that won't work in the free version on a PC.
Yes Roger, you are correct. But actually, the whole thread is about how good the H.264 codec is for post-production. Or maybe it isn't good; but if not, you need 10x the disk space.

Warmest regards,
Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline
User avatar

roger.magnusson

  • Posts: 3399
  • Joined: Wed Sep 23, 2015 4:58 pm

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostMon Nov 01, 2021 10:44 pm

I see. As you're not using the Studio version of Resolve you will certainly be giving your CPU a workout if you're editing long-GOP H.264.
Offline
User avatar

Uli Plank

  • Posts: 21809
  • Joined: Fri Feb 08, 2013 2:48 am
  • Location: Germany and Indonesia

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostTue Nov 02, 2021 7:21 am

Well, to elaborate a bit on my remark, since I've been using all three systems in my life:

IMHO Linux is still the best for professional users. It is very stable (if serviced professionally) and it scales better than anything else with more than one GPU. But this needs Studio and normally the dominant sources are RAW or some higher level, like ProRes.

But the amateur user who wants to stick with the free version and has H.264 as his/her sources has to jump through too many hoops with Linux, while the cheapest MacBook Air can serve them well.
Now that the cat #19 is out of the bag, test it as much as you can and use the subforum.

Studio 18.6.6, MacOS 13.6.6, 2017 iMac, 32 GB, Radeon Pro 580
MacBook M1 Pro, 16 GPU cores, 32 GB RAM and iPhone 15 Pro
Speed Editor, UltraStudio Monitor 3G
Offline

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostTue Nov 02, 2021 1:00 pm

stesin wrote:Thank you my friend for the insights you were so kind to share with us. But would you please care to take the use-case discussed to your attention, please? We aren't speaking about ARRI or RED here. We are speaking about the use-case of H.264/H.265 acquisition codec and how to deal with it in DVR.

Warmest regards, Andreas


If you start with H.264/5 in camera then I'm not sure what the problem is.
The quality is what it is (you can't make it better), and you either edit on the original files or transcode to an intermediate codec.
That's about it.
To save money on storage, buy Resolve Studio to have more GPU acceleration. Resolve Studio is worth its price for sure.
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 12:03 pm

roger.magnusson wrote:I see. As you're not using the Studio version of Resolve you will certainly be giving your CPU a workout if you're editing long-GOP H.264.
Thank you Roger, you are correct in every word. Though I am not sure about the free DVR's ability to decode H.264/265 even on Mac; but actually, I don't care. Warmest regards, Andreas
Last edited by stesin on Wed Nov 03, 2021 12:46 pm, edited 1 time in total.
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 12:16 pm

Uli Plank wrote:Well, to elaborate a bit on my remark: since I've been using all three systems in my life.
Hi Uli! In my professional life I have used so many OSes... I started in 1982 and have used (and administered) CP/M, RT-11, RSX-11 (both M and M+), IBM OS/360 and OS/370, and VM/SP too; m$dos and m$ windows since v1.0 (they could not make windows overlap at the time), UNIX since v7, BSD since 2.8, and many many more. Back in 1993 I even tried Linux v0.9; what a **** it was at the time :) so I DO know what I am speaking about.
IMHO Linux is still the best for professional users. It is very stable (if serviced professionally)
For me, keeping the Linux installation working is not a problem.
and it scales better than anything else with more than one GPU. But this needs Studio and normally the dominant sources are RAW or some higher level, like ProRes.
I've already purchased Studio because I'm not ready to multiply my disk storage 10x.
But the amateur user who wants to stick with the free version and has H.264 as his/her sources has to jump through too many hoops with Linux, while the cheapest MacBook Air can serve them well.
As for the amateur user, going with the free DVR is Ok until you try to work with 4K. Full HD works well, provided you use my transcoding scripts from above.

Warmest regards, Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 12:20 pm

Andrew Kolakowski wrote:If you start with h264/5 in camera then not sure what is the problem?
This is NOT a problem but a tradeoff; this is what I told you earlier.

Free DVR does not recognize H.264 (or H.265) as an input.

Studio does.
To save money on storage buy Resolve Studio to have more GPU acceleration. Resolve Studio is worth it’s price for sure.
Thank you, my friend! This has been done. I purchased the Studio license and the Speed Editor, too. Now busy delivering my long-promised videos to my clients.

Warmest regards,
Andreas
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline
User avatar

roger.magnusson

  • Posts: 3399
  • Joined: Wed Sep 23, 2015 4:58 pm

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 12:49 pm

stesin wrote:Free DVR does not recognize H.264 (or H.265) as an input.

Just to clarify if anyone should find this thread using a search engine; Ingest support of H.264 and H.265 in the free version of DaVinci Resolve is different for different operating systems. It also depends on the specific files.
  • On macOS I think pretty much everything is supported now.
  • On Windows 8-bit footage should work. 10-bit footage hasn't worked in the past, but in free v17.4 they added hardware acceleration support for H.265 (not H.264) so maybe that works now. That hardware acceleration will of course depend on the capabilities of the hardware.
  • On Linux, long time since I checked different kinds of files, but a pretty safe bet as you've discovered is that it doesn't work.
Also note that hardware acceleration of 4:2:2 chroma subsampling in H.264, for instance, is still pretty rare, so if your files are 4:2:2 they might not work even in the Studio version unless it can fall back to software decoding.
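To check where a given clip falls, ffprobe reports the pixel format, from which bit depth and chroma subsampling can be read off. A sketch with a hypothetical filename (the pix_fmt names covered below are the common ones, not an exhaustive list):

```shell
# Report codec, H.264 profile and pixel format of the first video stream
# (e.g. yuv420p = 8-bit 4:2:0, yuv422p10le = 10-bit 4:2:2).
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,profile,pix_fmt -of default=nw=1 CLIP.MP4

# Classify a pix_fmt string into its chroma subsampling:
chroma() {
  case "$1" in
    yuv420p|yuvj420p|yuv420p10le) echo "4:2:0" ;;
    yuv422p|yuv422p10le)          echo "4:2:2" ;;
    yuv444p|yuv444p10le)          echo "4:4:4" ;;
    *)                            echo "unknown" ;;
  esac
}
chroma yuv422p10le   # the 10-bit 4:2:2 case discussed above
```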
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 12:52 pm

roger.magnusson wrote:Just to clarify if anyone should find this thread using a search engine; Ingest support of H.264 and H.265 in the free version of DaVinci Resolve is different for different operating systems. It also depends on the specific files.
Roger, thank you a lot.
Blackmagick DaVinci Resolve Studio 17.4.6
Blackmagick Speed Editor USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD
Offline
User avatar

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 1:12 pm

roger.magnusson wrote:On macOS I think pretty much everything is supported now.
And with hardware accelerated decoding (and encoding).

roger.magnusson wrote:On Windows .. free v17.4 they added hardware acceleration support for H.265 (not H.264)
So that's decoding too? The changelog only mentioned encoding.
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor
Offline
User avatar

roger.magnusson

  • Posts: 3399
  • Joined: Wed Sep 23, 2015 4:58 pm

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 1:34 pm

I didn’t check when writing so you’re probably right. Anyway, the point is it’s very different between the supported operating systems.
Offline

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 2:09 pm

stesin wrote:
Andrew Kolakowski wrote:If you start with h264/5 in camera then not sure what is the problem?
This is NOT a problem but a tradeoff, this what I told you earlier.

Free DVR does not recognize H.264 (or H.265) as an input.

Studio does.
To save money on storage buy Resolve Studio to have more GPU acceleration. Resolve Studio is worth it’s price for sure.
Thank you, my friend! This has been done. I purchased the Studio license and the Speed Editor, too. Now busy delivering my long-promised videos to my clients.

Warmest regards,
Andreas


What trade-off?
If the files come out of the camera as H.264/5, then I'm not sure what trade-off you are talking about. You mean you deliberately chose H.264/5 as opposed to e.g. ProRes recording to have smaller files?
In that case you'd rather invest in more storage and have better quality recording. Storage is not that expensive today (compared to other components).

H.264/5 can be any quality, from heavily compressed to mathematically lossless, so there is no simple answer. It's all relative and also related to the project's quality level.
Offline

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostWed Nov 03, 2021 3:15 pm

Andrew Kolakowski wrote:
stesin wrote:
Andrew Kolakowski wrote:If you start with h264/5 in camera then not sure what is the problem?
This is NOT a problem but a tradeoff, this what I told you earlier.

Free DVR does not recognize H.264 (or H.265) as an input.

Studio does.
To save money on storage buy Resolve Studio to have more GPU acceleration. Resolve Studio is worth it’s price for sure.
Thank you, my friend! This has been done. I purchased the Studio license and the Speed Editor, too. Now busy delivering my long-promised videos to my clients.

Warmest regards,
Andreas


What trade-off?
If files are coming as H.264/5 out of the camera, then I'm not sure what trade-off you are talking about.
I am speaking about the tradeoff of disk space (and bandwidth) vs. GPU load. This is true for the Studio; the free version has no other options because it doesn't consume H.264/265.
You mean you deliberately chose H.264/5 as opposed to e.g. ProRes recording to have smaller files?
No. My camera's acquisition codec is H.264.
In such a case you'd rather invest in more storage and have better-quality recording. Storage is not that expensive today (compared to other components).
Thank you, my friend. I decided to purchase Studio because it is actually much cheaper than upgrading my storage to 10x the size.

Warmest regards, Andreas

H.264/5 can be any quality, from heavily compressed to mathematically lossless, so there is no simple answer. It's all relative and also related to the project's quality level.
I think the camera is doing its best when delivering H.264/265, and actually you don't have too many options to choose from while filming.
Blackmagic DaVinci Resolve Studio 17.4.6
Blackmagic Speed Editor, USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD

Marc Wielage

  • Posts: 11060
  • Joined: Fri Oct 18, 2013 2:46 am
  • Location: Hollywood, USA

Re: Is H.264 *really* a "bad" post-production codec?

PostThu Nov 04, 2021 7:57 am

stesin wrote:Dear Marc, while I somehow agree with you, would you mind clarifying some details for us, please? Thanks in advance! First, how "long" is Long-GOP?

OK, here we go with a long answer. One definition is here:

https://www.pcmag.com/encyclopedia/term ... ompression

Basically, the video is fairly "stepped-on" to the point where the compression introduces a lot of artifacts in the picture. One of the things you pay for with an Alexa camera or a Red camera or a Blackmagic camera or a Sony Venice camera is very little compression and RAW images, since their recordings are only lightly compressed (maybe 3:1 or 5:1, vs. the 30-40:1 compression of H.264). When you have material from a pro camera, you can actually introduce fairly severe color corrections in Resolve and not "break" the image. Because the H.264 images from cheaper cameras are fairly heavily compressed (and often use 8-bit color depth), they fall apart fairly quickly with, for example, heavy key qualification or gain stretching. The images tend to "block up" and look noisy when you really have to dig in and brighten an image.

You don't need to transcode it to (say) DNxHR, so you save time, disk space and effort. Also, transcoding is not completely lossless...

Disk space -- to me -- is not a problem in 2021. It was a huge problem 20 years ago. It was a big problem 10 years ago. But today, it's not that hard to afford 20-30-40TB of drives that are fast enough to work well with Resolve. If you media manage your project effectively, you won't run out of space. We have a fairly small company (by LA standards) and we have at least 500TB in the office, half of which are online RAIDs. There was a time that would cost millions of dollars, but it's a fraction of that now.

H.264 files are ~10 times smaller compared to DNxHR, so they require much less bandwidth between your disk and your RAM to be displayed.

The problem is the stress that H.264/H.265 puts on the system during playback, because it has to be decoded before Resolve can use the image. Trust me, the computer runs slower and hotter when you pound it with highly-compressed images. Compare the CPU/GPU usage when it plays a modest DNxHR SQ file.

DVR converts timeline frames into its own internal 32-bit format anyway, and stores these in the cache. So it should not be a problem.

If you pour 4 gallons of water into a 10-gallon barrel, you don't get any more water. You can't make an 8-bit image look any better in 12-bits or 16-bits or 24-bits or 32-bits. What Resolve does internally is mainly trying to avoid adding distortion to the image when going from process to process or node to node.

And you always can generate the "optimized media" from your H.264 footage and do your editing on it (lowering the GPU utilization) and use the original H.264 media pool content for your final delivery only.

If you're using Optimized media, you may as well go to the next step and use Proxies... which is basically what I suggested originally. Note there are five different flavors of DNxHR codecs, all the way from fairly compressed to a visually-lossless 444 mastering format. Pick the one that works with the disk space you have.

My other reason for disliking small DSLR still cameras in post in general:

1) no timecode. I often say, "timecode is the railroad tracks on which the whole project runs. Without tracks... train don't run." I've dealt with this for 40 years and never agreed with an excuse for not using timecode to drive the project.

2) a lot of these cameras have very limited dynamic range, and they tend to overload highlights very easily. There's no way to salvage blown-out highlights from cameras like this. We can try to apply softening filters, clips, and other techniques, but you basically can't ever get back the detail. It becomes a salvage operation where you never wind up with decent pictures.

3) most of these cameras have very limited audio inputs, and you wind up with substandard sound, even under the best conditions. Of course, you could use an external audio recorder... but those require timecode and syncing. (See #1 above.)

4) many of these cameras have automatic gain controls that react wildly in changing levels in the real world. Even in cases where the cameras have manual gain, my experience is that users tend to misjudge the histograms and wind up with either drastically-underexposed images or clipped highlights. I understand that in the real world, particularly in documentary/reality conditions, you can't always predict where you're going to be, so light levels do change. But the Alexas and Reds and so on react a lot better to sensor overload (or underexposure), and you wind up with better pictures under the same conditions.

5) many (if not most) of the little H.264 consumer cameras have bad file-naming practices, so every time you mount a new card (particularly after a battery change), it names it "0001" again, creating terrible filename conflicts and Media Management nightmares later on. I have a long Canon/DJI/GoPro/iPhone/Nikon H.264 memo we routinely send out to clients explaining how to get that workflow under control for post, but you basically have to rename and transcode all the files prior to editing so you can avoid these problems. You then edit the Proxies and consider those the "masters," since they have filenames that make sense (like Date/Roll number/Camera Letter) and non-conflicting timecode. There have been a few feature films that were partially or entirely shot on iPhones or GoPros, and this is the method they used to corral all the data. Keeping everything in H.264 absolutely will not work, especially if you're dealing with 250-300+ hours of material.

6) a lot of these little cameras have cheap lenses, and they wind up looking soft and having visual aberrations -- at least to me. They're also hard to manually focus because the viewfinders are generally small (except for the rare users attaching larger displays on the back). Granted, you can put a $10,000 lens on it, which can look incredible under the right circumstances. But I'd say if you're going to do all that, then why use a $2000 still camera in the first place? If you check Sharegrid or Kitsplit, you can rent an actual Red camera or an Alexa -- not the newest model, but maybe one 9 or 10 years old -- for $150 a day. I'd take one of those any time over a DSLR, since it avoids all those problems and actually makes usable pictures for real movies, real TV shows, and real projects.

As for transcode time: If somebody brings us H.264 files for a project, we just load it all up into a machine, set it to render to a different drive, and run it all night after we go home. We come back in the next day... and all the ProRes files are generated and everything is fine. We have multiple machines in the office and keep a few older Macs around just to crank out footage like this, leaving our main system undisturbed. The same thing could be done with PCs.

I have to say, of everything out there, the cheap $1200 Blackmagic Pocket cameras are not too bad, and those will at least capture real BMD Raw or ProRes, and those you can do something with. They have timecode inputs and XLR audio inputs, and while they're unwieldy and eat up batteries, they can work. For their size, they actually do a decent job. I'd much prefer them to any of the cheap Canon/DJI/GoPro/iPhone/Nikons out there. I concede that the DJI drones are kind of in a special class, but I cringe whenever we get in otherwise-beautiful drone shots that are marred with aliasing, noise, and that typical 8-bit "crunchiness" I see all too often.

Last thought: I don't mind when students or amateurs use cheap cameras because of budget, and because they're still in a learning process. What does drive me crazy is when I see this done with commercial projects that have a little money for an indie feature or a short, and they wind up figuring out too late that they chose the wrong cameras. I've seen it happen a lot in the last 10 years, and it's a sad situation. I think it's kind of like buying a Kia and flooring it to 120MPH all the way from LA to Vegas: you could theoretically do it, but in the end it'll blow out the engine after a few hours and you'll be thumbing a ride by the time you hit Barstow. You'd be a lot better off getting an actual sports car, one capable of handling real performance under tough conditions.
marc wielage, csi • VP/color & workflow • chroma | hollywood

Uli Plank

  • Posts: 21809
  • Joined: Fri Feb 08, 2013 2:48 am
  • Location: Germany and Indonesia

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostThu Nov 04, 2021 8:04 am

This should be a sticky, Marc!
Now that the cat #19 is out of the bag, test it as much as you can and use the subforum.

Studio 18.6.6, MacOS 13.6.6, 2017 iMac, 32 GB, Radeon Pro 580
MacBook M1 Pro, 16 GPU cores, 32 GB RAM and iPhone 15 Pro
Speed Editor, UltraStudio Monitor 3G

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) c

PostThu Nov 04, 2021 1:19 pm

Except that 90% of Resolve users will never have footage from Arri (or even BM) to work with.
They are forced to H.264/5 and they have to deal with it one way or another.
The wish list is one thing, the real world another (production mistakes yet another).
...and I bought a Kia because I can't afford a Ferrari (and because I have 3 kids and need to drive them to school). I still have money left to buy a DSLR and other bits to shoot weddings and earn a living. Thank god for H.264/5 :D

stesin


Re: Is H.264 *really* a "bad" post-production codec?

PostThu Nov 04, 2021 10:46 pm

Dear Marc, thank you a lot for sharing your wisdom and explaining the topic!
Marc Wielage wrote:
stesin wrote:First, how "long" is Long-GOP?

OK, here we go with a long answer.
Unfortunately, I still haven't found a way to measure how "long" the actual GOP is in the files produced by my cameras. :( What I do know is YouTube's recommendation to set the GOP of your uploads to 1/2 of the frame rate (basically 2 GOPs per second), and I consider that a really too-"long" GOP, but maybe the camera does a better job? Like 1/8 × fps?
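One way to actually measure it, assuming ffprobe is installed and "clip.mp4" stands in for one of your camera files: list each frame's picture type and count the frames between consecutive I-frames. A sketch:

```shell
# Count the length of each GOP: list every frame's picture type (I/P/B) and
# count frames between consecutive I-frames. "clip.mp4" is a placeholder.
gop_lengths() {  # reads one pict_type per line on stdin, prints one GOP length per line
  awk '$1=="I"{if(n)print n; n=0} {n++} END{if(n)print n}'
}
if [ -f clip.mp4 ]; then
  # Limit to the first 300 frames so the probe stays fast on long clips.
  ffprobe -v error -select_streams v:0 -show_entries frame=pict_type \
          -of default=noprint_wrappers=1:nokey=1 -read_intervals "%+#300" clip.mp4 \
    | gop_lengths
fi
```

If the camera writes a keyframe every half second at 60 fps, each printed GOP length should be around 30.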
Basically, the video is fairly "stepped-on" to the point where the compression introduces a lot of artifacts in the picture. One of the things you pay for with an Alexa camera or a Red camera or a Blackmagic camera or a Sony Venice camera is very little compression and RAW images since their recordings are only lightly compressed (maybe 3:1 or 5:1, vs. the 30-40:1 compression of H.264).
Actually, transcoding a 100 Mbps H.264 4:2:0 stream to DNxHR HQ 4:2:2 produces a ~870 Mbps stream, which looks "visually lossless", but yes, you can't reach the IQ of a heavy and pricey professional camera with it.

ARRI RAW 4K UHD 29.97 fps gives you a 3032 Mbps video stream. The same 4K UHD footage with ProRes 422 HQ gives 1010 Mbps, and ProRes 4444 XQ gives 2030 Mbps (data from ARRI).

Of course, neither a consumer nor a prosumer camera is designed to give you an IQ this high. E.g., speaking of the SONY A7S III, we see a 4K UHD 30p 10-bit 4:2:2 H.264 bitrate of 140 Mbps. SONY and Panasonic also implement the All-I variant of XAVC S. This is still based on H.264, but treats every frame as an "Intra" frame, saving full information about it rather than saving differential information about what's changed between more occasional "I" frames. It gives you a 300 Mbps bitrate. Generally, "long-GOP" H.264 bitrates of about 150 Mbps aren't uncommon today. Considering 10-bit 4:2:2 and a compression ratio of 20:1, the equivalent (losslessly decompressed/transcoded) RAW stream would be around the same 3000 Mbps, right?
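A quick back-of-the-envelope check of that last figure (my own arithmetic, not from any datasheet):

```shell
# Uncompressed bandwidth of 4K UHD 30p 10-bit 4:2:2, via shell arithmetic.
# 4:2:2 averages 2 samples per pixel (Y at full rate, Cb and Cr at half rate).
bits_per_pixel=$((2 * 10))
uncompressed_mbps=$((3840 * 2160 * 30 * bits_per_pixel / 1000000))
echo "uncompressed: ${uncompressed_mbps} Mbps"
echo "implied ratio for a 150 Mbps stream: $((uncompressed_mbps / 150)):1"
```

So the fully uncompressed rate comes out nearer 5000 Mbps, and a 150 Mbps long-GOP stream corresponds to roughly 33:1 rather than 20:1; the ~3000 Mbps figure matches applying 20:1 to the compressed bitrate instead.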
Because the H.264 images from cheaper cameras are fairly heavily compressed
Yes, but you just get what you pay for, right? Anyone who uses a cheaper amateur-level camera that records 8-bit 4:2:0 to the SD card (like me) shouldn't be expecting miracles.
Disk space -- to me -- is not a problem in 2021. It was a huge problem 20 years ago. It was a big problem 10 years ago. But today, it's not that hard to afford 20-30-40TB of drives that are fast enough to work well with Resolve. If you media manage your project effectively, you won't run out of space. We have a fairly small company (by LA standards) and we have at least 500TB in the office, half of which are online RAIDs. There was a time that would cost millions of dollars, but it's a fraction of that now.
You are perfectly correct and I agree with you wholeheartedly. But you are speaking from the position of a professional who works for the company.

For an amateur hobbyist, building a PC with, e.g., a ZFS raidz2 array of 8 x 8 TB HDDs accelerated with a pair of M.2 1 TB SSDs, plus another pair of M.2 1 TB SSDs for scratch files, probably is not a wise investment ;)
H.264 files are ~10 times smaller compared to DNxHR, so they require much less bandwidth between your disk and your RAM to be displayed.

The problem is the stress that H.264/H.265 puts on the system during playback, because it has to be decoded before Resolve can use the image. Trust me, the computer runs slower and hotter when you pound it with highly-compressed images. Compare the CPU/GPU usage when it plays a modest DNxHR SQ file.
You are perfectly correct. With H.264 I am saving on disk space, and especially on the I/O bandwidth between disk and RAM, and I am paying for that with CPU and GPU load instead.

The new computer is on my wishlist, but I can't afford one right now, especially considering the crazy GPU prices this year.
DVR converts timeline frames into its own internal 32-bit format anyway, and stores these in the cache. So it should not be a problem.
If you pour 4 gallons of water into a 10-gallon barrel, you don't get any more water. You can't make an 8-bit image look any better in 12-bits or 16-bits or 24-bits or 32-bits. What Resolve does internally is mainly trying to avoid adding distortion to the image when going from process to process or node to node.
Hmm. Suppose I have a source frame that is 8-bit coded. Then I process it in Resolve, add some corrections, and export the footage from the internal 32-bit floating-point format into a 10-bit format, which (I think) should be enough for my corrections to fit into. Will it work this way?
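A toy illustration of the "water in the barrel" point (my own sketch, nothing to do with Resolve's actual internals): mapping an 8-bit ramp into the 10-bit range yields no more than the original 256 distinct codes.

```shell
# Map an 8-bit ramp (0..255) into the 10-bit range (0..1023) and count how many
# distinct 10-bit codes appear: never more than the 256 the source started with.
levels=$(awk 'BEGIN{for(i=0;i<256;i++) print int(i*1023/255)}' | sort -n | uniq | wc -l)
echo "distinct 10-bit codes from an 8-bit ramp: ${levels}"
```

So a 10-bit export should comfortably hold corrections made to an 8-bit source; the extra bits give the grade finer steps to land on, they just can't conjure up detail the source never had.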
If you're using Optimized media, you may as well go to the next step and use Proxies... which is basically what I suggested originally. Note there are five different flavors of DNxHR codecs, all the way from fairly compressed to a visually-lossless 444 mastering format. Pick the one that works with the disk space you have.
Thank you for the hint, now when I already have Studio, I will try and compare both workflows.
My other reason for disliking small DSLR still cameras in post in general:

1) no timecode. I often say, "timecode is the railroad tracks on which the whole project runs. Without tracks... train don't run." I've dealt with this for 40 years and never agreed with an excuse for not using timecode to drive the project.
Do you mean just timecode or the frame timestamps, or both?

As was already discussed earlier, adding the timecode and the timestamps data stream to an H.264 file (without transcoding it) can be done with FFmpeg. I will be cooking the appropriate command line tomorrow.
2) a lot of these cameras have very limited dynamic range, and they tend to overload highlights very easily. There's no way to salvage blown-out highlights from cameras like this. We can try to apply softening filters, clips, and other techniques, but you basically can't ever get back the detail. It becomes a salvage operation where you never wind up with decent pictures.
I think this is more about selecting correct exposure while filming. I don't trust AE so I do it manually.
3) most of these cameras have very limited audio inputs, and you wind up with substandard sound, even under the best conditions. Of course, you could use an external audio recorder... but those require timecode and syncing. (See #1 above.)
The first thing to do is to never trust internal microphones and use an external one. Syncing by audio waveform also works (hmm, often it does).
4) many of these cameras have automatic gain controls that react wildly in changing levels in the real world. Even in cases where the cameras have manual gain, my experience is that users tend to misjudge the histograms and wind up with either drastically-underexposed images or clipped highlights. I understand that in the real world, particularly in documentary/reality conditions, you can't always predict where you're going to be, so light levels do change. But the Alexas and Reds and so on react a lot better to sensor overload (or underexposure), and you wind up with better pictures under the same conditions.
Of course the pro cameras perform much better technically, no doubt. But the price of the camera itself and especially of the optics/lenses and add-on equipment is too high for an amateur hobbyist.
5) many (if not most) of the little H.264 consumer cameras have bad file-naming practices, so every time you mount a new card (particularly after a battery change), it names it "0001" again, creating terrible filename conflicts and Media Management nightmares later on. I have a long Canon/DJI/GoPro/iPhone/Nikon H.264 memo we routinely send out to clients explaining how to get that workflow under control for post, but you basically have to rename and transcode all the files prior to editing so you can avoid these problems. You then edit the Proxies and consider those the "masters," since they have filenames that make sense (like Date/Roll number/Camera Letter) and non-conflicting timecode. There have been a few feature films that were partially or entirely shot on iPhones or GoPros, and this is the method they used to corral all the data. Keeping everything in H.264 absolutely will not work, especially if you're dealing with 250-300+ hours of material.
Thank you, this is an invaluable suggestion. In fact, it's a simple way to keep your footage in some acceptable order.
6) a lot of these little cameras have cheap lenses, and they wind up looking soft and having visual aberrations -- at least to me. They're also hard to manually focus because the viewfinders are generally small (except for the rare users attaching larger displays on the back). Granted, you can put a $10,000 lens on it, which can look incredible under the right circumstances. But I'd say if you're going to do all that, then why use a $2000 still camera in the first place? If you check Sharegrid or Kitsplit, you can rent an actual Red camera or an Alexa -- not the newest model, but maybe one 9 or 10 years old -- for $150 a day. I'd take one of those any time over a DSLR, since it avoids all those problems and actually makes usable pictures for real movies, real TV shows, and real projects.
Yes, I agree - for serious work, rental is a good solution.
As for transcode time: If somebody brings us H.264 files for a project, we just load it all up into a machine, set it to render to a different drive, and run it all night after we go home. We come back in the next day... and all the ProRes files are generated and everything is fine. We have multiple machines in the office and keep a few older Macs around just to crank out footage like this, leaving our main system undisturbed. The same thing could be done with PCs.

I have to say, of everything out there, the cheap $1200 Blackmagic Pocket cameras are not too bad, and those will at least capture real BMD Raw or ProRes, and those you can do something with. They have timecode inputs and XLR audio inputs, and while they're unwieldy and eat up batteries, they can work. For their size, they actually do a decent job. I'd much prefer them to any of the cheap Canon/DJI/GoPro/iPhone/Nikons out there. I concede that the DJI drones are kind of in a special class, but I cringe whenever we get in otherwise-beautiful drone shots that are marred with aliasing, noise, and that typical 8-bit "crunchiness" I see all too often.

Last thought: I don't mind when students or amateurs use cheap cameras because of budget, and because they're still in a learning process. What does drive me crazy is when I see this done with commercial projects that have a little money for an indie feature or a short, and they wind up figuring out too late that they chose the wrong cameras. I've seen it happen a lot in the last 10 years, and it's a sad situation. I think it's kind of like buying a Kia and flooring it to 120MPH all the way from LA to Vegas: you could theoretically do it, but in the end it'll blow out the engine after a few hours and you'll be thumbing a ride by the time you hit Barstow. You'd be a lot better off getting an actual sports car, one capable of handling real performance under tough conditions.
Thank you again for the detailed explanations! I completely agree with almost every one of your statements. In particular, I agree that using amateur-level cameras and optics for commercial projects where budget is available is a nuisance and should be avoided.

But I myself am just an amateur hobbyist, still learning, so I will stick with what I have until better times ;)

Warmest regards,
Andreas

Marc Wielage


Re: Findings on DNxHR/HD as intermediate (post-production) c

PostFri Nov 05, 2021 8:58 am

Andrew Kolakowski wrote:Except that 90% of Resolve users will never have footage from Arri (or even BM) to work with. They are forced to H.264/5 and they have to deal with it one way or another.

Then convert it to a simpler format like DNxHR or ProRes, and get on with life. That'll work fine. If it's 10-bit or 12-bit, even better.

Did you read my comment about the cheap $1200 Blackmagic Pocket camera? It's cheaper than the Nikon we were initially talking about.

Marc Wielage


Re: Is H.264 *really* a "bad" post-production codec?

PostFri Nov 05, 2021 9:11 am

stesin wrote:As was already discussed earlier, adding the timecode and the timestamps data stream to an H.264 file (without transcoding it) can be done with FFmpeg.

The timecode will be unstable and will drift (particularly if you use another camera or a timecode sound recorder). You have to take my word for this. The camera has to have genlocked timecode precisely timed to the shutter speed being used inside the camera; without that, it'll be out in 10 minutes. (Assuming you have a take that long.) There are many, many positive reasons for using "Time of Day" timecode; the problem is, many newcomers and amateur users haven't dealt with it, because they haven't learned how even low-budget professional productions are done.

The first thing to do is to never trust internal microphones and use an external one.

Then you have the issue of the bad-quality mic preamps and the lack of phantom powering in these little cameras. You can use an external preamp -- which is actually not a bad idea -- but again you have the problem of high compression with the MP4 audio file. And on real sets, getting the microphone close to the actors is not simple or easy. You can do it with wireless mics, but those are not cheap and involve some level of skill to get the best results.

Of course the pro cameras perform much better technically, no doubt. But the price of the camera itself and especially of the optics/lenses and add-on equipment is too high for an amateur hobbyist.

I dunno. There are cheap cameras out there that don't have these problems. I already mentioned the Blackmagic Pocket, and (again) I've seen full-blown Red systems rent for $250 a day. I see BMD Cinema Cameras rent for $100 a day with a full lens package. I see high school and college students shooting with these, and they have little or no money. Granted, they aren't the latest $40,000-$50,000 studio cameras, but 10-year-old cameras that are a little beat-up, all still capable of making great pictures. There's a terrific YouTube video where a guy talks about buying an Alexa for $5,000 on eBay, and it's pretty entertaining.

Thank you again for the detailed explanations! I completely agree with almost every one of your statements. In particular, I agree that using amateur-level cameras and optics for commercial projects where budget is available is a nuisance and should be avoided.

Well, there's amateurs, and there's amateurs. I've volunteered to help friends here in LA -- for free -- and through connections, they get grip trucks, real lights on set, real cameras, great locations, and real sound for a day. And everybody's throwing in 8 hours of their time for essentially a "student" film. The advantage of using great people is they won't make newcomer mistakes. If I'm on the set, I'll stop them from any disastrous decision that will kill us in post.

90% of getting great pictures from something like the little Canons or Nikons or Sony A7's boils down to: expose it well on set. Light the actors properly. That doesn't cost any money -- it just takes practice, time, effort and testing. Small lights are cheap these days (particularly the portable, widely-used LED panels used by mobile news crews), and you can do amazing things with them if you follow all the rules for key light, fill light, backlight, and key-to-fill ratio. All of this stuff can be learned -- there's a thousand YouTube videos on the subject. Get that right, and you have a fighting chance at least the basic camera signal will be right. I can't tell you the number of low-budget indie film producers I've dealt with who didn't do this, and paid the price later on.

This still won't solve the problems of compression artifacts, dynamic range limitations, clean audio, timecode, and so on, but it's a start.

BTW, here's part of my H.264 workflow memo below:

-------------------------------------------------------------------

A good way to deal with a non-timecode camera (say, a Canon 7D or a Sony A7S or DJI or GoPro) project would be as follows:

1) organize all the files by shoot date and camera card number, so each file folder would be A1_03112019 [and so on]

2) add a unique prefix to the heads of every file to create easy-to-understand file names (like "A1_03112019_C0001," "A1_03112019_C0002," etc.)

3) transcode all of these Long-GOP H.264/H.265 files to a high-quality codec like ProRes 422HQ or DNxHR, and keep the file names and folder structure exactly the same

4) archive the original files somewhere safe, just in case

5) do the entire edit with the transcoded files and consider them "the new masters"

6) now, when you do the conform in Resolve, every file will have a unique file name and even though the camera timecodes will still start at 00:00:00:00, there will be no conform conflicts because of the file names.

7) it helps greatly to have the editor create a reference file that has visible timecode and filenames for source files, as well as record timecode for the project itself.

Variable speed changes are a problem for all edit systems and cameras, because there is no simple provision for variable framerates in XML (plus different playback methods like frame-blending or optical flow produce different results). One way around this would be to just export the variable frame-rate clips and consider them a self-contained VFX shot. The alternative (which I have done) is just to manually varispeed the clip within Resolve and eyematch it to the reference video created by the editor.

This is a proven workflow that can work. I'm not a fan at all of productions using cheap cameras that have no internal jam-sync timecode, because inevitably there's also other problems like exposure issues, sound problems, and other things that basically come with the territory. But I get that not everybody can use an Alexa or a Red camera, and sometimes you have to just deal with what's there.
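Steps 1-3 of the memo above might look something like this in practice; a rough sketch assuming bash, ffmpeg's dnxhd encoder, a card folder named like "A1_03112019" holding .MP4 clips (all names are placeholders, not the memo author's actual tooling):

```shell
# Rename-and-transcode pass for one card folder: prefix every clip with
# card + shoot date and transcode it to DNxHR HQ in a MOV container,
# leaving the original H.264 files untouched for archiving.
CARD=A1
SHOOT_DATE=03112019
mkdir -p "${CARD}_${SHOOT_DATE}_transcoded"
for f in "${CARD}_${SHOOT_DATE}"/*.MP4; do
  [ -e "$f" ] || continue                 # skip cleanly when the folder is empty
  base=$(basename "$f" .MP4)
  ffmpeg -i "$f" -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
         -c:a pcm_s16le \
         "${CARD}_${SHOOT_DATE}_transcoded/${CARD}_${SHOOT_DATE}_${base}.mov"
done
```

Run once per card folder; the transcodes then carry unique, sortable names and can be treated as the "new masters".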

There is more to it, but I would point to the following books as fairly thorough references on the subject:

"Modern Post: Workflow & Techniques for Digital Filmmakers"
by Scott Arundale
https://www.amazon.com/Modern-Post-Work ... 0415747023

"The Guide to Managing Postproduction for Film, TV, and Digital Distribution"
by Susan Spohr & Barbara Clark
https://www.amazon.com/Guide-Managing-P ... 1138482811

Andrew Kolakowski


Re: Findings on DNxHR/HD as intermediate (post-production) c

PostFri Nov 05, 2021 11:17 am

Marc Wielage wrote:
Andrew Kolakowski wrote:Except that 90% of Resolve users will never have footage from Arri (or even BM) to work with. They are forced to H.264/5 and they have to deal with it one way or another.

Then convert it to a simpler format like DNxHR or ProRes, and get on with life. That'll work fine. If it's 10-bit or 12-bit, even better.

Did you read my comment about the cheap $1200 Blackmagic Pocket camera? It's cheaper than the Nikon we were initially talking about.


Converting is not a problem; the problem is that you start with a Long-GOP format, and quite often there is no escape from it.
They are very different cameras, with autofocus being the main/huge difference.

stesin


Re: Findings on DNxHR/HD as intermediate (post-production) c

PostFri Nov 05, 2021 8:07 pm

Marc Wielage wrote:
Andrew Kolakowski wrote:Except that 90% of Resolve users will never have footage from Arri (or even BM) to work with. They are forced to H.264/5 and they have to deal with it one way or another.

Then convert it to a simpler format like DNxHR or ProRes, and get on with life. That'll work fine. If it's 10-bit or 12-bit, even better.
Is DNxHR really "simple"?
Did you read my comment about the cheap $1200 Blackmagic Pocket camera? It's cheaper than the Nikon we were initially talking about.
As for me - yes, I did. But I already work with my two Fuji cameras, and I don't intend to rent any ARRI (or Blackmagic). Because of the optics: the price of a camera body is approximately 1/20 of the price of the optics (lenses), and Fujifilm lenses are the winners on price/performance ratio.

Warmest regards,
Andreas
Blackmagic DaVinci Resolve Studio 17.4.6
Blackmagic Speed Editor, USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Findings on DNxHR/HD as intermediate (post-production) codec

PostFri Nov 05, 2021 9:14 pm

DNxHR is actually a fairly simple codec.

stesin

  • Posts: 139
  • Joined: Sat Oct 24, 2020 4:25 pm
  • Location: Cyprus
  • Real Name: Andreas Stesinou

Re: Is H.264 *really* a "bad" post-production codec?

PostTue Nov 09, 2021 8:37 am

Dear Marc, once again thank you for sharing your wizardry with us. Would you mind commenting on a few more side notes of mine, please? (I just finished my first work with the paid Studio + Speed Editor and gained some new experience along the way.)
Marc Wielage wrote:
stesin wrote:As was already discussed earlier, adding the timecode and the timestamps data stream to an H.264 file (without transcoding it) can be done with FFmpeg.
The timecode will be unstable and will drift (particularly if you use another camera or a timecode sound recorder). You have to take my word for this.
What I actually did:

1. I injected only an initial timecode into each H.264 camera file, this being just a "best-effort guess" derived from the "Media Creation Time" metadata value. The cameras were GPS-synchronized just at the beginning of the session. The precision is only 1 second because the metadata value does not contain any fractions. This helps with the initial ordering of files into a sequence, nothing more.

Why didn't I guess the initial timecodes from the file "birth time" (creation time)? For some unknown reason Fujifilm - while using the exFAT filesystem on SDXC cards - does not store fractions in the file birth datetime either. exFAT is technically capable of storing fractions down to 10 ms according to the standard, but Fuji's engineers decided to ignore it. Take it as given.

2. I decided to avoid artificially generating a per-frame timecode data stream with ffmpeg; this is useless. But on the project settings page, at the top of the "General Options" tab, Studio provides a few useful options: you can tell it to use timecodes either "Embedded in the source clip" (we don't have these) or "From the source clip frame count". I suppose the latter option is sufficient for my purpose, given that the initial timecode is already assigned with approximate, non-precise values.

3. When I was assembling the "Multicam clip" via the "Sync bin", I used precise audio-waveform synchronisation (and it actually works well enough).
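Step 1 above (guessing an initial timecode from the creation-time metadata) can be sketched in Python. This is a minimal sketch, not what I actually ran: the helper name is mine, and the ISO-style `creation_time` string format is an assumption about what the camera metadata (e.g. as reported by ffprobe) looks like.

```python
from datetime import datetime

def start_tc(creation_time: str) -> str:
    """Best-effort start timecode from a clip's creation-time metadata.

    The metadata has only 1-second precision, so the frame field is
    always :00 - good enough for ordering clips, nothing more."""
    t = datetime.strptime(creation_time, "%Y-%m-%dT%H:%M:%S.%fZ")
    return f"{t.hour:02d}:{t.minute:02d}:{t.second:02d}:00"

# The resulting value can then be injected without re-encoding, e.g.:
#   ffmpeg -i IN.MP4 -c copy -timecode 11:20:05:00 OUT.MOV
```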

My question actually is: in the project "Master Settings" tab, in the "Timeline Format" section, I set the "Timeline frame rate" to 29.97 fps (the footage was shot at the same 29.97 fps rate). Now I am looking at the dropdown on the very next line: "Use drop frame timecode" (yes/no).

I recall reading that for 29.97 fps frame dropping is unavoidable. But why, and what for, if the initial footage is 29.97 fps too? I left this unchecked (so no drop-frame timecode in use). But am I correct here? What are the consequences of having a "drop frame timecode" or not? I am in doubt - would anyone care to explain this magic, please? Thanks in advance!

My wild guess is that frame dropping should activate only in the special case of importing footage shot at a frame rate that differs from the timeline fps setting. I.e. given a 29.97 fps timeline, if I import 59.94 fps footage, every second frame will be dropped. If the footage is at exactly 30 fps, the occasional frame will be dropped. Am I correct?
Marc Wielage wrote:The camera has to have genlocked timecode precisely timed to the shutter speed being used inside the camera; without that, it'll be out in 10 minutes. (Assuming you have a take that long.) There are many, many positive reasons for using the "Time of Day" timecode; the problem is, many newcomers and amateur users haven't dealt with it, because they haven't learned how even low-budget professional productions are done.
Dear Marc, you are correct in your every word. But let's face reality: precise timecode synchronization is neither technically nor financially within reach of the vast majority of users. So we are left with what we can afford (thus, audio waveform sync), which is a "good enough" solution.
Marc Wielage wrote:
stesin wrote:The first thing to do is to never trust internal microphones and use an external one.
Then you have the issue of the bad-quality mic preamps and the lack of phantom powering in these little cameras.
I'd say the mic preamps in my Fujis are "good enough" at the very least. For my purposes (actually, multi-person dialogues and/or interviews), I get very decent sound using external microphones. The cameras' internal mics are good only as a "waveform sync" data source.

Also, despite claiming otherwise, Fuji actually does provide power (strictly speaking, plug-in power rather than 48 V phantom) via the external microphone jack. It works for smaller capsules and lavaliers. So I have one "big and good" external directional mic with 2xAAA batteries and one "small and mediocre" one without batteries, and both perform sufficiently well.

The two lavaliers I also have are never connected to the cameras directly. I use two smartphones as recorders instead; with the RecForge II application (which lets you manually adjust the input gain level and disable automatic gain control entirely) I acquire very decent 48 kHz 16-bit WAV files from them. My speakers just hide the smartphone somewhere in a pocket with the wire masked under the clothes; they record non-stop throughout the whole session, and finally I have four soundtracks:

1. The "main" soundtrack of the whole scene, from the main tripod-mounted camera, taken with the "big and good" directional mic.
2. The "second backup" soundtrack, from the second camera being manoeuvred around the scene on a gimbal or monopod, taken with the "small" directional mic.
3-4. Two soundtracks of the two individual speakers, taken with the lavaliers and recorded into WAVs on the two smartphones.

This is a "good enough" setup that works for my purposes. When making videos of musical performances, things are more complicated. Usually I try to arrange a 5th soundtrack recorded directly from the output of the sound mixer, so I get a stereo signal identical to whatever goes into the power amplifier and speakers (preferably recorded as 32-bit 96 kHz PCM with a professional recorder, if available). This is not always possible, though... But my "big and good" mic does a good job even there.
Marc Wielage wrote:You can use an external preamp -- which is actually not a bad idea -- but again you have the problem of high compression with the MP4 audio file.
Fuji cameras capture PCM 16-bit 48 kHz soundtracks; RecForge II does too. So my sound never gets compressed at any point, including delivery (I deliver PCM 16-bit 48 kHz as well - its size does not really matter compared to the video track).
Marc Wielage wrote:And on real sets, getting the microphone close to the actors is not simple or easy. You can do it with wireless mics, but those are not cheap and involve some level of skill to get the best results.
Look at how I use my lavaliers :) And speaking of wireless mics, I'd say I don't like them; they don't justify their price and the effort needed to use them efficiently. Also, the risk of RF interference ruining your audio is unacceptably high. So my conclusion is: wireless mics are OK for controlled environments like studios or limited-access spaces, and only if you have dedicated, trained staff to deal with this tech. In an uncontrolled environment (where I work most of the time) I prefer dumb but predictable wired solutions.
Marc Wielage wrote:
stesin wrote:Of course the pro cameras perform much better technically, no doubt. But the price of the camera itself and especially of the optics/lenses and add-on equipment is too high for an amateur hobbyist.
I dunno. There are cheap cameras out there that don't have these problems. I already mentioned the Blackmagic Pocket, and (again) I've seen full-blown Red systems rent for $250 a day. I see BMD Cinema Cameras rent for $100 a day with a full lens package. I see high school and college students shooting with these, and they have little or no money. Granted, they aren't the latest $40,000-$50,000 studio cameras, but 10-year-old cameras that are a little beat-up, all still capable of making great pictures. There's a terrific YouTube video where a guy talks about buying an Alexa for $5,000 on eBay, and it's pretty entertaining.
Yup, you are correct, as always. But it depends on what country and what district you are in. Cyprus is actually a sparsely populated rural and mountainous area, so the supply of rental equipment is very limited and prices are high compared to highly competitive and densely populated areas like LA.
Marc Wielage wrote:
stesin wrote:Thank you again for the detailed explanations! I completely agree with almost each of your statements. Especially I agree that taking amateur-level cameras and optics for doing commercial projects where the budget is present, is a nuisance and should be avoided.
Well, there's amateurs, and there's amateurs. I've volunteered to help friends here in LA -- for free -- and through connections, they get grip trucks, real lights on set, real cameras, great locations, and real sound for a day. And everybody's throwing in 8 hours of their time for essentially a "student" film. The advantage of using great people is they won't make newcomer mistakes. If I'm on the set, I'll stop them from any disastrous decision that will kill us in post.

90% of getting great pictures from something like the little Canons or Nikons or Sony A7's boils down to: expose it well on set. Light the actors properly. That doesn't cost any money -- it just takes practice, time, effort and testing.
Yup, again I agree with your every word.
Marc Wielage wrote:Small lights are cheap these days (particularly the portable, widely-used LED panels used by mobile news crews), and you can do amazing things with them if you follow all the rules for key light, fill light, backlight, and key-to-fill ratio. All of this stuff can be learned -- there's a thousand YouTube videos on the subject. Get that right, and you have a fighting chance at least the basic camera signal will be right. I can't tell you the number of low-budget indie film producers I've dealt with who didn't do this, and paid the price later on.

This still won't solve the problems of compression artifacts, dynamic range limitations, clean audio, timecode, and so on, but it's a start.
As I already mentioned, compression artefacts are actually not a problem given a 100+ Mbps UHD H.264 video stream. These small cameras aren't as bad as you imagine :) Dynamic range, yes, can be problematic, as can noise levels, but in the end these limitations are easily avoided with good lighting.

Also, 4:2:2 colour sampling is pretty decent quality anyway. Internally, Studio doesn't care too much about that.

As for sound, PCM 16-bit 48 kHz is commonly available, and only rarely do you actually need something better (like PCM 32-bit 96 kHz, suitable for recording a symphony orchestra performance).
Marc Wielage wrote:3) transcode all of these Long-GOP H.264/H.265 files to a high-quality codec like ProRes 422HQ or DNxHR, and keep the file names and folder structure exactly the same
With the paid Studio, this extra step is actually overkill.

Studio allows you to work (edit and grade) with optimised media (or proxy media) at, say, 1/2 or 1/4 of the original resolution if you wish. And for the final delivery it takes the original files in their original quality. This works exceptionally well for me and allows you to do post on 4K UHD videos on comparatively weak computers. Check Ch. 10 of Part 2 of the DaVinci Resolve Reference Manual, August 2021 edition, page 250; the chapter is titled "Image Sizing and Resolution Independence". Highly recommended. UPD: in the November 2021 edition of the manual, this is now Ch. 11 on page 237.
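For free-version users who do follow the transcode step quoted above, the ffmpeg invocation can be sketched like this. A minimal sketch, with assumptions flagged: the helper name and file layout are hypothetical; the encoder and profile flags are those documented for ffmpeg's DNxHD encoder (`-c:v dnxhd -profile:v dnxhr_hq`).

```python
from pathlib import Path

def dnxhr_cmd(src: Path, dst_dir: Path) -> list[str]:
    """Build an ffmpeg command that transcodes one Long-GOP clip to
    DNxHR HQ in a MOV container, keeping the original file name."""
    dst = dst_dir / (src.stem + ".mov")
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR HQ profile
        "-pix_fmt", "yuv422p",                      # 8-bit 4:2:2
        "-c:a", "pcm_s16le",                        # uncompressed PCM audio
        str(dst),
    ]

# Example: subprocess.run(dnxhr_cmd(Path("A001.MP4"), Path("transcoded")), check=True)
```

Building the argv as a list (rather than one shell string) keeps the same folder structure and file names intact when looping over a card's worth of clips.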
Marc Wielage wrote:Variable speed changes are a problem for all edit systems and cameras because there is no simple provision for variable framerates in XML (plus different playback methods like frame-blending or optical flow produce different results).
AFAIK my cameras produce CFR (constant frame rate) H.264 output. VFR is common on smartphones.

Warmest regards,
and many thanks in advance,
Andreas
Last edited by stesin on Tue Nov 09, 2021 11:47 pm, edited 2 times in total.
Blackmagic DaVinci Resolve Studio 17.4.6
Blackmagic Speed Editor, USB cable connected
Linux Ubuntu 22.04 (5.18.14)
Asus G750 i7-4860HQ 32GB RAM
NVidia 980M 8Gb (510.85.02 CUDA: 11.6)
2x166GB SSDs in RAID0 - DVRS Caches
1x4TB Samsung EVO 870 SSD

Andrew Kolakowski

  • Posts: 9213
  • Joined: Tue Sep 11, 2012 10:20 am
  • Location: Poland

Re: Is H.264 *really* a "bad" post-production codec?

PostTue Nov 09, 2021 10:58 am

stesin wrote:
I recall reading that for 29.97 fps frame dropping is unavoidable. But why, and what for, if the initial footage is 29.97 fps too? I left this unchecked (so no drop-frame timecode in use). But am I correct here? What are the consequences of having a "drop frame timecode" or not? I am in doubt - would anyone care to explain this magic, please? Thanks in advance!



Not a single frame is ever dropped.
It's just about the way you count frames and convert to timecode, not about dropping any frames. You would have a jerky image if you started dropping frames :D

DF timecode stays in line with clock time. NDF will drift (because you play at 29.97 Hz but count as if 30).
If you work alone (not forced by a client to use a specific one), then it's meaningless which one you use. It's just side metadata which you can change at any time.
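The point can be made concrete with a small sketch of standard NTSC drop-frame counting (my own illustration, not Resolve's code): every frame is kept; only the timecode *labels* ;00 and ;01 are skipped at each minute boundary, except every tenth minute, so the label tracks wall-clock time.

```python
def frames_to_df_tc(frame: int, fps: int = 30) -> str:
    """Convert a 29.97 fps frame count to drop-frame timecode.

    No frames are dropped: two timecode labels per minute are skipped,
    except every tenth minute, keeping the label near wall-clock time."""
    drop = 2
    per_min = fps * 60 - drop          # 1798 labels in a "drop" minute
    per_10min = fps * 600 - drop * 9   # 17982 labels per ten minutes
    d, m = divmod(frame, per_10min)
    if m > drop:
        frame += drop * 9 * d + drop * ((m - drop) // per_min)
    else:
        frame += drop * 9 * d
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

# frame 1800 (one nominal minute in) labels as 00:01:00;02 - ;00 and ;01
# are skipped, but the frames themselves are all still there.
```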