Hey all, new guy here. Bear with me while I try to explain my situation and then my question.
I have a small YouTube channel where the majority of my content comes from my GoPro. Currently I record at 2.7K @ 60 FPS, which produces H.264 files as far as I understand it. I've been using my very old computer (4790K, GTX 980) and *gulp* Filmora X to edit my videos. As you can imagine it's a very painful process, and I decided to kick it up a notch. I'm building a new computer (13900K, DDR5, RTX 4080, M.2 drives, etc.) and purchased Resolve Studio along with the Speed Editor, even though I've never used the software. I'm all in. I'm watching a number of tutorials now, but I have what are probably basic workflow questions and I'm hoping y'all can point me in the right direction.
So here goes.
I know YouTube will re-encode and butcher my videos no matter what. The reason I record and upload at 2.7K is to get the VP09 codec for better quality, which does work. However, I'd like to switch my recording to 4K @ 60 FPS, and then have the GoPro save the files in H.265 since, as I understand it, that will produce higher quality files that are also smaller in size (compared to H.264).
So, if I pull those H.265 files off of the GoPro, Resolve Studio will be able to natively read them and I could edit them as is, correct? I realize this would be really hard on the hardware, but the system I'm building should be able to handle it.
I understand "intermediate" codecs will be easier on the system and that ProRes or DNxHR are recommended. So, would I want to transcode my H.265 files to, say, DNxHR before I import them into Resolve? (From what I've read, Handbrake only outputs delivery codecs like H.264/H.265, so this would mean a tool like ffmpeg instead.) Or would I just create proxies or optimized media in Resolve? Is there a quality or time-to-complete difference?
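In case it helps show what I mean by the transcode option, here's the kind of per-clip ffmpeg step I've pieced together from reading around. This is just a sketch: the folder, file names, and the DNxHR HQ profile choice are placeholders, not tested advice.

```python
# Sketch of batch-transcoding GoPro H.265 clips to DNxHR HQ with ffmpeg.
# Assumptions: ffmpeg is installed, and "gopro_footage" holds the camera MP4s.
from pathlib import Path

def dnxhr_cmd(src: Path, dst: Path) -> list[str]:
    """Build an ffmpeg command that transcodes an H.265 clip to DNxHR HQ in a MOV."""
    return [
        "ffmpeg",
        "-i", str(src),            # H.265 source straight off the GoPro
        "-c:v", "dnxhd",           # DNxHR lives under ffmpeg's dnxhd encoder
        "-profile:v", "dnxhr_hq",  # HQ profile: 8-bit 4:2:2, a common editing choice
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",       # uncompressed PCM audio, typical for intermediates
        str(dst),
    ]

# Print the command for every MP4 in the folder (run them via subprocess
# once ffmpeg is confirmed installed).
if __name__ == "__main__":
    for clip in Path("gopro_footage").glob("*.mp4"):
        print(" ".join(dnxhr_cmd(clip, clip.with_suffix(".mov"))))
```

The trade-off, as I understand it: DNxHR files are much larger than the H.265 originals, but they decode with far less CPU/GPU effort, which is the whole point of an intermediate.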
If I DO use intermediate codecs, I can render my timeline to (again, say) DNxHR, but I could also tell Resolve to render to H.264. I've read conflicting information on this, specifically whether Resolve can produce H.264 output that's as high quality as what Handbrake can produce. Is there really a difference (quality, time-to-finish, file size)?
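Here's how I figure I could measure this myself once the new machine is up: render the same DNxHR master through both encoders, then score each H.264 result against the master with ffmpeg's psnr filter. Just a sketch I pieced together; the file names are placeholders and PSNR is only a rough proxy for visual quality.

```python
# Rough quality check: compare an H.264 export back to the DNxHR master
# using ffmpeg's psnr filter. Assumes ffmpeg is installed; file names are
# placeholders for whatever Resolve and Handbrake actually produce.
def psnr_cmd(distorted: str, reference: str) -> list[str]:
    """Build an ffmpeg command that reports the PSNR of `distorted` vs `reference`."""
    return [
        "ffmpeg",
        "-i", distorted,    # the H.264 file Resolve (or Handbrake) produced
        "-i", reference,    # the DNxHR master both encoders started from
        "-lavfi", "psnr",   # logs average PSNR; higher means closer to the master
        "-f", "null", "-",  # decode and compare only, write no output file
    ]

# One run per encoder's output, then compare the reported averages.
print(" ".join(psnr_cmd("resolve_export.mp4", "master.mov")))
print(" ".join(psnr_cmd("handbrake_export.mp4", "master.mov")))
```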
I understand YouTube can accept DNxHR uploads, but they'll be big. Given that YouTube will re-encode everything anyway, is there any quality difference between uploading H.264 and DNxHR?
Basically I'm trying to figure out what transcoding, rendering, and encoding steps I should be taking to preserve as much quality as possible between the original GoPro content and YouTube. I don't mind throwing time at intermediate transcoding steps, though it would be nice to limit the final upload file size, as my upload speeds aren't the greatest where I live.
Sorry about the laundry list of questions, I'm hoping someone can explain this to me like I'm 5 and I'll do further research from there. I have more questions regarding rendering bitrates, but I'll save those for later. Thanks!