Avid (also FCP and Edius; not sure about Premiere) seems to use YUV internal processing, so this is very different: you more easily get out-of-gamut or negative values. Also, it looks like the scopes there extend the scale below 0, whereas in Resolve it's always 0-100%, and any data outside that is 'hidden' (you can use Offset or Curves to check whether any data sits outside 0-100%).
I would still expect the scale to be 0-1023, not 16 at axis zero (that would be more confusing, no?).
Resolve is RGB plus a "fake" Y channel. Not sure if people really use this Y info.
In Resolve everything is typically full range, so forget about 16/64 on Resolve scopes: black in Resolve is 0 and white is 1023. It's during export that Resolve converts the data properly to YUV according to the settings/codec requirements.
When you import a YUV-based file into Resolve it does the opposite and normalises everything to RGB, so accurate info in the file about the YUV data range is required (some tools always assume limited, which is far from ideal). Some codecs store the range inside the codec, others don't. Range info can also be stored at the container level, but things like ProRes MOV have no range info (neither at codec nor container level), and this is why exporting full-range ProRes is "risky". ProRes by definition should be limited range. It can be full, but then you have to tell your tool that it's full range (in Resolve you override it in Clip Attributes, but in Premiere there is no way to change it from the hard-coded rules!). This is very important. DNxHR has a range flag inside the codec metadata; you just need tools which set/read it properly. Resolve (after a few fixes) now seems to do it accurately.
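To make the limited/full distinction concrete, here is a minimal sketch of the 10-bit level mapping a tool has to apply when it normalises limited-range ("video levels") data to full range on import. The constants (black = 64, white = 940 for 10-bit luma) are the standard Rec.709 quantization levels; the function name is my own, not any Resolve API.

```python
def limited_to_full_10bit(code, black=64, white=940):
    """Map a 10-bit limited-range luma code (black=64, white=940)
    to full range 0..1023. Codes outside 64..940 map outside 0..1023,
    which is exactly the 'hidden' below-black/above-white data."""
    return (code - black) * 1023 / (white - black)

print(limited_to_full_10bit(64))    # 0.0    -> black
print(limited_to_full_10bit(940))   # 1023.0 -> white
print(limited_to_full_10bit(4))     # negative: below-black data
```

This is why the range flag matters so much: if the file is actually full range but the tool applies this mapping anyway, blacks get crushed below 0 and whites pushed past 1023.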
Resolve is not really a typical NLE, as it comes from grading. Everything imported is converted to RGB internally, unless you do direct exports without any touches (this is for speed and to preserve YUV when possible). RGB processing is typical for high-end finishing/grading tools, as there it makes more sense.
Imagine this case.
Source: v210 (YUV 4:2:2 10-bit) going back to v210 after, let's say, a simple edge crop.
If the tool operates in YUV then this will be done in YUV and be fairly fast. It won't involve a YUV->RGB->YUV conversion either, which is the desired behaviour.
In the case of Resolve you will get YUV->RGB on import, processing in RGB, and then a conversion back to YUV for export. In this case Resolve is actually "worse". It's not that important, but it can alter the original YUV data, so there is some argument here.
In old Resolve (somewhere before v16) the conversion to RGB used to happen always, regardless of whether it was needed or not. Currently, if you do no processing (just editing, for example), Resolve will pass the YUV data straight to the export module (unless the export codec is RGB), avoiding a pointless YUV->RGB->YUV conversion.
Old Resolve also had poor YUV->RGB->YUV conversion, so after exporting v210 back to v210 a few times you would get a totally messed-up file (even though you were exporting an uncompressed format!). You can find old posts about it. The current version handles this better, and the YUV->RGB->YUV conversion itself has improved as well.
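A toy model of why repeated round trips degrade the file: the matrix itself is exactly invertible in float, but an integer pipeline has to round back to codes after every pass, and that rounding accumulates. This sketch uses the standard Rec.709 full-range coefficients; it is an illustration of the mechanism, not a model of Resolve's actual (32-bit float) internals.

```python
# Standard Rec.709 constants: Kr=0.2126, Kb=0.0722,
# so Cr scale = 2*(1-Kr) = 1.5748 and Cb scale = 2*(1-Kb) = 1.8556.

def rgb_to_ycbcr(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

# Start with a 10-bit RGB pixel and round to integer codes after every
# pass, as a fixed-point export/import cycle would.
px = (200, 511, 900)
for _ in range(10):
    y, cb, cr = (round(v) for v in rgb_to_ycbcr(*px))
    px = tuple(round(v) for v in ycbcr_to_rgb(y, cb, cr))
print(px)  # no longer exactly (200, 511, 900)
```

With 8-bit codes the error is proportionally larger, which is one reason a few generations of a poorly implemented conversion could visibly trash even uncompressed files.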
Things look different when you work with RAW (which is neither RGB nor YUV, but becomes RGB after debayering) or a high-quality source (DPX, EXR etc.), as then Resolve's RGB processing is desired. You do all operations in RGB (with proper color space, gamut management, etc.) and then quite often you export back to an RGB master (DPX etc.). If you need a YUV-based master (the typical broadcast format) then this is just a single (final) conversion to YUV (it's also unavoidable). In this case you have no chance of producing a bad YUV file, as you are protected by the math: Resolve produces only valid RGB values, so there is no way RGB->YUV->RGB can be invalid (if we forget about unpredictable/unavoidable internal codec overshoots due to compression). Only processing in YUV (e.g. Edius) can produce new YUV values which, converted to RGB, create an out-of-range signal. It's that reverse YUV->RGB step which can cause out-of-gamut errors when the YUV signal is not coming directly from an RGB source. This conversion also involves color space info (the matrix), so it matters whether we go to Rec.709 or Rec.2020.
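The asymmetry is easy to demonstrate: every valid RGB triple maps to a valid Y'CbCr triple, but not the other way around. Here is a sketch (Rec.709 full-range matrix, values normalized to 0..1 for RGB/Y and -0.5..0.5 for chroma) of a perfectly legal-looking Y'CbCr value, of the kind a YUV-native saturation boost could create, that has no in-gamut RGB counterpart.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Rec.709 full-range inverse matrix, normalized units."""
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

# Bright pixel with strong positive Cr: R overshoots well above 1.0.
r, g, b = ycbcr_to_rgb(0.9, 0.0, 0.4)
print(r, g, b)  # r > 1.0 -> out of gamut; an RGB pipeline would clip it
```

This is the "protected by the math" point in reverse: RGB->YUV can never produce this situation, while YUV-side processing can.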
YUV could always be full levels today, and it would make things way easier and less confusing.
The whole idea of limited range is a legacy from the analog world, where things were never perfect, so those "buffers" were important.
We could use RGB all the way, except this is not compression-efficient, as it links chroma with luma (and we want them separated), so not good. This is the reason why ProRes is always YUV-based internally (DNxHR/Cineform can be YUV or RGB).
There are other color models, better than YUV/RGB (e.g. the new Dolby ICtCp), but it would be so hard to make them universal. We are so deeply buried in YUV and limited levels.
This is why high-end production/grading/finishing is rather RGB-based, but once you produce a YUV master for further deliveries in the chain (broadcast, etc.) it should typically stay in YUV without going back to RGB. With today's 32-bit float processing this is way less important, though.