Hi folks,
I've been developing an application that receives, processes, and transmits video over an SDI interface (using a DeckLink SDI Duo). I've been trying to characterize the system's behavior and test error cases, and I've found something that confuses me... The initial question was how the system would react if it was sent 10-bit data when it was expecting 8-bit data (and vice versa). Here's what I did:
I built a pair of test programs - one that loads a frame from the HDD, packs it into the frame buffer as either bmdFormat8BitYUV or bmdFormat10BitYUV (following the formats described in SDK Section 2.6.4), and sends it out over HD-SDI, and another that receives the frame, unpacks the data, and displays it. I then connected the two external BNC ports with a coax cable and fired it up. What I found was that if I defined the input as bmdFormat8BitYUV, it didn't matter which format the transmit program used - the input frame always came in as valid bmdFormat8BitYUV. Same with bmdFormat10BitYUV. This doesn't make much sense, since the two packing layouts are not byte-compatible at all...
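For reference, here's roughly what my packing code does (a simplified sketch, not my actual program - the helper names are just illustrative). The layouts follow SDK Section 2.6.4: 8-bit YUV ('2vuy') is 4 bytes per 2 pixels (Cb Y0 Cr Y1), while 10-bit YUV ('v210') packs 6 pixels into four 32-bit little-endian words, three 10-bit components per word:

```cpp
#include <cstdint>
#include <cstddef>

// bmdFormat8BitYUV ('2vuy'): 4 bytes per 2 pixels, ordered Cb Y0 Cr Y1.
// rowBytes = width * 2. Assumes width is a multiple of 2.
void pack8BitYUV(uint8_t* dst, const uint8_t* y, const uint8_t* cb,
                 const uint8_t* cr, size_t width)
{
    for (size_t x = 0; x < width; x += 2) {
        *dst++ = cb[x / 2];  // Cb shared by the pixel pair (4:2:2)
        *dst++ = y[x];       // Y0
        *dst++ = cr[x / 2];  // Cr shared by the pixel pair
        *dst++ = y[x + 1];   // Y1
    }
}

// bmdFormat10BitYUV ('v210'): 6 pixels -> four 32-bit LE words, three
// 10-bit components per word. rowBytes = ((width + 47) / 48) * 128.
// Assumes width is a multiple of 6 and samples are already 10-bit (0-1023).
void pack10BitYUV(uint32_t* dst, const uint16_t* y, const uint16_t* cb,
                  const uint16_t* cr, size_t width)
{
    for (size_t x = 0; x < width; x += 6) {
        size_t c = x / 2;  // chroma index: one Cb/Cr per 2 pixels
        *dst++ = cb[c]     | (uint32_t(y[x])      << 10) | (uint32_t(cr[c])     << 20);
        *dst++ = y[x + 1]  | (uint32_t(cb[c + 1]) << 10) | (uint32_t(y[x + 2])  << 20);
        *dst++ = cr[c + 1] | (uint32_t(y[x + 3])  << 10) | (uint32_t(cb[c + 2]) << 20);
        *dst++ = y[x + 4]  | (uint32_t(cr[c + 2]) << 10) | (uint32_t(y[x + 5])  << 20);
    }
}
```

The point being: a receiver that interpreted a v210 buffer as 2vuy (or vice versa) should produce obvious garbage, not a clean picture - which is why the result surprised me.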
The only apparent conclusions are that either I'm doing something wrong with the frame handling, or something is going on behind the scenes in the API that converts the formats to match whatever the input was defined as. Could somebody shed some light on this? If there is pixel format conversion going on, what is the cue - perhaps something in the ancillary data? And would that mean the system can only detect format discrepancies between Blackmagic devices, and not with third-party devices that may be connected to it?
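If the conversion theory is right, I'd expect something equivalent to this to be happening in the driver or hardware on capture - pure speculation on my part, just to illustrate what I mean by "conversion":

```cpp
#include <cstdint>

// Hypothetical down-conversion on capture: if the SDI stream carries
// 10-bit samples, delivering bmdFormat8BitYUV would just mean dropping
// the two least significant bits of each component...
inline uint8_t to8Bit(uint16_t sample10) { return uint8_t(sample10 >> 2); }

// ...and the reverse for an 8-bit source delivered as 10-bit: shift up,
// leaving the two LSBs at zero.
inline uint16_t to10Bit(uint8_t sample8) { return uint16_t(sample8) << 2; }
```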
Thanks in advance for any insights.