Input pixel formats


James Bonnett

  • Posts: 4
  • Joined: Wed Jun 19, 2013 1:17 am

Input pixel formats

Posted: Wed Nov 19, 2014 8:07 pm

Hi folks,
I've been developing an application that receives, processes, and transmits video over an SDI interface (using a DeckLink SDI Duo). While characterizing the behavior of my system and testing error cases, I found something that confuses me. The initial question was how the system would react if it was sent 10-bit data when it was expecting 8-bit data (and vice versa). Here's what I did:

I built a pair of test programs: one that loads a frame from the HDD, packs it into the frame buffer as either bmdFormat8BitYUV or bmdFormat10BitYUV (according to the formats shown in SDK section 2.6.4) and sends it out over HD-SDI, and another that receives that frame, unpacks the data, and displays it. I then connected the two external BNC ports with a coax cable and fired it up. What I found was that if I defined the input as bmdFormat8BitYUV, the input frame always came in as bmdFormat8BitYUV no matter what format the transmit program was using, and likewise for bmdFormat10BitYUV. This doesn't make much sense, as the packings should not be at all compatible across pixel formats...
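For anyone following along, here's roughly what the two packings look like per SDK section 2.6.4. This is a simplified sketch rather than my actual frame-handling code, and the helper names are just for illustration:

```cpp
#include <cstdint>
#include <vector>

// bmdFormat8BitYUV (UYVY): 4:2:2, two pixels per 32-bit group,
// one byte per component, in the order Cb, Y0, Cr, Y1.
void Pack8BitYUV(std::vector<uint8_t>& buf,
                 uint8_t cb, uint8_t y0, uint8_t cr, uint8_t y1)
{
    buf.push_back(cb);
    buf.push_back(y0);
    buf.push_back(cr);
    buf.push_back(y1);
}

// bmdFormat10BitYUV (v210): 4:2:2, six pixels packed into four
// 32-bit words, three 10-bit components per word (bits 0-9,
// 10-19, and 20-29), with each row padded so that
// rowBytes = ((width + 47) / 48) * 128.
void Pack10BitYUV(std::vector<uint32_t>& buf,
                  const uint16_t y[6],   // six 10-bit luma samples
                  const uint16_t cb[3],  // three 10-bit Cb samples
                  const uint16_t cr[3])  // three 10-bit Cr samples
{
    buf.push_back(uint32_t(cb[0]) | (uint32_t(y[0])  << 10) | (uint32_t(cr[0]) << 20));
    buf.push_back(uint32_t(y[1])  | (uint32_t(cb[1]) << 10) | (uint32_t(y[2])  << 20));
    buf.push_back(uint32_t(cr[1]) | (uint32_t(y[3])  << 10) | (uint32_t(cb[2]) << 20));
    buf.push_back(uint32_t(y[4])  | (uint32_t(cr[2]) << 10) | (uint32_t(y[5])  << 20));
}
```

Two pixels take 4 bytes in the 8-bit layout while six pixels take 16 bytes in the 10-bit one, so unpacking a frame with the wrong routine should produce obvious garbage rather than a clean image, which is exactly why the result above surprised me.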

The only apparent conclusions are that either I'm doing something wrong with the frame handling, or that something is happening behind the scenes in the API that converts the formats to match whatever the input was defined as. Could somebody shed some light on this? If pixel format conversion is going on, what is the cue? Perhaps something in the ancillary data? Could this mean the system recognizes type discrepancies only between Blackmagic devices, and not for third-party devices that may be connected to the system?

Thanks in advance for any insights.

chasapis.christos

  • Posts: 17
  • Joined: Thu Jun 19, 2014 3:31 pm
  • Location: Greece

Re: Input pixel formats

Posted: Fri Nov 28, 2014 3:29 pm

The input format from a live source will (logically) be YUV; any other type, such as RGB, will (presumably) be software-converted in the kernel module (driver).

I don't really understand your question. The input pixel format applies when you have a live source, right?

Feel free to contact me (via the forum) or even email me.

www.chasapis.com

Brad Parker

  • Posts: 22
  • Joined: Fri Mar 01, 2013 11:12 pm

Re: Input pixel formats

Posted: Tue Mar 24, 2015 6:56 pm

The card/driver has no idea how many bits are being received per pixel. The hardware will read as many bits as it is capable of (10, 12, etc.), but how many bits are actually presented in the frames arriving via the SDK is determined by the pixel format you set before starting capture. If you tell it you want to capture 10 bits, it will give you 10 bits whether your source is providing 8, 10, or 12; it has no idea what it's receiving, it just gives you what you asked for.
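In other words, the pixel format you pass to IDeckLinkInput::EnableVideoInput() is what fixes the layout of the frames you get back. A minimal setup sketch (error handling omitted; deckLinkInput and the callback registration are assumed to exist already, and the display mode is just an example):

```cpp
// The pixelFormat argument, not the incoming signal, decides the
// layout of the frames delivered to VideoInputFrameArrived().
HRESULT result = deckLinkInput->EnableVideoInput(
    bmdModeHD1080i50,          // display mode (example value)
    bmdFormat10BitYUV,         // frames arrive as v210 no matter
                               // what bit depth the source transmits
    bmdVideoInputFlagDefault);

if (result == S_OK)
    result = deckLinkInput->StartStreams();
```

Some devices also support automatic input detection via the bmdVideoInputEnableFormatDetection flag and the VideoInputFormatChanged callback, but as far as I know that detects the video mode of the incoming signal rather than telling you how the far end packed its frame buffers.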
