Learning about Dolby Vision and CoreELEC development

@ anyone with suggestions

For me, it is proving difficult (maybe impossible - I can't work out how) to embed the metadata into the transmitted HDMI signal to make DV work on all devices. I've worked out one way of embedding the data - but it only works on 16:9 video and requires outputting all video in an 8-bit format (with double_write set to 1) - fine for testing concepts, but not practical for actual use.

The 16:9 requirement is because the part of the code I am using to embed the metadata only has access to the decoded video frames - and the metadata needs to sit in the top pixels of the frame actually transmitted over HDMI, regardless of the aspect ratio of the video file. With anything other than 16:9, the decoded frame gets letterboxed or pillarboxed, so its top row no longer lands on the top row of the HDMI output (e.g. a 2.39:1 film scaled into a 1080p frame starts roughly 138 rows down).

The 8-bit requirement is because that path produces frames in NV21 format: the layout is documented and a pointer to the pixel data is available, so it is easy to read and modify.
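For anyone curious what the NV21 manipulation amounts to, here is a minimal sketch. The struct and function names are hypothetical simplifications - the real kernel code gets the mapping via the canvas/codec_mm layers, and the actual byte values written would be whatever the DV metadata scheme requires - but it shows why a linear, documented format makes this easy:

```c
/* Hedged sketch: stamping metadata bytes into the top line of an
 * NV21 frame. nv21_frame is a hypothetical stand-in for the
 * decoded-frame info; real code would obtain these pointers via
 * the Amlogic canvas/codec_mm layers. */
#include <stddef.h>
#include <stdint.h>

struct nv21_frame {
    uint8_t *y;       /* luma plane, 1 byte per pixel */
    uint8_t *uv;      /* interleaved VU plane, 4:2:0  */
    size_t   stride;  /* bytes per luma row           */
    size_t   width;   /* pixels per row               */
};

static void embed_metadata_nv21(struct nv21_frame *f,
                                const uint8_t *md, size_t md_len)
{
    size_t n = md_len < f->width ? md_len : f->width;
    size_t i;

    /* Row 0 of the luma plane starts at offset 0. */
    for (i = 0; i < n; i++)
        f->y[i] = md[i];

    /* Force the corresponding chroma to neutral (0x80) so the
     * metadata pixels are not tinted by leftover colour. One VU
     * pair covers a 2x2 luma block. */
    for (i = 0; i + 1 < n; i += 2) {
        f->uv[i]     = 0x80;  /* V */
        f->uv[i + 1] = 0x80;  /* U */
    }
}
```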

In normal 10-bit mode, the frames are flagged as VIDTYPE_COMPRESS, VIDTYPE_SCATTER, and VIDTYPE_VIU_FIELD. As best I've been able to work out, this format is required for Amlogic devices to play 10-bit content: VIDTYPE_COMPRESS means the data is stored in an undocumented lossless compressed form, and VIDTYPE_SCATTER means the data is scattered across non-contiguous memory. Does anyone have any idea how a video signal in this format can be read from / written to?
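To make the two paths concrete, a sketch of how the frame type could be tested kernel-side. The VIDTYPE_* flags come from the Amlogic vframe header (include/linux/amlogic/media/vfm/vframe.h in the vendor tree); the helper itself is illustrative, not existing driver code, and I'm assuming the header in your kernel matches:

```c
/* Hedged sketch (kernel context): distinguishing the easy NV21
 * path from the compressed 10-bit path by inspecting vframe_s.
 * Flag names per the Amlogic vframe header; verify against your
 * kernel tree. */
#include <linux/types.h>
#include <linux/amlogic/media/vfm/vframe.h>

static bool vframe_is_cpu_accessible(const struct vframe_s *vf)
{
    /* Compressed and scatter-allocated frames have no single
     * linear, documented pixel buffer to read and write. */
    if (vf->type & (VIDTYPE_COMPRESS | VIDTYPE_SCATTER))
        return false;

    /* The double_write = 1 path yields plain linear NV21 planes. */
    return (vf->type & VIDTYPE_VIU_NV21) != 0;
}
```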


To sidestep having to understand the 10-bit format, my idea is to instead overlay the pixel sequence via either the OSD layer or the secondary video layer. But I have no idea how to go about trying that - any suggestions on how to approach this? A rough starting point I've been considering is below.
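This is only a guess at an entry point, assuming CoreELEC exposes the OSD layer as a standard fbdev device at /dev/fb0 (the device path is an assumption). The big caveat is that any OSD scaling, blending, or colour-space conversion in the VPU would alter the exact pixel values, which would break bit-exact metadata - so this shows where one could start experimenting, not a working solution:

```c
/* Hedged sketch (userspace): poking test pixels into the top line
 * of the OSD layer, assuming it is exposed as /dev/fb0. Caveat:
 * OSD scaling/blending/CSC may corrupt the exact values. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int write_osd_top_line(const uint8_t *pixels, size_t nbytes)
{
    struct fb_fix_screeninfo fix;
    uint8_t *fb;
    size_t n;

    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return -1;

    if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        close(fd);
        return -1;
    }

    fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
              MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* First row of the visible framebuffer = top line of the OSD. */
    n = nbytes < fix.line_length ? nbytes : fix.line_length;
    memcpy(fb, pixels, n);

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}
```

If the OSD turns out to be unusable because of blending/CSC, the same experiment could in principle be tried against the secondary video layer, but I don't know how that layer is exposed.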