Learning about Dolby Vision and CoreELEC development

I’ve made some progress in extracting the metadata from these captures, but the data seems to be slightly different to what I was expecting.

All those captures are in 10-bit RGB, but to extract the metadata I need the exact bytes from the 8-bit RGB tunnel.

Is it possible to capture the data in 8-bit RGB to avoid any possible scaling/conversion issues? If not, do you know how the original 8-bit data was mapped into the 10-bit data in those dpx files?

Could you also make a new dpx capture of this file? If possible, could you also provide the frame number of the video that is captured? This cleaner test pattern should prevent me from introducing errors relating to the bit scrambling used in embedding the metadata. It should also help in working out how the 8-bit RGB has been mapped into the 10-bit capture format.
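
For what it's worth, once I have the new capture I plan to check the mapping with something like the sketch below. It's only an outline: `captured_10bit` and `expected_8bit` stand in for however the capture and the known test pattern actually get loaded.

```python
import numpy as np

def check_mappings(captured_10bit: np.ndarray, expected_8bit: np.ndarray) -> None:
    """Compare a 10-bit capture of a known 8-bit test pattern against a few
    plausible 8-bit -> 10-bit mappings and report the error for each."""
    candidates = {
        "value in low 8 bits (upper 2 bits padded)": expected_8bit.astype(np.uint16),
        "value shifted left by 2 (lower 2 bits padded)": expected_8bit.astype(np.uint16) << 2,
        "scaled to full 10-bit range (x 1023/255)": np.round(expected_8bit * (1023 / 255)).astype(np.uint16),
    }
    for name, predicted in candidates.items():
        err = np.abs(captured_10bit.astype(np.int32) - predicted.astype(np.int32))
        print(f"{name}: max error {err.max()}, median error {np.median(err)}")
```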

Capture is possible only in the R210 codec, which is 4:4:4 10-bit. But since the signal itself is 8-bit, the upper bits are empty and contain no information.
The file structure is RGB 10-bit plus 2-bit transparency.

Uploaded am6b-min p7-rgb-.mov, the original video stream in r210.

Thanks, I’ll give that a try. Interesting that the dpx files lose the 2 bit transparency and seem to fill the full 10-bit range
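
For reference, here's roughly how I'm planning to pull the raw 10-bit values out of the .mov. This is only a sketch: it assumes ffmpeg's r210 decode to gbrp10le is lossless and that the width/height are known from the capture; both are worth double-checking.

```python
import subprocess
import numpy as np

def read_first_frame_10bit(path: str, width: int, height: int) -> np.ndarray:
    """Decode the first frame of the r210 .mov to planar 10-bit RGB via ffmpeg
    and return it as an H x W x 3 array of values 0..1023."""
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path,
         "-frames:v", "1", "-f", "rawvideo", "-pix_fmt", "gbrp10le", "-"],
        check=True, capture_output=True).stdout
    planes = np.frombuffer(raw, dtype="<u2").reshape(3, height, width)
    g, b, r = planes            # gbrp10le stores planes in G, B, R order
    return np.stack([r, g, b], axis=-1)
```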

Your capture of the test png shows that CoreELEC does not output RGB images even close to bit-perfectly. Out of the possible range of 0-255, the R, G, and B channels have maximum errors of (25, 24, 23) (14, 32, 8) and median errors of (9, 3, 7) (8, 13, 1) - essentially almost every byte is output inaccurately, and often significantly so. Given the size of these errors I doubt it is due to any sort of unwanted colorspace conversion; I would guess it is an issue with CoreELEC/Kodi itself.

edit: fixed error values now that the captured data is being imported correctly
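
For anyone who wants to reproduce the numbers, the comparison is essentially the sketch below. It assumes the capture has already been decoded to an 8-bit H x W x 3 array and spatially aligned with the reference png.

```python
import numpy as np
from PIL import Image

def channel_errors(reference_png: str, captured_rgb8: np.ndarray):
    """Per-channel max and median absolute error between the reference
    test pattern and the captured output, both 8-bit RGB."""
    ref = np.asarray(Image.open(reference_png).convert("RGB"), dtype=np.int16)
    cap = captured_rgb8.astype(np.int16)
    err = np.abs(cap - ref)
    max_err = tuple(int(err[..., c].max()) for c in range(3))
    med_err = tuple(float(np.median(err[..., c])) for c in range(3))
    return max_err, med_err
```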

Thank you for your testing. I had a feeling that something was broken in CoreELEC and your testing confirmed it. Hopefully, the bugs can be fixed and you and DMDreview can continue to test the output for bit-perfectness.

The OSD 1 is on all the time and would need to be for subtitles to be shown.
There may be a way to switch it off for testing purposes; I'm not well versed enough in Kodi to know if there is any transparency or alpha blending going on.

I will double-check, but as far as I remember Amlogic has a problem with RGB output - it can't output it natively, and it seems to be converted via YCbCr. At least in my very first test on a Ugoos X2 (Amlogic S905X2) the RGB output was very inaccurate, while other players could output RGB bit for bit with the original.
But I'll double-check.

For the dpx I did a contrast reduction so that the RPU data squares were visible, because the DPX was clipped to a limited-range signal when exporting, and the RPU data sits in the lower code values.
I have now redone the settings in DaVinci to keep the whole range. So the new DPX files will be full-range, but when importing them it is necessary to flag them as Full.
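
To spell out why the limited-range export clipped the metadata: with the usual 10-bit video levels the narrow range only spans codes 64-940, so anything sitting below code 64 (where the RPU squares are) is lost before any rescale to full range. A rough illustration of that rescale, assuming those standard levels:

```python
def limited_to_full_10bit(code: int) -> int:
    """Map a 10-bit limited-range code (64..940) to full range (0..1023).
    Codes below 64 would already have been clipped on a limited-range export."""
    return round((code - 64) * 1023 / (940 - 64))
```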

That seems to have done the trick. I've got some more work to do, but a quick test shows the CRC is now passing and the first packet of data is decoding with values that look correct.

This appears slightly incorrect; it looks like the padding occurs in the lower two bits.

Does this mean that the encoding rather than the CoreELEC output itself caused the previous problem with incorrect RGB values? In other words, is CoreELEC bit-perfect?

No - it means that the capture was performed perfectly and I can read the raw bytes into my code. The fact that tv-led DV works with embedded metadata at all shows that the DV part of the SoC is able to set bits in the RGB output exactly - but it says nothing about the accuracy of the video output.

Re-examining the .mov capture of the 8-bit RGB png test pattern still shows significant errors and has corrupted the embedded metadata: maximum errors of (25, 24, 23) and median errors of (9, 3, 17). Interestingly, these errors form a clear pattern, e.g., in one part the errors are 10, 9, 9, 9, 10, 9, 9, 9, 10, … in a repeating sequence. Keep in mind this test was displaying a png image; I have no idea if the same thing happens when playing videos.
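
The pattern is easiest to see by printing the signed errors along one row of the gradient in the test pattern, something like the sketch below (the row index is arbitrary; `ref` and `cap` are the aligned 8-bit arrays from the earlier comparison):

```python
import numpy as np

def print_row_errors(ref: np.ndarray, cap: np.ndarray, row: int = 100, n: int = 64) -> None:
    """Print signed red-channel errors along one row to make the
    repeating 10, 9, 9, 9, ... structure visible."""
    errs = cap[row, :n, 0].astype(int) - ref[row, :n, 0].astype(int)
    print(" ".join(str(e) for e in errs))
```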

That’s disappointing, but it’s important to know and hopefully it’ll help the team. Thanks for all the testing.

If you are interested in bit-perfect video output then maybe start a new thread. I think it is something that could be fairly easily tested for non-DV video by forcing a 4:2:0 output (or whatever matches a sample file) and comparing a capture against the YUV decoded with ffmpeg.
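
Roughly, the test could look like the sketch below: decode the sample to raw YUV with ffmpeg as the reference, bring the capture to the same raw format and frame alignment (not shown), and then diff the two byte-for-byte. The file names and the 4:2:0 choice are just placeholders.

```python
import subprocess

def decode_reference_yuv(sample: str, out_path: str, pix_fmt: str = "yuv420p") -> None:
    """Decode a sample file to raw YUV with ffmpeg to use as the reference."""
    subprocess.run(
        ["ffmpeg", "-v", "error", "-y", "-i", sample,
         "-f", "rawvideo", "-pix_fmt", pix_fmt, out_path],
        check=True)

def first_mismatch(ref_path: str, cap_path: str) -> int:
    """Return the byte offset of the first difference, or -1 if identical."""
    with open(ref_path, "rb") as a, open(cap_path, "rb") as b:
        offset = 0
        while True:
            ca, cb = a.read(1 << 20), b.read(1 << 20)
            if ca != cb:
                for i, (x, y) in enumerate(zip(ca, cb)):
                    if x != y:
                        return offset + i
                return offset + min(len(ca), len(cb))
            if not ca:
                return -1
            offset += len(ca)
```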

I remembered - there is an Amlogic problem: it outputs png and jpg images through the FP32 block, and because of that neighboring pixels end up with different coordinates. I made a video from the png, but Amlogic, like Fire TV, does not know how to output RGB properly, because it apparently uses a limited range when converting. And Amlogic also does this conversion incorrectly. Shield TV can output RGB 4:4:4 8-bit full range exactly.

Uploaded a capture of the video made from the png, plus the original video:
am6 png to video.mov
dv-test-orig video.mp4

Player-led (not mentioned here) works on fully non-DV panels by EDID hacking, where the output is just RGB HDR. I'd suspect it was by chance that this worked by sending frames later on the RPi; as part of the HDMI handshake you still need to announce your device as supporting the appropriate bits for DoVi TV-led. The HDFury guys figured this out years ago.

@cpm found an interesting comment in one of the patents that seems to imply that a Dolby Vision display checks for a tv-led DV hdmi signal simply by checking whether valid metadata is embedded in an 8-bit RGB signal. Given the exact output capability of a Shield TV, could you test whether the DV test pattern is able to trigger tv-led DV from that device?

Is it known whether this is a software or hardware problem? Having a quick look at the code in hdmi_tx_hw.c, it appears that the Color Space Converter is disabled if the input and output formats for the hdmi code are the same (the options being RGB444, YCbCr422, YCbCr444, and YCbCr420).

Are you talking about announcing from the sink or the source? If the source, do you know of, or have a reference to, exactly what needs to be set?

Do you have a link to anything on this? The only thing I've been able to find is editing the EDID of the sink (display) so that an LLDV-capable display is presented to the source (player) when the actual display does not report LLDV capabilities, i.e., this trick. This is not related to tv-led at all.

As a side note, I wonder if this LLDV trick could be done from the CoreELEC-supported DV-capable devices without needing an HDFury. As the trick seems to simply report an LLDV-capable display via the EDID, wouldn't it be as simple as ignoring the EDID actually read from the display in the code and inserting whatever LLDV-capable EDID when deciding whether an LLDV output will be allowed?

I don't have a Shield TV anymore. But in general, you can output pure RGB 8-bit from a computer's video card. But how would you embed the metadata?