It is all in the AMLogic code (open-source Linux) and its interaction with the Dolby library.
I very much think dovi.ko is it on the proprietary code front - containing all the Dolby interfacing, and presumably checking the license in the SoC to allow it to run its algorithms.
It decompiles to not very much - what I could make out in the decompile was to do with the RPU, but it must contain the composer orchestration for the HEVC decoder output etc.
Kodi only checks at a superficial level that the SoC is good to play the content, nothing beyond that - it fundamentally just passes the stream down to AMLogic, which then does all the work.
The only thing that could stop this being a case of just outputting the correct 0/1 bits on HDMI to the sink would be some kind of per-frame encryption or license handshake - but I see no evidence of either.
If we send the correct bits, then I am 99% convinced it will work with a Dolby Vision sink (TV).
The thing I would be wary of is the implementations in the sink: we know they work for p7, p8 IPT and p5 IPT-PQ-C2, but I'm not sure about others. As you mentioned, if the pi gen png is just raw data - i.e. actual IPT captured from the RGB 4:4:4 and stored in a png - we don't know if an actual sink will work with other tunnelled data.
That is the concept as I understand it. There is some discussion of it here that talks about triggering both LLDV and TV-led with test patterns (and a proper hsv infoframe - what is this? does it apply to both LLDV and TV-led?).
Well we have the bits thanks to the captures provided by @DMDreview. It would really be good to work out what changes need to be made to CoreELEC in order to trigger TV-led DV using this captured data in a “replay attack” style.
If this can be done, I think we would be able to get some very interesting results immediately afterwards by varying the metadata embedded into the chroma LSBs…
If anyone has any idea how we could modify CoreELEC to be able to do this “replay attack” (i.e., bit-perfect RGB output), any input would be much appreciated.
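As a toy illustration of the "vary the metadata in the chroma LSBs" idea: the real DV tunnelling packing is more involved than this, but a round-trip through the lowest bit of each chroma sample sketches what the experiment would look like (function names and the one-bit-per-sample layout are my own, purely illustrative).

```python
import numpy as np

def embed_lsb_bits(chroma: np.ndarray, bits) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) chroma samples.
    Returns a new array; the input is left untouched."""
    flat = chroma.flatten()  # flatten() copies
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(chroma.shape)

def extract_lsb_bits(chroma: np.ndarray, n: int) -> list:
    """Read back the first n embedded bits."""
    return (chroma.flatten()[:n] & 1).tolist()
```

If a replay of the captured bits works, swapping in modified LSBs like this would quickly show which parts of the embedded data the sink actually reacts to.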
The AMLogic debugfs has the InfoFrame information logged for the HDMI connection, so we can check there what is being sent.
(may need to switch hdmitx debug on)
From what I learnt looking at the bt.2020 flag recently, there are the following InfoFrames:
Vendor Specific InfoFrame (VSIF)
Auxiliary Video Information (AVI) InfoFrame
Dynamic Range and Mastering (HDR) InfoFrame
Audio InfoFrame
From what I understand, DoVi should have a VSIF with version info and a type, as in DV-Std or DV-LL etc. (Need to grab from the debugfs in each mode to check.)
For replay, I think we would need custom code (on the Linux AMLogic side), or a total overwrite of the InfoFrames with an HDFury etc., and then write to the hdmitx.
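One quick sanity check on whatever the debugfs dump shows: a vendor-specific infoframe carries a 24-bit IEEE OUI at the start of its payload, least-significant byte first, and my understanding is that the DV VSIF uses the Dolby OUI 0x00D046. Treat both the OUI value and the byte layout here as assumptions to verify against a real dump.

```python
# Assumed Dolby Laboratories IEEE OUI used by the DV VSIF.
DOLBY_OUI = 0x00D046

def oui_of(payload: bytes) -> int:
    """First three payload bytes of a VSIF are the IEEE OUI,
    least-significant byte first (e.g. HDMI LLC 0x000C03 is
    carried as 03 0C 00)."""
    return payload[0] | (payload[1] << 8) | (payload[2] << 16)

def looks_like_dolby_vsif(payload: bytes) -> bool:
    return len(payload) >= 3 and oui_of(payload) == DOLBY_OUI
```

Running this over payload bytes copied out of the debugfs dump in each mode would at least confirm whether a Dolby VSIF is present for DV-Std vs DV-LL.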
Below has a little bit of context - indicating the same thing:
If the Oppo is doing nearest-neighbour chroma upsampling, it is not being performed as a final step. From my understanding, that would result in each chroma value being horizontally duplicated in the captured chroma channels - this is not the case. It also doesn't explain the difference in the luma channel. So something else is going on …
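The duplication check above is easy to run mechanically on a captured chroma plane: if nearest-neighbour upsampling were the final step, every even column would equal its odd neighbour. A small sketch (function name is mine; assumes an even-width plane):

```python
import numpy as np

def looks_nearest_neighbour(chroma: np.ndarray) -> bool:
    """True if each even column of the chroma plane equals the odd
    column to its right, i.e. samples were duplicated horizontally."""
    return bool(np.array_equal(chroma[:, 0::2], chroma[:, 1::2]))
```

A filtered upsampler (or any later processing) breaks the pairwise equality, which is exactly what the captures showed.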
Doing some digging, it appears that one of the more important properties of IPT/ICtCp/IPTPQc2 (all referred to as IPT for the rest of this post - not entirely sure exactly which is being used where / what the differences are) is that it has much more constant luminance, with better decorrelation of the chroma and luma channels, than YCbCr.
It appears that this property is very important w.r.t. minimising the errors introduced when performing chroma sampling. Essentially, if an IPT colorspace is involved, it appears to be the stage at which chroma sampling operations are best performed - and where they are performed for IPT / DV content.
Now onto the method used for chroma sampling. For HDR video, a method for chroma up-sampling appears to be defined as a 4-tap filter with fixed coefficients, applied first vertically and then horizontally (for down-sampling, a 3-tap filter, horizontal then vertical). These coefficients are known, and while I haven't found anything specifically stating this is used for all DV content, the filter is provided as an informative reference in one of the DV-related ETSI specifications.
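For concreteness, a separable 4:2:0 to 4:4:4 up-sampling of that shape can be sketched as below. The kernel here is a generic 4-tap half-pel interpolator, NOT the actual coefficients from the ETSI/CCM spec (look those up in the annex); the structure - co-sited samples copied, in-between samples filtered, vertical pass then horizontal - is what matters.

```python
import numpy as np

# Illustrative 4-tap kernel, normalised to sum to 1.
# These are NOT the Dolby/ETSI coefficients.
TAPS = np.array([-2.0, 18.0, 18.0, -2.0]) / 32.0

def upsample_1d(x: np.ndarray) -> np.ndarray:
    """Double length along axis 0: even outputs copy the input,
    odd outputs are 4-tap interpolated, edges clamped."""
    pad = np.pad(x, ((2, 2),) + ((0, 0),) * (x.ndim - 1), mode="edge")
    out = np.empty((2 * x.shape[0],) + x.shape[1:], dtype=np.float64)
    out[0::2] = x
    out[1::2] = (TAPS[0] * pad[1:-3] + TAPS[1] * pad[2:-2]
                 + TAPS[2] * pad[3:-1] + TAPS[3] * pad[4:])
    return out

def chroma_upsample_420_to_444(c: np.ndarray) -> np.ndarray:
    """Vertical pass first, then horizontal, per the spec's ordering."""
    c = upsample_1d(c)       # vertical
    c = upsample_1d(c.T).T   # horizontal
    return c
```

Because the taps sum to 1, flat patches survive exactly; the differences only appear around edges and fine stripes, which matches the capture diffs discussed further down.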
For those that are interested, here are some references:
Compound Content Management Specification - a helpful specification for DV (not that it once mentions Dolby or DV). Annex C also defines the same filters as the previous link.
TL;DR
All chroma sampling for DV content is almost certainly performed in an IPT colorspace - likely with a fixed 4-tap filter for up-sampling.
I'm suspicious of the VSIF.enable: disable at the start of both, but given the TV does enter DV mode for LLDV, it's unclear to me what it means.
Could someone with a supported DV device please run the same command and see if the output is the same as what I have posted?
This difference between the Oppo and the am6 is because of colour interpolation and the conversions from YCbCr to RGB and ITP. Unlike HDR, it seems there is no pure luminance channel here anymore - it has already gone through conversions.
@DMDreview From what I can gather from these files, you have taken an 8-bit RGB png, created an 8-bit YUV 4:2:0 mp4 (surely losing the embedded bits needed for DV tunnelling), and then made a capture of the am6 playing that file back with CoreELEC set to 8-bit RGB output.
Thinking this may help here somewhere: if we can reliably capture on the box, then we can much more easily check changes upstream and see the effects straight away.
We can probably reference the capture usage below / integrate it into menus etc. - I haven't looked yet myself, but I'll throw it out there; maybe at some point someone can write a small add-on like pi gen.
Only a little, but at the time I couldn't see anything that gave any hints about how to use it - I'll look through those links. It would be great if we could work out how to make captures on the device at different stages of the video pipeline, instead of relying on external hardware that most don't have - which makes for a much slower testing loop.
It does seem, though, that capturing at different stages of the video pipeline is possible. Hyperion seems to show both video and the GUI, while the python addon I copied from before only worked when video was playing.
I like this idea. Once the output pipeline, colour space conversions, chroma sampling, etc. are understood, I can't see any reason why these devices couldn't become a very good calibration tool. They could probably one-up Pi Gen as well - that seems to need an HD Fury afterwards to get HDR working.
I think that is what I started with: trying to take screenshots using kodi-send --action="TakeScreenshot". It didn't seem to work though, and only produced scrambled garbage - hence looking for something else, which led to the python tool.
It didn't occur to me at the time, but it seems that is showing yet another way to use amvideocap. Maybe it was actually capturing the YUV and just raw-dumping it into a png, with whatever ordering/packing of the YUV data making the png appear as garbage.
Looking at the video made from the test pattern and the captures of the png again, it appears that CoreELEC is outputting RGB with a limited range (when displaying the png and playing the yuv mp4; full range for tunnelled TV-led DV).
@DMDreview Were you getting the exact same pixel values between the decoded frames of dv-test-orig video.mp4 and the capture? I'm still seeing differences - wanted to check.
What makes you say this?
From examining your captures, this doesn't seem to be the case: the output pixel values from the png capture and the mp4 capture are almost identical. The only differences appear to occur around edges and throughout the top box (which consists entirely of single-pixel-wide stripes), which would be explained by chroma up-sampling.
Given that frames decoded from the mp4 file and the original png are close to identical (I have tested and confirmed this), it seems to me that whatever is causing differences in the output compared to the original values in the png/mp4 file is something common to both the pipeline for displaying an RGB png and the pipeline for displaying a YUV video, both with an RGB output.
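The comparison itself is simple to reproduce: diff the two frames per pixel and look at where the non-zero error lands (edges and the striped box vs flat patches). A minimal sketch of the kind of check I mean (names are mine):

```python
import numpy as np

def diff_map(a: np.ndarray, b: np.ndarray):
    """Per-pixel absolute difference between two same-shaped frames,
    plus a couple of summary stats to eyeball."""
    d = np.abs(a.astype(np.int32) - b.astype(np.int32))
    return d, {
        "max": int(d.max()),
        "pct_nonzero": float((d > 0).mean() * 100.0),
    }
```

If the error were a global level shift (e.g. a range conversion) the non-zero percentage would be near 100%; error clustered on edges points at resampling instead.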
To me, the conversion to limited-range RGB seems to be the obvious candidate for causing the problem. Given that a full-range RGB output will be required to make the test pattern work, I'll see if I can get CoreELEC to produce one.
When I have some level of certainty that I know how to get a full-range output going, of either the test png or a lossless encode into a video, I'll ask you for another test to confirm - assuming you are still up for doing more tests.
I'm not up to speed on the issue relating to positive lifts, but it seems that some HDMI devices (at least the Dolby CMU - see its User Manual) can send L5 info. It has the option to send L5, with the warning:
NOTE: Disabling this function will cause the letterboxed areas in
the TV to lift when positive lift trim is applied.
Thought I'd share in case this is helpful info at all. If it could be done, would changing the code so that L5 is sent be helpful?
@cpm Do you know if the changes to provide “true tv-led” functionality have been added yet to the “NE” builds? Or are they still only present in the “NG” builds?
Not that I'm aware of. I think NE uses AMLogic 5.4 - I recall that was checked and it had the same logical issue. Though the code was slightly different, it looked like it would still end up in graphics mode if an OSD was on - and osd1 is always on, for subtitles etc.