Is it that fake TV-led is ITP [ICtCp] (sent in an RGB tunnel with the same format layout as YUV [YCbCr]) with the RPU already applied, but with a "blank" RPU also sent to the TV so it works OK?
Whereas player-led is YUV [YCbCr] (it can also be RGB, but let's ignore that and keep it simple) with the RPU already applied.
Visually, is there a difference between fake TV-led and player-led? From what you're describing, it seems they are functionally the same thing.
In both cases the RPU is applied at the box, and both are limited to CM2.9. It seems the difference is negligible, mostly semantics.
In any case, let's say there are three DV modes: TV-led, fake TV-led, and player-led. How does this apply to VS10? TV-led is obviously preferred, but are there cases where you would use fake TV-led over player-led?
I can't say I know for sure, just what I have picked up, but I think it depends: ITP should do a better job than YUV on colour at the same bandwidth, so it should be better when converting ITP to the RGB of the display driver.
12-bit ITP is meant to effectively eliminate banding through the combination of the extra bit depth and the ITP representation.
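Not from the thread, but the bit-depth half of that claim is easy to quantify with the SMPTE ST 2084 (PQ) EOTF: 12 bits gives four times as many code values, so the luminance step between adjacent codes (what you perceive as banding) shrinks accordingly. A minimal Python sketch, illustrating only the bit-depth part, not the ICtCp decorrelation part:

```python
# Compare the luminance step between adjacent PQ code values at
# 10-bit vs 12-bit quantisation -- the mechanism behind
# "more bit depth reduces banding".

# SMPTE ST 2084 (PQ) EOTF constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(code: int, bits: int) -> float:
    """Decode a PQ code value at the given bit depth to cd/m^2."""
    e = code / (2 ** bits - 1)             # normalised signal 0..1
    p = e ** (1 / M2)
    y = (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)
    return 10000 * y                        # PQ tops out at 10,000 cd/m^2

def step_at(nits_target: float, bits: int) -> float:
    """Luminance step between adjacent codes nearest the target level."""
    code = min(range(2 ** bits - 1),
               key=lambda c: abs(pq_to_nits(c, bits) - nits_target))
    return pq_to_nits(code + 1, bits) - pq_to_nits(code, bits)

if __name__ == "__main__":
    for bits in (10, 12):
        print(f"{bits}-bit step near 100 cd/m^2: "
              f"{step_at(100, bits):.4f} cd/m^2")
```

The 12-bit step comes out roughly a quarter of the 10-bit step at the same luminance, as you'd expect from four times the code density.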
Fake TV-led is a misnomer; it is actually graphics mode, where priority is given to graphically generated content over the video content (not that graphical interactive content ever really took off for Blu-ray).
It can be switched on, which may or may not help with some testing, but it is an option.
I would not expect it ever to be useful in the real world; you would always want video-mode TV-led for watching video.
Player-led uses only data from the EDID. Fake TV-led, according to information from R3S3T_9999, uses brightness information from the TV's internal DV parameter block, and sometimes there is quite a significant difference between what the EDID reports and this internal table. I haven't been able to confirm this in my own tests yet, but I trust his.
Here he did a comparison between fake TV-led and LLDV.
Did a full comparison between GTV "fake TV-led", "LLDV", X800M2 "true TV-led" and the C2 internal player…
There’s definitely a difference between fake TV-led and LLDV.
LLDV is brighter, and red looks more saturated, than fake TV-led.
There’s a noticeable difference between fake TV-led and true TV-led.
C2 internal app and X800m2 TV-led look pretty much identical in most of the shots.
Is it possible to add data from this block to CE for TVs without DV support, in particular colorimetry, since brightness information alone is probably not enough for proper tone mapping?
This is data from the EDID of an LG CX:
Vendor-Specific Video Data Block (Dolby), OUI 00-D0-46:
Version: 2 (12 bytes)
DM Version: 4.x
Backlt Min Luma: 100 cd/m^2
Interface: Standard + Low-Latency
Supports 10b 12b 444: Not supported
Target Min PQ v2: 0 (0.00000000 cd/m^2)
Target Max PQ v2: 2965 (774 cd/m^2)
Unique Rx, Ry: 0.67968750, 0.30859375
Unique Gx, Gy: 0.26953125, 0.69921875
Unique Bx, By: 0.13281250, 0.04687500
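As a quick check (not from the thread): the two Target PQ fields are 12-bit PQ code values, so you can verify the decoder's cd/m^2 figures with the SMPTE ST 2084 EOTF. A small Python sketch; the 4095 divisor assumes a 12-bit code as shown in the block above:

```python
# Decode the VSVDB "Target Min/Max PQ" 12-bit code values to cd/m^2
# using the SMPTE ST 2084 (PQ) EOTF.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def target_pq_to_nits(code: int) -> float:
    """12-bit PQ code (0..4095) -> luminance in cd/m^2."""
    p = (code / 4095) ** (1 / M2)
    return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

print(target_pq_to_nits(0))     # Target Min PQ v2: 0    -> 0.0 cd/m^2
print(target_pq_to_nits(2965))  # Target Max PQ v2: 2965 -> ~774 cd/m^2
```

Code 2965 decodes to just under 775 cd/m^2, matching the 774 in the dump. The "Unique" primaries are similarly quantised: all six values in this dump are exact multiples of 1/256 (e.g. 0.67968750 = 174/256).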
The last three are color primaries, right? Do they influence tone mapping in terms of color gamut capabilities, or in terms of which color space the TV panel uses?
It definitely does not understand that it should use LLDV. I made these settings; theoretically it should output DV for a DV source, but we get wrong ITP colors, i.e. it tries to send ITP but without the RGB tunnel.
Maybe it should be forced with explicitly specified data, like this:
Vendor-Specific Video Data Block (Dolby), OUI 00-D0-46:
Version: 2 (12 bytes)
Supports YUV422 12 bit
DM Version: 3.x
Backlt Min Luma: 100 cd/m^2
Interface: Low-Latency
Supports 10b 12b 444: Not supported
Target Min PQ v2: 0 (0.00000000 cd/m^2)
Target Max PQ v2: 2965 (774 cd/m^2)
Unique Rx, Ry: 0.70703125, 0.28906250
Unique Gx, Gy: 0.16796875, 0.79687500
Unique Bx, By: 0.12890625, 0.04296875
Colorimetry Data Block:
xvYCC601
xvYCC709
BT2020cYCC
BT2020YCC
BT2020RGB
So for playback we want to fake the Dolby VSI being sent from the TV/sink, so that the VS10 / DV engine picks it up, thinks it is connected to a DV sink, and does LLDV.
It looks theoretically possible but a little complex, so I'm not 100% sure; it's not something I will take on at the moment, but I can give pointers if someone wants to take up the challenge.
This had crossed my mind, and a few other people's: in theory faking the VSI may work, unless something in dovi.ko is checking against it.
For DV → HDR10, I think the parameters influence that and may give a better HDR10 conversion, though it's not going to be as good as LLDV.
Don't know if this helps, but my Apple TV outputs 12-bit 4:2:2 in LLDV, and on the Ugoos I have to change from auto to 12-bit 4:2:2 or else I get a green/purple picture.
On the contrary, it is useful because it allows you to watch DV profile 5 on an HDR TV.
If you could add a condition that disables DV support exactly when such videos are played, that would be great.
Since a full-fledged DV-to-HDR conversion is not possible, is it possible to detect that an ST-DL file is being launched and disable the option for it?
Hi @cpm, fantastic work as always. I have an issue where DV files play to a totally black screen with just the DV logo in the top right. This is on your builds on an LG C6 (2016) OLED. I have no issues on CE-21 nightlies/stable.
Before doing that, do you know if the RPU is actually being used for p5 DV → HDR10?
I bring it up because for p7 MEL DV → HDR10 it appeared not to use the RPU; only when mapping down to SDR10 or SDR8 did it start using the RPU.
For an ST-DL MKV, you can use ffmpeg to grab the first frame, then dovi_tool to extract the RPU and convert it to JSON; in the JSON you can see if the EL is MEL or FEL.
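For anyone scripting that last step, a rough Python sketch. `dovi_tool extract-rpu` and `dovi_tool export` are real dovi_tool subcommands, but the JSON field name below is an assumption based on dovi_tool's RPU structure — inspect your own export before relying on it. The heuristic is that a FEL carries NLQ (residual quantisation) metadata while a MEL does not:

```python
import json

def el_type(rpu: dict) -> str:
    """Classify a profile 7 RPU dict (from `dovi_tool export`) as MEL or FEL.

    Assumption: FEL RPUs carry NLQ metadata for the enhancement-layer
    residual, MEL RPUs do not. The field name `rpu_data_nlq` follows
    dovi_tool's RPU serialization; verify against your dovi_tool version.
    """
    return "FEL" if rpu.get("rpu_data_nlq") else "MEL"

# Usage, after something like:
#   ffmpeg -i movie.mkv -frames:v 1 -c:v copy -f hevc frame.hevc  # first frame
#   dovi_tool extract-rpu frame.hevc -o RPU.bin                   # pull the RPU
#   dovi_tool export -i RPU.bin ...                               # RPU -> JSON
# with open("RPU_export.json") as f:
#     print(el_type(json.load(f)))
```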
Maybe with your help I will actually be able to do what you describe. Normally, reading the text here tells me to leave it alone before I destroy something…
thanks!! will try!