Dolby Vision - VS10 Engine on Ugoos AM6+

Is it that fake TV-led is ITP [ICtCp] (sent in an RGB tunnel with the same format layout as YUV [YCbCr]) with the RPU already applied, but with a 'blank' RPU also sent to the TV so it works OK?

Whereas player-led is YUV [YCbCr] (it can also be RGB, but let's ignore that and keep it simple) with the RPU already applied.

Visually, is there a difference between fake TV-led and player-led? From what you're describing, it seems like they are functionally the same thing.

In both cases the RPU is applied at the box, and both are limited to CM v2.9. The difference seems negligible, mostly semantics.

In any case, let's say there are three DV modes: TV-led, fake TV-led, and player-led. How does this apply to VS10? TV-led is obviously preferred, but are there cases where you would use fake TV-led over player-led?

I cannot say I know for sure, just what I have picked up, but I think it depends: ITP should do a better job than YUV for colour at the same bandwidth, so it should be better when converting ITP to the RGB of the display driver.
12-bit ITP is meant to effectively eliminate banding etc., through both the extra bit depth and the ITP representation itself.

Fake TV-led is a misnomer; it is actually graphics mode, where we want priority for graphically generated content over the video content (not that graphical interactive content ever really took off for Blu-ray).

It can be switched on, and that may or may not help with some testing, but it is an option.
I would not expect it ever to be useful in the real world; you would always want video-mode TV-led for watching video.

Player-led uses only data from the EDID. Fake TV-led, according to information from R3S3T_9999, uses brightness information from the internal DV parameter block of the TV itself, and sometimes there is quite a significant difference between what the EDID reports and this internal table. I haven't been able to confirm this in my own tests yet, but I trust his tests.
Here he did a comparison between fake TV-led and LLDV.

Did a full comparison between GTV "fake TV-led", "LLDV", X800M2 "true TV-led" and the C2 internal player…

  • There’s definitely a difference between fake TV-led and LLDV.
  • LLDV is brighter and red looks more saturated than fake TV-led.
  • There’s a noticeable difference between fake TV-led and true TV-led.
  • C2 internal app and X800m2 TV-led look pretty much identical in most of the shots.
  • LLDV brightness looks closer to true tv-led.
  • Fake TV-led colors look closer to true tv-led.

Is it possible to add data from this block to CE for TVs without DV support, in particular colorimetry, since brightness information alone is probably not enough for proper tone mapping?
This is the data from the EDID of an LG CX:
Vendor-Specific Video Data Block (Dolby), OUI 00-D0-46:
Version: 2 (12 bytes)
DM Version: 4.x
Backlt Min Luma: 100 cd/m^2
Interface: Standard + Low-Latency
Supports 10b 12b 444: Not supported
Target Min PQ v2: 0 (0.00000000 cd/m^2)
Target Max PQ v2: 2965 (774 cd/m^2)
Unique Rx, Ry: 0.67968750, 0.30859375
Unique Gx, Gy: 0.26953125, 0.69921875
Unique Bx, By: 0.13281250, 0.04687500
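
(For reference: Target Min/Max PQ v2 are 12-bit PQ codewords, so the cd/m^2 figures in brackets follow from the SMPTE ST 2084 EOTF. A minimal, standalone sketch of that conversion, just to show where the ~774 cd/m^2 above comes from:)

#include <math.h>
#include <stdio.h>

/* SMPTE ST 2084 (PQ) EOTF: 12-bit codeword -> luminance in cd/m^2 */
static double pq_code_to_nits(unsigned int code)
{
	const double m1 = 2610.0 / 16384.0;
	const double m2 = 2523.0 / 4096.0 * 128.0;
	const double c1 = 3424.0 / 4096.0;
	const double c2 = 2413.0 / 4096.0 * 32.0;
	const double c3 = 2392.0 / 4096.0 * 32.0;
	double n = code / 4095.0;              /* normalise the 12-bit codeword */
	double p = pow(n, 1.0 / m2);
	return 10000.0 * pow(fmax(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
}

int main(void)
{
	printf("min: %.2f cd/m^2\n", pq_code_to_nits(0));    /* 0.00 */
	printf("max: %.0f cd/m^2\n", pq_code_to_nits(2965)); /* ~774 */
	return 0;
}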


OMG, does this mean we can set up the so-called "LLDV hack" (for TVs lacking DV support) without an expensive HDFury splitter?

The last three are color primaries, right? Do they influence tone mapping in terms of color gamut capabilities, or in terms of which color space is used by the TV panel?

It definitely does not understand that it should use LLDV. I made these settings; theoretically it should output DV for a DV source, but we get wrong ITP colors, i.e. it tries to send it as ITP, but without the RGB tunnel.
Maybe it should be forced with explicitly specified data, like this:
Vendor-Specific Video Data Block (Dolby), OUI 00-D0-46:
Version: 2 (12 bytes)
Supports YUV422 12 bit
DM Version: 3.x
Backlt Min Luma: 100 cd/m^2
Interface: Low-Latency
Supports 10b 12b 444: Not supported
Target Min PQ v2: 0 (0.00000000 cd/m^2)
Target Max PQ v2: 2965 (774 cd/m^2)
Unique Rx, Ry: 0.70703125, 0.28906250
Unique Gx, Gy: 0.16796875, 0.79687500
Unique Bx, By: 0.12890625, 0.04296875
Colorimetry Data Block:
xvYCC601
xvYCC709
BT2020cYCC
BT2020YCC
BT2020RGB


I also checked: the situation with ST-DL video has not changed, the same wrong-color video if DV to HDR is enabled.

So for playback we want to fake the Dolby VSI being sent from the TV/sink, so that the VS10 / DV engine picks it up and thinks it is connected to a DV sink in order to do LLDV.

It looks theoretically possible but a little complex, so I'm not 100% sure; it's not something I will take on at the moment, but I can give pointers if someone wants to take up the challenge.

It had crossed my mind, and a few other people's, that in theory, unless something in dovi.ko is checking against it, faking the VSI may work.

For DV → HDR10 I think the parameters influence that and may make for a better HDR10 conversion, though it is not going to be as good as LLDV.

I hooked up an HDR-only monitor and DV → HDR10 worked for me for ST-DL (though it may have been MEL only; I will double-check with an FEL).

Edit: Checked an ST-DL FEL and now get ITP colours, so there is maybe something in the composer that needs info from the Dolby VSI to work correctly.

Given that, I think we should probably just remove the VS10 DV → options, as they are not that useful.

Unbelievable what you are all doing here!! It's all Latin to me.

Can I somehow see if my MKV file has FEL data in it?
thanks!!

Don't know if this helps, but my Apple TV outputs 12-bit 4:2:2 in LLDV, and for the Ugoos I have to change from auto to 12-bit 4:2:2 or else I get a green/purple picture.

If anyone wants to try, a starting point would be looking to override the dv_info in:

extern struct vinfo_s *get_current_vinfo(void);

e.g.:
struct vinfo_s *vinfo = get_current_vinfo();
/* the Dolby capability block parsed from the sink's EDID */
vinfo->vout_device->dv_info

with both raw and parsed data

struct dv_info {
	unsigned char rawdata[27];
	enum block_type block_flag;
	uint32_t ieeeoui;
	uint8_t ver; /* 0 or 1 or 2*/
	uint8_t length;/*ver1: 15 or 12*/

	uint8_t sup_yuv422_12bit:1;
	/* if as 0, then support RGB tunnel mode */
	uint8_t sup_2160p60hz:1;
	/* if as 0, then support 2160p30hz */
	uint8_t sup_global_dimming:1;
	uint8_t dv_emp_cap:1;
	uint16_t Rx;
	uint16_t Ry;
	uint16_t Gx;
	uint16_t Gy;
	uint16_t Bx;
	uint16_t By;
	uint16_t Wx;
	uint16_t Wy;
	uint16_t tminPQ;
	uint16_t tmaxPQ;
	uint8_t dm_major_ver;
	uint8_t dm_minor_ver;
	uint8_t dm_version;
	uint8_t tmaxLUM;
	uint8_t colorimetry:1;/* ver1*/
	uint8_t tminLUM;
	uint8_t low_latency;/* ver1_12 and 2*/
	uint8_t sup_backlight_control:1;/*only ver2*/
	uint8_t backlt_min_luma;/*only ver2*/
	uint8_t Interface;/*only ver2*/
	uint8_t sup_10b_12b_444;/*only ver2*/
};
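
To make the idea concrete, here is a rough, untested sketch of the kind of override meant above. It is only an illustration: dv_override_for_lldv() is a made-up name, the field meanings are taken from the struct comments, and the exact fixed-point encoding of the primaries (and the ver-2 Interface byte) would need to be checked against how hdmitx parses the EDID.

/* Illustrative only, not tested: force the parsed capability block to look
 * like an LLDV-capable sink, using values from the LG CX block quoted
 * earlier. Relies on struct dv_info above. */
static void dv_override_for_lldv(struct dv_info *dv)
{
	dv->ieeeoui = 0x00D046;      /* Dolby OUI */
	dv->ver = 2;                 /* VSVDB version 2 */
	dv->low_latency = 1;         /* advertise low-latency (LLDV) support */
	dv->sup_yuv422_12bit = 1;    /* 12-bit YCbCr 4:2:2 interface */
	dv->sup_global_dimming = 1;
	dv->tminPQ = 0;              /* Target Min PQ v2 */
	dv->tmaxPQ = 2965;           /* Target Max PQ v2, ~774 cd/m^2 */
	/* Rx/Ry .. Bx/By (and Interface / sup_10b_12b_444 for ver 2) would also
	 * need filling in, in whatever fixed-point scale the EDID parser uses. */
}

Whether the DV core picks up a change like this after init, or caches the block, is exactly the open question above.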

On the contrary, it is useful because it allows you to watch DV p5 on an HDR TV.
If you could add some condition that disables DV support exactly when such videos are played, that would be great.
Since it is not possible to make the DV to HDR conversion fully correct, is it possible to check that an ST-DL file has been started and disable the option just for it?


Hi @cpm, fantastic work as always. I have an issue where DV files play as a total black screen with just the DV logo top right. This is on your builds on an LG C6 (2016) OLED. I have no issues on CE-21 nightlies/stable.

It could be limited to just p5.

Before doing that, do you know if the RPU is actually being used for p5 DV → HDR10?

I bring it up because, for p7 MEL DV → HDR10, it appeared not to use the RPU unless mapping down to SDR10 or SDR8, and then it started to use the RPU.

For normal DV content with DV → DV set, it should be handled the same way as on the nightly.

I think your C6 is a rare case in that it will do TV-led but cannot do player-led?

For an ST-DL MKV, you can use ffmpeg to grab the first frame and then dovi_tool to extract the RPU and convert it to JSON; in the JSON you can see whether the EL is MEL or FEL.

ffmpeg -i filename.mkv -c:v copy -bsf:v hevc_mp4toannexb -frames:v 1 -f hevc - | dovi_tool extract-rpu - -o RPU.bin
dovi_tool export -i RPU.bin -d all=RPU_export.json


Maybe with your help I will actually be able to do what you describe. Seeing the text here normally tells me to leave it alone before I destroy something…
Thanks!! Will try!