EDID Override - Injecting a Dolby VSVDB Block

OK, gotcha.

Yes, for an HDR10 PQ-only display there would be no Dolby VSVDB in the EDID.

Not looking to build on that code for now, as I'm moving on to other things; I can just use the raw value, and an online tool makes it easy enough to pull the values.

I am planning to look more deeply into the HDR10+ > DV P8.1 conversion next.

Is this one close? It seems the product code may just be a region difference, but I don't know:

From the HD Fury forums, I believe that if you send an EDID whose HDR metadata has only the first elements populated, up to the PQ value, with everything else in the EDID being 00, then the TV/projector will fall back to its native luminance level (i.e. tone map to its native level).

From my experience, the forum below is the one that has drilled down on the use of EDID for HDR metadata and the DV block for a number of years, using HD Fury devices. Post number 5074 shows a way to send the minimum HDR metadata for a TV/projector. Notice that both the HDR metadata and the DV block are sent. For that post, the HDR metadata is the part to notice, since the DV block was mainly for a projector (lowest luminance levels). From the debate on this issue, it seems that some devices (TVs/projectors) use the HDR metadata and others do not, so YMMV.

https://www.avsforum.com/threads/dolby-vision-including-hdr10-conversion-w-dtm-on-projectors.3097934/page-254

Looks fundamentally the same as what is being done here:

  • Injecting a Dolby VSVDB (V2) into the Dolby Engine (in lieu of one from the EDID / display), with the CS and the Min and Max Lum set according to the display and its setup (as input by the user)

  • Outputting on HDMI the resulting DV-LL as HDR with PQ0.


I have access to a DLA-X9900B projector, but it is not in a setup yet, so I may look into this more later. Just adding this for others to try for now; I have come to the end of the road on this bit, unless some major issues pop up that I can fix.


That is correct. The nuance is that it seems some projectors/TVs treat the HDR metadata differently (some use it, some do not) than the DV block (which I assume is always used). In that forum post (5074), the HDR metadata is forced to null "00" (no value), without any CS, min Lum or max Lum information, while the DV block still has the CS, min Lum and max Lum information. Since that forum is basically dealing with LLDV, I assume the HDR metadata is triggering the projector/TV to its native min and max Lum (or whatever values you want to send), while the DV block is telling the LLDV source device to tone map the image to the parameters in the DV block, which are different (min and max Lum).

It seems that folks are trying to decouple the HDR metadata and the DV block data in the case of LLDV, so the info does not need to be in sync. I.e. you can have the LLDV source device (player) tone mapping (min and max Lum) to a different DTM curve than the sink device (TV/projector), so you can find the best mix for the specific system (source and sink devices) you have. Not sure whether this would be applicable to projector/TV-Led DV, since I would assume the info would be in sync by nature of the EDID?


I participated in that discussion (OKVCOS discussion #5,072) with some tests on the values set by David Haper.
One of these is the payload that allows you to use the DV trick (https://www.avsforum.com/posts/62181045/), but it depends on which HDR display (Samsung, Sony, etc.) you use.
If you have a DV display, it makes no sense.

Do some tests:

BT.2020 space, min/max luminance "empty" (DV trick):

47:04:00:57:98:A9:53  LLDV
47:04:FF:57:98:A9:53  DV+LLDV+RGB/YCbCr
47:04:FE:57:98:A9:53  DV+LLDV
47:04:FD:57:98:A9:53  DV+LLDV+RGB

BT.2020 space, min/max luminance 0.01 / 96:

47:04:FD:57:98:A9:53  LLDV
47:0C:01:57:98:A9:53  DV+LLDV+RGB/YCbCr
47:0C:02:57:98:A9:53  DV+LLDV
47:0C:03:57:98:A9:53  DV+LLDV+RGB

Surely one of these will give you green/red colour.
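As an aside, the min/max luminance figures in those payloads are carried as PQ-coded values, not nits. A minimal, self-contained sketch of the SMPTE ST 2084 inverse EOTF (the constants are the standard ones; the 12-bit code scaling at the end is just for illustration) to check what a given nit level becomes, using the 0.01 / 96 example above:

#include <math.h>
#include <stdio.h>

/* SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2 -> PQ [0..1]. */
static double nits_to_pq(double nits)
{
	const double m1 = 2610.0 / 16384.0;
	const double m2 = 2523.0 / 4096.0 * 128.0;
	const double c1 = 3424.0 / 4096.0;
	const double c2 = 2413.0 / 4096.0 * 32.0;
	const double c3 = 2392.0 / 4096.0 * 32.0;
	double y = pow(nits / 10000.0, m1);

	return pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

int main(void)
{
	printf("0.01 nits -> PQ %.4f (12-bit code %.0f)\n",
	       nits_to_pq(0.01), nits_to_pq(0.01) * 4095.0);
	printf("96 nits   -> PQ %.4f (12-bit code %.0f)\n",
	       nits_to_pq(96.0), nits_to_pq(96.0) * 4095.0);
	return 0;
}

(96 nits, for instance, comes out at a PQ value of about 0.50.)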

Correct, and that is why I focused on LLDV source devices. That is where being able to manipulate the HDR metadata portion and the DV block portion of the EDID independently and separately (as you can with HD Fury devices such as the Vertex 2) makes sense and has value. For a TV-Led DV device it does not make sense.

It would take some, but not much, effort to alter the values sent in the HDR packet from the device to the display. Below is the existing code currently implemented for Player Led (HDR), zeroing all the PQ values. Note most of the effort is probably on the UI side, for management of the values.

This existing functionality is just my test work so far, not part of CE mainline.
In the future, given suitable demand, and if this actually makes it into the CE mainline, I can look to expand it; but I think most people who want to go to this level go the HDFury and possibly MadVR route anyway, so it is not really the remit for these CE devices. But all is possible.


Given that the DV tone mapping is applied to the pixel values for DV-LL, i.e. making sure they stay within the min and max range per scene/frame, and in a manner suitable for the human eye (very likely a non-linear application), a second mapping on the display side is probably helping in those cases to map out the device's own non-linear issues, not really min/max clipping per se, because the values are already within range.

	int i;

	hdr10_data.features =
		  (1 << 29)	/* 1 = video available / present */
		| (5 << 26)	/* 5 = unspecified */
		| (0 << 25)	/* 0 = limited range */
		| (1 << 24)	/* 1 = color available / present */
		| (9 << 16)	/* 9 = primaries BT.2020 */
		| (0x10 << 8)	/* 16 = transfer char. SMPTE ST 2084 */
		| (10 << 0);	/* 10 = matrix coeff. BT.2020c (9 = BT.2020nc) */

	/* Zero the static metadata: mastering display primaries, white
	 * point, min/max luminance, MaxCLL and MaxFALL. */
	for (i = 0; i < 3; i++) {
		hdr10_data.primaries[i][0] = 0;
		hdr10_data.primaries[i][1] = 0;
	}
	hdr10_data.white_point[0] = 0;
	hdr10_data.white_point[1] = 0;
	hdr10_data.luminance[0] = 0;
	hdr10_data.luminance[1] = 0;
	hdr10_data.max_content = 0;
	hdr10_data.max_frame_average = 0;

	/* Push the packet out via the vout device hook. */
	if (vinfo && vinfo->vout_device &&
	    vinfo->vout_device->fresh_tx_hdr_pkt)
		vinfo->vout_device->fresh_tx_hdr_pkt(&hdr10_data);
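For comparison, a hedged sketch of what sending real values instead of zeros might look like. The unit conventions here (x,y coordinates in 1/50000 steps, luminance[0] in cd/m², luminance[1] in 0.0001 cd/m², per CTA-861.3 / SMPTE ST 2086) and the R,G,B ordering of primaries[] are my assumptions about this struct, not verified against the Amlogic driver:

	/* Hypothetical example only: BT.2020 primaries, D65 white point,
	 * a 1000 / 0.005 cd/m^2 mastering display, MaxCLL 1000, MaxFALL 400.
	 * Units and primary order assumed per CTA-861.3, unverified. */
	hdr10_data.primaries[0][0] = 35400;	/* R x = 0.708  * 50000 */
	hdr10_data.primaries[0][1] = 14600;	/* R y = 0.292  * 50000 */
	hdr10_data.primaries[1][0] = 8500;	/* G x = 0.170  * 50000 */
	hdr10_data.primaries[1][1] = 39850;	/* G y = 0.797  * 50000 */
	hdr10_data.primaries[2][0] = 6550;	/* B x = 0.131  * 50000 */
	hdr10_data.primaries[2][1] = 2300;	/* B y = 0.046  * 50000 */
	hdr10_data.white_point[0] = 15635;	/* 0.3127 * 50000 */
	hdr10_data.white_point[1] = 16450;	/* 0.3290 * 50000 */
	hdr10_data.luminance[0] = 1000;		/* max mastering lum, cd/m^2 */
	hdr10_data.luminance[1] = 50;		/* min mastering lum, 50 * 0.0001 cd/m^2 */
	hdr10_data.max_content = 1000;		/* MaxCLL, cd/m^2 */
	hdr10_data.max_frame_average = 400;	/* MaxFALL, cd/m^2 */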

Also of note: unless I am missing something, the HDR packet is not EDID related (the Dolby VSVDB is from the EDID in the normal case); the HDFury alters that on input to the device, and then alters the HDMI HDR packet, like above, on the way back to the display.


A CoreELEC device (such as the robust Ugoos AM6B+) is around $150, but an HDFury with this functionality, like a Vertex2, is around $500, and a device such as a PC to effectively run MadVR is also expensive.

Including both the Dolby VSVDB EDID manipulation and the HDMI HDR packet manipulation (you are correct, the latter is not technically part of the EDID) in this software would give a person a full solution in one very cost-effective device. Paired with what you have already developed for the Dolby Vision VS10 engine, this would be one of the best devices currently available to play any video file (SDR, HDR10, HDR10+, LLDV, etc.) and also do configurable dynamic tone mapping even without a TV-Led DV display, which for projectors is the majority of cases.


Absolutely. To me a "neutral" HDR10 metadata (PQ0) would be quite all right, BUT we mustn't forget that there is also a colour space attribute in this metadata. If I'm not mistaken, take an example: your renderer is calibrated for DCI-P3 (and really aware of being in the corresponding DCI-P3 mode, meaning it has detected DCI-P3 input OR you have forced it into DCI-P3); in that case you would want to set the VSVDB to DCI-P3 AND the HDR10 metadata to DCI-P3.

(You could as well calibrate your renderer for BT.2020 even if it only covers DCI-P3; it would probably mean that it won't be able to switch between colour spaces, and so it doesn't care about the one indicated in the HDR10 metadata packet.)

Please correct me if I’m wrong. I’m just trying to improve our common understanding.

CPM, together with CE you have created a player that does not exist on the market.
This Ugoos, with the part of the code written by you and CE, performs the same functions as a Zidoo + HDFury Vertex 2 (or rather better).
I have the Vertex2 and also the Zidoo; now I'm only using the Ugoos AM6B+.


Maybe it is using the DCI-P3 VSVDB as the effective colour range limit for the DV processing, to match the target display's gamut ability.

DV processing would have been done in ICtCp, and the DV-LL output may always have a basis of BT.2020, as everything else should "fit" in that space, if I am not wrong.

Yes, DV = BT.2020.

Something is glitchy for me in the latest version; very strange behaviour, modes appear, then disappear. I.e. you can turn on VS10 and HDR-to-DV conversion, play the file, then go to the menu, and this mode can no longer be selected; only SDR will be available. Turn on the DV-LL mode, and there too HDR to DV can no longer be selected.
The same for DV.
Before flashing this version I did a full reset.
What is VS10 mode for, and how does it differ from DV-LL?

Also there is a problem with the movie Harry Potter and the Goblet of Fire (2005) 4K HDR, in a fragment at about 9 min 30 sec.
The brightness of the face is about 2500 nits; in DV-LL mode the details of the face are lost, while in TV-Led mode the details of the face are passed to the TV without loss.
The Dune Real Vision also loses facial details, because that player always does processing, but the Dune 8K Pro, even in LL mode, keeps the facial details, although it reduces brightness a lot.
It seems that the only mode that gets everything right is TV-Led. That's why using HDR to DV doesn't always help.
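For intuition on why that can happen: the player's tone mapping has to compress a very wide source range into the target's range, and above the knee the curve's slope goes nearly flat, so neighbouring highlight values collapse together. A hedged illustration with a generic soft-knee roll-off (this is NOT Dolby's actual curve, just the shape of the problem), assuming a low target such as a 96-nit projector:

#include <stdio.h>

/* Generic soft-knee roll-off (illustrative only): linear up to the knee,
 * then asymptotic compression towards vmax. */
static double rolloff(double v, double knee, double vmax)
{
	double t;

	if (v <= knee)
		return v;
	t = (v - knee) / (vmax - knee);
	return knee + (vmax - knee) * t / (1.0 + t);
}

int main(void)
{
	/* A 100-nit difference in the source highlights collapses to about
	 * 0.04 nits on a 96-nit target with the knee at 48 nits. */
	printf("2400 nits -> %.2f nits\n", rolloff(2400.0, 48.0, 96.0));
	printf("2500 nits -> %.2f nits\n", rolloff(2500.0, 48.0, 96.0));
	return 0;
}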


I had the same problem with previous versions. I did a reset on that page, then in my case selected mode On Demand, LLDV, then set each type having a problem (e.g. HDR10) to SDR and played a video of that type (e.g. HDR10), then went back, selected Dolby, and played the same file again. As cpm mentioned, once he changed the SDR8 and SDR10 output to be SDR only, this menu has had minor issues. You actually have to play a video file for the problem to resolve (not only select it in the menu). In my case I also had to play a video of each type that had the problem, similar to the issue of a DV video file playing in HDR mode. Since you only have to do this once when you upgrade to a newer version, it is not a big problem at all for such a great addition to CE.

You can try setting the VSVDB for LLDV using the spreadsheet (and tweak the max lum setting, setting it higher or lower than the recommended value for your display) to see if it resolves the problem and matches the TV-Led image.

This is indeed where I was confused. I would totally accept that the DV VSVDB colour primaries set gamut limits, similarly to luminance, so that every pixel near the gamut limits is rolled off smartly to avoid clipping. But last time I checked, putting BT.709 primaries there did not only do that: it really made every colour vastly more saturated (the same as if you announce an actual BT.709 stream as a BT.2020 stream by mistake; colours sit near the BT.2020 gamut edges instead of the BT.709 edges). This is why I was initially so sure about that assumption. But I was wrong, especially because a renderer can indicate gamut limits in its VSVDB block that are different from the BT.2020 ones while still being calibrated for BT.2020. So your assumption makes more sense. Thanks for the clarification. What I experienced might have been caused by something else in the chain.
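On the oversaturation mechanism: when real BT.709 content travels in a BT.2020 container, the conversion is a linear-light 3x3 matrix (ITU-R BT.2087). If the stream's primaries are mis-signalled, that step is effectively skipped and the BT.709 values are read as BT.2020 coordinates, which pushes every colour towards the BT.2020 gamut edges. A minimal sketch:

#include <stdio.h>

/* ITU-R BT.2087: linear-light BT.709 RGB -> BT.2020 RGB. */
static void rgb709_to_rgb2020(const double in[3], double out[3])
{
	static const double m[3][3] = {
		{ 0.6274, 0.3293, 0.0433 },
		{ 0.0691, 0.9195, 0.0114 },
		{ 0.0164, 0.0880, 0.8956 },
	};
	int r, c;

	for (r = 0; r < 3; r++) {
		out[r] = 0.0;
		for (c = 0; c < 3; c++)
			out[r] += m[r][c] * in[c];
	}
}

int main(void)
{
	/* Pure BT.709 red correctly re-expressed in the BT.2020 container.
	 * Mis-signalled, it would instead be read as (1, 0, 0), i.e. full
	 * BT.2020 red - visibly oversaturated. */
	double red709[3] = { 1.0, 0.0, 0.0 }, out[3];

	rgb709_to_rgb2020(red709, out);
	printf("709 red in 2020: %.4f %.4f %.4f\n", out[0], out[1], out[2]);
	return 0;
}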

(And yes, every other common colour space fits in that BT.2020 space.)

[VS10 only] was mainly an attempt to give functionality for SDR displays, e.g. it can still map down other content, where possible, to SDR. The other modes check the display capabilities and will not show if the display doesn't have DV / HDR PQ etc.

It will show the mapping modes the display could handle, but maybe I should lock it down to SDR only and call it Player Led [SDR], with only the SDR option available.
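Purely to illustrate the gating described above, something like the following; the names and structure are hypothetical, not the actual CE code:

/* Hypothetical sketch of per-mode capability gating. */
enum out_mode { MODE_SDR, MODE_HDR10, MODE_DV_LL, MODE_DV_STD };

static int mode_available(enum out_mode m, int sink_has_hdr_pq, int sink_has_dv)
{
	switch (m) {
	case MODE_SDR:
		return 1;			/* always offered */
	case MODE_HDR10:
		return sink_has_hdr_pq;		/* needs HDR PQ support */
	case MODE_DV_LL:
	case MODE_DV_STD:
		return sink_has_dv;		/* needs a DV VSVDB in the EDID */
	}
	return 0;
}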


Maybe this is a real-world example of a difference between VS10 and VS12?
The Dune 8K Pro is, I am thinking, VS12.
Your display, I guess, is newer and probably VS12-based logic too, maybe!


Edit: I just checked their website and it says:

Media processor: Amlogic S928X-K/J with Dolby Vision VS10 engine

I guess maybe that is a typo, as I think that SoC is, or is used with, VS12.

Out of interest, can it decode the FEL layer? If so, then for sure CE will be able to do it later, with all the pieces of the puzzle solved. I imagine not, otherwise others would have just been using this all along.