Learning about Dolby Vision and CoreELEC development

So far, I’ve found nine places where color space conversions can occur, thought I had disabled them or set them all to identity matrices, and overridden the hardcoded coefficients relating to the onscreen display CSC matrix in three more places. Despite this, I still haven’t been able to stop the GUI/pictures/software-decoded video from apparently being output as YUV.

If anyone knows where this output is being set as YUV, pointers would be greatly appreciated.

This would solve a whole lot of difficulties with trying to set individual bits in each pixel. That is easy in an image or software-decoded 4:4:4 video, but much harder in a YUV 4:2:0 video, which is what led to abusing the embedding of the DV metadata with the parity scrambling.
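To illustrate why subsampling makes per-pixel bit setting hard, here is a minimal sketch (hypothetical frame layout, not CoreELEC code): in 4:4:4 every pixel has its own chroma sample, so its LSB can be set independently, while in 4:2:0 one chroma sample is shared by a 2x2 block of pixels, so four pixels fight over the same LSB.

```python
def set_lsb_444(chroma, x, y, bit):
    """4:4:4: set the LSB of the chroma sample belonging to pixel (x, y)."""
    chroma[y][x] = (chroma[y][x] & ~1) | bit

def set_lsb_420(chroma, x, y, bit):
    """4:2:0: the sample at (x//2, y//2) is shared by a whole 2x2 block."""
    chroma[y // 2][x // 2] = (chroma[y // 2][x // 2] & ~1) | bit

# 4x4 frame; all chroma samples start at a neutral 128
c444 = [[128] * 4 for _ in range(4)]
c420 = [[128] * 2 for _ in range(2)]   # quarter-size chroma plane

set_lsb_444(c444, 0, 0, 1)   # only pixel (0,0) is affected
set_lsb_420(c420, 0, 0, 1)
set_lsb_420(c420, 1, 1, 0)   # clobbers the bit just set for (0,0)
```

The second 4:2:0 write lands on the same shared sample as the first, which is exactly the collision that does not exist in 4:4:4.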

Okay, but I won’t be able to do that until after April 29th.


Update: SUCCESS, tv-led DV on any device!

The catch: Currently only at a proof-of-concept stage / first triggering of the DV mode on the TV.


How to replicate:

  1. Install this build. This was built on 20.5 NG, no idea if it would work with 21.
  2. Set CoreELEC to 1080p 8-bit RGB output and change the refresh rate to trigger the changed mode. This will mess up the colors of some/most content; it is expected that it appears all pinkish.
  3. Play test_pattern_yuv_as_rgb.mkv
  4. Pause the video (not strictly required, but easier as the video is short). Do NOT have the OSD/GUI showing at all or let CoreELEC dim.
  5. With the test pattern showing, trigger tv-led DV mode with:

echo DV_enable > /sys/devices/virtual/amhdmitx/amhdmitx0/attr

  6. You should now be in tv-led DV mode. The colors of the picture will have changed and the tick should now only be in the top DV box of the image.

Note: bringing up the GUI, letting CoreELEC dim the screen, or anything other than keeping the video showing will cause the DV mode to stop.


This test is no longer needed. See updated post.

Good job, the next step is to play your own content in DV mode, right?

Right.

I’ll split the remaining steps to get tv-led DV working on all devices (not limited to CoreELEC; this also applies to computers, etc.) into two categories: technical uncertainty and implementation.


Technical Uncertainty
This refers to any steps in the process of going from a file to output from a device. To me, the only remaining part here is the modifications made to the DV metadata from the file before it is output from the device. From captures of the DV tunnel, I discovered that the metadata is modified, though these modifications were minor.

I haven’t yet looked into this in detail, but, from looking at the source code provided here, it doesn’t seem like anything major. This would need more examination, but it seems limited to the functions update_md_for_hdmi, update_dm_l0_for_hdmi, and update_dm_ext_for_hdmi. My first impression is that these functions are small and self-contained enough to be understood and reimplemented.

I do not see this being a particular problem, and it shouldn’t stop other people working on the implementation.


Implementation:
Steps needed:

  1. Embed the metadata (after it has been modified per the above) into the LSB of the chroma channel.
  2. Set the HDMI AVI infoframe to full-range 8-bit RGB.
  3. Send the vendor-specific infoframe.
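Step 1 could be sketched roughly like this (pure-Python illustration; the bit order, payload layout, and which chroma samples are used are my assumptions, not the actual DV embedding format):

```python
def embed_bits(chroma_row, payload):
    """Write payload bytes, MSB first, into the LSBs of consecutive
    chroma samples at the top of the frame (hypothetical layout)."""
    out = list(chroma_row)
    i = 0
    for byte in payload:
        for shift in range(7, -1, -1):
            bit = (byte >> shift) & 1
            out[i] = (out[i] & ~1) | bit
            i += 1
    return out

def extract_bits(chroma_row, n_bytes):
    """Recover the payload from the LSBs, as a sanity check."""
    data = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (chroma_row[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)

row = [128] * 64                    # one row of chroma samples
md = bytes([0x01, 0x02, 0xAB])      # stand-in bytes, not real DV metadata
row = embed_bits(row, md)
assert extract_bits(row, 3) == md
```

Since only the LSBs change, the visible impact on the top row of chroma is at most one code value per sample; the hard part, as noted below, is getting read/write access to those samples at the right stage of the pipeline.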

For CoreELEC:
Steps 2 and 3 are easy; it is known how to do them. Step 1 would be easy if the video data could be read and written at a stage after all processing (i.e. after the GUI is composited in) is done. I don’t know how, or if, this can be achieved.

To be clear though, the only difficulty here is actually getting access to the bytes to read/write to a small number of pixels.

This is something that licensed devices clearly can do. It would be worth seeing if the code for this part can be found, and whether it actually relies on parts of the SoC that are only functional on licensed devices.

For FEL content, adding in the EL layer would also need to be done - I have no idea how this could be implemented.

For computers:
Step 2 is easy. To me, step 1 appears to be a straightforward process; all it should take is adding a custom post-processing step to the video output. I can’t imagine this being hard for someone with experience in video post-processing on a computer.

For step 3, I don’t know if computers let you send arbitrary vendor-specific infoframes. If so, that should give tv-led DV. If not, the vendor-specific infoframe would need to be added after the computer; from my limited understanding, this is something an HDFury could do.
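For reference, the infoframe packing itself is mechanical. A rough sketch of building a vendor-specific infoframe follows; the checksum rule (all bytes of the packet sum to zero mod 256) is standard HDMI infoframe behaviour, but the OUI value and body bytes used here are placeholders, not real DV signalling:

```python
def infoframe_checksum(frame_type, version, payload):
    """HDMI infoframe checksum: chosen so that every byte of the packet
    (header, checksum, and payload) sums to 0 modulo 256."""
    total = frame_type + version + len(payload) + sum(payload)
    return (256 - (total & 0xFF)) & 0xFF

def build_vsif(oui, body):
    """Vendor-specific infoframe: type 0x81, version 0x01, then the
    24-bit IEEE OUI (little-endian) followed by vendor-defined bytes."""
    payload = bytes([oui & 0xFF, (oui >> 8) & 0xFF, (oui >> 16) & 0xFF]) + body
    cks = infoframe_checksum(0x81, 0x01, payload)
    return bytes([0x81, 0x01, len(payload), cks]) + payload

# 0x00D046 is assumed here to be the Dolby OUI; the two body bytes
# are placeholders and carry no real meaning.
pkt = build_vsif(0x00D046, bytes([0x00, 0x00]))
assert sum(pkt) % 256 == 0
```

On Linux the kernel already has helpers for this packing (`include/linux/hdmi.h`), so on CoreELEC this part reduces to feeding the right OUI and body to the existing amhdmitx code.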


Summary:
This is likely where I really need help from others on the implementation parts to make meaningful progress. In particular, any help and ideas on how to get the metadata embedded into the video stream in CoreELEC would be much appreciated.

Whilst you wait for advice from an actual CoreELEC dev, I would look at:

It looks like a good candidate for finding out where the read and write can be injected, maybe!


Does anyone know what is different between a playing and a paused video?

I’ve been trying to see a dynamic response, but have found that if the video is playing, the TV will only flick into DV mode and immediately return to normal mode. Pausing the same video (which is just one losslessly encoded image) and triggering DV mode with the VSIF works fine though.

@cpm It looks like that might be a useful reference; it seems to be used by the videocap capture code. Any idea where in the code I could add something that has access to the decoded video frames, so I can have a go at trying the read/write? I’ve tried to have a look, but don’t even know where to start…

@Portisch Your name is all over the capture code, any input on how I can read/write to the top pixels of decoded video frames, so I can embed the DV metadata on the fly?

In a similar boat at present, maybe the person here is contactable and has some insight?

https://patchwork.kernel.org/project/linux-amlogic/cover/20180823114954.30704-1-mjourdan@baylibre.com/

What caught my eye was the fact that, as you say, it was involved in decoder capture, and that there are 256 canvases, so possibly there are a few places to intercept and check/change the data.

Some related stuff - may shed a sliver of light:
https://forum.odroid.com/viewtopic.php?t=30724&start=50

The frame chain does start right after the decoder:

So I guess you will need to hijack the data somewhere there before it reaches the kernel at all.

Thanks, I’ll see what I can get up to.

Any idea if it is possible to hijack the data stream at the other end of the process, i.e., after the OSD has been combined / immediately before the data is passed to the HDMI module for transmission?

Does anyone know of a video format supported by hardware decode that has 4:2:2 or 4:4:4 subsampling? Finding one would make testing with custom patterns much easier.

Every format I’ve tried with 4:2:2 or 4:4:4 has fallen back to software decoding (or just not played), which doesn’t work, as that goes onto the OSD layer, which has a color space conversion.

Can someone please share a Blu-ray ISO with me in private?
All the Blu-rays I have locally are non-DV, as they are too old.

And also a M2TS Dolby Vision media file?

Maybe there will be some news soon…


If you don’t really need a full ISO, I have converted the well-known BL_EL.mkv file to a dual-layer Blu-ray here:

[link removed]

Both in ISO and BDMV format. The BDMV\STREAM folder contains the M2TS file.

I assume you are working on DTDL support? That would be really cool :grin:

BDMV dual-layer support is being worked on, yes.
I prefer original files, as the source needs to be developed against this data.

I assume that was you who sent me an access request? If so, I have sent you an e-mail (to the Yahoo address).

Yes. I will try the BL_EL.iso for tests, but I will need one big “virgin” ISO as well.

It’s there now. Please let me know when I can delete it again.

Since I’m currently exploring all box options for my new device, it would be much appreciated to know which devices could get BDMV support, if all goes well?