Learning about Dolby Vision and CoreELEC development

what do you mean?

Think you're right here.

The thought to do the same trick occurred to me when I saw the completely messed up colors when I disabled the bt.601 conversion and set the output mode to rgb.

@DMDreview If you are talking about the file test 8bit rgb2.avi, that capture is limited-range RGB. The white values are (235, 234, 235) and the black is (16, 15, 16). I don't know why these aren't uniformly 235 or 16, though.

edit: the file dv-test-orig video.mp4 you made doesn't explicitly flag the range as limited, which I guess leaves it up to the player to guess… So the Oppo may be outputting full-range RGB but interpreting the video as full range when it is actually limited.
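For anyone following along, the standard 8-bit range mappings make the misinterpretation easy to check (this is my own sketch, not from the captures):

```python
# Sketch: converting between 8-bit limited ("video") and full ("PC") range.
# Limited range puts black at code 16 and white at code 235, so a player
# that treats limited-range data as full range shows washed-out blacks
# and slightly dim whites.

def limited_to_full(v: int) -> int:
    """Expand a limited-range code value to full range, clipping at 0/255."""
    out = round((v - 16) * 255 / 219)
    return max(0, min(255, out))

def full_to_limited(v: int) -> int:
    """Compress a full-range code value into limited range."""
    return 16 + round(v * 219 / 255)

print(limited_to_full(235))  # 255 (limited white -> full white)
print(limited_to_full(16))   # 0   (limited black -> full black)
print(full_to_limited(255))  # 235
```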

Would it be possible to capture a test file with YUV 4:2:2 and compare it with the original, to check if any unwanted conversion is occurring?

I made this capture while a PNG file was being displayed, not a video file.
Video name: test rgb png -oppo full.avi.
If you watch it in DaVinci, you need to specify in the parameters that it is full range.
Otherwise the range will be processed incorrectly.

For comparison, a capture from a laptop with Intel HD graphics: Dell PNG rgb full.avi


As you said, the computer gave a perfect output. The oppo capture is also full range, but has differences around edges, and the top pixels where the metadata goes. Somehow the metadata still decodes though…

Any processing done by CoreELEC will probably need to be disabled to let the metadata pass through untouched. Looks like that is what the existing DV code does.

@DMDreview I've got a test build for you now. As far as I can tell, it should give an output flagged as full-range RGB, without hardware-decoded video going through any color space conversions. Test build:
https://mega.nz/file/cdhX3YST#HexKZShkzfO7FxTdd5W0bOIlUnwW1a6cy16qCgIZVEs

I haven't yet worked out where the GUI is being converted from RGB, so when you set the GUI to 1080p 8-bit RGB, it will appear all pink. Any software-decoded videos will also have the same problem.

Hardware-decoded video, though, should be output flagged as full-range RGB. As this happens without any color space conversions, playing back any normal file you have will show distorted colors.

I have made a custom video, test_pattern_yuv_as_rgb.mkv, where, if the YUV values are interpreted as RGB, it looks as it should. This is what is output over HDMI.

This is based on the original png with some changes:

  • Blocks of solid colour to help with testing that each of the bits in each of the 3 channels can be individually set.
  • The embedding of the DV metadata has been altered by abusing the parity check, allowing it to be placed in the non-downsampled luma channel (which corresponds to green). This is needed because the hardware decode only seems to work with YUV 4:2:0, and the bits need to be set for each pixel.
  • Block of random values to test if anything unexpected happens.
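To illustrate why the metadata had to go into luma, here is my own sketch of the 4:2:0 constraint (not the actual tooling used):

```python
import numpy as np

# In 4:2:0 video, each chroma sample is shared by a 2x2 block of pixels,
# so any per-pixel bit pattern written into chroma is destroyed by the
# downsampling, while the luma plane keeps full per-pixel resolution.

h, w = 4, 8
chroma_full = np.arange(h * w, dtype=np.float64).reshape(h, w)  # per-pixel values

# Simulate 4:2:0: average each 2x2 block, then upsample back to full size.
sub = chroma_full.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
chroma_420 = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

print(np.array_equal(chroma_full, chroma_420))  # False: per-pixel detail is gone
print(chroma_420[0, 0] == chroma_420[0, 1])     # True: a 2x2 block shares one value
```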

Partial results from looking at my TV:

  • Output is indeed full range RGB. I can see changes where values are below 16, and they clip if I set my TV to limited range.
  • The custom test video has been formed as intended. It looks as it should when played back. Note for others: this is not how it looks when played back normally / on a computer.
  • The metadata is embedded where expected in the top row. This can now be seen visually, as (by abusing the scrambling / parity check) it has been placed in the most significant bit of the green channel. Note for others: this works because the data is technically still embedded in the 5th bit of the blue channel, even though those bits remain zero, due to the parity scrambling.

@DMDreview Test I would like from you:

  • A capture of test_pattern_yuv_as_rgb.mkv with CoreELEC set to 1080p 8-bit RGB output, using the provided build.

So far, I've found nine places where color space conversions can occur, and thought I had disabled them all or set them to identity matrices, and overridden the hardcoded coefficients relating to the on-screen display CSC matrix in three more places. Despite this, I still haven't been able to stop the GUI/pictures/software-decoded video from seemingly being output as YUV.
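For readers unfamiliar with CSC: a conversion is just a 3x3 matrix applied per pixel, so "disabling" one means forcing its matrix to identity. A rough illustration of my own, using the standard full-range BT.601 coefficients:

```python
import numpy as np

# A color space conversion multiplies each pixel by a 3x3 matrix (plus
# offsets). Forcing the matrix to identity makes the stage a no-op, which
# is the goal when the pixel data must pass through untouched.

# Standard full-range BT.601 RGB -> YCbCr coefficients.
bt601 = np.array([
    [ 0.299,     0.587,     0.114   ],  # Y
    [-0.168736, -0.331264,  0.5     ],  # Cb (a +128 offset is added after)
    [ 0.5,      -0.418688, -0.081312],  # Cr (a +128 offset is added after)
])
identity = np.eye(3)

pixel = np.array([200.0, 30.0, 60.0])  # an arbitrary RGB value

print(identity @ pixel)               # unchanged: [200.  30.  60.]
print(bt601 @ pixel + [0, 128, 128])  # very different values
```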

If anyone knows where this output is being set as YUV, pointers would be greatly appreciated.

This would solve a whole lot of difficulties with trying to set individual bits in each pixel. That is easy in an image or software-decoded 4:4:4 video, but much harder in a YUV 4:2:0 video, which is what led to abusing the embedding of the DV metadata with the parity scrambling.

Okay, but I won’t be able to do that until after April 29th.


Update: SUCCESS, TV-led DV on any device!

The catch: Currently only at a proof-of-concept stage / first triggering of the DV mode on the TV.


How to replicate:

  1. Install this build. This was built on 20.5 NG, no idea if it would work with 21.
  2. Set CoreELEC to 1080p 8-bit RGB output and change the refresh rate to trigger the changed mode. This will mess up the colors of some/most content - it is expected they appear all pinkish.
  3. Play test_pattern_yuv_as_rgb.mkv
  4. Pause the video (not strictly required, but easier as the video is short). Do NOT have the OSD/GUI showing at all or let CoreELEC dim the screen.
  5. With the test pattern showing, trigger tv-led DV mode with:

echo DV_enable > /sys/devices/virtual/amhdmitx/amhdmitx0/attr

  6. You should now be in TV-led DV mode. The colors of the picture will have changed, and the tick should now only be in the top DV box of the image.

Note: bringing up the gui / letting CoreELEC dim the screen / anything but keeping the video showing will cause the DV mode to stop.


This test is no longer needed. See updated post.

Good job. The next step is to play your own content in DV mode, right?

Right.

I'll split the remaining steps to get TV-led DV working on all devices (not limited to CoreELEC; this also applies to computers, etc.) into two categories: technical uncertainty and implementation.


Technical Uncertainty
This refers to any step in the process of going from a file to the device's output. To me, the only remaining part here is the modifications made to the DV metadata from the file before it is output from the device. I discovered from the captures of the DV tunnel that the metadata is modified, though these modifications were minor.

I haven't looked into this in detail yet, but from looking at the source code provided here, it doesn't seem to be anything major. It needs more study, but it seems limited to the functions update_md_for_hdmi, update_dm_l0_for_hdmi, and update_dm_ext_for_hdmi. My first impression is that these functions are small and self-contained enough to be understood / reimplemented.

I do not see this being a particular problem, and it shouldn’t stop other people working on the implementation.


Implementation:
Steps needed:

  1. Embed the metadata (after it has been modified per the above) into the LSB of the chroma channel.
  2. Set the HDMI AVI infoframe to full-range 8-bit RGB.
  3. Send the vendor-specific infoframe.
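A sketch of the general shape of step 1 (illustrative only: the real DV tunnel bit layout, scrambling, and CRC are not reproduced here):

```python
import numpy as np

# Illustrative only: serialise metadata bytes to bits and write them into
# the least significant bit of one channel, row-major from the top-left
# pixel. The real DV embedding has its own layout and scrambling.

def embed_lsb(frame: np.ndarray, payload: bytes, channel: int = 1) -> np.ndarray:
    """Return a copy of `frame` with payload bits in the LSBs of `channel`."""
    out = frame.copy()
    h, w = out.shape[:2]
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    ch = out[..., channel].flatten()
    assert bits.size <= ch.size, "payload too large for frame"
    ch[:bits.size] = (ch[:bits.size] & 0xFE) | bits
    out[..., channel] = ch.reshape(h, w)
    return out

def extract_lsb(frame: np.ndarray, nbytes: int, channel: int = 1) -> bytes:
    """Read the payload back out of the LSBs."""
    bits = frame[..., channel].flatten()[: nbytes * 8] & 1
    return np.packbits(bits).tobytes()

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
tagged = embed_lsb(frame, b"\xde\xad\xbe\xef")
print(extract_lsb(tagged, 4))  # b'\xde\xad\xbe\xef'
```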

For CoreELEC:
Steps 2 and 3 are easy / known how to do. Step 1 would be easy if the video data could be read and written to at a stage after all processing (i.e. after adding in the GUI) is done. I don’t know how/if this can be achieved.

To be clear though, the only difficulty here is actually getting access to the bytes to read/write to a small number of pixels.

This is something that licensed devices can clearly do. It would be worth seeing if the code for this part can be found, and whether it actually relies on parts of the SoC that are only functional on licensed devices.

For FEL content, adding in the EL layer would also need to be done - I have no idea how this could be implemented.

For computers:
Step 2 is easy. To me, step 1 appears to be a straightforward process, all it should take is adding a custom post processing step to the video output. I can’t imagine this being hard for someone with experience in video post processing on a computer.

For step 3, I don't know if computers let you send arbitrary vendor-specific infoframes. If so, that should give TV-led DV. If not, the vendor-specific infoframe would need to be added after the computer; from my limited understanding, this is something an HD Fury could do.
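For orientation, the framing for step 3 looks roughly like this. This is my own sketch of the generic CTA-861 infoframe structure; the actual DV payload bytes and OUI are deliberately not filled in:

```python
# Sketch of a vendor-specific infoframe: packet type 0x81, then version and
# length, a checksum chosen so every byte of the packet sums to 0 mod 256,
# then the IEEE OUI (least significant byte first) and the payload.

def build_vsif(oui: int, payload: bytes) -> bytes:
    body = bytes([oui & 0xFF, (oui >> 8) & 0xFF, (oui >> 16) & 0xFF]) + payload
    header = bytes([0x81, 0x01, len(body)])  # type, version, length
    checksum = (-(sum(header) + sum(body))) & 0xFF
    return header + bytes([checksum]) + body

# 0x000C03 is the OUI used by the generic HDMI vendor-specific infoframe;
# it stands in here for whatever OUI the DV VSIF actually carries.
pkt = build_vsif(0x000C03, b"\x00\x00")
print(sum(pkt) % 256)  # 0: checksum rule satisfied
```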


Summary:
This is likely where I really need help from others on the implementation parts to make meaningful progress. In particular, any help and ideas on how to get the metadata embedded into the video stream in CoreELEC would be much appreciated.

Whilst you wait for advice from an actual CoreELEC dev, I would look at:

Looks like a good candidate for finding out where the read and write can be injected, maybe!


Does anyone know what is different between a playing and a paused video?

I've been trying to see a dynamic response, but have found that if the video is playing, the TV will only flick into DV mode and immediately return to normal mode. Pausing the same video (which is just the one losslessly encoded image) and triggering DV mode with the VSIF works fine though.

@cpm It looks like that might be a useful reference; it seems to be used by the videocap capture code. Any idea where in the code I could add something that has access to the decoded video frames, so I can have a go at the read/write? I've tried to have a look, but don't even know where to start…

@Portisch Your name is all over the capture code, any input on how I can read/write to the top pixels of decoded video frames, so I can embed the DV metadata on the fly?

In a similar boat at present, maybe the person here is contactable and has some insight?

https://patchwork.kernel.org/project/linux-amlogic/cover/20180823114954.30704-1-mjourdan@baylibre.com/

What caught my eye was the fact that, as you say, it was involved in the decoder capture, and that there are 256 canvases, so there are possibly a few places where the data can be intercepted and checked/changed.

Some related stuff - may shed a sliver of light:
https://forum.odroid.com/viewtopic.php?t=30724&start=50

The frame chain does start right after the decoder:

So I guess you will need to hijack the data somewhere there, before it reaches the kernel at all.

Thanks, I'll see what I can get up to.

Any idea if it is possible to hijack the data stream at the other end of the process, i.e., after the osd has been combined / immediately before data is passed to the hdmi module for transmission?

Does anyone know of a video format supported by hardware decode that has 4:2:2 or 4:4:4 subsampling? Finding one would make testing with custom patterns much easier.

Every format I've tried with 4:2:2 or 4:4:4 has fallen back to software decoding (or just not played), which doesn't work, as that goes onto the OSD layer, which has a color space conversion.