Well, I had the Dolby settings stuck, and one of the test files I was using did not work. I have reset the settings and avoided the file that was giving problems, so now P5 is working with the standard CE build.
Not sure if I will have to do reshaping for my use case, but just using the HDMI code we discussed is not working, so I am working my way backwards through the code to find the appropriate place to use the conversion.
It would be much easier if there were documentation on the CE code and how the program flows, since for me it is a guessing game the way it is.
So I checked the IPT_MAP, and in the midst of changing the code and researching others doing the same, I found your interactions with the OSMC folks. I took a look at their open-source code (they also have secure private code, as you know). Although I have not found a solution for my problem yet, I believe their solution for transcoding ICtCp fulfills your needs for processing P5, including a way to (re)create the appropriate RPU. Not sure if you have looked at their open-source code in detail.
They do not have the latest code on GitHub to view (at least I could not find it, since what is there is very old or updates are not shown). I had to download the full code for the “tag” (see below), which is not that big:
The appropriate files are:
amcsc.c
set_hdr2_v0.c
There are also dependencies on the following files:
amcsc_pip.c
amvecm.c
amcsc.h
set_hdr2_v0.h
vframe.h
amvecm.h
hdr_curve.h
The portion that is secure is osmc_videoenhancement, which includes the following files:
videoenhancement.c
device.c
tee_core.c
tee_drv.h
optee_private.h
mod_devicetable.h
and probably others that I did not trace.
Some of these files have two versions: one in the OSMC secure folders and one that is not.
As you probably know, a good amount of the base code is Amlogic's as well, so there is a large overlap between CoreELEC and OSMC.
I think it could be resolved by enabling the second OSD layer on top of the existing one. This would also make a bunch of other stuff cleaner, and I suspect it may also resolve issues with 4K 60Hz videos.
But…when I tried to do it, I couldn’t work out how to enable the second OSD layer at all.
If anyone knows how / can work out how to enable the second OSD layer (with random / hardcoded / whatever data), I could take it from that point.
Take a look at bitdepth.c, which has this function and others related to OSD2 enablement.
As we discussed, I am trying to get the video stream just after the Dolby Vision engine completes all of its processing (specifically after the FEL process is done) and before any other processing begins. I know you do not use the Dolby engine, but based on your code, where (what function is called) do you go from the VIU/CSC to the VPP, where do you go from the VPP to the Video Encoder, and where do you go from the Video Encoder to the HDMI module? Thank you.
Thanks, I’ll take a look. It’s been a while since I tried the second OSD, and I have a better grasp on CE now.
I was under the impression that CE doesn’t do any other processing. Not heaps of certainty on that though. There may be some chroma up-sampling/down-sampling going on.
That is why I thought my previous suggestion of doing the color space conversion in the HDMI module would work; it appears to be the last possible stage at which the video stream can be touched, so the FEL layer would already be included.
In primary_render_frame, which is in video.c, I call my code to add the metadata to the OSD layer.
That works by finding the canvas of the OSD layer and directly writing the desired pixel values. This then makes it to the screen, as the OSD layer always overlays the video layer. Given the OSD and video streams are clearly separate at this point, this would be prior to the VPP block, which is responsible for blending all the layers together.
As the OSD layer is in an RGB format at this point (it gets converted to YUV at some later point), I guess it may also be on the input side of the VIU/CSC?
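The technique described above can be sketched in userspace, with the canvas simulated as a plain buffer. In the kernel you would instead resolve the OSD canvas index to its physical address and map it (the amlogic canvas helper names are not shown here, as their exact signatures should be checked in-tree); everything else really is just direct pixel writes:

```c
#include <stdint.h>

/* Simulated OSD canvas.  In the kernel, `base` would point at the
 * mapped canvas memory rather than a malloc'd buffer. */
struct osd_canvas {
    uint32_t *base;    /* ARGB8888 pixels */
    int width;         /* visible pixels per line */
    int height;
    int stride;        /* pixels per line including padding */
};

/* Write one pixel, bounds-checked. */
static void osd_put_pixel(struct osd_canvas *c, int x, int y, uint32_t argb)
{
    if (x < 0 || y < 0 || x >= c->width || y >= c->height)
        return;
    c->base[y * c->stride + x] = argb;
}

/* Embed a byte buffer as one opaque pixel per byte on the top scan
 * line -- the same idea as writing metadata into the OSD layer so it
 * survives to the screen on top of the video plane. */
static void osd_embed_metadata(struct osd_canvas *c,
                               const uint8_t *data, int len)
{
    for (int i = 0; i < len && i < c->width; i++)
        osd_put_pixel(c, i, 0, 0xFF000000u | data[i]); /* byte in blue ch. */
}
```

Since the OSD plane is RGB at this stage, no YUV conversion is needed when writing; the hardware converts the whole plane later.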
Beyond that, I haven’t touched any code related to how the actual video stream gets processed through the chain of different modules and don’t really understand it, so unfortunately I can’t help there.
If by any chance you do work out how to access the video stream in code at any point after the VPP module has blended the layers together, do let me know. That would be a far better spot to embed metadata.
Thank you. Currently I am looking at amvecm.c to see if that helps me; it seems to be at the VPP level and has metadata functions. You could take a look to see if it helps you too.
So now I am trying a combined approach: changing the CE code, but also changing the coefficients (e.g. csc_coeff_a1) of the HDMI color space converter function hdmitx_csc_config() as you outlined. The problem I am having is that in hdmitx_csc_config() the existing coefficients are in hexadecimal, but when I convert them to decimal the numbers are too large (some go as high as 30004). Even if I divide by 4096 (or 8192, or 16384) as described in one of the documents you provided, the value is still higher than 1.0, which I assume is not correct. Do you know how the numbers in those coefficients should look (base, multiplication, and range) and how I should convert from decimal to hexadecimal to make this HDMI conversion work correctly? Thank you.
Thank you. I am using a calculator that supports two's complement, so unless they are using some different methodology, I am accounting for negative numbers.
How is your P5 endeavor going?
By the way, the last point I found before the HDMI module that still does metadata manipulation is amdolby_vision.c, and that is where cpm has implemented the HDR InfoFrame metadata and the VSVDB injection. I know you do not use the Dolby engine, but you can cut or comment out anything you do not need.
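For anyone unfamiliar with the HDR InfoFrame mentioned above, here is a minimal sketch of how the CTA-861 Dynamic Range and Mastering (DRM) InfoFrame, the packet carrying static HDR metadata over HDMI, is laid out and checksummed. Only the EOTF byte of the 26-byte payload is filled in here; the mastering-display primaries and luminance fields are left zero, and this is an illustration of the packet format, not the amdolby_vision.c code:

```c
#include <stdint.h>
#include <string.h>

#define DRM_IF_TYPE    0x87  /* DRM InfoFrame type as transmitted */
#define DRM_IF_VERSION 0x01
#define DRM_IF_LENGTH  26    /* payload bytes */

/* buf must hold 4 + 26 bytes: type, version, length, checksum, payload */
static void drm_infoframe_pack(uint8_t *buf, uint8_t eotf)
{
    memset(buf, 0, 4 + DRM_IF_LENGTH);
    buf[0] = DRM_IF_TYPE;
    buf[1] = DRM_IF_VERSION;
    buf[2] = DRM_IF_LENGTH;
    /* buf[3] is the checksum, filled last */
    buf[4] = eotf;           /* e.g. 2 = SMPTE ST 2084 (PQ) */

    /* InfoFrame rule: all bytes, checksum included, sum to 0 mod 256 */
    uint32_t sum = 0;
    for (int i = 0; i < 4 + DRM_IF_LENGTH; i++)
        sum += buf[i];
    buf[3] = (uint8_t)(0x100 - (sum & 0xFF));
}
```

If you edit InfoFrame bytes in the kernel, the checksum must be recomputed the same way or the sink may discard the packet.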
As a heads up for anyone interested in this: it isn’t something I am currently looking at. I don’t have a use case, and there is other stuff I’d look at doing first.
I am getting closer, but I am unsure which video stream variable/structure is best to use if I want to convert the video stream from ICtCp to YCbCr (or RGB) with a color space conversion function I am writing. I think I can put the conversion function in either hdmi_tx_main.c or amdolby_vision.c, but again I am not sure which variable/structure and substructure is best, such as vinfo, vsif, or something else. Thank you.
Update to latest nightly with audio sync improvements
Removed default incorrect 4 frame audio delay present in the nightly
Fixed what appears to be a numerical comparison error for fractional frame rates. This seems to have reduced the variation in audio sync and doesn’t appear to have caused any new issues in limited testing. This is slightly experimental - comment if any issues appear.
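The actual change isn’t shown in this thread, but the class of bug is worth illustrating. Frame rates are often carried as integer millihertz (23976 for 24000/1001 fps) while frame durations are carried in 1/96000 s ticks, as in the amlogic vframe code (an assumption here); deriving one from the other and comparing with `==` silently fails when truncation lands one unit off:

```c
#include <stdlib.h>

/* Duration in 1/96000 s ticks -> millihertz, truncating.
 * 29.97 fps rounds to 3203 ticks, and 96000000 / 3203 truncates to
 * 29971 mHz -- which then fails an exact comparison against the
 * stored 29970.  This illustrates the class of error, not the
 * actual nightly code. */
static int fps_mhz_trunc(int duration_ticks)
{
    return 96000 * 1000 / duration_ticks;
}

/* Robust comparison: tolerate a small rounding error instead of == */
static int fps_matches(int a_mhz, int b_mhz, int tol_mhz)
{
    return abs(a_mhz - b_mhz) <= tol_mhz;
}
```

The tolerance must stay small enough (a few millihertz) that genuinely different rates like 23.976 and 24.000 never match.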