Reading the latency from the EDID will not work; it is neither reliable nor supported by most systems. It is also a fact that the latency differs between resolutions:
vLatency: Invalid/Unknown
aLatency: Invalid/Unknown
i_vLatency: Invalid/Unknown
i_aLatency: Invalid/Unknown
So I added a new latency tweak to advancedsettings.xml:
<advancedsettings>
  <video>
    <latency>
      <resolution>
        <strId>2160p24hz</strId>
        <delay>-166.833344</delay>
      </resolution>
    </latency>
  </video>
</advancedsettings>
So a different latency can be set for each resolution. The tweak is applied with the most specific match winning: the global <delay> comes first, it may be overridden by the <delay> of a matching <refresh> sub node, and that in turn may be overridden by the <delay> of a matching <resolution> sub node.
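To make that precedence concrete, here is a minimal C++ sketch (the struct and function names are hypothetical, not the actual Kodi/CoreELEC code):

#include <optional>

// Hypothetical view of the parsed <latency> tweak for the current mode.
struct LatencyTweak
{
  float globalDelay = 0.0f;             // <latency><delay>
  std::optional<float> refreshDelay;    // <delay> of a matching <refresh> sub node
  std::optional<float> resolutionDelay; // <delay> of a matching <resolution> sub node
};

// The most specific value wins: resolution over refresh, refresh over global.
float GetTweakDelay(const LatencyTweak& tweak)
{
  if (tweak.resolutionDelay)
    return *tweak.resolutionDelay;
  if (tweak.refreshDelay)
    return *tweak.refreshDelay;
  return tweak.globalDelay;
}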
So the latency is now calculated as:
- tweak latency [advanced settings latency]
- + display latency [(framebuffercount + 1) / framerate]
- - user-set audio delay
So for the sample above, with a resolution latency of -166.833344 ms, the final value will be 0 ms, because the display latency is [(3 + 1) / 23.976] = 166.833344 ms and -166.833344 ms + 166.833344 ms - 0 ms = 0 ms.
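As a sanity check, the same calculation as a small C++ sketch (variable names are illustrative; the values are the ones from the example above):

#include <cstdio>

int main()
{
  const float tweakDelayMs = -166.833344f; // <resolution> delay for 2160p24hz
  const int framebufferCount = 3;          // default triple buffering
  const float framerate = 23.976f;         // 2160p24hz

  // display latency = (framebuffercount + 1) / framerate, converted to ms
  const float displayLatencyMs = (framebufferCount + 1) / framerate * 1000.0f;
  const float userAudioDelayMs = 0.0f;     // user-set audio delay

  // tweak latency + display latency - user-set audio delay
  const float finalLatencyMs = tweakDelayMs + displayLatencyMs - userAudioDelayMs;
  std::printf("final latency: %f ms\n", finalLatencyMs); // ~0 ms
  return 0;
}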
Here is a test image:
Now the question is: is this enough? It cannot be auto-corrected, as the latency is different for every system. The defaults cover most hardware setups, but some need a tweak.
Or is another tweak needed, e.g. one keyed somehow by resolution + audio format?
Or is resolution alone enough?
There is also the question of whether, like the presentation buffer number, a buffer count can/should be used instead of a ms value. The default is 3 for triple buffering, and it can be decreased to 2 for double buffering. But maybe 2160p24hz just needs a different number, like 5 or 6.
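If a buffer number were used instead, it would effectively just parametrize the display-latency term of the calculation above. A purely hypothetical sketch of what such a number would mean in ms:

// Hypothetical: derive the display latency from a per-resolution buffer count
// instead of a fixed ms tweak. For 2160p24hz:
//   5 buffers -> (5 + 1) / 23.976 * 1000 ~= 250.25 ms
//   default 3 -> (3 + 1) / 23.976 * 1000 ~= 166.83 ms
float DisplayLatencyMs(int framebufferCount, float framerate)
{
  return (framebufferCount + 1) / framerate * 1000.0f;
}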