Bug 598 - Approaching Zero Latency
Summary: Approaching Zero Latency
Alias: None
Product: Libre-SOC's first SoC
Classification: Unclassified
Component: Specification
Version: unspecified
Hardware: Other
OS: Linux
Importance: --- enhancement
Assignee: Jacob Lifshay
Depends on:
Reported: 2021-02-15 03:58 GMT by Jacob Lifshay
Modified: 2021-02-16 23:44 GMT
CC: 2 users

See Also:
NLnet milestone: ---
total budget (EUR) for completion of task and all subtasks: 0
budget (EUR) for this task, excluding subtasks' budget: 0
parent task for budget allocation:
child tasks for budget allocation:
The table of payments (in EUR) for this task; TOML format:


Description Jacob Lifshay 2021-02-15 03:58:36 GMT
I had an interesting idea. You know how GPU manufacturers have been pushing display refresh rates higher and higher? (Some monitors are up to 360 fps, iirc.) The real reason is that it reduces overall system latency -- from mouse/keyboard/etc. to pixels lighting up on the screen.

So, why don't we try to reduce latency to tens or hundreds of microseconds by designing the system in a streaming fashion: render a few scan-lines, then send them to the display, rather than rendering a full frame before scan-out starts. We can also take adaptive sync (variable refresh rate) to its logical extreme and delay at the level of small groups of pixels if rendering is taking too long. This can be done in DisplayPort since it's packet-based; it obviously breaks the current spec, but I'd expect at least some displays to support it anyway, unintentionally of course.
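To put rough numbers on the "tens or hundreds of microseconds" claim, here's a back-of-the-envelope sketch. The 1080-line, 60 Hz panel and the 16-line slice size are assumptions for illustration, not parameters from this bug, and blanking intervals are ignored:

```python
# Back-of-the-envelope scan-out latency: full frame vs. streamed slices.
# Assumed panel: 1080 visible lines at 60 Hz (blanking ignored).
refresh_hz = 60
lines_per_frame = 1080
slice_lines = 16  # hypothetical slice rendered just ahead of scan-out

frame_time_us = 1e6 / refresh_hz                # ~16667 us per frame
line_time_us = frame_time_us / lines_per_frame  # ~15.4 us per line
slice_time_us = line_time_us * slice_lines      # ~247 us per 16-line slice

print(f"full frame:    {frame_time_us:.0f} us")
print(f"16-line slice: {slice_time_us:.0f} us")
```

So even at an ordinary 60 Hz, rendering and transmitting a 16-line slice at a time brings the scan-out granularity down from ~17 ms to a few hundred microseconds.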

We can also add mechanisms that let the scan-out engine translate (move) the displayed picture partway through scanning out a frame, potentially with sub-pixel resolution by interpolating the colors, and interpolate the translation amount over time to smooth out the tearing you'd otherwise get.
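A minimal sketch of the time-interpolated translation idea, assuming a hypothetical scan-out engine that can apply a horizontal pixel offset per scan-line (linear interpolation only; the sub-pixel color filtering is omitted here):

```python
def per_line_offset(start_px: float, end_px: float, lines: int) -> list:
    """Linearly interpolate a horizontal translation across one frame's
    scan-lines, so the picture slides gradually over the frame instead
    of jumping once mid-scan-out and producing a visible tear."""
    return [start_px + (end_px - start_px) * y / (lines - 1)
            for y in range(lines)]

# Toy-sized frame: picture shifts 8 px to the right over 9 scan-lines,
# each line offset 1 px further than the one above it.
offsets = per_line_offset(0.0, 8.0, 9)
```

A real scan-out engine would update `end_px` from the latest head- or mouse-tracking sample each frame, which is what makes this attractive for VR.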
Adding this to Vulkan shouldn't be that difficult, for full-screen games at least. In particular, this would be very useful for VR, since low latency is a necessity to avoid nausea.

This would allow us to achieve a lot of the benefits of extremely high refresh rates at a lower refresh rate, with power savings to match.
Comment 1 Jacob Lifshay 2021-02-16 23:44:32 GMT
I was thinking that if a GPU's raytracing performance was high enough, it could raytrace directly from the scanout circuits, no need for a frame buffer at all.