I had an interesting idea: you know how GPU and display manufacturers have been increasing refresh rates like crazy? (Some monitors are up to 360 Hz, iirc.) The real reason it helps is that it reduces overall system latency -- from mouse/keyboard/etc. to pixels lighting up on the screen. So, why don't we reduce latency to the level of 10s or 100s of microseconds by designing the system in a streaming fashion instead: render a few scan-lines at a time and send them to the display as we go, rather than rendering a full frame before scan-out starts. We could also take adaptive sync (variable refresh rate) to its logical extreme and delay at the level of small groups of pixels if rendering is taking too long. (This can be done in DisplayPort since it's packet-based, but it obviously breaks the current spec -- I'd still expect at least some displays to tolerate it, unintentionally of course.)
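To make the latency argument concrete, here's a toy sketch of the slice-streaming idea. Everything here is made up for illustration (the names, the slice size, the "input" callback); none of it is a real display or driver API. The point it demonstrates is that with per-slice rendering, each slice samples input state that is at most one slice-time old, instead of one full frame-time old as with whole-frame double buffering.

```python
SLICE_HEIGHT = 8    # scan-lines per slice (assumption: smaller slice = lower latency)
FRAME_HEIGHT = 64   # toy frame height

def render_slice(first_line, input_state):
    # Stand-in for real rendering: produce one "pixel" value per scan-line.
    return [input_state + line for line in range(first_line, first_line + SLICE_HEIGHT)]

def stream_frame(latest_input):
    """Render each slice just before the (simulated) beam reaches it.
    latest_input is sampled once per slice, so late input changes still
    make it into the lower part of the frame being scanned out."""
    scanout = []
    for first_line in range(0, FRAME_HEIGHT, SLICE_HEIGHT):
        state = latest_input()          # freshest input state for this slice
        scanout.extend(render_slice(first_line, state))
    return scanout

frame = stream_frame(lambda: 1000)
```

A real implementation would have to race the actual scan-out position and fall back (or stall the display, as suggested above) when a slice misses its deadline.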
We could also give the scan-out engine a mechanism to translate (move) the displayed picture partway through scanning out a frame -- potentially with sub-pixel resolution, interpolating the colors -- and interpolate the translation amount over time to smooth out the tearing you'd otherwise get.
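A minimal sketch of what that interpolation could look like, assuming a simple linear ramp of the offset across the frame and 1-D linear filtering for the sub-pixel part (both are my assumptions, not anything a real scan-out engine specifies):

```python
def per_line_offset(line, frame_height, start_offset, end_offset):
    """Linearly interpolate the horizontal translation applied at each
    scan-line, so the shift eases in over the frame instead of jumping
    at a single line (which would read as a tear)."""
    t = line / (frame_height - 1)
    return start_offset + t * (end_offset - start_offset)

def sample_subpixel(row, x):
    """Sample a row of pixels at fractional position x by blending the
    two neighbouring pixels -- the 'sub-pixel resolution' part."""
    x0 = int(x)
    frac = x - x0
    x1 = min(x0 + 1, len(row) - 1)
    return row[x0] * (1.0 - frac) + row[x1] * frac
```

So a translation that arrives mid-frame would be applied as a gradually increasing offset on the remaining scan-lines rather than an abrupt shift.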
Adding this to Vulkan shouldn't be that difficult, for full-screen games at least. In particular, this would be very useful for VR, since low latency is a necessity to avoid nausea.
This would allow us to achieve a lot of the benefits of extremely high refresh rates at a lower refresh rate, with power savings to match.
I was also thinking that if a GPU's raytracing performance were high enough, it could raytrace directly from the scan-out circuits, with no need for a frame buffer at all.
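The framebuffer-less idea can be sketched as a generator that produces pixels lazily in scan order -- each pixel is "traced" at the moment the scan-out engine would consume it, so nothing larger than the generator's position is ever stored. This is purely illustrative (the shading function and scene callback are invented), but it shows the data-flow shape:

```python
def raytrace_pixel(x, y, scene_state):
    # Stand-in for a real ray trace: shade pixel (x, y) from the current scene.
    return (x + y + scene_state) % 256

def scanout_stream(width, height, latest_scene):
    """Yield pixel values lazily in scan order. The scene state is
    re-sampled every scan-line, so there is no stored frame to go stale --
    the 'frame' only ever exists as a stream."""
    for y in range(height):
        state = latest_scene()   # scene can update between scan-lines
        for x in range(width):
            yield raytrace_pixel(x, y, state)

pixels = list(scanout_stream(4, 2, lambda: 0))
```

The hard part in practice would be guaranteeing the per-pixel trace keeps up with the pixel clock; combined with the per-pixel-group stalling above, misses could degrade gracefully instead of corrupting the frame.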