Anyone here unironically using Xorg with separate X screens?
It allows you to avoid the limitations of Xinerama such as bad handling of mixed refresh rates and VRR not working with multiple displays active.
I'm curious how many people are using such a setup.
Mixed refresh rates actually work perfectly fine with randr and xcb. I think VRR is indeed broken though.
>Mixed refresh rates actually work perfectly fine with randr and xcb.
Do applications actually use the correct refresh rate on all displays? Do they not just always run at the refresh rate of the primary display?
Many don't run correctly, and it's often mistakenly believed to be an xorg limitation, but it's actually not. I'd wager it's because most people still use xlib, which is limited in a lot of ways but way easier to program in. Also xcb's documentation is trash. However, with xcb you can correctly detect dynamic events like your window moving to a different monitor and update your rendering appropriately.
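To make the "window moved to a different monitor" detection concrete: you listen for geometry changes (ConfigureNotify, plus RandR notify events for layout changes) and pick the monitor whose rectangle overlaps the window the most. A minimal sketch of just that overlap computation, assuming the monitor list has already been fetched (via xcb_randr_get_monitors in a real XCB client; the struct and names here are made up for illustration):

```c
/* Hypothetical monitor rectangle -- in a real XCB client these fields
 * would be filled in from the xcb_randr_get_monitors() reply. */
struct mon {
    int x, y, w, h;
};

/* Length of the overlap of intervals [a0,a1) and [b0,b1), or 0. */
int overlap_1d(int a0, int a1, int b0, int b1) {
    int lo = a0 > b0 ? a0 : b0;
    int hi = a1 < b1 ? a1 : b1;
    return hi > lo ? hi - lo : 0;
}

/* Index of the monitor the window overlaps most, or -1 if it is
 * entirely off-screen. Call this whenever an event changes the
 * window's geometry; if the result changed, retune rendering to the
 * new monitor's refresh rate. */
int window_monitor(int wx, int wy, int ww, int wh,
                   const struct mon *mons, int n) {
    int best = -1;
    long best_area = 0;
    for (int i = 0; i < n; i++) {
        long area =
            (long)overlap_1d(wx, wx + ww, mons[i].x, mons[i].x + mons[i].w)
          * overlap_1d(wy, wy + wh, mons[i].y, mons[i].y + mons[i].h);
        if (area > best_area) {
            best_area = area;
            best = i;
        }
    }
    return best;
}
```

The computation is cheap enough to redo on every ConfigureNotify; with the winning index you look up that monitor's refresh rate and retarget your frame pacing.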
What about compositors? Is it possible to use xcb to make a compositor that works with mixed refresh rates?
It should be. I don't really use compositors, but for example the TearFree option (which is basically compositing just on the driver level) and glxgears work correctly.
Weird thing I've noticed about TearFree is that just enabling it can sometimes make compositors work correctly (i.e. no longer be capped to the lowest refresh rate). I wonder what the mechanism behind that is.
Anyway this is actually mostly irrelevant to me. My main motivation behind wanting to use separate X screens is VRR. There doesn't seem to be any way to make it work with Xinerama and Wayland is still like 5+ years away from being usable...
Xinerama is definitely too old for this. Randr is a more than complete replacement, but some people still have xinerama support lying around in their code anyway. The best bet for fixing this is probably patching xorg. I doubt it's somehow impossible, since there is already some sort of VRR support. Perhaps randr needs to be extended or something like that, but it should be doable.
x11 has many different ways to synchronize to the display. With the simple old ones it syncs to the primary display, which means a window on another monitor may run at a higher fps than that monitor, but it still doesn't tear. With the newer sync functions you can sync to a specific frame rate, which you can get by just checking the refresh rate of the display the window is on. This option also lets you handle frame overflows, making vsync perform smoother than the traditional sync options. You can also keep vsync enabled but sleep to fit the vsync of the display the window is on. x11 makes all of that possible. This is the method serious games use for more consistent frame times (even on windows).
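The "enable vsync but sleep to fit the display it's on" approach boils down to absolute-deadline frame pacing. A minimal sketch of the deadline bookkeeping, assuming the refresh rate has already been read from RandR for the window's current monitor (function names are made up; the actual sleep would be clock_nanosleep with TIMER_ABSTIME on CLOCK_MONOTONIC):

```c
#include <stdint.h>

/* Nanoseconds per frame for a given refresh rate. In a real client
 * the rate comes from RandR for whichever monitor the window is on. */
int64_t frame_period_ns(double hz) {
    return (int64_t)(1e9 / hz + 0.5);
}

/* Advance the frame deadline, handling overruns ("frame overflows"):
 * if we are more than a whole period late, resynchronize to now
 * instead of trying to catch up with a burst of late frames. The
 * caller sleeps until the returned absolute deadline. */
int64_t next_deadline_ns(int64_t deadline_ns, int64_t now_ns,
                         int64_t period_ns) {
    if (now_ns > deadline_ns + period_ns)
        return now_ns + period_ns;   /* overran badly: skip ahead */
    return deadline_ns + period_ns;  /* normal case: one period later */
}
```

Because the deadline is absolute rather than "sleep N ms from now", jitter in individual sleeps doesn't accumulate, which is where the "more consistent frame times" comes from.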
Most applications use a very simple vsync method that syncs to the primary display but it's easy to fix in xorg server. I can fix it, but you need to wait until I buy a VRR display to test it on lol
I have a VRR display but only one monitor kek. I attempted to connect my laptop to this monitor, and it wasn't able to detect that it was VRR capable. That might just be the limitation of the HDMI cable I was using, though, since it wasn't even able to run it at 4k 60fps (only 30fps).
XCB isn't needed. This behavior is usually controlled by GLX/EGL or Vulkan.
Applications that only cover one monitor will synchronize to the refresh rate of that monitor. The issue is compositors that create one window covering all monitors. If they created separate windows for every monitor I'm pretty sure mixed refresh rates would work, but none of them do.
(At least this behavior applies to Mesa, who knows what NVIDIA is up to with their driver)
I'm using two monitors, main one with 144hz and second with 60hz. Worked out of the box with xrandr by just setting the first to 144hz. Might have to mess with vsync settings a bit in order to avoid screen tearing but I haven't had any huge issues as of yet.
I did it for a while. Works well enough for me as I rarely move windows across monitors so I'm not losing much. Only reason I stopped is that awesome doesn't support it well. It kinda works if you run two WMs, but it's still annoying in some ways.
>bad handling of mixed refresh rates
Works on my machine.
>but it's still annoying in some ways
What are some of the issues that you ran into?
For example, switching focus between monitors was impossible without an xdotool hack and even then it was a bit buggy. Also setting up different tags on both monitors was a pain, among other things I don't remember any more.
Notably, awesome DID support separate X screens before, but it was dropped after version 4.1 or something. Maybe it could be patched back in, but I just went with xrandr instead.
>switching focus between monitors was impossible without an xdotool hack and even then it was a bit buggy.
I solved this by writing my own little C program with Xlib. It's working nicely so far.
>Also setting up different tags on both monitors was a pain among other things I don't remember any more.
This would probably be easy to patch into something like dwm, but I guess if you're using a more complicated WM that might be hard to do.
>I solved this by writing my own little C program with Xlib. It's working nicely so far.
not him, but can you share it anon senpai?
https://pastebin.com/CDpTGF7i
thanks tbh ill try it out later
I stumbled upon this way of doing multi-monitor when I decided to put an older video card to use in my set-up. Both monitors have the same refresh rate, so this hasn't been an issue.
It kind of feels like having two computers running, if that makes sense. You can't move windows between the two monitors, and it even goes so far that you can't have one program open on both monitors at once. I found myself using two web browsers so I could have internet information on both monitors.
But this also allows me to have one monitor swap between workspaces independently of the other, which I think will be useful for reference material.
>Xorg
This isn't 2003 anymore
tbh nvidia just werks, i dont have this problem on my 1080
Do you know if variable refresh rate on multi-monitor setups work on nvidia?
yea it does
Fricking hell. Mesa/xorg bros pls fix
It doesn't.
Why the frick are you lying?
I used to, mostly because I had four screens across two separate cards and couldn't make it work any other way. I strongly suspect that the reason I couldn't make it work any other way boiled down to "Nvidia are shitheads", multi-card works fine now with amdgpu, nouveau, and intel. VRR wasn't a thing back then so I never tried that.
Never, ever, under any circumstance, get Nvidia anything.
>multi-card works fine now with amdgpu, nouveau, and intel
No it doesn't. Xinerama completely disables 3D acceleration and copies everything to each card.
>Xinerama
found your problem
That's the only way to get multi-card on Xorg, unless you use Nvidia's Mosaic shit which only works on Quadros.
no it's not. I'm using two nvidia GPUs with a different monitor on each X screen/GPU, no Xinerama involved (i.e. can't drag windows between screens)
I'm using it but it seems to break gsync for me even without Xinerama, also there is a Qt6 bug where apps always start on the wrong monitor. Apparently multi-screen X is something so few people use that they don't actually test it.
I'm talking about without using multiple X screens. Read the post I'm replying to
Randr has supported that for like 10 years.
Randr is a protocol, none of those drivers support a screen spread across multiple GPUs without Xinerama.
It's implemented in xorg homie. Don't get smart with me. Your use case has been supported since 1.4.
https://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt?id=randrproto-1.4.0#n144
Linked the wrong line technically but whatever
https://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt?id=randrproto-1.4.0#n150
It's not my use case, I'm just correcting misinformation.
>It's implemented in xorg
Link to the code then, or you're just making shit up.
>implying xorg doesn't implement randr
uh anon what? This is the commit where they bumped the version (we're actually on 1.5 now). There's stuff before that where you can see them implementing this.
https://github.com/freedesktop/xorg-xserver/commit/c1602d1c17967bdd4db9db19b3a9c0dfca6a58aa
https://github.com/freedesktop/xorg-xserver/commits/master/randr
Post the commit where "multiple GPU rendering" is implemented then.
It's multiple commits, but it looks like it should be this series of changes that did it.
https://github.com/freedesktop/xorg-xserver/compare/3cbc4c10b52896324fe14d2ab56bd54577c9294c...c41922940adbc8891575b3321fadf01ff4cb5854
Those are all for PRIME support, i.e. two screens where one driver renders (source) and the other displays (sink). It's not multi-GPU rendering nor multiple GPUs on one screen.
>It's not multi-GPU rendering nor multiple GPUs on one screen
That's also PRIME, or PRIME render offload specifically.
See: It's not multi-GPU rendering nor multiple GPUs on one screen
>We want to render applications on the more powerful card and send the result to the card which has display connected.
>Xinerama completely disables 3D acceleration and copies everything to each card.
1. That depends on implementation. It worked fine with the Nvidia driver, for example.
2. No one uses Xinerama any more.
3. Copying buffers doesn't mean disabled 3D acceleration.
i used to run a separate x server to run games in 15 years ago
I do. I use my laptop in a docked configuration. The external monitor is the primary screen. The laptop's display runs a second X screen with an instance of suckless' tabbed and a small program I wrote to automatically reparent windows to that instance of tabbed. It also monitors how many tabs there are, and if there are none, it turns the monitor off (it also turns it back on should a new window get mapped). Additionally it handles a few keybindings.
I mostly use it for a terminal when I want to start something that takes a long time, but which I want to monitor like a download or an update. I also use it if I want to keep some documentation open, or watch a video in the background.
The downside I have noticed is that some programs using glx just crash on the side monitor (screen 1) while they work on the main one (screen 0). I think it's related to using them both with the same gpu in a zaphodheads configuration.
>The downside I have noticed is that some programs using glx just crash on the side monitor (screen 1) while they work on the main one (screen 0). I think it's related to using them both with the same gpu in a zaphodheads configuration.
This shit is not tested at all any more, so bugs like that wouldn't surprise me.
Yes, they do. It's part of RandR 1.4.
>--setprovideroutputsource provider source
> Set source as the source of display output images for provider. This is only possible if source and provider have the Source Output and Sink Output capabilities, respectively. If source is
> 0x0, then provider is disconnected from its current output source.
> --setprovideroffloadsink provider sink
> Set provider as a render offload device for sink. This is only possible if provider and sink have the Source Offload and Sink Offload capabilities, respectively. If sink is 0x0, then provider
> is disconnected from its current render offload sink.
Even the Nvidia driver lets you do this through xrandr nowadays.
It's part of PRIME/reverse PRIME and there's also render offloading.
See: https://wiki.archlinux.org/title/PRIME#Reverse_PRIME
>Yes, they do. It's part of RandR 1.4.
Nothing you quoted supports that. Render offload works by creating at least two screens, one being a GPU screen.
You have PRIME (render offload) and reverse PRIME (using another GPU as dumb output pipe).
What exactly are you missing?
What started this moronic conversation in the first place -- a singular X screen for multiple GPUs and multi-GPU rendering. Randr offloading is neither of those: there's >=2 screens and the rendering work isn't distributed across them.
Just attach multiple GPUs to the same screen
And once again the only way of doing that is with Xinerama or Nvidia mosaic. Thanks for playing you lost.
Just use multiple providers and put them on the same output. Isn't this what you want? Or do you think you can't do this for some reason?
>Isn't this what you want?
No read again.
>Here you can see how there is only one X screen in a dual GPU setup
What's this supposed to prove? Put an ErrorF in init_screen(). xrandr doesn't loop over every screen, it just shows the screen number of whatever was handed to it either as an optarg or the default screen from xlib.
>Both outputs from two different GPUs attached to the same X screen.
Yeah an output not a whole card. Try again.
>What the frick do you expect? That glxgears renders on all GPUs at the same time like some frankenstein SLI? You can choose to use either the iGPU, or the dGPU. Or the weaker of two dGPUs, whatever.
What do you think the definition of multi-GPU rendering is? What you're talking about is just glorified GPU switching.
>What's this supposed to prove? Put an ErrorF in init_screen(). xrandr doesn't loop over every screen, it just shows the screen number of whatever was handed to it either as an optarg or the default screen from xlib.
It proves you can use multiple GPUs on a single screen.
>Yeah an output not a whole card. Try again.
What does "whole card" mean?
>What do you think the definition of multi-GPU rendering is? What you're talking about is just glorified GPU switching.
Oh wow, way to move the goalposts.
First you were like this
>That's the only way to get multi-card on Xorg, unless you use Nvidia's Mosaic shit which only works on Quadros.
>Randr is a protocol, none of those drivers support a screen spread across multiple GPUs without Xinerama.
Which are both wrong.
Now you also want heterogeneous multi-GPU rendering. Wow. But hey, you can do that by creating separate GL/Vulkan contexts for each GPU. It requires explicit handling by the program (unless you have a literal SLI setup), but you can do it. This has nothing to do with Xorg, though.
>It proves you can use multiple GPUs on a single screen.
This is called "fallacy of composition". An RROutput is an abstraction of a display output, not a GPU.
>What does "whole card" mean?
a graphics card
>Which are both wrong.
your denial is impressive
>Oh wow, way to move the goalposts.
This was always my goalpost...
>Now you also want heterogeneous multi GPU rendering. Wow. But hey, you can do that by creating multiple GL/Vulkan contexts for each GPU. It requires explicitly handling it by the program (unless you have a literal SLI setup), but you can do it. This has nothing to do with Xorg, though.
You didn't answer my question. Let me try again: what definition of multi-GPU rendering are you using?
>This is called "fallacy of composition". An RROutput is an abstraction of a display output, not a GPU.
All this shit is abstractions to do specific things.
>a graphics card
You have the outputs and can use it for rendering. What is missing?
>This was always my goalpost...
You are this anon, right?:
This is the anon you were originally replying to who mentioned "multi-card":
>multi-card works fine now with amdgpu, nouveau, and intel
Which in this context obviously means PRIME/reverse PRIME, not what you are talking about. Which is possible, but not transparently unless you are using SLI. Vulkan explicitly supports it.
>All this shit is abstractions to do specific things.
things can be different
>You have the outputs and can use it for rendering. What is missing?
rendering distributed across multiple gpus with only one X screen
>Which in this context obviously means PRIME/reverse PRIME, not what you are talking about.
Fine, sorry. It wasn't obvious to me since that still needs a screen per GPU to work. But I see now that by screen you actually mean non-GPU screen.
>a singular X screen for multiple GPUs and multi-GPU rendering.
That's what these xrandr options do. You have a single X screen with as many outputs from multiple GPUs as you want, plus you can offload rendering if you like.
>Randr offloading is neither of those: there's >=2 screens
GPU screens are an Nvidia thing, plus they're just virtual render targets. They don't affect how anything is actually displayed.
>and the rendering work isn't distributed across them.
You can select which GPU to use for rendering at runtime with the DRI_PRIME variable.
>You have a single X screen with as many outputs from multiple GPUs as you want
false
>plus you can offload rendering if you like.
irrelevant
>GPU screens are an Nvidia thing, plus they're just virtual render targets. They don't affect how anything is actually displayed.
No, it's how randr offloading works on ALL drivers. From one of the commits previously mentioned:
https://github.com/freedesktop/xorg-xserver/commit/9d179818293b466ec6f1777f0b792e1fbbeb318c
What you want to personally call them is irrelevant. They're still X screens to the server with their own ScreenRec.
>You can select which GPU to use for rendering at runtime with the DRI_PRIME variable.
Which isn't multi-GPU rendering, dumbass, it's only one GPU. If glxgears covers multiple monitors on different GPUs, every pixel is still rendered by exactly the same single GPU.
Here you can see how there is only one X screen in a dual GPU setup: https://wiki.gentoo.org/wiki/AMDGPU#Xrand_doesn.27t_see_HDMI_port_with_hybrid_system
>iGPU: eDP
>dGPU: HDMI-A-1-0
Both outputs from two different GPUs attached to the same X screen.
>Which isn't multi-GPU rendering, dumbass it's only one GPU. If glxgears covers multiple monitors on different GPUs every pixel is still rendered by exactly the same single GPU.
What the frick do you expect? That glxgears renders on all GPUs at the same time like some frankenstein SLI? You can choose to use either the iGPU, or the dGPU. Or the weaker of two dGPUs, whatever.
Imagine using loonix on desktop...
i3 doesn't support it
Support what exactly? This is not really window manager dependent at all.
Each instance can only run on one X screen
And that should work with i3 if you really want to do that instead of just using xrandr. It's not window manager dependent.
he's saying he would need to run 2 i3 instances, meaning he can't move windows between both, or share the workspaces
How do i set it up?