<spacekookie>
CPU: Intel(R) Core(TM) i5-3360M CPU @ 2.80GHz
<qyliss>
^ also a kernel panic
<qyliss>
this is good because it means it's not just my machine
<spacekookie>
I assume we have different CPUs?
<qyliss>
I have an i5-2520M
<qyliss>
Mine is Sandy Bridge, yours is Ivy Bridge
<qyliss>
So based on current data, Ivy Bridge and older don't work, and Skylake and newer do
<qyliss>
So it would be especially useful if anybody has a Haswell or Broadwell they could test on
<qyliss>
In ThinkPad terms that'd be a Tx40
<qyliss>
Oh, also, everybody who ran the script: this will have created a nixos.qcow2 file in the directory you ran it in
<qyliss>
you might want to delete that :)
<qyliss>
I wonder if I could fix the leaveDotGit impurity
<IdleBot_2e4f9b4b>
Does it use git meaningfully? You could remove .git and re-git-init, I guess?
<qyliss>
maybe
<IdleBot_2e4f9b4b>
(I wonder what share of leaveDotGit could be replaced with this in Nixpkgs, but will not investigate right now)
<IdleBot_2e4f9b4b>
Is the non-KVM boot expected to hang for ~59.2 seconds?
<IdleBot_2e4f9b4b>
Then 60.5s more. Or maybe it expects something that is not provided in that jail…
<qyliss>
Yeah non-KVM is slow af
<qyliss>
why are you trying a non-KVM boot, though? The reproduction I posted should use KVM.
<IdleBot_2e4f9b4b>
To see what success looks like… (also launched a jailed terminal without KVM and wondered if I could do anything in it before launching a proper one… big mistake)
<alj[m]>
The oldest Intel processor I have access to is the same one you have, qyliss (inside the T420). I can reproduce that sometime today or tomorrow, if that's still relevant by then
<IdleBot_2e4f9b4b>
Aha, virtio_pci_probe -> vp_reset Oops-es (but apparently the boot process goes on? But fails because vda is not there with broken virtio?)
<IdleBot_2e4f9b4b>
i7-3740QM in Thinkpad W530
<qyliss>
it's weird because other virtio-pci devices work fine
<qyliss>
i7-3740QM is also Ivy Bridge so fits the pattern
<qyliss>
kernel-side, it's the vp_reset in virtio_pci_modern.c that's being called (as opposed to virtio_pci_legacy.c)
<qyliss>
and other devices are definitely calling that same function and succeeding.
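[Context, as a hedged sketch rather than the exact code in this tree: in kernels from around this period, vp_reset in virtio_pci_modern.c boils down to a one-byte MMIO write of 0 to the device_status field of the common config structure, which lives inside one of the device's PCI BARs, followed by polling until it reads back 0 — so an Oops here means the very first store into that BAR faulted. Abridged:]

    static void vp_reset(struct virtio_device *vdev)
    {
            struct virtio_pci_device *vp_dev = to_vp_device(vdev);

            /* Writing 0 to device_status resets the device; this is an
             * MMIO store into the common config region in a PCI BAR. */
            vp_iowrite8(0, &vp_dev->common->device_status);

            /* The spec requires waiting until device_status reads back
             * 0 before reinitializing. */
            while (vp_ioread8(&vp_dev->common->device_status))
                    msleep(1);

            /* Flush pending virtqueue/config interrupt callbacks. */
            vp_synchronize_vectors(vdev);
    }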
<qyliss>
Current focus of my attention is virtio_vhost_user_init_bar in virtio-vhost-user.c in QEMU
<qyliss>
The reset is the first time the kernel tries to write to the device's PCI BAR space
<qyliss>
I'm guessing it's just not getting set up right somehow
<qyliss>
I think it might be interesting to compare it to another QEMU device that does work to see if something is different
<IdleBot_2e4f9b4b>
Is this QEMU repository so old as to lack IvyBridge emulation?
<qyliss>
no
<qyliss>
It's from 2019
<IdleBot_2e4f9b4b>
I tried -cpu help and there is no IvyBridge — is it just switched off during the build then?
<qyliss>
don't know
<qyliss>
It should have all the way to skylake
<IdleBot_2e4f9b4b>
Ahhh, I am stupid. It has _some_ IvyBridge, but not the variant I copied from a fresher QEMU's -cpu help
<qyliss>
-cpu IvyBridge works for me
<IdleBot_2e4f9b4b>
But it does not seem to emulate faithfully enough to reproduce the failure
<qyliss>
yeah
<qyliss>
Similarly -cpu Skylake-Client on a Sandy Bridge machine will still panic
<IdleBot_2e4f9b4b>
Even -cpu qemu64
<qyliss>
I'm staring at virtio_vhost_user_init_bar but can't figure out how to find out what guest address the PCI BARs get mapped to
<IdleBot_2e4f9b4b>
If it would be specifically useful, I could try an i7-4770R tonight. Hopefully. Never got around to doing something about the failed RAM slot, but the second one with 8 GiB should be enough…
<IdleBot_2e4f9b4b>
(I have an old GB-BRIX that I have not booted for years)
<qyliss>
I think it would be useful to have that data
<qyliss>
We probably won't be much further along by tonight because I'll be going to sleep in the next few hours, I'd imagine
<qyliss>
(woke up at 19:00 UTC yesterday or so)
<IdleBot_2e4f9b4b>
If a programmer wakes up in the morning, it's just part of a round-the-clock phase shift…
<IdleBot_2e4f9b4b>
OK, hopefully it will not be too hard to bring up the partially broken BRIX, will try
<qyliss>
It feels a bit weird to me that one of the virtio-vhost-user PCI BARs is 64 GiB
<qyliss>
holy shit that's it
<qyliss>
changed that to be 64 MiB
<qyliss>
no more kernel panic
<qyliss>
I have no idea what an appropriate size of this thing is
<qyliss>
But I _highly_ doubt 64 GiB is the correct number
<qyliss>
cc puck edef
<qyliss>
let's iterate and find the maximum size my computer will allow
<qyliss>
32 GiB works
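[For anyone following along — a hedged sketch of the QEMU side, with field and constant names taken from the out-of-tree virtio-vhost-user patch, so this is an illustration rather than necessarily the exact tree in use: virtio_vhost_user_init_bar registers a 64-bit prefetchable memory BAR, and its size is the constant being changed here:]

    static void virtio_vhost_user_init_bar(VirtIOVhostUser *s)
    {
        /* BARs 2 and 3 are unused by virtio-pci, so claim BAR 2 for the
         * additional resources (doorbells, notifications, shared memory). */
        const int bar_index = 2;

        /* 1ULL << 36 is 64 GiB. The guest has to be able to map the
         * whole BAR into its physical address space. */
        const uint64_t bar_size = 1ULL << 36;

        memory_region_init(&s->additional_resources_bar, OBJECT(s),
                           "virtio-vhost-user", bar_size);
        pci_register_bar(&s->parent_obj, bar_index,
                         PCI_BASE_ADDRESS_SPACE_MEMORY |
                         PCI_BASE_ADDRESS_MEM_PREFETCH |
                         PCI_BASE_ADDRESS_MEM_TYPE_64,
                         &s->additional_resources_bar);
    }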
<IdleBot_2e4f9b4b>
That requires a rebuild, right?
<qyliss>
Yeah
<qyliss>
I'm working out of a QEMU tree
<puck>
qyliss: hrmmm. so like, 64 GiB BARs seem /reasonable/ to some degree
<puck>
like, this is a worst-case max i think
<qyliss>
sure
<qyliss>
I am kinda unclear why older CPUs wouldn't like them
<qyliss>
I wonder if Intel would have this limitation documented somewhere
<puck>
oh.
<puck>
wait a minute, what was the error again
<puck>
like, which address did it break on
<qyliss>
IdleBot_2e4f9b4b: if you wanted to test, you could substituteInPlace 1ULL << 36; to 1ULL << 35; in virtio-vhost-user.c
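[Concretely, that substitution amounts to this one-line change — bar_size being the name from the sketch above, so an assumption about the exact declaration:]

    -    const uint64_t bar_size = 1ULL << 36;   /* 64 GiB */
    +    const uint64_t bar_size = 1ULL << 35;   /* 32 GiB */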
<puck>
qyliss: i think i know the issue now, but it needs a bit of poking
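[A guess at where this is heading — stated as an assumption, not as what puck found: client Sandy Bridge and Ivy Bridge CPUs report 36 physical address bits in CPUID, while Skylake client parts report 39, and under KVM the guest sees the host's width; a 2^36-byte BAR can never be placed below a 2^36-byte physical address limit, which would match the Ivy-Bridge-and-older-fails / Skylake-and-newer-works split. A standalone check, buildable with plain cc:]

    /* physbits.c: print this machine's physical address width and
     * whether a 64 GiB BAR could fit below it. Hypothesis illustration
     * only. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 0x80000008: EAX[7:0] = physical address bits. */
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
            return 1;

        unsigned int physbits = eax & 0xff;
        unsigned long long bar_size = 1ULL << 36;   /* the 64 GiB BAR */

        printf("physical address bits: %u\n", physbits);
        printf("a 64 GiB BAR %s fit below 2^%u\n",
               bar_size < (1ULL << physbits) ? "can" : "cannot", physbits);
        return 0;
    }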