>CPU isn't backwards compatible with a 20 year old OS
Get with the times gramps.
>VM
>Virtual Machine
>VMs implement host CPU deprecated instructions
>>VMs implement host CPU deprecated instructions
They do, actually. They can implement/emulate all kinds of instructions that the CPU doesn't support directly.
A VM isn't emulation
>A VM isn't emulation
It's not *full* emulation, but what is a virtual IDE controller if not emulated?
Similarly, instructions not implemented on the CPU can be trapped and emulated, and even existing instructions can be disabled. Run qemu-system-x86_64 -cpu help and look at the CPUID flags you can enable and disable. Not all, but some of them can be enabled even on CPUs that don't support them in hardware; they're simply emulated in that case.
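To illustrate the trap-and-emulate idea in the abstract, here's a toy sketch. Every name in it is invented for illustration; real hypervisors trap the CPU's invalid-opcode exception (#UD) rather than dispatching on strings, and this has nothing to do with QEMU's actual internals.

```python
# Toy trap-and-emulate dispatcher. All names are invented for illustration;
# a real hypervisor traps the CPU's #UD exception instead of string-matching.

HW_SUPPORTED = {"add"}  # instructions our pretend CPU runs natively

# Instructions the "CPU" lacks but the VMM emulates in software on a trap
SOFT_EMULATED = {
    "popcnt": lambda x: bin(x).count("1"),
}

def execute(op, value):
    if op in HW_SUPPORTED:
        return value + 1                     # "native" execution path
    if op in SOFT_EMULATED:
        return SOFT_EMULATED[op](value)      # trapped, then emulated in software
    raise ValueError(f"invalid opcode: {op}")

print(execute("add", 41))    # native path: 42
print(execute("popcnt", 7))  # emulated path: 3 set bits
```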
how does it block implemented instructions?
You don't even need a VM. Back in the day there were kernels for things like OS X to run on normal PCs; the software required SSE3, for example, so to run it on SSE2-only CPUs those kernels incorporated snippets of code from QEMU to emulate the SSE3 instructions when software used them.
I've noticed this as well. Now I'm curious what instruction is causing the issue.
I really don't know, I used to think it was a problem with certain versions of VMware, but I tested with virtualbox and got the same results
Looks like our only option is
it's a bug in ryzen's implementation of v8086: https://www.os2museum.com/wp/vme-broken-on-amd-ryzen/
Nah, it's not that. It only affected early-run Ryzen 1000s - it was long-fixed in Zen+ (which is what OP has).
It's not fixed either on 3xxx series or 5xxx series Zen CPUs.
I previously had 3800X, and currently I have 5800X in my desktop and 5700U in my laptop, Win98 doesn't work properly in all cases.
Yes it is, anon.
If your Win9x installs are breaking, it's not because of the v8086 VME bug (https://www.os2museum.com/wp/vme-broken-on-amd-ryzen/).
Dude, the error that happens is literally the exact same
It was microcode patched in AGESA 1.0.0.6 in May 2017, and engineered out in Zen+ and later.
Further, the bug affected the IRET instruction (it didn't restore all registers in V8086 mode). This isn't what's happening ITT, so it's a different problem.
I'm sorry you're so deep in confirmation bias that you refuse to understand this, but that's not my problem.
unironically try WinME (last DOS-based)
test: file format binary
Disassembly of section .data:
00000000 <.data>:
0: 8b 45 f8 mov -0x8(%ebp),%eax
3: e9 45 8f ff ff jmp 0xffff8f4d
8: 6a 08 push $0x8
a: 6a 40 push $0x40
c: ff .byte 0xff
d: 15 .byte 0x15
e: 10 11 adc %dl,(%ecx)
stack pointers look normal, registers look normal. EFLAGS is normal. Interrupts are on, the system is not hung or single-stepping.
Some bytes seem to have been corrupted with FF's, creating this fricky jump instruction that sent the instruction pointer to Valhalla, triggering a general-protection fault, because 0xFFFFF...whatever is outside the code segment's bounds
How do you know these things anon?
lots of googling and a few years of my life that i'm never getting back
I hope you get the chance to use your knowledge to help people anon
from the screenshot
if that was true wouldn't the first instruction at eip be the jmp
>Some bytes seem to have been corrupted with FF's, creating this fricky jump instruction that sent the instruction pointer to Valhalla, triggering a general-protection fault, because 0xFFFFF...whatever is outside the code segment's bounds
nah, that's just your disassembler spewing garbage, jmp takes a rel32, that simply jumps backwards. the solution is
>that's just your disassembler spewing garbage
welp that is a bug. e9 is, in fact, a relative 32-bit jump. fricking gahnoo gays can't write anything for shit
>the solution is
it's a bug in ryzen's implementation of v8086: https://www.os2museum.com/wp/vme-broken-on-amd-ryzen/
no. hardware v8086 is never enabled. this is in a virtual machine on a modern system
>if that was true wouldn't the first instruction at eip be the jmp
i have no idea how win98 is formatting this hex dump. EIP is in the middle of that hexdump somewhere, because the first 2 bytes are a perfectly valid move.
changing e9 45 8f to eb 45 8f fixes everything, changing the 32-bit relative jump to an 8-bit one, yielding valid instructions after that instead of garbage. maybe that one bit got flipped somehow
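Anon's one-byte patch can be sanity-checked the same way: eb takes a signed 8-bit displacement, so the patched jump lands nearby instead of 28 KB backwards. This just redoes the arithmetic; it says nothing about whether the patch is semantically correct for the surrounding code.

```python
import struct

# Patched bytes: eb 45 (jmp rel8) at offset 3, replacing e9 45 8f ff ff.
insn_offset = 3
rel8 = struct.unpack("<b", b"\x45")[0]  # signed 8-bit displacement = 0x45
target = insn_offset + 2 + rel8         # EIP past the 2-byte insn + rel8

print(hex(target))  # 0x4a: a short forward hop instead of -0x70bb
```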
>no. hardware v8086 is never enabled. this is in a virtual machine on a modern system
why would it not be? Since Sandy Bridge (on the Intel side, but surely AMD has something similar) you have unrestricted guest mode, which lets a guest drop down to real mode or use v8086. Either way, others have reported the same issue with VMs: https://msfn.org/board/topic/177951-important-for-anyone-trying-to-run-windows-9x-under-a-ryzen-based-virtual-machine/
>unrestricted mode which can allow a guest drop down to real mode or do v8086
>Unless you were using an emulator like PCem you're using the hardware v8086 mode
>dude change the execution mode in ring3 lmao
frick x86 to hell and back
>no. hardware v8086 is never enabled. this is in a virtual machine on a modern system
Even in a virtual machine the virtualized OS can enable and use v8086 mode, and in DOS-based Windows it's used for disk I/O among other things. If you actually have the bug, try installing 32-bit Windows XP; it should crap its pants and not work, since it uses v8086 for the generic graphics driver.
Unless you were using an emulator like PCem you're using the hardware v8086 mode
>no. hardware v8086 is never enabled. this is in a virtual machine on a modern system
As I understand it the bug in silicon affected the real VME as well as the AMD-V VME.
If the interrupt handling is bugged, then maybe it's not that the bits got flipped, but that it's jumping to a wrong address instead of the ISR and executing random garbage, or code meant for 32-bit mode while in 16-bit mode.
> mask out the VME CPUID bit
Uhhhh, what? How do I do that in QEMU? Is replacing a guest CPU to a specific model enough?
-cpu host,-vme
Something like that.
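That flag works by clearing the VME feature bit the guest sees: VME is reported in CPUID leaf 1, EDX bit 1, and an OS that doesn't see the bit won't enable CR4.VME and falls back to plain v8086 handling. A toy sketch of the masking (the example EDX value is made up, though plausible):

```python
VME_BIT = 1  # CPUID.01H:EDX bit 1 = Virtual-8086 Mode Extensions

def mask_vme(edx):
    # Roughly what "-cpu host,-vme" amounts to: hide the feature so the
    # guest OS never sets CR4.VME and uses plain v8086 handling instead.
    return edx & ~(1 << VME_BIT)

edx = 0xBFEBFBFF  # made-up but plausible leaf-1 EDX with VME set
print((edx >> VME_BIT) & 1)             # 1: host advertises VME
print((mask_vme(edx) >> VME_BIT) & 1)   # 0: guest no longer sees it
```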
amd had some shit with 98, that was solved by some patch that is available somewhere
Ok, the retail version of WinME didn't even boot the iso
Will try the OEM version now
Why do you think it would be different
It was; the OEM iso actually got detected and installed without any hitches.
Color me surprised
The VM software is at fault here.
Intel: has a relatively robust verification team, but are also greedy israeli homosexuals
amd: does not have a robust verification team and is just as israeli.
the choice is yours... anyhow, it's not the first time AMD has fricked something up and had to publish errata.
dear newbie,
you need to emulate the CPU as well
use PCem
homosexual
>vmware
Works fine for me in VMware Workstation with a 5600X.
how so?
I took 98SE for a spin on my 3400G with VirtualBox and had no problems either.
I'm starting to suspect OP has some weird-ass configuration fault.
This. I used to have a 2600X and can't remember anything wrong with running 98 in a VM, I used to do it a lot to install old games for my computers so I could just transfer the installed games instead of having to install on each one.
use QEMU, moron