Author Topic: How many bytes are supposed to be on the stack for RETF from 32bit real mode?

Offline ben321

  • Full Member
  • Posts: 173
Ok, so I've been experimenting with 32bit protected mode, and am trying to get back to 16bit real mode. Of course, when you clear the CR0 register's protected-mode bit, you end up in a state somewhere between 32bit protected mode and 16bit real mode. I think this is called 32bit real mode. You stay in that state until you execute a far transfer that reloads the CS segment (which puts you back into 16bit real mode). In my case, since I got into protected mode with a far CALL instruction, I should be returning with a RETF instruction. However, while the initial call from 16bit real mode put 4 bytes on the stack (16bit segment and 16bit offset), the return to 16bit real mode (done with a RETF instruction run in 32bit real mode) should expect 6 bytes on the stack (16bit segment and 32bit offset). I've already compensated for this by adjusting the size of the values on the stack before the RETF instruction.
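
To make the setup concrete, here is a rough sketch of the kind of entry call being described, in NASM syntax (PM32_CS, pm32_entry and the other names are placeholders, and the GDT/CR0 setup is omitted):

    [BITS 16]
            ; (lgdt and CR0.PE setup omitted)
            call far [pm_entry_ptr]   ; 16-bit far call: pushes CS (2 bytes) + IP (2 bytes) = 4 bytes
    back_in_real:
            jmp $                     ; execution should resume here after the far return

    pm_entry_ptr:
            dw pm32_entry             ; 16-bit offset of the 32-bit entry point
            dw PM32_CS                ; 16-bit code selector from the GDT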

Now I've tested my code in DOSBox, but something strange is going on. Instead of using only 6 bytes for the RETF instruction, it seems to be using 8 bytes (32bit segment and 32bit offset) instead. The result is that it kept getting my stack out of sync, even though it put my code pointer at the right location after the RETF. It's like it popped 2 extra bytes off the stack, and I'm not sure what they were even used for. This doesn't make any sense to me, since the segment part of an address is never a 32bit number. I managed to fix it by pushing the segment number onto the stack as a 32bit number (and of course keeping the offset as 32 bits as well). But from my understanding, this extra fix (using 8 bytes on the stack for a 32bit RETF) shouldn't be needed. Is this just a DOSBox bug, or is this behavior correct on real hardware too? Is the fact that the RETF instruction is being run from the strange 32bit real mode, with a destination segment of a different bitness (16bit real mode), something that actually should cause the behavior I'm seeing?
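
For reference, the frame that finally made the RETF land correctly looks something like this (a rough NASM-style sketch; REAL_SEG and back_in_real stand in for the actual return segment and label):

    [BITS 32]                             ; still executing in the 32-bit code segment
            ; (CR0.PE already cleared at this point)
            push dword REAL_SEG           ; segment pushed as a full dword; RETF discards the upper 16 bits
            push dword back_in_real       ; 32-bit offset
            retf                          ; operand size is 32 here, so it pops 4 + 4 = 8 bytes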
« Last Edit: February 19, 2023, 09:43:45 PM by ben321 »

Offline ben321

  • Full Member
  • Posts: 173
I thought I had found a solution, but I realized I hadn't. My problem above still remains: the RETF from 32bit real mode to 16bit real mode is popping 8 bytes off the stack, instead of the 6 bytes that it should.
« Last Edit: February 19, 2023, 11:41:26 PM by ben321 »

Offline ben321

  • Full Member
  • Posts: 173
Ok, I finally found a partial solution. It doesn't answer my question about why the RETF instruction seemed to be popping 8 bytes off the stack instead of 6, but it does compensate for it. I needed to use an operand-size override byte on the far call from the 16bit mode to the 32bit mode, or alternatively an override byte on the RETF instruction that takes you back to the 16bit mode from the 32bit mode. Note that when changing the far call, you also need to correctly set the size of the data it uses as a pointer to the destination of the call. So when making the overridden far call, the pointer's offset field must be 32 bits in size (the segment field is still 16 bits), in contrast to the 16bit offset used by a normal 16bit far call. You don't need to change the pointer's offset field size (you can keep it a 16bit offset field) if you instead change the RETF to a RETFW, which forces the far return instruction running in 32bit mode to behave the way it normally would in 16bit mode. Both variants are sketched below.
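
Roughly, the two variants look like this in NASM syntax (the label and selector names are placeholders, and the two variants are alternatives, not meant to be combined; the o32 prefix and RETFW are what produce the 66h override byte):

    ; Variant 1: override byte on the far call, so it matches the plain RETF
    [BITS 16]
            o32 call far [pm_entry_ptr32] ; reads a 16:32 pointer and pushes CS (4 bytes) + EIP (4 bytes)
                                          ; = 8 bytes, matching what RETF pops in the 32-bit segment
    back_in_real:
            jmp $

    pm_entry_ptr32:
            dd pm32_entry                 ; 32-bit offset field
            dw PM32_CS                    ; 16-bit selector

    ; Variant 2: keep the normal 16-bit far call (16:16 pointer, 4 bytes pushed)
    ; and put the override on the far return instead
    [BITS 32]
    pm32_entry:
            ; (clear CR0.PE here before the far return)
            retfw                         ; 16-bit operand size: pops IP (2 bytes) + CS (2 bytes) = 4 bytes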