OK, so I made a simple Win32 program and then observed its output in OllyDbg.
If I do an unsigned multiply:
mov ax,0x4000
mov cx,0x0002
mul cx
AX receives the value 0x8000 (as I'd expect, since that is a valid unsigned 16-bit value).
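For completeness, here is the same snippet with what the register and flag windows show at that point; the comments are my own annotations, so they may be slightly off:

mov ax,0x4000
mov cx,0x0002
mul cx          ; DX:AX = 0x0000:0x8000, so AX alone holds the whole product
                ; carry and overflow flags are clear, since the high half (DX) is zero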
However, if I change it to a signed multiply:
mov ax,0x4000
mov cx,0x0002
imul cx
I still get the same value, 0x8000, stored in AX. However, that result is invalid: both inputs are positive, so the product must also be positive, but positive 0x8000 (+32768) does not fit in a 16-bit signed integer, whose range is -32768 to +32767. Why does the CPU complete this calculation anyway, instead of throwing an error (which the program would then have to handle via SEH, structured exception handling)?
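For completeness, this is what I can see in OllyDbg right after the signed version; the comments are my own reading of the register and flag windows, so they may be off:

mov ax,0x4000
mov cx,0x0002
imul cx         ; DX:AX = 0x0000:0x8000 again; no fault, execution just continues
                ; the only difference I can spot is that the carry and overflow flags are now set

Nothing is raised here for an SEH handler to catch; the program just runs on to the next instruction.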
With the CPU ignoring this error condition, it seems the MUL and IMUL instructions are effectively identical, at least as far as the value left in AX is concerned.
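Or do the two only diverge when one of the operands is negative? I have not tried this yet, so the comments below are just my expectation, not observed output:

mov ax,0xFFFF   ; 65535 if treated as unsigned, -1 if treated as signed
mov cx,0x0002
mul cx          ; I would expect DX:AX = 0x0001:0xFFFE here (65535 * 2 = 131070)

mov ax,0xFFFF
imul cx         ; ...but DX:AX = 0xFFFF:0xFFFE here (-1 * 2 = -2)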