I am fairly new to assembly language but not new to programming in general. I will be looking over some machine organization books, but I decided to look ahead at CPU core emulation. I found some material on the internet which got me up to speed on emulator programming, but I came to a very important question. At least I have not found a detailed explanation of this yet, which is why I am here to ask.
Hi Rob0Tix,
First and foremost, Welcome to the forum.
To make answering your questions easier/clearer, I'll break up your statement a bit and answer your post in pieces. So please bear with me.
At the most fundamental level, what is the minimal instruction set that needs to be implemented in a CPU core for it to be able to functionally execute a program?
The BARE MINIMUM number of instructions you need to implement a CPU is actually ONE. Wikipedia has a good page describing how several one-instruction set computers (OISCs) work. This shouldn't be surprising if you consider that CPUs are regularly built on FPGA chips, and those are simply arrays of universal logic gates (like NANDs or NORs).
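To make that concrete, here is a sketch of an OISC emulator in Python using the classic "subleq" instruction (SUBtract and branch if Less than or EQual to zero). The memory layout and the halt-on-negative-address convention are common illustrative choices, not a specific real machine:

```python
def run_subleq(mem, pc=0):
    """Run a subleq program: each instruction is three words a, b, c.
    Semantics: mem[b] -= mem[a]; if the result <= 0, jump to c.
    By convention here, jumping to a negative address halts."""
    while pc >= 0:
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Example: add mem[9] and mem[10] using only subleq, with mem[11] as a
# zeroed scratch cell.  The three instructions compute Z = -X, then
# Y = Y - Z = Y + X, then clear Z and jump to -1 to halt.
program = [
    9, 11, 3,    # subleq X, Z  -> Z = -X, fall through
    11, 10, 6,   # subleq Z, Y  -> Y = Y + X, fall through
    11, 11, -1,  # subleq Z, Z  -> Z = 0, always branches; -1 halts
    4, 7, 0,     # data: X = 4, Y = 7, Z = 0 (scratch)
]
result = run_subleq(program)
# result[10] is now 11 (4 + 7)
```

Everything a "real" instruction set gives you (add, copy, jump, compare) can be synthesized out of sequences like these; it's just painfully verbose, which is why practical CPUs have richer instruction sets.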
I understand that the assembler sits on top of machine code, so for example in DOS, the assembler introduces interrupts to allow for more complex programming. However, before there is the assembler and interrupts for the operating system, there is simply the CPU and machine code.
I want to break here, just for clarity. An interrupt is a signal, generated by hardware or software, which forces a subroutine to be executed by the CPU. Your description is accurate, except that the hardware usually provides on-board subroutines called the Basic Input/Output System (or BIOS). These are built into the motherboard (or sometimes on-chip in flash), so they are just as much hardware as the gates I mentioned earlier.
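The mechanism itself is simple enough to sketch. Here's a toy Python model (the class and names are my own illustration, not any real BIOS or CPU) of how an interrupt forces a subroutine call: the CPU finishes the current instruction, saves where it was, looks up a handler address in a vector table, and resumes at the handler:

```python
class ToyCPU:
    def __init__(self, vector_table):
        self.pc = 0
        self.stack = []
        self.vectors = vector_table   # interrupt number -> handler address
        self.pending = None           # latched interrupt request, if any

    def raise_interrupt(self, number):
        # Hardware (or a software INT instruction) asserts a request line.
        self.pending = number

    def step(self):
        # ...execute the instruction at self.pc, then advance...
        self.pc += 1
        # Between instructions, the CPU checks for a pending interrupt.
        if self.pending is not None:
            self.stack.append(self.pc)            # save the return address
            self.pc = self.vectors[self.pending]  # jump to the handler
            self.pending = None

cpu = ToyCPU({0: 100})   # pretend the handler for interrupt 0 lives at 100
cpu.raise_interrupt(0)
cpu.step()
# cpu.pc is now 100, with the return address (1) saved on the stack
```

A real handler would end with a "return from interrupt" instruction that pops the saved address, and real CPUs also save flags and may disable further interrupts, but the vector-table lookup is the heart of it.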
Initially, when computers were first designed, the programs that they ran had to be built on top of just the CPU and whatever other hardware was available.
And in theory, they still are. The difference is that instead of having multiple packaged DIP chips on the motherboard, microprocessor manufacturers have been adding more and more logic onto a single wafer of silicon, so that now you even have full "System on a Chip" solutions which take up minimal circuit-board real estate, allowing the design of things like IoT devices. That's why almost EVERYTHING has a computer in it these days.
I'm trying to get as much detailed information on this as I possibly can, so I can have a better structural understanding of both the computer and especially how a CPU functions.
Can anyone here go into detail on this or provide a link to documentation that can help me better understand this question?
I have to say that the most under-recognized and under-appreciated work has to be Kenneth Short's book Microprocessors and Programmed Logic. I came across this book by accident at a used bookstore about 15 years ago. The book covers building computers using the 8085A CPU with some references to the 8086. This book is ideal for anyone who likes electronics because it explains how to build a minimal three-chip 8085A microcomputer system and works its way up to multiple-master distributed systems using the MULTIBUS structure. Most of this technology is now inside of the chips, but it's great to see how these things worked decades ago.
If you're not shy of learning new "languages," you might look into learning an HDL (hardware description language) such as VHDL or Verilog. One programmer to another, the big thing you have to get used to is that when you're using VHDL or Verilog, you ARE NOT PROGRAMMING. These design languages will fool you into thinking you're programming with the familiar look of a programming language, but they aren't. That took time for me to get used to. It's important because in HDLs, almost everything happens at the same time: if you write three lines of code, they won't necessarily execute one after another; the third might take effect first, the first second, and the second last... usually all at the same time. It's best to think of each line of code as a single electrical path and design in a more functional style (thinking of things as inputs and outputs rather than procedurally).
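To see how "every line at once" differs from ordinary code, here's a rough Python model of clocked logic (my own illustration, not real HDL syntax): every register's next value is computed from the previous state, then all of them are committed at once on the clock edge, the way non-blocking assignments behave in Verilog:

```python
def clock_tick(state):
    """Model one rising clock edge: compute every register's next value
    from the OLD state, then commit them all simultaneously."""
    nxt = {
        "a": state["b"],   # a <= b;
        "b": state["a"],   # b <= a;  -- the order of these lines is irrelevant
    }
    return nxt  # the commit: all registers update together

regs = {"a": 1, "b": 2}
regs = clock_tick(regs)
# regs is now {"a": 2, "b": 1}: the two registers swapped in one tick,
# with no temporary variable -- both "lines" read the pre-edge values.
```

In a sequential language, swapping `a` and `b` needs a temporary; in synchronous hardware it's free, because both assignments sample the state before the edge. That's the mental shift the paragraph above is describing.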
The main reason I bring up HDLs is that most computer organization and architecture courses will walk you through an HDL in the design of a MIPS-like CPU. These are usually fairly simple and straightforward CPUs with modern-ish features, simulated on a computer and then built onto an FPGA.
If you're strapped for cash, you could check out the
Microprocessor Design page on wikibooks.org or
Niklaus Wirth's Project Oberon. A word of warning about Project Oberon: the documentation and code are all freely available, but the Spartan-3 board used in the book has been retired by Digilent, and the replacement (Nexys 4) is $320 USD. I'd say it's worth it, but only if you are planning to really build a computer and not just for a passing interest.
Regards,
Bryant Keller