First off, the memory controller sits idle until the CPU asks it to do something (apart from refreshing DRAM, but that's overcomplicating things for this example).
In many basic CPUs, the rough sequence of events is:
- The CPU encounters a LOAD instruction (either a direct instruction or one generated by microcode). The address register is then loaded with the payload from this instruction (the address to read). The address register in this simple example is directly coupled to the ADDRESS bus. Note that fetching INSTRUCTIONS follows the same sequence, but may occur on a separate bus (Harvard architecture).
- The CPU asserts the READ line.
- The memory controller does whatever it needs to do in order to perform a read.
- The memory controller places the data on the DATA bus.
- The memory controller may or may not assert a DR (Data Ready) signal.
- Either by waiting a specific number of cycles, or waiting for the DR signal, the CPU then latches the contents of the DATA bus.
- The DATA bus latch is loaded into the target register.
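The sequence above can be sketched as a small simulation. This is a hypothetical illustration, not a real hardware API: all class and method names (`Cpu`, `MemoryController`, `handle_read`, etc.) are invented for the example, and the "buses" are just ordinary variables.

```python
class MemoryController:
    """Idle until the CPU asserts READ; then drives the DATA bus."""

    def __init__(self, memory):
        self.memory = memory  # address -> value

    def handle_read(self, address_bus):
        # Do whatever is needed to perform the read (row/column
        # select etc. in real hardware), then place the data on
        # the DATA bus and assert DR (Data Ready).
        data_bus = self.memory[address_bus]
        data_ready = True
        return data_bus, data_ready


class Cpu:
    def __init__(self, controller):
        self.controller = controller
        self.registers = {}

    def load(self, target_register, address):
        # 1. Address register is directly coupled to the ADDRESS bus.
        address_bus = address
        # 2. Assert READ; the controller responds with data and DR.
        data_bus, data_ready = self.controller.handle_read(address_bus)
        # 3. Wait for DR (here: just check it), then latch the DATA bus.
        assert data_ready
        latch = data_bus
        # 4. Load the latch into the target register.
        self.registers[target_register] = latch


memory = {0x1000: 0x2A}
cpu = Cpu(MemoryController(memory))
cpu.load("R1", 0x1000)
print(hex(cpu.registers["R1"]))  # → 0x2a
```

A real CPU waiting a fixed number of cycles instead of a DR signal would simply skip step 3 and latch after a known delay.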
This is a gross simplification of a modern CPU. Introducing caches, pipelines, and out-of-order execution makes this a much more involved sequence.