tags:

views: 13

answers: 1

Hi all,

I have a very basic question. The architecture I am studying offers a memory-mapped co-processor interface. Could somebody confirm that I understand this concept correctly: if a co-processor is attached, then some memory region on the bus system is reserved for communicating with it, i.e. for sending and reading data, executing commands, etc.?

Alternatively, there is the tightly coupled approach. I assume a different mechanism is used to communicate with the co-processor there, and that the overhead is lower because the co-processor is closer to the host. Is that right?

Thank you very much for some insight into this trivial problem ;).

A: 

I think your understanding is fundamentally correct, except that what is reserved is a piece of the address space, which probably doesn't correspond to any "real" memory anywhere. It is usually simply a way of using the same mechanism one would use to read and write memory locations to read and write data to/from the co-processor.

I don't think it's necessarily true that exposing a co-processor in a non-memory-mapped way will be more efficient. The interface could be implemented via dedicated instructions, for example, but that is not necessarily lower on overhead. I think it's very difficult to make general statements about this kind of thing without reference to the specifics of the architecture and implementation.

Perhaps the Wikipedia article on memory-mapped I/O will help you verify that your assumptions are correct; co-processor communication is pretty much just normal "device I/O" as far as the CPU is concerned.

Gian