Using shared memory is tougher because you'll have to manage the size of the shared memory buffer yourself (or just pre-allocate enough up front), and you'll also have to manually manage the layout of the data structures you put in there. Once you have it tested and working, though, it will be easier to use because of its simplicity.
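If you're on .NET 4+, `MemoryMappedFile` handles the OS plumbing for you. Here's a rough sketch of the pre-allocate-and-manage-the-layout approach; the map and mutex names are placeholders I made up, and the "layout" is just a single `int` at offset 0:

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class SharedCounter
{
    static void Main()
    {
        // Pre-allocate a fixed-size shared buffer; every process must agree
        // on the buffer name, its size, and the layout of what's inside it.
        using (var mmf = MemoryMappedFile.CreateOrOpen("MyApp.SharedBuffer", 1024))
        using (var mutex = new Mutex(false, "MyApp.SharedBuffer.Mutex"))
        using (var accessor = mmf.CreateViewAccessor())
        {
            mutex.WaitOne(); // guard concurrent access across processes
            try
            {
                int value = accessor.ReadInt32(0);
                accessor.Write(0, value + 1);
                Console.WriteLine("Counter is now {0}", value + 1);
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```

Run two copies of that and they'll increment the same counter; anything fancier than an `int` means hand-rolling the serialization into the buffer, which is the "manually manage the data structures" part.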
If you go the Remoting route, you can use the IpcChannel instead of the TCP or HTTP channels for single-machine communication over named pipes: http://msdn.microsoft.com/en-us/library/4b3scst2.aspx. The problem with this solution is that you'll need some kind of registry (either in shared memory or some other persistent store) where processes can register their endpoints, so that when you're looking for one you can query for all the endpoints running on the system and find the one you want. The benefits of going with Remoting are that serialization and method invocation are all pretty straightforward, and if you later decide to move to multiple machines on a network, you can just flip the switch to use the networking channels instead. The cons are that Remoting can get frustrating unless you clearly separate "remote" calls from "local" calls.
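To give you an idea, the IpcChannel setup looks roughly like this (the pipe name and object URI are arbitrary placeholders):

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Shared contract: both processes need to reference this type.
public class Greeter : MarshalByRefObject
{
    public string Hello(string name) { return "Hello, " + name; }
}

class Server
{
    static void Main()
    {
        // "GreeterPipe" is the named pipe; swapping IpcChannel for a
        // TcpChannel here is the "flip the switch" part mentioned above.
        ChannelServices.RegisterChannel(new IpcChannel("GreeterPipe"), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(Greeter), "greeter.rem", WellKnownObjectMode.Singleton);
        Console.ReadLine(); // keep the host process alive
    }
}

class Client
{
    // Call this from the other process's Main.
    static void Run()
    {
        ChannelServices.RegisterChannel(new IpcChannel(), false);
        var greeter = (Greeter)Activator.GetObject(
            typeof(Greeter), "ipc://GreeterPipe/greeter.rem");
        Console.WriteLine(greeter.Hello("world")); // proxied across the pipe
    }
}
```

The registry problem shows up in that `"ipc://GreeterPipe/greeter.rem"` string: every client has to know it somehow, which is exactly what you'd stash in the shared registry.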
I don't know much about WCF, but it might also be worth looking into. Spider sense says it probably has a more elegant solution to this problem... maybe.
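For what it's worth, the WCF equivalent over named pipes looks something like this (untested sketch on my part; the contract, addresses, and names are all made up):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Hello(string name);
}

public class Greeter : IGreeter
{
    public string Hello(string name) { return "Hello, " + name; }
}

class Host
{
    static void Main()
    {
        // NetNamedPipeBinding is WCF's single-machine transport; swapping
        // the binding and address goes cross-machine, much like swapping
        // Remoting channels.
        using (var host = new ServiceHost(typeof(Greeter),
            new Uri("net.pipe://localhost/MyApp")))
        {
            host.AddServiceEndpoint(typeof(IGreeter),
                new NetNamedPipeBinding(), "greeter");
            host.Open();
            Console.ReadLine(); // keep the host alive
        }
    }
}

// Client side, from the other process:
//   var factory = new ChannelFactory<IGreeter>(
//       new NetNamedPipeBinding(),
//       new EndpointAddress("net.pipe://localhost/MyApp/greeter"));
//   IGreeter greeter = factory.CreateChannel();
//   Console.WriteLine(greeter.Hello("world"));
```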
Alternatively, you can create a "server" process, separate from all the others, that gets launched on demand (use a system Mutex to make sure only one instance is ever running) and acts as a go-between and registration hub for all the other processes.
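The single-instance check is the classic named-Mutex pattern (the mutex name here is a placeholder):

```csharp
using System;
using System.Threading;

class RegistryServer
{
    static void Main()
    {
        bool createdNew;
        // The "Global\" prefix makes the mutex visible across all sessions.
        using (var mutex = new Mutex(true, @"Global\MyApp.RegistryServer", out createdNew))
        {
            if (!createdNew)
                return; // another instance already owns the mutex, so bail out

            Console.WriteLine("Registry server running...");
            Console.ReadLine(); // serve registrations until shut down
            mutex.ReleaseMutex();
        }
    }
}
```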
One more thing to look into is the Publish-Subscribe (Pub/Sub) model for events. This technique helps when a listener is launched before the event source is available but you don't want to wait to register for the event: the "server" process maintains the event registry and links publishers up with subscribers.
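The registry inside the "server" process can be as simple as a topic-to-handlers map. Here's a bare-bones in-process sketch (in the real thing, the handlers would be remote proxies that subscribers registered over whichever channel you picked above):

```csharp
using System;
using System.Collections.Generic;

// Toy broker: subscribers can register before any publisher exists, and
// publishers just hand events to the broker by topic name.
public class EventBroker
{
    private readonly Dictionary<string, List<Action<object>>> _subscribers =
        new Dictionary<string, List<Action<object>>>();
    private readonly object _lock = new object();

    public void Subscribe(string topic, Action<object> handler)
    {
        lock (_lock)
        {
            List<Action<object>> handlers;
            if (!_subscribers.TryGetValue(topic, out handlers))
                _subscribers[topic] = handlers = new List<Action<object>>();
            handlers.Add(handler);
        }
    }

    public void Publish(string topic, object payload)
    {
        Action<object>[] snapshot;
        lock (_lock)
        {
            List<Action<object>> handlers;
            if (!_subscribers.TryGetValue(topic, out handlers))
                return; // nobody listening on this topic yet
            snapshot = handlers.ToArray(); // invoke outside the lock
        }
        foreach (var handler in snapshot)
            handler(payload);
    }
}
```

Since the broker owns the registrations, a subscriber that starts first just sits in the map until a publisher shows up, which is exactly the early-listener case described above.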