I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).

The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).

I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:

  • Reading and writing to a file (…yes), very basic but not very scalable.
  • Pure sockets (I have no experience with sockets, but I seem to think I can use them to send data both locally and over a network). It seems cumbersome if doing everything in Cocoa, though.
  • Distributed Objects: seems rather inelegant for a task like this
  • NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results

I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.

+8  A: 

Disclaimer: Distributed Objects are not available on iPhone.


Why do you find distributed objects inelegant? They sound like a good match here:

  • transparent marshalling of fundamental types and Objective-C classes
  • it doesn't really matter whether clients are local or remote
  • not much additional work for Cocoa-based applications

The documentation might make it sound like more work than it actually is, but basically all you have to do is use protocols cleanly and export, or respectively connect to, the server's root object (a minimal server-side sketch follows below).
The rest should happen automagically behind the scenes for you in the given scenario.
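To make the "export the root object" step concrete, here is a minimal, hypothetical server-side sketch; the protocol, function, and registered service name are invented for illustration and not from the question:

    #import <Foundation/Foundation.h>

    // Hypothetical protocol shared between the daemon and its clients.
    @protocol JobDispatcher
    - (oneway void)submitCommand:(NSString *)command;
    - (bycopy NSString *)statusForJob:(NSString *)jobID;
    @end

    // In the daemon (e.g. in main() or applicationDidFinishLaunching:),
    // vend an object that conforms to the protocol:
    void vendRootObject(id<JobDispatcher> server)
    {
        NSConnection *connection = [NSConnection defaultConnection];
        [connection setRootObject:server];
        if (![connection registerName:@"com.example.mydaemon"]) {
            NSLog(@"Could not register the connection name");
        }
        [[NSRunLoop currentRunLoop] run];   // keep the daemon alive and serving
    }

A client then asks for a proxy of this root object with +[NSConnection rootProxyForConnectionWithRegisteredName:host:] and messages it as if it were local.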

Georg Fritzsche
+1 This is really the kind of problem Distributed Objects was designed to solve.
Rob Napier
The word "automagically" alarms me.
jbrennan
@jbr: Why, isn't it a good thing to have your work done for you? The linked documentation also explains the mechanism quite well, I think.
Georg Fritzsche
My aversion to DO stems from its extensive use of Exceptions… Doesn't feel natural.
jbrennan
Don't take this the wrong way, but are you really considering more work-intensive solutions because you dislike the design of DO? @jbr
Georg Fritzsche
Call me old-fashioned, but I'm always a bit dubious about technologies that add several layers of fiddly "automagic" in order to pretend that there is no difference between remote and local activity. (EJB, I'm looking at you. And CORBA. And DCOM. And even olde worlde RMI.) Maybe one day the world will be wrapped in the cosy embrace of a single continuous process space, but until then *here* is not the same thing as *there* and it's as well to remember that.
walkytalky
+4  A: 

I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.

About the options you have considered:

  1. Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be communicated via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but it does not offer the full power and flexibility the network normally has.

    Implementation: Practically, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.

    The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it has huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece or can it happen that any party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc? Can I add encryption later without toying around with things outside of my program code? ...)

    If portability were an issue (which, as far as I understood from your question, is not the case) then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not familiar with that platform. (A rough polling sketch for this approach appears after this list.)

  2. Sockets: The programming interface is certainly different; depending on your experience with socket programming, it may mean more work learning it first and debugging it later.

    Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives, which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).

    The advantage is that a lot of things are taken off your shoulders that would bother an implementation in 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).

    In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.

    [EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually - this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you e.g. when you have a PowerPC talking to an Intel Mac. This special case disappears with solutions 3 and 4, together with all of the other "correct information" issues. A socket sketch that includes the byte-order conversion appears after this list.]

  3. +4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)

    The idea is that your high-level program logic does not need to change (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation), without your having to bother about the particulars of the network infrastructure. Well, at least in theory.

    Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2. (A small client-side sketch appears after this list.)

    There is an introduction with two explicit examples at the Gnustep project server; it illustrates how the technology works and is a good starting point for experimenting: http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html

    Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the Macs-and-iPhone/iPad-only setup you mentioned) and a loss of portability to other languages. Gnustep with Objective-C is at best code-compatible, but there is no way to communicate between Gnustep and Cocoa, see my edit to question number 2 here: http://stackoverflow.com/questions/2848900/corba-on-macos-x-cocoa

    [EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
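To make item 1 concrete, here is a rough sketch of the "pull" logic, assuming a plain-text request file; the path, the second response file, and the memory handling are simplified and made up for illustration:

    #import <Foundation/Foundation.h>

    // Hypothetical poller: re-reads the request file whenever its
    // modification date changes. The path is an assumption.
    static NSDate *lastSeen = nil;

    void pollControlFile(void)
    {
        NSString *path = @"/tmp/myapp-requests.txt";
        NSDictionary *attrs = [[NSFileManager defaultManager]
                                  attributesOfItemAtPath:path error:NULL];
        NSDate *modified = [attrs objectForKey:NSFileModificationDate];

        if (modified && (!lastSeen || [modified compare:lastSeen] == NSOrderedDescending)) {
            [lastSeen release];
            lastSeen = [modified retain];
            NSString *request = [NSString stringWithContentsOfFile:path
                                                          encoding:NSUTF8StringEncoding
                                                             error:NULL];
            NSLog(@"New request: %@", request);
            // ...parse the command and write the response into a second file...
        }
    }
    // Call pollControlFile() periodically, e.g. from an NSTimer.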
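For item 2, a bare-bones client-side socket sketch in plain C (which you can call directly from a Cocoa app). The port number and the length-prefix framing are invented for illustration; note the htons()/htonl() calls that deal with the endianness problem from the edit above:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical client: connects to the daemon on localhost:5555 and
       sends a length-prefixed text command in network byte order. */
    int send_command(const char *cmd)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(5555);             /* host-to-network order */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }

        uint32_t len = htonl((uint32_t)strlen(cmd));    /* big-endian length prefix */
        write(fd, &len, sizeof(len));
        write(fd, cmd, strlen(cmd));
        close(fd);
        return 0;
    }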
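And for items 3 and 4, the client side essentially boils down to asking NSConnection for a proxy of the server's root object; the registered name and protocol below are made up for illustration:

    #import <Foundation/Foundation.h>

    // Hypothetical protocol, agreed upon between daemon and client.
    @protocol JobDispatcher
    - (oneway void)submitCommand:(NSString *)command;
    @end

    void sendPing(void)
    {
        // host:nil means the local machine; pass a host name for a remote Mac.
        id proxy = [NSConnection rootProxyForConnectionWithRegisteredName:@"com.example.mydaemon"
                                                                     host:nil];
        [proxy setProtocolForProxy:@protocol(JobDispatcher)];

        // From here on the proxy behaves like a local object:
        [(id<JobDispatcher>)proxy submitCommand:@"ping"];
    }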

So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.

user8472
While DO certainly lack portability, I am curious why you find them the hardest to learn. Portable lower-level solutions are in my opinion much harder, because you have to take care of more layers yourself (connection handling, marshalling, ...).
Georg Fritzsche
As for DO on the iPhone, sadly it looks like [you're right](http://developer.apple.com/iphone/library/documentation/Miscellaneous/Conceptual/iPhoneOSTechOverview/PortingfromCocoa/PortingfromCocoa.html#//apple_ref/doc/uid/TP40007898-CH8-SW2). I didn't notice; quite annoying, that.
Georg Fritzsche
@Georg Fritzsche: This might be more due to my learning (in)ability than about DO; but I have used both traditional message passing systems (MPI) and socket programming in the past, which might result in a perceptual bias on my part. I found it easy to figure out how to do communication of data and didn't worry about remote method invocation. DO forces me to also think about remote methods *in addition* to data, which makes this approach more complicated and unintuitive for me.
user8472
I guess it might take some getting used to that the usual data-oriented communication suddenly happens transparently once the connections are set up. :) *(sidenote: full names are not needed for [comment notifications](http://blog.stackoverflow.com/2010/01/new-improved-comments-with-reply/))*
Georg Fritzsche
@Georg: As soon as debugging is concerned, one needs to figure out where a piece of data came from and why it looks the way it does. Remote and distributed debugging is far from trivial; a different syntax neither helps nor harms here. In the case of DO, the extra layer of abstraction and the remote methods introduce even more complexity. Maybe someone who thinks naturally in such terms (or someone who doesn't need to debug her programs ;^) will not find it harder than data-oriented communication, but for me it is more complex.
user8472
Use sockets... If you are worried about endianness, just verify the byte order using a byte-order mark at the beginning of the packet and adjust to compensate. It makes for robust communication and it can be used universally on any system/platform.
Evan Plaice
@Evan: I would not advertise a specific solution without knowing the problem and the exact circumstances; different problems may ask for different solutions. Alternatively, one could specify that binary data must always be [little|big]-endian when communicated. It is also possible to wrap specific kinds of data using the `XDR`-library (External Data Representation). I have just brought forward this particular problem since I had encountered it before and I also know that other people have had to work around it when exchanging binary data.
user8472
+1  A: 

We are using ThoMoNetworking and it works fine and is fast to set up. Basically it allows you to send NSCoding-compliant objects over the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc. (A small sketch of an NSCoding-compliant object follows below.)
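This is not ThoMoNetworking's own API, just a hypothetical example of the kind of NSCoding-compliant object such a framework can serialize and send for you (the class name and key are made up):

    #import <Foundation/Foundation.h>

    // Hypothetical message object; anything implementing NSCoding can be
    // turned into an NSData blob and sent over whatever transport you use.
    @interface ChatMessage : NSObject <NSCoding>
    {
        NSString *text;
    }
    @end

    @implementation ChatMessage
    - (id)initWithCoder:(NSCoder *)coder
    {
        if ((self = [super init])) {
            text = [[coder decodeObjectForKey:@"text"] retain];
        }
        return self;
    }
    - (void)encodeWithCoder:(NSCoder *)coder
    {
        [coder encodeObject:text forKey:@"text"];
    }
    - (void)dealloc
    {
        [text release];
        [super dealloc];
    }
    @end

    // Archiving/unarchiving with plain Foundation:
    // NSData *blob = [NSKeyedArchiver archivedDataWithRootObject:message];
    // ChatMessage *received = [NSKeyedUnarchiver unarchiveObjectWithData:blob];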

NSSplendid