views: 204

answers: 3
So by now I'm getting the point that we should all be implementing our RESTful services providing representations that enable clients to follow the HATEOAS principle. And whilst it all makes good sense in theory, I have been scouring the web to find a single good example of some client code that strictly follows the idea.

The more I read, the more I'm starting to feel like this is an academic discussion because no-one is actually doing it! People can moan all they like about the WS-* stack's many flaws but at least it is clear how to write clients: you can parse WSDL and generate code.

Now I understand that this should not be necessary with a good RESTful service: you should only need to know about the relationships and representations involved, and you should be able to react dynamically to those. But even so, shouldn't this principle have been distilled and abstracted into some common libraries by now? Feed in information about the representations and relationships you might receive, and get back some more useful higher-level code you can use in your application?

These are just half-baked ideas of mine really, but I'm just wary that if I dive in and write a properly RESTful API right now, no-one is actually going to be able to use it! Or at least, using it is going to be such a pain in the behind because of the extra mile people will have to go, writing glue code to interpret the relationships and representations I provide.

Can anyone shed any light on this from the client perspective? Can someone show an example of properly dynamic/reactive RESTful client code so that I can have an idea of the audience I'm actually writing for? (Better still, an example of a client API that provides some abstractions.) Otherwise it's all pretty theoretical...

[edit: note, I've found a similar question here, which I don't think was really answered, the author was palmed off with a wikipedia stub!]

+2  A: 

We've kind of half-done this on our current project. The representations we return are generated from domain objects, and the client can ask for them either in XML, JSON, or XHTML. If it's an XHTML client like Firefox, then a person sees a set of outbound links from the well-known root resource and can browse around to all the other resources. So far, pure HATEOAS, and a great tool for developers.
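
To illustrate the format selection: a client picks the representation purely through content negotiation on the Accept header, so the URI never changes. This is a simplified sketch using plain System.Net (our actual clients are Java, and example.com stands in for the real root resource):

    using System;
    using System.IO;
    using System.Net;

    class ConnegExample {
        static void Main() {
            // Same resource, different representation: only the Accept header changes.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
            request.Accept = "application/json";   // or "application/xml", "application/xhtml+xml"

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream())) {
                Console.WriteLine(response.ContentType);   // what the server chose to send
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }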

But we're concerned about performance when the client is a program, not a human using a browser. For our XML and JSON representations we've currently suppressed the generation of the related links, since they triple the representation sizes and thus substantially affect serialization/deserialization, memory usage, and bandwidth. Our other efficiency concern is that with pure HATEOAS, client programs will be making several times the number of HTTP requests as they browse down from the well-known link to the information they need. So it seems best, from an efficiency standpoint, if clients have the knowledge of the links encoded in them.

But doing that means the client must do a lot of string concatenation to form the URIs, which is error prone and makes it hard to rearrange the resource name space. Therefore we use a templating system where the client code selects a template and asks it to expand itself from a parameter object. This is a type of form-filling.
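
To make the form-filling concrete, here is a minimal sketch of the idea. Our real code is a Java Template class that fills patterns from a JavaBean or Map via reflection; this simplified C# analogue (names are illustrative) just substitutes named parameters from a dictionary:

    using System;
    using System.Collections.Generic;

    public class ResourceTemplate {
        private readonly string pattern;

        public ResourceTemplate(string pattern) {
            this.pattern = pattern;   // e.g. "/user/{user}/document/{document}"
        }

        // Replace each {name} placeholder with the matching parameter value,
        // so client code never concatenates URI strings by hand.
        public string Expand(IDictionary<string, string> parameters) {
            var result = pattern;
            foreach (var p in parameters) {
                result = result.Replace("{" + p.Key + "}", Uri.EscapeDataString(p.Value));
            }
            return result;
        }
    }

    // Usage:
    //   var t = new ResourceTemplate("/user/{user}/document/{document}");
    //   t.Expand({ "user": "jim", "document": "42" })  =>  "/user/jim/document/42"

Because the patterns live in one place, we can rearrange the resource namespace by changing templates rather than hunting down string concatenation scattered through client code.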

I'm really eager to see what others have experienced on this. HATEOAS seems like a good idea aside from the performance aspects.

Edit: Our templates are part of a Java client library we wrote on top of the Restlet framework. The client library handles all details of HTTP requests/responses, HTTP headers, deserialization/serialization, GZIP encoding, etc. This makes the actual client code quite concise, and helps to insulate it from some server side changes.

Roy Fielding's blog entry about HATEOAS has a very relevant and interesting discussion following it.

Jim Ferrans
@Jim: exactly! You hit the nail on the head and understood my question, and like me, you have all the same doubts. Pure HATEOAS seems like a nice idea, but it does introduce a lot of potential overhead. If I understand you correctly though, you store the URL templates on the server, not the client, so although the client has some knowledge of how to fill them out, it doesn't know what they will be: this seems like a fair compromise. Great answer... I'll accept in a couple of days unless someone shows me a pure HATEOAS client.
jkp
@Jim: you say the clients download a template, then take the data in responses to fill in the complete URI? Why not just return the complete URI? (I'm assuming you mean the server returns `{'name': 'john'}`, which clients template into "http://example.com/users/john".) Also, how does the templating work? E.g., how does a client know to take "john" and apply the "name" template? The template sorta sounds like out-of-band information?
Richard Levasseur
oh! How are you representing your model objects and transforming them to XML, JSON, and XHTML? Do you think the additional work of supporting all those formats has been worth it? Can XML/JSON clients enable the related links in the output? If so, how are you returning that data in XML/JSON? Re: performance, have you considered using the HTTP caching-related headers? In general, I agree that HATEOAS is great, except for the performance implications.
Richard Levasseur
@Richard: Sorry, I was a bit misleading on templates. Ours are static instances of a Template class, and are loaded in with the client code. A Template has a pattern string like "/user/{user}/document/{document}" and methods for filling in the properties from a JavaBean or Map using Java's reflection APIs. So the client does have out-of-band information about the resource names, but less than it would if it were doing raw string concatenation.
Jim Ferrans
@Richard: Model objects are serialized into XML and JSON using XStream, which makes these serializations almost free. XHTML representations are generated using FreeMarker, which took time but has really helped us in debugging. We do set HTTP caching headers. I really like your suggestion of allowing a HATEOAS client to ask for related links on a case by case basis.
Jim Ferrans
A: 

So far I have built two clients that access REST services. Both use HATEOAS exclusively. I have had a huge amount of success being able to update server functionality without updating the client.

I use xml:base to enable relative URLs, which reduces the noise in my XML documents. Other than loading images and other static data, I usually only follow links on user request, so the performance overhead of the links is not significant for me.

On the clients, the only common functionality that I have felt the need to create is wrappers around my media types and a class to manage links.
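
As a rough sketch (the names here are illustrative, not my actual code), the link class does little more than remember the relation a link was found under and resolve its href against the document's xml:base:

    using System;

    public class ResourceLink {
        public string Relation { get; private set; }
        public Uri Target { get; private set; }

        public ResourceLink(Uri xmlBase, string relation, string href) {
            Relation = relation;
            // Relative hrefs from the document resolve against its xml:base.
            Target = new Uri(xmlBase, href);
        }
    }

    // e.g. new ResourceLink(new Uri("http://example.com/people/"), "tree", "../tree")
    //      yields a Target of http://example.com/tree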


Update:

There seem to be two distinct ways to deal with REST interfaces from the client's perspective. The first is where the client knows what information it wants and knows which links it needs to traverse to get to that information. The second is useful when a human user of the client application controls which links to follow, and the client may not know in advance what media type will be returned from the server. For entertainment value, I call these two types of client the data miner and the dispatcher, respectively.

The Data Miner

For example, imagine for a moment that the Twitter API was actually RESTful and I wanted to write a client that would retrieve the most recent status message of the most recent follower of a particular Twitter user.

Assuming I was using the awesome new Microsoft.Http.HttpClient library, and I had written a few "ReadAs" extension methods to parse the XML coming from the Twitter API, I imagine it would go something like this:

// Start at the well-known root URI and read the service document.
var twitterService = HttpClient.Get("http://api.twitter.com").Content.ReadAsTwitterService();

// Follow the service's link to the user's page.
var userLink = twitterService.GetUserLink("DarrelMiller");
var userPage = HttpClient.Get(userLink).Content.ReadAsTwitterUserPage();

// Follow the user's link to their followers and pick the most recent one.
var followersLink = userPage.GetFollowersLink();
var followersPage = HttpClient.Get(followersLink).Content.ReadAsFollowersPage();
var followerUserName = followersPage.FirstFollower.UserName;

// Navigate to that follower's page, then follow their statuses link.
var followerUserLink = twitterService.GetUserLink(followerUserName);
var followerUserPage = HttpClient.Get(followerUserLink).Content.ReadAsTwitterUserPage();

var followerStatuses = HttpClient.Get(followerUserPage.GetStatusesLink()).Content.ReadAsStatusesPage();

var statusMessage = followerStatuses.LastMessage;

The Dispatcher

To illustrate this second approach, imagine you were implementing a client that renders genealogy information. The client needs to be able to show the tree, drill down to information about a particular person, and view related images. Consider the following code snippet:

    void ProcessResponse(HttpResponseMessage response) {
        IResponseController controller;

        // Dispatch purely on the media type of the response; the client needs
        // no advance knowledge of what kind of resource the link pointed to.
        switch (response.Content.ContentType) {
            case "application/vnd.MyCompany.FamilyTree+xml":
                controller = new FamilyTreeController(response);
                controller.Execute();
                break;
            case "application/vnd.MyCompany.PersonProfile+xml":
                controller = new PersonProfileController(response);
                controller.Execute();
                break;
            case "image/jpeg":
                controller = new ImageController(response);
                controller.Execute();
                break;
        }
    }

The client application can use a completely generic mechanism to follow links and pass the response to this dispatching method. From here the switch statement passes control to a specific controller class that knows how to interpret and render the information based on the media type.
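
As a sketch, that generic mechanism can be as thin as this (FollowLink is an illustrative name, mirroring the HttpClient usage from the data miner example, not part of Microsoft.Http):

    // Every link is followed the same way; only ProcessResponse, via the
    // media type of whatever comes back, knows how to handle the result.
    void FollowLink(string linkUri) {
        var response = HttpClient.Get(linkUri);
        ProcessResponse(response);
    }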

Obviously there are many more pieces to the client application, but these are the ones that correspond to HATEOAS. Feel free to ask me to clarify any points as I have skimmed over many details.

Darrel Miller
@Darrel: I was waiting for you to respond, as I've read a lot of your previous responses. It's fine for you to tell me you have done it, but I can't see your code! This is all still abstract for me: can you actually show us an example? I'm trying to actually learn here, and there is *nothing* out there on the web. Thanks in advance.
jkp
@Darrel: Also you mention xml:base, are you using xlinks as well?
jkp
I'm not actually using xlink, but I have considered it on numerous occasions. I'll see what I can do to create a good code example of what I currently do.
Darrel Miller
A: 

Your web browser of choice is a "pure HATEOAS" client to the entire WWW.

The question doesn't really make sense imo.

Gandalf
Gandalf, that's obvious. Come on, please try to read what is being asked: it seems the other two people who answered got it. I __know__ a web browser uses HATEOAS. Show me a JavaScript client that does, or some programmatic API that follows the principle.
jkp
Google's web crawler, then. It starts at some base URI and parses the page, finding all the other URIs, and by using content negotiation and link relations it knows how to handle them.
Gandalf