If the goal is to prevent "static" URLs from being manipulated, then you can simply encrypt the parameters, or sign them. It's likely "safe enough" to tack on an MD5 hash of the URL parameters combined with some salt. The salt can be, say, a random string stored in the session.
Then you can just:
http://example.com/service?x=123&y=Bob&sig=ABCD1324
This technique exposes the data (i.e. they can "see" that x=123), but they cannot change it.
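For illustration, here's a minimal sketch of that signing scheme. The class name UrlSigner and the salt handling are my own; the answer only specifies MD5 plus a session-stored salt:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public final class UrlSigner {
        // Hex-encoded MD5 of salt + query string, per the scheme above.
        public static String sign(String query, String salt) {
            try {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                byte[] digest = md5.digest((salt + query).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e); // MD5 is guaranteed by the JDK
            }
        }

        // Constant-time comparison so the check itself doesn't leak anything.
        public static boolean verify(String query, String salt, String sig) {
            return MessageDigest.isEqual(
                    sign(query, salt).getBytes(StandardCharsets.UTF_8),
                    sig.getBytes(StandardCharsets.UTF_8));
        }
    }

The server emits "/service?x=123&y=Bob&sig=" + UrlSigner.sign("x=123&y=Bob", salt), and recomputes the hash on the way back in.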
There is also an advantage to "encryption" (and I use that term loosely), where you encrypt the entire parameter section of the URL.
Here you can do something like:
http://example.com/service?data=ABC1235ABC
The nice thing about using encryption is twofold.
First, it protects the data (the user can never see that x=123, for example).
Second, it's extensible:
http://example.com/service?data=ABC1235ABC&newparm=123&otherparm=abc
Here, you can decode the original payload, and do a (safe) merge with the new data.
So, callers can ADD data to the request, just not change EXISTING data.
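A minimal sketch of that variant, assuming AES-GCM (the answer doesn't name a cipher) and a merge rule where the decrypted values always win:

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public final class ParamCrypto {
        private static final SecureRandom RANDOM = new SecureRandom();

        // Encrypts the whole query string into one opaque, URL-safe token.
        public static String encrypt(String query, SecretKey key) throws Exception {
            byte[] iv = new byte[12];
            RANDOM.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = cipher.doFinal(query.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
        }

        // Recovers the original query string from the data= token.
        public static String decrypt(String token, SecretKey key) throws Exception {
            byte[] in = Base64.getUrlDecoder().decode(token);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            return new String(cipher.doFinal(in, 12, in.length - 12), StandardCharsets.UTF_8);
        }

        // The "safe merge": decrypted (trusted) values overwrite any new ones,
        // so a request can ADD parameters but never change existing ones.
        public static Map<String, String> merge(Map<String, String> trusted,
                                                Map<String, String> extra) {
            Map<String, String> merged = new LinkedHashMap<>(extra);
            merged.putAll(trusted);
            return merged;
        }
    }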
You can do the same via the signing technique; you would just need to consolidate the entire request into a single "blob", and that blob is implicitly signed. That's "effectively" encrypted, just very weak encryption.
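For illustration, the blob could be as simple as Base64 of the query string plus the signature over it, reusing the UrlSigner sketch above. Base64 is my assumption, which is exactly why this only "effectively" hides the data:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical "signed blob" variant: pack the whole query into one token,
    // then sign the token. Opaque to casual eyes, but not real encryption.
    final class SignedBlob {
        static String toSignedBlob(String query, String salt) {
            String blob = Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(query.getBytes(StandardCharsets.UTF_8));
            return "data=" + blob + "&sig=" + UrlSigner.sign(blob, salt);
        }
    }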
Obviously you don't want to do ANY of this on the client. There's no point. If you can do it, "they" can do it and you can't tell the difference, so you may as well not do it at all -- unless you want to "encrypt" data over a normal HTTP port (vs TLS, but then folks will wisely wonder "why bother").
For Java, all of this work goes in a Filter; that's the way I did it. The back end is isolated from it.
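A minimal sketch of such an inbound Filter (Servlet 4.0 style, reusing the UrlSigner sketch above; the "urlSalt" session attribute is my invention):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Rejects any request whose sig parameter doesn't match the rest of the query.
    public class SignatureFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            String query = request.getQueryString(); // e.g. "x=123&y=Bob&sig=abcd1324"
            String salt = (String) request.getSession().getAttribute("urlSalt");

            int sigAt = (query == null) ? -1 : query.lastIndexOf("&sig=");
            if (salt == null || sigAt < 0
                    || !UrlSigner.verify(query.substring(0, sigAt), salt,
                                         query.substring(sigAt + 5))) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, res); // the back end never sees any of this
        }
    }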
If you wish, you can make the back end completely isolated from this with an outbound filter that handles the URL encryption/signing on the way out.
That's also what I did.
The downside is that it's very involved to get it right and performant. You need a lightweight HTML parser to pull out the URLs (I wrote a streaming parser that does it on the fly, so it didn't copy the entire page into RAM).
The bright side is that the content side "just works", as the pages don't know anything about it.
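Here's a grossly simplified sketch of the outbound side. It buffers the whole page and rewrites href attributes with a regex, which is exactly the copy-into-RAM shortcut the streaming parser avoided; treat it as the shape of the thing, not the real implementation:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    public class OutboundSigningFilter implements Filter {
        private static final Pattern HREF = Pattern.compile("href=\"(/service\\?[^\"]+)\"");

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            BufferingResponse wrapper = new BufferingResponse((HttpServletResponse) res);
            chain.doFilter(req, wrapper); // content side renders plain URLs, unaware

            String salt = (String) ((HttpServletRequest) req)
                    .getSession().getAttribute("urlSalt");
            Matcher m = HREF.matcher(wrapper.body());
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                String url = m.group(1);
                String query = url.substring(url.indexOf('?') + 1);
                m.appendReplacement(out, Matcher.quoteReplacement(
                        "href=\"" + url + "&sig=" + UrlSigner.sign(query, salt) + "\""));
            }
            m.appendTail(out);
            res.getWriter().write(out.toString());
        }

        // Captures the body in memory; only getWriter() is handled here.
        static class BufferingResponse extends HttpServletResponseWrapper {
            private final StringWriter buffer = new StringWriter();
            BufferingResponse(HttpServletResponse res) { super(res); }
            @Override public PrintWriter getWriter() { return new PrintWriter(buffer); }
            String body() { return buffer.toString(); }
        }
    }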
There's also some special handling when dealing with JavaScript (as your filter won't easily "know" where there's a URL to encrypt). I resolved this by requiring URLs that need signing to follow a specific pattern, var signedURL='....', so I could find them easily in the output. That's not as crushing a burden on designers as you might think.
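For the JavaScript case, the outbound filter just needs a pattern it can spot reliably; something like this (the exact regex is my guess at the convention):

    // Matches the var signedURL='...' convention so the outbound filter can
    // find and rewrite URLs embedded in script blocks.
    private static final Pattern SIGNED_URL =
            Pattern.compile("var\\s+signedURL\\s*=\\s*'([^']+)'");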
The other bright side of the filter is that you can disable it. If you have some "odd behavior" happening, simply turn it off. If the behavior goes away, you've found a bug related to the encryption. It also let developers work in plain text and leave the encryption for integration testing.
A pain to do, but it's nice overall in the end.