views: 320
answers: 7

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.

However, if a client were to come up to me today and say "I don't care what The Right Way to do things is, it's easier for us to use your API if we can just call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?

Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."

Edit:

Thanks for the answers so far regarding prefetching. I'm not as concerned with prefetching, since this is mostly about internal network API use, not visitable HTML pages with links that a browser could prefetch.

+11  A: 
  • Prefetch: many web browsers prefetch, meaning they load a page before you click its link, anticipating that you will click it later. A GET that modifies data can therefore fire without any user action at all.
  • Bots: several bots scan and index the internet for information, and they only issue GET requests. You don't want a bot's GET request to delete something.
  • Caching: GET requests should not change state, and they should be idempotent. Idempotent means that issuing a request once or issuing it multiple times gives the same result, i.e. there are no side effects. For this reason, GET requests are tightly tied to caching (see the sketch after this list).
  • The HTTP standard says so: the HTTP standard defines what each HTTP method is for. Many programs are built around the HTTP standard and assume you use it the way you are supposed to, so you will get undefined behavior from a slew of random programs if you don't.
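
To make the caching and idempotency points concrete, here is a minimal sketch contrasting the two styles. It assumes Flask, which nothing in this thread mandates, and the route names are purely illustrative:

    from flask import Flask, request

    app = Flask(__name__)
    items = {"1": "first item"}

    # Risky: any prefetcher, crawler, or cached link that hits this
    # URL will silently delete data.
    @app.route("/items/delete")
    def delete_via_get():
        items.pop(request.args.get("id"), None)
        return "deleted"

    # Conventional: state changes only on an explicit DELETE...
    @app.route("/items/<item_id>", methods=["DELETE"])
    def delete_item(item_id):
        items.pop(item_id, None)
        return "", 204

    # ...and GET stays safe and idempotent, so intermediaries are
    # free to cache its responses.
    @app.route("/items/<item_id>")
    def get_item(item_id):
        return items.get(item_id, "not found")
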
Brian R. Bondy
This is the type of answer I'm looking for, thanks.
Rob Hruska
"GET HTTP requests do not change state and they are idempotent" - that's rather the point - they're not idempotent in themselves, it's expected that they will be used in such a way as they are.
Will Dean
+4  A: 

How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.

There's a funny article about this on The Daily WTF.

DrJokepu
+1  A: 

Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.

Brandon
+4  A: 

GETs can be forced on a user and result in Cross-Site Request Forgery (CSRF). For instance, if you have a logout function at http://example.com/logout.php which changes the server-side state of the user, a malicious person could place an image tag on any site that uses that URL as its source: <img src="http://example.com/logout.php" />. Loading this would cause the user to get logged out. Not a big deal in the example given, but if it were a command to transfer funds out of an account, it would be a big deal.

POST requests and others can also be forged in this way, but it requires the ability to run scripts on the client (i.e. you can execute a GET via loading an image, but you need an AJAX call to make a POST), so it isn't a huge security increase, but it does help.
rmeador
Note that there are defenses against CSRF that still allow you to have actionable URLs, but that isn't really the point of the original question.
Grant Wagner
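
To illustrate the kind of defense mentioned in the comment above, here is a hedged sketch of a synchronizer-token check. It again assumes Flask, and the endpoint and field names are hypothetical, not from this thread:

    import secrets
    from flask import Flask, request, session, abort

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"  # placeholder value

    @app.route("/transfer-form")
    def transfer_form():
        # Issue a per-session token and embed it in the legitimate form.
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return (
            '<form method="post" action="/transfer">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            '<input type="submit" value="Transfer"></form>'
        )

    @app.route("/transfer", methods=["POST"])
    def transfer():
        # An <img> tag or a form on an attacker's site cannot read the
        # token out of the victim's session, so forged requests fail here.
        sent = request.form.get("csrf_token", "")
        if not secrets.compare_digest(sent, session.get("csrf_token", "")):
            abort(403)
        return "funds transferred"

The same check is what allows "actionable URLs" to stay safe: as long as the token travels with the request, the URL itself can remain simple.
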
+1  A: 

Good reasons to do it the right way...

They are industry standard, well documented, and easy to secure. While you should fully support making life as easy as possible for the client, you don't want to implement something that's easier in the short term in preference to something that's not quite as easy for them but offers long-term benefits.

One of my favourite quotes:

Quick and Dirty... long after the Quick has departed the Dirty remains.

For you, this one is a case of "a stitch in time saves nine" ;)

Lazarus
+1  A: 

Security: CSRF is so much easier with GET requests.

Using POST won't protect you completely, but GET makes exploitation much easier, including mass exploitation via forums and other places that accept image tags.

Depending on what you do server-side, GET can also help an attacker launch a DoS (Denial of Service) attack. An attacker can spam thousands of websites with your expensive GET request in an image tag, and every single visitor of those websites will then carry out that expensive GET request against your web server, which will cost you a lot of CPU cycles.

I'm aware that some pages are heavy anyway and this is always a risk, but it's a bigger risk if every single GET request also adds, say, 10 big records. A sketch of one mitigation follows.
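
One mitigation, sketched below on the assumption that the expensive page is read-only (the endpoint and report function are hypothetical), is to declare the GET response cacheable, so repeated hits from hotlinked image tags are absorbed by browser and proxy caches instead of your server:

    from flask import Flask, make_response

    app = Flask(__name__)

    def build_report():
        # Stand-in for an expensive database query.
        return "big report body"

    @app.route("/expensive-report")
    def expensive_report():
        resp = make_response(build_report())
        # Browsers and shared caches may reuse this response for five
        # minutes, so a flood of identical GETs never reaches the
        # expensive code path.
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp

Note that this option only exists because the request is a GET with no side effects; a state-changing request could never safely be served from a cache.
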

dr. evil
+1  A: 

I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."

...

I don't care what The Right Way to do things is, it's easier for us

Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?

It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.

Ken