You have conflated a couple of common issues here.
Firstly, the attack, as others have noted, is called a cross-site request forgery (CSRF). An attacker can trigger either GETs or POSTs from another domain, and because the request goes to your domain, the browser sends along your domain's cookies, which include the session details.
To counter this, when a user logs in, generate a token (a random string of characters) and have every link and form on your site pass it back during that session. When a request comes in, take the session details from the cookie, look up which token should have been GETted/POSTed for that session, and check that it matches. If the correct token has not been passed, you can ignore the request, inform the user, or log the details for further investigation. I'd recommend the last, as when implementing this you may well miss a few links or forms, which will then stop working; users may simply leave rather than take the time to tell you about it.
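For concreteness, here's a minimal sketch of that token pattern in Python using Flask. The route names, form fields, and logging are illustrative assumptions rather than a drop-in implementation; the same idea carries over to any framework:

```python
# Per-session CSRF token: generated at login, echoed back by every form,
# and checked on each state-changing request.
import secrets
import hmac

from flask import Flask, session, request, abort, render_template_string

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

@app.route("/login", methods=["POST"])
def login():
    # ... verify credentials here ...
    session["user"] = request.form["username"]
    # Generate the token once per session and remember it server-side.
    session["csrf_token"] = secrets.token_hex(32)
    return "logged in"

@app.route("/delete-item", methods=["GET"])
def delete_item_form():
    # Every form (and every state-changing link) carries the token back.
    return render_template_string(
        '<form method="post" action="/delete-item">'
        '<input type="hidden" name="csrf_token" value="{{ token }}">'
        '<input type="hidden" name="item_id" value="1423">'
        '<button type="submit">Delete</button></form>',
        token=session.get("csrf_token", ""),
    )

@app.route("/delete-item", methods=["POST"])
def delete_item():
    expected = session.get("csrf_token", "")
    submitted = request.form.get("csrf_token", "")
    # Constant-time comparison; reject and log anything that doesn't match.
    if not expected or not hmac.compare_digest(expected, submitted):
        app.logger.warning("CSRF token mismatch for user %s", session.get("user"))
        abort(403)
    # ... token is valid, perform the change ...
    return "deleted"
```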
Secondly, GET requests should be safe (i.e. they should only cause data to be displayed, with no changes made) and POSTs should be used for all data-altering requests. There are two reasons: first, in case a spider manages to follow a link and causes changes that spiders shouldn't be causing; second, as a backup when the user refreshes the page, since the browser will remind them that they are about to resubmit the request and ask whether they want to continue. I say "as a backup" because all your requests should be written so that they are harmless or ignored if resubmitted. For example, don't have a button that requests "delete the last item"; instead, look up that the id of the last item is 1423 and have the button request that 1423 be deleted. If this is submitted twice, the second time around your validation should notice that item 1423 is no longer there and make no further changes.
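As a rough illustration of that last point, here's a small Python sketch where the delete handler takes an explicit id and simply does nothing if the item is already gone. The in-memory dict stands in for whatever data store you actually use:

```python
# "Delete by id, tolerate resubmission": the handler checks whether the
# item still exists before acting, so a repeated request is harmless.
items = {1422: "another item", 1423: "last item"}

def delete_item(item_id: int) -> str:
    # Validation first: if the item is already gone (e.g. the form was
    # resubmitted), make no further changes and just report that.
    if item_id not in items:
        return f"item {item_id} no longer exists; nothing to do"
    del items[item_id]
    return f"item {item_id} deleted"

print(delete_item(1423))  # first submission: deletes the item
print(delete_item(1423))  # resubmission: harmless, nothing changes
```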