views: 453
answers: 6

What do you think... are clean URLs a backend or a frontend 'discipline'?

+1  A: 

Backend, for sure. Your server is the one that has to handle routing to the resource requested by the URL.
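For illustration, here's a minimal sketch of what that server-side routing could look like. The route table, handler names, and URL patterns are all made up for the example; real frameworks do the same thing with more machinery:

```python
import re

# Hypothetical route table: each clean-URL pattern maps to a handler name.
ROUTES = [
    (re.compile(r"^/articles/(?P<slug>[a-z0-9-]+)/?$"), "show_article"),
    (re.compile(r"^/articles/?$"), "list_articles"),
]

def dispatch(path):
    """Return (handler, params) for a clean URL path, or (None, {})."""
    for pattern, handler in ROUTES:
        m = pattern.match(path)
        if m:
            return handler, m.groupdict()
    return None, {}

print(dispatch("/articles/clean-urls-explained"))
# ('show_article', {'slug': 'clean-urls-explained'})
```

The point is that the browser never sees any of this; the mapping from a pretty path to a resource lives entirely on the server.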

William
A: 

I think the main reasons for using friendly URLs are:

  • Ease of linking / sharing
  • Presentation
  • SEO

So I think it's primarily a client-side nicety. They're pleasant on the server side as well, but not mission-critical there.
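Friendly URLs like these are typically generated by "slugifying" a page title. Here's a small sketch of that common convention (there's no standard; the exact rules are this example's assumption):

```python
import re
import unicodedata

def slugify(title):
    """Reduce a page title to a URL-friendly slug:
    lowercase ASCII words joined by hyphens."""
    # Strip accents, then drop anything that isn't a letter, digit, space, or hyphen.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    cleaned = re.sub(r"[^a-z0-9\s-]", "", ascii_title.lower())
    return re.sub(r"[\s-]+", "-", cleaned).strip("-")

print(slugify("Are clean URLs a backend or frontend 'discipline'?"))
# are-clean-urls-a-backend-or-frontend-discipline
```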

Shawn Simon
+2  A: 

My perspective is simple:

  • every place I visit with my browser (with various edge-case exceptions) should be bookmarkable, and Forward/Back should be usable without destroying any data entry.
Paul Nathan
That doesn't appear to address the question.
Sam Hasler
+4  A: 

If we're talking about URLs being 'clean' from an end-user-experience standpoint, then I'm going to break the mould a bit and say that URLs in general are not intuitive and never will be; they are intended to be machine-readable.

There is no standard for the format of a URL, so when navigating from site to site, humans will never remember how to reach a resource purely by recalling URLs and their 'friendly syntax'. We can argue the toss about whether '?' and '&' or '/' is the better way to identify a resource in a URL, but it doesn't matter: at the end of the day a machine parses it and sends back the result.

We should stop deluding ourselves that people actually type these things in, and realise that URIs are for machines, not people.

I have yet to use or remember a URI beyond the first few characters of the http://domain.com/ part of an address, and I've been using the web for a long time. That's what bookmarks are for. Nowhere on a website does it say 'change this part of our URL to view such-and-such resource', because URLs are usually undocumented and opaque.

Yes, make your URIs SEO-friendly (hell, even those change periodically), but forget about the whole 'human/clean' resource identifier thing; it's a mystical pipe dream.

I agree with Vlion that URLs should provide a unique mechanism to bookmark a resource and return to it (unlike some of these abominable web 2.0 AJAX/Silverlight/Flash creations), but the bookmark will never be something humans comprehend and understand. There's a lot of preoccupation and energy spent dreaming up URL strategies that humans can remember and type in; it's a waste of energy. Let's get on and solve real problems.

Sorry for the rant, but there's a lot of web 2.0 nonsense related to URLs going on in certain circles, and it's a total waste of time.

Kev
But how often do you see a URL like /products/motherboards/foo-fx-1234 and decide to lop off the last part in the hope you'll find an index page of some sort? I think that's the true power of readable URLs: being able to infer likely patterns easily.
Paul Dixon
Maybe if you're a developer or fairly web-savvy. The other 90% of the world doesn't think like that. And any decent website would have breadcrumb navigation to help you get around. I still stand by my assertion that URIs are for machines, not humans.
Kev
And what about when you're typing in the address bar and auto-complete kicks in? It happens quite often: I type the domain of a site I'm looking for, a recently visited page that I want pops up, and I don't need to find my way through the site's (possibly horrible) navigation system.
Davy8
+4  A: 

The answer is BOTH.

For example:

http://stackoverflow.com/questions/203278/are-clean-urls-a-backend-or-a-frontend-thing

The number in the URL above is a database id, a back-end thing. Chop off the pretty part and it still goes to the same page. The "are-clean-urls-a-backend-or-a-frontend-thing" slug, therefore, is the front-end part.
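A minimal sketch of how that id-plus-slug split might be parsed on the server (the regex and function are illustrative assumptions, not Stack Overflow's actual code): only the numeric id drives the lookup, and the slug is decorative.

```python
import re

# Stack Overflow-style path: /questions/<numeric id>/<optional slug>
QUESTION_URL = re.compile(r"^/questions/(?P<id>\d+)(?:/(?P<slug>[^/]*))?/?$")

def parse_question_url(path):
    """Extract the database id (what routing actually needs) and the
    optional human-readable slug. Returns None if the path doesn't match."""
    m = QUESTION_URL.match(path)
    if not m:
        return None
    return int(m.group("id")), m.group("slug")

print(parse_question_url("/questions/203278/are-clean-urls-a-backend-or-a-frontend-thing"))
# (203278, 'are-clean-urls-a-backend-or-a-frontend-thing')
print(parse_question_url("/questions/203278"))
# (203278, None)
```

Both calls resolve to the same id, which is why chopping off the pretty part still reaches the same page.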

JarrettV
+4  A: 

Now that Firefox's Awesome Bar and Google Chrome's Omnibox can search the browsing history, it's much easier for users to find previously visited sites, so clean URLs may help users locate pages in their history.

Making sure the page has an appropriate title is important (both browsers search the title as well as the URL), but putting relevant keywords in the URL too means that when those keywords are typed in the address bar, the URL is more likely to rank higher in the suggestions, since each keyword is matched twice: once in the URL and once in the title.

Also, once a user has typed the name of a site, they will be presented with example URLs from that site, which they can use as a template for narrowing down their search. So using verbs and nouns in the URL for different sections or actions of the site helps the user narrow their search to just the part of the site they're interested in, e.g. the /questions/ or /tag/ sections of stackoverflow, or the "/doc" at the end of docs.google.com/doc that can be used to view just document pages on Google Docs*.

Since both Firefox and Chrome search for each space-separated word typed into the address bar, it could be argued that the URL doesn't need to be completely human-readable for searching; but for the user to actually pick out the keywords they're interested in from the URL, the amount of "noise" should be kept to a minimum.


* which are of the form http://docs.google.com/Doc?id=gibberish

Sam Hasler