Hi guys:

I've done some web-based projects. Along the way I ran into a lot of difficulties (questions, confusions), and most of them were figured out with help. But one really important question is still unanswered, even after asking some experienced developers: when a piece of functionality can be implemented with either server-side code or client-side scripting (JavaScript), which one should be preferred?

Here's a simple example:

To render a dynamic HTML page, I can format the page on the server side with code (PHP, Python) and use Ajax to fetch the formatted page and render it directly (more logic on the server side, less logic on the client side).

Or I can use Ajax to fetch the raw data (not formatted, JSON) and use a client-side script to format and render the page with more processing (the server side gets the data from the DB or elsewhere and returns it to the client as JSON or XML; more logic on the client side and less on the server).
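
To make it concrete, here is a rough sketch of both variants from the browser's point of view (the /items.html and /items.json endpoints and the response shapes are just placeholders):

    // Variant 1: the server formats the HTML fragment, the client only injects it.
    function renderServerFormatted() {
      fetch('/items.html')                           // placeholder endpoint
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('list').innerHTML = html;
        });
    }

    // Variant 2: the server returns raw JSON, the client formats it itself.
    function renderClientFormatted() {
      fetch('/items.json')                           // placeholder endpoint
        .then(function (res) { return res.json(); })
        .then(function (items) {                     // e.g. [{ "name": "..." }, ...]
          var rows = items.map(function (item) {
            return '<li>' + item.name + '</li>';     // real code should escape this
          }).join('');
          document.getElementById('list').innerHTML = '<ul>' + rows + '</ul>';
        });
    }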

So how do I decide which one is better? Which one gives better performance, and why? Which one is more user-friendly?

I'm wondering: with browsers' JS engines evolving, JavaScript executes faster and faster, so should I prefer client-side scripting?

I'm also wondering: with hardware evolving, server performance will grow and costs will decrease, so should I prefer server-side scripting?

EDIT:

Based on the answers, here is a brief summary.

Pros of client side:

  1. better user experience
  2. saves network bandwidth (decreases cost)
  3. scalability (as page views boom, server load doesn't grow as fast as it would with server-side logic)

Pros of server side:

  1. security (sensitive logic and validation stay on the server)
  2. better availability and accessibility (mobile devices, old browsers, SEO)
  3. scales out easily (we can add more servers, but we can't make the users' browsers faster)

It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?

IMO, I will use client-side logic except under the following conditions:

  1. security-critical functionality
  2. special user groups (no JavaScript enabled, mobile devices, and others)
A: 

I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)

It also keeps the separation between view and controller. Ajax data is usually produced by a controller, so let it just return data, not HTML.

If you're going to create an API later, you'll only need to make a few changes in your code.

Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some style to links, you'll need to reformat all the HTML... or just add one line to your JS. And it isn't as big as HTML (in bytes).

But if many heavy scripts are needed to format the data, it isn't too cool to ask users' browsers to do it.

valya
+5  A: 

I tend to prefer server-side logic. My reasons are fairly simple:

  1. I don't trust the client; this may or may not be a real problem, but it's habitual
  2. Server-side reduces the volume per transaction (though it does increase the number of transactions)
  3. Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about the Javascript engine available to the client's browser)

There are probably more -and better- reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.


Edit: valya comments that using client-side logic (via Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I've not up-voted that answer yet).

My notion of server-side logic is the part that retrieves the data and organises it; if I've got this right, that logic is the 'controller' (the C in MVC). Its output is then passed to the 'view.' I tend to use the controller to get the data, and then the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument about creating an API; basically, horses for courses. :)

...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic. And that separation is the plus point so far as APIs go.

David Thomas
It's interesting that I got four up-votes and a down-vote for this; I'd be interested to know whether the person who down-voted would/could explain. That way the answer becomes either *less wrong* or *more useful*.
David Thomas
A: 

As long as you don't need to send a lot of data to the client to allow it to do the work, client-side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.

On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.

Jason Williams
But then someone can write their own client and feed the system with whatever data they want... without any validation.
Eric J.
True, in some cases that may be a consideration. But the question doesn't mention "security-critical" data; it simply says "if I can implement it server-side or client-side, which is better?", to which I gave a generalised answer.
Jason Williams
+4  A: 

In many cases, I'm afraid the best answer is both.

As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.

Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.

This potentially leaves you writing quite a bit of redundant code, and maintaining it in two or more places (some of the definitions, such as maximum lengths, also need to be maintained in the data tier). For reasonably large applications, I tend to solve this issue using code generation. Personally I use a UML modeling tool (Sparx Systems' Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to code-generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tiers.
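
As a very rough illustration of the "define the rules once, run the same checks on both tiers" idea (the rule format and the validate() helper are made up for illustration, not the code-generation approach itself):

    // Shared validation rules, defined once (these could just as well be
    // generated from an XML or UML model).
    var rules = {
      username: { required: true, maxLength: 20,  pattern: /^[a-z0-9_]+$/i },
      email:    { required: true, maxLength: 100, pattern: /^[^@\s]+@[^@\s]+$/ }
    };

    // The same function can run in the browser (instant feedback) and on the
    // server (the check you actually trust).
    function validate(data, rules) {
      var errors = {};
      for (var field in rules) {
        var rule  = rules[field];
        var value = data[field] || '';
        if (rule.required && value === '') {
          errors[field] = 'required';
        } else if (rule.maxLength && value.length > rule.maxLength) {
          errors[field] = 'too long';
        } else if (rule.pattern && value !== '' && !rule.pattern.test(value)) {
          errors[field] = 'invalid format';
        }
      }
      return errors;
    }

    // An empty object means the input is valid.
    console.log(validate({ username: 'tower_joo', email: 'not-an-email' }, rules));
    // => { email: 'invalid format' }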

Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.

Eric J.
+1; I never thought to critique the false-dichotomy in the OP's question (this **or** that).
David Thomas
+1 Always both: you *want* the UX and performance benefits of client-side but *need* the safety and control of the server-side.
annakata
Yep, Eric, I almost agree with you. But what is the best practice? How can I balance these two approaches? How can I apply each approach to a given scenario? Is there a guide? Thanks.
Tower Joo
@Tower: You will not find a single best practice because each application has its own threat model (likelihood of exposure to hacking, and likelihood of success) and its own user expectations. If the threat level is "high" you MUST implement server-side checking. I use the air quotes because almost every app I have worked on falls into the "high" category. If the user expectations are high enough to demand immediate validation, you must do it (or face the prospect of unhappy users). You need to determine each variable for your own app.
Eric J.
A: 

I generally implement as much as is reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:

Trust issues

Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.

Performance issues

JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.

Language issues

JavaScript is a weakly-typed language and it makes a lot of assumptions about your code. This can force you to employ spooky workarounds to get things working the way they should on certain browsers. I avoid this type of thing like the plague.
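
For example (my own made-up snippets, just to illustrate the kind of assumptions JavaScript makes):

    // Form values always arrive as strings, so arithmetic silently concatenates.
    var quantity = '10';                // e.g. what an input field's .value gives you
    console.log(quantity + 1);          // "101", not 11
    console.log(Number(quantity) + 1);  // 11 -- convert explicitly before doing math

    // Loose equality coerces types; strict equality does not.
    console.log(0 == '');   // true  (surprising)
    console.log(0 === '');  // false (usually what you mean)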


From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:

Pure client-side

The disadvantage is that your users' loading time would double (one load for the blank form, another load for the data). However, subsequent updates to the form would not require a refresh of the page. Users will like this if a lot of data is fetched from the server and loaded into the same form.

Pure server-side

The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.

Server-client hybrid

You would have the best of both worlds; however, you would need to create two data extraction points, causing your code to bloat slightly (see the sketch below).

There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
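
Roughly, the hybrid looks like this (the /profile.json endpoint and element IDs are made up): the server renders the initial form with the data already filled in, and later updates come through a second, JSON-returning extraction point.

    // Initial load: the server already rendered the form with current values.
    // Later updates go through a second, JSON-returning endpoint instead of
    // reloading the whole page.
    function refreshForm() {
      fetch('/profile.json')                        // the second data extraction point
        .then(function (res) { return res.json(); })
        .then(function (profile) {
          document.getElementById('name').value  = profile.name;
          document.getElementById('email').value = profile.email;
        });
    }

    // E.g. re-fetch after the user saves, without a full page refresh.
    document.getElementById('refresh').addEventListener('click', refreshForm);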

James Jones
A: 

Hello,

I'd like to give my two cents on this subject.

I'm generally in favor of the server-side approach, and here is why.

  • More SEO-friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
  • Performance is more controllable. User experience is always variable with SOA, because you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
  • Arguably, the server-side approach is more easily maintained and more readable.

I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.

Hope this helps.

Jason Palmer
A: 

One consideration I have not heard mentioned was network bandwidth. To give a specific example, an app I was involved with was all done server-side and resulted in a 200MB web page being sent to the client (it was impossible to do less without a major, major re-design of a bunch of apps), resulting in a 2-5 minute page load time.

When we re-implemented this by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20MB, resulting in:

  • HTTP response size: 200MB+ => 20MB+ (with corresponding bandwidth savings!)

  • Time to load the page: 2-5 minutes => 20 seconds (10-15 of which are taken up by a DB query that was already optimized to hell and further).

  • IE process size: 200MB+ => 80MB+

Mind you, the last two points were mainly due to the fact that the server side had to use a crappy tables-within-tables tree implementation, whereas going to the client side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.

DVK
Thanks for your statistics. They impressed me greatly. Network bandwidth counts.
Tower Joo
A: 

If you do it with Ajax:

You'll have to consider accessibility issues (search for web accessibility on Google) for disabled people, but also for old browsers, users who don't have JavaScript, bots (like Googlebot), etc.

You'll have to flirt with "progressive enhancement", which is not simple to do if you've never worked much with JavaScript. In short, you'll have to make your app work with old browsers and with those that don't have JavaScript (some mobile devices, for example) or have it disabled.

But if time and money are not an issue, I'd go with progressive enhancement.
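
To give a rough idea of what that looks like (the /comments URL, element IDs, and markup are made up), a link that works as a plain page load without JavaScript can be upgraded to an Ajax load when JavaScript is available:

    // Markup (works without JS as a normal link to a server-rendered page):
    //   <a id="more-comments" href="/comments?page=2">More comments</a>

    var link = document.getElementById('more-comments');
    if (link && window.fetch) {              // only enhance capable browsers
      link.addEventListener('click', function (event) {
        event.preventDefault();              // the href stays as the no-JS fallback
        fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
          .then(function (res) { return res.text(); })
          .then(function (html) {
            document.getElementById('comments')
              .insertAdjacentHTML('beforeend', html);
            history.pushState(null, '', link.href); // keeps the back button meaningful
          });
      });
    }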

Also consider the "Back button". I hate browsing a 100% AJAX website that renders the back button useless.

Good luck!

mrmuggles
+1  A: 

I built a RESTful web application where all CRUD functionality is available in the absence of JavaScript; in other words, all AJAX effects are strictly progressive enhancements.

I believe that with enough dedication, most web applications can be designed this way, eroding many of the server-logic vs client-logic "differences" raised in your question, such as security and expandability, because in both cases the request is routed to the same controller, whose business logic is the same until the last mile, where JSON/XML, instead of the full-page HTML, is returned for XHR requests.
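
A rough sketch of that "same controller, different last mile" idea, in a Node.js style (the route and data are made up, just to illustrate):

    var http = require('http');

    // The 'controller': identical business logic for both kinds of request.
    function listItems() {
      return [{ id: 1, name: 'first' }, { id: 2, name: 'second' }];
    }

    http.createServer(function (req, res) {
      var items = listItems();
      var isXhr = req.headers['x-requested-with'] === 'XMLHttpRequest';

      if (isXhr) {
        // Last mile for Ajax: return the raw data.
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(items));
      } else {
        // Last mile for a plain request: return the full server-rendered page.
        var rows = items.map(function (i) { return '<li>' + i.name + '</li>'; }).join('');
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<html><body><ul>' + rows + '</ul></body></html>');
      }
    }).listen(3000);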

Only in the few cases where the AJAXified application is vastly more advanced than its static counterpart (GMail being the best example that comes to my mind) does one need to create two versions and separate them completely (kudos to Google!).

Jerry