I'm a desktop application developer who is temporarily working on the web. I'm working with a client that wants me to build an app for use by locations all over the state; however, these locations have very shaky connectivity.

They really want a centralized web app and are suggesting I build a "lean" web app. I don't know what a "lean web app" means: lots of small HTTP requests, or a few large ones? I tend to favor chunky over chatty, but I've never had to worry about connectivity before.

Do I suggest a desktop app that replicates data when connectivity exists? If not, what's the best way to approach a web app when connectivity is shaky?

EDIT: I must qualify my question with further information. Assuming the web option, they've disallowed the use of browser runtime technologies and anything that requires installation. Thus Silverlight is out, Flash is out, Gears is out; only ASP.NET and JavaScript are available to me. Having stated this, part of my question was whether to use a desktop app; I suppose that can be extended to "thicker technologies".

EDIT #2: Network is homogeneous - every node is Windows. This won't be changing.

+7  A: 

You should get a definition of what the client means by "lean" so that there's no confusion surrounding it. Maybe present them with several interpretations of "lean" that you think they might intend. One thing I've found is that it's no good at all to guess about client requirements. Get clarification before you waste a bunch of time.

Zak
A: 

You may consider using a framework like Google Gears to help provide functionality during network down time. This allows users to connect to the web page once (with a functioning connection) and then be able to use the web app from then on, even without a connection.

When the network is restored, the framework can sync changes back with the central database.

There is even a tutorial for using Google Gears with the .Net Framework.

Gears with other languages

Michael La Voie
This seems like a rather high-tech solution to a low-tech problem.
Steven Sudit
+1  A: 

If connectivity is so bad, I would suggest that you write a WinForm app that downloads information, locally edits it and then uploads it. This way, if your connection goes down, all you have to do is retry until it works.

They seem to be suggesting a plain vanilla web app that doesn't use AJAX or rely on .NET postbacks or do anything that might make it break down horribly if your connection goes away for a bit. Instead, it should be designed so that you can hit Refresh until it works. In other words, they seem to want the closest thing to a WinForm app, only uglier.
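As a rough sketch of that "retry until it works" idea (the `send` callback and the attempt budget are made up for illustration, not anything the client specified):

```javascript
// Sketch of the "hit Refresh until it works" pattern. `send` is any
// function that attempts the POST and returns true on success.
function postWithRetry(send, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    if (send()) {
      return attempt; // success: report how many tries it took
    }
    // in a real app you would pause here (e.g. attempt * 2 seconds)
    // before retrying, rather than hammering the flaky link
  }
  return -1; // the connection never came back within the attempt budget
}
```

The point is that each attempt re-sends the same small, self-contained payload, so a dropped connection costs a retry rather than lost work.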

Steven Sudit
+1  A: 

Shaky connectivity definitely favors a desktop application. Web apps are great for users that have always-on Internet connections, and that might be using a variety of different browsers and operating systems.

Your client's locations are apparently all running Windows, so a desktop application is an appropriate choice. The other big advantage of web applications is that they make deployment easy, but auto-update technologies like ClickOnce make the deployment and update of desktop applications almost as easy.

And not to knock Google Gears, but it's relatively new and would have to be considered more risky than a tried-and-true desktop application.

Update: and if you're limited to just javascript on the client side, you definitely do not want to make this a web app. Your application simply will not be available whenever the Internet connection is down. There are ways to save stuff locally in javascript using cookies and user stores and whatnot, but you just don't want to do this.
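If you did go that route anyway, the cookie trick looks something like this (my own sketch; the `draft` cookie name and helper functions are hypothetical):

```javascript
// Stash unsent form data in a cookie so a draft survives a dropped
// connection. Returns the string you would assign to document.cookie.
function saveDraft(fields) {
  return "draft=" + encodeURIComponent(JSON.stringify(fields)) + "; max-age=86400";
}

// Find and decode the draft cookie from a cookie string; null if absent.
function loadDraft(cookieString) {
  var match = cookieString.match(/(?:^|;\s*)draft=([^;]*)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}

// In the browser: document.cookie = saveDraft({ name: "Ann" });
// and later:     var draft = loadDraft(document.cookie);
```

Note the limits: cookies hold only a few kilobytes and ride along on every request, which is exactly why this is a last resort rather than a design.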

MusiGenesis
I have lots of experience with WinForms, WPF and ClickOnce as it is. These are my sentiments too, although I'm really trying to do what they want and keep my own preferences out of it.
Travis Heseman
Wanting a lean web app for an environment with shaky connectivity is complete insanity. I worked for a company in Louisiana that wrote software for the Clerks of Court there. Most offices had dial-up connections that weren't working most of the time, so even our desktop application (which was written assuming always-on Internet connectivity, so it had no ability to cache data locally) performed very poorly.
MusiGenesis
I don't think they mean lean the way you do. At least I hope not!
Steven Sudit
@Steven: I'm assuming the same definition of "lean" as in OrbMan's comment, i.e. doing as much as possible on the server-side. This is of course not a good idea when the server isn't reachable much of the time.
MusiGenesis
I'm not sure what they meant, but if the task is largely data entry, then the big risk isn't having to wait, it's losing data. With a rich (i.e. fat) client, there'll be lots of AJAX and no way to recover if the server goes away for a bit. With a thin client that just posts what's in some fields, you can recover by refreshing repeatedly. So, in this way, a thin client might be more robust when dealing with a bad connection.
Steven Sudit
@Steven: I think you would agree with me, based on your answer below, but I would refuse to write a web app like this for a client if it meant having to tell the users to just keep hitting the Refresh button until it worked. Unless I were starving, of course.
MusiGenesis
@Steven: If you're going to put a lot of work into making a rich (fat) client via HTML and JavaScript, why not just make it using a desktop technology and save the hassle of dealing with stateless HTTP? The code will be cleaner and simpler in a technology designed to be rich (fat), such as WPF, WinForms, Qt, Swing or Cocoa (Mac).
Travis Heseman
@Travis and MusiGenesis: I have to agree that a fat client is the best answer when connectivity is likely to be absent as much as it's present. On the server side, an RPC or REST interface would be appropriate, rather than an HTML-based one.
Steven Sudit
A: 

You mention that connectivity is shaky at these locations, but that the app needs to be centralized. One thing you might consider is using multiple decentralized read database servers and a single centralized write server. MySQL makes this possible and affordable if your app is small.

Have the main database server at the datacenter/central office. Put up small web/db servers at each location, with your app installed. You can even run them off a user computer if the remote location is not too big. Make the local database servers connect to the centralized database server as replication slaves. As changes come in to the centralized database, the slave servers will pull down the data and make it available locally. When the connection is unavailable, your app data is still at least available, if not up to date. When the connection is available, the database handles replicating all relevant data down.
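As a rough illustration of that wiring, the standard MySQL master/slave options look something like this (the server IDs and the `appdb` database name are placeholders):

```ini
# Central (write) server: enable the binary log so slaves can replicate.
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = appdb

# Each location's local (read) server: unique id, replicate only the app db.
[mysqld]
server-id       = 2      # must be unique per location
replicate-do-db = appdb
read-only       = 1
```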

Now all you have to do is make your app use two separate database handles: for reading data it uses the local database; for writing data it uses the central database.
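The two-handle split can be sketched like this (all names here are illustrative, not a real driver API; reads go to the local replica, writes to the central master):

```javascript
// Wrap two database handles behind one object: queries (reads) hit the
// local replication slave, executes (writes) hit the central master.
function makeRouter(localDb, centralDb) {
  return {
    query: function (sql, params) {   // reads: local, works even offline
      return localDb.execute(sql, params);
    },
    execute: function (sql, params) { // writes: central, needs the link up
      return centralDb.execute(sql, params);
    }
  };
}
```

The design choice being illustrated: the app code never decides per-call which server to talk to; the routing rule (read local, write central) lives in one place.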

Zak
So you're suggesting an app that runs at the client location rather than a centralized app that runs on a centralized server and uses replication (in some form)? Assuming they're not going to install a server with IIS at each location, that would be a desktop app.
Travis Heseman
Code can run anywhere; it's the data you are concerned about, correct? So it could be either a desktop app or a simple locally installed web server running a web app (which I tend to prefer). If what you care about is centralization of data, centralize your write database server but distribute the data to the client locations via replication. If you are worried about availability, move the execution of the app to the client location. But keep your maintenance footprint as small as possible: use a web server at each location; it doesn't have to be a physical server, VMware Player is fine.
Zak
Also, the same goes for the DB server that gets replicated to: build the whole thing as a VMware instance and install it on one computer at the client location.
Zak
@Zak: I'm a big proponent of the KISS principle, so I have to say that I think your architecture here is hideously overcomplicated. You're losing most of the advantages of a centralized web application while gaining most of the disadvantages of a fat client approach.
MusiGenesis
It could be too complicated. OTOH, as I mentioned in my other answer, there is no detail on what the client means by "lean"; this answer assumes the client doesn't want to need a wide-open pipe all the time. It's a specific solution to a specific (perceived) problem. Also, it's actually not that hard or complicated to set up a VMware instance with Apache/PHP/MySQL and deploy it out to VMware Player. Then you just have your remote app point at the CVS/SVN repository and do a remote update to bring the code current every day: instant patching.
Zak
As a consultant, I don't want to impose more processes (VMware instances, databases and web servers) for their IT staff to have to maintain. Using a thick client with an embedded database, I can reduce those three extra processes per location to one process: the app.
Travis Heseman