Hi,

We have an application running where IIS and SQL Server are on the same machine. It's a Windows Server 2003 Standard machine with 4 GB of RAM, running on a VM.

The number of users is rising constantly. There are also some heavy statistics reports that users can run, which have a big impact on performance for the other users. So we need to improve performance somehow.

I thought of separating IIS and SQL onto two different machines running 64-bit Windows Server 2008 with at least 6 GB of RAM each, but it should also have a failover solution.

Can you recommend some scenarios for solving the performance and failover issues?

Thanks

ps:

Just for info: we are now using InProc session state management in IIS, but I think it would be better to switch to SQL Server session state.
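If we do switch, the web.config change would look roughly like this (the server name SQLSRV01 is a placeholder, and the ASPState database has to be created first with aspnet_regsql.exe):

```xml
<!-- Switch session state from InProc to SQL Server.
     First create the session state database, e.g.:
       aspnet_regsql.exe -ssadd -S SQLSRV01 -E
     SQLSRV01 below is a placeholder server name. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLSRV01;Integrated Security=SSPI"
                cookieless="false"
                timeout="20" />
</system.web>
```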

EDIT

I've broadened the question to cover failover, as our client doesn't want to spend too much money on server and SQL licenses. Would it be "OK" to just replicate to a second SQL Server and use that as a failover? Do you know of better "cheap" solutions?

The application is for internal use only, but more and more departments are becoming involved in the project.

+1  A: 

SQL Server always works best if it is the "ONLY" thing running on a machine. You will see a quick, easy and real benefit just from doing that. It seems to like to take control of everything and is always happier when it can :)

Robin Day
That's not actually true. In low-usage scenarios it performs better on the same machine.
RichardOD
The only reason there may be a performance increase on the same machine is the latency and bandwidth cost of transferring data across a network. As far as SQL Server's internal performance goes, a dedicated server will help. If the processing power required by SQL is low and the amount of data transferred between the database and the application is high, then yes, the same machine would probably be better.
Robin Day
@Robin- I agree. It was more of a warning that moving to a separate machine might not increase performance, hence the word "always" is misleading. In all the companies I have worked for, the SQL Servers have been on separate machines (with the exception of TFS, as we only had 5 or 6 users).
RichardOD
+1  A: 

It sounds like you're really asking whether you should put the DB on a separate machine. This may not improve performance (it may actually decrease it, as latency increases) but it will improve scalability (which I am guessing is what you really need).

Performance <> scalability.

However, more issues come into play: if you don't have enough RAM, performance may decrease with the DB on the same box, as SQL Server likes to use RAM.

That's why for things like TFS that use SQL Server, Microsoft recommends installing everything on one machine for a low number of users, but situating the DB on a different server for a higher number of users.

You can read about the deployment options for TFS here.

Switching to SQL Server state management will not increase performance; it will likely decrease it, but you will gain other benefits (like reliability).

It sounds like you need to actually find out what the performance bottleneck is first. In my experience this is typically in the DB.

Have you looked into standard ASP.NET optimization techniques like caching? Microsoft provides guidance on application tuning that also might be useful to you.
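One of the cheapest wins is often page output caching; a minimal sketch (the duration and the VaryByParam value are illustrative):

```aspx
<%-- Cache the rendered page for 60 seconds, keeping one copy
     per value of the "dept" query string parameter. --%>
<%@ OutputCache Duration="60" VaryByParam="dept" %>
```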

As you are using SQL Server in a web application scenario: if you are on SQL Server 2005 or above, you might wish to read about snapshot isolation. Not using it is sometimes a cause of performance issues in web applications.
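For example (YourDb is a placeholder; the first statement needs exclusive access to the database to complete):

```sql
-- Enable read committed snapshot so readers no longer block writers.
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally also allow explicit SNAPSHOT-level transactions.
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```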

RichardOD
A: 

Separation of the layers may help. Often the tuning of a machine for a DB is quite specific, so that's a reasonable first effort.

However, if you have two kinds of user activity, one of which is very heavy, then you always run the risk of a few heavy users hurting the rest of the population.

Two things you may consider:

  1. Can you adopt a data-warehouse approach? Have a second DB trickle-fed from the first, and let the heavy users do their work against the new DB. Sure, their stats will be slightly out of date, but conceptually that's always going to be true: by the time they look at the answers the world will have moved on.
  2. Control the number of stats requests you allow at any one time. Perhaps have them submitted as a "job" to a queue, and run them at a low priority.
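A sketch of such a queue in T-SQL (table and column names are hypothetical):

```sql
-- Hypothetical queue table for heavy stats requests.
CREATE TABLE dbo.StatsJobQueue (
    JobId      INT IDENTITY(1,1) PRIMARY KEY,
    Parameters NVARCHAR(MAX) NULL,
    EnqueuedAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- A background worker dequeues one job at a time; READPAST lets
-- multiple workers run without blocking on each other's rows.
DELETE TOP (1) FROM dbo.StatsJobQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.JobId, deleted.Parameters;
```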
djna
I want to have this data warehouse in any case, but it will take too much time; we need a fairly fast solution, because for the data warehouse we also want to include data from different systems... I like the idea of the job queue.
nWorx
+1  A: 

You should refer to these two articles on CodeProject for ASP.NET performance tuning:

1. http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
2. http://www.codeproject.com/KB/aspnet/aspnetPerformance.aspx

I have personally implemented these techniques in my ASP.NET apps and gained more than 30% performance improvement.

Additionally, you can refer to this article on achieving 99.99% uptime for your application:

3. http://www.codeproject.com/KB/aspnet/ProdArch.aspx

this. __curious_geek
A: 

Naturally, separating IIS and SQL Server is the first step. SQL Server really wants to have a complete machine to itself.

The second important thing is to analyze the application as it runs. Never try to optimize the performance of your application without having real usage data, because you will probably just spend time optimizing stuff that rarely gets called. One technique I have used with success in the past is to create a System.Diagnostics.Stopwatch in Application_BeginRequest in Global.asax, then store it in a context variable:

var sw = new Stopwatch();
sw.Start();
HttpContext.Current.Items["stopwatch"] = sw;

In Application_EndRequest, you obtain the stopwatch again:

var sw = (Stopwatch)HttpContext.Current.Items["stopwatch"];
sw.Stop();
TimeSpan ts = sw.Elapsed;

Then write to a log table how long it took to process the request. Also log the URL (both with and without the query string parameters) and anything else that will help you analyze performance.

Then you can analyze your application and find which operations take the longest, which are called the most, etc. That will allow you to see if there is one page that is requested a lot and generally takes a long time to complete; that page should be a target for optimization, using whatever tools you have for it, both .NET and SQL profilers.

Other things I normally log are IP addresses and the user ID for logged-in users. That also gives me an invaluable debugging tool when errors arise.

A reason to put this in a table, as opposed to writing it to a log file, is that you can use SQL to filter, group, calculate average times, etc.
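For example, assuming a hypothetical RequestLog table with Url and DurationMs columns, a query like this shows where the time actually goes:

```sql
-- Rank URLs by total time spent serving them (hits * average duration).
SELECT TOP 20
    Url,
    COUNT(*)        AS Hits,
    AVG(DurationMs) AS AvgMs,
    MAX(DurationMs) AS MaxMs
FROM dbo.RequestLog
GROUP BY Url
ORDER BY AVG(DurationMs) * COUNT(*) DESC;
```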

Pete
A: 

Right now you have a 32-bit OS on the VM, I assume. Since Standard Edition does not allow AWE, SQL Server will load up to its maximum of about 1.8 GB and leave plenty of RAM for IIS and the OS. But once you move to a 64-bit OS things will change, as SQL Server will take all the RAM for its buffer pool (~5 GB if 6 GB is available) and only give it back to the OS when notified. This behavior can be tweaked by configuring SQL Server's memory options.

By splitting IIS and SQL onto separate VMs you leave all the memory on the SQL VM for its buffer pool, and that is good. Ideally you should have enough RAM for SQL to load the entire database(s) into memory (including tempdb) and touch the disk only for log writes and when it has to checkpoint the database(s). In other words, more RAM means faster SQL. It is by far the most important hardware resource SQL Server requires for performance, and will give the biggest bang for the buck.
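For example, to cap the buffer pool so the OS keeps some headroom (the 5120 MB value is only an example, tune it for your VM):

```sql
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool at 5 GB on a 6 GB VM.
EXEC sp_configure 'max server memory (MB)', 5120;
RECONFIGURE;
```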

Now back to the broader question of failover. In SQL Server, the solutions for high availability split into two categories: automatic and manual. For automatic failover you really have only a few options:

  • Clustering. Traditionally this is rather expensive to implement because of the high cost of cluster-capable hardware, but with VMs it's a different story. Standard Edition SQL supports two-node clusters. Clustering is a bit hard to deploy, but is quite easy to operate and requires no application changes. With clustering the unit of failover is an entire SQL Server instance (i.e. every database, including master/tempdb/model and msdb, all logins, all SQL Agent jobs, etc.). A cluster is not a performance solution, as the stand-by server just sits idle in case the main one crashes. You can leverage the stand-by VM by deploying a so-called 'active-active' cluster: you deploy two clustered instances, one active on VM1 and standing by on VM2, the other active on VM2 and standing by on VM1. In case of failover one of the VMs has to take the load of both instances, which is why active-active deployments are sometimes frowned upon. Given that you plan to deploy on VMs rather than on (expensive) metal, I'd recommend against it, since there is no huge hardware cost to amortize.
  • Mirroring. This keeps a hot stand-by mirror of your database (not the instance). Mirroring is often preferred to clustering because of its lower deployment cost (no special hardware), lower failover time (seconds, as opposed to minutes for clustering) and geo-distribution capabilities (mirroring supports nodes on separate continents; clustering only supports a short distance of a few hundred meters between nodes). But because the unit of failover is a database, mirroring does not provide the ease of use of clustering. A lot of the resources an application needs do not reside in the database: logins, Agent jobs, maintenance plans, Database Mail messages and so on. Because only the database fails over, the failover has to be carefully planned so that the application continues to work afterwards (e.g. logins have to be transferred). The application also has to be aware of the mirroring deployment so that it connects properly. With Standard Edition you will only be able to deploy mirroring in high-safety mode.
  • Hardware mirroring. I'm not going to enter into details on this one, it requires specialized SAN hardware capable of disk level mirroring.
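For the application-awareness part of mirroring, the SqlClient connection string supports a failover partner keyword; a sketch with placeholder server and database names:

```xml
<!-- SQLA is the principal, SQLB the mirror; all names are placeholders.
     SqlClient retries against the Failover Partner if the principal
     is unreachable. -->
<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=SQLA;Failover Partner=SQLB;Initial Catalog=MyDb;Integrated Security=SSPI" />
</connectionStrings>
```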

If you're considering manual failover solutions, then there are a couple more alternatives:

  • Log shipping. Log shipping is basically out-of-band mirroring. Instead of transferring log records in real time over a dedicated TCP connection, the log is transferred via file copy operations. There are only a few reasons to choose log shipping over mirroring: the stand-by database can be queried for reporting, the stand-by can be located somewhere with sporadic connectivity, and the stand-by can be housed on a really low-powered machine.

  • Replication. This is really not a high-availability solution. Replication is a solution for providing data copies and/or exchanging data updates between sites. While it can be used to deploy some sort of makeshift high-availability 'solution', there are many problems with that and basically no advantages. Compared to both log shipping and mirroring it has a number of additional disadvantages, because the unit of failover is not even a database, only slices of data within a database (some of the tables). Metadata like users and security permissions are not failed over, schema changes have to be done in a replication-aware mode, and some changes cannot be replicated at all. By contrast, both mirroring and log shipping provide a stand-by copy identical to the production database that automatically covers any change made to it.

You mention that you are concerned about license costs: you actually don't need a license for the passive server with any of these technologies except replication. A stand-by server requires a license only if it becomes active and runs the database for more than 30 days.

Considering you plan to deploy on VMs, my choice would be clustering. If you were deploying on metal, I'd recommend mirroring instead, because of the cost of clustering hardware.

Remus Rusanu
I'd let someone else cover the IIS high availability options.
Remus Rusanu
Thanks for the great information.
nWorx