Hi all,

I have a normal select query that takes nearly 3 seconds to execute (SELECT * FROM users). There are only 310 records in the users table.

The configuration of my production server:

SQL Server Express Edition, Pentium 4 HT, 3 GHz, 2 GB RAM

Column Name            Type            Null      Comments
user_companyId         int             Not Null
user_userId            int             Not Null  Primary key
user_FirstName         nvarchar(50)    Null
user_lastName          nvarchar(60)    Null
user_logon             nvarchar(50)    Null
user_password          nvarchar(255)   Null
user_emailid           nvarchar(255)   Null
user_status            bit             Null
user_Notification      bit             Null
user_role              int             Null
user_verifyActivation  nvarchar(255)   Null
user_verifyEmail       nvarchar(255)   Null
user_loginattempt      smallint        Null
user_createdby         int             Null
user_updatedby         int             Null
user_createddate       datetime        Null
user_updateddate       datetime        Null
user_Department        nvarchar(1000)  Null
user_Designation       nvarchar(1000)  Null
A: 

Without the table structure, we can't know what your table looks like, so we can't really answer such a question...

What about not using SELECT * FROM Users, but actually specifying which fields you really need from the table?

SELECT user_companyId, user_userId,
       user_FirstName, user_lastName, user_logon
FROM Users

How does that perform? Do you still need 3 seconds for this query, or is it significantly faster?

If you really need all the users and all their attributes, then maybe that's just the time it takes on your system to retrieve that amount of data. The best way to speed things up is to limit the attributes retrieved (do you really need the user's photo?) by specifying a list of fields, and to limit the number of rows retrieved using a WHERE clause (do you REALLY need ALL users? Not just those that.........)
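For example (a sketch - the user_status = 1 filter is an assumption about what an "active" user means in your data):

SELECT user_companyId, user_userId,
       user_FirstName, user_lastName, user_logon
FROM Users
WHERE user_status = 1   -- hypothetical filter: only active users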

Marc

marc_s
I need pretty much all the columns. Even if I selected fewer columns, I wouldn't think pulling 310 records with less data in each column would take 3 seconds.
Dasharath Yadav
The point just is: if you can skip a large column (e.g. a photo, or a long text field), SQL Server needs to read less data, so if you have a slow HD, your query will be faster. Also, even if you need all columns, specifying them explicitly in the SELECT will be faster, since SQL Server doesn't need to do a column lookup first to figure out what columns are in your table.
marc_s
E.g. if you could avoid retrieving the user_Department and user_Designation fields, each up to 2,000 bytes, that alone could save you over a megabyte of data (310 rows x 4,000 bytes ~ 1.2 MB) that doesn't need to be transferred
marc_s
+1  A: 

Well, indexes don't matter much here - you're getting a table scan with that query.

So, there are only a few other items that could be bogging this down. One is row size. What columns do you have? If you have tons of text or image columns, this could be causing a delay in bringing these back.

As for your hardware, what's your HDD's RPM? Remember, this is reading off a disk, so if any other I/O tasks are being carried out, they will obviously cause a performance hit.
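A quick way to sanity-check row and table size (sp_spaceused is a built-in procedure):

-- Reports the row count plus reserved, data, and index sizes for the table
EXEC sp_spaceused 'users';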

Eric
+2  A: 

Is anything else happening on this machine?

Even if you made the worst possible data structure, SELECT * FROM Users should not take 3 seconds for 310 records. Either there are more (a lot more) records inside, or there is some problem outside of SQL Server (some other process blocking the query, or hardware issues).
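A trivial first check (a sketch; just confirm the row count):

-- Verify the table really holds only ~310 rows
SELECT COUNT(*) FROM users;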

Josip Medved
Worst possible structure? Put three image columns on your table, and then fill them with 1.99GB files. That would take quite a while to return!
Eric
He could also have memory issues going on. Especially if Exchange Server is running on the same box and it's getting hit more than the web DB, causing the web DB to stay paged.
280Z28
@Eric: My image of "worst possible structure" in a 310-row table didn't include enormous fields. My oversight. :)
Josip Medved
@280Z28: But with only 310 records? Those records better be huge...
Josip Medved
+2  A: 

As there is no WHERE clause, this isn't down to indexes etc. - SQL Server will do a full table scan and return all the data. I'd be looking at other things running on the machine, or SQL Server having run for a long time and used up a lot of VM. Bottom line is that this isn't a SQL issue - it's a machine issue.
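To see what else is running against the server, one quick check (sp_who2 is a built-in procedure):

-- Lists current sessions, their status, commands, and any blocking
EXEC sp_who2;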

MrTelly
A: 

The best thing to do is profile the server and also pay attention to what kinds of locks are occurring during the query.

If you are using any isolation level options, or the default, determine whether you can lower the isolation level, which will decrease the locking on the table. The more locking that occurs, the more often your query has to wait for other queries to complete. However, be very careful when lowering the isolation level, because you need to understand how that will affect the data you get back.

Determine if you can add where criteria to your query. Restricting the records you query can reduce locks.

Edit: If you find locking issues, you also need to evaluate any other queries that are being run.
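For example (a sketch; READ UNCOMMITTED trades consistency for less blocking, so treat it as a diagnostic, not a fix):

-- See which locks are currently held in this database
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();

-- Lower the isolation level for this session, then re-run the query
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM users;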

AaronLS
Don't lower isolation levels without understanding the possible consequences. You may see uncommitted data and get inconsistent results. It probably won't matter that much with most management reports, since no-one in their right mind usually reads them, and those that do read them will make bad decisions no matter what the reports say :-)
paxdiablo
That's why I made sure to say "you need to understand". LOL@your management report note. Even when they do read them they wouldn't know how to verify their correctness.
AaronLS
+1  A: 

There's a number of things you should consider:

  • Don't use the Express edition, it's probably called that for a reason. Go with a real DBMS (and, yes, this includes the non-express SQL Server).
  • Use of "select * from" is always a bad idea unless you absolutely need every column. Change it to get only the columns you need.
  • Are you using a WHERE or ORDER BY clause? If so, you need to ensure you have indexes set up correctly (even for 310 rows, since tables always get bigger than you think).
  • Use EXPLAIN, or whatever tool Microsoft provides as an equivalent (see the sketch after this list). It will show you why the query is running slow.
  • If your DB is on a different machine, there may be network issues (not necessarily problems, you may have stateful packet filters that slow down connections, for example).
  • Examine the system loads of the boxes you're using. If there are other processes using a lot of CPU grunt or disk I/O, they may be causing slowdown.
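On SQL Server, the rough equivalent of EXPLAIN is SHOWPLAN (a minimal sketch; each SET must run in its own batch):

SET SHOWPLAN_TEXT ON;   -- show the estimated plan as text instead of executing
GO
SELECT * FROM users;
GO
SET SHOWPLAN_TEXT OFF;
GO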
paxdiablo
IIRC, the database engine inside SQL Server Express is the same as that in the larger editions, just without some of the extra modules deployed, and with a limiter to stop the database growing larger than 2G. In other words, for this situation, use of Express isn't going to explain the slowness.
Bevan
@pax: No, I'm not using an ORDER BY or WHERE clause. The DB is on the same server as the application server.
Dasharath Yadav
@Bevan: absolutely right - I don't see how switching to another SQL server edition would help in any way....
marc_s
@Bevan, there are quite a few differences between Express and the others. See http://www.microsoft.com/Sqlserver/2005/en/us/compare-features.aspx for details. It may not be the specific cause of this problem but the scalability and performance issues alone (especially multi-core support) would convince me to steer clear of Express for production databases.
paxdiablo
Bevan is absolutely correct, Express has the same engine as SQL Server. Technically the limits are 1GB of RAM and 4GB of data. It is a "real" DBMS.
onupdatecascade
No, I think we'll just have to agree to disagree on that one. The limit of only being able to run on a single core is the killer to me. Have a look at all the questions on SO re DBMS performance - an artificial limit like single core (or memory/DB size for that matter) means Express is a developer-only variant for me. For production, I prefer something better.
paxdiablo
A: 

If it's consistently 3 seconds, the problem isn't the query or the schema - any query or schema cause would have been obviously irrational to the designer. I'd guess it's hardware or network issues. Is there anything you can do with the database that doesn't take 3 seconds? What do SET STATISTICS IO ON and SET STATISTICS TIME ON tell you? I bet there's nothing there that supports 3 seconds.
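A minimal sketch of turning those statistics on in the same session before running the query:

SET STATISTICS IO ON;    -- per-table logical/physical reads
SET STATISTICS TIME ON;  -- parse/compile and execution CPU/elapsed times

SELECT * FROM users;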

le dorfier
A: 

Without a better indexing strategy, leaving certain columns out will only reduce the impact on the network, which shouldn't be awful for only 310 rows. My guess is that it's a locking issue.

So consider using:

SELECT * FROM Users WITH (NOLOCK);

This will mean that you don't respect any locks that are currently on the table by other connections. You may get 'dirty reads', seeing data which hasn't been committed yet - but it could be worthwhile from a performance perspective.

Let's face it - if you've considered the ramifications, then it should be fine...

Rob

Rob Farley
A: 

The first thing you should do for any performance problem is to get an execution plan for the query - this is a representation of the run path SQL Server chooses when it executes your query. The best place to look for info on how to do this is Google - you want an actual (statistics) plan, as it includes information about how many rows are returned.

This doesn't sound like a problem with the execution plan, however, as the query is so simple - in fact, I'm fairly sure that query counts as a "trivial plan", i.e. there is only one possible plan.

This leaves locking or hardware issues (is the query only slow on your production database, and is it always slow, or does the execution time vary?). The query will attempt to get a shared lock on the whole table - if anyone is writing, you will be blocked from reading until the writer is finished. You can check whether this is the source of your problem by looking at the DMVs; see http://sqllearningsdmvdmf.blogspot.com/2009/03/top-10-resource-wait-times.html
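For instance, a quick look at the top waits (a sketch; sys.dm_os_wait_stats is cumulative since the last service restart):

-- Wait types with the most accumulated wait time
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;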

Finally, there are restrictions on SQL Server Express in terms of CPU utilisation, memory use, etc. What is the load on your server like (operations per second)?

Kragen
A: 

Is there a possible way that the performance may degrade based on the column size (the length of the data in the column)?

In your table, the last two columns are nvarchar(1000) - are they always filled with that amount of data?

I am not a SQL expert, but the packet size returned for 310 records with those two columns will be different from the size without them.

I saw a similar post here on Stack Overflow; you can go through this:

performance-in-sql
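To check how much data those two wide columns actually hold, something like this (a sketch using the built-in DATALENGTH function):

-- Total bytes stored in each nvarchar(1000) column across all rows
SELECT SUM(DATALENGTH(user_Department))  AS department_bytes,
       SUM(DATALENGTH(user_Designation)) AS designation_bytes
FROM users;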

Ramesh Vel