I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id.
There are multiple users (distinct usr_ids)
time_stamp is not a unique identifier: sometimes user even...
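For reference, a minimal sketch of the usual greatest-n-per-group approach in Postgres; using transaction_id as a tie-breaker is an assumption, since time_stamp is stated not to be unique:

-- A sketch: DISTINCT ON keeps one row per usr_id, and the ORDER BY decides
-- which one -- here the most recent time_stamp (transaction_id tie-break assumed).
SELECT DISTINCT ON (usr_id)
       usr_id, time_stamp, lives_remaining
FROM   lives
ORDER  BY usr_id, time_stamp DESC, transaction_id DESC;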
I want to find the highest auto-incremented value in a field. (It's not being fetched right after an insert, where I could use SCOPE_IDENTITY(), @@IDENTITY, etc.)
Which of these two queries runs faster, or gives better performance?
Id is the primary key and auto-increment field for Table1, and this is for SQL Server 2005.
SELECT MAX(Id) FROM Table1
SELEC...
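The second query is cut off above; assuming it is the common TOP 1 ... ORDER BY variant, the comparison being asked about would look something like this (both forms normally resolve to a single seek on the clustered primary key index):

-- Both statements below are a sketch; only the first appears in full above,
-- and the second is an assumption about the truncated query.
SELECT MAX(Id) FROM Table1;

SELECT TOP 1 Id FROM Table1 ORDER BY Id DESC;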
The query:
SELECT tbl1.*
  FROM tbl1
  JOIN tbl2
    ON (tbl1.t1_pk = tbl2.t2_fk_t1_pk
        AND tbl2.t2_strt_dt <= sysdate
        AND tbl2.t2_end_dt >= sysdate)
  JOIN tbl3
    ON (tbl3.t3_pk = tbl2.t2_fk_t3_pk
        AND tbl3.t3_lkup_1 = 2577304
        AND tbl3.t3_lkup_2 = 1220833)
 WHERE tbl2.t2_lkup_1 = 1020000002981587;
Facts:
Oracle XE
tbl1.t1_pk is a pri...
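The facts list is cut off, but for a query shaped like this, the indexes it usually leans on are on the filter and join columns of tbl2 and tbl3; a sketch, assuming they don't already exist:

-- A sketch, assuming these indexes are not already present; whether they help
-- depends on the cut-off facts (row counts, existing indexes, selectivity).
CREATE INDEX idx_t2_lkup ON tbl2 (t2_lkup_1, t2_fk_t1_pk, t2_fk_t3_pk,
                                  t2_strt_dt, t2_end_dt);
CREATE INDEX idx_t3_lkup ON tbl3 (t3_lkup_1, t3_lkup_2);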
I am developing a web application that can support threaded comments. I need the ability to rearrange the comments based on the number of votes received (identical to how threaded comments work on Reddit).
I would love to hear input from the SO community on how to do it.
How should I design the comments table?
Here is the structur...
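The structure above is cut off; for reference, a minimal adjacency-list sketch that many threaded-comment designs start from (all names here are assumptions, not the author's):

-- A minimal sketch of an adjacency-list comments table (all names assumed).
-- parent_id is NULL for top-level comments; vote_count is denormalized so a
-- thread can be re-sorted by votes without aggregating on every read.
CREATE TABLE comments (
    comment_id  INT        NOT NULL PRIMARY KEY,
    post_id     INT        NOT NULL,
    parent_id   INT        NULL REFERENCES comments (comment_id),
    author_id   INT        NOT NULL,
    body        TEXT       NOT NULL,
    vote_count  INT        NOT NULL DEFAULT 0,
    created_at  TIMESTAMP  NOT NULL
);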
The box this query is running on is a dedicated server running in a datacenter.
AMD Opteron 1354 Quad-Core 2.20GHz
2GB of RAM
Windows Server 2008 x64 (Yes I know I only have 2GB of RAM, I'm upgrading to 8GB when the project goes live).
So I went through and created 250,000 dummy rows in a table to really stress test some queries that L...
I have the following query
DECLARE @userId INT
DECLARE @siteId INT

SET @siteId = -1
SET @userId = 1828

SELECT a.id AS alertId,
       a.location_id,
       a.alert_type_id,
       a.event_id,
       a.user_id,
       a.site_id,
       a.accepted_by
FROM   alerts AS a
JOIN   alert_types AS ats ON a...
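The rest of the query is cut off, but given the @userId/@siteId parameters, a covering index along these lines is a common first thing to try (the column order assumes the missing WHERE clause filters on user_id and site_id):

-- A sketch, assuming the cut-off WHERE clause filters on user_id and site_id;
-- INCLUDE covers the selected columns so the seek doesn't have to go back to
-- the clustered index (SQL Server syntax).
CREATE NONCLUSTERED INDEX idx_alerts_user_site
    ON alerts (user_id, site_id)
    INCLUDE (location_id, alert_type_id, event_id, accepted_by);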
Hi everyone,
I'm wondering if there's some sort of runtime mechanism that would observe the queries running against my database server, record how many queries of each "type" are running, look at the performance of those queries, and then, based on this runtime data, suggest which indexes should be added or removed.
I'm working aga...
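The target DBMS is cut off above; if it happens to be SQL Server, the closest built-in mechanisms are the Database Engine Tuning Advisor and the missing-index DMVs, which accumulate exactly this kind of runtime data. A sketch of querying the DMVs (the SQL Server assumption is mine):

-- A sketch, assuming SQL Server: the missing-index DMVs collect suggestions
-- from the optimizer at runtime; treat the output as hints, not commands.
SELECT  d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM    sys.dm_db_missing_index_details d
JOIN    sys.dm_db_missing_index_groups g
        ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats s
        ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;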
When constructing LINQ expressions (for me, LINQ to Objects), there are many ways to accomplish the same thing, some much better and more efficient than others.
Is there a good way to "tune" or optimize these expressions?
What fundamental metrics do folks employ and how do you gather them?
Is there a way to get at "total iterations...
Background: I have a table with 5 million address entries which I'd like to search by different fields (customer name, contact name, zip, city, phone, ...), up to 8 fields. The data is pretty stable, at most 50 changes a day, so it's almost exclusively read access.
The user isn't supposed to tell me in advance what he's searching for, and I also w...
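The rest of the question is cut off, but for a read-mostly table searched on arbitrary subsets of ~8 fields, one common starting point is a single-column index per searchable field and letting the optimizer pick or combine them; a sketch with assumed table and column names:

-- A sketch (table/column names assumed from the description). With almost
-- read-only data, the write cost of several single-column indexes is minor.
CREATE INDEX idx_addr_customer ON address (customer_name);
CREATE INDEX idx_addr_contact  ON address (contact_name);
CREATE INDEX idx_addr_zip      ON address (zip);
CREATE INDEX idx_addr_city     ON address (city);
CREATE INDEX idx_addr_phone    ON address (phone);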
I have a table that tracks inventory data by each individual piece. This is a simplified version of the table (some non-key fields are excluded):
UniqueID,
ProductSKU,
SerialNumber,
OnHandStatus,
Cost,
DateTimeStamp
Every time something happens to a given piece, a new audit record is created. For example, the first time my product ...
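The example is cut off above, but audit tables like this are usually queried for the latest row per piece; a sketch of that pattern using the columns listed (the table name and whether this is what the cut-off question asks for are assumptions):

-- A sketch of "latest audit row per piece" via ROW_NUMBER(); the table name
-- inventory_audit is assumed.
SELECT UniqueID, ProductSKU, SerialNumber, OnHandStatus, Cost, DateTimeStamp
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ProductSKU, SerialNumber
                              ORDER BY DateTimeStamp DESC) AS rn
    FROM inventory_audit
) latest
WHERE rn = 1;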
I have an MS SQL table McTable with a column BigMacs nvarchar(255). I would like to get rows with a BigMacs value greater than 5.
What I do is:
select * from
(
    select
        BigMacs                 BigMacsS,
        CAST(BigMacs as Binary) BigMacsB,
        CAST(BigMacs as int)    BigMacsL
    from
        McTable
    where
        BigMacs Like '%[0-9]%'
...
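The rest of the attempt is cut off; the usual pitfall with this pattern is that SQL Server may evaluate CAST(BigMacs AS int) on rows the WHERE clause was meant to filter out. A sketch of one common workaround (names taken from the question):

-- A sketch: only cast values made up entirely of digits, and keep the cast
-- inside a CASE so it cannot be evaluated before the digit check.
SELECT *
FROM   McTable
WHERE  CASE WHEN BigMacs NOT LIKE '%[^0-9]%' AND BigMacs <> ''
            THEN CAST(BigMacs AS int)
       END > 5;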
I am working on a few PHP projects that use MVC frameworks, and while they all have different ways of retrieving objects from the database, it always seems that nothing beats writing your SQL queries by hand when it comes to speed and cutting down on the number of queries.
For example, one of my web projects (written by a junior developer) exe...
Here is my query:
select word_id, count(sentence_id)
from sentence_word
group by word_id
having count(sentence_id) > 100;
The table sentence_word contains 3 fields, word_id, sentence_id and a primary key id.
It has 350k+ rows.
This query takes a whopping 85 seconds and I'm wondering (hoping, praying?) there is a faster way to find all...
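If there is no index that leads on word_id, the GROUP BY has to sort the whole table; a sketch of the index that usually fixes this (assuming nothing like it exists yet):

-- A sketch, assuming no such index exists: leading on word_id lets the
-- GROUP BY read pre-sorted data, and including sentence_id makes the index
-- cover the count without touching the base table.
CREATE INDEX idx_sentence_word_word ON sentence_word (word_id, sentence_id);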
I have a few years' experience developing with Oracle and have now moved to a place where they use SQL Server (2005). Where would be a good place to learn things like SQL Server query optimisation, basic DBA work and SQL Server gotchas for someone with my background?
Thanks!
...
Could anyone show me how to get records from this statement?
Select a random employee who has not been employee of the month in the last x months (a sketch of one way to write it follows the table definitions below).
Table Employee
ID
EmployeeName
Table EmployeeOfTheMonth
ID
EmployeeID
MonthStartedDate
MonthEndedDate
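A sketch of one way to write it (SQL Server syntax and the value of x are assumptions):

-- A sketch, assuming SQL Server: exclude anyone who was employee of the month
-- in the last @x months, then pick one remaining employee at random.
DECLARE @x INT
SET @x = 6   -- "last x months" (value assumed)

SELECT TOP 1 e.ID, e.EmployeeName
FROM   Employee e
WHERE  e.ID NOT IN (
           SELECT m.EmployeeID
           FROM   EmployeeOfTheMonth m
           WHERE  m.MonthStartedDate >= DATEADD(MONTH, -@x, GETDATE())
       )
ORDER BY NEWID();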
Thank you very much
...
I need to speed up a query. Is an index table what I'm looking for? If so, how do I make one? Do I have to update it on each insert?
Here are the table schemas:
--table1--  |  --tableA--  |  --table2--
id          |  id          |  id
attrib1     |  t1id        |  attrib1
attrib2     |  t2id        |  attrib2
...
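The schemas above are cut off, but assuming tableA is the junction table linking table1 and table2 through t1id/t2id, an ordinary index (not a separate "index table") is what speeds up the join, and the database maintains it automatically on every insert:

-- A sketch, assuming tableA joins table1 and table2 via t1id/t2id (names from
-- the schema above); no manual updates are needed after inserts.
CREATE INDEX idx_tableA_t1id ON tableA (t1id);
CREATE INDEX idx_tableA_t2id ON tableA (t2id);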
I've got a table structure that can be summarized as follows:
pagegroup
* pagegroupid
* name
has 3600 rows
page
* pageid
* pagegroupid
* data
references pagegroup;
has 10000 rows;
can have anything between 1-700 rows per pagegroup;
the data column is of type mediumtext and contains 100 - 200 KB of data per row
userdata...
This question is related to this one.
I have a page table with the following structure:
CREATE TABLE mydatabase.page (
  pageid    int(10) unsigned NOT NULL auto_increment,
  sourceid  int(10) unsigned default NULL,
  number    int(10) unsigned default NULL,
  data      mediumtext,
  processed int(10) unsigned default NULL,
  PRIMARY KEY (pageid...
I have a particularly slow query due to the vast amount of information being joined together. However, I needed to add a WHERE clause of the form id IN (SELECT id FROM table).
I want to know if there is any gain from the following, and more pressingly, whether it will even give the desired results.
select a.* from a where a.id in (select id...
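For reference, a sketch of the forms usually compared here (the subquery table is called b only for illustration): they return the same rows as long as b.id is not nullable and a has no duplicate matches to worry about, but the optimizer may plan them very differently.

-- A sketch with an assumed table name b; all three usually filter a to the
-- rows whose id appears in b.
SELECT a.* FROM a WHERE a.id IN (SELECT id FROM b);

SELECT a.* FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.id = a.id);

SELECT a.* FROM a JOIN (SELECT DISTINCT id FROM b) filt ON filt.id = a.id;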
When I optimize my 2 single queries to run in less than 0.02 seconds and then UNION them, the resulting query takes over 1 second to run. Also, a UNION ALL takes longer than a UNION DISTINCT. I would assume allowing duplicates would make the query run faster, not slower. Am I really just better off running the 2 queries separately? I w...