The following query is using temporary and filesort. I'd like to avoid that if possible.

SELECT lib_name, description, count(seq_id), floor(avg(size)) 
FROM libraries l JOIN sequence s ON (l.lib_id=s.lib_id)
WHERE s.is_contig=0 and foreign_seqs=0 GROUP BY lib_name;

The EXPLAIN says:

id,select_type,table,type,possible_keys,key,key_len,ref,rows,Extra
1,SIMPLE,s,ref,libseq,contigs,contigs,4,const,28447,Using temporary; Using filesort
1,SIMPLE,l,eq_ref,PRIMARY,PRIMARY,4,s.lib_id,1,Using where

The tables look like this:

libraries

CREATE TABLE  `libraries` (
  `lib_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `lib_name` varchar(30) NOT NULL,
  `method_id` int(10) unsigned DEFAULT NULL,
  `lib_efficiency` decimal(4,2) unsigned DEFAULT NULL,
  `insert_avg` decimal(5,2) DEFAULT NULL,
  `insert_high` decimal(5,2) DEFAULT NULL,
  `insert_low` decimal(5,2) DEFAULT NULL,
  `amtvector` decimal(4,2) unsigned DEFAULT NULL,
  `description` text,
  `foreign_seqs` tinyint(1) NOT NULL DEFAULT '0' COMMENT '1 means the sequences in this library are not ours',
  PRIMARY KEY (`lib_id`),
  UNIQUE KEY `lib_name` (`lib_name`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1;

sequence

CREATE TABLE  `sequence` (
  `seq_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `seq_name` varchar(40) NOT NULL DEFAULT '',
  `lib_id` int(10) unsigned DEFAULT NULL,
  `size` int(10) unsigned DEFAULT NULL,
  `add_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `sequencing_date` date DEFAULT '0000-00-00',
  `comment` text DEFAULT NULL,
  `is_contig` int(10) unsigned NOT NULL DEFAULT '0',
  `fasta_seq` longtext,
  `primer` varchar(15) DEFAULT NULL,
  `gc_count` int(10) DEFAULT NULL,
  PRIMARY KEY (`seq_id`),
  UNIQUE KEY `seq_name` (`seq_name`),
  UNIQUE KEY `libseq` (`lib_id`,`seq_id`),
  KEY `primer` (`primer`),
  KEY `sgitnoc` (`seq_name`,`is_contig`),
  KEY `contigs` (`is_contig`,`seq_name`) USING BTREE,
  CONSTRAINT `FK_sequence_1` FOREIGN KEY (`lib_id`) REFERENCES `libraries` (`lib_id`)
) ENGINE=InnoDB AUTO_INCREMENT=61508 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;

Are there any changes I can make to speed this query up? If not, when (for a web application) is it worth putting the results of a query like the above into a MEMORY table?

+1  A: 

First strategy: make it faster for MySQL to locate the records you want summarized.

You've already got an index on sequence.is_contig (the contigs key). You might also try indexing libraries.foreign_seqs. I don't know whether that will help, but it's worth a try.
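The index suggestion above might look like the following (index names are illustrative). The second statement is a further possibility, not from the answer: a composite index on sequence covering the filter column, the join column, and the averaged column, which could let MySQL satisfy that side of the query from the index alone.

  -- Index the filter column on libraries, as suggested above:
  ALTER TABLE libraries ADD INDEX idx_foreign_seqs (foreign_seqs);

  -- Optional covering index on sequence (worth testing with EXPLAIN):
  -- covers WHERE s.is_contig=0, the join on lib_id, and AVG(size)
  ALTER TABLE sequence ADD INDEX idx_contig_lib_size (is_contig, lib_id, size);

After adding an index, re-run EXPLAIN to confirm the optimizer actually picks it up; unused indexes only slow down writes.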

Second strategy: see if you can get your sort to run in memory, rather than in a file. Try making the sort_buffer_size parameter bigger. This will consume RAM on your server, but that's what RAM is for.
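A low-risk way to test this is to raise the buffer for the current session only, so other connections are unaffected (the 8MB value is illustrative, not a recommendation):

  -- Per-session change; reverts when the connection closes:
  SET SESSION sort_buffer_size = 8 * 1024 * 1024;  -- 8MB

  -- Then re-run the query and check whether the filesort disappears
  -- or the timing improves.

If it helps, the value can be made permanent via sort_buffer_size in my.cnf, keeping in mind the buffer is allocated per sorting connection.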

Third strategy: IF your application runs this query a lot but updates the underlying data only a little, take your own suggestion and create a summary table, perhaps remade by an EVENT that runs every few minutes. If you're going to follow that strategy, start by creating a view containing this query and have your app retrieve information from the view. Then get the summary table working, drop the view, and give the summary table the same name as the view. That way your data model work and your application design work can proceed independently of each other.
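A sketch of that view-then-summary-table progression, using the query from the question (object names like lib_summary are illustrative, and the EVENT requires the event scheduler to be enabled, e.g. SET GLOBAL event_scheduler = ON):

  -- Step 1: the app reads from a view wrapping the original query.
  CREATE VIEW lib_summary AS
    SELECT lib_name, description,
           COUNT(seq_id) AS seq_count,
           FLOOR(AVG(size)) AS avg_size
    FROM libraries l JOIN sequence s ON (l.lib_id = s.lib_id)
    WHERE s.is_contig = 0 AND foreign_seqs = 0
    GROUP BY lib_name;

  -- Step 2 (later): swap the view for a real table with the same name,
  -- refreshed periodically by an event.
  DROP VIEW lib_summary;

  CREATE TABLE lib_summary (
    lib_name  varchar(30) NOT NULL,
    description text,
    seq_count int unsigned,
    avg_size  int unsigned,
    PRIMARY KEY (lib_name)
  ) ENGINE=InnoDB;

  CREATE EVENT refresh_lib_summary
  ON SCHEDULE EVERY 5 MINUTE
  DO
    REPLACE INTO lib_summary
      SELECT lib_name, description, COUNT(seq_id), FLOOR(AVG(size))
      FROM libraries l JOIN sequence s ON (l.lib_id = s.lib_id)
      WHERE s.is_contig = 0 AND foreign_seqs = 0
      GROUP BY lib_name;

Because the application only ever queries lib_summary, the switch from view to table needs no application changes.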

Final suggestion: If this is truly slowly-changing summary data, switch the summary table to MyISAM. It's a little faster for this kind of data wrangling.

Ollie Jones
Thanks for a very clear and complete answer. Reindexing had no effect in this case, and sort_buffer_size is already 2M, which I gather is on the higher end of normal, or at least that's what my reading suggests. So I'm going to go with a summary table.
dnagirl