innodb

Can't find my.cnf file so I can enable InnoDB - is there another way?

Hey all, I am trying to get Magento running on a shared server, and am having difficulty. When I look at the engines in phpMyAdmin, I get InnoDB DISABLED. So I look in /etc/, and there is no my.cnf file. There is an ftpquota and a .boxtrapper file, but nothing else. I know I can probably create a new one, but this is a server that hosts ...
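On shared hosting the option file often is not in /etc/ at all, and you may not be allowed to edit it anyway; the engine state can at least be confirmed from SQL. A minimal check, assuming nothing about the host's layout:

    -- Is InnoDB compiled in, and is it enabled?
    SHOW ENGINES;
    SHOW VARIABLES LIKE 'have_innodb';
    -- From a shell, the client will list which option files it actually reads:
    -- mysql --help | grep -A1 "Default options"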

Database design: objects with different attributes

I'm designing a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. E.g.: magazine: title, issue_number, pages, copies, close_date, release_date; web_site: name, bandwidth, hits, date_from, date_to. I want to use InnoDB and...
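One common layout for this kind of schema is table-per-subclass: a master table for the shared key and one table per fixed type. A minimal sketch using the attribute lists from the question (all names and types are illustrative):

    CREATE TABLE product (
      product_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      product_type ENUM('magazine','web_site') NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE magazine (
      product_id   INT UNSIGNED NOT NULL PRIMARY KEY,
      title        VARCHAR(255),
      issue_number INT,
      pages        INT,
      copies       INT,
      close_date   DATE,
      release_date DATE,
      FOREIGN KEY (product_id) REFERENCES product (product_id)
    ) ENGINE=InnoDB;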

Foreign key pointing to different tables

I'm implementing the table-per-subclass design I discussed in a previous question. It's a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. I have a master table that holds common attributes: product_type ============ pro...
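One way to avoid a foreign key that would have to point at several subtype tables is to reference the shared master table instead: any row that belongs to "a product" points at the master, regardless of the concrete type. A hedged sketch, assuming a master table keyed by product_id and a hypothetical product_price child table:

    CREATE TABLE product_price (
      product_id INT UNSIGNED NOT NULL,
      valid_from DATE NOT NULL,
      price      DECIMAL(10,2) NOT NULL,
      PRIMARY KEY (product_id, valid_from),
      FOREIGN KEY (product_id) REFERENCES product (product_id) ON DELETE CASCADE
    ) ENGINE=InnoDB;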

mysql innodb max size of transaction

Using MySQL 5.1.41 and InnoDB, I'm doing some data import but can't use LOAD DATA INFILE, so I'm manually issuing INSERT statements. I found that it's much faster to disable autocommit and issue, say, 100 INSERT statements and then commit, instead of the implicit commit after each insert. It got me thinking: what limits are there to ho...
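A sketch of the batching pattern described above (table and column names are placeholders). The practical ceiling on a single transaction is set less by row count than by undo space and the redo log capacity (innodb_log_file_size * innodb_log_files_in_group):

    SET autocommit = 0;
    INSERT INTO mytable (col1, col2) VALUES (1, 'a');
    INSERT INTO mytable (col1, col2) VALUES (2, 'b');
    -- ... roughly 100 inserts per batch ...
    COMMIT;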

mysql innodb: innodb_flush_method

In the following link, http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_method, it says: Different values of this variable can have a marked effect on InnoDB performance. For example, on some systems where InnoDB data and log files are located on a SAN, it has been found that setting innodb_flush_method to...
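The variable the manual is talking about is set in the server configuration; which value wins is hardware-dependent and worth benchmarking rather than assuming. A my.cnf sketch for the Linux case:

    [mysqld]
    # bypass the OS cache for data files; test against the default before keeping it
    innodb_flush_method = O_DIRECT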

MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The...
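For comparison, the same table under both candidate engines (illustrative columns). The main structural difference at this load is that MEMORY uses table-level locks while InnoDB locks rows, which tends to dominate once many of the 10,000 statements per second are updates:

    CREATE TABLE hot_data_memory (
      id  INT NOT NULL PRIMARY KEY,
      val INT NOT NULL
    ) ENGINE=MEMORY;

    CREATE TABLE hot_data_innodb (
      id  INT NOT NULL PRIMARY KEY,
      val INT NOT NULL
    ) ENGINE=InnoDB;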

How can I determine when an InnoDB table was last changed?

I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the cti...
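The timestamp in question is the Update_time column of SHOW TABLE STATUS, and for InnoDB it is usually NULL, so a common workaround is to track the time yourself in a small bookkeeping table, updated from triggers (MySQL 5.0+) or from application code. A sketch with assumed names:

    SHOW TABLE STATUS LIKE 'mytable';   -- Update_time is typically NULL for InnoDB

    CREATE TABLE table_touch (
      table_name VARCHAR(64) NOT NULL PRIMARY KEY,
      touched_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                           ON UPDATE CURRENT_TIMESTAMP
    ) ENGINE=InnoDB;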

Why does MySQL autoincrement increase on failed inserts?

A co-worker just made me aware of a very strange MySQL behavior. Assume you have a table with an auto_increment field and another field that is set to unique (e.g. a username field). When trying to insert a row with a username that's already in the table, the insert fails, as expected. Yet the auto_increment value is increased, as can be...
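A small reproduction of the behavior: InnoDB hands out the auto-increment value before the duplicate-key check, and a failed insert does not give it back, so the sequence gains a gap:

    CREATE TABLE users (
      id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      username VARCHAR(32) NOT NULL UNIQUE
    ) ENGINE=InnoDB;

    INSERT INTO users (username) VALUES ('alice');   -- gets id 1
    INSERT INTO users (username) VALUES ('alice');   -- fails: duplicate key
    INSERT INTO users (username) VALUES ('bob');     -- gets id 3, not 2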

Lock InnoDB table temporarily

Hi everyone, I'm making larger inserts of a couple of thousand rows in my current web app, and I would like to make sure that no one can do anything but read the table until the inserts are done. What is the best way to do this while keeping read availability open for normal, non-admin users? Thanks! ...
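With InnoDB a plain transaction already gives most of this: readers keep their consistent snapshot while the bulk insert runs, and only conflicting writers block. A sketch (table and columns are placeholders); an explicit table lock is the heavier alternative, but it blocks reads too:

    START TRANSACTION;
    INSERT INTO mytable (col1, col2) VALUES (1, 'a');   -- repeat for the batch
    INSERT INTO mytable (col1, col2) VALUES (2, 'b');
    COMMIT;

    -- Heavy-handed alternative: blocks other sessions' reads as well
    -- LOCK TABLES mytable WRITE; ... UNLOCK TABLES;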

Databases design - one link table or multiple link tables?

Hi there, I'm working on a front end for a database where each table essentially has a many-to-many relationship with all other tables. I'm not a DB admin; I've only taken a few basic DB courses. The typical solution in this case, as I understand it, would be multiple link tables to join each 'real' table. Here's what I'm proposing instead: one l...
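For reference, the conventional per-pair junction table the question is weighing against looks like this (author/book and their keys are assumed example tables):

    CREATE TABLE author_book (
      author_id INT UNSIGNED NOT NULL,
      book_id   INT UNSIGNED NOT NULL,
      PRIMARY KEY (author_id, book_id),
      FOREIGN KEY (author_id) REFERENCES author (author_id),
      FOREIGN KEY (book_id)   REFERENCES book (book_id)
    ) ENGINE=InnoDB;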

Optimize InnoDB table

When I run OPTIMIZE TABLE on an InnoDB table, I get this message instead. Does it mean that the table has already been optimized, but in a different manner? "table | optimize | note | Table does not support optimize, doing recreate + analyze instead |" ...
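The note means MySQL rebuilt the table and refreshed its statistics, which is the InnoDB equivalent of OPTIMIZE. The same rebuild can be issued explicitly:

    ALTER TABLE mytable ENGINE=InnoDB;   -- rebuilds the table and its indexes
    ANALYZE TABLE mytable;               -- refreshes index statistics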

Deleting huge chunks of data from mysql innodb

I need to delete a huge chunk of data from my production database, which runs about 100GB in size. If possible, I would like to minimize my downtime. My selection criteria for deleting is likely to be DELETE FROM POSTING WHERE USER.ID=5 AND UPDATED_AT<100 What is the best way to delete it? Build an index? Write a sequentia...
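A common pattern for this on a live InnoDB table is an index that matches the filter plus deletes in bounded batches, so each transaction stays small and its locks are released quickly. A sketch; the column names follow the question's criteria and are assumptions:

    ALTER TABLE posting ADD INDEX idx_user_updated (user_id, updated_at);

    -- Repeat until it affects 0 rows
    DELETE FROM posting
    WHERE user_id = 5 AND updated_at < 100
    LIMIT 10000;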

Dummies guide to locking in innodb

The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies guide to InnoDB locking". I will start, and I will gather all responses into a wiki: The column needs to be indexed before row-level locking applies. EXAMPLE: delete row where column1=10; will lock up the table unle...
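To illustrate the first rule above: with an index on column1, the DELETE locks only the matching index entries (plus neighbouring gaps); without it, InnoDB scans the table and locks every row it examines, which behaves much like a table lock:

    ALTER TABLE t ADD INDEX idx_column1 (column1);

    START TRANSACTION;
    DELETE FROM t WHERE column1 = 10;   -- row-level locks via the index
    COMMIT;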

Column locking in innodb?

I know this sounds weird, but apparently one of my columns is locked. select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1; select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10; The first one would not finish running even after a few hours, while the second one completes smoothly. The only ...
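A first diagnostic step is usually to compare the two plans and see what the stuck statement is waiting on, e.g.:

    EXPLAIN SELECT * FROM `table` WHERE type_id = 1 AND updated_at < '2010-03-14' LIMIT 1;
    EXPLAIN SELECT * FROM `table` WHERE type_id = 3 AND updated_at < '2010-03-14' LIMIT 10;
    SHOW PROCESSLIST;                 -- state of the hanging query
    SHOW ENGINE INNODB STATUS\G       -- current transactions and lock waits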

Index part of the mysql/innodb table?

I am sorry if this is a dumb question (because it sounds unlikely). I have a table that is 20 million rows. However, only about 300K of these rows get accessed regularly, and they can be identified by a column condition, "app_user=1". Is there any way I can index just those rows, and when I call a select, I will be sure to pass in t...
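MySQL has no partial indexes, so the usual substitute is a composite index that leads with the flag column; the 300K hot rows then cluster together in the index, and every query that includes app_user = 1 touches only that slice. A sketch (mytable, id and some_col are assumed names):

    ALTER TABLE mytable ADD INDEX idx_appuser_lookup (app_user, some_col);

    SELECT id FROM mytable WHERE app_user = 1 AND some_col = 42;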

How much does an InnoDB table benefit from having fixed-length rows?

I know that dependent on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's g...

InnoDB lock waits bringing the whole database down

I have searched many threads and Stack Overflow but couldn't find any solution. I am inserting records into a few InnoDB tables at random, and whenever one particular condition matches, it brings the whole database down. I am getting this error: "Lock wait timeout exceeded; try restarting transaction". One of the questions (#1103248) has been answered here ...
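The usual first steps for this error are to find out which transaction holds the lock, and only as a stopgap raise the timeout while the long-running transaction itself is fixed:

    SHOW ENGINE INNODB STATUS\G       -- the TRANSACTIONS section shows who waits on whom
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

    -- Stopgap, set in my.cnf (dynamic only on newer versions); shorter transactions are the real fix
    -- [mysqld]
    -- innodb_lock_wait_timeout = 120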

MySQL + InnoDB table size question

Hello, I have a simple test table. I'm trying to figure out storage requirements for different storage engines. I have this table: CREATE TABLE `mytest` ( `num1` int(10) unsigned NOT NULL, KEY `key1` (`num1`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; When I insert some values and then run SHOW TABLE STATUS, I get the following...
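The per-table numbers come from SHOW TABLE STATUS or information_schema; note that for InnoDB the length columns and the row count are estimates:

    SHOW TABLE STATUS LIKE 'mytest'\G

    SELECT data_length, index_length, data_free
    FROM information_schema.TABLES
    WHERE table_schema = DATABASE() AND table_name = 'mytest';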

MySQL foreign key constraints, cascade delete

Hello. I want to use foreign keys to keep integrity and avoid orphans (I already use InnoDB). How do I write an SQL statement that cascades deletes (ON DELETE CASCADE)? If I delete a category, how do I make sure that it does not also delete products that are related to other categories? The pivot table "categories_products" creates a many-to-many...
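The clause is spelled ON DELETE CASCADE and belongs on the pivot table's foreign keys: deleting a category then removes only its rows in categories_products, never the products themselves. A sketch assuming parent tables categories(id) and products(id):

    CREATE TABLE categories_products (
      category_id INT UNSIGNED NOT NULL,
      product_id  INT UNSIGNED NOT NULL,
      PRIMARY KEY (category_id, product_id),
      FOREIGN KEY (category_id) REFERENCES categories (id) ON DELETE CASCADE,
      FOREIGN KEY (product_id)  REFERENCES products (id)   ON DELETE CASCADE
    ) ENGINE=InnoDB;

    DELETE FROM categories WHERE id = 7;   -- drops its pivot rows, leaves the products intact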

[MySQL, InnoDB] Rating place

I'm trying to generate a rating-place table using the following recipe http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php but my database is heavily loaded. I tried not creating a regular table, but using a MEMORY TABLE instead and updating it with the following SQL query insert into tops (uid) select uid from users order by exp desc; ...
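The linked recipe boils down to numbering users in exp order; before window functions this is usually done with a user variable. A sketch assuming tops also has a place column:

    SET @place := 0;
    TRUNCATE TABLE tops;
    INSERT INTO tops (place, uid)
    SELECT @place := @place + 1, uid
    FROM users
    ORDER BY exp DESC;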