Firstly, your table definition could make a big difference here. If you don't need NULL values in your columns, define them NOT NULL. This will save space in the index, and presumably time while creating it.
CREATE TABLE x (
i INTEGER UNSIGNED NOT NULL,
j INTEGER UNSIGNED NOT NULL,
nu DOUBLE NOT NULL,
A DOUBLE NOT NULL
);
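If you want to check the effect of NOT NULL on index size for yourself, you can load the same data into a nullable and a NOT NULL variant of the table and compare the on-disk index size in information_schema. The schema and table names below are placeholders for illustration:

```sql
-- Compare on-disk index size of a NULLable vs. NOT NULL variant
-- of the same table (assumes both hold identical data and indexes).
SELECT TABLE_NAME, INDEX_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test'
  AND TABLE_NAME IN ('x_nullable', 'x_notnull');
```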
As for the time taken to create the indexes: this requires a full table scan and will show up as REPAIR BY SORTING. It should be quicker in your case (i.e. a massive data set) to create a new table with the required indexes already defined and then insert the data into it, as this avoids the REPAIR BY SORTING operation entirely because the indexes are built incrementally as each row is inserted. There is a similar concept explained in this article.
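If you want to confirm that a long-running index build is going through this path, you can watch the thread state from another session while the ALTER TABLE runs (the "Repair by sorting" state applies to MyISAM tables):

```sql
-- Run in a second session while the index build is in progress.
-- The State column for the ALTER TABLE thread will show
-- "Repair by sorting" while the index is rebuilt via a filesort.
SHOW PROCESSLIST;
```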
CREATE DATABASE trans_clone;
CREATE TABLE trans_clone.trans LIKE originalDB.trans;
ALTER TABLE trans_clone.trans ADD KEY idx_A (A);
Then script the insert into chunks (as per the article), or dump the data using mysqldump:
mysqldump originalDB trans --extended-insert --skip-add-drop-table --no-create-db --no-create-info > originalDB.trans.sql
mysql trans_clone < originalDB.trans.sql
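If you would rather script the chunked insert directly in SQL instead of going through mysqldump, it can be as simple as a series of INSERT ... SELECT statements over ranges of a column. This is only a sketch: the column i and the chunk size of one million rows are assumptions to adjust for your data.

```sql
-- Sketch: copy rows in ranges of i (chunk size is an assumption).
-- Each chunk keeps the transaction small, and the indexes on
-- trans_clone.trans grow incrementally with every insert.
INSERT INTO trans_clone.trans
SELECT * FROM originalDB.trans
WHERE i >= 0 AND i < 1000000;

INSERT INTO trans_clone.trans
SELECT * FROM originalDB.trans
WHERE i >= 1000000 AND i < 2000000;

-- ...repeat until the full range of i is covered.
```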
This will insert the data, but will not require an index rebuild (the index is built as each row is inserted) and should complete much faster.