There are too many factors to consider here, such as machine speed, available memory, and so on, all of which affect creation time. The nature of the data itself can also have a significant effect.
What I would do is pick one of the larger tables, create an index on it, and see how long it takes. Then divide that time by the number of rows in the table to get a rough per-row rate. Note that this will not be precise; it's just a rule of thumb. The rate will vary a lot between tables depending on column count, how sparse the column values are, and so on, but it gives you a starting point.
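As a rough sketch of that measurement, here is what it might look like in Python against SQLite. The database file, table, and column names (`mydata.db`, `table_x`, `some_column`) are all hypothetical placeholders; your engine and client library may differ, but the idea of timing the build and dividing by the row count is the same:

```python
import sqlite3
import time

# Hypothetical database -- substitute your own connection.
conn = sqlite3.connect("mydata.db")

# Time the index build on one of the larger tables.
start = time.perf_counter()
conn.execute("CREATE INDEX idx_x_some_column ON table_x (some_column)")
conn.commit()
elapsed = time.perf_counter() - start

# Divide by the row count to get a rough per-row rate.
(row_count,) = conn.execute("SELECT COUNT(*) FROM table_x").fetchone()
print(f"Index build took {elapsed:.1f} s over {row_count} rows "
      f"= {elapsed / row_count:.6f} s per row")
```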
Ex. It takes 3600 seconds to create an index on table X, which has 3 million rows.
The rate is then 3600 / 3,000,000 = 0.0012 seconds per row.
So if table Y has 8 million rows, you could expect roughly
0.0012 * 8,000,000 = 9600 seconds (160 minutes) to create the index.
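If it helps, the same extrapolation as a few lines of Python, using the example numbers above (the row counts and timing are just the illustrative figures from this answer, not real measurements):

```python
# Per-row rate measured on table X.
measured_seconds = 3600        # time to index table X
measured_rows = 3_000_000      # rows in table X
rate = measured_seconds / measured_rows   # 0.0012 s per row

# Extrapolate to table Y.
target_rows = 8_000_000
estimate = rate * target_rows             # 9600 s
print(f"~{estimate:.0f} seconds (~{estimate / 60:.0f} minutes)")
```

Just remember this scales linearly, so it will drift for tables whose column widths or value distributions differ a lot from the one you measured.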