A: 

Try setting a longer connection timeout.
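In Connector/NET, that timeout is set through the connection string; a minimal sketch (the server, database, and credentials are placeholders):

    // Requires: using MySql.Data.MySqlClient;
    // "Connection Timeout" controls how long Connector/NET waits while
    // *establishing* the connection; the value is in seconds (default 15).
    string connectionString =
        "Server=localhost;Database=mydb;Uid=user;Pwd=pass;Connection Timeout=120;";

    using (MySqlConnection conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        // ... run your commands here ...
    }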

I don't think this is a timeout issue -- especially since there are no exceptions being thrown. My understanding is that `MySqlConnection.ConnectionTimeout` is the amount of time `MySqlConnection` will wait when *establishing* a connection, not how long it will keep the connection open. `MySqlCommand.CommandTimeout` is the amount of time a query is allowed to run on the server. The default is unlimited.
josh3736
A: 

Maybe this is a timeout on the server side?

Leotsarev
I'd think that a timeout on the server would raise an exception. At any rate, I set ridiculously high timeouts in my server's config and didn't see a change.
josh3736
A: 

Try setting the timeout on the `MySqlCommand`, not the `MySqlConnection`, on both `cmd` and `updater`. What happens if you run `SELECT ID FROM People LIMIT 100`? (MySQL uses `LIMIT` rather than SQL Server's `TOP`.)
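A sketch of that suggestion, assuming `cmd` and `updater` are the two `MySqlCommand` objects from the question (`CommandTimeout` is in seconds):

    // Set the timeout on the commands, not the connection.
    cmd.CommandTimeout = 300;
    updater.CommandTimeout = 300;

    // The test query, in MySQL syntax (LIMIT instead of TOP):
    cmd.CommandText = "SELECT ID FROM People LIMIT 100";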

So while the [docs claim](http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqlcommand.html#connector-net-examples-mysqlcommand-commandtimeout) that the default command timeout is unlimited, after browsing the [source](http://bazaar.launchpad.net/~mysql-clr-team/connectornet/6.2/annotate/head%3A/MySql.Data/Provider/Source/command.cs#L138), it looks like the default is actually 30 seconds. Either way, a timeout doesn't explain why the `MySqlDataReader` is stopping after 4 minutes. Additionally, command timeout affects query *execution*, not data streaming.
josh3736
A: 

Try this: first, add a reference to `System.Transactions`, then wrap your logic in a `TransactionScope`:

    using (TransactionScope scope = new TransactionScope())
    {
        // Initialize the connection inside the scope so it enlists in the
        // ambient transaction. (Requires: using System.Transactions;)
        using (MySqlConnection conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            // ... execute your insert/update commands here ...
        }

        // Without this call, everything rolls back when the scope is disposed.
        scope.Complete();
    }

Write your entire insert/update logic inside the scope's `using` block. This will definitely help you.

Amit Ranjan
I don't want these inserts to be committed as part of a transaction. This code is part of a job which inserts work items into a queue for later processing. That processing can happen immediately and asynchronously, so each row needs to be available for reading as soon as possible. If all inserts were wrapped in a transaction, none of the queued items would be available until the entire job completes, thus wasting time. Worse, if I change the `IsolationLevel` to allow dirty reads, a rollback would wreak havoc - queued items which had already been acted upon would disappear!
josh3736
Try adding `Application.DoEvents();` or `Thread.Sleep(100);` after `i++`.
Amit Ranjan
@Amit: I appreciate your suggestions, but I don't think they're relevant. This code is not part of a Winforms app; `System.Windows.Forms.Application.DoEvents()` doesn't exist in a Windows Service. Furthermore, I don't see how pumping Windows messages relates to a database connection (this job runs on its own thread anyway). I also fail to see how `Thread.Sleep()` does anything but slow down my job. 100 ms * 25,000 rows = ~42 **minutes** sleeping -- no good.
josh3736
A: 

Add the following after creating your command:

cmd.CommandTimeout = 0;

This will set the `CommandTimeout` to indefinite. The reason you're getting a timeout is probably that the connection, though the command has executed, is still in the 'command' phase because of the reader.

Either try setting `CommandTimeout = 0`, or read everything first and then do subsequent operations on the results. Otherwise, the only other issue I could possibly see is that the SQL server is dropping the result set for the specified process ID due to a timeout on the server itself.

Jonathan
`CommandTimeout` has no impact on data streaming to a `DataReader`. It really only applies to the amount of time the SQL statement can execute on the server. Once the statement completes (and `MySqlCommand.ExecuteReader()` returns), the timeout value becomes irrelevant; you may take as long as you need to iterate through returned rows using `MySqlDataReader.Read()`. See [this answer](http://stackoverflow.com/questions/1484027/ado-net-datareader-timeout-issue/1484138#1484138) for more. Also, I know this is **not** a timeout since no exceptions are being thrown.
josh3736
A: 

Hi,

I've found an article here: http://corengen.wordpress.com/2010/06/09/mysql-connectornet-hangs-on-mysqldatareader-read/

What this guy experienced was something similar: a hang on the `Read()` method at exactly the same moment, while reading the same record (which is the same thing you're experiencing, I guess). In his case, he called another web service inside the `Read()` loop, and when that call timed out it caused `Read()` to hang without an exception.

Could the same thing be happening on your machine: an update inside the `Read()` loop times out (I think that update uses the default 30-second timeout) and causes the same effect?
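If it is the inner update timing out, wrapping it in a try/catch should surface the error instead of letting the outer loop die silently; a sketch, assuming `reader` and `updater` are the objects from the question:

    while (reader.Read())
    {
        try
        {
            updater.ExecuteNonQuery();
        }
        catch (MySqlException ex)
        {
            // If the inner update times out, make it visible rather than
            // letting Read() quietly stop returning rows.
            Console.WriteLine("Inner update failed: " + ex.Message);
            throw;
        }
    }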

Maybe a long shot, but reading the two stories, they sounded a lot alike.

Michel
Interesting article. I don't think I'm experiencing a hang, though. The loop runs just fine at a constant speed up to the 14896th record and then `Read()` simply returns false -- there is no hang or delay.
josh3736
+1  A: 

Perhaps you have a corrupted table - this guy's problem sounds very similar to yours: http://forums.asp.net/t/1507319.aspx?PageIndex=2 - repair the table and see what happens.

If that doesn't work, read on:

My guess is that you are hitting some type of deadlock, especially considering you are reading and writing. This would explain why it works with the simple loop but doesn't work when you do updates. It would also explain why it happens around the same row/time each time.

There was a weird bug in SqlDataReader that squelched exceptions (http://support.microsoft.com/kb/316667). There might be something similar in MySqlDataReader -- after your final `.Read()` call, try calling `.NextResult()`. Even if it's not a deadlock, it might help you diagnose the problem. In these types of situations, you want to lean more towards "trust but verify" -- yes, the documentation says that an exception will be thrown on timeout, but sometimes (very rarely) documentation lies :) This is especially true for third-party vendors -- e.g., see http://bugs.mysql.com/bug.php?id=53439 -- the MySQL .NET library has had a couple of problems like the one you are having in the past.
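A sketch of that check, assuming `reader` is the `MySqlDataReader` from the question:

    while (reader.Read())
    {
        // ... existing per-row processing ...
    }

    // Poke the reader after the loop; either call may throw (or return an
    // unexpected value) and reveal a swallowed error.
    bool moreRows = reader.Read();          // expected: false
    bool moreResults = reader.NextResult(); // expected: false (single query)
    Console.WriteLine("moreRows={0}, moreResults={1}", moreRows, moreResults);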

Another idea would be to watch what's happening in your database -- make sure data is constantly being fetched up until the row that your code exits on.

Failing that, I would just read all the data in, cache it, and then do your modifications. By batching the modifications, the code would be less chatty and execute faster.
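A sketch of the read-then-modify approach, assuming the `cmd` and `updater` commands from the question and an integer `ID` column (the `@id` parameter name is an assumption):

    // Requires: using System.Collections.Generic;
    // Drain the reader completely before issuing any updates, so reads and
    // writes never interleave on the connection.
    List<int> ids = new List<int>();
    using (MySqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            ids.Add(reader.GetInt32(0));
    }

    foreach (int id in ids)
    {
        updater.Parameters["@id"].Value = id; // "@id" is an assumed parameter name
        updater.ExecuteNonQuery();
    }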

Alternatively, reset the reader every 1000 or so rows (and keep track of what row ID you were up to)

Hope something here helps you with your frustration! :)

Shaun McCarthy
The table repair thing was promising, but trying it on my InnoDB table resulted in "The storage engine for the table doesn't support repair." So no dice. Since I'm reading from one table (`People`) and writing to another (`Queue`), a deadlock situation seems unlikely. I did try adding a call to both `Read()` and `NextResult()` after my loop; both returned false. (I expect false from `NextResult()` - there's only one query in the command and therefore one resultset.) At this point, I'm pretty convinced that an exception is getting swallowed somewhere in `Read()`.
josh3736
@Shaun: it turns out there is an exception getting swallowed. See the update to my question.
josh3736
Cool :) Now you are getting somewhere. Perhaps it has to do with connection pooling -- are you closing your connection each time you do an update, or creating new ones each time? Try switching to `pooling=false` to verify whether it's some type of pooling issue. http://bugs.mysql.com/bug.php?id=6634 seems like the appropriate thread. Otherwise, perhaps you have some rows returning a lot of data? Are any of the columns binary data? Is there a firewall between the app/web server and the database server?
Shaun McCarthy
As for the repair, try this: http://dev.mysql.com/doc/refman/5.0/en/rebuilding-tables.html -- make sure to back up first!! :) Namely: "For storage engines such as InnoDB that REPAIR TABLE does not support, use mysqldump to create a dump file and mysql to reload the file, as described earlier."
Shaun McCarthy
+1  A: 
josh3736