views: 404
answers: 2
I understand from the MSDN docs that the event DataReceived will not necessarily fire once per byte.

But does anyone know what exactly is the mechanism that causes the event to fire?

Does the receipt of each byte restart a timer that has to reach, say, 10 ms between bytes before the event fires?

I ask because I'm trying to write an app that reads XML data coming in from a serial port.

Because my laptop has no serial ports, I use a virtual serial port emulator. (I know, I know--I can't do anything about it ATM).

When I pass data through the emulated port to my app, the event fires once for each XML record (about 1500 bytes). Perfect. But when a colleague at another office tries it with two computers connected by an actual cable, the DataReceived event fires repeatedly, after every 10 or so bytes of XML, which totally throws off the app.
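In case it helps anyone reproduce this, below is a minimal diagnostic sketch that just logs how much data each DataReceived event delivers; the port name and settings are placeholders. It is only meant to show how differently the virtual port and the real cable chunk the incoming data.

    using System;
    using System.IO.Ports;

    class ChunkLogger
    {
        static void Main()
        {
            // Placeholder port name and settings.
            using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
            {
                port.DataReceived += (sender, e) =>
                {
                    // BytesToRead is only a snapshot; more bytes can arrive before ReadExisting runs.
                    int reported = port.BytesToRead;
                    string chunk = port.ReadExisting();
                    Console.WriteLine("DataReceived: {0} bytes reported, {1} chars read",
                                      reported, chunk.Length);
                };

                port.Open();
                Console.WriteLine("Listening; press Enter to quit.");
                Console.ReadLine();
            }
        }
    }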

+4  A: 

DataReceived can fire any time one or more bytes are ready to read. Exactly when it fires depends on the OS and drivers, and there will also be a small delay between the data being received and the event being fired in .NET.

You shouldn't rely on the timing of DataReceived events for control flow.

Instead, parse the underlying protocol, and if you haven't received a complete message, wait for more. If you receive more than one message, make sure to keep the leftovers from parsing the first message, because they will be the start of the next one.
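To make that concrete, here is a minimal sketch of such a handler. It is an assumption-laden illustration rather than anyone's actual code: the closing tag "</record>", the port settings, and the class and event names are all placeholders for whatever the real protocol uses.

    using System;
    using System.IO.Ports;
    using System.Text;

    class XmlRecordReader
    {
        private readonly SerialPort _port;
        private readonly StringBuilder _buffer = new StringBuilder();
        private const string EndTag = "</record>"; // placeholder: use the real record's closing tag

        public event Action<string> RecordReceived;

        public XmlRecordReader(string portName)
        {
            _port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One);
            _port.DataReceived += OnDataReceived;
            _port.Open();
        }

        private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // DataReceived runs on a thread-pool thread; this handler just appends
            // whatever happens to be in the driver's buffer right now, which may be
            // a fragment of a record or several records at once.
            _buffer.Append(_port.ReadExisting());

            // Pull out every complete record; anything after the last end tag stays
            // in the buffer as the start of the next record.
            int endIndex;
            while ((endIndex = _buffer.ToString().IndexOf(EndTag, StringComparison.Ordinal)) >= 0)
            {
                int length = endIndex + EndTag.Length;
                string record = _buffer.ToString(0, length);
                _buffer.Remove(0, length);

                var handler = RecordReceived;
                if (handler != null)
                    handler(record.Trim());
            }
        }
    }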

Mark Byers
Yes, the serial port data is usually buffered for efficiency, so there is little predictability to when updates arrive. I have done something similar to what Mark described: set up a timer that repeatedly calls ReadExisting, process all complete messages in the data, and save any half-finished ones. When I did this with GPS devices the messages contained checksums, so you could verify their integrity and completeness and make sure a message hadn't been cut off (in which case you grab the rest of it on the next ReadExisting). I used this in a wrapper that in turn fired an event for each complete message.
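For illustration, a minimal sketch of that timer-plus-ReadExisting approach, assuming NMEA-style "$...*hh" sentences with an XOR checksum; the port name, baud rate, polling interval, and class name are placeholders, not anything from the actual wrapper described above.

    using System;
    using System.Globalization;
    using System.IO.Ports;
    using System.Text;
    using System.Timers;

    class PolledSentenceReader
    {
        private readonly SerialPort _port = new SerialPort("COM1", 4800);   // placeholder settings
        private readonly Timer _timer = new Timer(100);                     // poll every 100 ms
        private readonly StringBuilder _pending = new StringBuilder();

        public event Action<string> SentenceReceived;

        public PolledSentenceReader()
        {
            _port.Open();
            _timer.Elapsed += (sender, e) => Poll();
            _timer.Start();
        }

        private void Poll()
        {
            // Grab whatever has arrived since the last poll and split off complete lines.
            _pending.Append(_port.ReadExisting());
            string text = _pending.ToString();

            int newline;
            while ((newline = text.IndexOf('\n')) >= 0)
            {
                string line = text.Substring(0, newline).TrimEnd('\r');
                text = text.Substring(newline + 1);

                if (ChecksumValid(line))
                {
                    var handler = SentenceReceived;
                    if (handler != null)
                        handler(line);
                }
                // A line that fails the checksum is dropped; a half-finished sentence
                // has no newline yet, so it simply stays buffered until the next poll.
            }

            _pending.Clear();
            _pending.Append(text);
        }

        private static bool ChecksumValid(string sentence)
        {
            // NMEA-style sentences look like "$GPGGA,...*6F": XOR the characters
            // between '$' and '*' and compare with the two hex digits at the end.
            int star = sentence.LastIndexOf('*');
            if (!sentence.StartsWith("$") || star < 1 || star + 3 > sentence.Length)
                return false;

            byte computed = 0;
            for (int i = 1; i < star; i++)
                computed ^= (byte)sentence[i];

            byte expected;
            return byte.TryParse(sentence.Substring(star + 1, 2), NumberStyles.HexNumber,
                                 CultureInfo.InvariantCulture, out expected)
                   && computed == expected;
        }
    }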
AaronLS
+1  A: 

As Mark Byers pointed out, this depends on the OS and drivers. At the lowest level, a standard RS232 chip (for the life of me, I can't remember the designation of the one that everyone copied to make the 'standard') fires an interrupt when it has data in its inbound register. The 'bottom end' of the driver has to go get that data (which could be any amount up to the buffer size of the chip), store it in the driver's buffer, and signal to the OS that it has data.

It's at this point that the .NET framework can start finding out that the data is available. Depending on when the OS signals the application that opened the serial port (which is an OS-level operation, and provides the 'real' link from the .NET framework to the OS/driver-level implementation), there could literally be any amount of data greater than one byte in the buffer, because the driver's bottom end could have loaded more data in the meantime.

My bet is that on your system the driver is providing a huge buffer and only signalling after a significant pause in the data stream, while your colleague's system signals far more frequently. Again, Mark Byers' advice to parse the protocol is spot on. I've implemented a similar system over TCP sockets, and the only way to handle the situation is to buffer the data until you've got a complete protocol message, then hand the full message over to the application.

Harper Shelby
Yeah, my original architecture was based on polling the serial port, buffering the data, looking for begin and end tags to pull out complete XML records, and buffering the leftovers. I was so excited when I got the DataReceived event working, thinking that might simplify the whole buffering scenario. But alas, I now have to re-implement the buffer and look for tags again.
Klay
@Klay - implementing the buffer isn't so bad. I mean, at least you can leave out the polling, which *is* bad!
Harper Shelby