I worked on this problem for several years, developing a LAN fax product. I doubt you can do it well.
Developing a virtual COM driver means developing a kernel driver (unless you can buy one off the shelf). That's doable (I did it), but I'd be surprised if it's worth your while.
Another problem is that there are a variety of fax modems and fax modem standards (and you say you're hoping to emulate one well enough to fool FaxMan).
Another (fundamental) problem is that the simpler (non-error-correcting) fax protocols are hard real-time: there is some (more or less) buffering on the fax modem, but the PC attached to the fax modem cannot afford to underrun when sending or to overrun when receiving ... which means that redirecting this traffic via telnet (with its TCP timers and buffers) either breaks the fax session at worst (FaxMan will time out) or at best means that your testing isn't representative of what the real-world (non-emulated) performance will be.
What are you trying to stress-test anyway: your application, or the third-party FaxMan?
I suggest that the cheapest solution and the most realistic test would be using real hardware: real COM ports, real fax modems, and real (or, possibly, simulated) telephone lines.
Edit to answer the questions from the comments in Michael's answer
Assuming that the transport of the data is a small problem (e.g. because you can simply connect two serial ports back to back), is writing software which emulates a fax modem a small problem?
It might be small: if your load test is merely "send fax data to the bit bucket", then your emulated modem mostly just needs to respond "OK" to anything that looks like an AT command, plus a few other canned responses to the fax-specific AT+F<whatever> commands. But that's a pretty low-fidelity, not very stringent, test.
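As a rough sketch of that fidelity level (assuming the transport really is just a serial port the emulation can open, e.g. one end of a back-to-back or virtual COM pair; the port name, speed, and use of pyserial here are my illustration, nothing FaxMan-specific):

```python
# Bare-minimum "fax modem": say OK to anything that looks like an AT command.
# Assumes FaxMan's COM port is wired (null-modem or virtual pair) to `port`.
import serial  # pyserial

def run_dumb_modem(port="COM3", baud=19200):   # port/baud are assumptions
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line.upper().startswith("AT"):
                # Includes the fax-specific AT+F... commands: at this
                # fidelity we don't parse them, we just agree with everything.
                link.write(b"\r\nOK\r\n")
```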
That would be pretty simple - but isn't there some protocol involved in the FAX data transmission? Or is the protocol just a variant of the AT command set, and spoofing an "OK" is all there is to it? I honestly don't know, but I assumed there would be a somewhat more complex protocol.
The telephony protocols have names like "T.4" and "T.30". The PC-to-faxmodem protocol is usually a protocol called "class 1 fax" or "class 2 fax". The latter ("class 2" or "class 2.0") is the higher-level of the two: more ASCII and less binary data, and not so timing-sensitive (class 1 is sensitive to tens of milliseconds, iirc), because it encapsulates/wraps more of the underlying T.30 negotiation than class 1 does; it consists of extended AT commands (i.e. AT+F<something> commands and their responses) plus a dump of the binary-encoded fax image data.
Some of the responses are more than just "OK" (i.e. they report the available/negotiated fax session parameters), but (in class 2, rather than class 1) they're ASCII-encoded rather than binary, so not really difficult at all.
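To give a flavour of those ASCII responses, here is a hedged sketch: the command names are standard Class 2 ones, but the parameter values are invented for illustration, not what any particular modem (or FaxMan) would expect:

```python
# Illustrative Class 2 query -> ASCII response pairs; values are invented.
CLASS2_RESPONSES = {
    "AT+FCLASS=?": "0,1,2",                    # service classes on offer
    "AT+FCLASS?":  "2",                        # class currently selected
    "AT+FDIS?":    "+FDIS: 0,3,0,2,0,0,0,0",   # current session parameters
    "AT+FDCC=?":   "+FDCC: (0,1),(0-3),(0-2),(0-2),(0,1),(0),(0),(0-7)",
}

def class2_reply(command: str) -> str:
    body = CLASS2_RESPONSES.get(command.strip().upper(), "")
    # Information responses are followed by a final OK result code.
    return (body + "\r\nOK\r\n") if body else "OK\r\n"
```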
There has to be some sort of handshaking, right? Otherwise a plain, old FAX machine would likely lose a bunch of data when it was loading a new page.
Yes there's some handshaking ("May I send now?") between pages (i.e. before each page). A load-testing emulation which isn't testing the timing would just respond "yeah, go ahead (I'm only going to be dumping the data into the bit bucket anyway without even looking at it, so what do I care)" to the handshake enquiry.
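In Class 2 terms, that permissive handshake might look something like the sketch below (the AT+FDT / AT+FET / +FPTS exchange is the standard Class 2 page flow; collapsing it into canned answers, and ignoring the +FDCS reports and timing a real modem would interleave, is my simplification):

```python
# "Yes, go ahead" page handshake for a bit-bucket emulation.
def page_handshake(command: str) -> list[str]:
    cmd = command.strip().upper()
    if cmd == "AT+FDT":            # DTE asks permission to send a page
        return ["CONNECT"]         # go ahead, dump the page data now
    if cmd.startswith("AT+FET="):  # post-page: 0 = more pages, 2 = last page
        return ["+FPTS:1", "OK"]   # confirm the page we never looked at
    return ["OK"]
```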
The emulation would also have to watch the binary image data (which it's getting from the PC) for <DLE><ETX> and <DLE><DLE>, in order to respond "OK" at the end of the PC-dumps-image-data-to-the-modem phase.
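A sketch of that scan, assuming the page data arrives in arbitrary chunks from the serial link (<DLE><DLE> being an escaped literal DLE inside the image data, <DLE><ETX> being the end-of-page marker):

```python
DLE, ETX = 0x10, 0x03

def page_data_finished(chunks) -> bool:
    """Consume byte chunks; return True once <DLE><ETX> has been seen."""
    pending_dle = False
    for chunk in chunks:
        for byte in chunk:
            if pending_dle:
                pending_dle = False
                if byte == ETX:
                    return True    # end of page data: time to answer OK
                # byte == DLE is an escaped data byte; either way we keep
                # going, since the data itself is headed for the bit bucket
            elif byte == DLE:
                pending_dle = True
    return False
```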
I don't know what timers might be built into the FaxMan application (i.e. whether you might need to add artificial delays to your emulated responses, to prevent FaxMan from realising that the responses are abnormally quick): maybe not, but maybe.