views: 73

answers: 3
Is it possible to use the Unix netcat (nc) program to create a TCP proxy server and monitor? I would like all TCP traffic to be passed across the pipe, as well as sent to stdout for monitoring. Note this will be used for monitoring HTTP traffic between a hardware device and an HTTP server.

A: 

Sure, you can use netcat, or a faucet|hose pair, but why do that when you can have a minimal Apache instance do the exact same thing and give you a much richer set of analysis tools?

Xepoch
+1  A: 

Not netcat on its own, since it would have to interpret the HTTP request and pass it on. For example, an HTTP request through a proxy starts with:

GET http://www.example.org/ HTTP/1.1

which your proxy then has to go, 'okay, I gotta connect to example.org and GET /'.

Now this could maybe be done by piping the nc output into a script which parses the HTTP req and then calls 'wget' to get the page, then slurp that back through netcat... oh heck, why?

Apache or Squid can probably do the job.

Spacedman
Thanks, HTTP proxy was misleading, because that's more than I need. I just want to statically establish a pipe that sends all TCP traffic from one port to another host and port, and provides monitoring.
landon9720
okay, maybe nc -l -p 9999 | logging_processor | nc localhost 9998 - just make sure your logging_processor program passes everything through.
Spacedman
A: 

Yeah, should be possible.

When I asked about writing a web server in bash on a newsgroup, I ended up with two decent ideas. One was to use xinetd as the actual server and have it run a shell script for each connection; in your case, the script could then use tee and nc to forward and log the connection (with some file descriptor trickery to get a tee on each stream, I think). The other was to use socat, which effectively lets you write a fully operational server, with listening sockets and handler subprocesses, in bash; again, you would want tee and netcat to do the logging and proxying.
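For the plain forwarding-plus-monitoring case, the socat route can even be a one-liner: its -v option dumps both directions of the traffic to stderr in readable form. A sketch, where localhost:8080 is a placeholder for the real server's host and port and traffic.log is an arbitrary file name:

```shell
# Listen on 9999, forward each connection (fork = one child per client)
# to the real server, and dump both directions of traffic to traffic.log.
# localhost:8080 is a placeholder for the actual endpoint.
socat -v TCP-LISTEN:9999,fork,reuseaddr TCP:localhost:8080 2>traffic.log &
```

Drop the 2>traffic.log redirection to watch the traffic live on your terminal.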

If you want a proper proxy server, then as @Spacedman says, you'd need to interpret the request line, but that's easy enough: read the first line, apply cut -d ' ' -f 2 to get the URL, use some sed or shell string operators to pull out the domain and port, and continue. If you know upfront that all traffic is going to one endpoint, though, you can hardwire it.
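That request-line surgery really is only a few lines of shell. A sketch using cut plus POSIX parameter expansion; the sample request line is just an illustration:

```shell
# Pull host and port out of the first line of a proxied HTTP request.
# The sample request line below is an arbitrary example.
reqline='GET http://www.example.org:8080/index.html HTTP/1.1'
url=$(printf '%s\n' "$reqline" | cut -d ' ' -f 2)
hostport=${url#*://}       # strip the scheme
hostport=${hostport%%/*}   # drop the path
host=${hostport%%:*}
port=${hostport#*:}
[ "$port" = "$hostport" ] && port=80   # no explicit port: default to 80
echo "$host $port"   # prints: www.example.org 8080
```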

Tom Anderson