Well, I have a network in which each proxy (let's assume we have 200 proxies) sends UDP packets at a constant interval (let's assume every 10 seconds) to a constant number of hosts (let's assume 10). My question is: how will 6 sends per minute × 200 proxies × 10 target hosts = 12,000 UDP round trips per minute affect my network, in terms of available connections, speed, stability, UDP packet loss rate, etc.? Can anyone please refer me to some links on this issue? Thanks.
You don't mention what kind of network, so I'll assume you're talking about a LAN.
Let's assume the worst: all proxies send at the same time, and your LAN's got old equipment and only supports 10 megabits a second.
You'll then have 200 × 10 = 2,000 UDP packets going out in one burst. If each carries ~100 bytes of payload, that's about 195 KiB (~1.6 megabits) hitting your network at once. Even a 10 Mbit/s link drains that burst in a fraction of a second, and averaged over the 10-second interval it's only a percent or two of the cable's capacity. On a LAN you should see almost no packet loss.
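The arithmetic above can be sketched as a quick back-of-envelope calculation. The numbers come from the question; the per-packet wire overhead (UDP 8 + IPv4 20 + Ethernet 18 = 46 bytes) is my assumption and isn't in the original answer:

```python
# Back-of-envelope estimate of the burst load described above.
# PROXIES/TARGETS/PAYLOAD/link speed are from the thread;
# the 46-byte UDP + IPv4 + Ethernet header overhead is assumed.
PROXIES = 200
TARGETS = 10
PAYLOAD_BYTES = 100
OVERHEAD_BYTES = 46               # UDP (8) + IPv4 (20) + Ethernet (18)
LINK_BITS_PER_SEC = 10_000_000    # old 10 Mbit/s LAN
INTERVAL_SEC = 10                 # one burst every 10 seconds

packets_per_burst = PROXIES * TARGETS
bits_per_burst = packets_per_burst * (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8
drain_time_sec = bits_per_burst / LINK_BITS_PER_SEC          # time to clear one burst
avg_utilization = bits_per_burst / (LINK_BITS_PER_SEC * INTERVAL_SEC)

print(f"{packets_per_burst} packets per burst, "
      f"drains in {drain_time_sec:.2f} s, "
      f"average utilization {avg_utilization:.1%}")
```

So even with full header overhead the burst clears in well under a second, and the average load stays in the low single digits of a percent.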
I expect you wouldn't even notice the traffic.
When you test the setup and find that your network can't actually handle the traffic spikes, one solution is to spread the sends over an interval rather than at a strict time. So instead of "send a packet every 10 seconds", use "send a packet every 10 ± 2 seconds".
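A minimal sketch of that jittering idea, in Python (the target addresses and payload here are placeholders, not anything from the thread): each proxy sleeps a random 10 ± 2 seconds between rounds, so the 200 proxies' bursts drift apart instead of lining up.

```python
# Jittered UDP sender sketch: "every 10 +/- 2 seconds" instead of a
# strict 10-second clock. Addresses and payload are placeholders.
import random
import socket
import time

TARGETS = [("192.0.2.1", 9999), ("192.0.2.2", 9999)]  # example hosts
BASE_INTERVAL = 10.0  # seconds
JITTER = 2.0          # seconds

def next_delay() -> float:
    """Pick the next sleep duration, uniformly in [8, 12] seconds."""
    return BASE_INTERVAL + random.uniform(-JITTER, JITTER)

def send_loop(payload: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        for addr in TARGETS:
            sock.sendto(payload, addr)
        time.sleep(next_delay())
```

Because each proxy draws its own random delay, the synchronized 2,000-packet burst turns into traffic smeared across the whole interval.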
To add to what Frank said, I'm guessing that the network interface cards have enough on-board intelligence to discard the packets which aren't addressed to them, without invoking the protocol stack and CPU on the computer.
If you had antique/dumb network cards, or if the device driver put the card into "promiscuous receive" mode so that it accepts even packets not addressed to it (useful only for routers and packet sniffers), then the packets-per-second rate would put load on the computers as well as on the network bandwidth.