We recently completed an analysis of multicast sending performance. Happily, Java and C performed almost identically as we tested different traffic sending rates on Windows and Solaris.
However, we noticed that the time to send a multicast message increases as the time between sends increases. The more frequently we call send, the less time it takes to complete the send call.
The application lets us control how long we wait between send calls; below you can see per-send time increasing as the delay between packets goes up. When sending 1000 packets/second (1 ms wait time), it takes only 13 microseconds to call send. At 1 packet/second (1000 ms wait time), that time increases to 20 microseconds.
[Chart: wait time between sends (ms) vs. microseconds to complete send]
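The measurement loop can be sketched as below. This is a minimal illustration, not the actual test harness: the class name, destination address, port, payload size, and iteration counts are all placeholders. Only the send() call itself is timed; the sleep between sends is excluded.

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class SendTimer {
    // Returns the average time, in microseconds, spent inside socket.send(),
    // sleeping delayMs between sends. The destination may be a multicast
    // group or a unicast address; only the send() call is timed.
    static long averageSendMicros(InetAddress dest, int port, int payloadBytes,
                                  int count, long delayMs)
            throws IOException, InterruptedException {
        byte[] payload = new byte[payloadBytes];
        DatagramPacket packet =
                new DatagramPacket(payload, payload.length, dest, port);
        long totalNanos = 0;
        try (DatagramSocket socket = new DatagramSocket()) {
            for (int i = 0; i < count; i++) {
                long start = System.nanoTime();
                socket.send(packet);
                totalNanos += System.nanoTime() - start;
                if (delayMs > 0) {
                    Thread.sleep(delayMs);
                }
            }
        }
        return totalNanos / count / 1000;
    }

    public static void main(String[] args) throws Exception {
        // Defaults to loopback so the sketch runs anywhere; pass a
        // 239.x.x.x group address to exercise actual multicast sends.
        InetAddress dest =
                InetAddress.getByName(args.length > 0 ? args[0] : "127.0.0.1");
        for (long delayMs : new long[] {0, 1, 10}) {
            System.out.println(delayMs + " ms wait: "
                    + averageSendMicros(dest, 30000, 100, 20, delayMs)
                    + " us/send");
        }
    }
}
```

A run across a wider range of delays (1 ms up to 1000 ms between sends) is what produced the numbers above.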
We see this phenomenon in both Java and C, on both Windows and Solaris. We’re testing on a Dell 1950 server with an Intel Pro 1000 dual-port network card. Micro-benchmarking is hard, especially in Java, but we don’t think this is related to JIT compilation or GC.
The Java code is here; I've been running it with a command line like:
java -XX:+PrintCompilation -verbose:gc MulticastSender 10.144.124.86 126.96.36.199 30000 0 100