I have a question regarding network throughput in Windows. I'm running the same code on Windows and Ubuntu (using sockets from Boost). Simplified, the application just consists of one thread creating a UDP packet and then sending it over and over in a loop. On Linux I get a throughput of ~610 Mbit/s, while on Windows Vista I only get ~90 Mbit/s.
Does someone have the slightest idea why this is? The same equipment is used in both cases. I have also tried generating traffic with the rtpplay tool, with the same result, so I do not think it has anything to do with my code. I have also tried disabling firewalls etc. in Windows.
These figures are awfully close to 100 Mb/s and 1000 Mb/s. My first guess would be that one of the connections is operating at 1000 Mb/s Full Duplex, and the other is not.
Can you confirm that each adapter is set up and running at 1000 Mb/s Full Duplex?
Good Luck, Craig - CRG IT Solutions - Microsoft Gold Partner
But even if the packet size isn't optimal, should the difference really be this big?
Actually, it wouldn't surprise me at all. The two OSs could handle packets in entirely different ways. For example, if one OS does error checking on each packet and the other just checks the final concatenation, that could make a significant difference.
Either way, that's a small packet size. I would up it to at least 8 KB (8192 bytes).