-
March 23rd, 2006, 09:52 AM
#1
Flush TCP socket
Hi,
how can I force a connection-oriented socket (TCP socket) to send the data in its buffer immediately (flush the socket)? I also need to wait, at the end, until the data has actually been sent physically (has left the network card).
Jiri M.
-
March 23rd, 2006, 10:25 AM
#2
Re: Flush TCP socket
Well,
Please clarify what you are trying to do!
Additionally, do you mean for every send?
With TCP/IP, you are not in control of when the data actually goes out. You send it from your buffer, it goes into the buffer in the TCP stack, and in reality TCP then sends it when it is good and ready.
You can keep your program moving between send and recv calls by using non-blocking sockets.
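For reference, here is a minimal sketch of putting a socket into non-blocking mode with fcntl. This is the POSIX way; on Windows you would use ioctlsocket with FIONBIO instead. The helper name is mine, not from any library:

```c
#include <fcntl.h>
#include <sys/socket.h>

/* Put a socket into non-blocking mode so send()/recv() return
 * immediately (with EWOULDBLOCK/EAGAIN) instead of waiting.
 * Returns 0 on success, -1 on error. */
int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);   /* read current file status flags */
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1 ? -1 : 0;
}
```

Reading the flags first and OR-ing in O_NONBLOCK preserves any other flags already set on the descriptor.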
If you are just trying to shut down at the end try this link:
GraceFul Shutdown
HTH,
ahoodin
To keep the plot moving, that's why.
-
March 23rd, 2006, 11:03 AM
#3
Re: Flush TCP socket
I mean this for every send().
-
March 23rd, 2006, 05:09 PM
#4
Re: Flush TCP socket
Disable the Nagle algorithm (which may not be a good thing to do) by calling setsockopt with the TCP_NODELAY option. See MSDN or the Internet for example code.
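For the record, a minimal sketch of that call with the BSD-sockets API (on Winsock the option value is passed as a const char *, but the option name and levels are the same):

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_NODELAY */

/* Disable the Nagle algorithm on a TCP socket so small writes are
 * put on the wire immediately instead of being coalesced.
 * Returns 0 on success, -1 on error (check errno). */
int set_tcp_nodelay(int fd)
{
    int flag = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                      (const void *)&flag, sizeof(flag));
}
```

Note that even with Nagle disabled this only hands the data to the stack sooner; it does not give you a portable way to wait until the data has physically left the network card, which is what the OP asked for.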
-
August 23rd, 2007, 10:43 AM
#5
Re: Flush TCP socket
No, do not disable Nagle. The question shows a misunderstanding of the basic nature of TCP, and correspondingly the code is being written on incorrect assumptions. You must fix your understanding of TCP, and then write code that is consistent with correct assumptions.
Mike
PS: This is an old question, but a link to it has been given as the solution to a similar question (see http://www.codeguru.com/forum/showth...69#post1618669 ). So I felt it was worthwhile to wake up the thread and post a comment. Disabling Nagle is a false solution; do not do it.
-
August 23rd, 2007, 01:55 PM
#6
Re: Flush TCP socket
Originally Posted by MikeAThon
No, do not disable Nagle. The question shows a misunderstanding of the basic nature of TCP, and correspondingly the code is being written on incorrect assumptions. You must fix your understanding of TCP, and then write code that is consistent with correct assumptions.
As I said, disabling Nagle may not be a good thing to do... There are, however, some situations where it is useful, for instance a telnet-like application (you don't want to wait 200 ms + 200 ms for the echo-back while typing). We don't know what kind of app the OP was coding.
-
September 29th, 2010, 06:29 AM
#7
Re: Flush TCP socket
whoa there
found this page on google (very high up on page rank for "flush tcp socket") and i think some clarification is necessary.
contrary to what most people say, there is nothing foolish about wanting to hint to the tcp stack to flush its internal buffers immediately. here's one very common use case:
an http request over ssl (ssl implementations usually do a lot of careless writing/reading). the user wants to see the page as fast as possible. on a connection where latency is physically about 100-300ms, waiting another 200-300ms for the final packet of a request sequence to get flushed is a 50% slowdown, and that's huge! at the point when a full request has been handed off to the OS, there is no performance penalty in flushing the outbound buffer immediately. packet fragmentation is only an issue when bandwidth matters, but if you have just finished your transmission then upload bandwidth won't be utilized anymore anyway (unless you have more requests to send immediately).
this has nothing to do with treating tcp like a packet oriented protocol or something silly like expecting read(2) to return data in patterns similar to what was transmitted over the wire. this is about giving more information to a lower level stack so it can perform better. what's the old mantra? don't hide power.
the truth is that tcp sockets aren't magical streams that always perform optimally in every scenario, and blindly telling the world never to disable nagle's algorithm because "it's a bad thing to do" doesn't seem helpful on a coding forum like this one. it is not uncommon for production protocol code to use knobs like the MTU, SO_SNDBUF, TCP_NODELAY and other socket options to squeeze as much performance out of tcp as possible, especially when the default settings can cause up to a 50% performance degradation.
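as an illustration of the SO_SNDBUF knob mentioned above, here is a sketch of enlarging the kernel send buffer. the helper name is made up for this post; the kernel may round, double (Linux does), or cap the value you request, so always read the result back rather than assuming:

```c
#include <sys/socket.h>

/* Ask the kernel for a larger send buffer and return the size
 * actually granted, which may differ from the request (Linux, for
 * example, doubles it to leave room for bookkeeping overhead).
 * Returns -1 on error. */
int grow_send_buffer(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   (const void *)&bytes, sizeof(bytes)) == -1)
        return -1;

    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   (void *)&granted, &len) == -1)
        return -1;
    return granted;
}
```

a larger send buffer lets the application hand off a big burst in one go instead of blocking partway through, which is a different (and often safer) lever than disabling nagle.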
-
September 29th, 2010, 01:19 PM
#8
Re: Flush TCP socket
99.9% of the people who try to disable Nagle are motivated by a basic misunderstanding of how TCP actually works. They might, for example, not understand that TCP is a stream protocol, and thus view Nagle as the source of the "problem" of receiving only partial messages. That might be the motivation for the OP here, and it clearly is the motivation for the question over at the linked-to topic at "http://www.codeguru.com/forum/showthread.php?p=1618669#post1618669"
For the remaining 0.1%, there might be legitimate reasons to consider the disabling of Nagle. Perhaps your application fits into that 0.1% category, but perhaps not. Your rationale, for example, might be viewed as a greedy and self-important use of network resources that are shared by all applications on the computer, i.e., a viewpoint that the SSL application is somehow entitled to priority use of network resources, at the expense of all other applications on the computer. And it goes beyond the local machine, too: one purpose of Nagle is to promote efficient and fair use of bandwidth in the entire network backbone.
So, I maintain my advice that Nagle should not be disabled, although I will temper it with a concession: the advice is general, and in rare cases, for experienced programmers only, who are intimately familiar with TCP and who understand the purpose of Nagle, there might be a 0.1% case where disabling Nagle can be considered a legitimate alternative.