Now I really don't understand why!
I made a buffer of 10k for each send.
Each time fread() reads bytes, the server sends the byte 1 followed by the bytes that were read.
Otherwise it sends 0 to tell the client to stop its receive loop.
Everything goes fine, send after send, and then it gets stuck for no reason!
Here is my code:
C++ server
Code:
case 0x01:
{
    SocketR[1].p = false;

    FILE* stream;
    int h;
    BYTE buffer[10000];
    char rtyh[260];

    // Receive the requested file name and null-terminate it.
    int y = SocketR[1].Receive(rtyh, 260);
    rtyh[y] = '\0';

    // Send the file size first (4 bytes).
    int ss = GetSize(rtyh);
    SocketR[1].Send(&ss, 4);

    if ((stream = fopen(rtyh, "rb")) != NULL)
    {
        do
        {
            h = fread(buffer, sizeof(BYTE), sizeof(buffer), stream);
            if (h)
            {
                // Flag 1 = a chunk follows, then its length, then the data.
                // Note: CSocket::Send() returns SOCKET_ERROR (-1) on failure,
                // which is truthy, so "if (Send(...))" did not detect errors.
                int c = 1;
                if (SocketR[1].Send(&c, 4) != SOCKET_ERROR &&
                    SocketR[1].Send(&h, 4) != SOCKET_ERROR)
                {
                    SocketR[1].Send(buffer, h); // stray ';' after the if() here was a bug
                }
            }
            else
            {
                // Flag 0 = end of file, tell the client to stop receiving.
                int s = 0;
                SocketR[1].Send(&s, 4);
                break;
            }
        } while (1);
        fclose(stream); // was missing: the file handle leaked
    }
    else
    {
        // fopen failed: still send the stop flag so the client does not hang.
        int s = 0;
        SocketR[1].Send(&s, 4);
    }

    SocketR[1].p = true;
}
If you are using TCP, then you might need to re-adjust your thinking about TCP. TCP is a stream-based protocol, which means that the data on the wire is just a stream of bytes. Boundaries are not somehow inserted between individual calls to Send(). As a result, data on the wire from one Send() is often grouped together with data on the wire from a previous Send() and a subsequent Send().
The code seems to be written as if you expect one call to Send() (at the sender side) to result in exactly one corresponding call to Receive() (at the recipient side). It won't. For example, your sender might call Send() three times, each with 10000 bytes. Your recipient might get three groups of 10000 bytes too, but usually it will not. Usually it will get something else, such as one group of 30000 bytes, or two groups of 15000 bytes, or one group of 10000 bytes followed by four groups of 5000 bytes each, or any combination at all.
Your sender and your receiver must be written so that they both expect a pure stream of data, with no assumptions about how the stream might be grouped or re-grouped by all the routers on the path between the sender and the recipient.
The reason why Sleep() makes a difference is that Sleep() causes your thread to yield to another active thread in the system, and as a consequence, it probably signals the TCP stack that it's OK to send all the data in its current buffer. In other words, through a lucky coincidence of your current debug environment, the call to Sleep() forces a one-to-one correspondence between one call to Send() and one call to Receive(). However, that lucky coincidence will not last forever, and it will almost certainly fail "in the field", and fail often. As an example, you are probably testing your program on a single machine using the loopback address (127.0.0.1). Try deploying your program in a release build and also across a LAN, using different machines. The program will almost certainly fail.
No, that's not possible. According to the layered model for networks, the application layer (that's your program) cannot dictate to the lower layers (like the TCP stack) how the lower layer should do its job. See "TCP/IP model" at http://en.wikipedia.org/wiki/TCP/IP_model
When they first start programming for TCP, most people are surprised to learn about this "streaming" property of TCP, and their initial reaction is "how can I get around this"? You can't. Learn to appreciate the efficiencies that the TCP streaming model gives to all users of the overall bandwidth of the network, and then program according to the TCP streaming model.
Send(xxx, xxx); // stay here until this send finishes its work, then call the second send
Send(xxx, xxx);
Is that possible?
Yes, it's possible once you interleave your sending with receiving some reply from the server (like "OK" or something). Then you have to take care of letting the server side know you are starting your custom session, acknowledging every chunk received, and closing the session - this is what they call a "custom protocol". Do you really need this?
In other words, wait for an acknowledgement of each 10000 bytes before sending the next 10000 bytes.
But without waiting for an acknowledging handshake, there is no way to make the Send() function wait intrinsically for the recipient to receive all data, before exiting the Send() function.
Mike
PS: Thanks to Igor for pointing out that it's not impossible to get the OP's desired result. The performance would be terrible (as I think Igor is suggesting), and the OP will probably not want to implement this for performance reasons, but it is not, in fact, impossible.
Last edited by MikeAThon; December 12th, 2007 at 11:38 AM.
Mike, that was exactly my point. Actually, I cannot see any reason for "waiting" for the end-of-send "event", nor can I see how anybody would benefit from knowing that a chunk was actually sent.
The nature of a TCP connection is to make every possible effort to deliver the data exactly as it was sent. It seems like the guy is just trying to duplicate some TCP features.
No, I'm not trying to duplicate TCP features (I don't know how you got that idea).
Now I understand that a TCP socket does not always send/receive the requested number of bytes.
Another question: does this happen only with MFC sockets, or with all sockets (C#, pure C++ without MFC)?
I wrote my program in MFC C++ and still have the same problem.
I added the while loop until it receives all the bytes, exactly the same way as the project you gave me (from CodeProject). It works well on my local computer, but when I connect to a LAN computer it does not work; I put a breakpoint and it leads me to the receive while loop, and I don't know why that happens.
Now I am going to try it in C++ (a console application for the server).
Last edited by yoni1993; December 14th, 2007 at 05:07 AM.
TCP works the way it works regardless of the OS and the sockets you use. It's always a streaming protocol, and it is always free to group and regroup your sent bytes into whatever chunks it wants to. For example, there's one algorithm (Nagle) that is precisely designed to group all sends of small amounts of bytes into one packet, until a prior packet is ACK'd. Another example is the re-transmission algorithm: in the case of a re-transmission due to a lost packet, the recipient will receive everything from the lost packet onwards in one big chunk.
These algorithms are part of TCP itself, and are independent of the OS or the socket implementation. Thus, they will characterize TCP regardless of whose system it's run on.
So, the fact that "it works great in a console application without the while loop for sending all non-sent bytes" is probably a coincidence for now, and the program will exhibit the behavior you described earlier at some time in the future.
If you have specific questions, you should show us your code, since it has probably changed a lot since you first posted it.
TCP actually makes two guarantees that UDP does not. The first is guaranteed delivery, as you say. The second is in-order delivery: TCP guarantees that the order in which the sender streams out the bytes is the order in which the recipient will receive them. UDP does not make this guarantee, and it's possible (maybe due to different routings) that a second datagram reaches a recipient before a first datagram that was sent earlier.
But guarantees of TCP don't change simply because one program uses a console approach and the other a GUI approach. There is still something wrong with your approach. CSocket is fine. The article doesn't say there is something wrong with CSocket; it merely says that the programmer needs to understand TCP or he will make a mistake in programming.