On the server side, as soon as you receive data from a client, you can call send() to respond. You do not need to check for readiness to write first (i.e., you do not need to wait for FD_ISSET on a write set). The only time you need to check for readiness to write is when send() returns an indication that the socket is not ready to write.
In addition, your code assumes that all data (including a terminating NULL) will be received in a single call to recv(), and further assumes that all data will be sent in a single call to send(). Both are bad assumptions. TCP is a stream-based protocol, not a message-based protocol, so when using TCP, you must implement some sort of serialization when sending and receiving bytes.
What do you think of the client's code? Does it need select() and FD_ISSET too, or do I have to code it another way?
If I connect two telnet clients to the server, everything seems to work fine, which is why I'm thinking the problem may be in the client.
In almost all situations, client or server, you do not need to check for readiness to write before calling send(); in other words, you do not need to wait for FD_ISSET before writing. The only time you need to check for readiness to write is when send() returns an indication that the socket is not ready. If that happens, you must wait until select() reports FD_ISSET for the socket before calling send() again.
So, in general, the algorithm is:
1 - call send()
2 - did send() complete correctly? If so, then you're done; call send() or recv() or whatever for new data. (Note that send() may also accept fewer bytes than you asked it to send; the remainder still needs to be sent.)
3 - if send() did not complete correctly, put the socket in a write fd_set, call select(), and call send() again once select() returns with FD_ISSET set for the socket.
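For illustration, here's a minimal sketch of that loop for a non-blocking Winsock socket. The send_all name and its parameters are mine, not from your code:

```c
#include <winsock2.h>

/* Send all of buf, waiting in select() whenever the socket's send
   buffer is full. Returns len on success, -1 on a real error. */
int send_all(SOCKET s, const char *buf, int len)
{
    int total = 0;
    while (total < len) {
        int n = send(s, buf + total, len - total, 0);
        if (n > 0) {
            total += n;  /* partial sends are normal; keep going */
        } else if (n == SOCKET_ERROR &&
                   WSAGetLastError() == WSAEWOULDBLOCK) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(s, &wfds);
            /* wait until select() reports the socket writable again */
            if (select(0, NULL, &wfds, NULL, NULL) == SOCKET_ERROR)
                return -1;
        } else {
            return -1;   /* real error; caller should close the socket */
        }
    }
    return len;
}
```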
Also, could you clarify what you said in the second paragraph? I didn't understand it very well.
"TCP is a stream-based protocol" means that the data on the wire is a stream of individual bytes: there is no notion of packets. If you call send() quickly, three times in a row, with 50 then 100 then 150 bytes, the result will be a stream of 300 bytes on the wire. The recipient must somehow decode these into the original three calls to send() (if that's the intention). Likewise, on the receiving end, one call to recv() might result in receipt of bytes from multiple calls to send() and indeed it's possible that the recipient receives only part of the bytes from a single call to send().
For that reason, it's up to you, as the application programmer, to institute some sort of application-level protocol in order to serialize and de-serialize the data being sent.
One such protocol is for the server always to terminate each sent message with a NULL byte. It's wrong, however, for the client to assume that a single call to recv() will always contain the terminating NULL that the server sent; rather, the client must actively look for it, and continue calling recv() until the terminating NULL is found. The client also cannot disregard bytes received after the terminating NULL, since those bytes might contain the beginning of the next NULL-terminated message from the server.
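To make that concrete, here is one possible shape for the client's receive loop. The recv_message name and all its parameters are illustrative, and for simplicity it assumes a blocking socket (with a non-blocking socket you would also handle WSAEWOULDBLOCK, as in the send sketch above):

```c
#include <winsock2.h>
#include <string.h>

/* Extract one NULL-terminated message into out. buf, cap, and *used hold
   leftover stream bytes between calls, so data received after one
   message's NULL is kept as the start of the next message.
   Returns the message length including its NULL, or -1 on error. */
int recv_message(SOCKET s, char *buf, int cap, int *used,
                 char *out, int outcap)
{
    for (;;) {
        /* do we already have a complete message buffered? */
        char *end = memchr(buf, '\0', *used);
        if (end != NULL) {
            int msglen = (int)(end - buf) + 1;
            if (msglen > outcap)
                return -1;
            memcpy(out, buf, msglen);
            /* keep the bytes after the NULL: they belong to the
               next message */
            memmove(buf, buf + msglen, *used - msglen);
            *used -= msglen;
            return msglen;
        }
        if (*used == cap)
            return -1;   /* message larger than the buffer */
        int n = recv(s, buf + *used, cap - *used, 0);
        if (n <= 0)
            return -1;   /* connection closed, or an error */
        *used += n;
    }
}
```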
Incidentally, I just realized that your client is using a combination of blocking sockets and select(). That makes no sense, and as you have realized, your call to select() must include a timeout or else it might never return.
Use of select() makes sense only when using non-blocking sockets.
You have set the timeout to zero, which means you are constantly polling for activity without any pause. That's why you also needed the 10 msec SleepEx call, since otherwise you would be using 100% CPU. Both are bad: the zero-timeout polling, and the SleepEx workaround for it.
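A better pattern is to make the socket non-blocking and let select() itself do the waiting with a real timeout, so there is no polling and no SleepEx. A sketch, where sock stands for your connected socket:

```c
#include <winsock2.h>

u_long nonblocking = 1;
ioctlsocket(sock, FIONBIO, &nonblocking);   /* non-blocking mode */

/* rebuild the fd_set before every select() call: select() modifies it */
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(sock, &rfds);

struct timeval tv;
tv.tv_sec  = 1;    /* sleep in the kernel, waking at most once per second */
tv.tv_usec = 0;

int ready = select(0, &rfds, NULL, NULL, &tv);
if (ready == SOCKET_ERROR) {
    /* handle the error */
} else if (ready > 0 && FD_ISSET(sock, &rfds)) {
    /* data is available: call recv() */
}
/* ready == 0 means the timeout elapsed with no activity */
```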
See better designs at the Winsock FAQ site that I linked to.