Visual Studio 2008, C++, MFC, Windows XP Pro, Intel-based Ethernet NIC, private LAN closed off from the world.
My application builds a packet that is sent to another computer via TCP/IP over Ethernet. The final structure is a conglomeration of three structures, with the one in question looking like this:
unsigned int date_tag;
unsigned int date_size;
unsigned int date_value;
unsigned int microseconds_tag;
unsigned int microseconds_size;
unsigned int microseconds_value;
unsigned int status_tag;
unsigned int status_size;
unsigned int status_value;
… couple more…
Please excuse typos, I cannot cut and paste from the work computer.
The structure is dictated by the vendor's server application that receives this data. The tag values are integers that range from 1 to about 1K. All the size values are 4, indicating a four-byte value. The problem is that the server does not accept the data. Using the debugger, I get to the area that sends the structure and look at it before and after the call to the TCP class. All the sections are there, and all parameters have the expected values in their correct places. I threw in a few steps to get the address of each element and, as expected, it increments by four for each item.
I looked at some packets with Wireshark, and it shows that the value for microseconds_value is SHIFTED RIGHT by two bytes and overwrites the first two bytes of status_tag.
The microseconds value is hundreds of microseconds since midnight and uses almost the full 32 bits. All the others are much smaller and are just fine.
I have pondered this all day while working other tasks, but cannot think of anything that could possibly cause it. Every packet displays this problem, and only in this one field.
Side note that might be related: I discovered that packets out of the computer had checksum errors. I resolved that by disabling the option that offloads checksum calculation to the adapter. I will be getting the latest drivers, but I really don't think a driver would cause this problem.
While writing this I thought of forcing larger values into some of the other fields, and smaller values into the microseconds_value field. That might tell me something. Maybe I can declare another structure for microseconds and load it one byte at a time.
Has anyone seen a problem similar to this? Any clues?
First, in TCP, there is no such thing as a "packet". TCP is a stream protocol, not a packet or message protocol. Show us your send and receive code to confirm that you understand the implications of a stream versus a packet/message protocol.
Second, ensure that the compiler is packing the members of your structures consecutively, with no padding. In Visual C++, you would use #pragma pack, in both the send and the receive code.
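To illustrate, here is a hypothetical reconstruction of the sender-side layout (field names taken from the original post, struct name my own) wrapped in #pragma pack so the compiler cannot insert padding between members. Both sender and receiver must compile with the same packing.

```cpp
#include <cassert>

#pragma pack(push, 1)   // pack members on 1-byte boundaries, no padding
struct TimeStatusBlock {
    unsigned int date_tag;
    unsigned int date_size;          // always 4 per the vendor spec
    unsigned int date_value;
    unsigned int microseconds_tag;
    unsigned int microseconds_size;  // always 4
    unsigned int microseconds_value;
    unsigned int status_tag;
    unsigned int status_size;        // always 4
    unsigned int status_value;
    // ... couple more ...
};
#pragma pack(pop)       // restore the previous packing setting
```

With every member a 32-bit unsigned int there happens to be no padding anyway, but the pragma makes the wire layout explicit and protects you if a smaller field is ever added.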
#pragma pack, I had never heard of that. I will check and see if things are organized as I think they are.
I did make progress today. One obvious thing I realized is that this 32-bit little-endian Windows XP system does not byte-swap two bytes at a time; a 32-bit value is byte-reversed four bytes at a time. That, plus an error counting bytes, got me two bytes off. There are a bunch of parameters that are small, and with an offset of two, my endian-swap error made my byte counting appear to be right when it was not.
Geez, wait till I start seeing the native arrangement of 64-bit code and complete reversal eight bytes at a time.
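For reference, here is a portable sketch of the four-byte reversal described above. On a little-endian x86 box this is exactly what Winsock's htonl() does; the function name here is my own, and the code just shows the mechanics.

```cpp
#include <cassert>
#include <cstdint>

// Reverse the byte order of a 32-bit value (host <-> network order
// on a little-endian machine). All four bytes move, not two at a time.
uint32_t swap32(uint32_t v) {
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}
```

Applying the swap twice gets you back to the original value, which is a handy sanity check when debugging byte-order problems.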
First, in TCP, there is no such thing as a "packet".
That is puzzling. I send my data in, let's call them chunks, about 1500 bytes at a time. Each one is quite independent of the last and the next. (This is telemetry data.) I don't want a stream at all. The vendor accepting the data has provided the layout that I must follow. To me it looks like "packaged data" and not a stream at all.
... To me it looks like "packaged data" and not a stream at all. ...
You might WANT a packet, but you won't get it with TCP alone. (The key word here is "alone" -- see below which discusses an application-level protocol)
Originally Posted by bkelly
... I don't want a stream at all. ...
TCP is a stream protocol. Period. There is no argument on this.
If you want something on TCP that resembles packets, then it is up to you to enforce an application-level protocol that serializes your packets, sends the serialized data out over TCP, and then de-serializes the data back into packets on receipt.
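As a minimal sketch of such an application-level protocol (my own example, not the vendor's layout): prefix each logical packet with a 4-byte big-endian length, so the receiver can rediscover the packet boundaries inside the TCP byte stream.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Wrap a payload in a simple frame: 4-byte big-endian length, then the
// payload bytes. The receiver reads the length first, then reads exactly
// that many bytes to recover one logical packet.
std::vector<unsigned char> frame(const std::vector<unsigned char>& payload) {
    uint32_t len = static_cast<uint32_t>(payload.size());
    std::vector<unsigned char> out;
    out.push_back(static_cast<unsigned char>((len >> 24) & 0xFF));
    out.push_back(static_cast<unsigned char>((len >> 16) & 0xFF));
    out.push_back(static_cast<unsigned char>((len >>  8) & 0xFF));
    out.push_back(static_cast<unsigned char>( len        & 0xFF));
    out.insert(out.end(), payload.begin(), payload.end());
    return out;
}
```

Since the vendor dictates the wire format here, the poster can't add a prefix like this, but the same idea applies: the receiver must know, from the data itself, where each logical record starts and ends.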
Originally Posted by bkelly
... I send my data in, let's call them chunks, about 1500 bytes at a time ...
You might send 1500 bytes at a time, but there is no guarantee in TCP that all 1500 will be received in one single call to recv(). The 1500 bytes will get there, and they will get there in the correct order (those are some of the TCP "guarantees"), but it might take two or three or more calls to recv() before all are received, and in each call to recv(), there might be parts of a prior "chunk" (your terminology) or a successive "chunk".
Because of this, it's up to you and your application-level protocol to de-serialize TCP data upon receipt.
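The receive side of that de-serialization typically looks like the loop below. This is a sketch under my own naming; `recv_fn` stands in for ::recv() on a real socket (it returns the number of bytes read, 0 on orderly close, negative on error), injected here so the loop can be shown without an actual connection.

```cpp
#include <cassert>
#include <cstring>
#include <functional>
#include <string>

// Keep calling a recv()-like function until exactly `len` bytes have
// arrived. A single call may return fewer bytes than requested, which
// is precisely the TCP stream behavior discussed above.
bool recv_exact(const std::function<int(char*, int)>& recv_fn,
                char* buf, int len) {
    int total = 0;
    while (total < len) {
        int n = recv_fn(buf + total, len - total);
        if (n <= 0) return false;  // connection closed or socket error
        total += n;
    }
    return true;
}
```

In real code, recv_fn would be a thin wrapper around recv(sock, buf, len, 0); the point is that the caller loops until the full logical record is in hand.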