Well, after all this I have to concede that I was just resisting the
change. No big deal either way; thanks for the discussion, everyone.
I am not sure if anyone has pointed this out: the delimiter approach requires additional processing overhead at both ends. The chosen delimiter must not appear inside the information content, or else a portion of the content will be mistaken for the delimiter. To avoid this, we have to do "byte stuffing" to differentiate delimiters from actual information content.
I would prefer to prepend a header to every message sent, rather than using delimiters. It caters for both fixed-length and variable-length packets in a more organized and simpler way.
Perhaps I misunderstand you, but in the original question the delimiter is a newline. This implies to me that the data is textual, and the delimiter is as effective as it is in the millions of text files that exist.Quote:
Originally posted by Wombat
The delimiter chosen should not appear inside the information content, or else a portion of the information will be mistaken for the delimiter.
Or do I misunderstand?
Okay, I am actually talking about the delimiter approach as a whole, rather than being specific to the context brought up by souldog. If the data is textual, then using a newline as the delimiter is acceptable without the need for byte stuffing. (Actually, I admit I did not pay much attention to the original post because it sounds a bit ambiguous :p)Quote:
Originally posted by Sam Hobbs
Perhaps I misunderstand you, but in the original question the delimiter is a newline. This implies to me that the data is textual, and the delimiter is as effective as it is in the millions of text files that exist.
Or do I misunderstand?
As for other systems that carry not only textual data, byte stuffing would be required if using the delimiter approach. In that case, I would rather opt for prepending a header.
Yes, my first post is ambiguous :cry:
I didn't have a specific question. I just wanted to know if anyone out there had a very good reason to prefer delimited streams to using a header with the size. The type of token is not important, but I had a particular example in mind.
SolarFlareeeeeeeeee:(Quote:
Originally posted by souldog
Yes, my first post is ambiguous
I didn't have a specific question. I just wanted to know if anyone out there had a very good reason to prefer delimited streams to using a header with the size. The type of token is not important, but I had a particular example in mind.
You deleted my question to Souldog right?
I just wanted to know if he already solved the problem or not... True. (No joke this time, I don't want to joke like that either.)
[Yves: it's not solarflare, it's me. I'm removing noise from the forum]
Soul Dog !!!!
Well, your ache is common, as the same thing aches me as well.
As I see it, the problem is that you want the goodies of TCP but want the communication as UDP, as packets, so that you do not have to insert the message delimiters yourself.
I'd seriously recommend you take a look at SCTP (Stream Control Transmission Protocol).
SCTP was originally made to replace SS7 and carry telecom signalling info over IP networks; however, it is so good (and real-time) that it is now being used everywhere.
It is a new baby and not implemented in a whole lot of OSs by default; however, implementations are available for most OSs as patches. You can find them on the internet.
The basics of SCTP are that it delivers "messages" rather than a stream, with the same guarantees as TCP (stream), so we do not have to worry about message delimiters or packet size, nor do we have to think about CRCs, which are a real bear to figure out.
As for compatibility, I know the Linux flavours already have an SCTP implementation, and I also know that an SCTP implementation is available for the Windows platform as well.
As for what someone said about the little-endian and big-endian problem: that has nothing to do with the OS; it has to do with the processor and how it deals with the memory words given to it. So forget it, no one can ever achieve that independence; that is best left to be done in the kernel, in the protocol implementation.
I hope this is useful to u,
Best regards
Salman:wave:
No. If an application transfers binary data, it has to deal with it, of course, if it wants to be completely cross-platform. All is good while both client and server are running on machines with the same architecture. If you cannot be sure of that, you are asking for problems.Quote:
Originally posted by salman108
As for what someone said about the little-endian and big-endian problem: that has nothing to do with the OS; it has to do with the processor and how it deals with the memory words given to it. So forget it, no one can ever achieve that independence; that is best left to be done in the kernel, in the protocol implementation.
There is a list of functions specified for that case:
htonl, htons, ntohl, ntohs - convert values between host and network byte order.
___________________________________________________
No. If an application transfers binary data, it has to deal with it, of course, if it wants to be completely cross-platform. All is good while both client and server are running on machines with the same architecture. If you cannot be sure of that, you are asking for problems.
___________________________________________________
See, as far as I know, Network = pair of wires. Think of the byte
10101010.
Now the bits have to go on the wire starting either from the left (1) or from the right (0).
The thing is, whatever goes on the network has to be in network byte order. The problem is that some processors do not comply with network byte order; they put the other end first.
Intel processors do not use network byte order, so we have to convert.
This is how the hyperDictionary.com describes this
" The order in which the bytes of a multi-byte number are transmitted on a network - most significant byte first (as in "big-endian" storage). This may or may not match the order in which numbers are normally stored in memory for a particular processor"
and this is how chaps at Novell like to describe this
Byte order is dependent on the CPU word architecture. In processors compatible with Intel processors, 4-byte long integers are represented as a sequence of 4 bytes, with the less significant bytes of the integer stored at the lower addresses. This byte order is called host byte order or little-endian order (little-end-first).
I hope the problem is solved:D