Something I have thought about for a while without ever arriving at an answer I'm satisfied with is how to handle reallocation costs when input data is unbounded.

For instance, in a low-latency environment such as a trading platform, a real-time data collection system, or an MMORPG, where you can receive a large amount of input, you cannot afford a stall while the data structure is rebuilt with a larger buffer. You can use historical data to "guess" an appropriate initial size, but this is far from fool-proof. You can also create a sentinel thread that monitors the load factor of your data structure, but that brings its own problems (e.g., yet another thread competing for execution time, and one that needs to lock the structure).
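To make the trade-off concrete, here is a rough sketch of one scheme I keep circling back to: incremental rehashing, where the larger table is allocated up front but old buckets are migrated only a few at a time on each subsequent operation, so no single insert pays the full O(n) rehash cost. This is just my own toy illustration (all names are mine, concurrency is ignored entirely), not something I'm claiming is production-ready:

```cpp
#include <cstddef>
#include <functional>
#include <list>
#include <utility>
#include <vector>

// Toy int->int hash map with incremental rehashing: when the load factor
// crosses 1.0 we allocate the larger bucket array immediately, but migrate
// only a few old buckets per subsequent operation, spreading the rehash
// cost across many inserts/lookups instead of stalling on one.
class IncrementalMap {
    using Bucket = std::list<std::pair<int, int>>;

    std::vector<Bucket> old_, new_;   // new_ is non-empty only mid-rehash
    std::size_t size_ = 0;            // total element count
    std::size_t migrate_pos_ = 0;     // next old bucket to migrate
    static constexpr std::size_t kStep = 2; // buckets migrated per operation

    bool rehashing() const { return !new_.empty(); }

    std::size_t index(int key, const std::vector<Bucket>& t) const {
        return std::hash<int>{}(key) % t.size();
    }

    // Move up to kStep old buckets into the new table.
    void migrate_some() {
        for (std::size_t i = 0; i < kStep && migrate_pos_ < old_.size();
             ++i, ++migrate_pos_) {
            for (auto& kv : old_[migrate_pos_])
                new_[index(kv.first, new_)].push_back(std::move(kv));
            old_[migrate_pos_].clear();
        }
        if (migrate_pos_ == old_.size()) { // migration finished: swap tables
            old_ = std::move(new_);
            new_.clear();
            migrate_pos_ = 0;
        }
    }

public:
    IncrementalMap() : old_(8) {}

    void insert(int key, int value) {
        if (rehashing()) {
            migrate_some();
        } else if (size_ + 1 > old_.size()) { // load factor > 1: start rehash
            new_.assign(old_.size() * 2, Bucket{});
            migrate_pos_ = 0;
            migrate_some();
        }
        auto& table = rehashing() ? new_ : old_;
        table[index(key, table)].emplace_back(key, value);
        ++size_;
    }

    const int* find(int key) {
        if (rehashing()) migrate_some();
        // During a rehash a key may live in either table; check both.
        for (auto* t : {&old_, &new_}) {
            if (t->empty()) continue;
            for (auto& kv : (*t)[index(key, *t)])
                if (kv.first == key) return &kv.second;
        }
        return nullptr;
    }

    std::size_t size() const { return size_; }
};
```

The worst-case cost per operation is now bounded by kStep bucket moves, at the price of holding two bucket arrays alive during migration and doing two lookups mid-rehash. It still doesn't solve the concurrency question, which is really the heart of my problem.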

Can anyone recommend schemes they've seen that work?

Incidentally, I posted this question on SO and was accused of cheating on a homework or interview assignment. I am not in school and I am not interviewing. This is just a nagging question I am finally getting around to posting on a forum for input.