I hope the STL gurus can help me with this...

Consider the following: In my current project, a team member implemented code such as the following:
int*    m_timeArray;
string* m_nameArray;
double* m_valueArray;

// Dynamically allocate space for the arrays here...

for (int i = 0; i < numElements; i++)
{
  m_timeArray[i]  = someTime;
  m_nameArray[i]  = someName;
  m_valueArray[i] = someValue;
}
As these C-style arrays actually represent an array of objects (each consisting of a time, a name and a value, in this simplified example), I suggested to
  • create a class which has time, name and value as members, and
  • use a std::vector instead of a C array.

Something along the lines of:
class Element
{
public:
  Element(int time, const char* pName, double value);
  virtual ~Element();

  int    m_time;
  string m_name;
  double m_value;
};

vector<Element> elements;
Now this is a very performance-critical part of the code, and the other team member (who is a hardcore C programmer and performance fetishist anyway, and mistrusts the STL and C++ in general) argued that with this design it wouldn't be possible to add elements to the array with the same efficiency. More specifically, code like
elements.push_back(Element(someTime, someName, someValue));
will of course create a temporary Element object and invoke Element's copy constructor, which is obviously more costly than the C code. However, I made two bold statements, saying that
  1. when switching from a C array to std::vector, very few changes will be necessary in the existing code (which is not true, as I can't add elements to the array with something like elements[i] = ...; instead I will have to use push_back - right?), and
  2. that it is possible to use a std::vector of elements with the same efficiency as multiple C arrays (as in the original code).

Here I'm stuck - it seems that I can't add elements to the vector without creating a temporary object, and therefore effectively copying the data twice. Any ideas?