Sorting an array of unique random numbers at insertion

I have a piece of code that works well to fill an array with unique random numbers. My problem now is that I want to sort these numbers, not after the array is full, but as new numbers are being inserted. So as each new number is inserted into the array, it finds the position it is meant to be in. The code I have for creating unique random numbers is below, thank you in advance:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

using namespace std;

#define MAX 2000 // Values will be in the range (1 .. MAX)

static int seen[MAX]; // These are automatically initialised to zero
                      // by the compiler because they are static.
static int randomNum[1000];

int main(void)
{
    int i;

    srand(time(NULL)); // Seed the random number generator.

    for (i = 0; i < 1000; i++)
    {
        int r;
        do
        {
            r = rand() / (RAND_MAX / MAX + 1);
        } while (seen[r]);
        seen[r] = 1;
        randomNum[i] = r + 1;
    }

    for (i = 0; i < 1000; i++)
        cout << randomNum[i] << endl;

    return 0;
}

Re: Sorting an array of unique random numbers at insertion

If you're looking for efficiency, a std::set may be more suitable than an array.

Re: Sorting an array of unique random numbers at insertion

Quote:

Originally Posted by

**woodman_102**
...I want to sort these numbers, not after the array is full, but as new numbers are being inserted.

What is the difference?

Anyway, instead of assigning your new random number to randomNum[i] directly, you could loop from randomNum[i] down to randomNum[0] looking for the insertion point. If the new number is greater than the current element, assign it into the slot above; if it is less, move the current element up one slot and keep going.

Re: Sorting an array of unique random numbers at insertion

Quote:

Originally Posted by

**Lindley**
If you're looking for efficiency, a std::set may be more suitable than an array.

Not just for efficiency reasons. A std::set has the two properties the OP is asking for: elements are unique, and they're kept sorted at all times.

So you just keep inserting random ints until the set reaches the wanted size. Whenever the set is iterated, the ints will appear in sorted order (and they will be unique).

Re: Sorting an array of unique random numbers at insertion

Quote:

Originally Posted by

**VladimirF**
What is the difference?

Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all insertions are done is O(n + n*log n), much faster.

If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good.

If an array is required, "library sort" (basically insertion sort where some empty slots are left between valid elements) can be done in O(n*log n) but in practice it's still slower.

Re: Sorting an array of unique random numbers at insertion

Quote:

Originally Posted by

**woodman_102**
So as each new number is inserted into the array, it finds the position it is meant to be in. The code I have for creating unique random numbers is below, thank you in advance:

Note that, strictly, the randomNum[] array isn't necessary, since seen[] already holds all the information about which ints have been randomly selected. Also, seen[] is inherently sorted. Each newly selected int will be at the "position it is meant to be in" because the position **is** the int.

Re: Sorting an array of unique random numbers at insertion

Quote:

Originally Posted by

**Lindley**
Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all insertions are done is O(n + n*log n), much faster.

If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good.

If an array is required, "library sort" (basically insertion sort where some empty slots are left between valid elements) can be done in O(n*log n) but in practice it's still slower.

That's not correct.

O(n*(n+log n)) and O(n + n*log n) are not even proper complexity measures; they simplify to O(n*n) and O(n*log n) respectively.

In the first case, even if you use an O(log n) binary search to locate the insertion point, you still need to insert the element, which is an O(n) operation. Repeating n insertions gives a complexity of O(n * n).

In the second case you place the items in the array in arbitrary order, which is an O(n) operation. Afterwards you sort once, at O(n * log n). Taken together this gives O(n * log n) complexity.

Inserting all items in an ordered set has O(n * log n) complexity as you stated.

Re: Sorting an array of unique random numbers at insertion

Some would argue that when it comes to reads, a sorted vector is faster than a set because of locality (I believe Meyers mentions this too in one of his "More Effective" books). This is an adaptor that transforms the vector's interface into that of a set.

That said, it has methods specifically dedicated to **NOT** sorting after every insertion, given the higher price of doing so.