Run-time Performance Anomaly?

I have a simple for() loop in a program that I am writing. This for() loop is the slow part of the program, so when modifications are made to the length of the loop, the effects are easy to observe.

The function in question approximates as follows:

Code:

```
void function()
{int x;
 double num;
 cin>>num;
 cout<<"        "; // 8 spaces (8 spaces = 1 tab)
 for(x=0; x<num; x++)
 {if(x%10==0)
  {cout<<"\b\b\b\b\b\b\b\b"<<x/num<<"\t%";}
  // <repeated algorithms>
 }}
```

Using the embedded if() statement, I can monitor how often the for() loop repeats 10 iterations (at which time the percent of for() loop iterations is displayed).

It seems to me that, because the algorithms never change, the length of time between percentage displays should be the same regardless of the value entered for num (though the entire process will take longer for larger values of num).

However, for some reason, this is not the case: as it turns out, the value of num affects not only the total time required to complete the for() loop, but also how long each algorithm takes to finish (even though the algorithm is independent of the value of num).

For example, in my case, when num is equal to 10, the loop takes only 1 second to finish, with milliseconds between each percentage display. However, when num is equal to 200, it takes about 10 seconds to finish each iteration of the for() loop.

Forgive me if my question is ambiguous.

Can anyone explain why this occurs, and offer a possible solution?

Thank you

Andy

abarrette@nc.rr.com

Re: Run-time Performance Anomaly?

Quote:

Originally Posted by **hixidom**

It seems to me that, because the algorithms never change,

Please post the "algorithm" you're speaking of. Too many times, posters state things that they claim are true, and only when they actually *post* what they have does it turn out to be false.

No one can give you an answer until we see the code ourselves.

Regards,

Paul McKenzie

Re: Run-time Performance Anomaly?

Quote:

Originally Posted by **hixidom**

However, for some reason, this is not the case: As it turns out in my case, the value num does not only affect the total time required to complete the for() loop, but it also affects how long it takes for each algorithm to finish (even though the algorithm is independant from the value of num).

You would think that the algorithm is independent of the value of num, but it is not. If it were, it would not need to be inside the for loop, running num times. Posting the code and the timings you observed might help. Results from an optimized build, along with the compiler name/version, would also be helpful.

The only way to get any benefit is to take out code that gets called repeatedly but does not actually need to be. If you are calculating intermediate values inside the loop that are invariant to it, move them out (cache them) as well.
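For example (a generic sketch with made-up names, not the OP's code), hoisting a loop-invariant computation out of the loop:

```cpp
#include <cassert>
#include <cmath>

// 'scale' does not depend on the loop index, so recomputing it every
// iteration is wasted work.
double sum_slow(int n) {
    double total = 0;
    for (int i = 0; i < n; i++) {
        double scale = std::sqrt(2.0) / 2.0;  // invariant, recomputed n times
        total += scale * i;
    }
    return total;
}

// Hoisted version: the invariant is computed once, before the loop.
double sum_fast(int n) {
    const double scale = std::sqrt(2.0) / 2.0;  // computed once
    double total = 0;
    for (int i = 0; i < n; i++) {
        total += scale * i;
    }
    return total;
}
```

Both functions produce the same result; only the amount of repeated work differs.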

Re: Run-time Performance Anomaly?

Quote:

Originally Posted by **hixidom**

For example, in my case, when num is equal to 10, the loop takes only 1 second to finish, with milliseconds between each percentage display. However, when num is equal to 200, it takes about 10 seconds to finish each iteration of the for() loopl

It is not clear from your code WHAT exactly you are measuring. There are no calls to any time-measuring function...

Could it be that you are measuring NOT the time of one iteration, but the time to reach a certain percentage mark? It will surely take 20 times longer to complete, for example, 10% of a 200-iteration loop than 10% of a 10-iteration loop...
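The arithmetic behind that (a hypothetical helper, not taken from the OP's program): the number of iterations needed to reach a given percentage mark grows linearly with num.

```cpp
#include <cassert>

// Iterations needed to reach a given percentage mark of a
// num-iteration loop (illustrative helper, made-up name).
int iterations_to_mark(int num, int percent) {
    return num * percent / 100;
}
```

So reaching 10% of a 200-iteration loop takes 20 iterations, versus 1 iteration for a 10-iteration loop: 20 times the work, even though each individual iteration costs the same.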

Re: Run-time Performance Anomaly?

First of all, I found the problem!

Second of all, I'm sorry that I did not post an example program when I first asked for help in this thread. I realize now that doing so would have made finding the problem easy.

Here is an example program that demonstrates the "problem":

Code:

```
#include<iostream>
#include<cmath>   // for sqrt() and pow()
#define pi 6.283185307179586

using namespace std;

double num, res, obj[1000][3], pc[5000][5000][3], pv[1000], sloc[2][3]={{20,20,10},{-20,10,20}};
double screen[5000][5000];
int pint;
long long int hz=5000000000000000LL;

void dpc() //define the coordinates of every screen pixel at once
{int x,y;
 for(y=0; y<res; y++)
 {for(x=0; x<res; x++)
  {pc[y][x][0]=((100/res)*x)-50;
   pc[y][x][1]=((100/res)*y)-50;
   pc[y][x][2]=0;}}}

void dvariables() //prompts for num; assigns inconsequential values to obj[x] and pv[x]
{int x,y;
 cout<<"num=";
 cin>>num;
 res=num;  // res controls the loop length
 pint=8;   // number of object points filled in below
 for(x=1; x<=8; x++)
 {pv[x-1]=x;}
 for(x=1; x<=8; x++)
 {for(y=0; y<3; y++)
  {obj[x-1][y]=x+y;}}}

int testcon(double coord[3], int p)
{//This function tests the continuity between a point on the screen and a point on the object.
 //The guts of this function have been removed; for simplicity's sake, assume testcon is always true (continuity always exists).
 return 1;}

double dist(double za[3], double zb[3]) //find the distance between two points
{return sqrt(pow(za[0]-zb[0],2)+pow(za[1]-zb[1],2)+pow(za[2]-zb[2],2));}

void screencalc() //calculates the value of each pixel on the screen
{int x,y,za;
 for(y=0; y<res; y++) //per vertical position
 {for(x=0; x<res; x++) //per horizontal position
  {dpc();
   for(za=0; za<pint; za++) //per object point
   {if(((y*(int)res)+x)%10==0)
    {cout<<"\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b"<<(((y*res)+x)*100)/(res*res)<<"\t%";}
    if(!testcon(pc[y][x],za))continue; //only if the point is "visible"
    screen[y][x]=screen[y][x]+sin(pi*hz*(dist(obj[za],pc[y][x])+pv[za]));}
   screen[y][x]=screen[y][x]+sin(pi*hz*dist(sloc[1],pc[y][x]));}}}

int main()
{dvariables();
 screencalc();
 return 0;}
```

(In this example program, many of the variables that are specially defined in the original program (sloc, pv, obj, hz, etc.) are given inconsequential values via an added function: dvariables().)

The problem:

In screencalc(), it used to be that double pc[3] was defined for each iteration using the following function:

Code:

```
void dpc(int x, int y) //define the coordinates of a single pixel
{pc[0]=((100/res)*x)-50;
 pc[1]=((100/res)*y)-50;
 pc[2]=0;}
```

The problem is that, when I changed pc[3] to pc[5000][5000][3] and dpc(int x, int y) to the current dpc() (which defines all of the values of pc at once, as opposed to one at a time), I forgot to take dpc() out of the for() loops in screencalc(). In effect, the program was running so slowly because screencalc() was recalculating all of the values of pc[5000][5000][3] on every iteration when, in fact, those values never change.

So, I took dpc() out of screencalc() and set it up so that pc[5000][5000][3] is defined before main() even gets to screencalc().
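The shape of that fix, in miniature (stand-in names and a tiny table, not the actual program): build the lookup table once, then only read from it inside the hot loops.

```cpp
#include <cassert>

const int RES = 4;       // stand-in for the real resolution
double table[RES][RES];  // stand-in for pc[5000][5000][3]
int fills = 0;           // counts how many times the table is built

void build_table() {     // analogous to dpc(): fill every entry once
    fills++;
    for (int y = 0; y < RES; y++)
        for (int x = 0; x < RES; x++)
            table[y][x] = x + y;
}

double use_table() {     // analogous to screencalc(): read-only
    double sum = 0;
    for (int y = 0; y < RES; y++)
        for (int x = 0; x < RES; x++)
            sum += table[y][x];  // no rebuild inside the loops
    return sum;
}
```

Calling build_table() once up front means use_table() can run any number of times without ever redoing the fill work.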

The new code works great!

Thanks to everyone for your help!

In the future, I'll remember to post an example program (that demonstrates the problem) with my thread-opening post.

Andy

Re: Run-time Performance Anomaly?

This isn't Scheme; I don't know how you can read any of that code!!