
March 3rd, 2013, 10:57 PM
#1
Big O notation
I have been studying Big O notation, watching videos, and reading books about it. I understand its purpose, but I need to make sure I've got the whole concept of Big O, because I might have gotten the wrong idea. I am hoping someone will be able to clarify that with me.
Let's start. I know Big O notation has different complexity classes:
First, we have constant time, O(1), which applies when the code doesn't loop over the input and finishes immediately, correct?
Second, we have O(N), which corresponds to one for loop over the input, so the running time grows with the input size, correct?
Third, we have O(N^2): the deeper we nest the for loops, the higher it goes.
Finally, we have logarithmic time, O(log N), which is used for binary trees or binary search, correct? An example would be awesome if possible.
Thanks in advance, I just want to have the concept solidified

March 4th, 2013, 03:55 AM
#2
Re: Big O notation
Originally Posted by majeed2003
I just want to have the concept solidified
Big O is a general concept and not related to loops per se. Big O describes how an algorithm responds (in space or time or both) to variation in the size of the input (often represented by a parameter N).
The loop examples you give may be correct, but aren't necessarily. It's easy to find counterexamples. Say, for example, that N is the varying input; then this loop won't represent an O(N) algorithm but an O(1) one:
for (int i=0; i<43; i++)
That's because it's unrelated to N and stays constant regardless of how N is varied.
This loop on the other hand would represent an O(log N) algorithm.
for (int i=1; i<N; i=2*i)
The number of loop iterations will be proportional to the logarithm of N because the loop variable isn't just incremented but doubled in each iteration step.
So even though loops are a convenient way to illustrate different Big O complexities they don't constitute the concept itself.
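To make the contrast concrete, here's a small sketch in C (the helper names are my own invention) where each function returns how many times its loop body runs for a given N, so the constant, linear, and logarithmic growth can be observed directly:

```c
/* Three loops whose bodies run a constant, linear, and logarithmic
   number of times respectively. Each helper returns how many
   iterations its loop performs for a given n. */

long count_constant(long n)
{
    (void)n; /* the loop bound is unrelated to n: O(1) */
    long steps = 0;
    for (int i = 0; i < 43; i++)
        steps++;
    return steps;
}

long count_linear(long n)
{
    long steps = 0;
    for (long i = 0; i < n; i++) /* one pass over the input: O(N) */
        steps++;
    return steps;
}

long count_log(long n)
{
    long steps = 0;
    for (long i = 1; i < n; i = 2 * i) /* i doubles each step: O(log N) */
        steps++;
    return steps;
}
```

Doubling n leaves count_constant at 43, doubles count_linear, and adds just one iteration to count_log; for instance count_log(1024) is 10 while count_log(2048) is 11.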
Last edited by nuzzle; March 4th, 2013 at 06:21 AM.

March 4th, 2013, 09:59 AM
#3
Re: Big O notation
Thank you very much Nuzzle, really grateful.
I just want to make sure I got the idea, and to see whether I can determine the time complexity of code in Big O notation. It seems if the code is unrelated to N then it is O(N), provided, of course, the code is not a binary tree, which would make it O(log N). But how can I tell whether the syntax or the code is unrelated to N? And how can I tell when the code has a higher time complexity?
Take this code for example:
Code:
#include <stdio.h>
#include <stdlib.h>

/* Linked list node */
struct node
{
    int data;
    struct node* next;
};

void push(struct node** head_ref, int new_data)
{
    /* allocate node */
    struct node* new_node =
        (struct node*) malloc(sizeof(struct node));
    /* put in the data */
    new_node->data = new_data;
    /* link the old list off the new node */
    new_node->next = (*head_ref);
    /* move the head to point to the new node */
    (*head_ref) = new_node;
}

int detectloop(struct node *list)
{
    struct node *slow_p = list, *fast_p = list;
    while (slow_p && fast_p && fast_p->next)
    {
        slow_p = slow_p->next;
        fast_p = fast_p->next->next;
        if (slow_p == fast_p)
        {
            printf("Found Loop");
            return 1;
        }
    }
    return 0;
}

/* Driver program to test the above function */
int main()
{
    /* Start with the empty list */
    struct node* head = NULL;
    push(&head, 20);
    push(&head, 4);
    push(&head, 15);
    push(&head, 10);
    /* Create a loop for testing */
    head->next->next->next->next = head;
    detectloop(head);
    getchar();
}
What makes this code have time complexity O(N)?
Thanks a lot in advance, thank you so much for your time.
Last edited by BioPhysEngr; March 4th, 2013 at 10:22 PM.
Reason: add code tags

March 4th, 2013, 10:36 PM
#4
Re: Big O notation
Oh, also, I added [code] and [/code] tags around your code to preserve formatting. It makes it easier for everyone to read (but nobody knows about it their first time posting)!
Best Regards,
BioPhysEngr
http://blog.biophysengr.net

All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.

March 7th, 2013, 04:56 AM
#5
Re: Big O notation
Originally Posted by majeed2003
It seems if the code is unrelated to N then it is O(N)
No, if the code is unrelated to N it has O(1) complexity. When N varies, the runtime stays the same. That's the typical constant response to input variation.
What makes this code have time complexity O(N)?
That's Floyd's cycle-finding algorithm, right, or at least a variation? (It's also called tortoise and hare, or slow/fast pointers.)
When dealing with a standard algorithm you can always find the complexities in the literature. It's very seldom that you come across a novel algorithm that requires you to deduce it yourself. And the algorithms and data structures that are part of most programming languages' standard libraries are usually very well documented. Familiarity with dynamic arrays, linked lists, tree- and hash-based sets and maps, binary searching, sorting, etcetera, will take you far.
Many algorithms are quite hard to analyze, especially if you need to prove the results. That doesn't mean analysis is unimportant, of course, but the only time you'd be required to do it is when you take an algorithms and data structures course or want to write a scientific paper. Usually you can rely on published results, intuition, and timing of the actual code.
I would go about the algorithm you posted like this: we're dealing with a linked list of n nodes with a possible cycle in it. The slow pointer advances one node in each loop iteration, and if there is no cycle it will reach the end of the list in n iterations, after having visited n nodes. The fast pointer advances two nodes in each loop iteration, so it reaches the end after just n/2 loop iterations, when the slow pointer is at node n/2. So in the case of no cycle the fast pointer wins the race, but the time it takes is still proportional to the list length n, so it's a clear case of O(N).
Now say there's a cycle. The fast pointer will reach the cycle first and start revolving around it. It will be at some node in the cycle when the slow pointer enters it. But sooner or later the fast pointer will catch up with the slow pointer from behind. The question is: how long does it take? Well, for sure it's before the slow pointer has completed one revolution of the cycle. That's because the fast pointer is twice as fast: while the slow pointer completes one cycle, the fast pointer finishes two. Even if the fast pointer starts at maximum disadvantage (one node ahead when the slow pointer enters the cycle), it catches up just before the slow pointer has completed a full cycle. But regardless of where the fast pointer starts, the number of nodes the slow pointer manages to visit before it's caught will be proportional to the total number of nodes in the cycle. (A simple argument indicates it's actually L/2 on average, for random list and cycle lengths, where L is the number of nodes in the cycle. It's reasonable to assume that on average the fast pointer is L/2 nodes away when the slow pointer enters a cycle of L nodes. Then the fast pointer will catch up after having completed one full cycle, during which the slow pointer has visited half of that, namely L/2 nodes.)
This means the slow pointer will visit all nodes leading up to the cycle, making this part O(N). The actual catch-up within the cycle is also O(N), because the number of nodes the slow pointer visits is proportional to the number of nodes in the cycle, never exceeding one revolution around it. Since the sum of two O(N) parts is still O(N), this means it's an O(N) algorithm also in the case where the list has a cycle.
The final question is the termination criterion for the catch up. It's reached when the fast and slow pointers become equal but can we be sure this ever happens? Yes we can. For the fast pointer to pass the slow pointer it must first land directly on top of it or immediately before it in which case it will land on top of it in the next iteration. This shows the fast pointer cannot pass the slow pointer without first becoming equal to it. And this means pointer equality is a working termination criterion indeed.
This reasoning suggests an O(N) algorithm where N represents the length of the list.
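This can also be checked empirically. Here's a sketch (the function names are my own) that instruments the same slow/fast-pointer loop to count iterations; for a list of n nodes that forms one big cycle, the slow pointer is caught after at most n steps, consistent with the O(N) argument above:

```c
#include <stdlib.h>

struct node { int data; struct node *next; };

/* Same slow/fast-pointer loop as detectloop() above, instrumented to
   count loop iterations so the O(N) claim can be checked empirically.
   Sets *found to 1 if a cycle was detected. */
long detect_loop_steps(struct node *list, int *found)
{
    struct node *slow = list, *fast = list;
    long steps = 0;
    *found = 0;
    while (slow && fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        steps++;
        if (slow == fast) {
            *found = 1;
            break;
        }
    }
    return steps;
}

/* Build a list of n nodes whose last node points back to the head,
   i.e. the whole list is one big cycle. */
struct node *make_cyclic_list(int n)
{
    struct node *head = NULL, *tail = NULL;
    for (int i = 0; i < n; i++) {
        struct node *p = malloc(sizeof *p);
        p->data = i;
        p->next = head;
        if (tail == NULL)
            tail = p;      /* remember the first node created */
        head = p;
    }
    tail->next = head;     /* close the cycle */
    return head;
}
```

For a fully cyclic list the slow pointer is caught after exactly n steps (one revolution); shorter cycles only lower the count, so the iteration count never grows faster than the list length.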
Last edited by nuzzle; March 8th, 2013 at 04:08 AM.

March 8th, 2013, 11:02 AM
#6
Re: Big O notation
Thanks a lot, thanks for the effort and the time. It made things much easier for me in understanding the concept of Big O notation.
O(N): it's like scanning through an array looking for an element, or like BioPhysEngr's example of reading a book, where n is the number of pages: the time it takes grows linearly with n.
O(1): like buying a coffee from Starbucks every day, or accessing a specific element of an array by its index; the cost stays the same no matter how big the array is.
O(log N): like finding the next biggest element in a binary search tree.
Thanks a lot. it really helped


March 4th, 2013, 10:35 PM
#8
Re: Big O notation
So let me give you two real-world examples that will perhaps help you understand.
Suppose you have a list of peoples names. There are N people on the list.
Question: How much work does it take to find out if a person named Bob is on the list?
Solution algorithm: Read through the list and see if Bob ever appears. You should expect that if you have a list of 1000 people, it will take about 10x longer than if you had list of 100 people, and 100x longer than if you had a list of 10 people. The time it takes to read through the list is related linearly to the size of the list. So the time complexity is O(N).
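That linear scan might look like this in C (the function name and signature are my own invention):

```c
#include <string.h>

/* Walk the whole list of names; in the worst case every one of the
   n entries is compared once, so the work grows linearly: O(N). */
int contains(const char *names[], int n, const char *target)
{
    for (int i = 0; i < n; i++)
        if (strcmp(names[i], target) == 0)
            return 1; /* Bob is on the list */
    return 0;         /* read everything, no Bob */
}
```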
Question 2: How much work does it take to find out if two people on the list have the same name?
Solution algorithm #1: Compare every pair of people on the list. There are n*(n-1)/2 ~= (n^2)/2 pairs. So if I double the number of people on the list, it quadruples (= 2^2) the number of pairs! If I triple it, the number of pairs increases 9x (= 3^2). So the time complexity is O(n^2).
(Better) solution algorithm #2: Sort the list first. Then go through the sorted list and check whether any two names next to each other are the same. It turns out you can sort the list in O(n*log[n]) time, and we learned from question 1 that we can read through the list in O(n) time. So it takes about n*log[n] + n operations, but the n*log[n] term is greater than n, so we just write that it takes O(n*log[n]) time.
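Solution #2 can be sketched in C using the standard library's qsort for the O(n*log[n]) part (the function names here are mine; note it sorts the caller's array in place):

```c
#include <stdlib.h>
#include <string.h>

/* qsort comparator for an array of C strings. */
static int cmp_names(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

/* Sort in O(n log n), then one O(n) pass comparing each name with its
   neighbour; the sort dominates, so the whole thing is O(n log n). */
int has_duplicate(const char *names[], int n)
{
    qsort((void *)names, (size_t)n, sizeof names[0], cmp_names);
    for (int i = 1; i < n; i++)
        if (strcmp(names[i - 1], names[i]) == 0)
            return 1; /* two equal names ended up adjacent */
    return 0;
}
```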
So time complexity is not really a property of the code. It's a property of the algorithm. Read the code to understand the algorithm being described, then reason, in terms of the input, about how much work it takes for that specific algorithm to solve the problem at hand.
Best Regards,
BioPhysEngr

March 5th, 2013, 09:05 PM
#9
Re: Big O notation
Perhaps it's worth adding that, given the list of names is sorted, sample task #1, looking for Bob, can be achieved in O(log N) using binary search.
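A sketch of that binary search in C, assuming the names array is already sorted (the function name is mine):

```c
#include <string.h>

/* Each comparison halves the remaining search range, so at most about
   log2(n) comparisons are needed: O(log N). */
int contains_sorted(const char *sorted[], int n, const char *target)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; /* avoids overflow of lo + hi */
        int c = strcmp(sorted[mid], target);
        if (c == 0)
            return 1;     /* found */
        else if (c < 0)
            lo = mid + 1; /* target sorts after sorted[mid] */
        else
            hi = mid - 1; /* target sorts before sorted[mid] */
    }
    return 0;
}
```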