Theory behind Big 'O' notation

* When we write any program, two things are very important:
            a. Time
            b. Space

* The execution time of a whole code block, or a piece of it, varies based on the input. This execution time can be represented using "Big-O notation".

* The Big-O notation measures the worst-case complexity of an algorithm. 

* In Big-O notation, "n" represents the number of inputs. The question asked with Big-O is the following: “What will happen as "n" approaches infinity?”

We represent the time complexity of our logic as a function of the input (that is what Big-O is all about). Why not use other parameters?

Below are the comparisons:

Execution times? Not a good measure as execution times are specific to a particular computer.

The number of statements executed? Not a good measure, since the number of statements varies with the programming language as well as the style of the individual programmer.

Ideal solution? Let us assume that we express the running time of a given algorithm as a function of the input size n (i.e., f(n)) and compare these different functions corresponding to running times. This kind of comparison is independent of machine time, programming style, etc.

Note: While representing Big-O notation, we will ignore constants (you can observe this in the examples below). We might ask ourselves: why ignore them? The examples below will help us understand.
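As a quick illustration (a sketch added here, not from the examples; `growth` is a made-up helper name), a constant factor like the 6 in 6n never changes as n grows, while n*n pulls away without bound. This is why O(6n) collapses to O(n):

```javascript
// Made-up helper: tabulate how 6n (linear with a constant factor)
// and n*n (quadratic) grow as n increases.
function growth(n) {
  return { linear: 6 * n, quadratic: n * n };
}

[10, 100, 1000].forEach(function (n) {
  var g = growth(n);
  // linear is always exactly 6x of n, a fixed ratio;
  // quadratic keeps growing relative to n.
  console.log(n, g.linear, g.quadratic);
});
```

No matter how large n gets, 6n stays within a fixed multiple of n, so only the growth class matters.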

Examples and identifying the time complexity: 

Example 1: Retrieving/accessing an element from an Array or List by its index.

function callMe(n)
{
  var arr = [2, 3, 4];
  console.log(arr[2]); // output: 4
}

Time complexity: O(1)
Note: This is always constant, which means the execution time does not depend on the input.

Example 2:
function callMe(n)
{
  var count = 0;
  for (var i = 0; i < n; i++)
  {
    count += 1;
  }
  for (var i = 0; i < 5 * n; i++)
  {
    count += 1;
  }
  return count;
}
In this example, the first loop contributes f(n) = n and the second loop contributes f(n) = 5n, for a total of 5n + n = 6n operations. However, constants are ignored when applying Big-O, so the final result is O(n).

Time complexity: O(n)

Example 3:
function callMe(n)
{
  var count = 0;
  for (var i = 0; i < n; i++)
  {
    count += 1;
    for (var j = 0; j < 5 * n; j++)  // inner loop needs its own variable (j, not i)
    {
      count += 1;
    }
  }
  return count;
}
In this example, the inner loop runs 5n times for each of the n iterations of the outer loop, so f(n) = 5n * n.
This results in a total of 5n^2 operations. However, constants are ignored when applying Big-O, so the final result is O(n^2).

Time complexity: O(n^2) ; n squared

Example 4:
function callMe(n)
{
  for (var i = 0; i < n; i++)
  {
    for (var j = 0; j < n; j++)
    {
      for (var k = 0; k < n; k++)
      {
        for (var l = 0; l < 10; l++)
        {
          console.log(i + j + k + l);
        }
      }
    }
  }
}

The innermost loop runs a constant number of times (10), so it is ignored when forming the Big-O notation.

Time complexity: O(n^3) ; n cubed

Example 5:
function callMe(n)
{
  for (var i = 0; i < 1000; i++)
  {
    console.log("hi");
  }
}

Time complexity: O(1). Constant complexity. The loop runs 1000 times regardless of the input, so the execution time does not depend on n.

Example 6:
* The efficiency of logarithmic time complexities becomes apparent with large inputs, such as a million items.
* An example algorithm with logarithmic time complexity is iterating over the powers of 2 between 1 and n.
function callMe(n)
{
  var count = 0;
  for (var i = 1; i <= n; i = i * 2)
  {
    console.log(n + ' count ' + (count++) + '\n');
  }
}

callMe(1000000);

Note: for a value of 1 million, the loop body executes 20 times (i takes the values 2^0 through 2^19, since 2^20 exceeds 1,000,000).

Time complexity: O(log2 n)
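Another classic algorithm with O(log2 n) time complexity is binary search (a sketch added here for illustration; it is not one of the original examples). Each step halves the remaining search range, so a sorted array of n elements needs at most about log2(n) comparisons:

```javascript
// Binary search on a sorted array: each iteration halves the
// [low, high] range, so the loop runs at most ~log2(n) times.
function binarySearch(arr, target) {
  var low = 0;
  var high = arr.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;   // found: return the index
    if (arr[mid] < target) low = mid + 1;  // discard the lower half
    else high = mid - 1;                   // discard the upper half
  }
  return -1; // target is not in the array
}

console.log(binarySearch([2, 3, 4, 7, 9, 11], 7)); // output: 3
```

Compare this with Example 1's O(1) index access: binary search has to narrow down the index first, but it still avoids scanning all n elements the way an O(n) loop would.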

Example 7:
function callMe(n) 
{
  while (true)
  {
    console.log(n);
  }
}

Time complexity: O(∞), an infinite loop. This function will never end.

Reference :
Book 1: Data Structures and Algorithms Made Easy: Data Structures and Algorithmic Puzzles, Narasimha Karumanchi
Book 2: JavaScript Data Structures and Algorithms, Sammie Bae
