https://leanpub.com/discretemathematicalalgorithmanddatastructures

What are time complexity and Big O notation? In this chapter, we will take a brief look at these concepts.

We have already seen many examples of different kinds of data structures and algorithms. We have also found that, to implement such algorithms, we do not have to go through low-level plumbing. Any good, stable programming language provides libraries that avoid such low-level plumbing, and different types of algorithms, such as sorting, shuffling, or searching, can be carried out through them.

We have also found out another important fact about the relationship between algorithms and discrete mathematics: data structures are discrete structures and hence, algorithms are fundamentally about discrete structures.

Let us consider a simple Java program with two nested loops. The outer and inner loops are connected with each other, and together they produce an output.

```
package fun.sanjibsinha;

public class Main {

    static int i, j, totalOne, totalTwo;

    public static void main(String[] args) {
        for (i = 0; i <= 5; i++) {
            totalOne += i;
            System.out.print("i = " + i);
            System.out.println("--------");
            for (j = 0; j <= 5; j++) {
                totalTwo += j;
                System.out.println("j = " + j);
            }
        }
        System.out.println("The total of outer loop: " + totalOne);
        System.out.println("The total of inner loop: " + totalTwo);
    }
}
```

And from this very simple program, we get an output like the following:

```
//output of 8.1
i = 0--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 1--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 2--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 3--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 4--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 5--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
The total of outer loop: 15
The total of inner loop: 90
```

You have probably noticed that this is a simple algorithm for finding the totals of the inner loop and the outer loop. However, to do that, we need to jump sequentially; each iteration has taken place within the space of a discrete data structure.

Any problem has one or more possible solutions. Suppose we were asked to find the total of the outer loop only; we could change the above code to the following:

```
package fun.sanjibsinha;

public class Main {

    static int i, j, totalOne, totalTwo;

    public static void main(String[] args) {
        for (i = 0; i <= 5; i++) {
            totalOne += i;
            System.out.print("i = " + i);
            System.out.println("--------");
            /*
            for (j = 0; j <= 5; j++) {
                totalTwo += j;
                System.out.println("j = " + j + ", ");
            }
            */
        }
        System.out.println("The total of loop: " + totalOne);
        // System.out.println("The total of inner loop: " + totalTwo);
    }
}
```

Now, in the changed circumstances, we get this output:

```
//output of 8.2
i = 0--------
i = 1--------
i = 2--------
i = 3--------
i = 4--------
i = 5--------
The total of loop: 15
```

We can also solve this problem in another way: by calling a function recursively. Our problem was how to get the total of a series of positive integers that starts at 0 and ends at 5.

It looks like this: total = 0 + 1 + 2 + 3 + 4 + 5; here the end point was chosen as 5, and the value of total is 15.
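As an aside not covered in this chapter, this particular summation also has a closed-form solution: the sum of 0 through n is n * (n + 1) / 2, which needs no loop at all. A minimal sketch (the class name is illustrative):

```java
public class GaussSum {

    // Closed-form sum of 0..n: n * (n + 1) / 2, computed in constant time.
    static int getTotal(int n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println("Total : " + getTotal(5)); // prints Total : 15
    }
}
```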

We could also have taken user input and computed the total up to any large positive integer. Manually, it is easy to get the total of a small series. But consider a case where the user gives us an input like 10 to the power 6. Now it is not practical to add all the numbers from 0 to that number by hand; it becomes cumbersome.

Therefore, our first algorithm was based on looping. It easily adds all the numbers, whatever the end number may be.

As we have said, any problem might have one or more solutions. Here we can solve the same problem in a recursive way.

```
package fun.sanjibsinha;

public class Main {

    static int total;

    static int getTotal(int i) {
        if (i != 0) {
            total = i + getTotal(i - 1);
            return total;
        } else {
            return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println("Total : " + getTotal(5));
    }
}
```

And we get the same output:

```
Total : 15
```

In the above case, the same iterative jumping through a discrete structure occurs, but in a different way. It starts from the discrete positive integer 5, and the addition goes on, by calling the function recursively, until we reach the base case.

Now we have found that this problem has two solutions. The question is: which is preferable? Which algorithm is better than the other?

Here comes the question of time complexity. Time complexity has nothing to do with the actual execution time on a particular machine; it has everything to do with the algorithm. Time complexity is all about finding the better algorithm: a better algorithm always takes fewer steps to reach the desired discrete element (here, the output of a total).

Here we have to iterate over a series of positive integers to reach our goal. If we could reach it in a single iteration, that would no doubt be the best algorithm; but in general that is utopian.

On the contrary, in the worst case the iteration has to traverse every element, which for a large input takes a very long time. In the same way, imagine a situation where the user gives an input and we must find whether that integer is prime or not. There are several algorithms to test whether a number is prime, but one algorithm will be better than another and will take less time to give you the output. How much this matters depends entirely on the size of the input: for a really big integer, one algorithm might take several minutes to complete the operation, while another finishes it in a few seconds.

Before trying to understand time complexity, we should remember that the actual time required to execute code is machine dependent; network load is another key factor. But time complexity has nothing to do with your machine configuration. It is all about the better algorithm. That is why, whenever the terms data structures and algorithms appear, time complexity comes with them.

In this chapter, we will try to understand how time complexity works and what the term Big O notation stands for. But before that, we need to understand what 'order of n' means.

## Order of n, or O(n)

Let us implement an algorithm that will check whether a number 'n' is prime or not. We know that a prime number is divisible by only two numbers, 1 and the number itself, with no remainder. Consider the prime number 11: it has only two factors, 1 and 11. If we divide 11 by 1 or by 11, we have no remainder. That is the most basic definition.

Therefore, using a normal looping construct, we can check whether a number is prime or not. We can write our algorithm in natural language this way:

```
for each integer i from 2 to (n - 1)
    if n % i == 0, then n is not prime; stop
if no such i divides n, then n is prime
```

We can test this algorithm with a small positive integer like 11. In that case, if we iterate from 2 to (11 - 1), we will see that between 2 and 10 there is no integer that divides 11 with no remainder. Therefore, 11 is prime. When the value of n is small, our algorithm does not take much time; in fact, the time may appear negligible. Suppose each iteration takes one millisecond (ms). This fraction of a second stands for 10 to the power minus 3; that is, if you divide 1 second by 1000, you get one millisecond.

By the way, as students of computer science we should know that one microsecond is 10 to the power minus 6, one nanosecond is 10 to the power minus 9, and one picosecond is 10 to the power minus 12.

Now, let us assume that each iteration takes one microsecond. When the number of iterations is small, it does not take much time. However, as the number of iterations increases, this time also increases. If, instead of 11, we want to check a value like 1000003, the loop will run about one million times. Now our algorithm appears to crumble, because it will take a huge amount of time to finish the iteration.

We can translate this natural-language algorithm into Java code.

```
package fun.sanjibsinha.timecomplexity;

import java.util.Scanner;

public class TimeOne {

    static long userInput;

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a positive integer to test prime or not: ");
        userInput = sc.nextLong();
        boolean isPrime = (userInput > 1);
        for (long i = 2; i < userInput; i++) {
            if (userInput % i == 0) {
                isPrime = false;
                break;
            }
        }
        System.out.println(userInput + (isPrime ? " is prime." : " is not prime."));
    }
}
```

We are not going to run this code; it is just an example for time complexity. In the above code you can see that, to reach our goal, we need to iterate 'n' times in the worst case. Here 'n' is the user's input. This is called 'order of n', or O(n). As the value of 'n' increases, our algorithm starts crumbling. For a value like 10 to the power 6 plus 3, it takes a huge amount of time and simply goes out of our hands.

Therefore, we have to search for a better algorithm to solve the same problem.

Let us take the user's input and, instead of iterating over the whole range of that number, calculate the square root of the input. After that, we can iterate only up to that value.
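Why is the square root enough? Because factors come in pairs: if i divides n, then so does n / i, and one member of each pair is at most the square root of n. A small sketch (my own addition; the class name is illustrative) makes this visible:

```java
public class FactorPairs {

    public static void main(String[] args) {
        int n = 36;
        // Every factor pair (i, n / i) has one member at most sqrt(n),
        // so scanning i up to sqrt(n) finds every factor of n.
        for (int i = 1; i * i <= n; i++) {
            if (n % i == 0) {
                System.out.println(i + " x " + (n / i) + " = " + n);
            }
        }
    }
}
```

For n = 36 this prints the pairs 1 x 36, 2 x 18, 3 x 12, 4 x 9, and 6 x 6; no factor is missed even though the loop never goes past 6.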

It serves our purpose and, at the same time, greatly shortens the iteration. Let us write the code in Java.

```
package fun.sanjibsinha.timecomplexity;

import java.util.Scanner;

public class TimeTwo {

    static long userInput;

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a positive integer to test prime or not: ");
        userInput = sc.nextLong();
        // 10 to the power 6 plus 3 = 1000003
        boolean isPrime = (userInput > 1);
        for (long i = 2; i <= Math.sqrt(userInput); i++) {
            if (userInput % i == 0) {
                isPrime = false;
                break;
            }
        }
        System.out.println(userInput + (isPrime ? " is prime." : " is not prime."));
    }
}
```

If the user's input is 1000003, our algorithm no longer iterates about a million times; instead, it iterates only around 1000 times.
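To see the difference concretely, the following sketch (my own addition; the class name is illustrative) simply counts the iterations of the two strategies for n = 1000003, without testing primality:

```java
public class IterationCount {

    public static void main(String[] args) {
        long n = 1_000_003L;
        long fullLoop = 0, sqrtLoop = 0;
        // Strategy of code 8.4: scan the whole range 2..n-1.
        for (long i = 2; i < n; i++) fullLoop++;
        // Strategy of code 8.5: scan only 2..sqrt(n).
        for (long i = 2; i * i <= n; i++) sqrtLoop++;
        System.out.println("Full range: " + fullLoop + " iterations");      // 1000001
        System.out.println("Square-root range: " + sqrtLoop + " iterations"); // 999
    }
}
```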

Let us write another short Java program to find the square root of that number.

```
package fun.sanjibsinha.timecomplexity;

public class TestClass {

    public static void main(String[] args) {
        System.out.println("Square root of 10 to the power 6 plus 3 is: " + Math.sqrt(1000003));
    }
}
```

```
//output of 8.6
Square root of 10 to the power 6 plus 3 is: 1000.001499998875
```

In the above code we can clearly see that the square root of 1000003 is just over 1000.

Let us go back to code 8.5, where we successfully shortened the iteration by calculating the square root of the user's input. Reducing the length of the iteration means the algorithm takes much less time. Now the running time of our code is not 'order of n', but 'order of square root of n'; that is, O(square root of n).

It is quite evident that code 8.5 represents a much better algorithm than code 8.4 for solving the same problem. Why so? Because it solves the large test cases in acceptable time. A better algorithm is necessary for better performance.

Here the running time of the program depends on the user's input; as the input grows, the running time grows with it.

To conclude this section, we can say that time complexity cares only about the 'input'. It also cares about the algorithm that tackles the 'input' in a better way.

Moreover, time complexity does not care about the machine's hardware configuration. Whether the machine has a single processor or multiple processors does not matter. The network load is unimportant. Time complexity really does not care about 32-bit or 64-bit.

Input is the key word for time complexity. Nothing else.

In the above examples, our input was a single integer. It could just as well have been an array or some other data structure, could it not? Now, again, we come back to data structures and algorithms, and, of course, now we also understand how time complexity is related to them.
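To make the array case concrete, here is a minimal sketch (my own addition; the class and method names are illustrative) of a linear search over an array. In the worst case it must inspect every element, so its time complexity is O(n), where n is the length of the input array:

```java
public class LinearSearch {

    // Returns the index of target in arr, or -1 if it is absent.
    // Worst case: every element is inspected, which is O(n).
    static int find(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(find(data, 23)); // 4
        System.out.println(find(data, 7));  // -1
    }
}
```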

## Big O Notation

We have already seen how we can characterize any algorithm through the order of 'n', the size of the 'input'. Consider code 8.4, where we have to iterate about one million times if 'n' is 1000003. Here O(n) describes the worst-case scenario: the algorithm can never do worse than that.

Big O notation is the most convenient way to express the worst-case scenario of an algorithm. Compare code 8.5 with the previous code 8.4. In code 8.5, the Big O notation is O(square root of n); the algorithm can never do worse than that. Comparing these two worst-case scenarios, we can safely say that the algorithm of code 8.5 is better than that of code 8.4.

If you are still confused by the definition, do not worry; everything takes time. The concept of time complexity is tied to the algorithm, and the more you practice, the better you will get your head around this topic.

Let us recall the second code snippet of this chapter. Code 8.2 looks like this:

```
package fun.sanjibsinha;

public class Main {

    public static void main(String[] args) {
        int totalOne = 0;
        for (int i = 0; i <= 5; i++) {
            totalOne += i;
        }
        System.out.println("The total of loop: " + totalOne);
    }
}
```

For brevity we have trimmed the code, omitting the commented parts. When we declare the variable 'totalOne' and set it to 0, that statement runs once: the constant is 1 and the time is 1.

In the next step, the first line of the 'for' loop tells us two things: it involves more than one constant operation (initialization, comparison, increment), and it executes more than once.

In the next line, the algorithm keeps adding the value of the iteration 'n' times. Therefore, we have a couple of constant operations, but the time equals 'n'.

The last line gives us the output. The constant is 1, and the time is also 1.

When we add them all up, we get O(n), because when summing up we drop all the constants.

However, for code 8.1, where we used a nested loop to get the total of the inner loop, this order of 'n' changes to (n * n), that is, n squared. Therefore, the Big O notation, or the worst-case scenario of that algorithm, is O(n^2). That code also totals the outer loop, which is O(n) in the same breath. Since these are consecutive statements, we count the statement with the maximum complexity, which is O(n^2).

We now have an initial idea of how Big O notation works and what principles govern it. Many common algorithms we use very often have the Big O notation O(n^2): algorithms such as bubble sort, selection sort, and insertion sort all take this worst case.
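As a concrete illustration (bubble sort is named above; the sketch itself is my own addition), here is a minimal bubble sort whose nested loops mirror the O(n^2) pattern we saw in code 8.1:

```java
import java.util.Arrays;

public class BubbleSortDemo {

    // Classic bubble sort: for each pass, adjacent out-of-order pairs
    // are swapped. The nested loops give O(n^2) comparisons in the worst case.
    static void bubbleSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            for (int j = 0; j < arr.length - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```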

There are many other Big O notations, related to different types of algorithms. The idea of time complexity rests on finding the better algorithm to solve a problem. We should analyse the Big O notations, or worst-case scenarios, for our test cases and try to find the best solution.