Discrete Mathematical Algorithm, and Data Structure

Readers will learn discrete mathematical abstractions as well as their implementation in algorithms and data structures, shown in various programming languages such as C, C++, PHP, Java, C#, Python, and Dart.

This book brings two major components, Mathematics and Computer Science, under one roof.

Without the core concepts and tools derived from discrete mathematics, one cannot understand the abstractions and general ideas behind algorithms and data structures in Computer Science.

 


 

The objects of data structures are basically objects of discrete mathematics.

This book tries to bridge the gap between these two major components: Mathematics and Computer Science.

Computer Science and Discrete Mathematics

In any computer science course, studying discrete mathematics is essential, although the two subjects are usually taught separately, except in a few cases.

Yet a comprehensive book combining these two major components is hard to find; not only that, it is almost impossible to understand one without the help of the other.

Hopefully, this book will fill that gap. Readers will learn discrete mathematical abstractions as well as their implementation in algorithms and data structures, shown in various programming languages such as C++, Java, C#, Python, and Dart.

This book brings these two major components under one roof.

The book covers the chapters you see below, although the list is tentative and might change a little while the book is being written.

Table of Contents

  • 1. Introduction to the Discourse
    Is Discrete Mathematics enough to study Computer Science?
    A short Introduction to Discrete Mathematics
    What is Discrete Mathematics
    What is the relationship between Discrete Mathematics and Computer Science
    Introducing necessary conceptions
  • 2. Introduction to Programming Language and Boolean Algebra
    Logic, Mathematics, and Programming Language
    Introduction to Boolean Algebra
  • 3. De Morgan’s Laws on Boolean Algebra, Logical Expression, and Algorithm
    Logical Expression
    Short Circuit Evaluation
    Syntax, Semantics and Conditional Execution
    Why we need Control Constructs
    Discrete Mathematical Notations and Algorithm
  • 4. Data Structures in different Programming languages
    Mean, Median and Mode
    Array, the First Step to Data Structure
    Let us understand some Array features
    Set Theory, Probability and Array
    Skewed Mean, Maximized Median
    Complex Array Algorithm
  • 5. Data Structures: Abstractions and Implementation
    How objects work with each other
    More Algorithm and Time Complexity
    Introducing Data Structures
    How Calculus and Linear Algebra are Related to this Discourse
  • 6. Data Structures in Detail
    Frequently Asked Questions about Data Structures
    Abstract Data Type (ADT)
    Linear Data Structures
    Modeling of a Structure
    ArrayList to overcome limitations of Array
    ArrayList or LinkedList, which is faster?
    Collection Framework in programming languages
    Stack and Queue in Java
    Deque, a high-performance Abstract Data Type
  • 7. Algorithm, Data Structure, Collection Framework and Standard Template Library (STL)
    Introducing Algorithm Library
    Different types of Algorithms
    Binary Tree and Data Structure
    Collection Framework in Java
    Discrete Mathematical Abstractions and Implementation through Java Collection
    Comparator, Comparable and Iterator
    Standard Template Library in C++
  • 8. Time Complexity
    Order of n, or O(n)
    Big O Notation
  • 9. Set, Symmetric Difference and Propositional Logic
    Why Set is important in Data Structures
    How Symmetric Difference and Propositional Logic combine
  • 10. Combinatorics and Counting, Permutation and Combinations
    Permutation and Combination
  • What Next

Permutation and Combination in Data Structures and Algorithms


https://leanpub.com/discretemathematicalalgorithmanddatastructures

Let us see some common examples of combinatorial algorithms. One of them is generating a list of all structures of a given type. In this scenario we can think of permutations with or without repetition, and of combinations with or without repetition.

Search problems are good examples of where optimization and approximation algorithms are used. Optimization methods include, for example, dynamic programming; approximation methods, on the other hand, include greedy algorithms.

We cannot say that one algorithm is the best solution; there could always be a better algorithm than the previously claimed 'best'.

Let us solve a problem and see whether the solution is the best or not.

Suppose there are five things. We have found that five things can be arranged in 120 possible orders. Now let us pick any two of them and arrange those two in order; however, we cannot repeat the same thing twice. It is a permutation without repetition, but it is still a little bit different.

If we want to arrange any two things out of those five elements without repetition, how many orderings are there?



package chapterten;

/**
* a permutation is an arrangement of a collection where order matters
* permutations can be with repetition or without
* here we see an example of
* permutations without repetition:
* if there are 16 numbered things, and we choose thing number 14,
* then we cannot choose number 14 again;
* in that case, 15 things remain
* no repetitions and order matters
*/

/**
* Here is our problem:
* in what order could 5 balls be?
* for 5 balls the number of possibilities is factorial(5)
* if we take any two out of them, 3 balls are left behind,
* so the number of orderings of 2 balls out of 5
* is factorial(5) / factorial(5 - 2), that is factorial(5) / factorial(3)
*/

public class SimplePermutation {

    static int permutationWithoutRepetition(int n) {
        int i = 1;
        int k = 1;
        while (i <= n) {
            k *= i;
            i++;
        }
        // we get the factorial of the number passed as the parameter
        return k;
    }

    public static void main(String[] args) {

        /**
         * the formula for permutations without repetition is
         * factorial(n) / factorial(n - r); here n = 5 and r = 2,
         * so the divisor is factorial(3)
         */
        int listOfNumbers = 5;
        int restOfNumbers = 3;
        System.out.println("The list of numbers are (5, 4, 3, 2, 1)");
        System.out.println("Our first choice has 5 possibilities.");
        System.out.println("The number of orders 5 numbers could be in: ");
        System.out.println(permutationWithoutRepetition(listOfNumbers));
        System.out.println("Now we need any two numbers from this collection:");
        System.out.println("The order of 2 numbers out of 5 numbers could be: ");
        System.out.println(permutationWithoutRepetition(listOfNumbers)
                / permutationWithoutRepetition(restOfNumbers));

    }
}

 

Here is the explanation. When we pick 2 elements out of 5, 3 elements are left behind. The algorithm is quite simple.

First, find the factorial of 5.
Second, find the factorial of 3.
Third, divide the factorial of 5 by the factorial of 3.

We have done the same thing. And here is the output:



The list of numbers are (5, 4, 3, 2, 1)
Our first choice has 5 possibilities.
The number of orders 5 numbers could be in:
120
Now we need any two numbers from this collection:
The order of 2 numbers out of 5 numbers could be:
20

As we said earlier, there are hundreds of combinatorial algorithms. Besides the more common 'sorting' and 'searching', we have different types of algorithms for generating 'permutations and combinations', graphs, and many others.
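
For combinations, where the order of the chosen items does not matter, we also divide out the orderings of the chosen items themselves. Here is a small sketch in the same style as the SimplePermutation example above; the class SimpleCombination and the method combinationWithoutRepetition are my own illustration, not code from the book:

public class SimpleCombination {

    // plain iterative factorial, the same idea as permutationWithoutRepetition above
    static int factorial(int n) {
        int k = 1;
        for (int i = 1; i <= n; i++) {
            k *= i;
        }
        return k;
    }

    // number of ways to choose r things out of n when order does not matter:
    // n! / (r! * (n - r)!)
    static int combinationWithoutRepetition(int n, int r) {
        return factorial(n) / (factorial(r) * factorial(n - r));
    }

    public static void main(String[] args) {
        // choosing any 2 balls out of 5, ignoring order
        System.out.println("Combinations of 2 out of 5: " + combinationWithoutRepetition(5, 2)); // prints 10
    }
}

For the same five balls, choosing any 2 of them without caring about order gives 5! / (2! * 3!) = 10 possibilities, half of the 20 ordered arrangements computed above.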

Our next problem deals with a kind of permutation and combination generation. In how many ways can we rearrange a string? If repetition is allowed, it is obvious that the number of rearrangements will be larger.
How does that happen?

A string is a sequence of characters. It is a representation of a data structure: an array of characters.

Therefore, it has a length. If repetition is allowed, we can take the length of the string and safely say that the factorial of that length is the number of possible arrangements.

But the real trick is in the process of rearrangement.



package chapterten;

/**
* When repetition is allowed, we can rearrange the order
* of a string in various combinations;
* since we keep the order with repetitions,
* we can call it a permutation of a string
*/

public class StringPermutation {
    // we need a recursive method
    // that will print all the permutations of the string
    static void arrangeTheStringWithRepetition(String anyString, String anotherString) {

        // when the remaining string is empty, one permutation is complete
        if (anyString.length() == 0) {
            System.out.print(anotherString + " ");
            return;
        }

        for (int i = 0; i < anyString.length(); i++) {

            // pick the character at position i
            char ch = anyString.charAt(i);

            // build the rest of the string by
            // excluding the character at position i
            String restOfTheString = anyString.substring(0, i) +
                    anyString.substring(i + 1);

            // calling the method recursively
            arrangeTheStringWithRepetition(restOfTheString, anotherString + ch);
        }
    }

    public static void main(String[] args) {

        String stringToArrange = "abcd";
        // the given string 'abcd' is of length 4
        // the factorial of 4 is 4 * 3 * 2 * 1 = 24
        // the program will rearrange the string in 24 ways
        arrangeTheStringWithRepetition(stringToArrange, "");
        System.out.println();

    }

}

// output

abcd abdc acbd acdb adbc adcb bacd badc bcad bcda bdac bdca cabd cadb cbad cbda cdab cdba dabc dacb dbac dbca dcab dcba

In the above code snippet, the string is of length 4.

Therefore the factorial of 4, that is 24, is the number of possibilities. The above program rearranges the string in 24 ways. However, the condition was that repetition was allowed.

 

Time Complexity and Big O Notation


https://leanpub.com/discretemathematicalalgorithmanddatastructures

What are time complexity and Big O notation? In this article, we will take a very brief look at these concepts.
We have already seen many examples of different kinds of data structures and algorithms. We have also found that to implement such algorithms, we do not have to do the low-level plumbing ourselves. Any good, stable programming language provides many libraries to avoid such low-level plumbing, and different types of algorithms, such as sorting, shuffling, or searching, can be run through them.
We have also found another important fact that tells us about the relationship between algorithms and discrete mathematics: data structures are discrete structures, and hence the algorithms are all about discrete structures.
Let us consider a simple Java program, where we iterate over two loops at the same time. The outer and inner loops are connected with each other, and together they give us an output.

package fun.sanjibsinha;

public class Main {

    static int i, j, totalOne, totalTwo;

    public static void main(String[] args) {

        for (i = 0; i <= 5; i++) {
            totalOne += i;
            System.out.print("i = " + i);
            System.out.println("--------");
            for (j = 0; j <= 5; j++) {
                totalTwo += j;
                System.out.println("j = " + j);
            }
        }
        System.out.println("The total of outer loop: " + totalOne);
        System.out.println("The total of inner loop: " + totalTwo);
    }
}

And from this very simple program, we get an output like the following:

//output of 8.1
i = 0--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 1--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 2--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 3--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 4--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
i = 5--------
j = 0
j = 1
j = 2
j = 3
j = 4
j = 5
The total of outer loop: 15
The total of inner loop: 90

You have probably noticed that this is a simple algorithm for finding the totals of the inner loop and the outer loop. However, to do that, we need to jump sequentially; each iteration takes place within the space of a discrete data structure.
Any problem has one or more possible solutions. Suppose we were asked to find the total of one loop only; we would change the above code to the following:


package fun.sanjibsinha;

public class Main {

    static int i, j, totalOne, totalTwo;

    public static void main(String[] args) {

        for (i = 0; i <= 5; i++) {
            totalOne += i;
            System.out.print("i = " + i);
            System.out.println("--------");
            /*
            for (j = 0; j <= 5; j++) {
                totalTwo += j;
                System.out.println("j = " + j + ", ");
            }
            */
        }
        System.out.println("The total of loop: " + totalOne);
        // System.out.println("The total of inner loop: " + totalTwo);
    }
}

Now, in the changed circumstance, we will come up with this output:

//output of 8.2
i = 0--------
i = 1--------
i = 2--------
i = 3--------
i = 4--------
i = 5--------
The total of loop: 15

We can also solve this problem in another way: by calling a function recursively. Our problem was how to get the total of a series of positive integers that starts at 0 and ends at 5.

It looks like this: total = 0 + 1 + 2 + 3 + 4 + 5; here the end point was chosen as 5, and the value of total is 15.

We could have taken the user's input and computed the total up to any big positive integer. Manually, it is easy to get the total of a small series. But consider a case where the user gives us an input like 10 to the power 6. Adding all the numbers from 0 to that number by hand is not possible; it becomes cumbersome.
Therefore our first algorithm was based on looping. It easily adds all the numbers, whatever the end number may be.
As we have said, any problem might have one or more solutions. Here we can solve the same problem the recursive way.

package fun.sanjibsinha;

public class Main {

    static int total;

    static int getTotal(int i) {
        if (i != 0) {
            total = i + getTotal(i - 1);
            return total;
        } else {
            return 0;
        }
    }

    public static void main(String[] args) {

        System.out.println("Total : " + getTotal(5));

    }
}

And we get the same output:


Total : 15

In the above case, the same iterative jumping occurs through a discrete structure, but in a different way. It starts from the discrete positive integer 5, and the addition goes on, by calling the function recursively, until we reach the base number.
Now we have found that this problem has two solutions. The question is: which is preferable? Which algorithm is better than the other?
Here comes the question of time complexity. Time complexity has nothing to do with the actual execution time, but it has a lot to do with the algorithm. Time complexity is all about finding the better algorithm. A better algorithm always takes less time to reach the desired discrete element (here, the output of a total).
Here we have to iterate over a series of positive integers to reach our goal. If we could reach it in one iteration, that would be the best algorithm, no doubt; but that is utopian.
On the contrary, the worst case takes us to an iteration that has to traverse each element for an indefinite period of time. In like manner, you may imagine a situation where the user gives an input to find whether an integer is prime or not. There are several algorithms to test whether a number is prime, and one of them will be better than the others and take less time to give you the output. How long it takes depends entirely on the size of the input: for a really big integer, one algorithm might take several minutes to complete the operation, while another finishes it in a few seconds.
Before trying to understand time complexity, we should remember that the actual time required to execute code is machine dependent; another key factor is network load. But time complexity has nothing to do with your machine configuration. It is all about the better algorithm. That is why, whenever the terms data structures and algorithms appear, time complexity comes with them.
In this chapter, we will try to understand how time complexity works and what the term Big O notation stands for. But before that, we need to understand what 'order of n' means.

Order of n, or O(n)

Let us implement an algorithm that will check whether a number 'n' is prime or not. We know that a prime number is divisible by only two numbers, 1 and the number itself, with no remainder. Consider the prime number 11: it has only two factors, 1 and 11. If we divide 11 by 1 or by 11, there is no remainder. That is the most basic definition.
Therefore, using a normal looping construct, we can check whether a number is prime or not. We can write our algorithm in natural language this way:

for each integer i starting from 2 up to (n - 1)
if n % i == 0, then n is not prime
if no such i divides n, then n is prime

We can test this algorithm with a small positive integer like 11. In that case, if we iterate from 2 to (11 - 1), we will see that between 2 and 10 there is no integer that divides 11 with no remainder; therefore, 11 is prime. When the value of n is small, our algorithm does not take much time; in fact, the time may appear negligible. Suppose each iteration takes one millisecond (ms). This fraction of a second stands for 10 to the power minus 3; that is, if you divide 1 second by 1000, you get one millisecond.

By the way, as students of computer science we should know that one microsecond is 10 to the power minus 6, one nanosecond is 10 to the power minus 9, and one picosecond is 10 to the power minus 12.
Now, let us assume that each iteration takes one microsecond. When the number of iterations is small, it does not take much time. However, as the number of iterations increases, this time also increases. If, instead of 11, we want to check a value like 1000003, the loop will iterate about one million times. Now our algorithm appears to crumble, because it will take a huge amount of time to finish the iteration.
We can translate this natural-language algorithm into Java code.

package fun.sanjibsinha.timecomplexity;

import java.util.Scanner;

public class TimeOne {

    static long userInput;

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a positive integer to test prime or not: ");
        userInput = sc.nextInt();

        // assume the number is prime until we find a divisor
        boolean isPrime = true;
        for (long i = 2; i <= (userInput - 1); i++) {
            if (userInput % i == 0) {
                isPrime = false;
            }
        }
        if (isPrime) {
            System.out.println(userInput + " is prime.");
        } else {
            System.out.println(userInput + " is not prime.");
        }
    }
}

We are not going to run this code; it is just an example for time complexity. In the above code you can see that to reach our goal we need to iterate on the order of 'n' times, where 'n' is the user's input. That is called 'order of n', or O(n). As the value of 'n' increases, our algorithm starts to crumble. For a value like 10 to the power 6 plus 3, it takes a huge amount of time and simply goes out of our hands.
Therefore, we have to search for a better algorithm to solve the same problem.
Let us take the user's input and, instead of iterating over the whole range of that number, calculate the square root of the input and iterate only up to that value.
It serves our purpose and, at the same time, shortens the length of the iteration greatly. Let us write the code in Java.

package fun.sanjibsinha.timecomplexity;

import java.util.Scanner;

public class TimeTwo {

    static long userInput;

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a positive integer to test prime or not: ");
        userInput = sc.nextInt();

        // 10 to the power 6 plus 3 = 1000003
        // we only need to check divisors up to the square root of the input
        boolean isPrime = true;
        for (long i = 2; i <= Math.sqrt(userInput); i++) {
            if (userInput % i == 0) {
                isPrime = false;
            }
        }
        if (isPrime) {
            System.out.println(userInput + " is prime.");
        } else {
            System.out.println(userInput + " is not prime.");
        }
    }

}

If the user's input is 1000003, our new algorithm no longer iterates close to one million times; instead, it iterates only around 1000 times.
Let us write another short Java program to see the square root of that number.


package fun.sanjibsinha.timecomplexity;

public class TestClass {

    public static void main(String[] args) {

        System.out.println("Square root of 10 to the power of 6 plus 3 is: "
                + Math.sqrt(1000003));
        //1000.001499998875
    }
}

//output of 8.6

Square root of 10 to the power of 6 plus 3 is: 1000.001499998875

In the above code we can clearly see that the square root of 1000003 is just a little over 1000.
Let us go back to code 8.5, where we successfully shortened the length of the iteration by calculating the square root of the user's input. Reducing the length of the iteration means the algorithm takes much less time. Now the running time of our code is not 'order of n', but 'order of the square root of n'; it is O(square root of n).
It is quite evident that code 8.5 represents a much better algorithm than code 8.4 for solving the same problem. Why so? Because it solves large test cases in acceptable time. A better algorithm is necessary for better performance.
Here the running time of the program depends on the user's input; as the input grows, the running time grows with it.
As a conclusion to this section, we can say that time complexity cares only about the 'input'. It also cares about the algorithm that tackles the 'input' in a better way.
Moreover, time complexity does not care about the machine's hardware configuration. Whether the machine has a single processor or multiple processors does not matter. The network load is unimportant. Time complexity really does not care whether the machine is 32-bit or 64-bit.
Input is the key word for time complexity. Nothing else.
In the above examples, our inputs were single integers. The input could just as well have been an array, a data structure, could it not? Now, again, we come back to data structures and algorithms, and, of course, now we also understand how time complexity is related to them.

Big O Notation

We have already seen how we can characterize an algorithm through the order of 'n', the size of its 'input'. Consider code 8.4, where we have to iterate about one million times if 'n' is 1000003. O(n) is its worst-case scenario: that algorithm cannot be worse than O(n).
Big O notation is the most convenient way to express the worst-case scenario of an algorithm. Compare code 8.5 with the previous code 8.4. In code 8.5, the Big O notation is O(square root of n), and that algorithm cannot be worse than that. Comparing these two worst-case scenarios, we can safely say that the algorithm of code 8.5 is better than that of code 8.4.
If you are still confused by the definition, do not worry. Everything takes time. The concept of time complexity is tied to the algorithm, and the more you practice, the better you will get your head around this topic.
Let us recall the second code snippet of this chapter. Code 8.2 looks like this:

package fun.sanjibsinha;

public class Main {
    public static void main(String[] args) {
        int totalOne = 0;
        for (int i = 0; i <= 5; i++) {
            totalOne += i;
        }
        System.out.println("The total of loop: " + totalOne);
    }
}

For brevity we have trimmed the code, omitting the commented parts. When we declare the variable 'totalOne' and set it to 0, the constant cost is 1 and it runs 1 time.
In the next step, the first line of the 'for' loop tells us two things: its constant cost is more than one, and it runs more than once.
In the next line the algorithm keeps adding the value of the loop variable, 'n' number of times. Therefore, we have a couple of constants, but the number of times equals 'n'.
The last line gives us the output. The constant is 1, and it runs 1 time.
When we add them up, we get O(n). This happens because, while summing up, we drop all the constants.
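As a rough sketch of that counting, we can annotate the same trimmed loop with the informal tallies described above; the per-line counts in the comments are illustrative approximations, not exact machine instructions:

public class CostCounting {

    public static void main(String[] args) {
        int totalOne = 0;                                      // 1 operation, executed once
        for (int i = 0; i <= 5; i++) {                         // initialization once; test and increment run roughly n times
            totalOne += i;                                     // 1 operation, executed n times
        }
        System.out.println("The total of loop: " + totalOne);  // 1 operation, executed once
        // summing up gives roughly c1 + c2 * n; dropping the constants leaves O(n)
    }
}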
However, for code 8.1, where we used a nested loop to get the total of the inner loop, this order of 'n' changes to (n * n), that is, n squared. Therefore the Big O notation, or worst-case scenario, of that algorithm is O(n^2). In that code we also compute the total of the outer loop, which means we have O(n) in the same breath. Since these are consecutive statements, we count the statement with the maximum complexity, that is, O(n^2).
We now have an initial idea of how Big O notation works and what principles govern it. Many common algorithms use the Big O notation O(n^2); common algorithms such as bubble sort, selection sort, and insertion sort all take that Big O notation, as the sketch below illustrates.
There are many other Big O notations related to different types of algorithms. The idea of time complexity rests on finding the better algorithm to solve a problem. We should analyse the Big O notations, or worst-case scenarios, of our test cases and try to find the best solution.
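
To make that concrete, here is a minimal sketch of bubble sort in Java; its two nested loops, each bounded by the length of the array, are what give the O(n^2) worst case mentioned above. The class and variable names are my own illustration:

import java.util.Arrays;

public class BubbleSortDemo {

    // classic bubble sort: repeatedly swap adjacent elements that are out of order
    static void bubbleSort(int[] numbers) {
        for (int pass = 0; pass < numbers.length - 1; pass++) {        // outer loop runs about n times
            for (int j = 0; j < numbers.length - 1 - pass; j++) {      // inner loop is also bounded by n
                if (numbers[j] > numbers[j + 1]) {
                    int temp = numbers[j];
                    numbers[j] = numbers[j + 1];
                    numbers[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}

Selection sort and insertion sort follow the same nested-loop pattern, which is why they share the same worst-case notation.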

Recursive or iterative, which algorithm is better?


https://leanpub.com/discretemathematicalalgorithmanddatastructures

The advantages of using recursion are its shortness and cleanliness; moreover, it is closer to our discrete mathematical definitions. If you think about and model your code with the mathematical conceptions in mind, then recursion stays close to your definition.
This is most evident when we use recursion to find the Fibonacci numbers. Mathematically, Fibonacci is defined as follows:

Fibonacci of 1 or 2 is 1
Fibonacci of F (for F > 2) is Fibonacci of (F - 1) + Fibonacci of (F - 2)

If we find the Fibonacci numbers using recursion, the code is not only the simplest version but also the one closest to the mathematical definition.
Watch the next code snippet.

package fun.sanjibsinha.recursive;

public class AlgoFibTwo {

    static int getFibonacci(int f) {
        if ((f == 1) || (f == 2)) {
            return 1;
        } else {
            return (getFibonacci(f - 1) + getFibonacci(f - 2));
        }
    }

    public static void main(String[] args) {

        int newNumber = 6, result;
        result = getFibonacci(newNumber);
        System.out.printf("Fibonacci series of " + newNumber + " = " + result);
        System.out.println();
    }

}

//output of the above code

Fibonacci series of 6 = 8

We can do the same thing iteratively, but it does not reflect the mathematical conception the way recursion does. The next code snippet shows the iterative version.

package fun.sanjibsinha.recursive;

public class AlgoFibOne {

    static int getFibonacci(int f) {
        int k1, k2, k3;
        k1 = k2 = k3 = 1;
        for (int j = 3; j <= f; j++) {
            k3 = k1 + k2;
            k1 = k2;
            k2 = k3;
        }
        return k3;
    }

    public static void main(String[] args) {

        int newNumber = 6, result;
        result = getFibonacci(newNumber);
        System.out.printf("Fibonacci series of " + newNumber + " = " + result);
        System.out.println();
    }
}

//output of the above code
Fibonacci series of 6 = 8

The ability to mirror mathematical models cannot always justify recursion on its own, because there are many other factors we need to keep in mind while programming.
Even granting that the recursive versions are usually slower than the iterative versions, we may still want to adopt recursion for some reasons. When the iterative version of the code becomes heavy, it is much simpler to write the recursive one, and it is also easier to debug and maintain. Whatever our reasons for adopting recursion, slowing down the program may cost us in the end.
We do not save memory space and we do not speed up execution; yet, in some cases, recursion is essential.

We will come back to that as we progress.

What is an Algorithm?


https://leanpub.com/discretemathematicalalgorithmanddatastructures

An algorithm is expressed as a set of steps. By describing the actions at each step, we instruct the computer to do something. Usually we can use any natural language to describe the actions to be performed at each step.

Consider this simple description.

  • 1. Enter one integer
    2. Enter another integer
    3. Compare both integers and return the maximum value
    4. Compare both integers and return the minimum value

In the C++ programming language, on the one hand, we can create generic functions to find the maximum or minimum value; on the other hand, we can take help from the 'algorithm' template library to find the same values.

Every high-level language comes with its own algorithm library, and for a good reason. Any system of counting or calculation by means of a device like a computer involves following a set of steps or directions. Computer scientists use the word 'algorithm' to describe such a 'set of directions'.

In some cases, these directions are as simple as the ones described above. In most cases, they are much more complex. For complex cases, we need the help of an 'algorithm' library; otherwise, we have to do the low-level plumbing ourselves, which is much more time consuming and takes us away from building other important parts of an application.
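
As a small illustration of leaning on a language's own library instead of low-level plumbing, here is a sketch in Java; Math.max, Math.min, Collections.max, and Collections.min are standard JDK methods, while the class name and the sample values are my own:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MaxMinDemo {

    public static void main(String[] args) {
        int a = 12, b = 7;
        // library routines replace hand-written comparison code
        System.out.println("Maximum: " + Math.max(a, b));
        System.out.println("Minimum: " + Math.min(a, b));

        // the same idea scales to whole collections
        List<Integer> numbers = Arrays.asList(5, 3, 9, 1);
        System.out.println("Largest in the list: " + Collections.max(numbers));
        System.out.println("Smallest in the list: " + Collections.min(numbers));
    }
}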

Historically, the derivation of this word has some interesting facts. At the beginning of ninth century an Arabian mathematician wrote a book called ‘Kitab al jabr w’al muqabala’ (Rules of Restoration and Reduction). The word ‘algebra’ comes from the title of the book. This textbook introduced the use of Hindu numerals and included a systematic discussion of fundamental operations on integers.

The word ‘algorithm’ comes from the name of the mathematician, Abu Ja’far Mohammed ibn Musa al-Khowarizmi.

One of the most famous and well-known algorithms is, of course, Euclid's Algorithm. It is a process for calculating the greatest common divisor (GCD) of two integers.
We can illustrate this algorithm in the following way.

  • 1. Take two integers x and y
    2. Divide y by x and get the remainder as r
    3. Now replace the value of y with the old value of x, and replace the value of x with r
    4. Again divide y by x
    5. This process continues until we get r = 0
    6. Once we get r = 0, stop the calculation; x, the last divisor, is the GCD

Notice that the algorithm is expressed as a set of steps. Each step describes some action to take. The important thing is to describe the actions to be performed clearly and unambiguously.
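
As a hedged sketch, the same steps can be written in Java like this; the class and method names are mine, not from the book:

public class EuclidGcd {

    // the steps above, written out: divide y by x, keep the remainder,
    // shift the values, and stop when the remainder becomes 0
    static int gcd(int x, int y) {
        while (x != 0) {
            int r = y % x; // step 2: divide y by x and take the remainder r
            y = x;         // step 3: the old x becomes the new y
            x = r;         //         and the remainder becomes the new x
        }
        // when the remainder reaches 0, the last non-zero divisor is the GCD
        return y;
    }

    public static void main(String[] args) {
        System.out.println("GCD of 48 and 18 is " + gcd(48, 18)); // prints 6
    }
}

Following the steps, the remainders shrink until one of them is 0, and the divisor at that point, 6, is the GCD of 48 and 18.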

Let us summarize this introductory part on algorithms in one sentence.
Data go into the computer as inputs, the algorithm takes charge and processes the data, and then the data come out as outputs.

By the way, people often mistake the word 'data' for a singular noun; it is actually the plural form of the Latin word 'datum'. Since we have used this word often in our discourse, and will use it again, for the curious readers I opened the Oxford dictionary and looked up the word 'datum'.

Oxford dictionary defines datum as “A thing given or granted; a thing known or assumed as a fact, and made the basis of reasoning or calculation; a fixed starting-point for a series of measurements etc.” It has also made it clear that the plural form of ‘datum’ is ‘data’.

For instance, in Java we have the collection classes, and in C++ we have containers, which manage this data structure part. We are going to find out how they are related in the next post.

Introduction to Data Structures and Algorithms


https://leanpub.com/discretemathematicalalgorithmanddatastructures

We have already been introduced to data structures before. We have learned a few operations using arrays in various languages, so the concept of data structures is not completely alien to us.
We need a good way to store, organize, and use our data. As time passes, data is becoming not only more and more complex but also bigger in quantity. More and more people are getting hooked to the Internet, exchanging huge amounts of data every day in various forms; scientific data sets are getting larger, weather data has to be processed to give more accurate predictions, medical data is becoming humongous; the list is endless.
Therefore, we need a more efficient way to sort, organize, and use that data.
Data structures are all about this. They have a very close relationship with algorithms, because managing such a huge amount of data is far less tedious if we have an efficient algorithm ready at hand. While sorting, organizing, or using such humongous data, one is not only prone to error but may also fail to satisfy one of the most important requirements: time and space. Yes, time complexity really matters, and so does space.


Data Structures in Java

https://leanpub.com/u/sanjibsinha

 

How can we manage a collection of data in Java? There are many options: different types of data structures.

You have already seen the array. However, an array has many limitations; for example, it holds a fixed number of values of a single type, and you cannot change that.
To overcome such limitations, Java introduces the core collection interfaces. They encapsulate different types of collections. You have already learned the role of an interface: it is essentially a contract, so it allows collections to be manipulated independently of their implementation. The foundation of the Java Collections Framework consists of these core collection interfaces.
They are 'Set', 'List', 'Queue', 'Deque', and 'Map'.
In the last problem, we saw an example of the 'LinkedList' class, which is a collection based on a linked list. As the name suggests, it is a collection of containers that hold data, and each container has a link identifier that connects it either to the next one or to the previous one. The last link identifier is always 'null', because it points to nothing.
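
As a small sketch of those interfaces in action (java.util.LinkedList, List, and Queue are standard JDK types; the variable names and sample values are my own):

import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

public class CollectionDemo {

    public static void main(String[] args) {
        // LinkedList implements both the List and the Queue/Deque interfaces
        LinkedList<String> names = new LinkedList<>();
        names.add("Alice");
        names.add("Bob");
        names.addFirst("Zoe");               // a linked list can grow at either end
        System.out.println(names);           // [Zoe, Alice, Bob]

        // the same object can be handled through different contracts
        List<String> asList = names;         // positional access
        Queue<String> asQueue = names;       // first-in, first-out access
        System.out.println(asList.get(1));   // Alice
        System.out.println(asQueue.peek());  // Zoe
    }
}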


Dart: How to add nodes of Linked Lists manually

https://leanpub.com/u/sanjibsinha

https://leanpub.com/discretemathematicalalgorithmanddatastructures

Linked Lists are a very well known feature of data structures. Almost every popular object-oriented programming language has built-in support for them. They are nothing but lists whose elements are linked; we call those elements nodes. Each node can hold any type of data, and the nodes are linked with one another. Can we create such nodes manually and link them together? Yes, that is possible; the post demonstrates it with a short example in Dart.
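
The post itself demonstrates the idea in Dart; since that listing is not reproduced here, the following is only a hedged sketch of the same idea in Java, with a Node class of my own invention:

public class ManualNodes {

    // a minimal node: it holds data and a link to the next node
    static class Node {
        int data;
        Node next; // the last node's link stays null

        Node(int data) {
            this.data = data;
        }
    }

    public static void main(String[] args) {
        // create the nodes and link them by hand
        Node first = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
        first.next = second;
        second.next = third;

        // walk the chain until the link is null
        for (Node current = first; current != null; current = current.next) {
            System.out.println(current.data);
        }
    }
}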

Dart: why we need to understand objects and classes to get our heads around data structures

https://leanpub.com/u/sanjibsinha

https://leanpub.com/discretemathematicalalgorithmanddatastructures

We all know that data structures are about storing, retrieving, arranging, and sorting a collection of data in the most efficient way. That is why this functionality belongs to the 'Collection' classes in most programming languages. However, any kind of data has a 'type' attached to it. It could be a primitive data type, such as an integer, Boolean, or double in Java, or it could be an object coming straight from a class, with a blueprint behind it. That is exactly what happens in Dart: in Dart, everything is an object. So before playing around with a large topic like data structures, we need to get our heads around objects and classes. The example in that post shows how we can use a user-defined object inside another object and, in the end, how the objects interact with each other.

How to rotate an array clockwise by an element using Dart

https://leanpub.com/u/sanjibsinha

https://leanpub.com/discretemathematicalalgorithmanddatastructures

We have seen some complex array algorithms before. Our next challenge is to rotate an array clockwise by one element in a few lines of code.
void main() {

  // in null-safe Dart, List(4) is no longer available, so we build the list explicitly
  List<int> anArray = List<int>.filled(4, 0);
  anArray[0] = 1;
  anArray[1] = 2;
  anArray[2] = 3;
  anArray[3] = 4;
  print("The array before rotating by one element.");
  for (int element in anArray) {
    print(element);
  }
  print("Rotating the array clockwise just by one element.");
  justRotate(anArray);

}

void justRotate(List<int> someArray) {
  // keep the last element; it will move to the front
  int x = someArray[someArray.length - 1];
  // shift every other element one position to the right
  for (int i = someArray.length - 1; i > 0; i--) {
    someArray[i] = someArray[i - 1];
  }
  someArray[0] = x;
  for (int element in someArray) {
    print(element);
  }
}
Here goes the output: the list prints as 1, 2, 3, 4 before the rotation and as 4, 1, 2, 3 after it.