Big O Notation Basics for Web Developers

Today's article introduces mathematical notation that may look intimidating at first. It doesn't need to be, because we will start with the basics of Big O notation. The discussion revolves around the time complexity of algorithms: Big O notation describes how an algorithm's running time grows with the size of its input, from the best case to the worst, and the same idea can be applied to space complexity. Let's dive in!

What is the Big O notation?

Big O notation, also known as Landau's symbol (after the mathematician Edmund Landau), is used to express an algorithm's efficiency or complexity. It is a mathematical notation that describes the limiting behavior of a function as its argument tends toward a particular value or infinity. Big O can be used to characterize an algorithm's execution time or space usage, and it is most often used to describe the worst-case scenario.
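As a quick illustration (a sketch with function names of my own choosing, `sumLoop` and `sumFormula`), consider two ways of summing the numbers 1 to n:

```javascript
// sumLoop touches every number from 1 to n, so its running time
// grows linearly with n: O(n).
function sumLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// sumFormula uses Gauss's closed-form formula n(n + 1) / 2, so it
// performs the same few operations no matter how large n is: O(1).
function sumFormula(n) {
  return (n * (n + 1)) / 2;
}

console.log(sumLoop(100));    // 5050
console.log(sumFormula(100)); // 5050
```

Both functions produce the same answer, but Big O tells us the second one will not slow down as n grows.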

Why should web developers know about Big O notation?

Considering the time and space complexity of algorithms as they are written should become second nature to web developers. This makes understanding Big O notation crucial, since it enables the early identification of potential performance and optimization problems.

What is a quadratic function (O(n²))?

O(n²), read "Big O of n squared", denotes that an algorithm takes quadratic time: the number of steps grows with the square of the input size. Here, the letter "n" stands for the input size, and the function inside the "O()", for example g(n) = n², gives us a sense of the algorithm's complexity relative to that size. Given an input of size n, the number of steps it takes to accomplish the task is proportional to n squared. As a result, the algorithm is said to have quadratic time complexity.

Let's look at the example below to understand the quadratic function. As we can see, the operations variable is incremented inside two nested loops. If n is the length of our "smallCollection" (4 elements), we end up with 4² = 16 operations, which is no problem at all. But what if n is the length of an enormous collection, say a billion elements? The number of operations explodes to n², making the work vastly more expensive.

// Quadratic function (O(n²))
const smallCollection = [1, 2, 3, 4];
// Illustrative only (not valid JavaScript as written):
// const giganticCollection = [1, 2, 3, ..., 1000000000];

function countOperations(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++;
    }
  }
  return operations;
}
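To see the squared growth in numbers, here is a quick self-contained check (countOperations is repeated from the snippet above so this runs on its own):

```javascript
// Same quadratic function as above, repeated so this snippet is self-contained.
function countOperations(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++;
    }
  }
  return operations;
}

console.log(countOperations(4));  // 16, i.e. 4²
console.log(countOperations(10)); // 100, i.e. 10²
```

Doubling the input quadruples the work, which is exactly what O(n²) predicts.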

How does a linear search compare with a binary search algorithm?

A linear search checks each element one at a time, in order. If we wish to find an element in an array or list, we start at the beginning and compare every item against the target until we find it or run out of items.


// Linear search
function linearSearch(arr, target) {
  // A plain index loop: for...in would give string keys, not numbers.
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}
console.log(linearSearch([1, 2, 3, 4], 1)); // 0
console.log(linearSearch([1, 2, 3, 4], 4)); // 3
console.log(linearSearch([1, 2, 3, 4], 6)); // -1 (not found)
console.log(linearSearch([3, 4, 1, 6, 3], 6)); // 3

A binary search compares the middle element of a sorted list with the element being searched for: if the middle element is smaller, the target can only be in the upper half, and if it is greater, the target can only be in the lower half. The fundamental benefit of a binary search is that not every entry in the list is scanned; each comparison discards half of the remaining entries. As a result, a binary search finds an element in a sorted list much faster than a linear search does.

// Binary search (the array must be sorted)
function binarySearch(arr, target) {
  let start = 0;
  let end = arr.length - 1;
  while (start <= end) {
    let middle = Math.floor((start + end) / 2);
    if (arr[middle] < target) {
      start = middle + 1; // target can only be in the upper half
    } else if (arr[middle] > target) {
      end = middle - 1; // target can only be in the lower half
    } else {
      return middle; // found
    }
  }
  return -1;
}
console.log(binarySearch([1, 2, 3, 4], 1)); // 0
console.log(binarySearch([1, 2, 3, 5, 9], 4)); // -1 (not found)
console.log(binarySearch([1, 2, 3, 4, 5], 5)); // 4
console.log(binarySearch([0, 3], 3)); // 1

So what are the differences between these two search algorithms?
The main differences between binary and linear search are as follows:
1. Binary search requires the data to be sorted by value; linear search does not.
2. Binary search involves ordering comparisons (greater than or less than), whereas linear search only needs equality comparisons.
3. Lastly, the complexity and the data-access requirements differ between the two approaches.
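To illustrate the first difference, the sketch below (both searches re-implemented inline so it runs standalone) runs each algorithm against an unsorted array:

```javascript
// Self-contained sketch: both searches re-implemented inline.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

function binarySearch(arr, target) {
  let start = 0, end = arr.length - 1;
  while (start <= end) {
    const middle = Math.floor((start + end) / 2);
    if (arr[middle] < target) start = middle + 1;
    else if (arr[middle] > target) end = middle - 1;
    else return middle;
  }
  return -1;
}

const unsorted = [5, 1, 4, 2, 3];
console.log(linearSearch(unsorted, 1)); // 1 — order does not matter to linear search
console.log(binarySearch(unsorted, 1)); // -1 — the sorted-data precondition is violated
```

Binary search halves the wrong regions when the data is unsorted, so it can miss an element that is actually present.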

Which searching algorithm is more efficient? 

Compared to linear search, which has linear time complexity O(n), binary search has a significantly lower time complexity of O(log n). We can see from the Big O notation graph below that for larger input arrays, binary search (yellow line) is much faster to compute than linear search (blue line). Linear search might be the preferable option if the array is small or if we only need to search it once. If we need to conduct numerous searches on huge arrays, binary search is the better choice. For example, given a sorted array of 20,000 elements, a linear search requires 20,000 comparisons in the worst case. A binary search, on the other hand, requires about log₂(20,000) ≈ 15 comparisons, which is far fewer.

[Figure: Big O complexity graph. Image by Learn2torials]

How to use Big O notation to describe each type of searching algorithm?

Linear Search:
In complexity terms this is a linear algorithm: an O(n) search. Its performance grows linearly and in direct proportion to the size of the input data set. Thus, the time taken to search the list grows at the same rate as the list does.

Binary Search:
In complexity terms this is a logarithmic algorithm: an O(log n) search. Because each operation halves the search space, the number of search operations grows much more slowly than the list does. It is an algorithm whose growth rate slows as the data set increases, following a logarithmic curve.
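To get a feel for how slowly a logarithm grows, the snippet below prints the worst-case number of binary search steps, ceil(log₂(size)), for a few list sizes:

```javascript
// Worst-case binary search steps for growing list sizes.
// Doubling the list adds only about one extra step.
for (const size of [1000, 20000, 1000000, 1000000000]) {
  console.log(size, Math.ceil(Math.log2(size)));
}
// 1000 → 10, 20000 → 15, 1000000 → 20, 1000000000 → 30
```

Even a billion-element list needs only about 30 steps, which is why the logarithmic curve looks nearly flat next to the linear one.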

What is the Fibonacci sequence? 

The Fibonacci sequence is a series of numbers that begins with a zero or a one, is followed by the number one, and continues according to the rule that each number in the series equals the sum of the two numbers before it. The Fibonacci sequence can be displayed in the following order: 1, 1, 2, 3, 5, 8, 13, 21, 34.

Below I have displayed the different algorithms used to find the numbers in the Fibonacci sequence. Can you guess which algorithm is more efficient?

Loop Fibonacci:


// Loop Fibonacci
// A JavaScript program to display the first 5 numbers in the Fibonacci sequence (1, 1, 2, 3, 5)
// written without using a recursive function.
/**
 * First we choose how many numbers to display.
 * The first two values (1 and 1) are printed directly.
 * The while loop then computes each next number as the sum of the previous two
 * until 5 numbers have been printed.
 **/
const count = 5;
let n1 = 1,
  n2 = 1,
  nextNum;
console.log('The first 5 Fibonacci numbers are:');
console.log(n1);
console.log(n2);
let printed = 2;
while (printed < count) {
  nextNum = n1 + n2;
  console.log(nextNum);
  n1 = n2;
  n2 = nextNum;
  printed++;
}


Recursive Fibonacci:


// Recursive Fibonacci
// A JavaScript program to display the first 5 numbers in the Fibonacci sequence (1, 1, 2, 3, 5).
/**
 * We create a recursivefib function that builds the sequence as an array.
 * If the number is less than or equal to 1, the function returns the base sequence [1, 1].
 * Otherwise we recurse, then use push() to append the sum of the 2 previous values.
 * The result is stored in the total variable and returned.
 * console.log prints the 5 numbers.
 **/
const recursivefib = (num) => {
  if (num <= 1) return [1, 1];
  const total = recursivefib(num - 1);
  total.push(total[total.length - 1] + total[total.length - 2]);
  return total;
};
console.log('The first 5 Fibonacci numbers are: ' + recursivefib(4));


Which algorithm is the more efficient solution to the problem?

Here the non-recursive function (loop Fibonacci) is the more efficient solution: it runs in O(n) time with constant extra space. The recursive version shown above also performs O(n) additions, because it builds the whole sequence as it returns, but it pays the overhead of a function call and a growing call stack for every number. The classic naive recursion, which recomputes fib(n - 1) and fib(n - 2) without storing any results, is far worse at O(2^n). The simple loop is therefore the fastest of these approaches.
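For contrast, this is the classic naive recursive Fibonacci (a sketch, distinct from the array-building version above): each call spawns two more calls, so the running time grows roughly as O(2^n).

```javascript
// Naive recursive Fibonacci: no results are stored, so fib(n - 1)
// and fib(n - 2) are recomputed over and over — O(2^n) time.
function naiveFib(n) {
  if (n <= 1) return 1; // matches the article's 1, 1, 2, 3, 5 indexing
  return naiveFib(n - 1) + naiveFib(n - 2);
}

console.log(naiveFib(4)); // 5 — the fifth Fibonacci number
// naiveFib(50) would take a very long time; the loop handles it instantly.
```

This exponential blow-up, not recursion itself, is what makes the naive approach impractical for large n.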


Conclusion

To round up, developers can assess an algorithm's scalability using Big O notation. Additionally, based on the amount of data the program must deal with, Big O notation describes the maximum number of operations an algorithm may perform before producing a result.

I hope this article has been helpful to you.

Thank you for reading!

Source

Dev - "Coding Interview Question: Fibonacci Number" by nazmeln
Doable Danny - "Linear Search in JavaScript"
