Learn more about Big O Notation, a mathematical process that we can use to measure the performance and complexity of an algorithm.

When we first start learning to program and writing our first algorithms, it’s common to write code that is neither performant nor readable. Looking back at your first lines of code in your favorite programming language might give you nightmares about how you did some things back then compared to how you do them today.

As we learn more about programming, our experience keeps growing and our way of solving problems keeps improving; we learn something new every single day. Programming does not mean typing a few words into an editor and watching them magically turn into a program. There’s a lot of work behind it, and many tools and concepts that developers need to learn in order to write the best, most performant, readable, and maintainable code.

Big O Notation can save us a lot of precious time by helping us write better code. This mathematical process has been in use for a long time, and even if you don’t know much about it, you have probably heard of it before.

I will show you how you can measure the performance and complexity of your algorithms, even if you are not a mathematics expert, and improve them easily by following a few steps.

Sometimes, developers can hear the term “Big O Notation” and be scared, as they think that this might be too complex a concept. Although the Wikipedia definition of Big O Notation might be a little complex for beginners, the mathematical process itself is not that complex.

There are many different ways to explain what Big O Notation is and why it exists. Here is how we can define Big O Notation in a single sentence:

**Big O Notation is a representation of the complexity of an algorithm.**

It’s a mathematical process that allows us to measure the performance and complexity of our algorithm.

Usually, Big O Notation uses two factors to analyze an algorithm:

● **Time Complexity**—How long it takes an algorithm to run

● **Space Complexity**—The memory that is required by the algorithm

As our inputs grow larger, does the runtime of this algorithm stay the same? Will this code be scalable? In other words, will we need to worry about the performance of this code in the future?

People often measure the quality of code by its readability. While readability is still a valid criterion, it alone does not guarantee that what you wrote can be considered good code.

Generally, when working on a project, you will have many different functions, and different functions have different Big O complexities. Using Big O Notation, we can easily compare two different algorithms and tell which one is better.

One nice thing to know is that Big O Notation doesn’t measure things in seconds. Instead, it describes how quickly our runtime grows as the input grows, and by convention it considers the worst-case scenario.
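To make “how quickly our runtime grows” concrete, here is a small sketch (the helper names are my own, not from any library) that counts the operations performed by a single loop and by two nested loops as the input size doubles:

```
// Count the basic operations performed for an input of size n.
function linearOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++; // one pass over the input
  return ops;
}

function quadraticOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) ops++; // a pass for every element
  }
  return ops;
}

// Doubling the input doubles the linear count...
console.log(linearOps(10), linearOps(20)); // → 10 20
// ...but quadruples the nested-loop count.
console.log(quadraticOps(10), quadraticOps(20)); // → 100 400
```

Notice that we never talk about seconds here, only about how the number of operations grows with the input.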

Let’s learn now about the different types of time complexity in Big O Notation and see the differences between them.

An algorithm will have a **constant time** complexity when it runs in the same amount of time, no matter the input size.

Let’s imagine that we have a function that takes an input array of three items, and we want to log the first element of this array every time we call the function.

```
const arr = [1, 2, 3];

function logFirstItem(items) {
  console.log(items[0]);
}

logFirstItem(arr);
```

No matter how big the size of the array is, our function will always run in the same amount of time. Constant time is considered the best-case scenario for an algorithm.

In Big O Notation, **O** stands for the order of magnitude, and what’s inside the parentheses represents the complexity of a task. That’s why we use **O(1)** to show that this code has **constant time** complexity.

A logarithm is a mathematical operation that determines how many times a certain number, called the base, must be multiplied by itself to reach another number. A logarithmic function is the opposite of an exponential function.

An algorithm will have a **logarithmic time** complexity, or **O(log n)**, when the runtime grows linearly while the input size grows exponentially.

Imagine that we take one second to compute an input array of 10 elements. Because the runtime grows linearly only as the input grows exponentially, we would take two seconds to compute an input array of 100 elements, three seconds for an input array of 1,000 elements, and so on.

A classic algorithm with logarithmic time complexity is the binary search algorithm.

The binary search algorithm is a very efficient algorithm to find an element inside a sorted list of items.

Imagine that you want to look up a specific item inside a huge list of 10,000 items. If we go through this list and compare every item with the specific item that we wanted to return, this algorithm would have a **linear time** complexity (which we'll learn more about in the next section).

The binary search algorithm works differently. Instead of going through the list and comparing each item, it repeatedly divides the list into two halves of the same size.

In each step, the algorithm picks the middle element of the current range and compares it to the item we are looking for. If they match, the item is returned. Otherwise, the algorithm discards the half that cannot contain the item and repeats the process on the remaining half until it finds the item (or the range is empty).
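The steps above can be sketched in a few lines of JavaScript (the function and variable names here are my own, not from a library):

```
// Binary search: returns the index of `target` in a sorted array,
// or -1 if it is not found.
function binarySearch(items, target) {
  let low = 0;
  let high = items.length - 1;
  while (low <= high) {
    // Pick the middle element of the current range.
    const mid = Math.floor((low + high) / 2);
    if (items[mid] === target) {
      return mid; // found it
    }
    if (items[mid] < target) {
      low = mid + 1; // discard the lower half
    } else {
      high = mid - 1; // discard the upper half
    }
  }
  return -1; // range is empty, item not in the list
}

const sorted = [1, 3, 5, 7, 9, 11];
console.log(binarySearch(sorted, 7)); // → 3
console.log(binarySearch(sorted, 4)); // → -1
```

Each iteration halves the remaining range, so a list of 10,000 items needs at most about 14 comparisons instead of 10,000.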

An algorithm will have a **linear time** complexity, or **O(n)**, when the runtime of the algorithm changes linearly with the input size.

Let’s take as an example a function that receives an input array with just four items, and we want to loop through this array and check for a specific item.

```
const arr = new Array(4).fill('hello');

function findHello(items) {
  for (let i = 0; i < items.length; i++) {
    if (items[i] === 'hello') {
      console.log('hello!');
    }
  }
}

findHello(arr);
```

As the input size changes, the number of operations changes as well—that’s called **linear time** complexity.

If we had an input array of 10,000 elements, the runtime would grow proportionally to the input size.

As a rule of thumb, a single loop over the input is a sign that the algorithm has linear time complexity.

An algorithm has a **quadratic time** complexity, or **O(n²)**, when the runtime is proportional to the square of the size of the input.

Let’s imagine that we have an input array, and for each item of this array, we will loop again to compare the current element with the other elements of the array.

```
const arr = new Array(4).fill('hello');

function findHello(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      if (items[i] === items[j]) {
        console.log('hello!');
      }
    }
  }
}

findHello(arr);
```

Every time we see nested loops, we multiply: two loops over the same input mean n × n = n² operations. Every time the number of elements increases, the number of operations grows quadratically.

This is something you should really pay attention to: if you have more than two nested loops, that’s usually a sign of bad code, and you are probably doing something wrong.

An algorithm has **factorial time** complexity, or **O(n!)**, when its runtime grows with the number of permutations of the input; a classic example is finding all permutations of a given set or string. Factorial time heads toward infinity much faster than the other types of complexity, and remember that infinity is the enemy of performance.

Factorial, “oh no!” It’s as if we were adding another nested loop for every element of the input: a big no-no.

You will probably never see it, but it’s good to know that it exists.
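Just to see what O(n!) looks like in practice, here is a minimal sketch (the function name is my own) that generates every permutation of an array; for n items there are n! results, so the work grows factorially:

```
// Generate every permutation of an array.
// For n items there are n! permutations: O(n!) work.
function permutations(items) {
  if (items.length <= 1) return [items];
  const result = [];
  for (let i = 0; i < items.length; i++) {
    // Fix one element, then permute the rest.
    const rest = [...items.slice(0, i), ...items.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([items[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length); // → 6 (3! = 6)
console.log(permutations([1, 2, 3, 4]).length); // → 24 (4! = 24)
```

Adding just one more element multiplies the total work, which is why factorial-time algorithms become unusable for even modest input sizes.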

Premature optimization can be the root of all evil. Sometimes optimizing for time or space can negatively impact the readability of your code, especially if you don’t know exactly what you are doing.

When we write code, we want to write code that scales, so that we don’t have to go back constantly and fix things as our applications grow in size. Big O is an important mathematical process that can help us write scalable code, think in the long term, and prevent possible problems in the future.

Try to understand the code you are working on, how things are being done, and how you can improve them; this will produce better results in the future and help you save money and time.

Space and time complexity are important things that we should pay attention to on a daily basis. The way we write our code can influence the performance and success of our applications. Writing code that performs at the scale of millions is not an easy task, but it’s definitely something developers should strive for.

About the Author
### Leonardo Maldonado

Leonardo is a full-stack developer, working with everything React-related, and loves to write about React and GraphQL to help developers. He also created the 33 JavaScript Concepts.
