Big O Notation

Emiko N-P.
3 min read · Jan 3, 2022

Let’s say you and your friend are working on an application and run into a tricky problem. You each come up with a solution. Now you need to compare your ideas to find the one that’s the most efficient, so that your app will run quickly and your users will be happy. How do we as programmers quantify and communicate how much time and memory a function or piece of code takes to run? We use something called Big O notation.

Big O notation works by describing, in a simplified way, how the number of operations a function performs grows with the size of its input, n. This makes it easy to compare the efficiency of different algorithms or solutions. Let’s look at a simple example to see how it works:

function logger(array) {
  for (const element of array) {
    console.log(element)
  }
}

The function above, called logger, logs the value of each element in the input array to the console. We would notate the time efficiency of this function as O(n). That’s because logger does one operation, printing, for each element of the input array. Since the input array could be of any length, the number of operations is represented by the variable n. In big O notation we write the growth rate in parentheses following the letter O, which stands for “order,” so we get O(n). But what if we first multiplied all our elements by 2 before printing them out? That would be another n operations, so in total we would have 2n operations, but our big O notation would still be O(n). This is because we ignore constant factors in big O: 2 is a constant, so we leave it out and still get O(n). Here’s a slightly more complex example:
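To make the 2n example concrete, here is a hypothetical variant of logger that makes two full passes over the array, one to double each element and one to print. That’s roughly 2n operations, but dropping the constant still gives O(n):

```javascript
// Hypothetical doubleAndLog: two separate passes over the input.
// 2n operations total, but big O drops the constant, so it's O(n).
function doubleAndLog(array) {
  const doubled = []
  for (const element of array) {   // first pass: n operations
    doubled.push(element * 2)
  }
  for (const element of doubled) { // second pass: another n operations
    console.log(element)
  }
  return doubled
}
```

Doubling the input array doubles the work in both versions, which is exactly what "linear," O(n), means.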

function pairs(array1, array2) {
  for (const i of array1) {
    for (const j of array2) {
      console.log(i, j)
    }
  }
}

Here we have a nested loop. Each time the outer loop runs, the inner loop does n operations, one for each entry in array2. The outer loop also runs n times, once for each element of array1. Putting it all together, we are doing n operations, n times. This means a total of n² operations are occurring, so our big O notation is O(n²). That’s not very efficient, because if we have a list of, say, 10 entries, we are doing 10², or 100, operations. Yikes!
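We can verify that count directly. This sketch (not from the article) replaces the console.log in pairs with a counter, so we can see the n × n growth for ourselves:

```javascript
// Tally how many times the inner body of pairs would run.
// For two arrays of length n, the total is n * n = n² operations.
function countPairOperations(array1, array2) {
  let operations = 0
  for (const i of array1) {     // runs n times
    for (const j of array2) {   // runs n times per outer iteration
      operations += 1           // stands in for console.log(i, j)
    }
  }
  return operations
}
```

With two 10-element arrays, countPairOperations returns 100, matching the 10² we worked out above.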

Another common big O notation that we see is O(log n). Binary search is an example of an algorithm with this big O notation. O(log n) is very efficient, since the number of operations grows ever more slowly as n gets larger. There are also some really inefficient algorithms with a big O notation of O(n!), or n factorial. Uh oh. As you have seen, by being able to notate how fast or slow an algorithm or function runs, we can easily communicate to our fellow coders whether something is efficient compared to other options. Go out there and explore the big O notations for some of your favorite algorithms — you might be surprised!
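To see where that O(log n) comes from, here is a standard iterative binary search (a common textbook version, not code from the article). Each comparison throws away half of the remaining range, so a sorted array of n elements takes only about log₂(n) steps:

```javascript
// Standard binary search on a sorted array.
// Each iteration halves the search range, so it's O(log n).
function binarySearch(sorted, target) {
  let low = 0
  let high = sorted.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sorted[mid] === target) return mid // found: return its index
    if (sorted[mid] < target) low = mid + 1  // discard the lower half
    else high = mid - 1                      // discard the upper half
  }
  return -1 // target is not in the array
}
```

For a million-element array that’s only about 20 comparisons, versus a million for a plain O(n) loop — a dramatic difference that big O notation captures at a glance.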


Hello, my name is Emiko. I am an aspiring Software Engineer and student at Flatiron School.