Big O Notation - explained as easily as possible

The title's a clickbait, sucker! πŸ˜‚


If you want to learn the math behind Big O, read Analysing Algorithms: Worst Case Running Time.

Data Structures and Algorithms are about solving problems efficiently. A bad programmer solves their problems inefficiently, and a really bad programmer doesn't even know why their solution is inefficient. So the question is: how do you rank an algorithm's efficiency?

The simple answer to that question is the Big O Notation. How does that work? Let me explain!

Say you wrote a function which goes through every number in a list and adds it to a total_sum variable.

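A minimal sketch of such a function in Python (the original post showed this as a screenshot; the name sum_list is illustrative):

```python
def sum_list(numbers):
    """Add every number in the list to a running total."""
    total_sum = 0
    for number in numbers:
        total_sum += number  # one addition per element
    return total_sum
```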

If you consider "addition" to be one operation, then running this function on a list of 10 numbers costs 10 operations, running it on a list of 20 numbers costs 20 operations, and, in general, running it on a list of n numbers costs n operations.


Now let's assume you wrote another function that would return the first number in a list.

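A sketch of that second function (again, the post showed a screenshot; the name is illustrative):

```python
def first_number(numbers):
    """Return the first element -- a single operation, no matter how long the list is."""
    return numbers[0]
```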

Now, no matter how large the list is, this function will never cost more than one operation. Clearly, these two algorithms have different time complexities, that is, different relationships between the growth of the input size and the growth of the number of operations executed. We communicate these time complexities using Big O Notation.

Big O Notation is a mathematical notation used to classify algorithms according to how their run time or space requirements grow as the input size grows.

With n denoting the input size, common complexities (ranked from good to bad) are:

  • Constant - O(1)
  • Logarithmic - O(log n)
  • Linear - O(n)
  • n log n - O(n log n)
  • Quadratic - O(nΒ²)
  • Exponential - O(2ⁿ)
  • Factorial - O(n!)
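To make one of these concrete: binary search is the classic logarithmic algorithm (a sketch, not from the original article). Because it halves the remaining search range on every step, a sorted list of n elements needs at most about log₂(n) comparisons:

```python
def binary_search(sorted_numbers, target):
    """Return the index of target in sorted_numbers, or -1 if absent.

    Each iteration halves the search range, so this runs in O(log n).
    """
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_numbers[mid] == target:
            return mid
        elif sorted_numbers[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```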


Our first algorithm runs in O(n), meaning its number of operations grows linearly with the input size - in this case, the number of elements in the list. Our second algorithm doesn't depend on the input size at all, so it runs in constant time, O(1). Let's take a look at how many operations a program has to execute for an input size of n = 5 versus n = 50.

                n = 5    n = 50
  O(log n)        4         6
  O(n log n)     20       300

It might not matter when the input is small, but this gap gets very dramatic as the input size increases.


If n were 10000, a function that runs in O(log n) would take only about 14 operations, while a function that runs in O(n!) would set your computer on fire!
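You can check that figure yourself, assuming base-2 logarithms:

```python
import math

n = 10000
# Approximate operation count for an O(log n) algorithm at n = 10000.
log_ops = math.ceil(math.log2(n))
print(log_ops)  # 14 -- versus 10000 operations for O(n)
```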

For Big O Notation, we drop constants, so O(10n) and O(n/10) are both equivalent to O(n) because the graph is still linear.
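To see why the constant doesn't matter, here is a hypothetical operation counter: a loop doing ten units of work per element still grows linearly, and doubling the input always doubles the total work, whatever the constant is.

```python
def count_ops(n, ops_per_element=10):
    """Count operations for a loop doing a constant amount of work per element."""
    ops = 0
    for _ in range(n):
        ops += ops_per_element
    return ops

# Doubling n doubles the count regardless of ops_per_element,
# which is why O(10n) and O(n) describe the same linear growth.
```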


Big O Notation is also used for space complexity, which works the same way: it describes how the space an algorithm uses grows as n grows, i.e. the relationship between the growth of the input size and the growth of the space needed.
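For example, these two functions (an illustrative sketch) process the same input but differ in how much extra space they need:

```python
def squares_list(numbers):
    """Stores one result per input element -- O(n) extra space."""
    return [x * x for x in numbers]

def sum_of_squares(numbers):
    """Keeps only a single running total -- O(1) extra space."""
    total = 0
    for x in numbers:
        total += x * x
    return total
```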

So, yeah! This has been the simplest possible explanation of Big O Notation from my side, and I hope you enjoyed reading it. If you found this information helpful, share it with your friends on different social media platforms and consider clicking that follow button up there; it really motivates me to write more.


And if you REALLY liked the article, consider buying me a coffee 😊. If you have any suggestions or feedback, feel free to let me know in the comments.

Happy Programming!


Rohit Kumar:

Didn't understand anything, bro, but still came to like and comment.


Rohit Kumar:

Followed! I had no idea all this even existed here, but bhai said to follow, so follow it is. Conrad Reeves

Conrad Reeves:

Haha.. thanks bhai Rohit Kumar

Braydon Coyer:

Well done! Very easy to follow! Thanks for sharing!

Conrad Reeves:

Thank you. Please consider following (only if you want to ☺️) for more theoretical computer science blog posts.

Bhargav Ponnapalli:

This was a fantastic read! Thank you!

Conrad Reeves:

Thanks! Much appreciated πŸ™‚

Serzhan Akhmetov:

I'm confused, how did you get log(5) = 4 and log(50) = 6?

What base logarithm have you used?

Conrad Reeves:

I get your confusion. Clearly log(n) has no value until we fix the base. Since Big O is all about growth rates, the value of the function should not matter at all. The purpose of mentioning log(n) is to help people understand the rate of growth of a particular algorithm or code β€” it is only to help people see things in perspective.

I hope that clears your doubt.