Algorithmic Complexity
 Summary

Algorithmic complexity is a measure of how long an algorithm would take to complete given an input of size n. If an algorithm has to scale, it should compute the result within a finite and practical time bound even for large values of n. For this reason, complexity is calculated asymptotically as n approaches infinity. While complexity is usually in terms of time, sometimes complexity is also analyzed in terms of space, which translates to the algorithm's memory requirements.^{}
Analysis of an algorithm's complexity is helpful when comparing algorithms or seeking improvements.^{} Algorithmic complexity falls within a branch of theoretical computer science called computational complexity theory.^{} It's important to note that we're concerned about the order of an algorithm's complexity, not the actual execution time in terms of milliseconds.
Algorithmic complexity is also called complexity or running time.^{}
Discussion

Can you give real-world examples of various algorithmic complexities? Suppose you're looking for a specific item in a long unsorted list: you'll probably compare against each item. Search time is proportional to the list size, so the complexity here is said to be linear.^{}
On the other hand, if you search for a word in a dictionary, the search will be faster because the words are in sorted order: knowing the order, you can quickly decide whether to turn to earlier or later pages. This is an example of logarithmic complexity.^{}
If you're asked to pick out the first word in a dictionary, this operation is of constant time complexity, regardless of the number of words in the dictionary. Likewise, joining the end of a queue in a bank is of constant complexity regardless of how long the queue is.^{}
Suppose you're given an unsorted list and asked to find all duplicates: checking one item against the rest is of linear complexity, and doing this for every item makes the overall complexity quadratic. Similarly, if all people in a room are asked to shake hands with every other person, the complexity is quadratic.^{}
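These three cases can be sketched in Python (a minimal illustration with hypothetical function names, not code from a particular library):

```python
def linear_search(items, target):
    """O(n): compare the target against each item of an unsorted list."""
    for item in items:
        if item == target:
            return True
    return False

def first_item(items):
    """O(1): constant time, regardless of list size."""
    return items[0]

def find_duplicates(items):
    """O(n^2): check every item against every other item."""
    dups = set()
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                dups.add(items[i])
    return dups
```

Doubling the list size doubles the work in `linear_search` but quadruples it in `find_duplicates`, while `first_item` is unaffected.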

What notations are used to represent algorithmic complexity? Big-O notation is the most widely used notation for algorithmic complexity. It gives an upper bound on complexity and hence signifies the worst-case performance of the algorithm. Such a notation makes it easy to compare different algorithms because it tells clearly how the algorithm scales as input size increases. This is often called the order of growth.
Constant runtime is represented by \(O(1)\); linear growth is \(O(n)\); logarithmic growth is \(O(log\,n)\); log-linear growth is \(O(n\,log\,n)\); quadratic growth is \(O(n^2)\); exponential growth is \(O(2^n)\); factorial growth is \(O(n!)\). Their orders of growth can also be compared from best to worst:^{}
\(O(1) < O(log\,n) < O(\sqrt{n}) < O(n) < O(n\,log\,n) < \\ O(n^2) < O(n^3) < O(2^n) < O(10^n) < O(n!)\)
In complexity analysis, only the dominant term is retained. For example, if an algorithm requires \(2n^3+log\,n+4\) operations, its order is said to be \(O(n^3)\) since \(2n^3\) is the dominant term. Constants and scaling factors are ignored since we're concerned only with asymptotic behaviour.
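A quick numerical check of why dropping lower-order terms is safe (the operation count \(2n^3+log\,n+4\) is the example from above):

```python
import math

def exact_ops(n):
    """Total operations: 2n^3 + log n + 4."""
    return 2 * n**3 + math.log(n) + 4

def dominant_only(n):
    """Keep just the dominant term, 2n^3."""
    return 2 * n**3

# The ratio approaches 1 as n grows: lower-order terms vanish.
for n in (10, 100, 1000):
    print(n, exact_ops(n) / dominant_only(n))
```

Already at \(n=1000\) the lower-order terms contribute less than one part in a billion.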
Audrey Nasar gives formal definitions of Big-O.^{} Wikipedia lists orders of common functions.^{}

What does it mean to state best-case, worst-case and average time complexity of algorithms? Let's take the example of searching for an item sequentially within a list of unsorted items. If we're lucky, the item may occur at the start of the list. If we're unlucky, it may be the last item in the list. The former is called best-case complexity and the latter is called worst-case complexity. If the searched item is always the first one, then complexity is \(O(1)\); if it's always the last one, then complexity is \(O(n)\). We can also calculate the average complexity, which will turn out to be \(O(n)\).^{} The term "complexity" normally refers to worst-case complexity.^{}
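The sequential search example can be instrumented to show the best and worst cases directly (a sketch; the comparison counter is added purely for illustration):

```python
def sequential_search(items, target):
    """Return (found, number of comparisons made)."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            return True, comparisons
    return False, comparisons

data = list(range(100))
assert sequential_search(data, 0) == (True, 1)      # best case: O(1)
assert sequential_search(data, 99) == (True, 100)   # worst case: O(n)
assert sequential_search(data, -1) == (False, 100)  # absent: also O(n)
```

Averaged over all positions, a successful search makes about \(n/2\) comparisons, which is still \(O(n)\).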
Mathematically, different notations are defined (example is for linear complexity):^{}
 Worst-case or upper bound: Big-O: \(O(n)\)
 Best-case or lower bound: Big-Omega: \(\Omega(n)\)
 Average-case: Big-Theta: \(\Theta(n)\)
 Non-tight upper bound: Little-o: \(o(n)\)
 Non-tight lower bound: Little-omega: \(\omega(n)\)
When upper or lower bounds don't coincide with the average complexity, we can call them non-tight bounds.^{}
As an example, Quicksort's complexity is \(\Omega(n\,log\,n)\), \(\Theta(n\,log\,n)\) and \(O(n^2)\).^{}
There's also amortized complexity in which complexity is calculated by averaging over a sequence of operations.^{}

Why should we care about an algorithm's performance when processors are getting faster and memories are getting cheaper? Complexity analysis doesn't concern itself with actual execution time, which depends on processor speed, instruction set, disk speed, compiler, and so on.^{} For instance, the same algorithm written in assembly will run faster than in Python.^{} Programming languages, hardware and memory are external factors. Complexity is about the algorithm itself: the way it processes the data to solve a given problem. It's a software design concern at the "idea level".^{}
It's possible to have an inefficient algorithm that's executed on highend hardware to give a result quickly. However, with large input datasets, the limitations of the hardware will become apparent. Thus, it's desirable to optimize the algorithm first before thinking about hardware upgrades.
Suppose your computer can process 10,000 operations/sec. An algorithm of order \(O(n^4)\) would take 1 sec to process 10 items but more than 3 years to process 1,000 items. Comparatively, a more efficient algorithm of order \(O(n^2)\) would take only 100 secs for 1,000 items.^{} With even larger inputs, better hardware cannot compensate for algorithmic inefficiency. It's for this reason that algorithmic complexity is defined in terms of asymptotic behaviour.
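The arithmetic behind these figures can be checked in a few lines (the rate and cost functions are the hypothetical ones from the example above):

```python
RATE = 10_000  # hypothetical machine: operations per second

def runtime_seconds(ops):
    """Convert an operation count into seconds at the assumed rate."""
    return ops / RATE

SECONDS_PER_YEAR = 3600 * 24 * 365

# O(n^4) algorithm: fine for n = 10, hopeless for n = 1000.
assert runtime_seconds(10**4) == 1.0                 # n = 10: 1 second
years = runtime_seconds(1000**4) / SECONDS_PER_YEAR  # n = 1000
assert 3 < years < 3.3                               # more than 3 years

# O(n^2) algorithm at n = 1000: only 100 seconds.
assert runtime_seconds(1000**2) == 100.0
```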

Are there techniques to figure out the complexity of algorithms? Instead of looking for exact execution times, we should evaluate the number of high-level instructions in relation to the input size.^{}
A single loop that iterates through the input is linear. If there's a loop within a loop, each iterating through the input, the algorithm is quadratic. It doesn't matter if the loops process only alternate items or skip a fixed number of items: recall that complexity ignores constants and scaling factors. Likewise, a loop within a loop, followed by another loop, is quadratic, since we need to consider only the dominant term.^{}
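A small counter makes the point about skipped items concrete (an illustrative sketch, not from the original article):

```python
def count_ops(n, step):
    """Count inner-loop iterations of a doubly nested loop with a given step."""
    ops = 0
    for i in range(0, n, step):
        for j in range(0, n, step):
            ops += 1
    return ops

# Step 1 gives n^2 iterations; step 2 gives (n/2)^2 = n^2/4.
# That's only a constant factor, so both are quadratic.
assert count_ops(100, 1) == 10_000
assert count_ops(100, 2) == 2_500
```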
A recursive function that calls itself n times is linear, provided other operations within the function don't depend on input size. However, a recursive implementation of Fibonacci series is exponential.^{}
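The Fibonacci contrast can be demonstrated by counting calls (the counter argument is added purely for illustration):

```python
def fib_recursive(n, counter):
    """Naive recursion: the call count grows exponentially in n."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_recursive(n - 1, counter) + fib_recursive(n - 2, counter)

def fib_iterative(n):
    """One pass over n steps: linear."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

calls = [0]
assert fib_recursive(20, calls) == fib_iterative(20) == 6765
assert calls[0] == 21891   # vs. just 20 loop iterations
```

Both return the same answer, but the recursive version makes tens of thousands of calls for \(n=20\), while the iterative one takes 20 steps.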
A search algorithm that partitions the input into two parts and discards one of them at each iteration is logarithmic. An algorithm such as Mergesort, which halves the input at each level and also does a merge operation in linear time at each level, has log-linear complexity.^{}
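Binary search is the classic discard-half example; counting its iterations shows the logarithmic growth (the iteration counter is added for illustration):

```python
import math

def binary_search(sorted_items, target):
    """Halve the search range each iteration: O(log n)."""
    lo, hi, iterations = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        iterations += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, iterations
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, iterations

data = list(range(1_000_000))
_, iters = binary_search(data, 999_999)
# About 20 iterations for a million items, not a million.
assert iters <= math.ceil(math.log2(len(data))) + 1
```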

If an algorithm is inefficient, does that mean that we can't use it? Polynomial complexity algorithms of order \(O(n^c)\), for \(c > 1\), may be acceptable.^{} They can be used for inputs up to thousands of items. Anything exponential can probably work only for inputs of fewer than about 20 items.^{}
Algorithms such as Quicksort that have a worst-case complexity of \(O(n^2)\) rarely experience worst-case inputs and often obey \(\Theta(n\,log\,n)\) in practice.^{} In some cases, we can preprocess the input so that worst-case scenarios don't occur. Alternatively, we can accept suboptimal solutions so that complexity is reduced to polynomial time.^{}
In practice, a linear algorithm can perform worse than a quadratic one if large constants are involved and n is comparable to these constants. It's also important to analyze every operation of an algorithm to ensure that nontrivial operations are not hidden or abstracted away within libraries.^{}
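The effect of large constants can be made concrete with hypothetical cost functions (the constant 10,000 below is an assumed value, chosen only to show the crossover):

```python
def linear_cost(n):
    """A 'linear' algorithm hiding a large constant factor."""
    return 10_000 * n

def quadratic_cost(n):
    """A plain quadratic algorithm with no hidden constant."""
    return n * n

# For small n the quadratic algorithm is actually cheaper...
assert quadratic_cost(100) < linear_cost(100)
# ...and loses only once n exceeds the constant (crossover at n = 10,000).
assert quadratic_cost(100_000) > linear_cost(100_000)
```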

Since algorithmic complexity is about algorithms, is it relevant to talk about data structures? Data structures only store data, but algorithmic complexity comes into play when we operate on them. Operations such as insertion, deletion, searching and indexing need to be analyzed. The intent is to choose the right data structures so that complexity is reduced.
For example, accessing an array by index has constant complexity whereas the same operation on a linked list has linear complexity. Searching by key in a hash table incurs constant average complexity, but this operation is linear with stacks, queues, arrays and linked lists. A more detailed discussion of which data structures to use is given by Svetlin Nakov and others.^{}
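Python's built-in types illustrate this: `list` is array-backed while `dict` is a hash table, so the same membership question has very different costs.

```python
# list: array-backed; dict: hash table.
data_list = list(range(1_000_000))
data_dict = dict.fromkeys(data_list)

# Indexing an array: O(1), independent of size.
assert data_list[999_999] == 999_999

# Key lookup in a hash table: O(1) on average.
assert 999_999 in data_dict

# Membership in a list: same answer, but an O(n) scan.
assert 999_999 in data_list
```

Timing the two membership tests on growing inputs would show the dict lookup staying flat while the list scan grows linearly.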

Can you state the complexity of well-known sorting algorithms? The simplest sorting algorithm is probably Bubblesort, but it's quadratic in the average case and hence not efficient. Better alternatives are those with log-linear complexity: Quicksort, Mergesort, Heapsort, etc. If the list is already sorted, the best case occurs with Bubblesort, Timsort, Insertionsort and Cubesort, all completing in linear time.^{}
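A sketch of Bubblesort with an early-exit flag shows why its best case on sorted input is linear (the pass counter is added for illustration):

```python
def bubble_sort(items):
    """Return (sorted copy, number of passes). With the early-exit
    flag, an already-sorted list needs one pass: best case O(n)."""
    items = list(items)
    n = len(items)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            return items, passes

assert bubble_sort([1, 2, 3, 4]) == ([1, 2, 3, 4], 1)   # best case
assert bubble_sort([4, 3, 2, 1]) == ([1, 2, 3, 4], 4)   # worst case
```

On reversed input, the number of passes grows with \(n\) and each pass does \(O(n)\) comparisons, giving the quadratic worst case.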
It's been noted that best and worst cases rarely occur. The average case is based on an input distribution model, which could be a random sample. Analyzing these averages over abstract basic operations can help us pick the algorithm best suited to a specific problem.^{}

Can you give the complexity of some important algorithms? Here's a selection:^{}
 Fast Fourier Transform: \(O(n\,log\,n)\)
 Multiply two n-digit numbers using the Karatsuba algorithm: \(O(n^{1.59})\)
 Matrix multiplication due to Coppersmith and Winograd: \(O(n^{2.496})\)
 Prime recognition of an n-digit integer due to Adleman, Pomerance and Rumley: \(n^{O(log\,log\,n)}\)
 Gaussian elimination: \(O(n^{3})\) but \(O(x^{2^n})\) bit complexity of its operands^{}
 GCD(a, b) by Euclid's algorithm: \(O(log\,(a+b))\) but \(O({(log\,(a+b))}^2)\) bit complexity when integers a and b are large^{}
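Euclid's algorithm from the list above can be sketched in a few lines:

```python
def gcd(a, b):
    """Euclid's algorithm: O(log(a+b)) iterations with machine-size ints.
    Each remainder step shrinks the pair quickly, giving the log bound."""
    while b:
        a, b = b, a % b
    return a

assert gcd(1071, 462) == 21
assert gcd(270, 192) == 6
```

With arbitrary-precision integers, each `%` operation is itself non-trivial, which is where the squared-log bit complexity quoted above comes from.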
Bit complexity matters when integers exceed the capacity of 64-bit machine words. In such cases, when arbitrary-precision arithmetic is used, operations such as division and modulo are no longer trivial and their complexity must be accounted for. From a theoretical perspective this doesn't matter to computer scientists, but it matters to programmers who have to work within machine limits.
Milestones
Charles Babbage, while building his Analytical Engine, predicts the importance of studying the performance of algorithms.^{}
Mathematician Paul Bachmann defines the Big-O notation.^{} It originally meant "order of". Decades later, Donald Knuth calls it the capital omicron.^{}
The theory of computational complexity is born in the 1960s.^{} This is also the decade when problems with no known polynomial-time solution (NP problems) are recognized.^{}
Tarjan makes the case that when designing data structures, focusing on worst-case running time is less important than amortized running time, that is, performance averaged over a long sequence of operations. His later contributions include the splay tree (1980) and the Fibonacci heap, both achieving good amortized performance.^{}
Donald Knuth introduces new notations and clarifies the meaning of \(O\), \(\Omega\) and \(\Theta\).^{}
R. Libeskin-Hadas introduces the term oblivious algorithm for an algorithm whose complexity is independent of the input structure. In other words, the best-case, worst-case and average complexity of the algorithm are all the same.^{}
Sample Code
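A small Python illustration contrasting linear and binary search by counting comparisons (function names are illustrative, not from a particular library):

```python
import math

def linear_search_steps(items, target):
    """Worst-case O(n) comparisons on an unsorted list."""
    for steps, item in enumerate(items, start=1):
        if item == target:
            return steps
    return len(items)

def binary_search_steps(items, target):
    """Worst-case O(log n) comparisons on a sorted list."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Compare step counts as the input grows.
for n in (1_000, 1_000_000):
    data = list(range(n))
    print(n, linear_search_steps(data, n - 1), binary_search_steps(data, n - 1))
```

For a million items, the linear scan needs a million comparisons while binary search needs about twenty: the order of growth, not the hardware, dominates.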
References
 Adamchik, Victor S. 2009. "Algorithmic Complexity." CMU. Accessed 2017-12-20.
 Big-O Cheat Sheet. 2016. "Know Thy Complexities!" August 23. Accessed 2017-12-20.
 CS7800 12F. 2012. "Orders of Growth." The CS7800 12F Homepage, Northeastern University, College of Computer and Information Science. December 9. Accessed 2017-12-20.
 Clobo. 2017. "Learning and Understanding Big-O Notation." Topcoder. June 17. Accessed 2017-12-20.
 Codility. 2017. "Euclidean algorithm." Accessed 2017-12-20.
 Cook, Stephen A. 1983. "An Overview of Computational Complexity." Comms of the ACM, June, vol. 26, no. 6, pp. 401-408. Accessed 2017-12-20.
 Exam Crazy. 2017. "The big O notation: Order of growth of an algorithm." Exam Crazy. Accessed 2017-12-20.
 Fang, Xin Gui, and George Havas. 1997. "On the worst-case complexity of integer Gaussian elimination." Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, ISSAC '97, pp. 28–31. Accessed 2017-12-20.
 King, V. 2022. "Robert (Bob) Endre Tarjan." Citation, Turing Award 1986, Association for Computing Machinery. Accessed 2022-02-19.
 Knuth, Donald E. 1976. "Big Omicron and Big Omega and Big Theta." SIGACT News. April-June. Accessed 2017-12-20.
 Nakov, Svetlin, et al. 2013. "Fundamentals of Computer Programming with C#." Accessed 2017-12-20.
 Nasar, Audrey A. 2016. "The history of Algorithmic complexity." The Mathematics Enthusiast, vol. 13, no. 3, pp. 217-242. Accessed 2017-12-20.
 Sebastian. 2012. "Why is quicksort better than other sorting algorithms in practice?" StackExchange. March 7. Accessed 2017-12-20.
 Studytonight. 2017. "Time Complexity of Algorithms." Accessed 2017-12-20.
 Tarjan, R. E. 1983. "Data structures and network algorithms." CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM. Accessed 2022-02-19.
 Vilches, Bruno. 2015. "Introduction to Algorithm Complexity." Belatrix Software. September 15. Accessed 2017-12-20.
 Wikipedia. 2017a. "Big O notation." December 18. Accessed 2017-12-20.
 Wikipedia. 2017b. "Quicksort." December 20. Accessed 2017-12-20.
 Wikipedia. 2017c. "Computational complexity theory." November 25. Accessed 2017-12-20.
 Zindros, Dionysis. 2012. "A Gentle Introduction to Algorithm Complexity Analysis." Discrete Mathematics, National Technical University of Athens. Accessed 2017-12-20.
Further Reading
 Wessen, Ken. 2016. "Not just a matter of time: Measuring complexity." Plus Magazine. October 12. Accessed 2017-12-22.
 Nasar, Audrey A. 2016. "The history of Algorithmic complexity." The Mathematics Enthusiast, vol. 13, no. 3, pp. 217-242. Accessed 2017-12-20.
 Clobo. 2017. "Learning and Understanding Big-O Notation." Topcoder. June 17. Accessed 2017-12-20.
 HackerRank. 2016. "Big O Notation." YouTube. September 27. Accessed 2017-12-20.
 Fortnow, Lance, and Steve Homer. 2002. "A Short History of Computational Complexity." IEEE Conference on Computational Complexity. Accessed 2017-12-20.
See Also
 Big O Notation
 Order of Approximation
 Binary Search
 Kolmogorov Complexity
 P vs NP Problem