# Algorithmic Complexity

Algorithmic complexity is a measure of how long an algorithm would take to complete given an input of size n. If an algorithm has to scale, it should compute the result within a finite and practical time bound even for large values of n. For this reason, complexity is calculated asymptotically as n approaches infinity. While complexity is usually in terms of time, sometimes complexity is also analyzed in terms of space, which translates to the algorithm's memory requirements.

Analysis of an algorithm's complexity is helpful when comparing algorithms or seeking improvements. Algorithmic complexity falls within a branch of theoretical computer science called computational complexity theory. It's important to note that we're concerned about the order of an algorithm's complexity, not the actual execution time in terms of milliseconds.

Algorithmic complexity is also called complexity or running time.

## Discussion

• Can you give real-world examples of various algorithmic complexities?

Suppose you're looking for a specific item in a long unsorted list. You'll probably compare the item against each entry in turn, so search time is proportional to the list size. Here complexity is said to be linear.

On the other hand, if you search for a word in a dictionary, the search is faster because the words are in sorted order: you can quickly decide whether to turn to earlier or later pages, repeatedly halving the pages left to consider. This is an example of logarithmic complexity.
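The dictionary lookup above can be sketched as a binary search over a sorted list (illustrative Python, not from the article's sources): each comparison halves the remaining range, so about $$log_2\,n$$ comparisons suffice.

```python
# Binary search sketch: each step halves the remaining range,
# so at most about log2(n) comparisons are needed (logarithmic complexity).
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # found: return its index
        elif items[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1                   # not found

words = ["ant", "bee", "cat", "dog", "elk", "fox"]
print(binary_search(words, "dog"))  # → 3
```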

If you're asked to pick out the first word in a dictionary, this operation is of constant time complexity, regardless of the number of words in the dictionary. Likewise, joining the end of a queue in a bank takes constant time regardless of how long the queue is.

Suppose you are given an unsorted list and asked to find all duplicates, then the complexity becomes quadratic. Checking for duplicates for one item is of linear complexity. If we do this for all items, complexity becomes quadratic. Similarly, if all people in a room are asked to shake hands with every other person, the complexity is quadratic.
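The all-pairs duplicate check can be sketched as follows (illustrative Python): the nested loops perform roughly $$n(n-1)/2$$ comparisons, hence quadratic complexity.

```python
# Quadratic sketch: every item is checked against every later item,
# roughly n*(n-1)/2 comparisons for n items.
def find_duplicates(items):
    dups = set()
    n = len(items)
    for i in range(n - 1):          # outer loop: n-1 iterations
        for j in range(i + 1, n):   # inner loop: up to n-1 iterations
            if items[i] == items[j]:
                dups.add(items[i])
    return dups

print(find_duplicates([3, 1, 4, 1, 5, 9, 2, 6, 5]))  # → {1, 5}
```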

• What notations are used to represent algorithmic complexity?

Big-O notation is the prevalent notation to represent algorithmic complexity. It gives an upper bound on complexity and hence it signifies the worst-case performance of the algorithm. With such a notation, it's easy to compare different algorithms because the notation tells clearly how the algorithm scales when input size increases. This is often called the order of growth.

Constant runtime is represented by $$O(1)$$; linear growth is $$O(n)$$; logarithmic growth is $$O(\log n)$$; log-linear growth is $$O(n \log n)$$; quadratic growth is $$O(n^2)$$; exponential growth is $$O(2^n)$$; factorial growth is $$O(n!)$$. Their orders of growth can also be compared from best to worst:

$$O(1) < O(\log n) < O(\sqrt{n}) < O(n) < O(n \log n) < \\ O(n^2) < O(n^3) < O(2^n) < O(10^n) < O(n!)$$

In complexity analysis, only the dominant term is retained. For example, if an algorithm requires $$2n^3+\log n+4$$ operations, its order is said to be $$O(n^3)$$ since $$2n^3$$ is the dominant term. Constants and scaling factors are ignored since we are concerned only about asymptotic behaviour.
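A quick numerical check (illustrative Python) shows why the dominant term alone determines the order: as n grows, the ratio of the full count to the dominant term approaches 1.

```python
# As n grows, the dominant term 2n^3 overwhelms log n and the constant,
# which is why only the order O(n^3) is retained.
import math

def ops(n):
    return 2 * n**3 + math.log(n) + 4

for n in (10, 100, 1000):
    print(n, ops(n) / (2 * n**3))   # ratio approaches 1
```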

Audrey Nasar gives formal definitions of Big-O. Wikipedia lists orders of common functions.

• What does it mean to state best-case, worst-case and average time complexity of algorithms?

Let's take the example of searching for an item sequentially within a list of unsorted items. If we're lucky, the item may occur at the start of the list. If we're unlucky, it may be the last item in the list. The former is called best-case complexity and the latter is called worst-case complexity. If the searched item is always the first one, then complexity is $$O(1)$$; if it's always the last one, then complexity is $$O(n)$$. We can also calculate the average complexity, which will turn out to be $$O(n)$$. The term "complexity" normally refers to worst-case complexity.
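The best and worst cases can be made concrete by counting comparisons in a sequential search (illustrative Python):

```python
# Sequential search, counting comparisons to expose best and worst cases.
def seq_search(items, target):
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
print(seq_search(data, 0))    # best case: (0, 1) — one comparison
print(seq_search(data, 99))   # worst case: (99, 100) — n comparisons
```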

Mathematically, different notations are defined (example is for linear complexity):

• Worst-case or upper bound: Big-O: $$O(n)$$
• Best-case or lower bound: Big-Omega: $$\Omega(n)$$
• Average-case: Big-Theta: $$\Theta(n)$$
• Non-tight upper bound: Little-o: $$o(n)$$
• Non-tight lower bound: Little-omega: $$\omega(n)$$

When an upper or lower bound doesn't coincide with the average complexity, we can call it a non-tight bound; little-o and little-omega express such strict bounds.

As an example, Quicksort's complexity is $$\Omega(n \log n)$$, $$\Theta(n \log n)$$ and $$O(n^2)$$.

There's also amortized complexity in which complexity is calculated by averaging over a sequence of operations.
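A classic example of amortized analysis is appending to a dynamic array. The sketch below (illustrative Python, with a hypothetical `DynamicArray` class that only tracks costs) counts the copy work done by capacity doublings: an individual resize is $$O(n)$$, but averaged over a long sequence of appends the cost per append is constant.

```python
# Amortized analysis sketch: appending to a dynamic array occasionally
# triggers an O(n) resize, but averaged over many appends the cost is O(1).
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0          # total elements copied during resizes

    def append(self, value):
        if self.size == self.capacity:
            self.capacity *= 2   # double capacity: all elements are copied
            self.copies += self.size
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
# Total copy work stays below 2n, so amortized cost per append is constant.
print(arr.copies / arr.size)
```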

• Why should we care about an algorithm's performance when processors are getting faster and memories are getting cheaper?

Complexity analysis doesn't concern itself with actual execution time, which depends on processor speed, instruction set, disk speed, compiler, etc. For instance, the same algorithm written in assembly will run faster than in Python. Programming languages, hardware and memories are external factors. Complexity is about the algorithm itself, the way it processes the data to solve a given problem. It's a software design concern at the "idea level".

It's possible to have an inefficient algorithm that's executed on high-end hardware to give a result quickly. However, with large input datasets, the limitations of the hardware will become apparent. Thus, it's desirable to optimize the algorithm first before thinking about hardware upgrades.

Suppose your computer can process 10,000 operations/sec. An algorithm of order $$O(n^4)$$ would take 1 sec to process 10 items but more than 3 years to process 1,000 items. Comparatively, a more efficient algorithm of order $$O(n^2)$$ would take only 100 secs for 1,000 items. With even larger inputs, better hardware cannot compensate for algorithmic inefficiency. It's for this reason algorithmic complexity is defined in terms of asymptotic behaviour.
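These figures can be reproduced directly (illustrative Python; the rate of 10,000 operations/sec is the assumption stated above, and constants are ignored as usual):

```python
# Reproducing the estimate: an O(n^4) algorithm needs roughly n^4
# operations, an O(n^2) algorithm roughly n^2.
RATE = 10_000  # operations per second (assumed)

def seconds(ops):
    return ops / RATE

print(seconds(10**4))                         # n=10, O(n^4): 1 second
print(seconds(1000**4) / (3600 * 24 * 365))   # n=1000, O(n^4): over 3 years
print(seconds(1000**2))                       # n=1000, O(n^2): 100 seconds
```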

• Are there techniques to figure out the complexity of algorithms?

Instead of looking for exact execution times, we should evaluate the number of high-level instructions in relation to the input size.

A single loop that iterates through the input is linear. If there's a loop within a loop, with each loop iterating through the input, then the algorithm is quadratic. It doesn't matter if the loops process only alternate items or skip a fixed number of items. Let's recall that complexity ignores constants and scaling factors. Likewise, a loop within a loop, followed by another loop, is quadratic, since we need to consider only the dominant term.

A recursive function that calls itself n times is linear, provided other operations within the function don't depend on input size. However, the naive recursive implementation of the Fibonacci series is exponential.
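The contrast can be seen by counting calls (illustrative Python): the naive recursion branches twice per call and is exponential, while memoization reduces the distinct calls to roughly n.

```python
# Counting calls shows why naive recursive Fibonacci is exponential,
# while memoization makes it linear in n.
from functools import lru_cache

calls = 0

def fib_naive(n):
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(20)
print(calls)          # tens of thousands of calls for n=20
print(fib_memo(20))   # → 6765, with only about n distinct calls
```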

A search algorithm that partitions the input into two parts and discards one of them at each iteration, is logarithmic. An algorithm such as Mergesort that partitions the input into halves at each iteration, plus does a merge operation in linear time at each iteration, has a log-linear complexity.
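A Mergesort sketch (illustrative Python) makes the log-linear structure visible: the input is halved about $$log_2\,n$$ times, and each level does linear-time merging.

```python
# Mergesort sketch: halving gives log2(n) levels, and each level
# does O(n) merge work, for O(n log n) overall.
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort the left half
    right = merge_sort(items[mid:])   # sort the right half
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):   # linear-time merge
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # → [1, 2, 5, 7, 9]
```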

• If an algorithm is inefficient, does that mean that we can't use it?

Polynomial complexity algorithms of order $$O(n^c)$$, for c > 1, may be acceptable. They can be used for inputs up to thousands of items. Anything exponential can probably work only for inputs of fewer than about 20 items.

Algorithms such as Quicksort that have worst-case complexity of $$O(n^2)$$ rarely experience worst-case inputs and often obey $$\Theta(n \log n)$$ in practice. In some cases, we can preprocess the input so that worst-case scenarios don't occur. Alternatively, we can accept sub-optimal solutions so that complexity is reduced to polynomial time.

In practice, a linear algorithm can perform worse than a quadratic one if large constants are involved and n is comparable to these constants. It's also important to analyze every operation of an algorithm to ensure that non-trivial operations are not hidden or abstracted away within libraries.

• Since algorithmic complexity is about algorithms, is it relevant to talk about data structures?

Data structures only store data but the algorithmic complexity comes into consideration when we operate on them. Operations such as insertion, deletion, searching and indexing need to be analyzed. The intent is to choose the right data structures so that complexity is reduced.

For example, accessing an array by index has constant complexity whereas the same operation on a linked list has linear complexity. Searching by key in a hash table incurs constant average complexity but the same operation is linear with stacks, queues, arrays and linked lists. A more detailed discussion on what data structures to use is given by Svetlin Nakov and others.
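The difference is easy to observe (illustrative Python, using the built-in `list` and `set` as stand-ins for an array and a hash table): membership in a hash table is constant time on average, while a list must be scanned linearly.

```python
# Data structure choice in action: membership test in a set (hash table)
# vs. a list (linear scan) for the same data.
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Look up the worst-case element (last in the list) repeatedly.
t_list = timeit.timeit(lambda: (n - 1) in as_list, number=100)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=100)
print(t_list > t_set)   # the linear scan is far slower
```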

• Can you state the complexity of well-known sorting algorithms?

The simplest sorting algorithm is probably Bubblesort but it's quadratic in the average case and hence not efficient. Better alternatives are those with log-linear complexity: Quicksort, Mergesort, Heapsort, etc. If the list is already sorted, the best-case complexity occurs with Bubblesort, Timsort, Insertionsort and Cubesort, all completing in linear time.

It's been noted that best and worst cases rarely occur in practice. The average case is based on a model of the input distribution, which could also be a random sample. Analyzing these averages, or the abstract basic operations involved, can help us pick the algorithm best suited to a specific problem.

• Can you give the complexity of some important algorithms?

Here's a selection:

• Fast Fourier Transform: $$O(n\,log\,n)$$
• Multiply two n-digit numbers using Karatsuba algorithm: $$O(n^{1.59})$$
• Matrix multiplication due to Coppersmith and Winograd: $$O(n^{2.496})$$
• Prime recognition of an n-digit integer due to Adleman, Pomerance and Rumley: $$n^{O(log\,log\,n)}$$
• Gaussian elimination: $$O(n^{3})$$ arithmetic operations, though the bit complexity of its operands can grow exponentially, as $$O(2^n)$$
• GCD(a, b) by Euclid's algorithm: $$O(\log(a+b))$$ but $$O((\log(a+b))^2)$$ bit complexity when integers a and b are large

Bit complexity matters when integers exceed the 64-bit machine capability. In these cases, when arbitrary precision is used, operations such as division and modulo arithmetic are no longer trivial and their complexity must be accounted for. From a theoretical perspective, this doesn't matter to computer scientists but it matters to programmers who have to work within machine limits.
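Euclid's algorithm itself is short (illustrative Python). The logarithmic step count above holds as long as the modulo operations on a and b are constant time, which is exactly the assumption that breaks down for arbitrary-precision integers.

```python
# Euclid's algorithm: every two iterations at least halve the larger
# operand, so the number of steps is O(log(a+b)).
def gcd(a, b):
    while b:
        a, b = b, a % b   # replace the pair with (b, a mod b)
    return a

print(gcd(252, 105))  # → 21
```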

## Milestones

1864

Charles Babbage, while building his Analytical Engine, predicts the importance of studying the performance of algorithms.

1894

Mathematician Paul Bachmann defines the Big-O notation. It originally meant "order of". Decades later, Donald Knuth calls it the capital omicron.

1960

The theory of computational complexity is born in the 1960s. This is also the decade when researchers recognize problems for which no polynomial-time algorithm is known, later formalized in the theory of NP-completeness.

1975

Tarjan makes the case that when designing data structures, focusing on worst case running time is less important than amortized running time, that is, performance averaged over operations on a long sequence of input. His later contributions include the splay tree (1980) and the Fibonacci heap, both achieving good amortized performance.

1976

Donald Knuth introduces new notations and clarifies the meaning of $$O$$, $$\Omega$$ and $$\Theta$$.

1998

R. Libeskind-Hadas introduces the term oblivious algorithm that's applied to an algorithm whose complexity is independent of the input structure. In other words, the best-case, worst-case and average complexity of the algorithm are all the same.

## Sample Code

```csharp
// Source: Nakov et al., 2013, ch. 19.

// Linear complexity: O(N)
int FindMaxElement(int[] array)
{
    int max = int.MinValue;
    for (int i = 0; i < array.Length; i++)  // start at 0 so the first element is considered
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}

// Quadratic complexity: O(N*N)
int FindInversions(int[] array)
{
    int inversions = 0;
    for (int i = 0; i < array.Length - 1; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            if (array[i] > array[j])
            {
                inversions++;
            }
        }
    }
    return inversions;
}

// Quadratic complexity: O(N*M)
// The innermost loop executes N*min(N,M) times; thus, this algorithm is not of cubic complexity.
long SumMN(int n, int m)
{
    long sum = 0;
    for (int x = 1; x <= n; x++)
    {
        for (int y = 1; y <= m; y++)
        {
            if (x == y)
            {
                for (int i = 1; i <= n; i++)
                {
                    sum += i * x * y;
                }
            }
        }
    }
    return sum;
}

// Exponential complexity: O(2^n)
long Fibonacci(int n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return Fibonacci(n - 1) + Fibonacci(n - 2);
    }
}
```

## Cite As

Devopedia. 2022. "Algorithmic Complexity." Version 8, February 19. Accessed 2022-09-22. https://devopedia.org/algorithmic-complexity