Algorithmic Complexity

Algorithmic complexity is a measure of how long an algorithm would take to complete given an input of size n. If an algorithm has to scale, it should compute the result within a finite and practical time bound even for large values of n. For this reason, complexity is calculated asymptotically as n approaches infinity. While complexity is usually in terms of time, sometimes complexity is also analyzed in terms of space, which translates to the algorithm's memory requirements.

Analysis of an algorithm's complexity is helpful when comparing algorithms or seeking improvements. Algorithmic complexity falls within a branch of theoretical computer science called computational complexity theory. It's important to note that we're concerned about the order of an algorithm's complexity, not the actual execution time in terms of milliseconds.

Algorithmic complexity is also called complexity or running time.

Discussion

  • Can you give real-world examples of various algorithmic complexities?

    Suppose you're looking for a specific item in a long unsorted list: you'll probably compare the item against each entry in turn. Search time is proportional to the list size. Here complexity is said to be linear.

    On the other hand, if you search for a word in a dictionary, the search is faster because the words are in sorted order: knowing the order, you can quickly decide whether to turn to earlier or later pages. This is an example of logarithmic complexity.

    If you're asked to pick out the first word in a dictionary, this operation has constant time complexity, regardless of the number of words in the dictionary. Likewise, joining the end of a queue in a bank is of constant complexity regardless of how long the queue is.

    Suppose you are given an unsorted list and asked to find all duplicates, then the complexity becomes quadratic. Checking for duplicates for one item is of linear complexity. If we do this for all items, complexity becomes quadratic. Similarly, if all people in a room are asked to shake hands with every other person, the complexity is quadratic.
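
    To make these examples concrete, here's a minimal C# sketch in the style of the Sample Code section below (the method names are illustrative, not from the cited sources): scanning an unsorted array is linear, while picking the first element is constant time.

    // Linear complexity: O(n). In the worst case every element
    // is examined before we know whether the item is present.
    bool ContainsItem(int[] items, int target)
    {
        for (int i = 0; i < items.Length; i++)
        {
            if (items[i] == target)
            {
                return true;
            }
        }
        return false;
    }

    // Constant complexity: O(1). The first element is returned
    // regardless of how many elements the array holds.
    // Assumes a non-empty array.
    int FirstItem(int[] items)
    {
        return items[0];
    }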

  • What notations are used to represent algorithmic complexity?
    Order of growth of algorithms specified in Big-O notation. Source: Big-O Cheat Sheet, 2016.

    Big-O notation is the prevalent notation to represent algorithmic complexity. It gives an upper bound on complexity and hence it signifies the worst-case performance of the algorithm. With such a notation, it's easy to compare different algorithms because the notation tells clearly how the algorithm scales when input size increases. This is often called the order of growth.

    Constant runtime is represented by \(O(1)\); linear growth is \(O(n)\); logarithmic growth is \(O(log\,n)\); log-linear growth is \(O(n\,log\,n)\); quadratic growth is \(O(n^2)\); exponential growth is \(O(2^n)\); factorial growth is \(O(n!)\). Their orders of growth can also be compared from best to worst:

    \(O(1) < O(log\,n) < O(\sqrt{n}) < O(n) < O(n\,log\,n) < O(n^2) < O(n^3) < O(2^n) < O(10^n) < O(n!)\)

    In complexity analysis, only the dominant term is retained. For example, if an algorithm requires \(2n^3+log\,n+4\) operations, its order is said to be \(O(n^3)\) since \(2n^3\) is the dominant term. Constants and scaling factors are ignored since we are concerned only with asymptotic behaviour.

    Audrey Nasar gives formal definitions of Big-O. Wikipedia lists orders of common functions.
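
    As a rough illustration of keeping only the dominant term, the sketch below (not from the cited sources) tallies the basic operations of a hypothetical routine performing about \(2n^3+log\,n+4\) steps; for large n the \(2n^3\) term dwarfs the rest, so the order is \(O(n^3)\).

    // Tallies the basic operations of a hypothetical routine that
    // performs roughly 2*n^3 + log2(n) + 4 steps. The dominant term
    // 2*n^3 decides the order: O(n^3).
    long CountOperations(int n)
    {
        long ops = 4;                      // 4 steps of setup work
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                for (int z = 0; z < n; z++)
                    ops += 2;              // 2 steps, n^3 times
        for (int i = 1; i < n; i *= 2)
            ops += 1;                      // 1 step, about log2(n) times
        return ops;
    }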

  • What does it mean to state best-case, worst-case and average time complexity of algorithms?

    Let's take the example of searching for an item sequentially within a list of unsorted items. If we're lucky, the item may occur at the start of the list. If we're unlucky, it may be the last item in the list. The former is called best-case complexity and the latter is called worst-case complexity. If the searched item is always the first one, then complexity is \(O(1)\); if it's always the last one, then complexity is \(O(n)\). We can also calculate the average complexity, which will turn out to be \(O(n)\). The term "complexity" normally refers to worst-case complexity.

    Mathematically, different notations are defined (example is for linear complexity):

    • Worst-case or upper bound: Big-O: \(O(n)\)
    • Best-case or lower bound: Big-Omega: \(\Omega(n)\)
    • Average-case: Big-Theta: \(\Theta(n)\)
    • Non-tight upper bound: \(o(n)\)
    • Non-tight lower bound: \(\omega(n)\)

    When upper or lower bounds don't coincide with average complexity, we can call them non-tight bounds.

    As an example, Quicksort's complexity is \(\Omega(n\,log\,n)\), \(\Theta(n\,log\,n)\) and \(O(n^2)\).

    There's also amortized complexity in which complexity is calculated by averaging over a sequence of operations.
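
    A minimal sketch of the sequential search discussed above (the comparison counter is added purely for illustration): the number of comparisons depends on where, or whether, the item occurs, which is exactly what best-case, worst-case and average-case complexity describe.

    // Sequential search that also reports the number of comparisons made.
    // Best case: target at index 0, 1 comparison, O(1).
    // Worst case: target absent or last, n comparisons, O(n).
    // Average case: about n/2 comparisons, still O(n).
    int SequentialSearch(int[] items, int target, out int comparisons)
    {
        comparisons = 0;
        for (int i = 0; i < items.Length; i++)
        {
            comparisons++;
            if (items[i] == target)
            {
                return i;          // index of the item
            }
        }
        return -1;                 // not found
    }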

  • Why should we care about an algorithm's performance when processors are getting faster and memories are getting cheaper?

    Complexity analysis doesn't concern itself with actual execution time, which depends on processor speed, instruction set, disk speed, compiler, and so on. For example, the same algorithm written in assembly will run faster than in Python. Programming languages, hardware and memory are external factors. Complexity is about the algorithm itself, the way it processes the data to solve a given problem. It's a software design concern at the "idea level".

    It's possible to have an inefficient algorithm that's executed on high-end hardware to give a result quickly. However, with large input datasets, the limitations of the hardware will become apparent. Thus, it's desirable to optimize the algorithm first before thinking about hardware upgrades.

    Suppose your computer can process 10,000 operations/sec. An algorithm of order \(O(n^4)\) would take 1 sec to process 10 items but more than 3 years to process 1,000 items. Comparatively, a more efficient algorithm of order \(O(n^2)\) would take only 100 secs for 1,000 items. With even larger inputs, better hardware cannot compensate for algorithmic inefficiency. It's for this reason algorithmic complexity is defined in terms of asymptotic behaviour.
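
    The figures above follow from simple arithmetic. The sketch below (illustrative only) estimates the running time in seconds for an algorithm needing \(n^k\) operations on the assumed machine.

    // Estimates running time in seconds for an algorithm that needs
    // n^k operations on a machine doing 10,000 operations per second.
    double EstimatedSeconds(long n, int k)
    {
        double operations = Math.Pow(n, k);
        return operations / 10000.0;
    }

    // EstimatedSeconds(10, 4)   -> 1 sec
    // EstimatedSeconds(1000, 4) -> 1e8 secs, i.e. more than 3 years
    // EstimatedSeconds(1000, 2) -> 100 secs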

  • Are there techniques to figure out the complexity of algorithms?

    Instead of looking for exact execution times, we should evaluate the number of high-level instructions in relation to the input size.

    A single loop that iterates through the input is linear. If there's a loop within a loop, with each loop iterating through the input, then the algorithm is quadratic. It doesn't matter if the loops process only alternate items or skip a fixed number of items: recall that complexity ignores constants and scaling factors. Likewise, a loop within a loop, followed by another loop, is quadratic, since we need to consider only the dominant term.

    A recursive function that calls itself n times is linear, provided other operations within the function don't depend on the input size. However, a naive recursive implementation of the Fibonacci series is exponential.

    A search algorithm that partitions the input into two parts and discards one of them at each iteration is logarithmic. An algorithm such as Mergesort, which halves the input at each level of recursion and then merges the halves in linear time, has log-linear complexity.
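
    As an example of the halving technique just described, here's a binary search sketch (assuming the input array is sorted); each iteration discards half of the remaining range, giving \(O(log\,n)\) complexity.

    // Logarithmic complexity: O(log n).
    // Each iteration halves the search range of a sorted array.
    int BinarySearch(int[] sorted, int target)
    {
        int low = 0;
        int high = sorted.Length - 1;
        while (low <= high)
        {
            int mid = low + (high - low) / 2;
            if (sorted[mid] == target)
            {
                return mid;            // found
            }
            else if (sorted[mid] < target)
            {
                low = mid + 1;         // discard lower half
            }
            else
            {
                high = mid - 1;        // discard upper half
            }
        }
        return -1;                     // not found
    }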

  • If an algorithm is inefficient, does that mean that we can't use it?

    Polynomial complexity algorithms of order \(O(n^c)\), for c > 1, may be acceptable. They can be used for inputs of up to thousands of items. Anything exponential can probably work only for inputs of fewer than about 20 items.

    Algorithms such as Quicksort that have a worst-case complexity of \(O(n^2)\) rarely experience worst-case inputs and often obey \(\Theta(n\,log\,n)\) in practice. In some cases, we can preprocess the input so that worst-case scenarios don't occur. Alternatively, we can settle for sub-optimal solutions so that complexity is reduced to polynomial time.

    In practice, a linear algorithm can perform worse than a quadratic one if large constants are involved and n is comparable to these constants. It's also important to analyze every operation of an algorithm to ensure that non-trivial operations are not hidden or abstracted away within libraries.
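
    To illustrate how non-trivial operations can hide inside libraries, consider the hypothetical helpers below (requires using System.Collections.Generic). The first looks like a single loop, but List<T>.Contains is itself a linear scan, making the method quadratic; replacing the list with a HashSet brings the membership test down to constant time on average.

    // Looks like a single loop, but List<T>.Contains is O(n),
    // so the overall complexity is O(n^2).
    int CountCommon(List<int> a, List<int> b)
    {
        int common = 0;
        foreach (int item in a)
        {
            if (b.Contains(item))      // hidden linear scan
            {
                common++;
            }
        }
        return common;
    }

    // With a HashSet, Contains is O(1) on average, so this version is O(n).
    int CountCommonFast(List<int> a, List<int> b)
    {
        var lookup = new HashSet<int>(b);
        int common = 0;
        foreach (int item in a)
        {
            if (lookup.Contains(item)) // constant time on average
            {
                common++;
            }
        }
        return common;
    }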

  • Since algorithmic complexity is about algorithms, is it relevant to talk about data structures?
    Complexity of operations on data structures. Source: Big-O Cheat Sheet, 2016.

    Data structures only store data but the algorithmic complexity comes into consideration when we operate on them. Operations such as insertion, deletion, searching and indexing need to be analyzed. The intent is to choose the right data structures so that complexity is reduced.

    For example, accessing an array by index has constant complexity whereas the same operation on a linked list has linear complexity. Searching by key in a hash table incurs constant average complexity but this operation is linear with stacks, queues, arrays and linked lists. A more detailed discussion on what data structures to use is given by Svetlin Nakov and others.
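
    A small sketch of the indexing example, using .NET's built-in collections (requires using System.Collections.Generic and using System.Linq): indexing an array is a direct jump, while reaching the i-th node of a linked list means walking the list.

    // Accessing by index: O(1) for an array, O(n) for a linked list.
    int AtIndexArray(int[] array, int i)
    {
        return array[i];                   // direct jump to the element
    }

    int AtIndexLinkedList(LinkedList<int> list, int i)
    {
        return list.ElementAt(i);          // walks node by node from the head
    }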

  • Can you state the complexity of well-known sorting algorithms?
    Complexity of sorting algorithms. Source: Big-O Cheat Sheet, 2016.

    The simplest sorting algorithm is probably Bubblesort, but it's quadratic in the average case and hence not efficient. Better alternatives are those with log-linear complexity: Quicksort, Mergesort, Heapsort, etc. If the list is already sorted, Bubblesort, Timsort, Insertionsort and Cubesort hit their best case, all completing in linear time.

    It's been noted that best and worst cases rarely occur. The average case is based on an input distribution model, which could itself be a random sample. Analysis of these averages, or of abstract basic operations, can help us pick the algorithm best suited to a specific problem.

  • Can you give the complexity of some important algorithms?

    Here's a selection:

    • Fast Fourier Transform: \(O(n\,log\,n)\)
    • Multiply two n-digit numbers using Karatsuba algorithm: \(O(n^{1.59})\)
    • Matrix multiplication due to Coppersmith and Winograd: \(O(n^{2.496})\)
    • Prime recognition of an n-digit integer due to Adleman, Pomerance and Rumley: \(n^{O(log\,log\,n)}\)
    • Gaussian elimination: \(O(n^{3})\) arithmetic operations, but intermediate operands can grow to about \(x^{2^n}\) in magnitude (x being the size of the input entries), giving exponential bit complexity
    • GCD(a, b) by Euclid's algorithm: \(O(log\,(a+b))\) but \(O({(log\,(a+b))}^2)\) bit complexity when integers a and b are large

    Bit complexity matters when integers exceed what a machine's native 64-bit arithmetic can handle. In such cases, when arbitrary precision is used, operations such as division and modulo arithmetic are no longer trivial and their complexity must be accounted for. This may not matter from a theoretical perspective, but it matters to programmers who have to work within machine limits.
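
    As a sketch of the last entry in the list above, here's Euclid's algorithm; the number of iterations grows logarithmically in a + b, but once a and b exceed machine word size, each modulo operation contributes its own non-constant bit complexity.

    // GCD by Euclid's algorithm: O(log(a+b)) iterations.
    // Assumes non-negative inputs. With arbitrary-precision integers,
    // each % operation adds its own (non-constant) bit complexity.
    long Gcd(long a, long b)
    {
        while (b != 0)
        {
            long remainder = a % b;
            a = b;
            b = remainder;
        }
        return a;
    }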

Milestones

1864

Charles Babbage, while building his Analytical Engine, predicts the importance of studying the performance of algorithms.

1894

Mathematician Paul Bachmann defines the Big-O notation. It originally meant "order of". Decades later, Donald Knuth calls it the capital omicron.

1960
Spectrum of computational complexity. Source: Tarjan 1983, fig. 1.3.

The theory of computational complexity is born in the 1960s. This is also the decade when researchers begin to recognize problems for which no polynomial-time algorithms are known, problems later studied under the class NP.

1975

Tarjan makes the case that, when designing data structures, worst-case running time matters less than amortized running time, that is, performance averaged over a long sequence of operations. His later contributions include the splay tree (1980) and the Fibonacci heap, both achieving good amortized performance.

1976

Donald Knuth introduces new notations and clarifies the meaning of \(O\), \(\Omega\) and \(\Theta\).

1998

R. Libeskin-Hadas introduces the term oblivious algorithm for an algorithm whose complexity is independent of the input structure. In other words, the best-case, worst-case and average complexity of the algorithm are all the same.

Sample Code

  • // Source: Nakov et al., 2013, ch. 19.
     
    // Linear complexity: O(N)
    int FindMaxElement(int[] array)
    {
        int max = int.MinValue;
        for (int i = 0; i < array.Length; i++)
        {
            if (array[i] > max)
            {
                max = array[i];
            }
        }
        return max;
    }
     
     
    // Quadratic complexity: O(N*N)
    int FindInversions(int[] array)
    {
        int inversions = 0;
        for (int i = 0; i < array.Length - 1; i++)
        {
            for (int j = i + 1; j < array.Length; j++)
            {
                if (array[i] > array[j])
                {
                    inversions++;
                }
            }
        }
        return inversions;
    }
     
     
    // Quadratic complexity: O(N*M)
    // The innermost loop executes only N*min(N,M) times in total; thus, this algorithm is not of cubic complexity.
    long SumMN(int n, int m)
    {
        long sum = 0;
        for (int x = 1; x <= n; x++)
        {
            for (int y = 1; y <= m; y++)
            {
                if (x == y)
                {
                    for (int i = 1; i <= n; i++)
                    {
                        sum += i * x * y;
                    }
                }
            }
        }
        return sum;
    }
     
     
    // Exponential complexity: O(2^n)
    long Fibonacci(int n)
    {
        if (n == 0)
        {
            return 1;
        }
        else if (n == 1)
        {
            return 1;
        }
        else
        {
            return Fibonacci(n - 1) + Fibonacci(n - 2);
        }
    }
     

References

  1. Adamchik, Victor S. 2009. "Algorithmic Complexity." CMU. Accessed 2017-12-20.
  2. Big-O Cheat Sheet. 2016. "Know Thy Complexities!" August 23. Accessed 2017-12-20.
  3. CS7800 12F. 2012. "Orders of Growth." The CS7800 12F Homepage, Northeastern University, College of Computer and Information Science. December 9. Accessed 2017-12-20.
  4. Clobo. 2017. "Learning and Understanding Big-O Notation." Topcoder. June 17. Accessed 2017-12-20.
  5. Codility. 2017. "Euclidean algorithm." Accessed 2017-12-20.
  6. Cook, Stephen A. 1983. "An Overview of Computational Complexity." Comms of the ACM, June, vol. 26, no. 6, pp. 401-408. Accessed 2017-12-20.
  7. Exam Crazy. 2017. "The big O notation: Order of growth of an algorithm." Exam Crazy. Accessed 2017-12-20.
  8. Fang, Xin Gui, and George Havas. 1997. "On the worst-case complexity of integer Gaussian elimination". Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation. ISSAC '97, pp. 28–31. Accessed 2017-12-20.
  9. King, V. 2022. "Robert (Bob) Endre Tarjan." Citation, Turing Award 1986, Association for Computing Machinery. Accessed 2022-02-19.
  10. Knuth, Donald E. 1976. "Big Omicron and Big Omega and Big Theta." SIGACT News. April-June. Accessed 2017-12-20.
  11. Nakov et al., Svetlin. 2013. "Fundamentals of Computer Programming with C#." Accessed 2017-12-20.
  12. Nasar, Audrey A. 2016. "The history of Algorithmic complexity." The Mathematics Enthusiast, vol. 13, no. 3, pp. 217-242. Accessed 2017-12-20.
  13. Sebastian. 2012. "Why is quicksort better than other sorting algorithms in practice?" StackExchange. March 7. Accessed 2017-12-20.
  14. Studytonight. 2017. "Time Complexity of Algorithms." Accessed 2017-12-20.
  15. Tarjan, R. E. 1983. "Data structures and network algorithms." CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM. Accessed 2022-02-19.
  16. Vilches, Bruno. 2015. "Introduction to Algorithm Complexity." Belatrix Software. September 15. Accessed 2017-12-20.
  17. Wikipedia. 2017a. "Big O notation." December 18. Accessed 2017-12-20.
  18. Wikipedia. 2017b. "Quicksort." December 20. Accessed 2017-12-20.
  19. Wikipedia. 2017c. "Computational complexity theory." November 25. Accessed 2017-12-20.
  20. Zindros, Dionysis. 2012. "A Gentle Introduction to Algorithm Complexity Analysis." Discrete Mathematics, National Technical University of Athens. Accessed 2017-12-20.

Further Reading

  1. Wessen, Ken. 2016. "Not just a matter of time: Measuring complexity." Plus Magazine. October 12. Accessed 2017-12-22.
  2. Nasar, Audrey A. 2016. "The history of Algorithmic complexity." The Mathematics Enthusiast, vol. 13, no. 3, pp. 217-242. Accessed 2017-12-20.
  3. Clobo. 2017. "Learning and Understanding Big-O Notation." Topcoder. June 17. Accessed 2017-12-20.
  4. HackerRank. 2016. "Big O Notation." YouTube. September 27. Accessed 2017-12-20.
  5. Fortnow, Lance, and Steve Homer. 2002. "A Short History of Computational Complexity." IEEE Conference on Computation Complexity. Accessed 2017-12-20.


Cite As

Devopedia. 2022. "Algorithmic Complexity." Version 8, February 19. Accessed 2023-11-12. https://devopedia.org/algorithmic-complexity