Algorithmic complexity is a way of comparing the efficiency of algorithms. Complexity can be measured in terms of the time it takes a program to run (time complexity) or the memory it uses (space complexity).

In computer science, whenever we want to solve a computational problem, we define a set of steps that need to be followed to solve it. These steps are collectively known as an algorithm.

For example, suppose you have two integers a and b and you want to find their sum. How would you solve this? One possible solution is:

  1. Take two integers as input
  2. Create a variable sum to store the sum of two integers
  3. Put the sum of those two variables in the sum variable
  4. Return the sum variable

// Taking two integers as input
int sum(int a, int b)
{
    int sum; // creating the sum variable
    sum = a + b; // storing the sum of a and b
    return sum; // returning the sum variable
}

In the above example, you will find three components, namely the input, the algorithm, and the output:

  • Input: Input is the data for which you need to write an algorithm, transforming it into the desired output. In our example, the input is the two numbers a and b.
  • Algorithm: An algorithm is a well-defined step-by-step procedure that takes some value or set of values as input and produces some value or set of values as output. In the above example, we have four steps to find the sum of two numbers, and together these steps form an algorithm for finding that sum.
  • Output: Output is the desired result in the problem. For example, if we are finding the sum of two integers a and b then for every value of a and b it must produce the correct sum as an output.

The efficiency of an algorithm is mainly defined by two factors: space and time. A good algorithm takes less time and less space, but this is not always possible; there is a trade-off between the two. If you want to reduce the time, the space may increase, and if you want to reduce the space, the time may increase. So you often have to compromise on either space or time.

Time Complexity

Every algorithm requires some amount of time to execute its instructions and perform its task. The time required is called its time complexity.

Time complexity is generally expressed as a function of the input size, often denoted as \(n\). The input size refers to the amount of data the algorithm needs to process (e.g., the number of elements in an array).

Time complexity analysis ignores machine-dependent details like processor speed, memory cache, or other low-level factors, because these are specific to hardware and can vary greatly. Instead, the focus is on how the algorithm behaves as the input size changes, regardless of these machine-specific factors.

When calculating time complexity, we consider the number of fundamental operations the algorithm performs (arithmetic, logical, comparison, assignments, etc.). Each of these operations is assumed to take constant time, even though on a real machine, their execution time may differ. The core idea behind time complexity is to measure how the number of operations scales as the input size increases, rather than calculating the exact time each operation takes.

Time complexity can vary depending on the input data, so algorithms may have different complexities for the worst-case, best-case, and average-case scenarios. The focus is often on worst-case complexity because it gives the upper bound on the algorithm's runtime.

Calculating the time complexity of an algorithm based on system configuration can be difficult, given that hardware characteristics like CPU speed, memory, and cache sizes vary across systems. This is why time complexity is typically calculated in a machine-independent manner using an abstract model of computation.

To calculate the time complexity of an algorithm, we need to define a model machine. Let us assume a machine with following assumptions:

  • It performs sequential execution. The machine executes instructions one after another, without any parallelism.
  • It requires \(1\) unit of time for arithmetic and logical operations. Any arithmetic or logical operation (e.g., addition, subtraction, comparison) takes exactly \(1\) unit of time.
  • It requires \(1\) unit of time for assignment and return value. Assigning a value to a variable or returning a value from a function takes \(1\) unit of time.
  • It requires \(1\) unit of time for read and write operations. Reading from memory (e.g., accessing array elements) and writing to memory (e.g., updating a variable) both take \(1\) unit of time.
  • It requires \(1\) unit of time for increment and decrement operations.

The following are the primary types of time complexities that commonly appear in algorithm analysis:

  • Constant Time Complexity
  • Logarithmic Time Complexity
  • Linear Time Complexity
  • Linearithmic Time Complexity
  • Quadratic Time Complexity
  • Cubic Time Complexity
  • Exponential Time Complexity
  • Factorial Time Complexity

Constant Time Complexity

Constant time complexity refers to an algorithm or operation that takes the same amount of time to complete, regardless of the size of the input. This means that no matter how large the input data set is, the time it takes to perform the operation remains constant.

Consider the following piece of code

int sum(int a, int b)
{
   return a + b;
}

Let's break down the operations inside the sum function above and calculate the total time complexity:

  • The addition and return statement return a + b take \(2\) units of time (\(1\) for addition, \(1\) for return statement).

In total, it takes \(2\) units of time to complete its execution, which is Constant Time Complexity. The time does not change based on the input values of a and b; for all inputs, it requires the same \(2\) units.

Linear Time Complexity

Linear time complexity refers to an algorithm or operation whose time to complete increases linearly with the size of the input data. This means that if you double the input size, the time taken to complete the operation also roughly doubles.

Consider the following piece of code

int sum(int arr[], int n)
{
    int sum = 0;
    for(int i = 0; i < n; i++)
        sum = sum + arr[i];
    return sum;
}

Let's break down the operations inside the sum function above and calculate the total time complexity:

  • The initialization int sum = 0 takes \(1\) unit of time
  • The initialization int i = 0 in the for loop takes \(1\) unit of time
  • The loop runs \(n\) times, and each iteration involves:
    • The comparison i < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(i = n\)).
    • The increment i++ takes \(1\) unit of time, repeated n times.
    • Reading an element from the array (array access) arr[i] takes \(1\) unit of time, repeated \(n\) times.
    • The addition and assignment sum = sum + arr[i] take \(2\) units of time (\(1\) for addition, \(1\) for assignment), repeated \(n\) times.
  • The return statement return sum takes \(1\) unit of time.

In total, it takes \(5n + 4\) units of time to complete its execution, which is Linear Time Complexity. The time depends on \(n\): if we increase the \(n\) value, the time required increases linearly.

Logarithmic Time Complexity

Logarithmic time complexity describes an algorithm whose running time increases logarithmically as the size of the input data set increases. This means that if the input size doubles, the time taken to complete the operation increases by a constant amount. This complexity is much more efficient than linear time complexity for large inputs.

Consider the following piece of code

int countDigits(int n) {
    int digits = 0;

    while (n > 0) {
        n = n / 10;
        digits++;
    }

    return digits;
}

Let's break down the operations inside the countDigits function above and calculate the total time complexity:

  • The initialization int digits = 0; takes \(1\) unit of time
  • The loop runs \(\lfloor \log_{10}(n) \rfloor + 1\) times, and each iteration involves:
    • The comparison n > 0 takes \(1\) unit of time, repeated \(\lfloor \log_{10}(n) \rfloor + 2\) times (including the last comparison when \(n = 0\)).
    • The division and assignment n = n / 10 take \(2\) units of time (\(1\) for division, \(1\) for assignment), repeated \(\lfloor \log_{10}(n) \rfloor + 1\) times.
    • The increment digits++ takes \(1\) unit of time, repeated \(\lfloor \log_{10}(n) \rfloor + 1\) times.
  • The return statement return digits takes \(1\) unit of time.

In total, it takes \(4\lfloor \log_{10}(n) \rfloor + 7\) units of time to complete its execution, which is Logarithmic Time Complexity. The time depends on \(n\): if we increase the \(n\) value, the time required increases logarithmically.

Quadratic Time Complexity

Quadratic time complexity refers to algorithms whose running time increases quadratically as the size of the input data set increases. This means that if the input size doubles, the time taken to complete the operation increases by a factor of four. This type of complexity is often associated with algorithms that involve nested loops.

Consider the following piece of code

int sumSquareMatrix(int** matrix, int n) {
    int sum = 0;

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            sum += matrix[i][j];
        }
    }

    return sum;
}

Let's break down the operations inside the sumSquareMatrix function above and calculate the total time complexity:

  • The initialization int sum = 0 takes \(1\) unit of time
  • The initialization int i = 0 in the outer loop takes \(1\) unit of time
  • The outer loop runs \(n\) times, and each iteration involves:
    • The comparison i < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(i = n\)).
    • The increment i++ takes \(1\) unit of time, repeated \(n\) times.
    • The initialization int j = 0 in the inner loop takes \(1\) unit of time, repeated \(n\) times.
    • The inner loop runs \(n\) times for each outer loop iteration (the total number of iterations is \(n \cdot n\)), and each iteration involves:
      • The comparison j < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(j = n\)) for each outer loop iteration.
      • The increment j++ takes \(1\) unit of time, repeated \(n\) times for each outer loop iteration.
      • The array access matrix[i][j] takes 1 unit of time, repeated \(n\) times for each outer loop iteration.
      • The addition and assignment sum += matrix[i][j] take \(2\) units of time (\(1\) for addition, \(1\) for assignment), repeated \(n\) times for each outer loop iteration.
  • The return statement return sum takes \(1\) unit of time.

In total, it takes \(5n^{2} + 4n + 4\) units of time to complete its execution, which is Quadratic Time Complexity. The time depends on \(n\): if we increase the \(n\) value, the time required increases quadratically.

Cubic Time Complexity

Cubic time complexity refers to an algorithm whose running time grows proportionally to the cube of the size of the input. In other words, as the input size \(n\) increases, the number of operations increases as \(n \times n \times n\), or \(n^3\).

Consider the following piece of code

int sumCubeMatrix(int*** matrix, int n) {
    int sum = 0;

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                sum += matrix[i][j][k];
            }
        }
    }

    return sum;
}

Let's break down the operations inside the sumCubeMatrix function above and calculate the total time complexity:

  • The initialization int sum = 0 takes \(1\) unit of time
  • The initialization int i = 0 in the outer loop takes \(1\) unit of time
  • The outer loop runs \(n\) times, and each iteration involves:
    • The comparison i < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(i = n\)).
    • The increment i++ takes \(1\) unit of time, repeated \(n\) times.
    • The initialization int j = 0 in the middle loop takes \(1\) unit of time, repeated \(n\) times.
    • The middle loop runs \(n\) times for each outer loop iteration (the total number of iterations is \(n \times n\)), and each iteration involves:
      • The comparison j < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(j = n\)) for each outer loop iteration.
      • The increment j++ takes \(1\) unit of time, repeated \(n\) times for each outer loop iteration.
      • The initialization int k = 0 in the inner loop takes \(1\) unit of time, repeated \(n\) times for each outer loop iteration.
      • The inner loop runs \(n\) times for each middle loop iteration (the total number of iterations is \(n \times n \times n\)), and each iteration involves:
        • The comparison k < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(k = n\)) for each middle loop iteration.
        • The increment k++ takes \(1\) unit of time, repeated \(n\) times for each middle loop iteration.
        • The array access matrix[i][j][k] takes \(1\) unit of time, repeated \(n\) times for each middle loop iteration.
        • The addition and assignment sum += matrix[i][j][k] take \(2\) units of time (\(1\) for addition, \(1\) for assignment), repeated \(n\) times for each middle loop iteration.
  • The return statement return sum takes \(1\) unit of time.

In total, it takes \(5n^{3} + 4n^{2} + 4n + 4\) units of time to complete its execution, which is Cubic Time Complexity. The time depends on \(n\): if we increase the \(n\) value, the time required increases cubically.

Exponential Time Complexity

Exponential time complexity refers to an algorithm whose growth rate doubles with each additional input. In exponential time algorithms, the number of operations grows extremely fast as the size of the input increases.

Consider the following piece of code

int fibonacci(int n) {
    if (n <= 1)
        return n;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

Here's a representation of the fibonacci tree for \(n = 5\) based on the recursive function:

fibonacci(5)
├── fibonacci(4)
│   ├── fibonacci(3)
│   │   ├── fibonacci(2)
│   │   │   ├── fibonacci(1)
│   │   │   └── fibonacci(0)
│   │   └── fibonacci(1)
│   └── fibonacci(2)
│       ├── fibonacci(1)
│       └── fibonacci(0)
└── fibonacci(3)
    ├── fibonacci(2)
    │   ├── fibonacci(1)
    │   └── fibonacci(0)
    └── fibonacci(1)

This tree structure shows how fibonacci(5) breaks down into smaller subproblems recursively. The leaves of the tree are the base cases, fibonacci(1) and fibonacci(0).

To count how many times fibonacci(0) and fibonacci(1) are called, we need to track these calls across the recursion tree. Let's count the calls to fibonacci(0) and fibonacci(1) in the recursion tree for fibonacci(5):

fibonacci(5)
├── fibonacci(4)
│   ├── fibonacci(3)
│   │   ├── fibonacci(2)
│   │   │   ├── fibonacci(1)  <-- 1st call to fibonacci(1)
│   │   │   └── fibonacci(0)  <-- 1st call to fibonacci(0)
│   │   └── fibonacci(1)      <-- 2nd call to fibonacci(1)
│   └── fibonacci(2)
│       ├── fibonacci(1)      <-- 3rd call to fibonacci(1)
│       └── fibonacci(0)      <-- 2nd call to fibonacci(0)
└── fibonacci(3)
    ├── fibonacci(2)
    │   ├── fibonacci(1)      <-- 4th call to fibonacci(1)
    │   └── fibonacci(0)      <-- 3rd call to fibonacci(0)
    └── fibonacci(1)          <-- 5th call to fibonacci(1)

fibonacci(0) is called \(3\) times and fibonacci(1) is called \(5\) times.

The number of calls to fibonacci(1) is the Fibonacci number \(F_{n}\) and the number of calls to fibonacci(0) is the Fibonacci number \(F_{n-1}\).

The total number of function calls can be represented by the recurrence relation:

\[ T(n) = T(n - 1) + T(n - 2) + 1 \]

Where:

  • \(T(n)\) is the total call count for fibonacci(n)
  • The \(+1\) accounts for the current call itself

Let's break down the operations inside the fibonacci function above and calculate the total time complexity:

  • The comparison n <= 1 takes \(1\) unit of time, repeated \(T(n)\) times recursively.
  • The return statement return n in the if block takes \(1\) unit of time, repeated \(F_{n}+F_{n-1}\) times (once per base-case call).
  • The addition fibonacci(n - 1) + fibonacci(n - 2) takes \(1\) unit of time, repeated \(\frac{T(n)-1}{2}\) times (once per non-base-case call).
  • The return statement return fibonacci(n - 1) + fibonacci(n - 2) in the else block takes \(1\) unit of time, repeated \(\frac{T(n)-1}{2}\) times recursively.

In total, it takes \(2T(n)+F_{n}+F_{n-1}-1\) units of time to complete its execution, which is Exponential Time Complexity. The time depends on \(n\): if we increase the \(n\) value, the time required increases exponentially.

The Fibonacci function can be analyzed using the recursion tree method to understand its time complexity more intuitively.

  1. Build the Recursion Tree
    • Root Node: The root of the tree is the initial call \(F(n)\).
    • Child Nodes: Each node splits into two child nodes:
      • The left child is \(F(n-1)\)
      • The right child is \(F(n-2)\)
    • For \(n = 5\), the recursion tree looks like this:
                           F(5)
                         /      \
                     F(4)        F(3)
                    /    \      /    \
                F(3)    F(2)  F(2)   F(1)
               /    \   /  \   /  \
           F(2)  F(1) F(1) F(0) F(1) F(0)
           /  \
        F(1) F(0)
  2. Calculate the Height of the Tree
    • The height of the tree corresponds to how many levels we need to reach the base case \(F(0)\) or \(F(1)\).
    • The maximum height for Fibonacci \(F(n)\) is \(n\) because we keep decrementing \(n\) until it reaches \(0\) or \(1\).
  3. Count the Number of Nodes at Each Level
    • Level 0: 1 node (the root node, \(F(n)\))
    • Level 1: 2 nodes (\(F(n−1)\), \(F(n−2)\))
    • Level 2: 4 nodes (\(F(n−2)\), \(F(n−3)\), \(F(n−3)\), \(F(n−4)\))
    • Level 3: 8 nodes (\(F(n−3)\), \(F(n−4)\), \(F(n−4)\), \(F(n−5)\), \(F(n−4)\), \(F(n−5)\), \(F(n−5)\), \(F(n−6)\))
    • Level k: The number of nodes at level \(k\) is approximately \(2^{k}\)
  4. Calculate Total Work at Each Level
    • Level 0: \(1 \times O(1) = O(1)\)
    • Level 1: \(2 \times O(1) = O(2)\)
    • Level 2: \(4 \times O(1) = O(4)\)
    • Level 3: \(8 \times O(1) = O(8)\)
    • Level k: \(2^{k} \times O(1) = O(2^{k})\)
  5. The total work done can be calculated by summing the work done at each level of the tree until we reach the maximum height \(n\):
    \[ T(n) = O(1) + O(2) + O(4) + O(8) + \dots + O(2^{n}) \] This series can be simplified using the formula for the sum of a geometric series. The sum of a geometric series where the first term is \(a\) and the common ratio is \(r\) is given by:
    \[ S = a\frac{(r^{n}-1)}{r-1} \] Calculating the series gives:
    \[ T(n) = O(2^{n+1}-1) = O(2^n) \]

Linearithmic Time Complexity

Linearithmic time complexity is a common complexity class for algorithms that are more efficient than quadratic time complexity but less efficient than linear time complexity. This complexity typically arises in divide-and-conquer algorithms, where a problem is divided into smaller subproblems that are solved independently and then combined.

Consider the following piece of code

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void fft(double* xRe, double* xIm, int n) {
    if (n <= 1) return;

    // Divide
    double* evenRe = malloc(n / 2 * sizeof(double));
    double* evenIm = malloc(n / 2 * sizeof(double));
    double* oddRe = malloc(n / 2 * sizeof(double));
    double* oddIm = malloc(n / 2 * sizeof(double));

    for (int i = 0; i < n / 2; i++) {
        evenRe[i] = xRe[i * 2];
        evenIm[i] = xIm[i * 2];
        oddRe[i] = xRe[i * 2 + 1];
        oddIm[i] = xIm[i * 2 + 1];
    }

    // Conquer
    fft(evenRe, evenIm, n / 2);
    fft(oddRe, oddIm, n / 2);

    // Combine
    for (int k = 0; k < n / 2; k++) {
        double tRe = cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k];
        double tIm = sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k];

        xRe[k] = evenRe[k] + tRe;
        xIm[k] = evenIm[k] + tIm;
        xRe[k + n / 2] = evenRe[k] - tRe;
        xIm[k + n / 2] = evenIm[k] - tIm;
    }

    free(evenRe);
    free(evenIm);
    free(oddRe);
    free(oddIm);
}

int main() {
    // Example input
    int n = 8; // Size of input (must be a power of 2)
    double xRe[] = {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0}; // Real part
    double xIm[] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}; // Imaginary part

    // Perform FFT
    fft(xRe, xIm, n);

    // Print the results
    printf("FFT Results:\n");
    for (int i = 0; i < n; i++) {
        printf("x[%d] = %.2f + %.2fi\n", i, xRe[i], xIm[i]);
    }

    return 0;
}

The size of the input \(n\) is required to be a power of \(2\). This allows the algorithm to recursively divide the input into two halves until it reaches a single element.

The recursive nature of the Fast Fourier Transform (FFT) algorithm leads to the division of the input array in half at each recursive step. This process creates a series where each subsequent level of recursion processes half the elements of the previous level.

Breakdown of Recursive Division:

  • Level \(0\) (Initial Input of Size \(n\)):
    • This is the first level, where the input array of size \(n\) is processed.
    • The array is split into two subarrays, each of size \(\frac{n}{2}\).
    • The total work done at this level is to process all \(n\) elements (dividing them into even and odd parts), but no recursive calls have been made yet.
  • Level \(1\) (Subarrays of Size \(\frac{n}{2}\)):
    • After splitting the array into two subarrays, each of size \(\frac{n}{2}\), we recursively process both subarrays.
    • There are two subarrays, each with \(\frac{n}{2}\) elements.
    • The total work done at this level is:
      \[ \frac{n}{2} + \frac{n}{2} = n \]
    • Even though each subarray is smaller, we still process a total of \(n\) elements.
  • Level \(2\) (Subarrays of Size \(\frac{n}{4}\)):
    • Each of the \(\frac{n}{2}\)-sized subarrays is split further into two subarrays of size \(\frac{n}{4}\), making a total of four subarrays.
    • Each subarray has \(\frac{n}{4}\) elements.
    • The total work done at this level is:
      \[ \frac{n}{4} + \frac{n}{4} + \frac{n}{4} + \frac{n}{4} = n \]
    • Again, we process \(n\) elements in total.
  • Level \(3\) (Subarrays of Size \(\frac{n}{8}\)):
    • Each of the \(\frac{n}{4}\)-sized subarrays is split further into two subarrays of size \(\frac{n}{8}\), making a total of eight subarrays.
    • Each subarray has \(\frac{n}{8}\) elements.
    • The total work done at this level is:
      \[ \frac{n}{8} + \frac{n}{8} + \frac{n}{8} + \frac{n}{8} + ... + \frac{n}{8} = n \]
    • The total work at this level is still \(n\).
  • Level \(k\) (Subarrays of Size \(\frac{n}{2^{k}}\)):
    • At each level, the array is split into more subarrays, but the total number of elements processed at that level always remains \(n\).
    • At level \(k\), there are \(2^k\) subarrays, each of size \(\frac{n}{2^k}\).
    • The total work at level \(k\) is:
      \[ 2^k \times \frac{n}{2^k} = n \]

The total number of levels in the recursion is \(\log_{2}(n)\), since the input size is divided by \(2\) at each level. At each level, the work done is \(n\), and there are \(\log_{2}(n)\) levels. Therefore, the total work done by the algorithm is:

\[ n \times \log_{2}(n) \]

Let's break down the operations inside the fft function above and calculate the total time complexity:

  • Level \(0\) (function call with size \(n\)):
    • The comparison n <= 1 takes \(1\) unit of time
    • The initialization int i = 0 in the for loop takes \(1\) unit of time
    • The for loop runs \(\frac{n}{2}\) times, and each iteration involves:
      • The comparison i < n / 2 takes \(1\) unit of time, repeated \(\frac{n}{2} + 1\) times (including the last comparison when \(i = \frac{n}{2}\)).
      • The increment i++ takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication i * 2 takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access xRe[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment evenRe[i] = xRe[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication i * 2 takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access xIm[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment evenIm[i] = xIm[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication and addition i * 2 + 1 take \(2\) units of time, repeated \(\frac{n}{2}\) times.
      • The array access xRe[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment oddRe[i] = xRe[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication and addition i * 2 + 1 take \(2\) units of time, repeated \(\frac{n}{2}\) times.
      • The array access xIm[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment oddIm[i] = xIm[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
    • The initialization int k = 0 in the for loop takes \(1\) unit of time
    • The for loop runs \(\frac{n}{2}\) times, and each iteration involves:
      • The comparison k < n / 2 takes \(1\) unit of time, repeated \(\frac{n}{2} + 1\) times (including the last comparison when \(k = \frac{n}{2}\)).
      • The increment k++ takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{2}\) times.
      • The cos function cos(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication cos(-2 * M_PI * k / n) * oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{2}\) times.
      • The sin function sin(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The subtraction cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment double tRe = cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{2}\) times.
      • The sin function sin(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication sin(-2 * M_PI * k / n) * oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{2}\) times.
      • The cos function cos(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The multiplication cos(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The addition sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment double tIm = sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access evenRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The addition evenRe[k] + tRe takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment xRe[k] = evenRe[k] + tRe takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access evenIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The addition evenIm[k] + tIm takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The assignment xIm[k] = evenIm[k] + tIm takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access evenRe[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The subtraction evenRe[k] - tRe takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The division and addition k + n / 2 take \(2\) units of time, repeated \(\frac{n}{2}\) times.
      • The assignment xRe[k + n / 2] = evenRe[k] - tRe takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The array access evenIm[k] takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The subtraction evenIm[k] - tIm takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
      • The division and addition k + n / 2 take \(2\) units of time, repeated \(\frac{n}{2}\) times.
      • The assignment xIm[k + n / 2] = evenIm[k] - tIm takes \(1\) unit of time, repeated \(\frac{n}{2}\) times.
  • Level \(1\) (There are \(2\) recursive calls with size \(\frac{n}{2}\)):
    • In each of recursive calls, the comparison n <= 1 takes \(1\) unit of time
    • In each of recursive calls, the initialization int i = 0 in the for loop takes \(1\) unit of time
    • In each of recursive calls, the for loop runs \(\frac{n}{4}\) times, and each iteration involves:
      • The comparison i < n / 2 takes \(1\) unit of time, repeated \(\frac{n}{4} + 1\) times (including the last comparison when \(i = \frac{n}{4}\)).
      • The increment i++ takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication i * 2 takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access xRe[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assignment evenRe[i] = xRe[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication i * 2 takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access xIm[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assignment evenIm[i] = xIm[i * 2] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication and addition i * 2 + 1 take \(2\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access xRe[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assignment oddRe[i] = xRe[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication and addition i * 2 + 1 take \(2\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access xIm[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assignment oddIm[i] = xIm[i * 2 + 1] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
    • In each of recursive calls, the initialization int k = 0 in the for loop takes \(1\) unit of time
    • In each of recursive calls, the for loop runs \(\frac{n}{4}\) times, and each iteration involves:
      • The comparison k < n / 2 takes \(1\) unit of time, repeated \(\frac{n}{4} + 1\) times (including the last comparison when \(k = \frac{n}{4}\)).
      • The increment k++ takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{4}\) times.
      • The cos function cos(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication cos(-2 * M_PI * k / n) * oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{4}\) times.
      • The sin function sin(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The subtraction cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assignment double tRe = cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{4}\) times.
      • The sin function sin(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication sin(-2 * M_PI * k / n) * oddRe[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplications and division -2 * M_PI * k / n take \(3\) units of time, repeated \(\frac{n}{4}\) times.
      • The cos function cos(-2 * M_PI * k / n) takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The array access oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The multiplication cos(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The addition sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k] takes \(1\) unit of time, repeated \(\frac{n}{4}\) times.
      • The assigment double tIm = sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k] take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access evenRe[k] take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The addition evenRe[k] + tRe take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The assigment xRe[k] = evenRe[k] + tRe take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access evenIm[k] take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The addition evenIm[k] + tIm take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The assigment xIm[k] = evenIm[k] + tIm take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access evenRe[k] take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The substraction evenRe[k] - tRe take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The division and addition k + n / 2 take \(2\) units of time, repeated \(\frac{n}{4}\) times.
      • The assigment xRe[k + n / 2] = evenRe[k] - tRe take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The array access evenIm[k] take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The substraction evenIm[k] - tIm take \(1\) units of time, repeated \(\frac{n}{4}\) times.
      • The division and addition k + n / 2 take \(2\) units of time, repeated \(\frac{n}{4}\) times.
      • The assigment xIm[k + n / 2] = evenIm[k] - tIm take \(1\) units of time, repeated \(\frac{n}{4}\) times.
  • Level \(2\) (There are \(4\) recursive calls with size \(\frac{n}{4}\)):
    • In each recursive call, the comparison n <= 1 takes \(1\) unit of time.
    • In each recursive call, the initializations int i = 0 and int k = 0 each take \(1\) unit of time.
    • In each recursive call, both for loops run \(\frac{n}{8}\) times and perform exactly the same per-iteration operations as in Level \(1\), so every operation listed for Level \(1\) is repeated \(\frac{n}{8}\) times here (and each loop comparison \(\frac{n}{8} + 1\) times).
  • Level \(j\) (There are \(2^{j}\) recursive calls with size \(\frac{n}{2^{j}}\); the level index is written \(j\) here to avoid confusion with the loop variable k):
    • In each recursive call, the comparison n <= 1 takes \(1\) unit of time.
    • In each recursive call, the initializations int i = 0 and int k = 0 each take \(1\) unit of time.
    • In each recursive call, both for loops run \(\frac{n}{2^{j+1}}\) times and perform exactly the same per-iteration operations as in Level \(1\), so every operation listed for Level \(1\) is repeated \(\frac{n}{2^{j+1}}\) times here (and each loop comparison \(\frac{n}{2^{j+1}} + 1\) times).
  • Level \(\log_{2}(n)\), the final level (There are \(2^{\log_{2}(n)} = n\) recursive calls with size \(\frac{n}{2^{\log_{2}(n)}} = 1\)):
    • In each recursive call, the comparison n <= 1 takes \(1\) unit of time.
    • In each recursive call, the return statement return takes \(1\) unit of time.
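
A quick way to see where the \(n \log(n)\) term comes from: the amount of loop work per level is constant. At level \(j\) there are \(2^{j}\) recursive calls, and each of the two for loops in a call runs \(\frac{n}{2^{j+1}}\) times, so each loop performs

\[2^{j} \times \frac{n}{2^{j+1}} = \frac{n}{2}\]

iterations in total per level, independently of \(j\). Since there are \(\log_{2}(n)\) levels of splitting, the two loops together contribute \((16 + 46) \times \frac{n}{2} \times \log_{2}(n) = 31n \log_{2}(n)\) units of time.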

Thus, the total number of operations is:

  • The comparison n <= 1 takes \(2^{0} + 2^{1} + 2^{2} + ... + 2^{\log_{2}(n)} = 2^{\log_{2}(n) + 1} - 1 = 2n - 1\) units of time.
  • The return statement return takes \(2^{\log_{2}(n)} = n\) units of time.
  • The initialization int i = 0 in the first for loop takes \(2^{0} + 2^{1} + 2^{2} + ... + 2^{\log_{2}(n) - 1} = 2^{\log_{2}(n)} - 1 = n - 1\) units of time.
  • The first for loop takes \(16 \times \frac{n}{2} \times \log_{2}(n) + (n - 1)\) units of time; the extra \(n - 1\) units account for the final, failing loop comparison in each of the \(n - 1\) non-base-case calls.
  • The initialization int k = 0 in the second for loop likewise takes \(n - 1\) units of time.
  • The second for loop takes \(46 \times \frac{n}{2} \times \log_{2}(n) + (n - 1)\) units of time.

In total, the function takes \(31n \log_{2}(n) + 7n - 5\) units of time to complete its execution, which is linearithmic time complexity. The running time changes with \(n\): if we increase \(n\), the time required grows linearithmically, i.e. in proportion to \(n \log(n)\).

Factorial Time Complexity

Factorial time complexity describes algorithms whose running time grows factorially with the size of the input \(n\). This complexity arises in scenarios where the algorithm generates all possible arrangements (permutations) or combinations of \(n\) items.

Consider the following piece of code:

#include <stdio.h>
#include <stdlib.h>

void swap(int *x, int *y) {
    int temp = *x;
    *x = *y;
    *y = temp;
}

// Function to generate permutations recursively and store them in a 2D array
void generatePermutations(int *digits, int start, int n, int **permutations, int *index) {
    if (start == n - 1) {
        // Store the current permutation in the 2D array
        for (int i = 0; i < n; i++) {
            permutations[*index][i] = digits[i];
        }
        (*index)++;  // Move to the next row
        return;
    } else {
        for (int i = start; i < n; i++) {
            // Swap the current element with the start
            swap(&digits[start], &digits[i]);

            // Recursively generate permutations for the remaining elements
            generatePermutations(digits, start + 1, n, permutations, index);

            // Backtrack: restore the original order
            swap(&digits[start], &digits[i]);
        }
    }
}

int main() {
    // Try to crack a lock with n dials
    int n = 3;
    printf("Cracking the lock with %d dials:\n", n);

    // Calculate the total number of permutations (n!)
    int num_permutations = 1;
    for (int i = 1; i <= n; i++) {
        num_permutations *= i;
    }

    // Allocate memory for the 2D array to store the permutations
    int **permutations = malloc(num_permutations * sizeof(int *));
    for (int i = 0; i < num_permutations; i++) {
        permutations[i] = malloc(n * sizeof(int));
    }

    // Initialize the array with digits [0, 1, 2, ..., n-1]
    int digits[n];
    for (int i = 0; i < n; i++) {
        digits[i] = i;
    }

    // Initialize the index to track the current permutation
    int index = 0;

    // Generate all permutations and store them in the 2D array
    generatePermutations(digits, 0, n, permutations, &index);

    // Print the stored permutations
    printf("All generated permutations:\n");
    for (int i = 0; i < num_permutations; i++) {
        printf("Permutation %d: ", i + 1);
        for (int j = 0; j < n; j++) {
            printf("%d ", permutations[i][j]);
        }
        printf("\n");
    }

    // Free the allocated memory
    for (int i = 0; i < num_permutations; i++) {
        free(permutations[i]);
    }
    free(permutations);

    return 0;
}

Consider the array digits[] = {0, 1, 2, 3} with \(n = 4\). The detailed steps for \(n = 4\) are:

  • Initial call generatePermutations(digits, 0, 4, permutations, index)
  • start = 0 and the current permutation is not stored since start != n - 1.
  • The loop runs from i = 0 to i = 3.
  • For i = 0:
    • There is no swap since i == start
    • Recursively call generatePermutations(digits, 1, 4, permutations, index)
    • start = 1 and the current permutation is not stored since start != n - 1
    • The loop runs from i = 1 to i = 3.
    • For i = 1:
      • There is no swap since i == start
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 1, 2, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {0, 1, 3, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 1, 3, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {0, 1, 2, 3}
      • There is no swap since i == start
    • For i = 2:
      • Swap digits[1] with digits[2], thus digits[] = {0, 2, 1, 3}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 2, 1, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {0, 2, 3, 1}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 2, 3, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {0, 2, 1, 3}
      • Swap back digits[1] with digits[2], thus digits[] = {0, 1, 2, 3}
    • For i = 3:
      • Swap digits[1] with digits[3], thus digits[] = {0, 3, 2, 1}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 3, 2, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {0, 3, 1, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {0, 3, 1, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {0, 3, 2, 1}
      • Swap back digits[1] with digits[3], thus digits[] = {0, 1, 2, 3}
    • There is no swap since i == start
  • For i = 1:
    • Swap digits[0] with digits[1], thus digits[] = {1, 0, 2, 3}
    • Recursively call generatePermutations(digits, 1, 4, permutations, index)
    • start = 1 and the current permutation is not stored since start != n - 1
    • The loop runs from i = 1 to i = 3.
    • For i = 1:
      • There is no swap since i == start
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 0, 2, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {1, 0, 3, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 0, 3, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {1, 0, 2, 3}
      • There is no swap since i == start
    • For i = 2:
      • Swap digits[1] with digits[2], thus digits[] = {1, 2, 0, 3}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 2, 0, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {1, 2, 3, 0}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 2, 3, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {1, 2, 0, 3}
      • Swap back digits[1] with digits[2], thus digits[] = {1, 0, 2, 3}
    • For i = 3:
      • Swap digits[1] with digits[3], thus digits[] = {1, 3, 2, 0}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 3, 2, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {1, 3, 0, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {1, 3, 0, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {1, 3, 2, 0}
      • Swap back digits[1] with digits[3], thus digits[] = {1, 0, 2, 3}
    • Swap back digits[0] with digits[1], thus digits[] = {0, 1, 2, 3}
  • For i = 2:
    • Swap digits[0] with digits[2], thus digits[] = {2, 1, 0, 3}
    • Recursively call generatePermutations(digits, 1, 4, permutations, index)
    • start = 1 and the current permutation is not stored since start != n - 1
    • The loop runs from i = 1 to i = 3.
    • For i = 1:
      • There is no swap since i == start
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 1, 0, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {2, 1, 3, 0}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 1, 3, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {2, 1, 0, 3}
      • There is no swap since i == start
    • For i = 2:
      • Swap digits[1] with digits[2], thus digits[] = {2, 0, 1, 3}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 0, 1, 3} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {2, 0, 3, 1}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 0, 3, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {2, 0, 1, 3}
      • Swap back digits[1] with digits[2], thus digits[] = {2, 1, 0, 3}
    • For i = 3:
      • Swap digits[1] with digits[3], thus digits[] = {2, 3, 0, 1}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 3, 0, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {2, 3, 1, 0}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {2, 3, 1, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {2, 3, 0, 1}
      • Swap back digits[1] with digits[3], thus digits[] = {2, 1, 0, 3}
    • Swap back digits[0] with digits[2], thus digits[] = {0, 1, 2, 3}
  • For i = 3:
    • Swap digits[0] with digits[3], thus digits[] = {3, 1, 2, 0}
    • Recursively call generatePermutations(digits, 1, 4, permutations, index)
    • start = 1 and the current permutation is not stored since start != n - 1
    • The loop runs from i = 1 to i = 3.
    • For i = 1:
      • There is no swap since i == start
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 3.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 1, 2, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {3, 1, 0, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 1, 0, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {3, 1, 2, 0}
      • There is no swap since i == start
    • For i = 2:
      • Swap digits[1] with digits[2], thus digits[] = {3, 2, 1, 0}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 4.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 2, 1, 0} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {3, 2, 0, 1}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 2, 0, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {3, 2, 1, 0}
      • Swap back digits[1] with digits[2], thus digits[] = {3, 1, 2, 0}
    • For i = 3:
      • Swap digits[1] with digits[3], thus digits[] = {3, 0, 2, 1}
      • Recursively call generatePermutations(digits, 2, 4, permutations, index)
      • start = 2 and the current permutation is not stored since start != n - 1
      • The loop runs from i = 2 to i = 4.
      • For i = 2:
        • There is no swap since i == start
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 0, 2, 1} is stored since start == n - 1
        • Increment index and return to the previous call.
        • There is no swap since i == start
      • For i = 3:
        • Swap digits[2] with digits[3], thus digits[] = {3, 0, 1, 2}
        • Recursively call generatePermutations(digits, 3, 4, permutations, index)
        • start = 3 and the current permutation {3, 0, 1, 2} is stored since start == n - 1
        • Increment index and return to the previous call.
        • Swap back digits[2] with digits[3], thus digits[] = {3, 0, 2, 1}
      • Swap back digits[1] with digits[3], thus digits[] = {3, 1, 2, 0}
    • Swap back digits[0] with digits[3], thus digits[] = {0, 1, 2, 3}
  • There is no swap since i == start

Let's break down the operations inside the generatePermutations function above and calculate the total time complexity:

  • Level \(0\) (initial call):
    • The comparison start == n - 1 takes \(1\) unit of time
    • The initialization int i = start in the second for loop takes \(1\) unit of time
    • The for loop runs \(n\) times, and each iteration involves:
      • The comparison i < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(i = n\)).
      • The increment i++ takes \(1\) unit of time, repeated \(n\) times.
      • The swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n\) times.
      • The second (backtracking) swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n\) times.
  • Level \(1\) (There are \(n\) recursive calls):
    • The comparison start == n - 1 takes \(1\) unit of time
    • The initialization int i = start in the second for loop takes \(1\) unit of time
    • The for loop runs \(n-1\) times, and each iteration involves:
      • The comparison i < n takes \(1\) unit of time, repeated \(n - 1 + 1\) times (including the last comparison when \(i = n\)).
      • The increment i++ takes \(1\) unit of time, repeated \(n - 1\) times.
      • The swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - 1\) times.
      • The second (backtracking) swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - 1\) times.
  • Level \(2\) (There are \(n \cdot (n-1)\) recursive calls):
    • The comparison start == n - 1 takes \(1\) unit of time
    • The initialization int i = start in the second for loop takes \(1\) unit of time
    • The for loop runs \(n-2\) times, and each iteration involves:
      • The comparison i < n takes \(1\) unit of time, repeated \(n - 2 + 1\) times (including the last comparison when \(i = n\)).
      • The increment i++ takes \(1\) unit of time, repeated \(n - 2\) times.
      • The swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - 2\) times.
      • The second (backtracking) swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - 2\) times.
  • Level \(k\) (There are \(n \cdot (n-1) \cdot ... \cdot (n-k+1)\) recursive calls):
    • The comparison start == n - 1 takes \(1\) unit of time
    • The initialization int i = start in the second for loop takes \(1\) unit of time
    • The for loop runs \(n-k\) times, and each iteration involves:
      • The comparison i < n takes \(1\) unit of time, repeated \(n - k + 1\) times (including the last comparison when \(i = n\)).
      • The increment i++ takes \(1\) unit of time, repeated \(n - k\) times.
      • The swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - k\) times.
      • The second (backtracking) swap function swap(&digits[start], &digits[i]) takes \(5\) units of time, repeated \(n - k\) times.
  • Level final (There are \(n!\) recursive calls, one per permutation):
    • The comparison start == n - 1 takes \(1\) unit of time
    • The initialization int i = 0 in the first for loop takes \(1\) unit of time
    • The for loop runs \(n\) times, and each iteration involves:
      • The comparison i < n takes \(1\) unit of time, repeated \(n + 1\) times (including the last comparison when \(i = n\)).
      • The increment i++ takes \(1\) unit of time, repeated \(n\) times.
      • The array access digits[i] takes \(1\) unit of time, repeated \(n\) times.
      • The assignment function permutations[*index][i] = digits[i] takes \(1\) unit of time, repeated \(n\) times.
    • The increment (*index)++ takes \(1\) unit of time
    • The return statement return takes \(1\) unit of time

Thus, the total number of operations is:

  • The comparison start == n - 1 takes \(n + n \cdot (n-1) + n \cdot (n-1) \cdot (n-2) + ... + n! + 1\) units of time (one comparison per call, summed over every call at every level).
  • The loop initializations take \(1\) unit of time per call: int i = start in the second for loop runs once in every non-leaf call, and int i = 0 in the first for loop runs once in every leaf call, for a total of \(n + n \cdot (n-1) + ... + n! + 1\) units of time.
  • The first for loop takes \(n! \times (4n + 1)\) units of time.
  • The increment (*index)++ takes \(n!\) units of time.
  • The return statement return takes \(n!\) units of time.
  • The second for loop takes \(\frac{n!}{(n-k)!} \cdot (12 \cdot (n-k) + 1)\) units of time at level \(k\), summed over \(k = 0, 1, ..., n-2\): \((12n + 1) + n \cdot (12(n-1) + 1) + n(n-1) \cdot (12(n-2) + 1) + ...\) units of time.

In total, the leaf-level calls dominate with \((4n + 4) \cdot n!\) units of time, and the recursion levels above them contribute lower-order terms, so the running time is proportional to \(n \cdot n!\) and it is Factorial Time Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the time required also increases factorially.

Space Complexity

Space complexity measures the total amount of memory space required by an algorithm to execute, including both the space needed for the input data and any auxiliary space needed for computations.

When a program is executing, it utilizes memory primarily for these three reasons:

  1. Instruction Space: To store the compiled code of the program, which includes all the instructions the CPU will execute.
  2. Environmental Stack: To manage function calls and local variables. This includes information about active functions, their parameters, and where to return after a function execution.
  3. Data Space: To hold all the variables, constants, and data structures that the program uses during its execution.

When analyzing an algorithm's space complexity, we typically focus on the data space, which includes the memory required for variables, constants, data structures, and any additional memory needed during the execution of the algorithm. Instruction space (the code itself) and environmental stack (such as function call stacks) are generally not included in this analysis. This helps in understanding how much memory an algorithm will require as the input size increases.

To calculate the space complexity, we must know the memory required to store values of each data type (which depends on the compiler).

In C, the memory required for different data types can vary based on the compiler and the architecture, but here are some common sizes:

  1. char: Typically \(1\) byte (8 bits).
  2. int: Usually \(4\) bytes (32 bits) on most platforms, but can be 2 bytes (16 bits) on older systems.
  3. float: Generally \(4\) bytes (32 bits).
  4. double: Usually \(8\) bytes (64 bits).
  5. long: Typically \(4\) bytes (32 bits) on 32-bit systems and \(8\) bytes (64 bits) on 64-bit systems.
  6. long long: Generally \(8\) bytes (64 bits).
  7. memory address: Generally \(8\) bytes (64 bits) on a 64-bit machine or \(4\) bytes (32 bits) on a 32-bit machine.

Constant Space Complexity

Constant space complexity refers to an algorithm that requires a fixed amount of memory space regardless of the input size. In other words, the memory needed does not increase as the size of the input grows.

Consider the following piece of code

int sum(int a, int b)
{
   return a + b;
}

Let's break down the operations inside the sum function above and calculate the total space complexity:

  • The first parameter a requires \(4\) bytes.
  • The second parameter b requires \(4\) bytes.
  • The return value requires another \(4\) bytes.

In total, it requires \(12\) bytes of memory to complete its execution and it is Constant Space Complexity. It does not change based on the input values of a and b: for all inputs, it requires the same amount of memory, i.e. \(12\) bytes.

Linear Space Complexity

Linear space complexity refers to an algorithm where the amount of memory required grows linearly with the size of the input. In other words, if the input size doubles, the memory usage also roughly doubles.

Consider the following piece of code

int sum(int arr[], int n)
{
    int sum = 0;
    for(int i = 0; i < n; i++)
        sum = sum + arr[i];
    return sum;
}

Let's break down the operations inside the sum function above and calculate the total space complexity:

  • The first parameter int arr[] requires \(n \cdot 4\) bytes.
  • The second parameter int n requires \(4\) bytes.
  • The local variable int sum requires \(4\) bytes.
  • The local variable int i within the for loop requires \(4\) bytes.
  • The return value requires another \(4\) bytes.

In total, it requires \(4n + 16\) bytes of memory to complete its execution and it is Linear Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases linearly.

Logarithmic Space Complexity

Logarithmic space complexity refers to algorithms where the amount of memory used grows logarithmically with the size of the input. This means that the space required increases very slowly as the input size grows, typically as the logarithm of the input size in some base (commonly base \(2\)).

Consider the following piece of code

#include <stdio.h>

// Function to compute GCD using recursion
int gcd(int a, int b) {
    if (b == 0) // Base case: GCD(a, 0) = a
        return a;
    return gcd(b, a % b); // Recursive call
}

int main() {
    int a, b;
    
    // Input two integers
    printf("Enter two integers: ");
    scanf("%d %d", &a, &b);
    
    // Calculate GCD
    int result = gcd(a, b);
    
    // Display the result
    printf("GCD of %d and %d is %d\n", a, b, result);
    
    return 0; // Exit program
}

Let's break down the operations inside the gcd function above and calculate the total space complexity:

  • The first parameter int a requires \(4\) bytes.
  • The second parameter int b requires \(4\) bytes.
  • The return value return a requires another \(4\) bytes.
  • The return address return gcd(b, a % b) requires \(8\) bytes.

So, each recursive call adds \(4 + 4 + 4 + 8 = 20\) bytes. The number of recursive calls is proportional to \(\log(\min(a, b))\), since Euclid's algorithm shrinks its arguments at least geometrically. Therefore, the total space complexity is roughly \(20 \cdot \log(\min(a, b))\).

In total, it requires about \(20 \cdot \log(\min(a, b))\) bytes of memory to complete its execution and it is Logarithmic Space Complexity. It changes based on the input values: if we increase a and b, the memory required increases logarithmically.

Quadratic Space Complexity

Quadratic space complexity refers to an algorithm or process where the amount of memory (or storage) required grows quadratically with respect to the size of the input.

Consider the following piece of code

void fillSquareMatrix(int** matrix, int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            matrix[i][j] = i + j;
        }
    }
}

Let's break down the memory used inside the fillSquareMatrix function above and calculate the total space complexity:

  • The first parameter int** matrix requires \(8 \cdot n\) bytes for the array of row pointers and \(4 \cdot n^{2}\) bytes for the array of integers.
  • The second parameter int n requires \(4\) bytes.
  • The local variable int i requires another \(4\) bytes.
  • The local variable int j requires another \(4\) bytes.

In total, it requires \(4n^{2} + 8n + 12\) bytes of memory to complete its execution and it is Quadratic Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases quadratically.

Cubic Space Complexity

Cubic space complexity refers to an algorithm or process where the amount of memory (or storage) required grows cubically with respect to the size of the input.

Consider the following piece of code

void fillCubeMatrix(int*** matrix, int n) {

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                matrix[i][j][k] = i + j + k;
            }
        }
    }
}

Let's break down the memory used inside the fillCubeMatrix function above and calculate the total space complexity:

  • The first parameter int*** matrix requires \(8 \cdot n^{2}\) bytes for the arrays of row pointers and \(4 \cdot n^{3}\) bytes for the array of integers.
  • The second parameter int n requires \(4\) bytes.
  • The local variable int i requires another \(4\) bytes.
  • The local variable int j requires another \(4\) bytes.
  • The local variable int k requires another \(4\) bytes.

In total, it requires \(4n^{3} + 8n^{2} + 16\) bytes of memory to complete its execution and it is Cubic Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases cubically.

Exponential Space Complexity

Exponential space complexity refers to an algorithm whose memory requirement roughly doubles with each unit increase in input size. In exponential space algorithms, the amount of memory required grows extremely fast as the size of the input increases.

Consider the following piece of code

int fibonacci(int n) {
    if (n <= 1)
        return n;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

The total number of function calls can be represented by the recurrence relation:

\[ T(n) = T(n - 1) + T(n - 2) + 1 \]

Where:

  • \(T(n)\) is the total call count for fibonacci(n)
  • The \(+1\) accounts for the current call itself

Let's break down the operations inside the fibonacci function above and calculate the total space complexity:

  • The parameter int n requires \(4\) bytes.
  • The return value return n or return fibonacci(n - 1) + fibonacci(n - 2) requires \(4\) bytes.
  • The return address fibonacci(n - 1) requires \(8\) bytes.
  • The return address fibonacci(n - 2) requires \(8\) bytes.

So, each recursive call adds \(4 + 4 + 8 + 8 = 24\) bytes. The number of recursive calls is \(T(n)\), which grows exponentially in \(n\). Therefore, the total space complexity is \(24 \cdot T(n)\).

In total, it requires \(24 \cdot T(n)\) bytes of memory to complete its execution and it is Exponential Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases exponentially.

Linearithmic Space Complexity

Linearithmic space complexity refers to memory usage that grows in proportion to \(n \log n\), i.e., with both a linear and a logarithmic dependence on the size of the input. As the input size grows, the memory usage grows slightly faster than linear but much slower than quadratic or cubic space complexities.

Consider the following piece of code

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void fft(double* xRe, double* xIm, int n) {
    if (n <= 1) return;

    // Divide
    double* evenRe = malloc(n / 2 * sizeof(double));
    double* evenIm = malloc(n / 2 * sizeof(double));
    double* oddRe = malloc(n / 2 * sizeof(double));
    double* oddIm = malloc(n / 2 * sizeof(double));

    for (int i = 0; i < n / 2; i++) {
        evenRe[i] = xRe[i * 2];
        evenIm[i] = xIm[i * 2];
        oddRe[i] = xRe[i * 2 + 1];
        oddIm[i] = xIm[i * 2 + 1];
    }

    // Conquer
    fft(evenRe, evenIm, n / 2);
    fft(oddRe, oddIm, n / 2);

    // Combine
    for (int k = 0; k < n / 2; k++) {
        double tRe = cos(-2 * M_PI * k / n) * oddRe[k] - sin(-2 * M_PI * k / n) * oddIm[k];
        double tIm = sin(-2 * M_PI * k / n) * oddRe[k] + cos(-2 * M_PI * k / n) * oddIm[k];

        xRe[k] = evenRe[k] + tRe;
        xIm[k] = evenIm[k] + tIm;
        xRe[k + n / 2] = evenRe[k] - tRe;
        xIm[k + n / 2] = evenIm[k] - tIm;
    }

    free(evenRe);
    free(evenIm);
    free(oddRe);
    free(oddIm);
}

int main() {
    // Example input
    int n = 8; // Size of input (must be a power of 2)
    double xRe[] = {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0}; // Real part
    double xIm[] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}; // Imaginary part

    // Perform FFT
    fft(xRe, xIm, n);

    // Print the results
    printf("FFT Results:\n");
    for (int i = 0; i < n; i++) {
        printf("x[%d] = %.2f + %.2fi\n", i, xRe[i], xIm[i]);
    }

    return 0;
}

Let's break down the operations inside the fft function above and calculate the total space complexity:

  • The parameter double* xRe requires \(8\) bytes for the pointer and \(8 \times \frac{n}{2^{k}}\) bytes for the array of doubles at recursion level \(k\).
  • The parameter double* xIm requires \(8\) bytes for the pointer and \(8 \times \frac{n}{2^{k}}\) bytes for the array of doubles at recursion level \(k\).
  • The parameter int n requires \(4\) bytes.
  • The local array double* evenRe requires \(8\) bytes for the pointer and \(8 \times \frac{1}{2} \times \frac{n}{2^{k}}\) bytes for the dynamically allocated array of doubles at recursion level \(k\).
  • The local array double* evenIm requires \(8\) bytes for the pointer and \(8 \times \frac{1}{2} \times \frac{n}{2^{k}}\) bytes for the dynamically allocated array of doubles at recursion level \(k\).
  • The local array double* oddRe requires \(8\) bytes for the pointer and \(8 \times \frac{1}{2} \times \frac{n}{2^{k}}\) bytes for the dynamically allocated array of doubles at recursion level \(k\).
  • The local array double* oddIm requires \(8\) bytes for the pointer and \(8 \times \frac{1}{2} \times \frac{n}{2^{k}}\) bytes for the dynamically allocated array of doubles at recursion level \(k\).
  • The return address fft(evenRe, evenIm, n / 2) requires \(8\) bytes.
  • The return address fft(oddRe, oddIm, n / 2) requires \(8\) bytes.
  • The local variable double tRe requires \(8\) bytes.
  • The local variable double tIm requires \(8\) bytes.

So, each recursive call at level \(k\) adds \(8 \times \frac{n}{2^{k}} + 8 + 8 \times \frac{n}{2^{k}} + 8 + 4 + 4 \times \frac{n}{2^{k}} + 8 + 4 \times \frac{n}{2^{k}} + 8 + 4 \times \frac{n}{2^{k}} + 8 + 4 \times \frac{n}{2^{k}} + 8 + 8 + 8 + 8 + 8 = \frac{32n}{2^{k}} + 84\) bytes. There are \(2^{k}\) calls at level \(k\), for \(k = 0, 1, ..., \log_{2}(n)\). Therefore, the total space complexity is \(\sum_{k=0}^{\log_{2}(n)} 2^{k} \times (\frac{32n}{2^{k}} + 84) = 32n (\log_{2}(n) + 1) + 84 (2n - 1) = 32n \log_{2}(n) + 200n - 84\).

In total, it requires \(32n \log_{2}(n) + 200n - 84\) bytes of memory to complete its execution and it is Linearithmic Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases linearithmically.

Factorial Space Complexity

Factorial space complexity arises in algorithms where the amount of memory used grows factorially with the size of the input. This is typical in problems involving permutations or combinations of a set of elements, where you need to store all possible arrangements or selections.

Consider the following piece of code

#include <stdio.h>
#include <stdlib.h>

void swap(int *x, int *y) {
    int temp = *x;
    *x = *y;
    *y = temp;
}

// Function to generate permutations recursively and store them in a 2D array
void generatePermutations(int *digits, int start, int n, int **permutations, int *index) {
    if (start == n - 1) {
        // Store the current permutation in the 2D array
        for (int i = 0; i < n; i++) {
            permutations[*index][i] = digits[i];
        }
        (*index)++;  // Move to the next row
        return;
    } else {
        for (int i = start; i < n; i++) {
            // Swap the current element with the start
            swap(&digits[start], &digits[i]);

            // Recursively generate permutations for the remaining elements
            generatePermutations(digits, start + 1, n, permutations, index);

            // Backtrack: restore the original order
            swap(&digits[start], &digits[i]);
        }
    }
}

int main() {
    // Try to crack a lock with 3 dials
    int n = 3; // number of dials
    printf("Cracking the lock with %d dials:\n", n);
    // Calculate the total number of permutations (n!)
    int num_permutations = 1;
    for (int i = 1; i <= n; i++) {
        num_permutations *= i;
    }

    // Allocate memory for the 2D array to store the permutations
    int **permutations = malloc(num_permutations * sizeof(int *));
    for (int i = 0; i < num_permutations; i++) {
        permutations[i] = malloc(n * sizeof(int));
    }

    // Initialize the array with digits [0, 1, 2, ..., n-1]
    int digits[n];
    for (int i = 0; i < n; i++) {
        digits[i] = i;
    }

    // Initialize the index to track the current permutation
    int index = 0;

    // Generate all permutations and store them in the 2D array
    generatePermutations(digits, 0, n, permutations, &index);

    // Print the stored permutations
    printf("All generated permutations:\n");
    for (int i = 0; i < num_permutations; i++) {
        printf("Permutation %d: ", i + 1);
        for (int j = 0; j < n; j++) {
            printf("%d ", permutations[i][j]);
        }
        printf("\n");
    }

    // Free the allocated memory
    for (int i = 0; i < num_permutations; i++) {
        free(permutations[i]);
    }
    free(permutations);

    return 0;
}

Let's break down the operations inside the generatePermutations function above and calculate the total space complexity:

  • The parameter int *digits requires \(8\) bytes for the pointer and \(4 \times n\) bytes for the array of integers.
  • The parameter int start requires \(4\) bytes.
  • The parameter int n requires \(4\) bytes.
  • The parameter int **permutations requires \(8 \times n!\) bytes for the array of row pointers and \(4 \times n \times n!\) bytes for the stored integers.
  • The parameter int *index requires \(8\) bytes for the pointer and \(4\) bytes for the integer it points to.
  • The local variable int i in the first for loop requires \(4\) bytes.
  • The local variable int i in the second for loop requires \(4\) bytes.
  • The return address swap(&digits[start], &digits[i]) requires \(8\) bytes.
  • The return address generatePermutations(digits, start + 1, n, permutations, index) requires \(8\) bytes.
  • The return address swap(&digits[start], &digits[i]) requires \(8\) bytes.

So, each recursive call adds \(8 + 4n + 4 + 4 + 8n! + 4n \cdot n! + 8 + 4 + 4 + 4 + 8 + 8 + 8 = 4n \cdot n! + 8n! + 4n + 60\) bytes. The number of recursive calls is \(n!\). Therefore, the total space complexity is \(n! \times (4n \cdot n! + 8n! + 4n + 60)\).

In total, it requires \(n! \times (4n \cdot n! + 8n! + 4n + 60)\) bytes of memory to complete its execution and it is Factorial Space Complexity. It changes based on the \(n\) value. If we increase the \(n\) value then the memory required also increases factorially.