Abstract

Algorithm efficiency impacts performance, scalability, resource utilization, user experience, competitiveness, and, increasingly, environmental sustainability. It is a fundamental aspect of software engineering that directly influences the effectiveness and success of software systems.

Tip

> Effective Use: The key is to focus on the growth rate of an algorithm's worst-case time or space complexity, and to aim for algorithms with lower complexities for the given problem:

  1. Identify the most impactful operations (loops, recursion) affecting complexity.
  2. Simplify to the dominant term for analysis.
  3. Select the most efficient algorithm (lowest complexity).
  4. Consider trade-offs between time and space complexity.
  5. Test with real data for practical insights.
  6. Understand both time and space complexity.
  7. Refine algorithms for better efficiency.

Sometimes, an algorithm might have a lower time complexity but higher space complexity or vice versa. Choose based on which resources are more critical for your specific situation.
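
As a minimal sketch of this trade-off in Python (the task and function names here are illustrative, not from any particular library): checking a list for duplicates can favor space with nested loops, or favor time with an auxiliary set.

```python
def has_duplicates_low_space(items: list) -> bool:
    # O(n^2) time, O(1) extra space: compare every pair directly.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_low_time(items: list) -> bool:
    # O(n) time, O(n) extra space: remember every element seen so far.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Neither version is universally better: the first wins when memory is scarce, the second when the input is large and speed matters.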


Big O Notation

Big O notation expresses the efficiency of an algorithm as a function of its input size. It represents an upper bound on the algorithm's complexity, indicating how runtime or space requirements grow with the input size in the worst-case scenario.

With a solid understanding of Big O notation, developers and engineers can make informed decisions about algorithm selection, optimization, and implementation, maximizing time and space efficiency across a wide range of applications and scenarios.


Time Complexities

O(1) - CONSTANT TIME:

Operations that take the same amount of time regardless of input size.

Example: Accessing an element in an array by index.
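
A minimal Python illustration (the function name is hypothetical):

```python
def get_element(arr: list, index: int):
    # O(1): indexing a list takes the same time whether it holds
    # ten elements or ten million.
    return arr[index]
```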

O(LOG N) - LOGARITHMIC TIME:

Operations that halve the input size with each step.

Example: Binary search in a sorted array.
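
A minimal binary search sketch in Python, assuming the input list is already sorted:

```python
def binary_search(sorted_arr: list, target) -> int:
    # O(log n): each comparison halves the remaining search range.
    low, high = 0, len(sorted_arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present
```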

O(N) - LINEAR TIME:

Operations that scale linearly with the input size.

Example: Iterating through an array or a linked list.
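
A minimal linear scan in Python (the function name is illustrative):

```python
def contains(arr: list, target) -> bool:
    # O(n): in the worst case, every element is visited exactly once.
    for item in arr:
        if item == target:
            return True
    return False
```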

O(N LOG N) - LINEARITHMIC TIME:

Typically seen in divide-and-conquer algorithms.

Example: Efficient sorting algorithms such as Merge Sort, and Quick Sort in the average case (Quick Sort degrades to O(N^2) in its worst case).
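
A minimal Merge Sort sketch in Python (illustrative, not tuned for production):

```python
def merge_sort(arr: list) -> list:
    # O(n log n): about log n levels of splitting, with O(n) merge work per level.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```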

O(N^2) - QUADRATIC TIME:

Operations that grow proportionally to the square of the input size.

Example: Nested loops in which both loops iterate over the input.
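
A minimal Python sketch (the task is hypothetical, chosen only to show the nested-loop pattern):

```python
def count_pairs_with_sum(nums: list, target: int) -> int:
    # O(n^2): the inner loop runs up to n times for each of the n outer iterations.
    count = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                count += 1
    return count
```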

O(2^N) - EXPONENTIAL TIME:

Operations that double with each addition to the input data set.

Example: Brute-force algorithms or recursive algorithms without memoization.
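
A classic illustration in Python is the naive recursive Fibonacci function, which recomputes the same subproblems repeatedly:

```python
def fib(n: int) -> int:
    # Roughly O(2^n): each call spawns two more calls until the base case,
    # and no results are cached (no memoization).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Adding memoization (for example, Python's functools.lru_cache) brings this down to O(n) time at the cost of O(n) space.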

O(N!) - FACTORIAL TIME:

Operations that grow at factorial rates with the input size.

Example: Algorithms that generate all permutations of the input.
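
A minimal permutation generator in Python (in practice, itertools.permutations does this; the sketch just makes the N! growth visible):

```python
def permutations(items: list) -> list:
    # O(n!): there are n! orderings of n distinct items, and each one is built.
    if len(items) <= 1:
        return [items[:]]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([items[i]] + perm)
    return result
```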


Space Complexities

O(1) - CONSTANT SPACE:

Algorithms that use a fixed amount of memory regardless of input size.

Example: Iterating through an array to find the maximum element, using only a single variable to track the largest value found so far.
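
A minimal Python sketch (assumes a non-empty input list):

```python
def find_max(arr: list):
    # O(1) extra space: a single variable tracks the running maximum,
    # no matter how large the input is.
    maximum = arr[0]
    for i in range(1, len(arr)):
        if arr[i] > maximum:
            maximum = arr[i]
    return maximum
```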

O(N) - LINEAR SPACE:

Algorithms where memory usage grows linearly with input size.

Example: Merge sort, which divides an array into smaller parts and sorts them while requiring additional space proportional to the input size for temporary storage during the merging phase.
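
The merging phase in isolation makes the auxiliary buffer visible (a sketch; left and right are assumed to be already sorted):

```python
def merge(left: list, right: list) -> list:
    # O(n) extra space: the temporary buffer grows to hold every
    # element from both sorted inputs before being returned.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```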

O(N^2) - QUADRATIC SPACE:

Algorithms that use space proportional to the square of the input size.

Example: A brute-force algorithm that uses nested loops to enumerate and store all pairs of elements requires quadratic space.
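
A minimal Python sketch (the task is hypothetical):

```python
def all_pairs(items: list) -> list:
    # O(n^2) space: the result stores one tuple for every ordered
    # pair of elements, i.e. n * n entries.
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```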