Abstract

Algorithm efficiency impacts performance, scalability, resource utilization, user experience, competitiveness, and, increasingly, environmental sustainability. It is a fundamental aspect of software engineering that directly influences the effectiveness and success of software systems.

Tip

Effective Use: The key is to understand the worst-case growth rate of an algorithm’s time or space requirements and to aim for algorithms with lower complexity for the given problem:

  1. Identify the most impactful operations (loops, recursion) affecting complexity.
  2. Simplify to the dominant term for analysis.
  3. Select the most efficient algorithm (lowest complexity).
  4. Consider trade-offs between time and space complexity.
  5. Test with real data for practical insights.
  6. Understand both time and space complexity.
  7. Refine algorithms for better efficiency.

Sometimes, an algorithm might have a lower time complexity but higher space complexity or vice versa. Choose based on which resources are more critical for your specific situation.
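As a concrete illustration of this trade-off, here is a hypothetical duplicate-detection sketch (the function names are illustrative, not from the original): the first version uses no extra memory but compares every pair, while the second spends O(n) memory on a hash set to finish in a single pass.

```rust
use std::collections::HashSet;

// T = O(n^2), S = O(1): compares every pair, no extra memory.
fn has_duplicates_slow(nums: &[i32]) -> bool {
    for i in 0..nums.len() {
        for j in (i + 1)..nums.len() {
            if nums[i] == nums[j] {
                return true;
            }
        }
    }
    false
}

// T = O(n), S = O(n): trades extra memory for a single pass.
fn has_duplicates_fast(nums: &[i32]) -> bool {
    let mut seen = HashSet::new();
    // `insert` returns false when the value is already present.
    !nums.iter().all(|&x| seen.insert(x))
}

fn main() {
    let nums = [3, 1, 4, 1, 5];
    println!("slow: {}", has_duplicates_slow(&nums)); // true
    println!("fast: {}", has_duplicates_fast(&nums)); // true
}
```

For small inputs the quadratic version may even be faster in practice; the linear version wins as the input grows, at the cost of memory.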


Big O Notation

Big O notation represents the efficiency of an algorithm as a function of input size. It expresses the upper bound of an algorithm’s complexity, indicating how runtime or space requirements grow with input size in the worst-case scenario.

With a solid understanding of Big O notation, developers and engineers can make informed decisions about algorithm selection, optimization, and implementation, balancing time and space complexity across applications and scenarios.



Time Complexities

- CONSTANT TIME:

Operations that take the same amount of time regardless of input size.

Example: Accessing an element in an array by index.

Example

fn main() {
	let an_array = [1, 2, 3];
	let x = an_array[0]; // T = O(1) - Directly indexed
	println!("x is {}", x);
}

- LOGARITHMIC TIME:

Operations whose remaining search space halves with each step.

Example: Binary search in a sorted array.

Example

fn binary_search(array: &[i32], target: i32) -> Option<usize> {
    let mut left = 0;
    let mut right = array.len(); // Exclusive upper bound: no underflow on empty input
    while left < right {
        let mid = left + (right - left) / 2; // T = O(log n) - search space halved
        if array[mid] == target {
            return Some(mid);
        } else if array[mid] < target {
            left = mid + 1;
        } else {
            right = mid; // mid is excluded, so no usize underflow when mid == 0
        }
    }
    None
}
 
fn main() {
    let an_array = [1, 2, 3, 4, 7, 8, 9];
    let x = binary_search(&an_array, 7); // T = O(log n)
    println!("x is {:?}", x);
}

- LINEAR TIME:

Operations that scale linearly with the input size.

Example: Iterating through an array or a linked list.

Example

#[derive(Debug)]
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}
 
fn insert(node: &mut Node, value: i32) {
    let mut current = node;
    while let Some(ref mut next) = current.next { // T = O(n), S = O(1)
        current = next;
    }
    current.next = Some(Box::new(Node {
        value,
        next: None,
    }));
}
 
fn main() {
    let mut a_list = Node {
        value: 1,
        next: Some(Box::new(Node {
            value: 2,
            next: Some(Box::new(Node {
                value: 3,
                next: None,
            })),
        })),
    };
    insert(&mut a_list, 4);
    println!("{:?}", a_list);
}

- LINEARITHMIC TIME:

Typically seen in divide-and-conquer algorithms.

Example: Efficient sorting algorithms like Merge Sort and Quick Sort.

Example

fn quicksort(arr: &mut [u32]) {
    if arr.len() <= 1 {
        return; // Already sorted
    }
 
    let pivot_index = partition(arr);
    // Recursion depth is O(log n) on average (O(n) in the worst case)
    quicksort(&mut arr[0..pivot_index]);
    quicksort(&mut arr[pivot_index + 1..]);
}
 
fn partition(arr: &mut [u32]) -> usize {
    let pivot_index = arr.len() - 1;
    let mut i = 0;
 
    // O(n) - one linear pass over the slice
    for j in 0..pivot_index {
        if arr[j] <= arr[pivot_index] {
            arr.swap(i, j);
            i += 1;
        }
    }
 
    arr.swap(i, pivot_index);
    i
}
 
fn main() {
    let mut a_list = vec![7, 2, 1, 3, 2, 2, 9];
    // T = O(n) * O(log n) = O(n log n) on average; worst case O(n^2)
    quicksort(&mut a_list);
    println!("{:?}", a_list);
}

- QUADRATIC TIME:

Operations that grow proportionally to the square of the input size.

Example: Nested loops where each iteration depends on the input size.

Example

fn bubble_sort(arr: &mut [u32]) {
    // T = O(n^2), S = O(1)
    // Time complexity: O(n^2) because of the nested loops.
    // Space complexity: O(1) as the sorting is done in place without any additional memory.
 
    let mut n = arr.len();
    let mut swapped;
 
    while n > 1 {
        swapped = false;
        for i in 0..n - 1 {
            if arr[i] > arr[i + 1] {
                arr.swap(i, i + 1);
                swapped = true;
            }
        }
        if !swapped {
            break; // Break early if no swaps were made
        }
        n -= 1;
    }
}
 
fn main() {
    let mut numbers = vec![64, 34, 25, 12, 22, 11, 90];
    bubble_sort(&mut numbers);
    println!("{:?}", numbers);
}

- EXPONENTIAL TIME:

Operations whose count doubles with each element added to the input.

Example: Brute-force algorithms or recursive algorithms without memoization.

Example

fn generate_subsets(nums: &[u32]) -> Vec<Vec<u32>> {
    // T = O(2^n), S = O(n * 2^n)
    // Time complexity: O(2^n), as there are 2^n subsets for an array of size n.
    // Space complexity: O(n * 2^n), for storing all subsets.
 
    let mut subsets = vec![];
 
    fn backtrack(start: usize, nums: &[u32], current: &mut Vec<u32>, subsets: &mut Vec<Vec<u32>>) {
        subsets.push(current.clone()); // Store the current subset
        for i in start..nums.len() {
            current.push(nums[i]); // Include nums[i]
            backtrack(i + 1, nums, current, subsets);
            current.pop(); // Backtrack
        }
    }
 
    backtrack(0, nums, &mut vec![], &mut subsets);
    subsets
}
 
fn main() {
    let nums = vec![1, 2, 3];
    let subsets = generate_subsets(&nums);
    println!("All subsets of {:?}: {:?}", nums, subsets);
}

- FACTORIAL TIME:

Operations that grow at factorial rates with the input size.

Example: Algorithms generating all permutations or combinations.

Example

fn generate_permutations(nums: &mut Vec<u32>, start: usize, result: &mut Vec<Vec<u32>>) {
    // T = O(n!), S = O(n!)
    // Time complexity: O(n!) because there are n! permutations of n elements.
    // Space complexity: O(n!) to store all permutations.
 
    if start == nums.len() {
        result.push(nums.clone()); // Store the current permutation
        return;
    }
 
    for i in start..nums.len() {
        nums.swap(start, i); // Swap to place nums[i] at the current position
        generate_permutations(nums, start + 1, result); // Recurse for the next position
        nums.swap(start, i); // Swap back (backtrack)
    }
}
 
fn main() {
    let mut nums = vec![1, 2, 3];
    let mut result = vec![];
    generate_permutations(&mut nums, 0, &mut result);
    println!("All permutations of {:?}: {:?}", nums, result);
}

Space Complexities

- CONSTANT SPACE:

Algorithms that use a fixed amount of memory regardless of input size.

Example: Iterating through an array to find the maximum element using only a single variable to keep track of the maximum element found so far.
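The single-variable scan described above can be sketched as follows (`find_max` is a hypothetical helper, not from the original):

```rust
fn find_max(arr: &[i32]) -> Option<i32> {
    // S = O(1): only one accumulator, regardless of input size.
    let mut max = None;
    for &x in arr {
        match max {
            None => max = Some(x),
            Some(m) if x > m => max = Some(x),
            _ => {}
        }
    }
    max
}

fn main() {
    let numbers = [3, 7, 2, 9, 4];
    println!("max is {:?}", find_max(&numbers)); // Some(9)
}
```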

- LINEAR SPACE:

Algorithms where memory usage grows linearly with input size.

Example: A merge sort algorithm that divides an array into smaller parts and sorts them, requiring additional space proportional to the input size for temporary storage during the merging phase.
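A minimal sketch of that merging behaviour, assuming a simple top-down merge sort that allocates a temporary buffer for each merge:

```rust
fn merge_sort(arr: &[u32]) -> Vec<u32> {
    // S = O(n): the merge step allocates a temporary buffer
    // proportional to the input size.
    if arr.len() <= 1 {
        return arr.to_vec();
    }
    let mid = arr.len() / 2;
    let left = merge_sort(&arr[..mid]);
    let right = merge_sort(&arr[mid..]);

    // Merge the two sorted halves into a new buffer.
    let mut merged = Vec::with_capacity(arr.len());
    let (mut i, mut j) = (0, 0);
    while i < left.len() && j < right.len() {
        if left[i] <= right[j] {
            merged.push(left[i]);
            i += 1;
        } else {
            merged.push(right[j]);
            j += 1;
        }
    }
    merged.extend_from_slice(&left[i..]);
    merged.extend_from_slice(&right[j..]);
    merged
}

fn main() {
    let sorted = merge_sort(&[7, 2, 1, 3, 2, 2, 9]);
    println!("{:?}", sorted); // [1, 2, 2, 2, 3, 7, 9]
}
```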

- QUADRATIC SPACE:

Algorithms that use space proportional to the square of the input size.

Example: A brute-force algorithm that involves nested loops to explore all pairs/combinations of elements and store them can require quadratic space.
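A hypothetical sketch of this: storing every pair of an n-element slice requires roughly n²/2 entries (`all_pairs` is an illustrative helper, not from the original):

```rust
fn all_pairs(nums: &[u32]) -> Vec<(u32, u32)> {
    // S = O(n^2): roughly n^2 / 2 pairs are stored for n elements.
    let mut pairs = Vec::new();
    for i in 0..nums.len() {
        for j in (i + 1)..nums.len() {
            pairs.push((nums[i], nums[j]));
        }
    }
    pairs
}

fn main() {
    let pairs = all_pairs(&[1, 2, 3]);
    println!("{:?}", pairs); // [(1, 2), (1, 3), (2, 3)]
}
```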