Understand algorithm efficiency with Big O notation, asymptotic analysis & Master Theorem.
6 Topics · 50 Minutes · C++ Language
| Complexity | Name | n=10 | n=100 | n=1000 | Example |
|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | Array access |
| O(log n) | Logarithmic | 3 | 7 | 10 | Binary search |
| O(n) | Linear | 10 | 100 | 1,000 | Single loop |
| O(n log n) | Linearithmic | 33 | 664 | 9,966 | Merge sort |
| O(n²) | Quadratic | 100 | 10,000 | 1,000,000 | Nested loops |
| O(2ⁿ) | Exponential | 1,024 | ~10³⁰ | ~10³⁰¹ | Recursive fib |
Why it matters:
- Helps compare algorithms objectively
- Predicts performance for large inputs
- Essential for interviews and competitive programming
Key Points:
- We measure operations, not actual time (time varies by machine)
- Focus on worst-case (Big O) usually
- Ignore constants and lower-order terms: O(2n + 100) = O(n)
Example:
- Array search: Check each element → O(n)
- Binary search: Halve each step → O(log n)
- Nested loops: n × n iterations → O(n²)
```cpp
// How to count operations?

// O(1) - Constant: same operations regardless of n
int first = arr[0]; // 1 operation

// O(n) - Linear: operations grow with n
for (int i = 0; i < n; i++) { // n operations
    cout << arr[i];
}

// O(n²) - Quadratic: nested loops
for (int i = 0; i < n; i++) {     // n
    for (int j = 0; j < n; j++) { // × n
        cout << i + j;            // = n² operations
    }
}

// O(log n) - Logarithmic: halving
while (n > 1) { // runs log₂(n) times
    n = n / 2;
}
```
Three Main Notations:
1. Big O (O) - Upper Bound
- Worst-case scenario
- "At most this much time"
- Most commonly used
- f(n) = O(g(n)) means f(n) ≤ c·g(n) for large n
2. Big Omega (Ω) - Lower Bound
- Best-case scenario
- "At least this much time"
- f(n) = Ω(g(n)) means f(n) ≥ c·g(n) for large n
3. Big Theta (Θ) - Tight Bound
- Exact growth rate
- "Exactly this much time"
- f(n) = Θ(g(n)) means c₁·g(n) ≤ f(n) ≤ c₂·g(n)
Example - Linear Search:
- Best case: Ω(1) - found at first position
- Worst case: O(n) - found at last or not found
- Average: Θ(n/2) = Θ(n)
```cpp
// Linear Search analysis
int linearSearch(vector<int>& arr, int target) {
    for (int i = 0; i < arr.size(); i++) {
        if (arr[i] == target) return i;
    }
    return -1;
}
/*
Best case: Ω(1)
- Target is at arr[0]; only 1 comparison needed
Worst case: O(n)
- Target is at arr[n-1] or not present; n comparisons needed
Average case: Θ(n)
- On average we check half the array; n/2 comparisons → still Θ(n)
We usually say: linear search is O(n)
(we report the worst case with Big O)
*/
```
O(1) - Constant ⭐ Best
- Same time regardless of input
- Array access, hash table, push/pop
O(log n) - Logarithmic
- Halves input each step
- Binary search, balanced BST operations
O(n) - Linear
- One pass through all elements
- Linear search, sum of array, single loop
O(n log n) - Linearithmic
- Best comparison-based sorting
- Merge sort, quick sort (average), heap sort
O(n²) - Quadratic
- Nested loops over same data
- Bubble sort, check all pairs
O(n³) - Cubic
- Triple nested loops
- Matrix multiplication (naive)
O(2ⁿ) - Exponential ❌ Very slow
- Work doubles with each unit increase in n
- Recursive Fibonacci, generating all subsets
O(n!) - Factorial ❌ Worst
- Generating all permutations
- Traveling salesman (brute force)
```cpp
// Quick reference

// O(1) - Constant
arr[5];        // direct access
stack.push(x);

// O(log n) - Logarithmic
binarySearch(arr, target);
while (n > 1) n /= 2;

// O(n) - Linear
for (int x : arr) sum += x;
linearSearch(arr, target);

// O(n log n) - Linearithmic
sort(arr.begin(), arr.end());
mergeSort(arr);

// O(n²) - Quadratic
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        ; // ...

// O(2ⁿ) - Exponential
int fib(int n) {
    if (n <= 1) return n;
    return fib(n - 1) + fib(n - 2);
}

// Which to choose for n = 1,000,000?
// O(n²)      = 1,000,000,000,000 ops ❌ too slow!
// O(n log n) ≈ 20,000,000 ops        ✅ fast!
```
Rule 1: Drop Constants
- O(2n) → O(n)
- O(100) → O(1)
- O(n/2) → O(n)
Rule 2: Drop Lower Order Terms
- O(n² + n) → O(n²)
- O(n³ + n² + n) → O(n³)
- O(n + log n) → O(n)
Rule 3: Different Variables
- If two inputs: O(a + b) or O(a × b)
- Don't simplify O(n + m) to O(n)!
Rule 4: Loops
- Single loop: O(n)
- Nested loop: O(n × m) or O(n²)
- Loop that halves: O(log n)
Rule 5: Recursion
- Count recursive calls × work per call
- Use recursion tree or Master Theorem
Common Patterns:
- Loop 1 to n: O(n)
- Loop 1 to n, step 2: O(n/2) = O(n)
- Loop 1 to n, multiply by 2: O(log n)
- Two nested loops: O(n²)
```cpp
// Examples of calculation

// Example 1: simple loop
for (int i = 0; i < n; i++) { // O(n)
    cout << i;                // O(1)
}
// Total: O(n) × O(1) = O(n)

// Example 2: nested loops
for (int i = 0; i < n; i++) {     // O(n)
    for (int j = 0; j < n; j++) { // O(n)
        arr[i][j] = i + j;        // O(1)
    }
}
// Total: O(n) × O(n) × O(1) = O(n²)

// Example 3: sequential loops (ADD, don't multiply)
for (int i = 0; i < n; i++) { /* ... */ } // O(n)
for (int j = 0; j < m; j++) { /* ... */ } // O(m)
// Total: O(n) + O(m) = O(n + m)

// Example 4: logarithmic loop
for (int i = 1; i < n; i *= 2) { // O(log n)
    cout << i;
}
// i = 1, 2, 4, 8, ...; stops when 2^k ≥ n → k = log₂ n iterations

// Example 5: triangular loop
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) { // n - i iterations
        // ...
    }
}
// Total: n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 = O(n²)
```
Master Theorem:
T(n) = aT(n/b) + O(n^k)
Where:
- a = number of subproblems
- b = factor by which input is divided
- n^k = cost of dividing/merging
Three Cases:
Case 1: If k < log_b(a)
→ T(n) = Θ(n^(log_b(a)))
Work at leaves dominates
Case 2: If k = log_b(a)
→ T(n) = Θ(n^k × log n)
Work distributed evenly
Case 3: If k > log_b(a)
→ T(n) = Θ(n^k)
Work at root dominates
Examples:
- Merge Sort: T(n) = 2T(n/2) + O(n)
  - a=2, b=2, k=1, log₂(2) = 1
  - Case 2: k = log_b(a) → Θ(n log n)
- Binary Search: T(n) = T(n/2) + O(1)
  - a=1, b=2, k=0, log₂(1) = 0
  - Case 2: k = log_b(a) → Θ(log n)
```cpp
// Master Theorem examples

// 1. Merge Sort: T(n) = 2T(n/2) + O(n)
void mergeSort(vector<int>& arr, int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;  // overflow-safe midpoint
        mergeSort(arr, l, m);     // T(n/2)
        mergeSort(arr, m + 1, r); // T(n/2)
        merge(arr, l, m, r);      // O(n)
    }
}
// a=2, b=2, k=1 → log₂(2) = 1 = k → Case 2 → Θ(n log n)

// 2. Binary Search: T(n) = T(n/2) + O(1)
int binarySearch(vector<int>& arr, int t, int l, int r) {
    if (l > r) return -1;
    int m = l + (r - l) / 2;
    if (arr[m] == t) return m;
    if (arr[m] > t) return binarySearch(arr, t, l, m - 1);
    return binarySearch(arr, t, m + 1, r);
}
// a=1, b=2, k=0 → log₂(1) = 0 = k → Case 2 → Θ(log n)

// 3. Strassen's matrix multiplication: T(n) = 7T(n/2) + O(n²)
// a=7, b=2, k=2 → log₂(7) ≈ 2.81 > k → Case 1 → Θ(n^2.81)
```
Types of Space:
1. Auxiliary Space
- Extra space used (excluding input)
- What we usually measure
2. Total Space
- Input space + Auxiliary space
Common Space Complexities:
O(1) - Constant ✅ Best
- Fixed number of variables
- In-place algorithms (swap, selection sort)
O(log n) - Logarithmic
- Recursive binary search (call stack)
- Divide and conquer stack depth
O(n) - Linear
- Creating a copy of array
- Recursion with n depth
- Hash map storage
O(n²) - Quadratic
- 2D matrix/grid
- Adjacency matrix for graphs
Space vs Time Tradeoff:
Often we trade space for time:
- Use hash map (O(n) space) to get O(1) lookup
- Memoization uses space to avoid recomputation
```cpp
// Space complexity examples

// O(1) - Constant space
void swap(int& a, int& b) {
    int temp = a; // just 1 extra variable
    a = b;
    b = temp;
}

// O(n) - Linear space
vector<int> copy(vector<int>& arr) {
    vector<int> result(arr.size()); // n extra elements
    for (int i = 0; i < arr.size(); i++) {
        result[i] = arr[i];
    }
    return result;
}

// O(n) - Recursion call stack
int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1); // n stack frames!
}

// Same problem, different space:
// Fibonacci with O(n) stack space (and O(2ⁿ) time!)
int fibRecursive(int n) {
    if (n <= 1) return n;
    return fibRecursive(n - 1) + fibRecursive(n - 2);
}

// Fibonacci with O(1) space ✅ better!
int fibIterative(int n) {
    if (n <= 1) return n; // handles n = 0 and n = 1 correctly
    int a = 0, b = 1;     // only 2 variables
    for (int i = 2; i <= n; i++) {
        int temp = a + b;
        a = b;
        b = temp;
    }
    return b;
}
```