Foundations

This document introduces the core ideas and tools used throughout competitive programming. The goal is to build strong habits for analyzing problems and choosing efficient solutions.

1. How Competitive Programming Problems Are Solved

Competitive programming problems are solved by combining:

  1. Careful observation
  2. Knowledge of common techniques
  3. Awareness of time and memory limits

Successful solutions rarely come from coding immediately. They come from understanding structure first.

2. Core Problem-Solving Principles

2.1 Blackboxing

Many tools in competitive programming are treated as black boxes.

A black box:

  • Has a clear purpose
  • Comes with performance guarantees
  • Does not require understanding internal implementation

Examples:

  • Sorting algorithms
  • Binary search
  • Standard data structures (set, multiset, priority_queue)

When solving problems, focus on what a tool provides, not how it is built.

2.2 Generalization

Individual problems are rarely unique. Most are variations of a small number of patterns.

Examples:

  • Pairing objects → sorting + two pointers
  • Repeated best-choice decisions → greedy algorithms
  • Searching for an optimal value → binary search

The goal is to recognize which known pattern a problem fits.

2.3 From Observation to Technique

A common workflow:

  1. Analyze constraints and input sizes
  2. Make observations about the structure of the problem
  3. Translate those observations into known techniques

Algorithms are consequences of observations, not the starting point.

3. Time Complexity and Feasibility

Before implementing a solution, estimate whether it will run in time.

A common rule of thumb:

  • About 10^8 operations per second

Rough feasibility guide (n = input size):

  n upper bound | Possible complexities
  10            | O(n!), O(n^7), O(n^6)
  20            | O(2^n · n), O(n^5)
  80            | O(n^4)
  400           | O(n^3)
  7,500         | O(n^2)
  7·10^4        | O(n√n)
  5·10^5        | O(n log n)
  5·10^6        | O(n)
  10^18         | O(log^2 n), O(log n), O(1)

If an approach is too slow, the solution requires a different idea, not micro-optimizations.
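
The rule of thumb above can be applied mechanically: multiply out the complexity at the maximum input size and compare against the operations budget. A minimal sketch, assuming a budget of about 10^8 operations per second (the constant varies by judge and language):

```python
# Rough feasibility check: does an algorithm doing op_count simple
# operations fit in the time limit? Assumes ~1e8 ops/second.
OPS_PER_SECOND = 1e8

def is_feasible(op_count, time_limit_seconds=1.0):
    """Return True if op_count operations likely fit in the time limit."""
    return op_count <= OPS_PER_SECOND * time_limit_seconds

# n = 7,500 with an O(n^2) algorithm: ~5.6e7 operations -> fits.
print(is_feasible(7500 ** 2))          # True
# n = 5e5 with an O(n^2) algorithm: 2.5e11 operations -> far too slow.
print(is_feasible((5 * 10**5) ** 2))   # False
```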

4. Fundamental Data Structures

4.1 Vector

A vector is a dynamic array that supports:

  • Fast random access
  • Efficient iteration
  • Compatibility with sorting algorithms

Vectors are the default container in competitive programming.

4.2 Multiset

A multiset stores elements in sorted order and allows duplicates.

Key properties:

  • Logarithmic insertion and deletion
  • Efficient access to smallest or largest elements
  • Supports queries such as “largest value ≤ X”

Use a multiset when:

  • Order matters
  • Elements are added and removed dynamically
  • You need to repeatedly choose the best valid option
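
The “largest value ≤ X” query looks like this in practice. Python's standard library has no balanced-tree multiset, so this sketch mimics one with `bisect` on a sorted list (insertion and removal are O(n) here, unlike the O(log n) of a C++ `std::multiset` or a sorted-list library):

```python
# "Largest value <= X" queries over a dynamic sorted collection,
# mimicking a multiset with a sorted list. Note: insort/remove are O(n);
# a balanced-tree multiset performs these in O(log n).
from bisect import bisect_right, insort

def largest_leq(ms, x):
    """Largest element of sorted list ms that is <= x, or None."""
    i = bisect_right(ms, x)
    return ms[i - 1] if i > 0 else None

ms = []
for v in [5, 1, 5, 3]:
    insort(ms, v)           # keeps ms sorted; duplicates allowed

print(largest_leq(ms, 4))   # 3
ms.remove(3)                # removes a single occurrence
print(largest_leq(ms, 4))   # 1
```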

5. Greedy Algorithms and Sorting

5.1 Greedy Strategy

Greedy algorithms make a locally optimal decision at each step.

They are commonly effective when:

  • Locally optimal choices do not harm future options (often after sorting)
  • The problem involves pairing, scheduling, or assigning resources

Sorting is often the first step in greedy solutions.

5.2 Example Patterns

  • Pairing lightest with heaviest → two pointers
  • Assigning largest possible valid item → multiset or priority queue

Understanding why greedy works is more important than memorizing implementations.
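
The first pattern can be sketched on a classic problem: pair people into boats of capacity `cap`, at most two per boat, minimizing the number of boats. Sort the weights, then pair the lightest remaining person with the heaviest whenever they fit (problem statement and names are illustrative, not from the source):

```python
# Greedy pairing after sorting: minimum number of boats of capacity `cap`,
# each holding at most two people. Two pointers over the sorted weights.
def min_boats(weights, cap):
    weights = sorted(weights)
    lo, hi = 0, len(weights) - 1
    boats = 0
    while lo <= hi:
        # The heaviest person always needs a boat;
        # add the lightest person too if the pair fits.
        if lo < hi and weights[lo] + weights[hi] <= cap:
            lo += 1
        hi -= 1
        boats += 1
    return boats

print(min_boats([3, 2, 2, 1], 3))  # 3  (boats: {3}, {1, 2}, {2})
```

The greedy choice is safe because if the heaviest person can share a boat with anyone, they can share it with the lightest person.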

6. Binary Search Beyond Arrays

Binary search is a general technique for solving problems with a monotonic structure.

Common usage:

  • Searching for the minimum or maximum value that satisfies a condition
  • Turning optimization problems into yes/no feasibility checks

Key idea: If a condition holds for some value X, it often holds for all larger (or smaller) values.

Binary search on the answer is a fundamental competitive programming pattern.
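
A minimal sketch of binary search on the answer, assuming a monotone predicate `feasible` (True for all values up to some threshold, False after). The rope-cutting check used below is an illustrative example, not from the source:

```python
# Binary search on the answer: largest integer x in [lo, hi] with
# feasible(x) True, assuming feasible is monotone (True ... True False ...).
def max_feasible(lo, hi, feasible):
    while lo < hi:
        mid = (lo + hi + 1) // 2   # bias upward when searching for a maximum
        if feasible(mid):
            lo = mid               # x = mid works; try larger
        else:
            hi = mid - 1           # x = mid fails; go smaller
    return lo

# Illustration: the longest piece length x such that ropes of lengths
# [5, 7, 9] can be cut into at least 4 pieces of length x.
ropes = [5, 7, 9]
print(max_feasible(1, max(ropes),
                   lambda x: sum(r // x for r in ropes) >= 4))  # 4
```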

7. Brute Force

Brute force is the simplest approach: try all possibilities.

When to consider brute force:

  • The search space is small enough (check constraints carefully)
  • No obvious greedy or mathematical structure
  • Useful for small subtasks or preprocessing

Key insight: Even when brute force is too slow for the full problem, it can be:

  • A starting point for understanding the problem structure
  • A solution for small test cases
  • A component of a more sophisticated approach

Always verify that the search space fits within time constraints before implementing.
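
A common brute-force shape is enumerating all 2^n subsets with bitmasks, which is feasible for n up to roughly 20. A sketch on an illustrative counting task:

```python
# Brute force over all 2^n subsets via bitmasks (feasible for n up to ~20).
def count_subsets_with_sum(nums, target):
    n = len(nums)
    count = 0
    for mask in range(1 << n):
        # Sum the elements whose bit is set in this mask.
        s = sum(nums[i] for i in range(n) if mask >> i & 1)
        if s == target:
            count += 1
    return count

print(count_subsets_with_sum([1, 2, 3], 3))  # 2  ({3} and {1, 2})
```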

8. Advanced Greedy Techniques

Greedy algorithms extend beyond simple sorting and pairing.

8.1 Greedy with Mathematical Structure

Some greedy solutions require deeper observations about the problem structure. The key is identifying what property makes a greedy choice optimal.

Common patterns:

  • Making choices that preserve future flexibility
  • Using prefix/suffix information to make informed decisions
  • Combining multiple greedy criteria
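
A classic instance of “preserving future flexibility” is interval scheduling: among compatible meetings, always taking the one that ends earliest leaves the most room for the rest. A sketch, assuming intervals given as (start, end) pairs where touching endpoints do not overlap:

```python
# Interval scheduling: maximum number of non-overlapping intervals.
# Greedy choice: take the interval that ends earliest -- it preserves
# the most flexibility for the remaining intervals.
def max_non_overlapping(intervals):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # sort by end time
    taken = 0
    last_end = float("-inf")
    for start, end in intervals:
        if start >= last_end:       # compatible with everything taken so far
            taken += 1
            last_end = end
    return taken

print(max_non_overlapping([(1, 2), (2, 3), (3, 4)]))  # 3
```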

9. Two Pointers Technique

The two pointers technique uses two indices that traverse an array or sequence in a coordinated way.

When to use:

  • The problem involves a contiguous subarray or subsequence
  • There's a monotonic property (e.g., if a condition holds for a range, it holds for subranges)
  • You need to find pairs or ranges that satisfy a condition

Key idea: Instead of checking all pairs or ranges explicitly, maintain two pointers that move based on the current state. This reduces complexity from O(n2)O(n^2) to O(n)O(n) in many cases.

Common patterns:

  • Sliding window: maintain a valid window while expanding/contracting
  • Meeting in the middle: start from both ends and move toward the center
  • Fast and slow pointers: different speeds for different purposes
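
The sliding-window pattern can be sketched on an illustrative task: the longest contiguous subarray with sum at most a limit. The monotonicity requirement here is that elements are non-negative, so growing the window never decreases its sum:

```python
# Sliding window: length of the longest contiguous subarray with
# sum <= limit. Assumes non-negative elements, so the window sum is
# monotone in the window size.
def longest_subarray_with_sum_at_most(a, limit):
    best = 0
    window_sum = 0
    left = 0
    for right in range(len(a)):
        window_sum += a[right]          # expand the window to the right
        while window_sum > limit:       # contract until the window is valid
            window_sum -= a[left]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_subarray_with_sum_at_most([3, 1, 2, 1, 4], 4))  # 3
```

Each index enters and leaves the window at most once, so the total work is O(n) rather than the O(n^2) of checking every range.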

10. Prefix Sums and Difference Arrays

10.1 Prefix Sums

Prefix sums precompute cumulative values to answer range queries quickly.

Canonical example: Given an array a and queries of the form (l, r), output a[l] + a[l+1] + ... + a[r] for each query.

Solution:

  • Precompute prefix array p[i] = a[1] + a[2] + ... + a[i] (with p[0] = 0)
  • Answer query (l, r) as p[r] - p[l-1]
  • Time: O(n) preprocessing, O(1) per query
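
The steps above, sketched with 1-based prefixes over a 0-based input array:

```python
# Prefix sums: O(n) preprocessing, O(1) per range-sum query.
def build_prefix(a):
    p = [0] * (len(a) + 1)
    for i, v in enumerate(a, start=1):
        p[i] = p[i - 1] + v             # p[i] = a[0] + ... + a[i-1]
    return p

def range_sum(p, l, r):
    """Sum of the l-th through r-th elements (1-based, inclusive)."""
    return p[r] - p[l - 1]

a = [2, 4, 1, 3]
p = build_prefix(a)
print(range_sum(p, 2, 4))  # 4 + 1 + 3 = 8
```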

10.2 Difference Arrays

Difference arrays efficiently apply range updates.

Canonical example: Given an array of zeros and updates of the form (l, r, x) meaning "add x to a[l], a[l+1], ..., a[r]", output the array after all updates.

Solution:

  • Maintain a difference array d where d[i] = a[i] - a[i-1]
  • For update (l, r, x): add x to d[l], subtract x from d[r+1]
  • Reconstruct the final array: a[i] = d[1] + d[2] + ... + d[i]
  • Time: O(1) per update, O(n) reconstruction

Key insight: Range updates become point updates in the difference array.
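
The same steps as code, using 1-based indices with one spare slot so d[r+1] is always in bounds:

```python
# Difference array: O(1) per range update, O(n) final reconstruction.
def apply_updates(n, updates):
    """updates: list of (l, r, x), 1-based inclusive: add x to a[l..r]."""
    d = [0] * (n + 2)                 # spare slot so d[r + 1] always exists
    for l, r, x in updates:
        d[l] += x                     # the range update becomes
        d[r + 1] -= x                 # two point updates
    a = [0] * (n + 1)                 # a[1..n]; a[0] is a sentinel
    for i in range(1, n + 1):
        a[i] = a[i - 1] + d[i]        # prefix-summing d recovers a
    return a[1:]

print(apply_updates(5, [(1, 3, 2), (2, 5, 1)]))  # [2, 3, 3, 1, 1]
```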

10.3 General Pattern

Both techniques rely on the same principle:

  • Transform the problem to work with differences or cumulative sums
  • Apply updates/queries in the transformed space
  • Convert back to the original representation when needed

11. Topological Sort

Topological sort orders the vertices of a directed acyclic graph (DAG) such that for every edge (u → v), u appears before v in the ordering.

When to use:

  • Problems involving dependencies or prerequisites
  • Scheduling tasks with ordering constraints
  • Any problem that can be modeled as a DAG

Key properties:

  • Only possible for DAGs (no cycles)
  • Multiple valid orderings may exist
  • Can be computed using DFS or Kahn's algorithm (BFS-based)

Common applications:

  • Course prerequisites
  • Build dependencies
  • Task scheduling with constraints
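
A sketch of Kahn's algorithm, the BFS-based variant mentioned above: repeatedly remove a vertex with in-degree zero. If the process stalls before all vertices are output, the graph contains a cycle:

```python
# Kahn's algorithm: topological order of a DAG via repeated removal of
# in-degree-0 vertices (BFS). Returns None if the graph contains a cycle.
from collections import deque

def topo_sort(n, edges):
    """Vertices are 0..n-1; edges is a list of (u, v) meaning u -> v."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1             # "remove" the edge u -> v
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None   # None: a cycle remains

print(topo_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]
print(topo_sort(2, [(0, 1), (1, 0)]))                  # None (cycle)
```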
