
Time Complexity

Authors: Darren Yao, Benjamin Qi

Evaluating a program's time complexity, or how fast your program runs.

Resources
  • IUSACO: module is based off this
  • CPH: Intro and examples
  • PAPS: More in-depth. In particular, 5.2 gives a formal definition of Big O.
  • YouTube: If you prefer watching a video instead

In programming contests, your program needs to finish running within a certain timeframe in order to receive credit. For USACO, this limit is $2$ seconds for C++ submissions, and $4$ seconds for Java/Python submissions. A conservative estimate for the number of operations the grading server can handle per second is $10^8$, though it can be several times higher given good constant factors.
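As a rough illustration, suppose an input has $n \le 10^5$. An algorithm performing about $n^2$ operations needs on the order of $10^{10}$ steps, far more than the grader can handle in a few seconds, while an algorithm performing about $n \log_2 n \approx 1.7 \cdot 10^6$ operations finishes comfortably within the limit.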

Complexity Calculations

We want a method to calculate how many operations it takes to run each algorithm, in terms of the input size $n$. Fortunately, this can be done relatively easily using Big O notation, which expresses worst-case time complexity as a function of $n$ as $n$ gets arbitrarily large. Complexity is an upper bound for the number of steps an algorithm requires as a function of the input size. In Big O notation, we denote the complexity of a function as $\mathcal{O}(f(n))$, where constant factors and lower-order terms are generally omitted from $f(n)$. We'll see some examples of how this works, as follows.
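For example, if an algorithm performs exactly $3n^2 + 5n + 7$ operations, we drop the constant factor $3$ and the lower-order terms $5n$ and $7$, and say that it runs in $\mathcal{O}(n^2)$.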

The following code is $\mathcal{O}(1)$, because it executes a constant number of operations.

C++

int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;

Java

int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;

Python

a = 5
b = 7
c = 4
d = a + b + c + 153

Input and output operations are also assumed to be $\mathcal{O}(1)$. In the following examples, we assume that the code inside the loops is $\mathcal{O}(1)$.

The time complexity of a loop is the number of iterations that the loop runs. For example, the following code examples are both $\mathcal{O}(n)$.

C++

for (int i = 1; i <= n; i++) {
	// constant time code here
}

int i = 0;
while (i < n) {
	// constant time code here
	i++;
}

Java

for (int i = 1; i <= n; i++) {
	// constant time code here
}

int i = 0;
while (i < n) {
	// constant time code here
	i++;
}

Python

for i in range(1, n + 1):
	# constant time code here

i = 0
while i < n:
	# constant time code here
	i += 1

Because we ignore constant factors and lower-order terms, the following examples are also $\mathcal{O}(n)$:

C++

for (int i = 1; i <= 5*n + 17; i++) {
	// constant time code here
}

for (int i = 1; i <= n + 457737; i++) {
	// constant time code here
}

Java

for (int i = 1; i <= 5*n + 17; i++) {
	// constant time code here
}

for (int i = 1; i <= n + 457737; i++) {
	// constant time code here
}

Python

for i in range(5*n + 17):
	# constant time code here

for i in range(n + 457737):
	# constant time code here

We can find the time complexity of multiple loops by multiplying together the time complexities of each loop. This example is $\mathcal{O}(nm)$, because the outer loop runs $\mathcal{O}(n)$ iterations and the inner loop $\mathcal{O}(m)$.

C++

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= m; j++) {
		// constant time code here
	}
}

Java

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= m; j++) {
		// constant time code here
	}
}

Python

for i in range(n):
	for j in range(m):
		# constant time code here

In this example, the outer loop runs $n$ iterations, and the inner loop runs anywhere between $1$ and $n$ iterations (which is a maximum of $n$). Since Big O notation calculates worst-case time complexity, we treat the inner loop as a factor of $n$. Thus, this code is $\mathcal{O}(n^2)$.

C++

for (int i = 1; i <= n; i++) {
	for (int j = i; j <= n; j++) {
		// constant time code here
	}
}

Java

for (int i = 1; i <= n; i++) {
	for (int j = i; j <= n; j++) {
		// constant time code here
	}
}

Python

for i in range(n):
	for j in range(i, n):
		# constant time code here
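To make the bound concrete, here is a worked count: the inner loop runs $n$, $n-1$, $\dots$, $1$ times as the outer loop progresses, so the total number of iterations is $n + (n-1) + \dots + 1 = \frac{n(n+1)}{2}$, which is still $\mathcal{O}(n^2)$ once the constant factor and lower-order term are dropped.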

If an algorithm contains multiple blocks, then its time complexity is the worst time complexity out of any block. For example, the following code is $\mathcal{O}(n^2)$.

C++

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}

for (int i = 1; i <= n + 58834; i++) {
	// more constant time code here
}

Java

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}

for (int i = 1; i <= n + 58834; i++) {
	// more constant time code here
}

Python

for i in range(n):
	for j in range(n):
		# constant time code here

for i in range(n + 58834):
	# more constant time code here

The following code is $\mathcal{O}(n^2 + m)$, because it consists of two blocks of complexity $\mathcal{O}(n^2)$ and $\mathcal{O}(m)$, and neither of them is a lower-order function with respect to the other.

C++

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}

for (int i = 1; i <= m; i++) {
	// more constant time code here
}

Java

for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}

for (int i = 1; i <= m; i++) {
	// more constant time code here
}

Python

for i in range(n):
	for j in range(n):
		# constant time code here

for i in range(m):
	# more constant time code here

Common Complexities and Constraints

Complexity factors that come from some common algorithms and data structures are as follows:

Warning!

Don't worry if you don't recognize most of these! They will all be introduced later.

  • Mathematical formulas that just calculate an answer: $\mathcal{O}(1)$
  • Binary search: $\mathcal{O}(\log n)$
  • Ordered set/map or priority queue: $\mathcal{O}(\log n)$ per operation
  • Prime factorization of an integer, or checking primality or compositeness of an integer naively: $\mathcal{O}(\sqrt{n})$
  • Reading in $n$ items of input: $\mathcal{O}(n)$
  • Iterating through an array or a list of $n$ elements: $\mathcal{O}(n)$
  • Sorting: usually $\mathcal{O}(n \log n)$ for default sorting algorithms (mergesort, Collections.sort, Arrays.sort)
  • Java Quicksort Arrays.sort function on primitives: $\mathcal{O}(n^2)$ in the worst case
  • Iterating through all subsets of size $k$ of the input elements: $\mathcal{O}(n^k)$. For example, iterating through all triplets is $\mathcal{O}(n^3)$.
  • Iterating through all subsets: $\mathcal{O}(2^n)$ (see the sketch after this list)
  • Iterating through all permutations: $\mathcal{O}(n!)$
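As one concrete illustration of the "all subsets" bound, here is a minimal sketch (not part of the original list; the technique itself is introduced later). Each integer mask from $0$ to $2^n - 1$ encodes one subset, so the outer loop runs $2^n$ times.

C++

#include <bits/stdc++.h>
using namespace std;

int main() {
	int n = 4;  // small example size
	// Bit i of mask is 1 exactly when element i is included in the subset.
	for (int mask = 0; mask < (1 << n); mask++) {
		vector<int> subset;
		for (int i = 0; i < n; i++) {
			if (mask & (1 << i)) subset.push_back(i);
		}
		// process the subset here -- 2^n subsets in total
	}
}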

Here are conservative upper bounds on the value of $n$ for each time complexity. You might get away with more than this, but this should allow you to quickly check whether an algorithm is viable.

$n$: Possible complexities

  • $n \le 10$: $\mathcal{O}(n!)$, $\mathcal{O}(n^7)$, $\mathcal{O}(n^6)$
  • $n \le 20$: $\mathcal{O}(2^n \cdot n)$, $\mathcal{O}(n^5)$
  • $n \le 80$: $\mathcal{O}(n^4)$
  • $n \le 400$: $\mathcal{O}(n^3)$
  • $n \le 7500$: $\mathcal{O}(n^2)$
  • $n \le 7 \cdot 10^4$: $\mathcal{O}(n \sqrt{n})$
  • $n \le 5 \cdot 10^5$: $\mathcal{O}(n \log n)$
  • $n \le 5 \cdot 10^6$: $\mathcal{O}(n)$
  • $n \le 10^{18}$: $\mathcal{O}(\log^2 n)$, $\mathcal{O}(\log n)$, $\mathcal{O}(1)$

Warning!

A significant portion of Bronze problems will have $N \le 100$. This doesn't give much of a hint regarding the intended time complexity. The intended solution could still be $\mathcal{O}(N)$!

Constant Factor

Constant factor refers to the idea that different operations with the same complexity take slightly different amounts of time to run. For example, three addition operations take a bit longer than a single addition operation. Another example is that although binary search and set insertion are both $\mathcal{O}(\log n)$ per operation, binary searching is noticeably faster.
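As a rough illustration of this (an assumed benchmark sketch, not from the original module), the code below times $N$ binary searches on a sorted vector against $N$ lookups in an ordered set. Both are $\mathcal{O}(\log n)$ per operation, but the measured times, which depend on your compiler and machine, typically favor the array-based binary search.

C++

#include <bits/stdc++.h>
using namespace std;

int main() {
	const int N = 1'000'000;
	vector<int> sorted_vals(N);
	for (int i = 0; i < N; i++) sorted_vals[i] = 2 * i;  // already sorted
	set<int> s(sorted_vals.begin(), sorted_vals.end());

	// Time N binary searches on the sorted vector.
	auto start = chrono::steady_clock::now();
	long long found1 = 0;
	for (int i = 0; i < N; i++)
		found1 += binary_search(sorted_vals.begin(), sorted_vals.end(), i);
	auto mid = chrono::steady_clock::now();

	// Time N lookups in the ordered set -- same O(log n) complexity.
	long long found2 = 0;
	for (int i = 0; i < N; i++) found2 += s.count(i);
	auto end = chrono::steady_clock::now();

	auto ms = [](auto a, auto b) {
		return chrono::duration_cast<chrono::milliseconds>(b - a).count();
	};
	cout << "binary_search: " << ms(start, mid) << " ms, found " << found1 << "\n";
	cout << "set lookups:   " << ms(mid, end) << " ms, found " << found2 << "\n";
}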

Constant factor is entirely ignored in Big O notation. This is fine most of the time, but if the time limit is particularly tight, you may receive a time limit exceeded (TLE) verdict even with the intended complexity. When this happens, it is important to keep the constant factor in mind. For example, a piece of code that iterates through all ordered triplets runs in $\mathcal{O}(n^3)$ time, but can be sped up by a factor of $6$ by iterating through unordered triplets only, as sketched below.
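Here is an illustrative sketch of the two approaches (the variable names are made up): both loops are $\mathcal{O}(n^3)$, but the second visits each unordered triplet $i < j < k$ only once, for roughly a $6\times$ savings.

C++

#include <bits/stdc++.h>
using namespace std;

int main() {
	int n = 100;  // example size

	// All ordered triplets (i, j, k): n^3 iterations.
	long long ordered = 0;
	for (int i = 0; i < n; i++)
		for (int j = 0; j < n; j++)
			for (int k = 0; k < n; k++) ordered++;

	// Each unordered triplet i < j < k once: about n^3 / 6 iterations.
	long long unordered = 0;
	for (int i = 0; i < n; i++)
		for (int j = i + 1; j < n; j++)
			for (int k = j + 1; k < n; k++) unordered++;

	cout << ordered << " vs " << unordered << "\n";  // 1000000 vs 161700
}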

For now, don't worry about optimizing constant factors -- just be aware of them.
