This is the third post regarding the notion of *asymptotic comparison*. The first one dealt with *equivalents*, the rigorous way of saying “these two things look alike”, while the second post was about *little o*, meaning “this thing is much smaller than this other one”. Naturally, where there is a little o, there should be a **big O**, and this is what we shall discuss. The informal definition of big O is “this thing is at most as large as this other thing”.

I would very much recommend reading the first two posts in the series. Big O works in the same way as little o computationally, so this post will be shorter. As before, we will focus on sequences, and introduce the analogous notions for functions in a later post.

## Formal definition

Unlike for equivalents and little o, there is no division involved, so we do not need to assume that the sequences never take the value zero.

Consider two sequences $(a_n)$ and $(b_n)$. We say that $(a_n)$ is a **big O** of $(b_n)$ if, for some constant $C > 0$,

$$|a_n| \leq C \, |b_n|$$

for all $n$ (sufficiently large). We then write

$$a_n = O(b_n).$$

If $(b_n)$ never takes the value $0$, this means that the sequence $(a_n / b_n)$ is bounded.
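To see the definition in action, here is a quick numerical sanity check (an illustration, not a proof), using the hypothetical sequences $a_n = 2n + 3\sin(n)$ and $b_n = n$, for which the constant $C = 5$ works:

```python
import math

# Sanity check of |a_n| <= C |b_n| with a_n = 2n + 3 sin(n), b_n = n.
# Here |a_n| <= 2n + 3 <= 5n for every n >= 1, so C = 5 works.
C = 5
print(all(abs(2*n + 3*math.sin(n)) <= C * n for n in range(1, 10_000)))  # True
```

Of course, checking finitely many $n$ proves nothing; the inequality $2n + 3 \leq 5n$ for $n \geq 1$ is what actually does the job.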

This can also be read “$a_n$ equals big O of $b_n$”. As for little o, this is not an equality in the usual sense of the term: things go from left to right, and we should not write $O(b_n) = a_n$ 😦 It is usually merely written with a plain capital letter O, as in $O(b_n)$, but you can also see other options here.

An important difference with the notions of little o and equivalents is the fact that there is no limit here. It is quite possible that the limit of $a_n / b_n$ does not exist. For instance, we have

$$\sin(n) = O(1),$$

which is just a short way to state that the sequence $(\sin(n))$ is bounded. However, this sequence has no limit.
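The boundedness and the lack of convergence can both be observed numerically; this short sketch (an illustration only) looks at the first couple hundred values:

```python
import math

# sin(n) = O(1): bounded by 1, yet it keeps visiting neighborhoods
# of both -1 and 1, consistent with the absence of a limit.
values = [math.sin(n) for n in range(1, 200)]
print(max(abs(v) for v in values) <= 1.0)      # True: bounded
print(min(values) < -0.9 < 0.9 < max(values))  # True: keeps oscillating
```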

You probably know that the sequence $(\sin(n))$ has no limit, but do you have a simple argument to show this? No need to use equidistribution or rational approximations of $\pi$: a trig formula or two might do the trick.

## Big O and little o

It is always good to remember that a big O is *less restrictive* than a little o, in the following sense.

**Proposition II.1.** Take two sequences $(a_n)$ and $(b_n)$. Then

$$a_n = o(b_n) \implies a_n = O(b_n).$$

Naturally, **the converse is not true**. For instance, if $a_n = b_n = n$, then $a_n = O(b_n)$, but $(a_n)$ is not a little o of $(b_n)$. There is some sort of converse though.

**Proposition II.2.** Take three sequences $(a_n), (b_n), (c_n)$. If $a_n = O(b_n)$ and $b_n = o(c_n)$, then $a_n = o(c_n)$.

For instance, if we know that $a_n = O(n)$, then, since $n = o(n^2)$, we get $a_n = o(n^2)$, and in particular $a_n / n^2 \to 0$.
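As a numerical illustration of this example (with the hypothetical choice $a_n = 3n + 1$, which is indeed $O(n)$), the ratio $a_n / n^2$ visibly goes to $0$:

```python
# a_n = 3n + 1 is O(n), and n = o(n^2), so a_n / n^2 should tend to 0.
ratios = [(3*n + 1) / n**2 for n in (10, 100, 1000, 10_000)]
print(all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:])))  # True: decreasing
print(ratios[-1] < 1e-3)                                   # True: already small
```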

Prove Propositions II.1 and II.2.

## Properties

As expected, big O has properties similar to those of little o, as we summarize below.

### Basic properties

- If $a_n = O(b_n)$ and $b_n = O(c_n)$, then $a_n = O(c_n)$.
- If $a_n = O(b_n)$ and $b_n \sim c_n$, then $a_n = O(c_n)$.

Naturally, the other comparative growth rates mentioned in the little o post are still valid after replacing little o by big O.

Consider sequences $(a_n), (b_n), (c_n), (d_n)$.

- If $a_n = O(c_n)$ and $b_n = O(c_n)$, then $a_n + b_n = O(c_n)$.
- If $a_n = O(b_n)$ and $c_n = O(d_n)$, then $a_n c_n = O(b_n d_n)$.
- If $a_n = O(b_n)$, then for any $\lambda \in \mathbb{R}$, we have $\lambda \, a_n = O(b_n)$.
- If $a_n = O(b_n)$ and $(c_n)$ is any sequence, then $a_n c_n = O(b_n c_n)$.
- If $a_n = O(b_n)$, and $c_n$ is never $0$, then $a_n / c_n = O(b_n / c_n)$.
- If $a_n = O(b_n)$, then for any $k > 0$, $|a_n|^k = O(|b_n|^k)$.
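As a spot-check of the product rule (hypothetical sequences, illustration only): with $a_n = n + \sin(n) = O(n)$ and $c_n = 2/n = O(1/n)$, the product should be $O(n \cdot 1/n) = O(1)$:

```python
import math

# Product rule check: (n + sin(n)) * (2/n) = 2 + 2 sin(n)/n, which is
# bounded by 4 for n >= 1, hence O(1) as predicted.
print(all(abs((n + math.sin(n)) * (2 / n)) <= 4 for n in range(1, 10_000)))  # True
```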

It is worth noticing that most of these behave like inequalities, and this is why it may be convenient, though slightly informal, to write $a_n \ll b_n$ rather than $a_n = O(b_n)$. However, the big O notation has the advantage that it essentially behaves like an equality and allows us to simplify algebraic computations, as we discuss below.

Another useful property is that $n$ can essentially be replaced by any sequence of integers tending to $+\infty$.

Take sequences $(a_n), (b_n)$, and a sequence of integers $(\varphi_n)$ that tends to $+\infty$. If $a_n = O(b_n)$, then $a_{\varphi_n} = O(b_{\varphi_n})$.
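A quick illustration of this substitution property (with hypothetical sequences): starting from $a_n = n + \sin(n) = O(n)$ and plugging in $\varphi_n = 2^n$, the comparison survives along the subsequence:

```python
import math

# a_n = n + sin(n) = O(n); plugging in the subsequence phi_n = 2^n,
# we still have |a_{phi_n}| <= 2 * phi_n, i.e. a_{phi_n} = O(2^n).
print(all(abs(2**k + math.sin(2**k)) <= 2 * 2**k for k in range(1, 30)))  # True
```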

### Big O and limits

Limits are often computed using little o or big O. Recall that $a_n \to \ell$ if and only if $a_n = \ell + o(1)$. With big O, we often use the following.

If $a_n = \ell + O(b_n)$, where $(b_n)$ is a sequence tending to $0$, then $a_n \to \ell$.
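For instance (a hypothetical sequence, for illustration), $a_n = 2 + \cos(n)/n$ satisfies $a_n = 2 + O(1/n)$, so it must converge to $2$:

```python
import math

# a_n = 2 + cos(n)/n = 2 + O(1/n): the error term is squeezed to 0,
# so a_n -> 2.
a = [2 + math.cos(n) / n for n in range(1, 10_001)]
print(abs(a[-1] - 2) < 1e-3)  # True: close to the limit 2
```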

### Words of caution

The same warnings as for little o apply to big O.

- keep things as simple as possible in the big O;
- do not cancel big O;
- do not take functions of big O.

## Asymptotic expansion

### Definition

We can write **asymptotic expansions** using little o or big O. For little o, we want to say: this sequence looks like this much simpler sequence, *plus* something which is smaller. For big O, we want to say: this sequence looks like this much simpler sequence, *plus* something which is at most as large as this quantity, where the quantity in question is much smaller than the rest. Therefore, we could write

$$a_n = n + O(\sqrt{n}),$$

but **not**

$$a_n = n + O(n)$$

😦

which is no better than saying

$$a_n = O(n).$$

We saw that derivatives allow us to obtain asymptotic expansions with a little o, such as

$$e^{1/n} = 1 + \frac{1}{n} + o\!\left(\frac{1}{n}\right).$$

Replacing this little o with a big O would be possible, but it would only say $e^{1/n} = 1 + O(1/n)$, which is less precise. We can actually go further and write

$$e^{1/n} = 1 + \frac{1}{n} + O\!\left(\frac{1}{n^2}\right).$$

Note that this is *better* than the previous expansion. For instance, we can obtain

$$n \left( e^{1/n} - 1 \right) = 1 + O\!\left(\frac{1}{n}\right),$$

which says that $n(e^{1/n} - 1) - 1 = O(1/n)$. The expansion with the little o would only tell us that $n(e^{1/n} - 1) - 1 = o(1)$, which is not as good.
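This gain in precision can be checked numerically. The sketch below (an illustration only) looks at the remainder $n^2 \left( e^{1/n} - 1 - \frac{1}{n} \right)$, which should stay bounded if indeed $e^{1/n} = 1 + \frac{1}{n} + O(1/n^2)$ (it in fact tends to $1/2$, by Taylor's formula):

```python
import math

# If e^(1/n) = 1 + 1/n + O(1/n^2), then n^2 * (e^(1/n) - 1 - 1/n)
# stays bounded; numerically it hovers around 1/2.
remainders = [n**2 * (math.exp(1/n) - 1 - 1/n) for n in (10, 100, 1000)]
print(all(0 < r < 1 for r in remainders))  # True: bounded remainder
```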

But where is this expansion coming from? For this, we need to know more, namely Taylor expansions, which we will cover in another post.

### Algebra of big O

Naturally, the algebra of big O works just like for little o. For instance, if we have $a_n = 2 + \frac{3}{n} + O\!\left(\frac{1}{n^2}\right)$ and $b_n = 1 - \frac{1}{n} + \frac{5}{n^2} + O\!\left(\frac{1}{n^3}\right)$, then

$$a_n + b_n = 3 + \frac{2}{n} + O\!\left(\frac{1}{n^2}\right).$$

Note how the $\frac{5}{n^2}$ is absorbed into the $O(1/n^2)$, while it would **not** be absorbed into a $o(1/n^2)$. Similarly, we could get an asymptotic expansion of the product as follows:

$$a_n b_n = 2 + \frac{1}{n} + O\!\left(\frac{1}{n^2}\right).$$

Note that the worst term we get is of order $1/n^2$, coming from either the $O(1/n^2)$ in the expansion of $a_n$ or cross products such as $\frac{3}{n} \cdot \left(-\frac{1}{n}\right)$, so all terms of order $1/n^2$ or lower are absorbed into it. Similarly to what happens for little o, we see (using e.g. Proposition II.2) that in such an asymptotic expansion, the sequence is **equivalent to the first term of the asymptotic expansion**. For instance, we get here that

$$a_n + b_n \sim 3$$

and that

$$a_n b_n \sim 2.$$
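We can test such a product expansion on concrete instances. The remainders below ($7/n^2$ and $-2/n^3$) are hypothetical choices compatible with expansions of the form $a_n = 2 + \frac{3}{n} + O(1/n^2)$ and $b_n = 1 - \frac{1}{n} + \frac{5}{n^2} + O(1/n^3)$; the product should then satisfy $a_n b_n = 2 + \frac{1}{n} + O(1/n^2)$:

```python
# Concrete instances (hypothetical remainders) of the two expansions:
def a(n): return 2 + 3/n + 7/n**2           # = 2 + 3/n + O(1/n^2)
def b(n): return 1 - 1/n + 5/n**2 - 2/n**3  # = 1 - 1/n + 5/n^2 + O(1/n^3)

# n^2 * |a_n b_n - (2 + 1/n)| should stay bounded if the product
# expansion a_n b_n = 2 + 1/n + O(1/n^2) is correct.
errors = [n**2 * abs(a(n) * b(n) - (2 + 1/n)) for n in (10, 100, 1000)]
print(all(e < 20 for e in errors))  # True: remainder is O(1/n^2)
```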

Finally, just like we do not want two little o or two big O in an equality, we do not want a little o and a big O together either. For instance, $o(1/n) + O(1/n^2)$ simplifies to $o(1/n)$, while $O(1/n) + o(1/n^2)$ simplifies to $O(1/n)$. But note that $O(1/n) + o(1/n)$ *does not simplify* to $o(1/n)$, only to $O(1/n)$! In real life, such an oddity should however never occur. One should aim to do computations with the least precision needed (yes, be as lazy as possible!). This may mean using big O or little o, and with a bit of experience, the right choice can usually be inferred from the get-go.

Assume that $a_n = 1 + \frac{2}{n} + O\!\left(\frac{1}{n^2}\right)$ and $b_n = -1 + \frac{3}{n} + O\!\left(\frac{1}{n^2}\right)$. Compute an asymptotic expansion of $a_n + b_n$ and of $a_n b_n$.