In the coming posts, I want to discuss the notions of asymptotic comparison, which is essentially the mathy way of saying “these two things look alike” or “this one is much larger than that other one”. The point, as with many new notions, is to simplify. Most of the expressions, formulas, and functions that we encounter in real life are extremely cumbersome and too complicated to deal with directly. This might not be obvious from a typical undergrad math course, because often, well, things just work: derivatives are easy to compute, everything factorizes nicely, integrals are explicit, Gaussian elimination involves no fractions, and eigenvalues are all integers. This is sadly a myth; more often than not, direct or explicit computations are not possible, and it is necessary to understand and simplify before getting into the thick of things. What does this mean? Well, it depends on what you are trying to achieve. Asymptotic comparison is about studying the behavior of a sequence when $n$ is “large”, or of a function close to some point.
In my long rant about L’Hospital’s rule, I said that one expression “looks like” another when the variable is small, and that one “wins over” another when the variable is large. What does this really mean? I will keep this for another time, and focus the first posts about asymptotic comparison on sequences since, as usual, most notions are easier to grasp with sequences. We will seek to compare two sequences when $n$ is large. For now, this will mostly help to compute limits, but I hope to make apparent in other posts how such arguments can drastically simplify a bunch of calculus concepts, while becoming truly necessary for more advanced topics.
This post is specifically about equivalents, a.k.a. the precise notion of “looking alike”. Let us look at three sequences. As is often the case for sequences, we only care about what happens when $n$ is large (whatever that means). If we take a first moderately large value of $n$, we get
while for $n = 50,000$, we obtain
Clearly, all these numbers are large, and we can make them as large as we want by taking $n$ large enough: this is just saying that all three sequences tend to $+\infty$.
However, two of these are clearly much larger than the third, while those two are quite similar to each other. Think of money: you would much rather have 2.5 billion bucks than 250 thousand, but if you have 2.5 billion, 150 thousand more in the bank will not make much of a difference to you.
The point is: everything is relative. A quantity is never small or large by itself, but it can be much larger or much smaller than another quantity (measured in the same unit). On the same note, two quantities are similar if they differ by something small… compared to one of them. This will justify the following definitions.
In all the following, to simplify, we assume that all sequences never take the value zero (at least for $n$ large enough). For all intents and purposes, this is not an issue.
Consider two sequences $(a_n)$ and $(b_n)$. We say that $(a_n)$ and $(b_n)$ are equivalent if
$$\lim_{n \to \infty} \frac{a_n}{b_n} = 1.$$
We then write $a_n \sim b_n$.
The symbol $\sim$ is one of what are collectively known as the Landau notations. I will cover the other ones in other posts. In LaTeX, this symbol is given by \sim.
This definition does not seem to be exactly what we hinted at before, but this is indeed the same, as can be easily checked.
Show that $(a_n)$ and $(b_n)$ are equivalent if and only if the difference between them is small compared to either of them. In symbols, check that $a_n \sim b_n$ if and only if
$$\lim_{n \to \infty} \frac{a_n - b_n}{a_n} = 0,$$
and $a_n \sim b_n$ if and only if
$$\lim_{n \to \infty} \frac{a_n - b_n}{b_n} = 0.$$
Remember that “if and only if” means that both implications are true: if $P$, then $Q$, and if $Q$, then $P$.
Both of these could be taken as the definition, and both are good to remember. In practice, however, it is usually easier and more useful to use the fact that $a_n / b_n \to 1$.
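To make the definition concrete, here is a quick numerical sanity check in Python. The pair $a_n = n^2 + 5n$ and $b_n = n^2$ is my own illustrative choice, not taken from the examples above:

```python
# Numerical sanity check of the definition: a_n ~ b_n  <=>  a_n / b_n -> 1.
# The pair below is an illustrative choice, not one of the post's sequences.

def a(n):
    return n**2 + 5 * n

def b(n):
    return n**2

for n in [10, 1_000, 100_000]:
    print(n, a(n) / b(n))  # the ratio drifts toward 1 as n grows
```

The ratio visibly approaches 1, which is exactly the definition at work.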
From the earlier discussion, we should have that , and this is easy to check, since
More generally, assume that $a_n$ is a sum of powers of $n$. Then $a_n$ is equivalent to its term of highest degree. In formulas, if
$$a_n = c_d n^d + c_{d-1} n^{d-1} + \cdots + c_1 n + c_0$$
with $d \geq 0$ and $c_d \neq 0$, then
$$a_n \sim c_d n^d.$$
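As a numerical illustration of this rule (the polynomial below is my own example), the ratio of a sum of powers to its leading term tends to 1:

```python
# A sum of powers of n is equivalent to its term of highest degree:
# here a_n = 3n^4 - 7n^2 + n, whose leading term is 3n^4.

def a(n):
    return 3 * n**4 - 7 * n**2 + n

def leading(n):
    return 3 * n**4

for n in [10, 100, 10_000]:
    print(n, a(n) / leading(n))  # the ratio approaches 1
```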
If we call two things “equivalent”, we often mean that they are “essentially the same”, so it would be a really poor choice of vocabulary if we did not have a few basic properties.
Take three sequences $(a_n)$, $(b_n)$, $(c_n)$. Then we have the following properties.
- (Reflexivity) We have $a_n \sim a_n$.
- (Symmetry) If $a_n \sim b_n$, then $b_n \sim a_n$.
- (Transitivity) If $a_n \sim b_n$ and $b_n \sim c_n$, then $a_n \sim c_n$.
This is merely a sanity check which is quite easy to verify.
Prove Proposition II.1.
However, one of the easy but very exciting properties of equivalents is that they can be multiplied and raised to any power, in the following sense.
Take sequences $(a_n)$, $(b_n)$, $(c_n)$, $(d_n)$. Then we have the following properties.
- If $a_n \sim b_n$ and $c_n \sim d_n$, then $a_n c_n \sim b_n d_n$.
- If $a_n \sim b_n$, then for any $s \in \mathbb{R}$, we have $a_n^s \sim b_n^s$ (provided these powers are well-defined).
- In particular, if $a_n \sim b_n$ and $c_n \sim d_n$, then $1 / c_n \sim 1 / d_n$ and $a_n / c_n \sim b_n / d_n$.
Let us check the first one. Assume that $a_n \sim b_n$ and $c_n \sim d_n$. Then we want to compute the limit of $\frac{a_n c_n}{b_n d_n}$. But we have
$$\frac{a_n c_n}{b_n d_n} = \frac{a_n}{b_n} \cdot \frac{c_n}{d_n} \to 1 \times 1 = 1,$$
since $a_n \sim b_n$ and $c_n \sim d_n$ mean exactly that $a_n / b_n \to 1$ and $c_n / d_n \to 1$.
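The multiplication rule is also easy to observe numerically. In the Python sketch below, the four sequences are my own illustrative choices:

```python
# If a_n ~ b_n and c_n ~ d_n, then (a_n c_n) / (b_n d_n)
# = (a_n / b_n) * (c_n / d_n) -> 1 * 1 = 1.
# Illustrative sequences (my own choice):
def a(n): return n + 1        # a_n ~ n
def b(n): return n
def c(n): return n**2 + n     # c_n ~ n^2
def d(n): return n**2

for n in [10, 1_000, 100_000]:
    print(n, (a(n) * c(n)) / (b(n) * d(n)))  # ratio of the products -> 1
```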
Check the other properties.
All this allows us to easily compute a bunch of equivalents. From Exercise I.2 and Proposition II.2, we can get
Bottom line: we transform a complicated expression into a very simple one.
Equivalents and limits
One important property of equivalents is that they preserve limits: two equivalent expressions have the same limit.
Let $(a_n)$ and $(b_n)$ be two sequences with $a_n \sim b_n$.
- If $a_n \to \ell$, where $\ell$ is finite or infinite, then $b_n \to \ell$ as well.
- If $a_n \to \ell$ and $\ell$ is a non-zero finite number, then $a_n \sim \ell$.
As before, this is proved by simple operations on limits. In the first case,
$$b_n = \frac{b_n}{a_n} \cdot a_n \to 1 \times \ell = \ell,$$
while in the second case,
$$\frac{a_n}{\ell} \to \frac{\ell}{\ell} = 1.$$
Note that we need $\ell \neq 0$ in order to divide by $\ell$.
Show that if $a_n \sim b_n$ and $(a_n)$ has no limit, then $(b_n)$ has no limit.
This gives simple ways to compute limits, without much effort.
Let us take a look at
We first have and by Exercise I.2. Then Proposition II.2 gives and , and then
which means, by Proposition II.3, that
With a bit of habit, this takes one line, and no effort at all!
One thing seems innocuous in Proposition II.3, but is very much not: this result essentially tells us that the equivalent is the limit. But it only works for limits which are neither zero nor $\pm\infty$. So having a non-zero finite limit $\ell$ is the same as being equivalent to $\ell$. On the other hand, there are different ways to tend to zero: two sequences may both tend to zero while one converges much faster than the other, in which case they are not equivalent. Informally, two sequences tending to zero are equivalent if they tend to zero at the same speed. Similarly, two sequences tending to $\pm\infty$ are equivalent if they tend to $\pm\infty$ at the same speed.

This is not true for finite non-zero limits: two sequences converging to the same non-zero finite number, say 1, are automatically equivalent, even if one converges much faster than the other. The speed of convergence should really be measured by how fast the differences with the limit go to $0$, and those differences may very well fail to be equivalent.
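The “same speed” intuition is easy to see numerically. In the Python sketch below, $u_n = 1/n$ and $v_n = 1/n^2$ are my own illustrative sequences; both tend to 0, but they are not equivalent:

```python
# Two sequences can share the limit 0 without being equivalent:
# the ratio of 1/n^2 to 1/n is 1/n -> 0, not 1.
def u(n): return 1 / n
def v(n): return 1 / n**2

for n in [10, 1_000, 100_000]:
    print(n, v(n) / u(n))  # -> 0, so u and v are NOT equivalent

# By contrast, 1/n and 1/n + 1/n^2 tend to 0 at the same speed:
for n in [10, 1_000, 100_000]:
    print(n, (u(n) + v(n)) / u(n))  # -> 1
```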
Sort the following sequences in different groups of equivalent sequences.
Exercise I.2 provides a neat way to find an equivalent of a sum of powers of $n$: we keep the term with the largest power. Similarly, it is easy to find an equivalent of a sum of geometric sequences: we keep the term with the largest common ratio. For instance, take and . Then
since and thus . This means .
More generally, assume that $a_n$ is a sum of geometric sequences. Then $a_n$ is equivalent to the term with the largest common ratio (in absolute value). In formulas, if
$$a_n = c_1 r_1^n + c_2 r_2^n + \cdots + c_k r_k^n$$
with $|r_1| > |r_2| > \cdots > |r_k|$ and $c_1 \neq 0$, then
$$a_n \sim c_1 r_1^n.$$
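Numerically, the dominance of the largest ratio kicks in very fast. The sum below is my own illustrative example:

```python
# A sum of geometric sequences is equivalent to the term with the
# largest common ratio (in absolute value). Illustrative example:
# a_n = 5 * 3^n - 2^n + 7, whose dominant term is 5 * 3^n.
def a(n):
    return 5 * 3**n - 2**n + 7

def dominant(n):
    return 5 * 3**n

for n in [5, 20, 60]:
    print(n, a(n) / dominant(n))  # the ratio approaches 1
```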
This allows us to do computations such as
and therefore this sequence tends to $+\infty$. Even more: it tends to $+\infty$ at the same speed as its dominant geometric term, a.k.a. super fast.
Functions of equivalents: the nice case
For now, we only know how to get equivalents for simple expressions involving powers or geometric sequences. What happens if we wish to get an equivalent of an expression involving a logarithm, an exponential, or others? In some cases, it is actually quite easy with a bit of calculus. Remember the meaning of the derivative: essentially, if we look close enough, a nice function looks like a straight line whose slope is given by the derivative. Informally, this reads, when $x$ is close to $a$,
$$f(x) \approx f(a) + f'(a)(x - a).$$
For instance, this would give, if $x$ is small,
$$\sin x \approx x, \qquad \ln(1 + x) \approx x, \qquad e^x \approx 1 + x.$$
Replacing $x$ by a sequence converging to $0$ (or, more generally, to a point $a$) exactly yields the following result.
Assume that $f$ is differentiable at $a$ with $f'(a) \neq 0$, and that $(u_n)$ is a sequence converging to $a$ (with $u_n \neq a$). Then we have the equivalent
$$f(u_n) - f(a) \sim f'(a)(u_n - a).$$
Note how the right-hand side of this equivalent is much simpler than the left-hand side. The proof of this is essentially the definition of the derivative.
Show Theorem II.4.
In most cases, we will have $a = 0$, and $f$ will be a usual function, from which we can deduce the following.
If $u_n \to 0$, then we have the following equivalents.
- $\sin u_n \sim u_n$, $\ln(1 + u_n) \sim u_n$, and $e^{u_n} - 1 \sim u_n$.
- For any $s \neq 0$, $(1 + u_n)^s - 1 \sim s\, u_n$.
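These classic first-order equivalents are easy to test numerically. The Python sketch below evaluates the ratios for $u_n = 1/n$ (an arbitrary sequence tending to 0, my own choice):

```python
import math

# Numerical check of standard equivalents when u_n -> 0,
# using u_n = 1/n as an illustrative sequence:
for n in [10, 1_000, 100_000]:
    u = 1 / n
    print(math.sin(u) / u)        # sin(u) ~ u      -> 1
    print(math.log(1 + u) / u)    # ln(1+u) ~ u     -> 1
    print((math.exp(u) - 1) / u)  # e^u - 1 ~ u     -> 1
```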
Let us try this on an all-time favorite. In my experience, the computation of this limit unfortunately often turns into a math freak show.
Let us compute, for a fixed $x$,
$$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n.$$
It is of the form $1^\infty$, so it is very much an indeterminate form. For these, we take the log and get $n \ln(1 + x/n)$. But $x/n \to 0$, so the previous result gives $\ln(1 + x/n) \sim x/n$, and thus
$$n \ln\left(1 + \frac{x}{n}\right) \sim n \cdot \frac{x}{n} = x, \qquad \text{so} \qquad n \ln\left(1 + \frac{x}{n}\right) \to x.$$
Let us not forget that we took the log, so we need to exponentiate to get the limit, and we finally obtain
$$\left(1 + \frac{x}{n}\right)^n \to e^x.$$
Again, the amount of computation and effort done is pretty much zero.
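This classic limit is also easy to confirm numerically. The Python sketch below takes $x = 2$ as an arbitrary choice and checks that $(1 + x/n)^n$ approaches $e^x$:

```python
import math

# Numerical check of the classic 1^infinity limit: (1 + x/n)^n -> e^x.
x = 2.0  # arbitrary fixed value
for n in [10, 1_000, 100_000]:
    print(n, (1 + x / n)**n)  # approaches e^x

print(math.exp(x))  # the target value e^2
```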
Equivalents and logarithms
In the previous result, we relied heavily on the fact that the sequence $(u_n)$ converges to a finite limit $a$, and thus that $f(u_n)$ is well-approximated using the derivative at $a$. If, for instance, $u_n \to +\infty$, then we do not have such tools at our disposal. There is a less important property of equivalents, which can however turn out to be quite useful and simplify things tremendously.
Assume that $(a_n)$ and $(b_n)$ are two strictly positive sequences with $a_n \sim b_n$ and $a_n \to +\infty$. Then
$$\ln a_n \sim \ln b_n.$$
In words, we can take the logarithm of equivalents, provided the sequences tend to $+\infty$.
To see this, use properties of the logarithm to write
$$\frac{\ln b_n}{\ln a_n} = \frac{\ln a_n + \ln(b_n / a_n)}{\ln a_n} = 1 + \frac{\ln(b_n / a_n)}{\ln a_n}.$$
But $b_n / a_n \to 1$ (since $a_n \sim b_n$), so $\ln(b_n / a_n) \to 0$. Moreover $\ln a_n \to +\infty$, so the last fraction is of the type $0 / \infty$, and therefore tends to $0$ (it would be very unwise to try to use L’Hospital’s rule!). We conclude that $\ln b_n / \ln a_n \to 1$, which is exactly what we wanted.
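A quick numerical check of this proposition, with an illustrative pair of equivalent sequences tending to $+\infty$ (my own choice):

```python
import math

# If a_n ~ b_n and both tend to +infinity, then ln(a_n) ~ ln(b_n).
# Illustrative pair: a_n = n^2 + n ~ n^2 = b_n.
def a(n): return n**2 + n
def b(n): return n**2

for n in [10, 1_000, 100_000]:
    print(n, math.log(a(n)) / math.log(b(n)))  # the ratio approaches 1
```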
By Exercise 7, we have . But , so Proposition 4 gives . Therefore
Of course, this is doable with L’Hospital’s rule, but what would be the success rate? What if the denominator were replaced by something even more complicated?
The assumption that the sequences tend to $+\infty$ is very important. Otherwise, the result probably fails, as the following example shows.
Consider and . Show that but and are not equivalent.
Some words of caution
Equivalents and sum
The main point of equivalents is to say that some complicated sequence looks like a much simpler one, typically a power sequence $n^s$, a geometric sequence $r^n$, maybe something with a logarithm $\ln n$, and, if you are feeling frisky, a product of all these. And there is not really a choice: a sequence cannot be equivalent to two different sequences of the above type!
What does this last statement really mean? Write it as a precise mathematical result, then prove it. You may need to use comparative growth: see the post on L’Hospital rule, and the next post on asymptotic comparison.
You may also have noticed that most of the previous results turn a big sum into just one term: in some sense, we only keep the one that wins over the others. This is for a very good reason: writing a sum on the right-hand side of an equivalent, which should be the “simple” expression, is useless.
Careful. Do not write sums in equivalents.
Think back to the first example given. Having $5,000 is pretty neat. Having $1,000,000 is better. But having $1,003,000 or $1,008,000 or $999,873 is, well, pretty much the same. In math, all such sequences are equivalent: the dominant term in all these formulas wins, and what is written afterwards is essentially irrelevant. Yet, all these “rests” are very much not the same. But in comparison to the dominant term, they are all ridiculously small. Therefore, for such a sequence, it is true that
but it is just as true that
but the only relevant thing to write is
Anything that we could write after the dominant term is totally insignificant. If you have more or less $1 million in the bank, and someone gives you (or steals from you) a couple thousand bucks, well, you still have more or less $1 million in the bank.
Equivalents and sum, again
The previous discussion should make clear the following point.
Careful. Equivalents cannot be summed.
In other words, if $a_n \sim b_n$ and $c_n \sim d_n$, then it is in general wrong that $a_n + c_n \sim b_n + d_n$.
For instance, resuming the previous discussion, let and . It is true that and , but is very much not equivalent to . In our definition, it does not even make any sense! To be more convincing, it is also true that yet is very much not equivalent to .
You may notice that this happens because the largest term, which is the only relevant one, gets cancelled out. In some sense, it is possible to sum equivalents if they do not cancel out, but it is a slippery slope that is better not trodden.
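Here is a numerical illustration of such a cancellation, with sequences of my own choosing:

```python
import math

# Equivalents cannot be summed: here a_n ~ b_n and c_n ~ d_n,
# but the dominant terms cancel in the sums, which then behave
# completely differently. (Illustrative sequences.)
def a(n): return n + math.sqrt(n)  # a_n ~ n
def b(n): return float(n)
def c(n): return -float(n)         # c_n ~ -n (trivially)
def d(n): return -float(n)

for n in [100, 10_000]:
    print(n, a(n) + c(n), b(n) + d(n))
    # a_n + c_n = sqrt(n) grows, while b_n + d_n = 0:
    # the sums are not even comparable.
```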
Functions of equivalents: the bad case
Consider the sequences and . As we saw . However, the two sequences and are not equivalent, since
and this does not converge to 1. Another example of this phenomenon is in Exercise 9. A particularly terrible version of this concerns limits of sequences of the type $a_n^{b_n}$, where $a_n \to 1$ and $b_n \to +\infty$. This justifies the following rule.
Careful. We cannot take functions of equivalents.
In other words, if $a_n \sim b_n$ and $f$ is some function, then it is in general wrong that $f(a_n) \sim f(b_n)$.
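The exponential is the archetypal offender. In the sketch below (an illustrative example of my own), $n + \ln n \sim n$, yet the exponentials are wildly non-equivalent:

```python
import math

# n + ln(n) ~ n, yet exp(n + ln n) / exp(n) = exp(ln n) = n -> infinity,
# so the exponentials are NOT equivalent. (Illustrative example.)
# We compute the ratio as exp(a_n - b_n) to avoid overflow.
def a(n): return n + math.log(n)
def b(n): return float(n)

for n in [10, 1_000, 100_000]:
    print(n, a(n) / b(n), math.exp(a(n) - b(n)))
    # first ratio -> 1, second ratio equals n -> infinity
```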
We saw two cases where we know what to do. One is that it is possible to take the log when the sequences tend to $+\infty$. Essentially, this is due to the fact that the log grows so slowly that differences are erased, whereas the exponential in the first example of this section exacerbates differences.
The other case is Theorem 4. As a reminder: if $f$ is differentiable at $a$ with $f'(a) \neq 0$, and $(u_n)$ is a sequence converging to $a$, then
$$f(u_n) - f(a) \sim f'(a)(u_n - a).$$
Note how there are quite a few assumptions here, the most important being:
- the derivative of $f$ at this point must not be $0$;
- the sequence $(u_n)$ needs to converge to a real number $a$.
The first point says, for instance, that we cannot so easily get a nice equivalent for a sequence where the first-order term vanishes. This is when the whole might of Taylor’s formulas is needed; it shall be covered in another post. The second point, on the other hand, has no truly universal solution. Derivatives at $\pm\infty$ are not really a thing (though one may try to use the mean value theorem), and it is often better handled on a case-by-case basis.
Note also that, if $u_n \to a$, then $f(u_n) \to f(a)$ by continuity. Moreover, if $a \neq 0$ and $f(a) \neq 0$, then $u_n \sim a$ and $f(u_n) \sim f(a)$, and thus, if $u_n \sim a$, then $f(u_n) \sim f(a)$. In this sense, we did indeed take a function of an equivalent. However, this is really an extraneous consideration.
Do we really need this whole machinery to obtain the limit (or an equivalent) here? What are we really saying? What minimal reasonable assumption do we need to make on $f$ to get this limit?
Theorem 4 is much more precise: it states not only that $f(u_n)$ tends to $f(a)$, but also the speed of convergence. As we discussed before, this means getting an equivalent of $f(u_n) - f(a)$.
Find the limit of the following sequences, as well as a simple equivalent if the limit is $0$ or $\pm\infty$.

Find the limit of the following sequences, as well as a simple equivalent if the limit is $0$ or $\pm\infty$.
Find the limit of the following sequences.