What are "Asymptotic and Perturbation Methods"? Why we need to study them? And why we study them together?
Asymptotic methods. Usually we study solutions of some ODEs and PDEs (as you know, many phenomena are described by ODEs and PDEs). These equations, or just their solutions, depend on an (infinitely) small or (infinitely) large parameter, and we derive solutions as asymptotic expansions with respect to this parameter. Sometimes we consider integrals depending on this parameter; such integrals usually represent such solutions.
Perturbation methods. Assume that an equation depends on a small parameter $\varepsilon \ll 1$ and that for $\varepsilon =0$ we also have an equation, which is easier to solve. If the equation at $\varepsilon =0$ is just a special case of the general equation, we have a regular case; otherwise (usually the equation at $\varepsilon=0$ degenerates, e.g. has lower order) we have a singular case.
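As a minimal illustration (these two equations are examples of mine, not taken from later chapters): \begin{gather*} y' = y + \varepsilon y^2 \qquad \text{(regular: at } \varepsilon=0 \text{ we still have a first-order ODE, } y'=y\text{)},\\ \varepsilon y' + y = f(t) \qquad \text{(singular: at } \varepsilon=0 \text{ the ODE degenerates into the algebraic equation } y=f(t)\text{)}. \end{gather*}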
Let us now go over the individual chapters.
We start from Chapter 2 "Asymptotic expansion of integrals" where we consider \begin{equation*} I(k):=\int f(x)e^{k \phi(x)}\,dx \end{equation*} and \begin{equation*} I(k)=\int f(x)e^{ik \phi(x)}\,dx \end{equation*} with a real-valued function $\phi(x)$ (both $\phi$ and $f$ are infinitely smooth), and their complex and multidimensional versions. We are interested in how such integrals behave as $k \to +\infty$, because in many cases approximate solutions are obtained in this form.
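To give a preview of the typical results of Chapter 2 (stated here without the precise assumptions, e.g. a single non-degenerate interior critical point of $\phi$ and sufficient decay of $f$): Laplace's method gives, for a maximum of $\phi$ at $x_0$, \begin{equation*} I(k)\sim f(x_0)\,e^{k\phi(x_0)}\sqrt{\frac{2\pi}{k\,|\phi''(x_0)|}}\qquad \text{as } k\to+\infty, \end{equation*} while the stationary phase method gives, for a stationary point with $\phi'(x_0)=0$, $\phi''(x_0)\ne 0$, \begin{equation*} I(k)\sim f(x_0)\,e^{ik\phi(x_0)+\frac{i\pi}{4}\operatorname{sign}\phi''(x_0)}\sqrt{\frac{2\pi}{k\,|\phi''(x_0)|}}. \end{equation*}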
In Chapter 3 we study how solutions of ODEs behave near a singular point. For example, how do solutions of the following equations \begin{gather*} \sqrt{t}y'=f(t,y),\\ t y'=f(t,y),\\ t^2y'=f(t,y) \end{gather*} behave as $t\to+ 0$?
How do solutions behave near infinity (as $t\to +\infty$)?
Assume that the equation includes a small parameter (say, $\varepsilon \ll 1$) or a large parameter (say, $\lambda \gg 1$). How do solutions behave as $\varepsilon \to +0$ or $\lambda \to +\infty$?
What is a proper description? We could consider similar problems for PDEs as well.
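To indicate why the three model equations above may behave very differently as $t\to+0$, one can solve their simplest linear versions explicitly (an illustration of mine, not taken from Chapter 3): \begin{gather*} \sqrt{t}\,y'=ay \implies y=Ce^{2a\sqrt{t}}\to C,\\ t y'=ay \implies y=Ct^{a},\\ t^2 y'=ay \implies y=Ce^{-a/t}, \end{gather*} so in the first case solutions have finite limits, in the second they behave like powers of $t$, and in the third they have an essential singularity at $t=0$.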
In Chapter 4 we consider an ODE or PDE containing a small parameter $\varepsilon$, possibly not even in the equation but in the initial conditions. Assume that we know how to solve this equation for $\varepsilon=0$. How do we solve it for $\varepsilon \ll 1$?
The simplest answer would be \begin{equation} X=X_0+X_1\varepsilon + X_2\varepsilon^2+\ldots \label{eq-1.1.1} \end{equation} but in many cases it is not valid. If (\ref{eq-1.1.1}) holds, the perturbation is regular.
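For instance (a simple illustration of mine, not an example from later chapters): for the regularly perturbed problem $\dot X = X + \varepsilon X^2$, $X(0)=1$, substituting (\ref{eq-1.1.1}) and collecting powers of $\varepsilon$ gives \begin{gather*} \dot X_0 = X_0,\quad X_0(0)=1 \implies X_0=e^{t},\\ \dot X_1 = X_1 + X_0^2,\quad X_1(0)=0 \implies X_1=e^{2t}-e^{t}, \end{gather*} and so on: each $X_n$ is determined from the previous ones by a linear problem.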
The classical example is celestial mechanics, where the masses of the planets are small in comparison with the mass of the Sun. If the planetary masses were simply $0$, the planets would move along Keplerian ellipses, which for our Solar System are not very eccentric (eccentricities of about $.1$). However, the masses of the planets, while really small (Jupiter's mass is about $10^{-3}$ of the solar mass), are not zero: the gravity of each planet perturbs the orbits of the other planets, so the trajectories are not closed, though this could be observed only with observations much more precise than those needed to see the eccentricity. For large parts of the 18th and 19th centuries mathematicians were calculating orbits of planets using perturbation methods, and finally Le Verrier and Adams found that Uranus's orbit is perturbed by an unknown planet, were able to calculate where this planet (Neptune) was, and astronomers found it!
Before this, Clairaut found perturbations of the lunar orbit due to the Sun: the Earth–Moon system rotates around the Sun, but since the Sun's gravity differs at the points where the Earth and the Moon are located, it perturbs the rotation of the Moon about the Earth. This perturbation is more significant than the perturbations due to the gravitational pull between the planets.
But the case of singular perturbation is even more interesting. For example, consider the following two-point problem for an ODE: \begin{align} -&\varepsilon^2 u'' + u=f\qquad 0 < x < l, \label{eq-1.1.2}\\ &u(0)=b_1,\ u (l)=b_2. \label{eq-1.1.3} \end{align} One can easily prove that the solution $u=u_\varepsilon(x)$ exists for all $\varepsilon>0$ and is uniformly bounded. But does this mean that $u_\varepsilon (x)\to u_0$ as $\varepsilon \to +0$, where $u_0$ solves the same problem with $\varepsilon=0$? The answer is kind of "yes", but the convergence is not uniform. Indeed, at $\varepsilon =0$ equation (\ref{eq-1.1.2}) becomes $u=f$, and for this equation conditions (\ref{eq-1.1.3}) cannot be imposed. Thus, unless $f(0)=b_1$ and $f(l)=b_2$, the convergence $u_\varepsilon \to f$ cannot be uniform, and the limit $u_0=f$ satisfies (\ref{eq-1.1.2}) with $\varepsilon=0$ but not (\ref{eq-1.1.3}).
A better approximation (with an $O(\varepsilon)$ error) is given by \begin{equation} u_\varepsilon = f + \underbracket{(b_1-f(0))e^{-x/\varepsilon}} + \underbracket{(b_2-f(l))e^{-(l-x)/\varepsilon}} \label{eq-1.1.4} \end{equation} where the bracketed terms are boundary layer type terms (they are negligible for $x\gg \varepsilon$ and $l-x\gg \varepsilon$ respectively).
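One can check the boundary layer terms directly: since \begin{equation*} (-\varepsilon^2\partial_x^2+1)\,e^{-x/\varepsilon} = -\varepsilon^2\cdot \varepsilon^{-2}e^{-x/\varepsilon}+e^{-x/\varepsilon}=0, \end{equation*} each of them solves the homogeneous equation exactly; the residual of (\ref{eq-1.1.4}) in equation (\ref{eq-1.1.2}) is just $-\varepsilon^2 f''$, while the boundary conditions (\ref{eq-1.1.3}) are satisfied up to exponentially small terms like $(b_2-f(l))e^{-l/\varepsilon}$ at $x=0$.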
But we want a better, multi-term approximation similar to (\ref{eq-1.1.1}) but including boundary layer type terms. We could also be interested in different boundary value problems.
And also in multidimensional problems: \begin{align} -&\varepsilon^2 \Delta u + u=0\qquad x\in \Omega,\label{eq-1.1.5}\\ &u|_{\partial \Omega}=g\label{eq-1.1.6} \end{align} where $\Omega$ is a domain and $\partial\Omega$ is its boundary.
Or we can consider a Neumann (or Robin) boundary condition on the whole boundary or on a part of it.
Remark. If the Dirichlet boundary condition is given on $\Gamma_1\subset \partial\Omega$ and the Neumann boundary condition is given on $\Gamma_2= \partial\Omega\setminus \Gamma_1$, and the closures of $\Gamma_1$ and $\Gamma_2$ intersect, then this is a singular problem even for $\varepsilon=1$, and the asymptotics near $\overline{\Gamma}_1\cap\overline{\Gamma}_2$ could be studied.
In Chapter 5 we consider short-wave and semiclassical asymptotics. Let us change the sign of the $\varepsilon^2\Delta$ term in (\ref{eq-1.1.5}). The situation changes drastically; it becomes even more complicated. We get the Helmholtz equation \begin{equation} \Delta u + k^2 u=0 \label{eq-1.1.7} \end{equation} with $k=1/\varepsilon$.
This equation can be obtained from the wave equation \begin{equation} \Delta u -c^{-2}\partial_t^2 u=0 \label{eq-1.1.8} \end{equation} after the substitution $u=e^{i\omega t}v(x)$ with $\omega =c k$.
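Indeed, substituting $u=e^{i\omega t}v(x)$ into (\ref{eq-1.1.8}) gives \begin{equation*} \Delta v - c^{-2}(i\omega)^2 v = \Delta v + \frac{\omega^2}{c^2}\, v = \Delta v + k^2 v =0. \end{equation*}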
Solutions of the wave equation satisfying initial conditions \begin{equation} u|_{t=0} = a(x) e^{i\phi_0 (x)k},\qquad u_t|_{t=0} = kb(x) e^{i\phi_0 (x)k} \label{eq-1.1.9} \end{equation} are constructed as \begin{equation} u(x,t)= A^+(x,t,k) e^{i\phi^+ (x,t)k} + A^-(x,t,k) e^{i\phi^- (x,t)k} \label{eq-1.1.10} \end{equation} where the eikonals $\phi^\pm$ satisfy the eikonal equation \begin{equation} \phi^\pm_t= \pm c|\nabla_x\phi^{\pm}| \label{eq-1.1.11} \end{equation} with initial data \begin{equation} \phi^\pm|_{t=0}=\phi_0 \label{eq-1.1.12} \end{equation} and the amplitudes are \begin{equation} A^\pm (x,t,k)\sim \sum_{j=0} ^\infty a^\pm_{j}(x,t)k^{-j} \label{eq-1.1.13} \end{equation} where $a^\pm_j$ satisfy transport equations. The series here is asymptotic (a notion we will learn from the very beginning). This is the short-wave approximation.
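To see where these equations come from (a sketch only; the details are in Chapter 5): substituting $u=Ae^{ik\phi}$ into (\ref{eq-1.1.8}) gives \begin{multline*} (\Delta -c^{-2}\partial_t^2)\bigl(Ae^{ik\phi}\bigr) = \Bigl[-k^2 A\bigl(|\nabla_x\phi|^2-c^{-2}\phi_t^2\bigr)\\ + ik\bigl(2\nabla_x A\cdot\nabla_x\phi - 2c^{-2}A_t\phi_t + A(\Delta\phi - c^{-2}\phi_{tt})\bigr) + \bigl(\Delta A - c^{-2}A_{tt}\bigr)\Bigr]e^{ik\phi}; \end{multline*} the $k^2$-term vanishes exactly when $\phi$ satisfies the eikonal equation $\phi_t=\pm c|\nabla_x\phi|$, and setting the $k^1$-term to zero gives the first transport equation for the leading amplitude.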
The construction seems to be straightforward, but there is a pitfall: the eikonal is constructed by a geometrical ray construction, and while the ray construction itself works for all $t$, the eikonal may become non-smooth due to caustics or focusing of the rays, and the short-wave approximation fails there. We will study what happens in such cases.
Oscillatory integrals will be handy here, and in Chapter 6 and Chapter 7 we develop the theory of such integrals.
Similarly, we can consider short-wave approximations for Maxwell's equations. We also consider the semiclassical approximation (i.e. as $\hbar \ll 1$) for the Schrödinger equation \begin{equation} i\hbar \psi_t = -\frac{\hbar^2}{2m}\Delta \psi + V\psi . \label{eq-1.1.14} \end{equation}
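Here the analogue of the short-wave ansatz is $\psi = A e^{iS/\hbar}$ (a sketch, stated without precise assumptions): substituting it into (\ref{eq-1.1.14}) and collecting the $\hbar^0$-terms gives the Hamilton–Jacobi (eikonal) equation \begin{equation*} S_t + \frac{|\nabla_x S|^2}{2m} + V = 0, \end{equation*} while the next order gives a transport equation for $A$.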
A related topic: calculate approximately (as $\hbar\ll 1$) the eigenvalues of the $1$-dimensional Schrödinger operator \begin{equation} \hat{H}= -\frac{\hbar^2}{2m}\partial_x^2 + V . \label{eq-1.1.15} \end{equation}
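To preview the kind of answer one obtains (the Bohr–Sommerfeld quantization rule, quoted here without the precise assumptions on the potential well $V$): the $n$-th eigenvalue $E_n$, $n=0,1,2,\ldots$, is determined approximately by \begin{equation*} \int_{x_-(E_n)}^{x_+(E_n)} \sqrt{2m\bigl(E_n-V(x)\bigr)}\,dx \approx \pi\hbar \Bigl(n+\frac{1}{2}\Bigr), \end{equation*} where $x_\pm(E)$ are the classical turning points, $V(x_\pm(E))=E$.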
Let us return to celestial mechanics. There everything seems to be straightforward because we have a regularly perturbed ODE system. Not so fast: usually the terms $X_n$ in the expansion (\ref{eq-1.1.1}) satisfy $X_n(t)=O(t^n)$, and therefore (\ref{eq-1.1.1}) provides a good approximation only for $\varepsilon t\ll 1$. Can we get a good approximation under a less restrictive assumption, say $\varepsilon^N t\ll 1$? Or better, without any restriction at all (that is, for all $t$)? Chapter 8 answers the first question, with an expansion of the same form as (\ref{eq-1.1.1}), \begin{equation*} X =X_0+X_1\varepsilon + X_2\varepsilon^2+\ldots \tag{\ref{eq-1.1.1}} \end{equation*} but with $X_k= X_k(t, \varepsilon t, \ldots, \varepsilon ^Nt)$ rather than $X_k=X_k(t)$.
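A standard illustration of the difficulty (an example of mine, not from Chapter 8) is the weakly damped oscillator \begin{gather*} \ddot X + 2\varepsilon \dot X + X =0,\qquad X(0)=1,\ \dot X(0)=0:\\ X_0=\cos t,\qquad \ddot X_1 + X_1 = -2\dot X_0=2\sin t \implies X_1=\sin t - t\cos t, \end{gather*} so the first correction contains the secular term $-\varepsilon t\cos t$ and the expansion is useful only for $\varepsilon t\ll 1$, while the two-scale form $X_0(t,\varepsilon t)=e^{-\varepsilon t}\cos t$ captures the slow decay and removes this restriction.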
The answer to the second question is much more complicated and far beyond the scope of this class.
Finally, in Chapter 9 "Burgers equation" we consider the solution to the Burgers equation \begin{gather} u_t + \frac{1}{2}(u^2)_x = \varepsilon u_{xx} \qquad -\infty < x <\infty ,\ t > 0 \label{eq-1.1.16} \end{gather} satisfying the initial condition $u|_{t=0}=\left\{\begin{aligned} &u_- && x<0,\\ & u_+ && x>0 \end{aligned}\right.\ $ with $u_-> u_+$, and its asymptotics as the viscosity $\varepsilon \to +0$. It tends to the solution $u=\left\{\begin{aligned} &u_- && x< vt,\\ & u_+ && x> vt \end{aligned}\right.\ $ of (\ref{eq-1.1.16}) with $\varepsilon=0$, where $v=\frac{1}{2}(u_++u_-)$.
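In fact, for these limits at $x\to\pm\infty$ equation (\ref{eq-1.1.16}) has an exact traveling-wave solution (a standard formula, quoted here for orientation): \begin{equation*} u(x,t)= v - \frac{u_--u_+}{2}\,\tanh\Bigl(\frac{(u_--u_+)(x-vt)}{4\varepsilon}\Bigr),\qquad v=\frac{1}{2}(u_++u_-), \end{equation*} which shows explicitly that the jump is smeared over a layer of width $O(\varepsilon)$ and converges to the step profile as $\varepsilon\to+0$.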
The problems which we consider can be written symbolically as \begin{gather} L_\varepsilon u_\varepsilon =f. \label{eq-1.1.17} \end{gather} We are mainly interested in asymptotic solutions satisfying \begin{gather} L_\varepsilon v_\varepsilon \sim f \label{eq-1.1.18} \end{gather} in a sense explained later. It does not necessarily mean, however, that \begin{gather} u_\varepsilon \sim v_\varepsilon . \label{eq-1.1.19} \end{gather} The proof of this requires some non-trivial restrictions and some knowledge of Real Analysis which would make this class inaccessible to anyone but mathematics specialist students.
Guessing the form in which to look for an approximation is more often than not an art (well, here we are talking about original research). I have been privileged to know probably the greatest artists in this area, Arlen Il'in and Vasilii Babich:
https://ru.wikipedia.org/wiki/Ильин,_Арлен_Михайлович https://ru.wikipedia.org/wiki/Бабич,_Василий_Михайлович
(sorry, Russian only, but you can use Google Translate)