Spectral method approximation of small Laplacian deformations on Sobolev spaces

I'll give a clean, self-contained explanation showing

$$\|u\|_{X^{k}} \;\le\; C_k \|f\|_{X^{k}}, \qquad u=(-\varepsilon\Delta+1)^{-1} f,$$

and point out exactly where $G_\varepsilon\to\delta$ is used.

1) Representation by the Green kernel

On a compact Riemannian manifold $S$ the solution of

$$(-\varepsilon\Delta_x+1)\,G_\varepsilon(x,y)=\delta_y(x)$$

is the (symmetric, smooth off the diagonal) resolvent kernel $G_\varepsilon(x,y)$, and the solution $u$ of

$$(-\varepsilon\Delta+1)\,u=f$$

has the representation

$$u(y)=\int_S G_\varepsilon(x,y)\,f(x)\,dV_x.$$

So the operator $(-\varepsilon\Delta+1)^{-1}$ is the integral operator with kernel $G_\varepsilon$.
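
As a quick numerical sanity check (not part of the argument), here is a minimal NumPy sketch on a discretized circle, used as a stand-in for the compact manifold $S$: the matrix $(\varepsilon L+I)^{-1}$ built from a discrete periodic Laplacian $L$ plays the role of the Green kernel, and applying it to the data reproduces the direct solve of $(-\varepsilon\Delta+1)u=f$. The discretization and all names here are illustrative, not from the original.

```python
import numpy as np

# Discretized circle with n points as a stand-in for the compact manifold S:
# -Delta becomes the (positive semi-definite) periodic second-difference matrix L.
n, eps = 200, 0.05
h = 2 * np.pi / n
I = np.eye(n)
L = (2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)) / h**2

# Discrete analogue of the Green kernel: G[i, j] plays the role of G_eps(x_i, y_j) dV_x.
G = np.linalg.inv(eps * L + I)

x = h * np.arange(n)
f = np.sin(3 * x) + 0.5 * np.cos(x) ** 2    # some smooth data

u_kernel = G @ f                            # u(y) = integral of G_eps(x, y) f(x) dV_x, discretized
u_direct = np.linalg.solve(eps * L + I, f)  # direct solve of (-eps*Delta + 1) u = f
print(np.allclose(u_kernel, u_direct))      # True: same operator, two viewpoints
```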

2) $L^\infty$-estimate (where the weak convergence to $\delta$ is relevant)

Integrate the kernel equation in $x$:

$$\int_S\bigl(-\varepsilon\Delta_x G_\varepsilon(x,y)+G_\varepsilon(x,y)\bigr)\,dV_x=\int_S\delta_y(x)\,dV_x=1.$$

The Laplacian term integrates to zero (by the divergence theorem, since $S$ has no boundary), so

$$\int_S G_\varepsilon(x,y)\,dV_x = 1\qquad\text{for every }y.$$

Moreover the maximum principle gives $G_\varepsilon\ge 0$. Thus $G_\varepsilon(\cdot,y)$ is a probability density on $S$ for each fixed $y$. From the kernel representation,

$$|u(y)| \le \|f\|_{L^\infty}\int_S G_\varepsilon(x,y)\,dV_x = \|f\|_{L^\infty},$$

so

$$\|u\|_{L^\infty}\le \|f\|_{L^\infty}.$$

(Here the weak convergence $G_\varepsilon\rightharpoonup\delta$ reflects the fact that the kernel concentrates as $\varepsilon\downarrow0$, so in the limit the integral operator tends pointwise to evaluation at $y$; the inequality above, however, does not require taking limits: it follows from the positivity and unit mass of $G_\varepsilon$.)
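
A hedged numerical check of the two kernel facts used above, positivity and unit mass, and of the resulting $L^\infty$ bound, in the same illustrative discrete setting (rebuilt so the snippet runs on its own):

```python
import numpy as np

# Same discrete circle as in the previous sketch.
n, eps = 200, 0.05
h = 2 * np.pi / n
I = np.eye(n)
L = (2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)) / h**2
G = np.linalg.inv(eps * L + I)

# Positivity and unit mass: each column of G is a discrete probability density.
print(G.min() >= -1e-12)                 # effectively nonnegative (discrete maximum principle)
print(np.allclose(G.sum(axis=0), 1.0))   # discrete analogue of: integral of G_eps(., y) = 1 for every y

# Consequence: the sup norm of u = G f never exceeds that of f.
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
u = G @ f
print(np.abs(u).max() <= np.abs(f).max())   # True
```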

3) $H^k$-estimate (spectral / Fourier multipliers)

Let $\{\varphi_j\}$ be an $L^2$-orthonormal basis of eigenfunctions of $-\Delta$ with eigenvalues $\lambda_j\ge0$:

$$-\Delta\varphi_j=\lambda_j\varphi_j.$$

Expand $f=\sum_j f_j\varphi_j$; then

$$u=\sum_j(1+\varepsilon\lambda_j)^{-1}f_j\varphi_j,$$

so the Fourier coefficient of $u$ on $\varphi_j$ is $u_j=\dfrac{f_j}{1+\varepsilon\lambda_j}$. The $H^k$-norm squared is (up to an equivalent normalization)

$$\|w\|_{H^k}^2 \simeq \sum_j (1+\lambda_j)^k\,|w_j|^2.$$

Hence

$$\|u\|_{H^k}^2 = \sum_j \frac{(1+\lambda_j)^k}{(1+\varepsilon\lambda_j)^2}\,|f_j|^2.$$

But for every $\lambda_j\ge0$ and every $\varepsilon>0$,

$$\frac{(1+\lambda_j)^k}{(1+\varepsilon\lambda_j)^2}\le (1+\lambda_j)^k\cdot 1 = (1+\lambda_j)^k,$$

because $(1+\varepsilon\lambda_j)^2\ge1$. Therefore

$$\|u\|_{H^k}^2 \le \sum_j (1+\lambda_j)^k\,|f_j|^2 = \|f\|_{H^k}^2,$$

so

$$\|u\|_{H^k}\le \|f\|_{H^k}.$$

(Equivalently, the multiplier $(1+\varepsilon\lambda)^{-1}$ has modulus $\le 1$, so $(-\varepsilon\Delta+1)^{-1}$ is a contraction on every Sobolev space $H^k$.)
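
The spectral argument can be checked the same way: a small sketch (illustrative discretization, hypothetical helper `hk_norm_sq`) that diagonalizes the discrete Laplacian, forms the weighted norm $\sum_j(1+\lambda_j)^k|w_j|^2$, and verifies that the multiplier $(1+\varepsilon\lambda_j)^{-1}$ contracts it:

```python
import numpy as np

# Spectral check in the same discrete setting: diagonalize the discrete -Delta and
# verify that the multiplier 1/(1 + eps*lambda_j) contracts the weighted (H^k-type) norm.
n, eps, k = 200, 0.05, 2
h = 2 * np.pi / n
I = np.eye(n)
L = (2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)) / h**2

lam, Phi = np.linalg.eigh(L)              # L = Phi @ diag(lam) @ Phi.T, lam >= 0

def hk_norm_sq(w, k):
    """Discrete analogue of sum_j (1 + lambda_j)^k |w_j|^2, with w_j = <w, phi_j>."""
    wj = Phi.T @ w
    return np.sum((1 + lam) ** k * wj ** 2)

rng = np.random.default_rng(1)
f = rng.standard_normal(n)

fj = Phi.T @ f
u = Phi @ (fj / (1 + eps * lam))          # u_j = f_j / (1 + eps*lambda_j)

print(hk_norm_sq(u, k) <= hk_norm_sq(f, k))   # True: the multiplier has modulus <= 1
```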

4) Combine to get the $X^k$-estimate

Recall $X^k := H^k\cap L^\infty$ with norm $\|v\|_{X^k}=\|v\|_{H^k}+\|v\|_{L^\infty}$. From the two bounds above we get, with $C_k=1$,

$$\|u\|_{X^k} \;=\; \|u\|_{H^k}+\|u\|_{L^\infty} \le \|f\|_{H^k}+\|f\|_{L^\infty} = \|f\|_{X^k}.$$

So the desired inequality holds (in fact with constant $1$) for every $\varepsilon>0$.


Remarks / why Ricciardi mentioned $G_\varepsilon\to\delta$

  • The convergence $G_\varepsilon\rightharpoonup\delta$ explains the limiting behaviour of the solution as $\varepsilon\downarrow0$: $u=G_\varepsilon * f\to f$ pointwise (and in suitable norms), consistent with the fact that $(-\varepsilon\Delta+1)^{-1}\to\mathrm{Id}$.
  • But to prove the operator norm bound $\|u\|_{X^k}\le C_k\|f\|_{X^k}$ you do not actually need the weak convergence; you only need the two facts used above:
    1. $G_\varepsilon(\cdot,y)\ge0$ and $\int_S G_\varepsilon(x,y)\,dV_x=1$ for each $y$ (gives the $L^\infty$ bound),
    2. the spectral multiplier $(1+\varepsilon\lambda)^{-1}$ has modulus $\le1$ (gives the $H^k$ bound).

So the convergence to the delta is a helpful intuition and gives pointwise limit information, but the inequality itself follows from the kernel mass + positivity and from the spectral (Fourier multiplier) estimate.
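
For intuition only, a short sketch of the limiting behaviour described in the first bullet: on the same illustrative discrete circle, the solution $u=(\varepsilon L+I)^{-1}f$ approaches $f$ as $\varepsilon\downarrow0$ (the discretization and parameters are assumptions for the demo, not from the original).

```python
import numpy as np

# Illustration of the remark: as eps -> 0 the kernel concentrates at the diagonal
# and u = (-eps*Delta + 1)^{-1} f approaches f (here in the discrete sup norm).
n = 200
h = 2 * np.pi / n
I = np.eye(n)
L = (2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)) / h**2

x = h * np.arange(n)
f = np.sin(3 * x)

for eps in (1.0, 0.1, 0.01, 0.001):
    u = np.linalg.solve(eps * L + I, f)
    print(eps, np.abs(u - f).max())      # the error shrinks as eps decreases
```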