Elements of Statistical Learning CH5
1. (Ex. 5.4) \(f(X)\) is given as \[f(X) = \sum_{i=0}^{3} \beta_i X^i + \sum_{k=1}^{K} \theta_k (X - \xi_k)_+^3\]
i) Show that the condition of natural cubic splines implies: \[\beta_2 = \beta_3 = 0, \quad \sum_{k=1}^{K} \theta_k = 0, \quad \sum_{k=1}^{K} \xi_k \theta_k = 0\]
solution
1) \(\beta_2 = \beta_3 = 0\)
\(f(X)\) must be linear when \(X \leq \xi_1\). On that region every truncated term \((X - \xi_k)_+^3\) vanishes, so
\[ f(X) = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 \quad (X \leq \xi_1), \]
and linearity forces \(\beta_2 = \beta_3 = 0\).
2) \(\sum_{k=1}^{K} \theta_k = 0, \quad \sum_{k=1}^{K} \xi_k \theta_k = 0\)
\(f(X)\) must also be linear when \(X \geq \xi_K\), where every truncated power is active:
\[ \begin{align*} f(X) &= \beta_0 + \beta_1 X + \sum_{k=1}^{K} \theta_k (X - \xi_k)^3 \qquad (\because \beta_2 = \beta_3 = 0) \\ &= \beta_0 + \beta_1 X + \sum_{k=1}^{K} \theta_k (X^3 - 3 \xi_k X^2 + 3 \xi_k^2 X - \xi_k^3) \\ &= \left( \beta_0 - \sum_{k=1}^{K} \theta_k \xi_k^3 \right) + \left( \beta_1 + 3 \sum_{k=1}^{K} \theta_k \xi_k^2 \right) X - \left( 3 \sum_{k=1}^{K} \theta_k \xi_k \right) X^2 + \left( \sum_{k=1}^{K} \theta_k \right) X^3 \quad (X \geq \xi_K) \end{align*} \]
Linearity forces the \(X^2\) and \(X^3\) coefficients to vanish, so:
\[ \sum_{k=1}^{K} \theta_k = 0 \quad \& \quad \sum_{k=1}^{K} \xi_k \theta_k = 0 \]
ii) Show that:\[N_1(X) = 1, \quad N_2(X) = X, \quad N_{k+2}(X) = d_k(X) - d_{K-1}(X)\]where\[d_k(X) = \frac{(X - \xi_k)^3_+ - (X - \xi_{K})^3_+}{\xi_K - \xi_k}, \quad (k = 1, \dots, K - 2)\]
solution
We write \(X\) as \(x\) below for brevity.
\[\begin{aligned} \sum_{k=1}^{K} \theta_k (x - \xi_k)_+^3 &= \sum_{k=1}^{K-2} \theta_k (x - \xi_k)_+^3 + \theta_{K-1} (x - \xi_{K-1})_+^3 + \theta_K (x - \xi_K)_+^3 \\ &= \sum_{k=1}^{K-2} \theta_k (x - \xi_k)_+^3 + (\theta_{K-1} + \theta_K)(x - \xi_K)_+^3 + \theta_{K-1}\{(x - \xi_{K-1})_+^3 - (x - \xi_K)_+^3\} \\ &= \sum_{k=1}^{K-2} \theta_k \{(x - \xi_k)_+^3 - (x - \xi_K)_+^3\} + \theta_{K-1}\{(x - \xi_{K-1})_+^3 - (x - \xi_K)_+^3\} \qquad \Bigl(\because \theta_{K-1} + \theta_K = -\sum_{k=1}^{K-2} \theta_k\Bigr) \\ &= \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k)\, d_k(x) + \theta_{K-1} (\xi_K - \xi_{K-1})\, d_{K-1}(x) \\ &= \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k)\, d_k(x) - \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k)\, d_{K-1}(x) \qquad \Bigl(\because \theta_{K-1}(\xi_K - \xi_{K-1}) = -\sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k)\Bigr) \\ &= \sum_{k=1}^{K-2} (\xi_K - \xi_k) \theta_k \left( d_k(x) - d_{K-1}(x) \right) \\ &=: \sum_{k=1}^{K-2} \alpha_{k+2} \left( d_k(x) - d_{K-1}(x) \right), \qquad \alpha_{k+2} = \theta_k (\xi_K - \xi_k), \quad k = 1, \dots, K-2. \end{aligned}\]
The first parenthetical uses \(\sum_k \theta_k = 0\); the second follows from \(\sum_{k=1}^{K} \theta_k (\xi_K - \xi_k) = \xi_K \sum_k \theta_k - \sum_k \theta_k \xi_k = 0\), whose \(k = K\) term vanishes.
Letting \(\alpha_1 = \beta_0\) and \(\alpha_2 = \beta_1\), we have
\[ \begin{align*} f(x) &= \beta_0 + \beta_1 x + \sum_{k=1}^{K} \theta_k (x - \xi_k)_+^3 \\ &= \alpha_1 N_1(x) + \alpha_2 N_2(x) + \sum_{k=1}^{K-2} \alpha_{k+2} N_{k+2}(x) \\ &= \sum_{k=1}^{K} \alpha_k N_k(x). \end{align*} \]
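cf) A quick numerical sanity check (NumPy; not part of the exercise, all names hypothetical): draw random knots and a random \(\theta\) satisfying the two constraints, then verify the basis identity of ii) and the linearity of \(f\) outside \([\xi_1, \xi_K]\).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 6
xi = np.sort(rng.uniform(0.0, 10.0, K))       # knots xi_1 < ... < xi_K

# Random theta with sum(theta) = 0 and sum(theta * xi) = 0:
# project a random vector onto the null space of A = [1; xi].
A = np.vstack([np.ones(K), xi])
theta = rng.normal(size=K)
theta -= A.T @ np.linalg.solve(A @ A.T, A @ theta)

pos3 = lambda t: np.maximum(t, 0.0) ** 3      # (t)_+^3

def d(k, x):                                  # d_k(x), with 1-indexed k
    return (pos3(x - xi[k - 1]) - pos3(x - xi[K - 1])) / (xi[K - 1] - xi[k - 1])

x = np.linspace(-5.0, 15.0, 2001)
lhs = sum(theta[k] * pos3(x - xi[k]) for k in range(K))
rhs = sum(theta[k - 1] * (xi[K - 1] - xi[k - 1]) * (d(k, x) - d(K - 1, x))
          for k in range(1, K - 1))
assert np.allclose(lhs, rhs)                  # basis identity of ii)

# Linearity outside [xi_1, xi_K]: second differences of f vanish there.
f = 1.0 + 2.0 * x + lhs                       # beta_0 = 1, beta_1 = 2
outside = (x[1:-1] < xi[0]) | (x[1:-1] > xi[-1])
assert np.allclose(np.diff(f, 2)[outside], 0.0, atol=1e-5)
print("Ex. 5.4 identities verified")
```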
2. (Ex. 5.7)
(a) Let \(g\) be the natural cubic spline interpolant to the pairs \(\{(\xi_i, z_i)\}_{i=1}^{N}\), let \(\hat{g}\) be any other sufficiently smooth interpolant of the same pairs, and set \(h = \hat{g} - g\). Show that:\[\int_{a}^{b} g''(x) h''(x) dx = 0\]
solution
\[ \int_{a}^{b} g''(x) h''(x) dx = \left[ g''(x) h'(x) \right]_{x=a}^{x=b} - \int_{a}^{b} g'''(x) h'(x) dx \]
\(g\) is an NCS (natural cubic spline), so \(g\) is linear on \([a, \xi_1]\) and \([\xi_N, b]\); in particular \(g''(a) = g''(b) = 0\). So,
\[ \left[ g''(x) h'(x) \right]_{x=a}^{x=b} = g''(b) h'(b) - g''(a) h'(a) = 0 \]
\(g\) is a cubic polynomial between adjacent knots, so \(g'''\) is a constant, say \(g'''(\xi_i^+)\), on each interval \((\xi_i, \xi_{i+1})\), and \(g''' \equiv 0\) on \((a, \xi_1)\) and \((\xi_N, b)\) where \(g\) is linear. Hence,
\[ \int_{a}^{b} g'''(x) h'(x) dx = \sum_{i=1}^{N-1} g'''(\xi_i^+) \int_{\xi_i}^{\xi_{i+1}} h'(x) dx = \sum_{i=1}^{N-1} g'''(\xi_i^+) \left( h(\xi_{i+1}) - h(\xi_i) \right) \]
Both \(g\) and \(\hat{g}\) interpolate \(\{ (\xi_i, z_i) \}_{i=1}^{N}\), so \(h(\xi_i) = \hat{g}(\xi_i) - g(\xi_i) = 0\) for every \(i\) and each term above vanishes. This leads to:
\[ \int_{a}^{b} g'''(x) h'(x) dx = 0 \]
Thus,
\[ \int_{a}^{b} g''(x) h''(x) dx = 0 \]
(b) Show that: \[\int_{a}^{b} \hat{g}''(x)^2 dx \geq \int_{a}^{b} g''(x)^2 dx\]
(We write \(\int\) for \(\int_{a}^{b}\) and \(\int g\) for \(\int g(x)\, dx\) for simplicity.)
solution
\[ \begin{aligned} \int \hat{g}''^2 &= \int (g'' + h'')^2 = \int g''^2 + \int h''^2 + 2 \int g'' h'' \\ &= \int g''^2 + \int h''^2 \qquad (\because \textstyle\int g'' h'' = 0 \text{ by part (a)}) \\ &\geq \int g''^2 \qquad (\because \textstyle\int h''^2 \geq 0). \end{aligned} \]
The equality holds when \(\int (h'')^2 = 0\), i.e., \(h'' = 0\) a.e. Then \(h' = c\) (constant) and \(h(x) = cx + d\) is linear. Since \(h(\xi_i) = 0\) for all \(i = 1, \dots, N\) with \(N \geq 2\), we get \(c = d = 0\), so \(h = 0\) and \(\hat{g} = g\).
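cf) A numerical check of (a) and (b) (a sketch, assuming SciPy's `CubicSpline`: `bc_type='natural'` gives the NCS interpolant, `'not-a-knot'` serves as the competing interpolant \(\hat{g}\); the tolerance just allows for quadrature error):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
xi = np.sort(rng.uniform(0.0, 1.0, 8))         # knots
z = rng.normal(size=8)

g = CubicSpline(xi, z, bc_type='natural')      # NCS interpolant g
gt = CubicSpline(xi, z, bc_type='not-a-knot')  # another interpolant, our g-hat

x = np.linspace(xi[0], xi[-1], 200_001)
g2, gt2 = g(x, 2), gt(x, 2)                    # second derivatives
h2 = gt2 - g2                                  # h'' with h = g-hat - g

cross = trapezoid(g2 * h2, x)                  # part (a): ~ 0
assert abs(cross) < 1e-4 * trapezoid(g2**2, x)
assert trapezoid(gt2**2, x) >= trapezoid(g2**2, x)  # part (b)
print("Ex. 5.7 (a), (b) verified numerically")
```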
(c) Show that: \[\arg\min_{f} \left\{ \sum_{i=1}^{N} (y_i - f(x_i))^2 + \lambda \int (f'')^2 \right\} \quad \text{is NCS}.\]
solution
Let: \[ \hat{f} = \arg\min_{f} \left\{ \sum_{i=1}^{N} (y_i - f(x_i))^2 + \lambda \int (f'')^2 \right\} \] and let \(g\) be an NCS interpolant to the pairs \(\{(x_i, \hat{f}(x_i))\}_{i=1}^{N}\). Since \(g(x_i) = \hat{f}(x_i)\) by construction, \(y_i - \hat{f}(x_i) = y_i - g(x_i)\), so: \[ \sum_{i=1}^{N} (y_i - \hat{f}(x_i))^2 = \sum_{i=1}^{N} (y_i - g(x_i))^2. \] \(\hat{f}\) is the minimizer, so:
\[ \int (g'')^2 \geq \int (\hat{f}'')^2. \]
\(g\) is an NCS interpolant of the same values, so by part (b) (with \(\hat{f}\) in the role of \(\hat{g}\)):
\[ \int (\hat{f}'')^2 \geq \int (g'')^2 \quad (\text{by part (b)}). \]
This shows:
\[ \int (\hat{f}'')^2 = \int (g'')^2. \]
So \(g\) attains the same criterion value and is also a minimizer. Moreover, the equality \(\int (\hat{f}'')^2 = \int (g'')^2\) together with the equality condition of part (b) forces \(\hat{f} = g\): the minimizer is itself a natural cubic spline.
\[ \therefore \quad \arg\min_{f} \left\{ \sum_{i=1}^{N} (y_i - f(x_i))^2 + \lambda \int (f'')^2 \right\} = g. \]
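cf) An illustrative check (assuming SciPy \(\geq\) 1.10, whose `make_smoothing_spline` fits the cubic smoothing spline for this criterion): the fitted smoother should have vanishing second derivative at the boundary knots, as a natural cubic spline must.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 30))
y = np.sin(x) + 0.3 * rng.normal(size=30)

spl = make_smoothing_spline(x, y, lam=1.0)   # penalized least-squares smoother

# A natural cubic spline is linear beyond the boundary knots, so its
# second derivative vanishes at x_1 and x_N.
print(spl.derivative(2)([x[0], x[-1]]))      # ~ [0. 0.]
```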
3. (Ex. 5.15) Prove the following.
(For simplicity, we write \(\langle \cdot, \cdot \rangle\) for the RKHS inner product \(\langle \cdot, \cdot \rangle_{\mathcal{H}_K}\), \(\| \cdot \|\) for its norm, and \((\cdot, \cdot)\) for the \(L_2\) inner product.)
(a) \(\langle K(\cdot, x), f \rangle = f(x)\)
proof
Let:
\[ f(t) = \sum_{i} c_i \phi_i(t), \quad K(x, y) = \sum_{i} \frac{1}{r_i} \phi_i(x) \phi_i(y), \quad \text{with } \langle \phi_i, \phi_j \rangle = r_i \delta_{ij} \]
Then:
\[ \begin{align*} \langle K(\cdot, x), f \rangle &= \left\langle \sum_{i} \frac{1}{r_i} \phi_i(x) \phi_i(\cdot), \sum_{j} c_j \phi_j(\cdot) \right\rangle \\ &= \sum_{i} \frac{1}{r_i} \phi_i(x) \langle \phi_i(\cdot), \sum_{j} c_j \phi_j(\cdot) \rangle \\ &= \sum_{i} \frac{1}{r_i} \phi_i(x) \sum_{j} c_j \langle \phi_i(\cdot), \phi_j(\cdot) \rangle \\ &= \sum_{i} \frac{1}{r_i} \phi_i(x) \sum_{j} c_j r_i \delta_{ij} \\ &= \sum_{i} \phi_i(x) c_i \\ &= f(x) \end{align*} \]
(b) \(\langle K(\cdot, x_i), K(\cdot, x_j) \rangle = K(x_i, x_j)\)
proof
\[\begin{aligned} \langle K(\cdot, x_i), K(\cdot, x_j) \rangle &= \left\langle \sum_k \frac{1}{r_k} \phi_k(x_i) \phi_k(\cdot), \sum_l \frac{1}{r_l} \phi_l(x_j) \phi_l(\cdot) \right\rangle \\ &= \sum_k \sum_l \frac{1}{r_k} \frac{1}{r_l} \phi_k(x_i) \phi_l(x_j) \langle \phi_k, \phi_l \rangle \\ &= \sum_k \frac{1}{r_k^2} \phi_k(x_i) \phi_k(x_j)\, r_k \qquad (\because \langle \phi_k, \phi_l \rangle = r_k \delta_{kl}) \\ &= \sum_k \frac{1}{r_k} \phi_k(x_i) \phi_k(x_j) = K(x_i, x_j). \end{aligned}\]
(c) If \(g(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i)\), then \[J(g) = \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j K(x_i, x_j)\]
proof
\(J(g) = \langle g, g \rangle\) by definition. So,
\[\begin{aligned} J(g) &= \left\langle \sum_{i=1}^{N} \alpha_i K(\cdot, x_i), \sum_{j=1}^{N} \alpha_j K(\cdot, x_j) \right\rangle \\ &= \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j \langle K(\cdot, x_i), K(\cdot, x_j) \rangle \\ &= \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j K(x_i, x_j) \quad \text{(by part (b))}. \end{aligned}\]
(d) Suppose \[\hat{g} = g + \rho\] with \(\rho \in \mathcal{H}_K\) and orthogonal to each \(K(x_i, \cdot)\) in \(\mathcal{H}_K\) \(\ \forall i = 1, \dots, N\). Show that \[\sum_{i=1}^{N} L(y_i, \hat{g}(x_i)) + \lambda J(\hat{g}) \geq \sum_{i=1}^{N} L(y_i, g(x_i)) + \lambda J(g).\]
proof
1) \(L(y_i, \hat{g}(x_i)) = L(y_i, g(x_i))\)
By part (a) and the assumed orthogonality, \(\langle K(x_i, \cdot), \rho(\cdot) \rangle = \rho(x_i) = 0\) \(\forall i = 1, \dots, N\). So, \(\hat{g}(x_i) = g(x_i)\) and
\[ L(y_i, \hat{g}(x_i)) = L(y_i, g(x_i)) \quad \forall i. \]
2) \(J(\hat{g}) \geq J(g)\)
\[ J(\hat{g}) = \langle g + \rho, g + \rho \rangle = \| g \|^2 + \| \rho \|^2 + 2 \langle g, \rho \rangle \]
\[ \langle g, \rho \rangle = \left\langle \sum_{i} \alpha_i K(x_i, \cdot), \rho(\cdot) \right\rangle = \sum_{i} \alpha_i \langle K(x_i, \cdot), \rho(\cdot) \rangle = 0,\qquad \because K(x_i, \cdot) \perp \rho(\cdot) \]
So, \[ J(\hat{g}) = J(g) + \| \rho \|^2 \geq J(g) \]
The equality holds iff \(\| \rho \| = 0\), i.e., \(\rho = 0\).
3) Conclusion
Combining 1) and 2), \[ \sum_{i=1}^{N} L(y_i, \hat{g}(x_i)) + \lambda J(\hat{g}) \geq \sum_{i=1}^{N} L(y_i, g(x_i)) + \lambda J(g). \]
cf) In conclusion:
\[ \arg\min_{f \in \mathcal{H}_K} \left\{ \sum_{i=1}^{N} L(y_i, f(x_i)) + \lambda J(f) \right\} = g. \]
proof
Consider the subspace \(G \subseteq \mathcal{H}_K\) spanned by \(\{ K(x_1, \cdot), \dots, K(x_N, \cdot) \}\) (each \(K(x_i, \cdot) \in \mathcal{H}_K\)); being finite-dimensional, \(G\) is closed, and we let \(G^{\perp}\) be its orthogonal complement in \(\mathcal{H}_K\). We have: \[ \mathcal{H}_K = G \oplus G^{\perp} \quad (\because \text{Proposition 4.2, Real Analysis (Stein)}) \]
So, any \(f \in \mathcal{H}_K\) can be written as:
\[ f = g + \rho, \]
where:
\[ g(\cdot) = \sum_{i=1}^{N} \alpha_i K(x_i, \cdot) \in G, \quad \rho \in G^{\perp}. \]
\(\rho \in G^{\perp}\) directly means:
\[ \langle \rho, K(x_i, \cdot) \rangle = 0 \quad \forall i = 1, \dots, N, \]
so applying (d) shows that the criterion at \(f\) is no smaller than at its projection \(g \in G\). Hence the minimum is attained by a function of the form \(\sum_{i=1}^{N} \alpha_i K(x_i, \cdot)\), which completes the proof.
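cf) A finite-dimensional sanity check of (a)-(d): treat functions as vectors on an \(n\)-point domain, take \(L_2\)-orthonormal \(\phi_i\), and use this exercise's conventions \(K = \sum_i \phi_i \phi_i^T / r_i\), \(\langle \phi_i, \phi_j \rangle = r_i \delta_{ij}\). Everything below (grid size, sample indices) is a hypothetical toy setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40                                        # functions = vectors on n points
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # columns of Q: orthonormal phi_i
r = rng.uniform(0.5, 2.0, n)

Kmat = Q @ np.diag(1.0 / r) @ Q.T             # K(s,t) = sum_i phi_i(s)phi_i(t)/r_i

def ip(u, v):                                 # <u, v>_H = sum_i r_i c_i d_i
    cu, cv = Q.T @ u, Q.T @ v
    return float(np.sum(r * cu * cv))

f = rng.normal(size=n)                        # arbitrary element of H_K
s, t = 7, 23
assert np.isclose(ip(Kmat[:, s], f), f[s])                 # (a) reproducing
assert np.isclose(ip(Kmat[:, s], Kmat[:, t]), Kmat[s, t])  # (b)

idx = np.array([3, 11, 30])                   # sample points x_1, x_2, x_3
alpha = rng.normal(size=3)
g = Kmat[:, idx] @ alpha                      # g = sum_i alpha_i K(., x_i)
Kxx = Kmat[np.ix_(idx, idx)]
assert np.isclose(ip(g, g), alpha @ Kxx @ alpha)           # (c)

rho = rng.normal(size=n)
rho[idx] = 0.0                                # rho(x_i) = 0 <=> rho _|_ K(x_i, .)
assert all(np.isclose(ip(Kmat[:, i], rho), 0.0) for i in idx)
assert ip(g + rho, g + rho) >= ip(g, g)                    # (d): J increases
print("Ex. 5.15 (a)-(d) verified on a toy discrete RKHS")
```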
4. (Ex. 5.16) Consider a ridge regression problem with \(M \geq N\): \[\min_{\{c_j\}_{j=1}^{M}} \sum_{i=1}^{N} \left( y_i - \sum_{j=1}^{M} c_j \phi_j(x_i) \right)^2 + \lambda \sum_{j=1}^{M} \frac{c_j^2}{r_j}.\] Assume you have a kernel \(K\) that computes the inner product:\[K(x, y) = \sum_{m=1}^{M} h_m(x) h_m(y).\]
(a) Derive \(h(x) = V D_r^{\frac{1}{2}} \phi(x)\). How would you compute \(V\) and \(D_r\) given \(K\)? Hence, show that the above expression is equivalent to the following:
\[ \min_{\{\beta_m\}_{m=1}^{M}} \sum_{i=1}^{N} \left( y_i - \sum_{m=1}^{M} \beta_m h_m(x_i) \right)^2 + \lambda \sum_{m=1}^{M} \beta_m^2 \]
proof
1) \(h(x) = V D_r^{\frac{1}{2}} \phi(x)\)
Let \(h(x) = (h_1(x), \dots, h_M(x))^T\) and \(\phi(x) = (\phi_1(x), \dots, \phi_M(x))^T\).
\[ K(x, y) = \sum_{m=1}^{M} h_m(x) h_m(y) = \sum_{k=1}^{M} r_k \phi_k(x) \phi_k(y) \quad \text{by definition of } K \ (5.45) \]
Take the \(L_2\) inner product \((\cdot, \cdot)\) of both expansions with \(\phi_k\) in the \(x\) variable, using the \(L_2\)-orthonormality of the \(\phi_k\):
\[ (K(\cdot, y), \phi_k) = \sum_{m=1}^{M} (h_m, \phi_k)\, h_m(y) \]
\[ (K(\cdot, y), \phi_k) = \sum_{l=1}^{M} r_l \phi_l(y) (\phi_l, \phi_k) = r_k \phi_k(y) \]
Then:
\[ \sum_{m=1}^{M} (h_m, \phi_k)\, h_m(y) = r_k \phi_k(y) \quad (1) \]
Taking the \(L_2\) inner product of (1) with \(\phi_l\) in the \(y\) variable:
\[ r_k (\phi_k, \phi_l) = \sum_{m=1}^{M} (h_m, \phi_k)(h_m, \phi_l) := \sum_{m=1}^{M} q_{k, m} q_{l, m} \]
where:
\[ q_{k, m} = (h_m, \phi_k) \]
Note that:
\[ \sum_{m=1}^{M} q_{k, m} q_{l, m} = r_k \delta_{kl}, \quad \text{i.e., } r_k \text{ when } l = k \text{ and } 0 \text{ otherwise}. \]
Relabeling \(y\) as \(x\), (1) can be written as:
\[ \sum_{m=1}^{M} q_{k, m} h_m(x) = r_k \phi_k(x), \]
which is equivalent to:
\[ G_M h(x) = D_r \phi(x) \quad (2) \]
where:
\[ G_M = [q_{k, m}] \in \mathbb{R}^{M \times M} \quad \text{and} \quad D_r = \text{diag}(r_1, \dots, r_M). \]
Compute:
\[ [G_M G_M^T]_{k, l} = \sum_{m=1}^{M} [G_M]_{k, m} [G_M^T]_{m, l} = \sum_{m=1}^{M} q_{k, m} q_{l, m} = \begin{cases} r_k & (l = k) \\ 0 & (l \neq k) \end{cases} \]
Thus:
\[ G_M G_M^T = D_r. \]
Since \(r_k > 0\), \(D_r\) is non-singular, so \(G_M G_M^T = D_r\) shows \(G_M\) is non-singular with \(G_M^{-1} = G_M^T D_r^{-1}\). Then:
\[\begin{aligned} G_M h(x) &= D_r \phi(x) \quad \because (2) \\ \Leftrightarrow \quad h(x) &= G_M^{-1} D_r \phi(x) = G_M^T D_r^{-1} D_r \phi(x) = G_M^T \phi(x) \\ &= (G_M^T D_r^{-\frac{1}{2}}) D_r^{\frac{1}{2}} \phi(x) = V D_r^{\frac{1}{2}} \phi(x) \quad (\text{let } V := G_M^T D_r^{-\frac{1}{2}}) \end{aligned}\]
Moreover:
\[ V^T V = D_r^{-\frac{1}{2}} G_M G_M^T D_r^{-\frac{1}{2}} = D_r^{-\frac{1}{2}} D_r D_r^{-\frac{1}{2}} = I_M, \]
and since \(V\) is square this also gives \(V V^T = I_M\). This shows \(V\) is an \((M \times M)\) orthogonal matrix. Given \(K\), one can obtain the \(\phi_k\) and \(r_k\) from its eigen-expansion (5.45), hence \(D_r = \operatorname{diag}(r_1, \dots, r_M)\) and \(V = G_M^T D_r^{-\frac{1}{2}}\) with \([G_M]_{k, m} = (h_m, \phi_k)\).
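cf) The construction can be checked at the matrix level on a discrete \(n\)-point domain (a hypothetical toy setup): build \(h = V_0 D_r^{1/2} \phi\) from a known orthogonal \(V_0\), then recover \(V = G_M^T D_r^{-1/2}\) from the \(L_2\) inner products and confirm it reproduces \(h\).

```python
import numpy as np

rng = np.random.default_rng(4)
n = M = 5
Phi, _ = np.linalg.qr(rng.normal(size=(n, M)))  # columns: L2-orthonormal phi_k
r = rng.uniform(0.5, 2.0, M)
Dr_half = np.diag(np.sqrt(r))

V0, _ = np.linalg.qr(rng.normal(size=(M, M)))   # a "ground-truth" orthogonal V
H = Phi @ Dr_half @ V0.T                        # H[x, m] = h_m(x), h = V0 Dr^{1/2} phi
assert np.allclose(H @ H.T, Phi @ np.diag(r) @ Phi.T)  # both expansions of K

G = Phi.T @ H                                   # G[k, m] = (h_m, phi_k) = q_{k,m}
assert np.allclose(G @ G.T, np.diag(r))         # G_M G_M^T = D_r
V = G.T @ np.diag(1.0 / np.sqrt(r))             # V := G_M^T D_r^{-1/2}
assert np.allclose(V.T @ V, np.eye(M))          # V orthogonal
assert np.allclose(V, V0)                       # recovers the generating V
assert np.allclose(H.T, V @ Dr_half @ Phi.T)    # h(x) = V Dr^{1/2} phi(x)
print("h = V Dr^{1/2} phi verified")
```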
2) \(\min_{\{c_j\}_{j=1}^{M}} \sum_{i=1}^{N} \left( y_i - \sum_{j=1}^{M} c_j \phi_j(x_i) \right)^2 + \lambda \sum_{j=1}^{M} \frac{c_j^2}{r_j} \Leftrightarrow \min_{\{\beta_m\}_{m=1}^{M}} \sum_{i=1}^{N} \left( y_i - \sum_{m=1}^{M} \beta_m h_m(x_i) \right)^2 + \lambda \sum_{m=1}^{M} \beta_m^2\)
\[\begin{aligned} \min_{\beta} \sum_{i=1}^{N} (y_i - \beta^T h(x_i))^2 + \lambda \beta^T \beta &= \min_{\beta} \sum_{i=1}^{N} (y_i - \beta^T V D_r^{\frac{1}{2}} \phi(x_i))^2 + \lambda \beta^T \beta \\ &= \min_{c} \sum_{i=1}^{N} (y_i - c^T \phi(x_i))^2 + \lambda (V D_r^{-\frac{1}{2}} c)^T (V D_r^{-\frac{1}{2}} c) \\ & \qquad \Bigl(\text{letting } c := D_r^{\frac{1}{2}} V^T \beta \ \Rightarrow\ \beta = V D_r^{-\frac{1}{2}} c \quad (\text{orthogonal } V)\Bigr) \\ &= \min_{c} \sum_{i=1}^{N} (y_i - c^T \phi(x_i))^2 + \lambda c^T D_r^{-1} c \qquad (\because c^T D_r^{-\frac{1}{2}} V^T V D_r^{-\frac{1}{2}} c = c^T D_r^{-1} c) \\ &= \min_{\{c_j\}_{j=1}^{M}} \sum_{i=1}^{N} \left( y_i - \sum_{j=1}^{M} c_j \phi_j(x_i) \right)^2 + \lambda \sum_{j=1}^{M} \frac{c_j^2}{r_j}. \end{aligned}\]
Since \(\beta \mapsto c = D_r^{\frac{1}{2}} V^T \beta\) is a bijection on \(\mathbb{R}^M\), the two problems are equivalent; the same change of variables applies as \(M \to \infty\).
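cf) A numerical check of the equivalence (toy data, hypothetical names): both problems are solved in closed form and must give identical fitted values, with \(\hat{c} = D_r^{1/2} V^T \hat{\beta}\).

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, lam = 8, 8, 0.7
Phi = rng.normal(size=(N, M))                  # Phi[i, j] = phi_j(x_i)
r = rng.uniform(0.5, 2.0, M)
V, _ = np.linalg.qr(rng.normal(size=(M, M)))   # any orthogonal V
H = Phi @ np.diag(np.sqrt(r)) @ V.T            # H[i, m] = h_m(x_i)
y = rng.normal(size=N)

# beta-problem: min ||y - H b||^2 + lam ||b||^2
beta = np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ y)
# c-problem:    min ||y - Phi c||^2 + lam c^T Dr^{-1} c
c = np.linalg.solve(Phi.T @ Phi + lam * np.diag(1.0 / r), Phi.T @ y)

assert np.allclose(H @ beta, Phi @ c)                     # same fitted values
assert np.allclose(c, np.diag(np.sqrt(r)) @ V.T @ beta)   # c = Dr^{1/2} V^T beta
print("the two ridge problems are equivalent")
```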
(b) Show that:\[\hat{f} = H \hat{\beta} = K (K + \lambda I)^{-1} Y\]where \(H\) is the \(N \times M\) matrix with entries \([H]_{i,m} = h_m(x_i)\), and \(K = H H^T\) is the \(N \times N\) matrix of inner products \(h(x_i)^T h(x_j)\).
proof
\[ y_i - \beta^T h(x_i) = y_i - h(x_i)^T \beta = (Y - H \beta)_i \]
where \([H]_{i, m} = h_m(x_i)\), i.e., the \(i\)-th row of \(H\) is \(h(x_i)^T\), and \(Y = (y_1, \dots, y_N)^T\).
Let:
\[\begin{aligned} Q(\beta) &= \sum_{i=1}^{N} (y_i - \beta^T h(x_i))^2 + \lambda \beta^T \beta \\ &= (Y - H \beta)^T (Y - H \beta) + \lambda \beta^T \beta \\ &= Y^T Y - 2 Y^T H \beta + \beta^T H^T H \beta + \lambda \beta^T \beta \end{aligned}\]
Since \[ \frac{\partial}{\partial \beta} Q(\beta) = -2 H^T Y + 2 (H^T H + \lambda I) \beta, \]
minimizer \(\hat{\beta}\) satisfies:
\[ H^T Y - (H^T H + \lambda I) \hat{\beta} = 0 \]
and:
\[ \hat{\beta} = (H^T H + \lambda I)^{-1} H^T Y \in \mathbb{R}^M \]
Note that:
\[ x^T (H^T H + \lambda I) x = \| H x \|_2^2 + \lambda \| x \|_2^2 > 0 \]
for nonzero \(x \in \mathbb{R}^M\). So \(H^T H + \lambda I\) is positive definite and hence non-singular.
Then:
\[ \hat{f}(x_i) = \sum_{m=1}^{M} \hat{\beta}_m h_m(x_i) = h(x_i)^T \hat{\beta} \quad (3) \]
and:
\[ \hat{f} = (\hat{f}(x_1), \dots, \hat{f}(x_N))^T = H \hat{\beta} = H (H^T H + \lambda I)^{-1} H^T Y \]
Rather than the Woodbury identity, the simpler "push-through" identity suffices: from \((H H^T + \lambda I_N) H = H (H^T H + \lambda I_M)\),
\[ H (H^T H + \lambda I_M)^{-1} = (H H^T + \lambda I_N)^{-1} H. \]
Then:
\[\begin{aligned} \hat{f} &= H (H^T H + \lambda I)^{-1} H^T Y \\ &= (H H^T + \lambda I)^{-1} H H^T Y \\ &= (K + \lambda I)^{-1} K Y \qquad (\text{let } K := H H^T) \\ &= K (K + \lambda I)^{-1} Y \qquad (\because K \text{ and } (K + \lambda I)^{-1} \text{ commute}). \end{aligned}\]
Finally, the entries of \(K\) are exactly the kernel evaluations:
\[ [K]_{i, j} = [H H^T]_{i, j} = \sum_{m=1}^{M} [H]_{i, m} [H^T]_{m, j} = \sum_{m=1}^{M} h_m(x_i) h_m(x_j) = h(x_i)^T h(x_j) = K(x_i, x_j). \]
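cf) A quick check of (b) with random data (\(M \geq N\), a hypothetical toy setup): the \(M\)-dimensional ridge solution and the \(N\)-dimensional kernel form must give the same fitted vector.

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, lam = 6, 10, 0.5                         # M >= N
H = rng.normal(size=(N, M))                    # rows: h(x_i)^T
Y = rng.normal(size=N)

beta = np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ Y)
Kmat = H @ H.T                                 # K[i, j] = h(x_i)^T h(x_j)
fhat = Kmat @ np.linalg.solve(Kmat + lam * np.eye(N), Y)
assert np.allclose(H @ beta, fhat)             # H beta = K (K + lam I)^{-1} Y
print("f-hat = K (K + lam I)^{-1} Y verified")
```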
(c) Show that: \[\hat{f}(x) = h(x)^T \hat{\beta} = \sum_{j=1}^{N} K(x, x_j) \hat{\alpha}_j\] \[\left( \hat{\alpha} = (K + \lambda I)^{-1} Y \in \mathbb{R}^N \right)\]
proof
We showed in (b) that:
\[ \hat{\beta} = (H^T H + \lambda I)^{-1} H^T Y. \]
By the push-through identity again, \(\hat{\beta} = H^T (H H^T + \lambda I)^{-1} Y = H^T \hat{\alpha}\) with \(\hat{\alpha} = (K + \lambda I)^{-1} Y\). Hence, for any \(x\):
\[ \hat{f}(x) = h(x)^T \hat{\beta} = h(x)^T H^T \hat{\alpha} = \sum_{j=1}^{N} h(x)^T h(x_j)\, \hat{\alpha}_j = \sum_{j=1}^{N} K(x, x_j)\, \hat{\alpha}_j. \]
In particular, at the training points, \(\hat{f}(x_i) = (K \hat{\alpha})_i\), consistent with \(\hat{f} = K (K + \lambda I)^{-1} Y = K \hat{\alpha}\) from (b).
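cf) And a check of (c): \(\hat{\beta} = H^T \hat{\alpha}\), so predictions at a new point need only the kernel evaluations \(K(x, x_j)\) (toy sketch, hypothetical names).

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, lam = 6, 10, 0.5
H = rng.normal(size=(N, M))                    # rows: h(x_i)^T
Y = rng.normal(size=N)
Kmat = H @ H.T

alpha = np.linalg.solve(Kmat + lam * np.eye(N), Y)
beta = np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ Y)
assert np.allclose(beta, H.T @ alpha)          # beta = H^T alpha (push-through)

hx = rng.normal(size=M)                        # h(x) at a new point x
kx = H @ hx                                    # kx[j] = K(x, x_j) = h(x)^T h(x_j)
assert np.isclose(hx @ beta, kx @ alpha)       # f-hat(x) = sum_j K(x, x_j) alpha_j
print("f-hat(x) = sum_j K(x, x_j) alpha_j verified")
```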
(d) How would you modify your solution if \(M < N\)?
solution
No modification is needed: \(K + \lambda I = H H^T + \lambda I\) is positive definite whether \(M < N\) or not (\(\lambda > 0\)), even though \(K\) then has rank at most \(M\). So \(\hat{f}\) can still be written as:
\[ K (K + \lambda I)^{-1} Y. \]