Marginal Likelihood Computation
Recall Lemma 2

Under the same Gaussian HMM setup, with prior \(p(x) = \mathcal{N}(x; m, P)\) and likelihood \(p(y \mid x) = \mathcal{N}(y; Hx, R)\), we are interested in the marginal \(p(y)\):

\[ p(y) = \int p(y \mid x)\, p(x)\, dx \]

Then

\[ p(y) = \mathcal{N}(y; Hm, \, HPH^\top + R) \]
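This identity is easy to check numerically. Below is a minimal NumPy sketch, with all parameter values hypothetical, comparing Monte Carlo moments of \(y\) against the closed form from Lemma 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration (not from the text).
m = np.array([1.0, -0.5])                # prior mean of x
P = np.array([[0.5, 0.1], [0.1, 0.3]])   # prior covariance of x
H = np.array([[1.0, 0.0]])               # observation matrix
R = np.array([[0.2]])                    # observation noise covariance

# Monte Carlo estimate of the moments of p(y) = ∫ p(y | x) p(x) dx.
n = 200_000
xs = rng.multivariate_normal(m, P, size=n)
noise = rng.multivariate_normal(np.zeros(1), R, size=n)
ys = xs @ H.T + noise

print("MC mean:", ys.mean(axis=0), " closed form:", H @ m)
print("MC var: ", np.cov(ys, rowvar=False), " closed form:", H @ P @ H.T + R)
```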
Marginal Likelihood form
Consider the linear Gaussian HMM

\[\begin{align*} x_t &= \theta x_{t-1} + q_t, \qquad q_t \sim \mathcal{N}(0, Q), \\ y_t &= H x_t + r_t, \qquad\; r_t \sim \mathcal{N}(0, R), \end{align*}\]

with initial state \(x_0 \sim \mathcal{N}(m_0, P_0)\).
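To have something concrete to test against later, here is a small simulator for this model. The dimensions and parameter values are assumptions for illustration, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: 2-D latent state, 1-D observation.
theta = np.array([[0.9, 0.1],
                  [0.0, 0.8]])   # transition matrix θ
Q = 0.1 * np.eye(2)              # process noise covariance
H = np.array([[1.0, 0.0]])       # observation matrix
R = np.array([[0.5]])            # observation noise covariance
m0, P0 = np.zeros(2), np.eye(2)  # moments of the initial state x_0

def simulate(T):
    """Draw a trajectory x_{1:T} and observations y_{1:T} from the HMM."""
    x = rng.multivariate_normal(m0, P0)  # x_0
    xs, ys = [], []
    for _ in range(T):
        x = theta @ x + rng.multivariate_normal(np.zeros(2), Q)
        ys.append(H @ x + rng.multivariate_normal(np.zeros(1), R))
        xs.append(x)
    return np.array(xs), np.array(ys)

xs, ys = simulate(100)  # ys has shape (100, 1)
```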
We know the exact marginal likelihood \(p(y_{1:T})\) is given by the chain rule:

\[ p(y_{1:T}) = \prod_{t=1}^T p(y_t \mid y_{1:t-1}), \]

with the convention \(p(y_1 \mid y_{1:0}) = p(y_1)\).
We also note that, equivalently,

\[ p(y_t \mid y_{1:t-1}) = \int p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})\, dx_t, \]

where \(p(y_t \mid x_t) = \mathcal{N}(y_t; H x_t, R)\) is the observation model and \(p(x_t \mid y_{1:t-1})\) is the one-step-ahead predictive distribution of the state.
Since we have a fully Gaussian HMM setup, we can use the Kalman filter to compute \(p(x_t \mid y_{1:t-1}) = \mathcal{N}(x_t; \hat{m}_t, \hat{P}_t)\), where

\[\begin{align*} \hat{m}_t &= \theta m_{t-1}, \\ \hat{P}_t &= \theta P_{t-1} \theta^\top + Q, \end{align*}\]

and \(m_{t-1}, P_{t-1}\) are the filtering mean and covariance at time \(t-1\).
So that we have

\[ p(y_t \mid y_{1:t-1}) = \int \mathcal{N}(y_t; H x_t, R)\, \mathcal{N}(x_t; \hat{m}_t, \hat{P}_t)\, dx_t. \]

By Lemma 2, this is

\[ p(y_t \mid y_{1:t-1}) = \mathcal{N}(y_t; H \hat{m}_t, S_t), \]

where we let \(S_t = H \hat{P}_t H^\top + R\) for simplicity.
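For concreteness, a single term \(\log \mathcal{N}(y_t; H\hat{m}_t, S_t)\) can be evaluated with SciPy as below; every numeric value here is a placeholder.

```python
import numpy as np
from scipy.stats import multivariate_normal

H = np.array([[1.0, 0.0]])                  # observation matrix
R = np.array([[0.5]])                       # observation noise covariance
m_hat = np.array([0.3, -0.1])               # predicted mean \hat{m}_t
P_hat = np.array([[0.4, 0.0], [0.0, 0.2]])  # predicted covariance \hat{P}_t
y_t = np.array([0.5])                       # observation at time t

S = H @ P_hat @ H.T + R                     # innovation covariance S_t
log_term = multivariate_normal.logpdf(y_t, mean=H @ m_hat, cov=S)
```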
Kalman for marginal likelihood
The full algorithm for computing \(\log p(y_{1:T}) = \sum_{t=1}^T \log p(y_t \mid y_{1:t-1})\) is given below.
Input: initial mean and covariance \( m_0, P_0 \), and the sequence of observations \( y_{1:T} \).
Set the filtering moments at \(t = 0\) to the inputs \(m_0, P_0\).
Filtering:
For \( t = 1, \dots, T \) do

Prediction step:

\[\begin{align*} \hat{m}_t &= \theta m_{t-1} \\ \hat{P}_t &= \theta P_{t-1} \theta^\top + Q \end{align*}\]

Update step:

\[\begin{align*} S_t &= H \hat{P}_t H^\top + R \\ K_t &= \hat{P}_t H^\top S_t^{-1} \\ m_t &= \hat{m}_t + K_t (y_t - H \hat{m}_t) \\ P_t &= (I - K_t H) \hat{P}_t \end{align*}\]

End for
Return \( \hat{m}_{1:T}, S_{1:T}\)
And we output

\[ \log p(y_{1:T}) = \sum_{t=1}^T \log \mathcal{N}(y_t; H \hat{m}_t, S_t). \]
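Putting the pieces together, here is a minimal NumPy/SciPy sketch of the whole algorithm. The function name kalman_log_marginal is my choice, and it inverts \(S_t\) directly for readability; a Cholesky-based solve would be preferable numerically.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kalman_log_marginal(ys, theta, Q, H, R, m0, P0):
    """Return log p(y_{1:T}) for the linear Gaussian HMM via Kalman filtering.

    ys is an array of shape (T, d_y); the remaining arguments follow the
    notation of the algorithm above.
    """
    m, P = m0, P0
    I = np.eye(len(m0))
    log_lik = 0.0
    for y in ys:
        # Prediction step: p(x_t | y_{1:t-1}) = N(\hat{m}_t, \hat{P}_t).
        m_hat = theta @ m
        P_hat = theta @ P @ theta.T + Q
        # Accumulate the t-th term log N(y_t; H \hat{m}_t, S_t).
        S = H @ P_hat @ H.T + R
        log_lik += multivariate_normal.logpdf(y, mean=H @ m_hat, cov=S)
        # Update step: condition on y_t to get the filtering moments.
        K = P_hat @ H.T @ np.linalg.inv(S)
        m = m_hat + K @ (y - H @ m_hat)
        P = (I - K @ H) @ P_hat
    return log_lik
```

With the simulator from earlier, kalman_log_marginal(ys, theta, Q, H, R, m0, P0) evaluates the exact log marginal likelihood of the generated sequence.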