 report/np_clustering.tex | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/report/np_clustering.tex b/report/np_clustering.tex
index 002877b5..17ff31a4 100644
--- a/report/np_clustering.tex
+++ b/report/np_clustering.tex
@@ -3,27 +3,28 @@
\chapter{Nonparametric Models}
-In this chapter we describe several closely related Bayesian nonparametric models for inducing categories in a synchronous context-free grammar. Our nonparametric models are variations on Latent Dirichlet Allocation (LDA) model of \cite{blei:2003}. Rather than modeling sentences (or sentence pairs), we assume that rule extraction heuristics determine the set of valid constituents and grammar rules, and so our task is only to determine the category labels. As discussed in the previous chapter, we make the critical assumption that each phrase (or pair), $\p$, can be clustered on the basis of the contexts it occurs in. We therefore define a generative model of a corpus that consists of collections of contexts (one context collection for each phrase pair type).
+In this chapter we describe a Bayesian nonparametric model for inducing categories in a synchronous context-free grammar. As discussed in Chapter~\ref{chapter:setup}, we hypothesize that each phrase pair, $\p$, can be clustered on the basis of the contexts it occurs in. Using this as our starting point, we define a generative model in which contexts are generated by the (latent) category type of the phrases they surround. In contrast to most prior work using Bayesian models for synchronous grammar induction \citep{blunsom:nips2008,blunsom:acl2009,zhang:2008}, we do not model parallel sentence pairs directly. Rather, we assume that our corpus is a \emph{collection of contexts} (grouped according to the phrases they occur with), where each context is conditionally independent of the others, given the type of the category it surrounds. The models used here are thus variations on the Latent Dirichlet Allocation (LDA) model of \cite{blei:2003}.
-\section{Model}
+In Section~\ref{sec:npmodel} we describe the basic structure of our nonparametric models as well as how inference is carried out.
-The high-level structure of our model is as follows: each observed phrase (pair), $\p$, consists of a finite mixture of categories, $\theta_{\p}$. The list of contexts $C_{\p}$ is generated as follows. A category type $z_i$ is drawn from $\theta_{\p}$, and this generates the observed context, $\textbf{c}_i$, according to a category-specific distribution over contexts types, $\phi_{z_i}$. Since we do not know the values of $\theta_{\p}$ and $\phi_z$, we place priors on the distributions, to reflect our prior beliefs about the shape these distributions should have and infer their values from the data we can observe. Specifically, our {\emph a priori} expectation is that both parameters will be relatively peaked, since each phrase, $\p$, should relatively unambiguous belong to particular category, and each category to generate a relatively small number of context strings, $\textbf{c}$.
+\section{Model}
+\label{sec:npmodel}
-To encode these prior beliefs, we make use of Pitman-Yor processes \citep{pitman:1997}, which can capture these intuitions and which have already been demonstrated to be particularly effective models for language \citep{teh:2006,goldwater:2006}.
+This section describes the details of the phrase clustering model. Each observed phrase (pair), $\p$, is characterized by a finite mixture of categories, $\theta_{\p}$. The collection of contexts for each phrase, $C_{\p}$, is generated as follows. A category type $z_i$ is drawn from $\theta_{\p}$, and this generates the observed context, $\textbf{c}_i$, according to a category-specific distribution over context types, $\phi_{z_i}$. Since we do not know the values of $\theta_{\p}$ and $\phi_z$, we place priors on these distributions to reflect our beliefs about the shape they should have, and we infer their values from the data we observe. Specifically, our \emph{a priori} expectation is that both distributions will be relatively peaked: each phrase, $\p$, should belong relatively unambiguously to a particular category, and each category should generate a relatively small number of context strings, $\textbf{c}$. To encode these intuitions, we make use of Pitman-Yor processes (PYPs) \citep{pitman:1997}, which have already been demonstrated to be particularly effective models for language \citep{teh:2006,goldwater:2006}.
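+For concreteness, write $\theta_{\p}(z)$ for the probability of category $z$ under the phrase's mixture and $\phi_z(\textbf{c})$ for the probability of context $\textbf{c}$ under category $z$ (notation used here only for exposition). The conditional independence assumptions above then imply that a phrase's collection of contexts has marginal likelihood
+\begin{align*}
+p(C_{\p} \mid \theta_{\p}, \phi) &= \prod_{\textbf{c}_i \in C_{\p}} \sum_{z} \theta_{\p}(z)\, \phi_z(\textbf{c}_i)
+\end{align*}
+\noindent which has exactly the form of the document likelihood in LDA, with phrases playing the role of documents and contexts the role of words.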
-Our models assume a fixed number of categories, $K$. The category type, $z \in \{ 1 , 2 , \ldots , K \}$, is generated from a PYP with a uniform base distribution:
+Our model assumes a fixed number of categories, $K$. The category type, $z \in \{ 1 , 2 , \ldots , K \}$, is generated from a PYP with a uniform base distribution:
\begin{align*}
z &| \p & \sim \theta_{\p} \\
\theta_{\p} &| a_{\p},b_{\p},K & \sim \textrm{PYP}(a_{\p},b_{\p},\frac{1}{K})
\end{align*}
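+For intuition about the role of the hyperparameters, it may help to recall the Chinese restaurant process representation of the PYP \citep{teh:2006}, in which $\theta_{\p}$ is integrated out. Reading $a_{\p}$ as the discount and $b_{\p}$ as the strength parameter, and writing $n_k$ for the number of category tokens of type $k$ drawn so far for $\p$, $t_k$ for the number of tables labelled $k$ in the corresponding restaurant, and $n$ and $T$ for their totals (this notation is purely illustrative), the predictive probability of the next category drawn for $\p$ is
+\begin{align*}
+p(z_{n+1} = k \mid z_1, \ldots, z_n) &= \frac{n_k - a_{\p} t_k + (b_{\p} + a_{\p} T)\frac{1}{K}}{n + b_{\p}}
+\end{align*}
+\noindent so that the discount $a_{\p}$ controls how much probability mass is reserved for categories not yet used with $\p$.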
-\noindent Alternatively, we used hierarchical PYP process which shares statistics about the use of categories across phrases:
+\noindent As an alternative, we also define a variant of the model with a hierarchical prior on the distribution over categories for each phrase. This prior shares statistics about category use across phrases and encourages a more peaked distribution over categories:
\begin{align*}
z &| \p & \sim \theta_{\p} \\
\theta_{\p} &| a_{\p},b_{\p} & \sim \textrm{PYP}(a_{\p},b_{\p},\theta_0) \\
\theta_0 &| a_0,b_0,K & \sim \textrm{PYP}(a_0,b_0,\frac{1}{K})
\end{align*}
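+Under this hierarchical prior, the $\frac{1}{K}$ base term in the predictive distribution sketched above is replaced by the corresponding predictive probability under $\theta_0$, which in turn backs off to $\frac{1}{K}$. Using analogous (again purely illustrative) top-level counts $n^0_k$, $t^0_k$, $n^0$ and $T^0$, where each phrase-level table contributes a single customer to the top-level restaurant, this term becomes
+\begin{align*}
+P_0(k) &= \frac{n^0_k - a_0 t^0_k + (b_0 + a_0 T^0)\frac{1}{K}}{n^0 + b_0}
+\end{align*}
+\noindent which is how statistics about category use are shared across phrases.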
-\noindent Each category $z$ token then generates the context $\textbf{c}_i$. We again model this using a PYP, which will tend to cluster commonly used contexts across phrases into a single category. Additionally, by using hierarchical PYPs, we can smooth highly specific contexts by backing off to less specific contexts (e.g., composed of fewer words or word classes).
+\noindent Having described how category labels are generated, we now turn to how contexts are generated from a category. We again model this process using a PYP. Not only does this model tend to favor solutions in which contexts that occur repeatedly across phrases are clustered into a single category, it also provides a natural way to do smoothing. Since many contexts will be observed only infrequently in the training data, proper smoothing is crucial. Specifically, we can smooth highly specific contexts by backing off to less specific contexts (e.g., composed of fewer words or word classes).
The most basic version of our model uses a uniform base distribution over contexts. This model was most useful when generating contexts consisting of a single word or word class (i.e., $\textbf{c}=c_{-1}c_1$) in either the source or target language on either side.
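+In the same notation as above, and purely as a sketch (the hyperparameter names $a_z$ and $b_z$ are illustrative), the context-generation stage of this basic version can be written as:
+\begin{align*}
+\textbf{c}_i &| z_i & \sim \phi_{z_i} \\
+\phi_z &| a_z,b_z & \sim \textrm{PYP}(a_z,b_z,P_{\textrm{ctx}})
+\end{align*}
+\noindent where $P_{\textrm{ctx}}$ is the uniform distribution over context types; the smoothed variants described above instead use a base distribution constructed from less specific versions of the context.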