author    philblunsom@gmail.com    2010-08-11 16:03:03 +0000
committer philblunsom@gmail.com    2010-08-11 16:03:03 +0000
commit 549a2bf240bc968e414b668b938db475b232cf91 (patch)
tree   981a8a354a5f4760ff7782e397d547ef27ac15c0 /report
parent cfa303b746be4d3625c62fa0234ffda71bd7617d (diff)
More intro...
git-svn-id: https://ws10smt.googlecode.com/svn/trunk@526 ec762483-ff6d-05da-a07a-a48fb63a330f
Diffstat (limited to 'report')
-rw-r--r-- report/introduction.tex | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/report/introduction.tex b/report/introduction.tex
index adcd15b0..21e0e907 100644
--- a/report/introduction.tex
+++ b/report/introduction.tex
@@ -115,6 +115,10 @@ We were able to show that each of these techniques could lead to faster decoding
Chapter \ref{chap:decoding} describes this work.
\paragraph{3) Discriminative training of labelled SCFG translation models}
+The third stream of the workshop focussed on implementing discriminative training algorithms for the labelled SCFG translation models produced by our unsupervised grammar induction algorithms.
+Though the existing MERT \cite{och02mert} training algorithm is directly applicable to these grammars, it does not allow us to optimise models with large numbers of fine-grained features extracted from the labels we have induced.
+To maximise the benefit from our induced grammars, we explored and implemented discriminative training algorithms capable of handling thousands, rather than tens, of features.
+The algorithms we explored were maximum expected BLEU \cite{smith,li} and MIRA \cite{chiang}.
Chapter \ref{chap:training} describes this work.
The remainder of this introductory chapter provides a formal definition of SCFGs and describes the language pairs that we experimented with.
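The large-feature training the added paragraph refers to can be illustrated with a MIRA-style update. The sketch below is not the workshop's implementation; it is a minimal single-hypothesis margin update over sparse feature dictionaries, with an assumed BLEU-difference loss and an illustrative aggressiveness parameter `C`.

```python
# Hedged sketch of a 1-best MIRA-style update over sparse features.
# Assumptions: `feats_oracle` is the feature vector of a high-BLEU (oracle)
# hypothesis, `feats_fear` that of the current model-best hypothesis, and
# `loss` their BLEU difference. All names are illustrative.

def mira_update(weights, feats_oracle, feats_fear, loss, C=0.01):
    """Move the weights toward the oracle hypothesis by a clipped step."""
    # Sparse feature difference between oracle and model-best hypothesis.
    diff = {k: feats_oracle.get(k, 0.0) - feats_fear.get(k, 0.0)
            for k in set(feats_oracle) | set(feats_fear)}
    norm_sq = sum(v * v for v in diff.values())
    if norm_sq == 0.0:
        return weights  # identical feature vectors: nothing to update
    # Current margin of the oracle over the model-best hypothesis.
    margin = sum(weights.get(k, 0.0) * v for k, v in diff.items())
    # Smallest step that closes the loss-augmented margin, clipped by C.
    eta = min(C, max(0.0, (loss - margin) / norm_sq))
    for k, v in diff.items():
        weights[k] = weights.get(k, 0.0) + eta * v
    return weights
```

Because each update touches only the features active in the two hypotheses, the cost per sentence scales with the number of active features rather than the total feature count, which is what makes thousands of fine-grained features practical.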