From 0c3c534edae38f83079e5d45db9406c9bcc98926 Mon Sep 17 00:00:00 2001
From: Patrick Simianer
Date: Fri, 14 Oct 2011 15:48:58 +0200
Subject: test
---
dtrain/README.md | 34 ++++++++++++----------------------
1 file changed, 12 insertions(+), 22 deletions(-)
diff --git a/dtrain/README.md b/dtrain/README.md
index 71641bd8..d6699cb4 100644
--- a/dtrain/README.md
+++ b/dtrain/README.md
@@ -3,34 +3,24 @@ dtrain
Ideas
-----
-* *MULTIPARTITE* ranking (108010, 1 vs all, cluster modelscore;score)
-* what about RESCORING?
-* REMEMBER kbest (merge) weights?
-* SELECT iteration with highest (real) BLEU?
-* GENERATED data? (multi-task, ability to learn, perfect translation in nbest, at first all modelscore 1)
-* CACHING (ngrams for scoring)
-* hadoop PIPES implementation
-* SHARED LM (kenlm actually does this!)?
-* ITERATION variants
- * once -> average
- * shuffle resulting weights
-* weights AVERAGING in reducer (global Ngram counts)
-* BATCH implementation (no update after each Kbest list)
-* set REFERENCE for cdec (rescoring)?
-* MORE THAN ONE reference for BLEU?
-* kbest NICER (do not iterate twice)!? -> shared_ptr?
-* DO NOT USE Decoder::Decode (input caching as WordID)!?
-* sparse vector instead of vector