From 0269777fc54bc554c12107bdd5498f743df2a1ce Mon Sep 17 00:00:00 2001
From: Patrick Simianer
Date: Thu, 8 Sep 2011 00:06:52 +0200
Subject: a lot of stuff, fast_sparse_vector, perceptron, removed sofia, sample
[...]
---
dtrain/README | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
(limited to 'dtrain/README')
diff --git a/dtrain/README b/dtrain/README
index 74bac6a0..b3f513be 100644
--- a/dtrain/README
+++ b/dtrain/README
@@ -1,7 +1,7 @@
NOTES
 learner gets all used features (binary! and dense: logprob is the sum of logprobs; see the sketch after this hunk)
weights: see decoder/decoder.cc line 548
- 40k sents, k=100 = ~400M mem, 1 iteration 45min
+ (40k sents, k=100 = ~400M mem, 1 iteration 45min)?
utils/weights.cc: why wv_?
FD, Weights::wv_ grow too large, see utils/weights.cc;
decoder/hg.h; decoder/scfg_translator.cc; utils/fdict.cc
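
A minimal sketch of the NOTES item on features above: binary indicator features (value 1) and the dense logprob feature (a sum of logprobs) share one sparse map from feature name to value. The FeatureVector type and all feature names here are illustrative assumptions, not cdec's real fast_sparse_vector or FD ids.

    // illustrative only: a sparse feature vector holding both binary
    // indicator features (value 1) and the dense LogProb feature,
    // whose value is the sum of the individual logprobs.
    // FeatureVector is a stand-in, not cdec's fast_sparse_vector.
    #include <iostream>
    #include <map>
    #include <string>

    typedef std::map<std::string, double> FeatureVector;

    int main() {
      FeatureVector f;
      f["Rule_S->NP_VP"] = 1.0;          // binary: fires or not
      f["PassThrough"]   = 1.0;
      f["LogProb"] = -2.3 + -0.7 + -1.1; // dense: sum of logprobs
      for (FeatureVector::const_iterator i = f.begin(); i != f.end(); ++i)
        std::cout << i->first << "=" << i->second << "\n";
      return 0;
    }
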
@@ -15,25 +15,26 @@ TODO
GENERATED data? (multi-task, ability to learn, perfect translation in nbest, at first all modelscore 1)
CACHING (ngrams for scoring)
 hadoop PIPES implementation
- SHARED LM?
+ SHARED LM (kenlm actually does this!)?
ITERATION variants
once -> average
shuffle resulting weights
 weights AVERAGING in reducer (global Ngram counts; see the sketch at the end)
BATCH implementation (no update after each Kbest list)
- SOFIA --eta_type explicit
set REFERENCE for cdec (rescoring)?
MORE THAN ONE reference for BLEU?
kbest NICER (do not iterate twice)!? -> shared_ptr?
DO NOT USE Decoder::Decode (input caching as WordID)!?
 sparse vector instead of vector<double>
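
A rough sketch of the averaging and batch TODO items above, assuming updates come from pairs of (hoped-for, feared) k-best feature vectors; the class name, update() signature, and learning rate are hypothetical, not dtrain's actual code. An averaged perceptron keeps a running sum of the weight vector after every update and returns sum/#updates at the end; a BATCH variant would call update() only after collecting all k-best lists of an iteration, instead of after each one.

    // hypothetical sketch: averaged perceptron over sparse weights.
    // gold/pred are the feature vectors of the hoped-for and feared
    // hypotheses from a k-best list; eta is the learning rate.
    #include <cstddef>
    #include <map>
    #include <string>

    typedef std::map<std::string, double> SparseVec;

    struct AvgPerceptron {
      SparseVec w, wsum; // current weights and their running sum
      std::size_t n;     // number of updates seen so far
      double eta;        // learning rate
      explicit AvgPerceptron(double eta) : n(0), eta(eta) {}

      // one perceptron step: move toward gold, away from pred,
      // then add the new w into the running sum
      void update(const SparseVec& gold, const SparseVec& pred) {
        for (SparseVec::const_iterator i = gold.begin(); i != gold.end(); ++i)
          w[i->first] += eta * i->second;
        for (SparseVec::const_iterator i = pred.begin(); i != pred.end(); ++i)
          w[i->first] -= eta * i->second;
        for (SparseVec::const_iterator i = w.begin(); i != w.end(); ++i)
          wsum[i->first] += i->second;
        ++n;
      }

      // 'once -> average': final weights are the mean of all
      // intermediate weight vectors (call only after >= 1 update)
      SparseVec average() const {
        SparseVec avg(wsum);
        for (SparseVec::iterator i = avg.begin(); i != avg.end(); ++i)
          i->second /= static_cast<double>(n);
        return avg;
      }
    };
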