author    Patrick Simianer <p@simianer.de>    2013-06-24 17:45:32 +0200
committer Patrick Simianer <p@simianer.de>    2013-06-24 17:45:32 +0200
commit    5c4ef3f9206fd7e1ddfe252132582d854d62f0a4 (patch)
tree      19d59c00e9629669ed7bfff6ee431d49dee134af /training
parent    230d7667eac7a229d1c5809022b17c6137f67065 (diff)
documentation
Diffstat (limited to 'training')
-rw-r--r--  training/dtrain/README.md                          11
-rw-r--r--  training/dtrain/examples/parallelized/dtrain.ini    2
2 files changed, 11 insertions, 2 deletions
diff --git a/training/dtrain/README.md b/training/dtrain/README.md
index 2ab2f232..2bae6b48 100644
--- a/training/dtrain/README.md
+++ b/training/dtrain/README.md
@@ -17,6 +17,17 @@ To build only parts needed for dtrain do
cd training/dtrain/; make
```
+Ideas
+-----
+ * get approx_bleu to work?
+ * implement minibatches (Minibatch and Parallelization for Online Large Margin Structured Learning)
+ * learning rate 1/T? (a sketch combining this and the minibatch idea follows this diff)
+ * use an oracle? MIRA-like (model vs. BLEU), feature representation of the reference?
+ * implement lc_bleu properly
+ * merge kbest lists of previous epochs (as MERT does)
+ * "walk entire regularization path"
+ * rerank after each update?
+
Running
-------
See directories under test/ .
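
A minimal, hypothetical sketch of two of the ideas added above (minibatches plus a 1/T learning rate), assuming a perceptron-style update over sparse feature dicts; `minibatch_update`, its arguments, and the feature representation are illustrative only and are not dtrain's actual (C++) implementation:

```python
from collections import defaultdict

def minibatch_update(weights, minibatch, t):
    """Apply one averaged minibatch update, scaled by a 1/T learning rate.

    weights:   dict mapping feature name -> weight
    minibatch: list of (gold_features, predicted_features) sparse dicts
    t:         1-based count of updates made so far
    """
    eta = 1.0 / t  # learning rate decays as 1/T
    accum = defaultdict(float)
    for gold_feats, pred_feats in minibatch:
        # move towards the gold (or oracle) features ...
        for f, v in gold_feats.items():
            accum[f] += v
        # ... and away from the model's current prediction
        for f, v in pred_feats.items():
            accum[f] -= v
    # average over the minibatch so its size does not change the step magnitude
    for f, v in accum.items():
        weights[f] = weights.get(f, 0.0) + eta * v / len(minibatch)
    return weights

# Example: one tiny update on a single sentence pair.
w = minibatch_update({}, [({"Glue": 1.0}, {"Glue": 2.0})], t=1)
```

Averaging the per-example updates inside a minibatch, rather than applying them one by one, is the main point of the minibatch idea cited above; the 1/T schedule simply shrinks later steps.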
diff --git a/training/dtrain/examples/parallelized/dtrain.ini b/training/dtrain/examples/parallelized/dtrain.ini
index f19ef891..0b0932d6 100644
--- a/training/dtrain/examples/parallelized/dtrain.ini
+++ b/training/dtrain/examples/parallelized/dtrain.ini
@@ -11,6 +11,4 @@ pair_sampling=XYX
hi_lo=0.1
select_weights=last
print_weights=Glue WordPenalty LanguageModel LanguageModel_OOV PhraseModel_0 PhraseModel_1 PhraseModel_2 PhraseModel_3 PhraseModel_4 PhraseModel_5 PhraseModel_6 PassThrough
-# newer version of the grammar extractor use different feature names:
-#print_weights=Glue WordPenalty LanguageModel LanguageModel_OOV PhraseModel_0 PhraseModel_1 PhraseModel_2 PhraseModel_3 PhraseModel_4 PhraseModel_5 PhraseModel_6 PassThrough
decoder_config=cdec.ini
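
For context, a hypothetical reading (an assumption, not taken from dtrain's code) of the `pair_sampling=XYX` / `hi_lo=0.1` settings above: training pairs are built from the top and bottom `hi_lo` fraction of a BLEU-sorted k-best list. The helper name `xyx_pairs` is made up for illustration:

```python
def xyx_pairs(kbest, hi_lo=0.1):
    """kbest: hypotheses sorted by per-sentence BLEU, best first."""
    cut = max(1, int(len(kbest) * hi_lo))  # e.g. 10 entries of a 100-best list
    hi, lo = kbest[:cut], kbest[-cut:]     # best and worst slices
    # pair every high-scoring hypothesis with every low-scoring one
    return [(h, l) for h in hi for l in lo if h is not l]
```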