author     Patrick Simianer <p@simianer.de>   2012-03-13 09:15:46 +0100
committer  Patrick Simianer <p@simianer.de>   2012-03-13 09:15:46 +0100
commit     10a232656a0c882b3b955d2bcfac138ce11e8a2e (patch)
tree       134e2637908cd85b3548d68ac8590f3aad8d1c49
parent     e77078e31cd75f0e5983d332b990809a3644b0fb (diff)
polish
-rw-r--r--  dtrain/README.md                                    437
-rw-r--r--  dtrain/dtrain.cc                                     18
-rw-r--r--  dtrain/dtrain.h                                       2
-rwxr-xr-x  dtrain/hstreaming/avg.rb                              2
-rw-r--r--  dtrain/hstreaming/cdec.ini                            2
-rwxr-xr-x  dtrain/hstreaming/hadoop-streaming-job.sh            23
-rw-r--r--  dtrain/hstreaming/rule_count/red.rb                   2
-rw-r--r--  dtrain/hstreaming/rule_count/rulecount.rb             2
-rw-r--r--  dtrain/pairsampling.h                                 2
-rw-r--r--  dtrain/score.cc                                       4
-rw-r--r--  dtrain/test/example/cdec.ini                          8
-rw-r--r--  dtrain/test/example/dtrain.ini                       28
-rw-r--r--  dtrain/test/mtm11/logreg_cd/bin_class.cc (renamed from dtrain/test/logreg_cd/bin_class.cc)      0
-rw-r--r--  dtrain/test/mtm11/logreg_cd/bin_class.h (renamed from dtrain/test/logreg_cd/bin_class.h)        0
-rw-r--r--  dtrain/test/mtm11/logreg_cd/log_reg.cc (renamed from dtrain/test/logreg_cd/log_reg.cc)          0
-rw-r--r--  dtrain/test/mtm11/logreg_cd/log_reg.h (renamed from dtrain/test/logreg_cd/log_reg.h)            0
-rw-r--r--  dtrain/test/mtm11/mira_update/Hildreth.cpp (renamed from dtrain/test/mira_update/Hildreth.cpp)  0
-rw-r--r--  dtrain/test/mtm11/mira_update/Hildreth.h (renamed from dtrain/test/mira_update/Hildreth.h)      0
-rw-r--r--  dtrain/test/mtm11/mira_update/dtrain.cc (renamed from dtrain/test/mira_update/dtrain.cc)        0
-rw-r--r--  dtrain/test/mtm11/mira_update/sample.h (renamed from dtrain/test/mira_update/sample.h)          0
-rw-r--r--  dtrain/test/test.in                                   3
-rw-r--r--  dtrain/test/toy/dtrain.ini                           11
-rw-r--r--  dtrain/test/toy/in                                    2
-rw-r--r--  dtrain/test/toy/input                                 2
24 files changed, 93 insertions, 455 deletions
diff --git a/dtrain/README.md b/dtrain/README.md
index 91cf0704..c39d94d2 100644
--- a/dtrain/README.md
+++ b/dtrain/README.md
@@ -1,409 +1,40 @@
-dtrain
-======
+This is a fast (and parallelizable) tuning method for cdec,
+as described in:
+ "Joint Feature Selection in Distributed Stochastic
+ Learning for Large-Scale Discriminative Training in
+ SMT" (Simianer, Riezler, Dyer; ACL 2012)
-Build & run
------------
-build ..
-<pre>
-git clone git://github.com/qlt/cdec-dtrain.git
-cd cdec-dtrain
-autoreconf -if[v]
-./configure [--disable-gtest]
-make
-</pre>
-and run:
-<pre>
-cd dtrain/hstreaming/
-(edit ini files)
-edit the vars in hadoop-streaming-job.sh ($ID, $IN and $OUT)
-./hadoop-streaming-job.sh
-</pre>
-
-Ideas
------
-* *MULTIPARTITE* ranking (1 vs rest, cluster model/score)
-* *REMEMBER* sampled translations (merge kbest lists)
-* *SELECT* iteration with highest _real_ BLEU on devtest?
-* *SYNTHETIC* data? (perfect translation always in kbest)
-* *CACHE* ngrams for scoring
-* hadoop *PIPES* implementation
-* *ITERATION* variants (shuffle resulting weights, re-iterate)
-* *MORE THAN ONE* reference for BLEU, paraphrases?
-* *RANDOM RESTARTS* or random directions
-* use separate *TEST SET* for each shard
-* *REDUCE* training set (50k?)
-* *SYNTAX* features (CD)
-* distribute *DEV* set to all nodes, avg
-Notes
--------------------------------
-* cdec kbest vs 1best (no -k param), rescoring (ref?)? => ok(?)
-* no sparse vector in decoder => fixed/'ok'
-* PhraseModel features 0..99, mapping?
-* flex scanner jams on bad input, we could skip that
-* input/grammar caching (vector<string> -> vector<WordID>)
-* why loo grammars larger? are they? (sort psgs | uniq -> grammar)
-* lower beam size to be faster?
-* why is <unk> -100 in lm so good?
-* noise helps for discriminative training?
-* what does srilm do with -unk but nothing mapped to unk (<unk> unigram)?
- => this: http://www-speech.sri.com/pipermail/srilm-user/2007q4/000543.html
-* does AER correlate with BLEU? paper?
-* learning rate tuned with perceptron?
-* dtrain (perceptron) used for some tests because no optimizer instability
-* http://www.ark.cs.cmu.edu/cdyer/dtrain/
-* repeat as often as max needed by any learner!
-* don't compare lms (perplex.) with diff vocab (see stupid backoff paper)
-* what does mira/pro optimize exactly?
-* early stopping (epsilon, no change in kbest list)
-* 10-20k rules per sent are normal
-* giza vs. berkeleyaligner: giza more/less noise?
-* compound splitting -> more rules?
-* loo (jackknifing) => ref can't be reached?
-* prune singletons -> less noise? (do I do this?)
-* random sample: take fixed X at random
-* scale of features/weights?
-
-Features
+Building
--------
-* baseline features (take whatever cdec implements for VEST)
-* rule identifiers (feature name = rule as string)
-* rule discounts (taken from frequency i or frequency interval [i,j] of rule in extraction from parallel training data) bins
- => from PRO
-* target ngrams (from nonterminals in rule rhs), with gaps?
-* source-target unigrams (from word alignments used in rule extraction, if they are?)
-* lhs, rhs, rule length features
-* all other features depend on syntax annotation.
-* word alignment
-
-Todo
------------
-* merge dtrain part-X files, for better blocks (how to do this with 4.5tb ep)
-* mapred count shard sents
-* mapred stats for learning curve (output weights per iter for eval on devtest)
-* 250 forest sampling is real bad, bug?
-* metric reporter of bleu for each shard (reporters, status?)
- to draw learning curves for all shards in 1 plot
-* kenlm not portable (i7-2620M vs Intel(R) Xeon(R) CPU E5620 @ 2.40GHz)
-* mapred chaining? hamake?
-* make our sigtest work with cdec
-* l1l2 red (tsuroke)?
-* epsilon stopping criterion
-* normalize weight vector to get proper model scores for forest sampling
-* 108010 with gap(s), and/or fix (same score in diff groups)
-* 108010: combine model score + bleu
-* visualize weight vector
-* *100 runs stats
-* correlation of *_bleu to ibm_bleu
-* ep: open lm, cutoff @1
-* tune regs
-* 3x3 4x4 5x5 .. 10x10 until standard dev ok, moving avg
-* avg weight vector for dtrain? (mira non-avg)
-* repeat lm choose with mira/pro
-* shuffle training data
-* learning rate dynamic (Duh? Tsuroka?)
-* divide updates by ?
-* mira: 5/10/15, pro: (5)/10/20/30 (on devtest!)
-* sample pairs like in pro
-* mira forest sampling
-* platform specific (108010!)
-
-Data
-----
-<pre>
-nc-v6.de-en apegd
-nc-v6.de-en.loo apegd
-nc-v6.de-en.giza apegd
-nc-v6.de-en.giza.loo apegd
-nc-v6.de-en.cs.giza apegd
-nc-v6.de-en.cs.giza.loo apegd
-nv-v6.de-en.cs apegd
-nc-v6.de-en.cs.loo apegd
---
-ep-v6.de-en.cs apegd
-ep-v6.de-en.cs.loo apegd
-
-a: alignment:, p: prep, e: extract,
-g: grammar, d: dtrain
-</pre>
-
-Experiments
+dtrain is built along with cdec; see ../BUILDING
+
+Running
+-------
+To run this on a dev set locally (default):
+<code>
+#define DTRAIN_LOCAL
+</code>
+otherwise remove that line or #undef it. You need a single grammar file
+or per-sentence grammars (psg), as you would use them with cdec.
+Additionally, you need to give dtrain a file with
+references (--refs).
+
+The input for use with hadoop streaming looks like this:
+<code>
+<id>\t<source>\t<ref>\t<grammar rules separated by tab>
+</code>
+To convert a psg to this format, replace all newlines ("\n")
+with tabs ("\t"). Make sure there are no literal tabs in your data.
+
+For an example of local usage (with the 'distributed' input format),
+see test/example/ . This expects dtrain to be built without
+the DTRAIN_LOCAL define.
+
+Legal stuff
-----------
-[grammar stats
- oov on dev/devtest/test
- size
- #rules (uniq)
- time for building
- ep: 1.5 days on 278 slots (30 nodes)
- nc: ~2 hours ^^^
-
- lm stats
- oov on dev/devtest/test
- perplex on train/dev/devtest/test?]
-
-[0]
-which word alignment?
- berkeleyaligner
- giza++ as of Sep 24 2011, mgizapp 0.6.3
- --symgiza as of Oct 1 2011--
- ---
- NON LOO
- (symgiza unreliable)
- randomly sample 100 from train with loo
- run dtrain for 100 iterations
- w/o all other feats (lm, wp, ...) +Glue
- measure ibm bleu on exact same sents
- ep -> berkeleyaligner ??? (mb per sent, rules per sent)
-
-*100 -> triples, quadruples
-
-[1]
-lm?
- 3-4-5
- open
- unk
- nounk (-100 for unk)
- --
- lm oov weight pos? -100
- no tuning, -100 prob for unk EXPECT: nounk
- tuning with dtrain EXPECT: open
- =>
- lmtest on cs.giza.loo???
-
-[2]
-cs?
- 'default' weights
-
-[3]
-loo vs non-loo
- 'jackknifing'
- generalization (determ.!) on dev, test on devtest
-
-[4]
-stability
- all with default params
- mira: 100
- pro: 100
- vest: 100
- dtrain: 100
-
-[undecided]
-do we even need loo for ep?
-pro metaparam
- (max) iter
- regularization
- ???
-
-mira metaparam
- (max) iter: 10 (nc???) vs 15 (ep???)
-
-features to try
- NgramFeatures -> target side ngrams
- RuleIdentityFeatures
- RuleNgramFeatures -> source side ngrams from rule
- RuleShape -> relative orientation of X's and terminals
- SpanFeatures -> http://www.cs.cmu.edu/~cdyer/wmt11-sysdesc.pdf
- ArityPenalty -> Arity=0 Arity=1 and Arity=2
-
----
-shard size: 500-2k
-iterations, re-iterate (shuffle w): 10
-gamma, eta
-SVM, perceptron
-reducer: avg (feats/shard), l1l2, active on all shards
-sentence sampling: forest
-pair sampling: all, rand, 108010 (sort), PRO
-out of domain test?
-
----
-variables to control
-
-[alignment]
-
-[lm]
-
-[vest]
-
-[mira]
-
-[dtrain]
-
-[pro]
-
-
---------
-In PRO, a continually growing list of candidates is maintained for
-each sentence by concatenating k-best lists from each decoding run,
-and the training pairs are sampled from them. This is done to ensure
-that the optimizer doesn't forget about bad places in the parameter
-space that it visited previously (since some training samples will be
-selected from that space). Something like your approach should work
-well though, provided you don't overfit to the sentence pair you're
-looking at in each iteration. So I guess the question is: what are you
-doing in step 2 exactly? A complete optimization? Taking one step? The
-other thing is, do you maintain n-best hypotheses from previous
-iterations?
-
---------
-good grammar? => ability to overfit
- berkeley vs giza
- not LOO
- NO optimizer instability
- 20+ iterations
- approx_bleu-4
- train on dev => test on dev
- train on devtest => test on devtest
- dev on dev better?
- devtest on devtest better?
- (train/test on loo? => lower!)
- (test on others => real bad)
-
-
-loo vs non-loo? => generalization
- (cs vs non-cs?)
- giza||berkeley
- LOO + non LOO
- 2 fold cross validation
- train on dev, test on devtest
- train on devtest, test on dev
- as above ^^^
-
-
- ---
-
-as PRO
- - UPDATES: perceptron
- - LEARNING RATE: 0.0005
- - GAMMA: -
- - #ITERATIONS: 30
- - SCORER: stupid_bleu@4
- - K: 100, 1500?(top X pairs)
- - SAMPLE: kbest uniq, kbest no
- - PAIR SAMPLING: all, PRO?TODO
- - SELECT: best
- - FEATURES: baseline, RuleShape+SpanFeatures
- ---
- - Note: no weight interpolation
- no early stopping based on kbest lists (epsilon?TODO)
-
-dtrain tune reg
- - updates: SVM
- - pair sampling important!
- - learning_rate= 100 50 10 5 1 0.5 0.1 0.05 0.01 0.005 0.001 0.0005 0.0001 0.00005 0.00001 0.000005 0.000001 0.0000005 0.0000001 0.0000000001
-
- - gamma=
-
- - scorer: stupid_bleu 3
- - test weights: last
- -
- -
- - test: devtest
-
-
----
-weights visualization (blocks, color coded)
-zig zag!?
-repeat all basic exps with training set
-merge?
-
-
-
-
---sample_from
---k
---filter
---pair_sampling
---N
---epochs
---scorer
---learning_rate
---gamma
---select_weights
-[--unit_weight_vector]
-[--l1_reg]
-[--l1_reg_strength]
-
----------
-corr best = really best?
-108010gaps
-
-coltrane: 9
-gillespie: 9
-staley: 2
-io: 6
-ioh: 4
- slots
-
-
-when does overfitting begin?
----
-Variables
- k 100..1500 higher better
- N 3/4
- learning rate
- reg/gamma
- epochs -> best on devtest (10..30) (select_weights)
- scorer -> approx_bleu correlates ok (stupid bleu, bleu, smooth bleu)
- sample from -> kbest | forest
- filter -> no uniq (kbest)
- pair sampling -> all 5050 108010 PRO alld
- update_ok -> update towards correctly ranked
- features
- 6x tm
- 2x lm
- wp
- Glue
- rule ids
- rule ngrams
- rule shape
- span features
-
-
-PRO
- k = 1500
- N = 4
- learning rate = 0.0005
- gamma = 0
- epochs = 30
- scorer = stupid bleu (Bleu+1)
- sample from = kbest
- filter = no
- pair sampling = PRO
- update_ok
- features = base
-
-cur:
- shard_sz 500 1k 3k
- PRO with forest sampling
- PRO w/o update_ok
- tune learning rate
- all with discard (not only top 50)
- filter kbest uniq?
-
- -> repeat most on Tset, lXlX stuff
- -> PRO approx bleu
- -> tune gamma
- -> best pair sampling method
- -> reduce k?
- => scorer => approx_bleu (test w PRO)
- -> PRO on training set
- -> PRO more features
- -> discard + 108010
-
-
-
---
-forest vs kbest count vocab?
-108010 select discard
-approx bleu
-
-
+Copyright (c) 2012 by Patrick Simianer <p@simianer.de>
+See the file ../LICENSE.txt for the licensing terms that this software is
+released under.
----
-re-iterate ruleids
-r_
-10s
-p30
-stopwords
-gillespie wtf
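The new README describes the hadoop streaming input format (<id>\t<source>\t<ref>\t<grammar rules separated by tab>) and the newline-to-tab conversion of per-sentence grammars. Below is a minimal Ruby sketch of that conversion; the file names (source.txt, refs.txt, grammars/grammar.<id>) and the one-rule-per-line grammar layout are assumptions for illustration, not part of this commit.
<pre>
#!/usr/bin/env ruby
# sketch: emit dtrain's streaming input
#   <id>\t<source>\t<ref>\t<rule>\t<rule>\t...
# source.txt / refs.txt: one sentence per line (assumed names)
# grammars/grammar.<id>: one SCFG rule per line (assumed layout)

srcs = File.readlines('source.txt').map(&:chomp)
refs = File.readlines('refs.txt').map(&:chomp)

srcs.each_with_index do |src, i|
  rules = File.readlines("grammars/grammar.#{i}").map(&:chomp)
  # the grammar's newlines become tabs; the data itself must not contain tabs
  puts([i, src, refs[i], *rules].join("\t"))
end
</pre>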
diff --git a/dtrain/dtrain.cc b/dtrain/dtrain.cc
index 3111ce5d..fb6c6880 100644
--- a/dtrain/dtrain.cc
+++ b/dtrain/dtrain.cc
@@ -376,15 +376,16 @@ main(int argc, char** argv)
vector<ScoredHyp>* samples = observer->GetSamples();
if (verbose) {
- cerr << "--- ref for " << ii << " ";
+ cerr << "--- ref for " << ii << ": ";
if (t > 0) printWordIDVec(ref_ids_buf[ii]);
else printWordIDVec(ref_ids);
+ cerr << endl;
for (unsigned u = 0; u < samples->size(); u++) {
cerr << _p5 << _np << "[" << u << ". '";
printWordIDVec((*samples)[u].w);
cerr << "'" << endl;
- cerr << "SCORE=" << (*samples)[0].score << ",model="<< (*samples)[0].model << endl;
- cerr << "F{" << (*samples)[0].f << "} ]" << endl << endl;
+ cerr << "SCORE=" << (*samples)[u].score << ",model="<< (*samples)[u].model << endl;
+ cerr << "F{" << (*samples)[u].f << "} ]" << endl << endl;
}
}
@@ -434,11 +435,7 @@ main(int argc, char** argv)
}
}
- ////////
- // TEST THIS
- // reset cumulative_penalties after 1 iter?
- // do this only once per INPUT (not per pair)
-if (false) {
+ // l1 regularization
if (l1naive) {
for (unsigned d = 0; d < lambdas.size(); d++) {
weight_t v = lambdas.get(d);
@@ -471,9 +468,8 @@ if (false) {
}
}
}
+
}
-}
- ////////
if (rescale) lambdas /= lambdas.l2norm();
@@ -523,7 +519,7 @@ if (false) {
if (!quiet || hstreaming) nonz = (unsigned)lambdas.size_nonzero();
if (!quiet) {
- cerr << _p5 << _p << "WEIGHTS" << endl;
+ cerr << _p9 << _p << "WEIGHTS" << endl;
for (vector<string>::iterator it = print_weights.begin(); it != print_weights.end(); it++) {
cerr << setw(18) << *it << " = " << lambdas.get(FD::Convert(*it)) << endl;
}
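For context on the "// l1 regularization" block this hunk re-enables: a common 'naive' l1 penalty (cf. Tsuruoka et al. '09, referred to in the repository's old notes as cumulative penalties / l1l2) moves every nonzero weight towards zero by the regularization strength gamma after an update. Whether l1naive does exactly this is not visible in the hunk, so the formula below is a hedged reading, not the code's definition:
<pre>
\lambda_d \leftarrow \lambda_d - \gamma \,\mathrm{sgn}(\lambda_d)
    \qquad \text{for all features } d \text{ with } \lambda_d \neq 0
</pre>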
diff --git a/dtrain/dtrain.h b/dtrain/dtrain.h
index 783aa179..59ceb6f6 100644
--- a/dtrain/dtrain.h
+++ b/dtrain/dtrain.h
@@ -13,7 +13,7 @@
#include "filelib.h"
-#define DTRAIN_LOCAL
+//#define DTRAIN_LOCAL
#define DTRAIN_DOTS 10 // when to display a '.'
#define DTRAIN_GRAMMAR_DELIM "########EOS########"
diff --git a/dtrain/hstreaming/avg.rb b/dtrain/hstreaming/avg.rb
index e0899144..91d4e29a 100755
--- a/dtrain/hstreaming/avg.rb
+++ b/dtrain/hstreaming/avg.rb
@@ -1,4 +1,4 @@
-# avg.rb
+#!/usr/bin/env ruby
shard_count_key = "__SHARD_COUNT__"
diff --git a/dtrain/hstreaming/cdec.ini b/dtrain/hstreaming/cdec.ini
index ce1e1ae2..61f13e86 100644
--- a/dtrain/hstreaming/cdec.ini
+++ b/dtrain/hstreaming/cdec.ini
@@ -4,7 +4,7 @@ scfg_max_span_limit=15
intersection_strategy=cube_pruning
cubepruning_pop_limit=200
feature_function=WordPenalty
-feature_function=KLanguageModel test/example/nc-wmt11.en.srilm.gz
+feature_function=KLanguageModel nc-wmt11.en.srilm.gz
#feature_function=ArityPenalty
#feature_function=CMR2008ReorderingFeatures
#feature_function=InputIndicator
diff --git a/dtrain/hstreaming/hadoop-streaming-job.sh b/dtrain/hstreaming/hadoop-streaming-job.sh
index 4c0238f3..90c2b790 100755
--- a/dtrain/hstreaming/hadoop-streaming-job.sh
+++ b/dtrain/hstreaming/hadoop-streaming-job.sh
@@ -1,26 +1,31 @@
-#!/bin/bash
+#!/bin/sh
-EXP=test
+EXP=a_simple_test
+# change these vars to fit your hadoop installation
HADOOP_HOME=/usr/lib/hadoop-0.20
JAR=contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar
HSTREAMING="$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/$JAR"
+# ^^^
- IN=nc-v6.de-en.cs.giza.loo/nc-v6.de-en.cs.giza.loo-dtrain1.sz2
-OUT=out/$EXP-weights
+ IN=input_on_hdfs
+OUT=output_weights_on_hdfs
+# you can remove the -reducer line if you want to
+# do feature selection/averaging locally (e.g. to
+# keep the weights of each iteration)
$HSTREAMING \
-mapper "dtrain.sh" \
- -reducer "red-avg.rb" \
+ -reducer "lplp.rb l2 select_k 100000" \
-input $IN \
-output $OUT \
-file dtrain.sh \
- -file red-avg.rb \
- -file ~/exp/cdec-dtrain-ro/dtrain/dtrain \
+ -file lplp.rb \
+ -file ../dtrain \
-file dtrain.ini \
-file cdec.ini \
- -file ~/exp/data/nc-v6.en.3.unk.probing.kenv5 \
- -jobconf mapred.reduce.tasks=1 \
+ -file ../test/example/nc-wmt11.en.srilm.gz \
+ -jobconf mapred.reduce.tasks=30 \
-jobconf mapred.max.map.failures.percent=0 \
-jobconf mapred.job.name="dtrain $EXP"
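The job now uses "lplp.rb l2 select_k 100000" as the reducer, i.e. the joint feature selection over shards from the ACL paper. lplp.rb itself is not shown in this commit's hunks; the Ruby below is only a rough sketch of the idea (rank features by their l2 norm across shards, keep the 100k largest, emit an averaged weight). The input format, the ARGV handling and the averaging over a shard count are assumptions.
<pre>
#!/usr/bin/env ruby
# sketch only, NOT the actual lplp.rb: l2-based feature selection reducer
# stdin lines are assumed to be "<feature>\t<weight>", one per shard and feature

k      = (ARGV[0] || 100_000).to_i
shards = (ARGV[1] || 1).to_f

sums    = Hash.new(0.0)
squares = Hash.new(0.0)
STDIN.each_line do |line|
  feat, w = line.chomp.split("\t")
  w = w.to_f
  sums[feat]    += w
  squares[feat] += w * w
end

# keep the k features with the largest l2 norm over shards, emit mean weights
squares.keys.sort_by { |f| -squares[f] }.first(k).each do |f|
  puts "#{f}\t#{sums[f] / shards}"
end
</pre>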
diff --git a/dtrain/hstreaming/rule_count/red.rb b/dtrain/hstreaming/rule_count/red.rb
index 8f9109cc..874ae7ac 100644
--- a/dtrain/hstreaming/rule_count/red.rb
+++ b/dtrain/hstreaming/rule_count/red.rb
@@ -1,3 +1,5 @@
+#!/usr/bin/env ruby
+
STDIN.set_encoding 'utf-8'
STDOUT.set_encoding 'utf-8'
diff --git a/dtrain/hstreaming/rule_count/rulecount.rb b/dtrain/hstreaming/rule_count/rulecount.rb
index 035bdf06..67361fa4 100644
--- a/dtrain/hstreaming/rule_count/rulecount.rb
+++ b/dtrain/hstreaming/rule_count/rulecount.rb
@@ -1,3 +1,5 @@
+#!/usr/bin/env ruby
+
STDIN.set_encoding 'utf-8'
STDOUT.set_encoding 'utf-8'
diff --git a/dtrain/pairsampling.h b/dtrain/pairsampling.h
index e866c8a0..1fc5b8a0 100644
--- a/dtrain/pairsampling.h
+++ b/dtrain/pairsampling.h
@@ -32,7 +32,7 @@ all_pairs(vector<ScoredHyp>* s, vector<pair<ScoredHyp,ScoredHyp> >& training, sc
* multipartite ranking
* sort by bleu
* compare top 10% to middle 80% and low 10%
- * 80% to low 10%
+ * and compare middle 80% to low 10%
*/
bool
_108010_cmp_hyp_by_score(ScoredHyp a, ScoredHyp b)
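The corrected comment describes the '108010' multipartite pair sampling. A rough Ruby sketch of that pairing scheme follows; the real implementation is the C++ in pairsampling.h, and the percentile rounding here is an assumption.
<pre>
# sketch of 10/80/10 ('108010') pair sampling: sort hypotheses by score,
# pair top 10% vs. middle 80% and low 10%, and middle 80% vs. low 10%
def pairs_108010(scored)               # scored: [[hyp, score], ...]
  return [] if scored.empty?
  sorted = scored.sort_by { |_, s| -s }
  cut    = [(sorted.size * 0.1).ceil, 1].max
  top    = sorted[0, cut]
  middle = sorted[cut...(sorted.size - cut)]
  low    = sorted[sorted.size - cut, cut]
  pairs  = []
  top.each    { |t| (middle + low).each { |o| pairs << [t, o] } }
  middle.each { |m| low.each            { |o| pairs << [m, o] } }
  pairs
end
</pre>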
diff --git a/dtrain/score.cc b/dtrain/score.cc
index f5e920a0..4cde638a 100644
--- a/dtrain/score.cc
+++ b/dtrain/score.cc
@@ -11,7 +11,7 @@ namespace dtrain
* of Machine Translation"
* (Papineni et al. '02)
*
- * NOTE: 0 if one n in {1..N} has 0 count
+ * NOTE: 0 if the count for any n \in {1..N} is 0
*/
score_t
BleuScorer::Bleu(NgramCounts& counts, const unsigned hyp_len, const unsigned ref_len)
@@ -96,6 +96,8 @@ SmoothBleuScorer::Score(vector<WordID>& hyp, vector<WordID>& ref,
* as in "Online Large-Margin Training of Syntactic
* and Structural Translation Features"
* (Chiang et al. '08)
+ *
+ * NOTE: needs some code in dtrain.cc
*/
score_t
ApproxBleuScorer::Score(vector<WordID>& hyp, vector<WordID>& ref,
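The NOTE added to the BleuScorer comment refers to plain BLEU as defined in Papineni et al. '02 (cited above): the geometric mean over the modified n-gram precisions makes the sentence-level score 0 as soon as a single n-gram count is 0, which is the motivation for the smoothed and 'stupid' BLEU variants used elsewhere in dtrain. For reference:
<pre>
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big),
  \qquad w_n = \tfrac{1}{N}, \qquad
\mathrm{BP} = \min\big(1,\; e^{\,1 - r/c}\big)
</pre>
with p_n the modified n-gram precisions, r the reference length and c the candidate length; if any p_n is 0, the log term diverges and the score is taken to be 0.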
diff --git a/dtrain/test/example/cdec.ini b/dtrain/test/example/cdec.ini
index ad958ca6..fe5ca759 100644
--- a/dtrain/test/example/cdec.ini
+++ b/dtrain/test/example/cdec.ini
@@ -5,6 +5,7 @@ intersection_strategy=cube_pruning
cubepruning_pop_limit=30
feature_function=WordPenalty
feature_function=KLanguageModel test/example/nc-wmt11.en.srilm.gz
+# all currently working feature functions for translation:
#feature_function=ArityPenalty
#feature_function=CMR2008ReorderingFeatures
#feature_function=Dwarf
@@ -14,9 +15,10 @@ feature_function=KLanguageModel test/example/nc-wmt11.en.srilm.gz
#feature_function=NgramFeatures
#feature_function=NonLatinCount
#feature_function=OutputIndicator
-#feature_function=RuleIdentityFeatures
-#feature_function=RuleNgramFeatures
-#feature_function=RuleShape
+feature_function=RuleIdentityFeatures
+feature_function=RuleNgramFeatures
+feature_function=RuleShape
#feature_function=SourceSpanSizeFeatures
#feature_function=SourceWordPenalty
#feature_function=SpanFeatures
+# ^^^ the features activated above were used in the ACL paper
diff --git a/dtrain/test/example/dtrain.ini b/dtrain/test/example/dtrain.ini
index ed1b7e5f..68173e11 100644
--- a/dtrain/test/example/dtrain.ini
+++ b/dtrain/test/example/dtrain.ini
@@ -1,20 +1,20 @@
input=test/example/nc-wmt11.1k.gz # use '-' for stdin
-output=w.gz # a weights file
-decoder_config=test/example/cdec.ini # a ini for cdec
+output=- # a weights file or stdout
+decoder_config=test/example/cdec.ini # ini for cdec
# these will be printed on each iteration
print_weights=Glue WordPenalty LanguageModel LanguageModel_OOV PhraseModel_0 PhraseModel_1 PhraseModel_2 PhraseModel_3 PhraseModel_4 PhraseModel_5 PhraseModel_6 PassThrough
tmp=/tmp
-stop_after=20
+stop_after=10 # stop iteration after 10 inputs
# interesting stuff
-epochs=1
-k=100
-N=4
-learning_rate=0.0001
-gamma=0.00001
-scorer=stupid_bleu
-sample_from=kbest
-filter=uniq
-pair_sampling=108010
-pair_threshold=0.01
-select_weights=last
+epochs=3 # run over input 3 times
+k=200 # use 200-best lists
+N=4 # optimize (approx) BLEU4
+learning_rate=0.0001 # learning rate
+gamma=0.00001 # use SVM reg
+scorer=stupid_bleu # use stupid BLEU+1 approx.
+sample_from=kbest # use kbest lists (as opposed to forest)
+filter=uniq # only uniq entries in kbest
+pair_sampling=108010 # top 10% vs. middle 80% and low 10%, middle 80% vs. low 10%
+pair_threshold=0 # minimum distance in BLEU
+select_weights=last # just output last weights
diff --git a/dtrain/test/logreg_cd/bin_class.cc b/dtrain/test/mtm11/logreg_cd/bin_class.cc
index 19bcde25..19bcde25 100644
--- a/dtrain/test/logreg_cd/bin_class.cc
+++ b/dtrain/test/mtm11/logreg_cd/bin_class.cc
diff --git a/dtrain/test/logreg_cd/bin_class.h b/dtrain/test/mtm11/logreg_cd/bin_class.h
index 3466109a..3466109a 100644
--- a/dtrain/test/logreg_cd/bin_class.h
+++ b/dtrain/test/mtm11/logreg_cd/bin_class.h
diff --git a/dtrain/test/logreg_cd/log_reg.cc b/dtrain/test/mtm11/logreg_cd/log_reg.cc
index ec2331fe..ec2331fe 100644
--- a/dtrain/test/logreg_cd/log_reg.cc
+++ b/dtrain/test/mtm11/logreg_cd/log_reg.cc
diff --git a/dtrain/test/logreg_cd/log_reg.h b/dtrain/test/mtm11/logreg_cd/log_reg.h
index ecc560b8..ecc560b8 100644
--- a/dtrain/test/logreg_cd/log_reg.h
+++ b/dtrain/test/mtm11/logreg_cd/log_reg.h
diff --git a/dtrain/test/mira_update/Hildreth.cpp b/dtrain/test/mtm11/mira_update/Hildreth.cpp
index 0e67eb15..0e67eb15 100644
--- a/dtrain/test/mira_update/Hildreth.cpp
+++ b/dtrain/test/mtm11/mira_update/Hildreth.cpp
diff --git a/dtrain/test/mira_update/Hildreth.h b/dtrain/test/mtm11/mira_update/Hildreth.h
index 8d791085..8d791085 100644
--- a/dtrain/test/mira_update/Hildreth.h
+++ b/dtrain/test/mtm11/mira_update/Hildreth.h
diff --git a/dtrain/test/mira_update/dtrain.cc b/dtrain/test/mtm11/mira_update/dtrain.cc
index 933417a4..933417a4 100644
--- a/dtrain/test/mira_update/dtrain.cc
+++ b/dtrain/test/mtm11/mira_update/dtrain.cc
diff --git a/dtrain/test/mira_update/sample.h b/dtrain/test/mtm11/mira_update/sample.h
index 5c331bba..5c331bba 100644
--- a/dtrain/test/mira_update/sample.h
+++ b/dtrain/test/mtm11/mira_update/sample.h
diff --git a/dtrain/test/test.in b/dtrain/test/test.in
deleted file mode 100644
index 4f53335e..00000000
--- a/dtrain/test/test.in
+++ /dev/null
@@ -1,3 +0,0 @@
-0 vorrichtung means [X] ||| vorrichtung ||| apparatus ||| LogP=0 ||| 0-0 __NEXT_RULE__ [X] ||| vorrichtung ||| means ||| LogP=-100 ||| 0-0
-1 Test test [X] ||| Test ||| test ||| LogP=0 ||| 0-0 __NEXT_RULE__ [X] ||| Test ||| xxx ||| LogP=-100 ||| 0-0
-2 kaputt broken
diff --git a/dtrain/test/toy/dtrain.ini b/dtrain/test/toy/dtrain.ini
index 3548bbb6..abf22b94 100644
--- a/dtrain/test/toy/dtrain.ini
+++ b/dtrain/test/toy/dtrain.ini
@@ -1,11 +1,12 @@
decoder_config=test/toy/cdec.ini
-input=test/toy/in
+input=test/toy/input
output=-
-print_weights=logp use_shell use_house PassThrough
-
+print_weights=logp shell_rule house_rule small_rule little_rule PassThrough
k=4
-N=3
-epochs=2
+N=4
+epochs=3
scorer=stupid_bleu
sample_from=kbest
filter=uniq
+pair_sampling=all
+learning_rate=1
diff --git a/dtrain/test/toy/in b/dtrain/test/toy/in
deleted file mode 100644
index d7b7d080..00000000
--- a/dtrain/test/toy/in
+++ /dev/null
@@ -1,2 +0,0 @@
-0 ich sah ein kleines haus i saw a little house [S] ||| [NP,1] [VP,2] ||| [1] [2] ||| logp=0 [NP] ||| ich ||| i ||| logp=0 [NP] ||| ein [NN,1] ||| a [1] ||| logp=0 [NN] ||| [JJ,1] haus ||| [1] house ||| logp=0 use_house=1 [NN] ||| [JJ,1] haus ||| [1] shell ||| logp=0 use_shell=1 [JJ] ||| kleines ||| small ||| logp=0 [JJ] ||| kleines ||| little ||| logp=0 [JJ] ||| grosses ||| big ||| logp=0 [JJ] ||| grosses ||| large ||| logp=0 [VP] ||| [V,1] [NP,2] ||| [1] [2] ||| logp=0 [V] ||| sah ||| saw ||| logp=0 [V] ||| fand ||| found ||| logp=0
-1 ich fand ein grosses haus i found a large house [S] ||| [NP,1] [VP,2] ||| [1] [2] ||| logp=0 [NP] ||| ich ||| i ||| logp=0 [NP] ||| ein [NN,1] ||| a [1] ||| logp=0 [NN] ||| [JJ,1] haus ||| [1] house ||| logp=0 use_house=1 [NN] ||| [JJ,1] haus ||| [1] shell ||| logp=0 use_shell=1 [JJ] ||| kleines ||| small ||| logp=0 [JJ] ||| kleines ||| little ||| logp=0 [JJ] ||| grosses ||| big ||| logp=0 [JJ] ||| grosses ||| large ||| logp=0 [VP] ||| [V,1] [NP,2] ||| [1] [2] ||| logp=0 [V] ||| sah ||| saw ||| logp=0 [V] ||| fand ||| found ||| logp=0
diff --git a/dtrain/test/toy/input b/dtrain/test/toy/input
new file mode 100644
index 00000000..4d10a9ea
--- /dev/null
+++ b/dtrain/test/toy/input
@@ -0,0 +1,2 @@
+0 ich sah ein kleines haus i saw a little house [S] ||| [NP,1] [VP,2] ||| [1] [2] ||| logp=0 [NP] ||| ich ||| i ||| logp=0 [NP] ||| ein [NN,1] ||| a [1] ||| logp=0 [NN] ||| [JJ,1] haus ||| [1] house ||| logp=0 house_rule=1 [NN] ||| [JJ,1] haus ||| [1] shell ||| logp=0 shell_rule=1 [JJ] ||| kleines ||| small ||| logp=0 small_rule=1 [JJ] ||| kleines ||| little ||| logp=0 little_rule=1 [JJ] ||| grosses ||| big ||| logp=0 [JJ] ||| grosses ||| large ||| logp=0 [VP] ||| [V,1] [NP,2] ||| [1] [2] ||| logp=0 [V] ||| sah ||| saw ||| logp=0 [V] ||| fand ||| found ||| logp=0
+1 ich fand ein kleines haus i found a little house [S] ||| [NP,1] [VP,2] ||| [1] [2] ||| logp=0 [NP] ||| ich ||| i ||| logp=0 [NP] ||| ein [NN,1] ||| a [1] ||| logp=0 [NN] ||| [JJ,1] haus ||| [1] house ||| logp=0 house_rule=1 [NN] ||| [JJ,1] haus ||| [1] shell ||| logp=0 shell_rule=1 [JJ] ||| kleines ||| small ||| logp=0 small_rule=1 [JJ] ||| kleines ||| little ||| logp=0 little_rule=1 [JJ] ||| grosses ||| big ||| logp=0 [JJ] ||| grosses ||| large ||| logp=0 [VP] ||| [V,1] [NP,2] ||| [1] [2] ||| logp=0 [V] ||| sah ||| saw ||| logp=0 [V] ||| fand ||| found ||| logp=0