cdec cfg 'cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading ../standard//nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Seeding random number sequence to 4126799437

dtrain
Parameters:
                       k 100
                       N 4
                       T 1
                  scorer 'stupid_bleu'
             sample from 'kbest'
                  filter 'uniq'
           learning rate 0.0001
                   gamma 0
             loss margin 1
       faster perceptron 0
                   pairs 'XYX'
                   hi lo 0.1
          pair threshold 0
          select weights 'last'
                  l1 reg 0 'none'
               max pairs 4294967295
                cdec cfg 'cdec.ini'
                   input 'work/shard.1.0.in'
                    refs 'work/shard.1.0.refs'
                  output 'work/weights.1.0'
(a dot represents 10 inputs)
Iteration #1 of 1.
  5
WEIGHTS
              Glue = -0.3815
       WordPenalty = +0.20064
     LanguageModel = +0.95304
 LanguageModel_OOV = -0.264
     PhraseModel_0 = -0.22362
     PhraseModel_1 = +0.12254
     PhraseModel_2 = +0.26328
     PhraseModel_3 = +0.38018
     PhraseModel_4 = -0.48654
     PhraseModel_5 = +0
     PhraseModel_6 = -0.3645
       PassThrough = -0.2216
        ---
       1best avg score: 0.10863 (+0.10863)
 1best avg model score: -4.9841 (-4.9841)
           avg # pairs: 1345.4
        avg # rank err: 822.4
     avg # margin viol: 501
    non0 feature count: 11
           avg list sz: 100
           avg f count: 11.814
(time 0.43 min, 5.2 s/S)

Writing weights file to 'work/weights.1.0' ...
done

---
Best iteration: 1 [SCORE 'stupid_bleu'=0.10863].
This took 0.43333 min.
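
The per-sentence metric named in the parameter block (scorer 'stupid_bleu', with N 4 as the maximum n-gram order) is what orders the hypotheses in each k-best list and produces the "1best avg score" line above. As a rough illustration only, here is a generic add-one-smoothed sentence-level BLEU in Python; it is a stand-in for, not a copy of, dtrain's stupid_bleu scorer, and the function name and smoothing details are assumptions.

    import math
    from collections import Counter

    def smoothed_sentence_bleu(hyp, ref, n_max=4):
        # Generic add-one-smoothed sentence-level BLEU (smoothing for n >= 2).
        # Stand-in for dtrain's 'stupid_bleu'; the exact smoothing there may differ.
        def ngrams(tokens, n):
            return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        log_prec_sum = 0.0
        for n in range(1, n_max + 1):
            hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
            matches = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            total = max(len(hyp) - n + 1, 0)
            add = 0.0 if n == 1 else 1.0      # add-one smoothing for higher orders
            if matches + add == 0 or total + add == 0:
                return 0.0                    # no unsmoothed 1-gram match -> score 0
            log_prec_sum += math.log((matches + add) / (total + add))
        brevity = min(1.0, math.exp(1.0 - len(ref) / max(len(hyp), 1)))
        return brevity * math.exp(log_prec_sum / n_max)

    print(smoothed_sentence_bleu("the house is small".split(),
                                 "the house is little".split()))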
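
The pair statistics ("avg # pairs", "avg # rank err", "avg # margin viol") refer to the pairwise ranking updates made on each k-best list: hypothesis pairs are ordered by the metric, and the weights are nudged whenever the model disagrees with that order or agrees by less than the configured loss margin. The sketch below only illustrates that kind of update; it is not dtrain's code, and the data layout, the disjoint counting of rank errors versus margin violations, and all names are assumptions.

    def pairwise_margin_update(weights, pairs, eta=0.0001, loss_margin=1.0):
        # weights: feature name -> value; pairs: (better_feats, worse_feats) dicts,
        # where "better" means better under the per-sentence metric (stupid_bleu here).
        rank_errors = 0
        margin_violations = 0
        for better, worse in pairs:
            diff = {f: better.get(f, 0.0) - worse.get(f, 0.0)
                    for f in set(better) | set(worse)}
            margin = sum(weights.get(f, 0.0) * v for f, v in diff.items())
            if margin <= 0.0:
                rank_errors += 1          # model prefers the metric-worse hypothesis
            elif margin < loss_margin:
                margin_violations += 1    # right order, but not by enough
            if margin < loss_margin:      # perceptron step along the feature difference
                for f, v in diff.items():
                    weights[f] = weights.get(f, 0.0) + eta * v
        return rank_errors, margin_violations

    w = {}
    pairs = [({"Glue": 1.0, "LanguageModel": -2.0},    # metric-better hypothesis
              {"Glue": 2.0, "LanguageModel": -5.0})]   # metric-worse hypothesis
    print(pairwise_margin_update(w, pairs), w)

With the learning rate 0.0001 and loss margin 1 from the run above, each violated pair moves the weights a small step towards ranking the metric-better hypothesis higher.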