cdec cfg 'cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading ../example/nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Seeding random number sequence to 3121929377

dtrain
Parameters:
                       k 100
                       N 4
                       T 1
                  scorer 'stupid_bleu'
             sample from 'kbest'
                  filter 'uniq'
           learning rate 0.0001
                   gamma 0
             loss margin 1
                   pairs 'XYX'
                   hi lo 0.1
          pair threshold 0
          select weights 'last'
                  l1 reg 0 'none'
               max pairs 4294967295
                cdec cfg 'cdec.ini'
                   input 'work/shard.0.0.in'
                    refs 'work/shard.0.0.refs'
                  output 'work/weights.0.0'
(a dot represents 10 inputs)
Iteration #1 of 1.
 5
WEIGHTS
              Glue = +0.2663
       WordPenalty = -0.0079042
     LanguageModel = +0.44782
 LanguageModel_OOV = -0.0401
     PhraseModel_0 = -0.193
     PhraseModel_1 = +0.71321
     PhraseModel_2 = +0.85196
     PhraseModel_3 = -0.43986
     PhraseModel_4 = -0.44803
     PhraseModel_5 = -0.0538
     PhraseModel_6 = -0.1788
       PassThrough = -0.1477
        ---
       1best avg score: 0.17521 (+0.17521)
 1best avg model score: 21.556 (+21.556)
           avg # pairs: 1671.2
        avg # rank err: 1118.6
     avg # margin viol: 552.6
    non0 feature count: 12
           avg list sz: 100
           avg f count: 11.32
(time 0.37 min, 4.4 s/S)

Writing weights file to 'work/weights.0.0' ... done

---
Best iteration: 1 [SCORE 'stupid_bleu'=0.17521].
This took 0.36667 min.
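
For orientation, the "Parameters:" dump above mirrors what was set in the dtrain configuration for this run. The sketch below is an assumed ini-style reconstruction based only on the parameter names printed in the log, not a verbatim copy of the shipped config; the exact key spellings may differ in your cdec checkout, so treat it as illustrative.

```ini
# Hypothetical dtrain config sketch, reconstructed from the parameter dump above.
# Key names are assumptions; consult the example configs in your cdec checkout
# for the exact spellings.
decoder_config=cdec.ini   # 'cdec cfg' in the dump
k=100                     # size of the k-best list sampled per input
N=4                       # n-gram order used by the scorer
T=1                       # number of passes over the training data
scorer=stupid_bleu        # per-sentence BLEU approximation
sample_from=kbest
filter=uniq               # drop duplicate hypotheses from the list
learning_rate=0.0001
gamma=0
loss_margin=1
pair_sampling=XYX         # 'pairs' in the dump
hi_lo=0.1
pair_threshold=0
select_weights=last       # which epoch's weights to write out
l1_reg=none
input=work/shard.0.0.in
refs=work/shard.0.0.refs
output=work/weights.0.0
```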