                cdec cfg 'test/example/cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading test/example/nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
  Example feature: Shape_S00000_T00000
Seeding random number sequence to 2108658507

dtrain
Parameters:
                       k 100
                       N 4
                       T 3
                 scorer 'stupid_bleu'
             sample from 'kbest'
                  filter 'uniq'
           learning rate 0.0001
                   gamma 0
             loss margin 0
                   pairs 'XYX'
                   hi lo 0.1
          pair threshold 0
          select weights 'VOID'
                  l1 reg 0 'none'
               max pairs 4294967295
                cdec cfg 'test/example/cdec.ini'
                   input 'test/example/nc-wmt11.1k.gz'
                  output '-'
              stop_after 100
(a dot represents 10 inputs)
Iteration #1 of 3.
 .......... 100
Stopping after 100 input sentences.
WEIGHTS
              Glue = -0.236
       WordPenalty = +0.056111
     LanguageModel = +0.71011
 LanguageModel_OOV = -0.489
     PhraseModel_0 = -0.21332
     PhraseModel_1 = -0.13038
     PhraseModel_2 = +0.085148
     PhraseModel_3 = -0.16982
     PhraseModel_4 = -0.026332
     PhraseModel_5 = +0.2133
     PhraseModel_6 = +0.1002
       PassThrough = -0.5541
        ---
       1best avg score: 0.16928 (+0.16928)
 1best avg model score: 2.4454 (+2.4454)
           avg # pairs: 1616.2
        avg # rank err: 769.6
     avg # margin viol: 0
    non0 feature count: 4068
           avg list sz: 96.65
           avg f count: 118.01
(time 1.3 min, 0.79 s/S)

Iteration #2 of 3.
 .......... 100
WEIGHTS
              Glue = -0.1721
       WordPenalty = -0.14132
     LanguageModel = +0.56023
 LanguageModel_OOV = -0.6786
     PhraseModel_0 = +0.14155
     PhraseModel_1 = +0.34218
     PhraseModel_2 = +0.22954
     PhraseModel_3 = -0.24762
     PhraseModel_4 = -0.25848
     PhraseModel_5 = -0.0453
     PhraseModel_6 = -0.0264
       PassThrough = -0.7436
        ---
       1best avg score: 0.19585 (+0.02657)
 1best avg model score: -16.311 (-18.757)
           avg # pairs: 1475.8
        avg # rank err: 668.48
     avg # margin viol: 0
    non0 feature count: 6300
           avg list sz: 96.08
           avg f count: 114.92
(time 1.3 min, 0.76 s/S)

Iteration #3 of 3.
 .......... 100
WEIGHTS
              Glue = -0.1577
       WordPenalty = -0.086902
     LanguageModel = +0.30136
 LanguageModel_OOV = -0.7848
     PhraseModel_0 = +0.11743
     PhraseModel_1 = +0.11142
     PhraseModel_2 = -0.0053865
     PhraseModel_3 = -0.18731
     PhraseModel_4 = -0.67144
     PhraseModel_5 = +0.1236
     PhraseModel_6 = -0.2665
       PassThrough = -0.8498
        ---
       1best avg score: 0.20034 (+0.0044978)
 1best avg model score: -7.2775 (+9.0336)
           avg # pairs: 1578.6
        avg # rank err: 705.77
     avg # margin viol: 0
    non0 feature count: 7313
           avg list sz: 96.84
           avg f count: 124.48
(time 1.5 min, 0.9 s/S)

Writing weights file to '-' ...
done

---
Best iteration: 3 [SCORE 'stupid_bleu'=0.20034].
This took 4.0833 min.