                cdec cfg './cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading ./nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
  Example feature: Shape_S00000_T00000
Seeding random number sequence to 4049211323

dtrain
Parameters:
                       k 100
                       N 4
                       T 3
                  scorer 'fixed_stupid_bleu'
             sample from 'kbest'
                  filter 'uniq'
           learning rate 1
                   gamma 0
             loss margin 0
       faster perceptron 1
                   pairs 'XYX'
                   hi lo 0.1
          pair threshold 0
          select weights 'VOID'
                  l1 reg 0 'none'
                    pclr no
               max pairs 4294967295
                cdec cfg './cdec.ini'
                   input './nc-wmt11.de.gz'
                    refs './nc-wmt11.en.gz'
                  output '-'
              stop_after 10
(a dot represents 10 inputs)
Iteration #1 of 3.
 . 10
Stopping after 10 input sentences.
WEIGHTS
              Glue = -1100
       WordPenalty = -82.082
     LanguageModel = -3199.1
 LanguageModel_OOV = -192
     PhraseModel_0 = +3128.2
     PhraseModel_1 = -1610.2
     PhraseModel_2 = -4336.5
     PhraseModel_3 = +2910.3
     PhraseModel_4 = +2523.2
     PhraseModel_5 = +506
     PhraseModel_6 = +1467
       PassThrough = -387
        ---
       1best avg score: 0.16966 (+0.16966)
 1best avg model score: 2.9874e+05 (+2.9874e+05)
           avg # pairs: 906.3 (meaningless)
        avg # rank err: 906.3
     avg # margin viol: 0
    non0 feature count: 825
           avg list sz: 91.3
           avg f count: 139.77
(time 0.35 min, 2.1 s/S)

Iteration #2 of 3.
 . 10
WEIGHTS
              Glue = -1221
       WordPenalty = +836.89
     LanguageModel = +2332.3
 LanguageModel_OOV = -1451
     PhraseModel_0 = +1507.2
     PhraseModel_1 = -2728.4
     PhraseModel_2 = -4183.6
     PhraseModel_3 = +1816.3
     PhraseModel_4 = -2894.7
     PhraseModel_5 = +1403
     PhraseModel_6 = +35
       PassThrough = -1097
        ---
       1best avg score: 0.17399 (+0.004325)
 1best avg model score: 49369 (-2.4937e+05)
           avg # pairs: 662.4 (meaningless)
        avg # rank err: 662.4
     avg # margin viol: 0
    non0 feature count: 1235
           avg list sz: 91.3
           avg f count: 125.11
(time 0.27 min, 1.6 s/S)

Iteration #3 of 3.
 . 10
WEIGHTS
              Glue = -1574
       WordPenalty = -17.372
     LanguageModel = +6861.8
 LanguageModel_OOV = -3997
     PhraseModel_0 = -398.76
     PhraseModel_1 = -3419.6
     PhraseModel_2 = -3186.7
     PhraseModel_3 = +1050.8
     PhraseModel_4 = -2902.7
     PhraseModel_5 = -486
     PhraseModel_6 = -436
       PassThrough = -2985
        ---
       1best avg score: 0.30742 (+0.13343)
 1best avg model score: -1.5393e+05 (-2.0329e+05)
           avg # pairs: 623.8 (meaningless)
        avg # rank err: 623.8
     avg # margin viol: 0
    non0 feature count: 1770
           avg list sz: 91.3
           avg f count: 118.58
(time 0.25 min, 1.5 s/S)

Writing weights file to '-' ...
done

---
Best iteration: 3 [SCORE 'fixed_stupid_bleu'=0.30742].
This took 0.86667 min.
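
---
The log above has a fixed layout: a "Parameters:" block, one "Iteration #i of T." block per epoch that ends in a "1best avg score" line, and a closing "Best iteration" summary. As an illustration only (this script is not part of cdec/dtrain; it is a standalone sketch that merely reads a log in the format shown above), the per-iteration scores and the best iteration can be recovered like this:

#!/usr/bin/env python3
# Illustrative helper, not shipped with dtrain: summarize a log in the
# format shown above by listing each iteration's 1best avg score and
# reporting the best-scoring iteration.
import re
import sys

def summarize(lines):
    """Collect (iteration, 1best avg score) pairs from the log lines."""
    scores = []
    iteration = None
    for line in lines:
        line = line.strip()
        m = re.match(r"Iteration #(\d+) of \d+\.", line)
        if m:
            iteration = int(m.group(1))
            continue
        m = re.match(r"1best avg score: ([0-9.eE+-]+)", line)
        if m and iteration is not None:
            scores.append((iteration, float(m.group(1))))
    return scores

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        scores = summarize(f)
    for it, sc in scores:
        print("iteration %d: 1best avg score %.5f" % (it, sc))
    if scores:
        best_it, best_sc = max(scores, key=lambda p: p[1])
        print("best iteration: %d (score %.5f)" % (best_it, best_sc))

Run on the log above, it prints the three scores 0.16966, 0.17399 and 0.30742 and names iteration 3 as best with score 0.30742, matching the "Best iteration" line.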