                cdec cfg './cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading ./nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
  Example feature: Shape_S00000_T00000
Seeding random number sequence to 3751911392

dtrain
Parameters:
                       k 100
                       N 4
                       T 3
                   batch 0
                  scorer 'fixed_stupid_bleu'
             sample from 'kbest'
                  filter 'uniq'
           learning rate 0.1
                   gamma 0
             loss margin 0
       faster perceptron 1
                   pairs 'XYX'
                   hi lo 0.1
          pair threshold 0
          select weights 'VOID'
                  l1 reg 0 'none'
                    pclr no
               max pairs 4294967295
                  repeat 1
                cdec cfg './cdec.ini'
                   input './nc-wmt11.gz'
                  output '-'
              stop_after 10
(a dot represents 10 inputs)
Iteration #1 of 3.
 . 10
Stopping after 10 input sentences.
WEIGHTS
              Glue = -110
       WordPenalty = -8.2082
     LanguageModel = -319.91
 LanguageModel_OOV = -19.2
     PhraseModel_0 = +312.82
     PhraseModel_1 = -161.02
     PhraseModel_2 = -433.65
     PhraseModel_3 = +291.03
     PhraseModel_4 = +252.32
     PhraseModel_5 = +50.6
     PhraseModel_6 = +146.7
       PassThrough = -38.7
        ---
       1best avg score: 0.16966 (+0.16966)
 1best avg model score: 29874 (+29874)
           avg # pairs: 906.3
        avg # rank err: 0 (meaningless)
     avg # margin viol: 0
       k-best loss imp: 100%
    non0 feature count: 832
           avg list sz: 91.3
           avg f count: 139.77
(time 0.35 min, 2.1 s/S)

Iteration #2 of 3.
 . 10
WEIGHTS
              Glue = -122.1
       WordPenalty = +83.689
     LanguageModel = +233.23
 LanguageModel_OOV = -145.1
     PhraseModel_0 = +150.72
     PhraseModel_1 = -272.84
     PhraseModel_2 = -418.36
     PhraseModel_3 = +181.63
     PhraseModel_4 = -289.47
     PhraseModel_5 = +140.3
     PhraseModel_6 = +3.5
       PassThrough = -109.7
        ---
       1best avg score: 0.17399 (+0.004325)
 1best avg model score: 4936.9 (-24937)
           avg # pairs: 662.4
        avg # rank err: 0 (meaningless)
     avg # margin viol: 0
       k-best loss imp: 100%
    non0 feature count: 1240
           avg list sz: 91.3
           avg f count: 125.11
(time 0.27 min, 1.6 s/S)

Iteration #3 of 3.
 . 10
WEIGHTS
              Glue = -157.4
       WordPenalty = -1.7372
     LanguageModel = +686.18
 LanguageModel_OOV = -399.7
     PhraseModel_0 = -39.876
     PhraseModel_1 = -341.96
     PhraseModel_2 = -318.67
     PhraseModel_3 = +105.08
     PhraseModel_4 = -290.27
     PhraseModel_5 = -48.6
     PhraseModel_6 = -43.6
       PassThrough = -298.5
        ---
       1best avg score: 0.30742 (+0.13343)
 1best avg model score: -15393 (-20329)
           avg # pairs: 623.8
        avg # rank err: 0 (meaningless)
     avg # margin viol: 0
       k-best loss imp: 100%
    non0 feature count: 1776
           avg list sz: 91.3
           avg f count: 118.58
(time 0.28 min, 1.7 s/S)

Writing weights file to '-' ...
done

---
Best iteration: 3 [SCORE 'fixed_stupid_bleu'=0.30742].
This took 0.9 min.