path: root/training/dtrain/examples/parallelized/work/out.0.2
author    Patrick Simianer <p@simianer.de>  2015-02-26 13:26:37 +0100
committer Patrick Simianer <p@simianer.de>  2015-02-26 13:26:37 +0100
commit    4223261682388944fe1b1cf31b9d51d88f9ad53b (patch)
tree      daf072c310d60b0386587bde5e554312f193b3b2 /training/dtrain/examples/parallelized/work/out.0.2
parent    2a37a7ad1b21ab54701de3b5b44dc4ea55a75307 (diff)
refactoring
Diffstat (limited to 'training/dtrain/examples/parallelized/work/out.0.2')
-rw-r--r--  training/dtrain/examples/parallelized/work/out.0.2  74
1 file changed, 26 insertions, 48 deletions
diff --git a/training/dtrain/examples/parallelized/work/out.0.2 b/training/dtrain/examples/parallelized/work/out.0.2
index fcecc7e1..9c4b110b 100644
--- a/training/dtrain/examples/parallelized/work/out.0.2
+++ b/training/dtrain/examples/parallelized/work/out.0.2
@@ -1,66 +1,44 @@
- cdec cfg 'cdec.ini'
Loading the LM will be faster if you build a binary file.
Reading ../standard/nc-wmt11.en.srilm.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
-Seeding random number sequence to 3693132895
-
dtrain
Parameters:
k 100
N 4
T 1
- batch 0
- scorer 'stupid_bleu'
- sample from 'kbest'
- filter 'uniq'
learning rate 0.0001
- gamma 0
- loss margin 1
- faster perceptron 0
- pairs 'XYX'
- hi lo 0.1
- pair threshold 0
- select weights 'last'
- l1 reg 0 'none'
- pclr no
- max pairs 4294967295
- repeat 1
- cdec cfg 'cdec.ini'
- input ''
+ error margin 1
+ l1 reg 0
+ decoder conf 'cdec.ini'
+ input 'work/shard.0.0.in'
output 'work/weights.0.2'
weights in 'work/weights.1'
-(a dot represents 10 inputs)
+(a dot per input)
Iteration #1 of 1.
- 3
+ .... 3
WEIGHTS
- Glue = -0.019275
- WordPenalty = +0.022192
- LanguageModel = +0.40688
- LanguageModel_OOV = -0.36397
- PhraseModel_0 = -0.36273
- PhraseModel_1 = +0.56432
- PhraseModel_2 = +0.85638
- PhraseModel_3 = -0.20222
- PhraseModel_4 = -0.48295
- PhraseModel_5 = +0.03145
- PhraseModel_6 = -0.26092
- PassThrough = -0.38122
+ Glue = -0.44422
+ WordPenalty = +0.1032
+ LanguageModel = +0.66474
+ LanguageModel_OOV = -0.62252
+ PhraseModel_0 = -0.59993
+ PhraseModel_1 = +0.78992
+ PhraseModel_2 = +1.3149
+ PhraseModel_3 = +0.21434
+ PhraseModel_4 = -1.0174
+ PhraseModel_5 = +0.02435
+ PhraseModel_6 = -0.18452
+ PassThrough = -0.65268
---
- 1best avg score: 0.18982 (+0.18982)
- 1best avg model score: 1.7096 (+1.7096)
- avg # pairs: 1524.3
- avg # rank err: 813.33
- avg # margin viol: 702.67
- k-best loss imp: 100%
- non0 feature count: 12
+ 1best avg score: 0.24722 (+0.24722)
+ 1best avg model score: 61.971
+ avg # pairs: 2017.7
+ non-0 feature count: 12
avg list sz: 100
- avg f count: 11.32
-(time 0.53 min, 11 s/S)
-
-Writing weights file to 'work/weights.0.2' ...
-done
+ avg f count: 10.42
+(time 0.3 min, 6 s/S)
---
-Best iteration: 1 [SCORE 'stupid_bleu'=0.18982].
-This took 0.53333 min.
+Best iteration: 1 [GOLD = 0.24722].
+This took 0.3 min.