Speech recognition: scaling factor too small when training on the an4 database

Tags: speech-recognition, cmusphinx

I am running into a problem while training CMU's SphinxTrain on the an4 database.

I face two problems during training, with the executables crashing: "agg_seg.exe has stopped working" and "bw.exe has stopped working".

I have set CFG_CD_TRAIN to no.
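
For reference, this is how the setting appears in my sphinx_train.cfg (the full file is attached after the comments below); per the config's own comment it disables context-dependent training, so only CI models are built:

# (yes/no) Train context-dependent models
$CFG_CD_TRAIN = 'no';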

I see the following in the logs:

WARN: "kmeans.c", line 433: Empty cluster 251
WARN: "kmeans.c", line 433: Empty cluster 252
WARN: "kmeans.c", line 433: Empty cluster 253
WARN: "kmeans.c", line 433: Empty cluster 254
WARN: "kmeans.c", line 433: Empty cluster 255
INFO: main.c(613):  -> Aborting k-means, bad initialization
INFO: kmeans.c(155): km iter [0] 1.000000e+000 ..

INFO: main.c(613):  -> Aborting k-means, bad initialization
INFO: main.c(622):  best-so-far sqerr = -1.000000e+000
ERROR: "main.c", line 841: Too few observations for kmeans
ERROR: "main.c", line 1399: Unable to do k-means for state 0; skipping...
INFO: s3gau_io.c(228): Wrote C:/an4/model_parameters/an4.ci_semi_flatinitial/means [1x4x256 array]
INFO: s3gau_io.c(228): Wrote C:/an4/model_parameters/an4.ci_semi_flatinitial/variances [1x4x256 array]
INFO: main.c(1496): No mixing weight file given; none written
INFO: main.c(1628): TOTALS: km 3.516x 1.088e+000 var 0.000x 0.000e+000 em 0.000x 0.000e+000 all 3.516x 1.088e+000
Sun Mar 26 16:50:18 2017
The Baum-Welch log shows:

WARN: "gauden.c", line 1349: Scaling factor too small: -3619594.211457
WARN: "gauden.c", line 1349: Scaling factor too small: -11594564.458543
WARN: "gauden.c", line 1349: Scaling factor too small: -1331977.053178
WARN: "gauden.c", line 1349: Scaling factor too small: -7872424.039773
ERROR: "backward.c", line 1011: alpha(1.637862e-008) <> sum of alphas * betas (0.000000e+000) in frame 135
ERROR: "baum_welch.c", line 324: fejs/an36-fejs-b ignored
INFO: cmn.c(133): CMN: 86.13 -15.45 -5.41 -0.95 -3.54 -2.01 -2.37 -2.70 -4.07 -2.48 -0.52 -0.59 -0.21 
WARN: "gauden.c", line 1349: Scaling factor too small: -3795379.078838
WARN: "gauden.c", line 1349: Scaling factor too small: -9396257.864064
WARN: "gauden.c", line 1349: Scaling factor too small: -913946.511732
WARN: "gauden.c", line 1349: Scaling factor too small: -5152383.210186
ERROR: "backward.c", line 1011: alpha(3.533284e-011) <> sum of alphas * betas (0.000000e+000) in frame 144
ERROR: "baum_welch.c", line 324: fejs/an37-fejs-b ignored
INFO: cmn.c(133): CMN: 84.40 -8.91 -3.11 -3.99 -1.67  2.64 -5.08 -1.13 -2.05  1.96  0.50  0.33 -0.61 
WARN: "gauden.c", line 1349: Scaling factor too small: -2312100utt>    37               an38-fejs-b   80    0    16 14 
utt>    38               an39-fejs-b   77    0    48 36 
utt>    39               an40-fejs-b  120    0    76 58 
utt>    40               cen1-fejs-b  211    0    72 62 
utt>    41               cen2-fejs-b  198    0    32 30 
utt>    42               cen3-fejs-b  149    0    68 56 
utt>    43               cen4-fejs-b   93    0    88 57 
Sun Mar 26 16:50:30 2017

Comments:

"You downloaded the wrong dataset; you need the sph data, not the big-endian raw."
"I don't think so; the files in the wav folder are all of type .sph."
"If sphinx_train.cfg is misconfigured, the feature values will obviously be wrong."
"@NikolayShmyrev I have added the config file. Can you tell me what is wrong? I followed the tutorial exactly."
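
One way to sanity-check the data-format point raised in the comments: NIST SPHERE files begin with the 7-byte ASCII magic "NIST_1A", so a small Perl sketch (not part of the original question; the wav directory and the .sph extension are assumptions taken from the config below) can report which files are actually SPHERE and which look like raw data:

#!/usr/bin/perl
# Hedged sketch: check whether the training audio really is NIST SPHERE.
# SPHERE files begin with the ASCII magic "NIST_1A"; raw PCM files will show
# arbitrary sample bytes instead.
# Assumption: the wav directory and extension match the config below.
use strict;
use warnings;
use File::Find;

my $wav_dir = "C:/Users/mrajeev/Downloads/Sphinx/an4/wav";   # $CFG_WAVFILES_DIR

find(sub {
    return unless /\.sph$/i;                                  # $CFG_WAVFILE_EXTENSION
    open(my $fh, '<:raw', $_) or do { warn "cannot open $File::Find::name: $!"; return };
    read($fh, my $magic, 7);                                  # first 7 bytes of the file
    close($fh);
    my $kind = (defined $magic && $magic eq 'NIST_1A') ? 'NIST SPHERE' : 'NOT SPHERE (raw?)';
    print "$File::Find::name: $kind\n";
}, $wav_dir);

The complete sphinx_train.cfg referred to in the comments follows.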
# Configuration script for sphinx trainer                  -*-mode:Perl-*-

$CFG_VERBOSE = 1;       # Determines how much goes to the screen.

# These are filled in at configuration time
$CFG_DB_NAME = "an4";
# Experiment name, will be used to name model files and log files
$CFG_EXPTNAME = "$CFG_DB_NAME";

# Directory containing SphinxTrain binaries
$CFG_BASE_DIR = "C:/Users/mrajeev/Downloads/Sphinx/an4";
$CFG_SPHINXTRAIN_DIR = "C:/Users/mrajeev/Downloads/Sphinx/sphinxtrain";
$CFG_BIN_DIR = "C:/Users/mrajeev/Downloads/Sphinx/sphinxtrain/bin/Debug/Win32";
$CFG_SCRIPT_DIR = "C:/Users/mrajeev/Downloads/Sphinx/sphinxtrain/scripts";


# Audio waveform and feature file information
$CFG_WAVFILES_DIR = "$CFG_BASE_DIR/wav";
$CFG_WAVFILE_EXTENSION = 'sph';
$CFG_WAVFILE_TYPE = 'nist'; # one of nist, mswav, raw
$CFG_FEATFILES_DIR = "$CFG_BASE_DIR/feat";
$CFG_FEATFILE_EXTENSION = 'mfc';

# Feature extraction parameters
$CFG_WAVFILE_SRATE = 16000.0;
$CFG_NUM_FILT = 25; # For wideband speech it's 25, for telephone 8khz reasonable value is 15
$CFG_LO_FILT = 130; # For telephone 8kHz speech value is 200
$CFG_HI_FILT = 6800; # For telephone 8kHz speech value is 3500
$CFG_TRANSFORM = "dct"; # Previously legacy transform is used, but dct is more accurate
$CFG_LIFTER = "22"; # Cepstrum lifter is smoothing to improve recognition
$CFG_VECTOR_LENGTH = 13; # 13 is usually enough

$CFG_MIN_ITERATIONS = 1;  # BW Iterate at least this many times
$CFG_MAX_ITERATIONS = 10; # BW Don't iterate more than this; something's likely wrong.

# (none/max) Type of AGC to apply to input files
$CFG_AGC = 'none';
# (current/none) Type of cepstral mean subtraction/normalization
# to apply to input files
$CFG_CMN = 'batch';
# (yes/no) Normalize variance of input files to 1.0
$CFG_VARNORM = 'no';
# (yes/no) Train full covariance matrices
$CFG_FULLVAR = 'no';
# (yes/no) Use diagonals only of full covariance matrices for
# Forward-Backward evaluation (recommended if CFG_FULLVAR is yes)
$CFG_DIAGFULL = 'no';

# (yes/no) Perform vocal tract length normalization in training.  This
# will result in a "normalized" model which requires VTLN to be done
# during decoding as well.
$CFG_VTLN = 'no';
# Starting warp factor for VTLN
$CFG_VTLN_START = 0.80;
# Ending warp factor for VTLN
$CFG_VTLN_END = 1.40;
# Step size of warping factors
$CFG_VTLN_STEP = 0.05;

# Directory to write queue manager logs to
$CFG_QMGR_DIR = "$CFG_BASE_DIR/qmanager";
# Directory to write training logs to
$CFG_LOG_DIR = "$CFG_BASE_DIR/logdir";
# Directory for re-estimation counts
$CFG_BWACCUM_DIR = "$CFG_BASE_DIR/bwaccumdir";
# Directory to write model parameter files to
$CFG_MODEL_DIR = "$CFG_BASE_DIR/model_parameters";

# Directory containing transcripts and control files for
# speaker-adaptive training
$CFG_LIST_DIR = "$CFG_BASE_DIR/etc";

# Decoding variables for MMIE training
$CFG_LANGUAGEWEIGHT = "11.5";
$CFG_BEAMWIDTH      = "1e-100";
$CFG_WORDBEAM       = "1e-80";
$CFG_LANGUAGEMODEL  = "$CFG_LIST_DIR/$CFG_DB_NAME.lm.DMP";
$CFG_WORDPENALTY    = "0.2";

# Lattice pruning variables
$CFG_ABEAM              = "1e-50";
$CFG_NBEAM              = "1e-10";
$CFG_PRUNED_DENLAT_DIR  = "$CFG_BASE_DIR/pruned_denlat";

# MMIE training related variables
$CFG_MMIE = "no";
$CFG_MMIE_MAX_ITERATIONS = 5;
$CFG_LATTICE_DIR = "$CFG_BASE_DIR/lattice";
$CFG_MMIE_TYPE   = "rand"; # Valid values are "rand", "best" or "ci"
$CFG_MMIE_CONSTE = "3.0";
$CFG_NUMLAT_DIR  = "$CFG_BASE_DIR/numlat";
$CFG_DENLAT_DIR  = "$CFG_BASE_DIR/denlat";

# Variables used in main training of models
$CFG_DICTIONARY     = "$CFG_LIST_DIR/$CFG_DB_NAME.dic";
$CFG_RAWPHONEFILE   = "$CFG_LIST_DIR/$CFG_DB_NAME.phone";
$CFG_FILLERDICT     = "$CFG_LIST_DIR/$CFG_DB_NAME.filler";
$CFG_LISTOFFILES    = "$CFG_LIST_DIR/${CFG_DB_NAME}_train.fileids";
$CFG_TRANSCRIPTFILE = "$CFG_LIST_DIR/${CFG_DB_NAME}_train.transcription";
$CFG_FEATPARAMS     = "$CFG_LIST_DIR/feat.params";

# Variables used in characterizing models

#$CFG_HMM_TYPE = '.cont.'; # Sphinx 4, PocketSphinx
$CFG_HMM_TYPE  = '.semi.'; # PocketSphinx
#$CFG_HMM_TYPE  = '.ptm.'; # PocketSphinx (larger data sets)

if (($CFG_HMM_TYPE ne ".semi.")
    and ($CFG_HMM_TYPE ne ".ptm.")
    and ($CFG_HMM_TYPE ne ".cont.")) {
  die "Please choose one CFG_HMM_TYPE out of '.cont.', '.ptm.', or '.semi.', " .
    "currently $CFG_HMM_TYPE\n";
}

# This configuration is fastest and best for most acoustic models in
# PocketSphinx and Sphinx-III.  See below for Sphinx-II.
$CFG_STATESPERHMM = 3;
$CFG_SKIPSTATE = 'no';

if ($CFG_HMM_TYPE eq '.semi.') {
  $CFG_DIRLABEL = 'semi';
# Four stream features for PocketSphinx
  $CFG_FEATURE = "s2_4x";
  $CFG_NUM_STREAMS = 4;
  $CFG_INITIAL_NUM_DENSITIES = 256;
  $CFG_FINAL_NUM_DENSITIES = 256;
  die "For semi continuous models, the initial and final models have the same density" 
    if ($CFG_INITIAL_NUM_DENSITIES != $CFG_FINAL_NUM_DENSITIES);
} elsif ($CFG_HMM_TYPE eq '.ptm.') {
  $CFG_DIRLABEL = 'ptm';
# Four stream features for PocketSphinx
  $CFG_FEATURE = "s2_4x";
  $CFG_NUM_STREAMS = 4;
  $CFG_INITIAL_NUM_DENSITIES = 64;
  $CFG_FINAL_NUM_DENSITIES = 64;
  die "For phonetically tied models, the initial and final models have the same density" 
    if ($CFG_INITIAL_NUM_DENSITIES != $CFG_FINAL_NUM_DENSITIES);
} elsif ($CFG_HMM_TYPE eq '.cont.') {
  $CFG_DIRLABEL = 'cont';
# Single stream features - Sphinx 3
  $CFG_FEATURE = "1s_c_d_dd";
  $CFG_NUM_STREAMS = 1;
  $CFG_INITIAL_NUM_DENSITIES = 1;
  $CFG_FINAL_NUM_DENSITIES = 8;
  die "The initial has to be less than the final number of densities" 
    if ($CFG_INITIAL_NUM_DENSITIES > $CFG_FINAL_NUM_DENSITIES);
}

# Number of top gaussians to score a frame. A little bit less accurate computations
# make training significantly faster. Uncomment to apply this during the training
# For good accuracy make sure you are using the same setting in decoder
# In theory this can be different for various training stages. For example 4 for
# CI stage and 16 for CD stage
# $CFG_CI_TOPN = 4;
# $CFG_CD_TOPN = 16;

# (yes/no) Train multiple-gaussian context-independent models (useful
# for alignment, use 'no' otherwise) in the models created
# specifically for forced alignment
$CFG_FALIGN_CI_MGAU = 'no';
# (yes/no) Train multiple-gaussian context-independent models (useful
# for alignment, use 'no' otherwise)
$CFG_CI_MGAU = 'no';
# (yes/no) Train context-dependent models
$CFG_CD_TRAIN = 'no';
# Number of tied states (senones) to create in decision-tree clustering
$CFG_N_TIED_STATES = 200;
# How many parts to run Forward-Backward estimation in
$CFG_NPART = 1;

# (yes/no) Train a single decision tree for all phones (actually one
# per state) (useful for grapheme-based models, use 'no' otherwise)
$CFG_CROSS_PHONE_TREES = 'no';

# Use force-aligned transcripts (if available) as input to training
$CFG_FORCEDALIGN = 'no';

# Use a specific set of models for force alignment.  If not defined,
# context-independent models for the current experiment will be used.
$CFG_FORCE_ALIGN_MODELDIR = "$CFG_MODEL_DIR/$CFG_EXPTNAME.falign_ci_$CFG_DIRLABEL";

# Use a specific dictionary and filler dictionary for force alignment.
# If these are not defined, a dictionary and filler dictionary will be
# created from $CFG_DICTIONARY and $CFG_FILLERDICT, with noise words
# removed from the filler dictionary and added to the dictionary (this
# is because the force alignment is not very good at inserting them)

# $CFG_FORCE_ALIGN_DICTIONARY = "$ST::CFG_BASE_DIR/falignout$ST::CFG_EXPTNAME.falign.dict";;
# $CFG_FORCE_ALIGN_FILLERDICT = "$ST::CFG_BASE_DIR/falignout/$ST::CFG_EXPTNAME.falign.fdict";;

# Use a particular beam width for force alignment.  The wider
# (i.e. smaller numerically) the beam, the fewer sentences will be
# rejected for bad alignment.
$CFG_FORCE_ALIGN_BEAM = 1e-60;

# Calculate an LDA/MLLT transform?
$CFG_LDA_MLLT = 'no';
# Dimensionality of LDA/MLLT output
$CFG_LDA_DIMENSION = 29;

# This is actually just a difference in log space (it doesn't make
# sense otherwise, because different feature parameters have very
# different likelihoods)
$CFG_CONVERGENCE_RATIO = 0.1;

# Queue::POSIX for multiple CPUs on a local machine
# Queue::PBS to use a PBS/TORQUE queue
$CFG_QUEUE_TYPE = "Queue";

# Name of queue to use for PBS/TORQUE
$CFG_QUEUE_NAME = "workq";

# (yes/no) Build questions for decision tree clustering automatically
$CFG_MAKE_QUESTS = "yes";
# If CFG_MAKE_QUESTS is yes, questions are written to this file.
# If CFG_MAKE_QUESTS is no, questions are read from this file.
$CFG_QUESTION_SET = "${CFG_BASE_DIR}/model_architecture/${CFG_EXPTNAME}.tree_questions";
#$CFG_QUESTION_SET = "${CFG_BASE_DIR}/linguistic_questions";

$CFG_CP_OPERATION = "${CFG_BASE_DIR}/model_architecture/${CFG_EXPTNAME}.cpmeanvar";

# Configuration for grapheme-to-phoneme model
$CFG_G2P_MODEL= 'no';

# Configuration script for sphinx decoder 

# Variables starting with $DEC_CFG_ refer to decoder specific
# arguments, those starting with $CFG_ refer to trainer arguments,
# some of them also used by the decoder.

$DEC_CFG_VERBOSE = 1;       # Determines how much goes to the screen.

# These are filled in at configuration time

# Name of the decoding script to use (psdecode.pl or s3decode.pl, probably)
$DEC_CFG_SCRIPT = 'psdecode.pl';

$DEC_CFG_EXPTNAME = "$CFG_EXPTNAME";
$DEC_CFG_JOBNAME  = "$CFG_EXPTNAME"."_job";

# Models to use.
$DEC_CFG_MODEL_NAME = "$CFG_EXPTNAME.cd_${CFG_DIRLABEL}_${CFG_N_TIED_STATES}";

$DEC_CFG_FEATFILES_DIR = "$CFG_BASE_DIR/feat";
$DEC_CFG_FEATFILE_EXTENSION = '.mfc';
$DEC_CFG_AGC = $CFG_AGC;
$DEC_CFG_CMN = $CFG_CMN;
$DEC_CFG_VARNORM = $CFG_VARNORM;

$DEC_CFG_QMGR_DIR = "$CFG_BASE_DIR/qmanager";
$DEC_CFG_LOG_DIR = "$CFG_BASE_DIR/logdir";
$DEC_CFG_MODEL_DIR = "$CFG_MODEL_DIR";

$DEC_CFG_DICTIONARY     = "$CFG_BASE_DIR/etc/$CFG_DB_NAME.dic";
$DEC_CFG_FILLERDICT     = "$CFG_BASE_DIR/etc/$CFG_DB_NAME.filler";
$DEC_CFG_LISTOFFILES    = "$CFG_BASE_DIR/etc/${CFG_DB_NAME}_test.fileids";
$DEC_CFG_TRANSCRIPTFILE = "$CFG_BASE_DIR/etc/${CFG_DB_NAME}_test.transcription";
$DEC_CFG_RESULT_DIR     = "$CFG_BASE_DIR/result";
$DEC_CFG_PRESULT_DIR     = "$CFG_BASE_DIR/presult";

# These variables, used by the decoder, have to be user defined, and
# may affect the decoder output


$DEC_CFG_LANGUAGEMODEL  = "$CFG_BASE_DIR/etc/${CFG_DB_NAME}.lm.DMP";
# Or can be JSGF or FSG too, used if uncommented
# $DEC_CFG_GRAMMAR  = "$CFG_BASE_DIR/etc/${CFG_DB_NAME}.jsgf";
# $DEC_CFG_FSG  = "$CFG_BASE_DIR/etc/${CFG_DB_NAME}.fsg";

$DEC_CFG_LANGUAGEWEIGHT = "10";
$DEC_CFG_BEAMWIDTH = "1e-80";
$DEC_CFG_WORDBEAM = "1e-40";
$DEC_CFG_WORDPENALTY = "0.2";

$DEC_CFG_ALIGN = "builtin";

$DEC_CFG_NPART = 1;     #  Define how many pieces to split decode in

# This variable has to be defined, otherwise utils.pl will not load.
$CFG_DONE = 1;

return 1;