Improving Speech Recognition Learning through Lazy Training

Citation data:
IEEE
Publication Year:
2002
Repository URL:
https://scholarsarchive.byu.edu/facpub/1071; https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=2070&context=facpub
Author(s):
Martinez, Tony R.; Rimer, Michael E.; Wilson, D. Randall
Publisher(s):
IEEE
Tags:
lazy training; overfit; generalization; neural networks; Computer Sciences
Article description:
Multi-layer backpropagation, like most learning algorithms that can create complex decision surfaces, is prone to overfitting. We present a novel approach, called lazy training, for reducing overfitting in multi-layer networks. Lazy training consistently reduces the generalization error of optimized neural networks by more than half on a large OCR dataset and on several real-world problems from the UCI machine learning database repository. Here, lazy training is shown to be effective in a multi-layered adaptive learning system, reducing the error of an optimized backpropagation network in a speech recognition system by 50.0% on the TIDIGITS corpus.
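
The abstract does not spell out the mechanism of lazy training. In the authors' related work, the core idea is that error is backpropagated only for patterns the network currently misclassifies, rather than pushing every output toward fixed 0/1 targets. The sketch below is a minimal illustration under that assumption; the network architecture, learning rate, and synthetic data are hypothetical and not taken from the paper, and the paper's exact error criterion may differ.

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer network trained with backpropagation,
# where a pattern contributes a weight update only if it is currently
# misclassified. This illustrates the general "train only as much as needed to
# classify correctly" idea attributed to lazy training; it is not the paper's
# exact algorithm.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_lazy_mlp(X, y, n_hidden=16, lr=0.1, epochs=200):
    """X: (n_samples, n_features); y: integer class labels."""
    n_classes = int(y.max()) + 1
    T = np.eye(n_classes)[y]                      # one-hot targets
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
    b2 = np.zeros(n_classes)

    for _ in range(epochs):
        for x, t in zip(X, T):
            h = sigmoid(x @ W1 + b1)              # hidden activations
            o = sigmoid(h @ W2 + b2)              # output activations

            # "Lazy" step: skip the update entirely when the winning output
            # already matches the target class, instead of continuing to push
            # outputs toward 0/1 targets (a common source of overfitting).
            if np.argmax(o) == np.argmax(t):
                continue

            # Standard backprop update for misclassified patterns only.
            delta_o = (o - t) * o * (1 - o)
            delta_h = (delta_o @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, delta_o)
            b2 -= lr * delta_o
            W1 -= lr * np.outer(x, delta_h)
            b1 -= lr * delta_h
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    return np.argmax(sigmoid(h @ W2 + b2), axis=1)

if __name__ == "__main__":
    # Tiny synthetic two-class problem, only to show the training loop runs.
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    params = train_lazy_mlp(X, y)
    print(f"training accuracy: {(predict(X, *params) == y).mean():.2f}")
```

Because correctly classified patterns stop contributing gradient, the network is not driven to saturate its outputs on easy examples, which is one plausible reading of how this style of training reduces overfitting on the OCR, UCI, and TIDIGITS tasks reported above.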