A fast algorithm for finding global minima of error functions in layered neural networks
1990 IJCNN International Joint Conference on Neural Networks, pp. 715-720, vol. 1
1990
- 8 Citations
- 4 Usage
- 3 Captures
Metrics Details
- Citations: 8
- Citation Indexes: 8
- CrossRef: 8
- Usage: 4
- Abstract Views: 4
- Captures: 3
- Readers: 3
Conference Paper Description
A fast algorithm is proposed for optimal supervised learning in multiple-layer neural networks. The proposed algorithm is based on random optimization methods with dynamic annealing. It does not require the computation of error-function gradients and guarantees convergence to global minima. When applied to multiple-layer neural networks, the algorithm updates, in batch mode, all neuron weights by Gaussian-distributed increments in a direction that reduces the total decision error. The variance of the Gaussian distribution is controlled automatically so that the random search step is concentrated in potential minimum energy/error regions. Also demonstrated is a hybrid method that combines an initial gradient-descent phase with a subsequent phase of dynamically annealed random search, suitable for optimal search in difficult learning tasks such as parity. Extensive simulations show substantial convergence speedup of the proposed learning method compared with gradient-search methods such as backpropagation. The proposed algorithm is also shown to be simple to implement, computationally efficient, and able to reach global minima over wide ranges of parameter settings.
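To make the style of search described in the abstract concrete, the following is a minimal sketch of a dynamically annealed random search over the weights of a small one-hidden-layer network. It is not the authors' exact procedure: the network architecture, the accept-if-error-decreases rule, the variance-control constants, and all function names below are illustrative assumptions.

```python
import numpy as np

def forward(weights, X):
    """One-hidden-layer network (assumed architecture, not from the paper)."""
    W1, b1, W2, b2 = weights
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def total_error(weights, X, T):
    """Batch (total) decision error over the whole training set."""
    return np.mean((forward(weights, X) - T) ** 2)

def annealed_random_search(X, T, n_hidden=4, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    weights = [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
               rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)]
    sigma = 1.0                                    # search-step standard deviation
    best = total_error(weights, X, T)
    for _ in range(iters):
        # Gaussian-distributed increments applied to all weights in batch mode
        trial = [w + rng.normal(0, sigma, w.shape) for w in weights]
        err = total_error(trial, X, T)
        if err < best:                             # keep only error-reducing steps
            weights, best = trial, err
            sigma = min(sigma * 1.05, 2.0)         # assumed rule: widen on success
        else:
            sigma = max(sigma * 0.999, 1e-3)       # assumed rule: anneal otherwise
    return weights, best

# Example: XOR, a parity-type task of the kind cited as difficult for gradient search
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
w, e = annealed_random_search(X, T)
print("final error:", e)
```

The variance schedule here (multiplicative growth on acceptance, slow decay on rejection) is one simple way to concentrate the search near promising regions; the paper's automatic variance control and its hybrid gradient-descent phase are described only qualitatively in the abstract above.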
Bibliographic Details
- http://ieeexplore.ieee.org/document/5726613/
- http://xplorestaging.ieee.org/ielx2/148/3745/05726613.pdf?arnumber=5726613
- http://dx.doi.org/10.1109/ijcnn.1990.137653
- https://nsuworks.nova.edu/gscis_facarticles/482
- https://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1511&context=gscis_facarticles
Institute of Electrical and Electronics Engineers (IEEE)