A general framework and guidelines for benchmarking computational intelligence algorithms applied to forecasting problems derived from an application domain-oriented survey
Applied Soft Computing, ISSN: 1568-4946, Vol: 89, Page: 106103
2020
- 19 Citations
- 56 Captures
Metric Options: Counts. Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were also published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
Article Description
Benchmarking computational intelligence (CI) algorithms provides valuable knowledge for selecting the best, or at least a suitable, algorithm for a given problem. The experimental results of CI techniques applied in various domains, together with the comparative studies reported in the literature, can be analyzed and synthesized into development strategies for new successful applications of CI algorithms. Starting from an application domain-oriented survey of selected recently reported research, the paper presents a general benchmarking framework applicable to CI algorithms and a set of guidelines for selecting the best or most suitable CI algorithm for solving forecasting problems. Our approach integrates software and knowledge engineering best practices into CI benchmarking, forming a computational intelligence engineering methodology. The framework uses two knowledge bases, one for the application domain and one for the CI algorithms, which provide heuristic knowledge for more informed and efficient benchmarking; a case base in which solved problems are recorded together with their solutions and the lessons learned; and knowledge-based selection of problem-instance features. Examples of how to apply the framework to forecasting problems in seismology, environmental protection, hydrology, and energy are also discussed. We note that the framework could be implemented as a software tool (e.g., a decision support system) or as a tool suite. The main conclusion of our research is that integrating the knowledge derived from an application domain-oriented survey into the general benchmarking framework, together with the guidelines for selecting the best or most suitable CI algorithm, can significantly improve forecasting accuracy and, for real-time forecasters, response time.
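The components named in the description — two knowledge bases, a case base of solved problems, and knowledge-based feature selection feeding algorithm recommendation — can be sketched as a minimal decision-support structure. This is an illustrative sketch only, not the authors' implementation; every class, field, domain, and algorithm name below is a hypothetical placeholder.

```python
# Illustrative sketch of the framework's components described in the abstract:
# a domain knowledge base, a CI-algorithm knowledge base, and a case base of
# solved problems with lessons learned. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Case:
    """A solved forecasting problem recorded with its solution and lessons learned."""
    domain: str          # e.g. "hydrology", "seismology"
    features: frozenset  # problem-instance features
    algorithm: str       # CI algorithm that solved it
    accuracy: float      # forecasting accuracy achieved
    lessons: str = ""


@dataclass
class BenchmarkingFramework:
    domain_kb: dict = field(default_factory=dict)     # domain -> relevant features
    algorithm_kb: dict = field(default_factory=dict)  # algorithm -> features it handles well
    case_base: list = field(default_factory=list)     # previously solved problems

    def select_features(self, domain):
        """Knowledge-based problem-instance feature selection."""
        return self.domain_kb.get(domain, set())

    def recommend(self, domain):
        """Rank candidate CI algorithms by feature overlap with the domain,
        breaking ties with accuracy evidence from the case base."""
        features = self.select_features(domain)

        def score(alg):
            overlap = len(features & self.algorithm_kb.get(alg, set()))
            evidence = [c.accuracy for c in self.case_base
                        if c.algorithm == alg and c.domain == domain]
            return (overlap, max(evidence, default=0.0))

        return sorted(self.algorithm_kb, key=score, reverse=True)


# Usage with toy data: the domain profile favors the algorithm whose known
# strengths overlap most with the domain's features.
fw = BenchmarkingFramework(
    domain_kb={"hydrology": {"seasonal", "nonlinear"}},
    algorithm_kb={"svr": {"nonlinear"}, "lstm": {"seasonal", "nonlinear"}},
    case_base=[Case("hydrology", frozenset({"seasonal"}), "lstm", 0.92)],
)
print(fw.recommend("hydrology"))  # ['lstm', 'svr']
```

A real decision support system built on this outline would, as the description notes, also record lessons learned per case and use them as heuristic guidance during benchmarking runs.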
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S1568494620300430
- http://dx.doi.org/10.1016/j.asoc.2020.106103
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85078667991&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S1568494620300430
- https://dx.doi.org/10.1016/j.asoc.2020.106103
Elsevier BV