Sobolev trained neural network surrogate models for optimization
Computers & Chemical Engineering, ISSN: 0098-1354, Vol: 153, Page: 107419
2021
- 12 Citations
- 24 Captures
Metric Options: Counts. Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were also published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
Article Description
Neural network surrogate models are often used to replace complex mathematical models in black-box and grey-box optimization. This strategy essentially uses samples generated from a complex model to fit a data-driven, reduced-order model more amenable to optimization. Neural network models can be trained in Sobolev spaces, i.e., models are trained to match the complex function not only in terms of output values, but also the values of their derivatives to an arbitrary degree. This paper examines the direct impacts of Sobolev training on neural network surrogate models embedded in optimization problems, and proposes a systematic strategy for scaling Sobolev-space targets during NN training. In particular, it is shown that Sobolev training results in surrogate models with more accurate derivatives (in addition to more accurately predicting outputs), with direct benefits in gradient-based optimization. Three case studies demonstrate the approach: black-box optimization of the Himmelblau function, and grey-box optimizations of a two-phase flash separator and two flashes in series. The results show that the advantages of Sobolev training are especially significant in cases of low data volume and/or optimal points near the boundary of the training dataset—areas where NN models traditionally struggle.
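The core idea of Sobolev training — fitting a surrogate to both function values and derivative values — can be sketched in a minimal 1-D example. This is not the paper's implementation (the authors use neural networks and a target-scaling strategy not reproduced here); it is a hypothetical illustration using a quadratic model with an analytic derivative, where the loss penalizes both value error and derivative error and the derivative weight `lam` is an assumed hyperparameter.

```python
import numpy as np

# Illustrative Sobolev-style loss (assumption: quadratic model, 1-D target):
# fit f(x) = a*x^2 + b*x + c to g(x) = x^2, matching both g and g' = 2x.
rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=32)
y = xs**2          # target output values
dy = 2.0 * xs      # target derivative values (known analytically here)

theta = np.zeros(3)     # parameters [a, b, c]
lam, lr = 1.0, 0.01     # derivative-loss weight and learning rate (assumed)

for _ in range(2000):
    a, b, c = theta
    f = a * xs**2 + b * xs + c       # model output
    df = 2.0 * a * xs + b            # model derivative w.r.t. x
    r, rd = f - y, df - dy
    # gradient of L = mean((f-y)^2) + lam * mean((df-dy)^2) w.r.t. [a, b, c]
    grad = np.array([
        np.mean(2 * r * xs**2) + lam * np.mean(2 * rd * 2 * xs),
        np.mean(2 * r * xs)    + lam * np.mean(2 * rd),
        np.mean(2 * r),
    ])
    theta -= lr * grad

# theta approaches [1, 0, 0], so both f and f' match the target.
```

In a neural-network setting the model derivative `df` would come from automatic differentiation rather than a closed form, but the composite loss — a value term plus a weighted derivative term — is the same structure the abstract describes.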
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0098135421001976
- http://dx.doi.org/10.1016/j.compchemeng.2021.107419
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85109464688&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0098135421001976
- https://dx.doi.org/10.1016/j.compchemeng.2021.107419
Elsevier BV