Training with Augmented Data: GAN-based Flame-Burning Image Synthesis for Fire Segmentation in Warehouse
Fire Technology, ISSN: 1572-8099, Vol: 58, Issue: 1, Page: 183-215
2022
- 17 Citations
- 21 Captures
Article Description
The training of video fire detection models based on deep learning relies on a large number of positive and negative samples, namely fire videos and scenario videos containing disturbances similar to fire. Because ignition is prohibited in many indoor settings, fire video samples for those scenes are insufficient. In this paper, a method based on a generative adversarial network (GAN) is proposed to generate flame images that are then migrated into specified scenes, thereby increasing the number of fire video samples in those restricted situations. A flame kernel is pre-implanted into the specified scene to keep its characteristics intact, and the flame and scene are blended by adding styling information such as blurred edges and ground reflection. This method overcomes the background distortion caused by information loss in existing multimodal image translation, guarantees the diversity of flames in specified scenes, and produces perceptually realistic results. Compared with other multimodal image-to-image translation schemes, the images generated by our method achieve the best FID and LPIPS values, reaching 118.4 and 0.1322, respectively. In addition, Unet and SA-Unet, the latter incorporating a self-attention mechanism, are used as fire segmentation networks to evaluate how the augmented data improves segmentation accuracy. Their F1-scores reach 0.8905 and 0.9082, respectively, after Unet and SA-Unet are trained with the GAN-based augmented dataset generated by our model. These F1-scores are second only to the 0.9259 and 0.9291 obtained when Unet and SA-Unet are trained with real pictures serving as the augmented dataset.
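The F1-scores cited in the abstract are standard pixel-wise segmentation metrics. As a minimal sketch (not the authors' evaluation code), assuming the predicted and ground-truth fire regions are binary NumPy masks, the score can be computed from per-pixel true positives, false positives, and false negatives:

```python
import numpy as np

def f1_score(pred, target):
    """Pixel-wise F1 score between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # fire pixels correctly predicted
    fp = np.logical_and(pred, ~target).sum()  # background predicted as fire
    fn = np.logical_and(~pred, target).sum()  # fire pixels missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical 2x3 masks for illustration only
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(f1_score(pred, target), 4))  # -> 0.6667
```

In practice the masks would be the thresholded output of Unet or SA-Unet and the hand-labeled fire region, averaged over a test set.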
Bibliographic Details
Springer Science and Business Media LLC