1. Introduction
Forest fires pose real threats to human lives, environmental systems, and infrastructure. It is predicted that forest fires could destroy half of the world's forests by the year 2030 [1]. The only efficient way to minimize forest-fire damage is to adopt early fire-detection mechanisms. Thus, forest-fire detection systems are attracting a great deal of attention at research centers and universities around the world. Many commercial fire-detection sensor systems exist, but all of them are difficult to apply in large open areas such as forests, owing to their response delay, maintenance requirements, high cost, and other problems.
In this study, an image-processing-based approach is used for several reasons: digital camera technology has developed rapidly; a camera can cover large areas with excellent results; the response time of image-processing methods is better than that of existing sensor systems; and the overall cost of image-processing systems is lower than that of sensor systems.
Several forest-fire detection methods based on image processing have been proposed. The methods presented in [2,3] share the same framework and detect forest fire using the YCbCr color space. In these methods, detection is based on four rules: the first and second rules are used to segment flame regions, while the third and fourth rules are used to segment high-temperature regions. The first rule is based on the fact that, in any fire image, the red value is larger than the green and the green is larger than the blue; in YCbCr this is expressed as the luminance being larger than the chrominance-blue (Y > Cb). In the second rule, the luminance Y is larger than the mean of the Y component of the same image (Y > Ymean), the Cb component is smaller than the mean of Cb (Cb < Cbmean), and Cr is larger than the mean of Cr (Cr > Crmean). The third rule relies on the fact that the center of a fire region at high temperature is white in color; this reduces the red component and increases the blue component at the fire center, which is expressed as (Cb > Y > Cr). The fourth rule is that Cr is smaller than the standard deviation of Cr for the same image (Crstd) multiplied by a constant τ (Cr < τ × Crstd).
The method of Wang and Ye [4] combines the RGB and HSV color spaces with k-means clustering. Candidate fire pixels must satisfy (R > G > B) and (R > Rmean). The RGB images are then converted to the HSV color space, and fire pixels are accepted if the following conditions are met: 0 ≤ H ≤ 60, 0.2 ≤ S ≤ 1, 100 ≤ V ≤ 255. For smoke detection, RGB values and the k-means algorithm are used. A standard RGB smoke value C is taken from an image with significant smoke; C must be adjusted experimentally based on the results. A cluster center P is determined from the video stream after the image frames are clustered by the k-means algorithm, and smoke is detected if |P − C| < threshold. This method works well; nevertheless, smoke can spread quickly and takes different colors depending on the burning materials, leading to false alarms.
Chen et al. [5] designed a fire-detection algorithm that combines the saturation channel of the HSV color space with the RGB color space. This method detects fire using three rules: R ≥ RT, R ≥ G > B, and S ≥ (255 − R) × ST/RT. Two thresholds (ST and RT) must be determined; based on experimental results, the selected ranges are 55-65 for ST and 115-135 for RT. This method is fast and computationally simple compared with the other methods. However, it suffers from false-positive alarms in the presence of moving fire-like objects.
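For illustration, the three rules of Chen et al. [5] can be expressed as a few array operations. The following Python sketch is ours, not the original authors' code; the 0-255 scaling of the saturation channel, and the specific thresholds RT = 125 and ST = 60 (picked from the reported ranges), are assumptions:

```python
import numpy as np

def chen_fire_mask(rgb, RT=125, ST=60):
    """Rules of Chen et al. [5]: R >= RT, R >= G > B, and
    S >= (255 - R) * ST / RT, applied element-wise to an HxWx3 image."""
    R, G, B = (rgb[..., i].astype(float) for i in range(3))
    mx = rgb.astype(float).max(axis=-1)
    mn = rgb.astype(float).min(axis=-1)
    # HSV saturation, scaled to 0-255 to match the rule's units (assumed).
    S = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0) * 255.0
    return (R >= RT) & (R >= G) & (G > B) & (S >= (255.0 - R) * ST / RT)
```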
In this study, a forest-fire detection method is proposed. It relies on multiple stages to identify forest fire. The final results indicate that the proposed algorithm achieves a good detection rate with fewer false alarms. The proposed algorithm is able to distinguish between fire and fire-like objects, which is a crucial problem for most of the existing methods.
The paper is organized as follows: Section 2 describes the methodology, Section 3 presents the experimental results, and Section 4 summarizes the achieved results and outlines future directions.
2. Methodology
In this section, the proposed method is presented. It consists of multiple stages. First, background subtraction is applied, because the fire boundaries continuously change. Second, a color segmentation model is used to mark the candidate regions. Third, spatial wavelet analysis is carried out to distinguish between actual fire and fire-like objects. Finally, a support vector machine (SVM) is used to classify the candidate regions as either actual fire or non-fire. The stages of the proposed algorithm are described in detail in the following subsections. Fig. 1 shows a flowchart of the proposed method.
Fig. 1. The proposed method flowchart.
2.1 Background Subtraction
Detecting moving objects is an essential step in most video-based fire detection methods, because the fire boundaries continuously fluctuate. Eq. (1) computes the contrast between the current image and the background to determine the regions of motion. Fig. 2 shows an example of background subtraction. A pixel at (x, y) is considered to be moving if it satisfies Eq. (1) as follows.
\(\left|I_{n}(x, y)-B_{n}(x, y)\right|>t h r\) (1)
where In(x, y) and Bn(x, y) represent the pixel values at (x, y) in the current and background frames, respectively, and thr is a threshold value, set experimentally to 3.
The background value is continuously updated using Eq. (2) as follows:
\(B_{n+1}(x, y)=\left\{\begin{array}{ll} B_{n}(x, y)+1 & \text{if } I_{n}(x, y)>B_{n}(x, y) \\ B_{n}(x, y)-1 & \text{if } I_{n}(x, y)<B_{n}(x, y) \\ B_{n}(x, y) & \text{if } I_{n}(x, y)=B_{n}(x, y) \end{array}\right.\) (2)
where Bn+1(x, y) and Bn(x, y) represent the pixel intensity values at (x, y) in the updated and current background frames, respectively [6].
Fig. 2. An original frame containing fire (a) and the frame containing fire after background subtraction (b).
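As an illustration, Eqs. (1) and (2) map directly onto a few NumPy array operations. The following Python sketch (the paper's own implementation is in MATLAB) assumes 8-bit grayscale frames; np.sign reproduces the three-way update of Eq. (2):

```python
import numpy as np

def detect_motion(current, background, thr=3):
    """Eq. (1): a pixel is moving if |I_n(x,y) - B_n(x,y)| > thr."""
    diff = current.astype(np.int16) - background.astype(np.int16)
    return np.abs(diff) > thr

def update_background(current, background):
    """Eq. (2): nudge each background pixel by +/-1 toward the current frame."""
    bg = background.astype(np.int16)
    bg += np.sign(current.astype(np.int16) - bg)  # +1, -1, or 0 per pixel
    return np.clip(bg, 0, 255).astype(np.uint8)
```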
2.2 Color-based Segmentation
Different kinds of moving objects (e.g., trees, people, birds) as well as fire can remain after applying background subtraction. Thus, the CIE L∗a∗b∗ color space is used to select candidate regions of fire color.
2.2.1 RGB to CIE L*a*b* conversion
The conversion from RGB to CIE L∗a∗b∗ color space is performed by using Eq. (3):
\(\begin{aligned} &\left[\begin{array}{c} X \\ Y \\ Z \end{array}\right]=\left[\begin{array}{ccc} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{array}\right]\left[\begin{array}{c} R \\ G \\ B \end{array}\right]\\ &L^{*}=\left\{\begin{array}{ll} 116\left(Y / Y_{n}\right)^{1/3}-16, & \text{if } \left(Y / Y_{n}\right)>0.008856 \\ 903.3\left(Y / Y_{n}\right), & \text{otherwise} \end{array}\right.\\ &a^{*}=500\left(f\left(X / X_{n}\right)-f\left(Y / Y_{n}\right)\right)\\ &b^{*}=200\left(f\left(Y / Y_{n}\right)-f\left(Z / Z_{n}\right)\right)\\ &f(t)=\left\{\begin{array}{ll} t^{1 / 3}, & \text{if } t>0.008856 \\ 7.787\, t+16 / 116, & \text{otherwise} \end{array}\right. \end{aligned}\) (3)
where Xn, Yn, and Zn represent the reference white values. The RGB channels range from 0 to 255 for an 8-bit data representation, and the ranges of L*, a*, and b* are [0, 100], [−110, 110], and [−110, 110], respectively.
After calculating the color channels (L*, a*, b*), the channel averages (L*m, a*m, b*m) are obtained using the following equations:
\(\begin{array}{l} L_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} L^{*}(x, y) \\ a_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} a^{*}(x, y) \\ b_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} b^{*}(x, y) \end{array}\) (4)
where L*m, a*m, and b*m are the average values of the CIE L*a*b* channels, and N is the total number of pixels in the image.
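The conversion of Eq. (3) and the channel means of Eq. (4) can be sketched in Python as follows. This is a minimal NumPy version; the reference white (Xn, Yn, Zn) is assumed to be the row sums of the matrix in Eq. (3), and RGB is first scaled to [0, 1]:

```python
import numpy as np

# RGB-to-XYZ matrix from Eq. (3); reference white taken as its row sums
# (Xn, Yn, Zn) ~ (0.950456, 1.0, 1.088754), an assumption consistent with D65.
M = np.array([[0.412453, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])
WHITE = M.sum(axis=1)

def f(t):
    """Piecewise nonlinearity of Eq. (3)."""
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(rgb):
    """Convert an HxWx3 RGB image (0-255) to CIE L*a*b* per Eq. (3)."""
    xyz = (rgb.astype(float) / 255.0) @ M.T / WHITE  # X/Xn, Y/Yn, Z/Zn
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    L = np.where(y > 0.008856, 116.0 * np.cbrt(y) - 16.0, 903.3 * y)
    return np.stack([L, 500.0 * (f(x) - f(y)), 200.0 * (f(y) - f(z))], axis=-1)

# Channel means of Eq. (4):
# Lm, am, bm = rgb_to_lab(frame).mean(axis=(0, 1))
```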
To detect candidate fire regions using CIE L*a*b*, four rules are defined based on the observation that a fire region is the brightest, near-red area of the image. The rules are as follows:
\(R_{1}(x, y)=\left\{\begin{array}{ll} 1 & \text{if } L^{*}(x, y) \geq L^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right.\) (5)
\(R_{2}(x, y)=\left\{\begin{array}{ll} 1 & \text{if } a^{*}(x, y) \geq a^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right.\) (6)
\(R_{3}(x, y)=\left\{\begin{array}{ll} 1 & \text{if } b^{*}(x, y) \geq b^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right.\) (7)
\(R_{4}(x, y)=\left\{\begin{array}{ll} 1 & \text{if } b^{*}(x, y) \geq a^{*}(x, y) \\ 0 & \text{otherwise} \end{array}\right.\) (8)
where R1(x, y), R2(x, y), R3(x, y), and R4(x, y) are binary images. Fig. 3 shows the results of applying rules (5) through (8).
Fig. 3. Applying rules (5)-(8) to the input images: (i) original RGB images, (ii) binary images using rule (5), (iii) binary images using rule (6), (iv) binary images using rule (7), (v) binary images using rule (8), and (vi) binary images using rules (5) through (8).
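Rules (5)-(8) amount to element-wise comparisons combined with a logical AND, as in panel (vi) of Fig. 3. A minimal sketch, reusing rgb_to_lab from the previous listing:

```python
def candidate_fire_mask(lab):
    """Combine rules (5)-(8): a pixel is a fire candidate only if all four hold."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    Lm, am, bm = L.mean(), a.mean(), b.mean()            # Eq. (4)
    return (L >= Lm) & (a >= am) & (b >= bm) & (b >= a)  # rules (5)-(8)
```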
2.3 Spatial Wavelet Analysis for Color Variations
Genuine fire regions show higher luminance contrast than fire-like colored objects, owing to turbulent fire flicker. Spatial wavelet analysis is therefore a good image-processing tool for distinguishing genuine fire regions from fire-like colored regions. A 2D wavelet filter is applied to the red channel, and the spatial wavelet energy is calculated for each pixel. Fig. 4 shows the wavelet energies of two videos, one containing actual fire and the other containing fire-like objects; the regions containing actual fire clearly show high variation and high wavelet energy. The wavelet energy is calculated using the following formula:
\(E(x, y)=\left(H L(x, y)^{2}+L H(x, y)^{2}+H H(x, y)^{2}\right)\) (9)
where E(x, y) is the spatial wavelet energy of a pixel, and HL, LH, and HH are the high-low, low-high, and high-high wavelet sub-images, respectively. The spatial wavelet energy of each block is obtained by averaging the energies of its pixels, as follows [7]:
\(E_{\text {block}}=\frac{1}{N_{b}} \sum_{x, y} E(x, y)\) (10)
where Nb is the total number of pixels in the block. Eblock is used in the next stage as the SVM input to classify the regions of interest as either fire or non-fire.
Fig. 4. Wavelet energy for actual fire (a) and fire-like object (b).
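A compact way to realize Eqs. (9) and (10) is a single-level 2D discrete wavelet transform followed by block averaging. The sketch below uses PyWavelets; the wavelet family ('db2') and the block size (16) are assumptions, as the paper does not specify them:

```python
import numpy as np
import pywt

def block_wavelet_energy(red, block=16, wavelet='db2'):
    """Eqs. (9)-(10): per-block spatial wavelet energy of the red channel."""
    # Single-level 2D DWT: cH, cV, cD are the detail sub-images (LH, HL, HH).
    _, (cH, cV, cD) = pywt.dwt2(red.astype(float), wavelet)
    energy = cH**2 + cV**2 + cD**2                 # Eq. (9), per coefficient
    h = energy.shape[0] - energy.shape[0] % block
    w = energy.shape[1] - energy.shape[1] % block
    blocks = energy[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))                # Eq. (10): mean per block
```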
2.4 Classification using SVM
SVM is nowadays widely used in many fields of pattern recognition, because it provides high performance and accurate classification with a limited training data set. The idea of SVM is to construct an optimal hyperplane that separates the input data into two classes with maximum margin. In this study, SVM is used to classify the regions of interest as either fire or non-fire. The SVM classification function is defined by the following formula:
\(f(x)=\operatorname{sign}\left(\sum_{i=1}^{l} w_{i} \, k\left(x, x_{i}\right)+b\right)\) (11)
where sign() determines whether x belongs to the fire class or the non-fire class (+1 class and −1 class), wi are the output weights of the kernel, k() is a kernel function, xi are the support vectors, l is the number of support vectors, and b is the bias. In the proposed method, a one-dimensional feature vector is used. The data in this study are not linearly separable, so no hyperplane may exist that separates the input data into two parts; therefore, the non-linear radial basis function (RBF) kernel [8] is used, as follows:
\(k(x, y)=\exp \left(-\frac{\|x-y\|^{2}}{2 \sigma^{2}}\right) \text { for } \sigma>0\) (12)
where x and y represent input feature vectors, and σ is a parameter controlling the width of the effective basis function; it was set experimentally to 0.1, which gives good performance. To train the SVM, a dataset consisting of 500 wavelet energies from actual-fire videos and 500 from fire-like and non-fire moving regions was used.
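As a sketch of the training and classification step, scikit-learn's SVC can stand in for the SVM of Eqs. (11) and (12); note that scikit-learn parameterizes the RBF kernel as exp(−γ‖x−y‖²), so σ = 0.1 corresponds to γ = 1/(2σ²) = 50. The training arrays below are placeholders, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data standing in for the paper's 500 fire and 500
# non-fire block energies (one-dimensional feature vectors).
rng = np.random.default_rng(0)
X_fire = rng.uniform(5.0, 10.0, size=(500, 1))     # assumed high-energy samples
X_nonfire = rng.uniform(0.0, 2.0, size=(500, 1))   # assumed low-energy samples
X = np.vstack([X_fire, X_nonfire])
y = np.hstack([np.ones(500), -np.ones(500)])       # +1 fire, -1 non-fire

sigma = 0.1
clf = SVC(kernel='rbf', gamma=1.0 / (2.0 * sigma**2))  # RBF kernel of Eq. (12)
clf.fit(X, y)

# Eq. (11): sign of the kernel expansion, via the fitted decision function.
print(clf.predict([[8.3], [0.7]]))                 # e.g. [ 1. -1.]
```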
3. Results
In this section, the experimental results of the proposed method are presented. The model is implemented in MATLAB (R2017a) and tested on a PC with an Intel Core i7 2.97 GHz CPU and 8 GB of RAM.
To measure the performance of the proposed algorithm, 10 videos collected from the Internet (http://www.ultimatechase.com) are used, with dimensions of 256×256; eight of them contain actual forest fire and two contain moving fire-colored objects. Table 1 shows snapshots of the tested videos. A true positive is counted when an image frame contains fire and the proposed algorithm labels it as fire; a false positive is counted when a frame contains no fire but the algorithm labels it as fire. The results are shown in Table 2.
Table 1. Videos used for the proposed algorithm evaluation
Table 2. Experimental results for testing the proposed forest-fire detection method
The experimental results in Table 2 show that the proposed method achieves an average true-positive rate of 93.46% on the eight fire videos and an average false-positive rate of 6.89% on the two videos of moving fire-colored objects. These results indicate the good performance of the proposed method.
3.1 Performance Evaluation
To evaluate the performance of the proposed algorithm, it is compared with the above-mentioned methods. All of the methods are tested on a data set consisting of 300 images (200 forest-fire images and 100 non-fire images) collected from the Internet. The algorithms' performances are measured using the F-score evaluation metric.
3.1.1 F-score
The F-score [9] is used to evaluate the performance of the detection methods. For any given detection method, there are four possible outcomes: if an image contains fire pixels and the algorithm labels it as fire, it is a true positive; if the same image is labeled as non-fire by the algorithm, it is a false negative. If an image contains no fire and the algorithm labels it as non-fire, it is a true negative; but if it is labeled as fire, it is counted as a false positive. Fire detection methods are evaluated using the following equations:
\(F=2 \times \frac{\text{precision} \times \text{recall}}{\text{precision}+\text{recall}}\) (13)
\(\text {precision}=\frac{T P}{(T P+F P)}\) (14)
\(\text {recall}=\frac{T P}{(T P+F N)}\) (15)
where F refers to the F-score, and TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. A higher F-score means better overall performance. Table 3 shows the comparison results; the rates reported there are defined as follows (a minimal computation sketch follows the list).
- TP rate is TP divided by the overall number of fire images.
- TN rate is TN divided by the overall number of non-fire images.
- FN rate is FN divided by the overall number of fire images.
- FP rate is FP divided by the overall number of non-fire images.
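For concreteness, Eqs. (13)-(15) reduce to a few lines of Python; the counts below are hypothetical, chosen only to show the arithmetic on a 300-image set (200 fire, 100 non-fire):

```python
def f_score(tp, fp, fn):
    """Eqs. (13)-(15): F-score from raw counts."""
    precision = tp / (tp + fp)   # Eq. (14)
    recall = tp / (tp + fn)      # Eq. (15)
    return 2 * precision * recall / (precision + recall)  # Eq. (13)

# Hypothetical counts: 187 of 200 fire images detected (fn=13),
# 7 of 100 non-fire images misclassified as fire (fp=7).
print(round(f_score(187, 7, 13), 3))  # -> 0.949
```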
Table 3. Evaluations of the four tested fire detection methods
4. Conclusion
This work presented an effective forest-fire detection method based on image processing. Background subtraction and spatial wavelet analysis are used, and an SVM classifies the candidate regions as either real fire or non-fire. A comparison between the existing methods and the proposed method was carried out. The final results indicate that the proposed forest-fire detection method achieves a good detection rate (93.46%) and a low false-alarm rate (6.89%) on fire-like objects. These results indicate that the proposed method is accurate and can be used in automatic forest-fire alarm systems.
For future work, the method’s accuracy could be improved by extracting more fire features and increasing the training data set.
Acknowledgement
This work was supported by the Fundamental Research Funds for the Central Universities (No. 2572017PZ10).
References
[1] D. Stipanicev, T. Vuko, D. Krstinic, M. Stula, and L. Bodrozic, "Forest fire protection by advanced video detection system: Croatian experiences," in Proceedings of the 3rd TIEMS Workshop on Improvement of Disaster Management Systems: Local and Global Trends, Trogir, Croatia, 2006.
[2] C. E. Premal and S. S. Vinsley, "Image processing based forest fire detection using YCbCr colour model," in Proceedings of 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2014, pp. 1229-1237.
[3] V. Vipin, "Image processing based forest fire detection," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 2, pp. 87-95, 2012.
[4] Y. L. Wang and J. Y. Ye, "Research on the algorithm of prevention forest fire disaster in the Poyang Lake Ecological Economic Zone," Advanced Materials Research, vol. 518-523, pp. 5257-5260, 2012. https://doi.org/10.4028/www.scientific.net/AMR.518-523.5257
[5] T. H. Chen, P. H. Wu, and Y. C. Chiou, "An early fire-detection method based on image processing," in Proceedings of 2004 International Conference on Image Processing, Singapore, 2004, pp. 1707-1710.
[6] M. Kang, T. X. Tung, and J. M. Kim, "Efficient video-equipped fire detection approach for automatic fire alarm systems," Optical Engineering, vol. 52, no. 1, article no. 017002, 2013.
[7] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, and A. E. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognition Letters, vol. 27, no. 1, pp. 49-58, 2006. https://doi.org/10.1016/j.patrec.2005.06.015
[8] S. Theodoridis, A. Pikrakis, K. Koutroumbas, and D. Cavouras, Introduction to Pattern Recognition: A MATLAB Approach. New York, NY: Academic Press, 2010.
[9] T. Fawcett, "ROC graphs: notes and practical considerations for researchers," 2004; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.9777