
Setting of the Operating Conditions of Stereo CCTV Cameras by Weather Condition

  • Moon, Kwang (Department of Smart ICT Convergence, Konkuk University) ;
  • Pyeon, Mu Wook (Department of Civil Engineering, Social Eco-Tech Institute) ;
  • Lee, Soo Bong (Department of Advanced Technology Fusion, Konkuk University) ;
  • Lee, Do Rim (Advance Tech. Research Institute, LoDICS Co.)
  • Received : 2014.11.27
  • Accepted : 2014.12.26
  • Published : 2014.12.31

Abstract

A wide variety of image-based methods, such as aerial imagery, terrestrial imagery, terrestrial laser scanning, and stereo image points, are currently under investigation for building three-dimensional (3D) geospatial information. In this study, the matching points needed to build a 3D model were examined under diverse weather conditions by analyzing the stereo images recorded by closed circuit television (CCTV) cameras installed in the U-City. Tests under different illuminance and precipitation conditions showed that the number of matching points was very sensitive to changes in illuminance. Based on the performance of the CCTV cameras used in the test, this study identified the optimal values of the shutter speed and iris. As a result, compared to the automatic control mode, more matching points may be obtained when images are filmed with the settings derived from this test for the corresponding weather and illuminance conditions.

Keywords

1. Introduction

The development of closed circuit television (CCTV) is based on advances in information and optics technology. Typically, CCTV is used in spatial information services and to trace targets involved in a crime after storing the images delivered through a telecommunications network. Recently, as a part of ubiquitous urban services, new technologies have been under development to automatically detect and trace targets in order to confirm, on a real-time basis, the current conditions of the monitored space and to analyze behavior. Notably, some of these systems have already been adopted and are currently in operation (Kim et al., 2008; Shan et al., 2007). To configure two-dimensional (2D) locational information using CCTV images, field surveys and on-the-spot measurements are needed. Lately, the use of stereo CCTV images that can configure a three-dimensional (3D) geospace and trace objects has been investigated (Kim and Shin, 2011; Song, 2012).

CCTV images often show quite different characteristics depending on the weather conditions and on the occlusions formed by buildings or seasonal vegetation. Therefore, a study of the optimum location of the CCTV cameras is important to secure the largest monitoring region (Choi et al., 2010; Lee and Kang, 2011; Moon et al., 2013). As the layout factors of monitoring equipment include the image acquisition performance, which is directly related to weather and illuminance conditions, the in situ circumstances should be taken into consideration (Kwon, 2004). Although investigations have been conducted on the calibration of CCTV images under environmental effects such as illuminance, rain, and snow, only a few studies have directly and quantitatively assessed the failure to interpret the acquired images, and studies targeting the production of 3D spatial information are very rare (Cho et al., 2013; Choi and Lee, 2014). Typical environmental impact factors include wind, fog, illuminance, rain, and snow (Cho et al., 2013; Park et al., 2007). As correcting the acquired images, unlike hardware calibration, causes a loss of image information, understanding the limitations of the acquisition process is beneficial in order to capitalize on the original images as much as possible (Song, 2012).

This study investigates the matching among the images after acquiring stereo CCTV images under various illuminance and rainfall conditions. Instead of employing subjective indicators such as personal interpretations, the quality of the acquired images was quantitatively analyzed based on the number of matching points. The limitations of using the stereo CCTV images were then reviewed for different illuminance and rainfall conditions.

 

2. Data and Methodology

2.1 Test area and equipment

The area around the New Millennium Hall of Konkuk University was selected as the test site for several reasons: (1) a shooting distance could be secured from which the overall distribution of matching points in the target area is clearly visible; (2) interfering effects other than weather and illuminance conditions (e.g., vegetation or shadows causing seasonal occlusions) are minimized; and (3) a stable power supply could be provided while keeping the testing equipment out of the rain. Fig. 1 shows the camera positions and viewing angles. In this test, two CCTV cameras of the same model (AXIS Q1755-E) were fixed to a slider with a 1.1 m interval between them to obtain stereoscopic images. In addition, to minimize the impact of weather changes, they were installed in a sheltered spot surrounded by walls and a roof. A CEM portable Digital Illuminance Meter (Model 1308) was used to measure illuminance. The detailed specifications of the CCTV cameras are reported in Table 1.

Table 1. AXIS Q1755-E network camera specifications

Fig. 1. CCTV camera location and stereo images in the test area: (a) CCTV camera location, (b) left image, and (c) right image

As the same sensor geometry had to be maintained for every filming run, a customized installation frame was fabricated for this test to hold the CCTV cameras in place. After installation, the scene could be filmed several times from exactly the same position.

To minimize the impact of weather changes, illuminance was measured at the spots marked in Fig. 2, next to the lenses of the CCTV cameras installed in the shade. The arithmetic mean of the measured illuminance values was calculated, and the records were categorized by measurement date, time, humidity, PM10, rainfall, and illuminance. Reliable and easily accessible data from the Korea Meteorological Administration (http://www.kma.go.kr/) were also utilized.

Fig. 2. Locations of the illuminance measurements

2.2 Test methods

In relation to the environment of the test area, an analysis was conducted to determine whether matching points could be acquired for regularly spaced buildings under different weather conditions, and the number of matching points was subsequently compared. In addition, the filming conditions were kept consistent by turning the automatic (AUTO) control modes off. The selected settings are reported in Table 2.

Table 2. AXIS Q1755 network camera settings selected for the experiment

Apart from the iris and shutter speed, the camera settings were fixed. For the iris, a total of 17 values in the range F1.8-F26 were used, while the shutter speed was set in 17 steps ranging from 1/2 s to 1/2000 s. The shutter speed and iris were adjusted because the depth of field (DOF) changes with the size of the iris opening. When the 3D scene is projected onto the 2D image plane, parts of the scene fall out of focus depending on their distance, because a larger iris opening enlarges the circle of confusion and the image looks blurred; this hard-to-make-out state is called 'out of focus.' In contrast, reducing the circle of confusion by tightening the iris is called 'pan focus.' In other words, for certain scenes such as landscapes, a vivid image can be obtained in pan-focus mode simply by tightening the iris, as long as there are no other disturbing factors. The shutter speed was adjusted because images of a moving object show little blurring at a high shutter speed, whereas motion blur occurs at a low shutter speed. If a low shutter speed is selected when the CCTV camera shakes, or when a moving object is in the image, the result resembles a panning shot in which the background appears to move around the detected object. Lastly, because exposure can be secured with a low shutter speed when there is a lack of light at night or in a dark place, these two options were adjusted together.

The images used to analyze the changes in the number of matching points were obtained by shooting videos (about 120 s long at a time) with the stereo CCTV cameras; one frame was then extracted every 60 s using MATLAB.
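For illustration only, the frame-extraction step described above (performed with MATLAB by the authors) can be sketched in C++ with OpenCV. The 60 s sampling interval follows the text; the file names, the output naming, and the use of OpenCV itself are assumptions, not the authors' code.

```cpp
// Minimal sketch (assumed, not the authors' implementation): extract one frame
// every 60 s from a recorded CCTV video using OpenCV's VideoCapture.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture cap("left_camera.avi");   // hypothetical file name
    if (!cap.isOpened()) return 1;

    const double fps = cap.get(cv::CAP_PROP_FPS);
    const int totalFrames = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_COUNT));
    if (fps <= 0.0 || totalFrames <= 0) return 1;

    const int step = static_cast<int>(fps * 60.0);   // one frame per 60 s of video

    cv::Mat frame;
    for (int idx = 0; idx < totalFrames; idx += step) {
        cap.set(cv::CAP_PROP_POS_FRAMES, idx);       // jump to the target frame
        if (!cap.read(frame)) break;
        cv::imwrite("frame_" + std::to_string(idx) + ".png", frame);
    }
    return 0;
}
```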

The images were filmed 289 times in total and matched using the scale-invariant feature transform (SIFT) technique (Eo et al., 2012). A matching program was written in C++, and the SIFT technique was implemented based on the algorithm published by Lowe (2004). Fig. 3 shows the overall flow of the test.
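The matching program itself is not published with the paper. As a minimal sketch of the equivalent step, the following C++ code uses OpenCV's built-in SIFT (available in OpenCV 4.4 and later) to detect keypoints and count the matches that survive Lowe's ratio test; the 0.8 threshold is Lowe's (2004) suggestion rather than a value reported by the authors, and the file names are hypothetical.

```cpp
// Minimal sketch (not the authors' implementation): count SIFT matching points
// between a left and a right CCTV frame using OpenCV and Lowe's ratio test.
#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);   // hypothetical names
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Detect keypoints and compute descriptors with SIFT (Lowe, 2004).
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    sift->detectAndCompute(left,  cv::noArray(), kpL, descL);
    sift->detectAndCompute(right, cv::noArray(), kpR, descR);
    if (descL.empty() || descR.empty()) return 1;

    // k-nearest-neighbour matching followed by Lowe's ratio test.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descL, descR, knn, 2);

    int matchingPoints = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            ++matchingPoints;

    std::cout << "number of matching points: " << matchingPoints << std::endl;
    return 0;
}
```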

Fig. 3. Experiment flowchart

 

3. Test Results and Analysis

The buildings that complied with all the necessary requirements were selected and filmed to measure the change in the number of matching points. After considering how the correction values vary with the type of CCTV camera, the backlight compensation and auto focus were turned off before filming (Table 2). Additionally, the gain applied to the video signal was set to zero, the minimum correction, in order to minimize noise production.

3.1 Test on the effects caused by the weather conditions

To test the effects of the weather conditions, the images were filmed 153 times in total (17 × 9) on a fine day, from 13:30 to 18:30, while adjusting the stereo CCTV camera options. At the time of filming, a temperature of 19-25 °C, a humidity of 43-54%, and 12-30 µg/m³ of particulate matter were observed. The illuminance ranged from 0.01 to 78.8 Klux, exhibiting a considerable variation. Fig. 4 shows the matching points obtained under these weather conditions.

Fig. 4. Number of matching points measured on a fine day: (a) number of matching points versus iris at different fixed shutter speeds, and (b) number of matching points versus shutter speed at different fixed iris settings

Fig. 4 shows the matching points acquired on a fine day using two different representations. In Fig. 4(a), the shutter speed was adjusted while the iris was kept constant, whereas in Fig. 4(b) the iris was adjusted while the shutter speed was kept fixed. On a fine day, a total of 17 iris values and 9 shutter speed values were used as camera adjustment values. Because of the high illuminance during filming, three F-stop values (F14, F8, and F1.8) and three shutter speeds (1/125 s, 1/500 s, and 1/1500 s) were selected and are shown in Fig. 4.

The highest number of matching points was observed on October 11, 2014 when the illuminance was equal to 61.4 Klux. Based on the images obtained with a 1/1500 s shutter speed and F14 iris, a total of 506 matching points were observed. In Fig. 4(a), the number of matching points was 470 at a shutter speed of 1/125 s and iris F14, and 436 at a shutter speed of 1/500 s and iris F8. In Fig. 4(b), in contrast, the number of matching points was 436 at a shutter speed of 1/500 s and iris F8, and 405 at a shutter speed of 1/250 s and iris F1.8.

According to the test results, when the automatic correction options were turned off and wrong matches were not considered, a shutter speed of 1/1500 s and an iris of F14 were the best settings to extract matching points at around 60 Klux. In this case, the amount of light received by the CCTV image sensor was close to optimal, increasing the brightness of the image frame and generating clearer boundaries between an object and its background. Conversely, regarding the factors that reduce the number of matching points, below F14 the image brightened significantly because of the excessive amount of light hitting the image sensor at 60 Klux and a shutter speed of 1/1500 s; owing to the resulting noise and the loss of the boundaries between objects, the number of matching points decreased. Above F14, the image appeared dark because of the limited amount of light hitting the image sensor, again causing a decrease in the number of matching points. Moreover, uncorrelated curves were observed, as shown in Fig. 4(b), because of a partial decline in illuminance due to cloud cover and an irregular number of matching points resulting from changes in the building shadows caused by the sunset.
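As a side note not given in the paper, the relative amount of light admitted by each setting pair can be compared through the standard photographic exposure value, where N is the F-number and t the exposure time in seconds; a larger EV means less light reaches the sensor:

```latex
\mathrm{EV} = \log_2\!\frac{N^{2}}{t},\qquad
\mathrm{EV}_{F14,\;1/1500\,\mathrm{s}} = \log_2\!\left(14^{2}\times 1500\right) \approx 18.2,\qquad
\mathrm{EV}_{F8,\;1/500\,\mathrm{s}} = \log_2\!\left(8^{2}\times 500\right) \approx 15.0
```

Under this relation, the best setting (F14, 1/1500 s) admits roughly 2^3.2 ≈ 9 times less light than F8 at 1/500 s, which is consistent with the observation that wider apertures over-brightened the image at about 60 Klux.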

3.2 Testing in precipitation conditions

Images were filmed 136 times (17 × 8) over a broad time range, from 11:00 to 18:30, to perform the test under precipitation conditions. During the test, the temperature ranged from 18.6 to 20.5 °C, while the humidity and particulate matter were 92-98% and 30-64 µg/m³, respectively. Compared to the fine day, humidity and particulate matter were about twice as high, whereas the illuminance was generally very low, with values in the range of 0.01-13.94 Klux.

Instead of using actual on-site measurements, the precipitation data from the Automatic Weather System (AWS) of the Korea Meteorological Administration were adopted. The precipitation accumulated over 60 min was calculated for every minute. The AWS station is situated in Gwangjin-gu, Seoul, about 1.2 km from the test area. According to the station, the precipitation was 1.5 mm on average based on the 60-min cumulative distribution. The matching points acquired under these weather conditions are shown in Fig. 5.
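As a minimal sketch of this accumulation step (not the authors' code; the per-minute input values below are hypothetical, while the real data came from the KMA AWS), a 60-minute sliding-window sum can be computed as follows:

```cpp
// Minimal sketch (assumed): compute the 60-minute cumulative precipitation
// for every minute of a per-minute rainfall record.
#include <iostream>
#include <vector>

std::vector<double> cumulative60min(const std::vector<double>& perMinuteMm) {
    std::vector<double> out(perMinuteMm.size(), 0.0);
    double windowSum = 0.0;
    for (std::size_t i = 0; i < perMinuteMm.size(); ++i) {
        windowSum += perMinuteMm[i];                    // add the newest minute
        if (i >= 60) windowSum -= perMinuteMm[i - 60];  // drop the minute leaving the window
        out[i] = windowSum;                             // rainfall accumulated over the last 60 min
    }
    return out;
}

int main() {
    // Hypothetical per-minute rainfall (mm): 0.025 mm/min corresponds to 1.5 mm per 60 min.
    std::vector<double> rain(120, 0.025);
    std::vector<double> acc = cumulative60min(rain);
    std::cout << "60-min accumulation at minute 60: " << acc[59] << " mm" << std::endl;
    return 0;
}
```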

Fig. 5. Number of matching points measured on a rainy day: (a) number of matching points versus iris at different fixed shutter speeds, and (b) number of matching points versus shutter speed at different fixed iris settings

Fig. 5 shows the number of matching points obtained with different stereo CCTV camera options (shutter speed, iris) on a rainy day. Regarding the camera adjustment values, three F-stop values (F6.8, F3.4, and F2.4) out of a total of 17 were used; similarly, three shutter speeds (1/15 s, 1/30 s, and 1/60 s) out of eight were used. The highest number of matching points on the rainy day was observed on September 29, 2014, when the illuminance was 4.27 Klux: for a shutter speed of 1/60 s and iris F3.4, a total of 318 matching points were observed. In Fig. 5(a), the number of matching points was 267 at a shutter speed of 1/15 s, iris F11, and 9.18 Klux, whereas it was 268 at a shutter speed of 1/30 s, iris F6.8, and 4.05 Klux; the number of matching points differed by only one. In contrast, in Fig. 5(b), the number of matching points was 318 at a shutter speed of 1/60 s, iris F3.4, and 4.27 Klux, and 233 at a shutter speed of 1/30 s, iris F6.8, and 4.05 Klux.

The analysis of the results showed that the number of matching points for a 3 mm rainfall was greater than that obtained for a 2.5 mm rainfall; notably, only one frame was used to extract the matching points, and the 60-min cumulative precipitation was converted into a per-minute value, which probably caused this slight difference. Overall, the rain was light, and when the CCTV cameras failed to detect it, the number of matching points averaged 106.4-156.8. When the 6-h cumulative rainfall reached 8.5 mm, the number of matching points ranged from 37.64 to 49.41. As the precipitation increased, the number of matching points decreased; however, as the illuminance also decreased, its influence cannot be ignored.

 

4. Conclusion

In this study, a comprehensive investigation was conducted to determine the changes in image quality as a function of the filming environment. The testing was performed both on a fine day and on a rainy day, and the variation in the number of matching points was analyzed in relation to the stereo CCTV camera settings and the external weather conditions. The test results showed that the illuminance remained around 70 Klux after noon and dropped dramatically between 16:00 and 17:00. Along with the variation in illuminance, the number of matching points changed. At 61.4 Klux, with 1/1500 s and F14, a high number of matching points (506) was found. In rainy weather, when the rain was detected by the CCTV cameras and appeared on the screen, the matching points appeared irregular; in the case of low precipitation, the rain was undetectable and the number of matching points remained constant. At 3.95 Klux, a high number of matching points (318) was observed at a shutter speed of 1/60 s and an iris of F3.4.

These test results may not be able to confirm the performances of all the CCTV cameras. However, based on the findings, it is expected that the quality of the images could be kept constant by correcting the degradation due to the weather conditions. Furthermore, as there is no need for separate facilities, there will be almost no additional cost. This is immediately applicable to any CCTV camera anywhere across the nation.

In future studies, a test needs to be performed in an open area so that the CCTV cameras and the target object are tested under the same illuminance. To verify the accuracy and reliability of the proposed method, further studies are required that measure the number of matching points for objects in an artificially manipulated environment and determine their actual coordinates in an indoor space where no environmental changes occur. Finally, a test of the variation in the number of matching points and illuminance as a function of the weather conditions (e.g., rain, snow, fog, yellow dust) is planned.

References

  1. Choi, J.H., Kim, J.H., and Ryu, T.H. (2010), Evaluating locational validity of anti-crime CCTV installation point in university campus: a KNU case study, Proceedings of the Korean Association of Geographic Information Studies Conference, 12-13 November, Sunchon University, Korea, pp. 175-179. (in Korean with English abstract)
  2. Choi, K.A. and Lee, I.P. (2014), Quantitative evaluation on surveillance performance of CCTV systems based on camera modeling and 3D spatial analysis, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 32, No. 2, pp. 153-162. (in Korean with English abstract) https://doi.org/10.7848/ksgpc.2014.32.2.153
  3. Cho, M.H., Park, Y.J., and Lee, J.W. (2013), Prevention measures of performance degradation for natural environment based on intelligent CCTV, Journal of Korean Society of Hazard Mitigation, 21 February, Hanyang University, Korea, Vol. 13, pp. 253-253. (in Korean) https://doi.org/10.9798/KOSHAM.2013.13.1.253
  4. Eo, Y.D., Pyeon, M.W., Kim, S.W., Kim, J.R., and Han, D.Y. (2012), Coregistration of terrestrial lidar points by adaptive scale-invariant feature transformation with constrained geometry, Automation in Construction, Vol. 25, pp. 49-58. https://doi.org/10.1016/j.autcon.2012.04.011
  5. Kim, I.S. and Shin, H.S. (2011), 3D GIS system using CCTV camera, Journal of the Korea Institute of Electronic Communication Sciences, Vol. 6, No. 4, pp. 559-565. (in Korean with English abstract)
  6. Kim, I.S., Yoo, J.D., and Kim, B.H. (2008), A monitoring way and installation of monitoring system using intelligent CCTV under the u-City environment, Journal of The Korea Institute of Electronic Communication Sciences, Vol. 3, No. 4, pp. 295-303. (in Korean with English abstract)
  7. Kwon, T.M. (2004), Atmospheric Visibility Measurements using Video Cameras: Relative Visibility, No. CTS 04-03, Department of Electrical and Computer Engineering, University of Minnesota Duluth, USA, 44p.
  8. Lee, S.J. and Kang, S.J. (2011), A study on the optimal positioning of surveillance facilities based on criminal targets: focus on the visual analysis of exhibition space in a museum and open space in an apartment, Journal of the Architectural Institute of Korea, Vol. 27, No. 12, pp. 145-152. (in Korean with English abstract)
  9. Lowe, D.G. (2004), Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  10. Moon, S.J., Jeon, M.C., Eo, Y.D., Im, S.B., and Park, B.W. (2013), Campus CCTV allocation simulation for maximizing monitoring areas, Advances in Information Sciences and Service Sciences (AISS), Vol. 5, No. 7, pp. 1192-1198. https://doi.org/10.4156/aiss.vol5.issue7.141
  11. Park, B.Y., Nam, K.S., and Lee, H.S. (2007), Development of the weather information detection algorithm using CCTV images, Korean Society of Transportation, Vol. 14-B, No. 6, pp. 525-530.
  12. Shan, T., Chen, S., Sanderson, C., and Lovell, B.C. (2007), Towards robust face recognition for Intelligent-CCTV based surveillance using one gallery image, IEEE Conference on Advanced Video and Signal Based Surveillance, AVSS 2007, 11-12 September, London, England, pp. 470-475.
  13. Song, J.U. (2012), Cloud computing based system for 3D GIS case study, Journal of the Korea Information Sciences, Vol. 30, No. 5, pp. 29-34. (in Korean)
