• Title/Summary/Keyword: Poor visibility


Experimental study on applicability of Air-Curtain system in train fire at subsea tunnel rescue station (해저터널 열차 화재 시 구난역 에어커튼 시스템의 성능에 관한 실험 연구)

  • Park, Byoung-Jik;Shin, Hyun-Jun;Yoo, Yong-Ho;Park, Jin-Ouk;Kim, Hwi-Seong;Kim, Yang-Kyun
    • Journal of Korean Tunnelling and Underground Space Association / v.20 no.1 / pp.1-9 / 2018
  • Visibility is very poor in a tunnel fire because the confined space allows smoke to spread quickly, which may easily lead to a mass casualty incident. In this study, an air curtain and a fan were installed at a rescue station so that the station could be used safely during a train fire in an undersea tunnel, and a full-scale fire test was conducted to verify the applicability of the air curtain system. The air curtain system was installed at a real rescue station, and the test continued for two minutes until the heptane used as the fire source had completely burned out. When the air curtain was operating, the temperature difference between the inside and outside of the platform was $160^{\circ}C$, and the carbon monoxide concentration measured inside the platform was 160 ppm lower than in the case without the air curtain. Thus, the full-scale fire test demonstrated that the air curtain system installed at the rescue station in an undersea tunnel can effectively block the heat and smoke generated by a fire.

Single Image Dehazing Based on Depth Map Estimation via Generative Adversarial Networks (생성적 대립쌍 신경망을 이용한 깊이지도 기반 연무제거)

  • Wang, Yao;Jeong, Woojin;Moon, Young Shik
    • Journal of Internet Computing and Services / v.19 no.5 / pp.43-54 / 2018
  • Images taken in hazy weather are characterized by low contrast and poor visibility. The process of reconstructing a clear-weather image from a hazy image is called dehazing. The main challenge of image dehazing is to estimate the transmission map or depth map of an input hazy image. In this paper, we propose a single-image dehazing method that utilizes a Generative Adversarial Network (GAN) for accurate depth map estimation. The proposed GAN model is trained to learn a nonlinear mapping between the input hazy image and the corresponding depth map. With the trained model, the depth map of the input hazy image is first estimated and used to compute the transmission map. Then a guided filter is applied to preserve the important edge information of the hazy image, yielding a refined transmission map. Finally, the haze-free image is recovered via the atmospheric scattering model. Although the proposed GAN model is trained on synthetic indoor images, it can be applied to real hazy images. The experimental results demonstrate that the proposed method achieves superior dehazing results compared with state-of-the-art algorithms on both real and synthetic hazy images, in terms of quantitative and visual performance.
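
The recovery step described in this abstract follows the standard atmospheric scattering model, $I(x) = J(x)\,t(x) + A\,(1 - t(x))$, with the transmission derived from depth as $t(x) = e^{-\beta d(x)}$. Below is a minimal NumPy sketch of that final recovery stage, assuming the depth map has already been estimated (by the GAN in the paper) and using illustrative values for the scattering coefficient and atmospheric light; the guided-filter refinement mentioned in the abstract is omitted.

```python
import numpy as np

def recover_scene_radiance(hazy, depth, beta=1.0, A=None, t_min=0.1):
    """Recover a haze-free image from a hazy image and an estimated depth map.

    hazy  : HxWx3 float array in [0, 1]
    depth : HxW float array (relative scene depth from the depth estimator)
    beta  : assumed scattering coefficient (illustrative value)
    A     : atmospheric light per channel; if None, estimated from the
            brightest pixels of the hazy image (a common heuristic)
    """
    # Transmission map from depth: t(x) = exp(-beta * d(x))
    t = np.exp(-beta * depth)

    if A is None:
        # Simple heuristic: average of the ~0.1% brightest pixels per channel
        flat = hazy.reshape(-1, 3)
        idx = np.argsort(flat.mean(axis=1))[-max(1, flat.shape[0] // 1000):]
        A = flat[idx].mean(axis=0)

    # Invert the scattering model: J = (I - A) / max(t, t_min) + A
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (hazy - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Illustrative usage with random placeholders for the image and depth map
hazy = np.random.rand(240, 320, 3)
depth = np.random.rand(240, 320)
clear = recover_scene_radiance(hazy, depth)
```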

A STUDY FOR THE DETERMINATION OF KOMPSAT I CROSSING TIME OVER KOREA (I): EXAMINATION OF SOLAR AND ATMOSPHERIC VARIABLES (다목적 실용위성 1호의 한반도 통과시각 결정을 위한 연구 (I): 태양 및 대기 변수 조사)

  • 권태영;이성훈;오성남;이동한
    • Journal of Astronomy and Space Sciences / v.14 no.2 / pp.330-346 / 1997
  • The Korea Multi-Purpose Satellite I (KOMPSAT-I, the first multi-purpose Korean satellite) will be launched in the third quarter of 1999 and operated in a sun-synchronous orbit for cartography, ocean color monitoring, and space environment monitoring. The main mission of the Electro-Optical Camera (EOC), one of the KOMPSAT-I sensors, is to provide images for the production of scale maps of Korea. The EOC collects panchromatic imagery with a ground sample distance of 6.6 m at nadir through a visible spectral band of 510~730 nm. To determine the KOMPSAT-I crossing time over Korea, this study examines the diurnal variation of solar and atmospheric variables that can strongly affect the EOC imagery. The results are as follows: 1) After 10:30 a.m. at the winter solstice, the solar zenith angle is less than $70^{\circ}$ and the expected flux in the EOC spectral band over land under clear sky is greater than about $2.4 mW/cm^2$. 2) During daytime, the distribution of cloud cover (clear sky) shows a minimum (maximum) at about 11:00 a.m. Although the occurrence frequency of poor visibility due to fog decreases from early morning toward noon, its effect on the distribution of clear sky is negligible. From this examination, it is concluded that a KOMPSAT-I crossing time over Korea between 10:30 and 11:30 a.m. is adequate.
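
The 10:30 a.m. threshold in result 1) follows from the geometry of the solar zenith angle. As a rough illustration (not the authors' computation), the sketch below evaluates the standard approximation $\cos\theta_z = \sin\phi\sin\delta + \cos\phi\cos\delta\cos h$ for an assumed mid-Korea latitude at the winter solstice; the declination formula, latitude, and use of local solar time are illustrative simplifications.

```python
import math

def solar_zenith_angle_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar zenith angle (degrees).

    Uses cos(theta_z) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(h),
    with a simple cosine approximation for the solar declination and
    the hour angle h = 15 degrees per hour from local solar noon.
    """
    # Approximate solar declination (degrees)
    delta = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle relative to local solar noon (degrees)
    h = 15.0 * (solar_hour - 12.0)

    phi, delta, h = map(math.radians, (lat_deg, delta, h))
    cos_z = math.sin(phi) * math.sin(delta) + math.cos(phi) * math.cos(delta) * math.cos(h)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

# Illustrative check: mid-Korea latitude (~37.5 N) at the winter solstice (day ~355)
for hour in (9.5, 10.5, 11.5):
    print(hour, round(solar_zenith_angle_deg(37.5, 355, hour), 1))
```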


A Study on Selection of Bicycle Road Hazard Detection Elements For Mobile IoT Sensor Device Operation (이동형 IoT 센서 장비 운용을 위한 자전거도로 위험 감지요소 선정 연구)

  • Woochul Choi;Bong-Joo Jang;Sun-Kyum Kim;Intaek Jung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.4 / pp.37-53 / 2024
  • This study selected bicycle road hazard detection factors for mobile IoT sensor device operation and developed service application plans. Twelve bicycle road hazard detection factors were derived through a focus group interview, and a fuzzy AHP-based importance analysis was conducted with 30 road and transportation experts. As a result, 'damage to pavement' (1st overall) and 'environmental obstacle' (2nd), which have low visibility but a high risk of accidents, were ranked highest. Facility-management factors such as 'disconnected route occurrence' (4th), 'artificial obstacle' (5th), 'effective width' (6th), and 'poor drainage' (7th) were ranked in the upper and middle range. Factors that are not direct accident-inducing factors, such as 'loss of road markings' (11th) and 'free space width' (12th), were ranked lowest. Based on this, a plan for applying the bicycle road hazard detection service was presented, along with a service operation strategy according to real-time performance. Nevertheless, follow-up studies, such as behavioral analysis of bicycle riders, analysis according to bicycle road type, service demonstration, and pilot operation, are needed to develop safe bicycle road management.
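
The fuzzy AHP analysis in this abstract ultimately produces an importance weight for each of the twelve factors. As a simplified illustration (crisp AHP rather than the fuzzy variant used in the study, and with a made-up pairwise comparison matrix instead of the experts' judgments), the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix
    (principal eigenvector method, as in classical AHP)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical 3x3 comparison among three illustrative factors
# (e.g., pavement damage vs. environmental obstacle vs. road marking loss)
M = np.array([
    [1.0, 2.0, 5.0],
    [0.5, 1.0, 3.0],
    [0.2, 1/3., 1.0],
])
w = ahp_weights(M)
print(dict(zip(["pavement damage", "environmental obstacle", "marking loss"], w.round(3))))
```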

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.6 / pp.915-936 / 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that the requirements for CCTV installation have gradually become stricter and that the need for a tunnel incident detection system working in conjunction with the CCTVs in tunnels has greatly increased. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates, of less than 50%. The putative major reasons are (1) very weak illumination, (2) dust in the tunnel, and (3) the low CCTV installation height of about 3.5 m. Therefore, this study attempts to develop a deep-learning based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and validating investigations are undertaken focusing on moving vehicles and persons outside vehicles in the tunnel, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another tunnel. In both cases, object detection in prediction mode achieves a detection rate higher than 80% when the training and prediction periods are similar, but the rate drops to about 40% when the prediction time is far from the training time and no further training takes place. However, it is believed that the AI-based system will automatically improve its predictability as further training follows with accumulated CCTV big data, without any revision or recalibration of the incident detection system.
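
The system in this abstract detects moving vehicles and persons in CCTV frames with a deep neural network, but the exact architecture is not specified here. The sketch below therefore only shows the general inference pattern with an off-the-shelf pretrained detector from torchvision (my substitution, not the authors' model), filtering detections for the person and car classes; the frame path and score threshold are illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector pretrained on COCO (illustrative stand-in for the
# paper's model, which is not specified in the abstract)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

COCO_PERSON, COCO_CAR = 1, 3  # COCO category ids for 'person' and 'car'

def detect_incident_objects(frame_path, score_threshold=0.5):
    """Return boxes and labels of persons and cars found in a single CCTV frame."""
    image = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = (output["scores"] > score_threshold) & (
        (output["labels"] == COCO_PERSON) | (output["labels"] == COCO_CAR)
    )
    return output["boxes"][keep], output["labels"][keep]

# boxes, labels = detect_incident_objects("tunnel_cctv_frame.jpg")  # hypothetical frame path
```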

A Survey on the Consumer's Recognition of Food Labeling in Seoul Area (서울지역 소비자들의 식품표시에 대한 인식도 조사)

  • Choi, Mi-Hee;Youn, Su-Jin;Ahn, Yeong-Sun;Seo, Kab-Jong;Park, Ki-Hwan;Kim, Gun-Hee
    • Journal of the Korean Society of Food Science and Nutrition / v.39 no.10 / pp.1555-1564 / 2010
  • This study investigated consumers' recognition of food labeling in order to contribute to the development of food labels that are more informative to consumers. Questionnaires were collected from 120 male and female consumers living in Seoul, ranging in age from their teens to their sixties, from November 2nd to November 7th, 2009. At the time of purchase, 58.3% of the consumers checked the food label, and the main reason for doing so was to confirm the sell-by date (60.1%). Sixty percent of the consumers were satisfied with the current food labeling. Among those who were not satisfied, 30.6% complained about terms that were difficult to understand and 25.8% were dissatisfied with insufficient information. In every age group, most people were not satisfied with the labeling of food ingredients and additives, followed by the date of manufacture and the sell-by date. 53.1% of consumers demanded that the date of manufacture and the sell-by date be labeled together. For clearer information, consumers preferred a use-by date (47.5%) to a sell-by date (23.3%). 56.7% of consumers were dissatisfied with warning information such as allergy warnings, and the reasons for dissatisfaction were poor visibility (37.5%) and insufficient information (33.4%). Moreover, most consumers (90.0%) showed little knowledge of irradiation. To improve the food labeling standards into consumer-oriented standards, both amendment of the standards and consumer education will be necessary.

Sea Fog Level Estimation based on Maritime Digital Image for Protection of Aids to Navigation (항로표지 보호를 위한 디지털 영상기반 해무 강도 측정 알고리즘)

  • Ryu, Eun-Ji;Lee, Hyo-Chan;Cho, Sung-Yoon;Kwon, Ki-Won;Im, Tae-Ho
    • Journal of Internet Computing and Services / v.22 no.6 / pp.25-32 / 2021
  • In line with future changes in the marine environment, Aids to Navigation have been used in various fields and their use is increasing. The term "Aids to Navigation" means an aid to navigation prescribed by Ordinance of the Ministry of Oceans and Fisheries, which shows navigating ships the position and direction of the ships, the position of obstacles, etc. through lights, shapes, colors, sound, radio waves, etc. The use of Aids to Navigation is now also transforming into a means of identifying and recording the marine weather environment by mounting various sensors and cameras. However, Aids to Navigation are mainly lost due to collisions with ships, and in particular, safety accidents occur because of poor observation visibility due to sea fog. The inflow of sea fog poses risks to ports and sea transportation, and it is not easy to predict because the probability of occurrence differs greatly by time and region. In addition, it is difficult to manage Aids to Navigation individually because they are distributed throughout the sea. To solve this problem, this paper aims to identify the marine weather environment by approximately estimating the sea fog level from images taken by cameras mounted on Aids to Navigation, and thereby to prevent safety accidents caused by weather. Instead of optical and temperature sensors, which are difficult to install and expensive, the sea fog level is measured from ordinary images taken by the mounted cameras. Furthermore, as a prior study for real-time sea fog level estimation in various seas, sea fog level criteria are presented using the Haze Model and the Dark Channel Prior (DCP). A specific threshold value is set in the image through DCP, and based on this, the number of fog-free pixels in the entire image is counted to estimate the sea fog level. Experimental results demonstrate the feasibility of estimating the sea fog level using both a synthetic haze image dataset and a real haze image dataset.
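
The estimation step described in this abstract (a threshold on the Dark Channel Prior, then counting fog-free pixels) can be sketched in a few lines of NumPy/SciPy. The patch size, threshold, and level boundaries below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark Channel Prior: per-pixel minimum over RGB, then a local minimum filter."""
    return minimum_filter(image.min(axis=2), size=patch)

def sea_fog_level(image, threshold=0.4, patch=15):
    """Estimate a coarse sea fog level from the fraction of fog-free pixels.

    image: HxWx3 float array in [0, 1] from a camera mounted on an aid to navigation.
    A pixel is treated as fog-free when its dark channel falls below `threshold`.
    """
    fog_free_ratio = (dark_channel(image, patch) < threshold).mean()
    # Illustrative mapping from fog-free ratio to a discrete level (0 = clear)
    if fog_free_ratio > 0.8:
        return 0
    if fog_free_ratio > 0.5:
        return 1
    if fog_free_ratio > 0.2:
        return 2
    return 3

print(sea_fog_level(np.random.rand(480, 640, 3)))  # random placeholder image
```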

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu;Myoung-Seok Suh
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1779-1790 / 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) satellite offers excellent temporal resolution (10 min) and spatial resolution (500 m), while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) provides excellent spatial resolution (250 m) but poor temporal resolution (1 h) with only visible channels. To enhance the fog detection level (10 min, 250 m), we developed a fused GK2A and GK2B fog detection algorithm (GK2AB FDA). The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is used to detect fog, considering various optical and physical characteristics. In the second step, GK2B data are extrapolated to 10-min intervals by matching GK2A pixels based on the closest time and location when GK2B observes the KorPen. For reflectance, GK2B normalized visible (NVIS) is corrected using GK2A NVIS of the same time, considering the difference in wavelength range and observation geometry, and is then extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, the solar zenith angle, and the outputs of the GK2A FDA are used as input data for machine learning (decision tree) to develop the GK2AB FDA, which detects fog at a 250 m resolution and a 10-min interval based on geographical location. Six and four cases were used for training and validation of the GK2AB FDA, respectively. Quantitative verification of the GK2AB FDA used ground observations of visibility, wind speed, and relative humidity. Compared with the GK2A FDA, the GK2AB FDA exhibited a fourfold increase in spatial resolution, resulting in more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, the probability of detection (POD) and the Hanssen-Kuiper skill score (KSS) are higher or similar, indicating that the fused algorithm detects previously undetected fog pixels. However, compared with the GK2A FDA, the GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias.
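
The final fusion step in this abstract feeds the extrapolated GK2B NVIS, the solar zenith angle, and the GK2A FDA output into a decision tree. A minimal scikit-learn sketch of that step is shown below; the feature arrays are random placeholders standing in for the co-located satellite pixels, and the tree depth is an arbitrary choice, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Placeholder features for co-located 250 m pixels:
# [extrapolated GK2B NVIS, solar zenith angle (deg), GK2A FDA output (0/1)]
rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # GK2B normalized visible reflectance
    rng.uniform(20.0, 80.0, n),    # solar zenith angle
    rng.integers(0, 2, n),         # GK2A fog detection output
])
y = rng.integers(0, 2, n)          # placeholder fog / non-fog labels

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Decision tree as the fusion classifier (depth chosen arbitrarily here)
tree = DecisionTreeClassifier(max_depth=6, random_state=0)
tree.fit(X_train, y_train)
print("validation accuracy:", tree.score(X_val, y_val))
```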